Feds launch "AI readiness scorecard" for early-stage teams
The Canada School of Public Service, a division of the federal government, has introduced an AI readiness scorecard designed to help early-stage projects determine whether a specific problem is suitable for artificial intelligence solutions.
Rather than benchmarking Canada against other countries, the tool serves as an internal decision framework that reflects a broader shift from experimentation to implementation in federal AI policy.
At its core, the school says the scorecard is a structured assessment tool that can be used during the early stages of a project to evaluate whether AI is the right solution to a defined problem.
The framework is intended to be applied before teams commit to building or procuring AI systems, positioning it as a gatekeeping mechanism rather than a performance metric.
"This scorecard should be completed during the discovery phase of your project. It is best used after you have a basic understanding of the problem you want to solve, but before you have committed to a specific solution and conducted a full technical assessment with IT," the scorecard stated.
By the end of the process, teams are expected to produce a "clear profile" of an AI opportunity, allowing them to "proceed, pivot, or pause" depending on the outcome.
The scorecard is built around a five-part evaluation that examines both the problem itself and the organisation's capacity to deploy AI: the clarity and severity of the problem; the availability and quality of data; the suitability of AI techniques; organisational readiness; and resource risks, including ethical and operational considerations.
Each category is scored, producing an overall readiness profile that informs decision-making.
The methodology is deliberately applied to individual "friction points" rather than to broad programs to avoid overgeneralisation and ensure specificity in deployment decisions.
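The five-part evaluation described above can be sketched as a simple scoring function. This is a minimal, hypothetical illustration: the category names follow the article, but the 1-to-5 scale and the thresholds that map an average score to "proceed", "pivot" or "pause" are assumptions for the sketch, not the school's actual rubric.

```python
# Hypothetical sketch of the scorecard's five-part evaluation.
# Category names follow the article; the 1-5 scale and the decision
# thresholds are illustrative assumptions, not the official rubric.

CATEGORIES = [
    "problem_clarity_and_severity",
    "data_availability_and_quality",
    "ai_technique_suitability",
    "organisational_readiness",
    "resource_risk",
]

def readiness_profile(scores):
    """Take a dict of per-category scores (1-5) for one friction point
    and return the profile plus a proceed/pivot/pause decision."""
    missing = [c for c in CATEGORIES if c not in scores]
    if missing:
        raise ValueError(f"unscored categories: {missing}")
    avg = sum(scores[c] for c in CATEGORIES) / len(CATEGORIES)
    if avg >= 4.0:       # assumed threshold: strong profile
        decision = "proceed"
    elif avg >= 2.5:     # assumed threshold: mixed profile
        decision = "pivot"
    else:                # weak profile across categories
        decision = "pause"
    return {"scores": scores, "average": avg, "decision": decision}
```

Applying the function to a single friction point, as the methodology prescribes, keeps each assessment specific rather than averaging over an entire program.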
Why it matters
The federal organisation says the scorecard's introduction reflects a persistent issue in public-sector AI: projects failing not because of technical limitations, but because of poor problem selection.
By forcing teams to articulate the underlying problem and validate whether AI is appropriate, the tool aims to reduce misaligned investments and improve return on public spending.
It also serves as a mechanism to build internal business cases, aligning technical proposals with policy and operational priorities.
Limits and gaps
Despite its structured approach, the scorecard highlights ongoing limitations in Canada's AI readiness, particularly around data quality, interoperability and institutional capability.
AI systems "rely on data to function", making data availability and integrity a central constraint in public-sector deployment.
The tool also explicitly acknowledges that it does not address deeper ethical, legal or privacy considerations, which must be handled through separate processes. In practice, this creates a multi-layered approval environment in which projects must meet both readiness and compliance thresholds before deployment.