A Response to Canada's Six AI Pillars
The Spring Economic Update introduced the government's vision for "Artificial Intelligence for All," built on six pillars distilled from over 11,000 submissions and a 28-member task force. That is a serious consultation footprint, and the pillars are well-chosen.
The pillars are the right pillars. The harder question is whether they can move at the same speed, and what happens to the strategy if they cannot.
What follows is a response to each pillar in turn, written from the perspective of someone who spends most of their time inside executive conversations about AI adoption. The view from the boardroom is not the only view that matters, but it is one the strategy will live or die on. The pillars become real or remain aspirational based on what executives actually do with them.
Pillar 1: Protecting Canadians and Safeguarding our Democracy
AI will only deliver on its promise if Canadians trust it. That requires modern privacy and online safety laws, strong national AI safety capabilities, and secure government systems.
Trust is the foundational value for everything else in the strategy. Without modern privacy law, online safety frameworks, and AI safety capabilities, adoption either stalls or proceeds recklessly. Both outcomes are bad.
The challenge is that the institutions Canadians have historically trusted (banks, governments, regulators) were built on a slower update cycle than the one AI operates on. Cybersecurity in the AI era previews the problem. Threats evolve faster than the procurement cycles that defend against them, and the trust foundations that took decades to build can be eroded in a single high-profile failure.
Inside large organizations, the same protective infrastructure shows up not as a precondition for trust but as five concurrent review cycles: legal, IT, finance, HR, procurement. Each is acting rationally inside its own mandate. The cumulative effect is that the organization moves at the speed of its slowest approval while the technology moves at the speed of its fastest release cycle.
The pillar will be judged on whether the protections are designed to scale at the speed of the technology they govern, or whether they default to the procurement pace of the institutions that house them. The line between trust infrastructure and institutional inertia disguised as compliance is thinner than most policy documents acknowledge.
Pillar 2: Empowering Canadians
Canada must become an AI skills nation, where AI creates good jobs for Canadians, by giving access to AI training and education for all Canadians, and by representing and including Canadian voices, languages, and culture.
The ambition of an AI skills nation is directionally worthy. Training and education for all Canadians is a defensible commitment. The cultural and linguistic dimension is real. AI is surprisingly good at language, and that capability will continue to improve. Done well, it could help support and preserve French, Indigenous, and other cultures in ways previous technologies could not. We are getting only a glimpse of this so far.
The harder tension has to be named honestly. The pathway from training to employment has historically run through entry-level cognitive work: research assistance, junior analysis, first-level customer support, basic copywriting. Those tasks are precisely where AI adoption is most advanced. Hiring of younger workers in AI-exposed occupations has measurably slowed. A skills strategy that produces credentialed workers without addressing what happens to the on-ramp risks producing the credential without the career.
Empowerment has to arrive quickly and has to be validated by the market it is preparing people for. If either side lags, the mismatch resolves itself in ways that are hard to reverse. Workers pivot away from skills firms are not hiring for. Firms stop expecting to find skills the system is not producing. The mismatch becomes self-reinforcing because each side has rationally adjusted to the other's failure, and at that point the problem is no longer a coordination problem that can be solved by accelerating either side. It is a structural problem that requires rebuilding the linkage itself.
General AI literacy and specific market-valued skills are often presented as competing demands on the same training budget. They are not. They are two layers of the same stack, and they need to be built differently. Literacy is the public-good layer: a population that can engage with AI as citizens, consumers, and workers in any field. The market will not pay for literacy directly, but it depends on a population that has it. Specific skills are the conversion layer: the roles firms are actually hiring for, which change quickly enough that the delivery mechanism has to update on a quarterly cycle, not an annual one.

Universities can do the first well. They are structurally not built for the second. A skills strategy that funds both through the same channel will produce a lot of literacy and very little conversion, and the empowerment side of the strategy will fall behind the adoption side for reasons that have nothing to do with effort and everything to do with the wrong delivery mechanism for the wrong layer.
The harder question is who runs the conversion layer in Canada. There will be dollars wasted and dollars doing good. That is the nature of a problem this new. But the strategy needs to make harder choices about delivery mechanism than the pillar currently signals.
Pillar 3: Powering AI Adoption for Shared Prosperity
The gains of AI will come from putting it to work across the Canadian economy and developing pro-worker, industrial AI technologies. AI for All will support accelerated adoption among small- and medium-sized enterprises and transform public service delivery to deliver better services to Canadians.
This is the strongest pillar, and also the one with the most hidden complexity.
The data is encouraging at the surface. Most organizations now report using AI in at least one business function. Generative AI has diffused faster than any prior technology on record. The infrastructure for adoption is largely in place.
The complication is that access adoption and integration adoption are not the same thing. Access means an individual uses AI for a specific task. Integration means the workflow, the function, and eventually the product are redesigned around what becomes possible when cognition is abundant. The first is widespread. The second is rare. Roughly 7 percent of organizations report fully scaled AI deployments, against 88 percent reporting at least pilot use. That gap is where competitive advantage compounds, and it is also where the productivity gains the strategy is counting on actually live.
Shared prosperity depends on what kind of adoption the strategy actually drives. Surface-level adoption produces incremental productivity. Function-level and product-level adoption reshapes industries. The first is easier to subsidize and easier to measure. The second is what actually moves an economy.
The pillar specifically calls out small and medium enterprises, which is correct, but SMEs are the ones least equipped to do the integration work. The risk is a strategy that achieves its access metrics while the integration gap quietly widens. Industrial AI is a real opportunity here. Manufacturing is one sector where Canada has scale, expertise, and a credible case for early integration. A strategy that pushes hard on industrial AI in manufacturing could create competitive advantage that compounds in a way that broad SME literacy programs cannot.
Pillar 4: Building the Canadian Sovereign AI Foundation
AI for All will support the building of sovereign compute infrastructure at scale — resilient, sustainable, and under Canadian governance, and grow Canada's exceptional AI researchers and talent pool.
This is where Canada is in trouble.
The strategic logic is correct. Dependency on foreign compute and foreign models is a real exposure for any country, and especially for one with serious AI ambitions. Recent trade tensions with the US have demonstrated that the relationship cannot be relied on as a stable input to long-term planning. Prime Minister Carney was direct about this at Davos. Nostalgia is not a strategy.
The problem is the gap between the recognition and the response. Most of Canada's inference compute today comes from the US. Many of the data centres operating on Canadian soil are run by US companies. The four largest US hyperscalers (Microsoft, Amazon, Alphabet, and Meta) collectively committed roughly $320 billion to AI infrastructure in 2025 alone. Canada's Sovereign AI Compute Strategy commitments are fractions of a percent of a single year of US hyperscaler spending. Even allowing for the difference in scale and ambition, the gap is not a rounding error. It is structural.
The question executives should be asking is whether Canada has enough sovereign capacity for today and tomorrow as inference demand continues to grow. What happens if the US restricts access to frontier models or imposes export controls on advanced compute? Resilience or reliance is not a rhetorical question. It is a planning input.
Jensen Huang has framed compute in four categories: training, inference, sovereign, and industrial. Canada needs to know where it sits in each, and the honest answer is that on three of the four it is significantly behind. A company evaluating where to invest in AI capability is going to ask whether the country it operates in has the compute foundation to support what comes next. If the answer is no, that company will make different choices.
What "Canadian governance" of compute actually means operationally, and whether it is stable across political cycles, is the second-order question. The first-order question is whether there is enough compute on Canadian soil to govern in the first place.
Pillar 5: Scaling Canadian Champions
To scale great AI companies in Canada, AI for All will unlock growth capital and leverage government as a strategic anchor customer.
Growth capital and government as anchor customer are two of the most powerful tools a country has to scale firms in a category where capital intensity is high and early demand is unpredictable. The pillar names them correctly.
The underlying picture is more uneven than the language suggests. Cohere's recent merger activity is genuinely encouraging and represents one of the more credible Canadian AI scaling stories. But Cohere is doing serious work without competing at the frontier with OpenAI, Anthropic, or Google, and beyond Cohere it is hard to point to a deep bench of Canadian companies operating at global scale in AI. The comparison with the US and Chinese markets is not flattering, and pretending otherwise is not strategy.
Three questions are worth pressing on. What is the right definition of a Canadian champion in a market where the most valuable AI companies are global from day one? Is government as anchor customer a genuine accelerator, or does it produce firms optimized for public sector procurement rather than market competition? What is Canada's actual comparative advantage in AI, and does this pillar invest behind it or around it?
The answer matters operationally. If you are building an AI company in Canada, this pillar changes how you should think about capital, customers, and exit. If the strategy genuinely deploys growth capital at scale and uses government procurement as a real anchor, the calculus shifts. If it does not, the rational move is to look elsewhere for both, which is the outcome the pillar is presumably trying to prevent.
Pillar 6: Building Trusted Partnerships and Global Alliances
Canada will work with a variety of trusted partners to align standards, co-invest in innovation, and help Canadian companies access global markets while shaping an AI ecosystem anchored in democratic values.
Standards alignment, co-investment, market access, democratic-values anchoring. All defensible objectives.
The harder question is whether democratic values function as a real differentiator in international AI markets, or as a positioning layer that gets priced out when capability and cost diverge. The honest read is that values-based positioning works when it sits on top of competitive capability and cost. It does not substitute for them.
The US and China are the two countries leading the AI race, and the gap to everyone else is meaningful. The interesting strategic question is which alliances actually matter for Canadian AI firms in practice. The EU and India are still in the race but significantly farther behind. There may be real opportunity in alliances with countries that have similar positions: serious about AI, committed to democratic governance, and unable to compete head-to-head with US and Chinese scale. A coalition of the credibly committed might be more useful than aspirational alignment with the leaders.
For Canadian executives, the practical question is which standards and alliances should be on the radar right now, and which are noise. The answer is changing quickly, and a strategy that names the partnerships and standards specifically would be more useful than one that gestures at trusted partners in general.
The pillars in tension
The pillars are the right pillars. They are also pillars in tension.
Pillar 3 will move faster than Pillar 2, because adoption decisions are made at firm level on quarterly horizons while skills systems update on annual or multi-year cycles. Pillar 1 will struggle to keep pace with both, because the protective infrastructure runs on procurement timelines that predate AI entirely. Pillars 4, 5, and 6 operate on longer horizons still, and depend on capital, geopolitics, and institutional commitments that extend well past any single budget cycle.
A strategy is not a list of objectives. It is a sequencing decision under constraint. The question that matters is whether these six pillars can be sequenced well enough that the parts of the strategy that move fast do not finish before the parts that move slowly have started. Acceleration without absorption is the failure mode.
The pillars will not be judged on whether they were the right pillars. They will be judged on whether the strategy held them together when the speeds diverged.
Joseph Peters
Joseph Peters has been a technology executive for almost 30 years. He is the author of The Canary Papers, a six-essay series on AI adoption, organizational change, and economic impact. Papers 01 through 03, covering the pace of change, the diffusion gap between capability and adoption, and the hidden dependency every business runs on, are available at lightlabs.ai for readers who want to go deeper on the dynamics referenced above.