A Century in a Decade

An AI Thought Experiment on Scale, Acceleration, and Organizational Friction


01 - The Canary Papers: Signals in the Age of AI


This is a thought experiment, not a forecast.

It’s a way to feel scale because the human brain is good at imagining storms and famines and heartbreak, but strangely bad at imagining rates of change. We can understand disruption in theory and still be blindsided by it in practice. So instead of arguing about timelines, let’s start with a scene.

Imagine it’s 1926.

In many places, electricity is partial or absent. Running water is not a given. The telephone exists, but it’s a luxury, and often communal. In parts of the UK, outdoor toilets and shared taps are still part of normal life. Horses are not quaint; they are infrastructure. Radio is the nightly focal point: families gathered around a box that pulls voices out of the air. Television is more rumour than reality: a fragile novelty, grainy and mechanical, impressive in the way a magic trick is impressive, but just a glimpse of what is to come.

Hold that world in your mind.

Now do the unfair thing a thought experiment allows: compress time. Take what normally unfolds over a century and force it into a decade. Imagine the person in 1926 waking up in 1936 with the world of 2026 arriving all at once: a Cybertruck beside a horse cart. “Glowing rectangles” in every hand that contain a library, a map of the planet, a camera, a broadcast studio, and a portal to anyone, anywhere. This is all happening before many rural homes even have indoor plumbing.

What if the next decade delivers the kind of transformation that normally takes a century?

The idea isn’t mine. It sits, implicitly, behind the way some of the most serious people in artificial intelligence talk about the next few years. Demis Hassabis, the CEO of Google DeepMind, has credibility that doesn’t depend on hype: in 2024 he and John Jumper were co-awarded the Nobel Prize in Chemistry for AlphaFold, the system that cracked a foundational problem in biology by predicting protein structures at scale.

In a 2025 conversation with Lex Fridman and again at the World Economic Forum in Davos in 2026, Hassabis shared his belief that progress will accelerate over the next ten years at a rate that is difficult to contemplate. He thinks the next decade will bring 10x the innovation at 10x the speed. In other words: something like 100 years of progress in ten.

Maybe that’s wrong. Maybe it’s hype. Maybe the curve flattens, the breakthroughs slow, the world shrugs and keeps moving.

But the interesting question isn’t whether the future is exactly ten years away. It’s whether we have crossed a threshold where the pace of capability improvements, and the gap between those who adopt them and those who don’t, starts widening faster than our institutions can mobilize.

One way to talk about this threshold is what Hassabis describes as jagged intelligence: systems that are strongly capable in some domains and strangely weak in others. The jaggedness matters because it explains both the wonder and the skepticism people feel toward today’s AI.

You can see jagged intelligence in the wild:

  • A model can draft a persuasive legal memo in minutes with a clean structure, coherent reasoning, and plausible citations. Then it fails at a simple scheduling task because it misreads a constraint, invents an option, or confidently commits an error a human assistant would never make.

  • It can write working code, explain the architecture, and refactor an ugly function into something elegant and then stumble on basic arithmetic.

  • It can summarize a complex medical paper with nuance and caution, yet be oddly gullible about a fake statistic presented with confidence.

The unevenness is real, and it fuels the “this is hype” reaction among smart, experienced people who notice the failures and conclude the whole thing is unreliable.

The mistake is assuming jaggedness is a permanent state. The history of technology suggests the opposite: what begins jagged often becomes smooth enough to be dependable, not perfect, but reliable enough to reorganize work around it. And once work reorganizes, the economy follows.

That’s why the deeper shift isn’t about chatbots or novelty. It’s about what happens when cognition gets cheap.

For centuries, the knowledge economy has been built on the value of specialized mental labour: research, synthesis, drafting, analysis, pattern matching, scenario planning. These weren’t just outputs; they were the stepping stones of a career. A junior associate becomes a senior associate by doing the work no one else wants to do. A consultant learns by producing the deck. A policy analyst earns judgment by writing the first memo no one reads but everyone uses.

When AI absorbs the first draft, the research pass, the initial synthesis, it doesn’t merely “increase productivity.” It changes the shape of organizations.

The risk is not that expertise disappears. It’s that the path to expertise erodes.

And once you recognize that, the conversation shifts. The question stops being, “Will this tool replace people?” and becomes: What becomes valuable when many forms of cognition are on tap?

In an economy where intelligence is increasingly abundant, scarcity doesn’t vanish; it takes a different shape.

  • Judgment becomes more valuable: not just deciding what’s true, but deciding what matters. What’s worth building? What’s worth fixing? What’s worth ignoring?

  • Trust becomes a differentiator: what is legitimate, safe, compliant, ethically defensible, and socially acceptable.

  • Coordination becomes decisive: the ability to align technology with people, incentives, and systems quickly enough to act while the window is open.

  • Responsibility becomes unavoidable: who is accountable when a system fails, harms, discriminates, or simply makes a costly mistake.

But two additional scarcities deserve a place on that list, especially for those who have spent decades building real businesses, with the experience, wisdom, and scars that come with it.

The first is relationships.

Businesses run on trust networks: customers who pick up the phone because they know you, partners who take the meeting because you’ve delivered before, colleagues who share the ugly truth because you’ve earned candor. AI can draft the email, but it cannot replace the reputational history that makes the email matter. In many industries, the actual competitive moat isn’t information; it’s confidence and trust, built over years of competence and follow-through.

The second is domain-specific knowledge, the kind people often don’t appreciate because it lives in the body and in memory rather than in a PDF.

If you’ve worked in a field for twenty years, you don’t merely “know facts.” You know which facts are misleading. You know where the bodies are buried: legacy systems, regulatory landmines, customers who say yes and mean no, edge cases that don’t show up in documentation. You know the difference between what is theoretically optimal and what is operationally doable.

AI can make novices sound fluent. That is not the same thing as making novices wise.

So yes: cognition gets cheaper. But wisdom, the blend of domain context, pattern recognition, relationships, and judgment, does not instantly become abundant. It becomes the bottleneck.

And this is the point where skeptics often say, “Fine. But society won’t move that fast.”

They’re right, and also not safe.

Because society has speed limits, and those limits are not simply technical. They’re legal, economic, cultural, and emotional. And the emotional part is the minefield most often overlooked.

Regulation and liability slow adoption, not because regulators are stupid, but because the cost of being wrong is real. A company can tolerate a few hallucinated emails; it cannot tolerate a hallucinated medical recommendation in a high-risk setting. Procurement cycles are designed to prevent disasters, which means they also prevent speed. Unions and professional bodies exist to protect workers and standards, which means they will push back when adoption looks like a wage-cutting exercise disguised as “innovation.”

Then there is culture: the inertia of the status quo.

The status quo is not merely a set of habits; it is a settlement between interests. It is budgets, titles, processes, compliance frameworks, social prestige, tacit agreements, and unspoken power. Disrupting it is difficult for the same reason moving a boulder is difficult: it has settled into a groove. Even when everyone agrees change is inevitable, nobody wants their part of the organization to absorb the shock.

And finally, there is backlash that is predictable and already taking shape.

When a technology threatens identity and livelihood, the first emotion is often resentment: they’re replacing us. Resentment hardens into suspicion: they’re lying about what it can do. Suspicion turns into resistance: slow-walking adoption, public campaigns, litigation, sabotage-by-procedure, internal revolts, mass refusals to use the tools, and, in the political realm, rules written to freeze progress in place.

The future doesn’t just arrive. It negotiates.

We’ve seen this pattern before. Nuclear power offered abundance and delivered anxiety. Genetically modified crops delivered yield and triggered cultural rejection in many places. Ride-sharing arrived as a convenience and was met with lawsuits, bans, protests, and regulatory trench warfare. Even when the technology “works,” adoption is a social bargain, and social bargains are slow.

This is the best argument against the “century in a decade” claim: diffusion is constrained. People don’t change at the rate machines improve. But here is the part skeptics sometimes miss: the constraints do not apply evenly.

Some organizations, some sectors, some countries, some founders will negotiate the bargain faster. And once they do, the advantage compounds. The gap between capability and adoption becomes the story. And the gap between adopters and hesitators becomes the competitive landscape.

You can see the taxonomy emerging already:

  • AI-native companies are built with AI as the default operating system: not a tool bolted on, but the foundation. They hire fewer people, move faster, ship more experiments, and often look “too small” for the market share they can capture.

  • AI-forward companies are not born with it, but they adopt effectively: they redesign workflows end-to-end, instrument results, train teams, and treat AI as a strategic capability rather than a pilot program.

  • AI-hesitant companies dabble: a few licenses, a few workshops, a few “centers of excellence” that never touch the core business. They talk about adoption as if the goal is familiarity, not transformation.

  • AI-resistant companies actively avoid it: bans, blanket prohibitions, cultural scorn, or a belief that their moat is too strong for disruption to matter.

The cost of staying still is not immediate collapse. It’s something more subtle and more dangerous: falling behind in the compounding game.

When AI-forward organizations learn faster, they improve products faster. When they improve products faster, they win customers faster. When they win customers faster, they generate data, feedback loops, and capital that make them even faster. Meanwhile, AI-hesitant organizations are still debating policy. AI-resistant organizations are still debating whether the threat is a hoax.
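
To make the compounding dynamic concrete, here is a minimal sketch in Python. The cycle frequencies and the 1% gain per cycle are invented for illustration, not figures from this essay or any study; the only point is the shape of the curve when learning compounds at different frequencies.

```python
# A toy model of compounding advantage. Every number here is an invented
# illustration, not a measurement; the point is the shape of the curve.

YEARS = 10

def capability(cycles_per_year: int, gain_per_cycle: float, years: int = YEARS) -> float:
    """Capability index after compounding a small gain over many learning cycles."""
    return (1 + gain_per_cycle) ** (cycles_per_year * years)

# Hypothetical firms: an AI-forward firm running weekly experiments and an
# AI-hesitant firm running quarterly pilots, each improving 1% per cycle.
forward = capability(cycles_per_year=52, gain_per_cycle=0.01)
hesitant = capability(cycles_per_year=4, gain_per_cycle=0.01)

print(f"AI-forward after {YEARS} years:  {forward:6.1f}x baseline")
print(f"AI-hesitant after {YEARS} years: {hesitant:6.1f}x baseline")
print(f"Relative gap: {forward / hesitant:.0f}x")
```

The specific outputs mean nothing; what matters is that the gap grows multiplicatively with the number of learning cycles, which is why a weekly experiment cadence and a quarterly review cadence end up, in effect, in different eras.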

Skepticism, in that environment, becomes a strategic position. And like any strategic position, it has a shelf life.

How long until displacement? It depends on the market’s friction. In heavily regulated sectors, it may take many years. In software, marketing, customer support, and many professional services, it can happen shockingly quickly, sometimes not through slow erosion but through a single funded entrant that re-prices the category and collapses margins overnight.

This is where the widening gap becomes undeniable.

Two companies can live in the same year and inhabit different eras. One is running dozens of experiments a week, compressing cycle time, learning at speed. The other is still writing a policy memo about whether employees should be allowed to use AI at all. Over time, the difference isn’t incremental. The gap widens into a chasm that is hard to traverse.

So what do we do with all this?

A thought experiment is only useful if it changes the way you see the present.

If intelligence is becoming cheaper, then a lasting advantage shifts toward the human capacities that don’t scale the same way: judgment, relationships, and domain understanding, plus the organizational capacity to move without breaking.

For leaders, that means treating AI not as a line item or a tool rollout, but as a three-part discipline: vision, capability, and change. First, leadership has to articulate a clear view of how the business will be meaningfully different with AI: what gets faster, what gets cheaper, what becomes possible, and where advantage will come from. Second, it requires the technological acumen to translate that vision into reality: choosing the right use cases, data foundations, architectures, and governance so AI can be safely embedded into the work. And third, it demands serious change leadership and management: the ability to bring people with you, redesign roles and incentives, and rewire delivery models so adoption is reinforced rather than quietly punished.

For individuals, it means something less glamorous and more practical: becoming bilingual in your domain and in AI-enabled work. Not “learning prompts,” but learning how to frame problems, verify outputs, and use these systems as collaborators without surrendering your judgment. If you’ve spent twenty years building expertise, don’t throw it away by acting like you’re starting over. Use AI to amplify what you already have: your context, your taste, your network, your sense of what fails in the real world.

For institutions (education, government, professional bodies), the task is hardest. They are designed for legitimacy through stability. But if the ladder changes shape, if entry-level work shrinks, if assessment and credentialing no longer signal competence the way they used to, then stability becomes a form of denial. The goal should not be to “protect” the past, but to create new on-ramps into competence and new safety rails around accountability.

Now, back to 1926.

Imagine trying to explain, to that family around the radio, what the coming century would contain: satellites, antibiotics, jet travel, computers, the internet, a phone that is also a camera and a map and a bank and a newsroom. It would sound like fantasy. And even if you could describe it perfectly, the description wouldn’t transmit the disorientation of living through it.

That is what this thought experiment is for. Not prediction. Perspective.

If the next decade really does compress change, if not a full century, then enough to reorder industries and rewrite foundations, then the central question won’t be whether we can build smarter machines. We will.

The question is whether we can build businesses, institutions, and lives that can absorb the shock without letting resentment harden into resistance, without letting inertia masquerade as prudence, and without waking up one morning to discover that the people who adopted early are no longer in the same decade as the people who didn’t.

In 1926, the radio felt like the future.

In our moment, the future may not feel like a device at all. It may feel like work itself changing shape, quietly at first, and then all at once.

That’s the journey.

That’s the thought experiment.

And it ends where it began: with a simple question—if the century can be compressed into ten years, what are you doing with your decade?


A Counterargument

The Mirage of Acceleration: Why the World Won’t Move at the Speed of Code



The premise of “A Century in a Decade” is seductive.

It invites us to compress the timeline of human progress, ten years delivering a century of innovation, on the assumption that jagged AI will smooth out, cognition will become cheap, and the main barrier is our inability to imagine scale.

But the argument slips into a classic error of technological determinism: it confuses capability with deployment, and deployment with transformation. The history of technology is not the story of inventions triumphantly conquering society. It is the story of the institutions of law, finance, labour, legitimacy, and trust deciding what is acceptable, who pays, and who benefits. The next decade will not be a compressed century. It will be a decade of friction, uneven diffusion, and expensive bargaining.

1) Jaggedness isn’t a phase you simply “graduate” from; it’s a governance problem

The thought experiment treats “jagged intelligence” as a transitional state: brilliant at coding, clumsy at scheduling, but inevitably smoothing out into something dependable. That assumption understates the central constraint in high-stakes domains: liability.

In medicine, finance, critical infrastructure, and regulated hiring, “99% accurate” is not a productivity bonanza; it is a lawsuit waiting for a plaintiff. A hallucinated medical recommendation is not a glitch; it is an incident report. A discriminatory screening decision is not an edge case; it is a reputational crisis with legal consequences.

Worse, as systems become more complex, errors can become more subtle and harder to detect. The verification burden does not shrink automatically; it often grows. Organizations respond rationally: they add human review layers, audits, documentation, and controls. The result can be a perverse short-run outcome: what looks like “cheap cognition” producing expensive oversight.

In other words: the question is not whether models improve. It’s whether institutions can define a trust threshold that is operationally and legally survivable.

2) The erosion of expertise doesn’t just “shift value”; it increases fragility

The thought experiment is right to worry about the apprenticeship ladder. But it frames that erosion as an economic reallocation: we’ll move from doing to judging; from execution to intent. That is too optimistic.

Expertise isn’t a set of facts; it is the embodied pattern recognition that comes from repeatedly being wrong in consequential ways and learning why. If AI absorbs junior-level work, the first research pass, the initial code scaffold, the early synthesis, we risk severing the feedback loops that produce competent seniors.

You cannot reliably “judge” what you have never had to do. You end up with leaders who sound fluent but cannot smell failure. That is not acceleration; it is systemic fragility, especially when decisions become higher leverage and mistakes become costlier.

History offers a warning: societies that deskill too aggressively often become dependent on brittle systems and a shrinking cadre of true experts. The result is not progress at 10x speed. It is progress punctuated by breakdowns.

3) The trust paradox raises transaction costs, and it may be the dominant macro effect

The thought experiment correctly identifies trust as a scarce resource. But it underestimates how paralyzing a collapse of trust can be.

If the marginal cost of generating plausible text, images, voices, and video drops toward zero, then so does the cost of generating noise: spam, synthetic fraud, deepfakes, fake “internal memos,” counterfeit vendor communications, and automated social engineering. When you cannot trust what you see or hear, the rational response is “defensive verification.”

That increases transaction costs: more authentication, more compliance, more security, more process. Firms retreat into smaller trust networks. Procurement gets slower. Deals take longer. This is not “friction” as a speed bump. It is friction as an immune system, with society paying an overhead cost to prevent collapse.

The world moves at the speed of trust because trust is the substrate of coordination. When the substrate is attacked, the economy does not glide into efficiency; it braces for impact.

4) Diffusion is expensive because it requires complements, not because people are unimaginative

The “century in a decade” framing treats adoption as a function of belief: if leaders had vision, they’d move; if institutions were faster, they’d change. But diffusion is often constrained by complements: the boring infrastructure that makes a technology useful at scale.

AI deployed inside real organizations requires:

  • clean and governed data,

  • redesigned workflows (not just bolted-on tools),

  • cybersecurity and access control,

  • evaluation and monitoring,

  • training, incentives, and new roles,

  • procurement pathways and vendor accountability.

These are not optional. They are the condition of deployment. And they are costly, slow, and political because they redistribute power inside firms. This is why so many “transformations” stall: the technology is easy to demo, but hard to institutionalize.

5) Institutions don’t merely slow technology; they reshape it through constraint and bargaining

The history of technology shows not a smooth curve but a sequence of settlements: lawsuits, regulation, labor action, insurance requirements, industry standards, and public backlash. Resentment turns into resistance when people believe the gains accrue privately while the costs are socialized: job displacement, surveillance, wage compression, loss of autonomy.

In response, society often chooses constraint: speed limits, safety standards, audits, licensing regimes, or outright prohibitions on certain uses. The future doesn’t just arrive; it negotiates. And negotiation frequently produces throttles, not acceleration.

Conclusion: The next decade will be uneven, fast in pockets, slow in aggregate

The more plausible forecast is not a compressed century, but islands of rapid capability inside a continent of institutional inertia. Some sectors and firms will move quickly, especially where liability is low and feedback is fast. Others will adopt slowly because the cost of error is catastrophic and legitimacy must be preserved.

So when the thought experiment asks, “What are you doing with your decade?” the honest answer for most organizations is not “becoming AI-native.” It is struggling through integration, governance, trust repair, and political bargaining.

The technology may move at the speed of code. But the economy moves at the speed of coordination, and coordination moves at the speed of institutions.


Acknowledgements

My sincere thanks to Awanish Sinha, Linda Martin, David Rodier, Brenda Gardiner, Darrell Alfonso, Trevor Langdon, and Ryan Vong for taking the time to review this essay and for their thoughtful suggestions for improvement.


The Canary Papers

The Canary Papers is a six-essay series drawn from executive conversations on AI adoption and strategic change. Named after the early detection systems once used in coal mines, the series focuses on signals rather than headlines, where capabilities are compounding, where organizations are lagging, and where competitive gaps are quietly widening. Each paper examines a distinct pressure point, from displacement timelines to diffusion barriers and trust costs. The aim is not prediction, but disciplined clarity: to help leaders recognize structural shifts early enough to act with intention rather than react under pressure.
