The Hidden Dependency

What AI Is Doing to the System Your Business Runs On


03 - The Canary Papers: Signals in the Age of AI


On the morning of September 15, 2008, Lehman Brothers filed for bankruptcy. By the end of that week, the global financial system was closer to collapse than most people watching it knew. What followed was a decade of consequences: foreclosures, unemployment, a generation of young workers entering the labour market under conditions their education had not prepared them for, and a sustained loss of public trust in institutions that has not fully recovered.

The remarkable thing about 2008, in retrospect, is how invisible the risk had been to the people carrying it. Executives running businesses that had nothing to do with mortgage origination discovered that their companies depended on a financial architecture they did not understand. Consumer demand evaporated because households that had never taken out a subprime loan found their home equity vanished, their credit tightened, and their employers suddenly cautious. The failure did not stay contained in the financial sector. It propagated through credit, consumer spending, workforce stability, and eventually institutional legitimacy.

The lesson was not that finance is dangerous. The lesson was that businesses depend on external systems they did not design and mostly do not think about, and that those systems can fail in ways that look invisible until they are not.

AI is acting on a different external system, but the structural pattern is the same. And most executives have not yet priced in what their business actually depends on.

The system under the system

Every business depends on three external systems to function. The first is a labour market that produces skilled employees at predictable costs. The second is a consumer base with stable enough income to make purchasing decisions over time. The third is an institutional environment (legal, regulatory, and civic) that provides the stability required for long-term planning. These are not free goods. They are the product of an arrangement that has been quietly restructured over the last thirty years.

Through the postwar period, large institutions absorbed most of the volatility in working life on behalf of individuals. Employers provided stable careers, defined-benefit pensions, health coverage, and predictable progression. Government provided the backstop for what employers did not cover. Education was a one-time investment that produced a durable credential. The individual’s job was to participate. The institution’s job was to absorb the risk of a volatile world.

Beginning in the 1980s and accelerating through the 1990s and 2000s, that arrangement inverted. Employers shed long-term risk in the name of flexibility. Pensions became defined-contribution, shifting investment risk to individuals. Benefits became portable, requiring individuals to assemble their own coverage. Careers became self-directed. Housing became an individual asset-accumulation problem. Credentials became continuous rather than terminal, turning education into a lifelong cost.

Each of these shifts was rational at the level of the individual institution making it. Cumulatively, they produced an economy in which individuals carry far more volatility than they did a generation ago, and in which their ability to carry that volatility depends almost entirely on one thing: a functioning labour market that provides the income, the benefits access, and the credential value required to assemble a stable life.

The labour market, in other words, became the most consequential pressure point in the system that businesses and individuals both depend on. When it works, much of the rest holds. When it stops working, what is downstream of it begins to strain.

Why AI is the stress test

AI is different from previous labour market shocks in one specific way: it is acting on entry-level cognitive work across sectors simultaneously. Prior disruptions were sectoral. Manufacturing automation hit specific regions and specific occupations over decades. Trade liberalization reshaped particular industries. The internet eliminated some roles and created others, but the transition was gradual enough that workforces could restructure around it.

The data from Paper 02 made the current pattern concrete. Anthropic’s observed-exposure research found that hiring of younger workers aged twenty-two to twenty-five had slowed in AI-exposed occupations since ChatGPT launched, even as overall employment held steady. The tasks where AI adoption is most advanced (research assistance, junior copywriting, basic data analysis, first-level customer support) are precisely the tasks that have traditionally served as on-ramps into professional careers.

The Stanford AI Index released in April 2026 sharpens the picture further. Drawing on US payroll data through September 2025, research by Erik Brynjolfsson and co-authors, published under the title Canaries in the Coal Mine, found that employment for software developers aged twenty-two to twenty-five had fallen close to twenty percent from its 2022 peak. Headcount for older age groups in the same occupation continued to grow. A separate study covering sixty-two million workers and two hundred and eighty-five thousand US firms describes the pattern as “seniority-biased technological change”: AI substituting for junior labour while leaving senior roles intact. The resonance with this series is not a coincidence of vocabulary. It is the same phenomenon observed from a different angle. The structure is still standing. Only the entry floor is being removed.

The front door is narrowing before the building shakes.

This is not, on its own, an argument that the labour market is collapsing. Total employment remains strong. Senior workers continue to be hired. The compression is specifically at the entry point. But the entry point is where the system produces the next generation of workers, consumers, and citizens. A labour market that cannot absorb new entrants is a system that has stopped reproducing itself, and the consequences of that stoppage do not arrive all at once. They arrive as a slow attenuation of everything that depends on the system working.

The restructuring is uneven

Before the argument hardens into something that reads as catastrophic, an important qualification. The front door is not narrowing everywhere at the same rate, and not every kind of work is compressible by current AI systems.

Companies that produce physical goods have different exposure profiles than companies that produce software. A shoe still needs to be designed for a human foot, manufactured in a physical facility, and distributed through physical channels. Nike will continue to need designers, supply chain managers, retail operations, and the people who coordinate between them. The AI tools that are compressing entry-level knowledge work in SaaS firms are a productivity layer for a physical-goods business, not a replacement for its core workforce.

Companies that deliver experiences are in a similar position. Tourism, hospitality, live events, sports, healthcare delivery, education at its best, personal services, trades: these are businesses where the value is produced in human presence and cannot be compressed by a model that operates at a distance. The experience economy is not immune to AI, but its exposure is to productivity enhancement rather than role elimination.

Blue-collar work sits in a more complicated position. It can absorb some displacement from compressed knowledge work, and is doing so already in the form of university graduates taking positions below their credentials. But that absorption has a finite limit, and it changes the nature of the work itself in ways worth examining elsewhere.

The executive implication is that sectoral exposure to the AI stress test is not uniform, and the businesses with physical, experiential, or presence-based components have more time and more optionality than businesses built entirely on compressible cognitive work. That is a useful thing to know, but it is not a reason to relax. Every business, regardless of sector, still depends on the same three external systems. Even a physical-goods company needs a labour market that can produce its designers, a consumer base that can afford its products, and an institutional environment that supports long-term planning. The dependency does not go away because one sector is less exposed than another.

New shapes of entry work

The more interesting development, and the one most underappreciated in the current discourse, is that AI is not only compressing old entry roles. It is creating new ones that did not exist two years ago, which favor workers fluent in AI tooling and well suited to the translation work between executives and systems.

Two roles are emerging with particular clarity in conversations with executives adopting AI seriously.

The first is what might be called the AI Interpreter. A growing number of executives are discovering that they can prototype software themselves using AI coding tools, building rough versions of applications that would previously have required a contract with a SaaS vendor or a full engineering team. The cost structure is compelling. Instead of paying ongoing license fees for a platform that does eighty percent of what they need, they can build the specific thing they actually need in days. The problem is that they cannot productionize, secure, maintain, or scale what they build. The gap between what an executive can vibe-code in an afternoon and what a business can actually run on is substantial, and closing that gap requires a specific kind of worker: someone technically capable enough to refine and harden the prototype, and communicatively capable enough to understand what the executive was actually trying to accomplish. This is a translation role. It is also genuinely entry-level work. A developer fluent in AI tooling and comfortable with executive ambiguity can do this work as well or better than a senior engineer who does not natively understand how the prototype was built, and in practice this describes early-career workers more often than experienced ones.

The second is the Agent Conductor. As AI systems move from single-shot tools to persistent agents that take actions on behalf of a business, the work of orchestrating, supervising, and establishing trust in those agents becomes its own function. Someone has to decide which agents are authorized to do what, monitor their outputs, catch their failures, and build the governance scaffolding that makes them safe to deploy at scale. The labour market is already signalling this shift before most organizations have internalized it. The Stanford AI Index reports that agent deployment remains in the single digits across nearly all business functions, with a majority of organizations reporting no agent use at all. Yet job postings referencing agentic AI, AI agents, and orchestration frameworks grew exponentially between 2024 and 2025, while postings mentioning chatbots and conversational AI declined. Stanford’s own framing is precise: demand is shifting from general familiarity with chat-based tools toward the skills required to coordinate and operationalize task-oriented systems. The role is forming faster than the deployments are. The people most likely to grow into it quickly are those already fluent in the tooling. A senior executive can write the policy. Someone trained in the mechanics of agent orchestration can implement the practice.

Both roles share a useful property. They are genuinely entry-level but they are not replaceable by the AI itself, because they require judgment, communication, and the ability to work across the technical-executive boundary. That is exactly the kind of work the old contract used to produce at the entry level and is struggling to produce now. An executive paying attention to this will notice something the labour market data does not yet fully capture: the front door is not simply narrowing. It is changing shape. The question for any business is whether the new doors get built deliberately, or whether they get left to form by accident in the handful of firms that happen to think about them.

Consider how this plays out inside a single firm. A regional insurance company adopts AI tooling aggressively in its claims processing, underwriting research, and customer service functions. Productivity rises quickly. Senior adjusters handle more complex cases, claims cycle times drop, and the finance team revises the hiring plan downward for the next two years. Junior analyst and first-level claims roles quietly disappear from the headcount model. Two years on, the pattern reveals what was traded. There is no training bench, no one two years into the job learning how to read a file, spot the patterns that matter, and build the judgment that experienced adjusters take for granted. There is no Interpreter layer, because no one was hired to translate between the AI tools and the specific needs of the firm. There is no Agent Conductor function, because the agents being deployed are being managed by people too senior to touch them day to day. The firm has optimized its present at the cost of its future, and the cost does not show up in the numbers until the first senior adjuster retires and there is no one ready to replace them.

The downstream effects executives should track

Scale that pattern across an economy and it produces a specific causal chain. Fewer entry positions mean weaker early-career earnings formation for a generation of workers. Weaker earnings mean people delay buying homes, save less, and spend more cautiously over time. That caution shows up, eventually, as a slowly thinning customer base for the businesses that sell to households. As the pattern becomes visible, the workers most affected begin to perceive the system as failing them, which feeds political volatility, regulatory unpredictability, and erosion of the long-term planning environment every business depends on. None of these effects arrive as a single shock. They arrive as a slow propagation, each stage amplifying the next. Any executive has a legitimate business interest in tracking three of them in particular.

Workforce formation. The question is whether the talent pipeline a business depends on can still be built under current conditions. If traditional entry roles are being compressed and new roles are not being deliberately designed to replace them, the result is a gap in the middle of the workforce five to ten years from now. The senior people retire. The mid-career people advance. The replacements at the junior level were never hired. McKinsey’s 2025 survey, cited in the Stanford AI Index, found that one-third of organizations expect AI to reduce their workforce in the coming year, and that expected reductions exceed observed reductions in nearly every business function. Executives themselves are telegraphing that the pace is accelerating. This is not an HR problem. It is a strategic planning problem with a long lag time.

Consumer capacity. Businesses that sell to households depend on those households having predictable income. A labour market that produces fewer entry-level positions, or entry-level positions at lower wages, produces consumers with less stable purchasing power over time. The effect is slow and diffuse, which makes it easy to underweight in quarterly planning. It is also cumulative, which makes it dangerous to ignore over a five-year horizon.

Institutional trust. The most underappreciated downstream effect is what happens to the legitimacy of the institutions businesses depend on when the underlying arrangement stops delivering. When workers and consumers no longer believe the system is working as advertised, regulators face pressure to act quickly, rules get rewritten under political duress rather than careful design, and the long planning horizons that capital-intensive industries depend on get shorter. Executives operating under the assumption of institutional stability are making a bet that the current turbulence is cyclical. If it is structural, that bet becomes more expensive each year it stays on the table.

What executives can control

The payoff of understanding the hidden dependency is that it clarifies what is and is not within an executive’s control. Policy is not. Macroeconomic conditions are not. The shape of the labour market a decade from now is not. What is within an executive’s control is the set of decisions any business makes about how it participates in the system it depends on.

How you design entry points matters. If traditional junior roles are being absorbed by AI, the question is whether you deliberately design the Interpreter and Conductor roles that are emerging, or whether you let your entry-level hiring quietly disappear.

How you think about internal development matters. A workforce that cannot form itself externally has to be formed internally, which is a different kind of investment than most firms have had to make for thirty years.

How you communicate about AI adoption matters. Workers and customers are watching how businesses talk about AI, and the language of efficiency without the language of obligation produces exactly the kind of institutional distrust that weakens the systems every business depends on.

How you think about the components of your business that are not compressible matters. Physical goods, experience delivery, presence-based services: these are the parts of your operation that retain value in a world where cognition is cheap, and they are also the parts that continue to employ the people whose purchasing power keeps your consumer base stable.

The quiet restructuring

2008 taught a lesson that most executives absorbed at the time and many have since forgotten: distributed risk illusions fail catastrophically when they fail, and the businesses that understood their hidden dependencies fared better than the ones that did not. The failure of 2008 was not about banking. It was about the collective decision, made by thousands of rational actors each acting within their own mandate, to build an economy that depended on one assumption remaining true.

AI is a different stress test on a different distributed illusion. The assumption this time is not that housing prices will keep rising. The assumption is that the labour market will continue to absorb new entrants, produce reliable workers, and sustain the consumer base and civic environment that every business quietly depends on.

The executives who treat AI as an operational question will optimize their internal systems while the external systems they depend on continue to restructure around them. The executives who understand what their businesses actually depend on will make different choices about where to invest, how to hire, and how to build.

The rate of diffusion has changed. The architecture beneath it is changing too. The first change is already visible in every quarterly report. The second is still quiet, still early, still available to be noticed in time.


A Counterargument

The Hidden Resilience

A Response to “The Hidden Dependency”


The strongest challenge to Paper 03 is not that it is alarmist. It is that it may be misidentifying where the adjustment is actually taking place.

The essay argues that AI is quietly weakening the mechanisms that reproduce the next generation of workers, consumers, and citizens. That is a serious claim, and the data on compressed entry-level hiring is real. But the argument rests on a premise that deserves more skepticism: that the traditional entry-level job is the indispensable front door through which economic stability must continue to be built.

That assumption was under pressure long before AI. The old model of degree, junior role, apprenticeship, and linear progression had been fraying for years. Employers had already compressed training, outsourced development, inflated credential requirements, and demanded experience for roles that were nominally entry-level. AI may be accelerating that change, but acceleration is not systemic failure. It may simply be forcing firms and workers to abandon an outdated mechanism of workforce formation and replace it with a different one.

Put differently: the front door may not be narrowing. It may be relocating.

The 2008 analogy frames the problem in a way that biases the conclusion.

The financial crisis was a balance-sheet failure rooted in leverage, opacity, and contagion. Its defining feature was that the risk was genuinely hidden, mispriced, and concentrated in opaque instruments until the moment the system failed. The AI transition has some of these properties in smaller form. It produces opacity inside certain model-dependent workflows, concentrates capability in a handful of frontier firms, and can propagate second-order effects through labour markets and demand. But the scale and mechanism are different in ways that matter. The AI transition is distributed across firms, observable in quarterly data, extensively researched in real time, and publicly debated in every executive conversation. A risk that is being actively measured and incrementally absorbed is not the same kind of risk as one buried inside a AAA-rated CDO. Using 2008 as the organizing analogy flattens that distinction, and it biases the reader toward collapse logic when the more relevant frame is reallocation logic.

The strongest empirical claim in the paper is narrower than it appears.

A twenty percent decline in 22-to-25-year-old software developer headcount from a 2022 peak is striking, but 2022 was the peak of a historic tech hiring bubble. ZIRP-era over-hiring, followed by a 2023 correction that swept through the industry, confounds the AI signal. The same cohort in the same occupation is being measured against an anomalously inflated baseline. That does not make the AI effect zero. It means the size of the structural component is genuinely uncertain, and reasonable analysts place it anywhere from modest to substantial. Building a systemic-risk case on a contested effect size, in an early window, against a distorted baseline, is the kind of move the paper’s own epistemic standards would normally flag.

History suggests labour markets reproduce themselves through reorganization, not preservation.

The disappearance of switchboard operators did not break the communications sector. The spreadsheet did not destroy the pipeline into finance. The internet did not end marketing because junior media-buying and basic research tasks changed shape. In each case, entry work was reconstituted around new skills, tools, and expectations. Painful? Yes. Uneven? Certainly. But not evidence that the system itself had stopped functioning. Paper 03 acknowledges this pattern in its discussion of Interpreter and Conductor roles, then treats those roles as fragile exceptions. Add AI operations analyst, agent QA lead, and workflow orchestrator to that list and the shape of the new entry tier becomes easier to see. The more honest read is that rapid emergence of new categories is exactly what a labour market reproducing itself under new technical conditions looks like.

Consumer welfare does not arrive only through wages.

A household experiences economic welfare through cost of living, access, convenience, and quality as well as through earnings. If AI reduces the cost of legal help, tutoring, software creation, healthcare administration, or customer service, consumer capacity may rise in ways a narrow focus on entry-level salaries cannot capture. The fair concession is that these benefits may arrive unevenly and later than the wage disruption that concerns Paper 03. Cost reductions propagate through markets on slower timelines than hiring decisions, and the households most exposed to entry-level compression are not always the first to capture the consumer-side gains. But the paper’s chain from compressed junior earnings to a thinning customer base assumes productivity gains accrue entirely to firms as margin. Historically they have also flowed to lower prices, better products, and new categories of consumption. Treating only the pessimistic half of that distribution as the base case is a choice the paper does not quite defend.

Fewer juniors may reflect a flatter organization, not a hollowed-out one.

If AI allows senior people to become dramatically more productive, firms may genuinely need fewer apprentices in the old sense because the work itself has changed. Training may happen in shorter cycles, closer to live systems, with AI as scaffolding. A worker may become productive in six months rather than two years. A small team may build what once required layers of coordination. In that world, reduced junior headcount does not automatically imply optimizing the present at the cost of the future. It may reflect a different and more compressed development cycle.

This compression applies unevenly across domains. In software, design, marketing, and much of knowledge work, the development cycle genuinely can shorten because the work product is inspectable, iterative, and relatively forgiving. In judgment-heavy, high-stakes, regulated domains such as insurance claims, clinical practice, legal adjudication, and parts of financial services, apprenticeship earns its keep. Pattern recognition is built through exposure to thousands of cases, edge cases carry real consequences, and the cost of a wrong call can surface years later. Firms in those domains that compress their training bench too aggressively are taking on a specific and identifiable risk, and Paper 03 is right to flag it. The general point stands that reduced junior headcount is not automatically a warning sign. The particular point stands that in some domains it is.

Institutional trust is mediated, not determined.

Labour-market dislocation can feed distrust, but distrust is not a direct or inevitable property of technological change. It is mediated by governance quality, policy speed, and institutional response. Two economies absorbing the same AI shock can produce very different levels of public trust depending on how training systems, credentialing, benefits portability, and labour-market policy respond. The paper treats institutional erosion as a downstream consequence of the compression. It is at least as plausibly a function of the governance response, which means the civic substrate of business is more within policy reach than the essay implies.

The strategic inversion.

The real executive risk may not be underestimating hidden dependency. It may be overestimating fragility and responding too defensively. Firms that cling to legacy workforce models out of fear that AI is removing the entry floor may end up protecting obsolete structures while more adaptive competitors build faster, leaner, and more accessible ones.

The strategic question is not whether AI is quietly breaking the system beneath the firm. It is whether leaders can distinguish between the breakdown of a familiar labour-market form and the emergence of a new one.

That is a harder question, and a more useful one. The future may not belong to the businesses that preserve the old front door. It may belong to the ones that stop mistaking a moving threshold for a collapsing building.

REFERENCES

Stanford Institute for Human-Centered AI (April 2026) — Artificial Intelligence Index Report 2026. Stanford University. https://hai.stanford.edu/ai-index/2026-ai-index-report

Brynjolfsson, E., et al. (2025) — “Canaries in the Coal Mine: Six Facts About the Recent Employment Effects of Artificial Intelligence.” Stanford Digital Economy Lab. US payroll data through 2025 (ADP). https://digitaleconomy.stanford.edu/news/canaries-interest-rates-and-timinga-more-on-recent-drivers-of-employment-changes-for-young-workers/

Hosseini Maasoum, S. M. & Lichtinger, Y. (2025) — “Seniority-Biased Technological Change.” Study of 62 million workers across 285,000 US firms.

Anthropic — Massenkoff & McCrory (March 2026) — “Labour market impacts of AI: A new measure and early evidence.” https://www.anthropic.com/research/labour-market-impacts

McKinsey (November 2025) — “The State of AI in 2025: Agents, Innovation, and Transformation.” https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

PwC (2025) — “The Fearless Future: 2025 Global AI Jobs Barometer.” https://www.pwc.com/gx/en/services/ai/ai-jobs-barometer.html

Lightcast (2025) — AI job postings data, cited in Stanford AI Index 2026, Chapter 4.


The Canary Papers

The Canary Papers is a six-essay series drawn from executive conversations on AI adoption and strategic change. Named after the early detection systems once used in coal mines, the series focuses on signals rather than headlines: where capabilities are compounding, where organizations are lagging, and where competitive gaps are quietly widening. Each paper examines a distinct pressure point, from displacement timelines to diffusion barriers and trust costs. The aim is not prediction, but disciplined clarity: to help leaders recognize structural shifts early enough to act with intention rather than react under pressure.
