The Intelligence Crisis That Won't Happen

Why Citrini's 2028 Doomsday Scenario Mistakes a Transition for a Collapse

François Bossière, Co-CEO — Polynom | AI Strategy & Agentic Consulting


Citrini Research's "2028 Global Intelligence Crisis" memo has been making the rounds on finance Twitter and LinkedIn boardrooms alike. It is well-written. It is internally coherent. And it is, in my professional assessment, profoundly wrong — not because its authors lack intelligence, but because they lack operational proximity to what AI actually does inside enterprises.

I run an AI consulting firm. We deploy agentic systems for mid-market and large enterprises across Europe. Every week, I sit across the table from CFOs, CTOs, and COOs who are making real decisions about AI adoption. What I observe on the ground bears almost no resemblance to the frictionless, exponential displacement narrative Citrini constructs.

Let me be precise about where the argument breaks.


The Memo Confuses Technical Capability With Organizational Adoption

This is the foundational error, and everything else cascades from it.

Citrini's scenario assumes that because an agentic coding tool can replicate a mid-market SaaS product in weeks, enterprises will do so at scale. Anyone who has spent time inside a Fortune 500 procurement cycle knows this is fantasy.

Enterprise technology decisions are not made by a single enlightened CIO watching a demo. They are made by committees, shaped by compliance requirements, security audits, integration constraints, change management budgets, and internal politics that have nothing to do with what is technically possible. The average enterprise software deployment cycle — from evaluation to production — runs 9 to 18 months. AI does not change this. If anything, the novelty and perceived risk of agentic systems lengthens it.

We see this every day. A client's engineering team builds a prototype in three weeks that replaces a six-figure SaaS contract. Impressive. Then the prototype spends four months in security review. Then legal needs to assess liability for AI-generated outputs. Then the IT architecture team raises concerns about maintenance burden. Then the CFO asks who supports this when the engineer who built it leaves.

The gap between "a developer can build this" and "an enterprise can rely on this" is not a detail. It is the entire story. Citrini treats it as a rounding error.

The "Negative Feedback Loop" Ignores How Companies Actually Allocate Capital

The memo posits an elegant doom loop: AI improves, companies cut workers, savings fund more AI, AI improves further. No natural brake.

This misunderstands corporate budgeting at a fundamental level.

First, AI investment is not fungible with headcount savings. When a business unit reduces headcount by 15%, that budget does not automatically flow into AI spend. It flows into margin improvement, debt service, share buybacks, or — most commonly — it gets absorbed by the CFO and redistributed according to priorities that have nothing to do with the department that generated the savings. The idea that layoff savings mechanically fund the next wave of AI adoption reflects a misunderstanding of how P&L ownership works in any organization above 500 employees.

Second, AI adoption has diminishing marginal returns within any given business process. The first wave of automation captures the obvious inefficiencies. The second wave is harder. The third is often not worth the integration cost. Every operations leader I work with hits this ceiling. The memo assumes linear or exponential returns to AI investment. Reality is logarithmic.

Third, and critically: companies that cut too aggressively lose institutional knowledge, client relationships, and the capacity to adapt. We have already seen this. Several of our clients came to us after aggressive headcount reductions left them unable to manage the AI systems they had deployed. The correction mechanism Citrini claims does not exist is, in fact, already operating.

The Consumer Agent Thesis Is a Category Error

The most speculative section of the memo — agentic consumers routing around interchange, dismantling DoorDash, collapsing real estate commissions — reads like a product roadmap mistaken for a macroeconomic forecast.

Consumer agents that autonomously optimize spending, run in the background 24/7, and transact via stablecoins to avoid card fees? This is not happening in 2027. It is not happening in 2029. Here is why.

Consumer behavior is not a pure optimization problem. Humans do not want the cheapest protein bar. They want the protein bar they saw their favorite athlete endorse, or the one with packaging they like, or the one available at the store they are already walking past. Behavioral economics has spent fifty years documenting that human purchasing decisions are driven by heuristics, emotion, identity, and habit — not by price minimization. The idea that an AI agent will override all of these by running a multi-platform comparison in the background is a technologist's fantasy projected onto human psychology.

More importantly, the regulatory environment for autonomous financial agents acting on behalf of consumers is essentially nonexistent. Who is liable when an agent moves your insurance to a cheaper carrier that denies your claim? Who bears fiduciary responsibility when an agent routes your payment through a stablecoin that depegs? These are not implementation details. They are structural barriers that will take years of legislative and legal work to resolve.

The DoorDash example is particularly telling. The memo argues that coding agents collapsed the barrier to entry for launching a delivery app. This conflates building an app with building a logistics network. The barrier to competing with DoorDash was never the software. It was the driver network, restaurant partnerships, demand density, and operational infrastructure. A delivery app with no drivers and no restaurant contracts is not a competitor. It is a side project.

The Mortgage Crisis Analogy Is Backwards

Citrini draws an explicit parallel to 2008, then carefully distinguishes this scenario by noting that the loans were good at origination. This is presented as making the problem worse — a slow-moving crisis with no clear villain.

But the distinction actually makes the problem smaller.

In 2008, the systemic risk came from leverage, opacity, and correlated exposure. Trillions in derivatives were written against mortgage pools whose underlying quality was fraudulent. The system did not know what it owned. Banks were levered 30:1 against assets they could not value.

None of this applies to the scenario Citrini describes. Modern mortgage underwriting, post-Dodd-Frank, involves stress-tested borrowers, lower loan-to-value ratios, and a regulatory framework specifically designed to prevent the contagion mechanisms of 2008. If a cohort of high-income borrowers experiences income impairment, you get a regional housing correction — painful, but contained. You do not get systemic financial contagion because the transmission mechanisms (synthetic CDOs, unregulated CDS markets, shadow banking leverage) simply do not exist at the same scale.

The memo presents rising delinquencies in San Francisco and Austin as harbingers. These are cities that have experienced housing corrections before, driven by tech sector volatility, and recovered. A correction in overheated tech-heavy metros is not a national mortgage crisis. It is a local repricing.

The "Ghost GDP" Concept Reveals the Analytical Flaw

"Ghost GDP" — output that shows up in national accounts but never circulates through the real economy — is a rhetorically effective phrase and an economically confused one.

GDP measures the value of goods and services produced. If AI systems produce more goods and services at lower cost, that is genuine economic output. The question is not whether the output is real but how it is distributed. That is a policy question, not an existential economic one.

Every previous wave of productivity growth — mechanization, electrification, computing — produced the same initial pattern: output rose, labor share fell, and then institutions adapted. The adaptation took the form of labor regulation, progressive taxation, social insurance, and new industries. It was not automatic and it was not painless, but it happened because the political economy eventually forces redistribution when concentration becomes untenable.

Citrini's scenario assumes institutions are frozen — that the government is too slow, too divided, and too confused to respond. This is the weakest assumption in the entire piece. Governments have historically responded to economic crises with speed that would have seemed impossible beforehand. The CARES Act was drafted and signed in two weeks. TARP was enacted in the middle of a presidential election. The New Deal remade American economic policy in months. The claim that democratic institutions cannot adapt to AI displacement is an assertion, not an analysis.

What Citrini Gets Right — And What to Do About It

The memo is correct that AI will compress the premium on routine cognitive labor. It is correct that some business models built on information asymmetry and consumer friction will erode. It is correct that the transition will be painful for specific cohorts of workers and that policy needs to move faster than it currently is.

Where it goes wrong is in extrapolating from real trends to an imagined cascade. Each link in Citrini's chain — from SaaS disruption to mass layoffs to consumer collapse to mortgage crisis to systemic financial contagion — requires the previous link to operate at maximum severity with no institutional response, no behavioral adaptation, and no friction. The probability of every link firing simultaneously and at full force is vanishingly small.

For C-level executives reading this, the practical implications are clear.

AI is a restructuring force, not a detonation. The companies that will be damaged are the ones that either ignore it or adopt it recklessly. The ones that will thrive are the ones that treat it as an operational transformation — deliberate, measured, integrated into existing processes with clear accountability.

Do not let speculative macro fiction drive your AI strategy. Build from the specific economics of your business, your processes, your workforce, and your competitive position. The future will not be written by a thought experiment. It will be written by the organizations that execute with discipline while others oscillate between panic and paralysis.

The canary is alive. And it is going to stay that way — provided we stop mistaking a noisy mine for a collapsing one.


François Bossière is Co-CEO of Polynom, an AI-native consulting firm specializing in agentic AI deployment for enterprise operations. Polynom advises mid-market and large enterprises across Europe on AI strategy, process automation, and organizational transformation.
