Citrini's 2028 Crisis: A Builder's Response to AI Doomerism
Citrini Research's "2028 Global Intelligence Crisis" is the most viral finance piece of the year. A 7,200-word fictional postmortem from June 2028: S&P down 38%, unemployment at 10.2%, "Ghost GDP" everywhere, white-collar workers in freefall. Michael Burry amplified it. IBM dropped 13% in a day. Sixteen million views on X.
I read it the week it dropped. And as someone who ships AI agents for a living — not models, not research, but actual products that do work for real users — my reaction was: this is a brilliant thought experiment written by someone who has never tried to get an AI agent to reliably do a Tuesday.
The Feedback Loop Has Friction Everywhere
Citrini's scenario runs on a clean self-reinforcing feedback loop: AI improves → companies need fewer workers → layoffs → less spending → more AI adoption → repeat. No natural brake. The spiral feeds itself.
This only works if you assume AI capabilities translate instantly to production deployment. They don't. I work with these systems every day. Here's what the feedback loop actually looks like:
AI improves → companies try to deploy it → tool errors, hallucination rates, context window limits, integration complexity → months of iteration → partial deployment → some roles restructured → some new roles created → meanwhile, AI improved again and the integration work partially resets.
The gap between "AI can do this in a demo" and "AI reliably does this in production, at scale, across an organization" is enormous. It's the 80% problem — AI gets 80% of a task right, but the last 20% takes longer to fix than doing the whole thing manually. Every builder I know lives in this gap. Citrini's model doesn't account for it because macro analysis can't see it.
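The friction point can be made concrete with a toy simulation. To be clear: this is a sketch with made-up parameters (10% quarterly capability growth, an arbitrary displacement multiplier), not a forecast — it only illustrates how a deployment-friction term changes the shape of the spiral.

```python
# Toy model of the displacement loop, with a deployment-friction term.
# All parameters are illustrative assumptions, not estimates from any source.

def employment_after(quarters: int, friction: float) -> float:
    """Employment index (start = 100) after running the loop.

    Capability compounds 10% per quarter, but only the share of the
    capability gap that clears deployment friction displaces work.
    """
    employment, capability, deployed = 100.0, 1.0, 0.0
    for _ in range(quarters):
        capability *= 1.10  # the frontier keeps improving
        # friction = 0 means capability deploys instantly (Citrini's world);
        # friction near 1 means integration lags capability badly
        newly_deployed = (capability - deployed) * (1.0 - friction)
        deployed += newly_deployed
        employment -= newly_deployed * 5.0  # each deployed unit displaces work
    return employment

print(employment_after(quarters=8, friction=0.0))  # frictionless spiral
print(employment_after(quarters=8, friction=0.9))  # heavy integration friction
```

Same capability curve in both runs; only the friction parameter differs. The frictionless run draws down the index far faster, which is exactly the brake the clean-spiral model leaves out.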
AI Intensifies Work. It Doesn't Eliminate It.
Harvard Business Review published a study in February 2026 — "AI Doesn't Reduce Work — It Intensifies It" — based on eight months of observing 40 workers at a tech company. The findings were striking: employees who used AI tools worked at a faster pace, took on a broader scope of tasks, and extended their hours. Not because they were told to, but because AI made "doing more" feel possible.
Product managers started writing code. Researchers took on engineering tasks. Role boundaries blurred as people handled work that used to sit outside their remit. The researchers found that 62% of entry-level workers reported burnout — not from displacement, but from intensification.
This is the opposite of Citrini's thesis. AI isn't creating a world where workers become unnecessary. It's creating a world where workers do more — sometimes unsustainably more. The displacement spiral assumes humans get replaced. The reality, right now, is that humans get amplified.
What I'm Actually Seeing
I build YARNNN — a platform where AI agents handle recurring knowledge work like weekly digests, meeting prep, research briefs, and competitive analysis. These are exactly the kinds of white-collar tasks Citrini assumes will be automated away.
Here's what's actually happening with users: nobody is firing anyone. People are serving more clients with the same team. They're handling deliverables that weren't economically viable before — the weekly competitor brief that would've taken four hours but never made it to the priority list. The monthly board summary that used to be an all-day Sunday project.
The economic effect isn't displacement. It's expansion. The consultant who uses AI to produce 3x the deliverables doesn't vanish from the economy. They grow their practice. They take on the client they used to turn away. This is the Jevons paradox applied to knowledge work — making intelligence cheaper doesn't reduce demand for it, it increases the surface area of what becomes worth doing.
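The Jevons point is just arithmetic once you assume demand for briefs is elastic. Here's a sketch with a hypothetical constant-elasticity demand curve — the elasticity of 1.5 and the scale constant are invented for illustration, not measured:

```python
# Jevons-style arithmetic for knowledge work. The constant-elasticity
# demand curve and all numbers here are hypothetical illustrations.

def worthwhile_tasks(hours_per_brief: float, elasticity: float = 1.5,
                     scale: float = 1000.0) -> float:
    """Number of briefs worth producing at a given unit cost (in hours)."""
    return scale * hours_per_brief ** (-elasticity)

def total_hours(hours_per_brief: float) -> float:
    """Total hours of brief-writing demanded at that unit cost."""
    return hours_per_brief * worthwhile_tasks(hours_per_brief)

# A brief drops from 4 hours to 1 hour. With demand elasticity above 1,
# 8x as many briefs clear the "worth doing" bar, so total hours rise.
print(round(total_hours(4.0)))  # 500 hours across 125 briefs
print(round(total_hours(1.0)))  # 1000 hours across 1000 briefs
```

With any elasticity above 1, cheaper production means more total work demanded, not less — which is the expansion effect showing up in the numbers rather than the anecdotes.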
What Citrini Accidentally Gets Right
I don't want to dismiss the whole thing. The piece is genuinely useful as a stress test, and one claim deserves serious attention: the transition is going to be disorderly.
Citrini is right that the repricing of white-collar work is happening. They're right that the economic effects will be uneven — some sectors will restructure fast, others will lag. They're right that institutions aren't ready. But the disorder isn't a death spiral. It's the messy middle — the era where humans and AI figure out how to work together, where roles reshape rather than disappear, where the real constraint isn't model capability but the organizational, regulatory, and social infrastructure needed to absorb the change.
I wrote about this more in The Messy Middle of AI Work. The short version: Citrini's scenario assumes we jump from "AI is a tool" straight to "AI replaces everything" overnight. That skip — over the decade-long era where humans and AI actually learn to coexist — is where the analysis falls apart.
The Question That Actually Matters
Citadel's rebuttal argues the data doesn't support Citrini's claims. Noah Smith says it's a bedtime story. Economists run DSGE models that partially refute it. Those are all valid responses.
But the response that matters most, I think, comes from the people actually building and shipping AI products: the future isn't displacement or stasis. It's intensification, expansion, and restructuring — happening unevenly, on unpredictable timelines, with friction at every step that macro models can't capture.
The question isn't whether AI will change work. It already is. The question is whether we build the products, institutions, and trust frameworks that make that change productive rather than destructive. That's the work worth focusing on.
References
- Citrini Research: The 2028 Global Intelligence Crisis — The original 7,200-word viral scenario
- AI Doesn't Reduce Work — It Intensifies It — Harvard Business Review, Feb 2026. Eight-month field study showing AI increases work pace, scope, and hours rather than displacing workers
- Citadel Securities Demolishes Viral AI Doomsday Essay — Citadel's data-driven rebuttal, including software engineer job postings up 11% YoY
- Navigating the Jagged Technological Frontier — HBS & BCG study on AI's uneven impact on knowledge worker productivity
- Noah Smith: The Citrini Post Is Just a Scary Bedtime Story — Macro critique of Citrini's implicit assumptions
- Ghost GDP, a White-Collar Recession, and the Death of Friction — Fortune's breakdown of Citrini's key concepts
- The Messy Middle of AI Work — My longer take on why the three-era framework matters more than doomer vs. optimist debates