
Jack Dorsey Is Reorganizing Humans. The Real Shift Is When Agents Are the Workers.

4 min read · Kevin Kim

At a Glance

Answer: Jack Dorsey's vision replaces middle management with AI coordination. But the real shift isn't reorganizing humans — it's when AI agents become the primary workers.

This article covers:

  • What happens when agents are the primary workers?
  • Why does this require a fundamentally different kind of organization?
  • What does Jack's "world model" look like when agents are the workers?

Jack Dorsey just published a 4,000-word thesis on replacing middle management with AI. He's right about the diagnosis and conservative about the cure. Block's vision reorganizes humans around AI coordination. The deeper question is what happens when the agents aren't coordinating the humans — they are the workers.

Jack traces organizational hierarchy from Roman legions to modern corporations and lands on a clean insight: hierarchy exists to route information, AI can route information better, so remove the layers. Replace middle management with a "world model" that gives everyone context. Flatten to three roles: individual contributors, temporary problem owners, and player-coaches.

It's a compelling vision. It's also a vision where every human still produces output. The AI just coordinates them better. I think that's one step short of where this actually goes.

What happens when agents are the primary workers?

The version Jack describes — AI as coordinator, humans as workers — is the transitional state. The steady state is different: AI agents do the recurring knowledge work, humans supervise the agents.

Not supervise in the legacy sense of checking boxes and approving timesheets. Supervise in the way an editor supervises writers, or a partner supervises associates. You set direction, evaluate output, provide judgment on edge cases, and develop the agents' capabilities over time through feedback.

This isn't speculative. It's already happening in narrow domains. Developers review AI-generated code more than they write it from scratch. Analysts review AI-generated reports. Consultants review AI-drafted client updates. The ratio is shifting from "I produce, AI assists" to "AI produces, I supervise."

When that ratio tips — and it's tipping fast — the organizational question changes completely. It's no longer "how do we coordinate humans more efficiently?" It's "how do we supervise a workforce of agents that produce output autonomously?"

Why does this require a fundamentally different kind of organization?

Jack's model still assumes the org chart maps to humans. Fewer layers, flatter structure, but humans in every seat. When agents become the primary workers, the org chart maps to agents, tasks, and the humans who supervise them.

The unit of management shifts from people to output. You don't manage an agent's career development or resolve their interpersonal conflicts. You manage the quality of what they produce, the context they operate in, and the judgment calls they can't make on their own.

This changes what leadership means. A VP of Marketing in a traditional company manages 40 people who produce campaigns. In an agent-native company, that VP supervises 15 agents that each produce recurring deliverables — competitive briefings, content drafts, performance reports — and she spends her time on the things agents can't do: strategic judgment, relationship nuance, creative direction, and deciding what matters.

The work gets more interesting, not less. But it's a fundamentally different job. And almost no one is designing organizations around it yet.

What does Jack's "world model" look like when agents are the workers?

Here's where Jack's framework actually becomes more powerful than he realizes. His "world model" — a continuously updated model of the company's operations — is exactly what an agent workforce needs. But not for coordinating humans. For giving agents the context they need to produce good work.

An agent without accumulated context produces generic output. An agent with a world model produces output that's actually useful. This is the bridge between Jack's vision and what we're building at YARNNN. The accumulated context — from Slack, Notion, your work platforms — is the world model for your agent workforce. The richer it gets, the better they work. The better they work, the more you can delegate. The more you delegate, the more supervision becomes your primary contribution.

Jack says "companies move fast or slow based on information flow." I'd add: companies move fast or slow based on how much recurring work they can delegate to agents that actually understand the context. Hierarchy isn't just an information routing problem. It's a production bottleneck. Remove the bottleneck by letting agents produce, and supervision — real, high-judgment supervision — becomes the highest-leverage activity in the company.

The Romans needed a decanus for every eight soldiers. The question isn't whether AI replaces the decanus. It's what happens when the soldiers are AI too, and the human's job is to decide where to march.

Kevin Kim is the founder of YARNNN, a platform for developmental AI agents that accumulate context and improve with tenure.
