
Our AI Future Is Now Arriving. But Who Does It Serve?

Three futures, three sets of winners and losers — and the question that gets harder to ask honestly the more you already know which side of the line you're on.

By Tim Kapp • Published on March 31, 2026


In 1900, there were 21 million horses in the United States. One for every three people. They pulled freight, broke soil, built canals, and kept the economy moving. The economy was built around them.

By 1960, there were 3 million. Not because horses stopped being able to work. Because the economy stopped needing them. They didn't adapt. Their population collapsed.

We usually tell this story wrong. We say the industrial revolution replaced human muscle — the steam drill beat John Henry, machines took over the factories. But that's not quite right. Before machines arrived, it wasn't mostly humans doing the heavy work. It was animals. A single draft horse could do the work of six to eight men. Humans had already moved up. We were the managers, the engineers, the craftsmen. The animals were the engine.

So when tractors and trucks arrived, they didn't primarily take work from people. They took work from horses. We held the reins. The horses lost their jobs.

Every big technology shift follows this pattern. When humans moved from hunting and gathering to farming, we didn't lose our edge — we used it differently. We planned, organized, and managed what nature had been doing for us. When factories replaced farms, machines took over animal power and physical labor. Humans moved up again — into offices, into management, into designing the systems. When those jobs got automated, we moved into what Peter Drucker, in 1959, finally gave a name: knowledge work.

Drucker meant it as a description of something new. But he was wrong about that part. It wasn't new. It was a recognition of what humans had always been.

The farmer who succeeded wasn't the strongest one. He was the one who understood crop rotation, water management, and when to plant. The medieval craftsman's value wasn't showing up — it was the decade of knowledge he carried in his hands and his head. Even the factory foreman's real contribution wasn't his hours. It was his understanding of the machine, the workflow, the people. We did hard physical work, often brutal work — but our advantage was never physical. It was always the thinking behind the doing.

Animals were the physical engine. We were always the intelligence directing it.

Which means "knowledge worker" was never a new category of human. It was just the first time we stopped pretending otherwise.

And that makes what's happening now much more serious than we like to admit.

Generative AI is not coming for a rung we only recently climbed to, leaving us free to pick the next one. It's coming for the only rung we ever really had.

Not muscles. Not assembly lines. The thinking itself — the reading, writing, analyzing, and coordinating that we called knowledge work and finally acknowledged as our core advantage. In every age, the human brain has been the scarce resource the whole economy was organized around. We just didn't always say so out loud.

Which leaves one question worth sitting with. Are we now the horse?

I'm not saying yes. I'm saying the question is more serious than it sounds. We don't know how this ends, and that uncertainty is what everything below is about.

A Simple Thought Experiment That Changes Everything

In 1971, the philosopher John Rawls proposed a thought experiment he called the veil of ignorance. The idea is this: before you're born, you have to choose the rules of the society you'll live in. The catch is you don't know who you'll be. You might be rich or poor, male or female, a doctor or a janitor, American or Bangladeshi. You have no idea.

What rules would you choose?

Rawls said rational people in this position would protect the worst-case outcome. Not because people are saintly, but because they might end up at the bottom. You insure against the bad outcome when you don't know if it's yours.
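To make that logic concrete, here is a minimal sketch in Python with invented payoffs; the policy names, positions, and numbers are all hypothetical, chosen only so the two decision rules disagree:

```python
# Rawls's maximin rule vs. betting on the odds, with invented numbers.
# Each policy maps the position you might land in to a payoff.
# Behind the veil, you don't know which position will be yours.

policies = {
    "laissez_faire": {"top": 200, "middle": 50, "bottom": 5},
    "insured":       {"top": 70,  "middle": 45, "bottom": 30},
}

# Hypothetical odds of landing in each position.
weights = {"top": 0.1, "middle": 0.6, "bottom": 0.3}

def expected_value(payoffs):
    """Average payoff if you gamble on the odds."""
    return sum(payoffs[pos] * w for pos, w in weights.items())

def worst_case(payoffs):
    """The payoff maximin protects: the bottom position."""
    return min(payoffs.values())

for name, payoffs in policies.items():
    print(f"{name}: expected {expected_value(payoffs):.1f}, "
          f"worst case {worst_case(payoffs)}")

# laissez_faire: expected 51.5, worst case 5
# insured:       expected 43.0, worst case 30
# The gambler picks laissez_faire; the maximin chooser picks insured,
# because the one at the bottom might be them.
```

The point isn't the numbers. It's which column you'd choose to optimize when you don't know your row.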

The Key Question: What would you demand if you didn't know which AI future was coming — and didn't know where you'd land in it? That question cuts through the noise of the AI debate in a way that "is AI good or bad" never will.

I think it's worth trying this experiment with AI, because the future is genuinely unknown. The biggest research institutions in the world look at the same data and reach different conclusions. The IMF says AI will affect roughly 40% of jobs worldwide. Goldman Sachs estimates 300 million jobs are in the exposure zone. The OECD says we haven't seen major job losses yet, but a quarter of all jobs are at high risk. The International Labour Organization says the impact is mostly task-level — AI handles parts of jobs, not whole jobs — at least for now.

These aren't contradictions. They're different mechanisms playing out at different speeds. Which one wins — and when — nobody honestly knows.

Behind the veil, you can't just root for the best-case scenario. You have to plan for the range. And the range includes a version where the horse analogy is not a metaphor.

Three Futures. Three Different Problems.

These aren't just optimistic and pessimistic versions of the same story. They work through different mechanisms and hit different people. Each one has early signs we can already look for. None has fully arrived. The veil hasn't lifted yet. That's the most important thing to understand about this moment.

Future One: The Middlequake

This future doesn't announce itself. The economy keeps growing. Unemployment stays low. Nothing looks like a crisis on the surface.

What happens underneath is that the middle of the job market quietly disappears.

Think about the jobs that mostly involve passing information around — scheduling meetings, summarizing reports, routing requests, writing standard documents, answering predictable questions, coordinating between teams. These aren't the most glamorous jobs, but there are millions of them. AI handles this kind of work well. The International Labour Organization found that clerical tasks are far more exposed to AI than almost any other type of work.

What's left after those jobs shrink? Two groups: a small set of high-paying roles that require real judgment, relationships, and decision-making authority. And a large set of in-person service jobs — nursing assistants, tradespeople, restaurant workers — that AI can't easily do because they require being physically present. The middle collapses. The job market looks like a barbell: heavy on both ends, thin in the middle.

The hardest part isn't the lost income. It's the lost ladder. The entry-level analyst job wasn't just a paycheck. It was how you became a senior analyst. The junior attorney wasn't just billing hours — they were learning to become a partner. When AI does the entry-level work more cheaply, those roles disappear. And so does the path upward.

Here's how a growing economy, low unemployment, and a vanishing middle can all be true at once, and why that makes this future the hardest one to stop.

Unemployment measures whether people have jobs. It doesn't measure what kinds of jobs, what they pay, or whether the career ladder still exists. The people who lose coordination roles don't necessarily stop working — they get pushed into lower-paid service work, piece together gig work, or take jobs below their skill level. They show up as "employed" in the data. The crisis is invisible in the headline number.
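A toy calculation shows the blind spot. In the sketch below (every number is invented for illustration), the headline unemployment rate is identical before and after the middle hollows out, while the average wage of the employed quietly falls:

```python
# Toy economy, invented numbers: the headline unemployment rate
# stays at 4% while the middle of the job market hollows out.

# Each entry: category -> (share of workforce, typical wage in $k/year)
before = {
    "high_judgment":     (0.15, 150),
    "coordination":      (0.40, 70),   # the AI-exposed middle
    "in_person_service": (0.41, 35),
    "unemployed":        (0.04, 0),
}
after = {
    "high_judgment":     (0.17, 160),
    "coordination":      (0.15, 70),   # displaced workers didn't vanish...
    "in_person_service": (0.64, 35),   # ...they moved down the wage scale
    "unemployed":        (0.04, 0),
}

def unemployment_rate(econ):
    return econ["unemployed"][0]

def mean_wage_of_employed(econ):
    employed = [(s, w) for k, (s, w) in econ.items() if k != "unemployed"]
    return sum(s * w for s, w in employed) / sum(s for s, _ in employed)

for label, econ in (("before", before), ("after", after)):
    print(f"{label}: unemployment {unemployment_rate(econ):.0%}, "
          f"mean wage ${mean_wage_of_employed(econ):.1f}k")

# before: unemployment 4%, mean wage $67.6k
# after:  unemployment 4%, mean wage $62.6k
# The headline number never moves; the ladder does.
```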

We've seen this before. U.S. manufacturing employment peaked in 1979 and then drained away over three decades — not in a single crash, but in a slow bleed that never triggered a national alarm. Unemployment fluctuated normally. GDP kept growing. The statistics looked fine. The communities, and the people in them, didn't.

That's the signature of this future. None of the normal alarm systems go off. By the time the damage is legible in the data, a generation of entry paths is already gone.

Signs This Future Is Arriving

Watch for: fewer entry- and mid-level job postings in administrative, legal support, financial analysis, and project coordination — even at companies that are growing. Fewer layers of management. The same revenue produced with smaller teams. College graduates taking longer to find their first real job, while unemployment stays low. Rising rates of underemployment — people with degrees doing work that doesn't require them. That gap between "employed" and "well-positioned" is the quiet attrition signature.

Better off in Future One

  • Senior professionals with established reputations and relationships — their leverage grows when the layer beneath them gets thinner
  • Founders and operators who can set strategy and direct AI systems
  • In-person workers — plumbers, nurses, electricians — whose jobs require being physically present and making real-time judgment calls

Worse off in Future One

  • Entry- and mid-level office workers whose jobs center on coordination, administration, or routine analysis
  • Women, disproportionately — clerical and administrative work is both the most AI-exposed category and a major source of female employment in wealthy countries
  • Recent graduates trying to break into professional careers through traditional entry-level paths
  • Countries like India, the Philippines, and parts of Africa that built their economies around call centers, data processing, and back-office services — not as a sideline, but as their primary path into the global economy

Future Two: The Legitimacy Economy

This future is more hopeful — but it has a catch hidden inside it.

The idea is this: as AI gets better at thinking, the thing that gets scarce isn't thinking. It's accountability. Someone still has to sign the doctor's note. Someone still has to stand behind the legal advice. Someone still has to make the call when the algorithm is wrong and a real person's life is affected.

In this future, human value re-anchors in accountability — the ability to be trusted, to be held responsible, to make decisions that other people and institutions will accept.

This isn't a soft idea. The entire architecture of the modern economy was built around the problem of accountability. The corporation exists to spread risk across owners and define who is answerable when something goes wrong. Insurance exists because risk has to sit somewhere. The legal system exists because someone has to answer when harm is done. We spent centuries building legal and financial infrastructure specifically because accountability is hard to assign and enormously valuable when you can assign it. AI doesn't solve that problem. It intensifies it.

But accountability isn't established by credentials alone. It's built through relationships — through communication, trust, and the slow accumulation of demonstrated judgment. A doctor earns the right to make hard calls because of how they listen, not just what they know. A lawyer navigates a difficult negotiation because they can read the room, not just the statute. An AI safety officer looks a regulator in the eye and says "I take responsibility for this system" — and is believed, because of everything that came before that moment. Social skills aren't soft. They are the mechanism by which accountability becomes real.

There's real research behind this. Labor economists have documented for decades that the fastest-growing jobs aren't the ones that process information; they're the ones that carry responsibility. The market was pricing accountability upward long before AI arrived.

Signs This Future Is Arriving

Watch for: new job titles built around AI oversight — model risk managers, AI auditors, algorithmic accountability officers. Laws requiring a human to sign off on high-stakes AI decisions. Higher pay for professionals who combine expertise with the ability to build trust and absorb responsibility. Demand for people who can fix things when automation fails and a real person needs to be made whole.

Now for the catch. Legitimacy can concentrate in institutions rather than people. If the hospital or the law firm or the bank provides the accountable human, that doesn't help most workers — it helps a narrow group with the right credentials and the right employer. The legitimacy economy might create a new elite without opening doors for anyone else.

Better off in Future Two

  • Licensed professionals — doctors, lawyers, engineers, financial advisors — whose credentials serve as proof of accountability
  • People with strong relationship and communication skills, which the job market has long undervalued relative to technical output
  • Workers in safety-critical trades where being physically present and personally responsible is the whole point

Worse off in Future Two

  • Mid-level knowledge workers who produced information but never built the kind of trusted relationships that make someone a recognized decision-maker
  • Workers in countries without strong institutions — legitimacy conferred by institutional backing means the unaffiliated don't qualify regardless of skill
  • Women in professional roles — trust and authority are partly social constructs, and those constructs reflect existing power structures; accountability can be a credential that gatekeeps rather than rewards ability

Future Three: The Split Economy

The third future is the most unsettling — not because it's the most likely, but because it's the only one where growth and human prosperity can fully separate.

In this future, the economy splits in two — but not evenly. One layer involves humans: high-stakes decisions, trusted relationships, regulated industries. The other layer is machines transacting with machines, generating and reinvesting value largely on their own. The human layer doesn't disappear. It just stops being the center of gravity. Growth happens increasingly in the machine layer, and the human layer becomes a shrinking island within a much larger economy that no longer needs it to function.

This isn't science fiction. It's already happening in finance. Algorithmic trading — software that observes markets, makes decisions, and places orders with no human involved in each trade — accounts for the majority of volume on several major exchanges. The software buys and sells. The software generates and reinvests returns. Humans set the rules and audit the outcomes. But they're not in the middle of each transaction. And it's spreading. Gartner predicts that by 2028, 90% of all business-to-business purchasing will be handled by AI agents — over $15 trillion in transactions moving through autonomous systems with no human in the middle of each deal.

And it isn't just trading. In the pharmaceutical industry, fully automated discovery pipelines now identify disease targets, model protein structures, generate drug hypotheses, simulate their behavior, and deliver a shortlist of candidates for human researchers to test. The AI knows the need, understands the biology, and runs the experiment. The human receives the results. Companies like Insilico Medicine have gone from target identification to a nominated preclinical candidate in around 18 months using end-to-end AI pipelines, a stage that traditionally took years. The machine layer isn't coming. In pockets, it's already operating.

As AI agents get better, this expands. AI systems that buy software subscriptions for other AI systems. Automated procurement. Machine-generated content sold to machine-operated platforms. Value created and exchanged in loops where humans watch from the outside.

The standard reassurance people offer about this future is that it will self-correct. If workers lose income, they stop buying things, companies suffer, and the economy rebalances around human participation. It's a comforting argument. It also assumes humans remain the primary source of economic activity. In a machine-to-machine economy, that assumption breaks down. Growth and human wages can drift apart. The economy gets bigger while most people's piece of it stays the same or shrinks.

This is the full version of the horse analogy. Not that humans can't do anything. That the economy stops needing us as a condition of its own growth.

The dark path doesn't require a robot takeover. It just requires that growth stops needing us to happen.

Signs This Future Is Arriving

Watch for: a rising share of business transactions completing with no human involved at any stage. Automated product pipelines delivering innovation with humans only at the receiving end, as we're already seeing in pharmaceutical discovery. Productivity rising while typical wages stay flat — even in industries that aren't laying people off. Political energy concentrating around who owns AI infrastructure — the chips, compute, energy, and platforms — rather than who does the work. GDP growing while people feel poorer.

Better off in Future Three

  • People who own equity in AI infrastructure — the chips, compute, energy, and platforms the machine economy runs on
  • The small group of engineers who design and govern the systems
  • Countries that control the infrastructure: the US, core EU countries, Japan, South Korea

Worse off in Future Three

  • Everyone whose income depends on labor rather than ownership — the majority of people on earth
  • Blue-collar workers facing a second wave: after AI handles cognitive tasks, physical robots come for physical labor
  • Women globally — near-term job exposure from Future One compounds with long-term exclusion from ownership in Future Three
  • Developing countries — the path poorer nations used to grow wealthier was cheap labor. Manufacturing for export. Call centers. Back-office work. All three futures erode that path. Future Three ends it.

Who Wins and Loses Across All Three

Step back from each individual future and look at who comes out ahead — or behind — no matter which one arrives.

Better off in all three futures

  • Top AI engineers and system architects
  • People who own AI infrastructure — chips, compute, energy, and platforms
  • Countries that control that infrastructure

The common thread isn't education or talent. It's ownership of and proximity to the infrastructure AI runs on. These groups gain leverage in Future One, gain value from accountability governance in Future Two, and gain compounding asset returns in Future Three. The specific reason differs. The direction doesn't.

Mixed — depends on which future arrives

  • Senior white-collar professionals — strong in Futures One and Two; at risk in Future Three if they don't own assets
  • Licensed professionals in regulated industries — best positioned in Future Two; vulnerable if institutions capture the legitimacy value rather than individuals
  • Blue-collar workers in safety-critical trades — relatively protected in Futures One and Two; at long-term risk in Three
  • Wealthy nations broadly — productivity gains in Futures One and Two; growing inequality risk in Three
  • China — large investments in AI infrastructure that pay off in Future Three, but also one of the world's biggest pools of workers exposed to the job losses in Future One. The gap between winners and losers inside China could be larger than in any other country.

Worse off in all three futures

  • Women in clerical and administrative roles — the most exposed jobs in Future One, locked out of the accountability economy in Future Two, excluded from asset ownership in Future Three. No version is good for this group.
  • Countries built around outsourced services — India, the Philippines, Bangladesh, much of sub-Saharan Africa. The development model their economies depend on gets eroded by all three futures, just at different speeds.
  • Entry- and mid-level office workers — the career ladder loses its lower rungs in Future One, the jump to legitimacy roles is hard in Future Two, and participation collapses in Future Three.
  • Anyone without meaningful asset ownership — labor income as the way to participate in the economy weakens across all three futures.

The ledger is not balanced. The group that wins in all three is small and already positioned near the top. The group that loses in all three is large, spread across the world, and cut off from the main source of upside — ownership — in every scenario. That gap is the most important fact in this analysis.

What You'd Demand Before You Knew

Go back behind the veil.

You don't know if you're a senior partner or an entry-level coordinator. A licensed doctor or a call center worker in Manila. A man or a woman. Born in the US or in Nigeria. You don't know which future is coming.

In that position, you wouldn't bet on Future Two fixing everything. Its benefits flow to people who already have credentials and institutional connections — not to people who need a path in.

You wouldn't ignore Future Three's warning signs. Once the infrastructure gets concentrated in a few hands, money begets more money and the gap locks in.

You'd look for moves that make sense across all three futures — not bets on the best outcome, but hedges against the worst. The ledger points to some hard questions we haven't honestly answered yet.

Should ownership of AI infrastructure be as concentrated as it currently is? The models powering the machine economy were trained on the accumulated knowledge, writing, and creative work of all of us. The raw material was humanity's. The value was captured privately. Norway faced a similar question with oil — and decided the resource belonged to everyone. The result is a sovereign wealth fund worth over $1.7 trillion whose returns flow back into public spending on behalf of every citizen. Should we be asking the same question about AI? Who owns the machine layer determines who benefits from it. That's not a technical question. It's a political one we're currently answering by default.

Should entry paths into professional careers be deliberately protected, even subsidized, at a moment when the market is quietly eliminating them? The junior roles we're losing weren't just jobs — they were how expertise got built and distributed. Once the pipeline empties, rebuilding it is harder than preserving it.

What is the development path for lower-income economies in a world where labor-cost advantage no longer leads anywhere? Every version of this future erodes the model that lifted hundreds of millions of people out of poverty over the last fifty years. We don't yet have an answer to what replaces it.

Should AI development be steered — through incentives, regulation, or public investment — toward creating new categories of human work rather than purely toward automation? The research says it can be. The current incentives don't point that way.

The veil doesn't tell you what to build. It tells you how to decide. Before you ship a product, design a system, fund a company, or cast a vote — ask whether you'd accept the outcome if you didn't know where you'd land in it. That question won't give you the answer. But it will tell you whether you're asking the right one.

The Window Is Open Now

We're not fully behind the veil anymore. Some of this is already happening.

The early signals are already visible — in hiring data, in pharmaceutical pipelines, in the governance roles appearing inside regulated industries. But the data doesn't yet tell us which future is dominant. It may not be a choice between three paths. It could be all three, arriving in sequence. The window where choices matter is open. It won't stay open forever.

Rawls doesn't ask which future is most likely. He asks what you would have demanded before you knew. That question gets harder to answer honestly the more you already know which side of the line you're on.

Those who owned the horses could have asked it — but nobody mourned the horse's lost livelihood. If you've read Black Beauty, you know horses were better off not hauling coal. We don't have that same exit. If the economy stops needing us, there is no better life waiting on the other side.

We're still in the moment before the math is done. What we build, regulate, own, and share in this window — and whether we choose to protect each other rather than simply protect our own position — will determine whether the veil lifts on a future that a rational person would have accepted from behind it — or one they would have refused, if only someone had asked in time.


The research underlying this analysis draws on labor economics literature from the International Labour Organization, OECD, IMF, and academic work by Daron Acemoglu, Pascual Restrepo, and David Deming, among others. None of the institutions cited endorse the framework presented here. The three futures described are analytical constructs, not forecasts.

#ArtificialIntelligence #FutureOfWork #LaborEconomics #AIPolicy #ResponsibleAI #EconomicInequality #AIEthics #Technology