In 1994, The Wall Street Journal published an unusual document: a public statement titled "Mainstream Science on Intelligence," signed by 52 prominent researchers. Its purpose was straightforward—to clarify, once and for all, what science "knew" about intelligence.
This was meant to settle the matter. Instead, it revealed something more interesting.
Even before artificial intelligence entered the mainstream imagination, we could not quite agree on what intelligence actually was. The controversy wasn't really about IQ tests or genetics or measurement error—it pointed to something deeper: the definition of intelligence has never been fixed. It evolves, and it moves when it needs to, often to protect our sense of what makes us special.
A clear pattern emerges. When we feel threatened by what counts as intelligence, we abandon the old definition and adopt a new one that puts the distinction safely back in our favor.
Our habit of redefining intelligence did not begin with AI. But it explains almost everything about how AI—and eventually artificial general intelligence—will unfold.
Intelligence Has Always Been a Moving Target
For much of the twentieth century, intelligence was refreshingly simple. It meant logical reasoning, abstract problem-solving, verbal and mathematical aptitude. You could measure it. You could rank it. You could put a number on it and feel good—or bad—about where you landed.
Then things got awkward.
We noticed that some very intelligent people were surprisingly bad at dealing with other people, so we added emotional intelligence. Others could compose music, navigate social systems, or visualize complex spaces in their heads, so we added those too; Howard Gardner called them multiple intelligences. Creativity followed. Originality. Insight.
Eventually we arrived at theory of mind—the ability to understand that other people have beliefs, intentions, and perspectives different from our own, and that those beliefs can be wrong. This felt promising. Surely this was the thing machines could never do.
Each rewrite followed the same pattern. When one definition of intelligence stopped working, the consensus shifted—we threw it out and declared the new one more "complete."
Intelligence, it turns out, isn’t a destination. It’s a boundary marker. And that boundary has a habit of moving in ways that feel…convenient.
AI Repeats the Same Pattern—Faster
We have been remarkably consistent about this. Machines, we were told, could never do certain things—not just difficult things, but human things.
They could calculate, but they could not create. That line held until the late 2010s, when generative models began producing original art, writing, and design—work compelling enough to be exhibited, published, and sold. When Christie's auctioned the AI-generated Portrait of Edmond de Belamy for $432,500 in 2018, the conclusion wasn't that machines were creative. It was that creativity had been misunderstood.
Music followed the same arc. For a long time, music—especially genres built on authenticity and emotion—was treated as safely human. Then AI-generated songs began circulating widely, attracting millions of listeners. In November 2025, "Walk My Walk" by Breaking Rust—a fully AI-generated country artist—hit #1 on Billboard's Country Digital Song Sales chart, accumulating over 3 million Spotify streams in less than a month. The audience responded first; the debate about whether it "counted" came later.
Language was supposed to be different. For decades, the Turing Test was treated as the line machines could not cross. If a machine could convincingly hold a human conversation, we agreed, that would mean something.
Then, in the early 2020s, machines started holding up their end of the conversation. They were fluent. Contextual. Occasionally charming. And, in blind settings, often more convincing than the humans they were compared against.
There was no parade. The Turing Test didn't fail—it was quietly retired from mainstream discourse. Conversation, it seemed, was never the point.
Understanding other minds felt safer. Theory of mind—the ability to reason about what others believe—takes children years to develop and philosophers centuries to argue over. Surely machines couldn’t do that. And yet by 2023, large language models were reliably solving classic false-belief tasks used in developmental psychology: the kind where you must predict that someone will look for an object where she last saw it, not where it was moved while she was out of the room. They weren’t conscious or self-aware, but they could track who knew what, who believed what, and what would happen next. Naturally, this didn’t count either. It was “just pattern matching.”
Empathy felt safest of all. Empathy wasn’t just cognitive; it was emotional. Machines could simulate concern, perhaps, but they could never actually be empathetic.
Then researchers began testing this directly. A 2023 study in JAMA Internal Medicine compared physicians’ answers to patient questions posted on a public medical forum with responses generated by AI; blinded evaluators rated the chatbot’s answers as both higher in quality and more empathetic than the doctors’. A 2025 study in Communications Psychology went further: third-party evaluators perceived AI-generated responses as more compassionate than those written by expert humans.
This, too, didn’t count. It wasn’t “real” empathy. Just performance.
Each of these was supposed to be the final red line. And each one fell faster than the last.
What’s different now isn’t just that machines keep crossing boundaries we once thought were safely human. It’s that they’re doing so across creativity, music, language, reasoning, and empathy at the same time—and without a single moment dramatic enough to force collective recognition.
There is no obvious breakpoint. No ceremony. No clear threshold where consensus crystallizes that something fundamental has changed.
And so the pattern repeats.
The goalposts shift.
Why the Bar Keeps Moving (and Why AGI Won’t Be a Moment)
This isn’t a technical failure. It’s a human one.
Intelligence isn't just something we measure. It's something we identify with. When machines begin to perform well at the things we once treated as evidence of intelligence, our instinct isn't recognition. It's to narrow the criteria, qualify the result, or move the bar just far enough to keep the distinction intact.
That instinct long predates artificial intelligence. We’ve always been uneasy about how we measure up against one another—whether along lines of class, race, nation, gender, or culture, intelligence has often been invoked to draw boundaries and preserve status. AI didn’t create that insecurity; it simply compressed the timeline and gave it an easy target.
Much of the public conversation about artificial general intelligence assumes that it will arrive as a discrete event: a moment when a system crosses an obvious threshold and forces recognition. A switch flips, a demo circulates, and debate ends. That framing is appealing, but it misunderstands both intelligence and how societies recognize change.
AGI is unlikely to appear as a single breakthrough because there is no single capability that will ever satisfy a definition of intelligence that keeps evolving in response to pressure. Instead, what we call “general” intelligence is already being assembled piecemeal. Reasoning improves here. Language becomes more fluid there. Planning, coordination, and synthesis advance in parallel, often in different systems, on different timelines, under different labels.
Each advance feels impressive but incomplete. Each success invites a qualification: it performs well, but it doesn’t really understand; it reasons effectively, but it lacks intent; it explains convincingly, but it isn’t conscious. None of these objections is obviously wrong. The problem is that there's always another one waiting—no matter how capable the systems become.
The result is not denial, but perpetual deferral. Recognition is always postponed to a future version that is just slightly more human than the present one. AGI doesn’t fail to arrive; it fails to trigger consensus.
What we experience instead is normalization. Capabilities that would have seemed extraordinary a decade ago quietly become infrastructure, absorbed into workflows, institutions, and expectations without ever being labeled as a turning point. By the time the system as a whole looks “general” in hindsight, there is no longer a moment left to point to.
Why This Matters Now
This isn’t an argument for panic. It’s an argument against surprise.
Artificial intelligence doesn't need to reach artificial general intelligence to take your job, hollow out professions, or reshape institutions before anyone agrees on what just happened. It doesn't need consciousness to reorganize work or self-awareness to redraw the boundaries of expertise. Those shifts are already underway across fields that once assumed a high degree of insulation, and they are happening without a single moment dramatic enough to force collective attention.
If we continue to treat AGI as a distant cliff rather than a gradual slope, we postpone the practical work that preparedness actually requires—redesigning roles, rethinking education, updating institutions, defining policy, and helping people adapt before change hardens into crisis.
Take just one industry as an example. In 2024, several major law firms began restructuring junior associate roles—not in response to some breakthrough, but because AI was already handling the legal research that once consumed a first-year lawyer's week. They redesigned the role around client communication, judgment in ambiguous situations, and strategic framing—work AI struggles with. California's State Bar followed in 2025, mandating AI competency training for continued licensure. Neither change felt dramatic at the time. Both recognized a slope that others are still treating as a distant cliff. The firms and jurisdictions that waited are now scrambling to explain why their standards haven't kept pace with tools their clients adopted two years ago.
Change management is the opposite of panic. It requires acknowledging what is already happening early enough to respond deliberately rather than reactively.
When historians look back, they may struggle to pinpoint when AGI arrived—not because it lacked significance, but because it never announced itself. What they will see instead is a long sequence of incremental shifts that were each easy to explain away, until the accumulated reality became impossible to ignore.
And the story that emerges may ultimately say less about machines than about us: how we respond when the old definitions stop protecting us—either by acknowledging our insecurity and beginning the serious work of preparation, or by continuing to kick the can down the road in the comfort of our ego.
References and Further Reading
- Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460.
- IBM (1997). Deep Blue defeats Garry Kasparov.
- Silver, D., Huang, A., Maddison, C. J., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529, 484–489.
- Christie’s (2018). Portrait of Edmond de Belamy auction results.
- Kosinski, M. (2023). Theory of mind may have spontaneously emerged in large language models.
- Ayers, J. W., Poliak, A., Dredze, M., et al. (2023). Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Internal Medicine, 183(5), 589–596.
- Ovsyannikova, D., Oldemburgo de Mello, V., & Inzlicht, M. (2025). Third-party evaluators perceive AI as more compassionate than expert humans. Communications Psychology.
- Picard, R. W. (1997). Affective Computing. MIT Press.
- Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences. Basic Books.



