The Unasked Question: Part 1
We Could, So We Did
There's a line from Jurassic Park that has been living in my head lately. Not the one about life finding a way — the other one. The one where Jeff Goldblum's character leans across a table and says, "Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."
It was written as a critique of genetic hubris. It turns out it's a pretty accurate description of the most important technological transition in human history.
We are living through something that doesn't have a clean name yet. The phrase "AI revolution" gets used, but revolution implies a before and after — a moment of rupture you can point to. What's actually happening feels less like a revolution and more like a tide. You don't notice it coming in until the water is already around your ankles, and by the time it's at your chest the people who could have built the seawall are busy arguing about whether the ocean is real.
Here's what I've been watching: A handful of companies — Microsoft, Google, Amazon, Meta, Anthropic, OpenAI — are all pouring capital into AI infrastructure at a scale that strains comprehension. Not sequentially, not collaboratively, but in a kind of frantic parallel that looks, from the outside, like a closed feedback loop. The investment creates capability. The capability justifies more investment. The investment creates more capability. Nobody is steering. Everyone is accelerating.
This is not a conspiracy. That's the important part to understand. There is no room where people in expensive suits decided to restructure human civilization for their benefit. The dynamic doesn't require intent. It requires incentives — and the incentives are perfectly, catastrophically aligned toward speed.
Each company, acting in its own rational interest, is racing because stopping means falling behind. Falling behind means losing. And in a winner-take-most technology landscape, losing isn't a setback. It's extinction. So everyone runs, and nobody asks where they're running to, because asking would require slowing down.
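If you want to see how little malice this dynamic requires, you can write it down. Here is a minimal toy model, entirely my own illustration rather than anything these companies have published, with made-up payoff numbers chosen only for their ordering: two labs each choose to race or to pause.

```python
# Toy model of the race dynamic: a two-player game where each lab
# chooses to "race" or "pause". The payoff numbers are illustrative
# assumptions, not measurements; only their ordering matters.

# payoffs[(my_move, their_move)] = my payoff.
# Racing beats pausing no matter what the other lab does, yet
# (race, race) is collectively worse than (pause, pause):
# the classic prisoner's dilemma structure.
payoffs = {
    ("pause", "pause"): 3,   # coordinated restraint: shared, stable gains
    ("pause", "race"):  0,   # unilateral restraint: you fall behind
    ("race",  "pause"): 5,   # unilateral racing: you take the market
    ("race",  "race"):  1,   # everyone races: speed without steering
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff, given theirs."""
    return max(("pause", "race"),
               key=lambda my_move: payoffs[(my_move, their_move)])

for their_move in ("pause", "race"):
    print(f"If the other lab plays {their_move!r}, "
          f"my best response is {best_response(their_move)!r}")

# Racing is the best response in both cases, so both labs race,
# even though both would prefer the (pause, pause) outcome.
```

Nothing in those four numbers assumes bad faith. Every actor in the model is doing the locally rational thing, and the system still lands somewhere nobody would have chosen.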
Layer on top of this the humanoid robot moment we appear to be entering.
For decades, the comfortable assumption was that automation would take the physical jobs — the repetitive, the dangerous, the low-skill — while human cognitive work remained a protected category. Knowledge workers told themselves a story: machines might replace the factory floor, but they'd never replace the analyst, the writer, the doctor, the lawyer. The work of thinking was inherently human.
That story is no longer holding up.
What's emerging now isn't a machine that does one thing very well. It's a convergence — AI systems that can perform cognitive work combined with physical platforms that can perform manual work, arriving in the same decade, at the same time that both are becoming economically viable at scale. The division of labor that industrial civilization was built on — humans think, machines do — is collapsing from both ends simultaneously.
This isn't speculation. The companies building humanoid robots are not hobbyists. They are well-funded, serious operations moving toward mass production timelines measured in years, not decades. And the AI systems being deployed alongside them are not parlor tricks. They are being integrated into legal work, medical diagnostics, financial analysis, and software development right now, today, at scale.
The question nobody in these rooms appears to be asking seriously is: then what?
There's a particular kind of economic reasoning that tends to dominate these conversations, and it goes something like this: yes, automation displaces workers, but it always has, and it always creates new categories of work we couldn't have imagined before. The cotton gin displaced the workers who cleaned seed from fiber by hand. The car displaced the horse-and-buggy industry. The internet displaced travel agents and video stores. And yet here we are, with lower unemployment than almost any point in history. The system absorbs the disruption. New work emerges. People adapt.
This argument isn't wrong about the past. It is potentially catastrophically wrong about the present, for one reason: speed.
Every previous automation wave moved at a pace that allowed something crucial to happen — generational adaptation. The children of displaced workers grew up in a world that had already partially adjusted. They trained for different jobs. The social infrastructure — education systems, apprenticeship models, labor markets — had time, imperfect and unequal time, to bend toward the new shape of work.
What's happening now is not moving at generational pace. It's moving faster than a single career. A person who trained for a specific profession a decade ago may find that profession functionally automated within the span of their working life — not at retirement, but at forty. The system has no mechanism for absorbing disruption at this velocity. We have never needed one before.
So here is the question I keep returning to, the one that started this whole line of thinking:
We are watching the most significant restructuring of human economic life in recorded history unfold at a pace that outstrips every governance structure, every regulatory framework, and every cultural adaptation mechanism we have ever built. The people with the most power over the trajectory have the least structural incentive to slow down. The people with the most to lose have the least power over the trajectory.
And the question that should have been asked first — should we? — was never really on the table. Not because anyone decided not to ask it, but because the way the incentives are structured, asking it is indistinguishable from losing.
That is the unasked question. Not whether we can. Not even whether we will. But whether, in the accumulated weight of a million individually rational decisions made by individually reasonable people inside a system with no wisdom function, we are building something that any of us would have chosen if we'd been asked first.
We weren't asked. We're not being asked now.
But the water is rising, and that seems worth saying out loud.
This is Part 1 of a five-part series. Part 2 — "The Replicator Problem" — examines what happens to the economic structures that human civilization is built on when the foundational assumption that labor has value stops being true.