The Unasked Question: Part 4
Path of Least Restraint
In December 2025, the President of the United States signed an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence." The stated purpose was to prevent a patchwork of state regulations from stifling innovation and undermining American competitiveness in the global AI race.
To enforce this framework, the order established an AI Litigation Task Force within the Department of Justice — charged with challenging state AI laws in federal court. It directed the Department of Commerce to condition $42 billion in broadband infrastructure funding on states refraining from passing AI regulations deemed onerous. It instructed the FTC to issue a policy statement defining when state AI disclosure requirements would be preempted by federal law.
This was presented, and widely reported, as deregulation.
It was not deregulation. It was something considerably more consequential.
Deregulation, in the conventional sense, means the removal of rules. Less government involvement. More market freedom. The rhetoric around AI policy in 2025 and 2026 has consistently used this language — removing barriers, eliminating friction, unleashing innovation, getting government out of the way.
But look at what is actually happening beneath that rhetoric.
The federal government became the largest shareholder of Intel in August 2025, acquiring a 9.9% stake in exchange for $8.9 billion in funding. It has taken equity positions across semiconductors, critical minerals, and nuclear energy — committing more than $10 billion to ownership stakes in at least nine firms within six months. It is using the funding leverage of a $42 billion infrastructure program to coerce state governments into abandoning their own regulatory frameworks. It has established a federal litigation apparatus specifically designed to challenge any state-level attempt to govern AI development.
This is not the absence of governance. This is governance of extraordinary scope and ambition — operating through equity stakes, funding conditions, and legal coercion rather than through the visible, formal rulemaking processes that democratic accountability depends on.
What has actually been removed is not regulation. What has been removed is accountability. The shift is from governance that is legible, contestable, and subject to democratic input — to governance that operates through executive discretion, financial leverage, and the systematic elimination of the state-level experimentation that has historically been the laboratory of American democracy.
The government has not gotten out of the way of AI. The government has become a direct financial stakeholder in AI's success, with every incentive to protect that investment and no structural obligation to weigh those incentives against the interests of the people it is supposed to represent.
The international picture is no more encouraging.
The European Union, historically the most aggressive regulator of technology in the world, proposed in late 2025 to delay the implementation of high-risk obligations under its AI Act from 2026 to 2027. The stated reason was competitiveness — the fear that strict European regulation would disadvantage EU companies relative to American and Chinese competitors operating under lighter regulatory regimes.
This is the race to the bottom made visible. No individual jurisdiction can afford to regulate seriously while its primary competitor treats the absence of regulation as a competitive advantage. The incentive structure punishes caution and rewards speed regardless of consequence. The nation that asks the hardest questions loses ground to the nation that doesn't ask them. So nobody asks them.
China is pursuing its own AI development at scale with a different set of constraints and a different set of absent questions. The framing in Washington is explicitly that this is an arms race — that American AI dominance is a national security imperative — which means the already-thin appetite for meaningful governance disappears entirely behind the shield of strategic necessity.
When AI governance gets framed as a question of national security competition, it stops being a question about what kind of society we want to build and becomes a question of who builds the most powerful system fastest. The human stakes — the economic displacement, the concentration of power, the erosion of democratic institutions — become acceptable costs in a competition that cannot be lost.
This is how the most important decisions of our era get made: not through deliberation, not through democratic input, not through the careful weighing of benefits against costs — but through the logic of competition, which has no mechanism for asking whether the thing being competed over is worth winning.
There is a specific mechanism worth understanding here, because it explains why smart, well-intentioned people inside these systems consistently fail to apply the brakes.
It is called competitive dynamics lock-in, and it works like this.
Company A decides that the risks of moving too fast outweigh the benefits and voluntarily slows down. Company B, which has made no such calculation, gains ground. Company A's investors notice. Company A's board responds. The people inside Company A who advocated for slowing down are no longer making that argument, because the argument has been answered by the market. The responsible actors get selected out of the decision-making positions, not through malice but through the normal operation of competitive pressure.
This is why corporate commitments to responsible AI development are structurally fragile regardless of how sincerely they are held. The sincere commitment exists inside a system whose incentives consistently punish acting on it. You cannot sustainably choose to lose a competition on ethical grounds unless everyone is choosing to lose it together — which requires exactly the kind of coordinated governance framework that is currently being systematically dismantled.
The people calling for restraint are not wrong. They are just playing a game whose rules make restraint self-defeating for any individual player.
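If the mechanism sounds abstract, it is simple enough to simulate. Here is a minimal sketch, written in Python purely for illustration: every parameter is invented, "restraint" is collapsed into a single growth penalty, and "selection" into a single threshold. It claims nothing about real firms; it only makes the shape of the dynamic visible.

```python
# Toy model of competitive dynamics lock-in. Purely illustrative:
# every number here is made up. The point is the selection effect,
# not the parameters.

import random

def simulate(n_firms=10, rounds=40, restraint_penalty=0.05,
             coordinated=False, seed=0):
    random.seed(seed)
    capability = [1.0] * n_firms                        # all firms start equal
    restrained = [i % 2 == 0 for i in range(n_firms)]   # half choose restraint

    for _ in range(rounds):
        for i in range(n_firms):
            # Restrained firms grow more slowly -- unless a binding floor
            # applies to everyone, in which case nobody pays a *relative*
            # price for caution.
            penalty = restraint_penalty if restrained[i] and not coordinated else 0.0
            capability[i] *= 1.10 - penalty + random.uniform(-0.01, 0.01)

        # Selection step: a restrained firm that falls far enough behind
        # the leader abandons restraint. The board responds to the market;
        # the advocates for slowing down lose the argument.
        leader = max(capability)
        for i in range(n_firms):
            if restrained[i] and capability[i] < 0.85 * leader:
                restrained[i] = False

    # Fraction of firms still practicing restraint at the end.
    return sum(restrained) / n_firms

print("restrained share, no coordination:", simulate())
print("restrained share, binding floor:  ", simulate(coordinated=True))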
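```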
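In this toy setup, the uncoordinated run ends with no restrained firms left, and it happens within the first few rounds; under a floor that binds everyone, the restrained half stays restrained indefinitely. The numbers are fake. The asymmetry is the point.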
What would meaningful governance actually look like? It is worth naming, because the conversation tends to get stuck at critique without ever reaching an alternative.
It would require, at minimum, three things that are currently absent.
First, international coordination at a level that has rarely been achieved outside of existential threats — the kind of framework that governs nuclear weapons or chemical warfare, where the consensus that these technologies require special treatment transcends competitive dynamics. There are early conversations in this direction. They are nowhere near the urgency or binding authority that the moment requires.
Second, governance structures that move at technological speed — adaptive, technically literate, capable of responding to capability jumps in months rather than years. Current democratic institutions are not structured for this. They were built for a world where consequential change happened slowly enough for deliberative processes to keep up. That world no longer exists.
Third, a genuine reckoning with the question of who bears the costs of this transition and who captures the benefits, and the political will to structure that answer differently from the one the current incentive map produces on its own. This is, ultimately, a question about power. And the people with the most power over the current trajectory have the least structural incentive to answer it in anyone else's favor.
None of this is impossible. All of it is politically very hard. And the window in which the choices remain genuinely open is, by most serious estimates, not long.
Here is what I keep returning to.
The path of least restraint is not a conspiracy. The people making the decisions that are removing friction from this transition are not, for the most part, acting in bad faith. They believe they are doing the right thing — protecting American competitiveness, preventing regulatory capture, ensuring that the most transformative technology in human history develops in the hands of actors with at least some accountability to democratic values rather than purely authoritarian ones.
These are not unreasonable positions. They contain genuine truth. American AI leadership, weighed against a Chinese AI dominance unchecked by even the cultural commitments to individual rights that at least exist in tension with American corporate interests, is a real consideration, not a cynical talking point.
The problem is not the intentions. The problem is that the path of least restraint leads somewhere that none of the people on it have been asked to choose, and that the asking is getting harder with every month that passes without it.
The artists who built the fictional worlds we explored in the last essay understood something about this. The dystopia doesn't arrive because the people in charge wanted it. It arrives because the people in charge were too busy competing to notice where they were going.
We are on that path. The restraint that might redirect it is being removed, piece by piece, in the name of winning a race whose destination nobody has bothered to describe.
This is Part 4 of a five-part series. Part 5 — "Is the Federation Possible?" — asks the question this series has been building toward: given everything we know about where this is going, is there still a version of this story that ends well?