AI policy must fail gracefully
The conditions under which AI policy will operate over the next decades are unlikely to match those in which it was introduced.
In systems engineering, the concept of graceful degradation describes how a well-designed system behaves when components fail — the way an airplane can still land safely after an engine dies. It’s also a useful frame for institutional design. How do we engineer governance that functions well even when some of its most critical organs fail?
In AI policy, one of the most fundamental assumptions is that the executive branch will be the institutional home for technical capacity, implementation, and oversight. Some of that is simply extrapolation from recent AI governance: The Biden administration’s AI policy efforts relied heavily on voluntary commitments, agency reporting requirements, procurement guidance, and technical capacity sited in institutions like NIST. And although the Trump administration’s approach has a different, deregulatory valence, it, too, has relied extensively on executive action.
This assumption also owes something to a deeper cynicism about Congress’s ability to govern new technologies: it has been 30 years since Congress last passed comprehensive technology legislation, and partisan polarization has been deepening for decades. The 118th Congress passed only 27 bills in its first year, the fewest since the Great Depression, and AI’s rapid evolution likely reinforces the perception that legislation will simply always be outpaced by technology.
But these conditions are not fixtures of political life. Crisis conditions, in particular, can break Congressional inertia: consider the near-unanimous passage of the CARES Act at the height of the pandemic, or the bipartisan Infrastructure Investment and Jobs Act. A major AI-related event — whether that’s a cybersecurity breach, labor market shock, or even a sign of pending catastrophe — could force Congress into action.
And the current administration’s approach to AI policy and broader governance makes a strong case for diffusing power beyond the executive branch’s discretionary control. The Department of Defense (or War, depending on your partisan persuasion) has retaliated against Anthropic for refusing to drop its autonomous-weapons and mass-surveillance guardrails, including by attempting to designate the company as a supply-chain risk. At the same time, the administration is looking to override state AI laws en masse, including by conditioning federal infrastructure dollars on a state’s AI regulatory posture. Amid all of this, the president and his allies are actively punishing a variety of actors for public accountability efforts or critique, including by filing retaliatory lawsuits against the press and threatening to revoke the licenses of critical broadcasters.
These actions cannot be understood in isolation. Risks from AI and threats to democracy compound each other in ways that the AI policy world has not fully reckoned with. To the extent that “concentration of power” has featured in AI policy discourse, the focus has usually been on model capabilities or corporate governance — not on statutory design or the distribution of public governance. And these risks are not unique to the Trump administration.
AI laws that depend on executive cooperation — as opposed to iterative policymaking and distributed, independent oversight — will always make for more brittle governance. The Biden administration’s AI executive order, for example, was rescinded in the first week of the Trump administration. Legal infrastructure that depends on an ideologically aligned or even compliant governing environment is a house built on sand.
The conditions under which AI policy will operate over the next decades are unlikely to match those in which it was introduced. And if our increasingly polarized political landscape is any indication, the discontinuity between administrations may grow considerably. Good policy must therefore be somewhat user-agnostic — it must be designed in the knowledge that one’s allies will not be in power forever.

Two examples illustrate what graceful degradation looks like in practice, and how it can buffer against executive overreach.
Transparency: statutory whistleblower protections with a federal-court kick-out
Whistleblower protections currently operate across a patchwork of state and federal statutes. They work reasonably well within their scope, but their scope is limited in critical ways.
First, most existing channels are triggered by reporting violations of law, but AI is largely unregulated at the federal level and only sparsely regulated at the state level, so many of the harms it causes don’t yet fit cleanly into existing legal categories. This means that an engineer who watches a frontier model demonstrate dangerous capabilities during pre-deployment testing may have no clearly protected channel for raising an alarm, even if the safety implications are significant. Second, to the extent that existing channels do route through federal enforcement, an unfriendly administration can simply stall investigations or decline to pursue retaliation cases.
The AI Whistleblower Protection Act is a strong vehicle for addressing both issues. It explicitly protects disclosures about AI security vulnerabilities and AI-related violations of federal law, as well as failures to respond to substantial and specific AI-related dangers to public safety, public health, or national security. Since the bill specifically covers conduct the worker “reasonably believes” constitutes such a vulnerability or violation, it also protects disclosures made before a clear legal violation or concrete harm has materialized. And the bill’s federal-court kick-out provision means that if the Department of Labor fails to issue a final decision within 180 days, the whistleblower can bring their action directly in federal district court. That means an unfriendly DoL can stall enforcement for a while, but it cannot indefinitely function as a chokepoint.
This design isn’t completely abuse-proof — a hostile DoL could issue a rushed or poorly justified decision before the 180-day window closes — but it is far more resilient than a design that relies on the executive as the sole enforcement channel. And critically, it compels the administration to take an official position on the record. In this sense, it exemplifies graceful degradation: when the executive fails to do its job, that failure is exposed to public scrutiny.
Furthermore, the bill is bipartisan — supported by the kind of cross-ideological coalition that will be critical to enacting enduring AI laws.
Distributed capacity: independent technical expertise for Congress and state attorneys general
As of today, Congress often relies on industry lobbyists, the executive branch, and a small universe of nominally independent experts (with varying degrees of actual independence) to support it in developing AI expertise and policy. This leaves Congress dependent on the very parties lobbying it and poorly positioned to evaluate outside claims rigorously. Congress should be able to test external ideas against independent technical expertise of its own.
It once had an institution designed for this express purpose: the Office of Technology Assessment. Created in 1972 and defunded in 1995, OTA was a legislative branch support agency that provided Congress with nonpartisan and expert analysis of various scientific and technical issues.
Restoring OTA, or reimagining a new technical advisory service for the modern age, would empower members and committees to legislate on complex technical matters, including model capabilities and performance on various benchmarks, emerging safety risks, procurement standards, and compute thresholds.
This is also one of the more politically plausible pathways for building AI capacity outside the executive branch, because it strengthens Congress as a whole rather than any one faction or party. And it has a bipartisan, cross-ideological record of interest: R Street has argued for reviving OTA as a way to strengthen Congress’s in-house technical expertise, and just three years ago, Senators Ben Ray Luján and Thom Tillis introduced bipartisan legislation to revive and revamp OTA, citing AI and quantum computing as technologies on which Congress could benefit from expert guidance.
Another, more ambitious iteration of this proposal would center on funding technical assistance for state enforcers. State attorneys general already serve on the front lines of AI enforcement, but most lack the technical staff to evaluate the products being deployed within their borders. Congress could fill that gap by creating a federally funded — but statutorily protected — technical-assistance network, anchored at the Federal Trade Commission but available on neutral terms to state AGs, state regulators, and courts. This would give non-federal regulators the technical capacity to evaluate AI products and harms for themselves.
The FTC is a workable home for this function. It has already built in-house technologist capacity through its Office of Technology, founded in 2023, and it has a preexisting relationship with state AGs through the Consumer Sentinel Network and joint enforcement actions.
But as a political matter, this approach would likely be viable only under unusually favorable conditions — a Democratic administration and at least one Democratic chamber in Congress, if not both. Republicans are generally skeptical of empowering regulators to bring more suits and tend to view expanded enforcement as a path to overreach.
On the merits, however, this is the kind of policy that would make for more robustly accountable AI governance across multiple levels of government and across jurisdictions.
These are just two illustrations of what graceful degradation could look like when applied to AI policy; a serious AI agenda would incorporate similar mechanisms broadly. It’s worth noting that codification is a critical feature of both remedies — capacity housed in statute is better positioned to endure changes in administration than capacity that lives only in agency practice.
Relying too much on good-faith administration is a vulnerability, and any remedy for that vulnerability will involve tradeoffs. The remedy I suggest — distributing authority beyond the executive branch — is no exception. Diversification cuts both ways: Distributed authority can slow a future administration trying to deploy genuinely good policy quickly, and not every state or Congressional experiment will be wise or well-designed.
But in these early days of AI diffusion, it is safer to err on the side of decentralized power. Any critical new technology requires some mix of state and federal regulation, and the appropriate limits of each can’t be determined ex ante — the risks of any technology are simply not fully knowable from the start. Decentralized AI regulation and oversight does create constraints, but they are the kinds of constraints essential to a flourishing democracy — ones that ensure AI governance isn’t subject to the whims of any single actor, and that serve as a check on executive overreach at a time when democratic backsliding is far from a theoretical concern.
This is a rare instance in which we have time to apply long-term thinking to relatively proximate risks. We should act to diffuse administrative capacity and power broadly, so no one actor controls how AI risks are defined, which harms are investigated, or whose interests are considered.



