Smart Trading Field
Investing

AI Pre-Approval Proposal Could Hand Washington a Kill Switch Over Speech and Innovation

May 6, 2026

Juan Londoño and Jennifer Huddleston


Reports indicate that the White House is considering an executive order establishing a new working group to regulate artificial intelligence (AI) that would “examine potential oversight procedures.” This group would be tasked with devising a system for the government to “approve” the most advanced models before they could launch. The plausibility of the risks associated with the most advanced AI models is unclear, but government control or burdensome regulation of the technology would bring significant risks to innovation and speech. Such an approach would open the door to a level of government control that could lead to regulatory capture, restrictions on expression, and the weaponization of government power to punish politically disfavored companies.

Requiring pre-launch approval was criticized as heavy-handed and anticompetitive when included in the Biden administration’s executive order on AI. If the Trump administration does carry through with such a requirement, it will raise similar concerns and represent a dramatic departure from the light-touch approach the administration has favored on this emerging technology.

According to additional reporting, the White House had been working on new safety-focused measures before the release of Anthropic’s and OpenAI’s recent models, but those efforts appear to have been fast-tracked after the models raised additional cybersecurity concerns. The latest descriptions even liken the proposal to an “FDA for AI” approval process. This would abandon the approach that has allowed American technology to flourish and replace it with a framework that saddles innovation with a stagnant bureaucracy, among other problems.

Concerns about cybersecurity are valid, but government pre-approval comes with significant tradeoffs. Alternative policies are likely better able to balance legitimate cybersecurity risks while preventing the chilling of speech and innovation that mandatory pre-approval would entail. 

A prescriptive, top-down approach in which the White House gatekeeps the market would subject a developing industry to unprecedented control driven by the executive branch’s whims. This would not only cause tremendous damage to technological and economic innovation but, for an expressive product such as AI, likely trample on Americans’ free speech rights. Such power could easily be abused not only to favor certain companies but even to engage in jawboning or censorship by controlling what information a model is allowed to produce. 

Recent events, such as the Anthropic-Pentagon feud, have shown that disputes between the government and innovators over what models should do are not merely hypothetical. While that case was limited to the defense applications of an AI model, it was a perfect example of how the government can invoke regulation to retaliate against a company for design choices it disagrees with, particularly if companies must seek government approval before launch. If the White House is given the power to broadly manipulate the AI market, it is not far-fetched to expect it to wield that power for political purposes.

If an administration considers a model “too woke,” “biased,” or a vehicle for misinformation or disinformation, it would now have the power to prevent it from being rolled out. A pre-market approval regime is likely to chill a substantial amount of speech, as companies will avoid drawing political attention from the sitting administration to prevent clashes that could influence the approval process.

A mandatory review process would also severely damage and slow technological innovation. As some have pointed out, the government will have an incentive to be slow rather than nimble and an active disincentive to approve models. This could put US companies at the kind of global disadvantage typically faced in Europe, where companies have long had to seek government approval first. Political incentives then push the government to require AI developers to prove that a model is safe, rather than merely show that it has no evident flaws. This is a significantly higher bar that will take more time to clear, delaying the rollout of new features and potentially leaving fewer or more dated products available. When it comes to the underlying safety concerns around AI, there are less restrictive alternatives.

The administration may already be considering some. For example, several frontier AI companies recently agreed, voluntarily, to share information that allows the Center for AI Standards and Innovation (CAISI) to test and review their models to identify potential safety- and security-related risks and capabilities, without giving the government the power of ultimate approval or disapproval. Such voluntary agreements for government review and safety auditing of AI models enable independent third-party review of companies’ safety and security claims. But they should not make the government the ultimate arbiter of how the technology develops, as mandatory pre-approval risks doing.

It is important to note that frontier models are not unregulated or without oversight today. As mentioned above, CAISI can already enter into voluntary agreements with companies willing to submit their safety tests to independent auditing. At the same time, the National Institute of Standards and Technology (NIST) has published an AI risk management framework (RMF), a guidance document that shares best practices on AI risk management for developers and deployers. By keeping the framework voluntary, NIST has brought companies to the table to create a rapidly evolving document better suited to the industry’s fast pace of change, making the RMF a valuable “soft law” governance tool. All of these tools are significantly less extreme than pre-market government approval.

Establishing a pre-release review or licensing regime for AI companies would grant the government, particularly the executive branch, significant control over AI technologies that could hinder innovation or control expression. The costs to technological and economic development would be onerous. But the impact on AI-powered speech and content creation could be even worse.



