The Trump administration is engaged in a concerted effort to prevent states from regulating artificial intelligence (AI). This initiative includes a Department of Justice (DOJ) litigation task force, a Commerce Department evaluation of state laws deemed 'burdensome,' and a legislative framework aimed at establishing a 'minimally burdensome national standard' that would preempt state regulations. States, in contrast, have ramped up their regulatory efforts, introducing 1,208 AI bills in 2025 and enacting 145. Congress has twice rejected federal preemption, most notably in a 99-1 Senate vote against an AI moratorium in the One Big Beautiful Bill Act.
Doug Fiefia, a Republican state representative from Herriman, Utah, and a former Google salesperson, introduced the Artificial Intelligence Transparency Act earlier this year. The bill would have required leading AI companies to disclose safety and child-protection plans and provided whistleblower protections for employees who report safety issues. It passed a House committee unanimously but was ultimately halted by White House opposition.
On February 12, 2026, the White House Office of Intergovernmental Affairs communicated its strong opposition to Utah HB 286, calling it an 'unfixable bill' that ran contrary to the administration's AI agenda. In private discussions, Fiefia was unable to obtain any specific amendments that would have made the bill acceptable, and it died in the Senate.
Fiefia emphasized the importance of defending states' rights, particularly under a Republican administration, asserting that the principle should transcend partisanship. Notably, his bill applied only to 'frontier developers,' defined as companies that train models using at least 10^26 floating-point operations, and capped penalties at $1 million. Despite this narrow scope, the White House treated it as a severe threat.
The Federal Framework
The Trump administration's opposition to state AI regulation is structured around three main components. The first was Executive Order 14365, signed on December 11, 2025, titled 'Ensuring a National Policy Framework for Artificial Intelligence.' The order established an AI Litigation Task Force within the DOJ, charged with challenging state AI laws in federal court on the grounds that they unconstitutionally burden interstate commerce and are federally preempted. It also directed the Secretary of Commerce to deliver a comprehensive evaluation of state AI laws by March 11, identifying the most burdensome regulations.
The second component was the Commerce Department's evaluation itself, which singled out state laws in Colorado, California, and New York for extra scrutiny. That evaluation is expected to feed into the DOJ task force, which is anticipated to initiate federal legal challenges by summer 2026; cases are expected to take two to three years to resolve.
The third component was the National Policy Framework for AI released on March 20, which outlined legislative suggestions across seven pillars: child protection, AI infrastructure, intellectual property, censorship and free speech, innovation, workforce preparedness, and the preemption of state AI laws. It asserted that Congress should act to preempt state laws that impose undue burdens, advocating for a cohesive national standard.
David Sacks, former AI and crypto czar, articulated the administration's viewpoint, arguing that a patchwork of inconsistent regulations across states creates challenges for innovators. He also criticized Colorado's algorithmic-discrimination rules and denounced attempts by 'blue states' to impose ideological frameworks on AI models.
State Actions
While the federal government deliberates over AI regulation, states have been proactive. In 2023, fewer than 200 AI bills were introduced; however, this number surged to 635 in 2024, with 99 enacted. By 2025, every state had proposed at least one AI-related bill, leading to 145 new laws. In early 2026, 78 chatbot-specific safety bills were filed across 27 states.
California's Transparency in Frontier Artificial Intelligence Act and Texas's Responsible Artificial Intelligence Governance Act took effect on January 1, 2026. Colorado's AI Act, which prohibits algorithmic discrimination, has had its effective date delayed until June 30, 2026. This legislative momentum reflects a bipartisan consensus that AI regulation cannot wait.
Utah Governor Spencer Cox has asserted that states must retain the authority to regulate AI, emphasizing the need to ensure technology serves humanity rather than endangers it. He has initiated a 'pro-human' AI initiative with $10 million allocated for workforce readiness.
Congressional Disagreement
The administration’s framework necessitates Congressional action to become legally binding, as the executive order itself does not invalidate any state laws. Until legal challenges are resolved, parties must adhere to state regulations.
The most extensive federal AI proposal is Senator Marsha Blackburn’s TRUMP AMERICA AI Act, a 291-page discussion draft released on March 18. It would impose a duty of care for high-risk AI systems, require transparency on training data usage, repeal Section 230 of the Communications Decency Act, and establish a liability framework for AI developers. However, it remains a draft and has not been formally introduced.
The One Big Beautiful Bill Act initially included a ten-year moratorium on state AI regulation, later amended to five years and tied to federal broadband funding. The Senate voted decisively to strip the preemption clause, with only one senator supporting its retention, and the bill was enacted on July 4 without any restrictions on state AI legislation, signaling that Congress, for now, is unwilling to take regulatory authority away from the states.
The Financial Stakes
The lobbying efforts surrounding this issue have intensified, with significant funding flowing to both sides. Leading the Future, a super PAC formed in August 2025, raised $125 million in 2025 and supports candidates in favor of uniform federal regulations. Conversely, Anthropic has contributed $20 million to Public First Action, a bipartisan group aiming to support candidates advocating for AI safeguards.
A coalition of 36 state attorneys general has voiced opposition to AI preemption, citing rising risks like scams and harmful interactions, particularly for vulnerable populations. Colorado's attorney general intends to challenge the executive order in court.
Important Precedents
The administration swiftly revoked Biden's Executive Order 14110 upon taking office on January 20, 2025, labeling it 'unnecessarily burdensome' for AI developers. Its replacement, signed shortly after, aimed to 'remove barriers to American leadership in AI.' This trajectory points to a troubling possibility: if the federal government declines to regulate AI while simultaneously blocking states from doing so, the result may be no regulation at all.
In contrast, the EU has implemented a unified regulatory framework through the EU AI Act, effective January 2026. The U.S. approach diverges sharply, with no binding federal standards and active efforts to prevent state regulations from filling the void. Consequently, AI governance in America is being shaped more by litigation and executive orders than by cohesive legislation.
Doug Fiefia, the Utah lawmaker whose transparency bill faced opposition, is now campaigning for a state senate seat. His opponent, who played a role in the bill's failure, claimed it would 'drive Utah out of the AI innovation business.' Fiefia co-leads the AI task force of the Future Caucus alongside a Vermont Democrat, part of a new generation of lawmakers with tech backgrounds who believe that informed regulation is essential to the responsible development of AI. The open question is whether the regulatory vacuum they seek to fill will be temporary or become a permanent feature of the landscape.