President Donald Trump announced Monday he will sign an executive order preventing individual states from implementing their own artificial intelligence regulations, establishing exclusive federal oversight of AI development and deployment across the United States. The move marks a significant shift in AI governance at a moment when the technology is becoming increasingly embedded in critical infrastructure and daily life.

"There must be only One Rulebook if we are going to continue to lead in AI," Trump declared in a Truth Social post. "We are beating ALL COUNTRIES at this point in the race, but that won't last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS."

The State Regulation Landscape

The executive order would preempt a growing patchwork of state-level AI laws that have emerged over the past two years. California, Colorado, and several other states have passed or proposed legislation addressing AI safety, algorithmic bias, and transparency requirements. These state efforts accelerated after federal lawmakers failed to advance comprehensive AI legislation despite multiple congressional hearings and proposed bills.

California's proposed AI safety bill, SB 1047, which Governor Gavin Newsom vetoed in September 2024, would have required safety testing for large AI models and established liability standards for AI-caused harms. Colorado enacted laws requiring transparency in AI-driven insurance and employment decisions. New York City implemented Local Law 144, which governs automated hiring tools.

Trump's characterization of some states as "bad actors" in the regulatory process suggests the administration views state-level AI rules as obstacles to innovation rather than necessary safeguards. The comment comes despite AI safety researchers warning that inadequate oversight could enable harmful applications ranging from sophisticated misinformation campaigns to autonomous weapons systems.

Federal Approach Takes Shape

The executive order aligns with the administration's broader deregulatory agenda aimed at accelerating AI development with minimal government intervention. Trump has previously stated that excessive regulation could cause the United States to lose its competitive advantage to China and other nations investing heavily in artificial intelligence capabilities.

The administration's approach contrasts sharply with the European Union's comprehensive AI Act, which establishes strict requirements for high-risk AI systems and has already influenced global AI governance discussions. The EU framework categorizes AI applications by risk level and imposes corresponding obligations on developers and deployers.

Trump's federal-only framework would likely establish lighter-touch guardrails focused on maintaining American competitiveness rather than prescriptive safety requirements. Industry groups have largely supported federal preemption, arguing that navigating different state requirements creates compliance challenges that disadvantage American companies.

Safety Advocates Push Back

AI safety organizations and researchers have expressed concern about eliminating state-level experimentation in AI governance. They argue that states serve as valuable testing grounds for regulatory approaches and provide protections when federal action stalls.

"States have historically been laboratories of democracy, testing policy approaches that later inform federal legislation," noted AI policy experts. Removing state authority could leave significant regulatory gaps if federal rules prove inadequate or take years to develop and implement.

The timing is particularly notable given recent developments in AI capabilities. Advanced models from OpenAI, Anthropic, Google, and others continue pushing boundaries in reasoning, coding, and multimodal understanding. These rapid capability improvements have intensified debates about appropriate oversight mechanisms.

Business and Political Implications

For technology companies, a uniform national standard would simplify product development and deployment strategies while reducing legal uncertainty. Major AI labs including OpenAI, Anthropic, and Google have advocated for clear federal guidelines over a fragmented patchwork of state laws.

However, the executive order could face legal challenges from states arguing that federal preemption exceeds executive authority or infringes on traditional state powers. Preemption ordinarily rests on a federal statute or a regulation grounded in one, so an executive order acting on its own stands on uncertain legal footing, and broader constitutional questions about federal versus state jurisdiction over emerging technology remain unsettled.

The announcement also comes as Congress considers multiple AI-related bills addressing everything from deepfakes and election security to autonomous vehicles and healthcare applications. Trump's executive action could effectively sideline legislative efforts by establishing federal primacy through executive authority rather than statute.

What Comes Next

The executive order's specific language and enforcement mechanisms remain unclear. Key questions include whether it would grandfather existing state laws, how federal agencies would assume oversight responsibilities, and what happens to state enforcement actions already underway.

For AI companies, the order signals a more permissive regulatory environment that prioritizes innovation velocity over precautionary measures. For states that have invested in developing AI governance frameworks, it represents a significant reduction in their ability to protect residents from potential AI-related harms.

As AI capabilities continue advancing at unprecedented speed, Trump's "One Rulebook" approach will test whether centralized federal oversight can adequately balance innovation incentives with necessary safeguards—or whether eliminating state-level flexibility creates regulatory blind spots that allow preventable harms.