Anthropic pledges $20M to AI safety group

AI Safety Versus Innovation Battle Intensifies

Anthropic announced Thursday a $20 million donation to Public First Action, a political advocacy group supporting congressional candidates who favor AI safety regulations, directly countering a $125 million super PAC backed by OpenAI leaders and investors that opposes stringent government oversight.

The donation transforms AI policy into a major campaign flashpoint ahead of the 2026 midterm elections, with rival AI companies now funding opposing political movements to shape the regulatory landscape governing their industry.

Competing Visions for AI Governance

Public First Action plans to support 30 to 50 candidates from both parties in state and federal races who favor AI transparency, safeguards against high-risk applications like AI-enabled biological weapons, and opposition to federal preemption of state-level AI regulation without strong national standards.

The group immediately launched ad campaigns backing Republican Senator Marsha Blackburn's Tennessee gubernatorial campaign and Nebraska Republican Senator Pete Ricketts' reelection bid, highlighting the lawmakers' advocacy for reining in the AI industry and for children's online safety legislation.

"At present, there are few organized efforts to help mobilize people and politicians who understand what's at stake in AI development," Anthropic stated in its announcement. "Instead, vast resources have flowed to political organizations that oppose these efforts."

OpenAI-Backed Opposition

Anthropic's donation directly challenges Leading the Future, a super PAC backed by OpenAI co-founder Greg Brockman and venture capital firm Andreessen Horowitz. Leading the Future reports $70 million on hand with tens of millions more committed, focused on boosting candidates supporting industry-friendly rules and lighter government oversight.

Leading the Future has already spent hundreds of thousands of dollars opposing an AI safety advocate running for Congress in New York and supporting candidates aligned with lighter regulation. The group has announced plans to spend half a million dollars in North Carolina and "seven figures" supporting two Democratic candidates elsewhere.

White House Criticism and Political Risk

Anthropic's political involvement carries risks given its strained relationship with the Trump administration. David Sacks, the White House AI and crypto czar, criticized Anthropic in October, accusing the company of running a "sophisticated regulatory capture strategy based on fear-mongering" and being "principally responsible for the state regulatory frenzy that is damaging the startup ecosystem."

President Trump subsequently signed an executive order establishing a single federal AI regulation framework, undermining individual state authority—particularly in Democratic-led states like California and New York where Anthropic had supported regulatory efforts.

Industry Outlier Position

Anthropic has positioned itself as the AI industry's safety-focused outlier, advocating for export controls on AI chips, proposing transparency frameworks, and supporting regulatory safeguards. The company's approach contrasts sharply with that of most of its AI industry peers, who favor innovation-focused policies and seek to prevent fragmented state-level regulation.

"The companies building AI have a responsibility to help ensure the technology serves the public good, not just their own interests," Anthropic stated, emphasizing its belief that flexible regulation can simultaneously enable AI benefits, manage risks, and maintain American competitiveness.

Broader Political Implications

The AI policy battle represents a fundamental disagreement about technology governance at a critical moment when AI capabilities are rapidly advancing. Anthropic argues that meaningful safeguards, transparency requirements, and regulatory oversight are necessary to prevent catastrophic risks.

As hundreds of millions of dollars flow into congressional races from competing AI interests, the 2026 midterms will effectively serve as a referendum on AI governance philosophy, determining whether the United States pursues safety-focused regulation or lighter, innovation-friendly oversight as AI becomes increasingly embedded in daily life and critical infrastructure.