AI regulation has become a political battleground as Congress prepares to debate comprehensive legislation and tech companies spend millions to influence the outcome. Former Palantir employee Alex Bores faces opposition from a super PAC funded by OpenAI’s Greg Brockman and Palantir cofounder Joe Lonsdale over his support for New York’s RAISE Act, which requires AI firms to publish safety protocols. Meanwhile, rival AI labs Anthropic and OpenAI are clashing over Illinois bill SB 3444, which would shield companies from liability for large-scale harm caused by their AI systems.
Congressional Races Shape AI Policy Direction
The 2026 congressional races reveal how deeply AI regulation divides Silicon Valley. According to Wired, the super PAC Leading the Future has launched an aggressive campaign against Alex Bores, targeting his support for “ideological and politically motivated legislation that would handcuff not only New York’s, but the entire country’s, ability to lead on AI jobs and innovation.”
Bores, who holds a master’s degree in computer science and worked at Palantir before entering politics, cosponsored New York’s RAISE Act, which became law in 2025. The legislation is one of the first comprehensive state-level AI safety frameworks in the United States.
Key provisions of the RAISE Act include:
- Mandatory safety protocol publication for major AI firms
- Regular model testing and evaluation requirements
- Transparency reporting for high-risk AI applications
- Public disclosure of training data sources and methodologies
The tech industry’s opposition to Bores highlights the growing tension between innovation advocates and safety-first regulators. His industry background makes him a particular threat to companies that prefer self-regulation to government oversight.
EU AI Act Sets Global Regulatory Precedent
While American lawmakers debate frameworks, the European Union has already implemented comprehensive AI regulation through the AI Act, creating a global template for governance. The legislation establishes risk-based categories for AI systems, with stricter requirements for high-risk applications in healthcare, education, and law enforcement.
The EU’s approach influences American policy discussions, particularly around questions of algorithmic transparency and bias prevention. However, critics argue that European regulations could hamper innovation and competitiveness in the global AI race.
According to MIT Technology Review, the US and China remain “almost neck and neck on AI model performance,” making regulatory decisions increasingly consequential for maintaining technological leadership. The speed of AI development has outpaced regulatory frameworks, creating urgent pressure for comprehensive governance.
Current regulatory challenges include:
- Benchmark systems struggling to measure AI capabilities accurately
- Job market disruption from rapid AI adoption
- Infrastructure demands requiring 29.6 gigawatts of power globally
- Supply chain vulnerabilities concentrated in Taiwan’s TSMC
Industry Splits on Liability Protection
The debate over AI liability reveals fundamental disagreements within the tech industry about responsibility and accountability. OpenAI supports Illinois bill SB 3444, which would protect AI companies from liability for large-scale harm caused by their systems. However, Anthropic strongly opposes the legislation, calling it a “get-out-of-jail-free card against all liability.”
Cesar Fernandez, Anthropic’s head of US state and local government relations, told Wired that “good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology.” The company has been actively lobbying Illinois lawmakers to either significantly modify or kill the bill.
This split between leading AI labs demonstrates the complexity of balancing innovation incentives with public safety protections. The outcome of SB 3444 could establish precedent for how states approach AI liability nationwide.
Key stakeholder positions:
- OpenAI: Supports liability shields to encourage innovation
- Anthropic: Demands accountability paired with transparency
- State legislators: Seeking balance between safety and economic growth
- Consumer advocates: Pushing for stronger protection mechanisms
Federal Communications Commission Regulatory Overreach
Regulatory capture concerns extend beyond AI-specific agencies to broader media oversight. The Federal Communications Commission under Chairman Brendan Carr has created fast-track processes for conservative groups to file complaints against media critics of the Trump administration.
According to Wired, internal emails show the Center for American Rights used direct access to Carr’s senior counsel to accelerate complaints against Jimmy Kimmel and ABC, bypassing career staff review processes. This preferential treatment raises questions about regulatory independence and fair enforcement.
The FCC’s actions against broadcast networks demonstrate how regulatory agencies can be weaponized for political purposes, highlighting the importance of structural safeguards in AI governance frameworks.
Enforcement Challenges in Emerging Technologies
Regulatory enforcement faces practical challenges beyond political considerations. The electric bike industry illustrates how rapidly evolving technologies outpace regulatory frameworks and enforcement capabilities.
Many devices sold as “ebikes” actually exceed federal speed and power limits, operating more like motorcycles while avoiding the corresponding safety standards. According to Wired, bike shop owners report safety incidents from working on uncertified devices, and some shops now require third-party UL 2849 certification before accepting repairs.
Enforcement gaps include:
- Classification systems that don’t match technological reality
- Safety standards lagging behind product development
- Limited resources for monitoring compliance
- Cross-jurisdictional coordination challenges
These challenges preview issues likely to emerge in AI regulation, where rapid technological development quickly renders static regulatory categories obsolete.
What This Means
The current AI regulatory landscape reveals a technology sector in transition, where traditional approaches to governance struggle with unprecedented innovation speeds and societal impacts. Political battles over congressional seats and state legislation demonstrate that AI regulation has become a defining issue for both parties and the tech industry.
The split between major AI companies over liability protection suggests that industry consensus on regulatory approaches may be impossible to achieve. This fragmentation could lead to a patchwork of state and federal regulations that creates compliance burdens while failing to address core safety concerns.
Regulatory capture risks, evidenced by the FCC’s preferential treatment of conservative groups, highlight the need for structural safeguards in AI governance. Without proper oversight mechanisms, regulatory agencies could become tools for political retaliation rather than public protection.
The speed of AI development, as documented in Stanford’s AI Index, continues to outpace regulatory responses. This creates a dangerous gap where powerful technologies deploy widely before adequate safety frameworks exist. Closing this gap requires both faster regulatory adaptation and industry commitment to responsible development practices.
FAQ
What is the EU AI Act and how does it affect US companies?
The EU AI Act is comprehensive legislation that regulates AI systems based on risk levels, requiring transparency and safety measures for high-risk applications. US companies operating in Europe must comply with these standards, potentially influencing American regulatory approaches.
Why are AI companies split on liability protection laws?
Companies like OpenAI support liability shields to encourage innovation without fear of massive lawsuits, while Anthropic argues that accountability measures are necessary to ensure responsible development and public safety.
How do congressional races impact AI regulation?
Candidates’ positions on AI regulation directly influence future legislation, with tech industry PACs spending millions to support or oppose specific candidates based on their regulatory stances, as seen in the campaign against Alex Bores.