Republicans thwart Trump's push to block state AI laws in defense bill

A Donald Trump-backed measure to prevent states from enacting AI regulations for a decade failed to make it into the National Defense Authorization Act. Republican opposition, joined by a bipartisan coalition, led to its removal from the bill. House leaders now seek alternative legislative paths for the proposal.

The effort to include a federal preemption of state AI laws in the National Defense Authorization Act (NDAA) collapsed this week amid resistance from within the Republican Party. House Majority Leader Steve Scalise (R-La.) told reporters on Tuesday that the NDAA "wasn’t the best place" for the measure, though Republicans remain interested in advancing it elsewhere. This marks the second recent setback for the Trump administration's initiative, following its earlier exclusion from a budget bill.

Trump has repeatedly urged Congress to enact a unified federal standard, arguing that a "patchwork of 50 State Regulatory Regimes" would hinder innovation and allow China to surpass the US in AI development. In a Truth Social post last month, he stated: "We MUST have one Federal Standard... Put it in the NDAA, or pass a separate Bill, and nobody will ever be able to compete with America." Critics, including Rep. Marjorie Taylor Greene (R-Ga.), Arkansas Gov. Sarah Huckabee Sanders, and Florida Gov. Ron DeSantis, opposed bundling the preemption with the defense legislation, aligning with advocates who argue states need to be able to address AI risks quickly.

A bipartisan coalition, including state lawmakers, parents, faith leaders, and unions, celebrated the measure's removal. Americans for Responsible Innovation (ARI), a lobbying group, highlighted the "widespread and powerful" backlash against what it called a "rushed attempt to sneak preemption through Congress." ARI President Brad Carson noted: "Americans want safeguards that protect kids, workers, and families, not a rules-free zone for Big Tech." Senate Majority Leader John Thune (R-S.D.) described the proposal as "controversial," pointing to ongoing White House efforts toward a compromise that might preserve some state authority.

The debate underscores a $150 million lobbying battle between safety advocates like ARI and industry-backed groups such as Leading the Future (LTF), funded by Silicon Valley investors. In New York, the RAISE Act—authored by Alex Bores—awaits Gov. Kathy Hochul's signature and would require AI firms to submit risk disclosures, with fines up to $30 million for non-compliance. Bores, a tech engineer running for Congress in 2026, told Ars Technica that compliance would be simple, requiring just one additional hire per company. He said "critical harm" under the bill means harm affecting 100 people or causing $1 billion in damages, focusing on extreme risks like bioweapons.

Similar laws exist in Colorado and Illinois, while California Gov. Gavin Newsom signed a transparency act after vetoing a stricter measure. A New York poll shows 84 percent support for the RAISE Act. State Sen. Andrew Gounardes criticized preemption as a "gift for Big Tech," arguing firms prefer influencing federal inaction over state regulations. Scalise defended the need for preemption to foster an "open marketplace" and attract AI investments to America.
