What does OpenAI really want from Trump?

When AI giant OpenAI submitted its “freedom-focused” policy proposal for the White House’s AI Action Plan last Thursday, it gave the Trump administration an industry wishlist: use trade policy to export American AI dominance and counter the looming threat of China, loosen copyright restrictions on training data (also to fight China), invest untold billions in AI infrastructure (again: China), and stop states from smothering the industry with hundreds of new laws.

But specifically, one law: SB 1047, California’s sweeping, controversial, and for now, defeated AI safety bill.

Hundreds of AI-related bills are flooding state legislatures nationwide, and hundreds or even thousands more are expected by the end of 2025, a regulatory deluge the AI industry is trying to dodge. Broad safety regulations like SB 1047 loom largest, posing what may be an existential threat to OpenAI. Strikingly absent from OpenAI’s proposal, however, is any notion of how AI should be governed — or whether it should be governed at all.

“OpenAI’s proposal for preemption, just to be totally candid with you, it confuses me a little bit, as someone who’s thought about this issue a lot,” Dean Ball of the libertarian Mercatus Center told me. While federal preemption — the legal concept that federal laws supersede contradicting state laws — is a longstanding practice in several fields, OpenAI’s proposal notably does not include a national framework to replace any AI laws the states are working on. OpenAI did not respond to a request for comment on the plan.

OpenAI’s argument against state regulation reflects a common perception that 50 states enforcing 50 versions of AI law would create chaos for citizens and AI companies alike. But in the absence of federal action, and with AI’s impact on everyday life exploding, state-level proposals are skyrocketing. As of today, fewer than 80 days into 2025, there are 893 AI-related bills up for consideration in 48 states, according to government relations firm MultiState.ai — compared to 734 submitted in all of 2024. The bills address a broad range of concerns, from political deception and AI-generated nonconsensual intimate imagery to the use of AI tools in handling utilities during natural disasters. Some bills proscribe the use of AI tools under certain circumstances, while others — like a bill that encourages AI-powered searches for objectionable content in school libraries — promote it.

“This is not just unprecedented for technology policy. I believe this is unprecedented for policy, period, full stop,” said Adam Thierer, a senior policy fellow on technology and innovation at the right-leaning R Street Institute. “Name me another area where you’ve witnessed, within the first 70 days of any given calendar year’s legislative cycle, 800 bills introduced. I’ve just never heard of such a thing.”

But the fact that OpenAI targeted one specific law paints a picture of what it doesn’t want: a law that holds large-scale frontier model developers liable for damages and imposes whistleblower protections, especially one enacted in a state where the company lacks political backing.

SB 1047, which was passed by the legislature and then vetoed by California Governor Gavin Newsom in 2024, would have hit the nascent AI industry the hardest. The bill would have placed strict security requirements and legal liability on frontier labs that develop new AI models and do business in the state — which, essentially, meant every AI lab in the country would have been forced to comply. Though its supporters and opponents hardly fell along ideological lines — even competitors like Anthropic ended up backing the proposal — OpenAI publicly opposed the bill, arguing in a letter to the California State Legislature that it would hamper the industry’s growth and global leadership.

Federal preemption of state laws is standard practice — except that, in this case, there are effectively no federal laws to take their place, even ones addressing less existential concerns. Two bipartisan AI-related bills gained momentum in Congress but were never passed: the No Fakes Act, which would have created liability for unauthorized generative AI replicas of individuals, and a bill that would have codified the AI Safety Institute that then-President Joe Biden established via executive order. And with Trump rolling back that order, little is currently in place — at the national level, at least — to rein in AI developers.

For now, OpenAI and the tech industry at large are relying on the threat of China to push their goals forward. In a post-DeepSeek world, they’re leaning heavily on the potential of national security threats and the loss of America’s AI superpower status, inflected with Trumpian nationalism. “The Trump Administration’s new AI Action Plan can ensure that American-led AI built on democratic principles continues to prevail over CCP-built autocratic, authoritarian AI,” OpenAI writes in its comments.

“I think the frontier labs, and approximately everyone trying to do any policy entrepreneurship in Washington, DC, has the sense that China is a good way to get people to pay attention,” Ball observed.
