California Senate bill to crush OpenAI competitors fast tracked for a vote (twitter.com/psychosort)
33 points by robertn702 on April 29, 2024 | hide | past | favorite | 16 comments


Anybody got any additional context on this? The headline / tiny screenshot leaves a lot to be desired. What is the bill? What does it do? How does it benefit OpenAI? Does it also benefit other "big players" (Meta, xAI, etc.)?


You're likely not seeing the context because you're not logged in. People need to stop using Twitter for this because it's not accessible.


Especially in cases where the tweet isn't any kind of primary source or analysis, just "OMG Look at this!"


[flagged]


"The experts are handling it" is the language of totalitarian authoritarianism, not of a democracy.



The entire notion of “safety” and “ethics” in AI is simply a Trojan horse for injecting government control and censorship over speech and expression. That’s what the governments get out of it. The big AI players like OpenAI, Microsoft, Amazon, Google, etc. are incentivized to go along with it because it helps them through regulatory capture and barriers to competition. They also make some friends with powerful legislators to avoid pesky things like antitrust scrutiny.

Legislation should not restrict the development or operation of fundamental AI technologies. Instead laws should only be built on the specific uses that are deemed illegal, irrespective of AI.



Do I hear the sound of GPU farms being packed up in preparation for being moved to another state (or, more likely, the next generation being built in another state)?

Raw compute doesn't care where it occurs. Anywhere with plenty of reliable electric power is fine (which is increasingly not California to begin with).

The people who use the compute power don't even have to live there.


With how expensive California electricity is, I'm not sure why they would be there in the first place, honestly.


And here is the response thread from the state senator who introduced it.

https://twitter.com/Scott_Wiener/status/1784964914236227757

Has anyone with less of a bias actually analyzed this bill? After reading these two threads, I'm not sure I trust either of these guys.


Thread succinctly summarized as: "Nothing will happen bro, just trust us." Fucking liar. This is a trojan horse and a power grab. The only thing that matters is the text of the bill, not some bullshit explanations by the authors.


Is there an easy way to actually read the whole thread? All I see as a non-user of X/Twitter is a post alluding to a response, but without any follow-up:

> However, this thread contains some extreme inaccuracies — indeed outright fear-mongering — about our AI safety legislation (SB 1047) that need to be corrected:

[then nothing]



Oh, of course it's from this guy...


Burdensome restrictions on AI seem poised to benefit entrenched players and government. On that we agree.

I am terrified of the societal implications of AI to the extent I am plotting out short stories.

The author of the linked tweet seems to think AI can do no harm and there is nothing to worry about: that bad actors can't abuse AI for bioweapons research or to generate misinformation, and that there is no chance an AI technology could make itself better unassisted.

I'll follow up on their position but it seems naive or disingenuous in the extreme.


What the hell?

It establishes the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act to regulate the development and use of advanced AI models, termed "covered models." So now LLMs will be subject to the whims of an unelected governing body and you will comply under threat of vague penalties?

Requires entities operating computing clusters to assess whether customers intend to use resources to deploy covered models, and to maintain related records. If your judgement is wrong, you are fined and go to jail?

If a model does not qualify for exemption, developers must implement cybersecurity protections, shutdown capabilities, safety protocols, and capability testing. Annual compliance certification to the Frontier Model Division is required. GTFO.

Developers must report AI safety incidents to the Frontier Model Division within 72 hours of occurrence. Oof.

Creates the Frontier Model Division within the Department of Technology to oversee compliance, issue guidance, review safety reports, and advise the Attorney General and Legislature on AI matters. Jobs! Jobs! Jobs!

Requires entities operating computing clusters to assess if customers intend to use resources to deploy covered models and maintain related records. There goes our freedom of association.

Allows the Attorney General to bring civil actions for violations, with potential penalties including injunctions, damages, and fines up to 30% of model development cost. Just lovely.

Directs the Department of Technology to commission consultants to create CalCompute, a public cloud computing cluster for AI research and innovation. Good luck with that!

LLM model development will be buried by red tape in California.



