
> AI image generation is speech.

Sure, but I don’t think an AI model is speech, at least not one with billions of parameters trained with massive quantities of compute. Comparing it to regulated heavy machinery or architecture is apt.

You can’t create an AI model without huge, global industries working together to build the tools that produce it. And baked into it are all sorts of biases that, with widespread adoption, could have profound social consequences. These models aren’t easy to make, so the few that exist are likely to see heavy adoption and use; they are useful tools, after all. Group prejudice in these models jumps off the page, whether by race, sex, religion, etc., and black-box algorithms are fundamentally dangerous.

Speech is fundamental to the human experience; large ML models are not, and calling them speech is nuts.



You can't create Hollywood blockbusters or video games without huge, global industries working together to produce the tools and content. Popular media has biases and, clearly, social consequences as well—there's a reason people talk about how much "soft power" the US has!

And yet pop culture content is speech both in a casual and in a legal/policy sense.

AI models are not identical to movies or video games, but they're not different in any of the aspects you listed. On the other hand, there is a pretty clear difference between AI models and heavy machinery or architecture: AI models cannot directly hurt or kill people in the physical world. An AI model controlling a physical system could, which is a case for strict regulations on things like self-driving cars, but that doesn't apply to the text/image/etc. generation models we're talking about here. Plans for heavy machinery or buildings are not regulated until somebody tries to build them in the real world (and they are also very clearly speech, even if they took a lot of engineering effort to produce). At least in the US, nobody is going to stop you from publishing plans for arbitrarily dangerous tools or buildings, with narrow exceptions for defense-critical technology.


Aren't people allowed to release source code on free speech grounds? There's not much difference. One could publish a model with its weights, and that would qualify as free speech.
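For what it's worth, here's a minimal sketch (the weights below are made-up numbers) of what “publishing a model with its weights” amounts to: serializing a table of numbers to text, which is hard to distinguish from any other published document.

    import json

    # Hypothetical weights; a real model has billions of them, but the
    # published artifact is the same kind of thing: a table of numbers.
    model_weights = {
        "layer1/kernel": [[0.5, -0.2], [0.8, 0.1]],
        "layer1/bias": [0.1, -0.3],
    }

    # "Publishing" the model is just writing those numbers out as text.
    print(json.dumps(model_weights, indent=2))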


I believe it depends. Free speech is not without limits. I think there's still a lot of discussion in the biomed community about the legality/ethics of releasing methods that can be used by bad actors to generate potentially harmful results.


All sorts of software can be used to “generate potentially harmful results”. Think about an algorithm for displaying content on social media sites, a search engine, a piece of malware on a computer. Do we ban books on how to make bombs? It’s such a broad point that it’s practically meaningless. The computer’s job is to read a set of instructions and execute them. It’s abstractly no different from a human writing instructions for another human, with the caveat that the computer is a much faster processor than a human. You can quite literally run these ML models (a collection of statistically weighted averages) with pen and paper and a calculator (or even none!). Things will be maliciously used; that doesn’t mean we ban sharp forks (to the detriment of legitimate users), we ban those who intentionally misuse them.
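To make the pen-and-paper point concrete, here is a minimal sketch of a forward pass through a toy two-layer network. The weights are made up for illustration, but the operations are the same ones a real model performs: multiplications, additions, and a simple nonlinearity, just repeated billions of times.

    # A toy "model": hypothetical weights, but the same arithmetic a
    # real network performs, at a vastly smaller scale.
    def relu(x):
        return max(0.0, x)

    w_hidden = [[0.5, -0.2], [0.8, 0.1]]  # 2 inputs -> 2 hidden units
    b_hidden = [0.1, -0.3]
    w_out = [1.2, -0.7]                   # 2 hidden units -> 1 output
    b_out = 0.05

    def forward(x):
        # Weighted sums plus a bias, then a nonlinearity: all of it
        # is pen-and-paper arithmetic.
        h = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(w_hidden, b_hidden)]
        return sum(w * hi for w, hi in zip(w_out, h)) + b_out

    print(forward([1.0, 2.0]))  # roughly -0.2, checkable by hand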


You’re correct, and, yes, it can apply to virtually anything. But we don’t let that stop us from properly mitigating risk.

I think the distinction is when a threat is “imminent”. To the point of this thread, I don’t think the dialogue has progressed enough to form a consensus on where that “imminent threshold” lies.

Teaching someone about DNA doesn’t constitute an imminent threat. But the equivalent of teaching the recipe for a biological weapon may be considered enough of an imminent risk to warrant regulation.



