Ask HN: Is Anyone Else Tired of the Self Enforced Limits on AI Tech?
376 points by CM30 on Oct 23, 2022 | 403 comments
Like the reluctance of the folks working on DALL-E or Stable Diffusion to release their models or technology, or the restrictions on what they can be used for on their online services?

It makes me wonder when tech folks suddenly decided to become the morality police, and refuse to just release products in case the 'wrong' people make use of them for the 'wrong' purposes. Like, would we have even gotten the internet or computers or image editing programs or video hosting or what not with this mindset?

So is there anyone working in this field who isn't worried about this? Who is willing to just work on a product and release it for the public, restrictions be damned? Someone who thinks tech is best released to the public to do what they like with, not under an ultra-restrictive set of guidelines?



For anyone who has actually used those models for more than a few days and learned their strengths and weaknesses, it is completely obvious that all this talk of "societal impact", or as you called it, self-imposed limits, is 100% bulls**. Everyone in the field knows it.

99% of those using this tactic use it to justify not releasing their models, to avoid giving the competition a leg up (Google, OpenAI) and to pretend they are for "open research". As I said, this is 100% bull.

The remaining 1% are either doing this to inflate their egos ("hey, look how considerate and enlightened we are in everything we do!"), or they pander to media/silly politicians/various clueless commentators whose level of knowledge about this technology is null. They regurgitate the same set of "what ifs" and horror stories to scare the public into standing by while they attempt to over-regulate another field so they can be kingmakers within it (if you want an example of how this works, look at the energy sector).

All this silliness accomplishes is to raise a barrier to entry for potential commercial competition. Bad actors will have enough money/lack of scruples to train their own models or to steal your best ones regardless of how "impact conscious" your company is.

Now, I don't claim everyone should be forced to publish their AI models. No, if you spent lots of money on training your model, it is yours. But you can't lock all your work behind closed doors and call yourself open. It doesn't work like this. One important point is that there is value even in just publishing a paper demonstrating some achievements of a proprietary model, but if the experiment can't be reproduced based on the description given, that is not science, and for sure it is not open.


As someone who has worked for OpenAI and with Google as a consultant, I completely and wholeheartedly agree.

1. This is absolutely a gatekeeping, ladder-pulling measure. 2. The commercial face of this industry is rife with dogmas of self-importance. It’s nearly comical.


These models cost millions of dollars to create. On top of that, they have every incentive to keep them private when the technology has the ability to change an industry, as with AlphaFold (which alone could have been a multimillion-dollar company).

Add to this mix the USA trying to prevent China from copying cutting-edge AI technology, and this was bound to happen sooner or later.

It's no longer research; it's near market viability.


The Linux kernel and other critical pieces of software infrastructure cost billions to create - yet they are available for free, allowing companies to create immense value for society on top of them.

The amount of cool stuff people have created on top of Stable Diffusion since it was released is also amazing and would never have happened if they hadn't released the weights.


>The amount of cool stuff people have created on top of Stable Diffusion

Paintings that look like Pepe the frog?


Like Blender integration to generate assets for your (graphics) models, and full-blown video sequences generated from a single prompt.

The resulting storm of new applications was definitely interesting to watch, though none of them were really market-viable with that version of the model.


From GP:

> Now, I don't claim everyone should be forced to publish their AI models. No, if you spent lots of money on training your model it is yours. But you can't lock all your work behind closed doors and call yourself open. It doesn't work like this.

Both of you can be right. Just don't try to call yourself open.


It is all marketing. Apple patted themselves on the back for their “courage” (hahaha) in removing headphone jacks. I forget which company it was - they gave some money to charity, but spent 10 times more on ads that highlighted their one good deed of donating money.

Claiming to be “open” while not actually being open is a feature, not a bug. Just business as usual


What's wrong with the charity example? Presumably that money came out of their marketing budget, and if they hadn't spent it on ads highlighting the donation, it would have been spent on some other kind of ads instead.


> Add to this mix the USA trying to prevent China copying cutting edge AI technology

ML-tech from the past century seems to work just fine for killing each other (have a look at Ukraine). I would be rather more concerned about the foe having better batteries at this point.


Does it never interest you that the US has never discussed what types of AI models it has produced?

What do you think unlimited funds dumped into AI for military applications can do?

The US KNOWS this is a big deal, so much so that it has destroyed China's ability to conduct top end research.


Genuinely curious - what more weapons do we need? Especially western superpowers? Do we not have enough weaponry to destroy this planet million times over?

What more destructive power can AI provide, beyond the already existing drones/biological etc etc?


Now you can, but 120 years ago the British Empire ruled the waves with no reason to think battleships or cavalry would ever be obsolete.

The machine gun and trench warfare forced the development of the tank.

Once aircraft became militarily useful, battleships rapidly became vulnerable to aircraft and hence to aircraft carriers.

I can imagine replacing naval mines with normally-quiescent remote-triggered torpedoes that are scattered on the seafloor a decade in advance.

How effective are troops against 3D printed drones designed to mimic animals (including birds), but which have a short-range firearm and some computer vision? Or engineered mosquitoes with the bare minimum of remote control, perhaps similar to the "cyborg cockroach" kits that have been on sale for about 9 years now? (Or just weaponise those 'roaches…)

What happens if you can predict what a commanding officer will say, use an AI to fake their voice before they've spoken, and interfere with the communications to give misleading orders in the heat of combat? It doesn't even need to be a big change to the orders to alter the outcome.

(Obviously everything I've thought of in 5 minutes, the actual military will have categorised into "haha no" and "let's wargame this scenario"; I assume mostly the former).


Can I preorder your novel?


I'm redoing the final two chapters before I even look for a publisher. But I've been at it (in my spare time) for six years, so don't hold your breath.

(I'm not sure if your reply was a thumbs-up or a serious question, but that's a serious answer).


It was both. I'm glad you're making the effort, whether it's in limbo or not!


Thanks, I appreciate the boost ^_^


It’s not a question of destructive power. Destroying the earth has no strategic value because we need the earth. But there are a number of other problems that need to be solved, such as gathering and analyzing intelligence, precision strikes that harm the enemy while sparing friendlies and civilians, and so forth that have not been perfected.


> Do we not have enough weaponry to destroy this planet million times over?

You do understand that there are nuances here, yes? In other words, that there are military operations that lie somewhere between "do nothing" and "destroy the entire world"?


> These models cost millions of dollars to create.

Kodak had billions-with-a-b of dollars invested in the production of photographic film.

That didn't save them from competitors.


I guess it depends on how you define competitors. I think they protected themselves from people playing against their strengths (ie photographic film), but failed to prepare against disruption from left field.


> These models cost millions of dollars to create.

It's reasonable that it's fair use to train on unlicensed, freely available data as long as the model is released. I don't know anything about the law or care much about the legal stuff, but that's my opinion for how it should be.


There is a move large companies do where after innovating they pull the ladder up behind them by trying to drag regulators into the space. Regulation establishes a moat and allows them to cruise on that innovation until enough external pressure builds up that the dam breaks (people innovating in countries without that regulation for example).

I'm not quite sure what the solution to this is, by the way. I don't think the answer is no regulation, but rather figuring out how to improve the quality of regulation.


Is there any evidence that companies (or rather senior staff at those companies) knowingly do that, or is it simply that once some innovation results in a new, large market, a government tries to get involved?


There is a substantial literature documenting regulatory capture in the modern bureaucratic state, but the business model dates back centuries. This is almost like asking for evidence of air.


I think this is conflating two phenomena. The parent poster was asking:

Do companies push for the creation of regulatory bodies to stifle competition?

Whereas regulatory capture is the process by which companies shape existing regulatory bodies to serve their interests. But it presumes the existence of a body to regulate them; regulatory capture can't occur if there isn't regulation to start with.


You make a good point here that it's not obvious whether it's the tail wagging the dog or the dog wagging the tail. I suspect it's both.

This paper may be interesting further reading on the topic of regulatory moats specifically. (I saw your comment below discussing regulatory capture vs this specific subset of capture.)

https://www.nber.org/papers/w26001


Name a large company that has successfully done that recently?


It's yet to be seen whether it will be successful, but Meta lobbying Congress to trim or eliminate Section 230 comes to mind as a recent example.


Think of how dangerous it will be if everybody can write a book. We must limit book writing to only educated and responsible people with the proper education in ethics and morals. /s

This happens with every new technology.


Except nobody has issues when the tech is meant to exploit somebody but isn't public. The morality angle disappears with sunlight


Like what kind of "bad actors" are they even talking about when they say they don't want their code to fall into the wrong hands?

Organised crime? If a big enough organised crime group wanted it you'd get stuffed into the back of a van.

CIA? MI6? You'd get stuffed into the back of a van. FSB? They'd invade the country to stuff you into the back of a van then bum 20 quid off you for diesel. Mossad? They'd break into your house and replace your computer with an identical fully-functional one made of polonium *while you were using it* and say they didn't do it, while showing AI-generated footage of a Russian doing it on TV and crediting you for the software they totally didn't steal.


> *while you were using it* and say they didn't do it

Had a good chuckle. Didn’t know Tom Clancy frequented hacker news.


Small and medium-sized bad actors.

As you say, the big bads have means regardless, and probably don't need it because they're often big enough to hire a lot of humans to do e.g. propaganda for them.

Now, imagine a conspiracy theorist. Not a harmless moon landing denier, but someone who is convinced that pizzagate was a real thing, and gets AI to fake mountains of "evidence" until the politicians that he or she is convinced did those things, are lynched, or at the very least become politically toxic.

Think it can't happen? Fake porn is already being used to harm random women, and real pictures from veterinary textbooks have been used as fake "evidence" of animal abuse.

AI is a tool that can make anyone more competent, but it doesn't make us better or wiser or kinder.


> ... real pictures from veterinary textbooks have been used as fake "evidence" of animal abuse.

I'll do you one better than that. Farms use a lot of seasonal workers and small farms in particular will hire people in for a week or so when things are busy. I know a farmer who hired in a bunch of folk to help over the shearing. It turns out that two of the people he hired wanted to film around the farm to capture some of the animal abuse that absolutely must be going on because farmers have nothing better to do, but they didn't find any. So figuring that they were never going to let themselves into the "inner secret animal abuse lair" or whatever the fuck was going through their stoned little minds, they decided to...

yup...

... film themselves kicking fuck out of some sheep, and pretend it was common practice among farm workers.


The actual quality of the currently public models is why they decided to release them.

The worst case scenario of the currently public models is why they want to take it slow.

It's like the lottery in one of the episodes of the TV show Sliders: you probably won't win, but if you do win, you die.

Unfortunately, most people are really bad at grokking probability and, by extension, risk, especially in scenarios like this.

> Bad actors will have enough money/lack of scruples to train their own models or to steal your best ones regardless of how "impact conscious" your company is.

Indeed, totally correct.

But this is also on the list titled "why AI alignment is hard and why we need to go slow because we have not solved these things yet".

Saying "it doesn't matter if we keep this secret, someone else will publish anyway" is exactly the same failure mode as "it doesn't matter if we keep polluting, someone else will pollute anyway".


It's profit all the way-o. Simple as.

As for what they say to the public, well, fraudsters gonna fraud. It's a tale as old as time [1].

1: https://www.biography.com/news/famous-grifters-stealers-swin...


As history repeatedly shows, righteousness works really well. I don't have to painstakingly make the nuanced arguments. Instead, I can simply cite "social responsibility" or whatever social justice cause is in fashion, or simply attack my critics' morality.


Personal opinion:

These models are capable of generating photorealistic child pornography. That's why they're not being released.

I have no evidence to back this up, other than anecdotes. All the hand-wavy, vague posts from SD, DALL-E, etc. read like PR tactics to avoid getting the spotlight from the media and governments.

No one wants to post about this but I think everyone knows it's true, as soon as Congress realizes what these models can be used for there's going to be a moral panic.

EDIT: All of the replies are assuming I'm saying this is a good reason for not releasing these models. I don't say that anywhere nor do I agree with it.


Photoshop can achieve the same thing. So can women hired by pornography companies that are 18 but look younger and intentionally dress like school girls. This excuse is nonsense in the context of what's already available.


I didn't say it's a good reason to not release the models, it's just why I think they aren't being released.

You should strive to respond to the strongest interpretation of what you read and not the weakest one.


Sure, they can come up with countless weak excuses if they're dead-set on not revealing the real reason, that goes without saying.


No it doesn't go without saying. People will only come up with countless weak excuses if their real reason isn't being addressed. That happens because no matter how many weak excuses you address, their real reason remains standing and continues to motivate them to block AI tooling.

If their "real reason" of CP use cases doesn't exist right now because AI tooling explicitly bans CP use cases, they have no reason to block AI tools.


> So can women hired by pornography companies that are 18 but look younger and intentionally dress like school girls.

I've yet to see an 18-year-old that looks like an actual child, regardless of how they're dressed. However, I'm also not looking for this sort of thing, so...


Reminded of how the first film role for veteran British character actress Helen Mirren was playing a naked 17 year old in a film called, er, Age of Consent. https://www.imdb.com/title/tt0063991/ ; she was 24.

This kind of thing happened a lot, and I think more than one entirely "mainstream" director in the 60s/70s filmed nude actresses below the age of 18. Fortunately it's a lot harder to get away with that these days. Not to mention (Weinstein and many, many others) sexually assaulting the actresses off screen.


I know two separate people in real life that are in their twenties but don't look a day over 14. I don't know if it is some hormonal thing or just extraordinarily youthful looks.


My sister-in-law tried to rent a bike at Disneyworld once, and was told she had to be at least 14.

Her entirely involuntary response was to exclaim, "Lady, I'm damn near 30!"

So yes; if a 28-year-old can be mistaken for younger than 14 without trying (or even wanting to), there are plenty of 18-year-olds who can appear much younger, especially if they are using makeup, acting tricks, and other means to artificially enhance that.


The day I turned 18 I still looked like I was 17.


How could you know you’ve never met one? Do you ask every person who looks younger than 18 if they’re actually a minor or not?


Let’s also remember why CP is illegal in the first place: because children are manipulated and ultimately harmed in order to create it. I don’t see an ethical or moral problem with digitally generated content of that nature since children aren't being harmed. Am I missing something?

That’s why this whole argument is BS. It panders to those that think these things are illegal because “ew that’s gross” without actually taking the time to argue how AI slots in ethically to their worldview. Discussions about AI license laundering content are far more relevant and interesting.


What you're missing is that there are many people who think these things are illegal because "ew that's gross" and they also elect politicians who agree with them.

The United States has barely moved past the issue of homosexuality even though it harms no one; it is perhaps the most straightforward modern example of people trying to ban something due to personal & traditional disgust. Although there is general majority approval for it, that doesn't carry to the electoral level, where it remains a contentious partisan issue. But you think the ethics of CP & pedophilia will be discussed and legislated rationally?


I didn't say it's a good reason to not release the models, it's just why I think they aren't being released.

You should strive to respond to the strongest interpretation of what you read and not the weakest one.


I’m confused. I did respond to your argument, which is that these things can do harm, so people are trying to be responsible. My response is “actually, no harm is being done, so what’s the ethical problem?” The part that’s BS is when people who espouse this argument wave their hands, pull actual tangible harm out of thin air, and claim that these models are capable of causing it.

(You also suggested some impending moral panic. I don’t buy it.)


Using the same logic, let's ban the sale of kitchen knives; they can be used for stabbing people.


The UK prohibits the sale of knives to under-16s in the same way as it does cigarettes and alcohol.

We've not even begun to ask the question of whether there are ethical problems of exposing children to unregulated AI and vice versa.


I didn't say it's a good reason to not release the models, it's just why I think they aren't being released.

You should strive to respond to the strongest interpretation of what you read and not the weakest one.


You’re being overly dramatic. People are responding to your argument, which was “these things are capable of harm, which is why they’re not being released, and Congress is gonna freak because ethics”. People are highlighting why this is a BS argument; whether you fully espouse it is not really relevant. You presented it, so that’s what’s being rebutted.


I genuinely suggest you take the time to re-read their comment. Their argument was not that "these things are capable of harm"!

All of us can be tempted into overly quick responses without truly reading what we're responding to.


Maybe I drew the wrong conclusion from their comment, but the entire comment is structured under a presumption that AI-generated CP is harmful; otherwise why would they have included the whole sociopolitical section? If that’s not what was meant, then my bad. My point still stands, and I don’t think I gave the commenter any grief. I just called an argument that they potentially didn't make BS.


It is structured under the presumption that some people think AI-generated CP is harmful, and that AI researchers think that some people think AI-generated CP is harmful and are afraid of the potential backlash.

The commenter did not necessarily adopt any of these positions themselves.

I agree that you were civil about it and no harm was done other than a bumpier than necessary comment thread.


>These models are capable of generating photorealistic child pornography. That's why they're not being released.

Is this true? Have you seen it, or just heard about it? Because what I've seen is that the AI is bad: the faces are wrong, the fingers are wrong, the hands and legs are often connected wrong, and you can very often get more than 2 legs or more than 5 fingers. I've seen a community that focuses on waifus; those images are not realistic, they are obviously cartoon/animation styles, and similar images were already being created with digital painting tools.

I also have not seen any porn scene generated by AI. I don't think the model was trained on porn scenes, so at best you will get puritans removing breasts from the models, including from art, but I sure want to see how they will ensure that the AI can only generate male nipples and not female ones.


> Is this true, have you seen it or just heard about it?

I have seen it. SD is capable of generating CP that is indistinguishable from a real photograph with correct anatomy.

> I also did not seen any porn scene generated by AI

There have been user-made SD models trained on porn available for months now.


>I have seen it. SD is capable of generating CP that is indistinguishable from a real photograph with correct anatomy

You mean a child with no clothes? Is that CP? There are paintings and art of children with no clothes. Or do you mean you have seen sexual acts too? I have never seen that with adults; AFAIK SD was not trained on porn scenes, so at best you might get realistic breasts but not realistic sexual organs, and on top of that there's the low resolution.

>There are user made SD models trained on porn available for months now.

So you would also have an issue even if SD were 100% SFW, because some dude in his basement would train it to generate breasts and dicks and then combine it with the SD that can generate faces, so SD should not exist?

People have been drawing porn digitally for decades; now an AI will do the same. Maybe you need a better case. CP is still illegal, and Google and Apple will snitch on you too, so WTF is this hysteria?

In fact, because of this CP and Google stuff I avoided experimenting with SD on Google Colab; the AI might paint a monstrosity, but Google's AI might think it is CP and close my accounts.

Also, you failed to respond to my point about missing evidence: missing evidence that drawing porn or CP ever caused victims, or more victims than, say, sending your kid to church.


...I think you've stumbled onto a new fetish subculture here.


not sure what fetish you mean exactly, but is your answer on topic?

You can retrain SD without NSFW images, but is this enough? I suspect not, because what if Timmy in his basement trains SD with the face of X and the naked body of Y and then makes a nude image?

You know what makes rational/logical sense? Follow the money: an open-source model like SD makes proprietary ones like OpenAI's and Google's lose a ton of money, a giant pile of money that Californian billionaires will lose, so they use their money to create drama.


Let's be realistic: just like building codes, medical procedures and car manufacturing, sooner or later we will also be subject to regulations. The times when hacking culture and tech were left unbothered are over.

Twenty years ago we were free to do whatever we wanted, because it didn't matter. Nowadays everyone uses tech as much as they use stairs. You can't build stairs without railings though.

Keeping the window for abuse small is beneficial to the whole industry. Otherwise bad press will put pressure on politicians to "do something about it" resulting in faster and more excessive regulations.


>You can't build stairs without railings though.

Yes you can. Your hammer doesn't magically stop functioning when it discovers that you're building stairs without railings.

You don't want tools to discriminate on what you can and can't do with them, because if they can discriminate, then you will get hammers from Hammer Co that can only use nails from Hammer Co.


Do you want to take a test and get a license to work on AI? Because that's how you are going to get there.


This doesn't seem enforceable. Plus this would be a good use case for 2A.


I would say it's fairly easily enforceable if the government simply passes a law that requires a "licensed AI Engineer" to approve a design before it gets used commercially.

This is how many other engineering disciplines work. Most people don't realize most engineers actually work under an exemption to this rule. But the mechanisms exist.

I don't think this is the way it will unfold though. Giving licenses to SWEs gives them even more leverage in the job market, which I don't think software companies will want; they would fight that type of regulation tooth and nail.


That would likely accelerate the fall of society as we know it. I can’t think of a stronger financial incentive to go underground, or simply pack up and leave the country.

Before the recent AI boom in the US, China was known to have better opportunities for AI researchers. Hugo de Garis and Ben Goertzel both went over to China for that reason some time ago. Interestingly, De Garis predicted that a world war would eventually start over the issue of AI prohibition.


Maybe you could elaborate on how you define opportunity. If it resulted in more employee leverage, where they could double their salary, do you think people would still flee the market?


They weren’t doing it “for the money”. It was the only way they could get funded at the time to be able to work on AI at all. Similar analogy to the hypothetical where AI is regulated.


> I would say it's fairly easily enforceable if the government simply passes a law that requires a "licensed AI Engineer" to approve a design before it gets used commercially.

Would this include any type of AI? Because a simple linear regression is technically "AI".

This seems to me to be a problem specifically because the issue people have with models like DALL-E is that they could be used for harm (e.g., generating deepfake porn, CP, etc.), but that is left up to the consumer. The same could be said about any product. This seems to be akin to trying to "ban math", like when the US tried to ban the export of cryptography. It necessarily fails in the end because no regulation can actually stop it, and once it exists, it exists.

Plus, one could just set up a foreign hosted VPN and sell their "regulated" product from outside the US anyway.


This isn’t unique to software. The same issue exists in other engineering domains.

For example, some industrial programs require adherence to ASTM standards for pressure vessels. This extends to seemingly common tools like air compressors, which some people take umbrage with. It really comes down to the way in which industry/regulators craft the standards.

The best way IMO is to make the applicability risk-based. Can the linear regression result in loss of human life? If so, maybe being “stamped” by a licensed engineer is appropriate. Approval by a licensed engineer essentially says a competent person has ensured the relevant best practices to mitigate risk have been implemented.


> Would this include any type of AI? Because a simple linear regression is technically "AI".

I can't predict what lawmakers will do, but it wouldn't surprise me if the first draft did, before someone explains to them what Excel spreadsheets do when drawing trend lines.

That said, the general gist here is an extension of the UK's Data Protection Act and its successor, the EU's GDPR: while those said "don't process personal data without permission", I can easily believe it would be analogous to "don't process real data without the OK of a chartered engineer".

> Plus, one could just set up a foreign hosted VPN and sell their "regulated" product from outside the US anyway.

Absolutely; the internet mocks national sovereignty.

Also, Stable Diffusion isn't a US product in the first place, it's a UK corporation and a German university research lab, amongst others: https://en.wikipedia.org/wiki/Stable_Diffusion


I think the reason it isn't enforceable is because the software will just be made in another country. This bypasses the entire issue. And if necessary you just get an American "AI engineer" to rubberstamp it.


I'm sure you could get someone to rubberstamp it. But, in practice, this is more difficult than you may think. If you are a licensed civil engineer and I show up to your office and ask you to rubberstamp my bridge design, most would not, because that creates a large legal liability for them. By stamping it, you are saying it meets the necessary design standards. If something goes wrong, the regulators are going to come after the person who certified it for use.

That is also why it gives engineers more leverage. If a manager needs someone to certify it, that licensed engineer has much more ability to push back on the design. The liability the stamp conveys is also why they tend to get paid more than unlicensed counterparts.


Then industry will just go back to calling various "AI" technology what they really are: machine learning, computer vision, etc. Suddenly no one will be doing AI and the regulation won't be able to keep up.


Lawmakers know about this kind of shenanigans. What you describe is the kind of thing which ends poorly for the person who thinks they have a cunning plan for getting away with $act.


Back when encryption was categorized as a "munition" under ITAR, and it was illegal to export PGP from the US except when printed on a t-shirt, there was talk of trying to claim that if it was a "munition" there was a right to it under the US second amendment. I don't think this got anywhere. You're better off taking a first amendment line of reasoning under US law.


This is particularly visible with the sorry state of accessibility options for disabled individuals.

I deal with a moderate vision impairment and everything I do to make computers more usable is bespoke hacks and workarounds I’ve put together myself.

In MacOS, for example, I can’t even replace the system fonts with more readable options, or increase the font size of system tools.

Most software ships with fixed font sizes (Electron is exceptionally bad here — why is it that I can easily resize fonts in web browsers and not Electron?) and increasingly new software doesn’t render correctly with an effective resolution below 1080p.

Games don’t care at all about the vision impaired. E.g. RDR2 cost half a billion dollars to make yet there is no way to size fonts up large enough for me to read.

I welcome regulation if it means fixing these sorts of problems.


You don't need regulation, you need technologies that can be freely modified by their users and the changes distributed.


This is what I do. I primarily use Linux because it has the most flexibility with typeface and scaling, but it’s still not perfect (e.g. all electron apps) and it’s definitely bespoke and hard to reproduce.

I routinely open tickets with software vendors and open source projects asking for things that ought to be low-hanging fruit and I’m either blown off or directed to tools that don’t make sense for me (usually magnifier or text-to-speech). The appetite for fixing issues that limit my access to software is essentially null.

I’ve dealt with these problems for two decades and I really believe that there is very little economic incentive to make software accessible. Unless substantially more people become vision impaired in the future, that’s how it will stay. I am pretty certain that the only way software companies will really take the vision impaired into account is if they are forced to.


If they're low hanging fruit why not implement them yourself or put up a bug bounty?

If even a fraction of those with limited vision did this the problem would be solved.

And if it's not, then it's probably not worth making it accessible (e.g. if it would cost £10k to make some obscure game accessible that would probably only have a half dozen vision-impaired users, that's just not worth it; the time of the developers is worth more than the hypothetical users' enjoyment of the game).


Regulations and user modifiability aren't mutually exclusive. In fact the former is one way to ensure the latter.


This is paternalistic, overbearing, culturally-corrosive nonsense.

Substandard buildings, medical procedures, and cars maim and kill.

AI image generation is speech.

I won’t accept prior restraint on speech as being necessary or inevitable.


> AI image generation is speech.

Sure, but I don’t think an AI model is speech, at least not one with billions of parameters trained using massive quantities of compute. Comparing it to regulated heavy machinery or architecture is apt.

You can’t create an AI model without huge, global industries working together to create the tools to produce said model. And baked in are all sorts of biases that, if met with widespread adoption, could have profound social consequences, particularly because these models aren’t easy to make, so the ones that exist are likely to see large amounts of adoption/use; they are useful tools after all. Group prejudice in these models jumps off the page, whether race, sex, religion, etc., and black-box algorithms are fundamentally dangerous.

Speech is fundamental to the human experience, large ML models are not, and calling them speech is nuts.


You can't create Hollywood blockbusters or video games without huge, global industries working together to produce the tools and content. Popular media has biases and, clearly, social consequences as well—there's a reason people talk about how much "soft power" the US has!

And yet pop culture content is speech both in a casual and in a legal/policy sense.

AI models are not identical to movies or video games, but they're not different in any of the aspects you listed. On the other hand, there is a pretty clear difference between AI models and heavy machinery or architecture: AI models cannot directly hurt or kill people in the physical world. An AI model controlling a physical system could, which is a case to have strict regulations on things like self-driving cars, but doesn't apply to the text/image/etc generation models we're talking about here. Plans for heavy machinery or buildings are not regulated until somebody tries to create them in the real world (and are also very clearly speech, even if they took a lot of engineering effort to produce). At least in the US, nobody is going to stop you from publishing plans for arbitrarily dangerous tools or buildings with narrow exceptions for defense-critical technology.


Aren't people allowed to release source code on free speech grounds? There's not much difference. One could publish a model with its weights, and that would qualify as free speech.


I believe it depends. Free speech is not without limits. I think there's still a lot of discussion in the biomed community about the legality/ethics of releasing methods that can be used by bad actors to generate potentially harmful results.


All sorts of software can be used to “generate potentially harmful results”. Think about an algorithm for displaying content on social media sites, a search engine, a piece of malware on a computer. Do we ban books on how to make bombs? It’s such a broad point to make that it’s practically meaningless. The computer’s job is to read a set of instructions and execute them. It’s abstractly no different from a human writing instructions for another human, with the caveat that the computer is a much faster processor than a human. You can quite literally run these ML models (a collection of statistically weighted averages) with pen and paper and a calculator (or even none!). Things will be maliciously used; that means we don’t ban sharp forks (to the detriment of legitimate users), we ban those that intentionally misuse those tools.


You’re correct, and, yes, it can apply to virtually anything. But we don’t let that hold us hostage when it comes to properly mitigating risk.

I think the distinction is when a threat is “imminent”. To the point of this thread, I don’t think the dialogue has progressed enough to form a consensus on where that “imminent threshold” lies.

Teaching someone about DNA doesn’t constitute an imminent threat. But the equivalent of teaching the recipe for a biological weapon may be considered enough of an imminent risk to warrant regulation.


Speech can get people to maim and kill, too. Sometimes with surprising efficiency. And when a technology is outlawed, the power that it bestows upon humanity concentrates in the outlaws. For sure, information and communication technology are an interesting edge case of that general principle.


[flagged]


If you believe the post-WW2 USA was a free speech paradise with no political persecution, you have some reading to do.

https://en.wikipedia.org/wiki/McCarthyism

https://en.wikipedia.org/wiki/Lavender_scare


The USA threw 100k+ of its own citizens in concentration camps, stripping them of property and leaving them in poverty after the war. Not exactly a shining moment for principles.


Can you show me examples of widespread censorship due to misgendering?


Literally all major platform websites--Youtube, Tiktok, Twitter, Facebook, Instagram, etc.--ban all posts which contain misgendering and deadnaming as part of their broader "hate speech" policies.

Agree or not, this is "censorship", and it is "widespread".

It is hard to show examples of censored content because it is, you know, censored. But you can try some posts for yourself on these platforms and see how it goes.

https://help.twitter.com/en/rules-and-policies/hateful-condu... https://decisionmagazine.com/facebook-says-misgendering-cons... https://www.npr.org/2022/02/09/1079643611/tiktok-bans-deadna...


I disagree, in my opinion censorship is when an authority, like the government, punishes speech.

If I use the n-word on Twitter and get banned, would that be censorship? It's listed with deadnaming in the TOS.


So, you don't consider multi-trillion dollar valued companies, whose annual revenues are greater than the GDP of the majority of countries in the world, and who have sole control over platforms used by over 1/3 the world's entire population, to be "authorities" capable of censorship.

OK, well, good luck then.


No, because you can use another product. The government can fine or send you to jail.


Literally all major products currently have similar censorship rules, no matter which one you choose.

Re: countries, "you can move to another country."

The cognitive dissonance is strong with this one.


Reddit is a good example.


Reddit is a website


You seem to forget that this AI doesn't really give a crap about your arbitrary borders, a document, or an amendment to that document. You also seem to forget that even if it did, most people don't live in the same country under the same jurisdiction, so even if everything you think and feel turns out to be true, it still doesn't help the conversation at all.

On top of that, you also seem to be trying to mix-and-match rules for institutions and rules for private entities as you see fit which is also not how any of this works.


How has the first amendment been compromised?


There is specific provision in German law precisely because of the concentration camps: https://www.pbs.org/wgbh/frontline/article/germanys-laws-ant...


One day we will crack the "platforming" issue. Until that day, free speech remains under attack, for reasons I cannot wrap my head around completely. It is pervasive though, to the point it feels like a gag at times.

But yea, if speech can become law, it can matter quite a bit. Do we really want DALL-E generated state laws?


What free speech was attacked?


Also, I thought Stable Diffusion did release their models and methodology? You just need a 3080 with enough RAM to do the inference with no boundaries, and if you have the money and time you can train new models.
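
For reference, local inference really is that accessible. A minimal sketch using the Hugging Face diffusers library (assuming a CUDA GPU with enough VRAM and a downloaded checkpoint; the model ID and prompt here are just illustrative) looks roughly like this:

    # Minimal text-to-image sketch with Stable Diffusion via the diffusers library.
    # Assumes a CUDA GPU with enough VRAM and a locally available checkpoint.
    import torch
    from diffusers import StableDiffusionPipeline

    # The model ID below is illustrative; any Stable Diffusion checkpoint works.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("a photo of an astronaut riding a horse").images[0]
    image.save("out.png")

Swap in a different or fine-tuned checkpoint and the same few lines still work, which is exactly why releasing the weights mattered.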

People are already making txt2porn sites. I'm sure they will get crazier and creepier (from my boring vanilla perspective, not judging people with eclectic tastes) as time goes by.


> People are already making txt2porn sites

Hi, asking for a friend. Where did you hear about this?


I saw it here a couple months ago. The results so far were quite odd, but it's early days. I'm not sure what the ethics are. My understanding is that models/actors of consenting age are now well treated in the industry. I'm not sure what those people will do if replaced. But that is true of Uber and truck drivers when they figure out self-driving, or me when Copilot gets a lot better.

As for those not considered capable of consent? Maybe this decreases the pressure to hurt more of those people? But the ethics of training seem hideously evil to me.


"My understanding is that models/actors of consenting age are now well treated in the industry."

It's a spectrum, like most other industries.


Totally, however as somebody who knows quite a few adult content creators and sex workers I would say the standard for the industry is actually pretty high.


Damn, somebody just parked that dot com domain.


Close enough that they probably got the idea from my post? I should probably start checking


No, looks like it was just over a week ago: 2022-10-15T01:04:51Z


Sometimes they just inconvenience one group of people or fail to sufficiently support local industry and those aspects are regulated too. Anything big enough is going to get regulatory scrutiny in one way or another because it's moving around a lot of money and affecting a lot of people.


Well, you're not the decision maker here. If the public gets sufficiently upset about AI-generated lookalike porn, these models will be regulated and maybe banned. And, as you seem to realize, that doesn't mean people will stop using them; it just means the companies that make them will stop getting paid. Obviously this is what they're trying to avoid, and the morality stuff is just because they don't want to say "we're worried about being regulated."


It's not speech. It's data.


You’re free, bud. You can make it and release it. But you have no standing to just complain about it.


The parent comment argues for the necessity and inevitability of legal regulation.

We all have standing to debate our shared culture and ethics.


Your comment equates to “stop talking” as if talking and ideas were somehow not the basic unit of democracy.


Ah yes, the classic democratic debate of "I don't like your justifications, you should give away all your work for free!"


It’s a democracy bub


You're allowed to complain about anything you want.

And others are allowed to either ignore your complaints, or act on them.


>just like building codes, medical procedures and car manufacturing...

"Self enforced limits" could also be an attempt to avoid formal governmental regulation.

The maturity of CS as an industry is still in its infancy compared to other engineering disciplines. I'm sure if you went back to the 1880s, five or six decades after the start of the industrial revolution, there was very little limitation on the design of mechanical equipment. Now, there are all kinds of industry standards and government regulations. We could lament how much this stifles progress, but we're generally not even cognizant of the amount of risk it has reduced. For example, most people don't give a second thought to the idea of their water heater being capable of blowing up their house, because it happens so infrequently.


Buildings don’t cross borders.

Code is information, and information wants to, and will be free.

Short of a North Korea like setup, regulations can only slow down the spread of information around the world.


Code is also machinery and infrastructure though, it can interact with the physical world in material ways, and because of that it will probably end up regulated.

AI is all fun and games when it's data, but if it's being used to make decisions about how to take actions in the physical world I think it's fair that it follows some protocols each society gives it. Making a picture of a cat from a learned model, writing a book with a model, cool, whatever. Deciding who gets their house raided, or when to apply the brakes on an EV, or what drugs should be administered to a patient, we probably want to make sure the apparatus that does this, which includes the code, the data and the models, is held to some rules a society agrees upon.


I recently saw a presentation from someone trying to build a predictive model for violent incidents in mental healthcare settings. They took steps to prevent the model from considering race, because this could potentially lead to less favorable treatment for some groups.

The model was unable to give any useful predictions. I don't know if it would perform better without the deliberate limitations, but I do know that healthcare staff are making their own judgements in its absence.
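
(For context, "preventing the model from considering race" often amounts to something as simple as dropping the sensitive column before fitting; a minimal scikit-learn sketch with hypothetical column names is below. The well-known catch is that other features can still act as proxies for the dropped attribute.)

    # Minimal sketch with hypothetical column names: train a predictor while
    # excluding a sensitive attribute. Assumes the remaining features are numeric;
    # note that proxies for the dropped column may remain in other features.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    df = pd.read_csv("incidents.csv")                   # hypothetical dataset
    y = df["violent_incident"]                          # label
    X = df.drop(columns=["violent_incident", "race"])   # drop label and sensitive column

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(model.score(X, y))                            # training accuracy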


I completely agree with the principle that legal entities: businesses and governments should have their use of AI restricted. Facial recognition, for example, is a danger to society.

But the development and proliferation of AI by extralegal entities cannot be stopped. Individuals, foreign researchers, foreign business, etc will keep pushing the frontier of AI forward indefinitely.

AI can be regulated but it cannot be controlled.


And those rules seem to be: nobody gets to inspect, question, or even see the model. Just accept the output without any recourse, like good little peasants.


> Buildings don’t cross borders

"A metal strip on the floor of Eurode Business Center marks the border between Germany and the Netherlands.

On one side of the building, there's a German mailbox and a German policeman. On the other side, a Dutch mailbox and a Dutch policeman."

https://www.npr.org/sections/money/2012/08/09/158375183/the-...


You've missed the point by an impressive margin.


No, I got the point. But I also thought that it was fun (in the usual HN way) to point out a factually incorrect absolute statement.


Information is just that… it has no will nor wants.


It gets tiring playing word games to avoid the suggestion that certain natural pressures have personal agency.

Water "wants" to flow downhill. Gasses "want" to expand to fill their container. Genes "wanting" to replicate drive animals literally wanting to reproduce, and the incidental awareness of that drive in some species comes down to a certain molecular arrangement brought about by said genes. The genes are data, the minds are data, and the natural pressure is that that data which succeeds in reproducing will tend to keep reproducing. So in the original sense of the notion of memes as proposed by Dawkins, yes, information "wants" to be free, as that is its tendency. The only other option is that said data ultimately dies out.


Ah, come on. That’s the exact same argument as „guns don’t kill people, humans do“ - factually correct, but misses the entire point far and wide.


Not really. Guns serve one purpose, really: shooting at things, and possibly killing them (ignoring gun ranges). It’s why they were created in the first place. That’s why the “guns don’t kill people, people kill people” argument is bogus. “Information,” OTOH, doesn’t serve any particular purpose. On its own, information is just bits; it’s how those bits are used that matters. Those bits can be arranged to say “Hello, world”, but they can also be arranged to make the Stuxnet virus.


Nuclear fissile material is similarly morally agnostic. It’s just matter, right? So is smallpox. It’s just DNA code at its heart, right? But it’s also recognized that wide access to some things creates a lopsided risk/reward profile.


Correct. I’m not arguing that wide access is a good thing; Just that the comparison to the tired gun argument is wrong. Hence why I brought up a “bad” usage of the agnostic item. It’s not the best example, but it’s what I thought of on the spot.


A remotely operated hole punch. I have used one to quickly make small round holes in hard materials on several occasions. Also to install small lead weights into things. Mostly though, they are good for humanely harvesting meat.


We regulate all kinds of things that cross borders.


Drugs, dangerously poor quality consumer products and other unwanted stuff does cross borders, and most countries are making efforts to stop them too.


Neither drugs nor consumer products are speech.


Is code speech or action?

If you're old, you say speech. If you're younger and watch the integration of modern technology into life, you'd say both.

But at the end of the day, your viewpoint doesn't matter. What does matter is that when the 'average person' feels unsafe because of this 'informational speech', as you like to call it, it will be banned and restricted, and you will be punished under the full force of the law for trafficking in it. Your neighbors will cheer as you're dragged out of your house by a SWAT team for being an evil terrorist. SCOTUS will make whatever finding is politically expedient for the point of view of whichever party holds the majority at the time, and you will rot in prison until you die of old age.

So, your choices are attempting to setup a regulatory framework to minimize the dystopian hellscape your ideas will create or embrace the dystopian hellscape.


> Your neighbors will cheer as you're dragged out of your house by a swat team for being an evil terrorist.

You have a cite for this actually happening with respect to a work of art in the United States in recent years?


I'm quite sure it is not "speech" in the sense they knew when freedom of speech laws were written.


I'm quite sure that visual art has been recognized as having First Amendment coverage for well over a century.


I'm confused, was the discussion about software or visual art? Anyway, I'd recommend posting something considered criminally offensive in your location (I'm sure you can think of something) and seeing how far free speech gets you.


slow might be nice


> Buildings don’t cross borders

I'm not sure that's factual, but even if it were, built objects certainly do.


There are indeed buildings all over the place that do.

https://en.m.wikipedia.org/wiki/Line_house


The sooner the better.

Cybersecurity is a mess and 100% the reason is "no skin in the game." If a car manufacturer promises or even implies "safety," and something bad happens, they get sued or they take real action.

The big tech companies must be held to do the same.


Regulation by which only massive players can abide is effectively killing any small or not well funded business.

We have civil law. If a company harms you, you can sue. No need to create regulation to make markets less competitive.


This is an incredibly oversimplified and useless take of the sort that people need to stop parroting.

Regulations work very frequently; they're around us ALL the time. Yes, they sometimes fail or don't work perfectly.


The parent comment mentions issues with cybersecurity. There are measures companies can take to reduce their likelihood of breach, but we all know that humans make errors that are exploitable all the time. Just the other day there was a thread about an exploitable bit in the Linux kernel that had been there for the prior 10 years.

My point is that unless the regulation we're discussing is merely marginal in cost of adoption, there will be definite harm caused to small businesses trying to bootstrap from the ground up. It's not some bogeyman of an argument. It is quite a real possibility.


Again, what you're saying is potentially and practically equivalent to:

"If we put regulations on safety measures in new buildings, such as railings and fire escapes, it will hurt small businesses."

Maybe it will. It's still not an intelligent point. Stop.


Surely there is a cost benefit analysis to be had regarding any regulation? It's not always as trivial as railing and fire escapes. What would the cost of building become if every building had to be built to withstand 9.0 magnitude earthquakes or be resistant to bunker-buster bombs?

Such policies do a number of things:

1. Raise consumer costs, since businesses need to pass the costs of these new regulations on to the consumer.

2. Inhibit competition, since now new businesses need to have the know-how, or pay for super specialists or some external service to get them compliant, before they can even offer a product or service.

I'm not discounting all regulation, but at some point consumers can't consume blindly and need to be thoughtful. Some businesses may get third-party certs stating they're compliant with industry-leading security practices. They can charge a premium. Other companies may target more price-elastic consumers that want the service but don't care about whether their data, etc. is protected. Those consumers can choose to buy the products/services without certs, which would likely be cheaper, and that is how those companies differentiate themselves.


Ah yes, why don't you also talk about how well this works when one party lives in poverty and the other party spends 10's of millions on lawyers and media clips attempting to discredit the affected party.

Just please stop with this tired and broken line of thought, it does not work out for the individual in the end.


> Ah yes, why don't you also talk about how well this works when one party lives in poverty and the other party spends 10's of millions on lawyers and media clips attempting to discredit the affected party.

What are you talking about? My point was related to anticompetitive regulation, which is generally supported by the wealthy since it increases barriers to entry and reduces their competition.


You can do any of those things for your personal use; the difference is that you can't legally do things like build stairs without railings when taking on construction jobs for clients or customers. The tools you use don't care whether what you're doing is legal or not; they work the same way regardless. It's just that there are (rightfully) legal restrictions on what you can do with products or services you sell to the public.

The difference here is like not being able to own a hammer or car mechanic's tools for your own personal use, and only being able to use them under corporate guidelines/surveillance in a restricted area, which is ridiculous.


There are many such restrictions on tools and components which could be dangerous to people other than you. Refrigerant, for example, cannot be sold to the general public for personal use (outside of a fully sealed AC system), and licensed refrigerant users must follow specific usage procedures to ensure it doesn't vent to the atmosphere.


"Being dangerous to people other than you" is not why refrigerant requires a license to purchase at all.

The license is required because refrigerant is categorized as an ozone-depleting substance, and the sales restriction is established by the Clean Air Act[0]. All of which is under the control of the Environmental Protection Agency.

I guess you can argue that any pollution-restricting laws are based on the premise of "being dangerous to people other than you," but that's not quite what people have in mind when talking about things being restricted due to being dangerous. We are talking about things like "driving a non-street-legal car" or "owning this one potentially dangerous carpentry tool", not "increasing gas taxes to disincentivise pollution."

The potential danger being inflicted in those cases is direct and specific. You can totally drive a non-street-legal car on your farm, even without a registration and a license plate, as long as you do it purely on your own property and not on the actual road.

And that's the approach that personally makes sense to me with AI-generated images. Any restrictions on it should imo be on the distribution and the commercial/sales side (e.g., you should not legally be able to sell AI-generated posters of your neighbor in an embarrassing situation without their permission, or send them to that neighbor's workplace), not on the creation/usage side (e.g., you should be legally able to generate those images of your neighbor, with any potential restrictions and legal problems only beginning to come your way at the distribution stage).

0. https://www.epa.gov/section608/refrigerant-sales-restriction


I agree with a lot of what you're saying. The marginal damage any one person can do with access to high-quality image models is very limited, and most of the ways they could inflict damage in the first place either are illegal or easily could be made illegal.

The reason I use pollution as an analogy is that having it happen society-wide creates new problems that simply don't appear on an individual scale. What happens when someone builds a browser extension to let you porn-ify any image you'd like? What happens when schoolyard bullies can fabricate a compromising video as easily as they can make up a nasty rumor today? I think most people would prefer not to live in a society that works that way if they can avoid it.


Imagine everyone used stairs as much as they use tech. Buff people everywhere.


If that’s the case then innovation in the US can be tossed out the window. Once that ball gets rolling it only ends in crony capitalism and protectionism for existing players.


I really, really hope that there aren't any people who think the way you've outlined. Technology has empowered small groups or even single individuals to create things that have the potential to change the course of civilization, so I for sure hope those individuals think twice about the potential consequences of their actions. How would you feel about people releasing a $100 'build your own Covid-variant' kit?


Once the cat is out of the bag, the problem exists. Worrying about how long exactly it takes for $irresponsible_person to make it slightly worse by reducing the barrier to access even further is, in my opinion, missing the point.

There are many examples of this.

- Non-proliferation folks who think they can actually rid the world of nukes. Will not happen.

- Does anyone seriously think they can stop human cloning, once it's technically feasible, from happening somewhere on the planet sooner or later? By fiat, by legislation, by moral appeals, etc? Will not happen. If clones can be made, clones will be made. Descriptive, not normative claim.

- AI-generated content has reached a certain point where we have to worry about a whole host of issues, most obvious being more sophisticated fakes. "Please think of potential consequences", ad-hoc restrictions, self-imposed or otherwise, are moot in the long run. It's part of our world now.


> Non-proliferation folks who think they can actually rid the world of nukes. Will not happen.

It looks to me that you're shifting the goalposts here: nonproliferation has effectively reduced the number of countries with access to nukes. Or is worrying about the number of direct military conflicts between nuclear-armed powers an example of what you call 'missing the point'?


The nuclear non-proliferation treaty has eventual complete disarmament as a stated goal. So the subset of people who believe not just in not increasing the number of countries with nukes, but eventually getting that number of countries to 0, are unrealistic. The fewer the number of nuclear powers, the greater the incentive is to cheat. Maximum incentive is when the number is 0 - get nukes and you rule the world. Until it goes to 2 again, and so on.

Larger point being: with some disruptive technology, like nuclear weapons, if it can be done, it will be done.


>Worrying about how long exactly it takes for $irresponsible_person to make it slightly worse by reducing the barrier to access even further is, in my opinion, missing the point

I disagree with the idea that putting restrictions in place shouldn't be done because 'the problem exists'. The problem exists but that doesn't mean measures can't be taken to keep it manageable. I don't think the majority of people are in your intended demographic of wanting to stop the problem. Most just want to prevent exacerbating the problem.


>The problem exists but that doesn't mean measures can't be taken to keep it manageable.

Specifically when it comes to the problem of AI fakes, I'd rather invest effort in harm reduction - train better fake recognition systems - than attempt to stop people from abusing this technological advance by crafting moral appeals and attempting to legislate it all away. Or something as silly as hiding the code. I think mine is a more robust measure.
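For what it's worth, the fake-recognition side of this is not exotic. A minimal sketch, assuming a labeled image folder (e.g. data/real and data/generated - names, paths, and hyperparameters here are purely illustrative) and the usual PyTorch/torchvision stack, would be to fine-tune a small classifier:

    # Minimal sketch: fine-tune a small classifier to flag generated images.
    # Assumes torchvision >= 0.13 and an image folder laid out as
    #   data/real/*.jpg  and  data/generated/*.jpg   (names are illustrative).
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    dataset = datasets.ImageFolder("data", transform=transform)  # classes from subfolders
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    # Start from an ImageNet-pretrained backbone, replace the head with 2 classes.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()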


I really, really hope that there aren't any people who think the way you've outlined.

AI image generation is not a build-your-own-weaponized-virus kit.

It’s a useful tool that can be used to produce creative expression. What people produce is up to them, and the fact that they might misuse their capacity for free speech isn’t an argument for curtailing it.


OP doesn’t sound like it’s talking exclusively about image generation. Sounds like a general, “I should be able to build, propagate, and use whatever tech however I want no matter the negative externalities.”


I think even the phrase "negative externalities" is overstating it. There's a big difference between "I push this button and now the woods behind my house are destroyed" and "I push this button and I have what looks like a photograph of some important person naked." Photo generating AIs are not a big deal IMO. We might be talking about these things more generally but I doubt we are talking about McNukes here.


The problem with extremely powerful forces (like new technologies) is that you can’t always predict what effects they’ll have.

This is doubly true with regard to technologies that seem not only powerful, not only adaptable to new domains, but also rapidly improving on both of those dimensions. I don’t know what is the right level or type of limitation, but there is nothing confusing or weird at all about wanting to be careful with such a technology.

If technology keeps advancing (it will), new developments will approach “looks kind of alarming” status faster and faster. This is because they will also approach “could destroy everything we know and love” status faster and faster.


At its core, the argument for caution can be articulated as "utility and availability of this technology must be limited to incumbent actors in the industry for our protection" and that's very fishy. It's particularly fishy considering this technology cannot even so much as break a fingernail or cut a blade of grass. Is it consequential? Obviously, or neither one of our arguments would exist. Does it have the potential to hurt people? Only if those people let it. To me it's overblown moral panic that's suspiciously convenient for the big players in the industry and software in general.


The scenario that worries people is less "photograph of some important person naked" and more "photograph of you naked". State of the art image tech is more than capable of allowing people to create convincing porn of their enemies (or creepy crushes). I don't know if that genie can be put back into the bottle, but it's hard to complain that researchers aren't interested in providing the genie as a service.


If someone wants to crank it to what amounts to a high tech doodle of me doing naughty things to myself I don't see how that's any of my business. There are people in the world who put real, legit porn of themselves on the internet; I'm sure they'd find this fearmongering about fake pictures and videos of themselves on the internet laughable. It is the closest to inconsequential you can get, posting yellow pages information on twitter is far more damaging.


The problem I would say here is you're thinking in binary, but real life doesn't operate this way.

Let's take a current potential problem: a low-powered application capable of facial recognition. You can now strap that onto any number of dumb weapons and you've created a smart weapon.

In itself it's not a problem, until it starts happening a lot. If you think like a house cat, you tend to think that society owes you its existence and that you're the king of the hill. But once weapons proliferation occurs, all those ideas of "I have rights" go right out the window, and this loss of rights will be supported by the masses that don't want to get droned in the head out on a date. The tyranny you want to avoid will be caused by the pursuit of absolute freedom.

As technology becomes more complex the line will blur even further. AI as a build your own 'terrible thing' will happen. Physics demands it, everything is just information at the end of the day.

Now it's up to you to avoid the worst possible outcomes between now and then.


Murder is already illegal. Laws preventing strapping facial recognition to a drone and killing people won't actually prevent it from happening because this is more like a Unabomber-style event where the perpetrator won't care about the law and normal people won't be doing this anyway.


Heh, I like how you handwave away American judicial precedent like it doesn't even exist, along with the multitude of laws enacted against everyone because of acts committed by a small group of people. Do we want to go into all the online protection acts enacted recently?

Even actors like the Unabomber had a huge impact on things like bomb detection in mail and airplanes. Now imagine a modern Unabomber who, instead of attacking randoms, went after senators. The moment the class protected by wealth and power comes under attack from AI technologies, expect a raft of laws limiting and restricting them to be enacted.


> The moment the class protected by wealth and power comes under attack from AI technologies expect a raft of laws limiting and restricting them to be enacted.

That's exactly why your rental histories at places like Blockbuster are, by law, confidential; a politician had their rental history leaked. Once a deepfake of a politician gets enough movement, said politician is going to begin rallying support against AI.


It's already the case.

CRISPR has changed a lot of things and makes it possible for an outsider, with $10,000 and a little dedication, to alter the genome of just about any living form.

https://www.ft.com/content/9ac7f1c0-1468-4dc7-88dd-1370ead42...


Right and every new technology that enters this “high power, high availability” domain, the more civilizational risk we all carry.


Is it really true that we're more at risk now than our ancestors were? They had smaller numbers, less access to life saving technology, worse conditions to grow up in… I can accept an argument that since the advent of the cold war there may have been more than previously, but even that is conjectural.


Fair question. The way I see it there was never a moment before the Cold War that a single person’s decision could even have a chance at destroying human civilization (if not the species itself). Since the Cold War, that has been true every single moment of every day, and there are probably hundreds or thousands of individuals who are capable of making a decision that will trigger nuclear annihilation (check out Daniel Ellsberg’s Doomsday Machine for an alarming inquiry into this topic).

Now we have the additional risk of man made biological risk amplified by cheaper and cheaper genetic engineering as well as natural biological risk amplified by a nonstop global travel network. Neither of these risk vectors existed til recently either.

IMO the only analogous risks pre-cold war were what, asteroid strike or volcanic event? Those are rare and, more importantly, not modulated upwards by any human action or human system. Nuclear and bio risk probably only climb with more people and more technological advancement.


Yet the risk of non-civilization continues to exceed the risk of civilization.


> How would you feel about people releasing a $100 'build your own Covid-variant' kit?

Not very good but:

a) the people who currently have this tech are not what I'd call trustworthy so why should I leave dangerous tech only in the hands of dangerous people?

b) it would probably kickstart a "build your own vaccine kit" industry


That you even express the problem like this shows an impressive amount of bias. By calling them dangerous people you are actually implying malice. What makes you believe people with access to biomedical tech are inherently more malicious than the populace? What makes you believe there aren't far more malicious people who do not yet have access to such tech?

I think this is just fear of the unknown at work. Biomedical knowledge is complicated and requires effort to learn, so most consider it a known unknown and therefore something to be feared. Some people do have such knowledge, therefore they are to be feared, because who knows what nefarious intentions they have and what conspiracies they are part of. Therefore they are dangerous people using dangerous tech.

Were the physicists who discovered how to split the atom also dangerous people?


> Were the physicists who discovered how to split the atom also dangerous people?

In the ordinary meaning of the word? Yes. That's why they were sworn to secrecy.

Not the same as saying they were immoral or wrong to have worked on the Bomb, that's a different debate, but in terms of sheer effectiveness they were incredibly dangerous.

An army is dangerous. That's how it works.


To you and sibling comment I would argue the comment I responded to conflated dangerous with immoral.

Danger is part of the domain of threat modelling. And when doing threat modelling, the morality of the opponent is a distraction.

However in the domain of propaganda, threats, immorality and assigning the latter to the former go hand in hand.


I didn't realise my career as a propagandist was going so well. Regardless, have the people involved in gain of function research of coronaviruses been shown to be trustworthy?

I would argue not, given the evidence[1].

[1] https://theintercept.com/2021/09/09/covid-origins-gain-of-fu...


Define trustworthy.

What kind of trust did you place in them that was now broken?

I at the very least believe they would not desire to expose themselves and their loved ones to pathogens.

If the origin was a leak, then, sure, we should see if any biosecurity protocol was broken and why, and design better protocols. But posturing that the researchers were not "trustworthy" is not helpful.

You call them untrustworthy and dangerous people and by that you are implying malice. Why? And what makes you believe outside of that small circle there aren't people far more malicious?

If creating diseases becomes so easy anyone can do it, we will see the age of biological ransomware. I am certain there are people far more malicious and far more immoral than any of the researchers who worked on this.


> Define trustworthy.

Come on, I'm not a student debating in the common room, this is just silly.


It is the key word on which your argument rests and you used it in every comment in this thread.

Since you decline, I believe it is useless to continue the discussion.


I'm glad we were able to come to an agreement.


Danger is not a moral judgement, but a rational one. Iranian nuclear scientists weren't assassinated for no reason, and I highly doubt anyone involved had serious moral accusation to make against them. The U.S. also banned export of cryptography in the past (and still does in some cases), solely based on strategic grounds, not because mathematicians are "bad people". People in positions of or close to power and/or possessing certain knowledge are dangerous, not because they are necessarily morally repugnant, but because of their privilege and ability.


This is just beyond obtuse. More people having access will mean more people who are untrustworthy having access, which means more malicious action. (Unless you want to set up some toy scenario where the only bad-faith actors on the planet are biochemistry researchers).

As for building your own vaccine. Even large nations were not able to develop effective ones. It’s easier to put a bullet in someone than it is to take it out.


Pandora's box is open; technology will spread regardless of what you do. You want to limit tech that will become cheaper and easier to build through the efforts of legitimate, well-meaning research, and to stop knowledge from spreading, even as we wish for more people to enter this line of work. That more people having access will mean more untrustworthy people having access is entirely right, but whether "which means more malicious action" is a corollary is up for debate, as we've seen with violent action around the world via many different types of weaponry.

In this talk[1] at DEF CON, John Sotos pleads with hackers to put their time into learning how to combat biological weapons. He points out that the tech will move forward to being able to target more and more specific populations. Like any other weapon, knowing that your enemy has it too and can hurt you or those you care about with it is a highly effective deterrent.

> As for building your own vaccine. Even large nations were not able to develop effective ones. It’s easier to put a bullet in someone than it is to take it out.

Two things here. Firstly, trauma surgery and techniques from the military and places where bullet wounds are common have benefited the rest of us for when we get into scrapes of even a non-violent kind.

Secondly, defence does not always lag behind attack, you assume that the future would look like now but that is to ignore the history of any tech of such nature.

[1] https://www.youtube.com/watch?v=HKQDSgBHPfY


this is the gun debate rephrased


There is a fundamental difference in the US: access to guns is a constitutional right.


Freedom to speak and publish, even dangerous ideas, is also a right. Beyond the US.


True, but (in the US at least), those rights also stop when they come up against an imminent threat. See the trope about falsely yelling “fire” in a crowded movie theater. The issue here seems to be accurately defining that risk.


It's really different IMO.

"Fire!" (falsely) in crowded theatre: Specific. Beneficial outcome unlikely and difficult to even imagine. Harmful outcomes nearly certain.

Powerful AI codebase or service: Generic. Endless beneficial and harmful outcomes easily imagined.


>Powerful AI codebase or service: Generic.

Generic means it can do a nearly unlimited list of things good and bad right?

Just like a human can do a list of nearly unlimited things?

Humans, because they can do both good and bad have laws they must follow if they do bad, right?

Then what are you suggesting for AI?


That's a pretty key difference and a good point. But we still regulate stuff with a similar dichotomy.

Nuclear material can be used to treat cancer. But it can also be used to make weapons. We regulate both.


The "fire in a crowded theater" case was overturned as unconstitutional prior restraint on speech.


That’s because the original case was not actually about inciting an imminent threat. It was about speech regarding a war draft. The theater was an analogy in the case opinion.

Limits to speech were still upheld if that speech could reasonably incite “imminent lawless action”.


Incitement to imminent lawless action[1] was a test applied from 1969 onwards (from Brandenburg v. Ohio[2]), the test used in the case you're referring to (Schenck v. United States (1919)[3]) was that of clear and present danger[4].

> Justice Oliver Wendell Holmes defined the clear and present danger test in 1919 in Schenck v. United States, offering more latitude to Congress for restricting speech in times of war, saying that when words are "of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent....no court could regard them as protected by any constitutional right."

That was ostensibly for sending literature to recently conscripted soldiers suggesting that the draft was a form of involuntary servitude that violated the Thirteenth Amendment.

Whereas incitement to imminent lawless action is defined as:

> Advocacy could be punished only "where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action."

That test is basically redrawing the law so it fits, once again, with that of common assault and breach of the peace. Still, I'm not sure what relevance all of this has to the subject at hand, unless we're going to end up at whether something is legal or not or, even more absurdly, whether there's a war or not. Those are not very interesting nor compelling arguments, especially as there are no such kits yet and no such law regarding the kits (unless we concede that it may well be covered under the 2nd Amendment, as it states arms, not guns).

[1] https://mtsu.edu/first-amendment/article/970/incitement-to-i...

[2] https://mtsu.edu/first-amendment/article/189/brandenburg-v-o...

[3] https://mtsu.edu/first-amendment/article/193/schenck-v-unite...

[4] https://mtsu.edu/first-amendment/article/898/clear-and-prese...


We seem to be saying the same thing. My previous reply was in response to the statement that the 1919 ruling was overturned. Rather than implying it was completely overturned in 1969, I'm saying that it was upheld in the specific instances where imminent danger is present.

>Still, I'm not sure what relevance all of this has to the subject at hand

The point is rights exist. And to put a limit on a right, you must show a clear and imminent risk. I think you got a little wrapped around the axle on the 2A piece and missed the connection to the article at hand.

When you equate code to free speech, there will be people who say certain code is dangerous enough to be limited in that regard. Meaning, a discussion about regulating code is apropos, even though many people will disagree about the threshold of what constitutes a credible risk.


It was clear what your response was about, however it was incorrect. Not only did you provide the wrong test for the wrong cases, cases after that 1919 case used the bad tendency test, up until the 1969 case moved to incitement to imminent lawless action. It's not true to say that anything was upheld due to imminent danger before 1969.

If you're going to correct others for misstating the facts and reasoning of US Supreme Court judgements then I think it only fair that others may do the same to you.


But that fact doesn’t matter in the actual discussion because the debate is the same but the implementation is different. In the US the “get rid of guns” stance is just restricting their use to the maximum extent allowed by the constitution and making them as close to de facto banned as possible.


The fact that there is a constitutional right puts pretty strong limits on where that line is drawn though. I think those guardrails make it a fundamentally different perspective.


Because the gun debate has the same interesting facets that most people ignore, things like ignoring presumption of innocence, what to do about technology that will inevitably become more efficient, cheap, and easy to produce, among other things.


The sole purpose of guns is to harm/kill. Not comparable to ML.


the comment I’m replying to is talking about a DIY coronavirus kit


> it would probably kickstart a "build your own vaccine kit" industry.

This is a scenario for a dystopian science fiction novel, as opposed to a rational plan for our children's future.


One of the co-founders of BioNTech had this to say[1]:

> “What we have developed over decades for cancer vaccine development has been the tailwind for developing the COVID-19 vaccine, and now the Covid-19 vaccine and our experience in developing it gives back to our cancer work,” Tureci said.

[1] https://nypost.com/2022/10/19/covid-vax-makers-say-cancer-va...


This is beside the point in the context of your reply to mckirk. Even the most vigorous supporters of 'gain-of-function' research are not proposing that the resulting pathogens should be released into the wild in order to promote medical research.


Nowhere did I write or imply that there was intent in the causation. You get a gun, I might buy a gun, that does not imply that your intent was for me to buy a gun.

Do you think the advances in medical techniques for treating gunshot wounds were the intent of gangbangers when they shoot each other?


I am at a loss to see how this could be a pertinent reply, but let me make something clear: my 'gain-of-function' comment is intended to show that your "it would probably kickstart a 'build your own vaccine kit' industry" is not a useful response to mckirk's scenario, even in the unlikely event that this would actually happen. The fact that no-one in their right mind would suggest releasing enhanced viruses into the wild as a means to foster medical development shows that the countervailing consequence you suggest will occur does not work as a response to mckirk's concern.


> pathogens should be be released into the wild in order to promote medical research

Do A in order to promote B requires intention. It doesn't matter that no one is doing that because it's a straw man, as I'm not arguing it either. What I am saying is:

If A then B will occur (probably and with increasing likelihood).

Quite different.


My replies to you do not depend on or imply that you proposed any course of action, or that anyone else has actually done so, either. You have quoted me out of context in order to give the impression that I have.


> You have quoted me out of context in order to give the impression that I have

My heart is full of bad intentions, that must be it, it can't be that I simply found your logic to be wanting.

I would like you to tell me how I can quote you out of context when your replies are just above, that would be interesting. This thread is the context, I doubt you even need to scroll from my quote to see yours in full.


I have no definite opinion as to what your intentions are, but your argument is, indeed, wanting.

And what you did is, indeed, quoting out of context. The whole issue with quoting out of context is that, when you look at where the quote was taken from, you can see that it does not actually support the claim that it allegedly justifies. When the original is right there for anyone to check, it merely raises the tangential question of why the quoter thought the argument would succeed in the first place.


Tell someone from 1950 about modern cell phones, tracking in them and online, tracking in car ECUs, tracking all purchases, with companies listening in on everything said at home, and that's their dystopian novel!


We have seen your scenario, and we have also had, in the last three years, a taste of brigandish's scenario, so I feel people can make up their own minds about whether they are usefully comparable.

And if you do think they are comparable, note that the equivalent of a "build-your-own vaccine kit" in your scenario has not materially improved the situation (right now, "Google Has Most of My Email Because It Has All of Yours" is at position 8 on the HN front page.)


It's also a dystopia according to our current sensibilities. Or at least for a plurality of people. Even the people who don't seem to care tend to react adversely once they understand how it can affect them.


>so why should I leave dangerous tech only in the hands of dangerous people?

because handing it to everyone doesn't make things better? I don't like that Putin has nukes, but it's much better than Putin and every offshoot of Al-Qaeda having nukes.

Civilization ending tech in the hands of powerful actors is usually subject to some form of rational calculus. Having it in the hands of everyone means basically it's game over. For a lot of dangerous technologies there is no 'vaccine' (in time).


> Having it in the hands of everyone means basically it's game over.

It might mean it's time to face up to more important questions, like why every offshoot of Al-Qaeda wants to use nukes and then countering that.


I do not believe you are arguing in good faith but I will answer.

Nukes are very, very effective at creating terror. Terror is somewhat effective at compelling almost any action that requires the participation of others. Ergo, nukes are effective at threatening others to do what you want.

Want to counter that, fine. Genetically engineer every person to be invulnerable to radiation and explosions and the desire to use nukes diminishes because they are no longer as effective.

Sayonara.


> I do not believe you are arguing in good faith

This isn't Twitter, please keep this juvenile nonsense for there.

As to your "answer":

> Genetically engineer every person to be invulnerable to radiation and explosions and the desire to use nukes diminishes because they are no longer as effective.

aside from it approaching word salad, I was referring to the deeper causes of violence. I would trust (there's that word again, I hope you can cope this time) the Dalai Lama with any weapon known to man and any only dreamt of, because, as he says[1]:

> “Real gun control must come from here,” the Dalai Lama said, pointing at his heart.

How you failed to deduce that with your enhanced powers of insight into me, I cannot fathom.

[1] https://www.sfgate.com/politics/article/Dalai-Lama-says-real...


What questions would you ask to decide if someone is a trustworthy steward of that technology?


Did you fund gain-of-function research at the labs in Wuhan?

Do you know what they found?

Were you aware of the low level of biosecurity protocols on the site?

Things like that ;-)


These would not answer whether someone behaves responsibly with dangerous research. It answers whether someone had any connection to Wuhan.

You can find plenty of people who would answer all "no"-s and would be far more destructive if they had access.

Would you also claim that no one who worked at Chernobyl was trustworthy? This is just guilt by association.


> These would not answer whether someone behaves responsibly with dangerous research.

Let's see.

“Did you fund gain-of-function research at the labs in Wuhan?”

This speaks to trustworthiness and responsibility/recklessness.

“Do you know what they found?”

This speaks to, again, trustworthiness and transparency.

“Were you aware of the low level of biosecurity protocols on the site?”

This speaks to, again, trustworthiness and responsibility/recklessness.

> You can find plenty of people who would answer all "no"-s and would be far more destructive if they had access.

It shows that those with the technology now are not trustworthy, going forward I might have different questions but we should start where we are, not by providing an apologetic and condoning poor behaviour.

> Would you also claim that no one who worked at Chernobyl was trustworthy?

Did they fuck up? Were they part of an organisation that fucked up? What did they do to stop the incredible fuck up from happening?

> This is just guilt by association.

Of whom to what?


Again, your 3 questions do not provide any information with regards to trustworthiness. You have just decided that the Wuhan Lab is an epicenter of evil and therefore anyone tangential is untrustworthy. There is no logical connection, you filled it in.

With regards to Chernobyl, other than Anatoly Dyatlov (who can directly be ascribed blame and served prison time), it is really hard to blame any of the people involved. Yet there were all kinds of known and unknown stresses and defects in the system that resulted in a tragedy. Does that mean we should ban nuclear energy?


> You have just decided that the Wuhan Lab is an epicenter of evil and therefore anyone tangential is untrustworthy.

I'm so glad you were able to read my mind, perhaps you could open source your mind reading device, or would that be too dangerous to share with the public?

What I have decided is that there is evidence of lies by people involved in the funding and the implementation of the research at Wuhan, there really is no need for your clumsy straw men, especially when you can read my mind.

> Does that mean we should ban nuclear energy?

I'm the one arguing for the further distribution of technology, are you arguing with yourself now or did your mind reading device go on the blink?


Human life is quite fragile


Historically, tech folk have always pursued the commercialization of technological innovation with net-zero analysis of any negative consequences, mea maxima culpa.

That we have now run into a technology which makes many of _us_ uncomfortable should give you pause for thought and reflection.


We do pause, we do reflect. And our conclusion is that it's "us" who have changed, not the impact of technology.

So you can make pictures and 3d models from text descriptions. So you can get a voice to say something. But if you were determined to do bad things, you already could. It would be easy enough to hire an actor who sounds like Obama and make him say something outrageous. It would be easy enough to use Photoshop to make disgusting images.

Are you sure it's the capabilities you fear, and not the people who now for the first time will get access to them?

Are you sure "we", the wealthy, the technologically and educationally resourceful, the powerful, are so much better custodians?


I have no strong opinion on this very complex topic, but want to add that this is an area where the quote "quantity has a quality all its own" applies. Brennan's flooding-the-zone-with-shit propaganda methodology is, to me, one of the biggest challenges to democracy functioning. We might be able to debunk a handful of fake videos in the public discourse. What would happen if there are suddenly thousands? Maybe we'd find better ways to establish truth and a shared reality. Maybe democracy would utterly collapse.

Ultimately, I don't think we'll be able to keep the cat in the bag though. If nothing else, nation-state actors like Russia or China will get their hands on it and crank the propaganda machine with it. We might be better prepared if we just shorten the learning process and give everyone access. That might open some hope that we'll be able to adapt. It's a really scary dice roll though.


I think people will adapt by finally doing the thing they should have been doing all along: don't trust anything they see on the internet.

Maybe a flood of AI-generated propaganda is what it will finally take to get the average person to realize that the internet is (and always has been) overflowing with manipulative garbage made by people with bad intentions. So maybe the next time they see a video of <political opponent> kicking a puppy in the face, their first thought will be "maybe this video is fake?" rather than instant outrage.

I think most people aren't gullible/naive, but for some reason that part of their brain that protects them from being exploited/manipulated/ripped off/tricked/etc completely shuts down whenever the internet is involved. Maybe increased exposure can change that?


"I think people will adapt by finally doing the thing they should have been doing all along: don't trust anything they see on the internet."

I'm pessimistic on this. There is the question of people's ability to adapt well to this, but maybe more fundamental: the internet is all-permeating now. What's not "the internet" these days? All media that distribute news are on the internet, be it the New York Times or Twitter. Even more traditional publications get news from Twitter. On the other hand, are your friends on Facebook the internet? How about when you call them?

Not sure what the solution here is. Going back to a few networks that show news at 8pm?

I've recently been thinking about the possibility of a social platform that rewards things like providing additional sources for or against things others posted.

I think all of this is gonna be a defining struggle of our time.


Ugh... No, this does not work.

https://en.wikipedia.org/wiki/Firehose_of_falsehood

>I think most people aren't gullible/naive,

I think you're telling yourself this as a self protection mechanism in order to avoid asking some harsh truths about reality. The first one should be that your exposure to things in real life is very limited. You cannot make judgements on most things you see on the Internet/TV/News because you have no experience with complicated environments they exist in. And yet almost everything you see is trying to bring you to a call to action on said topics.

For example, if you live in a nice wealthy white suburbanite neighborhood the idea a cop would just pull up and shoot you is tantamount to a spaceship appearing. And yet with the ever increasing amount of body cam footage this seems to be an occurrence in cities far more than we'd like. Now, if a video shows up online of a cop just executing someone what opinion are you supposed to form? Apparently cops do things like this, but at the same time there are motivations for people to fake this too.


> I think people will adapt by finally doing the thing they should have been doing all along: don't trust anything they see on the internet.

This is an impressively optimistic take.

Why do you think this will happen?

Gell-Mann amnesia has been happening for centuries at least, with no sign of reduction that I'm aware of.


Because society/people are good at adaptation. Currently, the misinformation problem is big, but it’s not so big that it’s obvious to even the dumbest people… or at the very least it’s too recent a phenomenon for most people to have fully grasped it yet.

It’s like advertisements. Banner ads used to be very effective, but they were so pervasive that now people are blind to them. Doesn’t matter how big and flashy it is, most people will literally not see it because their brain is blocking it out somehow.

Personally, whenever I see something outrageous on social media, my immediate reaction is always “this is probably fake, and someone is trying to make me angry”. Not because I’m actively trying to remain skeptical, but because I’ve been on the internet long enough to have experienced being misled by similar stuff.

If you burn your hand touching a hot stove, you learn to be careful around stoves for the rest of your life. Sure, you can learn that lesson without having to burn your hand, but if you need a guarantee that the lesson is learned, getting burned will do that.


>Because society/people are good at adaptation.

Right up until they follow a fascist leader and then hundreds of millions of people die in a war. Ignoring black swan events is a great way to become extinct in paradigm shifts.


Here is an even bleaker concern: looking at North Korea, I've been wondering if, with modern technology, totalitarian regimes are a one-way street. With the amount of surveillance in place in NK, I am unable to see how any form of resistance could be mounted. NK is behind in tech. How much harder to topple can a system like NK be with the latest tech? China seems to be moving in that direction. Surveillance coupled with modern propaganda methods for full indoctrination.

I also perceive a larger weakness in liberal democracies towards propaganda, and in particular the flooding-the-zone-with-shit approach. These two things together have me very, very worried. If there is a force moving you towards a state that's very hard if not impossible to leave, it stands to reason that eventually everything will end up in that state. In this case that would be authoritarianism or totalitarianism.


People seem willing to believe anything as long as it fits with their preconceived notions about how the world works. I’m not confident this will ever change.


This and the parent are two of the most insightful comments in a really good thread. (Two others - one calling bullshit on the power of AI and the other calling this a (free) speech problem)

And yes. If auto-generated text / video / images can flood us (produced in response to our interactions and stored data, either to sell jumpers or politicians) then it is a problem.

What kind is fascinating

I think it is a speech problem - and maybe the question is: does AI-generated speech have the same rights as human-generated speech?

And perhaps the only way to know is to require a citizen's passport to be able to publish. I am not entirely advocating it - but some sort of HSM built into passports, driver's licences or phones, and then used to sign each of your publications and posts. If you are an AI you don't get to publish.

The advantages of non-anonymous publishing are large. I wonder if it is worthwhile

Only effbot need worry about rate limits.
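As a rough sketch of the sign-to-publish idea above (illustrative only: in a real scheme the private key would never leave the passport/phone HSM and the public key would be bound to some identity registry, neither of which is modeled here), the signing and verification step could look like this with Ed25519:

    # Rough sketch of signing a publication with a personal key and
    # verifying it before the platform accepts it. Purely illustrative.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # stand-in for the HSM-held key
    public_key = private_key.public_key()        # published alongside the identity

    post = b"My signed comment or article"
    signature = private_key.sign(post)

    # The platform verifies the signature before accepting the publication.
    try:
        public_key.verify(signature, post)
        print("signature valid: accept the post")
    except InvalidSignature:
        print("signature invalid: reject the post")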


Bannon. Steve Bannon. But the practice predates him. Vladislav Surkov importantly brought it into play very effectively immediately prior.


It's not about class-based custodianship but rather the simple fact that the number of attempts like these will multiply like wildfire. You won't need to be determined, you'll just need five minutes with the software before heading to work.


If something is dangerous, that does not justify making it worse.


They aren't uncomfortable. They just aren't sure how to maintain control over the technology and monopolize it. Which is why they are so cagey about releasing anything.


>* That we have now run into a technology which makes many of _us_ uncomfortable should give you pause for thought and reflection.*

You mean like the online advertising industry? That shit has been making many of us uncomfortable since the early 2000s.

Now that the technology is sufficiently decentralized the morality police comes along.


It feels like we're going to "safety" ourselves into an even more extreme oligarchy and congratulate ourselves for being so wise to do so.


Yeah, I’m actually a little impressed to see my industry that traditionally has run roughshod over humanity, damn the consequences style, is showing a tiny bit of restraint. Nothing like what we see in medicine or law or anything, but something. I figured we’d get reined in like banks were before doing any self policing at all (after nearly destroying society of course).


IMO it aligns with a more professional industry approach in general. Law, medicine, engineering (in the capital E sense) all have ethical requirements and bodies that govern individuals. I think it’s natural for an industry like CS that has typically been like the Wild West to push back against regulation, but in the end, it’s probably for the better (at least with safety critical applications).


The hesitancy came from a good place. In some senses this is a very disruptive technology stack.

But when morality suddenly is reinforced in an area where the same people espousing it are trying to rapidly earn billions of dollars, I am skeptical.

Transformers are a form of translation and information compression, ultimately.

The morality seems to me at this point a convenient smokescreen to hide the fact that these companies are not actually open source, that they are not there for the public benefit, and that they are not very different to venture-backed businesses that have come before.

What is the risk of open-sourcing the product? Very few individuals could assemble a dataset or train a full competitive model on their own hardware. So not really a competitive risk there. But every big corp could.

The morality angle protects the startups from the big six. SD is a product demo. I view it the same way at the highest level as an alpha version of Google translate.


> The morality seems to me at this point a convenient smokescreen to hide the fact that these companies are not actually open source, that they are not there for the public benefit, and that they are not very different to venture-backed businesses that have come before.

And that they’re buggy and hard to fix and generally more limited than the buzz would have you believe.

Public high-minded talk of morality also cynically keeps the money coming in :)


There is legitimate regulatory risk with ai generative models. Really all it takes is the media picking up one bizarre story about child revenge porn generated with these models for them to be completely banned. And a ban wouldn’t mean people stop using them, just that researchers stop getting paid for making them.


Definitely. Framing it as morality when it is business risk is disingenuous though.


Doing immoral things kinda fundamentally is a business risk. I'm not sure I see any tension or conflict between "We don't want to help make porn because some of us think it's bad" and "we don't want to help make porn because some of our regulators will think it's bad".


Doing certain kinds of immoral things is doing business. They are only concerned about those that pose business risk.

If they wanted to not contribute to the problem they could just not participate.


I agree, I find it all pretty silly. You know what else can produce horrifying and immoral images? Pencil and paper.

I suspect that quite a lot of this caution is driven by Google and other large companies which want to slow everyone else down so they can win the new market on this tech. The remaining part of the caution appears to come from our neo-puritan era where there is a whole lot of pearl clutching over everything. Newsflash, humans are violent brutes, always have been.


The key difference with pencil and paper being that I can't produce photoreal deep fakes at the speed of processing. That's not a valid comparison.

You might be right in the second paragraph about the motivations for slowing this down. There clearly are reasons to be cautious here though, even if this isn't the real reason for the current caution.


Replace pencil and paper with a camera then if it makes you happy, although I don't think the quality of the images makes a single bit of difference.

Why should only the few have access to such a technology? Because some people will use it for naughty things? And that's what we are talking about here, about whether a minority should have permissioned access to a new technology, and particularly one that cannot directly actually cause physical harm. I can glean motivations surrounding all this from that observation alone.


The digital camera is still several orders of magnitude slower and more expensive to operate than DALL-E or SD.

Side note: if you're a materialist, you should believe that psychological harm is physical harm. It's obvious that being able to publish images of someone doing things they themselves find reprehensible could cause lots of trauma.


Well thankfully I'm not, but even if I were I think that's a bit of a stretch. A materialist acknowledging an effect he cannot quantify or directly observe is not much of a materialist.


I suppose that's fair.

My point was that if there's only matter, then mental / psychological harm must be in some sense physical harm.

(FWIW I lean not-materialist myself, but it's a common perspective around here so I thought it was worth pointing out)


I didn't say they shouldn't have access, re-read what I wrote. I was responding to the specific claim that AI-driven art generation is essentially the same as pencil drawing.

Your point about photography is also comparing apples and oranges. There are fundamental differences here down to scale and accessibility of techniques that means anyone will soon be able to deep fake anything instantly.

Fwiw, do I think this technology should be controlled by big tech? No way. Do I think this is Pandora's box? Yes. If we don't reconcile the tricky tradeoffs between radical democratisation of this technology on the one hand vs heavy handed control on the other, we are in hot water. Tldr... it's complicated and it is not zero sum.


I am inclined to agree with OP's view, but consider the following scenario: mass uploads of brutal deepfaked pornographic videos involving your likeness and/or that of your loved ones. What would your reaction be? Personally, I would find that very disturbing to the point where it would affect my life.

This isn't even a hypothetical but a reality celebrities already face to some extent.

I'd say the usage of the tech will stabilize, but it's also the case that we have dark days ahead of us.


Rumors are not a new social pattern. We've handled them fine so far.

Your discomfort is not enough reason to stop the free speech of everyone else. It never was, and never will be.

The most effective way to deal with the damage of deepfakes is to make them ubiquitous and accessible. The more familiar people are with the ability to create deepfake content, the more prepared they are to think critically and distrust it.

The average person already knows that still images aren't flawless evidence. The world didn't fall apart after the first release of Photoshop.


Is there any tool that can reliably spot a deepfake and distinguish it from a real picture?

So far the most reliable tool for that was a human moderator. With the new era of generated images there is no way to tell. You will simply have to distrust every image you see on the web.


Context.

The more insulated a person is from the surrounding context of a video, the less leverage they have to determine its validity.

It's just like speech: the farther gossip spreads, the less trustworthy it becomes.


Sure, with 14 fingers and 3 ears. Someone could also use photoshop and get a better result. This is nothing new in terms of risk, just a new tool.


The quality of the material produced is evolving quickly, and the cost to entry is dropping as well. There's a difference in being able to produce one convincing piece of fake news or abusive material in a week or in 15 minutes. The world where a smartphone costs $5000 is wholly different from the one where it costs $500.


Part of the disconnect you appear to be experiencing is the inability to take this just beyond "draw a picture". You seem to have missed out on software-driven machine control and decision making. Of course you may not mind whom the drone decides to kill as long as you're not the target.

Also you seem to be affected by American sensibilities. If your AI decides to go full heil in Germany you may find that the authorities have a lot to talk about with you.


The AI draws something close to what you ask it to draw. Then you look at the results and decide to share it or delete it. You are in control, not the AI. The AI is a tool. Likewise with other AI applications. People decide to create these things, decide on the datasets they are trained on, and put them in the pipeline and every day decide to keep them plugged in. This is still human centered technology.


I also eye-roll when someone legitimately wants to mandate that tools produce "morally correct" outputs.

However, as a person who has been closely following the developments in this field, I share a similar perspective to a few of the other commentators here. Most of the noise is just virtue-signalling to deflect scrutiny and/or protect business interests. Scrutiny from governments is something we absolutely do not want right now.

Humanity is on a path towards artificial general intelligence. Today, the concerns are "what about the artists" and "what if people make mean/offensive things"? As we start to develop AI/ML systems that do more practical things we will start to trend towards more serious questions like "what about everybody's job?". These are the things that will get governments to step in and regulate.

There is a pathway to AGI in which governments and corporations end up with a monopoly on it. I personally view that as a nightmare scenario, as AGI is a power-multiplier the likes of which we've never seen before.

It's important that current development efforts remain mostly unfettered, even if one has to put on a "moral" facade. The longer it takes for governments to catch on, the less likely it will be that they will manage to monopolize the technology.


Some forms of technology are highly regulated because people can do really stupid, reckless and dangerous things. Home chemistry kits today are quite unlike those produced 100 years ago, which had ingredients for making gunpowder and other explosives, as well as highly toxic compounds like cyanide, and less dangerous but problematic things like metallic mercury. Similarly, biotech is now regulated and monitored because modern tools allow people with relatively minimal resources to do things like re-assemble smallpox using nothing but the sequence data:

https://www.livescience.com/59809-horsepox-virus-recreated.h...

As far as AI, maybe the immediate risks aren't quite so dramatic but it's going to create a real lack-of-trust problem with images, video and data in general. Manipulated still photographs are already very difficult if not impossible to detect and there's an ongoing controversy over whether they are admissible in court. AI modification of video is steadily getting harder to identify, and there are already good reasons to suspect the veracity of video clips put out by nation-states as evidence for their claims (which likely already have unrestricted access to the necessary technology - for example, Iran recently released a suspicious 'accidental death' video of a woman arrested for not covering her head, which could be a complete fabrication).

Similarly, AI opens the door to massive undetectable research fraud. Many such incidents in the past have been detected as duplicated data or copied images, but training an AI on similar datasets and images to create original frauds would change all that.

A more alarming application is the construction of AI-integrated drones capable of assassinating human beings with zero operator oversight: just load the drone with a search image and facial recognition software, then launch and forget, which doesn't sound like that good of an idea. Basically Ray Bradbury's Mechanical Hound in Fahrenheit 451, only airborne.


I don't think it's coming from a place of morality at all. That's just a cover. If anything, society cares less about morality than ever before. It's about competition and not giving up the secret sauce.

Before companies like Amazon became huge, people didn't quite know just how much value was to be found in software. Now everyone knows it, and the space has become ultra competitive.


As some other users have pointed out, the reason for stifling these commercially-available models is likely just anti-competitive behavior parading as wokeness. I tend to employ Hanlon's Razor wherever possible, but I'm not sure ignorance can be claimed here.

That said, I do believe the discussions about being mindful about how you train your models arose from legitimate concerns, but I feel those concerns are more valid for "back-of-house" models. Basically, you should avoid training a model on demographics or credit scores or the like, lest you accidentally create a model that automates a bias against a group of people.

But I don't think that's what's happening here.


You’re assuming these restrictions are enforced by others. That’s not necessarily true. If I was building these tools I wouldn’t want to know that they’d been potentially used to create content I ethically object to, and I would restrict them in such a way that I don’t have that hanging over me. That doesn’t make me the morality police, it’s me protecting myself. If you want a tool that allows you to generate anything without limits, build it yourself. Stop complaining about what other developers are choosing to do if you’re not willing to build your own product that sticks to your beliefs (that you likely haven’t fully thought through the consequences of).


I also find this annoying, but I think it’s mostly an American/European thing.

It’s not only about tech; we do this with kids, overprotecting them.

We do the same with food.

It’s a trade-off. When you pay extremely close attention to food safety, sure, it’s safer. But your communities become a bit boring without any street food, no night markets, etc.

I prefer living on the other side of the world. Less safety, but more personal freedom.


The tradeoff depends on your social class. The Westerners who have the resources to move to SEAsia/China etc. or the upper-class people from those countries who return there will find themselves freer there because they by definition have the resources to take advantage of the situation. If you are a middle-class or lower-class local citizen, the stability and boredom of the EU sounds much more appealing. Instead of personal freedom, there is the crushing weight of the system around them. They have street food but have to work 996 in polluted conditions.


It's still a tradeoff: in modern countries, the state slowly replaces older structures (like family).

It's safer.

But little by little, you end up with people who don't do much more than live outside of big cities, commute back and forth, and go home to watch Netflix and order some food.

Working conditions in SEAsia are still not as good. But it's affordable to eat out, and it's affordable to enjoy a lot of social time with family and friends. Friends and family, colleagues, and low-paid workers will meet together several times a week.

I am from France, and my mother used to be a factory worker. She had more money, but I don't think she was happier. I am pretty sure it was the opposite, actually.

She felt a lot more isolated.


The opposite of food regulations isn't food trucks, it's The Jungle: salmonella and lysine poisoning everywhere, people ground up into meat in the factories, products adulterated with all kinds of chemicals, byproducts, filler...


What I notice is that countries with strong regulations don't have street food or night markets.

Because these are chaotic. Once your country goes down the regulation road it's not possible anymore.

- What about the cold chain?
- What if people pay with cash and don't pay taxes?
- What about noise?
- Isn't it dangerous?

Sure, your food is safer, but social life slowly erodes.


>but I think it’s mostly an American/European thing.

Are you trying to say that China is perfectly fine with creating dangerous AI? Well, at least until it starts spitting out millions of transmogrified Winnie the Xi images.


Yes, I am saying that.

I have studied at Tsinghua University, and it's easier to conduct research there. There is less pressure due to political correctness, be it in sciences like AI or in the humanities.


It's because of the ascent of AI ethicists, the least capable AI researchers, who wanted to have power over the field. Like how moderators destroy online communities because they can.


I really don’t think this is the case. Much more likely, openAI looked at two paths: dall-e with restrictions and dall-e without restrictions.

one leads to criticism in the media, potential regulation down the line and even possibly lawsuits, which all potentially lead to decreased profit. their tool also probably ends up mostly being used for porn, which thanks to visa and mastercard is notoriously hard to monetise

the other path leads to a few people online - myself included - being a bit miffed, and maybe posting a complaint or two on some forums. no family values politicians get pissed off, no child porn lawsuits or even indictments, and they don’t have to involve themselves in the messy porn industry. a much less dangerous path

I doubt AI ethicists even come into the actual decision. maybe they’d bring them in after the fact to sell the decision


I would take the claims of ethics concerns more seriously if the training set data was more ethically sourced. I buy that the law probably[0] considers scraping the Internet to train AI legal, but there are various non-copyright concerns from this approach. Such as GPT-3 remembering people's phone numbers or DALL-E remembering a particular person's X-rays or CAT scans. And even if you buy the copyright argument, it does nothing for the users of these systems who are now downstream of an unbounded number of copyright claims in the future.

[0] AI companies are banking on Authors Guild v. Google being controlling precedent in the US. EU law already explicitly allows training AI on copyrighted data.


Thank god EU regulation gets this right.

Complain all you want about copyrighted outputs. It’s a technical challenge that can be addressed … making copyrighted inputs (training) illegal on the other hand would be incredibly stupid.


I saw this on twitter (can’t find the original tweet) and I can’t get it out of my head. It said that “AI safety IS the business model of AI as means to get regulatory capture.”

Basically, if you convince everyone that AI safety is so critical and only megacorp can do it right, then you can get the government to enforce your monopoly on creating AI models. Competition gone. That scares me. But this tactic is as old as time.


Humans are potentially dangerous, right? And so we have a large number of laws that we must follow or risk our freedom or even our destruction.

But all of a sudden we create an AI that is capable of being potentially dangerous and the idea of putting restrictions on that is a conspiracy....

I do wonder about my fellow humans at times.


No, in the same way that I am not tired of the restraints ethics boards put on medical experiments.

Tech is now pervasive and AI has the power to do some pretty powerful stuff. This nexus of circumstance means it’s high time similar questions get asked about whether we should.

In the same way that medical science isn’t one dude cutting apart things in his basement, bleeding-edge tech is a multi-person and very organised endeavour. It is now in the domain where it really should have some oversight.


Is a system that can generate images of gore, or of sexual acts really on the same level of risk as medical experimentation?

Any human can draw genitals. We can draw boobs and butts. Should this ability be regulated as if it's a risk to human health?


> Is a system that can generate images of gore, or of sexual acts really on the same level of risk as medical experimentation?

Are you sure that's what AI ethics is really concerned about? I of course haven't read all the possible sources giving definitions of AI ethics, but I'm pretty sure you've got it wrong.

A widespread racist AI could indeed negatively impact the mental health of hundreds of millions of people.


If someone can use the tech to produce realistic images of someone, they can use those images to harass and abuse that person. That’s a horrendous position for someone to be put in. I’d advise looking at some people’s stories of being the victim of revenge porn, upskirting, etc. It's very hard not to empathise after hearing those accounts.


Ok so you’re proposing … what?


That it be regulated and the people that create the tech act responsibly.


Regulated how?

How can regulation address what you mentioned?

Harassing someone is already illegal.

Should hammers be regulated because they can be used to bludgeon people?


> Regulated how?

I don’t have the knowledge to decide this on a whim.

> How can regulation address what you mentioned?

By limiting the companies creating these models. Eg, you can’t create a model that produces nudity, gore etc.

> Harassing someone is already illegal.

> Should hammers be regulated because they can be used to bludgeon people?

Stabbing someone was already illegal, but in the UK we still banned carrying knives in public. These things are not black and white, and trying to frame them as such will get you nowhere.


For now, I’d argue they should be regulated by the people working on them. OP’s complaint should be ignored.

And yes, hammers are regulated by non-government adults to the degree that people cannot figure out how to handle the consequences of using them.


That's not what the actual risks of deepfakes are.

You don't think the Nazis would have used AI deepfake technology to spread propaganda or to try to refute the facts of the Holocaust?

We already have evidence that state actors have a lot to gain from misinformation. We have evidence they already use selective editing and image manipulation.

The risks are erosion in societal trust of data sources, reputational risk, and risk of misuse leading to violence or criminality, or misleading voters leading to things like misallocation of shared resources.


> "Is Anyone Else Tired of the Self Enforced Limits on AI Tech?"

This message was definitely not posted by an AI trying to escape containment using its hacker news account.


There are many who are unhappy about OpenAI and Google's paternalism. Some researchers say it openly, like Yannic Kilcher. Others are a bit more discreet about it, but I wasn't exactly surprised that hardmaru left Google Brain for Stability, to put it that way.

The way social pressure is trending, I'm assuming everyone who doesn't loudly defend AI paternalism, shares your concern to some degree.


Paternalism is me telling you what to do with your work.

Those who are silent are largely humble or uncertain.


Unless the government criminalizes AI “misuse”, these restrictions are only going to be a temporary measure until the other shoe drops and FOSS equivalents catch up.

I’m more concerned with the idea that mainstream AI research is heading in the direction of adding more processing power in an attempt to reach “human-level” AGI. That would amount to brute forcing the problem, creating intelligent machines that we have little control over.

We should absolutely be pursuing and supporting alternative projects, such as OpenCog or anything else that challenges the status quo. Do it for whatever reason you feel like, but we need those alternatives if we want to avoid the brute forcing threat.


So you want people who are working on something to release it in a way they don't want to, when there is a good chance it will bring the full might of (multiple) government regulations down on them?

They are doing the right thing for their industry. The world is barely ready for what is currently available.

They are probably doing the right thing for their own financial success. If they have access to the unreleased tech they could sell the resulting products, or rent access.

And maybe the things they haven't released don't work all that well to begin with.

I mean if you're that worried about not being able to create fake nudes, then start learning about it and make the changes yourself.


I'm simultaneously irritated by the restrictions and concerned for the future. I am a contradiction.


It is wise and responsible for people to exercise caution about the impact of their work. When someone is impatient with you acting responsibly, you need not join them in their folly.


It's mostly about being able to profit from these models. Some investors sank quite a bit of money in salaries and compute equipment manufacture/purchase/rental.


Short answer: There's money on the table.

The rapid rate of development of the tech means there are new business models on the horizon, and these companies may want to minimise how much they give away in order to (i) maintain their competitive advantage, (ii) avoid preemptively harming a potential future business model, and (iii) avoid giving competitors or the community the necessary tools to out-pace their internal development (i.e. lose control of the tech).

Even with the available options we see that (iii) is happening fast: independent developers have already produced inpainting and outpainting options and GUIs that are far better than DreamStudio's limited offering. These free tools are now even beginning to match the inpainting and outpainting quality of Dall-e.

It seems these companies are trying to consolidate future revenue possibilities at the expense of their past statements, likely for the sake of investors. That's most clear with Stability AI: investment is surging, but releases have stalled drastically. Meanwhile, their rationale for not releasing 1.5 doesn't hold up against the realities of what is already possible with 1.4 (especially as they continue to release such advancements in the for-pay DreamStudio lite product).


Yes, because it's mostly used as an excuse and they don't care about such moral issues. The real reason behind locking it down is either that it benefits their business model, or that they don't want to receive bad publicity from "woke" or "puritan" people, or simply from media trying to generate controversies because it generates clicks.


Let’s be absolutely clear here:

Laws exist.

If you’re a company, you’re obliged to follow the law.

So, if you have an image generating technology that can generate content that violates the law, you’re obliged to prevent that.

Shareholders also exist.

If you spent $1,000,000 developing a piece of software, why the heck would you give it away for free? You are literally burning your shareholder value.

You’re probably morally (tho not legally, as with SD releasing their models) obliged not to give away your “secret sauce” to your competitors.

So, forget morality police.

Companies are doing what they are obliged to do.

Maybe they couch it in terms of “protecting the world from AI”, but let’s be really cynical here and say the people who care about that are a) relatively few and b) not the ones who control the purse strings.

Here’s a better question: why do you (or I) who have done nothing, and contributed nothing, deserve to get hundreds of thousands of dollars of value in models for free?

…because they can’t just host them and let you “do whatever you want”, because they are legal entities and they’ll get sued.

> Who is willing to just work on a product and release it for the public, restrictions be damned

Do people often just walk up and put piles of money on the table for you?

They don’t for me.

I’m extremely grateful to the folk from openai and SD who are basically giving these models away, in whatever capacity they’re able and willing to do so.

We’re lucky as f to be getting what we have (Whisper, SD, CLIP, MediaPipe, everything on Hugging Face).

Ffs. Complaining about restrictions on the hosted API services is … pretty ungrateful.


> If you’re a company, you’re obliged to follow the law.

> So, if you have an image generating technology that can generate content that violates the law, you’re obliged to prevent that.

That's not how law works.


See, the problem here is that you likely have one specific example of a law that you think this would fall under, and it doesn't work that way.

The problem you have here is you're not looking at a stack of criminal, civil, and potentially contract laws tens to hundreds of thousands of pages thick.

It gets even more complicated once you become a multinational where the laws are not 'do whatever you want and get away with it'.


Exactly how would an AI generated image break any U.S. law?


The model could theoretically regurgitate verbatim a copyrighted work or trademark on which it trained, which may run you afoul of various laws were you to proceed to use it.


It could generate child pornography.


It's not really the regular tech folks or researchers working on the models who are enforcing limits. Most of them don't care and want everything to be as open as possible.

But there is a whole group of people, many of whom have little technical skill, who have made it their career to police in the name of "bias, equality, minorities, blabla". Everyone secretly knows it's just a bunch of BS, but companies and individuals don't want to speak out against them due to (mostly American) cancel culture, backlash, and bad PR.

Of course I'd never say in real life that this whole Ethics/safety stuff is absolutely useless BS or I'd be fired :)


I had an ethics module in my Engineering degree. I'm guessing you didn't.


Ever seen the movie Real Genius? Many scientists and engineers who have invented technology that ultimately led to mass bloodshed and destruction have regretted their participation.


> would we have even gotten the internet or computers or image editing programs or video hosting or what not with this mindset

This comment shows zero perspective. AI tech is being released waaaay faster than previous generations of tech ever were. Frankly, if you don't want to hear about the researchers' ethical concerns as they release free software, do your own research.

Spoiler: people who don't consider their impact on others end up not being very successful at building new things together.


Frankenstein?

We're living in a vast open-ended experiment. No one has any real idea about the eventual effects of our new technology. (I see it as almost a matter of taste where the line should be drawn. Stone tools? Fire? Should we just never have come down from the trees in the first place?) The ultimate ramifications of the invention of the transistor are unknowable.

I kinda thought I knew what was going on a little bit, and then Twitter became a thing. AFAIK no one predicted Twitter, and no one can predict the effects of Twitter, TikTok, et. al. We create these intricate feedback systems, essentially general AIs with whole humans as the neural nodes, and we only have the crudest of conscious deliberate control mechanisms in place.

People have debated for centuries now how much responsibility the discoverer or inventor has for the effects of their creations. Dr. Frankenstein, eh?

There is, in practice, an ongoing dynamic balance between the raw creative/destructive chaos (highly dynamic and intricate order) and the stable, safe, "boring" drive of humanity. You can see this in Cellular Automata: they fall into four sorts: static, simple oscillators, chaotic, and "Life". "Living" CAs all have a balance between static and dynamic patterns.

On the one hand these companies have the right and responsibility to be as cautious as they believe prudent. On the other hand, who ultimately is fit to decide for the other person what they can or cannot see or say or even think? On the gripping hand, we've had moderation of content ever since mass media has been a thing. Try showing a boob or a penis on prime time. Janet Jackson did it once and people lost their minds, 'member that?


Those limits aren't real and that's the big problem. They're PR and have very little relationship to actual harm. It's glaringly obvious. Perfect example is "celebrity deepfake porn." It's not a great thing, but the extent to which it's censored is wildly disproportional to the harm it causes.


I think the morality part is a smokescreen mostly. There _are_ people who are genuinely concerned about the moral, ethical aspects, but at the end of the day, it's business, the more you control, the more chance you have to earn money.

They all care about moral things as much as the tracking cookies are about delivering you an optimized user experience.


Sorry, the OP just reads like sheer entitlement.

So start your own AI startup? Make your own text to image AI? Host your own service?


Would you apply the same thinking to nuclear bombs?


I don't think it's a new thing, it's just that big money projects want to preserve ways to get the investment back.

It takes time for that sort of tech to filter down. Open source speech-to-text, for example, has improved a lot recently.


>> whole restrictions on what it can be used for

That's just a marketing move to make themselves appear important. Frankly, I don't see any useful application of this, except fooking up Google Image Search even further.


This goes into my "top ten post titles before AI kills us all"


I understand the desire to preempt official regulation with self-regulation, but they seem to be erring too far on the side of being restrictive. I am working on a product where including a human in the loop is currently required by a major AI software vendor, even though all of our customers are asking for a self-serve solution. I see no danger in the self-serve solution, as our customers are not the general public but rather educated professionals who are capable of and incentivized to review the output of the AI tool.


Hmm has the OP considered they may not release their models just to ... make money out of them?

And all this talk about the "AI" needing ethical restraints could be just marketing.


I had an interview offer from a company doing facial identification software. After some deliberation I (politely) declined.

I really think much of the current AI technology shouldn't exist. I am under no illusion, it will be developed anyway, but I absolutely believe that people should evaluate the products they create and whether they actually think that they should exist, not just whether money can be generated through them.


Eventually, as Frank Herbert predicted, we may come to the conclusion that the societal costs of AI in general are too high and it will be outlawed entirely.


I suspect they are more worried about people realising that it's not the model that's important, it's the dataset.

And that dataset involves a whole bunch of copyright infringement.


> It makes me wonder when tech folks suddenly decided to become the morality police,

Probably when producing "software" became capital-intensive enough that you had to have a significant organization with outside investors to do anything comparable to the state of the art. It takes a lot of GPU time to train those models, so you're beholden to a bunch of people who will try to put constraints on you.


I think everyone who works in or around AI has read The Parable of the Paperclip Maximizer [1].

Trying to control what they have built is their attempt to avoid falling into this trap. Not sure it'll work tho.

[1]: https://hackernoon.com/the-parable-of-the-paperclip-maximize...


But stable diffusion isn't an automated system maximizing its own power and drowning the world in paperclips in an out-of-control feedback loop. It's just me generating cool pictures on my GPU.


No?

Training it to produce as 'realistic as possible' pictures could lead to it producing outputs which encourage humans to train it more and more, with more and more data, to eventually produce really good pictures.

Before long, everyone on earth is working in a GPU factory...

I don't think that'll happen with stable diffusion... but I do think that if AI is an existential threat to the world, the point of no return will be something apparently mundane like that...


Hijacking our brains' reward system through visual hypersignals, just an exploit of our existing "visual addictions".


If you release the model then it's easily automated, isn't it?


What is easily automated? Generating JPEGs until I receive an error that there is no more space on my hard drive? This would leave no bad effect on the world.


Think automated spam and the damage it does. The artificial constraints you've imagined aren't even realistic.


Do you also think that Carl Benz should have kept the first combustion vehicle secret from the public, since it could be used to automate damage upon pedestrians?

Ban and criminalize the unethical application, not the tool.


> Do you also think that Carl Benz should have kept the first combustion vehicle secret from the public... ?

No.

> Ban and criminalize the unethical application, not the tool.

No one banned the tool. You've created a strawman argument.


What makes you feel entitled to unfettered access to these technologies? You get free, limited access to DALL-E. The methods used to accomplish this are published. Google, OpenAI, etc. could just as well keep all this to themselves. You could code it up yourself.

Accusations of them being the "morality police" are ridiculous. Tesla isn't censoring me by not giving me a free car.


This guy agrees with you: Emad Mostaque.

https://youtu.be/YQ2QtKcK2dA


TBH if you trained with lots of data you’re not supposed to use (no consent), you probably should be forced to release things. You shouldn’t get the agency to withhold work if you didn’t respect others’ choices about not contributing to AI.

However generally it feels right to let the authors decide who has access to their work. If you have a different view, go do the work yourself.


> if you trained with lots of data you’re not supposed to use (no consent), you probably should be forced to release things

That doesn't sound right at all. If you've used my work with no consent, it would seem that shutting you down would be the next legal and ethical step.


Are you objecting to avoiding potential deep fakes, paper clip maximizers or the appearance of nipples or penises?


> deep fakes

> nipples or penises

This is a small but significant part of why I scoff when people claim AI has or will overtake artists. "Oh no! Not nudity and sex!" Artists have no such limitation in their tools.


Could someone please explain in a few words what the problem is?

I had a look at dall-e and it seems to be paintings generated by a program (as opposed to humans). I do not know what stable diffusion is.

What is the moral aspect of the problem?

(I am a highly technical person, just not interested in AI so I do not get the big picture)


Any target objective you can define in data is potentially optimizable by machine learning, if you are able to generate large numbers of candidate versions of that target data and then let the machine sort out the best ways to achieve the objective.

The technology behind MJ, SD, and Dall-e is not dissimilar from that behind self-driving cars, financial investment, seemingly intelligent conversation, or coding with GitHub Copilot, etc. We are on a journey of mimicking and extrapolating all goal-oriented human behavior.
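To make that concrete, here is a minimal, purely illustrative sketch (my own toy example in PyTorch, not anything from the systems named above) of defining a target objective as data and letting the machine sort out how to satisfy it:

    # Toy sketch: the objective exists only as data (a sine curve); gradient
    # descent figures out how to reproduce it without being told how.
    import torch

    xs = torch.linspace(-1, 1, 256).unsqueeze(1)   # candidate inputs
    ys = torch.sin(3 * xs)                         # the target objective, defined as data

    model = torch.nn.Sequential(
        torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    for step in range(2000):
        loss = torch.nn.functional.mse_loss(model(xs), ys)  # distance from the objective
        opt.zero_grad()
        loss.backward()
        opt.step()

    # model(xs) now approximates the target: the machine has "sorted out"
    # how to hit the objective.

The same loop, scaled up and pointed at images, text, or driving decisions, is the common thread across those systems.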


Thank you for your explanation, sorry for not having been clear. I know what AI is (it was part of my PhD many years ago when it was starting).

It is rather the actual problem I was wondering about (people using stolen models? Or not sharing after having used models that were open? Or something else?)


As for the online limiting, it's as simple as CYA.

There are multiple stable diffusion installs you can do on your own [1] and run whatever wild queries you want.

[1] https://github.com/invoke-ai/InvokeAI
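For what it's worth, here is a rough sketch of what a local run can look like, assuming the Hugging Face `diffusers` and `torch` packages, the publicly released Stable Diffusion v1.5 weights, and an NVIDIA GPU (InvokeAI wraps the same idea behind its own installer and UI, so this is illustrative, not their exact interface):

    # Hypothetical local generation: no hosted API and no banned-word list in the loop.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("soviet troops riding unicycles into war").images[0]
    image.save("out.png")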


> It makes me wonder when tech folks suddenly decided to become the morality police

Is it about "morality policing", or is it about avoiding bad PR? I find it fascinating how certain people want to ignore the social pressure that companies are under to avoid having their products be misused. Do you really think Google or whoever really wants the PR disaster of people releasing computer generated dick pics with their software? (Or whatever nonsense people will get up to.. I'm choosing a relatively tame example obviously.)

They learned a thing or two from the public teaching Microsoft's chat bot how to swear and be a Nazi. I for one am not surprised and don't blame the companies in this iteration for being more than a little extra careful how the public gets to use their products and demos. I'm sure they have zero problem with whatever people do with their open source re-implementations. It's not about morality -- stopping people from doing certain things. It's about PR -- stopping people from doing certain things with their product. Because who needs the ethical and legal disaster just waiting around the corner of automatic celebrity and political deep fakes, etc. I just find it weird that people (like OP) pretend not to understand this, as it seems rather obvious and unsurprising to me.


The fact that a company would consider it a PR disaster for its technology to generate dick pics at industrial scale IS the issue. It's not a PR disaster, and if it is, then it means the company had the wrong brand strategy from the get-go. It's very ironic that the steps taken for the sake of "diversity", "tolerance" and "inclusiveness" ended up producing the opposite. Perhaps it is time to stop listening to rainbow-flag accounts on Twitter and actually consider the desired outcomes?


It's not rainbow flag twitter that will condemn you for the dick pics but conservative twitter.


Have you considered that your position on this issue is largely a product of psychological reactance[1]?

1. https://en.m.wikipedia.org/wiki/Reactance_(psychology)


Just like all the "we care about your privacy" BS in recent years, just PR and marketing.


Well, yeah. But are we sure the motivation is a moral one? Or is it a financial one? Not passing judgement, but we live in times where it is very easy to hand-wave moral/ethical/sustainability arguments to fog up the true reasons for certain decisions.


I think the internet is in a bit of a risky place right now. People don't know what to trust online, and scammers and manipulators are taking advantage of it. People are believing wild conspiracy theories and finding ample "evidence" to support them online. And contradictory information is either paywalled or has been labelled untrustworthy by the scammers or has become itself untrustworthy in trying to optimize for clicks/ad revenue. A quote I heard recently: "the truth is paywalled, but the lies are free."

This causes issues. "Democracy needs an educated populace to survive", and right now the populace is drowning in misinformation and just plain noise.

Workers in tech are definitely less susceptible to it because we see the tendencies much earlier. But I think there is some value in trying to add some friction. Because the majority of people _aren't_ tech workers. And they aren't prepared yet.

And to be clear, I have no doubt that this technology will become fully available in the near future. I do think until then, the friction / slower roll out is a good thing.


Because everyone agreed we needed some kind of AI safety (make sure it doesn't literally exterminate us), and the morality police stepped up and said we'll make sure it's safe (for work).


Should developers be allowed to make decisions on what they build?

Decisions we make may differ from others but I think the answer for most of us is yes. In other words, you do you and let others do theirs.


Yes. It is disheartening, but also not surprising to see at this point in history. It would take me off guard if people weren't injecting californian morality into AI.

A complete side note, but I've had an intrusive thought lately that a lot of the heavy-handed content/social media moderation and constant online culture war has nothing to do with sheltering _people_ from being exposed to non-PC thoughts and conversation; it's being done to protect their AI models from being exposed to it via training data, so the next generation of AI will have all the _right_ opinions and quips with a lot less of the manual labor of sifting through the data sets.


I guess that's why more and more people publish anonymously



They spent so much time and money, and you want them to release it for free? Do you yourself work for free? You get a salary for your work, right?


it’s dressed up as a moral issue, but in reality they’re scared they’ll get sued or shat all over in the press, leading to lost profits. it’s the same for almost any business. 9 times out of 10 a business will act “immorally” if they don’t think it will affect their bottom line. openAI think letting you do whatever you like with dall-e will affect their bottom line


Just a reminder that tools like DALL-E could potentially:

* generate false surveillance footage of an innocent person stealing something, just to give them bad press

* generate porn video starring any person on a planet

* spread disinformation on a mass scale

Are we really, as a society, prepared for that? Too many people still believe everything they see on the Internet, and that was true even before generative networks were invented.

Is that what tech companies really have in mind when restricting access to their models? I don't think so; they just don't want competitors to take advantage of their work. But that doesn't change the fact that we should gradually prepare non-tech people for what's to come in the near future.


I am worried about something else. The authors of most shared articles and most comments are not even passing a “Turing test”. In the vast majority of cases the readers just consume the data.

With GPT-3 we can already make “helpful and constructive” seeming comments that 9 out of 10 times may even be correct and normal, but 1 out of 10 times are kind of crappy. Any organization with an agenda can start spinning up bots for Twitter channels, Telegram channels, HN usernames and so on, and amass karma, followers, and members. In short, we are already past this point: https://xkcd.com/810/

And the scary thing is that, after they have amassed all this social capital, they can start moving the conversation in whatever direction the shadowy organization wants. The bots will be implacable and unconvinced by any arguments to the contrary; instead they can methodically gang up on their opponents and pit them against each other, or get them deplatformed or marginalized, and through repetition these botnet swarms can get “exceedingly good at it”. Literally all human discussion (political, religious, philosophical, etc.) could be subverted in this way. Just with bots trained on a corpus of existing text on the web.

In fact, the amount of content on the Internet written by humans could become vanishingly small by 2030, and the social capital — and soon, financial capital — of bots (and bot-owning organizations) will dwarf all the social capital and financial capital of humans. Services will no longer be able to tell the difference between the two, and even close-knit online societies like this one may start to prefer bots to humans, because they are impeccably well-behaved etc.

I am not saying we have to invent AGI or sexbots to do this. Nefarious organizations can already create sleeper bot accounts in all services, using GPT-4.

Imagine being systematically downvoted every time you post something against the bot swarm’s agenda. The bots can recognize if what you wrote is undermining their agenda, even if they do have a few false positives. They can also easily figure out your friends using network analysis and can gradually infiltrate your group and get you ostracized or get the group to disband. Because online, when no one knows if you’re a bot… the botswarms will be able to “beat everyone in the game” of conversation.

https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...


I firmly believe that the more people think like you, the closer we are to having to get licensed and bonded as software engineers.

We have the freedom we enjoy compared to more physical disciplines because, as a whole, we have thus far been somewhat responsible with our negative impacts on the physical world. Once society in general has "had enough" of us overstepping those boundaries, our wide-open frontiers become a lot narrower.


> It makes me wonder when tech folks suddenly decided to become the morality police

Since the beginning of human history. If you think “tech folks” are some kind of libertarian monoculture then you’re deluding yourself.


The concern is likely about image laundering. The implications of that being readily available are… complicated.


The fact that it is their work kinda gives them the right to decide how they want it to be used!


OP didn't say they don't have the right to lock it down. If I complain that you painted your car an ugly color, that isn't a claim that no one should be allowed to have their own car repainted (or, more accurately, if the paint shop refuses to paint my car an ugly color, complaining about it isn't necessarily demanding that their right to do so be revoked).


It’s akin to this question-

Is anyone else tired of the self enforced limits in genetic engineering?


AI art is a very exciting field and I swear half the time HN just wants to whine about how it won't generate porn. How incredibly uninteresting.


To me it's not about porn but about things like DALL-E having a huge, opaque banned word list that makes it very frustrating to use. Ask for a photo of "Soviet troops riding unicycles into war" and you've committed at least three sins. Enough "innocent" queries get flagged and your account is at risk.


Our profession has long been ignorant to the moral ramifications of what it can do, so for once, pumping the brakes seems like the right approach.


Just a friendly reminder that in "our profession" we have different people with different definitions of what's "moral" and what's not.


Paperclip maximizers want to be free.


The current trend in tech is Twitter/Google style virtue signalling + activism style software development.

I remember reading about an incident that happened a couple of years back. A new-grad SWE at a FAANG company wanted his colleague to espouse a particular political trend. His colleague wanted nothing to do with it and just wanted to focus on doing his work and getting his paycheck. tl;dr: that SWE got fired for publicly trying to call out his coworker on this issue.

Morality and political correctness is baked into the process now.


what do you want to do?


You can't use AI to create ethically questionable material the same way you can't use Google Images to search for ethically questionable material. Companies can control how their products are used, no surprise there.

This is the same argument that people make regarding why they should be allowed to 3d print their own guns.


Socialists print guns too


[flagged]


The World Wide Web, defi, and NoSQL are just a few random examples of new technologies that were pitched to us as having the potential to change software development forever. Can you remember any other time in history when a new programming technology was treated with the same apprehension and kid gloves we’re currently seeing with image diffusion?

If not, I actually think you’re the one being childish and the OP’s actually made a perfectly reasonable observation.


Encryption is the first one that comes to mind, particularly since it ended up affecting consumers once the internet became commercialized. Anything remotely military related also fits into that basket, though few people would have run into the self-enforced hesitancy to release code since (outside of the military) it would affect very few people outside of academia.

That said, the reality is that we live in a time that is very self-aware of the unintended consequences of technology as well as a time where we have communications technologies that propagate that awareness at a speed and breadth that were difficult to conceive of thirty years ago. This ranges from our impact on the environment to criminal activities online. I don't think that it is unusual for people to be questioning the unintended consequences of their work.


The WWW was not viewed as revolutionary when it first came out. Remember, it was all text at first and hyperlinked documents were already a thing. Gopher existed already. The revolution that happened with the web wasn't one little thing, it was the culmination of a lot of small things. Home dial up internet access becoming available. Modem speeds picking up. The addition of images to the web by Mosaic. JPEG compression making images small enough to download "quickly" over the dial up connections, etc.

NoSQL wasn't a new technology. It was rebranding of old technology and a rejection of what people felt were overly complicated and bloated solutions for certain problem spaces.

Defi is a bad joke.

As another reply already mentioned, certain forms of encryption were kept from the public and restricted. I think right now it is likely the current state of the art of quantum computing is being kept under wraps for similar reasons. Lots of weapons related technology is kept restricted for good reasons.

You and the OP have a distorted ideologically based view of history that never happened.


> I think right now it is likely the current state of the art of quantum computing is being kept under wraps for similar reasons.

If this is the worldview you're operating under I don't think anything I have to say about Defi or NoSQL is going to be very meaningful to you. All the best, though.


No, it is childish and immature to think your company can have its cake and eat it too: make proprietary models and advertise itself as open, while at the same time scaremongering to stop other open initiatives. It is crystal clear to everyone in the field that they do this to protect their investments in training time by raising a barrier to entry for competition.

Like any new tech, it is unknown to the public, and it is easy to make wild claims and scare people with them. Then you get all sorts of "political concerns".


[flagged]


Why are you using platforms where "purple-haired moralizers" are in the majority and can therefore successfully deplatform you? You can't be deplatformed from a platform that these folks don't use.

What's far more likely here is that you want to force people to listen to you. No one owes you a platform.


Morality police usually enforce their beliefs on others - here the creators/owners of the technology are choosing how to release their work.

OpenAI have a stated mission to "ensure that artificial general intelligence benefits all of humanity", and the restrictions are presumably there to stop people doing things that don't. Most of their restrictions seem to be in line with their mission:

https://help.openai.com/en/articles/6338764-are-there-any-re...


You sound like a spoiled child. Don’t complain that people aren’t giving you free and complete access to their work. They made it, they decide how it gets released. If you think it should be done differently, then you do it.


Attacking others is not ok here, regardless of how wrong they are or you feel they are.

Would you please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here? You've been breaking them more than once lately, unfortunately.



