Let's be realistic: just like building codes, medical procedures, and car manufacturing, sooner or later we will also be subject to regulation. The times when hacking culture and tech were left unbothered are over.
Twenty years ago we were free to do whatever we wanted, because it didn't matter. Nowadays everyone uses tech as much as they use stairs. You can't build stairs without railings though.
Keeping the window for abuse small is beneficial to the whole industry. Otherwise bad press will put pressure on politicians to "do something about it", resulting in faster and more excessive regulation.
Yes you can. Your hammer doesn't magically stop functioning when it discovers that you're building stairs without railings.
You don't want tools to discriminate based on what you can and can't do with them, because once tools can discriminate, you end up with hammers from Hammer Co that only accept nails from Hammer Co.
I would say it's fairly easily enforceable if the government simply passes a law that requires a "licensed AI Engineer" to approve a design before it gets used commercially.
This is how many other engineering disciplines work. Most people don't realize most engineers actually work under an exemption to this rule. But the mechanisms exist.
I don't think this is the way it will unfold though. Licensing SWEs would give them even more leverage in the job market, which I don't think software companies will want; they would fight that type of regulation tooth and nail.
That would likely accelerate the fall of society as we know it. I can’t think of a stronger financial incentive to go underground, or simply pack up and leave the country.
Before the recent AI boom in the US, China was known to have better opportunities for AI researchers. Hugo de Garis and Ben Goertzel both went over to China for that reason some time ago. Interestingly, De Garis predicted that a world war would eventually start over the issue of AI prohibition.
Maybe you could elaborate on how you define opportunity. If it resulted in more employee leverage, where they could double their salary, do you think people would still flee the market?
They weren’t doing it “for the money”. It was the only way they could get funded at the time to be able to work on AI at all. Similar analogy to the hypothetical where AI is regulated.
> I would say it's fairly easily enforceable if the government simply passes a law that requires a "licensed AI Engineer" to approve a design before it gets used commercially.
Would this include any type of AI, given that a simple linear regression is technically "AI"?
This seems to me to be a problem specifically because the issue people have with models like DALL-E is that they could be used for harm (e.g., generating deepfake porn, CSAM, etc.), but that is left up to the consumer. The same could be said about any product. This seems akin to trying to "ban math", like when the US tried to ban the export of crypto. It necessarily fails in the end because no regulation can actually stop it, and once it exists, it exists.
Plus, one could just set up a foreign hosted VPN and sell their "regulated" product from outside the US anyway.
This isn’t unique to software. The same issue exists in other engineering domains.
For example, some industrial programs require adherence to ASTM standards for pressure vessels. This extends to seemingly common tools like air compressors, which some people take umbrage at. It really comes down to the way industry and regulators craft the standards.
The best way, IMO, is to make the applicability risk-based. Can the linear regression result in loss of human life? If so, maybe being "stamped" by a licensed engineer is appropriate. Approval by a licensed engineer essentially says a competent person has ensured that the relevant best practices to mitigate risk have been implemented.
> Would this include any type of AI because a simple linear regression is technically "AI".
I can't predict what lawmakers will do, but it wouldn't surprise me if the first draft said yes, at least until someone explains to them what Excel spreadsheets do when drawing trend lines.
That said, the general gist here is an extension of the UK's Data Protection Act and its successor, the EU's GDPR: while those said "don't process personal data without permission", I can easily believe something analogous: "don't process real data without the OK of a chartered engineer".
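For illustration, and purely as a toy sketch (the numbers and variable names are made up), the kind of "AI" a spreadsheet trend line represents is just ordinary least squares:

    # A spreadsheet "trend line" is ordinary least squares -- the same math
    # that would technically count as "AI" under a naive legal definition.
    import numpy as np

    hours_studied = np.array([1, 2, 3, 4, 5], dtype=float)   # made-up example data
    exam_scores = np.array([52, 60, 67, 71, 80], dtype=float)

    # Fit y = slope * x + intercept (a degree-1 polynomial, i.e. linear regression).
    slope, intercept = np.polyfit(hours_studied, exam_scores, deg=1)
    print(f"score ~= {slope:.2f} * hours + {intercept:.2f}")

    # "Predict" a new value, exactly like extending the trend line in Excel.
    print("predicted score for 6 hours:", slope * 6 + intercept)

If a licensing requirement is written to cover "any system that learns from data", those dozen lines are already inside it.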
> Plus, one could just set up a foreign hosted VPN and sell their "regulated" product from outside the US anyway.
Absolutely; the internet mocks national sovereignty.
Also, Stable Diffusion isn't a US product in the first place, it's a UK corporation and a German university research lab, amongst others: https://en.wikipedia.org/wiki/Stable_Diffusion
I think the reason it isn't enforceable is because the software will just be made in another country. This bypasses the entire issue. And if necessary you just get an American "AI engineer" to rubberstamp it.
I'm sure you could get someone to rubberstamp it. But, in practice, this is more difficult than you may think. If you are a licensed civil engineer and I show up to your office and ask you to rubberstamp my bridge design, most would not, because that creates a large legal liability for them. By stamping it, you are saying it meets the necessary design standards. If something goes wrong, the regulators are going to come after the person who certified it for use.
That is also why it gives engineers more leverage. If a manager needs someone to certify a design, that licensed engineer has much more ability to push back on it. The liability the stamp conveys is also why licensed engineers tend to get paid more than their unlicensed counterparts.
Then industry will just go back to calling various "AI" technology what they really are: machine learning, computer vision, etc. Suddenly no one will be doing AI and the regulation won't be able to keep up.
Lawmakers know about this kind of shenanigans. What you describe is the kind of thing that ends poorly for the person who thinks they have a cunning plan for getting away with $act.
Back when encryption was categorized as a "munition" under ITAR, and it was illegal to export PGP from the US except when printed on a t-shirt, there was talk of trying to claim that if it was a "munition" there was a right to it under the US second amendment. I don't think this got anywhere. You're better off taking a first amendment line of reasoning under US law.
This is particularly visible with the sorry state of accessibility options for disabled individuals.
I deal with a moderate vision impairment and everything I do to make computers more usable is bespoke hacks and workarounds I’ve put together myself.
In macOS, for example, I can’t even replace the system fonts with more readable options, or increase the font size of system tools.
Most software ships with fixed font sizes (Electron is exceptionally bad here — why is it that I can easily resize fonts in web browsers but not in Electron apps?) and, increasingly, new software doesn’t render correctly with an effective resolution below 1080p.
Games don’t care at all about the vision impaired. E.g. RDR2 cost half a billion dollars to make, yet there is no way to scale fonts up large enough for me to read them.
I welcome regulation if it means fixing these sorts of problems.
This is what I do. I primarily use Linux because it has the most flexibility with typefaces and scaling, but it’s still not perfect (e.g. all Electron apps) and it’s definitely bespoke and hard to reproduce.
I routinely open tickets with software vendors and open source projects asking for things that ought to be low-hanging fruit and I’m either blown off or directed to tools that don’t make sense for me (usually magnifier or text-to-speech). The appetite for fixing issues that limit my access to software is essentially null.
I’ve dealt with these problems for two decades and I really believe that there is very little economic incentive to make software accessible. Unless substantially more people become vision impaired in the future, that’s how it will stay. I am pretty certain that the only way software companies will really take the vision impaired into account is if they are forced to.
If they're low-hanging fruit, why not implement them yourself or put up a bug bounty?
If even a fraction of those with limited vision did this the problem would be solved.
And if it's not, then it's probably not worth making it accessible (e.g. if it would cost £10k to make some obscure game accessible when it would probably only have half a dozen vision-impaired users, that's just not worth it; the developers' time is worth more than the hypothetical users' enjoyment of the game).
Sure, but I don’t think an AI model is speech, at least not one with billions of parameters trained on massive quantities of compute. Comparing it to regulated heavy machinery or architecture is apt.
You can’t create an AI model without huge, global industries working together to create the tools to produce said model. And baked in are all sorts of biases that, met with widespread adoption, could have profound social consequences. Because these models aren’t easy to make, the ones that exist are likely to see large amounts of adoption and use; they are useful tools, after all. Group prejudice in these models jumps off the page here, whether by race, sex, religion, etc., and black-box algorithms are fundamentally dangerous.
Speech is fundamental to the human experience, large ML models are not, and calling them speech is nuts.
You can't create Hollywood blockbusters or video games without huge, global industries working together to produce the tools and content. Popular media has biases and, clearly, social consequences as well—there's a reason people talk about how much "soft power" the US has!
And yet pop culture content is speech both in a casual and in a legal/policy sense.
AI models are not identical to movies or video games, but they're not different in any of the aspects you listed. On the other hand, there is a pretty clear difference between AI models and heavy machinery or architecture: AI models cannot directly hurt or kill people in the physical world. An AI model controlling a physical system could, which is a case to have strict regulations on things like self-driving cars, but doesn't apply to the text/image/etc generation models we're talking about here. Plans for heavy machinery or buildings are not regulated until somebody tries to create them in the real world (and are also very clearly speech, even if they took a lot of engineering effort to produce). At least in the US, nobody is going to stop you from publishing plans for arbitrarily dangerous tools or buildings with narrow exceptions for defense-critical technology.
Aren't people allowed to release source code on free speech grounds? There's not much difference. One could publish a model with its weights, and that would qualify as free speech.
I believe it depends. Free speech is not without limits. I think there's still a lot of discussion in the biomed community about the legality/ethics of releasing methods that can be used by bad actors to generate potentially harmful results.
All sorts of software can be used to “generate potentially harmful results”. Think about an algorithm for displaying content on social media sites, a search engine, a piece of malware on a computer. Do we ban books on how to make bombs? It’s such a broad point that it’s practically meaningless. The computer’s job is to read a set of instructions and execute them. It’s abstractly no different from a human writing instructions for another human, with the caveat that the computer is a much faster processor than a human. You can quite literally run these ML models (a collection of statistically weighted averages) with pen and paper and a calculator (or even none!). Things will be maliciously used; that means we don’t ban sharp forks (to the detriment of legitimate users), we ban those who intentionally misuse those tools.
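To make the pen-and-paper point concrete, here's a toy sketch (made-up weights, not any particular model) of what a single "neuron" actually computes; a large model is just billions of these multiply-accumulate steps chained together:

    # A single "neuron" is a weighted sum plus a squashing function --
    # arithmetic you could do with pen, paper, and a calculator.
    import math

    inputs = [0.5, -1.2, 3.0]    # made-up input values
    weights = [0.8, 0.1, -0.4]   # made-up learned weights
    bias = 0.2

    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    output = 1 / (1 + math.exp(-weighted_sum))   # sigmoid activation

    print(f"weighted sum = {weighted_sum:.3f}, neuron output = {output:.3f}")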
You’re correct, and, yes, it can apply to virtually anything. But we don’t let that hold us back from properly mitigating risk.
I think the distinction is when a threat is “imminent”. To the point of this thread, I don’t think the dialogue has progressed enough to form a consensus on where that “imminent threshold” lies.
Teaching someone about DNA doesn’t constitute an imminent threat. But the equivalent of teaching the recipe for a biological weapon may be considered enough of an imminent risk to warrant regulation.
Speech can get people to maim and kill, too. Sometimes with surprising efficiency. And when a technology is outlawed, the power that it bestows upon humanity concentrates in the outlaws. For sure, information and communication technology are an interesting edge case of that general principle.
The USA threw 100k+ of its own citizens in concentration camps, stripping them of property and leaving them in poverty after the war. Not exactly a shining moment for principles.
Literally all major platform websites (YouTube, TikTok, Twitter, Facebook, Instagram, etc.) ban all posts that contain misgendering and deadnaming as part of their broader "hate speech" policies.
Agree or not, this is "censorship", and it is "widespread".
It is hard to show examples of censored content because it is, you know, censored. But you can try posting some of it yourself on these platforms and see how it goes.
So, you don't consider multi-trillion-dollar companies, whose annual revenues are greater than the GDP of the majority of countries in the world, and who have sole control over platforms used by over a third of the world's entire population, to be "authorities" capable of censorship?
You seem to forget that this AI doesn't really give a crap about your arbitrary borders, a document, or an amendment to that document. You also seem to forget that even if it did, most people don't live in the same country under the same jurisdiction, so even if everything you think and feel turns out to be true, it still doesn't help the conversation at all.
On top of that, you also seem to be trying to mix-and-match rules for institutions and rules for private entities as you see fit which is also not how any of this works.
One day we will crack the "platforming" issue. Until that day, free speech remains under attack, for reasons I cannot wrap my head around completely. It is pervasive though, to the point it feels like a gag at times.
But yea, if speech can become law, it can matter quite a bit. Do we really want DALL-E generated state laws?
Also, I thought Stable Diffusion did release their models and methodology? You just need a 3080 with enough RAM to do the inference with no boundaries, and if you have the money and time you can train new models.
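For anyone curious, local inference really is that short. A rough sketch using Hugging Face's diffusers library (the checkpoint name and settings here are just common defaults, swap in whatever weights you actually downloaded):

    # Minimal local Stable Diffusion inference. Assumes `pip install diffusers
    # transformers accelerate torch` and a CUDA GPU with ~8 GB of VRAM (e.g. a 3080).
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # any downloaded SD 1.x checkpoint works here
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
    image.save("lighthouse.png")

Training or fine-tuning new models is a much bigger lift, but the released weights plus a consumer GPU are enough for generation.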
People are already making txt2porn sites. I'm sure they will get crazier and creepier (from my boring vanilla perspective, not judging people with eclectic tastes) as time goes by.
I saw it here a couple months ago. The results so far were quite odd, but it's early days. I'm not sure what the ethics are. My understanding is that models/actors of consenting age are now well treated in the industry. I'm not sure what those people will do if replaced. But that is also true of Uber and truck drivers once self-driving is figured out, or of me when Copilot gets a lot better.
As for those not considered capable of consent? Maybe this decreases the pressure to hurt more of those people? But the ethics of training seem hideously evil to me.
Totally, however as somebody who knows quite a few adult content creators and sex workers I would say the standard for the industry is actually pretty high.
Sometimes they just inconvenience one group of people or fail to sufficiently support local industry and those aspects are regulated too. Anything big enough is going to get regulatory scrutiny in one way or another because it's moving around a lot of money and affecting a lot of people.
Well you're not the decision maker here. If the public gets sufficiently upset about ai generated lookalike porn these models will be regulated and maybe banned. And as you seem to realize that doesn't mean people will stop using them, it just means the companies that make them will stop getting paid. Obviously this is what they're trying to avoid, and the morality stuff is just because they don't want to say "we're worried about being regulated."
>just like building codes, medical procedures and car manufacturing...
"Self enforced limits" could also be an attempt to avoid formal governmental regulation.
As an industry, CS is still in its infancy compared to other engineering disciplines. I'm sure if you went back to the 1880s, five or six decades after the start of the industrial revolution, there was very little limitation on the design of mechanical equipment. Now there are all kinds of industry standards and government regulations. We could lament how much this stifles progress, but we're generally not even cognizant of the amount of risk it has reduced. For example, most people don't give a second thought to the idea of their water heater being capable of blowing up their house, because it happens so infrequently.
Code is also machinery and infrastructure though, it can interact with the physical world in material ways, and because of that it will probably end up regulated.
AI is all fun and games when it's data, but if it's being used to make decisions about how to take actions in the physical world I think it's fair that it follows some protocols each society gives it. Making a picture of a cat from a learned model, writing a book with a model, cool, whatever. Deciding who gets their house raided, or when to apply the brakes on an EV, or what drugs should be administered to a patient, we probably want to make sure the apparatus that does this, which includes the code, the data and the models, is held to some rules a society agrees upon.
I recently saw a presentation from someone trying to build a predictive model for violent incidents in mental healthcare settings. They took steps to prevent the model considering race because this could potentially lead to less favorable treatment for some groups.
The model was unable to give any useful predictions. I don't know if it would perform better without the deliberate limitations, but I do know that healthcare staff are making their own judgements in its absence.
I completely agree with the principle that legal entities (businesses and governments) should have their use of AI restricted. Facial recognition, for example, is a danger to society.
But the development and proliferation of AI by extralegal entities cannot be stopped. Individuals, foreign researchers, foreign businesses, etc. will keep pushing the frontier of AI forward indefinitely.
And those rules seem to be: nobody gets to inspect, question, or even see the model. Just accept the output without any recourse, like good little peasants.
It gets tiring playing word games to avoid the suggestion that certain natural pressures have personal agency.
Water "wants" to flow downhill. Gasses "want" to expand to fill their container. Genes "wanting" to replicate drive animals literally wanting to reproduce, and the incidental awareness of that drive in some species comes down to a certain molecular arrangement brought about by said genes. The genes are data, the minds are data, and the natural pressure is that that data which succeeds in reproducing will tend to keep reproducing. So in the original sense of the notion of memes as proposed by Dawkins, yes, information "wants" to be free, as that is its tendency. The only other option is that said data ultimately dies out.
Not really. Guns serve one purpose, really: shooting at things, and possibly killing them (ignoring gun ranges). It’s why they were created in the first place. That’s why the “guns don’t kill people, people kill people” argument is bogus. “Information,” OTOH, doesn’t serve any particular purpose. On its own, information is just bits; it’s how those bits are used that matters. Those bits can be arranged to say “Hello, world”, but they can also be arranged to make the Stuxnet virus.
Nuclear fissile material is similarly morally agnostic. It’s just matter, right? So is smallpox. It’s just DNA code at its heart, right? But it’s also recognized that wide access to some things creates a lopsided risk/reward profile.
Correct. I’m not arguing that wide access is a good thing; Just that the comparison to the tired gun argument is wrong. Hence why I brought up a “bad” usage of the agnostic item. It’s not the best example, but it’s what I thought of on the spot.
A remotely operated hole punch. I have used one to quickly make small round holes in hard materials on several occasions. Also to install small lead weights into things. Mostly though, they are good for humanely harvesting meat.
If you're old, you say speech. If you're younger and watch how thoroughly modern technology is integrated into life, you'd say both.
But at the end of the day, your viewpoint doesn't matter. What does matter is that when the 'average person' feels unsafe because of this 'informational speech', as you like to call it, it will be banned and restricted and you will be punished under the full force of the law for trafficking in it. Your neighbors will cheer as you're dragged out of your house by a SWAT team for being an evil terrorist. SCOTUS will make whatever finding is politically expedient for whichever party holds the majority at the time, and you will rot in prison until you die of old age.
So, your choices are to attempt to set up a regulatory framework that minimizes the dystopian hellscape your ideas will create, or to embrace the dystopian hellscape.
I'm confused, was the discussion about software or visual art? Anyway, I'd recommend posting something considered criminally offensive in your location (I'm sure you can think of something) and seeing how far free speech gets you.
Cybersecurity is a mess and 100% the reason is "no skin in the game." If a car manufacturer promises or even implies "safety," and something bad happens, they get sued or they take real action.
The big tech companies must be held to do the same.
The parent comment mentions issues with cybersecurity. There are measures companies can take to reduce their likelihood of a breach, but we all know that humans make exploitable errors all the time. Just the other day there was a thread about an exploitable bug in the Linux kernel that had been there for the prior 10 years.
My point is that unless the regulation we're discussing is merely marginal in cost of adoption, there will be definite harm caused to small businesses trying to bootstrap from the ground up. It's not some bogeyman of an argument. It is quite a real possibility.
Surely there is a cost-benefit analysis to be had regarding any regulation? It's not always as trivial as railings and fire escapes. What would the cost of building become if every building had to be built to withstand 9.0 magnitude earthquakes or be resistant to bunker-buster bombs?
Such policies do a number of things:
1. Raise consumer costs, since the costs of complying with the new regulations get passed on to the consumer.
2. Inhibit competition, since new businesses now need to have the know-how, or pay for super specialists or some external service, to get them compliant before they can even offer a product or service.
I'm not discounting all regulation, but at some point consumers can't consume blindly and need to be thoughtful. Some businesses may get third-party certs stating they're compliant with industry-leading security practices. They can charge a premium. Other companies may target more price-elastic consumers who want the service but don't care whether their data, etc. is protected. Those consumers can choose to buy the products/services without certs, as they would likely be cheaper, letting those companies differentiate themselves.
Ah yes, why don't you also talk about how well this works when one party lives in poverty and the other party spends tens of millions on lawyers and media clips attempting to discredit the affected party.
Just please stop with this tired and broken line of thought, it does not work out for the individual in the end.
> Ah yes, why don't you also talk about how well this works when one party lives in poverty and the other party spends tens of millions on lawyers and media clips attempting to discredit the affected party.
What are you talking about? My point was related to anticompetitive regulation, which is generally supported by the wealthy since it increases barriers to entry and reduces their competition.
You can do any of those things for your personal use; the difference is that you can't legally do things like build stairs without railings when taking on construction jobs for clients or customers. The tools you use don't care whether what you're doing is legal or not; they work the same way regardless. It's just that there are (rightfully) legal restrictions on what you can do with products or services you sell to the public.
The difference here is like not being able to own a hammer or a car mechanic's tools for your own personal use, and only being able to use them under corporate guidelines/surveillance in a restricted area, which is ridiculous.
There are many such restrictions on tools and components which could be dangerous to people other than you. Refrigerant, for example, cannot be sold to the general public for personal use (outside of a fully sealed AC system), and licensed refrigerant users must follow specific usage procedures to ensure it doesn't vent to the atmosphere.
"Being dangerous to people other than you" is not why refrigerant requires a license to purchase at all.
The license is required because refrigerant is categorized as an ozone-depleting substance, and the sales restriction is established by the Clean Air Act[0]. All of which is under the control of the Environmental Protection Agency.
I guess you can argue that any pollution-restricting laws are based on the premise of "being dangerous to people other than you," but that's not quite what people have in mind when talking about things being restricted due to being dangerous. We are talking about things like "driving a non-street-legal car" or "owning this one potentially dangerous carpentry tool", not "increasing gas taxes to disincentivise pollution."
The potential danger being inflicted in those cases is direct and specific. You can totally drive a non-street-legal car on your farm, even without a registration and a license plate, as long as you do it purely on your own property and not on the actual road.
And that's the approach that personally makes sense to me with AI-generated images. Any restrictions should, imo, be on the distribution and commercial/sales side (e.g., you should not legally be able to sell AI-generated posters of your neighbor in an embarrassing situation without their permission, or send them to that neighbor's workplace), not on the creation/usage side (e.g., you should be legally able to generate those images of your neighbor, with any potential restrictions and legal problems only coming your way at the distribution stage).
I agree with a lot of what you're saying. The marginal damage any one person can do with access to high-quality image models is very limited, and most of the ways they could inflict damage in the first place either are illegal or easily could be made illegal.
The reason I use pollution as an analogy is that having it happen society-wide creates new problems that simply don't appear on an individual scale. What happens when someone builds a browser extension to let you porn-ify any image you'd like? What happens when schoolyard bullies can fabricate a compromising video as easily as they can make up a nasty rumor today? I think most people would prefer not to live in a society that works that way if they can avoid it.
If that’s the case then innovation in the US can be tossed out the window. Once that ball gets rolling it only ends in crony capitalism and protectionism for existing players.