
I really, really hope that there aren't any people who think the way you've outlined. Technology has empowered small groups or even single individuals to create things that have the potential to change the course of civilization, so I for sure hope those individuals think twice about the potential consequences of their actions. How would you feel about people releasing a $100 'build your own Covid-variant' kit?


Once the cat is out of the bag, the problem exists. Worrying about how long exactly it takes for $irresponsible_person to make it slightly worse by reducing the barrier to access even further is, in my opinion, missing the point.

There are many examples of this.

- Non-proliferation folks who think they can actually rid the world of nukes. Will not happen.

- Does anyone seriously think they can stop human cloning, once it's technically feasible, from happening somewhere on the planet sooner or later? By fiat, by legislation, by moral appeals, etc? Will not happen. If clones can be made, clones will be made. Descriptive, not normative claim.

- AI-generated content has reached a point where we have to worry about a whole host of issues, the most obvious being more sophisticated fakes. "Please think of potential consequences", ad-hoc restrictions, self-imposed or otherwise, are moot in the long run. It's part of our world now.


> Non-proliferation folks who think they can actually rid the world of nukes. Will not happen.

It seems to me that you're shifting the goalposts here: non-proliferation has effectively reduced the number of countries with access to nukes. Or is worrying about the number of direct military conflicts between nuclear-armed powers an example of what you call 'missing the point'?


The nuclear non-proliferation treaty has eventual complete disarmament as a stated goal. So the subset of people who believe not just in preventing more countries from getting nukes, but in eventually getting that number to zero, are being unrealistic. The fewer the nuclear powers, the greater the incentive to cheat. The incentive is at its maximum when the number is 0 - get nukes and you rule the world. Until it goes to 2 again, and so on.

Larger point being: with some disruptive technology, like nuclear weapons, if it can be done, it will be done.


>Worrying about how long exactly it takes for $irresponsible_person to make it slightly worse by reducing the barrier to access even further is, in my opinion, missing the point

I disagree with the idea that restrictions shouldn't be put in place because 'the problem exists'. The problem exists but that doesn't mean measures can't be taken to keep it manageable. I don't think most people fall into the group you're describing, who want to stop the problem outright. Most just want to avoid exacerbating it.


>The problem exists but that doesn't mean measures can't be taken to keep it manageable.

Specifically when it comes to the problem of AI fakes, I'd rather invest effort in harm reduction - training better fake recognition systems - than in trying to stop people from abusing this technological advance with moral appeals, attempts to legislate it all away, or something as silly as hiding the code. I think mine is the more robust measure.


I really, really hope that there aren’t any people who think the way you’ve outlined.

AI image generation is not a build-your-own-weaponized-virus kit.

It’s a useful tool that can be used to produce creative expression. What people produce is up to them, and the fact that they might misuse their capacity for free speech isn’t an argument for curtailing it.


OP doesn’t sound like it’s talking exclusively about image generation. Sounds like a general, “I should be able to build, propagate, and use whatever tech however I want no matter the negative externalities.”


I think even the phrase "negative externalities" is overstating it. There's a big difference between "I push this button and now the woods behind my house are destroyed" and "I push this button and I have what looks like a photograph of some important person naked." Photo generating AIs are not a big deal IMO. We might be talking about these things more generally but I doubt we are talking about McNukes here.


The problem with extremely powerful forces (like new technologies) is that you can’t always predict what effects they’ll have.

This is doubly true with regard to technologies that seem not only powerful, not only adaptable to new domains, but also rapidly improving on both of those dimensions. I don’t know what is the right level or type of limitation, but there is nothing confusing or weird at all about wanting to be careful with such a technology.

If technology keeps advancing (it will), new developments will approach “looks kind of alarming” status faster and faster. This is because they will also approach “could destroy everything we know and love” status faster and faster.


At its core, the argument for caution can be articulated as "the utility and availability of this technology must be limited to incumbent actors in the industry for our protection", and that's very fishy. It's particularly fishy considering this technology cannot so much as break a fingernail or cut a blade of grass. Is it consequential? Obviously, or neither of our arguments would exist. Does it have the potential to hurt people? Only if those people let it. To me it's an overblown moral panic that's suspiciously convenient for the big players in the industry and software in general.


The scenario that worries people is less "photograph of some important person naked" and more "photograph of you naked". State of the art image tech is more than capable of allowing people to create convincing porn of their enemies (or creepy crushes). I don't know if that genie can be put back into the bottle, but it's hard to complain that researchers aren't interested in providing the genie as a service.


If someone wants to crank it to what amounts to a high-tech doodle of me doing naughty things to myself, I don't see how that's any of my business. There are people in the world who put real, legit porn of themselves on the internet; I'm sure they'd find this fearmongering about fake pictures and videos of themselves on the internet laughable. It is about as close to inconsequential as you can get - posting yellow pages information on Twitter is far more damaging.


The problem I would say here is you're thinking in binary, but real life doesn't operate this way.

Let's take a current potential problem: a low-powered application capable of facial recognition. You can now strap that onto any number of dumb weapons and you've created a smart weapon.

In itself it's not a problem, until it starts happening a lot. If you think like a house cat, you tend to think that society owes you its existence and that you're king of the hill. But if weapons proliferation occurs, all those ideas of "I have rights" go right out the window, and this loss of rights will be supported by the masses who don't want to get droned in the head out on a date. The tyranny you want to avoid will be caused by the pursuit of absolute freedom.

As technology becomes more complex the line will blur even further. AI as a build your own 'terrible thing' will happen. Physics demands it, everything is just information at the end of the day.

Now it's up to you to avoid the worst possible outcomes between now and then.


Murder is already illegal. Laws against strapping facial recognition to a drone and killing people won't actually prevent it from happening, because this is more like a Unabomber-style event where the perpetrator won't care about the law and normal people won't be doing this anyway.


Heh, I like how you handwave away American judicial precedent like it doesn't even exist, along with the multitude of laws enacted against everyone because of the acts of a small group. Do we want to go into all the online protection acts enacted recently?

Even actors like the Unabomber had a huge impact on things like bomb detection in mail and on airplanes. Now imagine a modern Unabomber who, instead of attacking randoms, went after senators. The moment the class protected by wealth and power comes under attack from AI technologies expect a raft of laws limiting and restricting them to be enacted.


> The moment the class protected by wealth and power comes under attack from AI technologies expect a raft of laws limiting and restricting them to be enacted.

That's exactly why your rental histories at places like Blockbuster are, by law, confidential: a politician had their rental history leaked. Once a deepfake of a politician gets enough traction, said politician is going to begin rallying support against AI.


It's already the case.

CRISPR has changed a lot of things and makes it possible for an outsider, with $10,000 and a little dedication, to alter the genome of just about any living organism.

https://www.ft.com/content/9ac7f1c0-1468-4dc7-88dd-1370ead42...


Right, and with every new technology that enters this “high power, high availability” domain, we all carry more civilizational risk.


Is it really true that we're more at risk now than our ancestors were? They had smaller numbers, less access to life-saving technology, worse conditions to grow up in… I can accept an argument that since the advent of the Cold War there may have been more risk than previously, but even that is conjectural.


Fair question. The way I see it there was never a moment before the Cold War that a single person’s decision could even have a chance at destroying human civilization (if not the species itself). Since the Cold War, that has been true every single moment of every day, and there are probably hundreds or thousands of individuals who are capable of making a decision that will trigger nuclear annihilation (check out Daniel Ellsberg’s Doomsday Machine for an alarming inquiry into this topic).

Now we have the additional risk of man made biological risk amplified by cheaper and cheaper genetic engineering as well as natural biological risk amplified by a nonstop global travel network. Neither of these risk vectors existed til recently either.

IMO the only analogous risks pre-cold war were what, asteroid strike or volcanic event? Those are rare and, more importantly, not modulated upwards by any human action or human system. Nuclear and bio risk probably only climb with more people and more technological advancement.


Yet the risk of non-civilization continues to exceed the risk of civilization.


> How would you feel about people releasing a $100 'build your own Covid-variant' kit?

Not very good but:

a) the people who currently have this tech are not what I'd call trustworthy so why should I leave dangerous tech only in the hands of dangerous people?

b) it would probably kickstart a "build your own vaccine kit" industry


That you even express the problem like this shows an impressive amount of bias. By calling them dangerous people you are actually implying malice. What makes you believe people with access to biomedical tech are inherently more malicious than the general populace? What makes you believe there aren't far more malicious people who do not yet have access to such tech?

I think this is just fear of the unknown at work. Biomedical knowledge is complicated and requires effort to learn, so most people treat it as a known unknown, and therefore something to be feared. Some people do have such knowledge, so they are to be feared, because who knows what nefarious intentions they have and what conspiracies they are part of. Therefore they are dangerous people using dangerous tech.

Were the physicists who discovered how to split the atom also dangerous people?


> Were the physicists who discovered how to split the atom also dangerous people?

In the ordinary meaning of the word? Yes. That's why they were sworn to secrecy.

Not the same as saying they were immoral or wrong to have worked on the Bomb, that's a different debate, but in terms of sheer effectiveness they were incredibly dangerous.

An army is dangerous. That's how it works.


To you and the sibling comment: I would argue that the comment I responded to conflated dangerous with immoral.

Danger is part of the domain of threat modelling. And when doing threat modelling, the morality of the opponent is a distraction.

However, in the domain of propaganda, threats and immorality - and assigning the latter to the former - go hand in hand.


I didn't realise my career as a propagandist was going so well. Regardless, have the people involved in gain of function research of coronaviruses been shown to be trustworthy?

I would argue not, given the evidence[1].

[1] https://theintercept.com/2021/09/09/covid-origins-gain-of-fu...


Define trustworthy.

What kind of trust did you place in them that was now broken?

I at the very least believe they would not desire to expose themselves and their loved ones to pathogens.

If the origin was a leak, then, sure, we should see if any biosecurity protocol was broken and why, and design better protocols. But posturing that the researchers were not "trustworthy" is not helpful.

You call them untrustworthy and dangerous people and by that you are implying malice. Why? And what makes you believe outside of that small circle there aren't people far more malicious?

If creating diseases becomes so easy anyone can do it, we will see the age of biological ransomware. I am certain there are people far more malicious and far more immoral than any of the researchers who worked on this.


> Define trustworthy.

Come on, I'm not a student debating in the common room, this is just silly.


It is the key word on which your argument rests and you used it in every comment in this thread.

Since you decline, I believe it is useless to continue the discussion.


I'm glad we were able to come to an agreement.


Danger is not a moral judgement, but a rational one. Iranian nuclear scientists weren't assassinated for no reason, and I highly doubt anyone involved had serious moral accusation to make against them. The U.S. also banned export of cryptography in the past (and still does in some cases), solely based on strategic grounds, not because mathematicians are "bad people". People in positions of or close to power and/or possessing certain knowledge are dangerous, not because they are necessarily morally repugnant, but because of their privilege and ability.


This is just beyond obtuse. More people having access will mean more untrustworthy people having access, which means more malicious action. (Unless you want to set up some toy scenario where the only bad-faith actors on the planet are biochemistry researchers.)

As for building your own vaccine: even large nations were not able to develop effective ones. It’s easier to put a bullet in someone than it is to take it out.


Pandora's box is open; the technology will spread regardless of what you do. You want to limit tech that will become cheaper and easier to build through legitimate, well-meaning research, and to stop knowledge from spreading, even as we wish for more people to enter this line of work. "More people having access will mean more people who are untrustworthy having access" is entirely right, but whether "which means more malicious action" follows is up for debate, as we've seen with violent action around the world via many different types of weaponry.

In this talk[1] at DEF CON, John Sotos pleads with hackers to put their time into learning how to combat biological weapons. He points out that the tech will move forward to being able to target more and more specific populations. Like any other weapon, knowing that your enemy has it too and can hurt you or those you care about with it is a highly effective deterrent.

> As for building your own vaccine. Even large nations were not able to develop effective ones. It’s easier to put a bullet in someone than it is to take it out.

Two things here. Firstly, trauma surgery and techniques from the military and places where bullet wounds are common have benefited the rest of us for when we get into scrapes of even a non-violent kind.

Secondly, defence does not always lag behind attack, you assume that the future would look like now but that is to ignore the history of any tech of such nature.

[1] https://www.youtube.com/watch?v=HKQDSgBHPfY


this is the gun debate rephrased


There is a fundamental difference in the US: access to guns is a constitutional right.


Freedom to speak and publish, even dangerous ideas, is also a right. Beyond the US.


True, but (in the US at least), those rights also stop when they come up against an imminent threat. See the trope about falsely yelling “fire” in a crowded movie theater. The issue here seems to be accurately defining that risk.


It's really different IMO.

"Fire!" (falsely) in crowded theatre: Specific. Beneficial outcome unlikely and difficult to even imagine. Harmful outcomes nearly certain.

Powerful AI codebase or service: Generic. Endless beneficial and harmful outcomes easily imagined.


>Powerful AI codebase or service: Generic.

Generic means it can do a nearly unlimited list of things good and bad right?

Just like a human can do a list of nearly unlimited things?

Humans, because they can do both good and bad have laws they must follow if they do bad, right?

Then what are you suggesting for AI?


That's a pretty key difference and a good point. But we still regulate stuff with a similar dichotomy.

Nuclear material can be used to treat cancer. But it can also be used to make weapons. We regulate both.


The "fire in a crowded theater" case was overturned as unconstitutional prior restraint on speech.


That’s because the original case was not actually about inciting an imminent threat. It was about speech regarding a war draft. The theater was an analogy in the case opinion.

Limits to speech were still upheld if that speech could reasonably incite “imminent lawless action”.


Incitement to imminent lawless action[1] was a test applied from 1969 onwards (from Brandenburg v. Ohio[2]); the test used in the case you're referring to (Schenck v. United States (1919)[3]) was that of clear and present danger[4].

> Justice Oliver Wendell Holmes defined the clear and present danger test in 1919 in Schenck v. United States, offering more latitude to Congress for restricting speech in times of war, saying that when words are "of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent....no court could regard them as protected by any constitutional right."

That was ostensibly for sending literature to recently conscripted soldiers suggesting that the draft was a form of involuntary servitude that violated the Thirteenth Amendment.

Whereas, incitement to imminent lawless action is defined as:

> Advocacy could be punished only "where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action."

That test basically redraws the law so it fits, once again, with that of common assault and breach of the peace. Still, I'm not sure what relevance all of this has to the subject at hand, unless we're going to end up at whether something is legal or not, or, even more absurdly, whether there's a war or not. Those are neither very interesting nor compelling arguments, especially as there are no such kits yet and no law regarding such kits (unless we concede that they may well be covered under the 2nd Amendment, as it says arms, not guns).

[1] https://mtsu.edu/first-amendment/article/970/incitement-to-i...

[2] https://mtsu.edu/first-amendment/article/189/brandenburg-v-o...

[3] https://mtsu.edu/first-amendment/article/193/schenck-v-unite...

[4] https://mtsu.edu/first-amendment/article/898/clear-and-prese...


We seem to be saying the same thing. My previous reply was in response to the statement that the 1919 ruling was overturned. Rather than implying it was completely overturned in 1969, I'm saying that it was upheld in the specific instances where imminent danger is present.

>Still, I'm not sure what relevance all of this has to the subject at hand

The point is rights exist. And to put a limit on a right, you must show a clear and imminent risk. I think you got a little wrapped around the axle on the 2A piece and missed the connection to the article at hand.

When you equate code with free speech, there will be people who say certain code is dangerous enough to be limited in that regard. Meaning, a discussion about regulating code is apropos, even though many people will disagree about the threshold of what constitutes a credible risk.


It was clear what your response was about, however it was incorrect. Not only did you provide the wrong test for the wrong cases, cases after that 1919 case used the bad tendency test, up until the 1969 case moved to incitement to imminent lawless action. It's not true to say that anything was upheld due to imminent danger before 1969.

If you're going to correct others for misstating the facts and reasoning of US Supreme Court judgements then I think it only fair that others may do the same to you.


But that fact doesn’t matter in the actual discussion because the debate is the same but the implementation is different. In the US the “get rid of guns” stance is just restricting their use to the maximum extent allowed by the constitution and making them as close to de facto banned as possible.


The fact that there is a constitutional right puts pretty strong limits on where that line is drawn though. I think those guardrails make it a fundamentally different perspective.


Because the gun debate has the same interesting facets that most people ignore, things like ignoring presumption of innocence, what to do about technology that will inevitably become more efficient, cheap, and easy to produce, among other things.


The sole purpose of guns is to harm/kill. Not comparable to ML.


the comment I’m replying to is talking about a DIY coronavirus kit


> it would probably kickstart a "build your own vaccine kit" industry.

This is a scenario for a dystopian science fiction novel, as opposed to a rational plan for our children's future.


One of the co-founders of BioNTech had this to say[1]:

> “What we have developed over decades for cancer vaccine development has been the tailwind for developing the COVID-19 vaccine, and now the Covid-19 vaccine and our experience in developing it gives back to our cancer work,” Tureci said.

[1] https://nypost.com/2022/10/19/covid-vax-makers-say-cancer-va...


This is beside the point in the context of your reply to mckirk. Even the most vigorous supporters of 'gain-of-function' research are not proposing that the resulting pathogens should be released into the wild in order to promote medical research.


Nowhere did I write or imply that there was intent in the causation. You get a gun, I might buy a gun, that does not imply that your intent was for me to buy a gun.

Do you think the advances in medical techniques for treating gun wounds were the intent of gangbangers when they shot each other?


I am at a loss to see how this could be a pertinent reply, but let me make something clear: my 'gain-of-function' comment is intended to show that your "it would probably kickstart a 'build your own vaccine kit' industry" is not a useful response to mckirk's scenario, even in the unlikely event that this would actually happen. The fact that no-one in their right mind would suggest releasing enhanced viruses into the wild as a means to foster medical development shows that the countervailing consequence you suggest will occur does not work as a response to mckirk's concern.


> pathogens should be released into the wild in order to promote medical research

Do A in order to promote B requires intention. It doesn't matter that no one is doing that because it's a straw man, as I'm not arguing it either. What I am saying is:

If A then B will occur (probably and with increasing likelihood).

Quite different.


My replies to you do not depend on or imply that you proposed any course of action, or that anyone else has actually done so, either. You have quoted me out of context in order to give the impression that I have.


> You have quoted me out of context in order to give the impression that I have

My heart is full of bad intentions, that must be it, it can't be that I simply found your logic to be wanting.

I would like you to tell me how I can quote you out of context when your replies are just above, that would be interesting. This thread is the context, I doubt you even need to scroll from my quote to see yours in full.


I have no definite opinion as to what your intentions are, but your argument is, indeed, wanting.

And what you did is, indeed, quoting out of context. The whole issue with quoting out of context is that, when you look at where the quote was taken from, you can see that it does not actually support the claim that it allegedly justifies. When the original is right there for anyone to check, it merely raises the tangential question of why the quoter thought the argument would succeed in the first place.


Tell someone from 1950 about modern cell phones, tracking in them and online, tracking in car ECUs, tracking all purchases, with companies listening in on everything said at home, and that's their dystopian novel!


We have seen your scenario, and we have also had, in the last three years, a taste of brigandish's scenario, so I feel people can make up their own minds about whether they are usefully comparable.

And if you do think they are comparable, note that the equivalent of a "build-your-own vaccine kit" in your scenario has not materially improved the situation (right now, "Google Has Most of My Email Because It Has All of Yours" is at position 8 on the HN front page.)


It's also a dystopia according to our current sensibilities. Or at least for a plurality of people. Even the people who don't seem to care tend to react adversely once they understand how it can affect them.


>so why should I leave dangerous tech only in the hands of dangerous people?

because handing it to everyone doesn't make things better? I don't like that Putin has nukes, but it's much better than Putin and every offshoot of Al-Qaeda having nukes.

Civilization-ending tech in the hands of powerful actors is usually subject to some form of rational calculus. Having it in the hands of everyone means basically it's game over. For a lot of dangerous technologies there is no 'vaccine' (in time).


> Having it in the hands of everyone means basically it's game over.

It might mean it's time to face up to more important questions, like why every offshoot of Al-Qaeda wants to use nukes and then countering that.


I do not believe you are arguing in good faith but I will answer.

Nukes are very, very effective at creating terror. Terror is somewhat effective at compelling almost any action that requires the participation of others. Ergo, nukes are effective at threatening others to do what you want.

Want to counter that, fine. Genetically engineer every person to be invulnerable to radiation and explosions and the desire to use nukes diminishes because they are no longer as effective.

Sayonara.


> I do not believe you are arguing in good faith

This isn't Twitter, please keep this juvenile nonsense for there.

As to your "answer":

> Genetically engineer every person to be invulnerable to radiation and explosions and the desire to use nukes diminishes because they are no longer as effective.

aside from it approaching word salad, I was referring to the deeper causes of violence. I would trust (there's that word again, I hope you can cope this time) the Dalai Lama with any weapon known to man and any only dreamt of, because, as he says[1]:

> “Real gun control must come from here,” the Dalai Lama said, pointing at his heart.

How you failed to deduce that with your enhanced powers of insight into me, I cannot fathom.

[1] https://www.sfgate.com/politics/article/Dalai-Lama-says-real...


What questions would you ask to decide if someone is a trustworthy steward of that technology?


Did you fund gain of function research at the labs in Wuhan?

Do you know what they found?

Were you aware of the low level of biosecurity protocols on the site?

Things like that ;-)


These would not answer whether someone behaves responsibly with dangerous research. It answers whether someone had any connection to Wuhan.

You can find plenty of people who would answer all "no"-s and would be far more destructive if they had access.

Would you also claim that no one who worked at Chernobyl was trustworthy? This is just guilt by association.


> These would not answer whether someone behaves responsibly with dangerous research.

Let's see.

“Did you fund gain of function research at the labs in Wuhan?”

This speaks to trustworthiness and responsibility/recklessness.

“Do you know what they found?”

This speaks to, again, trustworthiness and transparency.

“Were you aware of the low level of biosecurity protocols on the site?”

This speaks to, again, trustworthiness and responsibility/recklessness.

> You can find plenty of people who would answer all "no"-s and would be far more destructive if they had access.

It shows that those with the technology now are not trustworthy, going forward I might have different questions but we should start where we are, not by providing an apologetic and condoning poor behaviour.

> Would you also claim that no one who worked at Chernobyl was trustworthy?

Did they fuck up? Were they part of an organisation that fucked up? What did they do to stop the incredible fuck up from happening?

> This is just guilt by association.

Of whom to what?


Again, your 3 questions do not provide any information with regards to trustworthiness. You have just decided that the Wuhan Lab is an epicenter of evil and therefore anyone tangential is untrustworthy. There is no logical connection, you filled it in.

With regard to Chernobyl, other than Anatoly Dyatlov (who can directly be ascribed blame and served prison time), it is really hard to blame any of the people involved. Yet there were all kinds of known and unknown stresses and defects in the system that resulted in a tragedy. Does that mean we should ban nuclear energy?


> You have just decided that the Wuhan Lab is an epicenter of evil and therefore anyone tangential is untrustworthy.

I'm so glad you were able to read my mind, perhaps you could open source your mind reading device, or would that be too dangerous to share to the public?

What I have decided is that there is evidence of lies by people involved in the funding and the implementation of the research at Wuhan, there really is no need for your clumsy straw men, especially when you can read my mind.

> Does that mean we should ban nuclear energy?

I'm the one arguing for the further distribution of technology, are you arguing with yourself now or did your mind reading device go on the blink?


Human life is quite fragile



