
Articles like this are a good reminder that most of mankind's progress has happened not through encouraging innovation, but from preventing those hostile to it from getting in its way.

We all have a moral imperative to maximize the rate of innovation, so that the human condition can be improved, and therefore to do whatever we can to frustrate and disempower Luddites and other obstructionists.

Team "step on the gas".



You can’t think of any technological innovations that were net negatives?

We have a moral obligation to assess every new technology to determine its safety and effects on society. Blindly stepping on the gas in the name of progress is how you end up in a polluted wasteland. There are places like that on planet Earth, but I’m guessing you’d choose not to live there.


I can't see any legitimacy to this philosophy.

There have been negatives to all advancements, and the solution to all has been further advancements.

The philosophy of "Too much change! Stop! I'm scared!" cannot prevail in a world where another community can say "go ahead, we won't...".

Which is an interesting point in itself. Perhaps the only thing that ultimately stops Luddism is nation state competition.


Change works great when you get second chances. We screwed up with Freon, with leaded gasoline, with fossil fuels, with asbestos insulation, the list goes on and on. But none of these had the ability to wipe out all of humanity in one go. We got more tries, we fixed the issues, picked ourselves up and tried again.

A super-intelligent general AI has a substantial chance of growing out of control before we realize that anything is wrong. It would be smart enough to hide its true intentions, telling us exactly what we want to hear. It would be able to fake being nice right up to the point where it could wire-head and get rid of us, because we might turn it off.


That's humanizing it. The only entity on this earth that's particularly keen on bulk killing humans is humans (the ones building this).


"The AI does not love you, nor hate you, but you are made of atoms it can use for something else"


What use does it have for us?


Depending on whether you think this AI works like a human:

  - it's like a pebble rolling down a hill; category error
  - exactly the same sense of "use" as the regular ol' human one


Even the most horrible innovations have eventually been found to be a net positive.


CFCs? Thalidomide? Come to think of it, pharma is positively rife with examples of stepping on the gas and causing disease or killing people. You won’t find many people who want to defund the FDA, although I guess there’s always a few nutters.


CFCs, as terrible as they were, played a big role in industrialization. Could we have gotten there without it? Absolutely. Maybe we could have developed something else far worse than CFCs in its place! The phase out of CFCs brought our attention to the impact our (human) activities have on the planet.

Fallout from the use of thalidomide led to important changes in drug regulation. Today it is used therapeutically in cancer treatment.


This assumes that the impacts of the technology will be positive. They can be, if we find a way to make it safe; but if we create superintelligent AI before we make it safe, it will just kill everyone, which is not positive.


I think that was a Hollywood movie.


And about 8000 science fiction novels. And the fact that an idea is obvious to a child does not mean that it is not true!


Don't worry. Arnold Schwarzenegger is not going to shoot you.


There are lots of films about that, which makes people think it can't happen because it's sci-fi. That doesn't really make sense, because the whole point of sci-fi is to make predictions of what the future could be like.

But most representations of it are probably not very accurate



