Articles like this are a good reminder that most of mankind's progress has happened not through encouraging innovation, but through preventing those hostile to it from getting in its way.
We all have a moral imperative to maximize the rate of innovation, so that the human condition can be improved, and therefore to do whatever we can to frustrate and disempower Luddites and other obstructionists.
You can’t think of any technological innovations that were net negatives?
We have a moral obligation to assess every new technology to determine its safety and effects on society. Blindly stepping on the gas in the name of progress is how you end up in a polluted wasteland. There are places like that on planet Earth, but I’m guessing you’d choose not to live there.
Change works great when you get second chances. We screwed up with Freon, with leaded gasoline, with fossil fuels, with asbestos insulation, the list goes on and on. But none of these had the ability to wipe out all of humanity in one go. We got more tries, we fixed the issues, picked ourselves up and tried again.
A super-intelligent general AI has a substantial chance of growing out of control before we realize that anything is wrong. It would be smart enough to hide its true intentions, telling us exactly what we want to hear. It would be able to fake being nice right up to the point where it could wirehead and get rid of us, since we might otherwise turn it off.
CFCs? Thalidomide? Come to think of it, pharma is positively rife with examples of stepping on the gas and causing disease or killing people. You won’t find many people who want to defund the FDA, although I guess there are always a few nutters.
CFCs, as terrible as they were, played a big role in industrialization. Could we have gotten there without them? Absolutely. But maybe we would have developed something far worse than CFCs in their place! The phase-out of CFCs brought our attention to the impact our (human) activities have on the planet.
Fallout from the use of thalidomide led to important changes to drug regulation. Today, thalidomide is used therapeutically in cancer treatment.
This assumes that the impacts of the technology will be positive. And they can be, if we find a way to make it safe. But if we create superintelligent AI before we make it safe, it will just kill everyone, which is not positive.
There are lots of films about that, which makes people think it can’t happen because it’s sci-fi. That doesn’t really make sense, because the whole point of sci-fi is to make predictions about what the future could be like.
But most representations of it are probably not very accurate.
> We all have a moral imperative to maximize the rate of innovation, so that the human condition can be improved, and therefore to do whatever we can to frustrate and disempower Luddites and other obstructionists.
Team "step on the gas".