Some people think the "apocalypse" A.I. safety crowd consists largely of individuals engaged in magical or religious thinking rooted in a love of fiction; so if we were to pick a 1960s analogy, it would be like having some people concerned with exploding cars, and others concerned that a lunar apocalypse will unleash Cthulhu, as foretold by H.P. Lovecraft.
"Some people" probably thought the same kind of thing about the nuclear-apocalypse people (or the CFC people). If there were such people, my honest assessment is that they were fools. One of the main reasons we didn't have a nuclear apocalypse is that people knew we could, and took active steps to prevent it.
There are a wide range of people concerned about the long-term issues in AI. I agree that some of the logic of people on the extreme end doesn't seem very sound. But there are loads of people who have a more measured take on things.
Essentially, I think there are two questions of fact for which we don't have good answers:
1. Is it possible for intelligence to go forward without limit? Is it physically possible for an AI to be as much smarter than us as we are smarter than chimpanzees, or ants?
2. How likely is it that, if we made such an AI, we could also induce it to leave humans in control?
If the answer to #1 is "yes", then consider what it would be like to interact with such an intelligence: it would continually be doing things of which we have no conception, and from our perspective it would seem as though the universe itself had turned against us.
The analogy I've heard for #2 is: "It may be that at some point our relationship to AI is like a herd of cows' relationship to a farmer. Maybe the farmer slaughters us, or maybe he keeps us around, but on his terms, not ours. Either way, we want to stop things before they get to that point."
Now, maybe the answer to #1 is "no"; or maybe the answer to #2 is "very likely". But I don't think we have rock-solid reasons for believing either, and so I think it only makes sense to proceed with caution.
The threat of nuclear weapons was an empirical fact, not a thought experiment.
The issue isn't whether we can imagine alarming thought experiments, but rather why we would take seriously someone whose field of "work" is "inventing thought experiments."