
I find it remarkable how the readers of this site are at the same time “worried about LLMs” and totally enthusiastic about what will inevitably become a no-human-in-the-loop Skynet. Sure, we’ll all get smashed with drones if we stop paying taxes or otherwise disobey, but at least our LLMs won’t accidentally praise Hitler or something.


We are on a venture capital website. Don't be surprised that when people read an article about killer drones, they just salivate at the sweet Pentagon money.


If we don’t end up needing it: enjoy the sweet Pentagon money. If we do end up needing it: enjoy the adoration of the people _and_ the money. Win-win.


You’re missing a couple of use cases there. Particularly the ones where these weapons, which can’t refuse an order like a meat sack soldier would, shoot at _you_. I’m not saying “don’t build weapons”, btw. By all means do. I’m saying that as far as risk is concerned, this is by far the riskiest direction imaginable.


If we’re making them smart enough to decide whether to shoot on their own, why wouldn’t we make them smart enough to refuse an unethical order (e.g. there’s like 80% civilians there, I’m not shooting a rocket at the target)?

Obviously they won’t, because the problem here is that the humans do not want the machine to question them. And since it’s only a matter of time before the bad guys have these anyway, I’m inclined to say it’s better the good guys (at least from my perspective) have them first.


IMO with defense the prisoner's dilemma is out in full force. Ideally nobody would build autonomous weapon platforms like this, but I'm under the impression that our opponents are already working on them as well. Building these ourselves turns the situation from a lose/lose (we lose because our enemy has these weapons and we don't, or we lose because the weapons kill us) into a potential win/lose (we win or avoid war because we have such weapons, or we may lose because the weapons turn against us).


Any rogue AI, if it proves to be a danger, will still need to maintain its servers somehow. So to be a danger it either needs to get humans to do what it wants, in which case it can bomb things just as well with manned fighters, or develop its own robots and manufacturing, in which case it can build its own fighters.

We ought to worry that these will let a General Ripper go rogue more easily, or let an adversary hack them, but I don't think this moves the needle at all on whether advanced AIs could be dangerous.


If the movie "Cube" has taught us anything (and I guess it hasn't), it's that it's possible to work on something without knowing what it's actually for. With some misdirection, an AI could probably run in a data center that is, on paper, doing something completely different. Do most people maintaining cloud servers today even know what they are running?


I think it's entirely coherent for someone to be worried about losing their place in the world but not worried about autonomous jet fighters.



