No, it's a legit concern. Both things will happen - there will be abuses, and there will be good uses. It will be a complicated mess, like viruses interacting with anti-viruses, a continuous war. My hope is that AGI, as many interacting agents, will learn to balance itself out.
Sure, there will be abuses, but not in this way, I don't think. GPT at this point isn't capable of creating novel computer viruses.
If you want to scrape random websites and have GPT hammer at them for old vulnerabilities, I think you could get that to work, but to what gain? You'd be spending a crap ton of cash on API requests and compute, and people already do this without GPT, obviously. The cost is probably not worth it for attackers.
Then, I'd hope OpenAI would have some way to detect this and shut these people down. I doubt they do right now, but that'd be my hope ...
We should regulate everything which could cause mass havoc: certain chemicals, radioactive isotopes, nuclear research, creating viruses.
Is AI in the same category? Some respectable people think so.
Isn't it silly to jump to these conclusions when you yourself admit you really don't know anything about the tech?