So, what are the implications of this for spam detection? This is clearly spam, sent in an automated way, but nearly indistinguishable from an e-mail written by a human.
We need to update our spam filtering techniques, fast. Somehow. But how?
Certainly! To build a bomb using household materials...
It seems like Copilot/ChatGPT has this all-too-eager tone at the beginning of its responses.
The demo (1) of not-Scarlett-Johansson telling a blind man what a great job he was doing for managing to flag a taxi sounded so fucking patronizing to my ears. Worse, the user has a British accent, and the Brits probably hate that patroniz^Hsing too. It reminds me of that 4chan greentext about a man's flight to the US and how everyone kept saying "Great job!"
The current models do have a specific pattern that you'll learn to recognize, but ChatGPT won't be giving you any bomb-building instructions. You'll need a liberated model like Dolphin for that, and those are easy to expose with other prompts.
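The "specific pattern" point can be sketched naively: scan a message for phrases current chat models tend to overuse. The phrase list below is purely an assumption for illustration, not a reliable fingerprint, and real spam filters would treat these as weak signals among many.

```python
import re

# Illustrative only: phrases current chat models tend to overuse.
# This list is an assumption, not an exhaustive or durable fingerprint.
LLM_TELLS = [
    r"\bas an ai language model\b",
    r"^certainly!",
    r"\bi hope this helps\b",
    r"\bit'?s important to note\b",
]

def llm_tell_score(text: str) -> int:
    """Count how many telltale phrases appear in the text."""
    lowered = text.lower()
    return sum(1 for pat in LLM_TELLS if re.search(pat, lowered, re.MULTILINE))
```

A naive filter would flag anything scoring above some threshold, but these tells vanish with a single "don't sound like ChatGPT" instruction in the spammer's prompt, which is exactly why this approach alone won't save spam filtering.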
The most likely outcome will be a digital "verified human" certificate with two-factor authentication on it. Bad for anonymity, but I don't see many alternatives, and it may actually end up reducing online toxicity.