Can you imagine if it rained ice cream? What if you bought 10 lottery tickets and every one won? What if all disease on earth spontaneously went away? I can imagine all sorts of wonderful things, but being able to imagine something is not a reasonable basis for expecting it to occur, especially when it contradicts what we already know about the world.
GPT-3 style language models assemble nonsense with similar statistical properties to the (hopefully) meaningful text in their training corpus, and that's it. At best, they are engines for contextually regurgitating chunks of real information, but in practice it's more like reaching for a volume from a lightly curated shelf of Borges' Total Library: self-cast shadows of information that roil and collapse into meaninglessness. The technique is fundamentally untrustworthy for any application with real stakes.
>GPT-3 style language models assemble nonsense with similar statistical properties to the (hopefully) meaningful text in their training corpus, and that's it.
GPT-3, I think, is most dangerous when it comes to reducing the amount of propaganda a human has to write in order to be 'taken seriously.' Long essays and long articles in major publications are taken seriously, but how many people read to the end? With GPT-3, a human propagandist can get away with writing only a headline and an initial paragraph; the filler text that's required to make the article 'weighty' enough to be taken seriously can be GPT-3. That's one of the first applications I see in terms of manufacturing consent and propaganda.
I wonder if there is work on using GPT-3 and others in the reverse direction: have it read the material and produce a summary/synopsis of it. Also, GPT-3 ought to be able to distinguish material generated by GPT-3 from material that wasn't, right?
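On the detection half, one direction that exists is to score how statistically "expected" each token of a text is under a language model, which is roughly what the GLTR tool does: suspiciously low perplexity hints at machine generation. A rough sketch of that idea, assuming the Hugging Face transformers library and the public GPT-2 weights (the model choice is illustrative, and this is a heuristic, not a reliable detector):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Lower perplexity = the model finds the text more "expected",
    # which (weakly) suggests machine generation.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return its cross-entropy loss.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# Illustrative use: compare scores between texts rather than
# trusting any absolute cutoff.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```

In practice this is a weak signal: humans writing formulaic prose also score low, and light paraphrasing defeats it.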
No, it isn't a parsing engine; it's a purely statistics-based engine. What it generates is based on patterns found in the text that was fed to it. It doesn't comprehend what it generates or the meanings of the words it outputs; all it understands is the statistical relationships between words. E.g., GPT ends a sentence when statistics say that the sentence it's constructing is not usually that long, and it ends it with a word or phrase that's statistically found at the end of sentences.
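To make "statistical relationships between words" concrete, here's a toy sketch in plain Python (the corpus is made up): a bigram model that picks each next word purely from counts of what followed that word in the training text. It "ends a sentence" only because the period statistically follows certain words, which is the kind of pattern-matching described above, just at a vastly smaller scale than GPT-3.

```python
import random
from collections import defaultdict

# Toy training corpus; no comprehension involved, only co-occurrence.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which words follow each word: pure statistics.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start="the", max_words=12):
    words = [start]
    while len(words) < max_words:
        # Sampling from the list is frequency-weighted, since
        # common successors appear in it more often.
        nxt = random.choice(following[words[-1]])
        words.append(nxt)
        if nxt == ".":  # "ends the sentence" only because '.' followed such words
            break
    return " ".join(words)

print(generate())
```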