> GPT-3-style language models assemble nonsense with statistical properties similar to the (hopefully) meaningful text in their training corpus, and that's it.
GPT-3, I think, is most dangerous when it comes to reducing the amount of propaganda a human has to write in order to be 'taken seriously.' Long essays and long articles in major publications are taken seriously, but how many people read to the end? With GPT-3, a human propagandist can get away with writing just a headline and an initial paragraph; the filler text required to make the article 'weighty' enough to be taken seriously can be generated by GPT-3. That's one of the first applications I see in terms of manufacturing consent and propaganda.
I wonder if there is work on, or a way of, using GPT-3 and similar models in the reverse direction: have the model read the material and produce a summary/synopsis of it. Also, shouldn't GPT-3 be able to distinguish material generated by GPT-3 from material that wasn't?
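On the detection question, one rough approach (a minimal sketch, not a validated detector) is to score text with a smaller open model such as GPT-2 via the Hugging Face transformers library and flag suspiciously low perplexity, on the theory that machine-generated text tends to be more "predictable" to a related model. The threshold here is an illustrative assumption, not a calibrated value:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        # Average next-token surprise under GPT-2; lower = more predictable.
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()

    sample = "The quick brown fox jumps over the lazy dog."
    # 20.0 is a made-up cutoff for illustration, not a calibrated one.
    if perplexity(sample) < 20.0:
        print("suspiciously predictable; possibly machine-generated")

In practice this is unreliable, especially on short texts, and a propagandist can defeat it by sampling at higher temperature or lightly editing the output.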
No, it isn't a parsing engine; it's a purely statistical engine. What it generates is based on patterns found in the text it was fed. It doesn't comprehend what it generates or the meanings of the words it outputs; all it captures is the statistical relationships between words. For example, GPT ends a sentence when the statistics say that the sentence it's constructing is not usually that long, and it ends it with a word or phrase that's statistically found at the ends of sentences.
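To make that concrete, here's a toy sketch of the same idea in miniature: a bigram (Markov chain) model that picks each next word purely by its observed frequency after the previous word, with no notion of meaning. The corpus is a made-up placeholder:

    import random
    from collections import defaultdict

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat saw the dog .").split()

    # Count which words follow which: a pure frequency table, no semantics.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start="the", max_words=20):
        word, out = start, [start]
        for _ in range(max_words):
            word = random.choice(follows[word])  # sample by observed frequency
            out.append(word)
            if word == ".":  # the sentence ends when statistics emit a period
                break
        return " ".join(out)

    print(generate())

GPT-3 replaces the lookup table with a transformer conditioned on thousands of prior tokens, so its statistics are vastly richer, but the final step is the same: sample the next word from a learned distribution.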