I feel like this is a good educational goal but a very poor execution.
We're meant to assume the correct sentences were written by humans and that the AI adds glaring factual errors. I don't think it's possible at this point to tell a single human-written sentence from an AI-written sentence with no other context, and it's dangerous to pretend it is this easy.
Several of the AI images included obvious mistakes a human wouldn't have made, but some of them also just seemed like entirely plausible digital illustrations.
Oversimplifying the detection of generative AI risks breeding overconfidence that makes you even easier to fool.
Loosely related anecdote: A few months ago I showed an illustration of an extinct (bizarre looking) fish to a group of children (ages 10-13ish). They immediately started yelling that it was AI. I'm glad they are learning that images can be fake, but I actually had to explain that "Yes, I know this is not a photo. This animal is long extinct and this is what we think it looked like so a person drew it. No one is trying to fool you."
Kind of reminds me of junk forensic fire science. "Slop Detective" might have been nice in 2022; now it's slop itself. Maybe this is an old link? If someone just published this in the last 90 days, they are an idiot.
There's a lot of anti-AI sentiment in the art world (not news), but real artists are now being actively accused of using AI and getting kicked off Reddit or wherever. That tells me there is going to be zero market for 100% human-created art, not the other way around.