
The problem here isn't resisting those forces; that's all well and good.

The problem is the vast masses falling under Turing's Law:

"Any person who posts a sufficiently long text online will be mistaken for an AI."

Not usually in good faith, however.



I don’t know how we’ll fix it.

Just taking what people argue for on its own merits breaks down when your capacity to read whole essays or comment chains is so easily overwhelmed by the speed at which people churn out AI slop.

How do you even know that the other person actually read what they supposedly wrote? You might just be talking to a wall, because nobody ever meant to say the things you’re analyzing.

Good faith is impossible to practice this way. I think people will somehow need to prove that a piece of writing was produced in good faith before it can reasonably be analyzed in good faith.

It’s the same problem as 9000 slop PRs submitted for code review.


I've seen it happen to short, well-written articles. Just yesterday there was an article discussing the author's experience maintaining his FOSS project after it gained a fair number of users, and of course someone in the HN comments claimed it was written by AI, even though there were zero indications it was and plenty of indications it wasn't.

Someone even argued that you could use prompts to make AI output look human, and that this was the best explanation for why the article didn't read like AI slop.

If we can't respect genuine content creators, why would anyone ever create genuine content?

I get that these people probably think they're resisting AI, but in reality they're doing the opposite: these attacks weigh far more heavily on genuine writers than on slop-posters.

The blanket bombing of "AI slop!" comments is counterproductive.

It is something of a self-fulfilling prophecy, however: keep it up and soon everything really will be written by AI.



