Hackers never had a very cohesive or consistent ideology or moral framework. We heard nonstop about the exploits of people funded by Cold War military pork projects that eventually got the plug pulled, but antipathy and mistrust toward the powerful, and a belief in the power of knowledge, were recurrent themes nonetheless
So why is it a surprise that hackers mistrust these tools pushed by megacorps, that also sell surveillance to governments, with “suits” promising other “suits” that they’ll be making knowledge obsolete? That people will no longer need to use their brains, that people with knowledge won’t be useful?
It’s not Luddism when people with an ethos of empowering the individual through knowledge resist these forces
Just taking what people argue for on its own merits breaks down when your capacity to read whole essays or comment chains is so easily overwhelmed by the speed at which people put out AI slop
How do you even know that the other person read what they supposedly wrote themselves, and that you aren’t just talking to a wall because nobody even meant to say the things you’re analyzing?
Good faith is impossible to practice this way. I think people need to somehow prove that the media was produced in good faith before it can reasonably be analyzed in good faith
It’s the same problem as with 9000 slop PRs submitted for code review
I've seen it happen to short, well-written articles. Just yesterday there was an article discussing the author's experiences maintaining his FOSS project after gaining a fair number of users, and of course someone in the HN comments claimed it was written by AI, even though there were zero indications it was, and plenty of indications it wasn't.
Someone even argued that you could use prompts to make it look like it wasn't AI, and that this was the best explanation for why it didn't look like AI slop.
If we can't respect genuine content creators, why would anyone ever create genuine content?
I get that these people probably think they're resisting AI, but in reality they're doing the opposite: these attacks weigh far heavier on genuine writers than they do on slop-posters.
The blanket bombing of "AI slop!" comments is counterproductive.
It is kind of a self-fulfilling prophecy, however: keep it up and soon everything really will be written by AI.