
A large part of the problem for Twitter, Google, and Facebook is their ever-shifting goals for content moderation, set to appease their politically activist employees and media critics.

Something tells me Elon would have different goals that would be less of a moving target and easier to accomplish: fewer subjective targets like "hate speech" and more objective ones like true threats of violence, illegal activity, and actual bot detection rather than "people who disagree are bots".



No. Not relevant to the discussion. Gmail has no content moderation, but spam still gets through occasionally.


Spam volumes are also down dramatically due to conventional means: making it illegal and taking down the bot farms that were sending it. The introduction of DKIM and DMARC also made it harder across the board to be seen as a legitimate sender, and many-to-many emails are a huge red flag for spam filters but not even a concept on social media.
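To make the signals above concrete, here's a toy sketch (not Gmail's actual pipeline; function name and thresholds are made up) of scoring a message on the two signals mentioned: failed DKIM/DMARC authentication and a large, flat recipient list.

```python
# Toy spam-signal scorer: checks the Authentication-Results header for
# DKIM/DMARC passes and flags unusually wide many-to-many recipient lists.
# Illustrative only -- real filters weigh hundreds of signals.
from email import message_from_string

def spam_signals(raw_message: str, recipient_threshold: int = 20) -> list[str]:
    msg = message_from_string(raw_message)
    signals = []
    auth = (msg.get("Authentication-Results") or "").lower()
    if "dkim=pass" not in auth:
        signals.append("dkim-fail")
    if "dmarc=pass" not in auth:
        signals.append("dmarc-fail")
    recipients = (msg.get("To") or "").split(",") + (msg.get("Cc") or "").split(",")
    recipients = [r for r in recipients if r.strip()]
    if len(recipients) >= recipient_threshold:
        signals.append("many-to-many")
    return signals

raw = ("To: a@example.com\n"
       "Authentication-Results: mx.example.com; dkim=pass; dmarc=pass\n"
       "Subject: hi\n\nhello")
print(spam_signals(raw))  # []
```

Note how the many-to-many check has no analogue on social media, where one-to-many broadcast is the normal case.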

Spam is a dramatically easier problem, and has many more mechanisms to suppress it, both legal and technological.


Occasionally? Recently (maybe in the last 6 months) I have been seeing a huge amount of spam getting through, comparatively: maybe 10 a day make it to my inbox. Previously I rarely saw any.


In the last two weeks my Google Drive has been spammed with shared porn PDFs. They're not even being shared with the email address I use: they're being shared with my address minus the full stop in the middle, which Gmail ignores. I haven't looked into it much, but apparently people have been asking for years for an option to only allow docs to be shared by known contacts, and Google has ignored the request for a long time.
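The dot-less variant works because Gmail normalizes the local part of an address, ignoring dots (and treating anything after "+" as a tag). A hypothetical sketch of that normalization, not Google's code:

```python
# Sketch of Gmail-style address normalization: dots in the local part are
# ignored and "+tag" suffixes are stripped, so many spellings resolve to
# the same mailbox. Helper name is made up for illustration.
def normalize_gmail(address: str) -> str:
    local, _, domain = address.lower().partition("@")
    local = local.split("+", 1)[0].replace(".", "")
    return f"{local}@{domain}"

print(normalize_gmail("John.Doe+news@gmail.com"))  # johndoe@gmail.com
print(normalize_gmail("johndoe@gmail.com"))        # johndoe@gmail.com
```

This is why a share sent to the dot-less spelling still lands in the same account's Drive.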


Well, government bills always get filtered out for me. I agree with Gmail that they're spam, but the government didn't agree when I failed to pay a few times. So the opposite problem is even worse.


It is relevant actually. I worked on spam fighting on Gmail for a while and when I quit Google, I was invited to Twitter for lunch and (though somehow I wasn't actually told about this) to give an impromptu talk to their spam teams.

Because the guy who invited me sort of sprung the talk on me, I had no slides or anything, so it became a collection of vague thoughts plus discussions with their team (vague because I didn't want to discuss any trade secrets). One thing that became very clear was that they weren't thinking about bots much compared to the Google abuse teams, because they'd been re-tasked at some point to treat abuse as primarily meaning "humans being mean to each other". A significant amount of their effort was going into this instead, even though there is really little overlap in the technologies or skills the two problems require.

This was pre-2016, so the whole Russian-social-bots-gave-us-Trump hysteria hadn't started yet; instead Twitter was being declared toxic just for the behaviour of its users. Thus the term "bot" still meant actual spam bots. Since then various groups, primarily in academia but activist employees too, realized that because deleting bot accounts was uncontroversial, they could try to delete their political enemies by re-classifying them as "bots". For example, this Twitter developer in 2019:

https://reclaimthenet.org/project-veritas-twitter-hidden-cam...

“Just go to a random Trump tweet, and just look at the followers. They’ll all be like guns, God, Murica, and with the American flag and, like, the cross. Like, who says that? Who talks like that? It’s for sure a bot.”

Clearly, any abuse team that uses a definition of "bot" like that won't be able to focus on the work of actually detecting and fighting spam bots. If Musk bought Twitter and re-focused its abuse teams on classical spam-fighting work, it'd almost certainly help, given that Twitter didn't seem to be keeping the ever-shifting, overloaded meanings of "abuse" separate in its org structure.
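For contrast with the quote above, classical spam-bot detection looks at behaviour rather than vocabulary. One such signal is coordinated posting: many distinct accounts publishing near-identical content. A hedged sketch (names and thresholds are illustrative, not Twitter's actual logic):

```python
# Classical spam-bot signal: the same text posted verbatim by many distinct
# accounts in a short window. Purely behavioural -- no judgment about what
# the accounts believe or how they talk.
from collections import defaultdict

def coordinated_posting(posts, min_accounts: int = 5):
    """posts: iterable of (account_id, text) pairs. Returns the set of
    texts posted by at least `min_accounts` distinct accounts."""
    accounts_by_text = defaultdict(set)
    for account, text in posts:
        accounts_by_text[text.strip().lower()].add(account)
    return {t for t, accs in accounts_by_text.items() if len(accs) >= min_accounts}

posts = [(f"acct{i}", "Buy cheap followers now!") for i in range(6)]
posts += [("me", "lovely weather today")]
print(coordinated_posting(posts))  # {'buy cheap followers now!'}
```

Note that a detector like this would never flag an account merely for posting about "guns, God, Murica".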

Incidentally, trying to find the above quote on Google is a waste of time. Search for "project veritas twitter for sure they are bots" on Google and the links are almost all irrelevant; DuckDuckGo/Bing gets it right in the first result, no surprise. I don't believe for one second that that's a result of incompetence on the part of the Google web search teams.


> “Just go to a random Trump tweet, and just look at the followers. They’ll all be like guns, God, Murica, and with the American flag and, like, the cross. Like, who says that? Who talks like that? It’s for sure a bot.”

Hahah wow, this is so out of touch. Communities each have their own way of speaking (famously, /r/wallstreetbets has its own grammar, emoji, and lingo, but this applies to every community).

Side note: this is why minorities speaking with each other sometimes get misclassified (even by humans!) as spam.


How is spam blocking not content moderation?



