1. The Twitter board of directors has become a status thing. It's like a country club. They don't have real skin in the game; they don't own much stock. They're not aligned with the shareholders. It's about status and cultural power.
2. Why should we believe that the company will become more valuable when the stock price is where it was way back in December 2013? Compare to how the S&P or Tesla did during this period. Twitter has languished for a long time.
3. They should put it to shareholder vote.
4. One AI engineer from Tesla could solve Twitter's bot and spam problem.
5. The elites have somehow inverted history so they now believe that it is not censorship that is the favored tool of fascists and authoritarians, even though every fascist and despot in history used censorship to maintain power, but instead believe free speech, free discourse, and free thought are the instruments of repression.
> One AI engineer from Tesla could solve Twitter’s bot and spam problem
I think you underestimate how hard the problem is, or you don’t fully understand how it works. The scale at which Twitter operates makes it near impossible to take down spam in real time. It’s not that Twitter isn’t working on it; they are. What you see is a much smaller percentage of the actual spam generated, and the reason you see it at all is that users are creative in circumventing all the preventive measures Twitter takes.
To think that a company that has failed to deliver self-driving for nearly a decade could solve "content moderation", one of the hardest problems on the internet, one that every single one of the big companies (who, by the way, hire the best talent) has been struggling with, is truly a shocking statement to me.
If Google and Facebook couldn't solve it, why could a Tesla engineer of all people...
The idea that an AI engineer for self driving could be trivially retasked to content moderation is fairly laughable as well.
That's not to say there aren't overlapping skill sets, but the AI tech involved is quite distinct (although there is overlap, I believe some of the latest perception research is trying to adapt transformers from NLP to replace CNNs). Many of my colleagues are AI/ML engineers working on the self driving problems (perception, prediction, planning) and they're all super smart, but they'd still take a while to get up to speed in another area.
I have no idea why a serious business would hire a non-NLP specialised engineer to solve content moderation (they'd maybe hire someone who was interested in changing sub-field, but not to build out the backbone of their teams).
Also, to think that an engineer who intentionally joined a company to work on AI self-driving tech wouldn’t immediately quit when forced to move over and work on Twitter content moderation is laughable. They’d find another job faster than Musk changes moods.
I also find it "laughable" how literally everyone is taking the comment. They weren't saying they'd just take some computer-vision expert and retool him to fix all of Twitter's spam problems.
Spamming "$$$ $TSLA to the moon! Get latest tips on my free Discord bit.ly/ocozxc #TWTR #GOOG #AMC #GME $$$" thousands of times for days before suspension is hardly the hardest problem on the Internet
Heck, you could even just limit the number of hashtags in a single tweet (say, 2) and fix 95% of Twitter's spam problem without harming legitimate users in any way
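As a quick sketch of how simple such a rule would be to implement (the limit of 2 is the hypothetical number above, not a tuned value):

```python
import re

HASHTAG_RE = re.compile(r"#\w+")
MAX_HASHTAGS = 2  # hypothetical limit from the comment above; would need tuning

def looks_like_hashtag_spam(tweet: str) -> bool:
    """Flag tweets stuffed with more hashtags than the allowed maximum."""
    return len(HASHTAG_RE.findall(tweet)) > MAX_HASHTAGS

spam = "$$$ $TSLA to the moon! #TWTR #GOOG #AMC #GME $$$"
legit = "Great launch today #SpaceX"
```

Whether such a blunt rule really fixes "95%" of the spam without collateral damage is exactly what the replies dispute.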
Uh, wouldn't spammers just...stop using lots of hashtags? Part of what makes the problem difficult is that spammers are agents who respond to the techniques you use to stop them.
1) I think Elon Musk's angle on Twitter spam is rather different than ours. Look at replies under his tweets.
2) I am not an ML engineer, but still a software engineer and in my opinion the problem of self driving cars is at least two orders of magnitude harder than the problem of Internet spam. Maybe even three orders of magnitude.
3) We actually know that the problem of spam has been successfully solved by Google. When was the last time you saw a spam message in Gmail? I see one in my inbox approximately once a month. There are about 8 messages in my spam folder per day, so that works out to roughly 0.4% false negatives. This is a great result.
4) Social networks are not interested in fighting spam; it would just make their metrics worse. All they need is to keep it at a moderately acceptable level, so users won't flee in droves.
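For what it's worth, the arithmetic behind the ~0.4% false-negative figure in point 3:

```python
# ~1 spam message reaches the inbox per month; ~8/day land in the spam folder.
missed_per_month = 1
caught_per_month = 8 * 30  # ~240
false_negative_rate = missed_per_month / (missed_per_month + caught_per_month)
print(f"{false_negative_rate:.1%}")  # prints 0.4%
```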
We're not talking about the hardest content problems on the internet. Look at any of Elon Musk's (or anyone in crypto's) popular tweets and you'll see a flood of incredibly obvious bots that poorly mimic his account, spamming some shit coin or advertising a BTC address.
Also, you're really understating how good Tesla's self-driving is. It's not full self-driving, but it's a real achievement.
Yes, and Youtube and Facebook both have similar problems. There is survivorship bias at play here: you see the bots that manage to get past the spam protection, but never the ones that don't. For all you know, Twitter could be blocking 99% of the spam. Of course, as long as there's money in it (and there is), spammers will constantly come up with new and unexpected ways to bypass your systems. It's a never-ending battle, and if you think it's trivial, you are the one understating how hard the problem is.
I never claimed Tesla self-driving wasn't an achievement, but clearly they underestimated how hard that would be themselves, so I wouldn't be surprised if they'd do the same for content moderation.
> Youtube and Facebook both have similar problems. There is survivorship bias at play here: you see the bots that manage to get past the spam protection, but never the ones that don't. For all you know, Twitter could be blocking 99% of the spam.
If you spend 5 seconds on YouTube and Facebook and compare that to Twitter, you'll see a massive difference.
The spam bots on Twitter are incredibly unsophisticated. It's literally bots with the same profile picture and name as the person they're trying to spam.
On the other hand, YouTube spam bots are actually incredibly sophisticated. They're using GPT-3 or some language model to generate text and reply to each other. Like, sometimes I'll read a comment and not be sure if it was a spam bot or not.
Obviously YMMV, but in 2021 every tech video I commented on would get these replies, all with 18+ profile pictures. Clicking through, every account had the exact same "playlist" with a porn site ad inside, and the channel header had the exact same website link. I kept getting these for months.
There's a substantial qualitative difference between what gets through Youtube's and Facebook's filter, versus what gets past Twitter. They're not even practicing the current state of the art, as practiced by their peers.
I know there was one instance when an engineer using an internal build got fired for vlogging it (which is not at all unusual, Apple would also fire you for showing off an internal build).
Tesla FSD is level 3. You still need to be fully aware and hands on the wheel. Level 4 is when no one needs to be at the driver seat, which is what Cruise and Waymo are doing.
Quantity != Quality. Just because McDonalds serves billions of people doesn't magically make it good food.
A large part of the problem for Twitter, Google, and Facebook is their ever-shifting goals for content moderation, adjusted to appease their politically activist employees and media critics.
Something tells me Elon would have different goals that should be less of a moving target and easier to accomplish: fewer subjective targets like "hate speech" and more objective ones like true threats of violence, illegal activity, and actual bot detection rather than "people who disagree are bots".
Spam volumes are also down dramatically due to conventional means, like making it illegal and taking down the bot farms that were sending it. The introduction of DKIM and DMARC has also made it harder across the board to be seen as a legitimate sender, and many-to-many emails are a huge red flag for spam filters, a concept that doesn't even exist on social media.
Spam is a dramatically easier problem, and has many more mechanisms to suppress it, both legally and technologically.
Occasionally? Recently (maybe the last 6 months) I have been seeing a huge amount of spam getting through: comparatively speaking, maybe 10 a day make it to my inbox. Previously I rarely saw any.
In the last two weeks my Google Drive has been spammed with shared porn PDFs. They're not even being shared with the email address I use; they're being shared with my address minus a dot in the middle, which Gmail ignores. I haven't looked into it much, but apparently people have been asking for years for an option to only allow docs to be shared by known contacts, and Google has ignored this for a long time.
Well, government bills always get filtered out for me. I agree with Gmail that they're spam; the government, however, didn't agree when I failed to pay a few times. So the opposite failure mode is even worse.
It is relevant actually. I worked on spam fighting on Gmail for a while and when I quit Google, I was invited to Twitter for lunch and (though somehow I wasn't actually told about this) to give an impromptu talk to their spam teams.
Because the guy who invited me sort of sprung the talk on me I had no slides or anything, so it became a collection of vague thoughts + discussions with their team (vague because I didn't want to discuss any trade secrets). One thing that became very clear was that they weren't thinking about bots a whole lot compared to the Google abuse teams, because they'd been re-tasked at some point to consider abuse as primarily meaning "humans being mean to each other". A significant amount of their effort was going on this instead, although there is really little overlap in technologies or skills needed between problems.
This was pre-2016 so the whole Russian-social-bots-gave-us-trump hysteria hadn't started yet, instead Twitter was being declared toxic just due to the behaviour of its users. Thus the term bot still meant actual spam bots. Since then various groups, primarily in academia but activist employees too, realized that because deleting bot accounts was uncontroversial they could try and delete their political enemies by re-classifying them as "bots". For example this Twitter developer in 2019:
“Just go to a random Trump tweet, and just look at the followers. They’ll all be like guns, God, Murica, and with the American flag and, like, the cross. Like, who says that? Who talks like that? It’s for sure a bot.”
Clearly any abuse team that uses a definition of bot like that won't be able to focus on the work of actually detecting and fighting spam bots. If Musk bought Twitter and re-focused their abuse teams on classical spam fighting, it would almost certainly help, given that Twitter didn't seem to keep the ever-shifting overloads of the term "abuse" clear in its org structure.
Incidentally, trying to find the above quote on Google is a waste of time. Search "project veritas twitter for sure they are bots" on Google and the links are almost all irrelevant. DuckDuckGo/Bing gets it right on the first result, no surprise. I don't believe for one second that's a result of incompetence on the part of the Google web search teams.
> “Just go to a random Trump tweet, and just look at the followers. They’ll all be like guns, God, Murica, and with the American flag and, like, the cross. Like, who says that? Who talks like that? It’s for sure a bot.”
Hahah wow this is so out of touch. Communities each have their own way of speaking (famously /r/wallstreetbets has its own grammar, emoji, and lingo, but this applies to every community).
Sidenote this is why minorities speaking with each other sometimes get misclassified (even by humans!) as spam.
An alternative take: there's spam that's trivial to spot and spam that's hard to spot. Twitter is ignoring both. I can give you both searches and keywords to post which will return lots of spam ready to block. Not everything of course, but let's not post the "spam is a hard problem" excuse until they nuke every account responding with "metamask" and a link to Google forms.
You don't nuke the accounts, you make it so no one can see their posts and they don't realize. Make them waste their time instead of creating a new account.
My thesis is: if they put in minimal effort into automation, then metamask spam would be trivial to block. Since we know of the metamask spam for months (years?), it means there was no effort. I could be wrong, but I'll stick to the most obvious explanation.
It's been going on for months; if you're not aware of it, that doesn't mean it's not widespread. If you tweet "metamask problem", you'll get a number of bots immediately responding with Google Forms links for "metamask support". Yes, it's very widespread and extremely obvious. There are both fake support accounts https://twitter.com/metamesksuport/with_replies (I can find at least 10 more "support" accounts with a basic search) and helpful randos https://twitter.com/ij851227/status/1515302472079847425
It's been tweeted by multiple security people at various Twitter engineers already and raised in lots of ways. They know and don't care. Meantime, real people lose their money - otherwise nobody would waste their time to post the scam.
Some scammers who appear in Musk's replies impersonating him use similar or identical avatars and screen names in order to perpetrate their fraud. Removing and preventing the creation of this type of account would not even require "AI". It hasn't happened yet because Twitter lacks the will to do it, for whatever reason.
Calculating the Levenshtein distance between usernames, the average difference between avatars, and basic reply-to-reply interaction patterns isn't AI. It's heuristics. And it should have been implemented ages ago.
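To make that concrete, here is a minimal sketch of the username part in plain Python; the edit-distance threshold of 2 and the example names are made up for illustration:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance; no ML involved."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def likely_impersonator(display_name: str, target: str, max_dist: int = 2) -> bool:
    # A near-identical (but not identical) name replying under the account it
    # mimics is a strong impersonation signal; the threshold is illustrative.
    d = levenshtein(display_name.lower(), target.lower())
    return 0 < d <= max_dist
```

Avatar similarity could be handled the same way with a perceptual hash; none of this needs a neural network.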
Spam on most platforms is a complicated engineering challenge. However, for some inexplicable reason, Twitter features a kind of spam that you don't see on any other platform, because it could be caught with a regex.
It's almost never the case that spam issues on popular social networks can be alleviated with some easy fixes, because obviously if they could they would already have done that. Twitter is a very weird exception.
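For the skeptical, a hypothetical example of "caught with a regex": the fake "MetaMask support" replies described upthread all pair the product name with a Google Forms link, which a pattern like this would flag:

```python
import re

# Illustrative pattern, not Twitter's actual tooling: a reply mentioning
# MetaMask together with a Google Forms link, in either order.
SCAM_RE = re.compile(
    r"metamask.*docs\.google\.com/forms|docs\.google\.com/forms.*metamask",
    re.IGNORECASE | re.DOTALL,
)

def is_support_scam(reply: str) -> bool:
    return bool(SCAM_RE.search(reply))
```

Real scammers would adapt, as pointed out elsewhere in the thread, but this particular class of spam has stayed exactly this naive for months.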
The spam scale problem should be solvable with a one-shot locally evaluated NN trained to classify behavior/tweets from their massive massive ground truth dataset. If it’s evaluated locally then I don’t see how scale is an issue.
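As a sketch of what "locally evaluated" means in practice, here's a toy linear classifier; the weights and bias are invented for illustration, standing in for parameters actually trained on a labeled corpus:

```python
import math
import re

# Invented weights standing in for a trained model; inference is just a dot
# product plus a logistic squash, cheap enough to evaluate anywhere.
WEIGHTS = {"metamask": 2.5, "giveaway": 1.8, "$$$": 2.0, "http": 0.7}
BIAS = -2.0  # arbitrary; keeps neutral text below the decision boundary

def spam_score(text: str) -> float:
    """Return a 0..1 spam probability for a tweet."""
    z = BIAS + sum(WEIGHTS.get(t, 0.0) for t in re.findall(r"\S+", text.lower()))
    return 1 / (1 + math.exp(-z))
```

Scale-wise the parent has a point: per-tweet inference like this is embarrassingly parallel. The hard parts are keeping the training data fresh and controlling false positives.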
I reported child pornography on twitter. It took a while for it to be taken down.
Yet people talking about hunter biden’s laptop in October 2020 had their accounts deleted immediately.
Twitter is very good at content moderation when it threatens the manufactured narrative of the global elite which they serve. And then completely incompetent otherwise.
I don’t buy the line that the problem is too complex because I see selective competence.
If they stopped reporting and focusing on MAU, the problem isn't that hard to solve. If you want to operate a bot on Twitter, charge X/month for it, or just ban bots outright.
The problem would be more surmountable at that point
AI Engineers at Tesla also deal with spam, just not in the way you think. Self driving AI has to deal with input spam from all sorts of details in the real world, most of which are irrelevant to making driving decisions. So yes, I’d be reasonably certain AI Engineers at Tesla could easily solve spam on a simple web platform.
I know how hard it is to deploy features at scale, especially at a world-class company like Twitter. But if Twitter were to retain me as a consultant, I’d happily bet you any sum of your choosing that I’d make a pretty big dent in the crypto spam within a few months of full-time effort.
For those blindly downvoting that, here's what Paul Graham said:
"Either (a) Twitter is terribly bad at detecting spam or (b) there's something about Twitter that makes detecting spam difficult or (c) they don't care.
Based on my experience detecting spam, I'd guess (c)."
"Twitter engineering: If you're going to do such a bad job of catching spam, how about at least giving us a one-click way to report a tweet as spam and block the account, like email providers do? It may even help you get better at filtering, since more reports = more signal."
They’re likely downvoting my somewhat dubious claim of being able to singlehandedly make a big dent in crypto spam tweets.
I appreciate your gesture, but I don’t mind the downvotes. Bold claims warrant skepticism. And talking about votes makes the conversation less interesting for the audience.
But you’re right to point out that the problem isn’t nearly as intractable as it seems. There are many ways to deter crypto spammers.
Think of it this way: suppose Twitter’s stock price were inversely proportional to the amount of crypto spam (without accidentally removing genuine tweets). Does anyone believe the spam wouldn’t vanish overnight?
It’s why I suspect Twitter simply hasn’t made it a priority.
I remember reading several times that the enormous number of bots on Twitter inflates their "active users" stats and therefore their stock price, which is why they aren't fighting it.
The problem is false positives. Graham's experience of combating spam involved writing a Bayesian filter for his mailbox. That's fine: somebody misses a message, one of the two parties feels bad, but they eventually either catch up or get over it. And nobody can really "leave" email as a platform anyway.
Twitter, on the other hand, is pretty sensitive to false positives, and the vernacular is so unique that naive Bayesian filtering would destroy a lot of communities with their own vocabularies and languages. If messages start arbitrarily dropping on it, its users won't stick around.
Sure, you could absolutely knock out spam. It wouldn't be that hard. Because fighting spam isn't the hard part. It's dodging the problem of firing on innocent people that the spam is using as body shields that's the hard part.
They already get incredible volumes of criticism for the few false positives they have. Imagine if it were normal to be put in a time-out box by a Bayesian filter that wasn't tailored for your community!
Combating spam has very few possible upsides for Twitter, and a catastrophic failure case. Right now, spam mostly tends to affect larger accounts, who are going to stay on the platform anyway, because it's where the people are. What little spam small accounts see is manageable, and they won't leave over something so insignificant. But if they suddenly couldn't send messages to others, at random and without warning? Why would they stick around then?
I do believe Twitter isn't doing well at fighting spam, and also that it's a pretty hard problem. But where do you think people will go after leaving Twitter? Is there an option?
Does it matter where they'll go? People will always find some spot on the internet to have conversations after a given platform hits the friction threshold, and some might not even go anywhere: They might just leave.
Where people will go doesn't really matter, because there are a billion places they can, and there's not always a clear migration path. Sometimes, a social platform just dies, and its communities form a diaspora on different platforms, without any "clear" successor (like what happened to Orkut), or just stop doing the whole social media thing (many Google+ contributors no longer post online anywhere).
My question was kinda selfish. I want to move now. If there are a billion places, please tell me so I can move there now before the masses arrive and even that gets ruined :)
There's no easy answer to this. A billion different places users could go doesn't mean they have active communities, and if I mentioned where I like to hang out on the internet, those areas would probably get ruined.
Instead, I'll mention two platforms some of my friends like, to avoid taking the cost of a ruined platform myself:
sqwok.im: This one is the most microblog-like of the two, but it's also the least conventional. The quality varies; the front page only looks good every once in a while.
tildes.net: This one is invite-only, which helps mitigate the masses jumping in somewhat, and is run by a former reddit administrator. The existing community isn't great, but it's good. Friendly enough people.
> I’d happily bet you any sum of your choosing that I’d make a pretty big dent in the crypto spam within a few months of full-time effort.
Could you do that while also making sure no false positives happen? Elon is making claims that he wants to make Twitter a platform where people are truly free to say what they want, so any spam that gets removed that isn't spam would be seen in a very poor light.
Eliminating spam when you're happy to have a few other things end up in the spam folder by mistake is relatively simple. Likewise, eliminating most of the spam but letting some through because it looks real is also quite easy. Eliminating only spam and nothing else is significantly harder.
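To make the tradeoff concrete, here's a toy illustration with made-up classifier scores, showing how moving the blocking threshold trades missed spam against wrongly blocked users:

```python
# Made-up scores: spam tends to score high, legitimate tweets low, with overlap.
spam_scores = [0.95, 0.9, 0.8, 0.62, 0.4]  # true spam
ham_scores = [0.05, 0.1, 0.3, 0.55]        # legitimate tweets

def rates(threshold: float) -> tuple:
    """Return (missed spam fraction, wrongly blocked fraction) at a threshold."""
    missed = sum(s < threshold for s in spam_scores) / len(spam_scores)
    wrongly_blocked = sum(s >= threshold for s in ham_scores) / len(ham_scores)
    return missed, wrongly_blocked

# An aggressive threshold catches all spam but silences a real user;
# a cautious one silences nobody but lets some spam through.
aggressive = rates(0.35)  # (0.0, 0.25)
cautious = rates(0.60)    # (0.2, 0.0)
```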
The reason it's a ludicrous claim is that while there are overlapping skill sets, the idea that you can take an AI/ML engineer working on perception, prediction, or planning problems and trivially apply them to textual content filtering is laughable. They could certainly pivot across, but it would take them a good chunk of time to get up to speed; you'd be better off hiring an engineer already familiar with that particular space.
The fact that they specified it being an AI Tesla engineer is super cringe Tesla-bro stuff and the fact that the commenter would say something like that hurts their credibility in making the other fairly extreme claims.
It's not just that they're wrong, it's that they're talking nonsense. An ML engineer working on perception models can't be trivially retasked to an NLP spam filtering problem.
If they're obviously talking out of their arse on one point then it definitely suggests they're talking out of their arse on the rest of it.
> The elites have somehow inverted history so they now believe that it is not censorship that is the favored tool of fascists and authoritarians, even though every fascist and despot in history used censorship to maintain power, but instead believe free speech, free discourse, and free thought are the instruments of repression.
It feels a little silly to act like the guy trying to spend billions of dollars to get his way represents anything more than an intra-elite conflict.
So what? If we normal people get more free speech while a few billionaires swap some money around, that's still better than them just swapping money and us not getting any free speech.
What makes you think you will get any more free speech by the grace of Musk's intervention? He has repeatedly shown he is for free speech he likes and against free speech he doesn't.
So then it doesn't matter. Rich people do what rich people do. With the Musk + Twitter combo, there's at least a chance you'll get more of a voice; with some other combo (e.g. Disney buying Twitter), you already know there's zero chance of that.
I see you've edited to "intra-elite" conflict. So is basically every problem in society, then, given that governance is handled by the few. Not sure what the point is here; it just feels like a hollow dismissal.
I didn't edit it. It always said that. And it is "his way" because "less censorship" almost certainly doesn't actually mean they just let people post whatever (and even if it did, do "we" all agree that would be a good idea? Probably not).
We’ve been hearing bad-faith censorship debates long enough to know how this song and dance goes, haven’t we? “If I say something, no matter how vacuous and offensive, that’s free speech. But if you criticize it or otherwise say something I disagree with, that’s censorship.”
Beyond that, the idea of a totally unfettered Twitter is not really desirable. Such forums fill up with porn, gore, racism, and various other forms of shock content nobody actually wants on their feeds.
I've seen nobody advocating that individuals shouldn't be able to curate their own feeds, only that social media platforms shouldn't be restricting those feeds for them.
For instance, if you decide that you want a Twitter feed that excludes porn, gore, racism, and other objectionable content, then you absolutely should be able to exclude those (I'd reckon that that'd be a very sensible default). If I want to go observe the crazy bigoted things that fringe groups are spewing, or if I want to use Twitter just as an endless feed of porn, then that doesn't affect your ability to not see those things.
Likewise, I've not heard Musk propose banning any of his critics or opposing viewpoints (though I don't really follow his actions, so it's possible I've just missed them).
He has pursued aggressive union-busting, sued whistleblowers, and sued people for posting videos that made Tesla's """autopilot""" look bad. It's clear he doesn't give two shits about free speech, except when it costs him nothing and therefore amounts to free virtue-signalling.
> “If I say something, no matter how vacuous and offensive, that’s free speech. But if you criticize it or otherwise say something I disagree with, that’s censorship.”
What an embarrassingly dishonest characterization of the problem. Nobody sane is arguing that "criticism is censorship".
The problem with Twitter is that they are censoring popular narratives critical of the ruling elite. If you can't distinguish between the concept of banning accounts and posts vs not doing so, and allowing criticism, you are simply too misinformed or low IQ to have any worthwhile input.
(Though I defend to the death your right to babble incoherently)
> What an embarrassingly dishonest characterization of the problem. Nobody sane is arguing that "criticism is censorship".
Is that so? Why do the same people who claim they’re all about free speech get all wound up about “cancel culture”, then? There are clearly rules in their heads about who should actually have the right to say whatever they want. And I’m quite confident that I am not “low-IQ.”
“Cancel culture” is about people losing their jobs and being censored from platforms like Twitter for making arguments or jokes that rub the politically powerful the wrong way. Again, it’s the active removal of the practical ability of expression that rational adults are concerned with, not the fact that others have contrary opinions to them. Again, you are exposing your ignorance.
Being "lame" doesn't make it untrue. People use the term to refer to "shaming" (i.e., vocally disagreeing with) celebrities for their political stances, or refusing to patronize businesses that take political stances, for instance.
I truly don't understand this business of thinking everything shitty needs to be banned and denied as a right. Like, I take a pretty dim view of hookup culture, but I'm still going to denounce any attempt to make it illegal or deprive people of the right to fuck N different people per week. Because I'm more interested in freedom than agitating to hammer the world's people into a min/maxed social utilitarian dystopia. I'm trying to understand when and how America started pining for its own Soviet Union so hard. Or is this just a Liberal Technologist thing? Just want the government to do the AI Genie's job until the AI Genie wakes up? Like children trying to birth their own parents.
Cancel culture is NEET busybodies making it their day job to hunt for le problematique like bounty hunters (paid in retweets) organizing mobs to campaign to ruin people's lives (for great justice!) Yes, it's free speech! Yes, it's free association! Yes, it has precedents, you savvy insightful geniuses! Most things do, we call that history, and it's full of terrible things we should probably stop doing.
But this modern manifestation of a thing that has precedents and conservatives do too sometimes also has interesting features that are probably worth talking about on their own terms. I repeat: Cancel culture is NEET busybodies making it their day job to hunt for le problematique like bounty hunters, organizing mobs to campaign to ruin people's lives. It's legal, they have every right, and it's shitty, shitty behavior. Please stop denying it's a thing, or alternately trying to whatabout it to death.
What's at work is the recognition that "free speech" is a nice bumper sticker but doesn't go that far beyond that -- there are many policies one could pursue and plausibly call free speech. For instance, one could easily argue that we don't have free speech because money buys access. Somehow the right has been successful in claiming the mantle of "free speech" to mean something specific (basically that anyone can broadcast right-wing views without consequence) but that's not the only way the term could be conceived. There is also growing recognition that some things are outright harmful. Social media has already been implicated in pogroms; platitudes about the power of free speech seem to ring a bit hollow in that light.
On the cancel culture front, I don't agree. It actually refers to an incredibly broad segment of actions which almost nobody actually has much of a consistent line on. Often simply criticizing or refusing to patronize someone's business is called "canceling." Even if we narrowly refer to people losing their jobs, nobody actually believes there are NO circumstances whatsoever where losing your job might be an appropriate response to something you said. If you're a special ed teacher and post on Facebook that people with intellectual disabilities are less than human, one could reasonably doubt that you have any business having charge of special ed kids. If you want a more conservative flavored example, you could probably find conservatives endorsing cops losing their job if they bragged about not enforcing immigration laws. Or if you want an extremely uncontroversial example, you must at least believe it's appropriate not to VOTE for someone because you didn't like what they said. I don't think it's an accident. I think this term is so slippery and amorphous precisely because it obscures the hypocrisy at work.
You mean self-driving, which they promised would be ready over 5 years ago and which is still barely at L3? Meanwhile, Google engineers have actual L4 self-driving in both Phoenix and San Francisco (by your logic, they are better than Tesla engineers), yet they haven't managed to solve Youtube's spam problem.
>Meanwhile Google engineers have actual L4 level self-driving
They don't have L4 self-driving; they just made a contrived railway system for their cars. The second the car is outside the hardcoded route, it's not a self-driving car anymore.
First off, it's a hardcoded area, not a "route". They're not buses. A limited operating area is part of the definition of Level 4. What you're thinking of is Level 5, which requires handling any area and any situation.
There are also reasons beyond capability for the cars being limited to an area. One is legislative, they literally are not allowed to offer service outside a given area. Another is maintenance, the cars need to be within range for their team in case of emergencies or accident. The cars have been tested in many cities outside those two cities, but offering a user-facing service has a lot more barriers.
So it doesn't have a single route; it has a couple of them. Mapping the whole area beforehand and then driving in it is not "self" driving. You don't go out and carefully map every square inch of the road before driving your car on unknown roads, do you?
"Spam bots" on their own aren't a hard problem. The hard problem is a very restrictive set of constraints on the solution space, e.g. "can't inconvenience or increase barriers to posting for legitimate users in any way."
> The elites have somehow inverted history so they now believe that it is not censorship that is the favored tool of fascists and authoritarians, even though every fascist and despot in history used censorship to maintain power, but instead believe free speech, free discourse, and free thought are the instruments of repression.
At which point in history before now could any person blurt out a brainfart and have thousands of people around the globe hear it and react instantly?
When the printing press was invented there was a backlash against it, because free speech was endangering existing power structures.
Nowadays we have it the other way around: powerful players use the accelerated chaos of social media to prevent any real discourse from forming. If you have a ton of resources, it is very easy to manipulate things just enough in your direction and hide behind "differing opinions", just as tabloid media has done for the past decades. Censorship and media control is one strategy to reduce the chaos by decelerating the spread of the most outrageous unfounded claims. Of course there can be such a thing as too much censorship (e.g. look to China and Russia), but that would be state censorship.
A billionaire does not want to buy a social media platform because he cares about free speech. If that were the case, he would have nothing against his workers discussing unions, posting YouTube videos he does not like, or journalists writing negative things about him while still being able to buy a Tesla car.
I am not sure there is much wisdom in assuming we can have a worldwide instant public microblogging platform without any moderation at all. Anyone who has ever operated any public web platform knows the quality of the discourse falls drastically if there is not at least some level of content moderation.
> One AI engineer from Tesla could solve Twitter's bot and spam problem.
You are heavily overestimating what the technology can do. Differentiating satire from a threat with sufficient accuracy is not something machine learning ("AI") can do right now.
> At which point in history could any person blurt out a brainfart and have thousands of people around the globe hear it and react instantly again?
Usenet, for several decades already. Good NNTP servers had nearly perfect spam filtering and the trolls were handled individually by each user in killfiles.
If Google had kept the simple original web interface from around 2004 we might not have had all these issues. GPT-3 spam is hard to detect, but can be handled by killfiles.
Of course the real goal of Twitter is to do user profiles and possibly log private messages, for which Usenet isn't suitable. I wonder who on earth would send a sensitive "private" message on Twitter.
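The killfile mechanism mentioned above is simple enough to sketch: each user keeps their own local list of authors or subject patterns to drop, so filtering happens client-side with no central moderator involved. The field names and patterns below are invented for illustration; this is not any real newsreader's code.

```python
import re

# A user's personal killfile: rules matched against message headers.
# Addresses and patterns here are hypothetical examples.
KILLFILE = [
    {"field": "author", "pattern": r"^spammer@example\.com$"},
    {"field": "subject", "pattern": r"(?i)buy cheap"},
]

def is_killed(message: dict) -> bool:
    """Return True if the message matches any killfile rule."""
    return any(
        re.search(rule["pattern"], message.get(rule["field"], ""))
        for rule in KILLFILE
    )

def visible(messages):
    """Keep only messages the user's killfile does not suppress."""
    return [m for m in messages if not is_killed(m)]

msgs = [
    {"author": "alice@example.org", "subject": "Re: NNTP servers"},
    {"author": "spammer@example.com", "subject": "BUY CHEAP pills"},
]
print([m["author"] for m in visible(msgs)])  # ['alice@example.org']
```

The key design point is that suppression is per-reader: nobody decides for anyone else what gets filtered, which is exactly the opposite of centralized moderation.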
Usenet was much, much smaller and quite certainly not a statistical representation of society (people who could afford, access and understand the thing).
Most modern problems with social media started emerging when the general public started using it.
Specialist communities like IRC channels, Hacker News, certain subreddits or webforums still work quite well, because let's face it: It is not the general public there.
And even in specialist communities a certain degree of moderation is needed to keep things in order.
In the offline world, individuals for whom (reasonable) moderation is needed are kept in check by society (either through social pressure or physical force).
Very seldom are they the beginning of societal change; usually they are just unpleasant people (e.g. "I hope you and your loved ones get murdered because...") whom most others avoid if they can.
I'm very sure I would leave any platform where their kind is allowed to run wild and so would many of my family and friends.
Facebook is a very good example: Most people are boring and nice enough. Those who are not (ranging from annoying to vile) spoil the fun for everybody else.
People leave (in part) because they don't want constant conflict, and not every opinion is worth hearing.
> That's a false dichotomy and the exact argument one would expect from a proponent of censorship. That puts you in the wrong camp, the fascist authoritarian camp, sadly.
Sadly you don't really elaborate on how my comment reveals the authoritarian position you falsely assume I hold.
There is a position between no censorship and full authoritarian-style censorship, and it is a common rhetorical device to first talk about the extremes to show that the reasonable area is somewhere in between (and then we can talk about trade-offs and priorities).
My main point as someone who studied media science is that we cannot treat modern social media with the exact same rules we applied to other speech, because ultimately it is a different place to speak in. The social distance is lower than anything we ever had in history, the audience bigger than ever, and it feels private while you are in the spotlight at the same time (and people tend to act accordingly). It is a place where a rural person communicating a rural opinion sits directly next to a city person communicating a city opinion. Before social media, there were natural borders limiting which people could actually hear (and/or react to) each other. So it is a fundamentally different place from anything we ever had before.
And different places have different rules. Someone who speaks loudly in a library will be thrown out (because the place has a certain function and speaking loudly interferes with that function). Someone who repeatedly and loudly farts in a restaurant might be thrown out. Someone who listens to music in church might be thrown out. A bare-breasted woman in a mall might get thrown out, etc. Different places, different rules.
The question now is: what kind of place is something like Twitter? What behaviour shall be accepted or restricted there, and with what goals?
If our goal is the equivalent of a verbal bar brawl where people can let out their innermost emotions we might end up with different rules than if our goal is rational and fact-oriented discourse with the goal of moving discourse forward (these places have typically stricter rules of which speech is acceptable, as can be seen for example here on HN).
With Twitter, a lot of the emergent behavior that can be observed is a direct result of, or a direct reaction to, the systemic structure of the place. If this is to be seriously changed, you have to either establish a new culture of how one behaves in such a place (hard) or change the systemic variables themselves (easier).
But one point I want to stress is: by allowing most speech one might involuntarily prevent other, more nuanced speech from ever emerging.
Many, many people are absolutely arguing that zero moderation is appropriate.
Anyway, you also acknowledge that some moderation is both appropriate and necessary. So now we know your opinion of a private company's moderation is different from the parent commenter's, but you both believe moderation is necessary, and yet you call them an authoritarian fascist.
Moderating bot accounts and frivolous antagonists (by virtue of repeated defamation/inflammatory speech) is appropriate. Those are not in the same vein as restricting speech. If a bot account is openly listed as such with links to show it is not a human agent, then it can be openly suppressed.
This isn't a similar argument nor a difference in degree of the same position.
But given your handle, I don't give your statement much credence.
6. My favorite prediction. Twitter continues to ignore any of Musk's offers. Musk sells his 9% stake. Price goes down. Continues to go down since Twitter('s mgt) is full of shit. Elon gets in an offer when price is languishing at low $30's. Twitter has no choice but to take Elon's offer.
All in Podcast can be fun but it's way more on the side of entertainment than information/insight. There are a lot of self-serving narratives on that show, and I think they'd be the first to admit it. One of them was saying a few weeks ago that the root cause of the Russian invasion of Ukraine was Twitter, so you know.. grain of salt and all that.
"5. The elites have somehow inverted history so they now believe that it is not censorship that is the favored tool of fascists and authoritarians, even though every fascist and despot in history used censorship to maintain power, but instead believe free speech, free discourse, and free thought are the instruments of repression."
Yes. Gaslighting crystallized. We are now to believe free speech is bad. Authoritarians know arguing against logic is hard.
Why should a company be forced to carry, host, and service the speech of others?
If Twitter is a monopoly, apply anti-trust laws to it and break it up.
If Twitter isn't a monopoly, then they should have every right to decide what their service does, as long as it's within the law. If you don't like the current laws, change them.
I am totally, 100% opposed to privately owned companies being forced to carry speech that they don't want to carry. That itself is a violation of the free speech of people that operate businesses.
> 5. The elites have somehow inverted history so they now believe that it is not censorship that is the favored tool of fascists and authoritarians, even though every fascist and despot in history used censorship to maintain power, but instead believe free speech, free discourse, and free thought are the instruments of repression.
Not every speech is equal. If you are living in a Weimar-Republic scenario, yes, free speech can be repressive and I can see why people will call for censorship. Personally, I think censorship and speech taboos just keep a lid on certain problems instead of solving them.
> Not every speech is equal. If you are living in a Weimar-Republic scenario, yes, free speech can be repressive and I can see why people will call for censorship. Personally, I think censorship and speech taboos just keep a lid on certain problems instead of solving them
The problem with this is that it always ends up as “censorship is ok, as long as it’s for opinions I disagree with”.
And with your Weimar-republic scenario, if the implied claim is that more censorship would have stopped either the radicalisation or rise of Adolf Hitler, I think that’s very simplistic and highly suspect.
I don't mean to sound rude, but Tesla engineers can't even distinguish the moon from a stoplight. Human interaction and defending against a naturally intelligent adversary are not so simple.
People keep bringing up your point 5 here. Almost every other comment seems to parrot this idea that there is some paragon of free speech that Twitter doesn't achieve.
Setting aside all the discussion of what free speech should be, I don't see people making the point that Twitter's huge success as a social media platform may actually relate to its moderation policies. Twitter was found to be more resistant to fake news than other social media sites. I don't know whether this relates to the public nature of tweets or to their moderation.
Twitter's moderation policy may be exactly the reason the platform has done so well, and rather than clamouring for Twitter to change, I'd suggest we allow the free market to let another platform with a different moderation approach compete.
>5. The elites have somehow inverted history so they now believe that it is not censorship that is the favored tool of fascists and authoritarians, even though every fascist and despot in history used censorship to maintain power, but instead believe free speech, free discourse, and free thought are the instruments of repression.
Fascists and authoritarians took advantage of freedom of speech to gain power; the censorship came after they seized power. Hitler promoted himself through his right to speak at his trials and through his book Mein Kampf, things the Weimar Republic could absolutely have chosen to censor. Karl Popper made a rather infamous observation, the "paradox of tolerance": tolerating the intolerant could result in more intolerance, if the intolerant happened to be Hitler in Nazi Germany.
I find the biggest objection to this entire line of thought is that censors always consider themselves to be the ones resisting the next Nazi Germany rather than being Nazi Germany themselves. Anybody who openly censors others is more likely than the general person in the population to be some sort of totalitarian authoritarian, so trusting them with power so they can stop some sort of Nazi uprising is foolish. It's the same kind of issue as "Bombing for peace", pretty much 100% of the people who have ever bombed people have said they were doing it for the sake of peace.
This all being said, I don't think giving Musk 100% of twitter and the effective absolute power to censor others on the platform is a good idea.
> 4. One AI engineer from Tesla could solve Twitter's bot and spam problem.
I don't know why people think this is the problem.
They regularly ban bot and spam accounts. This is not an issue.
The issue is malicious/ignorant human actors and malicious state level actors.
Tesla AI engineers can't even get past L3. There's no chance they could create an AI solution that can fight state-level resources or massive numbers of humans gossiping and spreading misinformation.
> One AI engineer from Tesla could solve Twitter's bot and spam problem.
Translation: "AI is a magic spell! I can take someone who's been working on vision recognition, and they alone could wish all our bots and spam away, with the magic of AI."
> The elites
Oh, god, here comes the libertarian spiel...
> but instead believe free speech, free discourse, and free thought are the instruments of repression.
Translation: "I want the right to tell any lie I please without consequences. I want the right to be able to scream at people on the Internet and keep my account. If I can't call someone racist names on the Internet, it's because you're all Fascists and authoritarians!"
I'd never heard of All in Pod, and so I listened to the section from 43:00 on about this being a 'free speech' issue.
Quick takeaways:
- The frequent pejorative use of the term 'elites' as a catchall for literally anyone they disagree with is hilarious. Their explanation of Musk being the iconoclastic 'anti-elite elite' sounds like Ayn Rand fan-fic written by a 14-year-old who just read the Wikipedia plot summary for Atlas Shrugged.
- What does 'inverting history' even mean? It seems to be another example from Glenn Greenwald's vast and vitriolic argot. (I'm sure Glenn Greenwald is another commendable 'anti-elite elite').
- Paraphrasing: "journalists 20 years ago would have been defending the first amendment!" -> this is not about the first amendment. The first amendment protects your ability to speak freely from government-imposed restriction or suppression. Ironically, twenty years ago the press/journalists and even government officials were experiencing one of George Bush's sweeping censorship efforts (https://www.americanprogress.org/article/think-again-the-bus...).
Anyway, the hosts wistfully allude to the past, with references to society's 'public square' and people having the ability to say whatever they wanted with no repercussions. This is nonsense. Broadly, you can absolutely say whatever you want online — it's easier than ever.
That doesn't mean:
1) Twitter is required to provide a platform for you to abuse and harass people, or spout opinions they deem to be offensive, damaging, or misleading. That's their choice.
2) That free speech is consequence-free; it never has been. To quote Michael Hobbes: "Voicing your opinion in public has always carried the risk of being shamed or shunned."
Finally, I will concede that twitter has suffered from poor growth, and a change in leadership could be a good thing for the company. But the board also has a responsibility to protect minority shareholders from people exactly like Elon. Their resistance to his efforts may have nothing to do with the vague, insulting and unsubstantiated allegations you and the podcast hosts make (It's a 'country club', board members are only acting against musk to protect their 'status', etc). Rather, they may quite reasonably believe that a vainglorious egomaniac who has been repeatedly sanctioned by the SEC may, in fact, not be the ideal owner of a social media company.
Ps.
> 4. One AI engineer from Tesla could solve Twitter's bot and spam problem.
Do you really, seriously think this is true? It comes across as deliberately inflammatory, and you and I both know it's not true.
> Voicing your opinion in public has always carried the risk of being shamed or shunned
This is an argument in favour of a more liberal model, is it not? Just not a centralized system that takes a dictatorial role in socially isolating people for having bad ideas. The analogy would make more sense if there were a benevolent gatekeeper blocking you from coming back into the public square (or even all big public squares) because you said something controversial.
Otherwise cultural free speech people want ideas to be openly challenged, not efficiently silenced by the system.
Harassing people is something I think most people agree crosses a line, in the same way it can be a criminal act. But even here Twitter does a poor job by taking a very broad interpretation: for example, it punishes people for publicly disagreeing with someone while also being popular. Their fans decide to brigade a person, and then the parent gets banned for 'causing it' (without ever directly encouraging or supporting such a thing).
There’s a hundred ways to do this better than Twitter/Facebook/Reddit without throwing the baby out with the bath water.
Accusing every person who wants a very limited moderation scope of wanting Ayn Rand-style anarchy is disingenuous.
Otherwise I’m not a fan of “All in” pod’s analysis.
> the board also has a responsibility to protect minority shareholders from people exactly like Elon [...] they may quite reasonably believe that a vainglorious egomaniac who has been repeatedly sanctioned by the SEC may, in fact, not be the ideal owner of a social media company.
This was covered towards the beginning of the podcast. Legally, the role[1] of the board in this situation is to get the best stock price, not to weigh the future fate of the business.