I think the problem lies in impunity, not in free speech itself. IMHO people should be allowed to say anything, but they must accumulate a reputation for saying it.
If someone is a racist bigot, they shouldn't be physically restrained (deleting posts is like physically covering someone's mouth) from being bigots, but they should definitely be known for it. Then it's up to the community to decide how to interact with those people. That's how we do it in real life, and it works pretty well.
Another thing is amplification: people pretending to be multiple people. This is also an issue, giving a wrong impression about the state of society, and must be solved.
Lastly, we need some kind of spread management. We have the problem of BS getting huge traction and the correction getting no traction. Maybe everyone exposed to something should be re-exposed to the theme once there's a new development. For example, when people share someone's photo as a suspect and it turns out that the person in the photo is not the suspect, the platform can say "remember this Tweet? Yeah, there are some doubts about it. Just letting you know". The implementation of it wouldn't need a ministry of truth but an algo to track theme developments.
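To make that concrete, here's a minimal sketch of how such an algo could work (every name here is hypothetical, just one way to do it): record who was exposed to posts under a theme, and when the theme develops, re-notify exactly those people.

    # Hypothetical sketch: track exposure per theme, re-notify on developments.
    from collections import defaultdict

    posts = {}                     # post id -> the theme it belongs to
    exposures = defaultdict(set)   # theme -> ids of users who saw it

    def record_view(user_id, post_id):
        exposures[posts[post_id]].add(user_id)

    def notify(user_id, message):
        print(f"to {user_id}: {message}")   # placeholder delivery mechanism

    def add_development(theme, update):
        """Re-expose everyone who saw the theme to its new development."""
        for user_id in exposures[theme]:
            notify(user_id, f"Remember that post about '{theme}'? {update}")

    posts["t1"] = "suspect-photo"
    record_view("alice", "t1")
    add_development("suspect-photo", "There are some doubts about it now.")

The hard part, of course, is clustering posts into themes; the bookkeeping itself is trivial.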
IMHO if Musk manages to solve these few problems, which I think he can, a free-speech social media platform is possible.
Please, deleting a tweet is hardly being 'physically restrained'
> Then it's up to the community to decide how to interact with those people.
Twitter is a private company, and it chooses to run its service how it wants. The government avoids actually physically restraining racist bigots and lets the community decide how to deal and interact with those people. Some may choose to harbor them (Parler, 4chan, etc.), and others (like Twitter) may opt not to host them.
It's not a huge social injustice if you're not allowed to tweet. Feel free to go to one of the millions of other websites, or start your own (it's easier to do this than ever!) and see who's interested in what you have to say.
> Maybe everyone exposed to something should be re-exposed to the theme once there's a new development.
I don't accept that content deletion is the way to go. When offensive content is deleted, we lose the ability to judge it for ourselves. The content must remain but be strictly attached to a persona, so the persona can be "judged" rightfully. In real life, when we deal with these people, we want to know what they did. It gives fidelity, unlike "the person said something that violates rule 4 section 3". We should stop pretending that we are not humans and embrace the human ways of dealing with human problems. There's nothing human in undoing speech.
And no, attaching follow up to organic content is not moderation.
I don’t see any value in spending my time judging content saying that trans people are degenerates or that black people are an inferior race. I’ve already judged those ideas in my life and don’t need to see them anymore.
Well, you can judge people who say those things as people who don't deserve your respect and attention. Then don't hang around places that interact with those people; that's how we do it in real life.
In an online world with no moderation it is impossible to not hang around places with these people. They can just show up unannounced to spew hate speech wherever they want.
In real life, unless you’re recorded, there isn’t a record of what you say. Moreover, people who do hear first hand what you say will recall different aspects and also forget detail over time.
This allows people to evolve and to not be beholden to something they said/thought a decade ago and no longer think.
How do you feel about HN's approach - flagged or heavily downvoted comments are invisible if you are not logged in or if you have not changed "showdead" from the default unchecked state (at which point they're rendered in a hard-to-read colour)?
I'm not the person you're responding to, but I myself prefer giving users moderation tools that affect only their own view of the content. Users trying to save other users from posts they personally disagree with can, in my opinion, lead to echo chambers all by itself. Let me configure my account so that I can block or mute specific users, highlight keywords I add to a list, or allow users to tag posts. That way others can express their opinions in more ways than just commenting, and I can use that data to decide whether I want to read the existing comments.
I do think HN has one of the better moderation systems, since this is one of the saner places on the internet and you can still configure things so you can at least see all the content, though you can't interact with it all. I would just prefer to be in charge of saving myself from bad opinions or whatever motivates people to downvote posts into oblivion.
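To make that concrete, a minimal sketch of what I mean by user-side moderation, where settings shape only my own feed and nothing gets deleted for anyone else (all names here are hypothetical):

    # Hypothetical sketch: per-reader view settings; nothing is removed globally.
    from dataclasses import dataclass, field

    @dataclass
    class ViewSettings:
        muted_users: set = field(default_factory=set)
        hidden_tags: set = field(default_factory=set)
        highlight_keywords: set = field(default_factory=set)

        def visible(self, post):
            """Visibility is decided per reader, not for the whole site."""
            if post["author"] in self.muted_users:
                return False
            if self.hidden_tags & set(post.get("tags", ())):
                return False
            return True

        def highlighted(self, post):
            """Emphasize posts containing keywords I asked to be flagged."""
            return any(k in post["text"] for k in self.highlight_keywords)

    all_posts = [
        {"author": "troll42", "text": "low-effort bait", "tags": ["flame"]},
        {"author": "alice", "text": "a useful comment", "tags": []},
    ]
    me = ViewSettings(muted_users={"troll42"}, highlight_keywords={"useful"})
    my_feed = [p for p in all_posts if me.visible(p)]   # only alice's post survives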
I don't like it when it's due to a low score; that is the oppression of unpopular opinions.
I like it when it's about flagged posts. I have the option enabled to show these posts, and I would vouch if there's something worthy in them. So spam and other BS is "removed", but I can still take a look and see it for myself.
Overall, I think HN is one of the best moderated online places.
So this is unusual to me, because flagging feels more open to abuse than the comment score. Indeed this very thing happened to me recently:
- user A says vaguely racist thing
- user B calls person out for racist thing
- user A cannot downvote a reply, so they flag it instead - making it disappear
So both can silence someone, but in one case many people need to disagree with someone and in the other you just need one person (or one person with an alt account if you want to go and revert a "vouch"). So if anything flagging is more prone to abuse than downvoting. I try to read greytext comments when I can and vouch whenever it looks unjust (and do something similar for downvoted posts) but from the looks of things not many people do.
So I'd understand if I was downvoted and called out for my confrontational response to racism. Because at that point I'd reply that actually treating this kind of casual xenophobic comment as unserious and mocking the person is the most effective counter to this kind of behaviour. Getting bogged down in debating the merits or worth of any individual person or where they might "belong" is exactly what this kind of person wants to do.
That's where you judge the moderation. The moderation quality defines the community quality.
The good thing about HN is that the moderators are reachable and they respond intelligently. Unlike with AI moderation, you can send an e-mail about it, and dang will respond, explain why something happened, and discuss it with you.
I had my account restricted multiple times and restored once we got on the same page (I don't agree with everything, but once I see their point, I can work with it). I've also had wrongfully flagged comments unflagged by sending them an e-mail.
It's not perfect, but it's pretty good and miles ahead of anything else online.
So in my case the comment didn't desperately need unflagging - someone could wave the comment guidelines in my face and I'd probably concede that such an open confrontation broke at least one. But yeah I guess you can overturn a flagging more easily than being downvoted.
The real key is that there's a moderator, and the community is small enough that he can check things manually.
Once it gets too big for that, you're doomed to destruction eventually.
My preferred solution would be to break up the communities once they're too big, instead of trying to make a massive world-wide community like Twitter does. Reddit somewhat has this, but there is still a site-wide issue.
>Once it gets too big for that, you're doomed to destruction eventually.
>My preferred solution would be to break up the communities once they're too big, instead of trying to make a massive world-wide community like Twitter does.
I agree with this. You can see it in real-life situations too: the larger the crowd, the stupider its collective behaviour becomes. Large crowds are good for certain things, though mostly primal stuff like singing and chanting.
> Maybe everyone exposed to something should be re-exposed to the theme once there's a new development.
This doesn't work. Show people two articles, one that is false and one that is true, and most people will say the one that aligns with their priors is true. We need to either teach people to recognize fake news, censor fake news, or accept that basically everyone will believe false propaganda. There are no other options. Once someone has been shown an article they agree with telling them the article was false just leads them to think you're on "the other side".
If you took perfectly rational people and ran the same test, you'd get the same result. Evidence that supports a position that you already have evidence for is more likely to be true. If someone showed me very compelling evidence that the world was flat, even if I was unable to find any issue with it, I still would believe that it is false. If a single counterpoint could change your belief, you never had any business claiming it as a belief in the first place.
Do you have cause to believe that repeated exposure to every side of every story won't lead the average person towards truth?
And who is going to decide what that "fake news" is to censor, and how can you assume they won't fall into the exact same trap of wanting to believe what they already agree with?
We're fresh off the Hunter Biden story; surely that should be a wake-up call about how "misinformation experts" go both ways.
No clue. All I know is that if we don't censor fake news people will believe it, no matter how much evidence to the contrary they are shown. Maybe we just have to accept that.
> Another thing is the amplification: people pretending to be multiple people.
For a free speech absolutist, curtailing this could also be seen as removing free speech.
> Lastly, we need some kind of spread management. We have the problem of BS getting huge traction and the correction getting no traction. Maybe everyone exposed to something should be re-exposed to the theme once there's a new development. For example, when people share someone's photo as a suspect and it turns out that the person in the photo is not the suspect, the platform can say "remember this Tweet? Yeah, there are some doubts about it. Just letting you know". The implementation of it wouldn't need a ministry of truth but an algo to track theme developments.
Still, this wouldn't solve the issue of the spread of BS, especially targeted BS: it is tailored to invoke and reinforce inherent biases, and, on average, someone exposed to it will become less inclined to read or critically judge any rebuttal. Bullshit spreads much more easily than well-researched rebuttals, just by the nature of bullshit. It's a game where truth is bound to lose; no matter how many "algorithms" you implement to spread developments of a story to the same audience, that audience's engagement with the rebuttals will vary depending on their biases. And that's not even counting the inherent drive and energy required to actually follow up, as an audience, on further developments. In the fast-paced world of social media, people selectively choose what to invest their energy in. Someone falling for bullshit won't want their effort to be thrown out by rebuttals, so they will avoid such activities as a perceived waste of energy; after you've formed an opinion, it's much harder to un-form it.
I'm strictly in the camp that absolute free speech on social media is a fool's errand, at least in 2022. There is no upside that justifies the massive downsides we already see and experience, even without absolute free speech existing.
The detachment on social media between the written words and the real humans behind those words causes a non-insignificant amount of grief that wouldn't happen in an in-person interaction. It seems that we humans easily lose our humanity when not in a real-world social environment; the vileness is exaggerated while empathy is easily pushed aside.
I agree that we can't have a perfect solution, but let's not lose a good solution in the pursuit of a perfect one. I think there can be a good solution by implementing some of the real-world social dynamics in the virtual one.
Jerks and BS artists are nothing new, but in the real world we do have some tools to deal with them. IMHO, changing how some things work can create an atmosphere of healthy interactions.
Agree. The big one you missed is identity. Most hate is anonymous. Being able to filter by tags like “known racist” or whatever, and seeing someone’s history of sharing misinformation, is useful, but most people would self-censor if their identity were known, or other users would filter out those who won’t identify.
What I wonder is what Musk will do if he finds out the scales are artificially weighted towards conservative content. Like if conservative content is artificially boosted by bots and algorithms. Facebook was much more liberal before thumbs were put on the scale. I don’t remember when, but I think it was Mother Jones that saw huge traffic changes after algorithm changes, like a decade ago?
Like what if the natural state of humanity is much more liberal than the American media and social media allow for? Will Musk allow that or will he see anything that doesn’t align with his views as error or manipulation?
What if a truly free and transparent self-moderating platform naturally promotes leftism more than a moderated but manipulated feed does?
A study showed that people are more aggressive online when using their real name:
> Results show that in the context of online firestorms, non-anonymous individuals are more aggressive compared to anonymous individuals. This effect is reinforced if selective incentives are present and if aggressors are intrinsically motivated.
Weird, but I still think Elon’s idea of needing confirmed identity for a checkmark is solid. If anonymous users are then nicer than checkmarked users, I guess the filter will work in reverse? The elimination of bots will be nice if they can do it.
> Generally, if your solution is virtually indistinguishable from one of the systems the Chinese government is using to keep people in line, your solution is bad.
That’s an opinion. I disagree with it. It’s a private corporation not the government.
You either have people incentivized to self-identify with a checkmark, or what? The alternative is to build an AI that identifies you in order to remove bots? I don’t even think that’s possible without it auto-removing everyone who uses anonymizing tools like Tor.
How do you know that requiring identity is anti-free-speech? Not everyone online is an Iranian political dissident. Sure, some people claim that you can't have free speech when your identity is known, but I don't see any solid reasoning behind it.
Mike Masnick in his tweets repeats some talking points but there's no cohesive argument.
And AFAIK an anonymous political dissident wouldn’t want a blue checkmark?
Furthermore, there can be layers of anonymity. You can be anonymous publicly but not to Twitter. That’s dangerous, given that Twitter cannot protect your identity from a state actor accessing its internal systems. Thus, again, why would you want a checkmark as a dissident?
Requiring ID verification is adding limitations on who you permit to speak. It is inherently anti 'free speech'. I think it's fine if that's the sort of website you want to build (twitter at the moment is not a free speech maximalist), but don't pretend that doing this doesn't limit speech.
> Requiring ID verification is adding limitations on who you permit to speak
Do you mean in countries where not everyone has government ID? That's not an issue; the government doesn't have to be the authority on ID. Besides, governments can create fake IDs for covert operations anyway. I don't suggest that everyone should connect to the internet with a government-issued ID card.
How do you verify someone's IRL identity without a government issued ID card in a scalable way?
I don't mean some idea that could work at some arbitrary point in the future (decentralized whatever...). If a social media platform were to do this, right now, how would they do it without verifying a government issued ID?
Identity doesn't come into existence with registration with a government; it's something you build over time as you interact with the world around you.
Nicknames are an identity, and it's pretty common these days to have nicknamed accounts all over the internet. The problem with these is that one can have multiple of them, and behaviour in one place doesn't transfer to other places.
So maybe we can have across-the-internet identities. You are jasonshaev, but who are you on Twitter? On Reddit? In other places? Once you become the person who is known everywhere the same way, you have an identity that you would like to protect. You can't troll one place when bored and then be known as a nice person somewhere else. I think that's good enough identity. The implementation could be built around crypto, single sign-on, face recognition, etc.
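To give a flavor of the crypto option (a hypothetical sketch, not any platform's real API): one keypair serves as the cross-site identity, and each account publishes a signed claim binding its handle to that key, so anyone can verify the same person is behind all of them.

    # Hypothetical sketch using the 'cryptography' package: a single keypair
    # binds handles on different sites to one verifiable identity.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    identity_key = Ed25519PrivateKey.generate()   # generated once, kept private
    public_key = identity_key.public_key()        # published on every profile

    def claim(site, handle):
        """Sign a statement linking a site handle to this identity."""
        statement = f"{handle}@{site}".encode()
        return statement, identity_key.sign(statement)

    # Post one claim per profile; any third party can check they all match:
    statement, signature = claim("twitter.com", "jasonshaev")
    public_key.verify(signature, statement)   # raises InvalidSignature if forged

A dissident could still keep separate keys for separate personas; the point is only that reputation attaches to any key you choose to reuse.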
The thread started with "real name." The only way to verify that is government identity.
If you want to verify some other, "online" identity, that's fine, but I don't see how that would meaningfully affect anyone's behavior. To be clear, I don't think verifying someone's real name will meaningfully improve online behavior either -- plenty of other threads explain why. In which case, what's the point of either?
You never know where the prevailing winds of online sentiment will turn next. Having your every post tagged with your identity can lead to real-life problems in the future, even if it was something edgy you said as a teenager or something you used to believe but don't any longer.
> You never know where the prevailing winds of online sentiment will turn next. Having your every post tagged with your identity can lead to real-life problems in the future, even if it was something edgy you said as a teenager or something you used to believe but don't any longer.
So maybe, for every single thought one has, one ought not fly around the world and post it on a flyer on every street corner and light post. Which is basically what posting on Twitter is.
But then I think a ton of stuff people casually do online is batshit crazy when you put it in real-world terms. Of course you wouldn't do the above. You wouldn't even do it if you had a magic button that could make it happen for you without taking time & money to go do it in person. "Post my random toilet thought on hundreds of millions of surfaces all over the world? No, god, why would I do that?"
Would you give a teenager access to such a magic button? Of course not. That would be entirely insane. Even if using the button would not, per se, get them in trouble, you'd destroy that thing or put it in a safe. Handing it over to them to do with as they please wouldn't even be something you'd consider doing.
But we live in a world where ~every developed-world kid has a button like that by age 12, and sometimes much earlier. WT actual F. Of course it's causing tons of problems. Most adults couldn't be trusted to make good choices with such a tool (clearly).
Wouldn't self-censorship solve the problems just as well as deleting content?
See, because we don't say everything that comes to mind, we are able to interact in a civil manner with people who may hold all kinds of opinions. In real life, I'm sometimes shocked to learn that someone is a total bigot.
However, when civility is established, we can discuss these ideas too; instead of these people being toxic, the ideas can be expressed and discussed in a civil manner. Maybe they have a point sometimes? If they do, it can be duly noted, and if they don't, they will be exposed to the counterarguments. Also, when ideas are expressed civilly, people don't immediately label others "bigots" or "racists" and can accept the nuances. In fact, some prominent right-wing people are doing that, people like Jordan Peterson. Because the guy is civil, he is effective, and it's up to the rest to contradict his claims in a civil manner.
So yes, it is alright to have some self-restraint and think before you speak. It's definitely much better than suppressing it.
edit: the comment I responded to was a bit different; I guess the OP added more thoughts.