Regardless of technical viability, any service without moderation is a service mainstream society will shun, which really just means people won't use it, build tools for it, or generally accept others who use the service.
People seem to both want a service that is immune from society and also used by society. That's a conflict that won't resolve.
Fortunately, the problem you're talking about has already been solved with the concept of federation. If the social media system is a protocol and not a company, then society can chase down individual bad actors, and blame them for their own crimes, instead of blaming you.
The issue is we're already doing too much chasing down of individual "bad actors." It's like the #1 hobby these days: everyone is looking for the next person to step out of line so they can drag them into the village square for their stoning. There aren't enough "bad actors" to satiate the mob any more, so the definition of "bad actor" has to be expanded daily.
Just last week a musician tweeted something dumb about making his daughter learn how to use a can opener to open beans. Within a day, the very successful podcast he did the theme song for, which had used his track for something like the past 10 years, disowned him and stopped using the song. Then he was kicked off the cruise gathering where he had been a regular headliner for years and was good friends with the guy who ran it.
The internet has created a culture problem that I'm not convinced decentralized social networks will fix. At best, if one was created that people actually used, we wouldn't have situations like we've seen with Alex Jones and Trump, where all the platforms unperson someone on the same day, because they could still at least keep their audience on the decentralized platform.
The bean dad attention led people to discover these additional tweets which are much worse than his questionable parenting judgement in that particular thread, and that’s what drove people to stop associating with him:
TBH when I first saw that he got chased off Twitter for the other tweets, I did think it was going to be some cringe "edgy" humour attempt à la James Gunn. But that "white homeland" and "mud people" tweet is overt, unacceptable racism.
Well, there are bad actors, and then there are Twitter-assigned villains. Really bad actors, as in people who are doing things that are clearly bad enough for there to be laws against them, need to be controlled somehow. That's what most moderation energy goes into preventing.
There are different types of moderation. Personally, I prefer platforms offering tools that let users define the type of content they want to see. Sure, that can and will end with people in echo chambers of their own making, but that is preferable to platform-level moderation, where the platform chooses what is and isn't allowed, ending with the entire platform being one ideological echo chamber instead of just user silos within the platform.
Reddit used to be an example of this: it allowed communities to create their own rules, enforced by their own moderation, with very limited rules at the platform level (e.g., illegal speech was banned). Over the years, however, due to social pressure from advertisers, Reddit has shifted more and more of that moderation from the community level to the platform level, and IMO it is having a negative effect on the site as a whole.
In the end Platforms should build TOOLS for moderation but not actually do the moderation themselves
The platform needs to insulate itself from attacks: DDOS, illegal content, reputation attacks.
The groups/fora/subreddits need local moderation that keep it on-topic and restrict trolls, flamers, spammers and other bad actors that destroy the utility of the group. (There are occasional groups that exist to be an outlet for off-topic discussion from others, or to out-troll each other, or rant.)
Third is individual content control. Everybody needs to have easy access to a killfile: I don't want to see that idiot; I don't want to see any thread that idiot started; I don't want to see any thread that idiot contributed to; I don't want to see an article or thread with these keywords.
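The killfile rules above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `Post` and `Killfile` names are made up for this example, not any real client's API) of the three per-user filters described: hide a blocked author's posts, hide threads they started, hide threads they touched, plus a keyword filter.

```python
# A toy killfile sketch. All names here are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    thread_id: int
    text: str

@dataclass
class Killfile:
    blocked_authors: set = field(default_factory=set)
    blocked_keywords: set = field(default_factory=set)
    hide_threads_started: bool = True    # "any thread that idiot started"
    hide_threads_touched: bool = False   # "any thread that idiot contributed to"

    def visible(self, thread: list) -> list:
        """Return the posts of one thread that survive this user's killfile."""
        if self.hide_threads_started and thread[0].author in self.blocked_authors:
            return []
        if self.hide_threads_touched and \
                {p.author for p in thread} & self.blocked_authors:
            return []
        return [
            p for p in thread
            if p.author not in self.blocked_authors
            and not any(k in p.text.lower() for k in self.blocked_keywords)
        ]
```

The point of the sketch is that all of this runs client-side, per reader; no central moderator has to decide anything.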
It is certainly an unresolvable conflict. My major concern is to what extent those that control the "Freedom From" internet will go to prevent the harm done by the "Freedom To" internet. Will my Verizon connection be shut down if I have a node running?
We still use email, blogs and the telephone network (including sms/mms). And newer decentralized systems like Mastodon.
These kind of have moderation but it's decentralized and island specific (also regulated in case of the telephone network).
These forms also don’t have a giant corporation attempting to monetize your attention; telecoms, for example, simply charge a monthly fee. I truly believe the micro-targeted, ad-driven, "gets the most clicks" optimizations have been extremely powerful and destructive.
Yes, it will resolve, and in fact has been solved.
K-means clustering.
This will scale, is compatible with machine learning, and yields an effect that is good for the users of a service -- but is not useful to owners of a service who prioritize profit and control over user happiness.
If trolls, jackasses, and idiots (by each user's independent definition) get automatically segregated into groups where they see only each other's posts, and out-groups just never see their posts: problem solved.
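For concreteness, here is a toy sketch of the idea: represent each user by a vector of their reactions (say, +1 upvote / -1 flag per topic), then k-means-cluster those vectors so that mutually compatible users land in the same group. This is a stdlib-only illustration with made-up data, not a production recommender; a real system would use far higher-dimensional interaction vectors.

```python
# Toy k-means over user reaction vectors (stdlib only, for illustration).
from math import dist

def kmeans(points, k, iters=20):
    # Start with the first k points as centers (fine for a small demo).
    centers = list(points[:k])
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        # Move each center to the mean of its group (keep it if group is empty).
        centers = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    # Final cluster label for each point.
    return [min(range(k), key=lambda i: dist(p, centers[i])) for p in points]

# Each row: one user's reaction profile across three topics.
users = [(1, 1, -1), (1, 0.9, -1), (-1, -1, 1), (-0.9, -1, 1)]
labels = kmeans(users, k=2)
# The first two users share one cluster; the last two share the other.
```

Each cluster then becomes a feed: you mostly see posts from people whose reaction profile matches yours, and the out-groups simply never surface.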
Now, you might say "I don't want a service where neo-Nazis congregate!". I say: why, again, don't you want your police service to be able to infiltrate these groups and observe these people, and arrest them as they see fit?
Hmmm. I wonder.
Once it becomes clear to "evildoers" that the site is not "friendly" to their evil -- toleration doesn't equal agreement -- and that they are not hidden from justice, they will go elsewhere (at least the smart ones), and you can round up the dumb ones at your leisure.
> If trolls, jackasses, and idiots (by each user's independent definition) get automatically segregated into groups where they see only each other's posts, and out-groups just never see their posts: problem solved.
The events of Jan 6 show that leaves a rather huge problem unsolved.
The problem of Jan 6 being that a narcissistic buffoon got elected as president of the United States?
I don't know if any amount of internet censorship can ever stop sects from forming. You have to realize that "drinking the kool aid" is an idiom that long predates the internet going mainstream [0]. You can try to push these people off of mainstream platforms, but what will that accomplish? Ultimately, someone will create an easy to use darknet chat platform, and the more radicalized people will use that. In some ways, it's best if we can shine some daylight at the neo nazis. At least then we have some idea who they are and what they are planning.
I think what we had in the last 6 or so years was actually a lot worse than some "darknet chat platform", because honestly the equivalent of that in terms of usenet/forums has always existed.
What we had more recently was a kind of social media conveyor belt and sieve system that found, nudged and filtered people into a specific direction until there were enough deluded/lunatic/trolls/whatever you want to call them citizens from across the country willing to storm the capitol building.
Your darknet chat room just doesn't have that reach. Potentially it has the reach to organise domestic terrorism in terms of bombings or something similar, but you'd never get a group of people so self-assured in their bubble of reality that they'd willingly assault the Capitol building.
My response would be that while it's true that clearnet forums have more reach than darknet forums, there is a risk that pushing people away into really isolated corners radicalizes them even more. If you have a really big open discussion platform, maybe you get assholes saying that Texas should secede or making racist statements, but you probably also have some amount of normalization from regular right-leaning folks who disagree with that. However, if you ban some political opinions, maybe you end up with isolated islands with fewer people that hold much more extreme views, and nobody to tell them that they are going off the rails and pull them back to reality.
> If you have a really big open discussion platform, maybe you get assholes saying that Texas should secede or making racist statements, but you probably also have some amount of normalization from regular right-leaning folks who disagree with that.
While this sounds reasonable, it is not at all clear that it is truly the case.
One day I walked in the house and saw my mother crying, watching TV, and pictures of a lot of people lying down. My mother was NOT a crier, so this scared the crap out of me as a child, and is probably why I remember it so starkly today. At some point years later I came to learn what was actually going on that day. (My mother was also not an explainer.)
I wince when I hear the phrase used casually. But of course, people have largely forgotten the deeply dark connotations it has.
Anyway, even though we can’t ensure radicalized sects don’t form, simply ignoring them as the post above mine suggested, is not any kind of solution. As we’ve seen, such groups won’t necessarily be satisfied to just talk among themselves.
Having been on the 'Net since before the Eternal September, I haven't been able to quite understand this drive to purge "things I don't like" from public forums.
If grandma is really lovely, but has some bad habits from her youth or a different time or something -- then don't mark her post as "racist".
I don't believe, either, that people are incapable of learning. If she spews hateful invective and then doesn't get happy-birthday wishes, she'll learn! Sheesh.
Remember, your K-means clustering will be public; it's easy to see where you stand, and why you got there! If people in your preferred group prefer to not see people spouting the N-word, you might want to consider reforming your habits and beliefs, if you want to remain a functioning part of that group!
The problem is not moderation: it's the belief in a consequence-free existence.
I don't think your system gets you to the result you want.
Most people are like Grandma, just on different topics. The extreme but possible endgame for your system is everyone stuck in clusters of one, because they've banned everyone else for different offences (or those people have banned them).
In real life people take the bad with the good and learn to ignore minor disagreements (and some big ones too, up to a limit). That is a stable system which has worked pretty well for a very long time.
In real life, there is both internal pressure to not be a dick, and if you push things too far, the threat of external pressure.
On the internet, there is little of the former (Our monkey brains are not default-wired to treat text we read on a computer with respect and empathy), and none of the latter.
Those are very good points but they don't change the fact that grouping by people in this way tends toward smaller and smaller groupings, and exacerbates the sort of division and "different realities" that is hurting social cohesion.
Bubbles are odd, and you forget they are even there. It sometimes only becomes evident when you stray into an unfamiliar one.
Personally I like some cross-pollination. It can take me by surprise when I'm subjected to the vagaries of the public.
People tend to protectively double down, and get very defensive, even over very benign topics. Some thrive on that friction, some just would rather leave it alone and drift towards those with similarish outlooks.
People also have a tendency to believe what they want to believe.
I got a load of stick off a friend, because he thought I held a particular political persuasion. Just because I mildly challenged him. Well I'm not even sure as to why he thought the way he did. He just put me in another box/bubble that wasn't his, and lashed out at me. Quite ugly behaviour in my mind. As soon as he realised we were on the same page on the topic, he vaguely apologised for his outburst. To me however, I was left thinking, well what if I hadn't seen eye to eye. Nice guy in the main. Just like racist Grandma.
Anyway I think what I'm trying to say is that I expect differences of opinion. And can accept them (outwardly at least, up and to a point, maybe..). I like a bit of healthy discourse. But the subtleties and lost contexts can be horrible pain points of misunderstanding. Some just can't stand a difference of opinion.
Also, remember: the ratings, themselves, have different meanings to different K-means clusters.
This insane drive to change the meanings of words (e.g., every traditionally conservative-leaning person is now a Nazi, and everyone proud of their unique cultural heritage is now a Racist, so long as they are also white) is no problem!
A member of a brittle, sensitive, easily-triggered group will quickly find themselves isolated and hearing only the few people who almost exactly match their identical beliefs. And, even those won't last long (just until the first imagined slight or use of the wrong adjective).
This, too, is instructive. Coddling a lack of resilience is not helpful to someone -- but, whatever. Fill your boots! If it works for you, have a blast with your two currently-acceptable friends! ;)
But seriously, the current Kristallnacht purge of social media is the best thing that could happen, in my opinion. It'll force a Cambrian explosion of new platforms that more capably handle differences of belief and tolerance!
>This insane drive to change the meanings of words (eg. every traditionally conservative leaning person is now a Nazi, and everyone proud of their unique cultural heritage is now a Racist, so long as they are also white) is no problem!
This is a strawman, and is generally unhelpful to a good-faith discussion of politics, speech, and moderation.
> I say: why, again, don't you want your police service to be able to infiltrate these groups and observe these people, and arrest them as they see fit?
That is not how it works. The police do not need this. They also will not really use it. However, this does give repugnant ideas a place to spread. It normalises them. It makes things much, much worse.
>However, this does give repugnant ideas a place to spread. It normalizes them. It makes things much, much worse.
I get where you are coming from, but, can we look at three historical examples of repugnant ideas?
1. The earth does not revolve around the sun.
2. Slavery is immoral and man should never own another.
3. Women should take part in the voting process.
There are millions I'm sure from "You need to wash your hands to stop the spread of invisible things that kill people" to "Public executions as decided on by a King are not good for our society" to "The indigenous people here should have the same rights and protections we afford ourselves".
We are wrong about stuff all the fucking time! The humans in society that argued for the worst things, were the same as us. What if we are drastically wrong about any event in the last year? Where do we discuss it without fear of reprisal?
I do not think it's a good idea to "pre-decide" on what is a "repugnant idea". This is what conversation is supposed to be for.
Have a conversation that's more reasonable. Instead of "Do black people deserve to live," which we all agree with, we can have talks of "What are the reasons for poverty and violence in the black community?" And maybe not calling everyone with a non-woke perspective a racist or an Uncle Tom.
Uhhm... I'm wondering how you reach the conclusion that the police don't trawl social media looking for things. This is precisely what they're doing. Maybe keep up with current events!
It's a gold-mine for every law-enforcement service on the planet, for good or ill. They drop to their knees every night, and thank God for Twitter, Facebook, Parler, etc.
And, you might be partially right -- it'll only work for rounding up the really dumb ones. All the smart ones are already on Signal, Telegram, ...
But, seriously. This has got to end.
Throwing a kid into juvenile detention because they yell at their friend "I'm gonna kill you!" in the middle of a game or make a gun shape with their hand at school has got to hold different weight than someone posting an instructional video on beheading. But, no, you have kids kicked out of school because someone saw a toy gun on a Zoom call...
The real problem for police is that they have 5% of the population being called "Racist" or "Terrorist" or "Nazi," not 0.001%. They can't possibly track down any real risks.
However, a social network using K-means Clustering easily isolates the "everyone other than me is a racist" crowd, from the "we're normally quiet and tolerant, but holy smokes this guy has really lost the plot" crowd.