If the solution were providing people with a list of bad-faith tactics, we would have been done with it at least twice by now: first when Socrates was arguing with the Sophists 2,500 years ago, and again when Schopenhauer wrote his Eristic Dialectic (https://en.wikipedia.org/wiki/The_Art_of_Being_Right). And before you think 'maybe someone doesn't know yet': yes, you are correct. Someone doesn't know. We have tried telling everyone before, and just trying harder doesn't seem to cut it.
What I'd argue would be at least a tiny step forward is thinking in terms of the games people play, the rewards they seek, and maybe even the monetization of these systems. Thinking of people who argue in 'bad faith' as being mostly plain wrong is naive and somewhat offensive. Talk to your PR department every now and then: some of them are smart and know exactly what they're doing. The same goes for Twitter discourse and everything else.
Telling people (or yourself) to build better communities ignores the costs involved in managing such a community. Can you afford the onboarding of even walking people through that cute list of bad-faith tactics? Can you do it faster than a place that doesn't bother? Can you achieve higher retention than love-bombing communities do?
No. No, you can't.
Not with current tooling, at least. Without pushing my own products/services (today!), here are some angles that seem achievably hard, yet somewhat underdeveloped:

- Good-faith arguments are more expensive in time. They can be cut into pieces or redesigned to give them a better chance.
- Both the wrong and the correct ways of thinking about a specific problem are actually very limited in number. Maintaining a searchable database of them for reuse should dramatically speed up 'getting through' (a rough sketch of what I mean follows below).
- False positives in ostracism go unnoticed. Layered moderation that provides feedback on the initial misjudgment can noticeably improve the space: not so much in retention (those numbers would be small), but by limiting the echo chamber through avoiding rituals of cancellation, without increasing costs as much as having a 'full conversation' with everyone before banning would.
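To make that second angle more concrete, here is a minimal sketch of what a 'searchable database of ways of thinking' could look like, assuming nothing fancier than keyword lookup (all names are hypothetical; this is not an existing tool): canonical argument patterns, each with a reusable good-faith reply, so the expensive explanation is written once and retrieved many times.

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentPattern:
    """A recurring (correct or flawed) way of thinking about a problem."""
    tag: str                # short identifier, e.g. "correlation-causation"
    summary: str            # one-line description of the pattern
    canned_reply: str       # reusable good-faith response
    keywords: set[str] = field(default_factory=set)

class PatternStore:
    """Tiny in-memory 'database' of argument patterns with keyword search."""
    def __init__(self) -> None:
        self._patterns: list[ArgumentPattern] = []

    def add(self, pattern: ArgumentPattern) -> None:
        self._patterns.append(pattern)

    def search(self, text: str) -> list[ArgumentPattern]:
        words = set(text.lower().split())
        # Rank patterns by how many of their keywords appear in the text.
        scored = [(len(p.keywords & words), p) for p in self._patterns]
        return [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0]

store = PatternStore()
store.add(ArgumentPattern(
    tag="correlation-causation",
    summary="Treats a correlation as proof of a causal link.",
    canned_reply="A correlation alone can't tell us which way causation runs, "
                 "or whether a third factor drives both.",
    keywords={"correlation", "causes", "caused", "linked"},
))

for hit in store.search("the study says X is linked to Y, so X causes Y"):
    print(hit.tag, "->", hit.canned_reply)
```

A real version would obviously need better retrieval than keyword overlap, but even this crude shape shows where the time saving for good-faith participants would come from.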
> What, I'd argue, would be at least a tiny step forward would be thinking in terms of games people play, rewards they seek
Definitely. One thing that I think is underappreciated is the extent to which we see "preference falsification". This is a game people play where they pretend to hold different preferences in order to fit in better with their in-group.
It's common for preference falsification to be manufactured intentionally. I think Robin Hanson formalized it with the idea of a "meta-norm": a norm that says you must ostracize people who do <bad thing x>, AND you must ostracize people who don't follow this rule. I think when people complain about "cancel culture", this is what they are really unhappy about; they just lack the vocabulary to articulate it. The sneaky thing about the meta-norm is that it's self-reinforcing. Once enough people follow the meta-norm, following it becomes a stable equilibrium where no individual person gains from not following it.
From Scott Alexander:
> Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced.
> So you shock yourself for eight hours a day, because you know if you don’t everyone else will kill you, because if they don’t, everyone else will kill them, and so on. Every single citizen hates the system, but for lack of a good coordination mechanism it endures.
The subtlety of this is that you might genuinely believe that everyone supports the electric shocks, because you'll never hear anyone speak out against them, even though everyone hates them.
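To put rough numbers on that "stable equilibrium" claim, here is a toy payoff calculation (the numbers are invented purely for illustration, not taken from any source): a citizen weighs the cost of complying with the shocks against the expected punishment for deviating, where punishment scales with the fraction of neighbours who enforce the meta-norm. Failing to punish others is itself punishable, so once enough people enforce, both "stop shocking yourself" and "stop punishing deviators" become losing moves.

```python
# Toy model of the meta-norm, with made-up payoffs. Lower is worse.
COST_OF_COMPLYING = -8     # eight hours of shocks per day
PUNISHMENT = -100          # being killed / fully ostracized
COST_OF_ENFORCING = -1     # small personal cost of joining in a punishment

def payoff(deviate: bool, enforce: bool, enforcer_fraction: float) -> float:
    """Expected payoff for one citizen, given how many neighbours enforce the meta-norm."""
    total = 0.0
    total += enforcer_fraction * PUNISHMENT if deviate else COST_OF_COMPLYING
    total += COST_OF_ENFORCING if enforce else enforcer_fraction * PUNISHMENT
    return total

for fraction in (0.0, 0.1, 0.5, 0.9):
    choices = [(d, e) for d in (False, True) for e in (False, True)]
    best = max(choices, key=lambda c: payoff(c[0], c[1], fraction))
    print(f"enforcers {fraction:.0%}: best response is deviate={best[0]}, enforce={best[1]}")
```

With no enforcers, the best response is to deviate and not enforce; with even a modest fraction of enforcers, complying and enforcing wins, which is exactly the trap the quote describes.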
I think this dynamic is so powerful that it's almost innate. I remember once when a friend of a friend cheated on her boyfriend regularly. I obviously thought negatively of the cheater and didn't want to be around her, but I also thought negatively of my friend for continuing to be around the cheater. The instinct is that punishing cheaters by social ostracism is socially useful, so we should also punish people who fail to ostracize cheaters by ostracizing them, and so on. This can be good, as in the case of punishing cheaters, but the problem is that the same mechanism works for any social norm, even one that 100% of people disagree with.
I think this is a real and powerful social dynamic that leaves a huge number of people with no choice but to act in bad faith. If it is a real social dynamic, how can it be neutralized? One approach I think is promising is to use local opinion polls, structured as anonymous opinion elections. If everyone could vote anonymously, I'm sure they would say "I'm not such a fan of these electric shocks" (and the anonymity protects them from the fear of socially-enforced retaliation). Once it becomes common knowledge that almost nobody around you likes the electric shocks, it's much easier to coordinate on "let's stop punishing people for not shocking themselves". Electric shocks are just an example; you could use this for any hot-button political issue. For example, in the antebellum US South I'm sure there was immense social pressure to be pro-slavery, but opinion elections might have helped pro-abolition people understand whether they were even in the minority (and if so, by how much).
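Here is a rough sketch of what such an anonymous "opinion election" could look like (class and parameter names are hypothetical, purely illustrative): votes are stored without any voter identity, and the tally is only released once a quorum has been reached, so no individual can be singled out. Publishing the aggregate is the step that turns private dissent into common knowledge.

```python
from collections import Counter

class OpinionElection:
    """Anonymous local poll: reveal only the aggregate, and only above a quorum."""
    def __init__(self, question: str, quorum: int = 20) -> None:
        self.question = question
        self.quorum = quorum
        self._votes = Counter()

    def cast(self, choice: str) -> None:
        # No voter identity is recorded, only the choice itself.
        self._votes[choice] += 1

    def result(self):
        total = sum(self._votes.values())
        if total < self.quorum:
            return None  # too few votes: releasing a tally could expose individuals
        return {choice: count / total for choice, count in self._votes.items()}

poll = OpinionElection("Should we keep the daily electric shocks?", quorum=5)
for vote in ["no", "no", "no", "yes", "no", "no"]:
    poll.cast(vote)

print(poll.result())  # {'no': 0.83..., 'yes': 0.16...} -- the quiet majority made visible
```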
This is just one mechanism I think might be workable, but I'm sure if we sat down and thought about it we could come up with many others: reputation systems for those who make accurate predictions about the future, debates where people have an incentive to call out their counterparty's selective reporting of the facts, etc.
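For the prediction-accuracy idea specifically, one standard tool is the Brier score: the squared gap between the probability someone stated and what actually happened, averaged over their resolved predictions (lower is better). A minimal, purely illustrative tracker might look like this:

```python
from collections import defaultdict

def brier(stated_probability: float, happened: bool) -> float:
    """Squared error between the stated probability and the 0/1 outcome."""
    return (stated_probability - (1.0 if happened else 0.0)) ** 2

class PredictionLedger:
    """Tracks each user's resolved predictions and their average Brier score."""
    def __init__(self) -> None:
        self._scores = defaultdict(list)

    def resolve(self, user: str, stated_probability: float, happened: bool) -> None:
        self._scores[user].append(brier(stated_probability, happened))

    def reputation(self, user: str):
        scores = self._scores.get(user)
        return sum(scores) / len(scores) if scores else None

ledger = PredictionLedger()
ledger.resolve("alice", 0.9, happened=True)    # confident and right
ledger.resolve("alice", 0.2, happened=False)   # correctly doubtful
ledger.resolve("bob", 0.95, happened=False)    # confident and wrong
print(ledger.reputation("alice"), ledger.reputation("bob"))  # ~0.025 vs ~0.90
```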
And this is muddled even further by effective propaganda.
Say a research article came out claiming health benefits from the electric shocks, or even making some vague "think of the children" style claim like "children become violent and dangerous when not shocked". It gets parroted by all the news organisations you'd expect.
Even if casual scrutiny debunks the article, I believe a worryingly high percentage of the population would believe it, because they _want_ to believe it, because it means they don't have to worry about the shocks. They've justified the shocks to themselves, so the shocks become less painful. Accepting afterwards that the shocks are useless or cruel would require them to accept that they were doing something harmful for no reason, and people are generally very resistant to being told they were wrong, especially if they have any stake in the status quo.
So now you have people who _genuinely_ believe in the shocks, even though they got there through being deceived. Then all it takes is a bit of vilifying of those who even suggest that there might be another way ("They want to hurt our children") and the system becomes self-sustaining.
I'm not making a value judgement, but I thought the parallels between your last suggestion and the 'blind cancel' idea in this thread (https://news.ycombinator.com/item?id=30906621) were kind of interesting. I don't know what that means; I'm mostly just free-associating.