I think some other responders have touched on this lightly, but I feel it should be said explicitly: sometimes engaging in conversation is not actually what the other party is doing, even though it might look like it.
Sometimes they are not speaking to you at all, but rather to your listeners. And the larger your pool of listeners the more likely you'll encounter arguments in bad faith because it turns out that if the goal is persuasion then good-faith arguments don't scale.
A tangent here is that on private-public platforms like Twitter, "your listeners" could be an entirely separate set from the people you actually have contact with. Algorithms that signal-boost opinions beyond their actual social circle essentially turn them into propaganda posters for a host of varying in- and out-groups.
My personal opinion is that this is a misaligned incentive that social platforms should correct structurally, rather than via governance or policies. And pushing the corrective actions back down to the individual is a cop-out.
> if the goal is persuasion then good-faith arguments don't scale.
This is true, but only in a very narrow, specific sense. Arguments don't persuade on their own; they scale with the credibility of the speaker. Speakers gain credibility and respect by:
- Giving good advice.
- Showing good judgement and making accurate predictions.
- Demonstrating that they understand various audience constituencies, with bonus points for demonstrating that they actually care about these constituencies.
Conversely, there are numerous ways for speakers to destroy that credibility, like a history of making deceptive statements.
For most people, this means good-faith argumentation doesn't scale because they don't have the standing to make people take them seriously. But it isn't a rule of good-faith arguments as such: arguments are just constrained by social factors and by people's finely-tuned heuristics for who's worth taking seriously.
I suspect this is the main reason people are frustrated by internet debates, to the point of wanting to give up on them and just start censoring people. I make what I think is a careful, reasoned case, and in response all I hear is crickets, or trolls, or "lol lol lol lmao lol lol lol". This is because, to almost everyone reading Twitter, Hacker News, Substack, or the NYT comments, I'm just a rando.
It takes time to build respect and credibility, so keep at the good faith discussion, give people a reason to read what you're writing, and keep your relative obscurity in perspective on the way up.
Do you mean only at the local level, individually? Because it sure isn't hard to find extremely popular speakers with a great deal of influence who run counter to most of those points. It is hard not to conclude that the only thing that scales well on social media is to correctly deduce what most people want to hear, and then say it. People don't care about deception, good advice, good-faith discussion, any of that, so long as you confirm the biases they already hold.
People really want to be told they are right, and were always right.
Hypothesis: bad faith communicators are more persuasive among people who mostly agree with them; good faith communicators are more persuasive among people who mostly disagree with them.
Not sure if this is accurate, and I'm sure it misses some nuance, but it rings true-ish to me.
I agree with what you're saying, but I'm not talking about things that make people popular; I'm talking about argumentation and what makes someone persuasive. I'm talking about the relatively rare circumstances that can create opportunities to repudiate biases instead of confirming them.
I agree with you to a point, but it's not my experience that speakers gain credibility and respect by giving good advice / showing good judgement / making accurate predictions. This would be true in traditional discourse, but not online. Online it feels as if speakers gain an audience by demonstrating their world view as loudly and as viciously as possible. It's like a group of children where one has learned that the way to get attention is to scream louder and longer than the others. I also have this feeling that social media is ruled by those who are willing to spend their time and effort to elevate a particular point of view no matter how unpopular it is.
I know of many specific examples in my field where the most thoughtful people who operate the most frequently in good faith have nowhere near the following of bad faith loudmouths.
I assume it's entertainment. People enjoy watching drama. Social media is their drama hit, and genuine communicators are frankly more boring.
The dynamics you describe are real, especially on the big social-media utilities like Twitter, Facebook, YouTube, and TikTok. "Boo outgroup" is the easiest thing to sell in the attention economy; the louder and more vicious, the better.
But "building an audience" is a different than building credibility, trust, and respect. If you just aim for the most eyeballs, you find yourself vulnerable to what some people call audience capture, where your audience controls you, rather than the other way around.
For example, Trump has a huge audience, but that didn't stop an audience from booing him when he recommended getting vaccinated. [0]
The best most of us can hope for, if we're careful, is influence on a small group of people.
This is only true at simulacra level 1 [1], when communication is focused on objective facts. This is not a good way to understand communication in business relationships, social media, traditional news media, or politics. I like talking with my friends who communicate this way, but it's not the only way that people communicate, and pretending that it is will lead to confusion.
There are other ways to persuade: You could lie about the facts of the argument, you could convince others you're in their group or that your opponent is in a group opposed to them, or you could say whatever you think they want to hear and sprinkle your message in among it.
I agree with all of that. The context for my comment is discussions involving good-faith arguments intended to change hearts and minds (more or less object-level or simulacra level 1 arguments).
The parent commenter wrote that "good-faith arguments don't scale". I don't agree with that, because good-faith arguments do scale with social capital. People often run afoul of several problems that make it seem like they don't:
- Their reach exceeds their grasp: they want to persuade strangers, but don't have enough social capital to pull it off.
- They think they're involved in a good-faith discussion or argument, but the other participants are competing for status or trying to entertain themselves (or others).
In other words, I don't think good-faith arguments are the only way to persuade people, but if that's your preferred technique, you have to develop credibility, respect, and trust - on top of building an audience, which is a whole different matter.
Aye, the ideal goal of discussion shouldn't be to persuade, in my opinion, but to explore each other's ideas with the aim of modifying and improving each position. I think the process of doing that also happens to be the best way to actually get people to go along with something.
Take an interest, not a position. Holding a position often makes us feel like we have to defend it; taking an interest increases the likelihood that both parties remain open.
With many political matters, though, people inherently have to hold positions by virtue of the fact that the issues affect their material interests. You need to remain open in spite of that, but there are also limits to how open one can be when someone else is arguing that you should lose your job or that your health insurance should be able to deny you coverage for care, or maybe that you are culturally or genetically inherently stupid or incapable of self-governance.
Not all controversial matters are political. For instance, it's not inherently political whether global warming is real or whether vaccines work in curtailing a pandemic. Only a certain side in these debates wants to reduce arguments in which there is rational evidence (and important actions to be taken) to something that is merely "political", and therefore just a matter of constitutional free speech and action.
That's a good way of thinking about it. Reminds me of when Alan Kay asks "Are ideas like matter? [bumps his fists together] Or are they more like light? [Overlaps his hands]" (roughly paraphrasing there).
The ability to entertain multiple, seemingly contradictory thoughts at once is a good skill I think.
“The ability to entertain multiple, seemingly contradictory thoughts at once is a good skill I think.”
Unfortunately, large portions of the populace see this as being phony. You must be a red-team or blue-team person; being “people without a tribe” is itself a heresy, because not conforming to this idiotic false dichotomy denies the simpletons and partisans their fallacious little world views. Socrates died in vein, I suppose.
That's a strange take on Socrates, it makes him sound like he died (intended to die?) for our sins like some kind of Jesus figure. I doubt he'd agree with it.
Anyway, I suspect you mean 'in vain'. Veins are those little tubes in your body that the blood flows through, generally in the direction of the heart. Since Socrates died by poison, one could argue that he literally died "in vein", but this is probably not the interpretation you were going for.
I agree with you on the teams thing, and in my opinion it has a strong relation to two-party systems.
> That's a strange take on Socrates, it makes him sound like he died (intended to die?) for our sins like some kind of Jesus figure. I doubt he'd agree with it.
Much of the Christian narrative about Jesus and the nature of divinity is influenced by Neo-Platonist thought, and Plato did kind of frame Socrates' story in those terms. So it's not that Socrates was a Jesus figure, Jesus was a Socrates figure.
>Sometimes they are not speaking to you at all, but rather to your listeners. And the larger your pool of listeners the more likely you'll encounter arguments in bad faith because it turns out that if the goal is persuasion then good-faith arguments don't scale.
I hesitate to throw this out there because, in general, I don't agree with all the hot takes such as "the internet has made us dumber" or whatever negative trait is the flavor of the week to blame on technology. But in this case, I'd say a lot of this is the direct result of forums/social media/etc.
People are arguing/debating the same way politicians do on stage with each other. Functionally we are millions of surrogates arguing everywhere for whatever individual/policy/ideal we believe in. The goal isn't to change the mind of your "opponent," it's to convince onlookers that you're right. And whether that victory is the result of the "better argument" is secondary - the point is to win over the most people by whatever means are deemed most effective. This is bad for dialogue, but (usually) great for winning debate competitions.
I'll do this sometimes in web forums. There are certain points of view, expressed certain ways, where I already have a pretty good idea of exactly what the poster's opinions are, how they'll defend them, and how they'll respond to any questions, and I know my likelihood of getting them to reconsider, or maybe even engage on terms I consider reasonable, is nearly zero. But I might reply anyway, just so lookers-on can get a sense that there are alternatives to that POV (and that, after a couple of back-and-forths, maybe their position isn't as strong as their very certain tone implies).
Expanding a bit further, all sorts of problems arise when participants bring different conversational frames with them. Take, for instance, the case where one participant frames a back-and-forth the same way they would a dyadic conversation, while the other participant frames the same strip of events as a debate in front of an audience. The former may see the latter as bombastic or evasive, while the latter may see the former as naïve or pestering. There are all sorts of rhetorical registers available or unavailable to each depending on how each frames the exchange, and using an "unavailable" rhetorical register in a particular frame can be keyed as a deception. Bad faith arguments can then be thought of as deliberately fabricating a frame in order to induce a false belief (e.g., that the participants share a common frame).
> this is a misaligned incentive that social platforms should correct structurally, rather than via governance or policies. And pushing the corrective actions back down to the individual is a cop-out.
Can platforms do a better job of framing online discussion in a way that makes it easier to maintain common framing? What sorts of laminations are available, or could be invented, to facilitate common framing and allow readers to identify frame-breaking activity? Platform creators literally create the mediums for these interactions; the choices they make make them structural participants in how these interactions play out. As such, there is an implicit responsibility placed on them because of their agency in this process.
You make a very solid point: public conversations are far more predisposed to bad faith. Who has the ability to accept the perception of looking bad in public?!
> My personal opinion is that this is a misaligned incentive that social platforms should correct structurally, rather than via governance or policies. And pushing the corrective actions back down to the individual is a cop-out.
Hmm... I rather think the opposite is the case. It's all about playing to the crowd, and there is no incentive for anyone (apart from the good-faith individual) to do anything about it: all that noise translates into engagement with the platform.