"Bad Faith Communication: discourse that is intended to achieve behavioral outcomes (including consensus, agreement, "likes") irrespective of achieving true mutual understanding"
I would argue that nearly all advertisements fit this description. The field of advertising has achieved a massive technological leap over the past few decades.
Or religions, traditions, cultures, political slogans and reductionisms of all sorts. Essentially, anything that fits within the realm of the Noble Lie.
The actual EFF headline is, "15 Universities Have Formed A Company That Looks A Lot Like A Patent Troll". Why was the number "15" omitted from the HN headline? It seems to imply that universities in general are guilty of this. Even if it's true, the article doesn't make that argument.
In general I agree with the EFF that this consortium, and software patents in general, are harmful. But this change to the headline seems like an editorial twist; it changes the framing of the story.
Probably an overzealous attempt to meet this site guideline:
If the title contains a gratuitous number or number + adjective, we'd appreciate it if you'd crop it. E.g. translate "10 Ways To Do X" to "How To Do X," and "14 Amazing Ys" to "Ys." Exception: when the number is meaningful, e.g. "The 5 Platonic Solids."
There's no such policy and there are countless counterexamples. I think you may be falling prey to the cognitive bias that causes people to notice and/or emphasize what they dislike much more than what they like. The bad stands out more than the good, pain is more memorable than pleasure, etc. This leads to false feelings of generality. Plenty of past explanation at these links, if you or anyone happens to care:
As long as Facebook continues to recommend and target user-generated content based on engagement metrics, it will reward the sort of engagement generated by vicious deceptions such as holocaust denial. It is engineered to handle any post, including expressions of hate, by finding the most receptive audience possible for that post.
Rather than targeting a narrow swathe of Definitely Terrible content to censor, Facebook should target a wide swathe and rather than removing it, exempt it from the algorithmic feed, group recommendation input, etc. If my friend posts something hateful and I see it, I can respond and push back; if they post something hateful and only their friends who agree see it, and they also get a dozen different similarly hateful echo chambers in their group recommendations, that's far more destructive.
By growing as large as they have, and by building automated systems to amplify content to mass audiences, they have acquired that role. It is unfortunate that their control over their responsibility is unilateral and undemocratic. But at their scale, if they chose not to try and assess the accuracy of information, but instead to blindly amplify it based on engagement metrics, that is also a political choice.
One possible option that never gets discussed is to nuke the amplification methods. If we stop recommending content automatically this ceases to be a problem.
As someone who's worked at a big social media company — no, that's not at all what consumers want. They want chronological feeds with zero garbage mixed in. It's okay to have a separate recommendations feed, some (not all!) people want to discover new things, but it's totally not okay to meddle with the main one, and it's nothing but mockery to give users no control over it. People also want their preferences respected, they certainly don't want them reset every now and then.
The only reason people keep using services like Twitter is because their network keeps them there.
Well I guess it depends on making a distinction between what consumers think they want, versus what they actually do.
Yes, people say they don't want recommendations, because 95% of them are irrelevant.
But then the 5% (or 2% or 0.5%) turn out to be super-relevant, and they find new people to follow that they love, and learn about things they love, and the experience in the end turns out to be a net positive.
Their actions show that it's valuable in the end. Otherwise the feature wouldn't exist at all. Recommendations aren't advertisements, sites don't make money off them -- sites use them because people genuinely find things that lead them to use the sites more.
I'm not denying the undeniable fact that some people sometimes want to discover new things. I'm just saying that it's absolutely possible to do that in a respectful manner. No one, ever, under any circumstances, likes or wants to be manipulated, be it overtly or by having their subconscious played with — period. Adding non-configurable extra anything into people's newsfeeds, be it recommended posts, ads, or "people you may know" blocks, is a crime against user-friendliness. Those who do want to discover new things will simply open the "discover"/"explore" tab that contains a dedicated recommended content feed on their own. There is no need to nudge anyone toward anything.
People aren't stupid if you don't build your UI/UX around the assumption that they are. They also like transparent, understandable algorithms. Chronological feed of (only) the people one follows is as transparent as it gets. A chronological feed with some recommendations mixed in is more opaque and confusing. An algorithmic feed is an epitome of opaqueness. Opaqueness naturally drives users away because it doesn't exactly instill confidence that their posts will reach their followers.
Another example: do you understand what the "see less often" button in Twitter does? No one does. No one likes cryptic algorithmic bullshit forced on them with no way to disable it.
Choice is very important.
> versus what they actually do.
Do manipulations work? Of course they do. Are people happy when they are manipulated? Of course they are not.
> Recommendations aren't advertisements, sites don't make money off them
They absolutely do. Recommendations aren't there because Twitter wants to be helpful — they'd be more user-respecting as I said above if that was the case. They're there because they drive engagement metrics up, and those in turn translate into someone's KPI.
Do consumers want it, or is it merely taking advantage of some more subconscious human behavior patterns? And if the latter, is this something that is bad for humankind?
Consumers want a lot of things with negative externalities - goods that cost less because they're produced with slave labor, transportation that emits greenhouse gases, etc. Their preference shouldn't trump the obligation not to harm third parties.
Automated recommendations of a human-curated set of content - e.g. Netflix recommendations for its suite of programming - are much less objectionable, because they can't amplify anything the organization has not intentionally decided to present. It's the combination of UGC and ML recommendations that presents problems.
> But most of the time people like getting content recommended. It's what consumers want.
Do they, or do recommendations just boost some KPI that serves as a proxy for actual utility?
Anecdotally even in non-tech circles most of my friends complain about how bad recommended content has gotten, or roll their eyes at whatever "personalized" ad for garbage they've been recommended.
I disagree; this isn't the nuclear option. The nuclear option is forcing these platforms to take a more editorial role in the content they're serving, and that comes with a whole bunch of good and a boatload of bad.
Gigantic unmoderated platforms like this, which promote random snippets of speech to drive user engagement and ad revenue, shouldn't exist. The problem we still haven't solved is how to specifically kill off platforms of this type without killing forums and discussion boards in general. I think there is a distinction there, though I'm not certain precisely what defines it - if anyone figures it out, please let us all know!
We've had forums and discussion boards for decades now that do not have recommendation features. I don't see why we can't put that genie back in the bottle.
IMO the moment you start highlighting things that people didn't explicitly ask for, it's an endorsement.
I think it's like gerrymandering - yeah, we can all tell when it's gotten to stupid levels, but the Supreme Court wasn't wrong to want a definition of where the line between "okay" and "bonkers" is. I personally think the decision could've been a bit more aggressive against gerrymandering, but we do need some clear line to say "if you're beyond this, you're doing an illegal thing" - and while we could close in on that line over time with a slow accumulation of precedent, it'd be a lot cleaner to have a decent measure.
Is lying and deceiving people fine, according to the 1st amendment?
For example, foreign states that pay armies of internet trolls to, in effect, choose the president in the US -- is that what the 1st amendment wants to happen?
I think "information" can kill more people than cocaine; it's more dangerous.
>Consumers also want cocaine, but that doesn't mean you get to sell it to them with impunity.
The appropriate situation for those that want cocaine is something similar to the rules around purchasing/possessing/using alcohol.
From an economic standpoint (increased tax revenue, reduced spending on "enforcement" and incarceration, increased economic output because fewer people are in prison, etc.) and a societal standpoint (more resources available to the 2-5% of folks who end up with dependency problems, reduced property crime, not harming communities by pulling significant numbers of residents out of the community and incarcerating them, etc.), regulation beats prohibition.
As such, there's no good reason for any mind-altering substances to be illegal. Rather, they should be regulated and taxed appropriately.
I also wish that these decisions were more democratic. At the same time, personally, I think that the folks who made these changes did a great job. They're helping preserve American democracy.
In particular, I appreciate:
"10/2019 - Banned all political ads on Twitter, including ads from state-controlled media"
I hope Facebook employees are taking notes.
I'm also a huge fan of:
"...we will label Tweets that falsely claim a win for any candidate and will remove Tweets that encourage violence or call for people to interfere with election results or the smooth operation of polling places."
Does anyone know whether the next one is official US policy, or whether it's just Twitter's policy?
"To determine the results of an election in the US, we require either an announcement from state election officials, or a public projection from at least two authoritative, national news outlets that make independent election calls."
Mail-in ballots can arrive at the voting office as late as November 20th this year (depending on your state.. you still have to mail your ballot out by November 3rd, though) [0]. With so many people voting by mail this year, we might not know the election results until November 20th. I hope that election officials (and Twitter officials) will take that into account.
>But at their scale, if they chose not to try and assess the accuracy of information, but instead to blindly amplify it based on engagement metrics, that is also a political choice.
No that's an apolitical choice.
The political choice was not doing this in 2012 when it was Obama benefiting from it.
The classic rebuttal to free speech arguments which I'm sure you've heard, is that the first amendment doesn't apply to private companies, and that your right to free speech doesn't entitle you to a megaphone, etc.
I think a more nuanced and useful way to look at things is to think of Twitter as an amplification machine rather than a speech machine. I can say what I want out loud, I can write whatever letters I want, I can make my own website if I want, etc., but putting it on Twitter causes Twitter to amplify it. Many of these announced changes pertain to what Twitter chooses to amplify - and how - rather than what it permits people to say. (As far as I can tell, the only tweets they are actually removing are those that call for violence, a standard for censorship that seems quite reasonable.)
If we think in terms of how and when to amplify speech, rather than trying to figure out what kind of speech to censor, we can hit upon more workable improvements. Twitter's proposals here, under that framing, are a mixed bag.
Twitter provides several ways to amplify posts - some of which are intentional on the part of users, some not. For example, if I follow a person, I'm telling Twitter to show all that person's posts in my feed. If I reply to a tweet, I'm telling Twitter to show my post to that person in their notifications, and also show it to other people who engage with it. If I quote-rt a tweet, I'm telling Twitter to show it to everyone who follows me, alongside my commentary. Etc.
On the other hand, if I like a post, or engage with it in any way, I'm not telling Twitter to show it to anyone - but my Like may cause it to recommend the post to others, sort it upward in the algorithmic timeline, etc. This unintentional amplification can have unintended consequences, because the system cannot tell when engagement metrics are due to positive or negative characteristics of the post.
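To make that concrete, here is a minimal sketch of what I mean (purely hypothetical, not Twitter's actual ranking code; the field names and weights are invented). Nothing in it can distinguish likes born of agreement from replies born of outrage:

    # Hypothetical engagement-only ranker, for illustration; field names are invented.
    def engagement_score(post):
        # Raw counts only: a post that provokes angry dunking scores the
        # same as one that earns genuine appreciation.
        return (post["likes"]
                + 2 * post["replies"]
                + 3 * post["retweets"]
                + 3 * post["quote_retweets"])

    def rank_timeline(candidate_posts):
        # Sort purely by engagement; no signal here reflects accuracy,
        # quality, or whether the engagement was positive or negative.
        return sorted(candidate_posts, key=engagement_score, reverse=True)

Anything built on that kind of score will amplify whatever provokes the strongest reaction, regardless of why people reacted.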
Quote-retweets are also rife with unintended consequences. If someone "dunks" on a post by quote-retweeting it with criticism or mockery, they're betting that their comment is going to lower the status of the person they are quoting or persuade people the post is false. But the folks reading their post may not agree - and the original post might have been a bad-faith attempt at distraction, which a dunk then amplifies. Alternatively, if a popular account dunks on a much less popular account, it can (sometimes intentionally, sometimes not) trigger a wave of hostility and harassment.
So I like parts of Twitter's changes here - they have the right to try and amplify true information more than false information, and removing flagged posts from recommendations will do that. Additionally, removing recommended content from non-followed accounts from the algorithmic timeline is positive as well - it reduces unintentional amplification and puts more control in the hands of users. But their encouragement of the quote-retweet is concerning. They don't seem to realize how effective a weapon it can be.
I would argue that any automated recommendation of user-generated content needs to be carefully controlled, if not abolished altogether. Recommendation systems cannot distinguish between content with high engagement due to quality and content with high engagement due to emotionally manipulative dishonesty or other negative factors. And special-interest (or bigoted) political actors, who simply want "the most effective way to attack / promote X" rather than to arrive at the most truthful position, can test and manipulate those recommendation systems far more effectively than folks trying to engage with nuance and good faith.
This "situation where lies so easily go viral" seems to me to have intensified starting in around 2014 to 2015 - when Twitter introduced the quote-retweet, and Facebook introduced the algorithmic timeline. I don't think "free speech" is the right framing for thinking about it. The recent phenomenon is not the existence of extremist political movements or medical misinformation, but rather, their amplification.
"At a 3% per annum growth rate of CO2, a 2.5℃ rise brings world economic growth to a halt in about 2025."
I wonder if attempts by the scientific community to persuade world leaders of the severity of this problem would have been more successful if this had been more emphasized, rather than inches of sea level rise, wildlife extinctions, effects on poor populations, etc. If there is one thing political and financial leaders understand, it is their own dependence on continued economic growth - and continued expectations of economic growth.
I don't think global warming is on track to bring world economic growth to a halt by 2025; that's only 5.5 years away. As emphasized in the report, the models of the time had a very wide cone of uncertainty - on one end you had the immediate destruction of human civilization, on the other a slow meandering towards harsher conditions. Fortunately for us, we got off easy relative to what could have been. (Although it's not right to say we "got off": we are continuing to put CO2 into the atmosphere, and unless we stop, we will eventually push it far enough to make good on every ghastly possibility they considered in the 80s.)
GDP growth is trending down, and has for years. Considering it's 40 years old I'm more struck by the accuracy than us not seeing the prediction come true in 2025.
Maybe the precise date is off, but the trend is already becoming clear, and effects are accelerating. Developed economies grow in the low single digits, it's not going to take much to eliminate much or all of it.
Edit: Which potentially makes the new normal, whenever it may come, a constant recession. That is going to be... interesting.
Yeah that particular date certainly had a lot of uncertainty, as any 50 year projection must. My "this" was ambiguous; what I was trying to refer to was, the issue of warming's effect on economic growth generally, regardless of the specifics of particular predictions.
The trouble is that CO2 isn't even a pressing problem compared to everything else. There are so many forms of pollution that are killing things right now. There are the great garbage patches in both the Pacific and Atlantic oceans. There are the lakes of sludge in factory cities in China. There are the coal fly ash dams at coal plants all around the US (including dams like the one at the Kingston Fossil Plant that broke and contaminated the entire watershed between Nashville and Knoxville), water supply contamination from fracking, and the ongoing ramifications of the BP oil spill in the Gulf, which is still not really cleaned up but which everyone has forgotten about.
To solve all these other very current sources of pollution that are killing us right now, we need to consume less. We need more trains, fewer cars, cellphones that are designed to last 8~12 years instead of 2~4, industry that isn't based on infinite growth, more automation, less fear over loss to automation, and just a huge change in the way we think about the world, consumption and the economy.
All of these changes will reduce CO2, but CO2 is just a symptom of a much, much larger problem. People will continue to argue about CO2, and it's a red herring. Humanity needs to focus on the actual flu and not the runny nose/sniffles.
CO2 is not just disrupting the climate, it is also acidifying the seas, which will soon suffer an eco-collapse as shellfish and coral become unable to precipitate calcium out of the water.
There is a nearly commensurate problem. If existing A/C and refrigeration systems end up venting their HFCs, that will cause as much climate disruption as CO2, and remain in the atmosphere for centuries. Somehow we have to drain every failed compressor and incinerate it all.
>People will continue to argue about CO2, and it's a red herring. Humanity needs to focus on the actual flu and not the runny nose/sniffles.
You have to be careful about magnifying good plans into a design to destroy and replace every institution of power. If you let your views about how society should be organized piggyback on the need to solve problems, you'll end up fighting for a communist revolution instead of fighting climate change.
There are plenty of ways to address this issue that don't involve the larger problem, (the larger problem being society, human nature, the universal wavefunction and the boundary conditions of the universe...) and you can implement them without having to fight off opposition from everyone else on earth.
Regular old techniques like funding research and taxing externalities can help with this problem, and the opposition to them is going to be far smaller than the opposition to replacing capitalism, or whatever bigger-picture solution you're alluding to.