This is a substantive article. We've changed the title to a representative sentence from the main text, in the hope of not plummeting straight into national-race-ideology war. If you post in this thread, please don't post that kind of thing. It just leads to predictable flamewars.
Instead, follow the site guidelines. They include "Comments should get more civil and substantive, not less, as a topic gets more divisive," "Eschew flamebait," and "Please don't use Hacker News primarily for political or ideological battle."
She remarked that YouTube's recommendation system invariably pushes people towards more extreme views on anything, not just politics. Interested in vegetarianism? Here's something about veganism. Her hypothesis is that it works this way because the goal is to make people spend as much time on the site as possible, and feeding people progressively more extreme content works for that.
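That hypothesis can be sketched as a toy simulation: if predicted watch time grows with how extreme a piece of content is, then a recommender that greedily maximizes watch time will ratchet a user toward ever more extreme content, even though every individual recommendation stays close to their current taste. Everything below is an illustrative assumption (the one-dimensional "extremity" scale, the engagement model, all the numbers); it is not a description of YouTube's actual system.

```python
import random

random.seed(0)

def watch_time(extremity):
    # Assumed engagement model: more extreme content holds
    # attention a bit longer, with random noise per viewing.
    return extremity + random.gauss(0, 0.1)

def recommend(history, catalog, step=0.1):
    """Greedy watch-time maximizer: among items close to the user's
    current taste, pick the one with the highest predicted watch time."""
    current = history[-1]
    candidates = [x for x in catalog if abs(x - current) <= step]
    return max(candidates, key=watch_time)

# Catalog of content scored by extremity, from 0.0 (mild) to 1.0 (extreme).
catalog = [i / 100 for i in range(101)]

history = [0.2]  # the user starts out mildly interested
for _ in range(50):
    history.append(recommend(history, catalog))

print(f"start={history[0]:.2f} end={history[-1]:.2f}")
```

Note that nothing in the objective mentions extremity at all; the escalation is an emergent property of optimizing watch time one step at a time, which is the point the speaker was making.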
My comment: what does it say about us humans that we fall for progressively more extreme stuff? Perhaps that few people actually hold balanced views, and that nowadays compromise is looked down upon as a mark of weak character?
As has been pointed out, in lots of areas a "balanced" view doesn't make sense. I'm reminded of the judgement of Solomon dictating that, to settle a dispute between two women as to whom a baby belonged, it should be cut in half and they be given half each. For a lot of social issues, being given the midpoint solution is like being given half a baby.
"All humans are created equal and endowed with rights" is a radical proposition which we are still working out all the implications of after several hundred years; any deviation from that (ie "some humans are intrinsically (not as a result of their actions) inferior") starts you down the road of partitioning society and justifying worse and worse treatment of those deemed inferior.
> For a lot of social issues, being given the midpoint solution is like being given half a baby.
I think this cuts to the heart of the issue - and demonstrates the problem with this line of thinking.
I also think you're missing the point of the Solomon story. When Solomon proposed that solution, one woman said "No, no - give her the baby." That woman was willing to compromise. She was willing to compromise all the way, to lose the entire dispute, to preserve the important thing they were fighting over.
Solomon didn't actually plan to cut the baby in half. On many social issues, there is a negative-sum, "burn it all down" solution that leaves both sides worse. Which side reaches for that option? Which side is willing to sacrifice the deeper issue in order to "win"? That's the side that deserves to lose.
In the Solomon story one woman was 100% right and one was 100% wrong. Are there many social issues where the two sides meet this criterion?
I think a better visualization of the problem is "you can't cross a canyon in less than a single bound". Sometimes there is no meeting half way.
Last but not least, compromising is sometimes not an option. It's like the paradox of tolerance, where the only thing you have to be intolerant of is intolerance, because allowing it basically kills your options. It's like allowing bacteria to kill you because you are pro-life and not willing to kill them.
In the case of free speech, anything that would lead to someone else losing their freedom for the same should not be allowed just to have "a compromise". Hate speech is not free speech any more than contract killing is just a financial transaction. And just like contract killing, hate speech should be punished the same as the crime it instigated. You instigate violence, you get put in the same jail cell as the people who threw the punch.
Groups of people tend to label the speech of their opponents as "hate speech." It seems like more and more speech today is being labeled as "violent" speech.
For any mass murder you can find people blaming some ideology or another as being the root cause.
It's a dangerous path to follow when a party in power wholly believes in jailing people for "hate speech" as they will be silencing and jailing their ideological opponents.
> For any mass murder you can find people blaming some ideology or another as being the root cause
Yes, there nearly always is an ideological "cause" or justification for mass murder. That's why everyone's so concerned about it in the first place: it's both a warning sign and a facilitator of genocide.
That must be a real conundrum in countries that have laws against it. Presumably clearly defined. I think you're onto some paradox here...
Hate speech is not about hurt feelings. But if that's what you take from it, with almost the entirety of human knowledge a simple internet search away, I'm sure no comment here can provide you relief.
Telling me my shoes don't match my belt is not hate speech. But instigating people against a whole class (race, religion, ethnicity, etc.), usually by suggesting violence or other harmful methods, is. And it ends up with people hurt, lives and whole communities destroyed. You'd understand a lot better if you were on the receiving end of it. Things are very easy to overlook when they never hit close to home.
> Hate speech is not free speech any more than contract killing is just a financial transaction.
Personal anecdote.
When I moved to the UK in the 1970s, there was a little platform in Hyde Park called Speaker's Corner. Anyone could step onto it and make a speech about anything, anything at all; they could do so in the strongest terms, without being bothered; there was always a small audience standing around, and often lively debate would result.
Most of what was said on that platform would now be characterized as "hate speech" by people of your ilk. And the times were not exactly peaceful; there were IRA bombs going off in England at least once a week. Yet the authorities made a point of not interfering, because memories of WWII were still recent, and the right to free speech was considered sacrosanct.
Unfortunately you set off a flamewar with that personal swipe. Please don't do that. Your comment would have been great without that bit, and there would have been no need for the nasty tit-for-tat of the below.
Ah. The halcyon days of free speech when certain elected MPs were banned from speaking on television; the Zircon affair, the prosecution over Spycatcher, the McLibel trial, and so on?
Perhaps you were free to be racist, but certain kinds of speech affecting the power structure were definitely not free.
I think the conclusion can be drawn even without the rest of the comment. There will always be some bigot dying a little inside, seeing that the world is evolving and they're being left behind.
Spewing out some hate for a couple of other bigots is not that much of a big deal. But today's "Speaker's Corner" has an audience of (tens of) millions. And the bigot doesn't stand up in front of anyone; they do it from afar, in anonymous comments, in podcasts, from behind a mask. If you can't understand how much of a difference that makes, then maybe you belong to that age and not this one.
And there's one thing your ilk doesn't understand: you're free to make that hate speech. You shouldn't be free of the consequences. Which is why I said it should be punished, not censored. If it's censored then people might mistakenly take those bigots for worthwhile human beings :).
You'd understand the power of hate speech better if enough voices called for violence against bigoted sexagenarians or something like that.
Oh, one other popular thing in the '70s was the KKK, which you obviously fully support, because it happened so it must be right.
I would conclude that you don't know how to argue, and that's why you prefer to take away your opponents' right to express themselves.
On the flimsiest of evidence, you label me with a number of repugnant attributes (bigot, sexagenarian, left behind, stupid, hateful, KKK), and then, having established guilt by association, you politely suggest I shut up.
I could demonstrate to you my political positions are actually further left than yours, but I won't bother, because in a discussion, what counts is the strength of the arguments themselves, not the identities and group affiliations of the speakers.
My identity, my convictions, are irrelevant. My arguments are what matters. And you haven't engaged with the central argument: that democracy cannot function without unrestricted free speech.
> I would conclude that you don't know how to argue, and that's why you prefer to take away your opponents' right to express themselves
And I would conclude that you believe actions have no consequences when it comes to whatever you like to do; that being free to say or do something means being free to get away with whatever comes of it. The political position is relevant; this is a combination of character traits and education, or the lack thereof. You all google "free speech" and only bother to read the first line, just like the other hicks who think democracy is the right to do whatever the heck they want. You're free to call for violence against others, just as you're free to be punished for the consequences of that call.
Well guess what, freedom of speech also considers the harm principle. This is further down the page, too much for most of those people to absorb. [1] "Liberty consists in the freedom to do everything which injures no one else".
Speech is like a weapon. Weapons aren't prohibited, and you're free to use them any way you see fit, but you have to bear the consequences of misuse.
But the further you go towards the extremes (left or right), the more you find that people have a harder and harder time understanding this concept of consequences. It's in perfect correlation with the drop in education and usefulness to society. Low education, low income, spineless, always blaming someone else, and running from the consequences of their own actions.
You're here to "negotiate" something that is already law and principles already accepted by many for centuries. To give a nice shine to something every person with the least bit of decency and education knows is wrong. You even have a hard time understanding the difference between not having your actions censored and not suffering the consequences for those actions. You have no arguments, you have a keyboard and an internet connection, and you don't even take full and productive use of those.
> Speech is like a weapon. Weapons aren't prohibited, and you're free to use them any way you see fit, but you have to bear the consequences of misuse.
Agreed. And there are laws on the books for this; there always have been. The more recent concept of "hate speech" (to return to your original post) is entirely redundant, unless its intended use is censorship.
That's pretty simple: hate speech is the speech you think you're free to make and get away with, but actually shouldn't.
If the fact that something is "a recent concept" made it wrong we wouldn't really have concepts, would we? Hate speech and hate crimes are pretty clearly defined.
See what I was saying about that good use of the internet? You'd rather be here trying to prove me wrong than spend the time by yourself finding out why you're wrong.
You're confusing speech and actions. Contract killing is bad not because of speech (after all, the contracting can be done in complete silence), but because of someone actually doing the killing.
Another problem, a much bigger one, is most of what people claim to be "hate speech" these days simply isn't. Post certain factual stats on race or Islam on HN, and you will get banned for hate speech. It's hurt feelings, and they often clash with reality.
I think he got the point just fine. She was willing to compromise because half a baby is a dead baby. She would compromise on whether or not the baby was in her possession, but she could not compromise on the state of the baby.
It is better to lose the whole war than to lose what you value the most. Losing could mean many things, not just loss of possession.
Can you provide a specific example(s) of a side sacrificing a deeper issue in order to win, in recent history? I'm having trouble coming up with examples.
I can think of many cases where "the other side" refused to compromise, creating a negative-sum game. But let me cite an example where "my side" was in the wrong.
I'm a liberal-ish person, and I think most states could use more tax revenue. I'm also concerned about climate change. This is a pretty mainstream set of positions.
In 2016 there was a ballot initiative proposed in Washington state that would levy a tax on carbon emissions, and offset it with cuts to other taxes. The result would be revenue-neutral (or slightly revenue-negative). It was opposed by the usual suspects (energy companies), but also some environmental groups and the state Democratic party.
Part of the liberal argument was that the initiative wasn't perfect. Given a new source of revenue, liberals saw many needs that could be met, instead of tax cuts. But that wasn't the deal on the table. The initiative failed, and we got the "dead baby" of no carbon reductions, no spending on other needs, and a less efficient tax system.
Not to get too 2016 about it but a lot of Republicans would've told you in 2015 that their top 3 beliefs were family values, cutting gov't spending and free trade.
That doesn't make any sense as the comparison I'm trying to find between real-life politics and the above commenter's analogy about sacrificing a baby just to "win".
We have a couple recent examples. If you think that the freedom of speech on the internet is important and foundational, then FOSTA appears to be us trading a deeper issue for a shallower one. Leaving aside the fact that it's not even likely to fix the problem it's trying to solve.
A similar example: the recent court cases by state AGs to ban certain types of CAD files from being published.
To be clear, the point of Solomon's judgement was not that cutting a baby in half is a good solution: it was just a trick to test the women. One of them accepted it, the other gave up their claim so that the baby wouldn't be harmed. Solomon gave the child to the person who gave up, since they put the child's welfare above their own.
I don’t think balanced means “in the middle.” Someone can espouse a solution to a problem that is extremely liberal or extremely conservative and still have a balanced viewpoint.
I consider someone to have a balanced point of view if they can recognize that all solutions have tradeoffs, frankly discuss the tradeoffs of different solutions, and argue why their proposed solution is the most attractive option. I think it is also important to recognize how sure we are about the effects of different things. Some things we are very sure about. Others (like predicting market reactions to government policy), less so. The world is complex and any argument that isn’t complex probably isn’t balanced.
As an example, if I ask someone what their least favorite part of their favorite programming language is and they say, “nothing, it’s perfect,” I can safely assume they are a partisan idiot. One can believe an extreme programming language like Elm is the best solution available and still have a balanced view of it.
Agreed. To put it in concrete terms, someone claiming to have a truly balanced view of a subject should be able to pass an ideological Turing test for any of the major positions on that subject.
> "All humans are created equal and endowed with rights"
All humans aren't created equal, not even close. A 4 foot tall person without arms will never play basketball as well as a healthy 6 foot tall person.
Even if you meant to say "all humans should be treated equally", it's still a ridiculous proposition, because people are different, and thus should be treated differently. Will you let a person with a history of child abuse babysit your children? I know I wouldn't, no matter how much they tell me they had changed.
Even if your first sentence were true, I don't think it would explain why content with extreme views dominates. Perhaps because it causes more emotion, and the algorithm senses that emotion is what people crave.
As a counterpoint to yours, Aristotle was famous for his golden mean principle: that in the vast majority of matters (notable exceptions: murder, theft, adultery) you can only achieve happiness if you do and experience everything in moderation. Don't ignore your desires, but also don't submit to them completely.
Yes. They're ok. You decide what belief system to subscribe to, and the highest authority within your belief system decides right and wrong (absolute rights and wrongs being decided by absolute authority).
While you're bored and comfortable, others are in horrific pain and suffering.
It seems you prefer comfort and well-being over doing what's right. In essence, if you (not society) believe doing something is right, you should at the very least not be vocally against others who are doing it.
Those aren't progressive positions at all, those are internet bred radical identity politics issues that have nothing to do with progressivism, and stem out of liberalism which is meant to further divide the country. Most people on the true left wouldn't identify with those positions.
Edit: People downvoting need to look at a political spectrum of the left and research the viewpoints of progressive, democratic socialist, and socialist individuals in history. They would all denounce these radical identity politics positions as liberalism that don't end up helping anyone in any way -- especially joining together the working class. This only splits the working class. Identity politics is liberalism and liberalism is very close to centrism, not left politics.
>On Thursday, March 14, 78 “non-black” leaders traveled to Atlanta from 17 states for a planning meeting of the Poor People’s Campaign Steering Committee. They came from Native American, Mexican American, Puerto Rican, and white communities, representing 53 organizations. “It has been one of my dreams that we would come together,” King told the gathering. Their “declaration of unanimous support” for the Poor People’s Campaign, writes Stewart Burns, was “a breakthrough of major proportions, the high point of the entire campaign.”
This is an example of what the article is discussing. Essentially, content on open platforms (this thread) tends to link to more extremist content (the thread you're pointing out). Then that content links to still more extremist content, and so on.
So your comment is a good "for instance" of why YouTube does what it does as an algorithm. In essence, YouTube's algorithm does the exact same thing that we, as humans, do in any given discussion. We just don't get to the extremist content as fast as YouTube's algorithms get us there, so we don't notice that tendency in our face-to-face discussions.
In a very real way, YouTube is trying to be more human. With all of the negative implications that entails.
Interesting ideas. I don't know that "extremist" is the right word though. It seems that it's just further investigation into any particular topic (for instance I replied to a direct question asking for an opinion). "Extreme" is a very subjective term and when talking about sensitive topics, it would be easy to label any further research as such.
Elon Musk and Nick Bostrom were mocked for warning about the dangers of powerful future AI. I've said this here before, the primary use cases of machine learning right now, today, are hostile. That does not bode well for our future. We have very manipulative advertising, mass surveillance used to put ethnic and religious minorities in concentration camps in China, and in YouTube's case, recommending a lot of totally junk science to people who just want to learn new things. I'm afraid these four things probably impact way more people than translation systems and self driving vehicles right now.
I am completely in favor of the internet as an uncensored communications platform. However, in a war of ideas, there is a giant chasm between making a propaganda video of any sort available to watch in an archive and showing the stickiest propaganda video to those who are most likely to be retained watching it.
When I was at NIPS last year, the only person I heard get really angry about something, from an ethical standpoint, was relating to deep fakes. If you are a researcher or a project developer, you need to think really hard about what you are building, because soon you won't be the one building it.
edit: I don't think YouTube should solely be singled out for bad recommendations. Netflix has a lot of junk science on their platform and my assumption is their algorithms are bumping it to the top. In their case they have to proactively go out and pay for licensing for those videos, which is really inexcusable.
> I've said this here before, the primary use cases of machine learning right now, today, are hostile
Yes. As I think that it is quite obvious the majority of current AI is only superficially designed to help me, while fundamentally they are designed quite successfully to f#ck me and get my money, I have occasionally tried to figure out ways to somehow level the playing field against the current crop of AI, but it is difficult. One of the only tools actually having any effect is adblocking. And if I engage in that, I am blamed that I strip content creators of their income. I guess the fundamental problem is that nobody has any economic incentive to fight against the current AI army, while a large bunch of people seem to actually enjoy getting manipulated (to buy unnecessary stuff or get radicalized to some fringe ideology) by our machine overlords.
There has always been junk info floating around. What truly scares me are people like The New York Times who want to censor anyone they disagree with, to the point of wanting YouTube to only show political and scientific videos that they personally agree with. Sometimes on YouTube I watch junk science because I'm bored. I know it's bullshit. What I like about the platform is that it represents a diverse range of ideas and I can watch anything I choose to.
Over-unity or other "zero point" energy videos are fun to watch even if you think they are bullshit; same for a lot of the UFO videos. I thought the flat earth videos might be similarly entertaining, but I have yet to find one that is. I used to read Rense.com for a giggle.
If someone doesn't have the cognitive abilities to consider the veracity of a video that's on them. Let me watch what I want.
To paraphrase one of the better soundbites in the show: YouTube is like that friend in high school who always outdoes you no matter what you do. You're vegetarian, she's vegan; you're moderate left or right, she's at the more extreme end of the political spectrum; etc.
Out of curiosity, why are you writing they're ruining podcasts? (FWIW, I wasn't aware of them until today, when I searched that podcast's permalink on their site.)
This is my first time hearing about art19. But after a quick visit to their site, I can guess at what GP was referring to. Looks like they do listener tracking, targeted ads, dynamic ad insertion, etc.
More and more I find people who have done what I do: I don't watch TV "news" anymore and ignore almost every "news" article or posting online. Unless the headline is a topic I'm specifically interested in and think affects me, I don't give it the time.
I’ve gotten to that point too. Even before things got so much worse in the last two years I had already mostly tuned out of news because it stopped being journalism. I would 100% tune out near the end of election cycles.
So much of it is just straight reporting. “X said this, Y said that.” No analysis, no critical thought as to whether any of those points are worth presenting at all. Just clips of people’s claims, totally unchecked.
If I wanted to know what a candidate or CEO said without any analysis, I could just read their press release.
Seems everyone is tuning out but for different reasons.
I still read a lot of news, but I don't like the dish I'm being served, exactly because it is not just straight reporting but rather is a non-stop stream of journalists' personal political opinions masquerading as analysis and news. I actually would prefer to just read their press releases unfiltered, if there was an organisation that aggregated the ones I'd happen to be interested in, in a trustworthy way.
It's all tabloid news now: calf born with two heads, Grandma abducted by aliens, tells all! It's a consequence of the death of journalism, in my opinion. We're all worse off.
It seems the "news", in general, is a buffer overflow of malicious information that serves to override our logical thought processes with negative emotions and irrational tendencies.
I advise one tweak to that — find a good local news source. That may be state or city focused depending on where you live. A lot of things happen in those arenas that do affect you, whether you know or it not.
> what does it say about us humans that we fall for progressively more extreme stuff? Perhaps that few people actually hold balanced views, and that nowadays compromise is looked down upon as a mark of weak character?
I was thinking about this recently. I doubt that it’s primarily an inability to compromise (though that exists when the politics are tied to a tribal sense of identity), but more that each of our personal Overton windows is defined by about as many binary bits as we have slots in our working memory — if you can write an article with the title “Five Times My Side Was Epic” or “Five Times The Other Side Was Evil” then you can be blind to the mistakes of your own and the goodness of the others.
Might even be less than working memory, because we sometimes double-count.
I doubt that the general mass of people with "extreme" political beliefs hold them out of thinking that "their side" is better. It is that some positions are so fundamentally indefensible to them that there is no middle ground, no compromise possible, since the positions are contradictory and irreconcilable. And there are some of those fundamental issues. Basic human rights are not up for discussion or compromise for a lot of people.
What are you willing to compromise when it comes to the right to exist? Only some get killed?
It's of course hyperbole, but I think that's the mechanic that leads to ever-increasing factions that are completely opposed, with extreme positions. You always had issues you were not willing to compromise on, but you rarely came into contact with people holding the completely opposing position. Of course there was the town whack job, but he was just that: some guy. The point of reference was representatives of "the other side", politicians and so on, who (ideally, of course) had to press for a position that was the middle ground of their constituency, which was generally not the most extreme one, so as not to scare away more moderate voters. Real conflict here came from fundamentally opposing that consensus of a large group.
With the death of political parties and movements you no longer have that representation of the found consensus of the other side. Everyone can get a platform on the internet, and the most extreme positions get the biggest coverage. If you hear often enough about extremist whackjobs in some group, you will sooner or later look for a group opposing those whackjobs.
Could I summarise that with this alternative phrasing?:
Instead of “five ways my team is awesome and five ways your team sucks”, you’re saying the average person is more like “Five ways your team sucks and no that guy is not my team true Scotsman true Scotsman”?
I think the answer lies almost entirely with echo chambers. These exist not only in the digital world, but increasingly in the real world as well, as people connect to people through digital filtering.
Vegetarianism is a good example. Many, though not all, vegetarians go in that direction due to some form of objection to how animals are treated. But when you surround them with nothing but other people who hold views at least as far from the mean, their own views are suddenly going to seem quite trite. And this pushes people ever further down one path or another.
By contrast when you have a diverse array of views, the real position of stances can be clearly seen. That is to say for one that agrees that eating meat implies inherent mistreatment of animals, vegetarians are already taking a more active stance than ~97% of Americans. The same would be approximately true for those that e.g. go for fish + veggie diets. But in a world where those 3% are surrounded by only those 3%, vegetarians are extremely 'conservative' in their views.
The whole thing is people need to surround themselves with a good number of people they disagree with on things. But this is not an easy thing to do, and so most people do not. In fact, even things like voting systems on these social media sites discourage such behavior. Many people do not downvote as a matter of something being inappropriate; they downvote because of disagreement. And that pushes views that go against the plurality's grain further and further out of sight, to the point that people no longer even think they exist. And of course on another site the opposite bias is at work, with that view being pushed to the top and everything else removed out of groupthink. It's a very dangerous problem for a democratic nation.
"The whole thing is people need to surround themselves with a good number of people they disagree with on things."
I agree with this sentiment. I think, however, a more fundamental problem lies with peoples' attitude toward their own ideas and towards the ideas of others. People have an apparently irresistible tendency to moralize; to consider their ideas as "good" and contrary ones as "bad". Once the moralizing starts, reasonable discussion becomes difficult if not impossible. Disagreements quickly devolve into the two sides calling each other evil.
Given this I would rephrase your statement as, people need to surround themselves with those who hold their opinions more lightly and who avoid moralizing. When people acknowledge that their opinions are simply their personal preferences rather than moral truths, genuine discussion becomes possible. In your example of vegetarianism, consider the different response one might get if one says, "I prefer to avoid eating meat" versus "Eating meat is wrong". The first opens up the possibility for mutual understanding, the second is sure to launch a flamewar.
people need to surround themselves with those who hold their opinions more lightly and who avoid moralizing. When people acknowledge that their opinions are simply their personal preferences...
Well, that sure sounds like 'moralizing' to me, and I don't think you realize it. If you don't care much about anything, well ok. But that's no virtue. It sounds close to the relativism that talks of "true for me" and "true for you".
For example, I think we have no right to kill and eat animals, much less subject them to horrific lives. (So I'm vegan) This to me isn't a 'personal preference' or 'opinion'. It is a major moral and ethical issue, much like, say, slavery was/is. People disgusted by slavery didn't say it was 'simply their personal preference' not to own slaves, that it was merely their 'opinion' that slavery was wrong. That would have been ludicrous. We'd still have slavery if people had talked like that.
And no doubt you had pro-slavery people in the 19th C giving the same lofty advice: "People need to surround themselves with those who hold their opinions more lightly and who avoid moralizing. When people acknowledge that their opinions are simply their personal preferences rather than moral truths, genuine discussion becomes possible..."
I think a good litmus test for an argument is whether you approve of the things that such a view can be logically extrapolated to imply. And here you must certainly realize that your argument can be used, and indeed is used, to justify the most heinous of ideologies. There are millions of people, and nations, that still believe as a moral absolute that a person ought to be killed for behaviors such as apostasy, homosexuality, blasphemy, atheism, sorcery, adultery, and so on. And of course these individuals cannot be logically moved from their position, since they simply double down on their moral self-assuredness. Ultimately, moral absolutes are best left to religion, and religion is best left to mythology.
---
Even for things like slavery, most people look at both past and present through a revisionist lens. This [1] article from WaPo states that some 30 million people today are enslaved -- more than at any other time in history. It's certainly sensationalized, but only in magnitude, not in reality. The more important takeaway from the article is where slavery is most prominent today: in countries that failed to evolve economically and technologically.
Something often ignored in favor of softer factors for the ending of slavery is this [2]. That extensively sourced article goes into the price of a slave in the US in the mid-19th century. Relative to how much each individual earned per year, a slave would cost more than $200,000 in modern-day dollars -- a single slave cost as much as a house did. And that's before you get into housing, feeding, and ensuring the good health of your slave. You'll find abolitionists moralistically arguing for literally thousands of years, and even 'Lincolns' dating back to as early as 7 AD (China, in that case).
The thing you won't find is slavery persisting in areas where it became uneconomical. More serendipitously still, this was happening at the very time the industrial revolution was driving urbanization, creating a massive pool of low-wage labor. On top of all this, rising productivity also meant that paying somebody a wage was no longer as burdensome a cost to bear. These factors are arguably what genuinely brought slavery in the United States to an effective end. And similarly, the reason it continues to 'thrive' in other areas today is not for lack of moralistic arguments, or even laws against it, but simply because the economic factors are, for various reasons, not yet in play.
> I think a good litmus test for an argument is whether you approve of things that such a view can be logically extrapolated to imply.
such a view - What 'view' are you taking me to hold? You can with some precision tell what is a 'logical extrapolation' of it or not? I'm not sure what you mean here. (Except I suspect you mean something like Kant's categorical imperative.) Sorry. I'm unable to see what view of mine you've extracted and then extrapolated.
> And here you must certainly realize that your argument can be used, and indeed is used, to justify the most heinous of ideologies.
Which argument? Saying "you must certainly realize that your argument..." seems like just a condescending bluff. No... well, I'm not sure; I don't know what you are calling 'my argument'. Maybe if you filled in the dots here a bit; I don't know what you mean. (You know nothing about me. I'm a huge fan of Sam Harris' book The Moral Landscape. I say that because, going by what you're saying about my 'view'/argument, you've put me in the opposite camp to that.)
> Ultimately,
It's fascinating what that word's doing (I'm not sure exactly what), sandwiched as it is between 'moral self-assuredness' and 'moral absolute'...
> most people look through a revisionist lens of times, both past and present
I'm not sure what you mean by that.
> The thing you won't find is slavery persisting in areas where it became uneconomical.
Is that at all surprising?
Well, I don't remember any abolitionists in ancient Greece, Rome, etc. It was just the natural order of things, an accepted part of life, necessary to civilization.
What I am referring to is your suggestion that views based on moral perspective, rather than rational logic, ought, at least in some cases, to be usable as imperatives in and of themselves. For instance, do we have a rule that people ought not steal because it's immoral to take that which is the property of another without their agreement, or because if theft were legal then we'd see substantial harm to society? By contrast, we do have some laws based almost exclusively on morals, which are generally based on religion, such as prohibitions against prostitution. In my opinion, the ideal legal system is one in which laws are cast around a lens of ensuring maximal freedom for all of society, with restrictions only where the logical consequence of the prohibited action would lead to 'reasonably clear' harm to society.
There indeed were abolitionists even as far back as Ancient Greece. I've no doubt the view even predates written language. Check out this writing from Aristotle [1]: "Others affirm that the rule of a master over slaves is contrary to nature, and that the distinction between slave and freeman exists by law only, and not by nature; and being an interference with nature is therefore unjust." It's even interesting that he also rather directly alluded to the exact logic I laid out above: "..if, in like manner, the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves.." Reading the classics, it's always surprising that such wisdom and clarity could be had by a people thousands of years in the past -- at times seemingly more so than even modern man. Perhaps division and diversion are not conducive to the development of sound reasoning abilities.
Hi. It's not easy to talk about this stuff! Ok, you've retracted the "here you must certainly realize that your argument can be used" talk. I was mostly distracted by your loose use of terms, as if they don't have to be used carefully.
> For instance, do we have a rule that people ought not steal because it's immoral to take that which is the property of another without their agreement, or because if theft were legal then we'd see substantial harm to society?

That's an instance of my suggestion? Are you asking that, or...
> your suggestion that views based more on moral perspective, rather than rational logic, ought be able to, at least in some cases, be used as imperatives in and of themselves.

I don't think I suggested that; but that's how it sounds to you, ok. I'm not sure what "based more on moral perspective" means exactly.
Talking about your first paragraph: "rational logic" seems a very strange term to use in this (or any) context. Reason, I guess you mean. Oh, so for you, ethics are based either on reason, or some mystical/religious thing. And I'm in the second camp? Your "In my opinion" sentence sounds exactly like Mill's On Liberty. You're talking purely about law, where the discussion was about ethics, I thought.
My motivation to write in this thread initially was objecting to the suggestion that it's better to say "I prefer to avoid eating meat" versus "Eating meat is wrong". It sounded like 'tone policing' - it's easy to insist people remain calm when you're fine with the way things are. And that "preferring" in its vague mildness, like preferring one restaurant over another, scarcely captures how I feel about such things. It would sound bizarre to say that you prefer not to be murdered, to be tortured, to be a slave etc. Probably I misunderstood the guy I first responded to, then you misunderstood me etc.
It sounds like we are in agreement, as far as I can tell. I don't believe in the kind of 'moral absolute' people get from their holy book/religion. However, plain utilitarianism doesn't fit our moral intuitions either, etc. etc. I'm vegan because I don't want to be killed, abused, eaten, and I figure animals don't either. And I can very easily avoid that, so I do. But I don't just prefer that; I think everyone should. It's only from habit/custom that the reality of the situation hasn't dawned on most people yet. Well, the 'consensus' today seems to be that dogs and cats deserve ethical consideration, but not cows, pigs, sheep, fish, etc. There's nothing reasonable about that. It's all-too-human to think only white people matter, when you're white, or that only humans matter, when you're human. Like The Moral Landscape says, the origin of most moral/religious/ethical feelings, customs, etc. is that they produce better outcomes -- a better world -- for people than their contrary. And whether they actually do that can be determined a lot more often than people think; some beliefs and customs can be said to be objectively bad. Thanks.
I'm going to try to rephrase what I said above with a bit more context, as I agree with you that we're having a failure of communication somewhere.
The distinction and issue I am considering is between those imperatives we ought to encode into law, as opposed to personal beliefs or opinions that ought to remain something executed only at an individual level. What I've proposed is that unless an act can be shown to be unambiguously and directly harmful to a society, it ought to be considered an opinion, rather than an imperative or something that is wrong enough to justify its prohibition by law. So again, using the same example: would the existence of legal theft cause a clear and imminent harm to society? There would be nobody, excepting perhaps the insane, who would argue that it wouldn't. What about if society ate meat? There's no clear argument against it without major appeals to moral views, which should relegate it to the realm of an opinion.
To go in the opposite direction: if we accept the idea of creating imperatives or laws based on moral views, it can rapidly lead to endorsing very absurd notions. For instance, the countries and individuals that believe people should be executed for being atheist, or homosexual, or engaging in perceived blasphemy against their god or gods do so while appealing to moral views. Such actions cannot be argued to be causing direct harm to society, but they can be argued to be harming the 'moral fiber' of a society from some group's subjective perspective. These are the sort of views that ought to remain subjective opinions, instead of legal and enforceable imperatives.
> What about if society ate meat? There's no clear argument against it
So I think you're saying things like this because you're only considering humans (and I guess cats and dogs). The whole thing about 'ethical vegetarians', or whatever you want to call them, is that they are people who don't just consider humans and nothing else when thinking about who is harmed (or rather, not just humans, cats and dogs). It seems entirely arbitrary to draw that line at the exact point between humans and everything else. At least, it's not easy to think of a reason to do this, except "we are humans" or "I was brought up doing it this way", which aren't very good reasons. Whatever particular quality you choose, the line isn't there. Intelligence? Some animals are smarter than some humans, etc. For me and a lot of people, being able to feel pain and suffering is a good line to draw.

Anyway, I'm no expert; I haven't read or thought about this stuff for decades. There's a large literature on the subject, which it sounds like you don't know about, and you shouldn't be saying things like "There's no clear argument against it" when you have no idea whether there is or not. (Christians think there's no clear argument against their god or their ridiculous mythology; well, it's clear enough if you aren't already a believer.) But from the evidence, it seems reasonable that animals deserve ethical consideration, not just treatment like, well, like slaves, like pieces of 'meat' waiting to be exploited by humans. There's no clear argument they don't, I'd say. Although "there's no clear argument" is an unfortunate phrase, being so subjective.
It's why I think slavery is a useful comparison - they're not just ethical questions, but questions about who is to be even included in the ethical circle, who is worthy of consideration, respect, decent treatment. Before even considering whether they're getting it.
Also, I don't think I've heard the term 'moral views' used like that before; it's a shame it's such a bad phrase for you. Mind you, I'm in Australia; the country isn't full of religious people running around proclaiming their insane god-given morality, unlike some, e.g. the USA.
Again, I'm not sure you're seeing the distinction between rational law and moral law. You are arguing that people ought not be carnivores or omnivores because of moral views. The people that argue for things like execution for atheists or homosexuals do so from the exact same perspective. By contrast, rational laws can be derived with absolutely no notion of or appeal to morality whatsoever. And once again, if you start to accept morality as the basis for an imperative, you're suddenly creating the very sort of arbitrary line you claim to be protesting here: you want the morality of your worldview to be seen as an imperative, but not the morality of another individual's worldview.
As a simple test here, you can envision a thought experiment. Take two of the most extremely amoral individuals who disagree on everything subjective except their own desire for self-preservation and growth. And of course we have to assume they'll always tell the truth when answering these questions, and also know the other person is telling the truth. Any law you can get these individuals to agree upon is likely to be a rational law. And you'd find they would agree not to kill each other, not to steal from one another, and so forth and so on. The things they would disagree upon would be moral laws -- eating meat, belief in a god, sexual behavior, usury, relationship views, etc.
The metaphor of me not being able to see something that you can see clearly isn't a helpful one.
You are arguing that people ought not be carnivores or omnivores because of moral views.
I've tried to understand what you mean by the term moral views, which I've never come across before. I've tried to explain the reasons behind my thinking that people shouldn't eat animals. Given the reality and the science, it seems the most reasonable position to me. You just seem to ignore all that each time and reiterate exactly the same claim, that I have no reasons at all, that it's identical to people arguing for execution for homosexuals with no good reason at all except god says so. I don't at all buy the way you talk or think about this stuff. I don't think there's much point continuing. Thank you very much for being so civil and patient, I appreciate it! The last time I tried understanding someone's unusual views about ethics/morality on here, the guy ended it with "You sound like a retard." haha.
Morals are concerned with the subjective goodness or badness of an action. Rationality is not concerned with the nature of an action, but only with its ultimate consequences.
I've never said you have no reasons. I've said that your reasons are moralistic in nature. In other words, arguing that stealing is unlawful because stealing is bad, as opposed to arguing that stealing is unlawful because of the predictable cause-and-effect sequence of events that would lead to chaos and likely the very literal destruction of modern society if it were legal. Arguing based on morals is a concept that inherently feels right, as one takes a stand for what they feel is just and right. Yet it's the very sort of behavior that stands to regress and undermine the very nature of modern free and liberal societies.
Aside from a moral basis for disagreement, a great number of people do not seem to be able to argue logically, or even civilly. That's nothing new, but I think the ease with which people can get a dose of outrage or confirmation is taking it to another level. If people are being encouraged or even subtly taught to become emotionally aroused faster, logical thinking and mutual understanding becomes harder to reach. It seems to me, more important than ever, that critical thinking needs to be a good part of the education curriculum.
> The whole thing is people need to surround themselves with a good number of people they disagree with on things. But this is not an easy thing to do, and so most people do not.
Exactly. Except that you don't need to surround yourself with people you disagree with, so much as with opinions and points of view that differ fundamentally from yours. And this constitutes the basic argument against the new censorship ostensibly targeting hate speech.
It may not be a universal result. It's an ML algorithm optimizing for engagement. If ten users watch mild entertainment for 30 minutes and one user watches conspiracy theories for five hours straight, the algorithm homes in on that case. I don't know if this is generally true or not, but I do know the algorithm regularly comes up with stuff I'm not interested in, so I get bored and do something else.
> My comment: what does it say about us humans that we fall for progressively more extreme stuff?
Simply that we're biological creatures. The response to a stimulus is proportional to the strength of the stimulus. Also see: supernormal stimuli.
There is no sufficient dampening factor towards the extremes because the stimulus/response mechanism evolved well before we were vertebrates. It wasn't designed for sentient beings able to manipulate their environment to such an extent that it becomes such a powerful feedback loop.
For those unfamiliar with the theory of supernormal stimuli, think about super hero comic book characters. In particular the female ones. They are simultaneously both incredibly attractive and would seem grotesque if you were to encounter a real human being with the same body proportions.
The fact is that our response to stimuli which are "larger than life" just keeps increasing. It's not that it stops or saturates, it's just that usually there are counteracting stimuli saying e.g. "this is deformity and thus unattractive". But the balance between these varies per person (and per moment, per instance, etc), and it basically becomes a screaming match between two opposing forces.
Sometimes the opposing stimulus is too weak, can't be cranked up to eleven as much, or is simply nonexistent. Especially with abstract stimuli based on social constructs, especially inside filter bubbles or echo chambers.
Sadly it is these unopposed stimuli that survive a screaming match, not the balanced ones. Which is exactly what we're seeing today.
Combine this with Baudrillard's theories on simulacra and the hyperreal, and you get our current Strange Times, heading towards a breaking point until either someone gets a fucking grip on things, or collapse, or possibly something even weirder.
I found an alternate explanation which hadn't occurred to me at the time.
It's not necessarily that most people don't like balanced views. It may just be that most people aren't excited by balanced views. Extreme views are more emotion-provoking, even if you disagree with them. But if the algorithm actually sends you nothing but extreme views, you may eventually start having doubts and become a new convert. The joke's on you.
It's like the pickup advice (I smell downvotes again) that when it comes to women, it's better to be outrageous than boring.
More extreme stuff gets viewed more. More views mean more ad revenue and a higher tier. That only incentivizes people to make even more extreme stuff.
When these were webpages with a handful of ads, it didn't matter that much. Now one of the largest media platforms on the planet is actively pushing this stuff (unintentionally) at everyone, so it's actually having an effect.
>what does it say about us humans that we fall for progressively more extreme stuff? Perhaps that few people actually hold balanced views and nowadays a compromise is looked down upon and a mark of a weak character?
It says we are the way that we are. The human brain is full of little quirks and cognitive biases that unavoidably shape our behavior, at least on the large scale. Anyone who builds a system or an organization or a movement ignores them at their peril. This happens to be one of them.
> She has a hypothesis it works this way because the goal is to make people spend as much time on a site as possible, and feeding people progressively more extreme content works for that.
While I understand the attractiveness of conspiratorial hypotheses, I think there is a simple explanation that applies to recommendation systems generally: such systems are designed, overtly, to identify content the user is likely to react positively to, based on what information they have about population-wide preferences.
When it knows nothing (or nearly so) about your personal preferences, the only thing it can do is recommend the most widely accepted content. As it learns more about your preferences, it has more basis to predict things that meet your preferences but are less widely supported in the population at large -- so, over time, it naturally recommends more content that is extreme.
Oh, it's far more terrifying that the natural result of being good at feeding people things you can determine they will respond well to is that you progressively feed them more extreme content -- because a tragedy of the commons is much scarier than mustache-twirling villains.
At the point when you recognize this and then choose to proceed apace, you become a “mustache twirling villain.” YouTube crossed that threshold a long, long time ago.
Because our systems are already optimized in a hundred different terrifying ways.
There is a review posted around here of the book 'Seeing Like a State' that talks about how the modern world is not optimized for human needs, but for the needs of the state to collect tax. It makes many good arguments for why this is the case.
Google didn't "expressly design" YouTube to manipulate people, come on, that's a wildly uncharitable view. Virtually all popular content hosting sites have some form of recommendation system - because people like them. Claiming it's all a conspiracy to manipulate people is the same school of thought that says all advertising is manipulation, all businesses are run by psychopaths, etc. It's fairytale stuff.
Please do watch the TED talk by Zeynep. She actually says something along these lines -- that it's not so much a conspiracy as probably an unfortunate side effect. What the system ultimately cares about is time on site, and people stay longer if they're fed more extreme content.
From my brief experience with neural networks, they work very well for some kinds of tasks, but you never know why they calibrate themselves the way they do. One of the uses of neural networks that impresses me the most is Keldon AI, the free (open source) app for the Race for the Galaxy board game. Traditional game AI is notably bad with "fuzzy logic" -- when you're unable to exhaustively calculate everything, and your opponent is just getting some cards. Well, it turns out machine learning doesn't care about that. It cares that if it has X cards in hand and you have Y cards in hand, and the table looks so and so, playing card Z has C chance of landing a win in the long run. Or something like that. And it works! But if you look inside the rftg app, you'll just see piles of numbers. Just weights. So when you play against it, you see what it played, but you're never sure why, and it can't explain itself.

Zeynep actually talks about this, but in a more sinister context. For instance, deep learning algorithms are eerily good at detecting bipolar people. They can take advantage of people with various mental disabilities and disorders. Think of the consequences. What if this is used to create your personal profile and share it across the shops you visit?
I don't really see why this is a conspiratorial hypothesis. The goal of YouTube is to keep you on YouTube watching videos with ads. That is what they are trying to do, and it happens to coincide with pushing extreme content as an unintended side effect.
Exactly. No one here, least of all Tufekci, is claiming that YouTube has deliberately engineered their recommendation system to favor extreme content. The argument is that it's an unintended consequence of how the YouTube recommender's objective function interacts with human psychology: (1) YouTube trains its recommenders to maximize time on site. (2) It's a sad fact about humans that sensationalistic or extreme content is highly engaging. (3) YouTube's recommendation system thus learns to serve up sensationalistic and extreme content.
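The three-step argument above can be made concrete with a toy example. Everything here is invented for illustration -- the catalog, the "extremeness" scores, and the predicted watch times are made-up numbers, not anything from YouTube's actual system:

```python
# If predicted watch time is the only objective, and extreme content
# happens to score higher on it, an engagement-maximizing recommender
# serves extreme content without anyone having programmed that in.

# Hypothetical catalog: (title, extremeness 0-1, predicted minutes watched)
catalog = [
    ("calm explainer",      0.1, 4.0),
    ("heated debate",       0.5, 7.0),
    ("outrage compilation", 0.9, 11.0),
]

def recommend(videos):
    """Pick purely by predicted watch time -- the stated objective."""
    return max(videos, key=lambda v: v[2])

title, extremeness, minutes = recommend(catalog)
print(title)  # -> outrage compilation
```

Note that `recommend` never looks at the extremeness column at all; the correlation between extremeness and engagement does the work.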
An algorithm that favors extreme content over non-extreme content because it keeps people hooked is indeed unlikely to have been programmed that way directly.
I actually don't think it's learning personal preferences like you suggest either.
It is likely a collaborative filtering method: you get served what people "similar to you" have watched most, where similarity is based on your watching behavior. If you watched 1, 2, 5, 6, 7 and someone else watched 1, 2, 6, 7, 8, you will be recommended 8 as well.
It would be interesting to come up with an algorithm that suggests favorites of subgroups of viewers you don't belong to.
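The overlap heuristic described above can be sketched in a few lines. This is purely illustrative -- the Jaccard similarity, the tiny watch histories, and the nearest-neighbor pick are assumptions for the sake of example, not YouTube's actual method:

```python
# A user's taste is just the set of video IDs they watched; we recommend
# whatever the most similar other user watched that this user hasn't.

def jaccard(a, b):
    """Overlap between two watch histories (sets of video IDs)."""
    return len(a & b) / len(a | b)

def recommend(you, others):
    """Videos watched by your nearest neighbor but not by you."""
    nearest = max(others, key=lambda other: jaccard(you, other))
    return nearest - you

you = {1, 2, 5, 6, 7}
others = [{1, 2, 6, 7, 8}, {3, 4, 9}]
print(recommend(you, others))  # -> {8}
```

Real systems use matrix factorization or learned embeddings rather than raw set overlap, but the structure of the recommendation is the same: you inherit the tail of your nearest neighbors' histories.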
It's not clear what exactly, technically, he is referring to, or whether your explanation fits. I suppose an algorithm that tended to cluster people together could be described as divisive.
You know, when I read the tweet and the discussion, I think: people worry about AI becoming a paperclip maximizer, but this is worse.
Our own society is becoming a paperclip maximizer! We want to spend more time watching interesting videos and getting more dopamine shots... And the company, and the people working for it, even indirectly, oblige.
We worry that there will be an AI where all of what it does is a consequence of a giant misunderstanding of human nature. But what if there is nothing to misunderstand about humans, and it's rather our own design flaws that cause our desires, when taken as an objective function to be ruthlessly maximized, to lead to a society that is, at best, extremely dull?
> It likely is a collaborative filtering method. You get served what people "similar to you" have watched most. Similarity is based on your watching behavior. If you watched 1, 2, 5, 6, 7 and someone else 1, 2, 6, 7, 8, you will be recommended 8 as well.
Yeah, similarity of “preference” is approximated, in any recommender system, by this or some similar analysis of behavior with respect to content.
That's essentially what she believes as well. The only intent she ascribes to Google is a desire to keep people on the site, which, of course, is the point of any recommendation engine.
> My comment: what does it say about us humans that we fall for progressively more extreme stuff?
This is the fundamental question. Every YouTuber quickly figures out that the more extreme the content you produce, the more views you get. This is how something as silly (but harmless) as the atheist YouTube community degenerates into idiots raging about feminists and denying 9/11. YouTube is just a market where the endless demand for extreme content can find ready and willing suppliers.
What it looks like is that YouTube is just a mass demonstration of the Fundamental Law of Group Polarization [1]: low-cost communities must enforce high-cost penalties. The easier it is to enter a community, the more fiercely the community must fight outsiders and punish internal deviancy -- otherwise the community will simply dissolve. This is the logical conclusion of Iannaccone [2].
YouTube communities, being based on videos that absolutely anybody can post and watch, are as low cost to enter as it gets. Given such bare-minimum entry costs, YouTube viewers rationally want "proof" that everybody else (the video creator, the paying supporters, and the supporters who just watch/subscribe/like) really is dedicated and that it's worth sticking around in the community. This proof comes in the form of producing and consuming a greater quantity of video hours and more extreme content.
The genius of YouTube is that it implicitly understands the "race from zero" dynamic at work, and every feature of the site is designed to exploit it. People get hung up on the algorithm, but the algorithm is a very small part of it. Everything about YouTube -- its anonymity, its system of subscribes and likes and the very prominent display of these numbers, the YouTube format which so often includes somebody staring directly into a camera trying to make direct eye contact with the viewer, the confessionary content, the RL social events -- invites users to prove that they are truly dedicated members of these low-cost communities. (Google really missed the boat with paid donations, btw. The company is so hung up on advertising they didn't realize that YouTube viewers would leap at the chance to just give straight cash to their favorite creators.)
In conclusion: (1) Google could turn off the recommendation algorithm tomorrow and it would at best slow down YouTube's extremism; (2) there's enormous profit to be made given the fundamental dynamics of the site; and (3) there's nothing unique about YouTube here -- we see the same dynamics at work in politics, religion, sports, and plenty of other free sites like reddit and Facebook. Once you have people participating in these low-cost communities, the communities will naturally polarize in order to preserve themselves. This is not going to change any time soon; really, it's the new norm.
Could you elaborate please on the atheist community going extreme?
Regarding the recommendation algorithm: it's utter bullcrap. The simplest and dumbest collaborative filtering for category and popularity system that they can get away with. Yet it works. YT is addictive.
> Could you elaborate please on the atheist community going extreme?
I read a paper on this some time ago... and now can't find it. There are many blog entries that discuss the evolution of the Skeptic community and the whole anti-SJW movement out there, but I can't find the actual researcher who was quantifying this using YouTube data.
We want to feel better than others, and through that shore up our insecurities.
Being more 'extreme' is the easiest and most obvious way to do that, even if being more extreme means virtue signalling in more ways, or being more of a victim, or being richer, or having more cars.
Just anecdotally, Youtube's recommendation system seems effectively self-referential. On virtually any subject I look at, recommended videos seem to cycle between just a few themes or even just a few specific videos. I think there's a logical reason for this.
I imagine for a given topic you wind up with:
[broad video] related to
[other broad but unrelated video] plus [extremist video].
And
[extremist video] related to
[other extremist video only]
Since YouTube's recommendation system is entirely naive, once you choose [extremist video], that is the video that gives the system the more specific clue, and thus [other extremist videos] will be what's recommended.
It's a function of naive recommendations as such. If a system knows that X likes two videos -- one that the entire population likes, one that only "metal heads" like -- what can it recommend? Metal is the only sensible thing. And if the system then shunts many people to metal and they seem to like it, metal will count even more as a logical recommendation.
Given my understanding of how the recommendation engine works, if you watch a video slightly related to a topic, then another video slightly related to that topic, there is a reinforcing effect whereby the engine believes you to be more interested in that topic. E.g. if I watch a video on vegetarianism, then another one, it may strengthen the confidence that I am interested in vegetarianism, and conclude that it should recommend me more content about vegetarianism and perhaps even content about veganism. Apply this to something like conservatism and you'll be getting recommended alt-right content in no time.
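That reinforcing loop can be sketched as a toy model. The topic ladder, the confidence update, the learning rate, and the threshold below are all invented for illustration -- nothing here is YouTube's documented behavior:

```python
# Invented topic ladder: once the engine is confident enough in a topic,
# it starts recommending the adjacent, more "intense" one.
ADJACENT = {"vegetarianism": "veganism"}

def update(confidence, watched_topic, rate=0.5):
    """Each watch moves confidence in that topic toward 1.0."""
    c = confidence.get(watched_topic, 0.0)
    confidence[watched_topic] = c + rate * (1.0 - c)

def recommend_topic(confidence, threshold=0.7):
    """Recommend the strongest topic; escalate once confidence is high."""
    topic = max(confidence, key=confidence.get)
    if confidence[topic] >= threshold and topic in ADJACENT:
        return ADJACENT[topic]
    return topic

conf = {}
update(conf, "vegetarianism")     # confidence now 0.5
print(recommend_topic(conf))      # -> vegetarianism
update(conf, "vegetarianism")     # confidence now 0.75, crosses the threshold
print(recommend_topic(conf))      # -> veganism
```

The point of the sketch is that escalation needs no malice: a monotone confidence update plus any adjacency between topics is enough to walk a viewer up the ladder.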
Slightly off topic:
I have a theory about this which is that there are certain recommendation loops or "attractors" that you basically cannot escape without manually flagging videos as "not interested", disliking, manually choosing an entirely different topic, or taking many days off of the platform.
I also am curious to know what percentage of content literally never gets recommended. I've been down the rabbit hole, so to speak, on certain youtube topics and I get the sense that at a certain point, there are many videos on the topic that youtube simply isn't recommending. E.g. if I'm watching a video about the Philosophy of Derrida, I feel like my recommendations never show lower view count videos related to the topic, they are much much more likely to recommend higher visibility channels full of slightly related content.
> I have a theory about this which is that there are certain recommendation loops or "attractors" that you basically cannot escape without manually flagging videos as "not interested", disliking, manually choosing an entirely different topic, or taking many days off of the platform.
I'm pretty sure this is true. However, there is one more effective way around this: you have to go back and delete your history. I have one YouTube identity I maintain for exactly one interest (call it "interest X"). I have to essentially scrub everything not related to that from its history, or more and more unrelated crap will reliably appear. Basically, even watching 10% of something means recommendations jump up to 50% of other things, even though I have a looooong history of just being interested in interest X.
> if I watch a video on vegetarianism, then another one, it may strengthen the confidence that I am interested in vegetarianism, and conclude that they should be recommending me more content about vegetarianism
Recommendation algorithms don't "conclude" anything though, they are merely statistical tools to ensure you get somewhat relevant content, but they are never perfect because there is a lot of noise in what everyone watches.
> Recommendation algorithms don't "conclude" anything though, they are merely statistical tools to ensure you get somewhat relevant content, but they are never perfect because there is a lot of noise in what everyone watches.
A. Jeesh, there's no problem with informal anthropomorphizing in these situations. Humans have a goal and get feedback on progress toward that goal; when a human gets positive feedback that X gets them toward the goal, the human chooses X.
The combined system Google-corp+developer+algorithm is also goal seeking and making choices so anthropomorphizing the system is appropriate.
B. The problem we're talking about isn't "noise" but "feedback": a goal-seeking system that muddies its final result with its initial state. Essentially, bias, a situation that's quite common in statistical systems.
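That feedback can be simulated in a few lines. This is a rough, invented model of the loop the thread describes (the follow rate and over-weighting factor are assumptions, not measured values): the share of a topic in the feed is set from the watch history, and the next watches are drawn mostly from the feed, so a small initial signal is amplified.

```python
def simulate(initial_share, rounds, follow_rate=0.9, boost=1.5):
    """Iterate the watch->recommend->watch loop for a single topic."""
    share = initial_share
    for _ in range(rounds):
        # The engine over-weights the observed share when building the feed.
        recommended = min(1.0, share * boost)
        # The user mostly clicks what is recommended, occasionally strays
        # back to their baseline interests.
        share = follow_rate * recommended + (1 - follow_rate) * initial_share
    return share

print(round(simulate(0.10, rounds=1), 3))   # → 0.145: one pass already lifts a 10% signal
print(round(simulate(0.10, rounds=10), 3))  # → 0.91: the loop saturates near its ceiling
```

This mirrors the earlier anecdote: watching "10% of something" does not stay at 10%, because the system's output is fed back in as its next input.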
"Not perfect" is an understatement.
They are horrible. Not just because of what the article is about, but because their recommendations are ridiculous even outside political content.
And YouTube's rules also motivate providers to churn out a lot of content regularly (leading to quick-to-produce crap) and punish those who take their time to think or research before they talk.
It becomes much less horrible when you consistently train the algorithm with the "I'm not interested" button. You can even choose if you're not interested because you already saw the video, dislike the theme, or dislike the channel. My YouTube home page is pretty good now, after doing this about 20 times. Any time I watch something vaguely conservative and start getting "SJW CUCK OWNED!!!" everywhere, one button press is usually enough to instantly revert back.
The problem is probably 1% or less of users ever click it.
They would be much less horrible if, every time one watched videos, one rated them in some way, instead of the system assuming that "watching" = "interest".
> And youtube rules also motivate providers to churn out a lot of content regularly (leading to quick to produce crap) and punish those who take their time to think or research before they talk.
That's not just YouTube: just about all media is driven by what's "new" and "hyped" rather than what is deep and well thought out.
Watching does = interest if you continue to stay on the topic. Interest certainly does not mean approval, and it often can mean "I find this content extreme and outrageous."
I am interested in the topic of WWII, including watching Nazi movies. I am not interested in the Nazi-stormtrooper-adjacent alternative-history channels YouTube recommends to me as a result.
I am interested in a scientist's or writer's YouTube channel about her topic. Not so much in confident Johnny's cranked-out crap on the same topic.
I see the same thing on a completely uncontroversial subject: music videos. If I watch e.g. a Beatles video, the recommendations aren't [Rolling Stones, Kinks, Badfinger, ...], they're always [Beatles, Beatles, Beatles, ...]. I don't know if it's naive and bad or if it's actually good for engagement and I'm just an outlier in what I expect to get.
I only wish I would get songs by the same band... It sort of seems like Youtube tries its darndest to steer me away from the current band. I'd be happy if it just cycled between hits from the same artists, but I've never seen it do this recently.
I think what's most frustrating about it is I don't know what game they're playing. Recommendations used to be logical, now they're almost a waste of screen real estate. What even happened?
I've noticed the same in my usage across almost all Google services at this point. My speculation is that this is due to them being so heavily dependent on neural networks these days, versus the previous mix of algorithms. The old approaches seemed better at not presenting false positives or creating a ridiculous mish-mash of previous trains of thought in an effort to recommend something, anything... no matter how wrong.
For example: older versions of the Google keyboard were pretty good about not recommending/autocorrecting non-words. So I'd type things like function/variable names (which often use camel case) and it would either suggest valid English words or perhaps a non-word used earlier in the message. These days, my use of camel-case names in technical emails spills over, so that emails to family now pop up with bizarre camel-case recommendations. I'm seeing similar things happening to YouTube recommendations and elsewhere.
My guess is the system's creators think a lot about making the best recommendations for situations where they have enough information, but seldom think much about the consequences of making recommendations where they don't have the information to make one. Both users and developers underestimate how much information is needed for a decent recommendation.
Edit: And the situation seems very much a product of YouTube (and basically all the content providers) stripping any user content controls from their UI. The user has no easy way to tell YouTube what they want besides choosing among the videos YouTube offers them, and those become more circumscribed as the process progresses.
There's a myth of the content providers doing magic to give the user what they want, and the results seem to be this, i.e., just terrible.
There is a hidden "not interested" button on mobile, where you can then say why you don't want to watch a video and choose to block the channel or similar videos.
The algorithm isn't ideal, but after a couple days of filtering you can pretty consistently get a feed of videos you're interested in.
The idea of people getting persuaded by fake news, etc. is a topic for another day. This assumes you can vet trustworthy sources on your own.
I've noticed this as well; the algorithm homes in on a few topics and videos. Interestingly enough, so does the algorithm in the Quora app. I hate this behavior: I want to use a recommendation algorithm to discover new similar content. My guess is that a "new similar content" algorithm doesn't get the same levels of engagement. Maybe it's the same as radio stations playing the same twenty songs on a loop.
Clearly Google knows what they're doing; it's just disheartening that this algorithm is the best for engagement. It also makes me wonder about human habit and reinforcement...
Here’s a Quora response to the question of why radio stations play the same songs over and over and over and over and over and over and over and over and ... again:
I am fond of videos showing people's trips in Antarctica. This tendency exposes me to flat-earth videos ALL THE TIME. No matter how many pleasant tourist videos I watch, I still get flat-earth videos and even occasional hollow-earth videos. (the flat earth view includes the notion that the Earth is a disk with Antarctica as the outer edge).
I can't watch the pleasant tourist videos of Antarctica before bed, in case I fall asleep and the automation shows me a stream of flat-earth videos.
The link here is just the word Antarctica. The flat-earth view has nothing to do with viewing penguins or pleasant scenery.
Another aspect is that YouTube's algos tend to heavily favor "engagement", i.e. watching a full 30-minute video; it's not hard to see how this funnels people toward fanatics, given their nutty followers' devoted engagement.
> Users searching for news on Chemnitz would be sent down a rabbit hole of misinformation and hate. And as interest in Chemnitz grew, it appears, YouTube funneled many Germans to extremist pages, whose view counts skyrocketed.
As a long time YouTube user, I've noticed this myself.
The starting video doesn't even have to be extreme. For example, watching a 5-minute daily news video on the Fox News channel primes YouTube's super-sensitive [$] algorithm to nudge me toward channels with considerably more ideology-driven content (like 'PragerU,' for example). Watching the next auto-recommended video primes the algorithm to an even greater extent, nudging me toward channels with even more extreme positions ('1791L,' 'Computing Forever,' for example). Rinse and repeat a few more times and you end up on channels that serve as a gateway to radicalization -- channels whose names I'm not comfortable mentioning.
YouTube's algorithm needs some urgent tweaking, especially to its topic/video recommendation sensitivity. It gives more weight to recently watched content, which is fine, but should not be at the cost of previously watched content by the user.
[$] When I talk about YouTube's 'super sensitive' algorithm, I mean to say how watching even a single video on a particular subject primes it to keep showing me similar videos everywhere -- from homepage to recommendations to the auto-play ones.
P. S. The aforementioned example also applies perfectly to channels on the opposite end of the political spectrum.
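The "super sensitive" complaint above can be made concrete with an exponentially decayed interest profile. This is a hedged sketch of why strong recency weighting behaves that way, not YouTube's real scoring; the decay rate and topic labels are invented.

```python
def topic_weights(history, decay=0.5):
    """history: oldest-to-newest list of topic labels.
    The k-th most recent watch contributes weight decay**k."""
    weights = {}
    for age, topic in enumerate(reversed(history)):
        weights[topic] = weights.get(topic, 0.0) + decay ** age
    return weights

# Ten videos on a long-standing interest, then one outlier.
history = ["interest_x"] * 10 + ["fringe"]
w = topic_weights(history)

# With this decay rate, the single most recent video gets weight 1.0,
# while the entire ten-video history sums to just under 1.0, so the
# one-off watch dominates the profile.
print(w["fringe"] > w["interest_x"])  # → True
```

With a geometric decay of 0.5, all older history combined can never outweigh the single latest watch, which is the "most recently viewed single video" tunnel vision described in the replies below.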
I very rarely use youtube on my (iOS) phone; I'm not logged into any Google account on it. Before Chemnitz started I showed some friends a video of a robot marching to "Erika" (a WW2 marching song)... a few days later I got a push notification by the Youtube app on my homescreen of a video posted in some fringe-AfD channel which was full-on propaganda.
YouTube will literally push extremist right-wing propaganda onto people's home screens based on what I can only assume was a single video view.
> YouTube will literally push extremist right-wing propaganda onto people's home screens based on what I can only assume was a single video view.
Absolutely. It's as if YouTube forgets about your previously viewed videos and channels, and tunnel-visions you into watching content around your most recently viewed single video.
>It's as if YouTube forgets about your previously viewed videos and channels, and tunnel-visions you into watching content around your most recently viewed single video.
Which, if your primary concern is maximizing engagement, makes perverse sense. Viewers are more likely to be attracted to content similar to what they just watched than content they watched less recently.
Edit: I just realized the opposite might be true as well... some viewers might be attracted to videos which appear to contradict their preferences as well, just to be able to criticize or debunk it in comments. There's probably a relationship between this and successful clickbait thumbnails with their random arrows and circles that don't actually indicate anything.
>primary concern is maximizing engagement, makes perverse sense.
I agree. But considering online platforms have, more or less, become de facto town squares, it's no longer responsible for a tech company to ruthlessly optimize for user engagement at the cost of stability of democracy.
There appears to be a conflict of interest for those platforms, because they don't intend to act as de facto town squares... they are businesses, and arguably have no obligation to anything but profit, regardless of their popularity or ubiquity.
Despite this, their nature as "town squares" affects the market whether they want it to or not. If enough viewers disengage because of their ruthless tactics, or change their behavior in ways that don't fit the algorithm, that creates a business case for more ethical behavior... which would probably be more compelling to them than an argument for preserving the social fabric or democracy.
Right... very few people seem able to differentiate "radical" views from views they merely oppose.
What does "radical" even mean? Opinions that are not part of the present consensus? How is society supposed to evolve?
A society that cannot express "radical" views with society's support may eventually express these ideas violently. The downvote button only works for so long.
This article seems a bit sensationalized. The examples they give are a single video with 500,000 views, and a couple of channels whose views average 20-30k. If YouTube is recommending "extreme" channels so often, why aren't they getting more views?
> “Lies, propaganda and manipulation are harmful for society, but on their own are not illegal — and so our hands are often tied,” said Mr. Ipsen, of the government-linked internet monitor.
Good. I think it's insane that the German government is even trying to establish a Ministry of Truth.
It seems now more than ever, we are getting biased reporting from mainstream media outlets, and I guarantee that any government crackdown would not include these mainstream news sources.
Here's an example from just a couple of days ago: CNN's Chris Cuomo as well as a number of major news outlets reported on Judge Brett Kavanaugh "snubbing" Fred Guttenberg, father of a victim of the Parkland shooting, by not shaking his hand at the event. None of these outlets bothered to mention that the day before on Twitter, Guttenberg said:
They also didn't mention that Guttenberg was the only one trying to approach Kavanaugh, or that Kavanaugh's kids were removed from the room for safety concerns shortly prior. That seems like manipulation to me.
I don't say this because I want CNN banned, or anything like that. I'm just completely against any censorship, especially government censorship, of ideas. I believe in the liberal values of free exchange of ideas (or "Liberal Science" as Jonathan Rauch calls it, in his excellent book, Kindly Inquisitors).
>They also didn't mention that Guttenberg was the only one trying to approach Kavanaugh, or that Kavanaugh's kids were removed from the room for safety concerns shortly prior. That seems like manipulation to me.
That article doesn't seem to indicate it's "fake news". Snopes says there's no definitive answer as to why they left, and that different sources give conflicting information. I'm not sure that level of uncertainty rises to the level of stating it's "fake news".
At any rate, there are plenty of clearer examples of CNN reporting things that are false. Glenn Greenwald writes a lot on this topic. See the articles he's written about CNN, MSNBC, retracted coverage of Russia, etc.:
Yeah, there are many, many examples. I just took an extremely biased article from very recent memory to show I wasn't cherrypicking. That's a great resource, though, thanks for the link.
You can also use a site like http://newsdiffs.org/ to see when publications retract statements or change headlines quietly.
>Good. I think it's insane that the German government is even trying to establish a Ministry of Truth.
What is the alternative (if you or anyone is able and comfortable in answering)?
Option 1: Let the free market regulate itself.
Option 2: Let government govern - And have the state regulate the free market.
Option 1 depends on demand from the consumer being ethically driven (rather than purely financial / selfish).
I think that society has a long way to go before we can consider option 1 viable. Letting the state regulate however is a terrifying prospect if 'fallen into the wrong hands' or 'the algorithm is biased'.
Is anyone aware of the leading conversations / academic & theoretical consensus in best structure / infrastructure / accountability hub for this problem?
Government's primary job is to protect liberty, so I would be in favor of legislation that is strictly anti-censorship (such as Social Media Anti-Censorship Act, check #SMACA on Twitter for the outline).
Absolutely. Those numbers are peanuts at YouTube scale. Popular YouTube videos regularly get 10m+ views. It's similar to back when the WSJ went digging for days and managed to find 2-3 racist videos with 10k views, then refreshed them until a Coca-Cola ad showed, to screenshot it. Most of these are manipulating reality to drive their narrative. I've yet to see a proper, scientific, and thorough example of this actually happening.
See also Danah Boyd's talk on how media literacy classes push some students toward extremism. The gist is that schools have been promoting programs where students are taught not to take what they see in the media at face value, and to go Google/YouTube it for themselves. Which, because of the sorts of things discussed in this article, results in a certain percentage of students becoming radicalized and basically (or literally) becoming nazis.
I suppose there's a need for "Internet literacy" (or "Google literacy"). One rule I instinctively follow is, if I find some claim surprising, e.g. "there was more X under president Y, than president Z", I will search for something like "X USA graph", look for a source I recognise (and hence know the credibility and biases of), and see if the data supports either or neither of those alternatives.
I will specifically not search for "X president Y" or "X president Z" because that will be priming the algorithm to confirm my biases.
Google News works the same way. I've been using Google News since it was a list of links to news sites. The algorithm is very good at surfacing breaking news, but can't help showing the most extreme clickbaity BS among the legit stories.
Interested in stories about Ocean Science? We'll show you "Holy S*, These Triassic Ocean Reptiles Were As Big As Hell"
Two criticisms: one old tweet (edit: parent added extra tweets after I posted this -- and... I can't find them when I do equivalent searches anywhere near the top of the page. ilamont appears to have some kind of axe to grind here) doesn't carry much weight. A current search for "ocean science" on news.google.com in fact looks pretty much like what you'd expect.
Secondly: sorry, what exactly is wrong with a sensationalized headline about interesting content? Some of those animals were, in fact, "big as hell", and it's worth covering in popular media. This appears to be the article in that screenshot:
Assuming the youtube recommendation algorithm is mostly just "people who watched this also watched", then couldn't the asymmetry in the recommendation graph structure simply be a relatively accurate reflection of asymmetry in the underlying data?
It seems plausible to me that the probability that people who hold "extreme" views will watch "moderate" content is lower than the probability that people who hold "moderate" views will watch "extreme" content. I haven't run any experiments, but it seems to me that a difference in probabilities like this would induce the kind of "closed group" structure being discussed on the resulting recommendation graph. The low number of steps to the closed groups shouldn't be that surprising, given previous famous results about degrees of separation in these kinds of graphs.
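That asymmetry hypothesis can be checked with a small Markov-chain sketch: if moderate viewers sometimes click extreme content but extreme viewers rarely click back, the extreme cluster behaves as a near-absorbing state. The transition probabilities here are invented purely for illustration.

```python
import random

random.seed(0)

# P(next cluster | current cluster): moderates stray occasionally,
# extreme viewers rarely return.
TRANSITIONS = {
    "moderate": {"moderate": 0.8, "extreme": 0.2},
    "extreme":  {"moderate": 0.05, "extreme": 0.95},
}

def walk(start, steps):
    """Follow `steps` random clicks through the recommendation graph."""
    state = start
    for _ in range(steps):
        p = TRANSITIONS[state]
        state = random.choices(list(p), weights=p.values())[0]
    return state

# Long-run share of walks that end in the extreme cluster.
ends = [walk("moderate", 50) for _ in range(1000)]
print(ends.count("extreme") / len(ends))  # roughly 0.8 (= 0.2 / (0.2 + 0.05))
```

Even though every walk starts "moderate", the stationary distribution is dominated by the stickier cluster, which matches the "closed group" structure the comment hypothesizes.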
It's quite bizarre, the way YouTube works. I will watch a video that covers something like climate change, and immediately I am bombarded in my recommendations with left-leaning news clips and stories. The same goes for when I watch a video that could be considered conservative; right away I am suggested videos like "Professor puts liberal sjw student in her place - compilation."
Watching any video involving comedian Bill Burr will immediately get you anti-feminist and even white nationalist video recommendations. It's just weird how close that cliff is to fairly mainstream videos.
I use incognito mode for probably close to half of the YouTube videos I watch, as a defense against the algorithm. That technique, plus freely using the "not interested" button, has actually resulted in a pretty good experience.
Same here. After getting bombarded with extremist-ish content after watching some random socio-politics video, I now use incognito for lots of stuff I watch on YT that's not tech/science/unpolarized-art/business.
I actually want YouTube to recommend me videos, because it recommends stuff I'm interested in a good deal of the time. I can keep it fairly clear of toxic content the same way I keep my garden fairly clear of weeds: constant vigilance.
Yeah, perhaps I wasn't completely fair. For some of the content I watch, I'm quite happy with what they recommend. But as you said I have to be constantly vigilant with the rest.
In 1999 I was working for a web search company and one of the things we really wanted to do was to make search personalized. Eventually we started talking about what years later became known as "filter bubbles". It would be bad if people were constantly fed stuff from the same information universe, right?
In retrospect, it didn't get all that much of our attention in terms of work done. Sure, we got very obvious (and measurable) feedback mechanisms, and we did spend considerable amounts of time countering the aspects that more directly affected perceived quality of the product. But in terms of what it means, we figured "we have time to deal with it later".
Just figuring out how to do this inside a few milliseconds per query was more than enough of a challenge in the near term. Since then (and probably before then) I'm sure thousands of teams have had roughly the same experience. You know that there are going to be problems ahead, but you build it anyway because if you don't someone else will. Your paycheck depends on it.
When you implement software that anticipates the preferred path a human likes to take through content you end up amplifying what tendencies are present in the person. This is "harmless" in the sense that you are not imposing some editorial direction. You can't be criticized for imposing your opinion or notions of good/bad, right/wrong etc. However the results appear to be polarizing in that they allow people to navigate to certain maxima.
The really hard problem here is how you counter this effectively without imposing your view. I think there's interesting work to be done in that area.
The international corporate media has not been honest in its reporting of the refugee situation in Europe. This is becoming clearer year by year, because some of the facts that were earlier dismissed as racist rhetoric are now becoming very difficult to ignore. For example earlier this year, Chancellor Merkel finally admitted that there are "no-go" zones in the country where even the police dare not go. This is an extremely serious problem.
People flock to this content because they are not getting the whole truth from the corporate media, and that is the root of this problem. YouTube's content would not be compelling if these things were being reported honestly, and given the same degree of volume as other important stories.
In my country, India, there are plenty of places in which the value systems are completely at odds with ideas of gender equality, religious tolerance, racial equality, and LGBT equality. If a western country were to create pockets of populations where people held these values, it would be extremely incompatible with the norm in the societies that they have achieved today. As a result, these places would be radically transformed into incredibly dangerous areas, particularly for women and LGBT folks. It is not bigoted to point this out, it is true because this reality exists in our country, and it is one that we are aware of and grapple with. Of course it will exist in another country if a large enough number of people who hold these ideas are imported en masse. If these values existed in Syria or Sudan or other parts of the world where refugees are coming from, of course this will become a problem for the country that is resettling them. This was a valid criticism that did not get a fair enough deal from the media, and the consequences have been life altering for many Europeans.
The dream of multicultural societies where everyone lives together in harmony requires everyone to buy into it. If there are bad actors, and others are aware of this, it does not make them bad actors too. The media seems to have been hesitant, perhaps because of fears of stoking xenophobia by providing it with legitimate information that could be compelling for their cause. There is never a good reason not to tell the truth about these things, this is the result of what happens when you go down that path. At this point, many are convinced that the news media had sinister and not well intentioned motives, behind why this was done, and as a result trust in the news media too will decline rapidly, in favor of rag tag YouTube operations.
The only example you cite is completely off-base. The media first reported on “no-go” zones in Paris, then in Sweden, and later in other European countries. These were reported as places where Muslim immigrants or refugees took over, declared Sharia law, attacked police for entering and were so dangerous that police refused to enter.
I personally visited many of these “no-go” zones in France and Sweden specifically called out in some of these reports over the past few years out of curiosity, and found the reporting to be utter bullshit. These neighborhoods ranged from completely normal, peaceful places with a mix of native and immigrant people, to places with crime but nothing more than you’d find on an average “rough” street in the US. The violence that people reported was drug-related, just as it is in most places in the world where so much crime occurs. I saw normal police drive around on a regular basis, just as they would anywhere else. There was no evidence of “sharia law taking over” or any similar garbage. My personal conclusion is that right-wing media took a common occurrence: instances of drug-related crime, and spun a racist storybook narrative around it blaming refugees and immigration.
I obviously didn’t see everything so some could have been worse, but I know enough to understand the false agenda behind this reporting.
I don’t doubt that there are places where police are hesitant to go due to high crime and violence (Camden, NJ anyone?) but those exist everywhere and can be reported on without the false narrative backing it. Merkel acknowledging that there are such high-crime areas doesn’t mean that she acknowledges the racist fairytales that go with them or that the public was misled by the lack of confirming the (false) right-wing media reports about them.
There is an argument against one part you wrote.
"There is never a good reason not to tell the truth about these things, this is the result of what happens when you go down that path."
Let's make a hypothetical: a researcher has come up with a good way to measure intelligence/violence and concludes that indigenous Australians are way below everybody else in intelligence and way up in violence. What is there to gain from telling everybody that, and what is the potential downside?
They can get extra resources in school to help them and people can resent them for taking more resources from the school without paying more tax. So there are opportunities to help them but can also create tensions. I agree that we should always tell the truth but I can still see the point of the other side.
So as a hypothetical, if there are differences between different people, should we tell everybody. You don't need to answer me, just a fun concept to play with.
For your hypothetical, that research would probably explain that averages in lack of achievement are likely to be caused by the intelligence differences, instead of some unconscious bias. Yes, I think it would be useful because it helps ensure that inaccurate conclusions are not drawn.
I think your description is misleading: you start talking about no-go zones, referring to a statement[0] from Angela Merkel, without mentioning which group is supposed to have created them, and then go on to talk about refugees, which suggests that they created them, when in fact those no-go zones are upheld by racist, violent right-wingers who attack just about anyone who looks non-white and foreign. I live in Germany near those areas, so I can differentiate fact from fiction.
If you never care about rapes except when it’s a dark skinned perpetrator in a white dominated country, then you are both racist and correct that that rape amongst refugees is an important issue.
No, it's just true. You can take a fact and use it to support a larger racist worldview composed of other facts, many of which may be false, contributing to a poorly reasoned out illogical perspective, or one that is lacking key details.
Or you can simply take that fact and look at it in isolation, or in combination with true statements, and all the necessary key details, and form a conclusion that does not involve bigotry.
What you are talking about is what people take away from the story. If the media is trying to control what people take away from the story, by not publishing it, because it could potentially help racists further ratify an already flawed worldview, then that is actually having the opposite effect right now, and it is a poor, unethical strategy. The media should report the facts, and should report on important issues with a fair amount of coverage, the volume of which is unaffected by agenda.
I think you're setting up a strawman there. What he's saying is that media organizations are intentionally ignoring issues that would stand out by themselves, but choose not to cover them because of certain characteristics of the offenders.
This leads to extremism as people who rely on those sources for news coverage end up completely misinformed about the world and find people discussing matters to be doing so out of some form of ism and 'fake news' rather than because it may be an increasingly serious problem. And it also fuels extremism on the other end as they also see the lack of reporting as 'fake news' and it fuels questions about the motives involved in not covering these issues.
In my opinion, this quite rapid shift from the news trying to inform people (though of course through some lens of bias) to the news quite overtly trying to control people's actions is probably playing a major role in the rapid and, arguably, deserved deterioration of trust in the media. But that destruction of trust in the sources with the resources to carry out impartial reporting is doing nobody any good.
You think there are no real people who only care about rape when it’s interracial? That is what straw man means. You think I described a man who doesn’t exist?
I'll take this one: when you open your doors to people fleeing war, and part of that population turns out to be over-represented in sexual assault crimes at four times the rate of natives, that anomaly warrants investigation and, often, outrage.
Why? Well if it's not obvious, your society has been made more dangerous due to policies you had no say in, and when you voiced concern that something like this might occur you were called the very epithet you just used - "racist".
Hence, people 'care'.
I mean do we not remember the mass sexual assault in Cologne? Or the same in Sweden's summer festivals? This is not normal and demands attention.
Wow your comment and mine were flagged. Absolutely amazing. Not a single thing wrong with mine or yours.
Great critical thinking guys, have fun in your complete echo chamber. If you can’t even have a discussion from someone who wants one and is willing to have an open mind and asking questions, not surprised at all that the entire world is slowly rejecting the current orthodoxy that has given us peace and prosperity for 70 years. Let it all go down the drain due to sheer arrogance and stupidity.
If you disagree with anything I said, downvote and reply back. Only flag if I broke any rules or attacked anyone in any way. Cowards.
This is my largest gripe with these recommendation systems. They are paternalistic and regressive, not aspirational in any sense. What do I mean by that? I mean that they are built with a sacrosanct "the algorithm knows best" attitude (so maybe "algorimonious").
They allow no ability for the user to control and guide the process, to mold what they consume to be the person they aspire to be.
More concretely: I never, ever want a recommendation of "late night talk show interviews celebrity", even though I am somewhat likely to click on it, waste my time on hollow, vapid content, and thereby tell the algorithm to regressively drag me down towards more of that content.
I have gone so far as to go to each of those channels and block them, but that doesn't stop their videos from randomly popping up in the sidebar, so I think blocking must just prevent those channels from commenting on things I post.
What I would prefer overall:
An ability to guide and control the recommendations to the point of writing my own code that runs in a google cloud function because everyone will be a bit different in what they’d like to see. Personally, I’d then write something that flat out blocks certain keywords and attempts to show sidebar recommendations similarly to how they used to do them: closely mirror the current video with very little if any skew towards the current users history.
So I think the issue we face here is one of limited user control: users are not given a voice or meaningful input into the algorithm that determines what content they see. Instead, the only input they have is their own watch history.
Right now YouTube is a buffet. It's the only buffet in town. It loads the bar with all the junky food people are statistically most likely to eat, whether they want that food sitting there tempting them or not, and whether it's even good for them or anyone or society as a whole or not. It doesn't give them any ability to say "no French fries" at the buffet. If you put French fries on your plate before, and they observed you eat the French fries, by golly, you're getting French fries in that buffet from there on out, mental waistline be damned.
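For what it's worth, the kind of user-written filter described above (keyword blocking plus ranking by similarity to the current video rather than the user's history) could be as small as this sketch. Everything here is hypothetical: the hook, the field names, and the tag-overlap scoring are made up, since YouTube exposes no such API.

```python
# Hypothetical user-supplied recommendation filter, per the idea above.
# Field names ("title", "tags") and the similarity measure are assumptions.

BLOCKED_KEYWORDS = {"late night", "celebrity", "talk show"}

def filter_recommendations(current_video, candidates):
    """Drop keyword-blocked videos, then rank the rest by topical
    similarity to the current video instead of the user's history."""
    def blocked(video):
        title = video["title"].lower()
        return any(kw in title for kw in BLOCKED_KEYWORDS)

    def similarity(video):
        # Naive topical similarity: Jaccard overlap between tag sets.
        a, b = set(current_video["tags"]), set(video["tags"])
        return len(a & b) / max(len(a | b), 1)

    allowed = [v for v in candidates if not blocked(v)]
    return sorted(allowed, key=similarity, reverse=True)
```

A per-user function like this is exactly what you could deploy as a cloud function, with YouTube calling it (in this imagined world) before rendering the sidebar.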
It's repeatedly emphasized in the article that the recommendation algorithm gives viewers "what they want to see" and what has high engagement. Unfortunately, these criteria seem to push towards fringe and extreme content, because that's what people are interested in watching and what provokes them to react. YouTube, for its part, probably does not intend for it to be this way, but seeing as its goals are the same (albeit to maximize ad revenue), I'm not optimistic about this being fixed unless there's a massive rethink of how to surface content to users.
They have to play with the control variables producing the feedback loops.
People and content creators are influenced by the view/like/dislike count.
Instant feedback through these counts produces all kinds of unintended consequences.
Instant feedback is overrated and unnecessary. It's like constantly getting a live upvote/downvote from your spouse or boss or prof on everything you do. What kind of behaviour does anyone think this produces? In such a real-time conditioning environment, people turn into rats in a B.F. Skinner experiment.
The Russians, or any bad actor, no longer have to create content.
Just find a lunatic and encourage/prop them up with views and upvotes. YouTube will then take care of distribution triggered by the view and like counts.
Hide or delay the counts from both the creator and the viewer, and the world changes overnight.
The experiment can be run on HN. Hide the upvotes on certain controversial threads and see what kind of conversations happen.
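The "hide or delay the counts" idea is simple to state in code. Here's a minimal sketch: a counter that only reveals events older than an embargo window, so neither creators nor viewers get instant feedback. The data model and the one-week default are illustrative choices, not anything YouTube or HN actually does.

```python
# Sketch of a delayed-visibility counter: raw events are recorded
# immediately, but the public count only includes events older than
# the embargo, removing the instant-feedback loop described above.

import time

class DelayedCounter:
    def __init__(self, embargo_seconds=7 * 24 * 3600):
        self.embargo = embargo_seconds
        self.events = []  # list of (timestamp, amount)

    def record(self, amount=1, now=None):
        """Record a view/upvote immediately (internal only)."""
        self.events.append((now if now is not None else time.time(), amount))

    def visible_count(self, now=None):
        """Only events older than the embargo are counted publicly."""
        now = now if now is not None else time.time()
        return sum(n for t, n in self.events if now - t >= self.embargo)
```

Creators and recommendation systems would still have the full signal eventually, just not in real time, which is the point: the conditioning loop is broken while the information survives.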
As a content creator, it very quickly becomes apparent that the best route to driving views and subs is to push extreme/edgy content. Balanced content gets no engagement. It's really that simple: with the short attention span of the current generation of heavy media consumers (read: kids), you have to shock them, surprise them, or make them laugh roughly once every 5-6 seconds to keep them on the video. YouTube offers very detailed metrics on engagement, and you can study the exact moment where people lose interest and close the video.
Only massive established channels have the privilege of pushing non-edgy content as their audience is built in already, for anyone new it's shock value and hard-line extremism or your voice will be lost among the noise.
Yeah, but if you are shocking and edgy, is your voice preserved? Sure, people may watch but it's in one ear and out the other, a swell in the sea of noise.
I'm sure most youtubers don't care, but viewer engagement is not the same as being a notable voice.
I've only ever heard the term 'echo chamber' as a pejorative against people who hold anti-establishment views, i.e. 'Hillary lost because middle America is stuck in right-wing/alt-right echo chambers.' It appears to be a derivative of the 'conspiracy theorists are mentally ill' argument, or 'Yes, people hold these views, but it's not their fault; choose one (mental illness/echo chamber).'
The West has long lived by the rule of laissez-faire: wait until something bad happens and only then try to fix things up. It has been effective, albeit somewhat risky. We are still in the early, laissez-faire days of the internet, meaning we have a lot to learn about funneling the vast sea of information responsibly. Maybe we'll learn it eventually, without sacrificing core democratic values like freedom of expression in the process.
This problem in particular seems to be caused by the need for ads (since the recommendation engine seems to be geared towards clicking on ever more videos). Maybe a solution to this and the similar issues that currently turn the internet into a sewer is to promote a 'responsible ads policy' for the internet that puts the initiative in the hands of the user: 1. no recommendations alongside the main content (only in a separate filterable view); 2. no ads alongside the main content (only in a separate filterable view); 3. no ads in the middle of main content (i.e. in the middle of a video; only in a separate filterable view).
It may seem idealistic, but with small increments of public momentum it could eventually make a dent. I don't see any other way, really.
I got frustrated with the Youtube recommendation system, and built a search engine of videos I scraped: https://www.findlectures.com
I'm relying primarily on trusted recommendations - if someone I trust likes a speaker, I'll index channels where they speak. This doesn't scale well, but it has made for a tool that works really well for doing research.
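The "trusted recommendations" indexing policy described above can be sketched as a two-hop lookup: trusted people endorse speakers, and speakers appear on channels. This is purely my reading of the comment; the data structures are assumptions, and findlectures.com may work quite differently.

```python
# Sketch of a trust-seeded indexing policy: index only channels where
# speakers endorsed by trusted people appear. Inputs are plain dicts
# of sets; in a real system these would come from a curated database.

def channels_to_index(trusted_people, endorsements, appearances):
    """endorsements: person -> set of speakers they recommend;
    appearances: speaker -> set of channels that speaker appears on."""
    speakers = set()
    for person in trusted_people:
        speakers |= endorsements.get(person, set())

    channels = set()
    for speaker in speakers:
        channels |= appearances.get(speaker, set())
    return channels
```

The scaling problem the author mentions is visible here: the quality of the index is bounded by how much manual curation goes into `trusted_people` and `endorsements`.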
Have you written about the challenges involved in this? Does YT allow it? What do you index exactly? The implementation? It would be very interesting reading. There used to be a few YT aggregators around; I think the most successful was whatever became YouTube Kids (someone correct that please).
It's not just politically extreme content it does this for. You watch one video about how NASA supposedly faked the moon landings and suddenly it's trying to get you to watch all kinds of wacko conspiracy stuff.
It just struck me that this explains a lot of changes I've seen in people near me and in myself. My mother started getting really into stuff like essential oils. My dad (non-American) spends a crazy amount of time watching videos and reading news about how crazy Trump is (yeah, he IS, but we don't even live in America!). My sister got really into nutrition and diets, with a bent towards complete suspicion of western medicine. My wife, while not a full-on anti-vaxxer yet, is now suspicious of vaccinations and wants to wait to vaccinate our child, and to skip some of them altogether. This after having watched some "documentary" series on YouTube.
Myself, I also seem to have become more extreme in what I'm willing to try to improve my health, personal finances, mental abilities, etc. I'm not sure if all of this is due to YouTube specifically, but it is definitely due to the internet in general. However, I do believe that for many topics, the center/average view is usually wrong. For example, the average views on things like diet, exercise, personal finance, and materialism are just bad. The average Joe eats like crap, rarely exercises, and has less than a few thousand dollars in savings/investments. Therefore, I'm starting to wonder what else out there I might be doing wrong that I haven't even considered yet.
> My wife, while not a full-on anti-vaxxer yet, is now suspicious of vaccinations and wants to wait to vaccinate our child, and to skip some of them altogether. This after having watched some "documentary" series on YouTube.
Out of curiosity is your wife college educated?
It's the radicalization of "average" people that is most interesting. I suspect what your wife discovered wasn't just a series of videos but a whole community of people deeply committed to this nonsense. This is the secret of radicalization: it appears as just a (very rigid) community of like-minded souls. This shows up again and again in the literature and the news.
The internet is doing its job of bringing people together. What's not clear is whether any of these communities are actually beneficial.
> Perhaps most striking is what was absent. The algorithm rarely led back to mainstream news coverage, or to liberal or centrist videos on Chemnitz or any other topic. Once on the fringes, the algorithm tended to stay there, as if that had been the destination all along.
I wonder if part of the problem is that the "news coverage" on YouTube is mostly amateur. People in the mainstream don't feel the need to produce amateur news coverage, since that's done professionally. However, people on the fringes produce and consume vast quantities of amateur news content, because there's so little fringe content that's professionally made. That leads to a large, focused signal that YouTube's algorithms have trouble filtering out, because they're too coarse and they confuse activity for quality.
With a clean slate (no cookies stored), the front page can get as ridiculous as recommended videos. For half the music out there yt will recommend anything between chemtrail videos, weird christian conspiracy videos, aliens, all manner of tinfoil hat shit. It's vile.
I regularly view videos on Polish politics and geopolitics. I am constantly suggested videos by fringe extreme-right geopoliticians and conspiracy theorists ("it's all a Jewish conspiracy to destroy the white race and Poles", "the EU is the USSR", "muslims will rape us", that kind of video - Michałkiewicz, Braun, Kowalski and similar). I regularly click "don't like this suggestion, stop showing me it", and I'm still suggested it after a few days/weeks. It seems there's an endless supply of these videos, and they have hundreds of thousands of views.
It's like youtube wants me to become a right-wing nut. That despite me mostly watching mainstream news and liberal news-as-comedy shows.
My coworker is a right-wing extremist and he gets suggested the exact same videos; he doesn't see liberal or mainstream news in YouTube suggestions, ever.
I was a casual youtube watcher. I am talking 2012 style fail videos, music videos, and broad areas of interest.
Now I cannot stand YT. First, they have clearly decided that they want to compete with IG, as their 'Trending' section is nothing more than influencers. Second, it is likely the case that the 'Trending' section is always loaded with liberal, nonsensical, clickbait videos and artists of no interest to me.
Their trending section is much like TV: content built for the masses. Log in and like videos that you enjoy, and you'll have a personalized recommendation page that is excellent, in my opinion.
Say a perfect recommendation system existed. Should it give me what it knows I'll like or challenge me? Should it give me what I ask for even if it's wrong? Should it aggregate information for me to judge or analyze the content and judge it for me? Is it optimized for me? For the company? For what's best for humanity? And who decides what's best?
What is politically extreme? Would a company that hires someone like sarah jeong be viewed as politically extreme?
The problem isn't youtube. The problem is the news. The news has become politically extreme and that has seeped into youtube.
And it's a bit hypocritical of the nytimes to whine about youtube when they pressured Alphabet ( youtube and google ) to give them priority status on both platforms.
Anyone else getting sick of the news media attacking social media for all the ills they themselves created and continue to create? YouTube, and social media in general, have become politically extreme because the news industry (in particular CNN, the Washington Post, and the NYTimes) has been pressuring them relentlessly to push its content on social media. And since they are so politically extreme, it creates an extreme backlash against them.
Let's not forget that their political extremism is why Fox News was created and why Fox News is so popular. If news media companies were even a tiny bit honest, objective, and fair, we wouldn't be having this problem. But the people running these companies surround themselves with sycophants and create such an impregnable bubble that they can't see how extreme they have become.
Get off your high horse already. You are the problem, not youtube.
I like the recommendation system. It provides content that I'm interested in and I'm surprised over how "fair" it seems given what I'm interested in watching.
Maybe Youtube can provide a choice of algorithms. For example, if somebody wants a more curated "socially acceptable" experience, they could opt-in.
dang and I thought the whole point of youtube was to end up in its weird corners. So many memories of me whistling to the cheerful tunes of the Arctic Monkeys, 4 hours later I'm googling "anxiety" to the background music from Boards of Canada.
And whenever I fall asleep to the stimulating and thought provoking ideas of Yuval Noah Harari, I wake up several films later and likely half way through a 4 hour documentary of the 4 horsemen & NWO.
But I guess the remedy is to use Google even more, deactivate my adblock and embrace the milking experience as intended. So that they can build a more accurate picture serving me "better" content. :)
It also happened to me. I was watching some completely unrelated (technical) stuff when I noticed that a polemical and obviously stupid right-wing AfD propaganda video showed up high in the recommendation list. I was watching stuff in English, yet BS in German was shown (just to be clear: that it happened to be German was not the problem; besides being puzzled that it was such BS, it was also strange to be shown something in a different language). I don't remember what the news of the day was, but when I then searched for something related to it, that extreme-right pub-talk BS was all over the place. It was obvious that the recommendations were skewed. I was puzzled. I closed the browser (cleared cookies), but it remained the same. One video was recommended regardless of how non-political the videos were that I tried to watch. And once I searched for that topic, again, every second video or so was extreme-right BS. It took a few days before that video was not shown anymore.
If I understand the methodology correctly, he built a spider that recursively followed all outgoing recommendations. He was able to find fringe content when starting with mainstream content.
But is that so strange? I remember playing the "Hitler wikipedia game" where you start on a random page and try to find the Hitler page with the minimum number of clicks and it's typically possible to get there in a handful of clicks.
I don't think he has done enough to establish that there is a problem.
EDIT: On the other hand whenever I see someone respond to "source?" with a youtube video, I immediately tune out and disregard their argument as in my experience it will be either cranky, long-winded or both.
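As I understand the methodology, the spider is essentially a breadth-first search over the graph "video -> recommended videos" up to some depth. A minimal sketch, where `get_recommendations` is a stand-in for whatever scraping the author actually did (his real tool surely differs):

```python
# Breadth-first crawl of a recommendation graph, up to a depth limit.
# Like the "Hitler wikipedia game" mentioned above, small depths can
# reach surprisingly distant content because fan-out compounds.

from collections import deque

def crawl(start_video, get_recommendations, max_depth=5):
    """Return every video reachable from start_video within
    max_depth recommendation hops."""
    seen = {start_video}
    queue = deque([(start_video, 0)])
    while queue:
        video, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't expand beyond the depth limit
        for rec in get_recommendations(video):
            if rec not in seen:
                seen.add(rec)
                queue.append((rec, depth + 1))
    return seen
```

The point in the parent holds: with, say, 20 recommendations per video, even depth 3 covers thousands of videos, so reaching *some* fringe content proves little by itself; what would need establishing is how the recommendations are weighted and ordered.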
This is a good point. It seems plausible to me that a recommendation algorithm should "feel out" a user by putting a variety of options in the sidebar.
What I don't totally understand is how it determines the directed nature of the recommendation graph, and how the destinations are labeled. What does it mean to "end up" at "alt right" videos? Do the recommendations have to exclusively lean right or just mostly? What about higher vs lower recommendations? If there are recommendation patterns that go left but don't end up in a tight loop are they considered somehow? There could also be some dependence from the chosen starting video / topic. I'd be curious to see the results from a variety of starting points.
Basically, it's an interesting exploration but I'd like to see it with the rigor of a peer reviewed paper, more than just the NYT.
But people know how to form their own opinion - YouTube likes to show me talks by Noam Chomsky, but I don't have to agree with him. I think they are blaming too much on these evil algorithms.
Yes. 'Extreme' refers really to the emotional temperature rather than the content. 'Moderate' views, compromises, etc, don't make much sense and are therefore ultimately boring. This doesn't imply moderation or the battle for centre ground in politics is bad. After all we know more than we can tell, so we can't always explain our decisions and policies. It's merely that there's less to understand and/or criticise.
At one point in history, the suggestion that you would vote for your leader was "radical". Giving the vote to women was once considered "radical", etc etc
How about encouraging others in demonizing, targeting and killing emergency workers/first responders? Seems pretty radical. Yet it's becoming more and more normal to amplify the voices of freaks who do this on social media.
I'd rather this relativism of "what's radical?" not spread too much, because you can justify pretty much anything with it.
Because greatly increasing the suffering of millions of people who flee a warzone just because some of their ranks are criminals, extremists, or terrorists posing as refugees is very unfair to the vast majority of these people. People shouldn't be allowed in without vetting, but refugees need to go or be placed somewhere.
I'd be more worried about getting stabbed by the average low empathy xenophobe than the average asylum seeker fleeing violence. Should we remove xenophobes from Germany? No?
Vetting people and holding them accountable for their actions is one thing, but closing borders outright to me is just hysteria. I doubt such cruelty is going to make the world a safer place in the long term, either. The average person remembers who was kind in their time of need and who was not.
While your salary is partly due to your country blowing up weddings that you don't care about, don't be surprised when you get a 9/11.
Just like the British benefited from the East India company, the US benefits from destabilisation on the other side of the world. Eventually the world fights back. Every time a western bomb kills 100 civilians, of their 400 relatives 4 turn into extremists with a (legitimate) grievance against the west. Doesn't matter if the western bomb killed a terrorist ringleader, it's not helping.
People died in Nazi camps because other people took the position of "it's not my problem"... An action is extremist or not depending on consequences and context.
I know, as a fellow European, it's easier to not care when the people suffering happen to come from places like the Middle East or Africa. But this is just a bias to overcome.
Probably the real solution is for Western European countries to actually update their police and legal systems to the point where they can handle more, and harder to "herd", people. If there really are "no-go zones" in a country, then imo that's just police incompetence/complacency; they are paid from your tax money precisely to prevent stuff like this from happening. More police and more prisons (and an upgraded legal machine to promptly prosecute thieves and rapists, etc.) doesn't sound good, but it's kind of what you need to keep a society safe in the context of wandering masses of people with different values, who need to be coerced a little into accepting democratic and liberal values...
Climate change is going to make refugees everyone's problem (if it doesn't make you a refugee yourself) so I would encourage you to keep your eyes open for effective means of absorbing migrant populations.
Anecdotal but +1. Watched a couple Jordan Peterson videos and YT started recommending far right wing channels everywhere. Identity politics is exactly the kind of thing he preached against.
In Germany a new edition of Mein Kampf was published - and it sold a lot of copies. Nearly all of those copies were to people who were already anti-Nazi.
I read all sorts of stuff - doesn't mean I buy it.
The focus of this article is on right-wing extreme content, but I think the tendency is true on both sides. The fact is that partisans are more likely to seek out news content, and partisans are happier with content that matches their prior viewpoint. And so the extreme content rises in popularity and is presented as the recommended option to the less partisan.
It's a worthwhile question as to whether this is a flaw in the algorithm or if the algorithm is simply accurately mirroring democratic preferences. Maybe everyone's tendency is toward more politically extreme content, and youtube is simply giving the people what they want. Which is, after all, what YouTube probably sees as the correct goal.
I suspect that enough differences may exist between liberal and conservative subpopulations that there may be differences in the media they prefer to produce and consume.
For example, talk radio is much more popular for the right wing than the left. Left wing commentators and performers have occasionally tried to break into talk radio and they generally don't accrue an audience.
Given that, I wouldn't be surprised if YouTube in particular (or any one site by itself) had more right-wing extreme content than left-wing extreme content. Left-wing extreme content may exist in equal proportion overall without having much of a presence specifically on YouTube.
There's been numerous reporting on the surge in popularity of anti-government militia groups, though it looks like most of it traces back to a report by the Southern Poverty Law Center, which you may or may not take as a reliable source.
To be fair, whilst the modern, Western extreme left are generally less violent, there's no shortage of extreme left media that's designed to engender hatred of perceived groups of enemies far more than propose solutions, and certain extreme left factions which will go out of their way to defend state use of violence by notionally left wing or "anti-imperialist" governments, offer explicit support or defences for certain militant Islamist groups, or in the context of the modern UK extreme left, identifying "Zionists" to be purged from public life...
You have to be willfully blind not to see the obvious connection between far right groups and violence. Both the US and the UK have seen dramatic increases in far right terrorist attacks and equivocating this clear, observable phenomenon with bogeymen on the left is very disingenuous. If you're actually concerned with rising extremism you'd focus on the more pressing issue which is very much not the anti-imperialists.
You'd have to be wilfully blind to ignore both my opening acknowledgement that the far right is on average more actively violent than the modern, Western far left, and the fact that groups with an extreme agenda and the will to endorse violent action exist on the extreme left. So does a social media funnel leading from very reasonable causes to campaigns to purge people on account of imagined political allegiances linked to their ethnicity, arguments that death threats against political opponents are actually justified, solidarity with pro-violence Islamists, and people who advocate extending the principle of punching perceived fascists well beyond Richard Spencer. Few things are more damaging to political discourse than thinking it's unfair or disingenuous to criticise anyone who's not as bad as the neo-Nazis...
Well, most recently I've heard about hundreds being killed in Nigeria by Boko Haram and hundreds of social leaders murdered recently in Colombia. Plus whatever countries the US is currently slaughtering people in.
In the mid 20th century, far left groups were responsible for the majority of violent extremism. Wikipedia says as much as 75%.
Your link doesn't support what you say at all. Firstly, it's talking about terrorism just in the USA. Secondly, the 75% figure references a 1994 book[0] which is referring to "American terrorism in the eighties"[1].
To give more flavour of the book: "Had it not been for the emergence of the environmental terrorists during the closing years of the decade, terrorism in America would have been virtually non-existent during the late 1980s. ...1990...was the fourth consecutive year in which America experienced no deaths or injuries due to terrorism."
Meanwhile in the 80s, the US was busy invading and subverting countries across Latin America, training paramilitaries/torturers, etc. But the left is the one 'rationalizing'...
[1] "Although most Americans were left with the impression that American terrorism in the eighties was largely rightist, the actual number of acts of terrorism committed by left-wing groups accounted for about three-fourths of all officially designated acts of domestic terrorism in America." (p94)
Yes, and I assumed the OP was talking about America, since the statement that there aren't any leftist extremist groups is obviously untrue from a worldwide perspective. That section on Wikipedia absolutely lists the left-wing extremism that was common in America from the 1960s to the 1980s. I didn't look into the source of the stat listed there. But the point still stands: left-wing extremism has existed in the US in the past, even if it isn't the predominant problem present day.
Sorry, I thought it was obvious I was talking about active groups killing people today, and not about no longer active groups killing people in the previous century.
It is important not to put the cart before the horse: it's YouTube that controls recommendations, through various, mostly opaque means.
Do not confuse that with what people want; you hear plenty of stories about how somebody watched some $stupid_shit and then their recommendations were pure $stupid_shit for the next two weeks.
You have to remember that these articles are, to some extent, hand stitchers writing critiques of the sewing machine. The newspapers are what is getting killed by new media, and honestly they deserve it. They've had an oligopoly on pushing their agenda for a long time, and it has hurt people (e.g. beating the war drums for Iraq, leaving the American people objectively misinformed). There are criticisms to make, but these people are not the ones to make them.
A week ago the 'mainstream media' spoke of mobs and pogroms. Yet here's Saxony's Minister President:
> He also criticized the role of the media in the reporting of the events, saying that "there was no mob, no hunt and no pogroms."
You see it's all about narrative. Yes fringe Youtube accounts may have misinformation, but so may mainstream news. In fact people flock to Youtube because the media tend to commit lies of omission or editorialize their reporting.