Hacker News
What Facebook doesn’t understand about the Facebook walkout (theverge.com)
61 points by rbanffy on June 2, 2020 | hide | past | favorite | 43 comments


This is one of the dumbest articles I’ve read about the issue, treating it as a binary good-vs-evil problem instead of critically thinking about the underlying issues and the effects of what they’re actually proposing.

I mentioned this in a previous thread, but we absolutely do not want Mark Zuckerberg or his employees to make decisions on what is or isn’t newsworthy. They have far too much power, and not enough internal controls, to be involved in politics. (And yes, you can have an independent moderator system - but they’ll end up in the same position as Facebook as well, with no accountability to the public.)

Just to take a simple example, what would have happened if a facebook moderator had decided to remove the initial video of Floyd's murder because, hey, the official reports said he was doing something wrong and the video had the potential to incite violence? (which, in hindsight, it absolutely did.) I'm fairly sure we would not be having the discussion we're having today, and we would have lost a vital opportunity to improve our democracy.


> we absolutely do not want Mark Zuckerberg or his employees to make decisions on what is or isn’t newsworthy.

The problem is that they already ARE making decisions on what is newsworthy. It is not possible for them to be neutral.

Today the FB News Feed algorithm optimizes for user engagement. That is an editorial decision! It just so happens that it results in having the entire population get their news through a skinner box that reinforces and worsens their most harmful biases.

Like it or not, demagogues around the world (not just the US) are exploiting this mechanic to gain power in ways that are harmful to society. Should Facebook be the arbiter of truth? That's the wrong question. They already are. At this point all we can hope is that they choose to arbitrate the truth in a less damaging way.


I assure you that Facebook is both very powerful and very involved in politics. Just this month it came out that their own research found their recommendation systems were steering people toward the far right. The execs allowed it.

It is impossible for Facebook to exist and act outside of politics. It’s a nonsensical idea that they could avoid involvement in political issues given the definition of politics.


I find it ok for Facebook to do something like their "Top stories" based on what their algorithm determines for me.

But I'm somewhat annoyed that "Most recent" doesn't actually sort the news feed chronologically, and also show everything. The algorithm is still doing something there.


Yes! I want to see everything my friends are posting, not what FB thinks I should see.


Perhaps less applicable to the Facebook issue, but I have found Mastodon to be a potentially viable replacement for Twitter, though it definitely skews heavily towards the tech crowd.

Chronological timelines, and user-driven moderation. Plus there are way more users now than there were when I joined a few years ago.


From first principles, it is possible to have the medium of Facebook without the infusion of politics. One only has to imagine an orchestration layer on top of email and contact lists, with a publicly visible, open-source algorithm driving ranking. I don't think that world can exist now, but it can be a useful mental model to disprove the assumption that the communication medium Facebook provides is inherently coupled with centralized editorial control.


" It’s a nonsensical idea that they could avoid involvement in political issues given the definition of politics."

It's sensical.

The issues are objectivity and established protocols, independent of executive tinkering or favour.

If some government agency, or industry body, 'very intelligently' and hopefully in collaboration with all the necessary parties (highly doubtful but possible), established a set of 'regulatory protocols for social media' - then FB, Twitter could wash their hands of this.

They set up internal controls, a segregated unit with maybe an 'independent oversight board' based on those policies. When there are special cases, they are flagged to the regulatory body who can maybe provide guidance and a ruling.

That way, instead of Zuck coming out and saying 'this is why it was allowed or not' - you have the head of some FB body making the statement, backed by the oversight board, and why it conforms with the specific, established regulatory guidance.

That said, it would also probably be within Twitter/FB rights to remove anything else they wanted to remove, after all, they are a private company. If we wanted to extend their status by virtue of their influence as 'partial public good' - then maybe they could only remove other content within certain parameters (i.e. advertisers don't want it or whatever).

FYI - the Trump Tweet itself was terrible ("When the looting starts, the shooting starts") and is easily within any policy grounds to take down. However, the actual FB post by Trump is not just that: it's that sentence plus an indication that the statement is meant to imply that looting causes violence, which is actually very reasonably above the bar. For example, if Trump had made the FB post alone, without having made the previous Tweet, there'd be little said about it. It's not controversial to say "When rioting starts, violence starts because things get out of control" - i.e. a contextualized statement.

I'm really perplexed at why Mark and Jack Dorsey have any inclination to stand before the world and justify their willy-nilly decisions to do this or that on a case-by-case basis. It makes them seem like they're making it up as they go along, and it's just inviting controversy. It's a no-win.

Get some really cold, boring, intelligent, articulate administrator with a legal background, someone with the stature and credibility of a judge ... and let them do the talking - the world will appreciate it.

During Covid - all over the world - we're seeing doctors and bureaucrats take the microphone. It's very refreshing to see people who know what they are talking about speak in detail. They don't seem political. They are boring to the point of dryness, anti-populist; they speak like real adults.

Then we can argue about regulations instead of personal politics.


For those who aren't familiar with some of the less savory parts of US history, the phrase "when the looting starts..." has rather ignoble origins. Specifically, Miami Police Chief Walter E. Headley uttered it in 1967 in support of white supremacy and in response to an outbreak of violent crime [0]. It's the callback to the white supremacists of the 1960s that makes the statement a racist dog whistle. That's the context that isn't, and can't be, cold or boring.

I'm sure neither Mark Zuckerberg nor Jack Dorsey wants to do any of this, if only for the simple economic reason that paying for moderation and fact-checkers cuts into the company's profit margin - and they've been mostly hands-off with content moderation thus far.

That they're having this discussion should then be taken as a signal that things are different, and hosting user-generated content is no longer the free-for-all it was in 2009, for better or worse.

[0] https://en.wikipedia.org/wiki/When_the_looting_starts,_the_s...


> "I mentioned this in a previous thread, but we absolutely do not want Mark Zuckerberg or his employees to make decisions on what is or isn’t newsworthy. They have far too much power, and not enough internal controls, to be involved in politics."

Absolutely agree. Which is why I also agree with Zuckerberg's decision to let the post stand. I also believe it to be more important to teach people to assess a danger for themselves, instead of trying to protect everyone from danger.

> "And yes, you can have an independent moderator system - but they’ll end up in the same position as Facebook as well, with no accountability to the public."

In its present form, yes. But what if we force social media towards interoperability? As in, you can use Facebook, but you can befriend people from Mastodon and Twitter with it, too, because a law says that this all must be interoperable? In such a case, a neutral, independent, maybe state-enforced moderation would become possible.


First of all, it is a mistake to couch the issue as "newsworthiness." It's not about whether or not speech is newsworthy. It's about whether or not to allow a post to be openly displayed and disseminated - even to non-subscribers of the poster - without a disclaimer. The current argument is not whether speech should be quashed; it's about whether it should be annotated, disseminated and/or amplified.

Even if the question were around newsworthiness - which, I emphasize, it isn't -- publishers make decisions all the time as to what is newsworthy and what isn't. Someone has to decide what goes on the front page or how to order stories, whether it be in the form of a newspaper, a TV show, or what's above the fold on a website.

The golden rule when it comes to private property -- which Facebook is -- is "my house, my rules." Mark Zuckerberg, with a complicit Facebook board, has made it abundantly clear whose house it is. He can -- and has -- set whatever rules he wants, subject to the law. At this point, he has made it clear that his rules are to maximize dissemination of the President's communication even at the expense of other legitimate concerns. And it is legitimate to take a contrary position on what those rules should be.


Also, stripped of context, a video of someone getting murdered (by a cop or other) would probably get taken down I think? Not even for incitement, just for the violence in the video itself.


Good point, didn't that video lead to protests? Which gave an opportunity for looting. If the goal is peace at all costs, then the goal is ignorance.


Journalists have balanced the need of the public to know something against the harm for decades. They sometimes get it right, they sometimes get it wrong.

Facebook and the like try to reproduce that balancing act as an algorithm.


> They have far too much power, and not enough internal controls, to be involved in politics.

How does this compare with television? Newspapers? Any form of media? Facebook isn't special. Maybe you're right, but if you are, the problem is bigger than one tech firm. It's bigger than all tech firms.


The problem is exactly something Trump complained about.

Big tech companies are using a law made to protect ISPs to avoid getting sued for whatever people say on their platforms, including stuff their own employees say.

Other forms of media can be sued.

This is a huge difference.


The OP is complaining about editorial control and bias but traditional media companies have full editorial control and biases.

Whether or not someone can be sued for content created by other people seems completely irrelevant to the discussion. In either case, you can always sue the content author.


FB already is making newsworthiness decisions all the time with their recommendation engine.

Being scored low by the algorithm is effectively the same as being censored.


I agree. It's an impossibly thin and weird line that Facebook has to walk; there are no overarching, simple policies which, when applied uniformly, would produce an outcome that everyone finds desirable.

I do think that a non-unilateral policy of allowing more leeway for the accounts of government officials makes sense. Yes, it's glorifying violence, and I understand Twitter's decision to hide the tweet, but we also have a right to be apprised of these borderline-insane things our leaders are saying, so we can make more informed decisions when it comes time to vote. Silencing Trump is not productive; more people need to see the things he's saying and, hopefully, make the right decision (for them) when it comes time to re-elect him or not.

I think, to be frank, it's a situation similar to the old gun-control argument that when guns are outlawed, only outlaws have guns. Whether you agree with that statement or not, when Twitter censors Trump, the only people who won't hear him are those on the Left who are participating in the discussion on the social platforms where he was censored. There are too many communication channels nowadays, and most right-wing citizens already feel so suppressed on Twitter/Facebook that they additionally monitor other platforms, where his message is shared loud and clear, with no censorship.

Additionally, I don't like the symbolism behind the supposition that these private companies are above the government. I may not like the government right now, but Twitter and Facebook should not have the right to censor government officials. You may support Twitter's actions today, but the situation could easily be flipped. The left is more technically inclined today, and most tech leaders are leftists. When the left applies pressure to Twitter, it sticks. It won't always be like this, and we need to set the precedent that at least some tactically useful form of free speech still applies on these platforms, even though they don't have a legal responsibility to uphold it.


I can't remember who suggested it but it would be awesome if we could have plug and play algorithms for our Facebook/twitter/youtube/etc content.

That way we could pick a "bring me top stories from all sides" algorithm. Or, when looking into a particular topic, switch to a different matching algorithm.

I suspect we'd be back to one algorithm that gets 5 stars and 90% of people use, but it'd be interesting to see and would remove the need for Facebook to curate content.
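The plug-and-play idea above can be sketched as a simple registry of interchangeable ranking functions the user chooses between. This is a minimal illustration, not any platform's actual architecture; all names here (`build_feed`, the algorithm names, the post fields) are hypothetical:

```python
from typing import Callable, Dict, List

# A "ranking algorithm" is just a function that scores a post;
# the feed is the posts sorted by that score, highest first.
RankingAlgorithm = Callable[[Dict], float]

def chronological(post: Dict) -> float:
    # Newest first: the score is simply the post's timestamp.
    return post["timestamp"]

def engagement(post: Dict) -> float:
    # Engagement-optimized: weight comments more heavily than likes.
    return post["likes"] + 2 * post["comments"]

# The user-selectable registry; third parties could add entries here.
ALGORITHMS: Dict[str, RankingAlgorithm] = {
    "chronological": chronological,
    "engagement": engagement,
}

def build_feed(posts: List[Dict], algorithm: str) -> List[Dict]:
    """Rank posts with whichever algorithm the user has plugged in."""
    score = ALGORITHMS[algorithm]
    return sorted(posts, key=score, reverse=True)
```

The point of the design is that the platform only hosts content and exposes the scoring hook; which editorial lens to apply becomes the user's choice rather than the company's.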


That might fix things for you. But it doesn't fix things for the nation, or the world.


What I don't understand about all of this because I'm simply not in the know, so hopefully somebody can explain it in more detail, is this: Facebook regularly monitors posts for inappropriate content such as this. This seems more of an active "we want to leave this up" than a "we want to take this down" type of post, correct?

I have trouble forming an opinion on whether or not I agree with Facebook or with Twitter on the whole matter because I'm not well-versed with their previous T&C, and what would typically happen for a tweet like this from someone who isn't POTUS. If someone could provide some resources, that would be great and useful to myself and hopefully others who want to get a fuller understanding of this issue.


From Zuckerberg: "I disagree strongly with how the President spoke about this, but I believe people should be able to see this for themselves, because ultimately accountability for those in positions of power can only happen when their speech is scrutinized out in the open." - https://www.cnbc.com/2020/06/01/facebook-staff-angry--zucker...

It's perfectly reasonable to keep up things that are a matter of public record like this. I find people walking out over their employer not censoring enough childish, and indicative of a generation that hasn't learned the dangers of censorship - caring more about moral grandstanding than freedom of the press or information otherwise being accessible. It comes off as especially pretentious in the middle of all the protests over the very serious and real issues with the police, contrasted with Facebook employees with cushy jobs being mad their boss didn't curate the platform to their liking.


> "I disagree strongly with how the President spoke about this, but I believe people should be able to see this for themselves, because ultimately accountability for those in positions of power can only happen when their speech is scrutinized out in the open."

That’s pretty fair. But will that set a precedent for violent posts?


I can say that I see violent posts on Facebook every day, often ones inciting violence. I still see absolutely no reason to take this stuff down. There is a comment section people can use to share their thoughts and views, and to me that is where posts should be getting called out for any issues a viewer has with them. Putting a sensitive-content warning on something is acceptable, but only in a way that passes no moral judgment on what is behind the warning.

To remove these types of posts due to a moral opinion is a slippery slope to much more censorship. If Facebook starts actively taking posts down and writing its own reality, it should not be protected under Section 230.


From the president? Maybe, but I'd still think the value of the public record is more important here. From regular users it would still be disallowed.

Instead of this murky de-facto system of different rules for public officials and regular users, maybe we need to have it codified. Accounts owned by public officials have their activity publicly recorded (with edit/deletion attempt history, etc) and with something in the UI indicating it's such a type of account, and a link to a disclaimer visible near all the posts explaining that the post might have TOS violations and doesn't necessarily serve as an example of acceptable behavior for other users. Such protection would go away when their term is over and they'd be able to be banned for past posts (but everything remaining visible), maybe reopened if they're in a position again later on.
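The codified rules proposed above (posts never truly deleted, a visible edit/deletion history, a permanent disclaimer) amount to a small data model. A minimal sketch, assuming hypothetical names throughout - nothing here reflects any platform's real API:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model for a "public official" account: every edit or
# deletion attempt is appended to a publicly visible audit trail,
# and the post itself remains visible regardless.

@dataclass
class AuditEvent:
    kind: str            # "edit" or "delete_attempt"
    previous_text: str   # what the post said before this action

@dataclass
class OfficialPost:
    text: str
    tos_disclaimer: bool = True   # always shown next to officials' posts
    history: List[AuditEvent] = field(default_factory=list)

    def edit(self, new_text: str) -> None:
        # Record the old text before replacing it.
        self.history.append(AuditEvent("edit", self.text))
        self.text = new_text

    def delete(self) -> None:
        # Deletion is logged, but the post stays publicly visible.
        self.history.append(AuditEvent("delete_attempt", self.text))
```

The key property is that `delete` records intent without removing content: the public record the commenter wants is preserved by construction, and ordinary ban rules could resume once the official's term ends.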


I think this is a sensible and easily enforceable solution. Facebook and Twitter are necessities today and oftentimes (for better or for much, much worse) are where history is being written. Using these platforms as a geopolitically important person should be subject to different rules than using them as me, someone with like 400 friends from the decade or so I've been on the platform, 90% of whom I haven't spoken to in years.


I wish twitter hadn't opened this whole can of worms. I thought their previous policy of basically saying 'yes, different rules do apply to the president, so we're just not going to delete his posts' was pretty good.


But where does the contrary position go, if you follow it through to the end? Say you delete the president's post, so he goes and says the same thing at a press conference - do you now delete all the posts from news networks carrying his remarks? Deplatforming the President of the United States is just not physically possible.


There's a middle option, and it's the one Twitter has taken: Allow the post to remain, but put a disclaimer on it.

You could even take the extra step to allow the original posts on the determined-to-be-bad topic to exist, but not allow sharing, commenting on, or otherwise interacting with and spreading the post.


I'm not even sure the T&C matters _that_ much. Facebook is a private platform; they can likely take down anything they like.

This is a "moral" judgement call really, about the content of the post. Twitter/Jack decided one way, FB/Mark the other (was cross posted on both platforms).


I really dislike the legalistic versus moral distinction. Emphasis on legalistic rather than legal. Legal and moral are obviously not the same. Legalistic seems to just relate to making and enforcing rules, which is a super important way to deal with moral issues when you're a platform or arbiter. Yes, absolutely take the awful stuff off facebook, but make rules and tests to be as clear as you can about it.


I think Trump should use the White House homepage for his communications. And those platforms should be at least as liable for what he writes on Twitter and Facebook as if Trump had written the same thing in an opinion piece in a newspaper. I think AWS and Azure are true "platforms," but Facebook and Twitter are social media. They are more similar to other media than to technical platforms. I think the buck should stop with Zuckerberg for everything expressed on his site.

If this were the case, it might very well be that social media as we know it today would no longer be viable. It might very well be better, however. I long for the time when the blogosphere was where people expressed themselves online. It was more mature, slower, and more thoughtful.


Isn't this what section 230 prevents? It protects Facebook and Twitter from this specifically. It's also what Trump was pushing back on.


What's confusing to me is that killing 230 would make social media liable for the content published by its users. It seems like it would require them to control and filter content more, not less.


They aren't wanting to kill 230, they want to take it away from companies who choose to alter users' posts.

Their argument is that if you start censoring and editing your users, you are abusing the protections given to you by 230; you are no longer eligible, and you must vet EVERY post, not just the ones you selected.

Other companies who do not edit/censor their users selectively continue to have protections from 230.

Not picking a side btw, just trying to clarify.


I don't know the details of that since I don't live in the US. But I googled and read this

> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider

I think there is a distinction between where this should apply and where it should not, and that is whether the service acts "editorially." I see editorial as making choices about what the user sees and doesn't see. I think all social media makes such choices today. They "recommend" things to you, and thus I think they are editorial. Compare this with a static blog on AWS S3: all content comes from the writer, and Amazon does not make any editorial calls at all. I think there is a distinction to be made here.


I have been thinking: there's a real opportunity to disrupt FaceBook, from the political side. It could gain traction in a way, and for a reason, among large groups of people who otherwise won't give a damn ('FaceBook competitor', yawn).

I think a billboard and an advertising blitz that really put it in people's faces, especially in the large blue cities that red staters hate so much, something like: Did you know that, when you're on FaceBook, you're giving money to the ad platform that elected Donald Trump? (*asterisk to an Atlantic article here with a quote to that effect) Join SocialBook to learn more.

Then you could explain that there's a kind of default understanding that FaceBook is 'neutral,' but it's probably more accurate to think of FaceBook as an analogue, a cousin even, to Fox News. Its most popular site for engagement is Breitbart. As people have noted online, by various metrics, it favors Trump. And whether you agree or not, we are under no moral compulsion at all - none whatsoever - to support FaceBook. Not using it is, at worst, morally neutral.

So if you oppose FaceBook's general effect on our society, why not do something that costs you no money and requires no real lifestyle change, but can radically improve your society's politics? Join SocialBook, get your friends to as well, and, as a principled stand, delete FaceBook. If nothing else, you'll start exerting some powerful pressure on FaceBook to change its policies in a direction it usually ignores, or else. And at best, you could help launch a powerful, healthy competitor.

I should point out also: I don't see anything morally wrong in advocating for this. I do feel a duty to support people working towards building in general, and the economy in general. But do I have a specific moral duty to support FaceBook? Of course not. I think there are people here who object to people making commercial decisions for political reasons - but there's no moral imperative supporting this. For what it's worth, I don't see other companies taking this angle (say, Gab) as being in the wrong for that reason, either.


I think if you feel strongly about X, you shouldn't mod discussions of X. It makes the discussion reflect your view instead of the views of participants. You should delegate modding to someone who doesn't feel strongly about X. It might be hard to find such a person in the US now, but with a global workforce that's less of a problem.


It's disturbing how things are turned upside down. The GAFA companies are already behaving as if they are above the state. You can love or hate Trump; he is elected, and Facebook and Twitter are not. As head of state he represents what we call in French the "monopoly of legitimate violence." I find it dangerous how the GAFA companies consider themselves no longer bound by that.


Absolutely, I personally hate Trump and Facebook but he is still the president.

The idea that censoring the president is even up for discussion is lunacy.


Can we please stop voting for, and start to flag, articles from theverge.com?

The Verge is a clickbait site that:

- constantly replaces original sources (earlier today there was a post that was essentially a wrapper around a Microsoft employee's blog)

- mocked the tech industry for practicing social distancing during the Covid-19 outbreak.

- As another poster already noted, consistently treats complex issues as binary 'good vs evil' situations with no room for nuance.


Twitter, Facebook, etc. have never gotten over Trump winning the election over their preferred candidate. All the censoring (and cancelling) of exclusively right-wing voices since then is purely designed to get a Democrat into the White House.



