So obviously no Wikileaks content, and nothing from “sources inside the $agency with access to information” (but with no clearance to leak), can be published on Twitter any more, right? No guessing at who poisoned Navalny, etc... Got it!
Exactly. I guess banning content on a private platform by its private owner is totally okay, at least legally. Banning content with double standards, though, makes the platform editorial, which means people should be able to sue the company left and right.
And hacked in what way? Didn't the repair shop owner take ownership of the computer after repeatedly asking for payment but not getting it? Didn't the owner give the hard drive first to FBI, then to a few media, and then to Giuliani?
As for fact checkers, Twitter didn't really fact check the outlets that run reports saying "anonymous source says", right? Twitter didn't really fact check that Jake Tapper contradicted himself now versus 2016 on exactly the same "fine people" hoax, right? Twitter didn't really fact check The 1619 Project, which teaches us to hate America with a long list of inaccuracies (if not outright lies), or critical race theory, which claims that all white people are born racists and that Asian people are complicit racists because they bought into values like working hard or being good at STEM, right? And why aren't leaked tax records "hacked"? (FWIW, I'm only arguing the definition of "hacked", not whether it's good or bad to reveal tax records.)
Oh wait, I guess I'm not exactly following the righteous narratives here, as all the morally superior mainstream media are doing. So this makes me a what? A bigot? A Nazi? A brown but really white supremacist? A racist?
> and I think Twitter shouldn't be protected by 230 for their actions
I don't see how. Their actions are explicitly what are protected:
> Civil liability - No provider or user of an interactive computer service shall be held liable on account of
> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
> What do the words “in good faith” mean in that context?
Essentially nothing, which is why a few of the proposed bills to punish tech companies for removing content have tried to turn that into a clause with teeth.
Basically, if you took Twitter's actions and did the exact opposite, that would also be "in good faith". They just don't give a fuck, since they assume they won't suffer consequences.
Hiding negative articles about Biden that wouldn't have been hidden if they were about Trump falls under which of those categories? Section 230 lets you filter out unrelated disturbing content like porn; it doesn't let you inject political bias.
> Section 230 lets you filter out unrelated disturbing content like porn, it doesn't let you inject political bias.
It's important to remember that it's the First Amendment that protects Twitter's ability to filter anything it wants on its own platform, but it's the "material that the provider or user considers...otherwise objectionable" part of Section 230 that maintains the liability shield for other content that remains on their platform. That covers removal of essentially any content.
At a certain point moderation becomes speech. Literally, these words are not my words; they are Merriam-Webster's. I just chose which ones to include in my comment. In so doing I convey a particular message.
It's good to hear we can't publish any leaked information about Trump's tax returns, per the NY Times, a story Twitter intentionally allowed to run at max distribution, along with dozens of prominent anti-Trump stories over the last four years that ended up being baseless, which the media happily concocted.
Everyone here knows exactly what's going on and it's rotten as can be.
Unless you have a factual error you'd like to report, citing the 1619 Project is a tell that you don't like its conclusions, and anything you don't like is "fake news".
Yes I am aware. Bret Stephens was hired to write headlines like this, to appeal to the "fair and balanced" crowd (which will always fail, nothing will ever be enough).
The fact that NYT didn't post this story says more about their bias than the legitimacy of the story itself. You know full well they would post a story with this level of verifiability if it was damaging for Trump.
The only practical way for the Post to verify the authenticity of the emails is via the DKIM signature. It's also trivial for them to share the emails and their headers as opposed to just sharing screenshots of them, if they're interested in making it easy to verify the emails' authenticity.
I'd say there's a far better chance the photos and videos are real than the emails. Given Biden's political connections I think getting videos of his kid smoking crack would be considered very valuable compared to what it would cost to have some party pal take the video.
Think about how well it works for a scenario like this. Using real videos to legitimize fake emails is an easy win, especially if you don't release either. No one can disprove the legitimacy of the emails, and if Biden says they're total BS you release the real videos to add more legitimacy and give the whole thing another news cycle. Then you claim victory and go silent.
The photos are legit, obviously, but the emails are as yet unauthenticated. All we got to see was a PDF printout. Releasing the full emails, with DKIM headers, would solve this problem instantly.
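To make the DKIM point concrete: a signed email carries a `DKIM-Signature` header whose tags name the signing domain (`d=`) and DNS selector (`s=`), which together tell a verifier where to fetch the public key. Full verification additionally requires the DNS lookup and an RSA signature check, but just inspecting the tags is enough to see why a PDF printout can't be authenticated. A minimal sketch using Python's standard library, with an entirely made-up message and illustrative (not real) signature values:

```python
import email
from email import policy

# Hypothetical raw message; the bh= and b= values below are illustrative
# placeholders, not a real body hash or signature.
raw = b"""DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=selector1;
 h=from:to:subject:date; bh=Zm9vYmFyCg==; b=QUJDREVGCg==
From: alice@example.com
To: bob@example.com
Subject: hello
Date: Mon, 1 Jan 2018 00:00:00 +0000

body text
"""

msg = email.message_from_bytes(raw, policy=policy.default)
sig = str(msg["DKIM-Signature"])

# The header is a series of tag=value pairs separated by semicolons.
tags = dict(
    part.strip().split("=", 1)
    for part in sig.split(";")
    if "=" in part
)

# d= is the signing domain and s= the selector: the verifying public key
# is published in DNS at <s>._domainkey.<d>
# (here, selector1._domainkey.example.com).
print(tags["d"], tags["s"])
```

Anyone with the raw `.eml` files could run a full check with an off-the-shelf DKIM verifier; screenshots and PDFs strip exactly the headers that make this possible.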
I think that's their point: zero verification and "anonymous sources" is all that's needed against Trump, but for everyone else there's at least some due diligence.
> Which publication do you think has better journalistic integrity?
Should that matter if the criteria for Twitter is whether something contains hacked/leaked materials or PII? Big news often contain the former and most articles contain the latter. Twitter would lose a lot of journalists if they went down that route. Might be good for the sanity of the users though.
I actually do think it matters. I also think there should be a distinction between things leaked by a whistleblower and things leaked by a thief or a patsy. However, since whistleblowers are often putting themselves at significant risk, we end up relying on the integrity of the journalists / publications in terms of taking their word for the legitimacy of the source.
So yeah. A publication that makes a significant effort to vet their source and the story deserves the benefit of the doubt while a publication that acts as a click-bait tabloid without any kind of investigative effort doesn't.
That said, I think Twitter is grasping a bit and trying to use a policy that's right or wrong when the reality is more subjective.
The other thing I don't understand is how revisiting Section 230 of the CDA and turning Twitter into a publisher is going to improve things. If the current publishers don't suffer any repercussions for anything, how will it be different if Twitter is considered the publisher? They're looking for a whipping boy IMO.
I very much do agree with you with regards to giving more trust to companies (or individual journalists) with a good track record. My issue is mostly that Twitter isn't using that as a policy (at least not as a stated policy) but a more vague thing like "can't contain information gained by unauthorized access" and then applies that selectively.
I kind of understand why they don't, because that would likely be hard to quantify and you'd have to decide which basket a company goes into, which would act as a gatekeeper (if you have a poor track record, or none at all, you can't publish visibly), and they'd likely have to constantly monitor for changes. Not something that scales well or can be automated, and definitely something where they'd get roasted each time the NYT commits a faux pas.
> If the current publishers don't suffer any repercussions for anything, how will it be different if Twitter is considered the publisher?
True. In a post-fact world, consequences for publishers reporting falsehoods might need to come back on the table. On the one hand that's a problem because it stifles reporting; on the other hand they have been playing very loose, and saying "oops, sorry, we'll do better next time, promise" every time doesn't work.
> In a post-fact world, consequences for publishers reporting falsehoods might need to come back on the table. On the one hand that's a problem because it stifles reporting, on the other hand they have been playing very loose and saying "oops, sorry, we'll do better next time, promise" every time doesn't work.
That one flip-flops in my head all the time. Maybe the idea of reduced liability or a higher bar for proving libel / slander in a civil suit against a publisher might be an option, but with _some_ exposure to liability. There has to be a threshold where high quality publishers could bear the costs of honest mistakes, but bad actors would be overwhelmed financially.