X seems to be arguing that the report was defamatory because only Media Matters saw ads from four companies next to specific pro-Hitler posts, while stopping short of arguing that no company's ads ever appear next to antisemitic content in general.
Media Matters set up a test account that showed it was possible for X's algorithm to pair ads with objectionable content. Given that, how can X claim that "no company has its ads run next to antisemitic content in general" when Media Matters has shown that it is possible? Unless Media Matters "photoshopped" or otherwise manufactured the results, but that isn't what the filing claims. It claims that MM set up a few small accounts following only a handful of other accounts (fringe-content producers and brand advertisers) and then scrolled through the feed until they found something bad.
The line in the filing mentioning that MM's tests used existing accounts to get past new-member restrictions shows how fragile X's argument is. Most real-life users have existing accounts, so that's the experience you want to check, not the highly constrained environment X puts new subscribers into because it doesn't trust them yet.
Customers like Apple, Comcast, NBCUniversal, and IBM are sophisticated ad buyers that wouldn't let a single story change their buying strategies without additional information or confirmation from X. If they made the choice to leave X, I'd bet the Media Matters story was the last straw, not the first one. And it's quite possible that the Media Matters story was the result, rather than the cause, of those companies' decision to leave the platform in the first place.
While X is trying to spin this as Media Matters "did bad things" to convince Apple, IBM, Comcast, and NBCUniversal to stop advertising with X, it is far more likely that X's highly volatile and bombastic behavior over the last year had far more to do with that result than Media Matters' article did.
Good points, but do you think that if MM had said "it took us 10,000 clicks to see one ad, we had to follow 30 accounts generating objectionable content, and we had to follow the same brands shown in the ads," it would make MM's claims significantly different?
It seems to me that defamation here hinges on exactly what Media Matters said their little experiment shows about X: if they indicated that they were capturing the horrific state of the general user experience with respect to ads and offensive posts, then this was a really malicious lie.
Otherwise, they were just using X in a strange manner, which is not defamatory in itself.
> But that [the claim that "brands are now 'protected from the risk of being next to' potentially toxic content."] certainly isn’t the case for at least five major brands: We recently found ads for Apple, Bravo, Oracle, Xfinity, and IBM next to posts that tout Hitler and his Nazi Party on X. Here they are: <screenshots>
Nothing is said about how common/rare this occurrence is nor whether anything specific needs to be done to observe such a result.
And if you are one of those advertisers, how "common" a problem does this have to be to make you think you don't want to advertise there anymore? Even X's filing doesn't claim this "can't happen", just that it doesn't happen frequently.
For CMOs making major ad buys for carefully curated marquee brands like Apple, IBM, et al., I suspect that the only concrete number they want to be assured of by their advertising platform is "0".
0 is impossible and advertisers realize that, which is why large user-generated content companies set up a Trust and Safety team to rapidly respond to these issues to placate advertisers.
Too bad Elon fired Twitter/X's Trust and Safety team!
What could a "trust and safety" team even do when the site owner/CEO spends his time directly responding to racist diatribes with positive encouragement [0] ? Like you can't pass off the nastiness as just "some users" or otherwise exceptional when it's being directly nurtured by the forum admin. So the resulting question is more like when to stop advertising somewhere the admin seems intent on making into Stormfront Lite?
[0] https://nitter.net/elonmusk/status/1724908287471272299 . I'm including this link even though it's been referenced to death, because reading primary sources is important - especially with people becoming desensitized to claims of racism from a media landscape that often takes things out of context and heavily paraphrases to blow them out of proportion, which is decidedly not what happened here.
Not in the article - that article was written by the defendants. The lawsuit, following the example of others, explains how they poked and prodded in very unnatural ways, trying to contrive a circumstance in which ads would be shown next to certain posts - and they did find one! X contends no actual users were or ever would be in that same circumstance, so no brand damage was actually done.
It may hinge on exactly how strong the 'protection' Yaccarino alluded to is inferred to be - whether it's reasonable to infer she meant that content moderation under Musk was now perfected and 100% hate-proof, at least with respect to ads.
Sure, but the issue is that defamation requires a false statement of fact. Elon may not be happy that the article is missing context, but that's not the same thing as claiming something false.
> X contends no actual users were or ever would be in this same circumstance, so no brand damage was actually done.
If that's what they want to contend then this lawsuit is probably not the right vehicle. They'd probably be better off making that argument to their advertisers.
The contention is close, but not exactly that - it would be that when MM said 'Yaccarino was wrong, here's proof', this was defamatory because 'protected' was never meant to imply 100% perfect protection, and therefore claims that her statement was disproven - with their contrived method - are false and malicious.
It may well be a weak case. MM are certainly slimy political operators, but they seem to have mostly avoided any direct statements which are easily, unambiguously provably false.
> it would be that when MM said 'Yaccarino was wrong, here's proof', this was defamatory because 'protected' was never meant to imply 100% perfect protection, and therefore claims that her statement was disproven - with their contrived method - are false and malicious.
I'm honestly not sure how that would be legally analyzed. It doesn't really feel like a very convincing argument, but I don't think I can articulate exactly why.
In any case, it doesn't seem that particular line of argument is present in the complaint, so it's pretty much just a curiosity.
It's a lot less relevant to find some way to "trick" the system into displaying ads next to content like this if users wouldn't normally see it, and it could easily be defamatory if you went on to claim that this was therefore routine or a serious problem.
For example, imagine that it takes exploits, URL editing, or something similar to do it. The question here is really how much effort you need to put in to get it to happen.
> it could easily be defamatory if you went on to claim that this was therefore routine or a serious problem.
Claiming that the problem is "routine" might be problematic, but I think the problem being "serious" may arguably be non-defamatory. A problem being "routine" implies there's a pattern, which can potentially be proven/disproven, but whether a problem is "serious" seems much more opinion-based. One advertiser may not care that their ads have a minuscule chance of showing up next to objectionable content, and another one may care very much that there's a non-zero chance.