Generally, the idea of ads being placed "next to" or "on" hateful content doesn't really upset me. If it's a list of people you follow interspersed with ads, and you follow crazy people, I don't think it's surprising that would happen. It's also unclear why advertisers should care, given that it's personalized. Maybe you could argue that if an account is posting hateful stuff, X shouldn't put any ads in that account's list of tweets when you go directly to it?
That said, the Media Matters article starts with a quote from X CEO Yaccarino (https://www.mediamatters.org/twitter/x-placing-ads-amazon-nb...) specifically saying they put controls in place to prevent it. So it seems fair enough for a journalist to check whether those controls work. Clearly, they don't in this case. Or the controls are for another case.
I do think Media Matters could have been clearer about its methodology, since I agree with X that creating an account that only follows hate accounts and brand accounts, then checking whether ads show up, isn't really the scenario the article implies. I also don't see how serving ads near hateful content is really more objectionable than hosting the hateful content in the first place. The connection to ads seems more like a tactic to force X into action, which is more activism than journalism.
However, it's crazy to try to sue them for this. It's not illegal and it is mostly accurate! It's especially unacceptable that these virtue-signaling Republican AGs are trying to capitalize on it. Gross!
The entire way Twitter/X convinces advertisers to advertise on the site is by promising their ads will not appear next to vile content.
Because otherwise, you're right, no major brand would risk it.
> Clearly, they don't in this case. Or the controls are for another case.
I mean, the lawsuit (and Yaccarino's preceding tweets) alleges that the controls do indeed work, and that Media Matters effectively committed a denial-of-service attack of sorts to forcibly cause the ads to appear.
It's not a denial-of-service attack for them to scroll more than a typical user, and the lawsuit doesn't actually allege that it is. If X is correct, Media Matters certainly went to some effort to produce the screenshots, and it's somewhat of a synthetic test because the account only follows brand accounts and hate accounts – but that's the first way you'd test such a hypothesis, and X did in fact serve them the ads next to that content.