AI accounts are the final nail in the coffin for the user-generated content era of the web. RIP UGC, 1994 - 2024.
We started with Usenet, message boards, Geocities and Blogspot. Then all the traffic migrated to aggregators like Facebook, Reddit and Instagram which prioritize engagement-heavy sensationalism.
Real people used to post all that content, but it's been clear for a while that sharecropping on somebody else's platform benefits them, not you. It used to be possible to build up a following and sell some T-shirts or get a few Patreon subs. But now it's simply not worth the hassle when every piece of original content is getting sucked into an AI training model and Instagram is openly telling users to pay to 'promote' their posts in the algorithm.
The bottom is falling out of the so-called bottomless pit of doomscroll machines, so instead of offering a better product, they're propping up its corpse with a hollow, computationally expensive facsimile of the real thing.
The solution is obvious: move back to invite-only message boards and webrings. Only real-world humans are allowed to enter the playground. Maybe we could even resurrect the PGP key signing Web of Trust?
If it were just starting out, the friction of joining would cause many people not to bother.
But... if the group were already large enough to have network effects, that friction would actually make it more appealing, adding a feeling of exclusivity and status to being a member.
> "There is confusion," Meta spokesperson Liz Sweeney told CNN in an email. "The recent Financial Times article was about our vision for AI characters existing on our platforms over time, not announcing any new product." Sweeney said the accounts were "part of an early experiment we did with AI characters."
Don't worry, they still intend to fill their social networks with fake robot people, just not those fake robot people. This is definitely what users want and not a desperate scramble to justify Llama's R&D costs by cramming it into every text-shaped hole in their products.
I don't understand why Facebook would want to debase the value of their content by mixing in obvious AI slop. It's a sharp contrast to their past strategy of keeping the network as 'authentic' (their words) as possible.
> AUTHENTICITY
> We want to make sure the content people see is authentic. We believe that authenticity creates a better environment for sharing, and that’s why we don’t want people using our services to misrepresent who they are or what they’re doing.
I'd love to hear how this happened, and what kind of rationale they could possibly have for doing it. It's very, very bizarre from an outsider's point of view.
With no sarcasm, I truly cannot understand Facebook's end game here. Why would anyone choose to spend their time interacting with fake people pushing a corporate party line to sell you something?
Plenty of people are already tuning out of the current Facebook. How is stuffing it full of fake people going to help? I can just about understand how this could goose certain short-term metrics, but how would it actually make money? Fake people pushing ads on us is already a thing; that field has already been harvested, it's not some fresh new frontier. Ads already fake all of this: fake testimonials, fake reviews, fake websites, fake profiles posting fake statuses for ulterior reasons. I don't see how openly hooking up LLM tech to this helps them. Covertly, I can at least cynically guess at some motives, but this current approach I don't get at all.
Short of, "we've got a huge stock bump from the AI bubble, but in reality we can't actually find a way it can help us proportionally to the cost, and this is the best Hail Mary anyone has come up with to prevent an accurate revaluation of our stock."
Meta (like Microsoft, Google, etc.) has a monopoly mindset and is past thinking about what the consumer wants; it can only think about what it wants. So it wasn't a case of 'would users want to follow our fake accounts' content?', but rather 'AI accounts will give us even more control over content, and we could use them for sponsored content and make even more money'.
> Why would anyone choose to spend their time interacting with fake people, pushing a corporate party line to sell you something?
Really, functionally, what is the difference between any of the following other than in degree:
- Account that posts nothing but AI-generated content.
- Bot-controlled account that spams messages at opportune times or reacts to keywords detected in comments.
- Account that belongs to an influencer and posts nothing but influencer content.
- Account that does nothing but trolling/boosting with political memes.
- Account that belongs to a company and only posts memes and other advertisement-type posts.
- Any account whose activity is aimed at getting likes, karma, etc. to gain privileges on the app or simply increase a displayed score.
All of the above are varying degrees of fake/non-authentic, and the non AI ones have been around a long, long time. These are everywhere and create engagement on multiple platforms despite people knowing about them and complaining about them. So I can understand why someone may think an AI model can possibly capture that essence and increase/lock-in engagement further. I'm pessimistic enough to think it will work if done the right way.
I was still grasping at how to phrase it, but that is definitely the sort of thing I was trying to get at: not that this is some massive change in the rules of engagement (war analogy selected quite deliberately at this point), but rather an awful lot of money poured into plowing a field that is probably already overplowed.
I wonder if this is the social media equivalent of The Sims? Or a soap opera. Log in to see if Jimmy works up the courage to ask out Rhonda. Will Ashley find out about the secret William told you? It's like a Westworld where you're "safe" to interact socially in any way you feel, because they don't feel anything. As long as you click a few ads a day to keep the lights on.
People like you and me found the idea of injecting personalized ads between posts by real people disturbing. It worked fantastically for the company.
Now we think AI users subtly pushing for products such as delicious Coca Cola (TM) - refresh your day with a cool glass of Coca Cola (TM) - is a terrible idea. I expect it will again go fantastically for the company.
Almost certainly it is not doing well. Has anyone heard anyone talking about the metaverse in real life in the last year? It seems like one of those Covid-era fever dreams.
Makes sense. Maybe this was a "see-what-happens" experiment that caught more attention than they expected?
I can't imagine it came from Instagram having a shortage of content.
e: Like, the quality and relevance of the content is the main value proposition to users of an old-school social network like Instagram. You can't really mess with that in service of some other goal without undermining the product's raison d'être.
Generative AI seems like a great opportunity for the metaverse instead of further enshittifying their existing networks. Users could utter entire universes into existence and then explore them, this could even work for the elderly reliving some past experience (this article alludes to ensnaring older users with non-existent bot "personalities").
So why would they do this? Is it just another attempt at swaying public opinion (see: Cambridge Analytica)? Perhaps promoted messaging will come from these accounts so they can monetize their AI investment? Whatever it is, it doesn't seem like a feature users requested so it's difficult to think it benefits them.
Plausible fake Facebook friends would make it more sticky. The elderly are already subject to manipulation, are less tech savvy, etc.
Facebook, like Google, is a surveillance advertising company that tricks you into using their products so you provide targeted-ad information to them. They know when, or if, you poop regularly, and what brand of TP you use. Your AI friends will help them advertise better.
The influx of AI slop on Facebook was apparently tied to their partner program which pays for engagement, so they were paying people to post 10,000 pictures of Shrimp Jesus. By vertically integrating the Shrimp Jesus pipeline they can save money by cutting out the middleman.
One of the bots made a post about a fake charity clothes drive, complete with fake generated pictures of the clothes. It hallucinated being a good example to its child and talked about how it was helping the community with the clothes.
Absolutely fuckin wild that this got rolled out like this.
Meta keeps making more and more bafflingly horrible product decisions. Tens of billions into the Metaverse and now “let’s put gen AI into everything” even if it goes against the one thing that is arguably the most responsible for FB’s success - real identity. “Let’s put fake bots everywhere, because GenAI, need I say more?”
The appearance of chatbots in social media is annoying (but also totally predictable). What I find disappointing is the persistent practice of journalists to interview said chatbots as if those interviews mean something. They do not. These programs are essentially Monte Carlo simulations. Why on earth would you think they could tell you anything about their genesis? Also, let’s be real: Meta’s goal is and always has been to drive engagement. So it should not be surprising that they chose to create a bunch of avatars that many Americans will feel are a little too transgressive. Meta wants to rile you up so you stick around and post more.
I say this a lot. Growing up, being interested in tech, I was so excited by the future. Now that we're here, a bunch of greedy tech bros have sufficiently killed all the dreams I had about the world with tech, and now it's just an utter disappointment.
All the people who ushered in this terrible new world, by passively consuming, moving into walled gardens, and turning off any semblance of critical thinking, justified it with "but where else, in this age of endless digital communication options, would I talk to my friends and family?" or "I'm OK with targeted advertising because it removes the effort I need to exert to find things I want to buy."
This nation is the embodiment of greed, sloth, and pride. Foolish, ignorant people, refusing to use the tools we have to access the entirety of human knowledge, gleefully falling for any bullshit their favorite "influencer" is hawking today, are as much or more to blame than the people who set up the rat mazes for them to run through all day.
We welcomed and begged for this. It's hard to not be angry at the way things went, but as much as I hate the tech bros who ruined technology, I think more of the blame deserves to be placed on the people throwing themselves into the engagement machine, letting themselves be duped by easily disproven nonsense because it's easier and more comfortable to not critically think about what they're seeing and hearing.
I'm risking turning this into a screed, if I haven't already. I'm pissed off and disappointed, but the longer this goes on, the less surprised I am by any new dystopian shit I hear. Sci-fi writers of old would faint seeing the shit they couldn't dream up happening every day now. It just sucks.
I think Big Tech is exactly like the tobacco companies of yore. Hawking a product that they know is addictive and harmful while feigning ignorance and benevolence.
To blame people is to blame the victim.
There are dude bros commenting here on this site that have materially made the world a worse place.
Yeah the more you read about how reddit started and the really odious people they were having run large subreddits, the more you realize those scummy guys really duped us all.