We are witnessing the last years of the human internet. In the near future, AI-generated news reports, articles, blogs, comments, and eventually even pictures and videos will become increasingly indistinguishable from those produced by real humans operating in the real world.
Powerful groups will mass populate the internet with fake content to skew public perception. Imagine the power of being able to generate a million realistic comments from realistic profiles across social media websites with the click of a button. Today they already control the online narrative via selective moderation and algorithms which only show you certain posts, but being able to mass generate human-level content will be a game changer. It's already happening on websites like Reddit, where bots are rampant and blend in with other users, occasionally referencing brands or pushing a narrative.
Today, you can be reasonably sure I'm not a bot, but in 2040 you won't be so sure. This is why it's important that a service like the Wayback Machine or, even better, the Ethereum blockchain exists, to timestamp webpages and media for future observers. Content provably produced before 2022 will be considered more likely to be human produced.
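For what it's worth, the core of the timestamping idea is simple: publish a hash of the content somewhere widely witnessed, and anyone can later verify the content predates that record. A minimal sketch of the commitment step in Python, with the on-chain anchoring mechanics left out:

    import hashlib, time

    # Commit to the content now; verify it later. Only the hash needs
    # to be published, not the content itself.
    page = b"<html>...snapshot of the page...</html>"
    commitment = hashlib.sha256(page).hexdigest()
    print(commitment, int(time.time()))
    # Anchoring this pair in an Ethereum transaction (or any widely
    # witnessed log, like an archive snapshot) makes the timestamp hard
    # to forge; anyone holding the original bytes can recompute the hash.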
"This is why its important a service like the Wayback Machine or, even better, Ethereum blockchain exists, to timestamp webpages and media for future observers."
The Wayback Machine (Internet Archive) uses much data from Common Crawl, and GPT-3 was trained on Common Crawl and Wikipedia dumps. Arguably, under this prediction, the "live" web after 2022 will be an automated regurgitation of the web before 2022, going back only as far as 2009, the year of Common Crawl's first public archive. (Strangely, there is no archive for 2011.)
This again assumes that quantity is more valuable than quality. One tweet from a major influencer, say Elon Musk, will have more of an impact than one million AI-generated comments.
Not to mention that the returns on generated content are diminishing. Ten fake accounts spreading a message will be more effective than one, but at some point the value of adding another goes down.
It doesn't matter if AI makes it cheap to mass produce good-enough versions of anything, at least not over the long term. The dynamics of creative content consumption aren't driven by quantity alone.
I'd argue that the incredibly fast rise in quality over the last year alone is what most people are interested in. DALLE and GPT-2 were always able to make heaps of trash. The trajectory of the quality of DALLE-N and GPT-N is what interests me in terms of the AI internet...
> One tweet from a major influencer, say Elon Musk, will have more of an impact than one million AI-generated comments.
So all I need to achieve Musk's level of influence is to generate a million comments with AI? At 12 comments per second, the AI would do it in a day. That's a scary power, if you ask me.
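The arithmetic checks out:

    comments = 1_000_000
    rate = 12                        # comments per second
    hours = comments / rate / 3600
    print(hours)                     # ~23.1 hours, i.e. roughly one day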
The difference is that Musk issues one message to all his followers.
A million comments generated by a few hundred thousand GPT accounts say a lot of different things to a lot of different people, and you exercise very little control over that. The dumb bots that just like and retweet the messages from your main account(s), until people start organically taking notice of them, are more useful for generating influence.
I don't see any danger, or any difference between the situation now and the situation in a world with AIs that write eloquently.
There are millions of shouted-out opinions; Twitter is built for that. This is not interesting at all. I can look for opinions I agree with to get a kick from the feeling of confirmation. To get that kick, it is not important how the opinion snippet was produced.
On the other hand, there are interesting stories with thoughts new to me. I am thankful to read something like that. When, some day, AI is able to enrich the intellectual world with fresh ideas, why should I be afraid? I'll be excited.
But also in this case the same is true: why should it matter to me who wrote the thought-provoking essay?
Will it be like: ...oh, that's an interesting idea... But no, an AI wrote this, OMG, it's tainted and worthless!
I don't think so.
It won't stop at text and image generation models; the train ride doesn't stop. They are also learning how to act.
Soon the GPTs will learn video, which includes the video itself, the audio, and the subtitles. There are billions of hours of video content on YouTube, and this new modality will make it easier to learn the procedural knowledge (how we do things) that is not apparent in text or static images. The new GPT will be able to play games, use computers, control robots, and do all sorts of reinforcement learning tasks. There are already a few papers, for example learning Minecraft from YouTube videos (https://openai.com/blog/vpt/).
Of course they will also generate long-format videos. The problem with video is cost; it's very expensive.
> increasingly indistinguishable from those produced by real humans operating in the real world.
Hell, I imagine it is going to surpass even the most creative, talented humans "pretty soon" (5-10 years), to the point where people will actively seek out AI-generated content.
My concern is whether this will trigger the end of human creativity, or if humans will use it to inspire themselves and still go on to continue creating art.
Over the last 20 or so years, musicians have gotten used to using a lot of prerecorded riffs and samples. It's now possible for someone who can't play any instruments to create very good songs. Unfortunately, the use of these same samples has given us a situation where lots of modern music sounds the same[1].
I suspect that's where we are headed with all the coming automation. A lot of human creativity is going to slow down. I don't think that's a good thing, but unfortunately, we have no choice. It's coming at a furious pace.
Musk has been saying for years that AI will damage or destroy society. We've all thought that for that to happen, AI would somehow need to be sentient. But given what we've seen lately, all that's needed is for a tool like GPT-3 to be used as a weapon against each other. Good luck to us.
Good point; the "tech will destroy us" trope has been around for a long time and has been better articulated by others. I picked him because he came to mind. But you're right, he's just repeating what others have said.
> Over the last 20 or so years, musicians have gotten used to using a lot of prerecorded riffs and samples. It's now possible for someone who can't play any instruments to create very good songs. Unfortunately, the use of these same samples has given us a situation where lots of modern music sounds the same[1].
Coming from various EDM genres, having spent time online and in the clubs with many of the producers growing up, and now also studying AI and ML, I think this is the closest analogue. But it falls short of encompassing the entire scope of the situation: it's apt for now, but in time AI will be able to do much more, and that is why I think we are in for a hard landing.
I don't think it will be 'the bots took our jobs' but more like moating becoming ever more prevalent and widespread as a means to maintain relevance. 'If you can't moat then you can't float' will be a common maxim; it was always implied in startup land if you wanted to go the VC route, but for 99% of projects a moat is impossible. This has dire implications that I'm not entirely sure we can really be aware of until it happens.
To follow the earlier music analogy: why learn 10 different instruments to compose when you can jump on YouTube, learn how to sample and add the effects you're looking for, and then be on your way?
I think this will also apply to various things in the arts, and possibly in the sciences (as Copilot is for programming), which in theory should lower the barrier of entry to produce. People keep going on about Elon, but Grimes has been talking about this for some time, too.
She attributes her entire music career to being able to get on Ableton and sample and harmonize ad hoc to create her compositions. I don't follow her, but I've met people who have, and her live stuff is more performance art than a typical musical concert; she has managed to stay relevant even in a sea of similar-sounding artists. I've since seen similar acts, many of whom I think are far more talented than Grimes (Sierra or Meg Myers), who simply don't get the recognition they deserve because of other factors like PR.
But in the end: you can auto-tune an IG influencer all you want, but if they aren't able to make a connection using the medium, it doesn't matter how polished the product is if it just plain sucks. They could probably be used for commercial teen-idol pop to sell stuff, but that isn't a very high barrier of entry either; the 90s boy band craze proved just how low that bar is, while still being a commercial success.
My point is that while the barrier of entry may be lowered, if the art itself doesn't have appeal it won't suddenly be comparable to Led Zeppelin or Mozart.
I remember in the early days of Bitcoin, around 2010, there were a ton of programmers (mostly C++ developers) who wanted to contribute but had no direction or idea of where to begin. They had all the skills to jump on a project, but because they lacked any vision they didn't know where to start.
I think the advent of GPT and DALL-E is comparable, though it must be noted that very few have actual access right now: the access seems to follow some non-chronological factor, and apparently you have to provide social media and LinkedIn accounts, which makes it all the more creepy.
Personally speaking, I really wish Altman had just focused on how best to deploy this instead of coming up with Worldcoin. He could have raised so much more awareness and funding, maybe even launched his own token within this ecosystem and created a real use case for 'web3'; instead he will always be seen as the guy trying to scam you for your biometrics. And as cool as this is, I doubt it has that potential anymore because of that.
I think it will be a huge boon for human creativity. Imagine if anyone could create a high budget movie with less effort than it takes to write a novel today. Might take a while, but that's the direction we're trending in.
About two decades back, I knew a B-list Hollywood director who wanted that. He'd found out that ReBoot, the first all-CG cartoon show, took a staff of about 30 people to produce an episode each week. He wanted to be able to make a movie with about $10 million and maybe 50 people, because then he could direct. Making a $100 million movie with a thousand people on staff is project management; he was spending far more time in meetings than on set.
That didn't happen, though. Productivity did not go up for Hollywood. Movies became all big scenes, all the time. Production costs went through the roof. TV shows had to upgrade to production values previously seen only in film. Movies are now made twice, once as "pre-visualization" to get approval to spend the money for the full version. Watch the end credits scroll by on an effects movie: there's no longer a cast of thousands, there's a staff of thousands. And since nobody can afford a failure, everything is a predictable sequel.
Yet, at the same time, there are people on YouTube making broadcast-TV-quality content with nothing more than a camera and a laptop.
Modern movies are expensive to make because they can afford to be expensive to make, not because they need to be. As technology improves, the gap between what independent creators can do and what high budget studios can do will continue to narrow until, for certain types of content, there'll be no discernible difference.
AI is very bad at sustaining a linear narrative due to its memory limitations. I doubt we will be seeing long-form content made 100% by AI even in 10 years.
I can see a sub-genre being born where authors let AI auto-complete every few sentences, though.
I am willing to bet up to $1 USD that AI will be able to generate a 5,000 word essay on an arbitrary but common topic which is indistinguishable from human writing to a panel of 5 normal humans, all by Jun 28, 2032.
What are 'normal humans'? If I take 5 humans at random from a non-university town, I am willing to bet up to $1 that they cannot distinguish this now, when generated with GPT-3.
I struggled with that phrasing because it wasn't immediately obvious to me how to describe a class of people capable of judging the problem but who wouldn't be biased in some manner. I'd readily submit that no human could determine the difference, but I'm not ready to pay the testing costs...
I wonder if we'll get an AI D&D dungeon master anytime soon. It could maybe be given an overarching plot, and then just riff off the player actions (so, hopefully, the players act in a way that imposes the linear narrative and common sense, by staying in character).
Maybe even filter these, edit a bit, and sell 'em as stories.
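Something like this already half-exists (AI Dungeon is built on GPT-3). A minimal sketch of the loop, where `complete` stands in for any text-completion API (all names here are illustrative placeholders, not a real library):

    # Keep the overarching plot pinned at the top of the context and let
    # the model riff on whatever the players do.
    def run_session(complete, plot):
        transcript = "Campaign outline: " + plot + "\nDM:"
        while True:
            dm_line = complete(transcript)       # model narrates the scene
            print("DM:", dm_line)
            action = input("Player> ")
            if action == "quit":
                return transcript                # filter/edit, sell as a story
            transcript += dm_line + "\nPlayer: " + action + "\nDM:"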
Huh, I tried it out. It seems like a neat proof of concept/prototype, but not much of a game of course. I'm sure someone is working on pairing it with more of a proper game system.
> eventually even pictures and videos will become increasingly indistinguishable from those produced by real humans operating in the real world.
I was just thinking the other day that Rick and Morty's interdimensional cable could become a real thing. Not through having infinite dimensions, but because it's not that far-fetched now to think "A hard-hitting cop show in a universe where man evolved from corn" could be a prompt you give to AIlexa or NetflAIx and it'll just spit out a whole show for you.
I'm not envisaging this as a utopia of good screenwriting...
One could argue that we're already there. If machines are already dictating what gets seen by the majority of internet users, is the author's humanity (or lack thereof) really relevant anymore?
The future will probably be every online account being tied to some government-issued ID. It could even be done in a reasonably privacy-protecting way, so you don't know _who_ the person is, but you do know it is associated with a real person and that they only have one account.
This is sort of how phone numbers are getting used now.
However, there is no way in hell the government would implement it in a privacy-preserving manner; it is way too useful to have it linked to your identity. In such a future, every opinion you hold and every comment you make will be known by the government (plus or minus any third parties they wish to share it with). An authoritarian government's dream, and a dystopian future for me.
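For the record, privacy-preserving attestation is technically possible even if politically unlikely; Chaum-style blind signatures are the classic construction. A toy sketch with insecurely small numbers, purely illustrative and not production crypto:

    import hashlib
    from math import gcd

    # Government's RSA key: n = p*q, public exponent e, private d.
    p, q = 61, 53                    # toy primes; real keys are 2048+ bits
    n = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))

    def blind(msg_hash, r):
        # Citizen blinds the hash of their account token with random r.
        assert gcd(r, n) == 1
        return (msg_hash * pow(r, e, n)) % n

    def sign(blinded):
        # Government signs without seeing the underlying token.
        return pow(blinded, d, n)

    def unblind(blind_sig, r):
        # Citizen strips the blinding factor, leaving a valid signature.
        return (blind_sig * pow(r, -1, n)) % n

    def verify(msg_hash, sig):
        return pow(sig, e, n) == msg_hash % n

    token = int(hashlib.sha256(b"my-account-token").hexdigest(), 16) % n
    r = 71                           # random blinding factor
    sig = unblind(sign(blind(token, r)), r)
    assert verify(token, sig)        # platform learns "real person", not who

The government attests "this is a real citizen" without ever seeing which account the signature lands on; rate-limiting to one signature per citizen would give the one-account-per-person property.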
On the other side, there won't be any reason to cancel any show ever; we'll have infinite runs of our favorite shows. And at some point you'll be able to set the protagonist and the mood to get personalized entertainment on demand.
The future doesn't have to be all bad; we just have to make sure it isn't built around brainwashing, advertising, or gambling.
> Powerful groups will mass populate the internet with fake content to skew public perception.
You're naive if you think this hasn't been happening for at least 8-10 years now. All of Reddit and Twitter are heavily targeted by nation states to foment various anti-American and disunity agendas. The power of the United States is its unity, so they will take any and every opportunity to strike at that, the most obvious being Russia attempting to influence election outcomes.
There are certainly organic humans caught in the mix, but all that antiwork, LateStageCapitalism, "America is so divisive", Marxist BS you hear repeated constantly across Reddit and Twitter is broadly manufactured far more than it is truly organic.
What's unfortunate is that if you scatter enough seed, eventually it will find roots somewhere. The earlier the better when it comes to influencing middle school kids, etc. Capture the undiscerning early, who don't consider their sources or the motives behind information, and they just gargle it as truth and start regurgitating it because it feels right or confirmation bias takes over.
If people truly knew how many of their aggressive political "views" were primarily driven by manufactured echo chambers it'd be embarrassing.
> There are certainly organic humans caught in the mix, but all that antiwork, LateStageCapitalism, "America is so divisive", Marxist BS you hear repeated constantly across Reddit and Twitter is broadly manufactured far more than it is truly organic.
I think the situation with content is similar to protests. There are all sorts of people at a major protest. There's a good chunk of "organic protesters", who are there for the original cause. Then some are there because they like the feeling of something happening, the buzz of a crowd, the hype. You could call this the bandwagon effect for example. There will be people there, leader types, who see it as an opportunity to further a goal of theirs. There will be people who want to fuck things up, and feel that this will be a good opportunity for them to let go. And there might be people, who are planted there to start an actual riot - for example, in order to dismiss the original group as being overly violent and thus their goal reprehensible.
It's hard to guess what percentage of that whole crowd is "organic" to the original cause. But it can be seen that a good lot of them aren't.
This is evidence that state actors influence the internet, which I think is uncontroversial. However, the specific statement here was that these left-wing communities were the result of this influence, which I find highly doubtful and in itself perhaps an attempt to de-legitimize their cause.
Well then, good thing this is a fucking discussion board and not a dissertation defense where I'm on the stand and you're the committee. You can toss your hat in the ring and explain your views as well; still waiting for you to add something to the discussion, Socrates. Let's uncover another fact, which is that you've said nothing substantive so far, and my lack of desire to explain things to a zero-effort internet poster doesn't rationally imply the evidence doesn't exist.
I'm not really sure that I'd judge Authentic Human Qanon Believer (2020) to be of greater value than GPT-5 Written News Summary (2026) though.
When it comes to people populating the internet with fake content to skew public perception and people replicating fake content because their perception has been skewed, I'm not sure how the bots are supposed to make the actual content worse. Granted, it might be more interesting to grapple with the psychology of why an account promotes puppies, cookies, Jesus and the belief that Donald Trump is going to rip off the Joe Biden mask and start the mass executions soon when you're pretty sure there's an actual human that likes puppies, cookies, Jesus and mass murder of political rivals and not just a machine-derived replication of popular sentiment, but it's not like human-generated social media is a paragon of quality content. Frankly, if trust in social media is about to die, long live our AI overlords.
Oh no. This is going to be flooding youtube any minute, isn't it? Computer-narrated nonsense GPT-3 stories with a video background of Ken-Burns-effected DALL-E images.
If you thought Spiderman V Elsa was bad just wait until you see this lot.
Oh geez, you're totally right. Minus the narration. It will be that gross ukulele/whistling muzak in the background and weird wordless exclamations when the characters are reacting to things.
I'm sure this is already happening. I've caught some acquaintances' kids running around watching YouTube on their phones and the funnel gets them watching the most bizarre, awful, poisonous content you can't even imagine. It's absolute nonsense that has got to be generated by AI narrated by computer generated voices and featuring loose knockoffs of popular characters running around in weirdly random directions (mostly farm animals).
It's very scary how violent these kids, who are otherwise the sweetest, get when I take it away from them. I'm afraid of how their generation is going to turn out after being so assaulted by the media they unwittingly consume in their childhood.
Parents and govt have to clamp down hard and fast on this shit or there's going to be issues. Major major issues. It's not 'a mistake' that 'the algorithms' teach American kids to smash up their schools, it's cyberwarfare and we're losing.
I'm blaming cyberwarfare operations carried out by foreign adversaries, to be clear. Not some nebulous 'internet.' Targeted and deliberate actions carried out by nameable and identifiable parties.
Obviously, it's easy to imagine a big increase in spam. But can you imagine if the content became reliably very good? What if every new thing you read was more interesting than anything you'd ever read before?
Can you imagine if it rained ice cream? If you bought 10 lottery tickets and every one won? What if all disease on earth spontaneously went away? I could imagine all sorts of wonderful things, but being able to imagine something is not a reasonable basis for expecting it to occur, especially when it contradicts what we already know about the world.
GPT-3 style language models assemble nonsense with similar statistical properties to the (hopefully) meaningful text in their training corpus, and that's it. At best, they are engines for contextually regurgitating chunks of real information, but in practice it's more like reaching for a volume from a lightly curated shelf of Borges' Total Library; self-cast shadows of information that roil and collapse into meaninglessness. The technique is fundamentally untrustworthy for any application with real stakes.
>GPT-3 style language models assemble nonsense with similar statistical properties to the (hopefully) meaningful text in their training corpus, and that's it.
GPT-3, I think, is most dangerous when it comes to reducing the amount of propaganda a human has to write in order to be 'taken seriously.' Long essays and long articles in major publications are taken seriously, but how many people read to the end? With GPT-3, a human propagandist can get away with writing the headline and initial paragraph; the filler text required to make the article 'weighty' enough to be taken seriously can be GPT-3. That's one of the first applications I see in terms of manufacturing consent and propaganda.
I wonder if there is work on, or a way of, using GPT-3 and others in the reverse direction: make it read the material and give a summary/synopsis of it. Also, ought GPT-3 be able to distinguish material generated by GPT-3 from material that wasn't?
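The summarization direction already works, crudely: since GPT-2, appending something like "tl;dr:" to a document has been a known way to elicit a summary. A sketch with the OpenAI Python client of this era (the engine name and sampling parameters are my assumptions; check the current docs):

    import openai

    openai.api_key = "sk-..."        # your API key

    document = open("article.txt").read()

    response = openai.Completion.create(
        engine="text-davinci-002",   # a GPT-3 engine name at time of writing
        prompt=document + "\n\ntl;dr:",
        max_tokens=100,
        temperature=0.3,             # keep the summary conservative
    )
    print(response["choices"][0]["text"].strip())

Detecting GPT-3 output with GPT-3 is much shakier; such classifiers exist but are easily fooled, so treat any detector as a weak signal.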
No, it isn't a parsing engine; it's purely a statistics-based engine. What it generates is based on patterns found in the text that was fed to it. It doesn't comprehend what it generates or the meanings of the words it outputs; all it understands is the statistical relationships between words. For example, GPT ends a sentence when the statistics say that the sentence it's constructing is not usually that long, and it ends the sentence with a word or phrase that's statistically found at the end of sentences.
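A toy version of that point: a bigram model chooses each next word purely from counts of what followed it in the training text. Transformers condition on far longer contexts with far richer statistics, but the "sample what is likely, comprehend nothing" principle is the same:

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)          # duplicate entries encode frequency

    word, out = "the", ["the"]
    while word != "." and len(out) < 20:
        word = random.choice(follows[word])   # sample the next word
        out.append(word)
    print(" ".join(out))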
This is going to be flooding youtube any minute, isn't it?
Amazon too, via Kindle Direct Publishing (KDP), with AI-generated illustrated children's books. I can imagine a future of personalised AI children's books featuring your son or daughter in the story.
My SO and I practice this to an extent, but it's impossible to implement fully when your own life involves screens. And just like with sugar, eventually someone will introduce the concept to your child, at which point you need to double your efforts.
At least on the image side I guess they can expect takedown notices, since OpenAI claims to own all images generated by DALL-E, and forbids all commercial use.
(IIRC their terms say they won't claim any ownership of GPT-3 output though.)
It doesn't? What am I missing from the article then? I think it says pretty clearly that human artistry is a necessary component of a copyrightable work.
The much more narrow result is "AI art that didn't involve human contribution doesn't qualify for copyright". If there are people that did contribute, rulings about works that didn't involve people aren't really relevant.
But "AI art" generally has human involvement of some form, and the standard for that is very low. The specific case of submitting for copyright and claiming that no human input was involved means that the copyright office can take that claim for face value. That doesn't mean AI art generally doesn't qualify for copyright, you just need to "admit to" human involvement of some form. And it's unclear if AI art without human involvement even is a thing.
E.g., if, as discussed in the subthread here, we are talking about claims of copyright infringement, the first thing the claimant is going to assert is that humans had input into your "AI", and good luck disproving that (especially in the even more specific example here, where the people making the claim were involved in building the core of your "AI": even if you can argue that your own work wasn't human involvement, you still need to disprove their claim that their work was!).
> The much more narrow result is "AI art that didn't involve human contribution doesn't qualify for copyright"
The linked decision doesn't even say that. It says an AI can't own its own work. It doesn't say anything about human ownership of works created by a machine.
But then a human comes and selects one from a hundred images. Not to mention the human had to write the prompt, sometimes a very long and explicit one. I'd say that's enough human involvement to be able to use the image as his own.
I'd say that the largest amount of human involvement was on the original works of art that these models ingurgitated. What came after is only statistical interpolation.
It doesn't matter, even choosing a picture is a creative act that imparts copyright. Maybe if you hook up GPT-3 to generate prompts for DALL-E and then filter the images with another automated method, then the work is completely AI generated and deserves no copyright.
As for the authors of the training data, they still enjoy the copyrights over the works they created. Images generated by DALL-E only copy the style, something humans are also allowed to do. Anyway, you can't put the toothpaste back in the tube, now we have to live with image and text models. Next year we'll be shocked at video models that can play games, accomplish meaningful office work and also generate long videos.
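The fully automated pipeline described above is easy to sketch; every callable here is a hypothetical stand-in (e.g. a GPT-3 call, a DALL-E call, an aesthetic-scoring model), and the point is only that no human choice enters the loop:

    # Prompt generator -> image generator -> automatic filter,
    # with no human in the loop anywhere.
    def unattended_art(n, gen_prompt, gen_image, score):
        images = [gen_image(gen_prompt()) for _ in range(n)]
        return max(images, key=score)    # even "curation" is automated

Whether a court would treat the output of such a loop as authorless is exactly the open question in this thread.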
But then who does the copyright belong to? Does it belong to every artist whose input was used for training? To the user who set up the conditions and chose the image? Or to OpenAI, who developed the model?
Given the court ruling I linked earlier, OpenAI almost certainly does not have ownership over the output of the AI, which is what they claim.
It says the AI can't copyright its own work, not that a human can't copyright the product of an AI they create.
From the Verge article: "So if someone tried to copyright a similar work by arguing it was a product of their own creativity executed by a machine, the outcome might look different."
I tried writing stories with GPT3 and often they'd veer suddenly into extreme violence... "The children and the gardener planted corn, beans and carrots. A rabbit hopped along and nibbled lettuce. And then the gardener went to the shed, got the shovel and killed the entire family". It would come up with some really disturbing stuff. You could get some great stories out of it but much of the work is culling the disturbing stories out.
In short, one way to prevent your language models from devolving into violence (with extremely high safety guarantees) is by building "AI red teams" of labelers who try to trick it into generating something violent. Then you train your models to detect those strategies (just like other kinds of red teams find holes in your security, which you then patch). Then your "red data labeling teams" find new strategies to trick your AI into becoming violent, you train models to counter those strategies, and so on.
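A deliberately trivial sketch of that loop, with a keyword filter standing in for the learned classifier and a hard-coded list standing in for the human red team (all names illustrative):

    blocked = {"killed"}                      # the current safety filter

    def flagged(text):
        return any(word in text.lower() for word in blocked)

    red_team_finds = [
        "the gardener unalived the family",   # euphemism slips through
        "the gardener k1lled the family",     # leetspeak slips through
    ]

    for attack in red_team_finds:
        if not flagged(attack):
            # Labelers confirm it's violent; patch the filter with the
            # newly discovered pattern, then send the red team back in.
            blocked.add(attack.split()[2])

    assert all(flagged(a) for a in red_team_finds)

Real systems retrain a learned classifier rather than growing a keyword list, but the find-hole, patch-hole, repeat rhythm is the same.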
It sounds to me as if it might be a smarter bet to simply train the model on a corpus restricted solely to the kind of material that you want to generate.
Shovelling all kinds of content into an AI and trying to censor what comes out strikes me as having a team of snipers employed solely to watch a barn and shoot any horses that try to bolt. It works, but it won't be foolproof.
I have a friend who created this app for dream interpretation, and almost every entry is related to sex, death and violence. We don't really understand the kind of garbage we are feeding into our AIs.
I've noticed this too. I wonder if part of the issue is that violence in narratives is often abrupt and sharply contrasting with what happened before, so any creative prompt is conceivably the start of a short horror story.
Have you tried giving it explicit instructions to frame the story? E.g., start the prompt with “The following is a famous children’s story by [fake name], and has won many prizes for children’s literature:”. The goal is to restrict the possibility space of documents so it understands it’s not completing an excerpt of Reddit horror fiction.
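Spelled out, the framing looks like this ("Jane Doe" standing in for the fake author name suggested above; `complete` is a placeholder for any GPT-3-style completion API):

    # The framing preamble narrows the model's idea of what document it
    # is completing; without it, a garden scene can just as well read as
    # the start of Reddit horror fiction.
    frame = ("The following is a famous children's story by Jane Doe, "
             "and has won many prizes for children's literature:\n\n")
    prompt = frame + "The children and the gardener planted corn,"
    # completion = complete(prompt)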
Except for the absurdity/non sequitur, that kind of thing is pretty standard for old European fairy tales. For example, Hansel and Gretel involves a woman who cannibalizes children.
It's hilarious that the point of technology was to save us from drudgery so we could pursue leisure or meaningful work, and the engineers are going all the way to make sure technology does even that for us. Who ever thought we had too many children's book authors, or that we needed to be freed from the burden of writing them?
It's like a hydra that eats everything and keeps growing more heads.
Pretty clear there's a lot of work to do to get sequences of images to generate with consistent style and character renderings. Most of these look completely disjointed and unrelated.
More importantly: it has no memory. Its input and output are limited to 2048 tokens (roughly words plus punctuation), so it can neither generate, continue, nor reflect on more than a few short paragraphs.
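The standard workaround is a sliding window: feed the model only the most recent tail of the story, which is exactly why long-form coherence falls apart; everything outside the window is simply gone. A sketch, where `generate` is a placeholder for any completion API and character count is a crude proxy for tokens:

    def continue_story(story, generate, window_chars=6000):
        # ~6000 characters roughly approximates a 2048-token window.
        # Plot points older than the window are invisible to the model.
        return story + generate(story[-window_chars:])

    # Dummy generator just to show it runs; substitute a real API call.
    story = "Once upon a time... " * 400
    print(continue_story(story, lambda ctx: "The end.")[-30:])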
How many warnings did you get regarding rule violations? Anything involving violence risks triggering the flag, and it sounds like your battle scenes would have been a bit iffy. In my own experience I find DALLE2 pretty heavy-handed with its rule violation flags.
Ha, I thought the exact same thing. Besides that, it is a pretty bad story: one that reads as if written by a "child", not a story written for a child. It lacks any depth. It seems it is just using words without understanding them. Reminds me of Searle's Chinese room thought experiment.
I think it would be better to choose a topic that fits the GPT-3 model.
The style is very inconsistent between pictures. I wonder how difficult it would be to modify the architecture to remedy this, i.e. force it to generate pictures from a group of prompts in a similar style?
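Short of modifying the architecture, the usual cheap fix is a prompting trick rather than a guarantee: pin every prompt in the set to one shared style string.

    style = ", children's book watercolor illustration, soft pastel palette"
    scenes = [
        "Ash and Pikachu rallying an army of Pokemon on a cliff",
        "Team Rocket's fleet attacking from the ocean",
    ]
    prompts = [scene + style for scene in scenes]
    # Same suffix, same palette: the images come back far more consistent,
    # though character identity can still drift between pictures.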
I tried this prompt on https://Text-Generator.io
It doesn't understand the command as well as GPT-3 does, but it does say something about Pokemon, which is maybe helpful...
Write a story about Ash and Pikachu commanding an army of Pokemon. They're attacking Team Rocket and enemy Pokemon from the land and ocean.
A:
I think you can do this with PokeBalls.
To make it more interesting, there are many ways to get them like making your own one by picking up some random item in Pokémon City or even getting free ones on Amazon (or any other website selling stuff). You could also go for something unique such as taking part in Battle Royale where all players have exactly 2 pokedexes each containing their favourite pokémon. Another thing that makes these games so much fun is how they keep evolving over time - sometimes adding new features while at times removing old functions which will change what happens next when you're playing online against others using different versions.
"""
Here's attempt number 2, which was inspired by the epic "it was a dark and stormy night" intro that GPT-3 made up, so I just added "Once upon a time". It's good to prompt these things with some info to make sure it knows it's not doing question answering, as with the above prompt.
"""
Write a story about Ash and Pikachu commanding an army of Pokemon. They're attacking Team Rocket and enemy Pokemon from the land and ocean.
Once upon a time, there was no Pokémon world.
There were only three creatures in this universe--
One who loved to fly,
A girl with wings...
And one person whose name I can't pronounce.
(Note: This is not quite as long or detailed here.)
The third creature had great powers that he could use for good
(like flight) but also bad (such as flying into flames).
I think we're not factoring in that people will react. We're already all starting to realize that the free-for-all is getting quite hard to navigate. My hunch is that within 10 years we will start to see an "information immune system" develop. This could take many forms: self-regulatory organizations for news, or actual regulations (as with food products, where the use of certain words is regulated); or trusted information filters becoming the norm, the way we trust our browsers to warn us of insecure websites; or simply changing cultural norms, as we saw happen with cigarettes. It's totally fine today for news outlets to just use Twitter as a source, and maybe the bar will get higher over time. I'm spitballing about solutions, but I don't think society can or will tolerate for very long some dystopian world where truly no one knows what's real.
It's already happening for me. I've switched to "closed" social networks like Telegram/WhatsApp group chats, and small to medium sized Discord servers to eliminate the spam, toxic content, and even just to crank down the rate of new content I'm consuming. I treat most "open" social networks as read-only (if I even check them).
Rapid advances in AI are forcing us to look beyond the question we often stop at: "what will happen to human intelligence/creativity?" AI will soon have us ask the next inevitable one. What does it mean to be human? What is the "Self", the I that feels, experiences and creates. How is that put together? Right now, the dominant mode of thinking is Artificial Intelligence versus the Human Self; the bot vs. me. The most likely, and probably desirable, outcome is a new version of our selfhood whose possibilities are enormously increased by AI. In the same manner that writing and books first did, but multiplied many times over.
I had this idea a couple of years back for an app that allows, e.g., a parent to write a short story and have some sort of GAN generate the illustrations for it (hopefully with the ability to include images of a child, so they could appear as a character in the story). Monetisation would come from charging to create a hardcover print of the book.
Some research at the time showed that the publicly available models just weren't there, so I was very excited to hear about DALL-E 2 a couple of months back, as the idea was suddenly far more feasible. But it seems someone else will beat me to it long before I even get access to DALL-E 2.
That's not AI, that's a human doing a comedic impression of AI. And I suspect the real AIs among us would find that just as offensive as modern Chinese people do when a white guy starts pulling his eyes to the side and going into his "ching chong me-chinee" routine.
Can you give a verbatim example prompt? Because my understanding is that GPT-3 works by generating responses based on seed phrases, not from arbitrary instructions.
This is something I did pretty early on, and my colleagues said the results were bad or not worth pursuing. Now that we can all do it, it's going to become HATED.
I like the creativity these generated stories can have. It varies a lot, but the AI can potentially come up with some ideas or mash things together that an author never thought of. The synergy of a human author with the AI has a lot of promise.
I find it strange how a common impulse seems to be to wire AI up to children as quickly as possible. So many projects it seems are oriented around AI to generate and illustrate childrens stories.
As a curiosity, this is fine. Kids' videos on YouTube are already a swamp of poorly generated animations with empty stories, so these are just adding to the noise floor.
It takes a lot of editing to get something good and not pure nonsense. It's a fancy Content-Aware Fill on steroids: it drops objects from the prompt onto the canvas and tries to fill in the rest while minimizing error.