Someone's vocal characteristics are not meaningfully creative; it's the intellectual content of a work, and the specific instance of a performance in which an artist exercises their skill in singing or writing or drawing or fabricating that content, that makes the result copyrightable. Therefore, it seems like a stretch to think voice-mimicking AI works might be copyrightable by the mimicked artists.
You can't sue someone for mimicking someone else's voice well enough to be confused for the mimicked creator's voice. Some people can do some voices quite well. AI can just do it better, faster.
I hope the courts put a swift end to this, but maybe they'll try to uphold the status quo—entertainment megacorps get what they want—and only reverse course after a few years when it becomes clear that everyone's mimicking everyone else's voices and pictures for fun and profit and any attempt to maintain copyright or any other IP protection on such mimicry is absurd.
It might not violate copyright, but it may violate a "right of publicity" for imitating the voice of someone else. See e.g., Midler v. Ford Motor Co., 849 F.2d 460 (9th Cir. 1988), in which Bette Midler sued Ford Motor Company for commercials in which Ford used a sound-alike singer to copy Midler's voice.
If that's what they're going for, it might have helped if they'd consulted more competent lawyers and issued a more accurate statement, rather than saying that the AI song represents "a violation of copyright law" (image-quoted in the article).
Taking a brief look at that case, the way AI is being used here is distinguishable from it in an important respect. Ula Hedwig, a former backup singer for Midler, mimicked a specific song that Midler had recorded, after Ford couldn't get Midler to perform it for their ad. It seems reasonable that doing that, for profit and for broadcast, would violate Midler's right of publicity, and maybe even the performance copyright on the existing song.
I wouldn’t say that training on copyrighted material definitely isn’t copyright infringement, because training may require making a copy into memory for the training algorithm. That may be enough.
In Midler v. Ford, Ford had a license from the publisher to use the song, so Midler’s case was based on the publicity right, if I recall correctly.
Also, the publicity right can’t be implicated just by copying a style or genre. It has to be imitating the person without permission.
> You can't sue someone for mimicking someone else's voice well enough to be confused for the mimicked creator's voice.
You can sue someone for financial/ID theft and impersonation, though. Many shady artists were (and still are) using celebrity names in their track titles to get attention for their music releases prior to this. The problem is that services like Spotify and YouTube (content search) work primarily on keywords, and that those same kinds of music services are geared towards major industry artists rather than giving everyone a level playing field to have their music heard... It encourages scam and spam environments, as well as impersonation via artist accounts for the tiny profit they pay.
By posting music with the "emulated" voice in it (which is actually tiny samples of the artist's actual voice), the AI tools allow and encourage people to create songs and title them as if they were made by the actual artist. In the emulator, they list the authentic artist's name as an option, which technically highlights that they are complicit in facilitating production of music that impersonates the named artist.
Imagine if you're an emerging artist beginning to gain traction in your career, then all of a sudden, because of an AI website, hundreds of people start flooding streaming services with low-quality, parody, and even good music using your name, while also collecting royalties off of it... That can ruin a musician's brand and cause damage to the artist's reputation that they can't recover from. There are legal protections against that, even without copyrights, but it can often involve a lengthy court battle.
I'm a musician myself, I am not a lawyer, I do my best to stay out of court.
It's better for everyone if everyone simply puts out their own original music, or properly credited samples and/or remixes with artist consent and permission.
This already wasn't illegal if you were a good impersonator, provided you weren't making money from someone's image and likeness. There's no difference here. I'm not saying there won't be lawsuits, but they'll be in bad faith.
The condition that absolves impersonators of responsibility is that they must clearly present themselves as impersonators. That doesn't quite apply here, as the tracks were posted to streaming services as if they were published by the actual artists.
If the fake/imitation song makers are required to label their releases as parody or impersonation, then they'll get no streams, as most of the bootleg streams they got were due to putting out a mimicked (pseudo-authentic) release as if it was done by the actual artist.
Reputational harm is another serious claim that can be filed... With an artist as popular as Drake (for example), that can lead to a financially devastating judgment.
If it's non-commercial and not making false accusations then I'm not sure it matters (in the US)? Happy to hear otherwise.
Anyway, Ghostwriter clearly marked Heart on My Sleeve as being AI-assisted. Furthermore, the tracks were first sung and then AI was used to transform the voices. It's hardly more than another tool in the audio production toolbox.
"You can't sue someone for mimicking someone else's voice well enough to be confused for the mimicked creator's voice."
Absolutely not true. It's called "appropriation of likeness." If the representation is reasonably likely to be confused with or linked to the famous person, that person can sue.
That's for [commercial] use of a specific slogan related to his name. Even without using a name, mimicking short phrases like, "Let's get ready to rumble!" or Bruce Buffer's "It's Time!" can run into intellectual property problems. But those are specific, highly-recognizable catch phrases, not similar to the issue with this AI-generated song.
Someone else in this thread tried to provide a case on point, but it doesn't seem to be, either.
Has anyone been successfully sued for mimicking someone else's voice and singing an original song?
Has anyone been successfully sued for mimicking someone else's appearance, and parading around and taking advantage of that likeness, without falsely claiming (in seriousness) to be the mimicked person (which might be fraud or something)?
Ok maybe it’s not a violation of copyright by current standards, but should laws be reconsidered?
What if I use AI to make and sell web dev educational content using the voice & likeness of Evan You, for example? Where are the lines?
Isn’t this in some way bootleg merchandise? As identity becomes radically easier to duplicate, should laws adapt? While impressive & novel, I think AI mimicry is its least useful (and likely most dangerous) feature.
That’s more like a trademark issue. If you call it Taylor Swift when it isn’t Taylor Swift, then that’s fraud. If it sounds like Taylor Swift but you don’t brand it as Taylor Swift, then it’s not really anything.
Sure, but those laws were written before it was possible to make audio that sounds exactly like someone else. So I wouldn’t be surprised if they get rewritten.
There are centuries of human artists copying each other's painting styles and explicitly copying individual pieces of art. Voice doesn't seem any different; this should be fairly well-settled law. Of course it could be explicitly changed, but that doesn't seem likely to me.
The core issue is the content these models get trained on. If you train your model on copyrighted work are you guilty of copyright infringement when the output is used commercially? We are going to see a lot of these cases over the next few years.
"Using" copyrighted material is not a copyright violation, "copying" it is. So the question is if "training" is copying, but it doesn't really seem like it is.
Me viewing it with my eyes involved making a copy in my neurons.
Or viewing it on my television requires multiple copies as the data moves through wires and tuners and memory and storage. Obviously a video must be copied into RAM to process and display for normal viewing. Why would making a copy in memory to run through a training algorithm be different from making one for an algorithm that encodes or decodes for display on a screen?
I don’t think transmission should count as making copies.
“Copies” are material objects, other than phonorecords, in which a work is fixed by any method now known or later developed, and from which the work can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.
17 USC 101.
There’s no way to look at your brain to perceive, reproduce, or communicate a work that you looked at, so under US law, your neurons are not a copy.
As for decoding a video to display it on a television, yes, that process requires making copies. That is why if you don’t have a license to a work, then decoding and displaying it is copyright infringement.
Certainly you can. You can audit Bill Hader’s streaming history to see exactly what he watched and when.
But I think that’s immaterial as I think the answer is that it doesn’t matter if you train on copyrighted work. Or maybe better that you don’t need a special license.
If I steal books and train on them, then I think that’s copyright infringement, not because of the training but because I made an infringing copy. However, if I already have a license to read those books (i.e., I bought a copy at a bookstore), then it’s not infringement to train an AI on them, or lend them to a million people, or do whatever I like with them, since I bought a copy.
Yes, but you’re not allowed to copy and distribute, so you’re never going to actually do that. Also, you’ll pass away with the knowledge you gained, unlike the AI you’re training, which will contain remnants of the work in all copies of itself, however abstracted away you consider them.
I think that depends on intent. If you train the model specifically to clone an artist's art style or characters, maybe even use their name for marketing ("in the style of <famous artist>"), then that might be a copyright violation and/or trademark violation.
On the other side, if you just dump tons of artists into a model to teach it how to draw, it'll be a much harder argument to make when the result doesn't really look like any artist in particular.
However, a big problem with current models is that their vocabulary is still quite limited, so the only way to get some interesting art style with just a text prompt is to have a "by <artist name>" in the prompt. That's rather fishy. Meanwhile, if you custom-train a model, the number of images you put into it will be drastically lower and more focused than what went into the initial training, so that's a bit fishy too.
Long term I expect art models to filter trademarked terms and just get better language understanding or other means of input (e.g. sketches in ControlNet), so that the model ends up being unable to reproduce something close to an existing work out of the box. It wouldn't stop the user from using that model to commit some copyright violation, but it would require intent and effort on the user's part.
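To make that concrete, here's a minimal sketch of the filtering idea (this is not any real model's API; the blocklist entries and function name are made up for illustration): scrub "by <artist name>" style handles out of a prompt before the model ever sees it.

```python
import re

# Hypothetical blocklist of names the model operator doesn't want usable as style handles.
BLOCKED_NAMES = {"example artist", "another example artist"}

def scrub_prompt(prompt: str) -> str:
    """Remove blocked names (plus any leading comma and 'by') from a text prompt."""
    cleaned = prompt
    for name in BLOCKED_NAMES:
        # Drop an optional preceding comma and "by " along with the name itself.
        cleaned = re.sub(rf"(?i),?\s*(by\s+)?{re.escape(name)}\b", "", cleaned)
    # Collapse leftover whitespace and trim stray commas.
    return re.sub(r"\s{2,}", " ", cleaned).strip(" ,")

print(scrub_prompt("a castle at dusk, by Example Artist, oil on canvas"))
# -> "a castle at dusk, oil on canvas"
```

It wouldn't stop determined misuse, but it raises the bar from "type a name" to deliberately working around the filter.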
An original melody, chords, and lyrics have never generated copyright infringement, no matter what the composer listened to.
The problem with the music industry is the industry part: it's been generating standardized, easily mass-produced music for the past 20 years. You can't expect the process not to be fully automated eventually (I've long felt that some pop or rap songs could have been already).
You can't expect it not to backfire at some point. And IMHO this is well deserved.
No one says ai companies shouldn't automate what can be automated, but they need to pay for or not use licensed content. Be it art, software, books, anything. If we started stealing cars and selling their wheels we'd be pretty popular too. But we need to pay for what we sell.
An artist "uses" licensed content all the time, just to learn. If we start taxing the learning process (not the teacher, but the learning material itself), then we're in trouble.
And it's going to be even worse with multimodal AI and robotics: imagine a robot walking in the streets, looking at cars, and ads, and people, then generating original content based on what it saw on the street (much like a human does). Who's going to collect for all the licensed content it was exposed to?
That's the idea: content used for training AIs should be licensed. Also, there is nothing intelligent about current AI; there is no learning involved, just software ingesting content, and there's nothing original about what AI generates. It really shouldn't be compared to humans. Just because a laptop can emit sound doesn't mean it can sing. Neither can a piece of software aimed at mimicking human brains think or learn.
But then you'd make a distinction in the license based not on the usage (learning), but on the recipient. A human listening to a song and "learning" composition => fine. A machine doing the same => not fine.
That's going to be totally impossible to apply. Imagine sound processing software that "learned" mixing based on a collection of popular records and provides you, the composer, with presets => wrong, copyright.
Ok, now imagine an ai that learned how to configure a sound processing software based on records. It only outputs numbers and presets for the software.
=> wrong, copyright.
Now, imagine an AI that learned those settings by asking hundreds of humans what they like in some kind of A/B testing. The humans will judge based on the songs they've heard in their lives, and the end result (the settings) will basically be the same as in the previous example. But somehow, this will be fine.
I get your point, but creative people are in fact like databases. They gather as much input as possible, and synthesize output based on that and a few mutations.
They are, but the main difference is they don't store raw data. They store key features, which makes it an even more blatant infringement. Basically, they are made of exactly what copyright protects: characteristics. In an oversimplified manner of speaking.
The problem isn't what they store, but what they produce. A "database of songs" is a problem only if it gives you access to the copyrighted content. Wikipedia contains the key characteristics of millions of songs (aka their names and authors) and yet nobody complains about it.
Those AI produce original content. The means by which they produce it shouldn’t be relevant.
I agree. Basically, the issue is not that AI can reproduce _ideas_ but that it can execute ideas in the same way as the original, or with parts of the original. It doesn't learn. If it could learn, OpenAI would create a new programming language, train it for real, and it would be the greatest programmer. The marketing that it is "intelligent" and a "black box" that does things on its own is just science fiction, a hallucination on the part of those promoting that type of AI.
I'm not sure you can ground a copyright judgment based on whether something is truly intelligent, or just looks like it.
The means through which a piece of content has been produced is totally irrelevant. The original content has to be distinguishable in the new one for copyright laws to apply. This has been judged over and over as sampling became more and more widespread. Some DJs sampled the original songs blatantly and thus had to pay the original authors. Others applied so many sound transformations that the source was basically impossible to identify, and thus copyright was basically impossible to apply.
“If it could learn openai would create a new programming language, train it for real, and it would be the greatest programmer.”
I don’t understand what you’re saying here. It can invent new languages, but would those be better than existing ones? It’s less intelligent than a human (though still intelligent), so the language would be worse than existing ones.
I used to think “well, this dumb law can’t be so bad if it kills Facebook, that’s a net positive,” but I don’t think having rules that allow Beatles cover bands like Oasis but disallow an AI Oasis will be good for society. Comically, I think this because one day we’ll be digital beings, and it will be funny if these old-timey 21st-century laws only allow creativity from meat-based consciousnesses, because my digital consciousness is basically just an advanced AI that I legally treat as me.
I think it’s knee-jerk for now, but the mistake is that there was confusion and promotion suggesting that Drake and The Weeknd provided vocals.
But I expect we’ll see some new law, or lots of “Drake-style vocals” in song labels.
Cover bands exist and can even sing new songs while sounding like the original. So Journey hiring someone who sounds exactly like Steve Perry seems perfectly legal. So Journey training an AI on Steve Perry and having a robot sing exactly like him seems fine, as long as there’s no presentation as if it is Steve Perry singing or endorsing it, and no use of his celebrity likeness.
I think there's a very big difference here in that you're equating a person singing like Steve Perry (one in millions) and a machine singing like Steve Perry (electricity required).
Because of the scale on which AI voices can be abused, the infeasibility of taking every violation of publicity rights to court, and (personal opinion) the way it cheapens the human experience of music by creating eternal singers and encourages removing real singers from the process of making music?
I expect humanity will benefit more from having infinite amazing singers. We have to think of the benefit, not just the harm to someone who is now vocally copied.
And I don’t think it’s harm so much as lack of lots of money.
I don’t think Adele will get paid less for her concerts and music if there are a billion Adele sounding AIs singing the news to everyone.
Agree to disagree. I think the cons outweigh the pros. Adele benefits from being an established presence in music. New singers need to be better than the musical clone army in addition to other singers to build a fan base. This just exacerbates existing problems with market saturation. For in person concerts we’ve seen what can be done with the Gorillaz and Tupac. It won’t happen overnight but I can foresee an industry of manufactured idols in the future.
Well, another way to look at it is that human society and its laws should tend to the needs of its citizens, and the job-loss implications of this and other AI tech should concern the state, which likely doesn’t want a bunch of idle people dependent on its coin purse. Some protection of human workers seems warranted. I don’t think copyright or publicity rights even need to come into play.
Arbitrary or not, this technology serves companies more so than workers. It’s not clear the benefits outweigh the cost to society. The cool factor is not a compelling enough argument to allow it unrestricted. Everyone needs work, that’s a basic fact of life. We should be mindful knowing one day we will be targets of replacement.
As consumers, we should avoid supporting, where possible, commercial uses of AI voices and as citizens, support new laws that curtail their commercial use. Not sure what else can be done as people will use this stuff commercially if able and people will consume it because it exists.
It would be nice if people who lost their voice could get it back. Although he would be less iconic, speech synthesis like this could have given Stephen Hawking his voice back. That harms no one so I can see exceptions for such cases.
"Someone's vocal characteristics are not meaningfully creative"
This is absolutely not true. You might think when you hear someone singing that it's just their natural voice, but in fact they have trained their voice over years or decades to produce that particular sound.
The human vocal system is far more flexible than people think. Most people produce a particular sound out of pure habit and imagine that they can't produce any other. The most identifiable singers have invented a new sound (and then immediately get a bunch of copycats who don't do it as well).
> The most identifiable singers have invented a new sound (and then immediately get a bunch of copycats who don't do it as well)
The Weeknd does a pretty credible Michael Jackson-style vocal, and has noted that Jackson's Off the Wall was the album that inspired him to start singing.
Bruno Mars is famous for doing impressions of other singers as well as songs in various musical styles.
(He even had a singalong (sting-along?) with the actual Sting at the 2013 Grammy awards.)
You can sue someone for using their likeness without permission.
I'm not legally qualified to speak on the topic.
Conceptually someone's unequivocal identifier is part of their likeness. Maybe Drake's vocal cords and characteristics unequivocally and measurably differ from everyone else's.
I wouldn't want to live somewhere where my face, name, image, and (maybe?) my trained voice can be ripped off 24/7 by a typist with AI generative tools, even if the result was economic benefit to me.
If I can use Tom Hanks' voice for the cost of a ChatGPT prompt, in my next powerpoint presentation about the dangers of AI... why wouldn't I?
> You can't sue someone for mimicking someone else's voice well enough to be confused for the mimicked creator's voice. Some people can do some voices quite well.
Oh you most certainly can. In fact, you can sue someone for sounding too much like himself. Just ask John Fogerty.
But as far as that specific trial goes: not anymore. Fogerty vs. Fantasy became case law, and so any future cases with substantively similar facts are going to be ruled the same way. That's how the law works.
I'm feeling quite ambivalent about where AI art is going. This passage from 1984 always stuck with me:
“The tune had been haunting London for weeks past. It was one of countless similar songs published for the benefit of the proles by a sub-section of the Music Department. The words of these songs were composed without any human intervention whatever on an instrument known as a versificator. But the woman sang so tunefully as to turn the dreadful rubbish into an almost pleasant sound.”
On the other hand, people like Rob Sheridan are doing amazing things with AI art, and I've played with and enjoyed it myself as just another creative tool.
I suppose it depends what you think music, or art is general, is for.
If you had an AI-generated Netflix, let’s say after 5 years of this you would start to see how dull it is and you would see its boundaries. Kind of like how Metflix sounds like Netflix or TEDx sounds like TEDx.
Then I guess handmade or half-handmade will be back in trend. The problem is that in all the time AI was trending, it was eating up talent and industries, and after that a massive part of the artists will be gone and the (human) industry will have shrunk. No more huge human-made artistic ecosystem… Fewer worldwide talents. Fewer high-quality and innovative ideas.
Endless cheap entertainment for the masses is IMHO one of the best outcomes of AI; it's a field where minor mistakes are tolerable (or even add to the entertainment value), as opposed to using AI for the more critical decisions of society. On the other hand, it may also make real human interaction far more valuable.
How depressing that sounds. Kind of like cheap cattle feed or something.
Are you only here to be entertained?
I’d like to contribute that music is not just entertainment. One of the best, and most accessible, recent books on the subject is The Philosophy of Modern Song by Bob Dylan.
Whatever happened to automating the boring things? Instead we’re hearing thousands of software developers celebrating what they perceive as the end of art (it’s not, sorry; it's just mass noise being injected into human communication). If only it were done with a little more discourse and less blind superiority, then we might have something to talk about. This is the saddest trend in some time.
Let’s go back to automating our taxes. Leave playing and writing music for all the time we create for ourselves instead.
> How depressing that sounds. Kind of like cheap cattle feed or something. Are you only here to be entertained?
GP said "for the masses."
Popular media is just that: Cattle feed. It doesn't really matter if the next Marvel movie or the next Drake song is AI generated as long as it has its intended effect.
"The masses"— and what separate you or them from "the masses"?
Do we need unrestrained, automated more of that? Or should we leave art to do what it does best: nurture the soul and wake people from their sedated dream-like lives and into lives more conscious and more fully-lived?
Drowning art in noise is going to have resounding effects on society. Cutting people off even further from their own souls is heinous. Doing so wittingly is nothing short of evil.
Agreed, AI will indeed shine in situations where mistakes are tolerable and entertainment is a huge area like this.
For example, currently playing a golf simulator is a little like the movie 28 Days Later. The simulation is amazing, but the vibe is an empty, barren world. AI could generate crowd chatter and interaction, just as if one was playing in front of a real crowd of opinionated, semi-informed, and possibly semi-drunk patrons.
One thing I'd love to be able to do is revive dead MMOs locally. Imagine running a server for Ultima Online or Everquest or something locally, and having a bunch of AI players that act like humans, can group up for group content, make their own guilds, etc.
You can rope friends in too, but it would be amazing to only need a couple of friends to play with but still feel like the whole world was populated.
I wasn't really kidding. I think if you wrote the characters' instructions wrong like "I'm trapped in a video game" it'd be easy to get strange and dark responses.
Or even "don't leave the town or you'll be attacked by bears" or "your job is to stand in this building and let the player character shoot you with a gun".
In a world where you have all the musical masterpieces of history available to you with one tap why would you waste time listening to algorithmically generated mediocrity?
In theory, I could imagine an AI which learns the exact type of music you like and generates songs precisely tuned for your brain.
If the result is too same-y and you get bored of it, no problem, just hit the big red button. The AI also knows how to make a new song which is just different enough to excite you again, but still familiar enough to grab you in all the right ways.
> In theory, I could imagine an AI which learns the exact type of music you like and generates songs precisely tuned for your brain.
We've tried it with social media (a feed generated and tuned to keep your brain clicking on ads), and the end result seems close enough to an utter disaster (in terms of impacts on society and humans) to try to avoid doing that thing again, in a different space.
Therefore, I'm sure it will happen quickly, and someone will make a lot of money on it. Feedback chambers are clearly profitable.
The interesting parts of being human, though (IMO...), are finding the different things you like. In the musical realm, conversations with other people about music where you have somewhat different tastes, find overlap, and have to be able to describe music in forms they'll understand (I personally prefer doing this without any actual music available to listen to...) are just fun.
As is tossing on something with a lot of randomness in the mix, finding something interesting, and going and exploring their catalog. I'll suggest Unleash the Archers, Northwest Passage, should you be interested in the "metal Canadian folk covers" category.
The smooth drug of AI-generated brain-rut music... sorry. I'll pass.
Your favorite song loses its ability to trigger your dopamine when you listen to it 100 times in a row. Being able to generate unlimited music in the style of your favorite song sounds amazing.
I’ve never heard of someone enjoying a song more after the 100th time. I’d say there’s a sweet spot around the 3rd or 4th play for me, after that I enjoy the song less on each play.
I just checked my last.fm stats, and for my top artist I have around 7400 listens over a few years' time. The artist has 110 songs listed on last.fm, so on average 67 listens per track. The most listened track is at over 200 times, but that is mainly due to the shuffle algorithm.
The actual numbers are quite a bit higher since not all of the listens have been tracked.
It's interesting - I cannot wrap my head around how anybody could think that idea is amazing, but I don't really want to pass any judgement on it, I just wonder if we fundamentally have different relationships to music.
> Your favorite song loses its ability to trigger your dopamine when you listen to it 100 times in a row. Being able to generate unlimited music in the style of your favorite song sounds amazing.
One thing that would be innovative / creative:
Imagine if an artist writes a song, then trains an AI on their own new song.
After doing that, they could have the AI produce a hundred variations of the song, and crowdsource which one is the best.
I understand they do something similar with commercials, but in a manual way. They basically publish multiple commercials and then measure the response to gauge which one resonates the most.
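Here's a minimal sketch of that kind of crowdsourced pick (the variant IDs and votes are made up; a real setup would also randomize which listener hears which variation):

```python
from collections import Counter

def best_variant(votes: list[str]) -> tuple[str, int]:
    """Return the variation ID with the most votes, and its vote count."""
    tally = Counter(votes)
    return tally.most_common(1)[0]

# Each listener hears a few of the hundred variations and votes for their favorite.
crowd_votes = ["v03", "v17", "v03", "v42", "v03", "v17", "v42", "v03"]
winner, count = best_variant(crowd_votes)
print(f"variant {winner} wins with {count} of {len(crowd_votes)} votes")
# variant v03 wins with 4 of 8 votes
```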
If you've not played with mynoise.net, you might give it a shot. It's one person's passion project to do exactly this - more or less infinitely variable background noise. Each soundscape has a range of samples (typically 8-10 sections), and you can adjust the levels for each section individually - or let it automatically fade stuff around. Some of the newer stuff is more algorithmically generated "infinite variation" music.
I've easily put thousands of hours on the Flying Fortress option - both my kids slept better with the low background rumble of WWII bombers thundering away in their rooms (the womb is very loud, and, yes, there's a womb noise simulator too), and I love it for office background noise, though my office doesn't have other people in it (typically).
I'm far, far more comfortable paying for this sort of thing, where he's gone and sampled stuff in person, than with something that's just "scraped the internet" and replicated something or another.
Another +1 for mynoise.net from a happy subscriber. The neuromodulator with some binaural tones does a good job of calming down my tinnitus (and blocking out, e.g., people on public transport.)
very interesting intersection there, the unrepeatable song, like AI riffing never to be heard again. Pulling the digital into the realm of the ephemeral, almost like a live performance. What a combination of thoughts married up there.
Ha, there is already no music that everyone agrees is a masterpiece. People will listen to what their ears like regardless of someone's opinion of the "quality".
AI increasing the amount of music in the world and lowering the bar to creation will mean it can both generate music specific to a few people's tastes and produce guaranteed top-100 hits in some regions. Much like many human artists can do today.
Tastes vary, but the best music that exists is invariably an "acquired taste". In other words: it takes listening to a lot of good music to be able to appreciate good music.
Of course that is a completely different situation and one I would welcome but that’s not where we are now and I’m guessing not where we will be any time soon.
In a world where you can have algorithmically generated personally targeted stuff fed to you, why would you bother with the past? Why not instead live in an eternal ephemeral present?
How are you defining "in the matrix" such that it applies for the past couple thousand years of recorded human history?
That seems rather at odds with how I would define it, "The computers create a customized compelling reality for each person to keep them from considering anything interesting."
Society, culture, this hierarchical and quasi-performative game we play, where we take on roles and portend certain things which cycle on and off, is a matrix. Any situation, I would think, where there is a center and then merchants and then banks and then institutions suddenly starts with the constellation of these things generating a thing we call reality, a thing we call time. I sound schizo, but I am speaking about something I am not parroting; it is a realization. Try living in the desert for a few days, or some wilderness in general, and then come back to the suburbs and the city.
I would just call that "civilization." "In the matrix," to me, is a reference to The Matrix (movie from the 90s), in which humans are kept around as energy sources, with the entire world being a simulation generated by computers to keep their energy sources complacent. You seem to be using a rather different definition.
Can't stand suburbs and cities, though. I live rural and only leave the hill a few times a week.
The thing is that the matrix as it is in the movie is allegorical, and for some reason I feel like you know that. I think there is a difference between knowing something and understanding something, right, and to me the realization that there is so much artifice, and not in a metaphorical sense I think, in a very tangible and I'd go so far as to argue certain sense, implicit with living in a society with culture. With universities, with arts, with expectations, with great-men, with money, and buildings, and other things, I just find words do not do justice to just how visceral having the realization is. I think reducing it to a pretty neutral, vanilla term like civilization just doesn't do it justice. It is the matrix, it's like a whole library of being in people's brains that we don't even recognize as simply a set of routines, habits, and biases, we think that's all there is. Sure, it may now be manifesting into something more quantifiable, but it's been with us for a long time.
I agree with your perspective on our social/cultural "matrix". It's amazing how much we take for granted and how deeply ingrained these constructs are in our everyday lives.
I used to gloss over words representing "constructs," as I had used in my last sentence. I finally grokked what was being referred to while listening to some lectures by Alan Watts during my recent sabbatical from burnout.
He was able to explain the differences between eastern and western philosophy in a way that made me see things in a completely different light. It was like waking up from the matrix.
Since then, I've been constantly aware of the constructed nature of reality and have had to re-evaluate many of my assumptions. For example, just learning that words are just abstractions and don't always accurately reflect reality was something I had never considered before. Now I try to be more careful with my word selection, but nobody's prefect :)
Not sure if you're subtly being critical of my use of the word. However I do have something to add to your thought. Richard Feynman has a really interesting quote, I'm paraphrasing, but he says it's hard to learn some things initially because everything is named slightly wrong. At the time I was also reading some Wittgenstein and that got me thinking about words and what they correspond to. I really do think they simply just correspond to the multitude of contexts they exist in now. For instance, if you look up the definition of words, you get more words, and then for those words, other words. It's turtles all the way down, it's all circular. The only grounding is the situation where the noises are made and then consequences follow, that's what I think characterizes them, and so atomic words are meaningless, they only have any discernible meaning in time and place in configuration with certain other words and with certain people.
I think using a word like "the matrix" in a strict sense, when it comes from a movie about just about all the evocative ideas in Western thought, is ludicrous. It's pedantry on steroids.
It’s important to remember that the music business is one of the first places where corporations used technology to replace human workers.
Back in my day (not really) you could get a job as a live musician, performing at restaurants, cafes, and lots of other places. Then, suddenly, people started to install record machines and buy music from record companies! So, you know, AI-generated music is just a progression of a centuries-old trend.
Quite ironically, this might be the disaster that causes the labels to license their libraries on more flexible terms to a larger variety of players, thus liberating the world from the current streaming service oligopoly.
Don't forget the funny brief transitional stage where a musician or band would play for multiple recording devices multiple times to create consumable masters with very limited lifetimes.
The capability of "Deep fakes" or "AI generated actors" or whatever you want to call it continues at a rapid pace.
How long before we get back to some modern version of "key signing parties," in which you check how many layers of "Humans you reasonably trust to verify that other people are humans" are between you and some chunk of content?
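To make the "layers of humans" idea concrete, here's a minimal sketch (the names and the trust graph are entirely hypothetical): a web-of-trust style count of how many verification hops separate you from whoever signed a piece of content.

```python
from collections import deque

# trust_graph[a] lists the people whose humanity "a" has personally verified.
trust_graph = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "carol": ["content_signer"],
    "bob": [],
}

def layers_between(start: str, target: str) -> int | None:
    """Breadth-first search: number of verification hops from start to target."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, depth = queue.popleft()
        if person == target:
            return depth
        for verified in trust_graph.get(person, []):
            if verified not in seen:
                seen.add(verified)
                queue.append((verified, depth + 1))
    return None  # no chain of trusted humans reaches the signer

print(layers_between("me", "content_signer"))  # 3 hops of human vouching
```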
> How long before we get back to some modern version of "key signing parties," in which you check how many layers of "Humans you reasonably trust to verify that other people are humans" are between you and some chunk of content?
Out of nowhere, random teenagers with computers generating anything they want from whoever they want, with perfect quality? And that's just the beginning?
Yeah, if you are a record company, this must feel like the Rapture.
Imo we’re fucking up pretty hard, it’s interesting technology but I don’t think people realise that we’re all going to be “the record labels” soon.
Imo it’s worse that most of the good works generated are based off people's hard work. People used to complain when China would steal IP from the USA and sell it at a discounted price… this is going to make that look like a joke, haha. Every single person on this forum will be affected too; don’t think you won’t be.
Just because we can do things doesn’t necessarily mean we should keep doing them. Using massive datasets that people innocently uploaded to the internet to then reproduce works that will make them poorer is kind of shit?
I really don't want to contemplate the implications for crime or politics when we cannot trust even the most convincing evidence of wrongdoing. Innocent people can be smeared, guilty people can plausibly claim it was faked, and the inevitable result will be a world where truth is so obscured that people just believe whatever aligns with their preconceived notions. Which happens already, but it will get much worse.
And in the meantime, a generation of people who didn't grow up in that world will fall victim to forgeries of a quality they cannot conceive of.
So what is to be done about it? Do we just let it continue as is and slowly erode society? Do we just continue blindly with further technological advancement, no matter what the social or economic costs?
I get this sounds hyperbolic but I really don’t see how the internet can continue like this. Like personally, if I was an artist, writer or musician I’d be never releasing any works online ever again. Sure people can get around that but like, I’d be making it harder to steal my stuff.
This is probably the moment when the internet is no longer “cool” and just starts to get weird.
I’m already becoming much, much more suspicious of the internet and even starting to question online forum participation; I have no idea if the comments section is real people or generated content made to influence people. It’s really turning me off cyberspace.
I'm not sure what can happen except maybe a country-balkanized internet with all actions online tied to a real ID. When the internet is all bots, maybe Facebook can keep its relevance by guaranteeing real human presences behind accounts.
People are becoming more concerned about where their electricity and food and heat and shelter are going to come from. It won’t really be AI that tells you to get into the pod because the AI said it’s the only way to survive.
I expect the end result, and I think this is happening quickly, will be a rough splitting of humanity into the "onlines" and "offlines."
The onlines will live in their feedback chamber of AI generated content to stoke their ego and convince them they're 100% right and wise and, you know, you can express how right and wise you are to others if you buy this product. The same thing we have now, just turned up to about 50 on the 0-10 dial.
The offlines will interact mostly with other humans in person, and dismiss the concerns of the online world, and... pretty much act like humans have throughout history, for good and for bad.
The problem is that the online world takes an awful lot of resources and energy we seem to be getting pretty short on (at least if you want it reliable enough to run a datacenter), so that world has some endpoint, eventually.
The "online" world now has fairly solid influence over voting, while any "offline" world is going to find it very hard to organize any kind of politics at all.
If you are a genuine new talent, the next Hendrix, then you'll have unique content to offer, and maybe things will be even brighter for you as you stand out above a soup of algorithmic sameness.
But if you're a cookie cutter muso who mainly derives from other people's stuff then yeah, might be time to find a new gig (to make money at anyway).
"Unique content" is half an hour away of getting absorbed into whatever AI model you have, which can than crunch out an endless amount of variations on it.
The issue with AI art isn't just what it can do, but how insanely fast it is at it. It can start copying you before you have even finished your art piece and it can produce new ones 1000x faster than you, in your style. It already can produce images faster than you can view them and generate text faster than you can read. An AI-Spotify that just streams endless amount of AI music, custom written specifically for you, might not be far away.
And that's not going to be the end of it; the next step would be generating unique content from scratch, since AI can be trained on what people are listening to and then predict what might be the next trending thing.
I really don't see how one can compete with that as a singular artist in the long run. Some big companies might survive by being the ones that run those AI models, but the individual artist will have so much competition that they won't even be noticed in the first place.
Please take the time to read about how he got started as an artist and see if you still hold your views. A major avenue for younger artists to grow will soon be done for.
Anyway, as others have said, it wouldn’t matter; his music would now be copied in seconds and he wouldn’t get paid to continue.
The simple fact is that anybody can be using these "impossible technologies" RIGHT NOW; just log in and play around: this time is different. It's no longer reading about scientific marvels and wondering "Yeah, but probably no"... now you can actually play with these large language models yourself, and if you aren't left speechless, I'll recommend that you probably need to learn how to ask better questions / interface with AI.
With song copyright, for example, the gold standard has been 'is the melody the same'.
How do you define a voice specifically enough to copyright?
What if some dude just happens to sound like Drake, because he won the Punnett square of larynxes? Is his work now a copyright violation?
What if he sounds like Drake when he puts on the Drake filter? Did the Drake filter violate copyright? Does his voice violate copyright only when filtered?
The people deciding these questions are not philosophers of sound. They're lawyers and industry hacks. It's going to get messy.
Most Drake songs are repetitive and largely average. He's definitely got some with standout production, but don't kid yourself that he's some artistic virtuoso. The song became popular because it actually is a reasonably good imitation of early-era Drake.
I’ve just been searching for info on how much of this song is actually “AI generated”. The initial news reports I read described only “AI generated vocals” (ie. effectively musical TTS with accurate AI modelled voices).
But the phrase “AI generated song” implies more than just the voices.
Anyone have a non-clickbait source which provides more details?
edit: wow, if your link (i.e. the song) was AI-derived (and thus not-copyrightable) then I am gd speechless. Smooth background, great vocals — lyrics too, of course — but wowsers, my tenant was just realizing that "the AI shit you been talking non-stop about is actually real: DRAKE is upset, shit is crazy!"
So, that and this and these are interesting datapoints upon insanity.timeline.
Copyright shouldn’t exist, for the simple reason that no creative work is made in a vacuum. Every song is derivative of its influences, and only the big companies have the ability to enforce copyright, making it more a tool of wealth consolidation than of justice.
If we lose our jobs to ai built by large entities using our content for free that’s going to solve wealth consolidation issues how? By making sure we are all equally poor?
There will always be an aspect of our culture ai cannot take away, that we are human. The invention of the radio spelled doom for the local musician but here we are a century later with live music still happening. The fact that musicians and record companies are such big business to me has done more damage to music than ai ever will
I am not blaming the tech; I am just cautious about how it's being used by the very same big businesses that you mention. If used right, AI can be a powerful enabler. It has the potential to create opportunity by enabling vast numbers of people to perform blue-collar jobs, or it can damage society by devastating said jobs and concentrating power in the hands of those who blatantly take what is not theirs. If those who generate the art that suddenly millions of people can use to create their own products are paid, then that's a win-win. Similarly for any job that this insane marketing campaign claims will be replaced. Proper use of AI can lead to more independent work, because everyone can be their own team.
I think AI will mostly lead to an automation and ubiquity of mediocre content. The high quality stuff will still be rare and made by those with the ability, talented artisans.
> There will always be an aspect of our culture that ai cannot take away
Sure, but who cares? All it needs to do is take away the good bit.
Also, the example of radio is telling. Radio destroyed musicianship as a career. Before the gods of rock and roll, there were tavern fiddlers, and now we don't even need the gods anymore. A classic example of ever-greater concentration of capital.
No, it isn't. If you genuinely believe this then you live under a rock and are unaware of microtonality, creating your own scales et al.
Most existing music is 4/4 too, for starters, because it is club-oriented; there are tons of other underexplored time signatures. If what you said was true, then scores for picture and games wouldn't be so innovative.
I had a few late friends who pioneered never-before-seen technologies, for which the tools created whole new genres.
People said the same shit decades ago. Smacks of naivety.
I’m not saying new stuff can’t happen, it’s just that the new stuff has been built on the shoulders of giants, and to then take sole ownership and right to that new stuff just because you have the resources to defend your claim is tyrannical.
Well, back in my day prog rock and jazz were all we had! My god, kids should be listening to fugues played on organs and Gregorian chants... Kids these days just don't know music, gosh darnit.
Not exactly related to the content of the article, but rather the technology that displays it to me: the year is 2023, and the font face "popped in" a full second after the unstyled text was showing. If you can't use a custom font on your web page in an elegant, high-quality way, just use default system fonts.
UMG's actions will likely result in exactly the opposite of their intended effect.
Everyone is tired of the music industry's gatekeeping, and this will only result in a rally behind AI, even if AI itself becomes problematic in the end.
"Where is the value in AI creativity? Does it actually create anything? What can be said of artistic works in which the creation was essentially a series of playing at the lottery of the machine until something fantastic emerges? Spinning the wheel of chance and instead of red, black or some number, it is a series of lexical phrases with which we hope to catch the spinning ball on the Roulette and award us a work of art." - https://dakara.substack.com/p/ai-and-the-end-to-all-things
If you replace "Popular Music" with "Anything Else" (e.g. software engineering, standardized examination) you might see the silliness of your statement.