"Gen AI is the only mass-adoption technology that claims it's Ok to exploit everyone's work without permission, payment, or bringing them any other benefit."
Is it? What about the printing press, photography, the copier, the scanner ...
Sure, if a commercial image is used in a commercial setting, there is a potential legal case for infringement. But that case should NOT depend on the means of production; it should rest on a comparison of the produced images themselves.
Xerox should not be sued because you can use a copier to copy a book (trust me, kids, book copying used to be very, very big).
Art, by its social nature, is always derivative. I can use diffusion models to create incontestably original imagery. I can also try to get them to generate something close to an image in the training set, if the model was large enough relative to the training set or the work was really formulaic. However, it would be far easier and more efficient to just Google the image in the first place and patch it up with some Photoshop if that were my goal.
But the social nature of art also means that humans credit the originator and their influences - not the entire chain, of course, but at least the nearest neighbours of influence. A user of a diffusion generator, by contrast, does not even know the influences unless they specifically ask for them.
> Art, by its social nature, is always derivative. I can use diffusion models to create incontestably original imagery
How are you defining “incontestably original” here?
The output could not exist if not for the training set used to train the model. While the process of deriving the end result differs from the one humans use when creating artwork, the end result is still derived from other works, and the difference in originality compared to human output is one of degree, not of kind. (I acknowledge that the AI tool is enabled by a different process than the one humans use, but I’m not sure that a change in process changes the derivative nature of all subsequent output.)
As a thought experiment, imagine that, assuming we survive, after another million years of human evolution our brains can process imagery at the scale of generative AI models and can produce derivative output that takes into account more influences than any human could even begin to approach with our 2024 brains.
Is the output no longer derivative?
Now consider the future human’s interpretation of the work vs. the 2024 human’s interpretation of the work. “I’ve never seen anything like this”, says the 2024 human. “The influences from 5 billion artists over time are clear in this piece” says the future human.
The fundamental question is: on what basis is the output of an AI model original? What are the criteria for originality?