Because the models are not creating a 1:1 replacement of the original work.
As mentioned before, "style" is not something subject to copyright, and the model builds a model of that style. When fine-tuning a model, one generally does not want it to recreate the original images, as that would overfit the model and render it essentially useless.
When it comes to code, there is a higher chance of getting a one-to-one clone of the input, since the options available when writing an algorithm, or even a simple function, are dramatically reduced imo.
> Because the models are not creating a 1:1 replacement of the original work.
Since when did that become a requirement? If those are the rules now, then cutting the final credits is good enough to start torrenting movies.
> When it comes to code, there is a higher chance of getting a one-to-one clone of the input as the options used in creating an algorithm, or even a simple function are dramatically reduced imo.
If you're going to consider each function within a larger work as an individual work, that makes the 1:1 replacement claim more dubious. In order to recognizably imitate a style, one or more features of that style have to be recognizably copied, although no single area of the illustration would have to be. A function is a facet of a complete program just like recognizable features of a style are facets of each work an artist produces. If it helps, consider an artist's style as their own personal utility library.
If I made a scene for scene remake of a Disney movie, with an ugly woman for a princess and social commentary/satirical injections, it would be defensible as fair use in court.
I think when it comes to art, less-than-one-to-one clones are often still functionally equivalent in the minds of many viewers. Stylistic and thematic content is often just as important as, if not more important than, the exact composition. But currently the law does agree that this is not copyrightable. And sometimes independent artists profit and make a name for themselves copying other styles, and I think that's great.
But could it be considered an intellectual and sociological denial-of-service attack when it's scaled to the point where a machine can crank out dozens of derivative works per minute? I'm not sure this is a situation at all comparable to human artists making derivative works. Those involve long periods of concentration, focus, and reflection by a conscious human agent to pull off, thus in some sense furthering the intellectual development of humanity and fostering a deeper appreciation for the source work. The machine does none of that; it's sort of just a photocopier one step removed in hyperspace, copying some of the artists' abstractions instead of their brush strokes.
I have written projects where I'd consider a handful of lines of code to be the core central tenet of the entire project that everything else is built up around. Copy those lines and everything else is scaffolding that falls out naturally from the development process.
I very much doubt this. One of the clues is the reproduction of artifacts that have nothing to do with the original prompt - for example, the Getty Images logo.
In this case, I think that the hand and its shadow are both part of a single original artwork. If they weren't, there would be two other explanations I could see: one, that the AI understands lighting and 3D somewhat, and applied the shadow accordingly, or two, that there are many such images that have a hand and a shadow in a similar place, and therefore the AI has it as an association of sorts. I find both of these explanations less likely than the original theory, which would be that the hand and the shadow are part of a single original work that was, even if not copied verbatim, then used as unmistakable influence for the result.
I'd recommend Teeline. It was developed for British journalists. I used it off and on when I was a scrum master to keep notes during standups.
https://en.wikipedia.org/wiki/Teeline_Shorthand
Well... Wintermute wasn't essentially hostile to humanity - it just had its own goals. I find that way more cyberpunk than a dystopian AI bent on humanity's destruction.
From the Ruby side, Sinatra is just as fast as Flask for getting something going. Having supported all three (Rails, Flask, Sinatra), if it's going to be an API I usually start with Sinatra.
Oddly enough, last week I showed this video to my 9-year-old daughter, who has a passing interest in animation, and she understood the technology clearly.
That fact alone means that we have to step up our game in modern technical documentation.