
This is because they: expanded upon an existing sample-inefficient technology, commercialized the sample-inefficient technology using copyrighted data, fundraised and expanded operations using this legally questionable technology, and are now complaining that they can’t balance their business expenses if they can’t keep using other people’s copyrighted works to feed their extremely sample-inefficient data monster.

What they could have done was stay as an open research org when the tech started to work, and focus on sample efficiency and cultivating copyright-free data sets. But they were too impatient to commercialize.

Whoops.

I don’t actually think intellectual property restrictions are good, but I don’t want a world where small creators have their rights stomped on by multi-billion-dollar corporations. Either we have copyright or we don’t, but unless OpenAI is also going to give up their copyrights, this seems deeply unfair.



If the model weights are distributed, or it's a free-forever product... fair use seems like a solid claim for LLMs, in my semi-informed opinion. But it seems like a tall leap to build a trillion-dollar corporation on fair use.

It would be unsurprising to me if OpenAI directly purchased copyright holding firms for the purpose of acquiring access rights to their material.


I don't think fair use applies at all.

First, it may not even be a thing in some countries.

Second, I can't upload the Bee Movie to YouTube for free and claim fair use. One key consideration in fair use is whether your use of the copyrighted work competes with the author. It's why you can take a screenshot of a GUI (which is copyrightable) without worry, since nobody is going to say "I don't need to purchase Affinity Photo because I found a screenshot on the Internet for free." In the case of AI, several whole classes of artists are vehemently and justifiably complaining that it will take their jobs.

It would be like saying photobashing is fair use, or collages are fair use.


> First, it may not even be a thing in some countries.

It doesn't matter as long as it's a thing in the country where the training occurs.


That doesn't sound right to me. If I use an AI trained in a way that is illegal in my country, the fact that it was trained elsewhere shouldn't mean anything. If it did, that would just be an easy way to skirt the law.


Exactly. Imagine a law becomes hostile to OpenAI and they decide to relocate to an offshore legal haven, still selling their services in the rest of the world because "they respect the law in their country".

We ban, embargo, and tax countries for political reasons. Human rights violations, copyright disregard, or tax evasion? Free business should be totally free!


They’re doing licensing deals left and right.


Licensing means recurring payments; why not buy Elsevier outright?


We're talking about UK law here, so is it a settled question whether or not OpenAI is violating copyright? In the US there was a recent high-profile challenge that was largely dismissed with prejudice.

There's the descriptive question over whether something is a copyright violation today and there's the prescriptive question over what kind of policies we'd like. Does anyone with more intimate knowledge of this area of law care to comment?


Search engines wouldn't work without the fair use doctrine, period. Otherwise a google search would be something like hundreds of thousands of dollars of copyright violations per search.

OpenAI doesn't seem to do anything different from Google. They can own a private library of copyrighted works and index/model it however they choose. They can offer products on that index/model in creative ways. And AFAICT, they don't distribute the index/model or the raw training set to others.
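
To make "indexing" concrete, here's a toy inverted index (an illustrative sketch only, obviously nothing like Google's actual implementation). Note the index is built from verbatim copies of the documents, but a query returns only document IDs, not the text:

    # Toy inverted index: map each word to the set of document IDs
    # containing it. Built FROM verbatim copies, but queries return
    # only IDs, never the underlying (potentially copyrighted) text.
    from collections import defaultdict

    docs = {
        1: "the quick brown fox",
        2: "the lazy dog",
    }

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.split():
            index[word].add(doc_id)

    print(sorted(index["the"]))  # [1, 2]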

Now artists may not like it, and we as a society may not like it, but it looks like they're not doing anything illegal. And to be fair, a court would have to slice the baby super thin here to allow Google to do their indexing, but not OpenAI.


Search engines drive traffic to websites. ChatGPT takes their content and sells it to people without paying the original creator.


Does ChatGPT just copy content and resell it? If it were that simple then there wouldn't be much debate and existing copyright law would be sufficient to settle the question. Also, whether or not Google is mutualistically benefiting everyone is not sufficient to answer the question of whether Google or OpenAI are violating copyright protections.


>Does ChatGPT just copy content and resell it?

Yes. There are oodles of lawsuits claiming as much: https://www.businessinsider.com/openai-lawsuit-copyrighted-d....

But yes, the crux of it is financial, not algorithmic. Google ALSO uses AI. If Google took sentences from 2 different web pages and stitched them together to form a paragraph, is that AI or search? [see Google BERT] What if they stitched together 3000 word-parts? What if they took sentences from 15 web pages, ranked them, and that's your results page? See, it's all the same thing. OpenAI stitches together word-parts, by comparison.

Now, when I say it's financial: when said information is presented, Google gladly gives a hit to the site, plus ad revenue if present. OpenAI does not do this. If OpenAI figured out how to do this, then we'd be cooking with gas.


> Also, whether or not Google is mutualistically benefiting everyone ...

Google arbitrarily chooses which sites to show on the search page, which may have nothing to do with relevance to the actual topic being searched. Only the sites Google thinks should be at the top are at the top.

If Google decides to blacklist you for whatever reason... well, tough.


Search engine, singular. The DOJ may want to break Google up now.


> I don’t actually think intellectual property restrictions are good

We already see huge amounts of LLM-generated garbage on the web, as blogspam, regular websites, etc. Amazon is getting flooded with LLM-written books. Chatbots/search engines regurgitate the actually valuable content.

If things continue, soon the motivation for generating new content will trend towards zero.


I don't think so. People will just follow personalities, not aggregators. I can go buy a Neal Stephenson and know it is not spam. I can watch Cody'sLab and know it is not spam.

By the web do you really mean Google and Facebook? Most of the channels/feeds I follow have not been flooded with spam. I am pleased that the large corporate content aggregators are struggling. It isn't like they produced content themselves anyway.


One day Neal Stephenson will be dead. How will we find the next one?


> If things continue, soon the motivation for generating new content will trend towards zero.

Only the ones trying to make money out of it. A lot of people create content because it has to be let out - it's like art. People have stuff to say, and they love saying it.


I did some searching and found that web text contains factual inaccuracies at rates of up to 15%, while LLM output is around 5-15%. Generally the LLM text has better formatting and language. So I would trust books > LLMs > web scrapes.


> If things continue, soon the motivation for generating new content will trend towards zero.

I see it differently: this generated nonsense will be filtered out, like the SEO 'hacks' of way back in the day, and the real articles that actually have some insight will be valued highly. The issue at hand is that handcrafted, unique data will be pooled without consent into a training set for further changes.

It's the equivalent of the rich eating the poor in my eyes just on automated steroids.

Edit: To avoid all doom and gloom: a method to guarantee your site/data is not added to a training set is required. This sounds good on paper to me, and is really needed, but just like people's personal privacy, I don't think the sentiment is there to put the effort in.
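
For what it's worth, the closest thing that exists today is robots.txt: OpenAI documents a GPTBot crawler user agent, and Google documents a Google-Extended token for AI-training opt-out. A minimal sketch (note it's a polite request that well-behaved crawlers honor, not a guarantee):

    # robots.txt at the site root.
    # GPTBot is OpenAI's documented crawler; Google-Extended is
    # Google's AI-training opt-out token. Compliance is voluntary.
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /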


Maybe if you attach #NoAI next to the content it would be filtered out.

But from a copyright standpoint, unless it regurgitates training data which is rare and only happens with a specific prefix, LLMs should be safe. They are decomposing, recombining, and regenerating language, not copying it. They execute user commands; they are not imitating any author on purpose.

Say you want to read Harry Potter without paying: would you rather borrow the book or ask ChatGPT to regurgitate the original? The latter would never work well, and it would be slower and more expensive to use AI. It's not a tool for infringement; it will naturally hallucinate and degrade the original, and it can't possibly store a perfect copy of everything it trains on.

In fact, LLMs often train on 15T tokens for 15B-parameter models, a 1000:1 ratio of tokens to parameters, while diffusion models compress 5B text-image pairs into a model of less than 5GB, so averaged out they hold about 1 byte of information from each training example. There is no space to put all that copyrighted data in a model; it necessarily compresses the hell out of it. Plain old piracy is still 1000x 'better' than infringement by AI.
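
Spelling that arithmetic out (a back-of-envelope sketch; the figures above are ballpark numbers, not tied to any specific model):

    # Rough information-budget arithmetic from the figures above.
    train_tokens = 15e12   # ~15T training tokens
    params       = 15e9    # ~15B parameters
    print(train_tokens / params)   # 1000.0 tokens per parameter

    pairs      = 5e9       # ~5B text-image training pairs
    model_size = 5e9       # a model under ~5GB is under 5e9 bytes
    print(model_size / pairs)      # ~1.0 byte of model per example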


> But from a copyright standpoint, unless it regurgitates training data which is rare and only happens with a specific prefix, LLMs should be safe.

From a copyright perspective, I do agree: you're not reproducing someone else's work. However, it's a new-ish area for open source licensing. If your product is built from other people's work and the author said 'go wild with this data', then shipping it without so much as a citation is maybe legally rude but not a problem. But without permission, i.e. no permissive license to USE someone's content, isn't it SOME kind of new infringement?

> In fact, LLMs often train on 15T tokens for 15B-parameter models, a 1000:1 ratio of tokens to parameters, while diffusion models compress 5B text-image pairs into a model of less than 5GB, so averaged out they hold about 1 byte of information from each training example.

This is wild and I didn't know it, but again, the gravy was made from bones... I'm not sure it matters here?


Copyright, as the name says, is concerned with copying, and does not confer rights to restrict automated analysis and statistical modeling. I think the trend against copyright coincides with the growth of the internet. We moved from long-form, passive consumption - books, movies, TV, and radio - to games, social networks, and search engines - generally interactive formats where there is no compensation for copying. We create our own content. LLMs are squarely in the interactive camp. Copyright was fit to the passive consumption model, but now it is standing in a precarious position facing the future. It is ineffective against the internet and AI.


I agree, but would you also agree that this is exactly why there is a need now for the courts to step in and define these 'new frontiers' of data reproduction?


> the gravy was made from bones

Probably not very relevant to the point, but gravy is made from the meat juices and fat. Bones get made into bone broth.


> Either we have copyright or we don’t

Carve-outs used for the social good would be fine with me


So-called "fair use" for editorial and review purposes seems like the social good you're talking about?

I'm not sure OpenAI offering a service for profit falls into that category.


The Second Circuit ruled in Authors Guild, Inc. v. Google, Inc. (a decision the Supreme Court declined to review) that Google scanning entire books and showing snippets was fair use. I don't see why AI training wouldn't also qualify.


That's not quite the same. The argument in that case was that Google did that not in order to copy and distribute books, but rather to provide search and indexing of certain words within the text. It was important that it wasn't for the same purpose as the original works. There were also limitations on the amount that would be shown.

In the case of LLM training, it's for the same purpose as the source material -- to generate code, or writing, or photographs, etc. Not only that, but in several instances it's been shown to reproduce source material, which is either derivative work or straight copying, depending.

They're different situations.


>In the case of LLM training, it's for the same purpose as the source material -- to generate code, or writing, or photographs, etc. Not only that, but in several instances it's been shown to reproduce source material, which is either derivative work or straight copying, depending.

If it's used in a reference/"inspiration" capacity (as opposed to verbatim copying), I doubt the rightsholders have anything to stand on here. Sure, their works might have been used to make other competing works, but all art is derivative, and I don't see why it would be legal for a human artist to "train" on past works of art but not an AI.

Alleging that AI models can reproduce some works verbatim is probably the stronger argument, but AFAIK you have to coax them pretty hard to do so, and therefore AI companies might be able to argue they're tools like photocopiers or such. Likewise, you could probably extract an entire book from Google Books by brute-forcing common n-grams, but Google wouldn't be held liable for that.


> If it's used in a reference/"inspiration" capacity (as opposed to verbatim copying), I doubt the rightsholders have anything to stand on here. Sure, their works might have been used to make other competing works, but all art is derivative, and I don't see why it would be legal for a human artist to "train" on past works of art but not an AI.

The "all art is derivative" line is essentially something people try to convince others of to justify breaking copyright law. It's not grounded in reality or law. It devalues creative work by implying the machine, with no lived experiences, is doing the same thing. And it's also completely wrong about what the specific term derivative work actually means in the context of copyright.

Derivative works deal in specifics. If your LLM reproduces a substantial portion of the story beats from Jurassic Park, you can bet it'd wind up in court. If it reuses identifiable characters, that is usually going to be derivative unless it can otherwise qualify under an exemption.

"But fanfic, fanart, etc." Is a common counterpoint but misses the commerce aspect of it. Here Open AI and similar are offering paid services based upon harvesting all of this information. When they produce for you the response to the prompt, they are, effectively, distributing that to you for money. That's the point at which it becomes a problem.

As an aside, it's an act of drinking the LLM Kool-Aid to believe it can be "inspired".

> Alleging that AI models can reproduce some works verbatim is probably the stronger argument, but AFAIK you have to coax them pretty hard to do so, and therefore AI companies might be able to argue they're tools like photocopiers or such.

They can try that argument, but it'll fall flat when you consider that a photocopier is reproduction-agnostic, while LLMs generally have a ton of work going into them to prevent them from outputting damaging things (and they still fail). That fact makes them not at all comparable to a photocopier, setting aside the more obvious "subscription software service" difference.

Also, what you "know" about the effort required is pretty wrong. For a recent example, see: https://www.latimes.com/entertainment-arts/business/story/20...

Here a number of people noticed specific producer tags showing up basically unaltered in the output when just asking for songs of a certain genre, songs which then also often sound similar to existing ones.


I don't understand how AI is any different than going to Fiverr. I can specify what I want in the style I want, and have a human do it versus a computer.

"I want a black and white logo in the style of an 1960's Archie comic for an ice cream shop named 'Bettys'"

Once I say "1960's Archie comic", why doesn't the work instantly become derivative, whether a human does it or a computer?

If I understand your argument correctly, the person from Fiverr will not pay license fees to the owner of Archie Comics, even though he may use it as reference material.


> I don't understand how AI is any different than going to Fiverr. I can specify what I want in the style I want, and have a human do it versus a computer.

I mean, if you ignore all the massive differences in paying a human to do something versus paying an LLM service to do it, sure. But you're effectively throwing at least ethics and care for the environment out the window in one case.

> Once I say "1960's Archie comic", why doesn't the work instantly become derivative, whether a human does it or a computer?

It does. Just because you can commission art from someone doesn't mean you won't get sued if you start trying to use it as the logo for your business. If you put Foghorn Leghorn on the logo of your chicken business, you'll be sued. Having an artist simply make you a logo like that on commission, if not transformative, could get them sued, though by doing it on commission the terms likely mean the requestor is the one who's liable.

Earlier I noted clean-room implementations. The software industry went to incredible lengths to be able to interoperate with competitors without violating copyright.


> Having an artist simply make you a logo like that on commission, if not transformative, could get them sued.

I agree with that. But then at what point does AI output become transformative versus not? No one owns the 1960's comic art style. But did mentioning "Archie" somehow make a difference? I don't think it does. I might be wrong.

So I don't understand how it becomes a problem once a computer does it rather than a human. If it's a legal issue, then I might be more persuaded. If it's an ethical issue only, well... it can be thrown on top of the heap of ethical issues businesses have long ignored, and your argument is screaming into the wind, as it were.

I think if you argue for UBI, or some kind of remuneration, then we can argue about who deserves it. Truck drivers who lose their jobs to AI? Open source software engineers? Artists? Writers? Normally this would be served by things like unemployment insurance in the US, but we as a society hate freeloaders.

I think with UBI or something similar we could have more people doing things they enjoy, so maybe we would have more artists, rather than fewer. But again, that would be socialism, which we also hate.


>They can try that argument, but it'll fall flat when you consider that a photocopier is reproduction-agnostic, while LLMs generally have a ton of work going into them to prevent them from outputting damaging things (and they still fail). That fact makes them not at all comparable to a photocopier, setting aside the more obvious "subscription software service" difference.

And what about my other point, about coaxing Google Books into giving you a full copy of a book via multiple snippets?

>Here a number of people noticed getting specific producer tags basically unaltered in the output when just asking for songs of a certain genre, which then also often sound similar to existing songs.

Can you provide an alternate source for this? I skimmed your link and it does not substantiate that claim.


> And what about my other point, about coaxing Google Books into giving you a full copy of a book via multiple snippets?

Sure, given enough time and effort maybe a person could. That's not really relevant to the lawsuit or its details, though. Is your argument that they won the lawsuit, which cleared the way for mass copying and redistribution?

> Can you provide an alternate source for this? I skimmed your link and it does not substantiate that claim.

If you want to hear it yourself: https://youtu.be/_wuKZR0Pv-Q


Because lots of AI firms literally pirated books to train their model. See https://shkspr.mobi/blog/2023/07/fruit-of-the-poisonous-llam...

Neither the authors nor publishers received any compensation for having their work ingested. It isn't like OpenAI went to Amazon and bought one copy of every book - they downloaded a torrent.


Fair use depends very heavily on the nature of the use.

A key part of Google's defense was not only that it wasn't using the entire books to reproduce the entire book, but also that it was taking measures to prevent people from abusing Google's systems to reproduce an entire book. That's a lot of work to emphasize that the impact on the market (in other words, the fourth factor) is as minimal as practicable--and that's the crux of the analysis.

When you're instead scanning someone's stock image database to build a tool to generate stock images... the fourth factor is jumping up and down screaming at you "YOU LOSE" and your best defense is that it's not the training, it's the tool built on the training data that is infringing the copyright.


I don't see why it would qualify as fair use.

OpenAI are building a product to offer to the public for profit.

If I employ 10,000 humans to read books and provide summaries or texts "inspired by" those books, I need to pay for the copies of the books those humans read.


>OpenAI are building a product to offer to the public for profit.

So was Google Books.

>If I employ 10,000 humans to read books and provide summaries or texts "inspired by" those books, I need to pay for the copies of the books those humans read.

IANAL, but that would be perfectly legal. Summaries aren't copyrightable, and if you can acquire the book free but legally (e.g. a library, borrowing from a friend, buying it from a store and then returning it), there's nothing the publisher can do.


The summary is also copyrighted, by its author, but it is a different copyright than the book's. Probably also a bit thinner copyright than the book's. Mere collections (notably, an alphabetized list of names and corresponding telephone numbers) are effectively not copyrightable, since they require no creativity. A summary, though, requires quite a bit of creativity: what to emphasize, how to compress the ideas and arguments into a concise statement, etc.


I find English law's way of legislating through the judiciary terrible. When a new situation appears (like whether machines should be allowed to learn on copyrighted works), the legality of the situation should be decided explicitly by the legislative branch, not by arcane interpretations of previous judicial decisions.


> When a new situation appears (like whether machines should be allowed to learn on copyrighted works), the legality of the situation should be decided explicitly by the legislative branch

Okay, so what happens in the interim? If the legislature hasn't spoken yet, is the practice assumed to be legal or assumed to be illegal? Or is this assumption tested on a case-by-case basis, with both sides making arguments as to why it should be treated as legal/illegal in this specific scenario?


Possibly because companies using AI don't train them on copyrighted material simply for research or the public good, but to turn a profit on content generated by those models, and often explicitly in the style of the creators of that content.

It's difficult to argue that, for instance, training a model on all of Frank Miller's work, then prompting it to generate comic art in Frank Miller's style, then selling that, is fair use.


You could hire an artist to do that, though.


An artist that copied an existing artist's style to the degree that LLMs do would be sued.


Source? Has any artist been successfully sued for copying "style"?


I don't know. I'm pretty sure if I traced his work and passed it off as my own, just with maybe a few tweaks, I'd get in some kind of trouble with someone.

And in my opinion (which is unfortunately more controversial than I think it should be) what LLMs do is far more akin to tracing than inspired creative expression. And I think intent is relevant here. Someone using an LLM to create a product in Frank Miller's style, trained on Frank Miller's work, isn't merely trying to create something inspired by his style, so much as create a Frank Miller product without having to pay Frank Miller.


There was probably an early stage where Frank Miller learned by tracing, and many comic artists trace and augment photos of models. I'm not sure how you can claim that diffusion image models are tracing things, though, unless you're using something like ControlNet on outlines of Frank Miller drawings.


>I'd get in some kind of trouble with someone.

"trouble" of what nature? You'd probably face more social consequences than legal.


And google books was "simply for research or the public good"?


As far as I'm aware, Google didn't create new versions of those books based on that content.

Even if you want to argue that fair use didn't apply to Google, it clearly applies far less to what AI is used for.


>As far as I'm aware, Google didn't create new versions of those books based on that content.

It's _copy_right. If reproducing verbatim snippets was "transformative" enough to fall under fair use, I don't see why producing whole new books would not count as "transformative" enough. Copyright is a regime to grant monopoly over a specific work, it's not a regime to prevent competition from others in general.


There is an interesting argument here.

If Google were selling brand new books created only by taking snippets from other books, would that also fall under fair use?


They would, but OpenAI isn't explicitly doing that either. You can probably smuggle out a full copy of a book via snippets, but that doesn't put Google on the hook for copyright infringement. Likewise, if OpenAI makes you jump through hoops to reproduce works verbatim, they should be in the clear.


I would assume that Google's use counts as fair use precisely because they are only providing snippets in the context of a search utility which doesn't provide a market substitute for the original product, whereas the business model of AI is to provide market substitutes based on the original product.


Providing a snippet arguably is a substitute for the original product in some contexts. For instance, if you wanted to know who ate the most burgers in one sitting, the snippet on Google Books fully replaces a full copy of Guinness World Records. Likewise for other sorts of information, like how to invert a matrix.


On a side note, does anyone use Google Books? I haven't found useful information with it in a long time.


Etymologists use it a lot. I went down an etymology rabbit hole a while back looking for the origin of a phrase, and Google Books was immensely helpful.


Libraries would be illegal without centuries of precedent giving them a “legal inertia”.

Google benefited from the exact same kind of bulk copyrighted data collection. They made verbatim copies of the text of both web sites and just about every book in existence!

This kind of argument seems disingenuous to me. Either ban Internet search or acknowledge that training an AI on copyrighted text is no different than a student reading every book in a public library.

Speaking of which: we all have free access to GPT-4o without advertising. It feels like asking a knowledgeable librarian.


Actually Google's faced this problem before. Caching is generally legal, and Google's just holding a cache to search through instead of requesting every site every time anyone searches.


They show snippets from web sites and show subsets of books as well, including artworks and other diagrams in their entirety.

For comparison, I had to fight for a year to get copyright permission to show book cover artwork in a library enquiry system! If I simply Google the same book titles or ISBNs, Google will show me the pictures directly. E.g.: https://www.google.com/search?q=greg+egan+eon&udm=2

How is that legal!? We had to pay to get access! In public and school libraries!

The law in most western countries is very clear that book covers are "entire" works of art, and can only be displayed by organisations that pay the copyright holders.

Google, Bing, and others violate copyright on a mass scale on a daily basis. Not to mention YouTube, TikTok, and Reels, all of which are packed wall-to-wall with "movie clips" and "TV show highlights". They're publishing copyrighted content uploaded by random people and then distributing the advertising revenue to the copyright violators instead of the copyright holders.

This isn't "caching" or "indexing", it's verbatim serving.


And they've faced legal battles over how much information they copy and display to users - not over the fact they do it at all.


A librarian that can't cite its sources, and whose services are only free while the VC funding is rolling in.


The social good would be not carving out an exemption for OAI, if we want to maintain a culture that rewards human creative output (which also happens to be what models need, so it's a win-win).

The good news is it's not true that OAI can't be profitable while paying copyright holders, so we should easily be able to find a balance.


OpenAI making money isn’t the social good tho


Your comment made me think of something. Wait, I know!...

Those sometimes quite expensive textbooks we use to educate children and students? Surely those should also be usable without a fee ever being paid to the authors or publishers; education is clearly a social good!


How much are they selling social good for these days? $5/1 million tokens?

Was the addition of a former member of an alphabet agency part of their social good initiative?


Nope. Don't trust the carvers, given the current state of campaign laws. Thank the SCOTUS for Citizens United.


I'm all for diminishing copyright power, but it has to be done for all. Besides, how are you going to keep that carve-out from letting everyone use AI as a giant copyright-laundering scheme? Does that mean The Pirate Bay could have saved themselves as long as they claimed their list of torrents was for "training" only? I guess that's effectively what all the datasets used for these high-end LLMs already are...


I'm not. I want authors and artists to have some ability to make a living.


No, it's because what they are trying to automate is creativity. I can ask any artist/writer to write me a story or create a character like "X" but black, and they get to use their knowledge of that character. An AI needs context just like any other intelligence.


This. And, commercial or not, what they're creating is fundamentally more important and more valuable for humanity than any of the inputs that go into it.

And, by "they", I don't mean just OpenAI. If it were just them, you could perhaps argue they shouldn't be getting a free pass (I'd rather go with "potentially too dangerous to be allowed to exist" angle though). But it's not just them. The same training process and the same use of copyrighted content powers all the commercial and non-commercial models, including SOTA competitors like Claude 3.5 Sonnet, and "open source" wannabes derived from various Llama versions, which are not far behind.

To me, the "open source" models alone are already good enough to outweigh any copyrights being violated through the use of unlicensed materials in training.


This. It's very easy to build a new technology like this and realise the benefits of that tech while training it on your data. They knew the key ingredient was more data, and without even asking, they just started to consume.

Even if they wanted to move to a for-profit company, they could have negotiated deals with data holders, as they would with third-party software vendors they might need to use.

Instead, they ignored it all, and it's the 'what-if' scenario from when open source software was first being introduced. I'm old enough to remember everyone saying "well, companies will just take your work and sell it for themselves." This happened a few times, and it's been fought out in the courts. In fact, so has the use of produced content like written articles, videos, etc.; there is a fair-use clause. I'm not a lawyer, but even I know that if the key ingredient of your for-profit business is 'fair-use content', it's not fair use.

I like the idea of AI; I absolutely hate the execution. The race to the top and the mentality of 'ask for forgiveness' really can't apply here.


> What they could have done was stay as an open research org [...] and focus on sample efficiency and cultivating copyright-free data sets.

Ironically, if they stayed a non-profit research org, they could explicitly use copyrighted works since many countries have copyright exemptions for research.


> I don’t want a world where

You don't want the LLM world? Let's send it back where it came from. Can we unpublish "Attention Is All You Need"?


I think LLMs are cool and that research should be focused on sample efficiency, building public copyright-free datasets, and recognizing and moving beyond the limits of LLMs. These systems are often very half-baked, and I am more interested in seeing research that moves to new ways of machine thinking than I am in dumping more and more GPUs into this promising but incomplete technology.

As one example, Yann LeCun's vision of a system called JEPA [1] is interesting to me. It may not be the solution we need, but this type of thinking - taking what we have learned and exploring new architectures that may have even better real world performance - is what interests me.

[1] https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-jo...



