The SLS is widely understood to be less about research and development and more about pork barreling and a jobs program. I don't buy that humanity should avoid investing in space travel because the US Congress and the surrounding governmental bureaucracy are not running projects effectively. That argument would stop just about any human activity.
It's pretty difficult to predict what spinoffs would come from attempting to put a colony on Mars. I would imagine that to succeed we would need to solve a lot of challenges in human biology, genetic engineering, and automation, along with many novel engineering problems.
But economics is not the only reason to do things, and I bet you don't expect everything humans do to have a purely economic rationale.
Why does the reporting say she "abandoned" her car at the beach? She was at the beach when she was located. People usually do not take their cars with them once they arrive at their destination. Getting out and walking is not abandoning the car.
If you're talking about the longevity of solar panels or wind turbines: given that their lifespan is measured in tens of years, they do give you some ability to respond to an external group cutting off your supply. It's not remotely the same as oil/gas dependency.
I think more people than you know are using Sinatra-like frameworks on Java or C#, because they can get the job done under heavy load. If you were thinking about building your application around a schema, you'd look at GraphQL. If you were going to make the "new Ruby on Rails", it would have to build out the front end and back end from the schema.
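To illustrate the schema-first idea, here's a hypothetical sketch (the type and field names are made up, not from any particular framework): one GraphQL schema that could, in principle, drive code generation for both the server and the client.

```graphql
# Hypothetical schema for a blog app; all names here are illustrative.
type User {
  id: ID!
  name: String!
  posts: [Post!]!
}

type Post {
  id: ID!
  title: String!
  body: String!
  author: User!
}

type Query {
  post(id: ID!): Post
  posts(limit: Int = 10): [Post!]!
}
```

A "new Rails" in this vein would generate the resolvers, the database layer, and the typed front-end queries from this one definition, rather than leaving each layer to be wired up by hand.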
The vast majority of people cannot do the structured thinking to model the real world in a fashion that a computer can understand. That is a key attribute of a good dev. If someone can do this, and describe it well enough to a LLM, they are a dev. It's not devs that will be taken out of the loop, unless you define a dev as someone who is just a translator between human language and machine code.
The LLM + agentic framework supplements many core dev skills that people often practice only sporadically: tirelessly testing things, spec-driven development (thinking through what you're going to make before you make it), and debugging.
Some devs may have all of these, some may lack several.
I certainly benefit from having a better overview while vibe-coding: I don’t get lost in some rabbit-hole trying to optimise something that is good enough, and my most important tasks are always well-defined before I start.
It doesn't seem unreasonable. If you train a model that can reliably reproduce thousands or millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed? Just because it's a fancy AI model, is it OK?
> that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed?
LLMs are hardly reliable ways to reproduce copyrighted works. The closest examples usually involve prompting the LLM with a significant portion of the copyrighted work and then seeing whether it can predict a number of tokens that follow. It's a big stretch to say that they're reliably reproducing copyrighted works any more than, say, a Google search producing a short excerpt of a document in the search results or a blog writer quoting a section of a book.
It’s also interesting to see the sudden anti-LLM takes that twist themselves into arguing against tools or platforms that might reproduce some copyrighted content. By this argument, should BitTorrent also be banned? If someone posts a section of copyrighted content to Hacker News as a comment, should YCombinator be held responsible?
They're probably training them to refuse, but fundamentally the models are obviously too small to memorise most content; they can only do it when there are many copies in the training set. Quotation is a waste of parameters better used for generalisation.
The other thing is that approximately all of the training set is copyrighted, because that's the default even for e.g. comments on forums like this comment you're reading now.
The other other thing is that at least two of the big model makers went and pirated book archives on top of crawling the web.
LLMs even fail on tasks like "repeat back to me exactly the following text: ..." To say they can exactly and reliably reproduce copyrighted work is quite a claim.
You can also ask people to repeat a text and some will fail.
What I want to say is that even if some LLMs (probably only older ones) fail, that doesn't mean future ones will fail in the majority of cases. Especially if benchmarks indicate they are becoming smarter over time.
It is entirely unreasonable to prevent a general-purpose model from being distributed for the largely frivolous reason that some copyrighted works might be approximated using it. We don't make metallurgy illegal because it's possible to make guns with metal.
When a model that has this capability is being distributed, copyright infringement is not happening. It happens when a person _uses_ the model to reproduce a copyrighted work without the appropriate license. This is not meaningfully different from the distinction between my ISP selling me internet access and me using said internet access to download copyrighted material. If the copyright holders want to pursue people who are actually doing copyright infringement, they should have to sue those people, and they shouldn't have broad power to shut down anything and everything that could be construed as maybe being capable of helping copyright infringement.
Copyright protections aren't valuable enough to society to destroy everything else in society just to make enforcing copyright easier. In fact, considering how it is actually enforced today, it's not hard to argue that the impact of copyright on modern society is a net negative.
If the Xerox machine had all of the copyrighted works in it and you just had to ask it nicely to print them I think you'd say the tool is in the wrong there, not the user.
Xerox already went through that lawsuit and won, which is why photocopiers still exist. The tool isn't in the wrong for being told to print out the copyrighted works. The user still had to make the conscious decision to copy that particular work. Hence, still the user's fault.
With a Xerox machine you bring the copyrighted work to the printer; with an LLM you don't upload the data first, it is already in the machine. If you could get LLMs without training data (however that would work) and the user had to provide the data, then it would be OK.
You don't "upload" data to an LLM, but that's already been explained multiple times, and evidently it didn't soak in.
LLMs extract semantic information from their training data and store it at extremely low precision in latent space. To the extent original works can be recovered from them, those works were nothing intrinsically special to begin with. At best such works simply milk our existing culture by recapitulating ancient archetypes, a la Harry Potter or Star Wars.
If the copyright cartels choose to fight AI, the copyright cartels will and must lose. This isn't Napster Part 2: Electric Boogaloo. There is too much at stake this time.
One of the reasons the New York Times didn't supply the prompts in their lawsuit is because it takes an enormous amount of effort to get LLMs to produce copyrighted works. In particular, you have to actually hand LLMs copyrighted works in the prompt to get them to continue it.
It's not like users are accidentally producing copies of Harry Potter.
Helpfully, the law already disagrees. The Xerox machine tampers with the printed result, leaving a faint signature that is meant to help detect forgeries. You know, for when users copy things that are actually illegal to copy. Xerox machines (and every other printer sold today) literally leave a paper trail to trace copies back to them.
You're quite right. Still, it's a decent example of blaming the tool for the actions of its users. The law clearly exerted enough pressure to convince the tool maker to modify that tool against the user's wishes.
According to the law in some jurisdictions, it is (notably in most EU Member States, and several other countries worldwide).
In those places, fees ("reprographic levies") are actually built into the price of the appliance and its supplies, and public operators may have to pay more based on usage. That money goes towards funds created to compensate copyright holders for loss of profit due to copyright infringement carried out through the use of photocopiers.
Xerox is in no way singled out and discriminated against. (Yes, I know this is an Americanism)
If I've copied someone else's copyrighted work on my Xerox machine, then give it to you, you can't reproduce the work I copied. If I leave a copy of it in the scanner when I give it to you, that's another story. The issue here isn't the ability of an LLM to produce it when I provide it with the copyrighted work as an input, it's whether or not there's an input baked-in at the time of distribution that gives it the ability to continue producing it even if the person who receives it doesn't have access to the work to provide it in the first place.
To be clear, I don't have any particular insight on whether this is possible right now with LLMs, and I'm not taking a stance on copyright law in general with this comment. I don't think your argument makes sense though, because there's a clear technical difference that seems like it would be pretty significant as a matter of law. There are plenty of reasonable arguments against things like the agreement mentioned in the article, but in my opinion, your objection isn't one of them.
You can train an LLM on completely clean data (creative commons and legally licensed text), and at inference time someone can just put a whole article or chapter into the model and have full access to regenerate it however they like.
Re-quoting the section the parent comment included from this agreement:
> > GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
It sounds to me like the LLM you describe would be covered if the people distributing it put a clause in the license saying that users can't do that.
Yes, it is covered technically. But practically nobody knows what is infringing in the non-literal infringement case. It all depends on the judge and context. Was this idea sufficiently original or was it a necessity, or a generic pattern? Each level of abstraction can get protection from copyright. You can only know if you sue/get sued.
I find non-literal copyrights (total concept and feel, abstraction filtration comparison/AFC) to be a perverse way to interpret "protected expression" as "protected abstraction". It is a betrayal of future creative activities to prop up the past ones.
Well, pragmatically, I'd say no. We must judge regulations not by the well wishes and intentions behind them but the actual outcomes they have. These regulations affect people, jobs and lives.
The odds of the EU actually hitting a useful mark with these types of regulations, given their technical illiteracy, are just astronomically low.
I think OP is criticising blindly trusting that the regulation hits the mark just because Meta is mad about it. Zuckerberg can be a bastard and still correctly call out a burdensome law.
Expanded, you mean? Wouldn't it be nice if companies could buy and sell H-1B workers? They are not citizens anyway, and the Constitution doesn't give them any rights, even if they think they are white.
The point of trying to address climate change is not that it's impossible for us to adapt, it's that to adapt to the predicted rate of change is going to be a lot more expensive/disruptive than trying to slow the change to more natural levels.
> In general I think too much emphasis is placed on trying to preserve the current climate as is, and too little on trying to make this planet a good place to live for generations to come
This is on the money.
> The climate changes we are seeing now are not extraordinary
The rate of change is extraordinary, and that makes it expensive/difficult to adjust.
The rate of change over the last 150 years has certainly been dramatic. But it's nothing extraordinary over the course of Earth's history. There have been roughly 30 million 150-year periods in Earth's history, and given the planet's violent past (massive impacts, supervolcanoes) it's statistically absurd to think ours is the most extreme.
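A quick sanity check on that figure, assuming Earth's age of roughly 4.5 billion years:

```python
# Number of non-overlapping 150-year windows in Earth's ~4.5-billion-year history.
age_of_earth_years = 4.5e9
window_years = 150
windows = age_of_earth_years / window_years
print(f"{windows:,.0f}")  # prints 30,000,000
```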
It doesn't have to be anywhere near the most extreme for all of us to die.
We're not all going to die, but my point stands.
It's not about the earth shifting, it's what happens to all of the human processes, trade, infrastructure, farm production, and human lives along the way.
I know I sound like a denier, but haven't we existed through extremes of weather up until now? Why would more of the same be fatal? I'm dubious about the apocalyptic parts. No doubt the climate is warming, but the sky is falling? Not so sure.
I personally don't think we'll all die due to climate change, but I could easily see it setting off massive problems like wars over resource shortages.
The pandemic demonstrated that the global economy is a finely tuned machine. It doesn't take much to upset the operation of such a beast, and a big interruption could have massive unforeseen consequences.
Think about it like a complex, mission-critical software system. Are you going to make big changes in many key areas and rush them out to prod? Or do you want to release them individually over time?
Imagine a snowball rolling down the mountain that turns into a basketball sized ball, that into a mountain sized ball, that turns into a planet sized ball.
There are numerous periods throughout human history where we have almost gone extinct, and several where we have had under 10,000 people worldwide.