
>where a lot of people flooded them with seemingly well-written but ultimately wrong reports.

Are there any projects to auto-verify submitted bug reports? Perhaps by spinning up a VM and then having an agent attempt to reproduce the bug report? That would be neat.
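Something like this minimal sketch, using a throwaway Docker container as a stand-in for the VM (the triage_report helper and the convention that reports ship a repro.sh are made up for illustration):

    import pathlib, subprocess, tempfile

    def triage_report(repro_script: str, image: str = "ubuntu:24.04",
                      timeout: int = 300) -> bool:
        """True if the script exits non-zero in a clean sandbox,
        i.e. the reported failure actually reproduces."""
        with tempfile.TemporaryDirectory() as tmp:
            pathlib.Path(tmp, "repro.sh").write_text(repro_script)
            try:
                result = subprocess.run(
                    ["docker", "run", "--rm", "--network=none",  # no net
                     "-v", f"{tmp}:/work:ro",
                     image, "bash", "/work/repro.sh"],
                    capture_output=True, timeout=timeout)
            except subprocess.TimeoutExpired:
                return False  # hung scripts count as not reproduced
        return result.returncode != 0

The hard part, of course, is having an agent turn a prose bug report into that repro.sh in the first place.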


You can't change the law with a license agreement and redefine what constitutes a derivative work. If that were possible, people could have done it pre-LLMs.

Also, how would you prove it was in the training set? Re: your last sentence, the licensed work wasn't in the input in the chardet example ("no access to the old source tree").


Sure, a license can't create a new legal understanding of "derived work", but I think the intent of what Splinelinus said still works: a license outlines the terms under which a licensee can use the licensed Work. The license can say "if you train a model on the Work, then here are the terms that apply to the model or to what the model generates". If you accept the license, those terms apply, even if the phrase "derived work" never came up. I hope there are more licenses that include terms explicitly dealing with models trained on the Work.

Also, for comparison, both GPL and LGPL, when applied to software libraries (in the C sense of the word), assert that creating an application by linking with the library creates a derived work (derived from the library), and then they both give the terms that govern that "derived work" (which are reciprocal for GPL but not for LGPL). IANAL but I believe those terms are enforceable, even if the thing made by linking with the library does not meet a legal threshold for being a derived work.


Yeah, that's possible, but it seems to me more about contract law and creating an EULA for the code than about copyright-derived enforcement. Maybe 'copyleft' stuff will move in that direction.

It's barely tangential to the topic, but worth pointing out: I don't think there's firm legal consensus on your library point; that's just the position of the FSF. IANAL tho. https://en.wikipedia.org/wiki/GNU_General_Public_License#Lib...


This is also my thinking. A(ffero)GPL does something similar by saying a user of an API to AGPL code is bound by the AGPL license. You can always choose not to use the code, and not to use the license.

For the parent comment on discoverability, I honestly don't know. Some models list their data sources, others do not. But if it came down to a dispute, a court order might compel a search of the actual training data and the system that generated it.

For the second case of derived work through context inclusion, it may end up in a similar situation with forensic analysis of the data that generated some output.


Agree. But then, the test suite was the input (chardet). So, is the test suite creative or functional in nature? And does the concept of fair use apply globally?

Why do you assume the contract with Palantir doesn't have similar terms? Weird assumption.

I think we're going to have several years of people claiming genAI "didn't really do something novel here," despite experts saying otherwise, because people are scared by the idea that complex problem solving isn't exclusive to humans (regardless of whether these models are approaching general intelligence).

The Claude contract was only $100M/year, about 0.7% of Claude's $14B revenue run rate. Not sure we know anything about the number for OpenAI's new contract.
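Back-of-the-envelope on that percentage:

    >>> 100e6 / 14e9  # $100M/yr against a $14B run rate
    0.007142857142857143

i.e. roughly 0.7%.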

>Now everyone cares when Anthropic finally said No?

DoD started asking for the ability to do more stuff. That's the issue here. "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance... Fully autonomous weapons." https://www.anthropic.com/news/statement-department-of-war

>So the question is, why wasn't the open letter against OpenAI done last year when they signed that first military contract?

Again, this story isn't about people who are against any military contract.


> DoD started asking for the ability to do more stuff. That's the issue here.

The DoD (NSA) has crossed those lines with big tech before, when they carried out mass domestic surveillance of their own citizens (PRISM). They are willing to break laws to get the job done; those actions may only be found illegal ten years later, and by then it would be too late.

Companies should expect that governments may try to bend or even break rules or laws under veiled pretenses.

So why expect that they would be any different today? It is the nature of governments.

The scorpion (DoD) asks the frog (Anthropic) to carry it across the river, on the condition that it won't kill the frog. If you knew the scorpion always breaches the contract first, why work with it in the first place?

> Again, this story isn't about people who are against any military contract.

Gen AI mass surveillance conducted by the administration (or any other) is already done by Google, Microsoft, AWS, Oracle, and Palantir, and soon xAI (via X). So again, no point is being made here.

Given all the above, and given that OpenAI already signed a military contract last year: why didn't the employees and insiders publish an open letter back then, pledging that AI should not be used for mass surveillance?


>mass surveillance conducted by the administration (or any other) is already done by Google, Microsoft, AWS, Oracle, and Palantir, and soon xAI (via X). So again, no point is being made here.

Some companies do it, so that means Anthropic has to do it? This seems like nihilism.


> Some companies do it, so that means Anthropic has to do it?

Then I would have expected open letters from anonymous OpenAI employees back in 2025, when those military contracts were first signed, as a reassurance/pledge around those boundaries, since they ultimately knew. But of course, they came only after Anthropic rejected the DoW in 2026. Very late for that.

Anthropic should've known about PRISM and the nature of governments, and should not have given the benefit of the doubt in the first place, since it's directly incompatible with their 'principles'. Otherwise none of this would have happened.

Their first mistake was trusting the government (especially this one); like almost all governments, it is ready to test what it can get away with and breach the contract first.

Anthropic naively expected that this administration (or any other) would behave better. Their lesson is that they should never trust governments to honor their contracts.


I think people who aren't objecting to AI mass surveillance of populations either haven't recognized how thorough and invasive these technologies will become; think the current governments share their values and lists of enemies; or naively think government priorities will never change and scopes will never increase.

"It's about bad genetics... That's why people in Indian aren't considered the best aesthetically.. These arranged marriages aren't allowing the best genes to prevail" - clavicular[1]

I'm not sure why you'd go out on a limb to defend the guy singing songs glorifying Hitler. But you do you.

[1] https://x.com/WyronGaines/status/1995013476289245573?s=20


>Supposedly OpenAI had the same terms

"we put them into our agreement." is strange framing is Altman's tweet. Makes me think the agreement does mention the principles, but doesn't state them as binding rules DoD must follow.


https://x.com/SeanParnellASW/status/2027072228777734474?s=20

Here's the Chief Pentagon Spokesman pointing to the same verbiage and reiterating that they won't agree to those terms of use.


The first sentence of that post is:

> The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.


Saying something on Twitter is not a guarantee.

Tomorrow he could change his mind to "we want to use AI to develop autonomous weapons that operate without human involvement." The issue is that he wants Anthropic to change the terms of use because "We will not let ANY company dictate the terms regarding how we make operational decisions."


>he said this

>>no he didn’t he actually said the opposite of that and the link you just posted says the opposite of what you are claiming

>but he might change his mind!

Okay?


You asked repeatedly:

>Did the DoW ask for these things?

>Did the DoW ask for that?

I showed you where the spokesperson asked for the terms to change so they could make autonomous weapons. Now you're shifting the goalposts.


This administration would never lie, no siree! And especially not on Twitter!

I'm torn here. Who should we believe? The normal people or the people who operate exclusively in dishonesty?


And yet, if that statement were true, and not a lie, we would not be here right now, discussing their insistence upon being able to use software for precisely those things.

Is a pundit/politician lying to you a new experience?


Instead of me doing 'pip install skypilot' in a terminal, why doesn't SkyPilot make a smartphone app that will provision the cloud resources? Then they could even get rid of the WhatsApp/Telegram dependency by making the app a messaging client (to communicate with the OpenClaw server).
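For what it's worth, the app's backend wouldn't need much beyond what the CLI already does. A rough sketch assuming SkyPilot's Python API (sky.Task / sky.launch); the run command, resources, and cluster name here are placeholders:

    import sky

    # What a hypothetical "provision" button in the app could trigger:
    # the same launch the CLI performs, via SkyPilot's Python API.
    task = sky.Task(run="python serve_agent.py")
    task.set_resources(sky.Resources(accelerators="A100:1"))
    sky.launch(task, cluster_name="phone-provisioned")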
