Flagged this. As much as I also love to dunk on Microsoft, this kind of crapware is not the way to do it. [I personally] don’t want this on the front page. [And I think others should join me in that belief. Let’s try to keep it a place for interesting things that took some effort.]
It's hard to flag based on your opinion, though, because at least 200 upvoters thought it should be upvoted. Just don't upvote it. If a bunch of people (non-bots) upvote it, then by definition they found it interesting?
Then can you take yourselves back to Reddit or something? This used to be an interesting place to visit that was differentiated from other “hot take” havens on the net.
Well, when a good faith [0] submission from a respectable unaffiliated developer is disparaged on a place like reddit by some heavy-handed action, a natural reaction can be something like "Who died and made you bhagwan?"
Now that you mention it, the similarity is more obvious than I thought.
[0] OK, moderate faith, "there's an LLM involved so it's got to be crapware".
Not my downvote btw, never have never will, that's almost as chickenshit as can be. Almost.
I appreciate effort to make this a better place, I just would rather do it with upvotes more than anything else.
I don't prefer slop at all, plus people already know Microsoft has declined more in one recent year than any other, and this Microslop site is kind of a lame response, but it did draw some worthwhile comments. This is the kind of thing that has been on people's minds, so that's what comes out.
I don't think it's gostsamo promoting his own work, just something he ran across that emphasizes how bad it could get or maybe how bad it already is in some quarters.
Pure slop really and it's bound to get the strongest reaction, but quite interesting when plenty of 90% slop is passing under the radar quite regularly.
I’m so exhausted by all the thought leadership from AI company executives. Can you just market a product without a meta discussion on how things are changing so rapidly and where they’re headed? Or better yet, use those legions of agents to cure cancer or something.
> The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
The emphasized language is the delta between what OpenAI agreed and what Anthropic wanted.
OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.
I personally can agree with both, and I do believe that the Administration's behavior towards Anthropic was abhorrent, bad-faith and ultimately damaging to US interests.
> More succinctly: who decides what is legal here?
Why are people concentrating on legality? Look at the language:
| The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.
It's not just "legal". Their usage just needs to be consistent with one of those three conditions.
Operational requirements might just be a free pass to do whatever they want. The well-established protocols seem like a distraction from the second condition.
> who decides what is [consistent with operational requirements] here?
The Secretary of Defense. The same person who has directed people to do extrajudicial killings. Killings that would be war crimes even if those people were enemy combatants.
There's also subtle language elsewhere. Notice the word "domestic" shows up between "mass" and "surveillance"? We already have another agency that's exploited that one...
As an English speaker (not a lawyer), I'd have read the "and" in "applicable law, operational requirements, and well-established safety and oversight protocols" to mean that all three were required.
Why do you read that to mean just one is required?
The more relevant question is who is held accountable for the war crimes? OpenAI seem pretty confident it won't be OpenAI.
I can see the logic if we were talking about dumb weapons--the old debate about guns don't kill people, people kill people. Except now we are in fact talking about guns that kill people.
> This is key because it's the textbook example of a war crime. It's also something that the current administration has bragged doing dozens of times.
> More succinctly: who decides what is legal here? OpenAI, the Secretary of Defense, or a judge?
Yeah, there's a pretty strong case that anyone claiming to trust that the administration cares about operating in good faith with respect to the law is either delusional or lying.
The language allows for the DoD to use the model for anything that they deem legal. Read it carefully.
It begins “The Department of War may use the AI System for all lawful purposes…” and at no point does it limit that. Rather, it describes what the DOW considers lawful today, and allows them to change the regulations.
As Dario said, it’s weasel legal language, and this administration is the master of taking liberties with legalese, like killing civilians on boats, sending troops to cities, seizing state ballots, deporting immigrants for speech, etc etc etc.
Sam Altman is either a fool, or he thinks the rest of us are.
This is an objective standard as a matter of contract interpretation. If it was the government’s right to determine the lawfulness of a usage, it would say so. Perhaps it does elsewhere in the agreement, but that’s not the case here.
Ok, honest question: Can you point to language in the contract that definitively limits the use of OAI tools that’s beyond what current laws or regulations require?
Sorry, I think we may be talking past each other. The language you quoted is an objective standard. If, for example, a court ruled that the government had violated the Constitution using the tool, that language would be breached. I don’t think anything I’ve seen (though we haven’t seen the whole agreement!) allows the government to use the product in violation of the law. Anthropic wanted to go further by further limiting the uses in specific cases.
Ok I think we are largely in agreement, though perhaps missing the main point: Anthropic wanted restrictions above and beyond “all legal uses”. This was widely reported in the last few days.
OpenAI is passing off their deal as providing additional safeguards beyond “all legal uses” but the language they’ve released doesn’t seem to support that narrative. I’m incensed, and am attempting to point out the hypocrisy in the hopes that OAI gets some blowback for this cynical stunt.
The word "legal" is doing all of the heavy lifting, considering the countless adjudicated illegal things the government is doing publicly. What happens behind classified closed doors?
I guess you can consider it a moral stance that if the government constantly does illegal things you wouldn't trust them to follow the law.
I know that's not what Anthropic said but that's the gist I'm getting.
> This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding.
That depends on whether you view the cited authorities as already prohibiting that usage. I don't have an opinion on that, but some folks on both sides of the aisle might have strong arguments that they do.
It's still not consistent. OpenAI made a statement that simply isn't true. They agree to all lawful use, INCLUDING using it to deploy weapons as long as it's legal. It happens to not be legal at the moment, but that doesn't mean it can't be changed and authorized.
Rationalize the OpenAI position? Sam Altman gets money from DoD. He has no morals. He doesn't care if people die because of his product. It's not hard.
OpenAI and sama are literally saying they are fine with facilitating (and even performing) any scale of killing and surveillance as long as they're not held accountable.
No, this is very devious and insidious. What the executive branch believes is legal is the real agreement here. Trump can say anything is legal and that's that. There is no judicial oversight; there are no lawyers defending the rights of those who are being harmed. Trump can tell the Pentagon "everyone in Minnesota is a potential insurrectionist, do mass surveillance on them under the Patriot Act and the Insurrection Act".
Mass surveillance doesn't require a warrant; that's why they want it, that's why it's "mass". Warrants mean judicial oversight. Anthropic didn't disagree with surveillance where a court (even a FISA court!!) issued a warrant. Trump just doesn't want to go through even a FISA court.
This is pure evil from Sam Altman.
Is anyone listing these people's names somewhere for posterity's sake? I'd hate to think this would all be forgotten. From Altman to Zuckerberg, if justice prevails they'll be on the receiving end of retribution.
Mass surveillance is explicitly unlawful in the US; it is in the Bill of Rights. By definition it is injustice under the law. Even for terrorists in the US, they have to go through a FISA court and get warrants.
Consider this: the Bill of Rights stipulates that a soldier cannot be stationed on your property in times of peace, but in times of war it will be allowed. It makes exceptions for times of war. But even in times of war, the Fourth Amendment's search and seizure protections have no exception. Even in times of insurrection and rebellion. To deliberately violate that for personal and political reasons is in itself treason. With that intent alone, even without action, it invalidates all legitimacy that government has. If a clause in a contract is broken, the contract is broken. The Bill of Rights is the contract between the people and their government that gives the government its powers to rule, in exchange for those rights. With the contract explicitly, deliberately, and with provable malicious intent broken, the whole agreement is invalidated.
I'll even say this, the US military itself is on the hook if they stand by and let this happen.
The current US government has a fundamentally different ontology for the derivation of human rights.
Whereas you and I likely agree that human rights are inalienable, being derived from the universal nature of human experience, the administration believes that human rights begin and end with them, the state. When they're the one able to affect the world with violence, it doesn't matter who's on the hook. The US electorate thought they could heal a status wound with authoritarianism instead of therapy, and everyone else is paying the price.
On the hook for whatever comes after. Best case scenario, Democrats will peacefully take control again and pretend to forget about Sam's complicity. But he'll still face civil suits, I hope against him personally as well as against the company itself.
Worst case, the current admin will make Nazis look like cosplayers, and within a decade or so he'll be standing next to other CEOs facing a tribunal in front of whatever entity managed to topple the former regime, under war-crime terms that are yet to be defined and for atrocities which, if history teaches us anything, will be so horrific that our current ability to imagine atrocities is insufficient to let us speculate on their nature.
In short, whatever Trump does with OpenAI, Sam Altman is in the "whatever Trump wants to do was lawful" camp. Even then, perhaps the next regime will fail to learn from history and focus on rebuilding, but if they do learn from history they'll understand that you really can't hold back when it comes to these things. We're in this mess because of the failure to sufficiently punish the Nazis and the Confederates in the US, both of which lasted only about half a decade, by the way. It isn't enough to teach people how horrible Nazis and Confederates were; the German approach is sensible, but a more extreme approach might be required.
Funny thing is, this might just save OpenAI from total collapse. But if this is the price of keeping the economy alive, then even at my own personal cost I hope the economy collapses completely, along with these companies and this regime.
> I'll even say this, the US military itself is on the hook if they stand by and let this happen.
That would most definitely not be the Constitutional recourse. Or a sensible approach. If that happens, the Constitution is past tense.
Congress and the Supreme Court are the recourse. If they don't hold up the Constitution, then violence, or even a non-violent military coup, however well intended, is not going to put the splattered egg back together again.
The last two and a half decades have seen all four presidents, congress, the Supreme Court and both parties allow blatantly unconstitutional surveillance become the norm (evolving an adaptive fig leaf of intermediaries), and presidential military actions entirely blur out the required Congressional oversight. That the weakening of loyalty to the Constitution has been pervasive on those serious counts, is one of the reasons it has been so easy to undermine further.
When governing bodies become familiar with the convenient practice of "deciding" what the constitution means, without repercussions, that lost respect becomes very hard to reinstate.
They swore an oath to defend the constitution of the US against enemies both foreign and domestic. It is entirely lawful for them to fulfill that duty.
If the commander in chief and the civilian administration are clearly and unquestionably violating the Constitution, they are no longer legitimate. If they are acting to harm the American people, acting as agents of a foreign enemy or as a domestic enemy to harm the American people, then they are not only illegitimate but the military is oath-bound to fight them with necessary force.
> That the weakening of loyalty to the Constitution has been pervasive on those serious counts, is one of the reasons it has been so easy to undermine further.
I can agree with that, that is because the people who swore an oath to defend it have not done so. They wave flags like it's a sports team they're cheering for.
Ultimately, the design of the Constitution is such that either the people taking up arms, or a patriotic military resisting the government, would serve as the ultimate recourse. The system of checks and balances works so long as consequences are still a thing. If in the 1800s a president had decided to do half the things Trump did, anyone could have shot his face off and gotten away with it without consequence. These things aren't practical anymore.
The military has the duty to resist unlawful orders. But if a Russian agent usurped the US government and civilians are incapable of doing something about it, then that's what the military is there for. The military doesn't exist to bomb foreign countries thousands of miles away; it is there to defend the homeland. The original idea was that if laws are no longer a thing (obeyed by the government), the lawlessness would be too terrifying for those in power, and therefore lawfulness is in their interest.
Right, which is probably the point made by the negotiators on behalf of the US Government. "We don't want Anthropic's standard, we want the Constitution."
Maybe I'm misunderstanding, but are you taking the gov's side? Anthropic's standard was the Constitution's. The executive branch has no authorization under US law to perform surveillance of any kind on its own. OpenAI will now be breaking US law; Anthropic simply decided to obey US law.
The US government can update its laws and come back to Anthropic, or do what they just did
No, I'm not taking the government's side. I'm telling the government's side. That's probably true that the executive branch can't do those things, but it may be able to do so in the future. Thus, Anthropic's rule would then be inconsistent with the laws applying to the government.
> The US government can update its laws and come back to Anthropic
No, this I do take issue with. It's the people who update the U.S. government's laws.
The people via their elected reps, i.e. the government. The government is of the people and by the people. They're not different if democracy is truly working.
> but it may be able to do so in the future.
You don't obey laws in the future; you obey laws today. Companies have an obligation to follow the laws as written today. Not only that: as Americans, they, and all Americans, have a patriotic and civic duty to resist attempts to bypass or undermine the constitution of their country. You literally can't be patriotic or loyal to your country without doing so; it is what constitutes the country.
It's not like Anthropic can't update their guardrails and contracts once the laws of the land are updated. They simply resisted a criminal and treasonous abuse of power.
> Trump can tell the Pentagon "everyone in Minnesota is a potential insurrectionist, do mass surveillance on them under the Patriot Act and the Insurrection Act".
This is just incoherent. You can't have US companies fix an unhinged US government.
If the government runs wild, there are some serious questions to be asked at a state level about how that could happen, how to fix it quickly, and how to prevent it in the future – but I should hope none of them concern themselves with the ideas of individual company owners, because if the government can de facto do what it wants regardless of legality, the next thing this government does could simply be pointing increasingly non-metaphorical guns at individual AI company functionaries.
> This is just incoherent. You can't have US companies fix an unhinged US government.
Which part? No one expects them to fix the government, matter of fact they should stay far away from it. However, they have a duty to obey the law and to be patriotic. All companies must resist attempts by the government to betray its people, because the government derives its authority from the people, therefore in its betrayal it has become an illegitimate enemy of the people instead of their legitimate government.
> because if the government can de fact do what it wants regardless of legality the next thing that this government does could simply be pointing increasingly non-metaphorical guns at individual AI company functionaries.
It feels like you and half the country never even watched movies about Nazi Germany. The government can do whatever it wants, but whether it is companies, individuals working for it, or soldiers under orders, the government's authority does not excuse their participation. The government can't do anything at all on its own; it needs people to do it. If Obama had wanted Anthropic to let their models aid al-Qaeda in attacking America, should Anthropic have said "oh well, since you're the government, go ahead"? This is the same thing. Ever heard of the phrase "enemies foreign or domestic" in the swearing of oaths? Company executives are beholden to the laws of the country they operate in. With the Nazis, at least, their orders, and the orders given to companies under their regime, were lawful; even then that was not an excuse, they just changed the laws to make their orders lawful. Right now we have laws and the government is breaking them, so even "I followed lawful orders" isn't an excuse. Sam Altman is complicit in the violation of the American Constitution and the betrayal of its people.
If all else fails, I expect the government to just train their own models. In which case, I'd say the engineers working in that effort should have resisted.
And who decides what's legal? The US was collecting illegal tariff revenue for ten months. Does OpenAI need to wait for the Supreme Court to strike down autonomous killbots?
That's the devil in the details. Sam Altman's insult upon injury: treating the public as idiots on top of being a collaborator. The answer to your question is that the government decides what is legal, as in the executive branch; in the Pentagon, the commander in chief decides. So essentially, they can do whatever they want so long as they call it legal.
As I said in a sibling comment, mass surveillance cannot be considered legal in the US under any context: not war, emergency, terrorism, nuclear strike, national security reasons, imminent danger to the public, etc. Targeted surveillance can, scoped surveillance of a group of people can, but not mass surveillance. In other words, Sam Altman is saying "This thing can never be legal short of a constitutional amendment, but so long as Trump says it is, we'll look the other way".
What a two-faced <things i can't say on HN> this guy is!
I really hope Google poaches all his top engineers. If any of you are reading this, I ask you: I get working for money, but will Google or Anthropic really offer you all that much less? Consider the difference in pay when you put a price on your conscience.
Google? They have a terrible track record on upholding moral principles. They helped Chinese censorship, wrote software for American killer drones, and offered their services to genocidal regimes. They fired dissenting employees. They are one of the worst companies to be rooting for.
This isn't about moral principles. In China, censorship is legal. In the US, mass surveillance is not. Even for those "genocidal regimes", it was lawful use. Even now, both Anthropic and OpenAI agree that their models can be used in war and censorship, just like with China, since those things are lawful. Even with genocide, from what I understand, the safeguard is that humans have to be in the loop, not that it won't aid the efforts.
I don't expect companies to be moral, but I do expect them to be patriotic, and to obey the law. And I also expect the government to punish them sufficiently when they fail to do so. The morality part is for the people to legislate or some other way enact laws to reflect their beliefs. Companies don't get a vote at the ballot box and they certainly are not agents for moral arbitrage between a government and its people.
Yes, I think that would be the idea. Again, not my view, but we give police officers license to use lethal force and often the victims of their abuse of that power have no recourse because they're already dead.
> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.
What if Anthropic's morals are "we won't sell someone a product for something that it's not realistically capable of doing with a high degree of success"? The government can't do something if it's literally impossible (e.g. "safe" backdoors in encryption), but it's legal for them to attempt it even when failure is predetermined. We don't know that's what's going on here, but you haven't provided any evidence sufficient to differentiate between those scenarios, so it's fairly misleading to phrase it as fact rather than conjecture.
My point is that they have far more knowledge about what the product is capable of, and where its limitations lie, than the government. A company expressing doubt that its product can be used safely for a given task, even knowing the risk to its ability to make a sale for that exact purpose, is far more trustworthy than a potential buyer who claims to understand but also refuses to agree not to use it for that. I know this isn't a universally popular opinion, but I wish more companies acted responsibly by not trying to maximize profits at the expense of social good.
I don't understand any interpretation of this whole saga that claims that Anthropic was acting selfishly here. I could at least understand (but would vehemently disagree with) a claim that it's bad for them not to be trying to sell something that they genuinely did not think was safe for the task it was being purchased for, but the idea that they're somehow "imposing" morals on the others is nonsensical to me. If anything, I'd expect that trying to sell a complex software system for a purpose it's unfit for might even receive scrutiny for potential fraud in a more healthy regulatory environment.
The relevant (unanswered?) question for this thread is who's operating and managing that deployment, and to what extent the provider (or subcontracted FDEs) is involved in integrations. I would be surprised to learn that the deployment is actually independently operated. Sure, the machinery can be considered a product, but the associated service and support engagements are at least as relevant to take into account.
I didn't fully follow the saga, but isn't their "imposing their own morals" just "we do not want to allow you to let our AI go on an unsupervised killing spree"?
The United States Military, in its official capacity, has been performing illegal, extrajudicial assassinations of civilians in international waters for months now.
We have been sharing technology and weapons with Israel while it prosecutes a genocide in contravention of both US and International law.
We are currently prosecuting a war on Iran that is illegal under both US and International law.
Any aid given to such a force is to underwrite that lawlessness and it shows a reckless disregard for the very notion of a 'nation of laws'.
When OpenAI says, 'The Military can do what is legal', full in the knowledge that this military has no interest in even pretextual legality, one has to wonder why you hold that you 'agree with' both of these decisions.
Do you believe the flimsiest of lies in other aspects of your life?
Even if the autonomous weapon systems ‘perform as intended’, this does not in any way mean that they are not an enormous danger.
Secondly, as that is department policy and not a law or regulation, they appear to be saying that the cited directive is presently the only thing standing between the DOD and the use of autonomous weapons.
If that’s the case how hard is it to change or alter a directive?
> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.
Excuse me, but what a fucked up perspective. "Impose its own morals into the use of its products"? What happened to "We give each other the freedom to hold beliefs and act accordingly unless it does harm"? How on earth did it come to something where the framing is that anyone is "imposing" anything on another simply by not providing services or a product that fits somebody else's need? That sounds like you're buying into the reversed victim and offender narrative.
And this is not about whether one agrees with their beliefs. It is about giving others the right to have their own.
I have the right not to sell poison to someone who I have reason to believe will use it to kill a third party. The idea of simply trusting the patron to be responsible makes sense when the patron is anonymous or a new contact. It's generally good to assume good intentions in the absence of evidence, I think. But the government is not anonymous enough to get this treatment.
The GP's use of the word "impose" didn't seem pejorative to me or suggest that Anthropic is the offender and the government is the victim. I think you're reading a lot into a simple word choice, and this response seems way too hostile.
A "simple word choice"?? This isn't just about the single word "impose", read the whole post:
> Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment. The emphasized language is the delta between what OpenAI agreed and what Anthropic wanted.
> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.
So first off, regarding that first paragraph, didn't any of these idiots watch WarGames, or heck, Terminator? This is not just "oh, why are you quoting Hollywood hyperbole": a hallmark of today's AI is that we can't really control it except for some "pretty please, we really really mean it, be nice" in the system prompt, and even experts in the field have shown how that can fail miserably: https://www.tomshardware.com/tech-industry/artificial-intell...
Second, yes, I am relieved Anthropic wanted to "impose" their morals because, if anything, the current administration has been loud and clear that the law basically means whatever they say it does, and they will absolutely push it to absurd limits. So I now value "legal limits" as absolutely meaningless; what is needed are hard, non-bullshit statements about red lines. Anthropic stood by those, and Altman showed what a weasel he is and acceded to the administration's demands.
It certainly was intended as such. In a commercial transaction, that's what they're doing. They don't think it's moral to use their product in certain ways. They are thus prohibiting their customer from using it in such ways.
But, as I've said, I tend to agree with both Anthropic and the Administration's positions. What was wrong here is that rather than just terminating the contract, the Administration went nuclear.
It seems value-neutral to me. It's descriptive. Particularly for anyone who understands that different groups of people will legitimately disagree on many moral questions.
> "Impose" makes it sound like Anthropic is being hostile here.
Anthropic is not asking for their product to be used in line with their ethics; they are basically demanding it. I don't necessarily think they are wrong, but I don't think we need to sugarcoat it either. It's a demand, and if it differs from what the DoW wants to use the tech for, of course it's going to be in conflict. "Impose" is appropriate.
>Excuse me, but what a fucked up perspective. "Impose its own morals into the use of its products"?
>How on earth did it come to something where the framing is that anyone is "imposing" anything on another simply by not providing services or a product that fits somebody else's need?
The Department of Defense in particular has a law on the books allowing them to force a company to sell them something. They are generally more than willing to pay a pretty penny for what they want, so it hardly ever needs to be used, but I'd be shocked if any country with a serious military didn't have similar laws.
So you're right when it comes to private citizens, but the DoD literally has a special carve-out on the books.
A lawsuit challenging it would have actually been insane for Anthropic, because they would have had to argue "we're not that special, you can just use someone else" in court.
A clearer example: what would you expect to happen if Intel and AMD said their chips can't be used in computers that are used in war?
But it's not a national emergency. It's not a time of war. And there is a difference between demanding to be a customer, and demanding that you change your products because they would like them to be a different way. That is actual conscription.
For many decades, the DoD has used a carrot to get what they want. This is a stick.
Nobody is saying that Anthropic has to shut down. They’re just saying that nobody taking government money can pay Anthropic for their service as a part of that contract. Anthropic still has the right to exist on their own terms, but their business model is based on rapidly-increasing enterprise subscriptions, which included public sector spending.
If Anthropic can survive on open source contributors shelling out $200/mo and private sector companies doing the same, the government wishes them well. But surely you agree the government has a right to determine how its budget is appropriated?
Well, it depends. Given that the federal government constitutes 20% of the US economy, telling federal agencies you cannot contract with someone because they are adversarial to the USA is indeed pretty severe, when in reality they are not adversarial. We have no choice but to pay taxes and make the federal government 20 percent of our economy. There is no single company or any other entity that is close, and extending it to everyone who has a government contract probably makes it the majority of the economy. So it is not at all equivalent to a private company making a choice.
This is obviously subjective, and the only subject that matters in this case is the leadership at the DoD.
> We have no choice but to pay taxes and make the federal government 20 percent of our economy. There is no single company or any other entity that is close. And extending it to everyone who has a government contract probably makes it the majority of the economy.
I, too, hate big government and the all-powerful executive branch. Welcome to my tent. Let’s invent a time machine together so we can elect Ron Paul in 2008 and nip this in the bud.
> But surely you agree the government has a right to determine how its budget is appropriated
I think the government doesn't have rights, it is my elected representative. And I do not agree with it trying to punish a company for not agreeing to contract terms.
> OpenAI acceded to demands that the US Government can do whatever it wants that it claims is legal.
FTFY. The administration threw a fit and tried to retroactively demote a retired military officer for making a video saying, "Troops, you should disobey unlawful orders". It has been told over 4000 times, "No, that's not what the law regarding detaining undocumented aliens means", and continues doing it. Their first response to the Supreme Court saying, "the President can't impose tariffs", was "The Hell I can't!".
It's 100% clear that Trump thinks "what the law allows" and "what I want to do" are the same thing.
Rule of law requires that the majority of people in the system are committed to the rule of law, and refuse to go along with violations of it. Anthropic is being a good citizen here; OpenAI is not.
My interpretation of the difference is more like: Anthropic wanted the synchronous real-time authority to say "No we wont do that" (e.g. by modifying system prompts, training data, Anthropic people in the loop with shutdown authority). OpenAI instead asked for the asynchronous authority to re-evaluate the contract if it is breached (e.g. the DoD can use OpenAI tech for domestic surveillance, but there's a path to contract and service termination if they do this).
If my read is correct: I personally agree with the DoD that Anthropic's demands were not something any military should agree to. However, as you say, the DoD's reaction to Anthropic's terms is wildly inappropriate and materially harmed our military by forcing all private companies to re-evaluate whether selling to the military is a good idea going forward.
The DoD likely spends somewhere on the order of ~$100M/year with Google; but Google owns a 14% stake in Anthropic, who spends at least that much if not more on training and inference. All in all, that relationship is worth on the order of ~$10B+. If Google is put into the position of having to decide between servicing DoD contracts or maintaining Anthropic as an investee and customer, it's not trivially obvious that they'd pick the DoD unless forced to with behind-the-scenes threats and the DPA. Amazon is in a similar situation; it's only Microsoft whose contracts with the DoD are large enough that their decision is obvious. Hegseth's decision leaves the DoD, our military, and our defense materially weaker by both refusing federal access to state-of-the-art technology, and creating a schism in the broader tech ecosystem where many players will now refuse to engage with the government.
Either party could have walked away from negotiations if they were unhappy with the terms. Alternatively: the DoD should have agreed to Anthropic's red lines, then constrained/compartmentalized their usage of Anthropic's technology to a clearly limited and non-combat capacity until re-negotiation and expansion of the deal could happen. Instead, we get where we're at, which is not good.
IMO: I know a lot of people are scared of a fascist-like future for the US, but personally I'm more fearful of a different outcome. Our government and military have lost all capacity to manufacture and innovate. It's been conceded to private industry, and it's at the point where private industry has grown so large that companies can seriously say "ok, we won't work with you, bye" and it just be, like, fine for their bottom line. The US cannot grow federal spending and cannot find a reasonable path to taxing or otherwise slowing down the rise of private industry. We're not headed into fascism (though there are elements of that in the current admin): we're headed into Snow Crash. The military is just a thin coordination layer of operators piecing together technology from OpenAI, Boeing, Anduril, Raytheon. Public governments everywhere are being out-competed by private industry, and in some countries it feels like industry merely tolerates the government because it still has some decreasing semblance of authority; in the US especially, that semblance of authority has been on a downward trend for years. Google's revenue was 7% of the US Federal Government's revenue last year. That's fucking insane. What happens when Federal debt becomes unserviceable? When Google or Apple or Microsoft hit 10%, or 15%? Our government loses its ability to actually function effectively, and private industry will be there to fill the void.
I find myself totally agreeing with the quoted text and also this sentiment. It just makes no sense to nuke Anthropic as a negotiation tactic if your interest is in preserving the republic long term.
The novelty of "new thing! That would have been incredibly hard a decade ago!" hasn't worn off yet.
This isn't the first time something like this has happened.
I would imagine that people had similar thoughts about the first photographs, when previously the only way to capture an image of something was via painting or woodcut.
When movies first came out they would film random stuff because it was cool to see a train moving directly at you. The novelty didn't wear off for years.
There was something someone said in a comment here, years and years ago (pre AI), which has stuck with me.
Paraphrased, "There's basically no business in the Western world that wouldn't come out ahead with a competent software engineer working for $15 an hour".
Once agents, or now claws I guess, get another year of development under them they will be everywhere. People will have the novelty of "make me a website. Make it look like this. Make it so the customer gets notifications based on X Y and Z. Use my security cam footage to track the customer's object to give them status updates." And so on.
AI may or may not push the frontier of knowledge, TBD, but what it will absolutely do is pull up the baseline floor for everybody to a higher level of technical implementation.
And the explosion in software produced with AI by lay-people will mean that those with offensive security skills, who can crack and exploit software systems, will have incredible power over others.
I think that when a software system is used by more people and has more eyes on it, it's more likely to have its security flaws be found and fixed. Then all the users will benefit from the fix.
The more that software is fragmented into bespoke applications used by small numbers of people, the fewer people benefit from security network effects.
I believe the security vulnerability issues will be addressed by companies using cloud-based vibe-code platforms or an AI security-auditor agent that runs through the code base and flags security issues.
Sure it is. AI software development is here. It's not good enough for everything, but it's good enough for a majority of the changes made by most software engineers.
That's now. Right now, the tooling exists so that for >80% of software devs, 80% of the code they produce could be created by AI rather than by hand.
You can always find some person saying that it'll destroy all jobs in a year, or make us all rich in a year, or whatever, but your cynicism blinds you to the actual advances being made. There is an endless supply of new goalpost positions, they will never all be met, and an endless supply of charlatans claiming unrealistic futures. Don't confuse that with "and therefore results do not exist".
No, it isn't. There is a gigantic chasm of difference between "80% of code they produce could be created by AI" and "80% of commits they produce could be created by AI".
Mixing the two up is how we get a massive company like Microsoft to continually produce such atrocious software updates that destroy hardware or cause BSODs for their flagship Operating System.
That's not replacing software development. That's dysfunction masquerading as capability.
And none of what I said is goalpost moving. They are the goalposts constantly made by the AI industry and their hype-men. The very premise of replacing a significant amount of human labor underlies the exorbitant valuation AI has been given in the market.
It appears that your understanding of AI code generation reflects the state of 1-2 years ago. In which case of course it seems like what people are describing as reality, feels 1-2 years away.
> There is a gigantic chasm of difference between "80% of code they produce could be created by AI" and "80% of commits they produce could be created by AI".
This is exactly the goalpost moving I am talking about. I said 80% of code could be AI-written, you agreed, and followed up with "oh but it doesn't matter because now we're measuring by % of commits".
> That's now. Right now, the tooling exists so that for >80% of software devs, 80% of the code they produce could be created by AI rather than by hand.
Technically 100% of the code they could produce could be created by a ton of very specific AI prompts. At that level of control it would be slower than typing the code out though.
Just throwing out random numbers like this is complete nonsense since there's about a million factors which determine the effectiveness of an LLM at generating code for a specific use case. And it also depends on what you consider producing by hand versus LLM output. Etc.
Today I fed to Opus 4.6 five screenshots with annotations from the client and told it to implement the changes. Then told it to generate real specs, which it did. I never even looked at the screenshots, I just checked and tested against the generated specs. Client was happy.
I have a similar feeling about people who upload their AI art to sites like danbooru. Like, I guess I can understand making it for yourself, but why do you think others want to see it?
xkcd turned stick figure drawings into an art form. sometimes it is not about how something was created, but about the story being told.
some people build apps to solve a problem. why should they not share how they solved that problem?
i have written a blog post about a one line command that solves an interesting problem for me. for any experienced sysadmin that's just like a finger painting.
do we really need to argue if i should have written that post or not?
There are two types of software engineers: those who do and then think, and those who think and then do. Claude Code seems to be strictly for the former, while the engineers who can maintain software long-term are typically the latter.
Not sure if we have any LLM-tooling for the latter, seems to be more about how you use the tools we have available, but they're all pulling us to be "do first, think later" so unless you're careful, they'll just assume you want to do more and think less, hence all the vibeslop floating around.
> Claude Code seems to strictly be for the former, while typically the engineers who can maintain software long-term are the latter.
Given the number of CC users I know who spend significant time on creating/iterating designs and specs before moving to the coding phase, I can tell you, your assumption is wrong. Check how different people actually use it before projecting your views.
Yeah, I wasn't trying to say "these are the people who use CC, for these purposes," but rather what the intention behind Claude Code seems to be in the first place. I use CC from time to time to keep up to date with what tooling is available, and I also know people who use CC every day and plan a lot up front. Sorry if I gave the impression that everyone using CC is doing that; I was trying to get at what the purpose of the tool seems to be, which still seems true today, as the models continuously steer you toward "doing" and moving faster, not stopping and thinking.
This seems like a real coarse and not particularly accurate binary, but even if it were true, the thing about Claude Code and agentic coding like this is the cost of making a mistake or the cost of not being happy with a design and having to back it out is getting smaller and smaller.
I would argue that rapidly iterating reveals more about the problem, even for the most thoughtful of us. It's not like you check your own reasoning at the door when you just dive head first into something.
This isn't a binary thing - even if you prefer to build maintainable systems very often the trade-off is - you don't ship in time and there's no long term - the project gets scrapped.
So even if it comes at the expense of long term maintainability - everyone should have this in their toolbox.
I find it often helps me to see a feature before I evaluate if it was really a good idea in the first place. This is my failing--but one thing I like about Claude is that it's now possible to just try stuff and throw away whatever doesn't work out.
I usually have conversations with Claude for clearing my mind and forming the scope of a project. I usually use voice transcription from Claude app to take notes and explore all my options.
Same. When I can't be at my desk, my projects don't stop -- I just do the tasks that work well enough on the phone. Brainstorming, planning, etc. Or tasks that the agent can easily verify.
Having access to my local repository and my whole home folder is much easier than dealing with Claude or ChatGPT on the web. (Lots of manual markdown shuffling, passing in zipfiles of repositories, etc).
I agree with your basic framing but not your conclusion. I've met plenty of do-ers-before-thinkers who are self-aware enough to also maintain software long-term.
id say claude code is designed for think then do - that's where it's different from other tools!
i think it still pulls to do then think, because you can't tell what the agent understood of what you asked it to do from that first think, until it's actually produced something.
Claude Code and similar agents help me execute experiments, prototypes and full designs based on ideas that I have been refining in my head for years, but never had the time or resources to implement.
They also help get me past design paralysis driven by overthinking.
Perhaps the difference between acceleration and slop is the experience to know what to keep, what to throw away, and what to keep refining.
This kind of release shows Anthropic as a company is suffering from the same thing we all are right now. Removing the friction from having an idea and executing it stops you from remembering The Point. Yes, programming from your phone is an exciting modality and maybe even the future of how we work, but coding from your bedroom, AND the toilet, AND the woods AND your office is definitely (hopefully) not the future.
I wonder if anyone is working on an AI framework that encourages us to keep our eye on the big picture, then walk away when a reasonable amount of work is done for the day.
Yes, individuals are creating cool mobile coding solutions and Anthropic doesn't want to get left behind. I know I'm working my ass off at work right now because LLM coding makes it fun, but I also often don't prioritize what I'm doing for the big picture because I just try every thing that comes into my inbox, in order, because it's so fast to do with Claude Code.
"The false binary of "rest OR work" is dissolving."
If you're like most people in this forum, there are people who stand to gain financially if you convince yourself that you don't need boundaries between work and rest. You may even believe that you stand to gain financially, and that this will be best for you in the long term.
Please, take some time to rest for a day or two and really think about what you want your boundaries to be. Write them down.
> The false binary of "rest OR work" is dissolving
Sounds like someone hasn't yet worked multiple years in software engineering, or any job for that matter.
Your mind might trick you into believing it won't matter, but your body and mind NEED to be disconnected from work, 100%, at some point during your regular rhythms of life, otherwise you'll burn out much faster than the people you seemingly are trying to compete with.
Life has never been a sprint, but it is a marathon, and if you spend all your young, experience-less years treating it as a sprint, you won't have any energy left for completing the marathon.
How is this not solved by a simple voice recorder? You can process and act on it later while not forgetting your thoughts when inspiration hits. People have been doing that for at least like 50 years now.
I’m guessing you’re suggesting it’s ok to lose time if you’re away from your computer enjoying life, and I agree. I also don’t see the issue in finding ways to save time with work.
If you mean something different, please elaborate.
A lot of good behind this idea if nothing else than to keep Microsoft honest. The Azureware push is nauseating and such a transparent attempt to lock in its monopoly against disruptors. We’re hoping Tritium[1] can provide a free or commercial alternative for legal teams soon.
All that said, it’s easy to underestimate the quality of Microsoft’s office products. They handle millions of edge cases, accessibility, i18n. They are performant and in a lot of cases extended through long-term add ins.
Even Google hasn’t achieved real parity.
It’s Microsoft’s race to lose, but my bet is they’re too distracted by AI to even notice those coming for them.