If it's moral to strike at a country with nuclear capability that talks constantly about your country's destruction, then it's no less acceptable for Iran to strike the US than the other way around.
You can't condemn one and condone the other on that basis.
Iran has both the motive and was developing the capability to destroy a significant part of America's national security. America absolutely must prevent that at any cost.
You could argue about how the rhetoric between the states got so bad that they each threatened each other's destruction. But the fact is that they got there.
- I forgot my password and Microsoft is sending reset emails to the very account that password locks me out of.
- I remember my password but now it says I need a passkey and I don't know what that is.
- I forgot my password and in the process of resetting it, Microsoft created a duplicate account.
All of the above are real problems that I have seen in the wild. I could list many more.
Given that Microsoft knows, and has always known, users' limitations, it behooves them to design idiot-proof software, not to continually release poorly designed changes.
I don't think that article makes a strong case; it deliberately phrases examples in the most ridiculous ways and pretends that this is a damning criticism of the phrase itself; it's 'you're telling me a shrimp fried this rice' but with a pretence of rationality.
I think it makes a pretty compelling case that most invocations of the statement are either blindingly obvious or probably false. Can you give a counterexample?
> most invocations of the statement are either blindingly obvious or probably false
So straightaway, you've walked significantly back from the claim in the headline; now half of the time it's 'blindingly obvious' that the statement is correct. That already feels like a strong counterexample to me, and it's the article's own first point.
Secondly, look at this one specifically:
> The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia.
Firstly, this isn't obviously false. It's an unfair framing, but I think the Ukrainian military would agree that forcing a stalemate when attacked by a hostile power is absolutely part of their purpose.
Secondly, it is an unfair framing that deliberately ignores that all systems are contextual. A car's purpose is transport, but that doesn't mean it can phase through any obstacle.
The article makes an entirely specious argument, almost an archetypal example of a strawman. It can't sustain its own points over a few hundred words without steadily retreating, and that is far more pointless than the maxim it criticises.
I'm reminded of an XKCD comic [1] about smug miscommunication. Of course any principle is ridiculous when you pretend not to understand it.
I think that's still too rosy a view; it's clear with a lot of big tech that they never had the ideals in the first place. They use claims of principle for marketing purposes and then discard them when it's no longer convenient.
This all seems like a lot of effort so that an agent can run `npm run build` for you.
I get the article's overall point, but if we're looking to optimise processing and reduce costs, then 'only using agents for things that benefit from using agents' seems like an immediate win.
You don't need an agent for simple, well-understood commands. Use them for things where the complexity/cost is worth it.
Feedback loops are important to agents. In the article, the agent runs this build command and notices an error. With that feedback loop, it can iterate towards a solution without requiring human intervention. But the fact that the build command pollutes the context in this case is a double-edged sword.
If you really need that, the easy solution here is to get a list of errors using an LSP (or any other way of getting a list of errors, even grep "Error:"), and only give that list of errors to the LLM if the build fails. Otherwise just tell the LLM "build succeeded".
That's an extremely simple solution. I don't see the point in this LLM=true bullshit.
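The wrapper described above can be sketched in a few lines. This is a minimal illustration, not code from the article; the function name `summarize_build` and the simple "error" substring filter are my own assumptions about how one might implement it:

```python
import subprocess

def summarize_build(cmd: str) -> str:
    """Run a build command; return a short summary for the agent's context.

    On success, only a one-line confirmation is returned, keeping the
    context clean. On failure, only lines mentioning an error are
    forwarded (falling back to the tail of the output if none match).
    """
    result = subprocess.run(cmd, capture_output=True, text=True, shell=True)
    if result.returncode == 0:
        return "build succeeded"
    output = result.stdout + result.stderr
    errors = [line for line in output.splitlines() if "error" in line.lower()]
    return "\n".join(errors) or output[-2000:]
```

Calling `summarize_build("npm run build")` would then hand the LLM either the confirmation string or a filtered error list, instead of the full build log.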
Are you saying that somebody took translations that had already been written and replaced them with AI generated worse translations? That has got to be a rare exception, no?
But more to your point: you might not have run into languages that lacked proper translations, but billions of other people have. I have read a machine-translated book before. It was almost like a derivative work, because it would randomly differ by a huge amount from the source material.
1. "Defends" suggests some level of explanation and justification; the White House did not present any here.
2. "AI image showing arrested woman" could mean a fully-generated image of a woman, rather than editing an image of an existing person under law enforcement control to disguise the actual facts. The first one would be bizarre, the second one is much more problematic.
There is no meaningful difference between a 100% fabricated image and one that is some slightly smaller percentage digitally manipulated, when the government presents it as fact. There's no need to split hairs.
It's the difference between drawing a cartoon and editing a photograph; the second one is a definite attempt to misrepresent matters of fact, the first could be argued to be illustrative only.