So if I email the company a TOS and say that continuing to allow me to use the tool constitutes acceptance of my new terms, that should be valid? Sometimes it's amazing to see the legal contortions people use to justify bad behavior on the part of companies.
When did Ben Thompson go so far down the path to becoming an autocratic sympathizer? This is such an anti-democratic, anti-free market, anti-free speech view on this whole situation.
First, everything revolves around a core conceit that "might makes right". The idea that entities might push back with the tools at their disposal is treated as a fool's errand; you should just acquiesce.
The role of the legislative branch in deciding what private entities are allowed, or not allowed, to do is treated as a side note. He treats the dictates of the executive branch as if they were the will of the United States itself, above even the Constitution.
It's dismissive of the rights of private companies and individuals to make decisions for themselves about the actions they take, and about whether or how they choose to transact, within the law, with parts of the executive branch.
He acts as if it's a foregone conclusion that every AI company should be considered an arm of the executive branch of the US government. The analogy to nuclear weapons is deeply flawed: there are multiple laws on the books (passed by Congress) specifically regulating nuclear research and development.
And most astonishingly, he ends it by dropping an implied threat of violence toward Anthropic (and presumably anyone else who doesn't agree with his point of view):
> I don’t want that, and, more pertinently, the ones with guns aren’t going to tolerate it.
Yep, once you accept “might makes right”, the laws in a democracy become polite suggestions. Oh, your town is in the way of hydropower? Too bad, the gov’t has more guns than you. That’s how you get the Three Gorges Dam in China. Nevertheless, the Trump Mafia is demonstrating how paper-thin democracy and the rule of law really are in the US.
It’s a non-clause, written to sound like they are doing something to prevent these uses when they aren’t. “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things. Plus, the administration itself gets to decide what counts as legal use.
> “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things.
That's not quite right.
First off, I don't expect that "you used my service to commit a crime" is, in and of itself, grounds to terminate a contract, so having your contract state that you're not allowed to use my service to commit a crime does give me tools to cut you off.
Second, I don't want the contract to say "if you're convicted of committing a crime using my service"; I want it to say "if you do these specific things". This is for two reasons. First, because I don't want to depend on criminal prosecutors acting before I have standing. Second, because I want to only have to meet the balance-of-probabilities standard of evidence ("preponderance of the evidence" if you're American) in civil court, rather than needing a conviction secured under the "beyond a reasonable doubt" standard. IANAL, but I expect that having this "you can't do these illegal things except when they aren't illegal" language in the contract does put me in that position.
I don’t think the language does, or is intended to, give OpenAI any special standing in the courts.
They literally asked the DoD to continue as is.
There is no safety-enforcement standing created, because there is no safety enforcement intended.
It is transparently written as a completely reactive response to Anthropic’s stand, in an attempt to create a perception that they care, and to reduce the perceived contrast with Anthropic.
If they had any interest in safety or ethics, Anthropic’s stand just made acting on it far easier than they could have imagined. Just join Anthropic and together set a new bar of expectations for the industry and the public as a whole.
They could collaborate with Anthropic on a common expectation, if they have a different take on safety.
The upside safety-culture impact of such a collaboration between two competitive leaders in the industry would be felt globally, going far beyond any current contracts.
But, no. Nothing.
Except the legalese, and an attempt to misleadingly pass it off as “more stringent”. These are not the actions of anyone who cares at all about the obvious potential for governmental abuse, or about creating any civil legal leverage for safe use.
This is such a bad faith question, it's annoying to see it come up again as if there's any utility to asking it.
The question itself never specifies that the car you would be driving is the same one that needs to be washed. The car that needs washing could already be waiting in the parking lot of the car wash. It doesn't state that you plan on washing your car at the car wash. Perhaps the car wash sells car-cleaning equipment that you can bring home to wash your car?
The question is designed to be ambiguous so the llm answers it in a way that seems facially absurd to the people who are in on the scheme. What it's actually showing is a failure of imagination on the part of those asking the question.
Do you want your chatbot to be suspicious of you trying to trick it? To me this seems patently unhelpful outside of LLMs tuned for roleplay or to operate in a highly adversarial environment.
Do you want it to assume you are an idiot asking the question because you didn't realize you need to have the car at the car wash to wash it?
Or do you want it to make the best-faith assumption about what you are asking and try to be as helpful as possible given the poor question?
I've already mentioned disambiguations immediately in the replies to myself. The ideal response would be the llm stating its most important assumptions up front.
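For what it's worth, this behavior is easy to nudge a model toward. Here's a minimal sketch using the OpenAI Python client; the model name, the system-prompt wording, and the paraphrased car-wash question are all my own illustrative assumptions, not anything from the original exchange:

```python
# Sketch: ask the model to surface its assumptions before answering
# an ambiguous question, rather than silently picking one reading.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "If a question is ambiguous or underspecified, begin your answer by "
    "briefly stating the most important assumptions you are making, then "
    "answer under those assumptions."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        # Paraphrase of the kind of trick question discussed above.
        {"role": "user", "content": "My car is too dirty to drive. Can I drive it to the car wash?"},
    ],
)
print(response.choices[0].message.content)
```

With a prompt like this, the "gotcha" mostly evaporates: the model names the reading it picked, and the asker can correct it instead of declaring victory.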
Given that these tendencies are not evenly distributed throughout the population, you can have structures that leverage the large mass near the mean to mitigate the worst tendencies of the extreme tails. Because the natural state of things is that power begets more power, these are harder to build and maintain, but it can be done. In particular, democracies and republics are major historical examples of this.
An important point, though, is that llm code generation changes that tradeoff. The time/opportunity cost goes way down while the productivity penalty starts accumulating very fast, so outcomes can diverge very quickly.
When it comes to new emerging technologies, everyone is searching the space of possibilities, exploring new ways to use them, and seeing where they apply and create value. In situations such as this, a positive sign is worth way more than a negative one. The chances of many people not using it the right way are much, much higher when no one really knows what the “right” way is.
It shows hubris and a lack of imagination for someone in such a situation to extrapolate their negative results to the situation at large, especially when so many are claiming to see positive utility.
IMO this is a mistake, for basically the same reason you use to justify it. Since most people just want the code to work, and the chances of any specific repo being malicious are low, especially when a lot of the repos you work with are trusted or semi-trusted, it easily becomes a learned behavior to just auto-accept this.
Trust in code operates on a spectrum, not as a binary. Different code bases have vastly different threat profiles, and this approach does close to nothing to accommodate that.
In addition, code bases change over time, and full auditing is near impossible. Even if you manually audit the code, most code is constantly changing: you can pull an update from git, and the audited repo you trusted may no longer be trustworthy.
An up-front, binary, persistent trust-or-don't-trust model isn't a particularly good match for either user behavior or the potential threats most users will face.
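To make the alternative concrete, here's a minimal sketch of graded trust that is pinned to the repo's current contents, so pulling new commits invalidates the earlier decision. Everything here (the trust levels, the store path, the function names) is hypothetical, not any particular tool's API:

```python
# Sketch: graded, content-pinned trust instead of a one-time binary prompt.
import hashlib
import json
from enum import Enum
from pathlib import Path

class Trust(Enum):
    UNTRUSTED = 0  # prompt on every sensitive action
    REVIEW = 1     # allow reads, prompt before executing code
    TRUSTED = 2    # allow without prompting

TRUST_STORE = Path.home() / ".config" / "repo-trust.json"  # hypothetical location

def repo_fingerprint(repo: Path) -> str:
    """Hash the repo's files so any change to the code changes the fingerprint."""
    h = hashlib.sha256()
    for f in sorted(repo.rglob("*")):
        if f.is_file() and ".git" not in f.parts:
            h.update(f.read_bytes())
    return h.hexdigest()

def set_trust(repo: Path, level: Trust) -> None:
    """Record a trust decision pinned to the repo's current contents."""
    store = json.loads(TRUST_STORE.read_text()) if TRUST_STORE.exists() else {}
    store[str(repo)] = {"level": level.value, "fingerprint": repo_fingerprint(repo)}
    TRUST_STORE.parent.mkdir(parents=True, exist_ok=True)
    TRUST_STORE.write_text(json.dumps(store, indent=2))

def trust_level(repo: Path) -> Trust:
    """Return the stored trust level, downgrading if the repo has changed."""
    store = json.loads(TRUST_STORE.read_text()) if TRUST_STORE.exists() else {}
    entry = store.get(str(repo))
    if entry is None:
        return Trust.UNTRUSTED
    if entry["fingerprint"] != repo_fingerprint(repo):
        return Trust.REVIEW  # code changed since the decision: re-verify
    return Trust(entry["level"])
```

The key design choice is that a trust decision answers "do I trust *this code*", not "do I trust this path forever", which addresses the git-pull problem above.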
Very much agree with this. Looking at the dimensionality of a given problem space is a very helpful heuristic when analyzing how likely an llm is to be suitable/reliable for that task. Consider how important positional encodings are to LLM performance. You also then have an attention model that operates in that 1-dimensional space. With multidimensional data, significant transformations to encode it into a higher-dimensional abstraction need to happen within the model itself, before the model can even attempt to intelligently manipulate it.
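A tiny illustration of the point (my own example, not from the parent): flattening a 2-D grid into the 1-D token order attention actually sees preserves horizontal adjacency but destroys vertical adjacency, which the model then has to reconstruct internally.

```python
# Sketch: row-major flattening of a 2-D grid into a 1-D sequence.
import numpy as np

height, width = 4, 5
grid = np.arange(height * width).reshape(height, width)

seq = grid.flatten()  # the 1-D order a sequence model sees

r, c = 1, 2
cell = grid[r, c]
right_neighbor = grid[r, c + 1]   # adjacent in 2-D and in the sequence
below_neighbor = grid[r + 1, c]   # adjacent in 2-D only

pos = lambda v: int(np.where(seq == v)[0][0])
print(pos(right_neighbor) - pos(cell))  # 1: adjacency survives flattening
print(pos(below_neighbor) - pos(cell))  # width (5): structure lost in the index
```

A positional encoding only tells the model "this token is at index 7"; recovering "index 7 is directly above index 12" is extra work the model must learn.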
How would you move the solar energy into the piles of dirt? You’d need something like an array of mirrors focusing the rays, which has definitely been done already but has drawbacks. Electricity can easily be moved to where it’s needed.
You could heat up a metal heat exchanger and circulate a working fluid through it. Probably easier to just convert sunlight to electricity and then to heat via resistive heating; less maintenance.
At home, it's suitable in warm climates but more challenging in snowy / very cold regions. Generally speaking, converting to electricity and then using an electric water heater is more efficient overall because there's much less insulation, heat loss, and piping that can leak and cause water damage.
If you look at "evacuated tube" solar panels, this is as close as you could get, and they don't get anywhere near the temperature you need to generate consistent steam. Concentrated solar is closer, but cost impracticalities make it unattractive here as well.