Wood is edible when processed correctly, but it's not legally considered "food" because there are a bunch of nontrivial steps to get it into that state. Likewise, any reasonable interpretation of "general purpose computer" in this context by a judge would not include your microwave oven just because someone with skill and finesse could transform it into a cursed Doom arcade machine.
Laws are interpreted by people trained to fill in the blanks[1] with a best guess of the legislative body's intent. And the intent here seems pretty clear: to regulate computing devices that let end users easily install software from a centralized catalog.
[1] which we all do subconsciously in day-to-day speech, because all language is ultimately subjective
They exempt applications that run inside another "host application", though, which covers roughly everything in any modern app store.
I guess native Linux games on GOG might be covered. All Windows and WSL programs run in userspace compat layers. iOS might be covered. Snap, probably not (containers); AppImage? Maybe.
The irony is that web searches for an explanation of something often lead to a discussion thread where the poster is downvoted and berated for daring to ask people instead of Google. And then there's one commenter who actually explains the thing you were wondering about.
It's kind of nice, though, because you can click anywhere on a window to focus it. If you want to interact with a background window without focusing it, hold Cmd and click.
This is already the pre-26 bounding box, isn't it? It's the new graphics that don't line up. (Not a great excuse, but the graphics are here to stay at least for a little while.)
> the graphics are here to stay at least for a little while
And that's the reason why I won't buy a new Mac.
Tahoe and Liquid Glass are so horrible that Apple is going to lose customers over them. They should realize what they did and just backtrack: it wouldn't be the first time they admitted a mistake [1].
The Magic Mouse has been around, almost unchanged, since 2009. That is a long time for a tech product, and retiring a product after 16 years is not admitting a mistake. For example, the Logitech G5 mouse and its direct evolutions were among the most successful Logitech products, and they didn't last that long.
No, it's not just that Apple refuses to admit the Magic Mouse was a mistake; they seem to consider it the best mouse ever. That port on the underside is still one of the great mysteries, though. Maybe it's some quirk of evolution, because it is certainly not intelligent design.
In addition to vertical scrolling, the Magic Mouse can do horizontal (or diagonal) scrolling, zooming in and out, and a couple of other tricks. This makes it worthwhile for people who need those capabilities for their work. There are mice that can do horizontal or vertical scrolling -- but not both at the same time.
People who do their work on large documents (pics in Photoshop, videos, CAD, music, even Excel, etc.) use these capabilities every day, and they like their Magic mice very much. If you are not one of these people (software development, for example, can mostly be done with vertical scroll only), that doesn't make it a bad product -- all it means is that it's a product that is not for you.
I don't use the Magic Mouse, but I'm very far from expecting Apple to admit "the Magic Mouse was a mistake."
The lack of a "refresh" option has been a problem with iCloud for years. Back in the iOS 8/9 days, I'd write in Pages on an iPad and then try to open the document on a Mac or the Pages web app. Pages itself was (and is) pretty nice, but iCloud sync was constantly broken. Things didn't appear when I needed them to.
Some designers say that refresh buttons shouldn't exist because the interface should always reflect the current state of reality. They're right, but until the day we get 100% bug-free bidirectional sync with perfect conflict resolution that instantly polls the network whenever it reconnects, refresh buttons are a necessary evil.
The only languages that eliminate logic bugs are formally verified ones, as the article points out. (And even then, your program is only as correct as your specification.) Ordinary Rust code is not formally verified. Anyone who claims Rust eliminates errors is either very naive or lying.
Type-safe Rust code is free from certain classes of errors. But that goes out the window the moment you parse input from the outside, because Rust types can enforce invariants (i.e. internal consistency), but input has no invariants. Rust doesn't ban you from crashing the program if you see input that violates an invariant. I don't know of any mainstream language that forbids crashing the program. (Maybe something like Ada? Not sure.)
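To make the invariant point concrete, here's a minimal sketch (the `Percentage` type and its messages are hypothetical, not from any real codebase): a newtype whose constructor enforces an invariant, so every value in the program is valid by construction. But at the boundary where external input is parsed, nothing in the language stops you from `unwrap`ping and crashing on bad input:

```rust
// Hypothetical newtype: a Percentage is always in 0..=100 by construction,
// because the only way to build one is through `parse`.
#[derive(Debug, PartialEq)]
struct Percentage(u8);

impl Percentage {
    // Parsing external input returns a Result, pushing the crash-or-recover
    // decision onto the caller. Rust doesn't forbid the caller from
    // calling `.unwrap()` and panicking.
    fn parse(input: &str) -> Result<Percentage, String> {
        let n: u8 = input
            .trim()
            .parse()
            .map_err(|e| format!("not a number: {e}"))?;
        if n <= 100 {
            Ok(Percentage(n))
        } else {
            Err(format!("{n} is out of range 0..=100"))
        }
    }
}

fn main() {
    assert!(Percentage::parse("42").is_ok());
    assert!(Percentage::parse("150").is_err());
    assert!(Percentage::parse("banana").is_err());
    // Percentage::parse("150").unwrap(); // the language allows this panic
}
```

Inside the program, the type system guarantees the invariant; at the input boundary, it can only force you to acknowledge that failure is possible, not dictate how you respond to it.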
I don't understand why you bemoan that Rust hasn't solved this problem, because it seems nigh unsolvable.
As someone who's been working heavily in Rust for the last year, I have to agree with you, here.
Look, there are a lot of folks who gripe about Rust; I used to be one of them. It's like someone took C and pushed it to hard mode. But the core point keeps getting lost in these conversations: Rust never claimed to solve logic bugs, and nobody serious argues otherwise. What it does is remove an entire universe of memory-unsafety pitfalls that have historically caused catastrophic outages and security incidents.
The Cloudflare issue wasn’t about memory corruption or type confusion. It was a straight logic flaw. Rust can’t save you from that any more than Ada, Go, or Haskell can. Once you accept arbitrary external input, the compiler can’t enforce the invariants for you. You need validation, you need constraints, you need a spec, and you need tests that actually reflect the real world.
The idea that "only formally verified languages eliminate logic bugs" is technically correct but practically irrelevant for the scale Cloudflare operates at. Fully verified stacks exist, like seL4, but they are extremely expensive and restrictive. Production engineering teams are not going to rewrite everything in Coq. So we operate in the real world, where Rust buys us memory safety, better concurrency guarantees, and stricter APIs, but the humans still have to get the logic right.
This is not a Rust failure. It is the nature of software. If the industry switched from Rust to OCaml, Haskell, Ada, or C#, the exact same logic bug could still have shipped. Expecting Rust to prevent it misunderstands what problems Rust is designed to eliminate.
Rust does not stop you from writing the wrong code. It stops you from writing code that explodes in ways you did not intend. This wasn't the fault of the language; it was the fault of the folks who screwed up. You don't blame the hammer when you smack your thumb instead of the nail - you blame your piss-poor aim.
Some people appreciate it when terminal output is easier to read.
If chalk emits sequences that aren't supported by your terminal, then that's a deficiency in chalk, not the programs that wanted to produce colored output. It's easier to fix chalk than to fix 50,000 separate would-be dependents of chalk.
Most of your supply chain attack surface is social engineering attack surface. Doesn't really matter if I use Lodash, or 20 different single-function libraries, if I end up trusting the exact same people to not backdoor my server.
Of course, small libraries get a bad rap because they're often maintained by lots of different people, especially in less centralized ecosystems like npm. That's usually a fair assessment. But a single author will sometimes maintain 5, 10, or 20 different popular libraries, and adding another library of theirs won't really increase your social attack surface.
So you're right about "pull[ing] in universes [of package maintainers]". I just don't think complexity or number of packages are the metrics we should be optimizing. They are correlates, though.
(And more complex code can certainly contain more vulnerabilities, but that can be dealt with in the traditional ways. Complexity begets simplicity, yadda yadda; complexity that only begets complexity should obviously be eliminated)
1) Null pointer derefs can sometimes lead to privilege escalation (look up "mapping the zero page", for instance). 2) As I understand it (could be off base), if you're already doing static checking for other memory bugs, eliminating null derefs comes "cheap". In other words, it follows pretty naturally from the systems that provide other memory safety guarantees (such as the famous "borrow checker" employed by Rust).
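As a sketch of point 2 (a toy example, not tied to any particular checker): safe Rust has no null references at all. Absence is modeled with `Option<T>`, and the type system refuses to compile code that touches the value without first handling the `None` case, so the "null deref" class of bug falls out of the same machinery that provides the other guarantees:

```rust
// Safe Rust models a possibly-absent value as Option<&str> instead of a
// nullable pointer. The compiler forces the None case to be handled.
fn first_word(s: &str) -> Option<&str> {
    s.split_whitespace().next()
}

fn main() {
    // The match makes both cases explicit; there is no way to "forget"
    // the empty case and dereference garbage.
    match first_word("hello world") {
        Some(w) => println!("first word: {w}"),
        None => println!("empty input"),
    }

    // first_word("").len(); // would not compile: Option<&str> has no `.len()`
    assert_eq!(first_word(""), None);
}
```

The check is purely static: it costs nothing at runtime beyond the explicit branch you would have needed anyway.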