The lack of a "refresh" option has been a problem with iCloud for years. Back in the iOS 8/9 days, I'd write in Pages on an iPad and then try to open the document on a Mac or the Pages web app. Pages itself was (and is) pretty nice, but iCloud sync was constantly broken. Things didn't appear when I needed them to.
Some designers say that refresh buttons shouldn't exist because the interface should always reflect the current state of reality. They're right, but until the day we get 100% bug-free bidirectional sync with perfect conflict resolution that instantly polls the network whenever it reconnects, refresh buttons are a necessary evil.
The only languages that eliminate logic bugs are formally verified ones, as the article points out. (And even then, your program is only as correct as your specification.) Ordinary Rust code is not formally verified. Anyone who claims Rust eliminates errors is either very naive or lying.
Type-safe Rust code is free from certain classes of errors. But that goes out the window the moment you parse input from the outside: Rust types can enforce invariants (i.e. internal consistency), but raw input has no invariants. And Rust doesn't stop you from crashing the program when you see input that violates an invariant. I don't know of any mainstream language that forbids crashing the program. (Maybe something like Ada? Not sure.)
I don't understand why you bemoan that Rust hasn't solved this problem, when it seems nigh unsolvable.
As someone who's been working heavily in Rust for the last year, I have to agree with you, here.
Look, there are a lot of folks who gripe about Rust; I used to be one of them. It's like someone took C and set it to hard mode. But the core point keeps getting lost in these conversations: Rust never claimed to solve logic bugs, and nobody serious argues otherwise. What it does is remove an entire universe of memory-unsafety pitfalls that have historically caused catastrophic outages and security incidents.
The Cloudflare issue wasn’t about memory corruption or type confusion. It was a straight logic flaw. Rust can’t save you from that any more than Ada, Go, or Haskell can. Once you accept arbitrary external input, the compiler can’t enforce the invariants for you. You need validation, you need constraints, you need a spec, and you need tests that actually reflect the real world.
The idea that "only formally verified languages eliminate logic bugs" is technically correct but practically irrelevant for the scale Cloudflare operates at. Fully verified stacks exist, like seL4, but they are extremely expensive and restrictive. Production engineering teams are not going to rewrite everything in Coq. So we operate in the real world, where Rust buys us memory safety, better concurrency guarantees, and stricter APIs, but the humans still have to get the logic right.
This is not a Rust failure. It is the nature of software. If the industry switched from Rust to OCaml, Haskell, Ada, or C#, the exact same logic bug could still have shipped. Expecting Rust to prevent it misunderstands what problems Rust is designed to eliminate.
Rust does not stop you from writing the wrong code. It stops you from writing code that explodes in ways you did not intend. This wasn't the fault of the language; it was the fault of the folks who screwed up. You don't blame the hammer when you smack your thumb instead of a nail - you blame your piss-poor aim.
Some people appreciate it when terminal output is easier to read.
If chalk emits sequences that aren't supported by your terminal, then that's a deficiency in chalk, not the programs that wanted to produce colored output. It's easier to fix chalk than to fix 50,000 separate would-be dependents of chalk.
Most of your supply chain attack surface is social engineering attack surface. It doesn't really matter whether I use Lodash or 20 different single-function libraries if I end up trusting the exact same people not to backdoor my server.
Of course, small libraries get a bad rap because they're often maintained by tons of different people, especially in less centralized ecosystems like npm. That's usually a fair assessment. But a single author will sometimes maintain 5, 10, or 20 different popular libraries, and adding another library of theirs won't really increase your social attack surface.
So you're right about "pull[ing] in universes [of package maintainers]". I just don't think complexity or number of packages are the metrics we should be optimizing. They are correlates, though.
(And more complex code can certainly contain more vulnerabilities, but that can be dealt with in the traditional ways. Complexity that begets simplicity is worth keeping, yadda yadda; complexity that only begets more complexity should obviously be eliminated.)
1) Null pointer derefs can sometimes lead to privilege escalation (look up "mapping the zero page", for instance). 2) As I understand it (could be off base), if you're already doing static checking for other memory bugs, eliminating null derefs comes "cheap". In other words, it follows pretty naturally from the systems that provide other memory safety guarantees (such as the famous "borrow checker" employed by Rust).
Discovering and using private APIs is not a walk in the park. I doubt "laziness" is a common motivation for doing so. Lack of knowledge or bad docs, perhaps. But there's often no officially sanctioned way to do something that people want (and perhaps will pay for) - most private API usage I've seen falls into this third bucket.
Laziness comes in many forms. Arguably, discovering and using private APIs is a form of intellectual laziness — it requires you to refuse to acknowledge that the whole system is telling you not to do things that way.
If you defer to authority, that is: you accept that the people who made the API have the authority to dictate what you can or can't do on your hardware (or, for other people, on theirs), that making the parts of the API you need private was a conscious decision (and not just laziness on their part), and that in general you obey commands like that.
Even with just a shred of hacker thinking, that is not something programmers should easily accept.
Oh wow, that is the opposite of Hacker Mentality to me. I may question lots of other people, but if some other coder put in the time to construct a well designed API that includes public and private methods, my first thought is never “I know better”. Took me a couple of decades to stop thinking that though, so what do I know?
I'm with you that function "coloring" (monads in the type system) can be unergonomic and painful.
> ... isn't that what Go is? I think out of all languages I use extensively, Go is the only one that doesn't suffer from the […] coloring nightmare.
Because it doesn't have Future/Promise/async as a built-in abstraction?
If my function returns data via a channel, that's still incompatible with an alternate version of the function that returns data normally. The channel version doesn't block the caller, but the caller has to wait for results explicitly; meanwhile, the regular version would block the caller, but once it's done, consuming the result is trivial.
Much of the simplicity of Go comes at the expense of doing everything (awaiting results, handling errors, …) manually, every damn time, because there's no language facility for it and the type system isn't powerful enough to make your own monadic abstractions. I know proponents of Go tend to argue this is a good thing, and it has merits. But making colorful functions wear a black-and-white trenchcoat in public doesn't solve the underlying problem.
One of the largest problems identified in the original "what color is your function" article ( https://journal.stuffwithstuff.com/2015/02/01/what-color-is-... ) is that, if you make a function async, it becomes impossible to use in non-async code. Well, maybe you can call "then" or whatever, but there's no way to take an async function and turn it into something that synchronously returns its value.
But in Go, it's very easy to do this; you can just do "result := <- ch" to obtain the value from a channel in synchronous code. (This blocks the thread, but in Go's concurrency model this isn't a problem, unlike in JavaScript.) Similarly it's very easy to take a synchronous function and do "go func() { ch <- myFunction() }()" to make it return its result in a channel.
> But in Go, it's very easy to do this; you can just do "result := <- ch" to obtain the value from a channel in synchronous code.
What you call "synchronous code" is really asynchronous. To actually have something that resembles synchronous code in Go you have to use LockOSThread, but this has the same downsides as the usual escape hatches in other languages. This is also one of the reasons cgo has such a high overhead.
Hm. You and parent comment have made me realize something: as much as I dislike how many useful abstractions are missing from Go, async for blocking syscalls is not one of them, since the "green thread" model effectively makes all functions async for the purposes of blocking syscalls. So I retract my "you have to do it manually" comment in this case. I guess that's part of why people love Go's concurrency.
Of course, as you said, stackful coroutines come with runtime overhead. But that's the tradeoff, and I'm sure they are substantially more efficient (modulo FFI calls) than the equivalent async-everywhere code would be in typical JS or Python runtimes.
My "you have to do it manually" comment comes from some other peeves I have with Go. I guess the language designers were just hyper-focused on syscall concurrency and memory management (traditionally hard problems in server code), because Go does fare well on those specific fronts.
I remember this article in 2015 being revelatory. But it turned out that what we thought was an insurmountable amount of JS code written with callbacks in 2015 would end up getting dwarfed by promise-based code in the years to come. The “red functions” took over the ecosystem!
With Python, I’m sure some people expect the same thing to happen. I think Python's sync code is far more persistent, though: there's so much unmaintained code in the ecosystem that will never be updated to asyncio. We’ll see, I suppose, but it will be a painful transition.
All goroutines are async in some sense, so generally you don't need to return a channel. You can write the function as if it's synchronous, and the caller can call it in a goroutine and send to a channel if they want. This does force the caller to write some code, but the key is that you usually don't need to do this unless you're awaiting multiple results. If you're just awaiting a single result, you don't need anything explicit, and blocking on mutexes and IO will not block the OS thread running the goroutine. If you're awaiting multiple things, it's nice for the caller to handle it, so they can use a single channel and an errgroup.

This is different from many async runtimes because of automatic cooperative yielding in goroutines. In many of those runtimes, a function that wasn't explicitly designed as async will block the executor and lead to issues if you try this; in Go you can almost always just turn a "sync" function into an explicitly async one.
> I do think it's likely more passive than active. People at Google aren't deviously plotting to hide buttons from the user.
This is important, thank you for mentioning it: actions have consequences besides those that motivated the action. I don't like when people say "<actor> did <action>, and it leads to this nefarious outcome, therefore look how evil <actor> must be". Yes, there is always a chance that <actor> really is a scheming, cartoonish villain who intended that outcome all along. But how likely is it that <actor> is just naive, or careless, or overly optimistic?
Of course, the truth is almost certainly somewhere in the middle: familiarity with a hard-to-learn UI as a point of friction that promotes lock-in may not be a goal, but when it manifests, it doesn't hurt the business, so no one does anything about it. Does that mean the designers should be called out for it? If the effect is damaging enough to the collective interest, then maybe yes. But we needn't assume nefarious intentions to do so.
Then again, everyone thinks their own actions are justified within their own value system, and corporate values do tend toward the common denominator (usually involving profit-making). Maybe the world just has way more cartoonish villains than I give it credit for.