
Zig is a simpler language, but that simplicity comes at a cost: the memory bugs that exist in C also exist in Zig. Users bear the cost of the programmer using a simple language.

Rust is a more complex language, but it eliminates a number of classes of memory error. In this case, the programmer bears the cost of users having safer programs.

Although I find Zig very interesting—and I wish Andy the best of success with it—the trade-off that Rust offers is more in line with my values and my ethics.



> the memory bugs that exist in C also exist in Zig.

No, they don't. See Zig's release-safe mode.

Also: https://news.ycombinator.com/item?id=17184929

> the trade-off that Rust offers is more in line with my values and my ethics.

By the way, it's awesome that you like Rust, but there's no need to imply there's something ethically wrong with Zig. Depending on the platform you're targeting, you may yet find a use for either language.


release-safe doesn't prevent all memory bugs.


Sure, but then not even "safe" languages like JavaScript will prevent all memory bugs.

And to be fair, Rust does introduce memory issues of its own in terms of allocation strategy. For certain projects, Rust would not be considered "safe" with respect to memory.
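To make the allocation point concrete, here's a small Rust sketch (the `try_grow` helper name is just for illustration): the standard collections allocate implicitly and abort the whole process if the allocator fails, which is exactly why some kernel and embedded projects can't treat Rust's std as "safe" with respect to memory; `Vec::try_reserve` is the fallible alternative.

```rust
// Hypothetical helper: grow a Vec fallibly instead of aborting on OOM.
fn try_grow(v: &mut Vec<u8>, additional: usize) -> bool {
    // try_reserve surfaces allocation failure as a Result
    // rather than aborting the process.
    v.try_reserve(additional).is_ok()
}

fn main() {
    let mut v: Vec<u8> = Vec::new();
    v.push(1); // infallible path: aborts the process if allocation fails
    assert!(try_grow(&mut v, 1024)); // fallible path: failure is recoverable
    assert!(v.capacity() >= 1025);
    println!("ok");
}
```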

Zig's release-safe mode is worlds away from "the memory bugs that exist in C".

See also: https://github.com/ziglang/zig/issues/2301


This is very inaccurate, and as a formal methods practitioner, I'd like to try to explain why. It's part of what I call "the soundness problem," and it is this. Suppose you have some class of bugs that you can eliminate in the type system (in the case of Rust, this class of bugs -- various kinds of memory errors -- is, indeed, an important one, known to be the cause of many costly bugs). A type system eliminates 100% of those errors, but because a type system works based on deductive proofs, it, too, must make a tradeoff: either be relatively unrestrictive with regard to how programs are written, but then those proofs can be very complicated[1], or work with simple proofs, but restrict the programs. Rust has chosen the latter (which is the correct choice, IMO).

However, either way, this kind of proof has a significant cost, and the cost exists because of the 100% guarantee. In general, in formal methods, the cost of verifying a property can rise by 10x or more if you want to go from 99.5% certainty to 100% certainty. So Rust gives you 100%, but at a non-negligible cost. So now the question is, is it worth it? After all, while extremely important, these memory bugs aren't the only dangerous ones, and you still need to verify your program by other means.
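To illustrate the "restrict the programs" half of that tradeoff, here's a small Rust sketch (`first_or_default` is a made-up helper): the commented-out version never actually creates a dangling reference, yet the borrow checker's simple, conservative proof rules reject it, so the programmer must restructure the code until the proof goes through.

```rust
use std::collections::HashMap;

// This safe-in-practice version is rejected by the borrow checker,
// because its simple proof rules consider `map` borrowed for the
// whole function once `get` returns a reference:
//
//     fn first_or_default(map: &mut HashMap<u32, String>) -> &String {
//         if let Some(s) = map.get(&0) {
//             return s;                 // error: `map` still borrowed below
//         }
//         map.insert(0, String::new()); // never runs while `s` is alive
//         map.get(&0).unwrap()
//     }
//
// The restructured version the checker can prove, via the entry API:
fn first_or_default(map: &mut HashMap<u32, String>) -> &String {
    map.entry(0).or_insert_with(String::new)
}

fn main() {
    let mut m = HashMap::new();
    m.insert(0, "hello".to_string());
    assert_eq!(first_or_default(&mut m), "hello");
    let mut empty = HashMap::new();
    assert_eq!(first_or_default(&mut empty), "");
    println!("ok");
}
```

The restructuring here is cheap; the cost of the 100% guarantee shows up in cases where no equivalent formulation is easy to find.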

Zig does something different, and no, it doesn't work like C. Zig allows (and encourages) you to turn on runtime checks that also guarantee no memory errors, but at a cost to performance. Then, after testing, you can turn those off for your entire program or just for the performance-sensitive parts. So this, combined with language simplicity, also gets rid of those bugs, though not with a 100% guarantee. The fact that Zig is so simple also helps reduce other kinds of bugs, perhaps better than Rust does. Soundness comes at a cost that, perhaps unintuitively, can harm correctness.
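Rust itself has a small-scale analogue of this dial, which may make the tradeoff concrete (the two functions below are illustrative, not from any real codebase): slice indexing is always bounds-checked at runtime, and `get_unchecked` is the explicit opt-out you might reach for in a hot path after testing -- much like shipping with checks on and disabling them selectively.

```rust
// Checked variant: every index access carries a runtime bounds check,
// and an out-of-range index would panic rather than read bad memory.
fn sum_checked(xs: &[u64]) -> u64 {
    let mut s = 0;
    for i in 0..xs.len() {
        s += xs[i];
    }
    s
}

// Unchecked variant: after testing, the check is dropped where the
// index is known to be in range -- the release-fast-style tradeoff.
fn sum_unchecked(xs: &[u64]) -> u64 {
    let mut s = 0;
    for i in 0..xs.len() {
        s += unsafe { *xs.get_unchecked(i) };
    }
    s
}

fn main() {
    let xs = [1, 2, 3, 4];
    assert_eq!(sum_checked(&xs), 10);
    assert_eq!(sum_unchecked(&xs), 10);
    println!("ok");
}
```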

So to make this simplistic, you could say that Rust sacrifices other kinds of bugs (due to complexity) in order to guarantee no bugs of a certain class, while Zig gives you no 100% guarantee regarding any kind of bug (after turning off runtime checks), but it does give you the tools and the focus to reduce bugs across the board.

So even if you're uncompromising on correctness to the point that you employ formal methods (as I do, and I assume you do, too, as you consider it a matter of ethics), there's still no clear winner between the two approaches even on correctness alone. You could just as well argue that your ethics direct you to prefer Zig, because it may well be that its correctness story is stronger than Rust's; we just don't know. I like to choose my own "correctness focus" and I dislike complex languages, so I prefer Zig, but I completely understand that Rust is a better fit for other people's tastes.

[1]: This, e.g., is what you can do with ACSL and C.


Does a type system eliminate 100% of those errors? It eliminates those errors if you can be guaranteed that the underlying operating system calls and the hardware are always correct. That's pretty damn good but it's not 100%.


Right. Absolute guarantees can only apply to an algorithm, i.e. the abstract program, under certain assumptions about hardware and OS behavior. The correctness of software, a program running in the real world, is always probabilistic. There was an interesting case of a bug in the Java and Python sorting routines (TimSort) whose probability of manifestation in practice was about the same as a failure due to some hardware malfunction (bit-flip etc.). So you could think about it as if the algorithm was completely incorrect, but the program was about as correct as when running a totally correct algorithm.


I guess you could amend it to 'the type system prevents 100% of reasonably preventable errors of the class'!!


You’re absolutely correct that this depends on having the concrete semantics of the compiled program line up with the abstract semantics of the source language. What you may not notice is that the source language can indeed have a platform-independent abstract semantics, and it is that which 100% correctness is proved of.

Once you leave the abstract semantics, you have the problem of proving your compiler to be a full and faithful transformer of program semantics. But this is a different problem space with different tools of analysis. It’s beneficial to start off already knowing that your source space is correct.


Sure, but there's also an assumption that the computer faithfully executes (w.r.t. the hardware spec) the machine instructions the compiler emits, and that is only true with some probability. So system correctness is, therefore, always probabilistic. Whether, when, and how it pays to absolutely guarantee certain aspects of it is a complex question (or, rather, a large set of questions) that can only be answered empirically.


Oh, definitely. My point was only that verifying semantics at a layer, and verifying faithful translation between layers, are legitimately distinct activities that benefit from being considered separately. I would consider "faithful execution of machine instructions" to be a translation between layers -- specifically, between the abstract semantics of the machine instructions and the concrete semantics of, well, physics!

Siloing things this way helps us tighten down where faults can happen. It's true that faithful execution of a program can only be answered empirically -- but the question can be broken down into many other sub-questions which can be answered formally, with a smaller core of sub-questions that are necessarily empirical.


> but the question can be broken down into many other sub-questions which can be answered formally, with a smaller core of sub-questions that are necessarily empirical.

I don't think this path is particularly fruitful, though. For one, the question of cost is still empirical. For another, researchers who study technique X are interested in answering the question of what it can do, but developers are interested in an entirely different question: of the many techniques of achieving their particular correctness requirements, which is the cheapest? This question is clearly empirical.

After all, engineers ultimately care about one thing only: the probability of system failure and its severity vs the cost of achieving this correctness goal. Since none of these factors is ever zero, I don't think the right path is to separate the layers and pick which ones we want to dial to 100%, especially when considering that at each layer, the difference between 99.99% and 100% can be 10x in cost. I think that a more holistic view is suitable in the vast majority of cases, which means that we may prefer less than 100% certainty at all layers. In particular, dialing some layers to 100% is almost certainly not the cheapest way to achieve some realistic correctness goal. Put differently, the mere fact that you could achieve your correctness goal by dialing the correctness of some of the layers to 100%, doesn't mean that's how you should achieve it and that there aren't cheaper ways.


I wonder if there is a framework where we could systematically reason about all those trade-offs we usually navigate empirically.


> So to make this simplistic, you could say that Rust sacrifices other kinds of bugs (due to complexity) in order to guarantee no bugs of a certain class,

I'd say more that Rust sacrifices productivity and simplicity in order to guarantee no bugs of a certain class. I think there's plenty of room for a language like Zig (just like there is room for C as well as C++), but Rust is going to win every time on correctness.


> but Rust is going to win every time on correctness.

This is just not true (or, more precisely, your reasoning doesn't hold, and so your conclusion is not necessarily true), because guaranteeing no bugs of a certain class comes at the cost of complexity, which can harm the ability to reduce bugs of other kinds (e.g. if writing a program takes you 90% of the time it would in another language, the remaining 10% can be spent on reducing bugs). I mean, it could be that Rust wins on correctness vs. Zig, it could be that it's the same, and it could be that it loses, but it's impossible to determine without an empirical study. So even if correctness is the only thing you care about, you cannot conclude (without a study) that Rust (or soundness in general) is always a win [1]. Correctness is very complicated [2], and there are many ways of achieving it, many of which come at the expense of others.

If your claim were so obviously true, we'd see a big difference in correctness between, say, Haskell and Python, but we just don't see it.

[1]: As usual when correctness is concerned, this needs to be qualified. If correctness is truly your only concern, then, at the extreme, you could have a language that always guarantees 100% correctness of safety properties by rejecting all programs. Your software would always be correct, but you'll never be able to produce software that does anything.

[2]: See my post on computational complexity results on software correctness: https://pron.github.io/posts/correctness-and-complexity


> at the extreme, you could have a language that always guarantees 100% correctness of safety properties by rejecting all programs

To get at this from another angle: consider a buggy C program. It will always be possible to write a Rust program that behaves identically, even faithfully simulating the bugs. They're both 'just' Turing-complete languages, after all, and correctness depends on what program behaviour you want.
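For instance (a toy sketch; note that in C, signed overflow is technically undefined behaviour, though on common platforms it wraps in practice), safe Rust can reproduce a C counter-wraparound bug exactly:

```rust
// Faithfully simulates a C program whose 32-bit counter wraps around:
// the "bug" is preserved, entirely in safe Rust.
fn buggy_next_id(id: i32) -> i32 {
    id.wrapping_add(1) // i32::MAX wraps to i32::MIN, as the buggy C does
}

fn main() {
    assert_eq!(buggy_next_id(41), 42);
    assert_eq!(buggy_next_id(i32::MAX), i32::MIN); // the bug, reproduced
    println!("ok");
}
```

Memory safety constrains *how* the program computes, not *what* it computes, so correctness in the broad sense still depends on what behaviour you wanted.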


> Users bear the cost of the programmer using a simple language.

this is basically the point of having a simple language



