I've worked on two "production" Zig codebases: tigerbeetle [0] and sig [1].
These larger zig projects will stick to a tagged release (which doesn't change), and upgrade to newly tagged releases, usually a few days or months after they come out. The upgrade itself takes like a week, depending on the amount of changes to be done. These projects also tend to not use other zig dependencies.
Zig and Rust take two different philosophical approaches.
- Zig: Let's have a simple language with as few footguns as possible and make good code easy to write. However, we value explicitness and allow the developer to do anything they need to do. C interoperability is a primary feature that is always available. We have runtime checks for as many areas of undefined behaviour as we can.
- Rust: Let's make the compiler the guardian of what is safe to do. Unless the developer hits the escape hatch, we will disallow behaviour in order to keep the developer safe. To let the compiler reason about safety, we will have an intricate type system with concepts like lifetimes and ownership (how data moves). This gets complex sometimes, so we will have a macro system to hide that complexity.
Zig is a lot simpler than Rust, but I think it asks more of its developer.
I don't know if it's still on the table, but Andrew has hinted that the unused-variables error may in the future still produce an executable artefact while returning a nonzero exit code from the compiler. And truly fatal errors would STILL produce an executable artefact too, just one that prints "sorry, this compilation had a fatal error" to stdout.
If I comment out sections of code while debugging or iterating, I don't want a compile error for some unused variable or argument. A warning, fine, but this happens to me so frequently that the idea of unused variables being an error is insane to me.
It is insane and you are completely right. This has been a part of programming for over 50 years. Unfortunately you aren't going to get anywhere with zig zealots, they just get mad when confronted with things like this that have no justification, but they don't want to admit it's a mistake.
I think the plan is to make no distinction between error and warning, but have trivial errors still build. That said, I wouldn't be surprised if they push that to the end, because it seems like a great ultrafilter for keeping annoying people out so they don't try to influence the language.
They also made a carriage return crash the compiler so it wouldn't work with any default text files on windows, then they blamed the users for using windows (and their windows version of the compiler!).
It's not exactly logic land, there is a lot of dogma and ideology instead of pragmatism.
Some people would even reply how they were glad it made life difficult for windows users. I don't think they had an answer for why there was a windows version in the first place.
I'm not sure why you wouldn't make your compiler accept CRs (weird design decision), but fixing it on the user side isn't exactly hard either. I don't know an editor that doesn't have an option for using LF vs CRLF.
The unused variable warning is legitimately really annoying though and has me inserting `_ = x;` all over the place and then forgetting to delete it, which is imo way worse than just... having it be a warning.
> I don't know an editor that doesn't have an option for using LF vs CRLF.
And I don't know any other languages that don't parse a carriage return.
The point is that it was intentionally done to antagonize windows even though they put out a windows version. Some people defend this by saying that it's easy to turn off, some people defend it by saying windows users should be antagonized.
No zig people ever said this was a mistake, it was all intentional.
I'm never going to put up with behavior like that with the people making tools actively working against me.
Defer is still something you have to consciously write every time, so it destroys the value semantics that C++ has, which is the important part. With RAII you don't have to "just write defer after a string"; you can just use a string.
The 'not a problem for me' is what people would say about manual memory in C too. Defer is better but it isn't as good as what is already in use.
That's disingenuous. Rust tries to minimize errors, first at compile time and then at runtime, even at some discomfort to the programmer.
Zig goes for simplicity while removing a few footguns. It's more oriented towards programmer enjoyment. Keep in mind that programmers don't distinguish ease of writing code from ease of writing unforeseen errors.
Yes, I've written a few unsafe-focused crates [0], some of which have been modified & merged into the stdlib [1] [2] exposing them to the fringe edge-cases of Rust like strict provenance.
IMO, Rust is good for modeling static constraints - ideal when there are multiple teams of varying skill trying to work on the same codebase, as the contracts for components are a lot clearer. Zig is good for expressing system-level constructs efficiently: doing stuff like self-referential/intrusive data structures, cross-platform SIMD, and memory transformations is a lot easier in Zig than Rust.
As someone who hasn't liked writing C++ since around 2000 (I did like it before that), I cannot agree with this. C++ and Rust are not comparable in this sense at all.
One can argue Rust is what C++ wanted to be maybe. But C++ as it is now is anything but clean and clear.
I think the comparison is fair, strictly in the sense that both Rust and C++ are designed around extensible programming via a sort of subtyping (C++ classes, Rust traits), and similar resource management patterns (ownership, RAII), where Zig and C do not have anything comparable.
My take, unfortunately, is that Zig might be a more modern C but that gives us little we don’t already have.
Rust gives us memory safety by default and some awesome ML-ish type system features among other things, which are things we didn’t already have. Memory safety and almost totally automatic memory management with no runtime are big things too.
Go, meanwhile, is like a cleaner more modern Java with less baggage. You might also compare it to Python, but compiled.
Zig gives us things we really don't have yet: C + generics + good const eval + good build system + easy cross compilation + modern niceties (optionals, errors, sum types, slices, vectors, arbitrary bit packing, expression freedom).
Are there any other languages that provide this? Would genuinely consider the switch for some stuff if so.
Definitely not. Rust gives you a tangible benefit in terms of correctness. It's such a valuable benefit that it outweighs the burden of incorporating a new language in the kernel, with all that comes with it.
Zig offers no such thing. It would be a like-for-like replacement of an unsafe old language with an unsafe new one. May even be a better language, but that's not enough reason to overcome the burden.
Actually, that's not true at all. Zig offers you some more safety than C. And it also affords you a compiler architecture and stdlib that are so well designed you could probably bolt on memory safety relatively easily as a 3rd-party static checker.
"More safety than C" is an incredibly low bar. These are hygiene features, which is great, but Rust offers a paradigm shift. It's an entirely different ballpark.
I don't think you've necessarily understood the scope and impact of the borrow checker. Bounds checking is just a sane default (hygiene), not a game changer.
So yes, I understand that it's important. It doesn't need to be in the compiler though? I think it's likely the case that you also don't need to have annotations littering the language.
I wish you good luck! Successive attempts to achieve similar levels of analysis without annotations have failed in the C++ space, but I look forward to reading your results.
Memory safety by default in kernel sounds like a good idea :). However I don't think that C is being _replaced_ by Rust code, it's rather that more independent parts that don't need to deeply integrate with the existing C constructs can be written in a memory safe language, and IMO that's a fine tradeoff
I believe Rust is mainly being used for driver development, which seems a great fit (there's so many people of different skill levels who write Linux drivers, so this should help avoid bad driver code being exploited). It may also end up in the core systems, but it also might not fit there as well.
It is not about timelines. Linus Torvalds doesn't spend nights reading a bunch of books with crabs on their covers and rewriting random bits and pieces of the kernel in Rust. It is basically a dedicated group of people sponsored by megacorps doing the heavy lifting. If the megacorps wanted Zig we could have had it in the kernel instead (Linus might have rejected it though, not sure what he thinks of it).
It’s like people do it just because Zig is very comparable to C. So the more complex Rust must be like something else that is also complex, right? And C++ is complex, so…
But that is a bit nonsensical. Rust isn’t very close to C++ at all.
I wrote lots of C++ before learning Rust, and I enjoyed it. Since learning Rust, I write no more C++. I found no place in which C++ is a better fit than Rust, and so it's my "new C++".
There are places where a language could be a better fit, but which haven't adopted it. E.g. most languages over TypeScript on the backend, or most systems programming languages over Java for games.
If you define success for Rust as "everything is written in Rust", then Rust will never be successful. The project also doesn't pursue success in those terms, so it is like complaining about how bad a salmon is at climbing trees.
That is however how the Rust Evangelism Strike Force does it all the time, hence these kinds of remarks I tend to make.
C++ is good for some things regardless of its warts due to ecosystem, and Rust is better in some other ones, like being much safer by default.
Both will have to coexist in decades to come, but we have this culture that doesn't accept matches that end in a draw, it is all about being in the right tribe.
So... Like, what? Do you agree that there is no technical reason for LLVM to be written in C++ over Rust?
Have you considered that you perhaps do more damage to the conversation by having it with this hypothetical strike force instead of the people that are actually involved in the conversation? Whose feelings are you trying to protect? What hypocrisy are you trying to expose? Is the strike force with us in the room right now?
I assert there is no reason to rewrite LLVM in Rust.
And I also assert that the talk of Rust taking over C++ misses the mark as long as Rust depends on LLVM for its existence.
Or ignoring that for the time being NVidia, Intel, AMD, XBox, PlayStation, Nintendo, CERN, Argonne National Laboratory and similar, hardly bother with Rust based software for what they do day to day.
They have employees on WG14, WG21, contribute to GCC/clang upstream, and so far have shown no interest in having Rust around on their SDKs or research papers.
> I assert there is no reason to rewrite LLVM in Rust.
Everybody agrees with that, though? Including the people writing rustc.
There's a space for a different thing that does codegen differently (e.g. Cranelift), but that's neither here nor there.
> And I also assert that the talk of Rust taking over C++ misses the mark as long as Rust depends on LLVM for its existence.
There's a huge difference between "Rust depends on LLVM because you couldn't write LLVM in Rust [so we still need C++]" and then "Rust depends on LLVM because LLVM is pretty good". The former is false, the latter is true. Rust is perfectly suited for writing LLVM's eventual replacement, but that's a massive undertaking with very little real value right now.
Rust is young and arguably incomplete for certain use cases, and it'll take a while to mature enough to meet all use cases of C++, but that will happen long before very large institutions are able to migrate their very large C++ code bases and expertise. This is a multi-decade process.
Tbh Go is also really nice for various local tools where you don’t want something as complex as C++ but also don’t want to depend on the full C# runtime (or large bundles when self-contained), or the same with Java.
With Wails it’s also a low friction way to build desktop software (using the heretical web tech that people often reach for, even for this use case), though there are a few GUI frameworks as well.
Either way, self contained executables that are easy to make and during development give you a rich standard library and not too hard of a language to use go a long way!
- It was explicitly intended to "feel dynamically-typed"
- Tries to live by the zen of Python (more than Python itself!)
- Was built during the time it was fashionable to use Python for the kinds of systems it was designed for, with Google thinking at the time that they would benefit from moving their C++ systems to that model if they could avoid incurring the performance problems associated with Python. Guido van Rossum was also employed at Google during this time. They were invested in that sort of direction.
- Often reads just like Python (when one hasn't gone deep down the rabbit hole of all the crazy Python features)
Go has a garbage collector though. This makes it unsuitable for many use cases where you could have used C or C++ in the past. Rust and Zig don't have a GC, so they are able to fill this role.
GC is a showstopper for my day job (hard realtime industrial machine control/robotics), but would also be unwanted for other use cases where worst case latency is important, such as realtime audio/video processing, games (where you don't want stutter, remember Minecraft in Java?), servers where tail latency matters a lot, etc.
> GC is a showstopper for my day job (hard realtime industrial machine control/robotics)
Which is a very niche use case to begin with, isn't it? It doesn't really contradict what the parent comment stated about Go feeling like modern C (with a boehm gc included if you will). We're using it this way and it feels just fine. I'd be happy to see parts of our C codebase rewritten in Go, but since that code is security sensitive and has already been through a number of security reviews there's little motivation to do so.
> Which is a very niche use case to begin with, isn't it?
My specific use case is yes, but there are a ton of microcontrollers running realtime tasks all around us: brakes in cars, washing machine controllers, PID loops to regulate fans in your computer, ...
Embedded systems in general are far more common than "normal" computers, and many of them have varying levels of realtime requirements. Don't believe me? Every classical computer or phone will contain multiple microcontrollers, such as an SSD controller, a fan controller, wifi module, cellular baseband processor, ethernet NIC, etc. Depending on the exact specs of your device of course. Each SOC, CPU or GPU will contain multiple hidden helper cores that effectively run as embedded systems (Intel ME, AMD PSP, thermal management, and more). Add to that all the appliances, cars, toys, IOT things, smartcards, etc all around us.
No, I don't think it is niche. Fewer people may work on these, but they run in far more places.
Not familiar with it, but reading the github page it isn't clear how it deals with GC. Do you happen to know?
Some embedded use cases would be fine with a GC (MicroPython is also a thing after all). Some want deterministic deallocation. Some want no dynamic allocator at all. From what I have seen, far more products are in the latter two categories. While many hobby projects fall into the first two categories. That is of course a broad generalization, but there is some truth to it.
Many products want to avoid allocation entirely either because of the realtime properties, or because they are cost sensitive and it is worth spending a little bit extra dev effort to be able to save an Euro or two and use a cheaper microcontroller where the allocator overhead won't fit (either the code in flash, or just the bookkeeping in RAM).
Yes, just like with real time Java for embedded targets from PTC and Aicas, it is its own implementation with another GC algorithm, additionally there are runtime APIs for regions/arenas.
Here is the commercial product for which it was designed,
You can also see it differently: if the language dictates a 4x increase in memory or CPU usage, you have also brought forward, by that same factor of 4, the deadline before you need to upgrade the machine or rearchitect your code into a distributed system.
Previously, delivering a system (likely in C++) that consumed factor 4 fewer resources was an effort that cost developer time at a much higher factor, especially if you had uptime requirements. With Rust and similar low-overhead languages, the ratio changes drastically. It is much cheaper to deliver high-performance solutions that scale to the full capabilities of the hardware.
I think the issue is that "OOP patterns" is one part missing features, one part trying to find common ground between Java, Modula, C++, and Smalltalk, so the term ends up too broad.
A much saner definition comes from looking at how languages evolved and how the term is used. The way it's used is to describe an inheritance-based language: basically C++ and its descendants.
> one part trying to find common ground for Java, Modula, C++
The primary common ground is that their functions have encapsulation, which is what separates it from functions without encapsulation (i.e. imperative programming). This already has a name: Functional programming.
The issue is that functional, immutable programming language proponents don't like to admit that immutability is not on the same plane as imperative/functional/object-oriented programming. Of course, imperative, functional, and object-oriented language can all be either mutable or immutable, but that seems to evade some.
> SmallTalk
Smalltalk is different. It doesn't use function calling. It uses message passing. This is what object-oriented was originally intended to reference — it not being functional or imperative. In other words, "object-oriented" was coined for Smalltalk, and Smalltalk alone, because of its unique approach — something that really only Objective-C and Ruby have since adopted in a similar way. If you go back and read the original "object-oriented" definition, you'll soon notice it is basically just a Smalltalk laundry list.
> how term is used.
Language evolves, certainly. It is fine for "object-oriented" to mean something else today. The only trouble is that it's not clear to many what to call what was originally known as "object-oriented", etc. That's how we end up in this "no, it's this", "no, it's that" nonsense. So, the only question is: what can we agree to call these things that seemingly have no name?
> The primary common ground is that their functions have encapsulation
You omitted Smalltalk. Most people would agree that Smalltalk is object-oriented.
But that kinda ruins the common ground thesis.
> Language evolves, certainly. It is fine for "object-oriented" to mean something else today.
pjmlp's definition is very fuzzy. It judges object-orientedness based on a few criteria, like inheritance, encapsulation, polymorphism, etc. More checks, stronger OOP.
By that, even Haskell is somewhat OOP, and so is C, assembly, Rust, and any language.
---
What I prefer is looking at how it's used. And how it's used appears to be akin to an everyday term like "fish" or "fruit".
No one would agree that a cucumber is a fruit. Or that humans are fish. Even though botanically and genetically they are.
Exactly. It isn't functional. It doesn't use functions. It uses message passing instead. That is exactly why the term "object-oriented" was originally coined for Smalltalk. It didn't fit within the use of "imperative" and "functional" that preceded it.
> But that kinda ruins the common ground thesis.
That is the thesis: That Smalltalk is neither imperative nor functional. That is why it was given its own category. Maybe you've already forgotten, but I will remind that it was Smalltalk's creator that invented the term "object-oriented" for Smalltalk. Smalltalk being considered something different is the only reason for why "object-oriented" exists in the lexicon.
Erlang is the language that challenges the common ground thesis: It has both functions with encapsulation and message passing with encapsulation. However, I think that is easily resolved by accepting that it is both functional and object-oriented. That is what Joe Armstrong himself settled on and I think we can too.
> What I prefer is looking at it as it's used.
And when you look you'll soon find out that there is no commonality here. Everyone has their own vastly different definition. Just look at how many different definitions we got in this thread alone.
> No one would agree that a cucumber is a fruit.
Actually, absent of context defining whether you are referring to culinary or botanical, many actually do think of a cucumber as a fruit. The whole "did you know a tomato is actually a fruit?" is something that made the big leagues in the popular culture. However, your general point is sound: The definitions used are consistent across most people. That is not the case for object-oriented, though. Again, everyone, their brother, and pjmlp have their own thoughts and ideas about what it means. Looking at use isn't going to settle on a useful definition.
Realistically, if you want to effectively use "object-oriented" in your communication, you are going to have to explicitly define it each time.
> That is exactly why the term "object-oriented" was originally coined for Smalltalk.
Sure, but your definition doesn't cover it. If a definition doesn't cover the language for which the term was coined, it's a bit meaningless, ain't it.
Problem with making encapsulation and polymorphism essential to OOP definition, is that it then starts garbling up functional languages like Haskell and imperative like C.
I can see them being necessary but not enough to classify something as OOP.
> And when you look you'll soon find out that there is no commonality here.
Perhaps, but broadly speaking people agree that C++ and Java are OOP, but for example C isn't.
In the same way, if people say "give me a fruit" (as in fruits and vegetables), you'd be looked at oddly if you gave them a cucumber rather than an apple.
The same can be said of OOP. The common definition basically covers message-passing languages and inheritance/prototype-based languages.
> Problem with making encapsulation and polymorphism essential to OOP definition, is that it then starts garbling up functional languages like Haskell and imperative like C.
Polymorphism? That was never mentioned. Let me reiterate the definitions:
Let me also reiterate that there are other axes of concern. Imperative, functional, and object-oriented are not trying to categorize every last feature a programming language might have. Mutable/immutable, polymorphic/monomorphic, etc. are other concerns and can be independently labeled as such.
> Perhaps, but broadly speaking people agree that C++ and Java are OOP
Many do, but just as many hold on to the original definition. Try as you might, you're not going to find a common definition here, I'm afraid. If you want to use the term effectively, you're going to have to explicitly define it each time.
If you are pointing out that there is no consistent definition for OOP, I agree. I've said so multiple times. Yes, the proof is in the pudding, as they say.
It is not clear where you think that might otherwise fit into our discussion? I, to the best of my ability, spelled out the historical definitions that we are talking about so that we had a shared understanding. What someone else may have defined the same words as is irrelevant.
I think we can agree that these dividing lines aren't even useful, but the history behind them is understandable. In the beginning there was imperative programming, named to differentiate from unstructured programming. Then came encapsulation, which didn't fit under imperative, so they named it functional to separate it from imperative. But then came Smalltalk, and it recognized that it doesn't fit under imperative or functional, so it gave itself the name "object-oriented".
If we could go back in time we'd realize that none of these names bring any significance [hence why there is no consistent definition] and throw them away. But we cannot go back in time. We could recognize today that they are just a historical curiosity and throw them away now, but it seems there is too much emotional attachment to them at this point.
So, if you want to use them to satisfy your emotional desires, you can! But you need to also explicitly define them each time so that the reader/listener understands what you mean by it. Failure to do so means they will pick their own pet definition, and then you will talk past each other. There is no commonality found around these terms because, again, any definition you choose (pjmlp's, mine, yours, anyone's) none of them convey any truly useful information, so any definition offered is never retained by anyone else.
> It's pjmlp's insistence that Rust is object-oriented.
It is, for some definition of object-oriented. But this perfectly highlights how there isn't useful information to be found in the use of the term. Even if we all agreed on what object-oriented means, what would you learn from it? Nothing, is what. It was a pointless statement and we can accept it as such.
Sure, for some definition of red, green is red. E.g., colorblind people. I'm interested in more broadly accepted jargon.
The problem is, Rust isn't really object-oriented either. I'm interested in a mostly consistent and hopefully majority definition.
It's not message-passing (you can't do cool fancy things* à la Ruby or Smalltalk); nor is it inheritance-based (you can't do inheritance-based or prototype-based OOP patterns).
There is one more mathematical definition of whether two features are equal, but it involves languages, local macros, and Turing machines. See https://www.youtube.com/watch?v=43XaZEn2aLc
* There was some kind of message recorder and playback in Ruby/Smalltalk that I can't find. Basically you send methods to objects and record them, then play them back at a later date. Will update if I find it.
> The problem is, Rust isn't really object-oriented either. I'm interested in a mostly consistent and hopefully majority definition.
May I suggest "programming language"? I think you will find that most everyone agrees that Rust is a programming language.
In context, it's functional, but I think you rejecting that historical definition means that you agree with me that the attempt at categorization here doesn't provide any useful information. So, the question here is: What specific information is it that you think is failing to be effectively communicated?
If I take a walk down the street and tell the first guy I meet, "Hey, Rust is a programming language", what information did he miss out on that you find critical?
When we establish that, we might find out there is already a widely recognized term. You won't find it in "object-oriented", however. It has never been used in a context where the information was useful. Even the original message passing definition was never useful as you always had to explain what message passing is at the same time anyway, negating the value of a single word to use as a shorthand.
Words are not given to us naturally by the universe. They are a human invention. Consistent definitions for words only become accepted consistently when those humans find utility in adopting something consistent. "If you build it, they will come" only works in movies.
> So, the question here is: What specific information is it that you think is failing to be effectively communicated?
Expressivity. As the video I linked before shows, there is a quantifiable and objective difference between a language that has exceptions and one that doesn't. Or lambdas, or async.
What terms like "message passing" and "inheritance-based" capture is the unique ability of each language to do something novel* that other languages can't. Rust as of now lacks such capabilities, although it can probably simulate them to some extent.
*For message passing, it's the method record and replayer.
For inheritance-based it can be something like easy DOM manipulation.
Then you might say that Rust is an expressive programming language. But then I'm going to ask: What does expressivity mean?
Ruby is always hailed for its expressivity. Is it also an expressive programming language despite having very little in common with Rust technically?
It seems to me you're going back down the road Kay did thinking that "object-oriented" could become the way to describe his actor based, message passing model. It never caught on because what that means isn't well understood and had to be explained in more detail, so a single word didn't add any value, and thus nobody ever took note of it.
> there is a quantifiable and objective difference between a language that has exceptions and one that doesn't.
Well, I suggest we have a way to say that: {X} {has|does not have} exceptions. The terminology there already exists and is commonplace, as far as I see. If you need to talk about multiple features, then make it a list: {X} has exceptions, lambdas, and inheritance. Laundry lists of features are easy to describe. It is when one wants to speak more conceptually that it is harder to find something of actual value, as it is usually the concept that you want to explain.
And maybe that's all you really need to get the information conveyed here? "Rust is a programming language" → "Rust is a programming language that has x, y, and z."
> Then you might say that Rust is an expressive programming language.
That's not what I mean. Expressivity allows you to objectively test whether two languages are different. The functional/object-oriented/imperative labels are trying to capture some expressive features.
Using expressivity, you can finally put a Turing machine to that feeling and test it.
> The terminology there already exists and is commonplace, as far as I see.
Missing the point. "Message-oriented language" captures the expressivity of having the ability to send and receive arbitrary methods. This is what I mean.
If OOP or MOP is just a marketing term, then it carries no value.
Yes, of course you can call objc_msgSend or equivalent in Rust just as you can in C. But you are pushing the object-oriented model into a library. It is not native to the language.
I am talking about Rust OOP language features for polymorphism, dynamic and static dispatch, encapsulation, interfaces.
Which allowed me to port 1:1 the Raytracing Weekend tutorial from the original OOP design in C++ to Rust.
Also the OOP model used by COM and WinRT ABIs, that Microsoft makes heavy use of in their Rust integration across various Windows and Office components.
Absolutely. That's why it is best to stick to the already established definitions. Kay was quite explicit about what "object-oriented" meant when the term was uttered for the first time; including specifically calling out C++ as not being object-oriented.
And yes, we all know the rest of the story about how the C++ guys were butthurt by that callout and have been on a mission to make up their own pet definition that allows C++ to become "object-oriented" ever since. I mean, who wouldn't want to latch onto a term that was about the unique features of a failed programming language that never went anywhere?
Once someone offers up the replacement name so that we can continue to talk about what "object-oriented" referred to 40 years ago — and still refers to today, sure. Nobody cares about the exact letters and sounds.
But, until then, no. It is still something we regularly talk about. It needs a name. And lucky for us it already has one — and has had one for 40 years.
Zig is what you want to write, because it gets out of the way.
Rust is what you want your colleagues to write, to enforce good practices and minimise bugs. It's also what I want my past self to have written, because that guy is always doing things that make my present life harder.
> so you have no clue if the shared data might be incompletely modified or otherwise logically corrupted.
One can make a panic wrapper type if they care; it's what the stdlib Mutex currently does:
MutexGuard checks whether it's panicking during drop using `std::thread::panicking()`, and if so, sets a bool on the Mutex. The next acquirer checks for that bool and knows the state may be corrupted. No need to bake this into the Mutex itself.
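That wrapper can be sketched roughly like this. The `PoisonMutex`/`PoisonGuard` names are invented for the example; std's real Mutex tracks poisoning with a private flag rather than a separate wrapper type.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex, MutexGuard};

struct PoisonMutex<T> {
    poisoned: AtomicBool,
    inner: Mutex<T>,
}

struct PoisonGuard<'a, T> {
    flag: &'a AtomicBool,
    guard: MutexGuard<'a, T>,
}

impl<T> PoisonMutex<T> {
    fn new(value: T) -> Self {
        Self { poisoned: AtomicBool::new(false), inner: Mutex::new(value) }
    }

    // Errors if a previous holder panicked while holding the lock.
    fn lock(&self) -> Result<PoisonGuard<'_, T>, &'static str> {
        // Ignore std's own poisoning; we track it ourselves here.
        let guard = self.inner.lock().unwrap_or_else(|e| e.into_inner());
        if self.poisoned.load(Ordering::Relaxed) {
            return Err("state may be corrupted");
        }
        Ok(PoisonGuard { flag: &self.poisoned, guard })
    }
}

impl<T> Drop for PoisonGuard<'_, T> {
    fn drop(&mut self) {
        // If we're unwinding from a panic, mark the data as suspect.
        if std::thread::panicking() {
            self.flag.store(true, Ordering::Relaxed);
        }
    }
}

impl<T> std::ops::Deref for PoisonGuard<'_, T> {
    type Target = T;
    fn deref(&self) -> &T { &self.guard }
}

fn main() {
    let m = Arc::new(PoisonMutex::new(0i32));
    let m2 = Arc::clone(&m);
    // A thread that panics while holding the lock "poisons" it.
    let _ = std::thread::spawn(move || {
        let _g = m2.lock().unwrap();
        panic!("boom");
    })
    .join();
    assert!(m.lock().is_err());
}
```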
My point is that "blindly continuing" is not a great default if you "don't care". If you continue, then you first have to be aware that a multithreaded program can and will continue after a panic in the first place (most people don't think about panics at all), and you also have to know the state of the data after every possible panic, if any. Overall, you have to be quite careful if you want to continue properly, without risking downstream bugs.
The design with a verbose ".lock().unwrap()" and no easy opt-out is unfortunate, but conceptually, I see poisoning as a perfectly acceptable default for people who don't spend all their time musing over panics and their possible causes and effects.
There are compiler-level traits like `Iterator` and `Future` which enforce references. If you want to do intrusive pointers into them, you risk creating overlapping references: https://github.com/tokio-rs/tokio/issues/3399
References to UnsafeCell<> should still be safe, because the only thing you can do with an &UnsafeCell<T> is extract a possibly-mutating raw pointer to T. (UnsafeCell<> is also special in that, like Cell<>, you can mutate through it without incurring UB.)
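A minimal illustration of that special status (the `bump` function name is invented for the example): mutating through a shared `&UnsafeCell<T>` via the raw pointer from `get()` is allowed, unlike mutating through a plain `&T`.

```rust
use std::cell::UnsafeCell;

fn bump(cell: &UnsafeCell<i32>) {
    // SAFETY: single thread, and no other references to the contents
    // are live while we write through the raw pointer.
    unsafe { *cell.get() += 1 }
}

fn main() {
    let c = UnsafeCell::new(41);
    bump(&c); // mutation through a shared reference, no UB
    assert_eq!(unsafe { *c.get() }, 42);
}
```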
Tokio's focus is on low tail latencies for networking applications (as mentioned). But it doesn't employ yield_now for waiting on a concurrent condition to occur, even as a backoff strategy, as that fundamentally kills tail latency under the average OS scheduler.
> Green threads give you all the advantages of async
They require more memory than stackless coroutines, as they store the call stack instead of updating a single state. They also allow recursion, but it's unbounded, meaning you either 1) overrun the guard page and potentially write into another green thread's stack just by declaring a large local variable, 2) enable some form of stack probing to address that, or 3) support growable stacks, which requires a GC to fix up pointers (not available in a systems language).
> green threads should run faster, as storing state on a stack is generally faster than malloc.
Stackless coroutines explicitly don't malloc on each call. You only allocate the initial state machine (the stack, in green-thread terms).
> The primary objection seems to be speed
It's compatibility. No way to properly set the stack size at compile time for various platforms. No way to set up guard pages in a construct that's language-level and so should support being used without an OS (i.e. embedded, wasm, kernel). The current async, using stackless coroutines, 1) knows the size upfront due to being a compiler-generated state machine and 2) disallows recursion (as that would be a recursive state-machine type, so users must dynamically allocate those however appropriate), which works for all targets.
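A hand-written sketch of the kind of state machine the compiler generates. The `Counter`/`Poll2` names are made up, and the real lowering goes through the `Future` trait (omitted here for brevity); the point is that every local that lives across a suspension point becomes an enum field, so the whole "stack" has a fixed size known at compile time.

```rust
enum Counter {
    Start,
    Suspended { count: i32 }, // local saved across the suspension
    Done,
}

enum Poll2 {
    Pending,
    Ready(i32),
}

impl Counter {
    fn poll(&mut self) -> Poll2 {
        match *self {
            Counter::Start => {
                let count = 41; // local before the suspension point
                *self = Counter::Suspended { count };
                Poll2::Pending // suspend: no stack survives, only the enum
            }
            Counter::Suspended { count } => {
                *self = Counter::Done;
                Poll2::Ready(count + 1) // resume with the saved local
            }
            Counter::Done => panic!("polled after completion"),
        }
    }
}

fn main() {
    let mut c = Counter::Start;
    assert!(matches!(c.poll(), Poll2::Pending));
    assert!(matches!(c.poll(), Poll2::Ready(42)));
    // size_of::<Counter>() is the entire per-task memory footprint.
    assert!(std::mem::size_of::<Counter>() <= 8);
}
```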
> They require more memory than stackless coroutines, as they store the call stack instead of updating a single state.
True, but in exchange you don't have to fight the borrow checker because things are being moved off the stack. And the memory is bounded by the number of connections you are serving. The overheads imposed by each connection (TCP windows, TLS state, disk I/O buffers) are likely larger than the memory allocated to the stack. In practice, on the machines likely to be serving 1000's of connections, it's not going to be a concern. Just do the arithmetic: if you allowed a generous 64KB for the stack and were serving 16K connections, that's 1GB of RAM. A Raspberry Pi could handle that, if it wasn't crushed by the 16K TCP connections.
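The back-of-envelope arithmetic above checks out (the 64KB and 16K figures are the comment's assumptions, not measurements):

```rust
fn main() {
    let stack_bytes: u64 = 64 * 1024; // generous green-thread stack
    let connections: u64 = 16 * 1024; // 16K concurrent connections
    let total = stack_bytes * connections;
    assert_eq!(total, 1 << 30); // exactly 1 GiB of stack memory
}
```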
> They also allow recursion, but it's unbounded, meaning you either 1) overrun the guard page and potentially write into another green thread's stack just by declaring a large local variable, 2) enable some form of stack probing to address that, or 3) support growable stacks, which requires a GC to fix up pointers (not available in a systems language).
All true, but also true for the main stack. Linux solved it by using a 1MB guard area. On other OSes gcc generates probes if the frame size exceeds the size of the guard area. Let's say the guard area is 16KB. Yes, that means any function with more than 16KB of locals needs probes, but no function below that does. Which in practice means they are rarely generated. Where they are generated, the function will likely be running for a long time anyway, because it takes a while to fill 16KB with data, so the relative impact is minimal. gcc allows you to turn such probes off for embedded applications, but anybody allocating 16KB on the stack in embedded deserves what they get.
And again the reality is a machine that's serving 1000's of connections is going to be 64bit, and on a 64bit machine address space is effectively free so 1MB guard gaps, or even 1GB gaps aren't a problem.
> No way to properly set the stack-size at compile time for various platforms.
Yet, somehow Rust manages that for its main stack. How does it manage that? Actually I know how: it doesn't. It just uses whatever the OS gives it. On Windows that's 1MB. 1000 1MB stacks is 1GB. That's 1GB of address range, not memory. Again, not a big deal on a modern server. On embedded systems memory is more constrained, of course. But on embedded systems the programmer expects to be responsible for the stack size and position. So it's unlikely to be a significant problem in the real world. And if it does become a problem because your program is serving 10's of 100's of concurrent connections, I don't think many programmers would consider fine-tuning the stack size to be a significant burden.
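For what it's worth, Rust's stdlib does expose a per-thread knob for exactly this, via `std::thread::Builder`; a quick sketch:

```rust
use std::thread;

fn main() {
    let handle = thread::Builder::new()
        .stack_size(64 * 1024) // a 64KB "green-thread-sized" stack
        .spawn(|| 2 + 2)
        .expect("spawn failed");
    assert_eq!(handle.join().unwrap(), 4);
}
```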
> No way to setup guard pages in a construct that's language-level so should support being used without an OS (i.e. embedded, wasm, kernel).
There is no way to set up the main stack without the kernel's help, and yet that isn't a problem? That aside, are you really saying that replacing a malloc() with an mmap() with the right flags is beyond the ken of the Rust runtime library authors? Because that is all it takes. I don't believe it.
> The current async using stackless coroutines 1) knows the size upfront due to being a compiler-generated StateMachine 2) disallows recursion (as that's a recursive StateMachine type, so users must dynamically allocate those however appropriate) which works for all targets.
All true. You can achieve a lot by moving the burden to the programmer. I say the squawks you see about async show that burden is considerable. Which would be fine, I guess, if there was a large win in speed or run-time safety. But there isn't. The win is mainly saving on some address space for guard pages, for applications that typically run on 64bit machines where address space is effectively an unlimited resource.
The funny thing is, as an embedded programmer myself who has fought for memory, I can see the attraction of async being more frugal than green threads. A compiler that can do the static analysis to calculate the stack size a number of nested calls would use, set the required memory aside, and then generate code so that all the functions use it instead of the stack sounds like it could be really useful. It certainly sounds like an impressive technical achievement. But it's also true that I've never had it before, and I've survived. And I struggle to see it being worth the additional effort it imposes outside of that environment.
Preemption simulates localized concurrency (running multiple distinct things logically at the same time), not parallelism (running them physically at the same time). You can have concurrency outside continuations. OS threads, for example, are not continuations, but still express concurrency to the kernel so that it can (not guaranteed) express concurrency to the physical CPU cores, which hopefully execute the concurrent code in parallel (again not guaranteed, due to hyperthreading).
getaddrinfo() is a synchronous function that can make network requests to resolve DNS. The network property isn't reflected in its function signature becoming async. You can have an async_getaddrinfo() which does, but the former is just a practical example of network calls in particular being unrelated to function coloring.
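Rust's stdlib makes the same point: `to_socket_addrs()` is a plain synchronous call, yet for a real hostname it may block on a DNS lookup over the network. Nothing in its signature is async. A small sketch ("localhost" is used here because it usually resolves locally, without the network):

```rust
use std::net::ToSocketAddrs;

fn main() {
    // A synchronous call that, for non-local hostnames, may do
    // blocking network I/O; the signature doesn't say so.
    let addrs: Vec<_> = ("localhost", 80)
        .to_socket_addrs()
        .expect("resolution failed")
        .collect();
    assert!(!addrs.is_empty());
}
```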
[0]: https://github.com/tigerbeetle/tigerbeetle/pulls?q=is%3Apr+a...
[1]: https://github.com/Syndica/sig/pulls?q=is%3Apr+author%3Akpro...