Hacker News | leecommamichael's comments

I just wish the camps would stop being so tribalistic. I see a broad spectrum of fights between any "better C" language and Rust enthusiasts. There is room for both of these things. Just use what works for you. Rust is a bit more like Ada in spirit: it introduces a lot of friction compared to "C-like" things, which gladly accept you blowing your leg off. Each tool has unique benefits and is uniquely suited to different problems.

If I'm building a simple GUI app, I'm not sure the friction from Rust is all that worthwhile. If I'm sending someone to space, I think I'd rather have the safeties of a Rust or an Ada, or MISRA C.


If this is a "beyond standard" test suite, (so much so that it _uniquely_ makes this work possible compared to other projects,) then how is Bun also uniquely unstable compared to other Zig programs (and so deserving of rewrite?) If the blame lies partially with the test suite, what does this imply (if anything) about the Rust port?

Because tests validate behavior, not undefined behavior.

The thesis is that Rust makes undefined behavior less likely.


Is your claim that using Zig ends in an "extremely high amount of crashes/memory bugs?" Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool? There is a lot of quality stuff made with C/C++, so what is Zig doing wrong?

> Is your claim that using Zig ends in an "extremely high amount of crashes/memory bugs?" Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool?

What caused you to hallucinate such a broad blanket statement? The point is the memory unsafety issues they ran into would be categorically impossible in safe Rust, which is why they're doing this in the first place.


It's not hallucination, it's a basic extrapolation. "Bun has had an extremely high amount of crashes/memory bugs due to them using Zig" is the same statement as "using Zig resulted in Bun having an extremely high amount of crashes/memory bugs". It is then natural to ask whether their position is "using Zig results in an extremely high amount of crashes/bugs" in general.

That's a hell of a lot more than "basic extrapolation." You're misrepresenting the original claim to fight against one that's trivially easy to dispute. "Bun has had an extremely high amount of crashes/memory bugs due to them using Zig" (which unlike Rust, doesn't prevent you from writing them) is a completely different statement than your "using Zig results in an extremely high amount of crashes/bugs." Implying that such a generalization was even on the table is insulting.

Yes, obviously you can write high-quality software in Zig. But does Zig categorically reject the kind of bugs Bun was suffering from? Rust does.


The point is that the "extremely high amount of crashes/bugs" is maybe not the fault of Zig after all, as was implied.

How software behaves is very obviously downstream of the tools (in this case programming language) used to build it.

"Downstream of" is doing a lot of work in that sentence. Language has an effect on, but in no way determines, the reliability of software written in it.

Downstream doesn't imply determinism.

The original claim is one of determinism. Your use of the term "downstream" is hiding the distinction; it can be read in either way, so it bridges the gap between the position you want to defend ("using Zig causes a higher probability of memory bugs") and the position you're forced to defend ("using Zig results in extremely many memory bugs").

In short, I'm accusing you of doing a motte-and-bailey.


It's not deterministic, it's probabilistic. Bun does have memory bugs that would not be there if they'd written in safe Rust instead of Zig. You could imagine a scenario where ZigBun has zero memory issues, but it is not the most likely outcome, and is arguably an incredibly unlikely outcome given the entire history of software written in memory-unsafe languages.

It's less that I'm motte-and-bailey'ing, and more that you're not subscribing to the principle of charity, choosing to interpret the original comment as its weakest possible version rather than the strongest.


It's generalizing from Bun (which might be especially tricky code) to other software that might not have the similar issues. There are lots of different kinds of software.

Even assuming that's a correct interpretation, is "using C/C++ results in having an extremely high amount of crashes/memory bugs" not true?

No, that's provably false by a fairly simple existence proof. If it was true that using C results in an "extremely high amount of crashes/memory bugs", we would expect to not find any substantial pieces of software written in C without an "extremely high amount of crashes/memory bugs". Now where exactly you draw that line is necessarily going to be somewhat arbitrary, but by any definition, I think we can all agree that SQLite does not fit that description. Yet SQLite is written in C. Therefore, we conclude that the statement must be false. QED.

Now C does have some aspects which make it more prone to crashes and memory bugs. The less strong statement of "using C results in a higher propensity for crashes/memory bugs than Rust" is absolutely true, I would argue. And both C++ and Rust inherit some (but not all, and not the same) of the aspects which make C prone to memory bugs. (So does Go, I would argue, but less than C++ and Zig.)


Bah waking up today to notice a typo, after the edit window. "And both C++ and Rust inherit some ... aspects" was of course meant to be "And both C++ and Zig inherit some ... aspects".

>I think we can all agree that SQLite does not fit that description

One of the reasons WebSQL died was due to how many memory bug related vulnerabilities SQLite had.


You know, I try to ask questions rather than making assertions in order to better my chances at provoking useful thought and conversation.

It is basically Modula-2 / Object Pascal with C like syntax.

While bounds checking, improved argument passing, typed pointers, proper strings and arrays are an improvement over C, it still suffers from use after free cases.

C++ already prevents many of those scenarios, at least for those folks who don't use it as a plain Better C and actually make use of the standard library in hardened mode. When not, it is naturally as bad as C.

Also note that the tools Zig offers to prevent that are also available in C and C++, but people have to actually use them; e.g. I was using Purify back in the 2000's.

Then there is the whole point that Zig is not yet 1.0, and who knows what will still change until then.


You would like the T3X language as an exercise to port stuff from Free Pascal to it. In the near future I plan to port two libre text adventures with it, Beyond the Titanic and Supernova. If it fits under T3X, it might run on 'high end' CP/M systems out there.

https://t3x.org/t3x/0/index.html

https://t3x.org/t3x/0/t3xref.html

Beyond these simple Curses games, there's a 6502 assembler and disassembler, along with a KIM-1 simulator, Micro Common Lisps and whatnot.


Have to look into it, thanks.

Nice. A tip: there are 'modules' which are just helpers (strings, io) over the main functions.

Kinda like write vs printf in C, but easier to grasp. The cheatsheet will help you a lot.

Another thing: setting up the compiler might be cumbersome; I might post a guide soon. I am not the author, but making it compile well on some arches can be odd (openbsd/amd64) vs native code (fbsd, 32 bit linux)... nothing complex once you've set it up the first time.

My T3XDIR in the makefile and bin/ scripts is set to $HOME/t3x0/lib, and the bin PATH is set to $HOME/T3XDIR/bin in both Unix env vars and the scripts. It's a 10 minute setup, but after that you will just run

        tx0 -c -s file 

        
(file actually being file.t) and get a binary. Cross compiling for DOS or CP/M involves similar flags. And it's cool as hell, as I translated Ladder into Spanish for some Spanish OpenBSD pubnix... and the same port will work in DOS too.

On Titanic/Supernova, well, it was a former TP game ported to FPC; it's not very complex, and tons of stuff could map 1:1 to T3X. The game might be too big for CP/M, but for DOS it would be ideal (even by using the T3X 'big' libraries).

The bundled cheatsheet (make will generate a cheatsheet.pdf file if you have groff) might help you. For instance, gotoxy can be written in T3X as con.move(x,y). You need to import the console library as:

         use console: con;
Also, the WYOP book from the same page comes with a good chunk of examples to play with in a ZIP file.

Have fun.


Eh, I made a typo. The PATH for tcvm and tx0 should be $HOME/t3x0/bin.

> Then there is the whole point that Zig is not yet 1.0, and who knows what will still change until then.

Seems like their luck finally ran out. For the longest time, they were getting all kinds of passes, as if it were a post-1.0 language, that others don't get. 10 years is quite a long time not to hit 1.0, or to still be making breaking beta changes. Though I think that (the luck) was significantly aided by their perpetual and odd HN boosting.

> While bounds checking, improved argument passing, typed pointers, proper strings and arrays are an improvement over C, it still suffers from use after free cases.

While Zig is a somewhat safer and more modern alternative to C, safety was arguably not so much its selling point. Plenty of other C alternative languages are equally or more safe. Dlang and Vlang, both now having optional GCs and ownership, are examples.


Yeah, pity that D somehow lost its adoption opportunity.

Now you can get most of it via C# AOT or Swift, with a much better ecosystem.

Still, it is part of the official GCC and LLVM frontends, so there is that.


Thank you for actually making the effort to respond to the curiosity in my question.

It is much harder to write quality stuff in C/C++ that doesn't have memory bugs (use after free, out of bounds access, use of uninitialized memory, double free, memory races, etc.). I wouldn't say it isn't feasible to build high quality software in those languages, but even the highest quality software written in those languages has these types of bugs. Zig is better than C, and maybe a little bit better than C++, especially with respect to spatial memory bugs, but it doesn't provide the same guarantees as Rust.

I use clang, LLVM, the Zig compiler, Brave, Firefox, KDE, Linux, Steam, PC games, Neovim, Ghostty and more software written in C/C++/Zig, and I can't remember the last time I had a crash caused by memory issues.

KDE also includes many other programs inside it like music player, document reader etc. that I never had any issues with.


Based on what? I am not familiar with this language called "c/c++", but if you are writing Modern C++, you shouldn't be creating problems like "double free." It's really not that hard to avoid at all. This reminds me of how all the people carried on as if they were making the kernel so much safer, not realizing they needed to use unsafe Rust. I think so many people call themselves programmers now, but so few know very much about computing beyond whatever the latest fad web framework is up to.

This kind of argument is why security folks look down on C and C++ developers.

Because instead of discussing serious matters, they missed English grammar class on the use of / and then get up in arms about the use of "and, or".

Additionally, even code bases from companies that sit at WG21 lack the use of so-called Modern C++ without any language features or header files inherited from C.

Better C with some niceties keeps being the prevalent approach, unfortunately.

C strings, C arrays, pointer math, the printf family, C-style casts, macros instead of templates, no STL, and if not hardened ...


Sure if you restrict yourself to a subset of c++ that avoids the more unsafe features, you can avoid some of those problems, but not all of them. And IME, a lot of c++ in the wild still uses those unsafe features, especially when interfacing with c libraries. And even if you always use smart pointers and make sure you always initialize your variables there are still plenty of ways you can get undefined behavior in c++.

> This reminds me of how all the people carried on as if they were making the kernel so much safer not realizing they needed to use unsafe rust.

Those are not contradictory. Confining unsafe code to a few unsafe blocks makes it easier to identify areas that need closer scrutiny. Just because there are unsafe blocks doesn't mean that using rust in the kernel isn't making it safer.


The answer is that C (and by extension Zig, C++) code goes through a hardening process. New code in these languages tends to be unsafe. But bugs and vulnerabilities get squashed over time. Bun gets updated fast and so has a lot of new unsafe code.

It's feasible to write good software, but anything on the scale of millions of lines of code will have memory and pointer issues. I've worked in large C++ code bases with people much more experienced and skilled than I was, and every single one of them would tell you that at that scale, no matter how economical and simple your program, you will produce memory bugs; the smartest person in the world makes errors holding that much stuff in their head.

They're difficult to find, difficult to reason about in big software and you'll always create some. Languages that rule that out are a huge improvement in terms of correctness.


This is correct, but people with too big of an ego or affected too much by Dunning-Kruger will try to say otherwise even when presented with ample evidence. Instead of a valid response you'll get "skill issue" from people that produce segfaulting code on a regular basis.

The statement “there exists a project where zig led to an extremely high amount of crashes/memory bugs” does not imply “all zig projects have an extremely high amount of crashes/memory bugs”.

This is a classic logic problem - eg “there is an orange cat” doesn’t imply “all cats are orange”.


> There is a lot of quality stuff made with C/C++

There’s a lot of leaky crap written in those languages too. One of the core promises of Rust is that the compiler will catch memory issues other languages won’t experience until runtime. If Zig doesn’t offer something similar it’ll make Rust very compelling.


Zig is a love letter to C. It does not do much of anything to address memory management. Doesn't even have any concept of ownership like C++ does (ergo, no equivalent of unique_ptr / shared_ptr). All you get over C is the addition of defer, and even that isn't really that different if you're using GCC or Clang and thus have __attribute__((cleanup)).

This is a hot take, but programming languages haven't progressed since the 90's. We've been conditioned to believe that if you want to be a serious programmer, you have to either use C++-style RAII (which includes Rust), or garbage collection, and there's no in-between, and C programmers are dinosaurs who can be ignored.

Arena allocators are a great way to automatically manage memory allocations. You malloc a whole bunch of memory and release it all with a single free, which makes it much easier to reason about your program's memory safety.

Casey Muratori has a good video talking about this. https://www.youtube.com/watch?v=xt1KNDmOYqA

And in Zig, you have an arena allocator out of the box: https://zig.guide/standard-library/allocators/ . And it's not just limited to that; you have debug allocators that detect memory leaks and give you stack traces where they occurred.

This isn't to say that Zig is great at everything. I think Rust is great for things like kernels, high-frequency trading systems, and authentication servers where memory safety and performance is paramount. But for things like video games, memory leaks and buffer overflows aren't that big of a deal, and Zig's "Good Enough" approach is great for those types of applications.


Arena allocators are not some grand new concept. They're already commonly used in C++ in the places it makes sense to use them. Which is really not that many places, it's a fast but rather niche optimization. There's not a whole lot of scenarios where lots of temporary memory is needed for one well defined scope.

Video games are large and have lots of state and lots of threads. Zig's lack of ownership here with fully manual memory management is overall a poor fit.


I disagree with a lot of what you said, but I don't feel authoritative enough to say you're wrong.

> Which is really not that many places, it's a fast but rather niche optimization. There's not a whole lot of scenarios where lots of temporary memory is needed for one well defined scope.

Arena allocators are not niche optimizations, nor something picked purely for optimization. Contrary to what you said, arenas are useful for temporary allocations with poorly defined intermediate scope or lifetime (think functions directly or indirectly called by the arena owner). If the scope is local and well-defined, a regular allocator or even a fixed buffer would do just fine.

> Zig's lack of ownership

Zig doesn't have explicit annotations for it, but the concept of ownership and lifetime doesn't go away. It's not enforced by the compiler, which is an intentional tradeoff to let the programmer have more control and freedom. When you use languages with manual memory management, it's expected that you are capable of designing sensible programs in such a way that ownership and lifetimes are tractable and are part of the program design, rather than something to workaround to please the compiler.


> Zig doesn't have explicit annotations for it, but the concept of ownership and lifetime doesn't go away. It's not enforced by the compiler, which is an intentional tradeoff to let the programmer have more control and freedom.

Right, it's exactly like C, and we kinda all know how that worked out in practice already...

Hence why I called Zig a "love letter to C". If all you want is C with a dash of zest, that's Zig. If you want a modern language that has learned from the many hard lessons the industry has dealt with over the years... well, Zig ain't it. Which is a perfectly fine thing for Zig to be, it doesn't have to be a good general purpose language. We have plenty of those already from Rust to Go to Java/C#/Kotlin to etc...

> arenas are useful for temporary allocations with poorly defined intermediate scope or lifetime (think functions directly or indirectly called by the arena owner).

Arenas are not good for that, because the arena as a whole has to outlive all of those poorly defined scopes & lifetimes, which is hard to do. Especially if you later go add on something like retry-with-backoff or asynchronous metrics/tracing or caching or whatever. Then suddenly you're either fighting use-after-frees or doing deep copying of data.


> Right, it's exactly like C, and we kinda all know how that worked out in practice already...

Production operating systems have been written in C, along with the countless tooling, libraries and game engines (which you said are a poor fit for manual memory management) that modern systems depend on. I'd say it worked out pretty well.

And Zig did learn from the hard lessons of the industry and fixes a lot of problems with C. It also has a lot of affordances that make it more than suitable for general purpose use.

> Arenas are not good for that because the arena as a whole has to outlive all of those poorly defined scopes & lifetimes, which is hard to do.

I don't know what else to tell you; arenas outliving temporary allocations is exactly what they are made for, and they go poof as soon as the arena owner is done. That's not hard; it makes things easier if anything. To give concrete examples, arenas are used on HTTP requests and cleaned up in one go as soon as the request is done. They are also used in (possibly deep) recursive functions and cleaned up as soon as the root function returns. Of course, you don't store arena-allocated memory elsewhere that outlives the arena; that would be dumb.

That's why you have to be consciously aware of the ownership and lifetimes that a piece of memory has. Ownership and lifetimes are just one part of the API contract of a function or module. You break it, that's on you. Having a compiler help with the ownership model would be nice, but it's not a substitute for having a good mental model of your programs. It's not that different from the tradeoff of having a less strict type system. Not every sanity check can or has to be performed at compile time. Zig also has debug allocators that catch a lot of memory mismanagement during testing. Hard-to-debug double-frees, use-after-frees and other such things are a symptom of cavalier YOLO programming.

That all said, I do agree that manual memory management is really hard to do if you are used to just sweeping gigabytes of memory under the rug, hoping the GC vacuum cleaner slurps it up afterwards. It takes a mindset and a set of practices. But once you've internalized it, it becomes second nature.

(Not to sound like a Zig fanboy; I do think it's still rough around the edges and there are a lot of things I don't like. But manual memory management is not that big of a problem.)


> written in C [..] tooling and game engines (which you said are a poor fit for manual memory management)

Game engines moved to C++ over 20 years ago.

Most major compilers are also in C++, including GCC (it switched over a decade ago). Which means the two largest C compilers are themselves not written in C. They have un-bootstrapped.

> That all said, I do agree that manual memory management is really hard to do if you are used to just sweeping gigabytes of memory under rug, hoping the GC vacuum cleaner slurps it afterwards. It takes a mindset and a set of practice. But once you internalized it, it becomes second nature.

Sorry, but no, you cannot internalize this. Nobody can. Once a program grows past some point, purely manual memory management & "git gud" are simply not practical. The amount of evidence against this is beyond any doubt.

Zig's emphasis on cross compilation seems like it's a better fit for embedded than anything else, which is where things shouldn't realistically grow to be huge projects, but with how coding efficiency (or lack thereof) works today along with microcontrollers getting ever more powerful... who knows.


Zig does in fact do some stuff to address memory management like making allocations more explicit using allocators and shipping with arenas.

C also has only explicit memory allocators...

rust does not promise leak safety.

True. But rust does make it a lot harder to leak memory by accident. Rust variables are automatically freed when they go out of scope. Ownership semantics mean the compiler knows when to free almost everything.

> But rust does make it a lot harder to leak memory by accident. Rust variables are automatically freed when they go out of scope.

RAII has entered the chat.


> Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool?

There are plenty of other companies/entities making high quality software in Zig: TigerBeetle, and Zig itself, for example.

Bun's entire history has been a kind of haphazard move as fast as you can story, so...


They've turned into a pretty unserious, non-critical, non-hardcore ad page.

So, they're like an LLM?

Death to liquid glass!

Isn't that where everyone's strategy is shifting?

Yes, but I think Google was playing that strategy from essentially day 1, or very early in this AI race, whereas the others are there now because of their lack of access to compute.

The general narrative I would read on HN/others, was that Google would be able to outlast/outcompete OpenAI and Anthropic because Google had both more money and more compute. Playing the game of subsidizing their most capable models to capture market share longer than the VCs could.

But instead I feel like Google opted out of that much earlier, shifting their focus to efficiency and scaling much, much sooner. Flash and Gemma are where Google was actually ahead of the competition while everyone else was focused on bigger, more capable models.

In the last month the environment has changed, compute is constrained, costs for consumers are way higher than expected. Copilot pretty much imploded, and I'm guessing both Anthropic and OpenAI are starting to feel the squeeze.

My personal opinion is this was necessary because integrating AI into products like AI overview and search meant scaling to billions of users was a requirement right out of the gate. And there's not enough money/compute, no matter who you are, to use frontier models for that.


It benefits Google's bottom line to have very capable small models that can cheaply cache results for search queries, even if they're frequently wrong. But I wonder if they use Gemini for the top X% of search terms to try and get better retention? Also the TPU vertical gives a good advantage here. I've never been super impressed with Gemini out of the box, but surely, surely, Google is best positioned here.

As a consumer, 24-32 GB VRAM is affordable ($1-2 k) and that's the frontier I'm most interested in. It's very "two papers down the line". Those models are also feasible to fine-tune, unlike the O(100+B) behemoths. The 4000 Pro Blackwell has very good TDP compared to people insisting on using 300-600W gaming cards. If I was freelancing, I would definitely consider getting a 6000 for work.


They also just have the resources: both in $$ to spend time optimizing, and in people like Jeff Dean who have already been focused on AI efficiency for a long time.

Please don't give them ideas. :(


I'm not familiar with Zitron as a character, but the article disagrees with his critique of AI progress slowing with the rebuttal that AI is getting more efficient to build and use. That comports with my perception that much of the recent work in the field has been focused on making the technology more profitable. See the special emphasis in the last two releases from Google, OpenAI and Anthropic. I do think that's a meaningful change in their messaging. I don't know what it means exactly, but they are clearly sending a message about economics.

I'm not a big user of AI for lack of interest, but have held for several years that I'd be more interested if it were faster and cheaper. If this form of AI is the future, I do hope it gets significantly more efficient, even if the capability caps out. I think there is plenty of room for interesting applications, if so.


Actually, the right is missing out on AI. My source is the same as the article’s.


Could you expand on that? I'm genuinely interested.

I personally identify as a leftist and it's my perception that the left is completely missing the moment on AI, to my great frustration. From my perspective, left-leaning people increasingly project an "AI evil" vibe even though most of them simply don't have any direct exposure beyond seeing Sora slop and hearing about data centres.


I actually think the anti-ai on the left is subsiding. More of my friends are using and asking about it, and I have become active in a local indivisible group, where more than half are using it. Those people were very excited to have someone with deep knowledge around. The remaining anti are softer resistance, more skeptical because they have heard bad environmental things. I'm personally more concerned about the social side and second order effects.

I'm trying to help them understand two things

1. Like all of computing history, we will become more efficient and have less environmental impact. The most likely slow down will come from energy availability. We need to step up our renewables, it's not so bad if it's good energy

2. We have moved up the stack. These are not simple text-in-out machines. The training and models are more sophisticated. We now give them tools, skills, constraints and have them operate in teams. Human in the loop is still important.


I think Raylib satisfies a similar CAPABILITIES-niche to Godot...

I am _not_ talking about ease of use or interface.

For a long time Godot has not been ready for medium-large 3D releases; that is changing, but for the most part both it and Raylib are very reliable and will be perfectly good for releasing a 2D game.

I'm not actually sure whether a 3D game with skinned meshes will ever be in-scope for Raylib. Wouldn't seem like it.


They didn't say anything about godot or 3d meshes.


“not sure if it will last further on development”

I interpret this to mean something like “as my game gets more involved” which is not unrelated to a venture into 3D. Why are you policing my comment which is trying to be helpful?


Your comment was so unrelated I thought you replied to the wrong comment.

