Jarred's comments | Hacker News

`PathString` worked the exact same way in our Zig code, with less visibility from the compiler & type system. And yes, it will be refactored heavily (or deleted overall) in the next week or so.

> mostly a JavaScript interpreter wrapper

Not accurate. Bun is a batteries-included JavaScript & CSS transpiler (parser), minifier, bundler, npm-like package manager, and Jest-like test runner, with runtime APIs like built-in Postgres, MySQL, and Redis clients. This is naturally a ton of code.


Don't forget the image rendering library!

Now that Bun can leverage Rust, do you think some of this code will get disaggregated? E.g., Bun could use the swc crates.

It wouldn't have been that hard to do that from Zig if they'd wanted to. They don't, because they want to do everything themselves so that it works exactly the way they want (except the core JS engine for which this is infeasible—though even that has custom patches). After all, there are already plenty of libraries on npm for those other parts of the stack and they do work in Bun.

That was me - not CI marking it as slop. It kept around 60 .zig files that should’ve been moved to .rs files.

It looks like you were spamflagged on your last comment https://news.ycombinator.com/item?id=48133806

That's wild. How are people going so crazy over a rewrite?


> it's basically solving the "tests not passing" problem by changing the tests themselves.

False.

0 test files were deleted. 0 pre-existing tests were skipped, todo’d, or had assertions removed. 5 new tests were added in test.skip/test.todo state to track known not-yet-fixed bugs in the port that lacked test coverage before.

The merge changed 28 test files in total.

+1,312 lines

−141 lines

Most of that +1,312 is new tests.

The depth-of-recursion tests for the TOML/JSONC parsers went from 25,000 to 200,000, because Rust’s smaller stack frames (LLVM lifetime annotations let the optimizer reuse stack slots) mean 25k levels no longer reach the 18 MB stack limit on Windows.
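
As a rough illustration of the principle behind those depth tests (using Python's stdlib `json` module as a stand-in, not Bun's actual TOML/JSONC parsers): a recursive-descent parser's maximum nesting depth is bounded by available stack budget, so smaller frames mean a deeper safe recursion limit.

```python
import json

def parse_depth_ok(depth: int) -> bool:
    """Check whether the stdlib JSON parser survives `depth` nesting levels."""
    doc = "[" * depth + "]" * depth  # e.g. [[[[]]]] for depth 4
    try:
        json.loads(doc)
        return True
    except RecursionError:
        # the parser's recursion outgrew the available stack/recursion budget
        return False

# shallow nesting parses fine; very deep nesting exhausts the budget
assert parse_depth_ok(100)
assert not parse_depth_ok(100_000)
```

The same probe against Bun's parsers is presumably what motivated raising the test constant once the per-level cost shrank.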


That's great!

It's too bad you didn't structure the commits and pull requests a bit differently so that it's easier to review the exact changes, but I hope it goes well.

For example, doing the test refactorings in a first pull request, and using something like test.xfail so that a test first fails and then, after the merge, succeeds (without the test code itself changing).
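
A hedged sketch of that workflow using Python's stdlib `unittest` as a stand-in for test.xfail (the flag and function names here are hypothetical; Bun's test runner has its own equivalents): the assertion stays fixed, and only the expected-failure marker changes when the port lands.

```python
import io
import unittest

RUST_PORT_MERGED = False  # hypothetical flag; flips to True once the port lands

def parser_depth_limit() -> int:
    # hypothetical stand-in for behavior the port is expected to improve
    return 200_000 if RUST_PORT_MERGED else 25_000

class DepthTest(unittest.TestCase):
    # expected failure until the port lands; the assertion below never changes
    # (pytest's conditional xfail would avoid editing even this decorator)
    @unittest.expectedFailure
    def test_deep_nesting_supported(self):
        self.assertGreaterEqual(parser_depth_limit(), 200_000)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DepthTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
# the run is "green" because the failure was expected, not hidden
assert result.wasSuccessful() and len(result.expectedFailures) == 1
```

The point of the pattern is reviewability: a reviewer can diff the marker flips instead of diffing assertion bodies.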

Also, I have seen some tests getting stricter, which again is not a problem, but separating them into a different pull request would have improved the reviewability significantly for a runtime that many people and companies depend on.

I'm sorry you were downvoted by HN and your comment got marked "dead"; that's not the way to review things.


Still writing the blog post about this. Will share more details.

For where this is coming from, skim the bugfixes in the Bun v1.3.14 and earlier release notes. Rust won’t catch all of these - leaks from holding references too long and anything that re-enters across the JS boundary are still on us. But a large percentage of that list is use-after-free, double-free, and forgot-to-free-on-error-path bugs, which become compile errors or automatic cleanup.


You, nine days ago[0]:

> I work on Bun and this is my branch

> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

Maybe... it wasn't such an overreaction?

[0]: https://news.ycombinator.com/item?id=48019226


I'm really out of the loop here, so maybe you can help me answer a question - why is HN unhappy about this rewrite? Why are people writing here almost as if they feel betrayed by Bun being rewritten from Zig into Rust?

I genuinely don't get it. I've been following this Bun stuff a bit but I don't understand where the HN sentiment is coming from.


Because in the software world, especially before 2022, ownership and stability have been valued. People like using things that do not randomly start breaking more often after every new release, and if things break, there is a human who knows exactly why it broke and what's the best way to fix it. Businesses would not want their losses to be attributed to an AI rewriting an entire codebase. AI owns nothing, not even the bugs which it produces. I would not want my SaaS to have downtime because a JavaScript runtime it depends on decided that they had to market their LLM by rewriting years of code recklessly.

People are not betrayed by a rewrite. They are betrayed by an LLM rewriting with minimal supervision, fast-tracked to a merge within 9 days of commencement.

On the contrary, I do not understand how we have become so insensitive towards stability since the LLM era began. Why is unbreakable code no longer the goal, but a truckload of generated code is?


The unhappiness is primarily stemming from Bun’s ownership by Anthropic - HN sees this as Anthropic using an OSS project for reckless marketing stunts.

For the record I don’t believe it’s a stunt, it’s ridiculous to me - everyone’s just seeing what they want to see out of sheer hate for anything Anthropic does.

In any case if the rewrite is really as reckless as many in this thread claim, we will see Bun collapse in on itself with a 1M LOC codebase the core team doesn’t understand, or rollback to Zig. So we don’t need to have a flamewar over it, time will answer the question.


The useful thing about useful idiots is that they don't have to be "in" on it to effectively have the same outcome.

When trillions of dollars are on the line, along with literally killing thousands of Americans due to the utter destruction of hyperscale data centers, it becomes extremely prudent to be critical of such stunts.


My read is it's less the rewrite and more the messaging around the rewrite. Nine days between "you're over-reacting" and merge is surprising, to say the least. Sure will be interesting to see that blog post!

Vibe coding a Rust rewrite of a widely used tool is basically catnip for the HN crowd.

Not if you use that tool, then it's just scary.

The context nobody is mentioning is this came shortly after Bun forked Zig in the name of optimization, but then a Zig maintainer came out and basically said they (Bun) don't know what they're doing, or else they would have known that wasn't an effective optimization.

It outwardly seemed like they forked Zig for a flashy headline, were called out, then immediately started moving to Rust. This, combined with being bought by Anthropic, and plugging vibe coding the whole way, just gives the impression of random and chaotic technical decisions, which is not what people want in software their business depends on.

https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...


My read: if the code has a comprehensive feature test suite, a performance test suite (how long a function takes), and a linter with readability guidelines (e.g. cyclomatic complexity; no code duplication), and the LLM rewrite passes all three, then it should be fine. But I think that in the real world only the first one (functional tests) exists.

My read is that it just seems a bit reckless doing a full rewrite so quickly.

Posting my read (since it differs so much from the others'): there's a 'holy war' being waged by people who think LLMs shouldn't do full rewrites of software. There are various reasons people think this (they think LLMs are parrots that make slop and are incapable of writing good code, have environmental concerns, or are angry that software licenses can be circumvented). I call it a 'holy war' because I think most see our current trajectory as a bit inevitable and have a strong urge to proselytize their views and chide maintainers who use LLMs in ways they don't like.

Very similar angry comments happened with the discussions of the Chardet rewrite, next.js/vinext, and JSONata/gnata if you want to look at this in context.


You're not alone in voicing this, another (now dead) comment did it earlier too with a bit more of an emotional response (https://news.ycombinator.com/item?id=48134229).

Still, do you folks never do something to see how you feel about it, then choose to go one way or another? I'm not sure why it's so hard to see that it was an overreaction at the time, because it was an experiment; then at one point it stopped being an experiment, and now they've chosen to actually run with it.

Is this not a common occurrence for other people? Personally I change my mind all the time, especially based on new evidence, which experiments like this usually surface. I'm not sure I understand the whole "You said X some days ago" outrage that seems to drive people's reaction here.


Yes, sure, it's OK to change your mind. But don't you think that, in retrospect, the people Jarred accused of "overreacting" actually didn't?

No, what we knew then is still what was known then. Today is different, and seemingly they've committed to the rewrite, so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.

> so now it makes sense that people have strong feelings about it, as it's no longer just an experiment.

It also makes sense to have strong feelings when you're able to pattern match well enough to predict something will happen despite others trying to convince you that your predictions are incorrect.

It's not overreacting when you correctly predict the future just because others couldn't. In the same vein, the idea that "everyone's out to get you" is not called paranoia when there are people actually out to get you. That's better called being observant.

Some of those who predicted correctly might also have overreacted, but I believe that the majority understood that to be a blanket statement about prediction as a whole vs any specific individual reaction.


“Nobody could have seen this coming…”?

Well apparently a lot of people did. Maybe Jarred didn’t, maybe you didn’t, but most people correctly predicted what was coming.


See what coming?! I really don't understand what's going on here. Correctly predicted what, that Bun was being rewritten into Rust? I'm not sure anyone doubted that, all the work they did was public???

What on earth is going on here?


> I'm not sure anyone doubted that, all the work they did was public???

https://news.ycombinator.com/item?id=48019226

> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.


> What on earth is going on here?

With the nearly complete PR with the port to Rust, a number of people predicted that it was going to happen. They were assured it was unlikely to happen, and then they were accused of overreacting over effectively nothing. When those same people, who were already upset about the rewrite, learned that their predictions, the same ones that were rudely dismissed, were in fact correct, they became upset again; this time about being lied to.

Correct or not, it's reasonable to conclude they were lied to. Especially given they correctly predicted the future.


> Correct or not, it's reasonable to conclude they were lied to.

No, it's not. If we were 9 days away from a human-written version of this experiment, then yeah, it would be reasonable to conclude they were lied to, because a human-written version would progress so much slower and steadier that it's very unlikely you hadn't made up most of your mind a week before merge time.

But it's not human-written. It's months, perhaps years of work compressed into a week, where the machine can go from 'nothing is working' to 'everything is working' in a few days. There is nothing reasonable about concluding you must have been lied to when such a delta in such a short time is possible. And if people fail to see that, then perhaps the initial assertions about an emotional meltdown were not so far off after all.


I might surprise you, but tech projects have a social part to them. Decisions like that are discussed with the community. It is completely fine to not give a single shit about the community, but then don't act surprised when the community doesn't give a shit about you.

Decisions like this are discussed however the maintainers of the project wish to discuss them. And a majority of the time, these decisions are made and discussed solely by the maintainers, so I really have no idea what you're talking about.

It's really simple.

9 days ago this is how the migration was described:

> I work on Bun and this is my branch

> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

> I’m curious to see what a working version of this looks, what it feels like, how it performs and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.

9 days after that comment, the rewrite has been merged to master.

9 days after "this is my branch" "the code doesn't work" "I'm just curious" "high chance it's thrown out"... it's merged to master.

-

Some people saw the original as an attempt to downplay the importance of the branch in response to negative feedback, rather than accurately describing what the branch represented.

Those people essentially predicted that Bun's actions would shortly reflect much more conviction than was being let on.

Experiments graduate to production all the time, but given the timelines involved, their predictions were correct.


> Those people essentially predicted that Bun's actions would shortly reflect much more conviction than was being let on.

Ironically these people are displaying great confidence in AI’s abilities.

If that’s the case, what are they objecting to exactly?


> Ironically these people are displaying great confidence in AI’s abilities.

Maybe they were displaying high confidence in a marketing machine's ability to commit to dangerous stunts.


Stop thinking about '9 days' like it means the same thing in an era where machines can generate thousands of lines of code in a few hours.

There is no way a human rewrite like this would be at roughly the same stage after a 9-day delta. If it were, some of these accusations would be reasonable to make. But that is not the case here.


That's fine if some Claude Code agent made the PR and committed it. No human involved, no human drama ensued.

People here are pointing out the problem because the Anthropic dude claimed it was an experiment, tests were still failing, it might go nowhere... blah, blah.


Yes, because it was an experiment and tests were indeed failing at that point in time, but guess what? When an experiment succeeds, you probably don't throw away the results.

You know, we used to look down on engineers who didn't realize there's more to software than the raw lines of code.

You're free to look down on whoever you want. I'm free to tell you I couldn't care less, and that both replies so far just confirm how much of an emotional meltdown the reactions here really are. Your comment has managed to have nothing to do with the point I was making.

You're getting the responses you earned by intentionally being as flippant as possible.

If you had presented your point more thoughtfully, maybe I'd have spoon-fed you the point of my response, which 100% relates to what you said: your model of time compression describes the speed of creating code.

But Bun is more than lines of code and serves as core infrastructure for lots of other projects. It's a terrible look in terms of governance to approach this migration as they have, especially the initial denial.

That shouldn't be contentious.


There's no reason to think there was an 'initial denial'. That's the point. Everyone here is saying there was denial because all of this happened in 9 days, and again, that's a silly assertion to make when humans did not create or review the code. Someone can have a swift turn in opinion when an incredible amount of change happens in a short time. The LoC comment I made was simply to serve as an illustration of how fast things can change with LLM-generated code.

I'm being flippant because this should be incredibly easy to understand.


Maybe it might be easier to understand if I was a really terrible engineer.

AI gives me a 750k LoC PR that's mostly broken and unusable on Monday.

The AI then fixing it by adding another 250k LoC is not going to convince me, a competent maintainer of a major JS runtime with years of contributions, plenty of downstream dependents, and an understanding of the AI zeitgeist, to merge it all in by the next Wednesday.


Just because the machines can generate code that quickly doesn't mean human thought has started moving faster. Everyone's had a problem they were working on where the solution doesn't come while sitting at the desk staring at the code, but three days later in the shower, when eureka! hits. Machines writing code hasn't changed the underlying speed of human thought. That's why people see nine days as too fast, even in this sped-up AI era.

Human thinking speed doesn't matter here because it's not human-reviewed. The code was generated. It exists, and it (now) works to the extent they're satisfied with going through with a canary release. Going on about '9 days' is working with a mental model that simply does not apply here. That is my point.

If you think there should be human review or that there should have been a lot more human collaboration, that's one thing but accusing Jarred of lying about his intentions is another thing entirely, and one where '9 days' is not remotely the proof people think it is in this situation.


I'm not sure where I accused Jarred of lying. All I'm saying is that 9 days is not very long.

The chain we're on and the comments I originally responded to have such concerns. And I mean, if it's not going to be reviewed by humans, then really, what makes 9 days too soon? Should the code just sit there collecting dust until everyone agrees an arbitrary amount of time has passed?

> Stop thinking about '9 days' like it means the same thing in an era where machines can generate thousands of lines of code in a few hours.

You need to lay off the kool-aid.


Making a factual statement is drinking Kool-Aid? Okay.

> What on earth is going on here?

Irrational armchair quarterbacking driven by emotional reactions to change and perceived threats. It’s not worth worrying about this specific instance, but the overall trends could get messy. This is just a taste of that.


Maybe the people who "were overreacting" just happened to have more foresight than you and me? Perhaps they saw where this was heading, and that led to their "overreaction"?

In what way? Foresight about what? It was an experiment before; people's reaction at the time doesn't make it less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.

> It was an experiment before; people's reaction at the time doesn't make it less of an experiment back then. I feel like I'm misunderstanding this entire conversation right now.

Yes - I think I didn't explain my feelings well. But now I've finally understood them! So:

It was an experiment back then. Now, nine days and a million lines later, it suddenly isn't an experiment anymore? I understand there's a comprehensive test suite (yay!) but still... a million-line diff in nine days still sounds like an experiment to me.


The difference is an assumption of good faith, for the most part, and that is to some extent modulated by how reasonable people believe a large-scale LLM and/or Rust rewrite to be.

Why are you defending them so much, lol. It's no longer an underdog open source project fighting for survival, it's a freaking Anthropic subsidiary that has been bought for hundreds of millions of dollars.

The top comment at that link points out how many of the sibling comments are delirious and emotional, kneejerk responding to the news rather than giving any sort of sober analysis.

That people were overreacting with emotional meltdowns (common in AI-related threads) is perfectly compatible with the branch making enough progress to get merged.


Anyone who disagrees with me is having an emotional meltdown and obviously they're delirious AI-haters.

I'm not in a cult, you are in a cult and delusional!

This seems dishonest.

I'm reading through the top comments next to his and don't see that. You can always find delirious and emotional takes, but those didn't dominate the discussion.

https://news.ycombinator.com/item?id=48017005

> [...] Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.

https://news.ycombinator.com/item?id=48017358

Compares this to Go runtime's C to Go migration

https://news.ycombinator.com/item?id=48017309

Link to Github diff view

https://news.ycombinator.com/item?id=48017505

> I wonder if a successful, albeit slower, approach would be to walk the git commit history in lockstep, applying the behavioral intent behind each commit. If they did this, I would be interested in knowing if they were able to skip certain bug fix commits because the Rust implementation sidestepped the problem.


Who cares? Go see a therapist

It's a high profile open source project. While Bun/Jarred don't owe anything to anyone, nobody should be surprised when decisions like these result in strong backlash.

Imagine if Guido or Linus said a couple of days ago that they're just experimenting and then submitted and merged complete machine-assisted rewrite of CPython or Linux in Rust.


This actually happened to me a couple months ago. Started a Rust rewrite of a project as an experiment, then a few weeks later it was presented to the team and promoted to mainline.

Although in that case the language change was almost incidental — the rewrite was very much not a straight 1:1 port, but more of a substantive architectural overhaul and longstanding tech debt cleanup; Rust was just one of many tools and design decisions that helped get the best possible end result. There were also various reasons it made sense to attempt a rewrite within that particular window of time.

The upshot is we've ended up with a substantially stronger QA posture, a much higher-quality and more maintainable codebase, and an extremely positive audit report by a group that was brought in to review the project. There were some early kinks to work out, but the longer we've lived in this version of code the more it's proven itself to be a stronger foundation than its predecessor.

Of course, Bun is its own thing and all circumstances are unique. I have no idea how that rewrite was approached, whether it was the right decision, or how it will ultimately prove itself. Just saying the shift from "experiment" to "official new direction" is normal and credible, and that I'd give it some time to see how it handles contact with reality before passing judgement. If it's truly a disaster, nothing's stopping them from reversing course and backporting any new changes to the old Zig codebase.


The author discussed this here four days ago

https://news.ycombinator.com/item?id=48077663


I was downvoted pretty hard for calling this comment out. I would say I'm surprised, but honestly? Completely predictable.

Yea, what the heck.

Looking forward to the blog post. Do you plan to run both the Zig and Rust binaries side-by-side across a wide range of real applications (potentially shadowing in production) to weed out bugs?

That's way too smart, safe and sensible.

They have a PR (~~closed by a GitHub bot as AI slop, ironically~~ this was wrong info; it was apparently closed by Jarred himself because it missed conversion of some 20 Zig files to Rust) to remove the Zig code.

I guess the answer is "no".


I bet the blog post will make no mention of pressure from anthropic to do this and instead will celebrate the fact that “it passes all tests”, of course omitting how many tests were modified to forcibly pass

Do you have any proof Anthropic pushed for this? Because the author has been clear this was an experiment they wanted to test out on their own, only when it seemed to be in a working state did they consider, okay maybe this might work for us.

Does it take a PhD in psychoanalysis to see that the company that has been marketing the fuck out of lame publicity stunts would take advantage of another publicity stunt? Good lord, no wonder the public hates tech workers.

I refuse to blindly hate something because someone tells me to with no evidence, if you want to hate me for that, so be it, that sounds like a personal problem.

Was there pressure to do this, or freedom to do this? If I had an unlimited token budget I'd probably try all sorts of crazy things. Also you (one) can read the tests and see that they weren't modified to forcibly pass.

I'm curious how much this would cost a paying customer. Can you please give us an estimate?

Great question and I'd love the answer.

I bet the answer is industry changing even if the token cost is high.

This work was impossibly expensive in terms of people-hours and time before: architectural planning, engineering alignment and politics, phased engineering that gets interrupted by changing priorities.

That it's possible to do the R&D, the port, and get 99.x% of tests passing in less than 2 weeks is so much more efficient for the humans.


Any plans to issue a CVE for this HTTP request smuggling attack vector fixed in the latest bun release?

https://github.com/oven-sh/bun/issues/29732


https://github.com/oven-sh/bun/security

Surprisingly, they appear to have not disclosed any vulnerabilities whatsoever. It's likely there have been numerous vulnerabilities in the past, but they are all being ignored.

https://x.com/DavidSherret/status/2031432509301428644


This is really poor form given that Anthropic is going around getting all kinds of public goodwill for finding CVEs in other people’s products.

Yeah! Why would the company that stands to make themselves look better in front of an IPO do such a thing?! Next thing you're going to tell me is that this whole rewrite was another marketing ploy to help potentially turn themselves into multi-millionaires!

Maybe you should ask on the issue directly?

Did you (or will you) implement some kind of e2e (fuzz?) testing comparing the two binaries? Do you have particular plans regarding the release of this (for example, to not break users' workflows or things like that)?

Will this likely fix stability issues in the Bun Workers API? https://bun.com/docs/runtime/workers

Is writing the blog post taking longer than the rewrite?

almost

> The codebase is otherwise largely the same. The same architecture, the same data structures.

How can you possibly verify this, if a 1M line patch was written over 7 days? It's at best a hunch (vibes?), and at worst a lie.


Because it passes the existing test suite? And he knows what's in the test suite?

The test suite explicitly verifies the architecture and the data structures used? Depends on the suite, I suppose.

I can only hope this will lead to few or no memory issues when using Bun as a web server.

I'd be surprised if they could eliminate memory issues completely, especially considering the amount of `unsafe` the codebase seems to contain.

    git rev-parse HEAD && ag "unsafe" src | wc -l
    19d8ade2c6c1f0eeae50bd9d7f2a4bf4a2551557
    14865

On the other hand - now it should be possible to tackle some of those one by one?

Oh yes, I don't doubt they'd eventually be able to seriously reduce that number, probably to a handful of places. I don't doubt the strategy employed here either: rewriting while keeping it similar, then slowly changing it. I do still doubt they'd be able to completely eliminate memory issues in the end regardless.

Doesn't that count anything that has 'unsafe' in it, not just the keyword?

It does; see the sibling comment made about an hour before yours. Fixing that issue makes only a marginal difference.

That's picking up all the "bunsafety" references in there :P

When I read what you wrote, I was like "of course, duh, I'm stupid", but running `ag "unsafe" src | grep -i "bunsafety"`, that doesn't actually seem to be the case; I see zero bunsafety mentions from it.

However, `ag unsafe` does over-count anyways, just in a different way, matching stuff like SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION and _unsafe_ptr_do_not_use and others.

A better command against the same commit, `ag -w unsafe src | wc -l`, reports 13,914 "unsafe" usages now; slightly better, but still pretty awful.
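
For anyone curious why the two counts differ: a word-boundary match (`ag -w`, or `\b` in a regex) excludes identifiers where `unsafe` is glued to other word characters, while a plain substring search counts them all (and ag's smart-case default means a lowercase pattern also matches `UNSAFE`). A small Python sketch of the difference, on a made-up sample:

```python
import re

# made-up sample mixing the actual keyword with identifiers that merely contain it
sample = """
SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION
ptr = _unsafe_ptr_do_not_use
unsafe { do_scary_thing() }
"""

# roughly what `ag unsafe` counts: case-insensitive substring hits
substring_hits = len(re.findall(r"unsafe", sample, re.IGNORECASE))

# roughly what `ag -w unsafe` counts: whole-word hits only
# (underscores are word characters, so the two identifiers above are excluded)
keyword_hits = len(re.findall(r"\bunsafe\b", sample, re.IGNORECASE))

assert substring_hits == 3
assert keyword_hits == 1
```

Even the word-boundary count over-counts real `unsafe` blocks, of course, since it also hits comments and strings; it's just a much tighter upper bound.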


My understanding is that that's because they were trying to do a structurally homologous port from Zig to Rust, precisely to keep their mental model and not change "too much" at once, and then they plan to refactor to make it safe Rust later.

It's clear that, as of the time of this merge, no human has read any appreciable fraction of current mainline Bun, so it's not particularly clear how much of a "mental model" exists anymore.

Does that mean that from now your coding agents working on the Bun codebase are themselves running on that rust-Bun runtime?

So a question you should answer: Couldn't you just train the super SOTA model on fixing those issues instead of porting it?

[flagged]


Coming on a bit strong no? Isn't it possible one could do an experiment almost two weeks ago, then by today the experiment concluded and now you've made a choice?

Did you think "experiment" meant 100% this will be thrown away? Wouldn't make much sense to experiment with something you know you'll throw away, unless you have some specific reason for it.


You don’t speak for most of us.

cargo check reported over 16,000 compiler errors when I wrote that message. It could not print a version number or run JavaScript. I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive. There’ll be a blog post with more details.

If this experiment ends up resulting in a real migration path, I think that would be completely awesome. Maybe it means we have a chance to revive older projects such as ngspice [0], but with modern affordances and better safety properties.

From your post, though, it sounds like Bun may have been a pretty direct rewrite, without too many hard choices along the way. Is that fair?

[0] https://ngspice.sourceforge.io/


I hear your suggestion without feeling the need to make the far too common Linux/developer response of "but if you just do all this other stuff and run it this special way and install 15 dependencies and compile XYZ lib from source, then clearly it works fine and you're mistaken".

That's exactly the type of thing that is needed: optimizing projects for modern compatibility, portability, and safety when other modernization efforts or forks don't exist.

That said, I suspect this rewrite went so quickly and so optimally because it had the benefit of (effectively) 100% test coverage already in place in a really well-defined system. Most open source projects spawn from the efforts of a single developer who frequently never wastes time writing tests for a little side project. Later, as it grows, they rarely stop and go back to implement testing. So if you're truly working with an old, dead project, there is a really good chance there are zero tests to be found. It is far more difficult to reach the same completeness then, unless the goal is simply to port all of those same problems to a new language and hope type safety fixes them.

(Not specific to ngspice, just mean generally.)


You can instruct an LLM to improve the test coverage.

You are absolutely correct!

I've found Rust to be pretty enjoyable to work with in terms of Agent assisted development. Easier still if you have something you're trying to port or recreate in Rust for various reasons. There are definitely some rougher edges around a few things as you get more general purpose in terms of app targets. Some of the DB engines can use some work or may be missing interfaces you use in other supported languages/platforms... There's a somewhat limited set of UI options, and no clear winner.

Lifetimes can get pretty hard in very complex code bases... even if other aspects of borrow checking may be more common, this is where I've had and seen the biggest gaps in understanding in practice. That said, you can usually do inefficient things to work around these issues with the opportunity to come back later. Often inefficient Rust with lots of clone operations is still faster, smaller, lighter than the same services in Java or C# as an example.


[flagged]


As an amateur in the space: I download on Mac, run `ngspice`, "Error: Can't open display: :0". I look in the code - hardcoded X11-era assumptions. Not exactly modern affordances...

Then I try to understand and extract the actual formulas, and there isn't a clean formula layer anywhere. All is procedural, e.g. in `b4v6temp.c` formulas are tangled with branching, caching, model-state mutation. Extracting the computation, embedding cleanly and exposing through a sane API feels hair-pulling.

So yeah, maintained, but not as in the 'modern, embeddable, understandable software component' I'd be looking forward to in a rewrite. Maybe not even touch the simulation core; just rewriting the embedding/API layer and the UX would already be a big deal.


This explains a lot. But you merely need to look into the family of spice forks to realise, given the way that they're strangely limited to certain operating systems and embedded inside certain proprietary IDEs, that there's something very wrong with the code architecture.

So, that would be an awesome project!


> As an amateur in the space

Why are you not using this through KiCad? That's what I would expect an amateur to do; especially since they handle the UX that you are complaining about.

And you are complaining about tangled code but that code is almost certainly hyper-optimized since performance actually mattered a LOT to people running spice simulations. ng-spice (and Spice3 and Spice2) were not written for programming ease; they were written to get a real job worth real money done.

In addition, any change you make to that code needs to be run back through numerical regression tests to make sure you didn't break things since this is software that people expect to get correct answers.

However, if the legacy seems to bother you so much, perhaps you should look at Xyce from Sandia?


> Why are you not using this through KiCad? That's what I would expect an amateur to do; especially since they handle the UX that you are complaining about.

They sound like an amateur at circuit design, not software engineering (which is how I'd describe myself too).


KiCad is still the preferred interface.

The original point stands. Ngspice shows its heritage from the days of Fortran far more than a modern code base would or should. Its sole great virtue (from my point of view) is that it integrates with KiCad and only falls over for no reason about 5% of the time.

I would suspect that some of the simulation systems coming out of the Julia community or Xyce would be a better base.


> And you are complaining about tangled code but that code is almost certainly hyper-optimized since performance actually mattered a LOT to people running spice simulations.

I can 100% guarantee you, that these are never mutually exclusive at all.


I see "sourceforge" and immediately I think "this project is way behind the times and is going to pose a lot of issues to new users, if it's still active".

I could have linked the GitHub repo which has been abandoned for 11 years and ranks higher on Google than the sourceforge page, but that would have maybe been disingenuous. (https://github.com/ngspice/ngspice)

I moved to codeberg and google still insists on linking SOLELY the old archived project on github. While of course snyk and such awful scanners mark them as abandoned because they don't know codeberg exists.

I think this is highlighting the problem the poster you're responding to laments!

+1, a project presenting at FOSDEM certainly does not need a "revive".

The spice core that ngspice is built off is terrible code. It has a long history going back to 1970s era fortran. Starting fresh is probably preferable

> The spice core that ngspice is built off is terrible code. It has a long history going back to 1970s era fortran. Starting fresh is probably preferable

That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.

Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

However, circuit simulation is remarkably difficult to get right (stiff systems with multiple time constants are not uncommon) and generally resistant to parallelization (each device can have its own model, which is a unique set of linear differential equations).

If, however, the legacy of ngspice bugs you that much, go look at Xyce and see if that is more to your taste.


> and generally resistant to parallelization (each device can have its own model, which is a unique set of linear differential equations).

Solving sets of differential equations is something that's parallelizable though

See for example how there's physics engines running on GPU. That's mechanics and not electric circuits, however it's differential equations all the same.


Which differential equations are you talking about? Linear ones have standard solutions and are definitely parallelisable (though you can basically just write the solution down by hand). Non-linear ones vary from those that can basically be approximated by a linear solution with corrections, to those needing relaxation methods (which are obviously not parallelisable).

Mechanics is generally linear, and for game physics engines fast is more valuable than correct (fast inverse square root being the obvious poster child). Add viscosity and you're in for a bad time.


To be specific, a linear solver can be (as in I have done) written in a week.

A serious non-linear solver that handles legacy Spice models is another beast entirely. And if you want to integrate modern advances in algebraic-differential systems you take that to a higher level.

These are not partial differential equations such as you find in Navier-Stokes. These are sparse non-linear differential equations that do not parallelize nearly as simply.

Another example of related problems that parallelize poorly even though they are linear are the FDTD formulations for Maxwell's equations. These are relatively simple systems, but the bottleneck is almost always the memory bandwidth because it is so hard to parallelize.


The type of people who need spice are dead serious about accuracy. Sometimes even 1ppm of error is not tolerable. So, an optimization in a game engine is definitely not suitable for engineering simulation.

Dude these are incredibly oversimplified models of real components. How are you getting 1ppm when basic shit like tempco and self heating are missing from pretty much every vendor provided spice model?

As others have mentioned, it's not actually that performant. The matrix solve is about as fast as a single threaded solution can do, but the problem is parallelizable. There are a number of GPU implementations and I have even heard of offloading the matrix solve to an FPGA, though without unified memory a lot of the gains are irrelevant.

Even if you avoid most of the numerical code initially, the interface in the original spice core is a mess of string handling and building a custom shell experience. There are tricks like setting the upper bit of every byte to 1 when inside quotes so that the custom shell history matching skips over things in quotes. Very elegant for the time, but now that means if you want nodes with non ascii names you're either keeping a mapping outside or using utf-7.

Another great example is the expression parsing. There was a long standing bug where the expression parser leaked ~160 bytes for every step of an output expression for every timestep. So for example, if you had "($2 * 4) + 1" as an expression and ran a simulation for 10,000 timesteps you'd leak 8M bytes.


> That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.

Hyper optimized for '70s era fortran not gonna be all that optimized on modern CPUs.

I bet that just the compiler optimizations LLVM could do with clean code are gonna make it faster


and correctness too - I guess there aren't that many hardcore electrical engineers/physicists/mathematicians that can make sure the results it makes are correct and sound, and debug weird issues coming from numerical stability.

The sort of people who can do this are very rare, and it's not likely they will just randomly decide to donate their time to rewrite the codebase.


> Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

But that's exactly the sort of exotic domain knowledge that AI models have that I don't.


That code was optimized for performance for 1980s hardware. It’s very far from optimized for modern CPUs.

That's not a revive though, revive (at least to me) implies it's dead.

UPDATE: This would make for an excellent case study if you don’t mind sharing the details. I am very curious about the number of agents, hours it took, and models used (did you use Mythos?).

This would not have been possible 5 years ago. LLMs are going to push us into the space age. Both Anthropic and OpenAI have committed to spending 10s of billions of dollars on training alone for the year. I am equally excited and terrified at the pace of progress!


Rust is really fun to work with and the compiler is great, just make sure the rewrite takes compile times into account since larger projects often have to be organized in a way that makes compilation reasonably fast.

  how long does it take to compile?

  @jarredsumner: It's basically the same as in zig using our faster zig compiler. If we were using the upstream zig compiler, rust port would compile faster.
https://x.com/jarredsumner/status/2053050239423312035

This is at least partially disingenuous. Zig is working on, and has already shipped for some situations, a faster compiler. Bun runs on an outdated version of Zig that doesn't include it.

In my experience Bun in Zig compiles more slowly than Deno in Rust.

Single compiles for sure. Where Zig is optimizing compilation is in the incremental compiler, which I've seen compile the compiler itself in an instant after a single line change. Of course, that kind of speed is probably not interesting to some people if the AI is writing tons of lines of code before they go to the compilation step.

I found making single line changes in Bun’s zig code led to very long compiles compared to doing the same in Rust code. It was a while ago though and maybe I was doing something wrong.

Probably a very long time ago then. Try again with Zig 0.16. It's amazing how fast recompiles can be.

They can't, because Bun is tied to a fork of Zig 0.14 which is not compatible with regular Zig compiler.

Bun’s patched Zig is on Zig 0.15.1

What coding model are you using for the rewrite? Opus for everything? A prerelease model like Mythos?

Just an aside: is there any way to know how many of those 16,000 compiler errors are independent? I mean, could it be that just by changing, say, 500 lines of code all those errors disappear?

Perhaps 16,000 could just measure cascade breakage, for example one lifetime mismatch can cause errors in every function that tries to use that reference.

Rust reference lifetime bookkeeping is a difficult task for LLMs. The LLM has to maintain, across multiple functions and structs, which references outlive which. Furthermore compiler messages are highly contextual and lifetime patterns are sparse in the training set.


That's a post I am eagerly waiting to read.

Basically we are seeing now an "inverse Hofstadter's Law" where doing something with an LLM takes less time than expected even when you take into account this law.

I am a Rust developer myself but I really love Zig and Bun. I am just overly curious about all this.


> Basically we are seeing now an "inverse Hofstadter's Law" where doing something with an LLM takes less time than expected even when you take into account this law.

Even LLMs themselves can't accurately estimate this (though this may be out of distribution stuff)


LLMs have no conception of time, unless you explicitly feed in timestamps to the context

It doesn't stop LLMs from providing estimates like "this feature set will require 4 months to finish" (and then finishing it in one hour)

Sorry yeah, I meant to say LLMs have no concept of time, so time estimates they give are almost always hallucinations

Scotty from Star Trek does approve!

This does not surprise me in the least. Several Claudes are very good at splitting up and working through them all.

I think given the current mood of things, it would be prudent to not make such strong assertions on anything. Trust is in increasingly short supply these days.

Nothing Jarred said is an assertion other than "There’ll be a blog post with more details."

"I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive."

These are two assertions. There could have been a prior secret rewrite that took much longer than six days and this is a marketing stunt for Anthropic. In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.


Those are not assertions of anything meaningful. We have no idea what his expectations were. Maybe he expected it to be absolute crap, and it was only kind of crap. None of it means that it's actually viable. My fat uncle trying to beat Bolt's time could exceed my expectations by improving from 30s to 20s, doesn't mean it's ever going to be a reality.

> In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.

In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic. This means that people that have an ax to grind against Anthropic (admittedly a reasonable position) will take the most antagonistic position they possibly can because of personal bias.


I disagree. This is the same sort of marketing strategy as Mythos. Wow, it outperformed so much we have to tell you in the future. If he wasn't aligned financially with the outcome I'd agree, but he's not.

So do you picture them locking up the Rust port behind closed doors as well, or what's the game gonna be? Cause it reads like it's kinda all public already.

Absolutely not, I think they prioritize it because it's internal. I do expect to see a stronger marketing push on its ability to do language translations because there is honestly value in that. Question is when they have compute, but it's less crisis marketing than their security stuff so I'd see it at a lower priority. I just don't think it's as honest as the parent post posits.

The Mythos-truther community is absolutely batshit, sorry. You wrote fanfic and now you're writing more fanfic. The company is faking for marketing so therefore they're faking for marketing. The only things in common between the two situations are you and the word Anthropic, the rest of us are just confused and worried. I'm worried, that's why I'm speaking to you plainly.

> I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.

haven't used zig...(only used rust)

but zig doesn't solve those problems?


Zig is a middle ground. It solves some of the common foot-guns in C, without the costs of the affine substructural typing that offers Rust its super powers.

I am of the opinion that it is horses for courses and not a universal better proposition.

Because my needs don’t fit in with Rust’s decisions very well I will use zig for personal projects when needed. I just need linked lists, graphs etc…

While hopefully someone can provide a more comprehensive explanation here are the two huge wins for my use case.

1) In Zig, accessing an array or slice out of bounds is considered detectable illegal behavior.

2) defer[0] allows you to colocate the freeing of resources with code.

That at least ‘feels’ safer to me than a bunch of ‘unsafe’ rust that is required for my very specific use case.

I was working on some eBPF code in C and did really miss zig.

For me it fits the Pareto principle but zig is also just a sometimes food for me, so take that for what it is worth.

[0] https://zig.guide/language-basics/defer/
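For point (1), it may be worth noting that safe Rust makes a comparable guarantee: slice indexing is bounds-checked at runtime, and `.get()` surfaces the check in the type system. A minimal sketch of my own, not from the comment above:

```rust
// Checked access: `.get()` returns Option instead of reading
// out of bounds; the hypothetical `second` helper is just for
// illustration.
fn second(xs: &[i32]) -> Option<&i32> {
    xs.get(1)
}

fn main() {
    let xs = [10, 20, 30];
    assert_eq!(second(&xs), Some(&20));
    assert_eq!(xs.get(9), None); // no wild read, just None

    // Plain indexing (`xs[9]`) would panic deterministically,
    // never read out-of-bounds memory.
    println!("checked access ok");
}
```

In both languages the point is the same: an out-of-bounds access is a detectable, deterministic failure rather than silent memory corruption.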


Fwiw you don't need unsafe for graphs or linked lists in Rust. At least not directly - these things can be abstracted. The petgraph crate is the most popular for graphs. I'm not sure about linked lists because linked lists are the wrong choice 99.9% of the time.

I've written hundreds of thousands of lines of Rust and outside of FFI, I've written I think one line of unsafe Rust.
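To illustrate the "no unsafe needed" claim, here is a toy sketch of the index-based representation that crates like petgraph use internally; this `Graph` is my own illustration, not petgraph's actual API:

```rust
// Minimal adjacency-list graph: nodes live in a Vec and edges
// are plain index pairs into it. Entirely safe Rust, no lifetimes.
struct Graph<T> {
    nodes: Vec<T>,
    edges: Vec<(usize, usize)>, // (from, to) as indices into `nodes`
}

impl<T> Graph<T> {
    fn new() -> Self {
        Graph { nodes: Vec::new(), edges: Vec::new() }
    }

    // Adding a node hands back its index, which acts as a "pointer".
    fn add_node(&mut self, value: T) -> usize {
        self.nodes.push(value);
        self.nodes.len() - 1
    }

    fn add_edge(&mut self, from: usize, to: usize) {
        self.edges.push((from, to));
    }

    // Successors of a node, in insertion order.
    fn neighbors(&self, node: usize) -> Vec<usize> {
        self.edges
            .iter()
            .filter(|&&(f, _)| f == node)
            .map(|&(_, t)| t)
            .collect()
    }
}

fn main() {
    let mut g = Graph::new();
    let a = g.add_node("a");
    let b = g.add_node("b");
    let c = g.add_node("c");
    g.add_edge(a, b);
    g.add_edge(a, c);
    assert_eq!(g.neighbors(a), vec![b, c]);
}
```

A production crate would use a smarter edge layout than a linear scan, but the ownership story is the same: one Vec owns the nodes, and everything else is indices.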


[flagged]


Not really though. That's like saying that no language is "safe" because the compiler could have a bug.

It's true that safe wrappers around unsafe code sometimes have bugs in them, but it's orders of magnitude easier to get the abstraction right once than to use unsafe correctly in many places sprawled across a large codebase.


It's not as simple as that. All software is abstraction and with any software if you go deep enough you'll find unsafe code.

E.g. look at a Python list. Is it safe? In Python sure, but that's abstracting a C implementation which definitely isn't safe.

If you look at Rust's std::Vec you'll find a very similar story - safe interface over an unsafe implementation.

It isn't as binary as you think.


If you don’t see any difference between those two, I’m really not sure what to say.

Show code

I think he meant "show me a true linked list / node graph in rust that isn't unsafe". The reason being it's not possible using C-style pointer following (or without just putting everything in smart pointers). What you've shown is exactly the tradeoff they were referring to. In rust, the answer is: make sure the lifetime of all memory is explicitly managed, then use integers for the 'links' between nodes.

His point was that for his programming, he wants to be able to make real pointers and real linked lists, memory-unsafe as they are, which Rust makes difficult or opaque. For example with a linked list, you could simulate one (to avoid unsafe) by either boxing everything (so all refs are actually smart pointers), or by using a container with a scoped memory lifetime and having integers in an array serve as the "next" pointer. In addition to extra complexity, the "integers as edges" approach doesn't actually remove the complexity; it just means you can't get a bad memory error (you can still have 'pointers' that point to the wrong index if you're rolling your own).

Same with your graph code. Using a COO representation for a graph does in theory make it "memory safe" (albeit more clumsy to use if you are doing pointer-following logic), and it also introduces other subtle bugs if your logic is wrong (e.g. you have edge 100 but actually those nodes were removed, so now you're pointing at the wrong node).

I think the point (which I agree with for things like linked list, graph, compiler) is that depending on your usecase, the "safety" guarantees of rust are just making it harder to write the simplest most understandable code. Now instead of: `Node* next` I have lifetimes, integer references, two collections (nodes and edges) to keep in sync, smart pointers, etc. Previously my complexity was to make sure `next != null`, now its a ton of boilerplate and abstractions, performance hits, or more subtle bugs (like 'next' indices getting out of sync with the array of 'nodes').

If there was a way to explicitly track the lifetime of an arbitrary graph/tree of pointers at compile time, we wouldn't need garbage collection -- it's not solvable at compile time, and the complexity has to live somewhere.
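The "integers as edges" approach described above can be sketched in safe Rust; note that a stale index is now a logic bug rather than a memory error, which is exactly the trade-off being discussed (a hypothetical sketch, not from any library):

```rust
// Singly linked list using Vec indices instead of pointers.
// `next` is an Option<usize> pointing into the same Vec, so
// there is no unsafe and no per-node Box.
struct Node<T> {
    value: T,
    next: Option<usize>,
}

struct LinkedList<T> {
    nodes: Vec<Node<T>>,
    head: Option<usize>,
}

impl<T> LinkedList<T> {
    fn new() -> Self {
        LinkedList { nodes: Vec::new(), head: None }
    }

    // Push to the front: the new node's `next` is the old head.
    fn push_front(&mut self, value: T) {
        self.nodes.push(Node { value, next: self.head });
        self.head = Some(self.nodes.len() - 1);
    }

    // Walk the list by following indices instead of pointers.
    fn collect_values(&self) -> Vec<&T> {
        let mut out = Vec::new();
        let mut cur = self.head;
        while let Some(i) = cur {
            out.push(&self.nodes[i].value);
            cur = self.nodes[i].next;
        }
        out
    }
}

fn main() {
    let mut list = LinkedList::new();
    list.push_front(1);
    list.push_front(2);
    list.push_front(3);
    assert_eq!(list.collect_values(), vec![&3, &2, &1]);
}
```

A worst-case bad index here is an out-of-bounds panic or a wrong-node read, not a use-after-free; whether that complexity budget beats `Node* next` is the judgment call the thread is arguing about.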


> it also introduces other subtle bugs if your logic is wrong (e.g. you have edge 100 but actually those nodes were removed, so now you're pointing at the wrong node

This is not actually a different kind of bug; it's just use-after-free, which you can of course get when using pointers instead of indices.

Actually it's slightly safer than pointer use-after-free because it is type safe and there's no UB.

Also some of the Rust arenas give you keys (equivalent to pointers) which can check for this. There's a good list here (see "ABA mitigation"):

https://donsz.nl/blog/arenas/


Err https://github.com/petgraph/petgraph

What are you asking for exactly?


Forgive me if I've misunderstood this thread, but there are unsafe declarations in that crate. Is there really any difference between using unsafe in your own code, versus wrapping it inside some crate?

I guess you are making the point that the user does not have to concern themselves with the unsafe declarations?


> Is there really any difference between using unsafe in your own code, versus wrapping it inside some crate?

Yes, in the same way that there's a difference between using `std::Vec` (which uses `unsafe`), and writing an unsafe Vec class yourself.

Or even the difference between using Python (which wraps an unsafe CPython implementation), and doing everything in unsafe Python code.

The difference is that widely used code like CPython and `std::Vec` are much much better tested and audited than anything I would write myself, because so many people use them. This is a continuum so something like petgraph is going to be not as well tested as std::Vec but still way better tested than anything I've written.


I would say yes, there’s a difference, in general. I would much rather leave the unsafe code to crates used and tested by many other applications, than have them in the application code itself.

I don't think it's unreasonable, even though I am getting marked down for daring to ask, to expect people who are making assertions (even ones well understood *within their own community*, that is, not necessarily universally known) to show examples of what they are talking about.

You're correcting someone, so it's clear that your understanding isn't universal, and example code is the absolute minimum.


It doesn't seem clear what code you're asking for.

zig is unmanaged memory. But rust also allows memory leaks, and they're not uncommon in large, complex programs. So this rewrite will not necessarily control for that.

What language doesn't allow memory leaks?

There are two kinds of memory leaks: forgotten manual freeing (all references are gone, but the allocation is not) and forgetting to get rid of references that keep an allocation alive. Both are a kind of logical error, but the first is mostly possible in languages with manual memory management. The second one is a universal logical error (only the programmer knows which live references are really needed).

In the Haskell community I’ve seen the second kind called “space leaks.” I don’t see it used much outside that community but I like the term and use it when talking about other languages as well.

Rust allows reference-counting cycles, right?

I suppose all languages allow them, depending on how you define a memory leak. Garbage collected languages generally prevent them, since you never have to explicitly free memory, but if there are reference cycles, that memory can never be freed automatically. Rust has the same problem, but since Rust uses lifetimes to understand when to drop things, many people expect this to mean there can be no memory leaks. Leaks, however, are not considered a correctness or safety issue (oom is a panic and panic is safe!). They are not only explicitly possible (through Box::leak) but also possible by mistake (again, usually through reference cycles).
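A minimal illustration of the reference-cycle leak in entirely safe Rust (the `Node` type is hypothetical, just for demonstration):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node that can hold a strong reference to another node.
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(None) });

    // a -> b and b -> a: a reference cycle.
    *a.other.borrow_mut() = Some(Rc::clone(&b));
    *b.other.borrow_mut() = Some(Rc::clone(&a));

    // Each node now has two strong references: the local binding
    // plus the one held by the other node. When `a` and `b` go out
    // of scope, the counts drop to 1, never 0 -- the allocations
    // are never freed, with no unsafe code anywhere.
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
}
```

Breaking the cycle requires making one direction a `std::rc::Weak` reference, which is exactly the kind of manual bookkeeping the borrow checker cannot do for you.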

> but if there are reference cycles, that memory can never be freed automatically.

Many garbage collection algorithms can deal with cycles.


Zig doesn't even have RAII...

which is a good thing. C++'s RAII is magic-sauce that does a lot for you when you can simply use `defer` in zig. A constructor is just a function call. A destructor is just a function call.

And a function call is just a fancy JMP, still it's generally acknowledged to be better to have all the bookkeeping automated.

Does defer in zig track the object's lifetime directly, or is it like the various other 'context' features in other languages where it only really works for lifetimes of function-local variables and leaves you on your own when things get more complicated? (Which, IMO, is precisely when RAII becomes most useful. It does seem like most of these languages only consider the 'forgetting to clean up on an early return from a function' case.)

Constructors and destructors are also just function calls in C++

And you can't forget to type defer


It's not a good thing. The reasoning is extremely simple and I don't understand how anyone can oppose it: there are some operations that you don't want to forget BY DEFAULT.

If I open a file, eventually I want to close it. If I allocate some memory, eventually I want to deallocate it.

Any programming language design that intentionally puts the onus BY DEFAULT on the user to *not forget to manually do something* is honestly asinine.

Defer has a place (I do use defer in C++, in fact you can implement it with RAII, proving that RAII is strictly more powerful/more flexible), but the default should be the safest and most straightforward option.

Also "magic-sauce that does a lot for you" is just false. It's literally a function call injected at the end of a scope.
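The "you can implement defer with RAII" point can be sketched in Rust, whose destructors are RAII-style `Drop` impls; this `Defer` guard is illustrative, not a standard library type:

```rust
// A minimal `defer` built on RAII: the closure runs when the
// guard value is dropped at the end of its scope.
struct Defer<F: FnMut()> {
    f: F,
}

impl<F: FnMut()> Drop for Defer<F> {
    fn drop(&mut self) {
        (self.f)();
    }
}

fn defer<F: FnMut()>(f: F) -> Defer<F> {
    Defer { f }
}

fn main() {
    println!("start");
    {
        // Must bind to a named variable: `let _ = defer(...)`
        // would drop (and run the closure) immediately.
        let _guard = defer(|| println!("cleanup"));
        println!("work");
    } // <- _guard dropped here, so "cleanup" prints after "work"
    println!("end");
}
```

The reverse direction does not hold: a plain `defer` statement cannot express ownership transfer or container-managed destruction, which is the sense in which RAII is strictly more flexible.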


How is defer not magic sauce?

Whether you consider it magic is up to you, but, unlike a destructor in RAII, there is nothing automatic going on. If you don't explicitly invoke a destructor, you won't get a destructor.

The fact that you can explicitly invoke the destructor to happen later is simply syntactic sugar, just like if/else/while, or any other control construct more powerful than a conditional jump instruction.


And more importantly, you can choose which destructor to call. This is perhaps what's most underrated about defer: it can select among many possible destructors, at multiple different levels (group free with arenas, individual free, etc).

Or even whether you need a destructor, or something simpler, like nulling out a pointer or two to break a reference loop.

defer is a perfectly general structured flow concept; it only cares about when you do something, and is completely orthogonal to what you need to accomplish.


I'm not sure the folks responding can tell the difference.

> If you don't explicitly invoke a destructor, you won't get a destructor.

When you explicitly invoke a "destructor", you do it on many code paths (and miss one or two)

>The fact that you can explicitly invoke the destructor to happen later

You don't specify where the `defer`-red "destructor" will be invoked.


> When you explicitly invoke a "destructor", you do it on many code paths (and miss one or two)

Unless, of course, you do it inside a defer block.

> You don't specify where the `defer`-red "destructor" will be invoked.

Yes, actually, you do. It is patently obvious, by code inspection, where the destructor, or anything else specified in a deferred block, will be invoked. defer is a perfectly cromulent part of structured control flow, allowing for easy reasoning about when things occur without having to calculate an insane number of permutations of conditional branch instructions.


Nope! Zig is like C in this regard. There’s no borrow checker. Managing memory is your responsibility.

It gives you a few more tools than C - like a debug allocator, bounds checked array slices and so on. But it’s not a memory safe language like rust.


It's not... but I'm pretty sure it could be. You could probably even take this (WIP) idea and bolt on a formal verifier pretty easily.

https://github.com/ityonemo/clr


It'd take more than that to match rust's borrow checker. Rust's borrow checker tracks lifetimes, and sometimes needs annotations in code to help it understand what you're actually trying to do. I suppose you could work around that by adding lifetime annotations in zig comments. Then you'd have a language that's a lot like rust, but without an ecosystem of borrowck-safe libraries. And with worse ergonomics (rust knows when it can Drop). And rust can put noalias everywhere in emitted code. And you'd probably have worse error messages than the rust compiler emits.

It's an interesting idea. But if you want static memory safety in a low level systems language, it's probably much easier to just use rust.


> I suppose you could work around that by adding lifetime annotations in zig comments.

you can make a no-op function that gets compiled out but survives AIR

> rust knows when it can Drop.

and it's possible to cause problems if you aren't aware where rust picks to drop.

> And rust can put noalias everywhere in emitted code.

zig has noalias and it should be possible to do alias tracking as a refinement.

> But if you want static memory safety in a low level systems language, its probably much easier to just use rust.

don't use that attitude to suck oxygen out of the air. rust comes with its own baggage, so "just using rust because it's the only choice" keeps you in a local minimum.


> and it's possible to cause problems if you aren't aware where rust picks to drop.

Can you give some examples? I've never run into problems due to this.

> don't use that attitude to suck oxygen out of the air. rust comes with its own baggage

Yeah, that's a totally fair argument. One nice aspect of the approach you're proposing is it'd give you the opportunity to explore more of the borrow checker design space. I'm convinced there's a giant forest of different ways we could do compile time memory safety. Rust has gone down one particular road in that forest. But there's probably loads of other options that nobody has tried yet. Some of them will probably be better than rust - but nobody has thought them through yet.

I wish you luck in your project! If you land somewhere interesting, I hope you write it up.


> Can you give some examples? I've never ran into problems due to this.

If it's doing a drop in the hot loop that may be an unexpected performance regression that could be carefully lifted.

Thank you. Unfortunately in the last few weeks I've been too busy with my startup to put as much work into it. We'll see =D


> If it's doing a drop in the hot loop that may be an unexpected performance regression that could be carefully lifted.

Yeah, I've heard of people who make massive collections of Box'ed entries and then get surprised that it takes a long time to Drop the whole thing. But this would be the same in C or Zig too. Malloc and free are really complex functions. Reducing heap allocations is an essential tool for optimisation.

The solution to this "unexpected performance regression" in rust is the same as it is in C, C++ and Zig: Stop heap allocating so much. Use primitive types, SSO types (SmartString and friends in rust) or memory arenas. Drop isn't the problem.


In zig the solution is to use an arena allocator. That’s about as easy as it gets. Maybe Rust also allows doing that, I don’t know.

You can use arenas in Rust, it's just not as trivial to swap allocators generally. But there are plenty of crates for it.

no, in zig it's never unexpected, because if you're freeing memory the free site is known: it's a function call.

Right, because in Zig the default behaviour is to leak memory. Rust adds an invisible free() call. Leaking is something you have to do explicitly.

I understand zig's philosophy here. But I prefer rust's default behaviour.


yeah, IMO generally explicit is better. It's hard to take something implicit and increase the visibility (I'm aware there are tools to show you lifetimes in rust). But another option is to statically analyze the code (or the IR) and have something else check that you aren't leaking.

Those tools exist in C tooling as well; that many ignore them is another matter.

MSVC has had a debug allocator since at least Visual Studio 5.


It is quite obvious that Zig is pre-1.0, with thousands of stranded unsolved issues (per their GitHub repo). A review of the Zig hype gives the strong impression it was created by the language being relentlessly and suspiciously pushed on HN, beyond logic or its language rankings (per TIOBE or GitHub stats), so that many were under the illusion that the language was something more or other than what it really is.

Zig is still under development and in beta. Instability, crashes, and leaks should not be surprising, and are even to be expected. To stick with a beta language, companies and developers usually need to be philosophically and/or financially aligned with it. An example is JangaFX and Odin, where they not only have committed to using the language (despite it being beta) in their products, but have directly hired GingerBill.

Team Bun appears to have "alignment and relationship issues" with Zig, to the point they have decided to extensively explore their options. Now Bun is rewritten in Rust. They are seeing if Rust solves their requirements. As with any relationship, if one ignores or takes a partner for granted, don't be surprised if they want a divorce or jump to someone else.


You might want to check their Codeberg then, because they've moved all their development over there...

Zig very much could have moved all of their GitHub issues over to Codeberg, to be resolved, but chose not to do so, thus leaving thousands of issues unsolved and stranded.

This maneuver was arguably obfuscated by the anti-LLM stance and finger pointing at Microsoft, but nevertheless, many still have noticed. Zig, for a long time, had been falling behind and doing poorly on their open to close ratio for resolving issues. It should be embarrassing to leave so many issues open.

Even if not accepting new GitHub issues, they have demonstrated an inability to resolve existing issues, except at an extremely slow pace. Considering there are just about no new issues on their GitHub repo, it is understandable if there are those that find the pace to close and amount of issues unacceptable or questionable, in addition to the clearly bad open to close ratio.


Did you read their migration post? They are thinking about it as COW, so they're using both issue trackers right now, but as soon as they update an issue it jumps straight to the Codeberg issue tracker. It's an unconventional way of doing it, but it's no conspiracy.

Peter Naur: Programming as Theory Building

Bun: Hold my beer


I work on Bun and this is my branch

This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

I’m curious to see what a working version of this looks like, how it feels, how it performs, and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.


It is a pity that you can't make an experimental commit on an experimental branch without igniting a fire of delirium through some people who -- if they were able to put their emotional response aside for a minute and could weigh this up on the basis of merit -- would probably agree with the motivations for researching this approach.

> if/how hard it’d be to get it to pass Bun’s test suite and be maintainable

Every month brings new opportunities to completely abstract the process of porting code with agents, all using linguistics. What an exciting time.

For those looking for a similarly interesting (and interestingly similar) example, see Cloudflare's port of Next.js[0], "vinext", from a couple of months ago. It had some teething problems at the start but I'm using it in a few production projects now with minimal issues.

[0] - https://github.com/cloudflare/vinext


This is what it means to work on a popular project, unfortunately.

You also don't have a duty to read or respond to the social media flames. Just do the work you want to do.

If people get worked up about experimentation, that's their problem, not yours.


It’s not your problem until it becomes your problem.

It only becomes your problem if you choose it to be your problem.

That is not how these things usually work.

Exactly. Usually is most people. You can choose to deviate.

You can delete your social media accounts and just keep working on what you want to, for one. Nobody is forcing you to use social media.


I'm mostly on board with what you're saying, but under such an interpretation of "forcing", people are never truly forced into anything. That puts it in fundamental conflict with the very existence of the word, i.e. renders it meaningless.

That said, I did also walk away from most mainstream platforms already, so it's not like I disapprove of the message necessarily. I did find it regrettable that the calculus worked out that way though, and I don't find it reasonable to deny that there is / was a calculus. You do give up on things that are not just the assholes. I'd definitely classify that as a force.

But maybe I'm just missing that this was supposed to be inspirational rather than literal, and mistook your words. I don't know.


I am the topic starter, and I had no emotional response; I was just being curious. I never expected it to land at HN #1. I specifically posted the link to the first commit and not to the whole branch, because currently the prompt is the most interesting part.

The title kinda set the tone for this post.

The title is "Bun is being ported from Zig to Rust". The docs/PORTING.MD starts with "Zig → Rust porting guide"

I don't think the tone was the problem.


Imagine titling it "Bun is being ported from Zig to Rust in an experimental branch", though. Not enough drama with that.

The branch name is "claude/phase-a-port", there was zero indication this was an experiment until Jarred commented. The more accurate title might have simply been "there is a branch in the official repo of bun describing a port to rust from zig". No amount of soft titles would have prevented the discussion. People have their opinions about Bun, about Zig, about Rust and it's all going to come out in a discussion board.

Can’t every branch be considered an experiment? I have a ton of experimental branches that I don’t label «experimental». One of the reasons you use git…

If every branch is experimental, then there is no need to put it in the title.

Sure, but then how does it change anything around the discussion? You are still running an experiment to port to Rust, it still gets posted, the Rust-heads and Zig-heads still make their comments.

> there was zero indication this was an experiment

  The goal of Phase A is a **draft** `.rs` next to the `.zig`
  that captures the logic faithfully — it does **not** need to compile. Phase B
  makes it compile crate-by-crate.
I mean, it would be hard to spell it out any clearer than that! Code that fails to compile is just not very useful for real work.

Phase B clearly says compilation is the next goal. The first goal is to get a like for like logic, the second goal is to get it to compile. Can you guess what the third goal will be? Throw out the code?

The branch is named phase-a-port and the document explains what "phase-a" means. It's quite clear.

Yes, but that would require people to read past the title. You can't get a proper knee-jerk first post in if you do that! Completely unfair to expect people to make that sacrifice/effort.

[there was some sarcasm there, BTW, if anyone has a faulty detector that didn't pick up on it]


I couldn't use that title because I didn't know if it was an experiment at the moment. Even now the correct title would be "Bun author says that he is entertaining the idea of porting it from Zig to Rust, creates an experimental branch".

But you also didn't know a port was happening, which the title implies.

How would an outside observer know it’s an experiment?

The original topic starter? I'm pretty sure this was originally posted on X by someone else (I commented there), and minutes later it was copied and put here on HN with the twisted title; the original had more of a questioning, surprised tone.

Topic starter here. I saw a post on Twitter in the "for you" feed, verified it, found an interesting bit (the rewriting prompt) and started a topic on HN. Like I said, I never expected it to hit #1.

It’s annoying for the team members I suppose, but to be fair, if you’re working on a high-profile open source project, owned by one of the most hyped companies in the world, and your branches are public, it’s probably a good idea to be clear in the branch naming and supplemental files if you’re just “experimenting”.

By working in public on a popular open source project, you are communicating intent and purpose to your users and the general public through your commit messages, branch names, and documentation. You’ll save yourself a lot of grief if you act accordingly.


The fact someone who works on Bun is willing to create and even push a branch generated by a stochastic parrot is very telling of the direction the project is going.

Doesn't matter if it's "experimental", it's a dumb experiment that shouldn't exist.


> Doesn't matter if it's "experimental", it's a dumb experiment that shouldn't exist.

Do you think the same about bitcoin? Where do you draw the line as to what programs are allowed to be written?


Why are you treating branches as if they are holy? This is all OSS, people work on this in their free time, git is git, and people can use branches as they like to experiment and share their experiments with others. If you don't like the code, don't use it, you damn leech.

Underplaying AI, overselling what an experimental branch is, and suggesting it's representative of the entire project, all while suggesting people shouldn't even consider new tools and methodologies. Where to start.

That's not a very constructive, nor accurate, way of trying to dismiss all the concerns around Bun that have been raised.

I think that was a very constructive comment about the unconstructive way people are shoehorning other concerns about Bun into this thread, which is about a specific aspect that itself turns out to be just an experiment someone knee-jerk reacted to, despite several active threads already discussing those matters, one of which only just fell off the front page.

While the concerns many have about Bun's potential future direction are valid IMO, of the posts on this thread the one you are criticising is one of the more constructive.


Maybe time to rethink your stance?

https://news.ycombinator.com/item?id=48094745


I love your work on bun. How do you feel about all the constant concerns being raised about the quality of the project lately? I understand some of them might just be typical Twitter hate, but some of them are real. And I think people are right to question why you are adding image processing or web views inside a JavaScript runtime when there are bugs affecting production that sit unaddressed. For example, one of our biggest blockers right now is https://github.com/oven-sh/bun/issues/6608 which was reported in 2023, still affecting us 3 years later.

When you start getting hate, you’ve made it. Up until then you’re a hypothetical that people like. Maybe they’ve built a side project with you or read the docs. You only get hate when people have used your tool and butted up against limitations. We saw this with Deno too where they went from beloved potential savior to realistic, limited tool. Hate is good. It means people rely on you

Do you know which project gets the most hate? Node.js. So in that sense Node.js has made it and is widely deployed, but this hate was the reason that two separate alternatives to Node emerged: Deno and Bun.

Recently Bun's latest version had memory leaks which, from my understanding, crashed production code. Add to that their attitude[0] of saying OSS will have no human contribution allowed, now these Zig-to-Rust ports, the years-old questions about the original decision to use Zig, and this code basically being vibed (there is no way they are reviewing it all) while being VC funded/bought by Anthropic.

These are all genuine issues which cause hate. You can say people are hating because they rely on it, but it also looks like a bait and switch: people switched from Node.js to Bun (maybe even getting locked into Bun), only for the team to make these highly questionable decisions, which is why people are starting to hate on Bun.

At least that's my interpretation right now, reading this whole thread.

[0]:https://x.com/jarredsumner/status/2048434628248359284: "I expect OSS to go the opposite direction: no human contribution allowed. Slop will be a nostalgic relic of 2025 & 2026."

- Jarred Sumner


Well yeah, it's in Zig, not a memory-safe language, so of course I'd expect memory leaks. That's why I haven't seriously used bun and instead use a runtime that actually is in a memory-safe language, Deno in Rust. It's like wearing roller skates without brakes and wondering why you keep running into things.

Memory safety has nothing to do with memory leaks, and it's perfectly valid to leak memory in Rust?

e.g. `Box::leak(Box::new( ... ))`


Generally it's automatically dropped unless you go out of your way to use the leak function, which most software doesn't do.

Memory safety doesn't help too much here, but "RAII" (automatically dropping values when they go out of scope) does.
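A minimal sketch of that contrast (illustrative only): the first value is freed implicitly by Drop at the end of its scope, while leaking requires an explicit opt-in call.

```rust
fn main() {
    {
        let v = Box::new(vec![1, 2, 3]);
        println!("{}", v.len()); // 3
    } // `v` is freed automatically here; no explicit free() in sight.

    // Leaking is opt-in: Box::leak hands back a &'static mut and the
    // allocation is deliberately never freed.
    let leaked: &'static mut i32 = Box::leak(Box::new(42));
    *leaked += 1;
    println!("{}", leaked); // 43
}
```

So the default is "freed on scope exit", and leaking is the thing you have to spell out.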

Unit tests in zig will fail if the tested code leaks memory.

It’s a reasonable expectation from a clearly successful and competent engineer who is using the latest tooling.

Who is to say that it’s wrong?


Okay, let's be honest. That's a feature request, not a bug report.

I'd agree but bun is supposed to be a "drop-in replacement" and is marketed as such. This breaks several packages and projects.

Ohh thanks, did not realize. That I can understand.

Why not offer a bounty to get this issue fixed? Are you otherwise paying any money to the bun team?

This is getting stupid. Now one can't even ask a reasonable, polite question with praise without being asked if they pay.

Bun raised millions of dollars and was acquired by a commercial entity which bragged in the same blog post of reaching $1B. They’re not a guy with an eyepatch and a tin can out on the street.

Open-source developers should be compensated, but they don’t have to be. You can’t reasonably offer your work for free then complain someone isn’t paying you. If you want to be paid, charge for it.

Signed: A long time open-source developer who has dedicated years of full-time work to useful projects without compensation or raising VC money or being acquired.


Come on, whenever a project is discussed on hackernews, there is always one comment of "why are you working on X, when you should be fixing bug Y?!".

We are all software engineers on here (or at least many of us are), we all know how project management and prioritisation works right? We can't work on everything all at once.


given the alleged context, X being something "reported in 2023, still affecting us 3 years later", is this not a reasonable PM / priority decision to question?

> Come on, whenever a project is discussed on hackernews, there is always one comment of "why are you working on X, when you should be fixing bug Y?!".

That is not what the question is about, which you’ll see if you engage with it properly in good faith. There is a single question in the comment (indicated, as one does in English, by a question mark):

> How do you feel about all the constant concerns being raised about the quality of the project lately?

Everything else is context and opinion to explain the question.


I think the question still deserves a proper answer.

No, it doesn't. No open-source dev needs to answer anything; if you don't like it, fork it and do the work yourself.

Maybe it can be better phrased as "I think this question doesn't deserve that answer"

No, open-source maintainers don't owe you anything if you don't pay for it

I have said the same many times here on HN. This in/famous blog post really changed my view: "Open Source Maintainers Owe You Nothing": https://mikemcquaid.com/open-source-maintainers-owe-you-noth...

I have similar problems with products I do pay for, and I still get told I have no say. The F/OSS distinction is a red herring.

At some point it needs to be made clear: it's not a legal obligation, but a reputational challenge.



I do know why your post is downvoted, and I disagree with it. Here is my upvote.

I read the link that you shared. This is genius. To quote:

    > Community backed
    > Fody requires significant effort to maintain. As such it relies on financial support to ensure its long term viability.
    > It is expected that all developers using Fody become a Patron on OpenCollective.
I can remember years ago reading some posts/writings from none other than Richard Stallman (yeah, that guy). He was talking about charging people for a copy of the source code to your open source project. At the time, I thought it was weird and did not make sense. This is basically the same thing but in 2026. After watching so much bullshit around open source projects (basically, assholes expecting free service for whining the loudest), I have come to the conclusion that "money talks" and helps to realign incentives that are warped by open source.

Are you being ironic or serious? I can see both pros (encourage people to see themselves as customers) and cons (less initial adoption) to the licensing, although I'd maybe leave bug issues open for everybody.

What aspect do you think dominates?


Serious. And although 'seeing yourself as a customer' certainly makes things slightly better, I'm also referring just to the amount of cash that enters the coffers once it's no longer a tip jar per se. It is open source on the subject of copyright, but as was described in an article on here the other day, open-source doesn't mean community. By positioning the community aspect as something you have to buy into to enter, you end up (a) selling a product for cash without compromising open source and (b) ensuring everyone you deal with is serious. It's like the Red Hat model but workable at the lower end of software at the expense of lower upside.

The answer is because YOU haven’t fixed it yet. Chop chop, we’re all waiting on you.

What's the main motivation for considering Rust?

For what it's worth, in my last experience with Bun[0] I ran into a couple of bugs where it seemed Rust could have helped, e.g. using Bun.write

[0]: https://mastrojs.github.io/blog/2025-10-29-what-struggled-wi...


With AI agents and how good they are at "language translation" tasks against an identical target with a comprehensive test suite, you end up doing these things out of curiosity. The AI agent has the originals to test its assumptions against, too.

I've had surprisingly good results from getting AI agents to take a script in shell, python or typescript and have it translate it into those other programming languages, including rust versions. Or swapping from one build system to another.


Totally agreed... It enables you to try swapping out dependencies you might not otherwise even consider, because of the cognitive load of trying to do so as an individual, and get it done/working in a few hours, with a few days to follow in order to review.

Or take on an additional/related feature (like Redis grepping over the new array data types). Because you can be relatively sure the borders are stable and you can limit the surface/scope.


Thank you for the clarification!

While you are here, can you elaborate on the method chosen? For example, why not write a conversion script for phase A? I mean, the same Anthropic model would produce it in no time, prompting it is at the same cognitive-load level, but you would have a deterministic result.


Thank you, Jarred, for your work. It’s unfortunate to see so much backlash toward legitimate research. Bun is often seen by some as “the flagship project for zig” - especially among those frustrated with rust who want zig to "win over rust" for whatever reasons. At the end of the day, you should do what makes the most sense for your project and your circumstances, regardless of the language or tools involved.

Personally, I find this experiment interesting and I’m curious to see how it develops. Writing idiomatic rust requires a shift in mindset, so it’ll be worth watching how well LLMs adapt to that over time.


I can only speak for myself... but I've found at least Claude Opus to handle Rust very well, and in my own use cases WebAssembly (wasm) and FFI for interoperation with TS/JS has been pretty smooth.

>who want zig to "win over rust" for whatever reasons

I don't understand why this mentality is so common. Zig and Rust are both fine languages with markedly different design goals and they can coexist.


Honestly, I don't know. I think it's because of frustration, but the community attitude is part of it. I experienced firsthand people frustrated with Rust moving to Zig, finding other people to pick on Rust with, and finding fertile ground (especially if moderators and heads of the community let this kind of behavior continue).

....you were saying?

You can view it as an overreaction, but also as a sign that your work is significant. It impressed some, and scared others. In any case, you made something interesting.

You're replying to the original author of Bun. Given the usage of Bun, and the fact that his company (primarily him, actually) was recently acquired by Anthropic for what I'm guessing was a bajillion dollars, I think he probably already knows his work is significant and that he made something interesting.

Lol! My bad, I wasn't aware it was the original author (my fault for replying with too little reading). In any case, I think what I said still applies to his LLM experiment.

Calm and curious about your results.

I hope you get the code elegant and not only maintainable but future friendly and performant.


I'm very curious what Zig vs Rust code looks like for the same project! What are your thoughts so far?

Might be a good idea to let AI handle social media. I'm not saying you're doing it badly, just that it doesn't seem like worth the drained energy to do manually.

this is lovely; how admirable that you have the space to do this. It's very rare that we as a community take the time to actually implement a non-trivial system in X and Y and look at the differences. So much discussion around these things is based on pointless tribalism.

I'm sure recasting Bun in a new mold is going to be hugely informative about the structure of Bun itself, regardless of the outcome.

would love to read a postmortem


A research prototype. This is normal.

Advice for the future: experiments should be explicitly tagged as such. The commit message "docs: add Phase-A porting guide" says nothing about the experimental nature and looks like a planned move to Rust. That message certainly looks very official to me.

> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

Trying to pass off a blunder like this as no big deal is an insult to your users. You made a dumb mistake. Own it, be transparent, and correct the problem that started this; namely, put some form of experimental tag in the commit message. Then say you made a simple mistake, sorry, and move on. Being dismissive is a defense mechanism that can arouse suspicion, as in: are you now lying about the experimental state to quench the flame war? Not that I believe that, but it can certainly now become conspiracy fodder. Again, you can avoid all that with transparency.


Or the community at large could stop acting deranged over language wars like it’s 2001.

It’s their repo, let them do what they want lol


I didn't get the impression that anyone cares about the source or destination language. I think the concern is centered around the long history of failure with large-scale rewrites like this -- see Netscape 5, Perl 6, etc. Joel Spolsky wrote a legendary article about this [0]. I think the Next.js app router might be slowly joining this conversation as well.

It could get even worse if they get Second System Syndrome[1] and try to add features as they rewrite it. Considering Bun's rapid development cycle, this seems likely.

[0] https://www.joelonsoftware.com/2000/04/06/things-you-should-...

[1] https://en.wikipedia.org/wiki/Second-system_effect


Or we can stop being toxic to open source maintainers and acting like we own them or they owe us anything.

A commit message on a random branch is not an obligation. Not telling random internet users what side projects they're working on is not a blunder. It quite frankly doesn't matter what you think looks official, it doesn't give you the right to treat people like this.

It's so embarrassing to be a programmer sometimes; so many of my peers behave like spoiled rotten brats.


> Or we can stop being toxic to open source maintainers and acting like we own them or they owe us anything.

The majority of the community feels this way, which says something. The author's reaction is to publicly display being upset and dismissive of the community's reaction. That is just making it worse.

When you work on a project this big, more care is needed. The commit was an innocent mistake. The blunder is blowing off the community's response as overblown, which it would be had the commit been tagged experimental. But it wasn't. And the author did themselves no favor blowing it off.

If the author was smart, their reply would simply have been:

Hello. To clarify: this is an experimental branch only. There are no plans to port, only to experiment. I will tag the repo as such to ensure people understand its intention and avoid future misunderstandings.

Nothing difficult to understand here.


> The majority of the community feels this way which says something.

Yes, it says that those people are spoiled rotten brats and the community needs to start calling it out to improve itself.

They aren't contributors. They aren't employees. They aren't paying customers. Bun is not a web standard. They benefit from a free product that they chose to opt into over the standard ecosystem.

And for some reason they feel they have a right to know, a priori, every decision and experiment everyone who works on that project is making. And, God forbid, if somebody even so much as starts working on something in an off branch that doesn't affect them in any way without getting their approval, they're going to throw an absolute hissy fit.

And to criticize the person actually doing their job for feeling slighted that hundreds of people have verbally accosted them over it, because one feels they don't recognize an "implied responsibility" to those folk, is silly.

I'll also push back, though. The majority of the community doesn't seem to be doing anything.


> We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

Props for the effort, man, but people have already picked up on the Zig-to-Rust transition.

Poor Zig folks ...


More like poor Bun

Hoping that an AI rewrite is thrown out.

You may even be an OK programmer, but IF YOU AREN'T ABLE TO DO THE WORK I DON'T WANT TO USE IT.

Not worth your time? Not worth my time.


Most of Bun’s code is already written by LLMs. If you feel that way, it’s already been too late for a while. Furthermore, we’re talking about a million line port done in a couple of days. The question of whether it’s worth the time looks extremely different if done by hand. It would take a year.

The "too late" argument isn't gonna fly with someone like me who has both the time and energy to own a Javascript runtime. Heck, I'm quickly becoming the most prolific author of the ES spec too.

Cool. Fork it and maintain your own runtime, then. Why are you complaining about what the bun team does with their project?

Cause I'm sick of this amateur hour shit

Then show it wrong by making something people want to use more instead. If it's too difficult then the existing project must not be very amateur.

From what I'm reading, it's too late for Bun. I hear the whole dev stream is slop now. It was nice while it lasted, but that's not a foundation to build rock-solid stuff on top of. Not for me, not for them, not for anyone.

I think the criticism is still a valid to an extent because I don't see how this would give you a good way to evaluate Zig vs. Rust. Maybe a better approach is to migrate a particularly problematic space and bench that on its own?

It's not like OP asked for any criticism to start with, right? This whole thread is a pretty good example of why the saying "Fools and children should never see half-finished work" exists. ¯\_(ツ)_/¯

Since when was HN ever about asked-for criticism?

Not HN, but from my experience of over a decade, it's certainly US culture to criticise without expertise.

I can say from expertise that vibing a full move of any project from one language to another is probably not a great way to evaluate if the decision is a good one. I got downvoted, maybe I said it too authoritatively. But hey, that is just like, my experienced opinion, man.

Will you have a way to measure the ecological impact of making such a throwaway attempt?

Not actually pointing at you or anyone in particular here, to be clear. And if the answer is "not much more than forgetting to turn off the light when leaving the toilet", then certainly that would earn a "go have fun" cheer on my part.

But otherwise we collectively have to keep in mind that the prompts we can throw out mindlessly, without perceiving any direct negative feedback, are possibly not anodyne.

So if you can measure it, come back with those numbers too, so we can all take them into consideration next time the thrill of running it just to see what happens rises in our minds. Thanks.


Right now it seems to say:

> Showing 1,808 changed files with 790,916 additions and 151 deletions.

Just looking at the git diff [0].

I looked at one of these Rust port files [1]. It's 827 LOC and apparently 7,576 tokens. So that gives you a first-order guess that the full ~790k additions is around 8 million output tokens. Obviously there are some tool calls, reasoning, reads of the Zig version, and fixing compile errors as overhead. So I would guess maybe this is like 40 million tokens, by multiplying by 5?

If we guess that is around $200 to $500 in token spend, we can probably guess that it emits around the same as burning $100 of gas? Or like 50 or so kg of CO2?

[0] https://github.com/oven-sh/bun/compare/main...claude/phase-a...

[1] https://github.com/oven-sh/bun/blob/dacc59c62a8f93eabe6d9998...
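Redoing that back-of-envelope arithmetic with the numbers above (all of them guesses from this thread, not measurements):

```rust
fn main() {
    // All inputs are the comment's guesses, not measured values.
    let added_lines = 790_916.0_f64;        // from the GitHub diff
    let tokens_per_line = 7_576.0 / 827.0;  // sampled from one ported file
    let output_tokens = added_lines * tokens_per_line;
    let total_tokens = output_tokens * 5.0; // x5 for reads, retries, reasoning
    println!(
        "output ~= {:.1}M tokens, total ~= {:.1}M tokens",
        output_tokens / 1e6,
        total_tokens / 1e6
    );
}
```

That lands at roughly 7.2M output tokens and ~36M total, consistent with the "around 8 million" and "like 40 million" ballparks.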


Thanks, that's a really great answer.

It feels odd that the same message can be thus downvoted and yet prompt a courteous response with reasoning, metrics and values.

Glory to your kindness and informative way of reacting.


Less than the impact, across the population, of people who can't be bothered to remember basic historical facts or directions and hit Google services dozens of times a day.

Probably less than the impact of having dozens/hundreds of actual developers, each with a dedicated computer running for months/years in what it would take for a similar effort.

If you want to go live in the woods and farm/hunt for yourself, feel free. I'd suggest you stay away from the museums with paint and don't glue yourself to a car manufacturer.


Isn't it just as fair to raise doubts about whether the resources required to find and access webpages, or to use GPS, cause the same ecological impact at scale as everyone selling the world by the token? That might be wrong, of course, but then the doubts should be addressed with proper reasoning, not aggressive rejection, which amounts to a call to run blindfolded.

Having people work together toward some goal is not going to create the same social structures as running LLMs at the same goal. That's missing the ecosocietal forest for the digital output.

Actually, at a societal level, no, people are not free to go into hunter-gatherer mode; it doesn't scale. Sure, some individuals can do it at the margin, but by definition that won't make the mainstream societal impact disappear.


I'm rejecting the pedantry of the premise altogether. YOU don't know the sources of energy used for the data centers in question... you aren't responsible or in a position to change anything... you are arguing from a negative assumption from the start, and in such a hostile manner that any reasonable person would probably just ignore you. (I'm not always the most reasonable person.)

As for social structures in creating software... the social structures around creating software shouldn't be a goal... software serves to scratch an itch or serve a purpose... and that purpose can even be social or entertainment... but the creation of the software itself doesn't need to serve any other purpose and if it can be done via automation, or partly automation, all the better.

As to going into hunter/gatherer mode... have you tried? My brother isn't even online and regularly hunts and fishes... so did my dad. They weren't wealthy people and still managed to get by. A lot of people do and did through history... because most people wouldn't be willing to do it... I realize that some countries and regions are more populated... but there's plenty of space in the US to achieve this kind of lifestyle.

For that matter, there's absolutely very little standing in your way if YOU want to take on the goals of creating cleaner energy or pairing with "responsible" data centers.

But I really think you're just virtue signaling and grandstanding to try to shame others because you feel guilty about things you aren't actually responsible for.


I work on Bun, and this post is confusing to me. I personally, and the Bun team, continue to dogfood and make Bun better every day. Our development pace has only gotten faster. Bun's stability has improved significantly since joining Anthropic.

Here are some things shipping in the next version of Bun:

- 17 MB smaller Windows x64 binaries [0]

- 8 MB smaller Linux binaries [1]

- `--no-orphans` CLI flag to recursively kill any lingering processes spawned [3]

- SSL context caching for client TCP & unix sockets, which significantly reduces memory usage for database clients like Mongoose/MongoDB [4]

- Experimental HTTP/3 & HTTP/2 client in fetch [5]

- Experimental HTTP/3 support in Bun.serve() [6]

- Bun.Image, a builtin image processing library [7]

(Along with several reliability improvements to node:fs, Worker, BroadcastChannel, and MessagePort)

The Anthropic acquisition also means Bun no longer needs to become a revenue-generating business. We are very incentivized to make Bun better because Claude Code depends on it, and so many software engineers depend on Claude Code to help get their work done.

[0]: https://github.com/oven-sh/bun/pull/30219

[1]: https://github.com/oven-sh/bun/pull/30098

[2]: https://github.com/oven-sh/WebKit/pull/211

[3]: https://github.com/oven-sh/bun/pull/29930

[4]: https://github.com/oven-sh/bun/pull/29932

[5]: https://github.com/oven-sh/bun/pull/29863

[6]: https://github.com/oven-sh/bun/pull/30032


Acquisitions in this industry tend to lead to a certain inevitable conclusion. The software that has been acquired gets worse as the original team members cash out and their culture is replaced with the culture of the new owner.

Perhaps Bun will be the exception, but you can't say that the concern is unfounded.

The CEO of Anthropic has a habit of making outlandish predictions about how AI is so very close to replacing human programmers. Anthropic has been applying this belief to Claude Code and it has become a giant heap of unmaintainable spaghetti.


Hasn't your team shrunk a lot? Word on the street is that many of Bun's employees left or let go in the time leading up to the acquisition. How many people are left working on Bun?

Has development velocity increased because you are merging large quantities of unreviewed LLM generated code? If so, I would be very worried about future stability if I used Bun.


Saying that you “work on Bun” is such a radical understatement. I have my reservations about Anthropic, but I don’t see how Bun could go wrong with you at the helm. And I’m sure that you are putting the stability and funding of a larger organization to good use :)

I’ve been a Bun maximalist since the beginning. Thank you Jarred!!!


Perhaps it could go wrong because he uses AI robots to generate responses to issues on claude code that are also generated by AI robots? Just bots talking to each other like moltbook. It shows a level of AI maximalism that is absurd, concerning, and funny. But probably par for the course for someone working at Anthropic. I can imagine being surrounded by people doing similarly foolish things only encourages the foolery.

It's a little heartbreaking. The DX of Bun is legitimately amazing and perhaps even revolutionary (I say this as a long-time javascript backender). I'm all for LLM-based development velocity enhancement, but it does really feel like they are taking it too far and moving too fast.

I don't really see the issue here. They are language models after all, and they work by talking. Whether it's one model talking to itself (i.e., thinking/reasoning) or one model talking to another, it amounts to the same thing.

The best feature Bun delivered recently is the portable binary. That portability is a huge deal for me, as my users are often on ancient Linux distros. Thank you. Both Node and Deno require a recent Linux, or more precisely, a recent glibc.

I think velocity is a real risk to stability, dogfooding or not. That's what made me swear off the python transformers library. It's doubtful that LLMs will change that calculus for the better.

Hey Jarred, first of all, thanks. I've been doing backend JS since the first release of node and bun is genuinely the first really big improvement in terms of DX. It's an absolute delight to build glue and scripts with... Bun.* just seems to have everything I need. Bun.$ is revolutionary. etc. etc. I'm hoping to run a collection of backend services on it in the near future but it seems the general consensus is that there are still some gremlins holding it back (memory leaks, etc.)

Can you shed a little light on the recent giant Rust-based commits, though? Are you moving away from Zig? These kinds of big, curious moves and the spectre of giant LLM-based commits are not exactly confidence inspiring.


From Amol, who is the Head of Growth:

> For clarity, we're running a small test on ~2% of new prosumer signups. Existing Pro and Max subscribers aren't affected.

https://x.com/TheAmolAvasare/status/2046724659039932830


But the current plans are unsustainable and prices will have to be effectively raised sooner or later:

> Engagement per subscriber is way up. We've made small adjustments along the way (weekly caps, tighter limits at peak), but usage has changed a lot and our current plans weren't built for this.

https://xcancel.com/TheAmolAvasare/status/204672528250217304...


This is the dumbest PR tactic in the book, and it annoys me that it works on so many people.

April: "The fact that we're doing X isn't news because we're only starting to do X"

August: "The fact that we've fully rolled out X isn't news because we started X in April"


Once again, random tweets from insiders are the only clues we have to what Anthropic's actual policy is.


You could try customer support; that chatbot will happily loop you through some more non-answers, but it will try to make you feel good about those non-answers :)


> turns out for me, bun is not production ready

What issue did you run into?

