
I really like Rust as a replacement for C++, especially given that C++ seems to become crazier every year. When reasonable, nowadays I always use Rust instead of C++.

But for the vast majority of projects, I believe that C++ is not the right language, meaning that Rust isn't, either.

I feel like many people choose Rust because it sounds more efficient, a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not) or for C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project).

It's a bit like choosing Gentoo "because it's faster" (or worse, because it "sounds cool"). If that's the only reason, it's probably a bad choice (disclaimer: I use and love Gentoo).



I have a personal-use app that has a hot loop that (after extensive optimization) runs for about a minute on a low-powered VPS to compute a result. I started in Java and then optimized the heck out of it with the JVM's (and IntelliJ's) excellent profiling tools. It took one day to eliminate all excess allocations. When I was confident I couldn't optimize the algorithm any further on the JVM I realized that what I'd boiled it down to looked an awful lot like Rust code, so I thought why not, let's rewrite it in Rust. I took another day to rewrite it all.

The result was not statistically different in performance from my Java implementation: each took the same amount of time to complete. This surprised me, so I made triply sure that I was using the right optimization settings.

Lesson learned: Java is easy to get started with out of the box, memory safe, battle tested, and the powerful JIT means that, if warmup times are a negligible factor in your usage patterns, your Java code can later be optimized to be equivalent in performance to a Rust implementation.


I wrote a few benchmarks a few years ago comparing JS vs C++ compiled to WASM vs C++ compiled to x64 with -O3.

I was surprised that the heaviest one (a lot of float math) ran at about the same speed in JS vs C++ -> x64. The code was several nested for loops manipulating a buffer and using only local-scoped variables and built-in Math library functions (like sqrt) with no JS objects/arrays besides the buffer. So the code of both implementations was actually very similar.
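Roughly the shape of the kernels, from memory (a hypothetical reconstruction, not the original benchmark code); the JS version was a near line-for-line equivalent:

    #include <cmath>
    #include <cstddef>

    // Nested loops over a flat float buffer, only local scalars plus built-in
    // math calls, no other objects or containers involved.
    void kernel(float *buf, std::size_t w, std::size_t h) {
        for (std::size_t y = 0; y < h; ++y) {
            for (std::size_t x = 0; x < w; ++x) {
                float v = buf[y * w + x];
                buf[y * w + x] = std::sqrt(v * v + 1.0f);
            }
        }
    }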

The C++ -> WASM version of that one benchmark was actually significantly slower than both the JS and C++ -> x64 version (again, a few years ago, I imagine it got better now).

Most compilers are really good at optimizing code if you don't use the weird "productivity features" of your higher level languages. The main difference of using lower level languages is that not being allowed to use those productivity features prevents you from accidentally tanking performance without noticing.

I still hope to see the day where a language could have multiple "running modes", where you can make an individual module/function compile with a different feature-set to guarantee higher performance. The closest thing we have to this today is Zig, with custom allocators (where opting out of receiving an allocator guarantees no heap allocations for the rest of the call stack) and @setRuntimeSafety(false), which disables runtime safety checks (when using the ReleaseSafe compilation target) for a single scope.


I've also seen Cython used to this effect for hotspots or entire applications in scientific Python code.


I am not super familiar with Python, but that sounds quite annoying to set up in the build process. You would need to compile different files/modules using a different compiler, right?


I'd rather write rust than java, personally


If I have all the time in the world, sure. When I'm racing against a deadline, I don't want to wrestle with the borrow checker too. Sure, its objections help with the long-term quality of the code and reduce bugs, but that's hard to justify to a manager/process driven by Agile and Sprints. Quite possibly an experienced Rust dev can be very productive, but there aren't tons of those going around.

Java has the stigma of ClassFactoryGeneratorFactory sticking to it like a nasty smell, but that's not how the language makes you write things. I write Java professionally and it is as readable as any other language. You can write clean, straightforward, easy-to-reason-about code without much friction. It's a great general purpose language.


Java is incredibly productive - it's fast and has the best tooling out there IMO.

Unfortunately it's not a good gaming language. GC pauses aren't really acceptable (a problem C# also suffers from) and GPU support is limited.

Miguel de Icaza probably has more experience than anyone building game engines on GC platforms and is very vocally moving toward reference counted languages [1]

[1] https://www.youtube.com/watch?v=tzt36EGKEZo


He would vouch for that; but as great as he might be, he has a bias, and Mono's GC was never a great implementation.

Also, here is how great his new Swift love performs in reality against modern tracing GCs:

https://github.com/ixy-languages/ixy-languages


Interesting link, but that's a nearly 7 year old version of Swift (4.2) running on Linux.

I wonder how the performance would be with Swift 6.1 which has improved support for Linux.


Probably much better, given the improvements in the Swift optimizer, but it just goes to show that "tracing GC bad, reference counting GC good" isn't as straightforward as people make it out to be, even if they are renowned developers.


It's a cherry picked, out-of-date counter-example. Swift isn't designed for building drivers.

In reality, a lot of Swift apps are delegating to C code. My own app (in development) does a lot of processing, almost none of which happens in Swift, despite the fact I spend the vast majority of my time writing Swift.

Swift is an excellent C glue language, which Java isn't. This is why Swift will probably become an excellent game language eventually.


> It's a cherry picked, out-of-date counter-example. Swift isn't designed for building drivers

Do you have a counter benchmark? The burden is on you here to disprove the data presented. What that benchmark shows is that you spend A TON of time counting references, much more than with a tracing GC, unless you enter Swift's C++ networking code. I would think games don't spend most of their time calling into networking code.

> In reality, a lot of Swift apps are delegating to C code. My own app (in development) does a lot of processing, almost none of which happens in Swift, despite the fact I spend the vast majority of my time writing Swift.

So what you’re saying is that any language will be good for gaming since they all can delegate to C?

> Swift an excellent C glue language, which Java isn't. This is why Swift will probably become an excellent game language eventually.

What makes swift better at calling C than Java? AFAIK Java has a perfectly good and brand new foreign function interface.

> This is why Swift will probably become an excellent game language eventually.

I would take this bet that it won’t. Purely on the sheer fact that gaming occurs on Windows and Swift is barely capable there.


> I would think games don’t spend most of their time calling into networking code.

Exactly. That is why the out-of-date networking example being touted as evidence is irrelevant here.

What it boils down to is that Java and C have fundamentally incompatible memory models. Direct access to C memory is impossible because of the managed heap and GC.

> I would take this bet that it won’t. Purely on the sheer fact that gaming occurs on Windows and Swift is barely capable there.

This is a very odd comment - gaming occurs in a lot of places. Quite a lot happens on mobile these days. Turns out a lot of mobile devices run Swift, on which it appears to be reasonably capable.


And 70% of the mobile world runs Android, which is powered by Java, so it is equally capable.


Android does not run Java and you know that.

https://omeraydin.dev/blog/does-android-run-java


It surely is, according to Apple's own documentation.

> Swift is a successor to the C, C++, and Objective-C languages. It includes low-level primitives such as types, flow control, and operators. It also provides object-oriented features such as classes, protocols, and generics.

-- https://developer.apple.com/swift/

If developers have such a big problem gluing C libraries into Java via JNI or Panama, then maybe the game industry is not where they are supposed to be, given that even Assembly comes into play there.


> GC pauses aren't really acceptable

Java has made great progress with low-pause (~1 ms) garbage collectors like ZGC and Shenandoah over the last ~5 years.


People have 240Hz monitors these days, so you have a bit over 4ms to render a frame. If that 1ms can be eliminated or amortised over a few frames it's still a big deal, and that's assuming 1ms is the worst case scenario and not the best.


I don’t think you need to work in absolutes here. There are plenty of games that do not need to render at 240hz and are capable of handling pauses up to 1ms. There’s tons of games that are currently written in languages that have larger GC pauses than that.


What about the C# garbage collector? Is it much better? Because Unity is in C#, right?


Unity uses an aging Mono runtime because of politics with Xamarin before its acquisition by Microsoft; the migration to .NET Core is still in progress.

Additionally they have HPC#, which is a C# subset for high performance code, used by the DOTS subsystem.

Many people mistake their C# experience in Unity, with what the state of the art in .NET world is.

Read the great deep dive blog posts from Stephen Toub on Microsoft DevBlogs on each .NET Core release since version 5.0.


Yes and it's impressive.

For the competitive Minecraft player, I suspect starting their VM with -XX:+UnlockExperimentalVMOptions is normal.

A casual gamer is however not going to enjoy that.


Are you sure that enabling ZGC or Shenandoah requires -XX:+UnlockExperimentalVMOptions?


Let's get back to the point. Is Java a good gaming language?


I have found that the ClassFactoryGeneratorFactories sneak up on you. Even if you don't want them, the ecosystem slowly but surely nudges you that way.


That has not been my experience. Sure, you don't have any control over the third-party stuff, but I haven't seen this issue being widespread in the mainstream third-party libraries I've used, e.g. logback, jackson, junit, jedis, pgJDBC etc., which are very well known/widely used. The only place I've actually seen a proliferation of this was by a contractor who, I suspect, was trying to ensure job security through impenetrability.


It is ironic how Java got that stigma and other systems that are just as bad, or worse, like Objective-C, have not.


Well I have never used Objective-C so I can't comment on it.


On Objective-C, due to the way the language works, besides ClassFactoryGeneratorFactories, you would need to add all parameter names to the identifier.

Here, enjoy https://github.com/Quotation/LongestCocoa

There is even a style guide on it,

https://developer.apple.com/library/archive/documentation/Co...


I'd have said the same thing 10 years ago (or, I would have if I were comparing 10-year-old Java with modern Rust), but Java these days is actually pretty ergonomic. Rust's borrow checker balances out the ML-style niceties to bring it down to about Java's level for me, depending on the application.


I’d rather write Java than Rust, personally


Same here, and if I get bored with Java, there are also Scala, Kotlin and Clojure to choose from.

However, I would still prefer C# or F#.

Hence why I enjoy both stacks, lots of goodies to choose from, with great tooling.


I would do C#, but I don’t want to be in async/await hell.

Also it’s subjective but PascalCase really irks me.


PascalCase has been my favourite since the MS-DOS days; I have been through most Borland products and Microsoft ones, alongside many Pascal-influenced languages, so it feels like home. :)

But yeah, it is subjective; I also don't have many qualms with other alternatives.


Wow, way to be un-hip.


Note that I mentioned JVM languages. There is Scala, Kotlin and others. Kotlin is the default for Android, and it is really nice.


Kotlin is nice indeed. Most of the issues I had with it were in interop with Java code (those pesky platform types that behave like non-nullable types but are actually nullable, and you are back in the NPE swamp!)


>I realized that what I'd boiled it down to looked an awful lot like Rust code

You're no longer writing idiomatic Java at this point - probably with zero object-oriented programming - so you might as well write it in Rust from the get-go.


If I'd started in Rust I likely wouldn't have finished it at all. Java allowed me to start out just focused on the algorithm with very little regard for memory usage patterns and then refactor towards zero garbage collection. Rust can sort of allow the same thing by just sprinkling everything with clone and/or Rc/Arc, but it's much more in the way than just having a garbage collector there automatically.


Yes but it would just be the hot loop in this case; the rest of the app can still be in idiomatic Java, and you still get the GC.


Exactly. Write it in Java, optimize what you need to, leave the rest alone.


As a polyglot dev, I never understood this religious approach that it has to be 100% pure and unadulterated in language XYZ for performance.

Nope, embrace the productivity of managed languages and, if really needed, package the rest in a native library, done.


I write a lot of Rust, but as you say, it's basically a vastly improved version of C++. C++ is not always the right move!

For all my personal projects, I use a mix of Haskell and Rust, which I find covers 99% of the product domains I work in.

Ultra-low level (FPGA gateware): Haskell. The Clash compiler backend lets you compile (non-recursive) Haskell code directly to FPGA. I use this for audio codecs, IO expanders, and other gateware stuff.

Very low-level (MMUless microcontroller hard-realtime) to medium-level (graphics code, audio code): Rust dominates here

High-level (have an MMU, OS, and desktop levels of RAM, not sensitive to ~0.1ms GC pauses): Haskell becomes a lot easier to productively crank out "business logic" without worrying about memory management. If you need to specify high-level logic, implement a web server, etc. it's more productive than Rust for that type of thing.

Both languages have a lot of conceptual overlap (ADTs, constrained parametric types, etc.), so being familiar with one provides some degree of cross-training for the other.


What do you mean by 'a mix of Haskell and Rust'? Is that a per-project choice or do you use both in a single project? I'm interested in the latter. If so, could you point me to an example?

Another question is about Clash. Your description sounds like the HLS (high level synthesis) approach. But I thought that Clash used a Haskell -based DSL, making it a true HDL. Could you clarify this? Thanks!


Yeah, sometimes I'll do e.g. a backend state management server in Haskell and then a lightweight (embedded) client in Rust. I haven't ever tried linking rust from haskell yet, if that's what you mean.

I would actually flip your HDL definition a bit. Clash is a true HDL, but specifically because it's not just a shallowly embedded DSL.

Clash is actually a GHC plugin that compiles haskell code to a synchronous circuit representation and then can spit that out as whatever HDL you like. It's emphatically not just a library for constructing circuit descriptions, like most new gateware development tools. This is possible because the semantics of Haskell are (by a mixture of good first-principles design and luck) an almost exact match for the standard four-state logic semantics of synchronous digital circuits.

This is also different from the standard HLS approach, where the semantics of the source language do not at all match the semantics of the target. With Haskell, they are (surprisingly) very close! Only in a few edge cases (mostly having to do with zero-bit wires and so on) do the evaluation semantics of Haskell differ from the evaluation semantics of e.g. Verilog.


> "I really like Rust as a replacement for C++, especially given that C++ seems to become crazier every year."

I don't understand this argument, which I've also seen used against C# quite frequently. When a language offers new features, you're not forced to use them. You generally don't even need to learn them if you don't want to. I do think some restrictions in languages can be highly beneficial, like strong typing, but the difference is that in a weakly typed language that 'feature' is forced upon you, whereas a random new feature in C++ or C# is nearly always backwards compatible and opt-in only.

For instance, to take a dated example, consider move semantics in C++. If you never used it anywhere at all, you'd have 0 problems. But once you do, you get lots of neat things for free. And for these sorts of features, I see no reason to ever oppose their endless introduction unless it starts to imperil the integrity/performance of the compiler, but that clearly is not happening.


You can't avoid a lot of this stuff, once libraries start using it or colleagues add it to your codebase then you need to know it. I'd argue you need to know it well before you decide to exclude it.


Then one had better be quite picky about which libraries one chooses, because that is the thing: while we may not use those features ourselves, the libraries might impose them on us.

The same applies to having to deal with old features replaced by modern ways: old codebases don't get magically rewritten, and someone has to understand both the modern and the old ways.

Likewise, I am not a big fan of C and Go, as is visible from my comment history, yet I know them well enough, because while in theory I am not forced to use them, in practice there are business contexts where I do have to.


My experience with C++ is that it fundamentally "looks worse" and has worse tooling than more modern languages. And it feels like they keep adding new features that make it all even worse every year.

Sure, you don't have to use them, but you have to understand them when used in libraries you depend on. And in my experience in an environment of C++ developers, many times you end up having some colleagues who are very vocal about how you should love the language and use all the new features. Not that this wouldn't happen in Java or Kotlin, but the fact is that new features in those languages actually improve the experience with the language.


I'm a C++ developer and it's always great when we move to a newer language version, with all the language improvements that come with that.


>> a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not)

The OP is doing game development. It’s possible to write a performant game in Java but you end up fighting the garbage collector the whole way and can’t use much library code because it’s just not written for predictable performance.


I didn't mean that the OP should use Java. BTW the OP does not use C++, but Rust.

This said, they moved to Unity, which is C#, which is garbage collected, right?


C# also has "Value Types" which can be stack allocated and passed by value. They're used extensively in game dev.


You can already get halfway there with Java, by making use of Panama, even if not exposed at language level.

And let's be real, how many devs manage to sell as many copies as Minecraft?

Too much discussion about what language to use, instead of what game to make.


Hopefully that changes once Java releases their value types.


C#/.NET has a huge feature surface for low-level/hands-on memory manipulation, which is highly relevant to gamedev.


The core Unity game engine is C++ that you can't access, but all Unity games are written in C#.


And you could do that with any garbage collected language, right? You could reuse that C++ core with a JVM language.


Unity games are C#, the engine itself is C++.


The advantage C has over C++ is it won't let you use templates.


this!


VPS/Cloud providers skimp on RAM. The JVM sucks for any low-RAM workload where you want the smallest possible single server instance. The startup times of JVM-based applications are also horrendous. How many gigabytes of RAM does Digital Ocean give you with your smallest instance? They don't. They give you 512MiB. Suddenly using Java is no longer an option, because you will be wasting your day carefully tuning literally everything to fit in that amount.


You can get decent startup times if you have fewer dependencies. The JVM itself starts fairly quickly (<200 ms), the problem is all the class loading. If your "app" is a bloated multi gigabyte monstrosity... good luck!


I think the choice of C++ vs JVM depends on your project. If you're not using the benefits of "unsafe" languages then it probably doesn't matter.

But if you are after performance, how do you do the following in Java?

- Build an AoS (array of structures) so that memory access is linear with respect to the cache.
- Prefetch.
- Use things like _mm_stream_ps() to tell the CPU the cache line you're writing to doesn't need to be fetched.
- Share a buffer of memory between processes by atomically incrementing a head pointer.
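For illustration, roughly the kind of C++ I mean for the last three points (a rough sketch with made-up names, not production code):

    #include <atomic>
    #include <cstddef>
    #include <xmmintrin.h>  // _mm_prefetch, _mm_stream_ps, __m128

    // Sketch: prefetch the input a little ahead and stream results to the output
    // without polluting the cache. Assumes 16-byte aligned buffers whose length
    // is a multiple of 4 floats.
    void scale(const float *in, float *out, std::size_t n, float k) {
        const __m128 kk = _mm_set1_ps(k);
        for (std::size_t i = 0; i < n; i += 4) {
            _mm_prefetch(reinterpret_cast<const char *>(in + i + 64), _MM_HINT_T0);
            __m128 v = _mm_load_ps(in + i);
            _mm_stream_ps(out + i, _mm_mul_ps(v, kk));  // non-temporal store
        }
    }

    // Sketch: claiming a slot in a shared ring buffer by atomically bumping a
    // head index (for cross-process use, the atomic lives in shared memory).
    std::atomic<std::size_t> head{0};
    std::size_t claim_slot() { return head.fetch_add(1, std::memory_order_relaxed); }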

I'm pretty sure you could build an indie game without low-level C++, but there is a reason that commercial gamedev is typically C++.


While there are many technical reasons to use C++ over Java in game development, many commercial games could be easily done in Java, as they are A or AA level at most.

Had Notch thought too much about which language to use, maybe he would still be trying to launch a game today.


Minecraft was Indie then. And anyway, it's now in C++.


Many people dream of making it as an indie; most don't even achieve that.

No it isn't; there are now two versions of Minecraft, the classic (Java) one and Minecraft Bedrock, which is the one written in C++.

Minecraft Bedrock doesn't have half of the community that classic Minecraft enjoys, hence why Microsoft is trying to use JavaScript-based extensions to bring the mod community over to Minecraft Bedrock.

Finally, without classic Minecraft's market success, Minecraft Bedrock wouldn't exist at all, so Java served Notch's fortunes well enough.


I'm not knocking indie development, the scene is very very vibrant. But indies don't typically push the hardware to its limits the same way.

And Java was a perfectly good choice of language for Notch for the same reasons.

I don't play Minecraft so I guess I'm outta touch. I knew about Bedrock and I've heard kids call Java the "old one". I didn't realise there's still an active community. Thanks for the correction :)


Literally no one who has access to the Java version cares even a little bit about Minecraft bedrock edition.


> but there is a reason that commercial gamedev is typically C++.

Sure, and that's kind of my point. There are a few use-cases where C++ is actually needed, and for those cases, Rust (the language) is a good alternative if it's possible to use it.

But even for gamedev, the article here says that they moved to Unity. The core of Unity is apparently C++, but users of Unity code in C#. Which kind of proves my point: outside of that core that actually needs C++, it doesn't matter much. And the vast majority of software development is done outside of those core use-cases, meaning that the vast majority of developers do not need Rust.


We were using a modified Luajit, in assembly, with a bit of other assembly dotted around the place. That assembly takes a long time to write (to beat a modern C++ compiler).

Then we had C++ for all our low level code and Lua for gameplay.

We were floating a middle layer of Rust for Lua bindings and the glue code for our transformation pipeline, but there was always a little too much friction to introduce. What we were particularly interested in was memory allocation bugs (use after free and leaks) and speeding up development of the engine. So I could see it having a place.


Rust is very easy when you want to do easy things. You can actually just completely avoid the borrow-checker altogether if you want to. Just .clone(), or Arc/Mutex. It's what all the other languages (like Go or Java) are doing anyway.

But if you want to do a difficult and complicated thing, then Rust is going to raise the guard rails. Your program won't even compile if it's unsafe. It won't let you make a buggy app. So now you need to back up and decide if you want it to be easy, or you want it to be correct.

Yes, Rust is hard. But it doesn't have to be if you don't want.


This argument goes only so far. Would you consider querying a database hard? Most developers would say no. But it's actually a pretty hard problem, if you want to do it safely. In Rust, that difficulty leaks into the crates. I have a project that uses diesel, and making even a single composable query is a tangle of uppercase Type soup.

This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.

I love Rust. But saying it’s only hard if you are doing hard things is an oversimplification.


Building a proper ORM is hard. Querying a database is not. See the postgres crate for an example.

Querying a database while ensuring type safety is harder, but you still don't need an ORM for that. See sqlx.


Sqlx is completely lacking in the query composability department, and leads to a very large amount of boilerplate.

You can derive FromRow for your structs to cut down the boilerplate, but if you need to join two tables that happen to have a column with the same name it stops working, unless you remember to _always_ alias one of the columns to the same name, every time you query that table from anywhere (even when the duplicate column names would not be present). If a table gets added later that happens to share a column name with another table? Hope you don't ever have to join those two together.

Doing something CRUD-y like "change ordering based on a parameter" is not supported, and you have to fall back to sprintf("%s ORDER BY %s %s") style concatenation.

Gets even worse if you have to parameterize WHERE clauses.


You don't need to derive anything, sqlx creates structs with the query results for you. The rest of your complaints are just the natural consequence of SQL's design. sqlx is no more difficult to use than similar libraries in other languages.


No.

The structs sqlx creates through the macros are unnameable types, they are created on the fly at the invocation site. This means you can't return them without transforming them into a type you own (you declared), either through the FromRow derive, or writing glue code that associates this unnameable type's fields to your own struct's fields, leading to the boilerplate I was referring to. This is very specific to sqlx, don't try to dilute this into "other libraries are similar".

If you choose to forgo the macros, and use the regular .query() methods, then the results you get are tied to the lifetime of the sqlx connection object, which makes them unergonomic, which is again very specific to sqlx.


My feeling is that rust makes easy things hard and hard things work.


I'm not going to deny your experience. But is Rust really that hard? It's a very smooth experience for me - sometimes enough for me to choose it instead of Python.

I know that the compiler complains a lot. But I code with the help of realtime feedback from tools like the language server (rust-analyzer) and bacon. It feels like 'debug as you code'. And I really love the hand holding it does.


"Hard" maybe the wrong word.

Tasks that are simple in most other languages, like a tree that can store many data types, are going to take a lot more code and a lot more thinking to get working in Rust.

Generics and traits have some rough edges. If I rub up against them, they're really annoying. Otherwise, if I can avoid those then I agree with you that rust is smooth, and the trade off is worth it.


> This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.

Most languages used with DBs are just as safe. This idea about Rust being more safe than languages with GC needs a rather big [Citation Needed] sign for the fans.


If you use Rust with `.clone()` and Arc/Mutex, why not just using one of the myriad of other modern and memory safe languages like Go, Scala/Kotlin/Java, C#, Swift?

The whole point of Rust is to bring memory safety with zero cost abstraction. It's essentially bringing memory safety to the use-cases that require C/C++. If you don't require that, then a whole world of modern languages becomes available :-).


For me personally, doing the clone-everything style of Rust for a first pass means I still have a graceful incremental path to go pursue the harder optimizations that are possible with more thoughtful memory management. The distinction is that I can do this optimization pass continuing to work in Rust rather than considering, and probably discarding, a potential rewrite to a net-new language if I had started in something like Ruby/Python/Elixir. FFI to optimize just the hot paths in a multi-language project has significant downsides and tradeoffs.

Plus in the meantime, even if I'm doing the "easy mode" approach I get to use all of the features I enjoy about writing in Rust - generics, macros, sum types, pattern matching, Result/Option types. Many of these can't be found all together in a single managed/GC'd language, and the list of those that I would consider viable for my personal or professional use is quite sparse.


> generics, macros, sum types, pattern matching, Result/Option types. Many of these can't be found all together in a single managed/GC'd language

What about e.g. Kotlin or Swift?


I don't find the single-vendor governance / commercial origins of those two languages very reassuring, but that's not something that will trouble everyone equally if at all.


Yeah only in Scala, Kotlin, F#, Standard ML, OCaml, Haskell, and all others that derive from them.


None of those are to my personal taste and I think Kotlin is the only one with unambiguously strong adoption in industry. I'm trying not to make value-judgment statements about others that do like them.


Agreed on this; I enjoy Rust and use the same approach.

People say Rust is harsh; I would say it's not that much harder than other languages, just more verbose and demanding.


This couldn't be any more accurate even if you compiled with CFLAGS='-march=native' and RUSTFLAGS='-C can't remember insert here'


Install Gentoo


As I said, I use Gentoo already ;-).


Quite.

I was a Gentoo user (daily driver) for around 15 years but the endless compilation cycles finally got to me. It is such a shame because as I started to depart, Gentoo really got its arse in gear with things like user patching etc and no doubt is even better.

It has literally (lol) just occurred to me that some sort of dual partition thing could sort out my main issue with Gentoo.

@system could have two partitions - the running one and the next one that is compiled for and then switched over to on a reboot. @world probably ought to be split up into bits that can survive their libs being overwritten with new ones and those that can't.

Errrm, sorry, I seem to have subverted this thread.


What about the binary packages now supported in Gentoo?


You have approximately described guix.


Gentoo Silverblue?


> C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project)

If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.


> C++ has been faster than C for a long time.

What is your basis for this claim? C and C++ are both built on essentially the same memory and execution model. There is a significant set of programs that are valid C and C++ both -- surely you're not suggesting that merely compiling them as C++ will make them faster?

There's basically no performance technique available in C++ that is not also available in C. I don't think it's meaningful to call one faster than the other.


This is really an “in theory” versus “in practice” argument.

Yes, you can write most things in modern C++ in roughly equivalent C with enough code, complexity, and effort. However, the disparate economics are so lopsided that almost no one ever writes the equivalent C in complex systems. At some point, the development cost is too high due to the limitations of the expressiveness and abstractions. Everyone has a finite budget.

I’ve written the same kinds of systems I write now in both C and modern C++. The C equivalent versions require several times the code of C++, are less safe, and are more difficult to maintain. I like C and wrote it for a long time but the demands of modern systems software are a beyond what it can efficiently express. Trying to make it work requires cutting a lot of corners in the implementation in practice. It is still suited to more classically simple systems software, though I really like what Zig is doing in that space.

I used to have a lot of nostalgia for working in C99 but C++ improved so rapidly that around C++17 I kind of lost interest in it.


None of this really supports your claim that "C++ has been faster than C for a long time."

You can argue that C takes more effort to write, but if you write equivalent programs in both (i.e. that use comparable data structures and algorithms), they are going to have comparable performance.

In practice, many best-in-class projects are written in C (Lua, LuaJIT, SQLite, LMDB). To be fair, most of these projects inhabit a design space where it's worth spending years or decades refining the implementation, but the combination of performance and code size you can get from these C projects is something that few C++ projects I have seen can match.

For code size in particular, the use of templates makes typical C++ code many times larger than equivalent C. While a careful C++ programmer could avoid this (i.e. by making templated types fall back to type-generic algorithms to save on code size), few programmers actually do this, and in practice you end up with N copies of std::vector, std::map, etc. in your program (even the slow fallback paths that get little benefit from type specialization).
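A toy illustration of the instantiation point (hypothetical; actual code size depends on compiler and flags):

    #include <map>
    #include <string>
    #include <vector>

    // Each distinct instantiation stamps out its own compiled copy of the
    // container machinery in the binary:
    std::vector<int>              a;  // one std::vector instantiation
    std::vector<std::string>      b;  // a second, largely duplicated one
    std::map<int, int>            c;  // one red-black-tree instantiation
    std::map<std::string, double> d;  // another
    // A void*-based C interface pays for one copy of the machinery for all
    // element types, traded against indirection at the call sites.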


> What is your basis for this claim?

Great question! Here's one answer:

Having written a great deal of C code, I made a discovery about it. The first algorithm and data structure selected for a C program stays there. It survives all the optimizations, refactorings and improvements. But everyone knows that finding a better algorithm and data structure is where the big wins are.

Why doesn't that happen with C code?

C code is not plastic. It is brittle. It does not bend, it breaks.

This is because C is a low level language that lacks higher level constructs and metaprogramming. (Yes, you can metaprogram with the C preprocessor, a technique right out of hell.) The implementation details of the algorithm and data structure are distributed throughout the code, and restructuring that is just too hard. So it doesn't happen.

A simple example:

Change a value to a pointer to a value. Now you have to go through your entire program changing dots to arrows, and sprinkle stars everywhere. Ick.
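A toy rendering of that (hypothetical names):

    struct Point { double x, y; };

    double norm2(struct Point p) { return p.x * p.x + p.y * p.y; }

    /* Decide the struct should be passed by pointer instead: the body and
       every call site now change (dots become arrows, callers grow a &). */
    double norm2p(const struct Point *p) { return p->x * p->x + p->y * p->y; }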

Or let's change a linked list to an array. Aarrgghh again.

Higher level features, like what C++ and D have, make this sort of thing vastly simpler. (D does it better than C++, as a dot serves both value and pointer uses.) And so algorithms and data structures can be quickly modified and tried out, resulting in faster code. A traversal of an array can be changed to a traversal of a linked list, a hash table, a binary tree, all without changing the traversal code at all.


At a certain point, C++ compile time computation becomes something you really can’t do in C. https://codegolf.stackexchange.com/a/269772


C and C++ do have very different memory models: C essentially follows the "types are a way to decode memory" model, while C++ has an actual object model where accessing memory using the wrong type is UB and objects have actual lifetimes. Not that this would necessarily lead to performance differences.

When people claim C++ to be faster than C, that is usually understood as C++ provides tools that makes writing fast code easier than C, not that the fastest possible implementation in C++ is faster than the fastest possible implementation in C, which is trivially false as in both cases the fastest possible implementation is the same unmaintainable soup of inline assembly.

The typical example used to claim C++ is faster than C is sorting, where C, due to its lack of templates and overloading, needs `qsort` to work with void pointers and a pointer to function, making it very hard on the optimiser, whereas C++'s `std::sort` gets the actual types it works on and can directly inline the comparator, making the optimiser's work easier.
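A minimal sketch of that comparison (just the shape of the two call sites, not a benchmark):

    #include <algorithm>
    #include <cstddef>
    #include <cstdlib>

    // C style: the comparator is an opaque function pointer working on void*,
    // so qsort generally makes an indirect call per comparison.
    static int cmp_int(const void *a, const void *b) {
        int x = *static_cast<const int *>(a);
        int y = *static_cast<const int *>(b);
        return (x > y) - (x < y);
    }
    void sort_c(int *v, std::size_t n) { std::qsort(v, n, sizeof(int), cmp_int); }

    // C++ style: std::sort is instantiated for int* and this comparator type,
    // so the comparison can typically be inlined to a plain integer compare.
    void sort_cpp(int *v, std::size_t n) {
        std::sort(v, v + n, [](int a, int b) { return a < b; });
    }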


Try putting objects into two linked lists in C using sys/queue.h and in C++ using the STL. Try sorting the linked lists. You will find C outperforms C++. That is because C’s data structures are intrusive, such that you do not have external nodes pointing to the objects to cause an extra random memory access. The C++ STL requires an externally allocated node that points to the object in at least one of the data structures, since only 1 container can manage the object lifetimes to be able to concatenate its node with the object as part of the allocation. If you wish to avoid having object lifetimes managed by containers, things will become even slower, because now both data structures will have an extra random memory access for every object. This is not even considering the extra allocations and deallocations needed for the external nodes.
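A sketch of the intrusive layout being described (assuming a BSD/glibc <sys/queue.h>; the type and field names are made up):

    #include <sys/queue.h>

    /* The links live inside the object itself, so membership in two lists costs
       no extra allocations and no extra pointer chase to reach the object. */
    struct task {
        int prio;
        LIST_ENTRY(task) by_prio;   /* link for the priority list  */
        LIST_ENTRY(task) by_owner;  /* link for the per-owner list */
    };
    LIST_HEAD(task_list, task);

    void add_task(struct task_list *prio_q, struct task_list *owner_q, struct task *t) {
        LIST_INSERT_HEAD(prio_q, t, by_prio);
        LIST_INSERT_HEAD(owner_q, t, by_owner);
    }
    /* Contrast: a std::list<task*> per membership allocates an external node
       that only points at the object, adding an allocation and an extra memory
       access per element. */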

That said, external comparators are a weakness of generic C library functions. I once manually inlined them in some performance critical code using the C preprocessor:

https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...


It seems like your argument is predicated on using the C++ STL. Most people don’t for anything that matters and it is trivial to write alternative implementations that have none of the weaknesses you are arguing. You have created a bit of a strawman.

One of the strengths of C++ is that it is well-suited to compile-time codegen of hyper-optimized data structures. In fact, that is one of the features that makes it much better than C for performance engineering work.


Most C++ code I have seen uses the STL. As for "hyper optimized" data structures, you already have those in C. See the B-Tree code whose binary search routine I patched to run faster. Nothing C++ adds improves upon what you can do performance-wise in C.

You have other sources of slowdowns in C++, since the abstractions have a tendency to hide bloat, such as excessive dynamic memory usage, use of exceptions and code just outright compiling inefficiently compared to similar code in C. Too much inlining can also be a problem, since it puts pressure on CPU instruction caches.


C and C++ can be made to generate pretty much the same assembly, sure. I find it much easier to maintain a template function than a macro that expands to a function as you did in the B-Tree code, but reasonable people can disagree on that.

Abstractions can hide bloat for sure, but the lack of abstraction can also push coders towards suboptimal solutions. For example, C code tends to use linked lists just because they're easy to implement when a dynamic array such as std::vector would have been more performant.

Too much inlining can of course be a problem, the optimizer has loads of heuristics to decide if inlinining is worth it or not, and the programmer can always mark the function as `[[gnu::noinline]]` if necessary. It is not because C++ makes it possible for the sort comparator to be inlined that it will.

In my experience, exceptions have a slightly positive impact on codegen (compared to code that actually checks error return values, not code that ignores them) because there is no error checking on the happy path at all. The sad path is greatly slowed down though.

Having worked in highly performance sensitive code all of my career (video game engines and trading software), I would miss a lot of my toolbox if I limited myself to plain C and would expect to need much more effort to achieve the same result.


Having worked on performance sensitive code (OpenZFS), I have found less to be more.

While C code makes more heavy use of linked lists than C++ code, most of the C code I have helped maintain made even heavier use of balanced binary search trees and B-trees than linked lists. It also used SLAB allocation to amortize allocation costs. In the case of OpenZFS, most of the code operated in the kernel where external memory fragmentation makes dynamic arrays (and “large” arrays in general) unusable.

I think you have not seen the C libraries available to make C even better. libuutil and libumem from OpenSolaris make doing these things extremely nice. Some of the first code I wrote professionally (and still maintain) was written in C++. There really is nothing from C++ that I miss in C when I have such libraries. In fact, I have long wanted to rewrite that C++ code in C since I find it easier to maintain due to the reduced abstractions.


> Nothing C++ adds improves upon what you can do performance wise in C

Implementations of both languages provide inline asm, so this is trivially true. Yet it is an uninteresting statement.


This is not a convincing argument for C. None of this matches my experience across many companies. In particular, the specific things you cite — excessive dynamic memory usage, exceptions, bloat — are typically only raised by people who don’t actually use C++ in the kinds of serious applications where C++ is the tool of choice. Sure, you could write C++ the way you describe but that is just poor code. You can do that in any language.

For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company. It isn’t compatible with some idiomatic high-performance software architectures so it would be weird to even turn it on. C++ allows you to strip all bloat at compile-time and provides tools to make it easy in a way that C could only dream of, a standard metaprogramming optimization. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.

C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid. In all of this you also failed to make an argument for why anyone should use C. It isn’t like C++ can’t use C code.


This risks becoming a no true Scotsman, but it is indeed true that there is really no common idiomatic C++. Even the same code base can use vastly different styles in different areas.

Even regarding exceptions, I would not touch them anywhere close to the critical path, but, for example, during application setup I have no problem with them. And yet I know of people writing very high performance applications who are happy to throw on the critical path as long as it is a rare occurrence.


> Sure, you could write C++ the way you describe but that is just poor code.

C++ puts people into a sea of complexity and then blames them when they do not get a good result. The purpose of high level programming languages is to make things easier for people, not make them even more likely to fail to write good code and then blame them when they do not.

> For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company.

Unfortunately, C++ does not make exceptions optional, and even if you use a compiler flag to disable them, libraries can still throw them. Just allocating memory can throw them unless you use the nothrow version of the new operator. Alternatively, you could use malloc with a placement new and then manually call the destructor before freeing the memory. As far as I know, many C++ developers who disable exceptions do not do either. Then when a memory allocation fails, their program terminates.
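For the record, the two patterns mentioned look roughly like this (a sketch with a made-up Widget type):

    #include <cstdlib>
    #include <new>

    struct Widget { int x; };

    Widget *make_widget_nothrow() {
        return new (std::nothrow) Widget{42};  // returns nullptr instead of throwing
    }

    Widget *make_widget_malloc() {
        void *mem = std::malloc(sizeof(Widget));
        if (mem == nullptr) return nullptr;
        return new (mem) Widget{42};           // placement new: construct in place
    }

    void destroy_widget_malloc(Widget *w) {
        if (w == nullptr) return;
        w->~Widget();                          // manual destructor call
        std::free(w);
    }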

That said, there are widely used C++ code bases that rely on exception handling. One of the most famous is the Windows NTFS driver, which reportedly makes heavy use of structured exception handling:

https://blog.zorinaq.com/i-contribute-to-the-windows-kernel-...

> It isn’t compatible with some idiomatic high-performance software architectures so it would be weird to even turn it on. C++ allows you to strip all bloat at compile-time and provides tools to make it easy in a way that C could only dream of, a standard metaprogramming optimization. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.

I have seen plenty of C++ software throw exceptions in wine, since it prints information about it to the console. It is amazing how often exceptions are used in the normal operation of such software. Contrary to your assertions of everyone turning off exceptions, these exceptions are caught and handled. Of course, this goes unseen on the original platform, so the developers likely have no idea about all of the exceptions that their code throws.

As for excessive dynamic memory allocations, C++11 introduced move semantics to eliminate a major source of them, and that very much was a problem in real code. It still can be, since you need to always use std::move and define move constructors. C++ itself tends to discourage use of intrusive data structures (as they break encapsulation), which means doing more dynamic allocations than C code does since heavy use of intrusive data structures in C code avoids allocations that are mandatory without them.

> C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid. In all of this you also failed to make an argument for why anyone should use C.

My goal had been to say that performance was not a reason to use C++ over C, since C++ is often slower. Had my goal been different, there is plenty of material I could have used. For example, I could have noted the binary compatibility breakage that occurred for C++11 in GCC 5.0, the horrendous error messages from types that are effectively paragraphs, changes in the STL definitions across different versions that break code, and many other things, but I was not trying to throw stones at an obviously glass house.

> It isn’t like C++ can’t use C code.

It increasingly cannot. If C headers use variably modified types and do not have a guard macro providing an alternative for C++ that turns them into regular pointers, C++ cannot use the header. Here is an example of code using them that a C++ compiler cannot compile:

https://godbolt.org/z/T5T4Y1n68

C also now has generic selection (_Generic, typically used in preprocessor macros), which C++ does not support either:

https://godbolt.org/z/cof14W7vM


Unfortunately, Stepanov and the STL are widely misunderstood. Stepanov's core contribution is the set of concepts underlying the STL and the iterator model for generic programming. The set of algorithms and data structures in the STL was only supposed to be a beginning, never a finished collection. Unfortunately many, if not most, treat it that way.

But if you look beyond, you can find a whole world that extends the STL. If you are not happy, say, with unordered_map, you can find more or less drop-in replacements that use the same iterator-based interface, preserve value semantics and use a common language to describe iterator and reference invalidation.

Regarding your specific use case, if you want intrusive lists you can use boost.intrusive, which provides containers with STL semantics except that it leaves ownership of the nodes to the user. The containers do not even need to be lists: you can put the same node in multiple linked lists, binary trees (multiple flavors), and hash maps (although the last is not fully intrusive) at the same time.

These days I don't generally need boost much, but I still reach for boost.intrusive quite often.


I have met a number of people who will not use the boost libraries. It has been so long that I have long forgotten their reasons. My guess is that it had to do with binary compatibility issues.


Except nothing forbids me from using two linked lists in C++ via sys/queue.h; that is exactly one of the reasons why Bjarne built C++ on top of C, and also, unfortunately, a reason why we have security pain points in C++.


Yet the C++ community is continually trying to get people to stay away from anything involving C. That said, newer C headers using _Generic for example are not usable from C++.


Because C++ was "TypeScript for C": plenty of room for improvement that WG 14 has refused to act on for the last 50 years.

Yes, most language features past the C89 subset are not supported, beyond the C standard library, because C++ has much better alternatives: why _Generic, when templates are a much saner approach than type dispatching with the pre-processor?

However, that is beside the point: 99% of C89 code, minus a few differences, is valid C++ code, and if the situation so requires, C++ code can be written exactly the same way.

And let's not forget most FOSS projects have never moved beyond C89/C99 anyway, so stuff like _Generic is of relative importance.


C, unlike C++, does not really force new versions onto you, even if dependencies begin using them. That said, Linux switched to C11. Newer versions of C will gradually be adopted, despite the incompatibilities this causes for C++.

As for WG 14, they incorporated numerous C++isms into C. While you claim that they did not go far enough, I am sure you will find many who would say that they went too far.


I very much doubt it, when someone decides to make full use of recent ISO C in a C library header file.

I claim they aren't focused on what matters: we don't need C++isms in C, we already have C++, and C should have been considered done as a language back in C89.

Anyone that wanted more has always been able to use C++ instead, or Objective-C in Apple/NeXT land.

What we need is for WG14 to finally take security around strings and arrays seriously, not yet another reboot of functions using (ptr, length) pairs.


> I very much doubt it, when someone decides to make full use of recent ISO C in a C library header file.

I have tested this WRT _Generic as it was a concern. It turns out that GCC will accept it on older versions, which permits compatibility. You might feel that this is wrong, but that is how things are right now.


In my experience, templates usually cause a lot of bloat that slows things down. Sure, in microbenchmarks it always looks good to specialize everything at compile time; whether this is what you want in a larger project is a different question. And then, a C compiler can also specialize a sort routine for your types just fine. It just needs to be able to look into it, i.e. it does not work for qsort from the libc. I agree with your point that C++ comes with fast implementations of algorithms out of the box. In C you need to assemble a toolbox yourself. But once you have done this, I see no downside.


I know you're going to reply with "BUT MY PREPROCESSOR", but template specialization is a big win and improvement (see qsort vs std::sort).


I have used the preprocessor to avoid this sort of slowdown in the past in a binary search function:

https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...

The performance gain comes not from eliminating the function overhead, but enabling conditional move instructions to be used in the comparator, which eliminates a pipeline hazard on each loop iteration. There is some gain from eliminating the function overhead, but it is tiny in comparison to eliminating the pipeline hazard.
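The rough shape of the trick, for the curious (a simplified sketch, not the actual ZFS macro):

    #include <stddef.h>

    /* Stamp out a type-specialized search whose comparator is visible to the
       compiler; with a comparison this simple, the lo/hi update can often be
       compiled to conditional moves instead of a hard-to-predict branch. */
    #define DEFINE_BSEARCH(name, type, cmp)                              \
        static const type *name(const type *a, size_t n, type key) {     \
            size_t lo = 0, hi = n;                                        \
            while (lo < hi) {                                             \
                size_t mid = lo + (hi - lo) / 2;                          \
                if (cmp(a[mid], key) < 0) lo = mid + 1; else hi = mid;    \
            }                                                             \
            return (lo < n && cmp(a[lo], key) == 0) ? &a[lo] : NULL;      \
        }

    #define INT_CMP(x, y) (((x) > (y)) - ((x) < (y)))
    DEFINE_BSEARCH(bsearch_int, int, INT_CMP)  /* specialized for int keys */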

That said, C++ has its weaknesses too, particularly in its typical data structures, its excessive use of dynamic memory allocation and its exception handling. I gave an example here:

https://news.ycombinator.com/item?id=43827857

Honestly, I think these weaknesses are more severe than qsort being unable to inline the comparator.


A comparator can be inlined just fine in C. See here where the full example is folded to a constant: https://godbolt.org/z/bnsvGjrje

Does not work if the compiler can not look into the function, but the same is true in C++.


That does not show the comparator being inlined since everything was folded into a constant, although I suppose it was. Neat.

Edit: It sort of works for the bsearch() standard library function:

https://godbolt.org/z/3vEYrscof

However, it optimized the binary search into a linear search. I wanted to see it implement a binary search, so I tried with a bigger array:

https://godbolt.org/z/rjbev3xGM

Now it calls bsearch instead of inlining the comparator.


With optimization, it will really inline it with an unknown size array: https://godbolt.org/z/sK3nK34Y4

That's not the most general case, but it's better than I expected.


Nice catch. I had goofed by omitting optimization when checking this from an iPad.

That said, this brings me to my original reason for checking this, which is to say that it did not use a cmov instruction to eliminate unnecessary branching from the loop, so it is probably slower than a binary search that does:

https://en.algorithmica.org/hpc/data-structures/binary-searc...

That had been the entire motivation behind this commit to OpenZFS:

https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...

It should be possible to adapt this to benchmark both the inlined bsearch() against an implementation designed to encourage the compiler to emit a conditional move to skip a branch to see which is faster:

https://github.com/scandum/binary_search

My guess is the cmov version will win. I assume this merits a bug report, although I suspect improving this is a low priority, much like my last report in this area:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110001


> If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time.

In certain cases, sure - inlining potential is far greater in C++ than in C.

For idiomatic C++ code that doesn't do any special inlining, probably not.

IOW, you can rework fairly readable C++ code to be much faster by making an unreadable mess of it. You can do that for any language (C included).

But what we are usually talking about when comparing runtime performance in production code is the idiomatic code, because that's how we wrote it. We didn't write our code to resemble the programs from the language benchmark game.


> C++ has been faster than C for a long time.

Citation needed.


I doubt that, because C++ encourages heavy use of dynamic memory allocations and data structures with external nodes. C encourages intrusive data structures, which eliminate many of the dynamic memory allocations done in C++. You can do intrusive data structures in C++ too, but it clashes with the object-oriented idea of encapsulation, since an intrusive data structure touches fields of the objects inside it. I have never heard of someone modifying a class definition just to add objects of that class to a linked list, for example, yet that is what is needed if you want to use intrusive data structures.

While I do not doubt some C++ code uses intrusive data structures, I doubt very much of it does. Meanwhile, C code using <sys/queue.h> uses intrusive lists as if they were second nature. C code using <sys/tree.h> from libbsd uses intrusive trees as if they were second nature. There is also the intrusive AVL trees from libuutil on systems that use ZFS and there are plenty of other options for such trees, as they are the default way of doing things in C. In any case, you see these intrusive data structures used all over C code and every time one is used, it is a performance win over the idiomatic C++ way of doing things, since it skips an allocation that C++ would otherwise do.

The use of intrusive data structures also can speed up operations on data structures in ways that are simply not possible with idiomatic C++. If you place the node and key in the same cache line, you can get two memory fetches for the price of one when sorting and searching. You might even see decent performance even if they are not in the same cache line, since the hardware prefetcher can predict the second memory access when the key and node are in the same object, while the extra memory access to access a key in a C++ STL data structure is unpredictable because it goes to an entirely different place in memory.

You could say that having the C++ STL allocate the objects avoids this, but that only works for one data structure. If you want the object to be in multiple data structures (which is extremely common in C code I have seen), you are back to inefficient search/traversal. The object's lifetime also becomes tied to that data structure, so you must be certain in advance that you will never want to use it outside of that structure, or else you must do, at a minimum, another memory allocation and some copies that are completely unnecessary in C.
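
For anyone who has not seen the pattern: in C the links are declared inside the object itself (with <sys/queue.h>, a LIST_ENTRY field), so the key and the links share one allocation. Below is a loose, hedged analogue in Rust, using arena indices instead of raw pointers since that is what safe Rust makes easy; the names and layout are invented for illustration, not taken from any codebase mentioned here:

    // Sketch of an "intrusive-style" singly linked list: the link lives inside
    // the object, so key and link share one allocation (and usually one cache
    // line) instead of sitting in a separately allocated node.
    struct Item {
        key: u64,
        next: Option<usize>, // the intrusive link: arena index of the next Item
    }

    struct List {
        head: Option<usize>,
    }

    impl List {
        fn push_front(&mut self, arena: &mut [Item], idx: usize) {
            arena[idx].next = self.head;
            self.head = Some(idx);
        }

        fn find(&self, arena: &[Item], key: u64) -> Option<usize> {
            let mut cur = self.head;
            while let Some(i) = cur {
                if arena[i].key == key {
                    return Some(i); // key and link were fetched together
                }
                cur = arena[i].next;
            }
            None
        }
    }

    fn main() {
        let mut arena = vec![
            Item { key: 10, next: None },
            Item { key: 20, next: None },
        ];
        let mut list = List { head: None };
        list.push_front(&mut arena, 0);
        list.push_front(&mut arena, 1);
        assert_eq!(list.find(&arena, 10), Some(0));
    }

Giving Item a second link field would let the same object sit in a second list or tree at the same time, which is the "object in multiple data structures" case described above, with no extra allocations.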

Exception handling in C++ can also silently kill performance if many exceptions are thrown and handled without any visible sign. By not having exceptions, C avoids this pitfall.


OO (implementation inheritance) is frowned upon in modern C++. Also, all production code bases I’ve seen pass -fno-exceptions to the compiler.


Ahh yes, now we are getting somewhere. "C++ is faster because it has all these features. No, not those features, nobody uses those. The STL? No, you rewrite that."


The poster you are responding to is correct. Modern C++ has established idiomatic code practices that are widely used in industry. Imagining how someone could use legacy language features in the most naive possible way, contrary to industry practice, is not a good faith argument. You can do that with any programming language.

You are arguing against what the language was 30-40 years ago. The language has undergone two pretty fundamental revisions since then.


> If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.

That's interesting, did ChatGPT tell you this?


I agree with you except for the JVM bit - but everyone's application varies


My point is that there are situations where C++ (or Rust) is required because the JVM wouldn't work, but those are niche.

In my experience, most people who don't want a JVM language "because it is slow" take this as a matter of principle, and when you ask why, their first answer is "because it's interpreted". I would say they are stuck in the 90s, but probably they just don't know and are repeating something they have heard.

Similar to someone who would say "I use Gentoo because Ubuntu sucks: it is super slow". I have many reasons to like Gentoo better than Ubuntu as my main distro, but speed isn't one in almost all cases.


The JVM is excellent for throughput, once the program has warmed up, but it always has much more jitter than a more systemsy language like C++ or Rust. There are definitely use cases where you need to react consistently fast, and for those Java is not a good choice.

It also struggles with numeric work involving large matrices, because there isn't good support for that built into the language or standard library, and there isn't a well-developed library like NumPy to reach for.


Yet it made Notch rich, because he had the right idea for a game, and compelling gameplay.


You think the JVM is slow?


IME large linear algebra algos run like molasses in a JVM compared to compiled solutions. You're always fighting the GC.


Do you have any benchmarks to show, out of curiosity?


Ok. But we have plenty of C libraries to bind to for that.

They're far slower in Python but that hasn't stopped anyone.


Depends. The JVM is fast once HotSpot figures things out - but that means the first level is slow and you lose your users.


You can always load JIT caches if you can’t wait for warm up.


What about AOT?


Rust is actually quite suitable for a number of domains where it was never intended to excel.

Writing web service backends is one domain where Rust absolutely kicks ass. I would choose Rust/(Actix or Axum) over Go or Flask any day. The database story is a little rough around the edges, but it's getting better and SQLx is good enough for me.

edit: The downvoters are missing out.


To me, web dev really sounds like the one place where everything works and it's more a question of what is in fashion. Java, Ruby, Python, PHP, C, C++, Go, Rust, Scala, Kotlin, probably even Swift? And of course NodeJS was made for that, right?

I am absolutely convinced I can find success story of web backends built with all those languages.


Yeah, "web services backend" really means "code exercising APIs pioneered by SunOS in 1988". It's easy to be rock solid if your only dependency is the bedrock.


> probably even Swift

It's possible to write web backends in Swift, but it's probably not a good idea. When I last did so, I ran into ridiculous issues all the time: lazy variables not being thread-safe, secondary threads having ridiculously small (and non-adjustable) stack sizes, and there being generally no story at all for failure recovery (have fun losing all your other in-flight requests and restarting your app in case of an invalid array access). It's possible that some of this has been fixed in the 5 years since I stopped working on that project, but given Apple's priorities, I somehow doubt that the situation is significantly better.


The bar for web services is low, so pretty much anything works as long as it's easy. I wouldn't call them a success story.

When things get complex, you start missing Rust's type system and bugs creep in.

In Node.js there was a notable improvement when TS became the de facto standard: API development improved significantly (if you ignore the poor tooling, transpiling, building, and TS being too slow). It's still far from perfect, because TS has too many escape hatches and you can't fully trust TS code; with Rust, if it compiles and there is no unsafe (rarely a problem in web services), you get a lot of compile-time guarantees for free.


There are 3 cases. The first is that you are comfortable with Rust and you just choose it for that. The second is that you're not comfortable with Rust and you choose something else that works for you.

The third is the interesting one: when your service has a lot of traffic, every bit of inefficiency costs you money (node rent) and energy. Rust is an obvious improvement over the interpreted languages. There are also a few rare cases where Rust has enough advantages over Go to choose the former. In general though, I feel that a lot of energy consumption and emissions can be avoided by choosing an appropriate language like Rust or Go.

This would be a strong argument in favor of these languages in the current environmental conditions, if it weren't for 'AI'. Whether it be to train them or run them, they guzzle energy even for problems that could be solved with a search engine. I agree that LLMs can do much more. But I don't think they do enough for the energy they consume.


> Rust is an obvious improvement over the interpreted languages.

Do we agree that most of the languages I mentioned above are not interpreted languages? You seem to only consider Go as a non-interpreted alternative...


I replied using my phone and couldn't see your original comment for reference. Yes, your list contains more compiled languages.

Of those, I'm not very comfortable with using C and C++ for backend development. Their lack of automated memory safety measures is an issue for services that are exposed to the internet. (To be clear, memory safety isn't the only type of safety required). Swift may be a fine choice. (I'm unfamiliar with it.)

JVM languages like Java, Kotlin and Scala are all compiled languages, but I'm unsure how well they satisfy what I said before. To repeat, what matters is energy efficiency and resource utilization (not speed). I hope that somebody can provide an insight into how much overhead they incur on account of running in a VM.


Other than Go it's just C/C++ and Swift.


Java, Scala and Kotlin too.


Those are interpreted.


You are one of those people stuck in the 90s, then.

Take 5 min to read about JIT and AOT compilation.


What point are you trying to make? V8 has a JIT compiler too, does that make JavaScript a compiled language?


If you run JavaScript with a JIT compiler, you can't say "it is slow because it is interpreted". And it is obviously wrong to say "V8 is slow because it is an interpreter", isn't it?


Perhaps. But a comparable Rust backend stack produces a single-binary deployable that can absorb 50,000 QPS with no latency spikes caused by garbage collection. You get all of that for free.

The type system and package manager are a delight, and writing with sum types results in code with measurably fewer defects than code in languages with nulls.
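
A tiny, hypothetical illustration of the sum-type point (the LookupResult type and its variants are invented for this comment): the compiler refuses to build the code until every variant, including the "no value" cases, is handled, which is where the defect reduction comes from.

    // A result modeled as a sum type instead of nullable fields.
    enum LookupResult {
        Found { id: u64, name: String },
        NotFound,
        RateLimited { retry_after_secs: u32 },
    }

    fn describe(r: &LookupResult) -> String {
        // Removing any arm here is a compile error, not a runtime surprise.
        match r {
            LookupResult::Found { id, name } => format!("user {name} (#{id})"),
            LookupResult::NotFound => "no such user".to_string(),
            LookupResult::RateLimited { retry_after_secs } => {
                format!("try again in {retry_after_secs}s")
            }
        }
    }

    fn main() {
        let r = LookupResult::Found { id: 7, name: "alice".to_string() };
        println!("{}", describe(&r));
    }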


Yep, that's precisely it! When dealing with other languages I miss the "match" keyword and being able to open a block anywhere. Sure, sometimes Rust allows you to write terse abominations if you don't exercise a dose of caution and empathy for future maintainers (you included).

Beyond the great developer experience in tooling and language ergonomics (as in coherent features, not necessarily ease of use), the reason I continue to put up with the difficulties of Rust's borrow checker is that I feel I can work towards mastering one language, write code across multiple domains, AND at the end have an easy way to share it, no Docker and friends needed.

But I don't shy away from the downsides. Rust loads the cognitive burden at the ends. It's hard as hell in the beginning when learning it, and most people (me included) bounce off it the first few times unless they have C++ experience (from what I can tell). In the middle it's a joy, even when writing "throwaway" code with .expect("Lol oops!") and friends. But when you get to the complex stuff it becomes incredibly hard again, because Rust forces you to either rethink your design to fit the borrow checker's rules or deal with unsafe blocks, which seem to have their own flavor of C++-like eldritch horrors.

Anyway, would *I* recommend Rust to everyone? Nah. Go gives you more bang for your buck in language, tooling and ecosystem, UNLESS you're the kind who likes to deal with complexity for the fulfilled promise of one language for almost anything. In even simpler terms: Go is good for most things, Rust can be used for everything.

Also, stuff like Maud and Minijinja for Rust are delights on the backend when making old-fashioned MPAs.

Thanks for coming to my TED talk.


>Anyway, would I recommend Rust to everyone?

For me it's a question of whether I can get away with garbage collection. If I can, then pretty much everything else is going to be twice as productive; if I can't, the options are quite limited and Rust is a good choice.


What language are you using that doesn’t have match? Even Java has the equivalent. The only ones I can think of that don’t are the scripting languages: Python and JS.


Does Java have sum types now?


Yes, via sealed classes. It also has pattern matching.


So they are there, but ugly to define:

    public abstract sealed class Vehicle permits Car, Truck {
      public Vehicle() {}
    }

    public final class Truck extends Vehicle {
      public final int loadCapacity;

      public Truck(int loadCapacity) {
        this.loadCapacity = loadCapacity;
      }
    }

    public non-sealed class Car extends Vehicle {
      public final int numberOfSeats;
      public final String brandName;

      public Car(int numberOfSeats, String brandName) {
        this.numberOfSeats = numberOfSeats;
        this.brandName = brandName;
      }
    }

In Kotlin it's a bit better, but nothing beats the ML-like langs (and Rust/ReScript/etc):

    type Truck = { loadCapacity: int }
    type Car = { numberOfSeats: int; brandName: string }
    type Vehicle = Truck of Truck | Car of Car


You could use Java records to make things more concise:

  record Truck(int loadCapacity) implements Vehicle {}
  record Car(int numberOfSeats, String brandName) implements Vehicle {}
  sealed interface Vehicle permits Car, Truck {}


Scala 3 has:

  enum Vehicle:
    case Truck(loadCapacity: Int)
    case Car(numberOfSeats: Int, brandName: String)


You implemented this much more verbosely than needed

    sealed interface Vehicle {
        record Truck(int loadCapacity) implements Vehicle {}
        record Car(int numberOfSeats, String brandName) implements Vehicle {}
    }


Ah! Thanks, I didn't know that. I should have RTFM'd better - https://docs.oracle.com/en/java/javase/21/language/sealed-cl...

Turns out you can do this and avoid the annoying nested class names (e.g. Vehicle.Car) too:

  package com.example.vehicles;

  public sealed interface Vehicle
      // The permits clause has been omitted
      // as its permitted classes have been
      // defined in the same file.
  { }
  record Truck(int loadCapacity) implements Vehicle {}
  record Car(int numberOfSeats, String brandName) implements Vehicle {}


I think Java 21 does. Scala and Kotlin do as well.


Python has it as well.


Ah, my mistake. It’s been at least 5 years since I’ve written it. I’m honestly surprised that JS has moved nowhere on it, considering all of the fancy things they’ve been adding.


It has been proposed, but given the process for how features get added to the standard, someone needs to champion it, and then there is the "at least two implementations" requirement.

https://github.com/tc39/proposal-pattern-matching


Yeah, anything with nulls ends up with Option<this> and Option<that>, which means unwraps or matches. There is a comment above about good bedrock: Rust works OK with nulls, but it works really well with non-sparse databases (avoiding joins).


Tokio + Axum + SQLx has been a total game-changer for me for web dev. It's by far the most productive I've been with any backend web stack.
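
For anyone curious what that looks like, a minimal hello-world sketch, assuming a recent axum (0.7-style serve API) and tokio with the macros and rt-multi-thread features enabled; SQLx is left out to keep it short:

    use axum::{routing::get, Router};

    // Handler: anything implementing IntoResponse works as a return type.
    async fn hello() -> &'static str {
        "hello from axum"
    }

    #[tokio::main]
    async fn main() {
        let app = Router::new().route("/", get(hello));

        // Bind a tokio TcpListener and hand it to axum::serve (axum 0.7 style).
        let listener = tokio::net::TcpListener::bind("127.0.0.1:3000")
            .await
            .expect("failed to bind");
        axum::serve(listener, app).await.expect("server error");
    }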


I prefer rusqlite over SQLx; the latter is too bloated.


People that haven't tried this are downvoting with prejudice, but they just don't know.

Rust is an absolute gem at web backend. An absolute fucking gem.


We know; it still isn't at the level of Spring/ASP.NET coupled with Scala/Kotlin/F#.


I hate Spring(Boot): too much magic due to overuse of annotations.

On the JVM I'd prefer Kotlin/http4k/SQLDelight any day over {Java,Kotlin}/Spring(Boot)/{Hibernate,sql-in-strings}.


Because macro magic, or compiler plugins, is so much better, I guess.


What do you mean? Where are the "macro magic or compiler plugins"?


Most Rust frameworks, which are the point of this thread:

> Rust is an absolute gem at web backend. An absolute fucking gem.


Nothing beats vertx on JVM!


Curious, do you mind going into more detail on why?



