
Is it a hot take to believe that no humans are infallible and that only languages with memory-safety guarantees can offer the kind of safety the author seeks? With the advent of Rust, C and C++ programmers can no longer argue that the performance tradeoff is worth giving up safety.

There are, of course, other good reasons to choose C and C++ over Rust, and of course Rust has its own warts. I'm just pointing out that performance and memory safety are not necessarily mutually exclusive.



I'm not sure what your definition of performance parity is. Are you claiming that the existence of Rust proves that there is no performance penalty for memory safety? The penalty may be relatively small, but I am not aware of any proof that the penalty is non-existent; I am not even sure how you could prove such a thing. I could imagine that C and C++ implementations of exactly the same algorithms and data structures as are implemented in safe Rust might perform similarly, but what about all of the C and C++ implementations that are both correct and not implementable in safe Rust? Do they all perform only as well as, or worse than, Rust?


1. Those fast algorithms that can't be implemented in safe Rust are rare.

2. Even when they exist, Rust lets you use unsafe code but only where needed. It's still much better than having your entire program be unsafe.

3. In practice, Rust versions of programs are as fast as, if not faster than, C/C++ ones.
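To make point 2 concrete, here's a minimal sketch (a hypothetical function, not from any real codebase) of what "unsafe only where needed" looks like: the unsafe is confined to one small, auditable block behind a safe signature, so callers never see it.

```rust
// Sketch: unsafe confined to one marked block inside a safe function.
// The caller-facing API stays safe; only the SAFETY comment's claim
// needs manual review, not the whole program.
fn sum(v: &[u32]) -> u32 {
    let mut total = 0;
    for i in 0..v.len() {
        // SAFETY: `i` is always < v.len(), so the unchecked access is in bounds.
        total += unsafe { *v.get_unchecked(i) };
    }
    total
}

fn main() {
    println!("{}", sum(&[1, 2, 3])); // prints 6
}
```

The contrast with C/C++ is that the compiler forces the unchecked access to be explicitly marked, so reviewers know exactly where to look.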


As an example, a really fast sort can't be expressed in safe Rust

However, the two sort algorithms in Rust's standard library are safe to use, as well as being faster than their equivalents in C++. In fact, even the previous sorts, the ones replaced by the current implementations, were both faster and safer than what you get in C++.
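For reference, "the two sort algorithms" here means the standard library's stable and unstable slice sorts; whatever unsafe code lives inside them, callers only ever touch a safe API:

```rust
fn main() {
    let mut v = vec![3, 1, 2, 2];
    v.sort(); // stable sort: equal elements keep their relative order
    assert_eq!(v, [1, 2, 2, 3]);

    let mut w = vec![3, 1, 2];
    w.sort_unstable(); // unstable sort: typically faster, no ordering guarantee for equals
    assert_eq!(w, [1, 2, 3]);

    println!("ok");
}
```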


Do you happen to have a link to a benchmark? I'd like to learn what I'm missing about what's happening in Rust. Last time I read about sort implementations here on HN [0], Rust's panic safety had some measurable costs.

[0] https://news.ycombinator.com/item?id=34646199


Take a look at these results:

https://youtu.be/rZ7QQWKP8Rk?t=2054

Watch until the next slide - it shows a comparison of a port of a very fast C++ sorting algorithm to Rust. Rust is faster due to algorithmic changes. Ignoring those, the speeds are very similar; certainly not an issue.


A non-video version is here [0]. My take on it is that there are no 'language comparisons' there; the difference is between older and newer algorithms, and the benchmark favorite is written in C. It's cool that new algorithms are implemented in a new-ish language first.

[0] https://github.com/Voultapher/sort-research-rs/blob/main/wri...


Ah, now I think I see the misunderstanding. When I said "equivalents" I meant that these are the standard library sorts, and so I was comparing against the standard library sorts in the three popular C++ implementations.

You're correct that if you implemented these algorithms carefully in C++ you could expect very similar results. I don't believe that anybody has done that, and there is certainly no sign that the three major implementations would attempt to switch to these algorithms for their standard library sorts.

In Rust today, the standard library sorts are further refinements of the "ipnsort" and "glidesort" algorithms described in the paper you linked. As the papers arguing for these algorithms point out, the downside is that, although they've been tested extensively with available tools, we can't actually prove they're safe; the upside is, of course, performance.


That version is 2 years old. Rust has had a new faster sort implementation since then.

> My take on it is that there are no 'language comparisons' there.

Watch the video a few minutes forward from where I linked; there are "language comparison" slides. Basically, C++ and Rust are on par.


Thanks! Updated results are interesting.


That assumes people know what they're doing in C/C++. I've seen just as many bloated codebases in C++, if not more, because the defaults for most compilers are not great, and it's very easy for things to get out of hand with templates, excessive use of dynamic libraries (which inhibit LTO), or using shared_ptr for everything.

My experience is that Rust guides you toward defaults that tend to avoid those things, and for the cases where you really do need fine-grained control, unsafe blocks with direct pointer access are available (and I've used them when needed).


Is there a name for a fallacy like "appeal to stupidity" or something, where the argument against using a tool that's fit for the job boils down to "all developers are too dumb to use this / you need to read a manual / it's hard", etc.?


I think there is something to be said for having good defaults and tools that don't force you to be on top of every last detail 100%, lest things get out of control.

It also depends on the team. Some teams have a high density of seasoned experts who've made the mistakes and know what to avoid, but I think the history of memory vulnerabilities shows that it's very hard to keep that bar consistently across large codebases or dispersed teams.


This is ultimately the crux of the issue. If Google, Microsoft, Apple, and the rest cannot manage to hire engineers who can write safe C/C++ all the time (as has been demonstrated repeatedly), it's time to question whether the model itself makes sense for most use cases.

The grandparent can't argue that these top-tier engineers aren't reading the manual here. Of course they are. Even after reading the manual, they still cannot manage to write perfectly safe code, because it is extremely hard to do.


Personally, my argument would be that problems at the low level are just hard problems, and doing them in Rust trades one set of problems (memory safety) for another set, probably of unexpected behaviour with memory layouts and lifetimes at the very low level.


It's not that all developers are dumb/stupid. It's that even the smartest developers make mistakes and thus having a safety net that can catch damaging mistakes is helpful.


Yes. Even the most seasoned programmers write CVE-worthy C++. The foremost engineers still fail.


I've read several posts here where people say things like "this is badly designed because it assumes people read the documentation".

???????

Yes you need to read the docs. That is programming 101. If you have vim set up properly then you can open the man page for the identifier under your cursor in a single keypress. There is ZERO excuse not to read the manual. There is no excuse not to check error messages. etc.

Yet we consistently see people that want everything babyproofed.


_When_ there is a manual.

On the other hand, there's no excuse for designers & developers (or their product manager, if that's the one in authority) not to work their ass off on the ergonomics/affordance of the tools they release to any public (be it end users or developers, which are the end users of the tool makers, etc.).

It benefits literally everyone: the users, the product reputation & value, the builders reputation, the support team, etc.


Implying documentation exists. You're supposed to read the code, not man pages.


Yes, you need to read the docs. Yes.

And yet...

Do people read the docs? Often, no, they don't. So, are you creating tools for the people we have, or for the people you think we should have? If the latter, you are likely to find that your tool makes less impact than you think it should.

Computer languages are not tools for illiterates. You need to learn what you're doing. And yet, programmers do so less than we think they should. If we don't license programmers (to weed out the under-trained), then we're going to have to deal with languages being used by people who didn't read the docs. We should give at least some thought to having them degrade gracefully in that situation.


Nah, Rust also guides you to "death from a million paper cuts", aka RAII (aka everything is singularly allocated and freed all over the place).

You need memory management to be painful, like in C, so that it forces people toward better options like linear/static group allocations.
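For illustration, a minimal sketch of the "group allocation" idea in Rust terms (a hypothetical toy arena, not any specific crate): one container owns every node, handing out indices instead of pointers, and the whole group is freed in a single deallocation when the arena drops.

```rust
// Toy arena: all nodes live in one Vec, so there is one allocation
// strategy and one bulk free, instead of per-object RAII churn.
struct Node { value: u32 }
struct Arena { nodes: Vec<Node> }

impl Arena {
    fn new() -> Self { Arena { nodes: Vec::new() } }
    fn alloc(&mut self, value: u32) -> usize {
        self.nodes.push(Node { value });
        self.nodes.len() - 1 // hand out an index instead of a pointer
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.alloc(1);
    let b = arena.alloc(2);
    println!("{}", arena.nodes[a].value + arena.nodes[b].value); // prints 3
} // the whole arena is freed here, in one go
```

Nothing stops you from writing this in Rust; the debate above is about which style the language's defaults nudge you toward.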


I assure you that people do not go for better options


Why is RAII bad?


RAII is fine when it is the right tool for the job. Is it the right tool for every job? Certainly there are other more or less widely practiced approaches. In some situations you can come up with something that is provably correct and performs better (in space and/or time). Then there are just trade-offs.


Because it's micromanagement.


Micromanagement how?


Once you know how Rust works, it is likely your Rust code will be faster than C/C++ with less effort. I can say this because I used C++ for a long time, starting with Visual C++ 6.0, and moved to Rust about 3 years ago.

One of the reasons is that you get whole-program optimization automatically in Rust, while in C/C++ you need to put functions that need to be inlined in headers, or enable LTO at link time. Bounds checking in Rust, which people keep citing as a performance problem, is not actually a problem. For example, if you need to access the same index multiple times, Rust will perform the bounds check only on the first access (e.g. https://play.rust-lang.org/?version=stable&mode=release&edit...).
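A hedged sketch of the kind of elision being described (the exact codegen is up to LLVM, and this is a made-up function, not the one in the playground link): once the slice's length is established up front, the compiler can prove the later indexing operations are in bounds and drop their individual checks in release builds.

```rust
// One up-front length check lets the optimizer elide the four
// per-access bounds checks that `v[0]`..`v[3]` would otherwise imply.
fn sum_first_four(v: &[u64]) -> u64 {
    assert!(v.len() >= 4);
    v[0] + v[1] + v[2] + v[3]
}

fn main() {
    let data = vec![1, 2, 3, 4, 5];
    println!("{}", sum_first_four(&data)); // prints 10
}
```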

The borrow checker is your friend, not an enemy, once you know how to work with it.


This kind of assumes old and naive C++. There was a lot of that 20 years ago, but much of it was replaced by languages with garbage collectors. New C++ applications today tend to be geared toward extreme performance/scale; the idioms are far beyond thinking much about anything you mention.

People seriously underestimate how capable and expressive modern C++ metaprogramming facilities are. Most don’t bother to learn it but it is one of the most powerful features of the language when it comes to both performance and safety. The absence of it is very noticeable when I use other systems languages. I’m not a huge fan of C++ but that is a killer feature.


OK, but what is this wonderful subset of C++ that is geared toward extreme performance without sacrificing safety, and has expressive metaprogramming facilities that do not tank compilation and run times?

Not a rhetorical question; I'd love to see a book or notes that carve out precisely that subset, so those of us who want to learn can avoid the tons upon tons of outdated or misleading documentation!



> not implementable in safe rust

This is moving the goalposts. "Safe rust" isn't a distinct language. The unsafe escape hatch is there to make sure that all programs can be implemented safely.


It is not moving the goalposts. The parent that I replied to said "c and c++ programmers can no longer argue that the performance tradeoff is worth giving up safety." If you don't limit yourself to safe Rust, you are giving up safety.


> If you don't limit to safe rust you are giving up safety

This is at best a misunderstanding of the way rust works. Unsafe is a tool for producing safe abstractions.
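The canonical illustration of that claim is `split_at_mut`. Here's a toy version (simplified relative to whatever the standard library actually does internally): the unsafe pointer work is sealed inside a function whose signature the borrow checker can then enforce for every caller.

```rust
// A toy split_at_mut: unsafe raw-pointer slicing inside, but the
// returned pair of disjoint mutable slices is a fully safe API.
fn split_at_mut(v: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = v.len();
    assert!(mid <= len);
    let ptr = v.as_mut_ptr();
    // SAFETY: the two halves cover disjoint ranges of the same allocation.
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut data = [1, 2, 3, 4];
    let (a, b) = split_at_mut(&mut data, 2);
    a[0] = 10;
    b[0] = 30;
    println!("{:?}", data); // prints [10, 2, 30, 4]
}
```

Safe Rust alone can't express this (two `&mut` borrows into one slice), which is exactly why `unsafe` exists: to build the abstraction once and let everyone else stay in safe code.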


> Unsafe is a tool for producing safe abstractions.

I think we disagree on what "giving up safety" means, or perhaps you thought I meant "you are giving up all safety." (And honestly, I'm just trying to clarify what I meant when I read/wrote it. I'm not going for a No True Scotsman, or trying to move the goalposts here.)

Manually convincing yourself (proving) that an implementation is correct is how you write correct code in any language. In this sense you never "give up safety" in any language, but that's clearly not the sense that is being discussed in this thread. In this thread "giving up safety" appears to me to mean giving up automated safety guarantees provided by the language and compiler.

I acknowledge that it is possible to write just the bare minimum in unsafe Rust to realise an abstraction, and that these "unsafe Rust" fragments may be provably safe, thus rendering an entire abstraction safe. This may be best practice, or "the way Rust works" as you say. Nonetheless, the unsafe fragments are not proved safe by construction/use of safe Rust, nor automatically safe by virtue of the type system/borrow checker.

My point was that if you use unsafe rust you have reduced the number of automated safety guarantees. It is on the developer to prove safety of the unsafe rust, and of the abstraction as a whole. Needless to say, human proof is a fallible process. You may convince yourself that you have not given up safety, but I argue that you have merely contained and reduced risk. You have still "given up safety."


Safe Rust often performs significantly worse than C++ for many kinds of code where you care a lot about performance. You can bring that performance closer together with unsafe Rust but at that point you might as well use C++ (which still seems to have better code gen with less code). Everyone has their anecdotes but, with the current state of languages and compilers, C++ still excels for performance engineering.

The performance tradeoff is not intrinsic. Rust’s weakness is that it struggles to express safety models sometimes used in high performance code that are outside its native safety model. C++ DGAF, for better and worse.

The hardcoded safety model combined with a somewhat broken async situation has led me to the conclusion that Rust is not a realistic C++ replacement for the kinds of code where C++ excels. I am hopeful something else will come along but there isn’t much on the horizon other than Zig, which I like in many regards but may turn out to be a bit too spartan (amazing C replacement though).


> a somewhat broken async situation

Isn't Rust's async situation "somewhat broken" in exactly the same way C++'s async situation is?


C++ often performs significantly worse than assembly for many kinds of code where you care a lot about performance. You can bring that performance closer together with bits of asm in your C++, but at that point you might as well use asm.


The word is that C++'s performance comes from asm-like SIMD integration that may not be as mature in other languages.



