Try putting objects into two linked lists in C using sys/queue.h and in C++ using the STL. Try sorting the linked lists. You will find C outperforms C++. That is because C’s data structures are intrusive, so there are no external nodes pointing to the objects that would cause an extra random memory access. The C++ STL requires an externally allocated node that points to the object in at least one of the data structures, since only one container can manage the object’s lifetime and co-allocate its node with the object. If you wish to avoid having object lifetimes managed by containers, things become even slower, because now both data structures incur an extra random memory access for every object. This is not even counting the extra allocations and deallocations needed for the external nodes.
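
A minimal sketch of what I mean, using the BSD sys/queue.h TAILQ macros (the struct and field names here are just illustrative):

    #include <sys/queue.h>
    #include <stdlib.h>

    /* One object carries both links, so membership in two lists needs no
       external nodes and no extra allocations or pointer chases. */
    struct item {
        int key;
        TAILQ_ENTRY(item) by_key;   /* link for the first list */
        TAILQ_ENTRY(item) by_age;   /* link for the second list */
    };

    TAILQ_HEAD(key_list, item);
    TAILQ_HEAD(age_list, item);

    int main(void) {
        struct key_list keys;
        struct age_list ages;

        TAILQ_INIT(&keys);
        TAILQ_INIT(&ages);

        for (int i = 0; i < 16; i++) {
            struct item *it = malloc(sizeof (*it));
            it->key = i;
            TAILQ_INSERT_TAIL(&keys, it, by_key);
            TAILQ_INSERT_TAIL(&ages, it, by_age);
        }
        return 0;
    }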

That said, external comparators are a weakness of generic C library functions. I once manually inlined them in some performance-critical code using the C preprocessor:

https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
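
The idea, as a minimal sketch rather than the actual patch: use a macro to stamp out a type-specialized binary search with the comparison inlined, instead of going through a function pointer the way bsearch(3) does.

    #include <stddef.h>

    /* Generates a binary search specialized for a given element type and
       comparator macro, so the compare is inlined rather than an indirect
       call. (Hypothetical names, for illustration only.) */
    #define DEFINE_BSEARCH(name, type, cmp)                     \
    static type *name(type *base, size_t nmemb, type key) {     \
        size_t lo = 0, hi = nmemb;                              \
        while (lo < hi) {                                       \
            size_t mid = lo + (hi - lo) / 2;                    \
            int c = cmp(key, base[mid]);                        \
            if (c == 0)                                         \
                return &base[mid];                              \
            if (c < 0)                                          \
                hi = mid;                                       \
            else                                                \
                lo = mid + 1;                                   \
        }                                                       \
        return NULL;                                            \
    }

    #define INT_CMP(a, b) (((a) > (b)) - ((a) < (b)))
    DEFINE_BSEARCH(int_bsearch, int, INT_CMP)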



It seems like your argument is predicated on using the C++ STL. Most people don’t use it for anything that matters, and it is trivial to write alternative implementations that have none of the weaknesses you are describing. You have created a bit of a straw man.

One of the strengths of C++ is that it is well-suited to compile-time codegen of hyper-optimized data structures. In fact, that is one of the features that makes it much better than C for performance engineering work.


Most C++ code I have seen uses the STL. As for “hyper optimized” data structures, you already have those in C. See the B-Tree code whose binary search routine I patched to run faster. Nothing C++ adds improves upon what you can do performance wise in C.

You have other sources of slowdowns in C++, since the abstractions have a tendency to hide bloat, such as excessive dynamic memory usage, use of exceptions and code that simply compiles less efficiently than similar code in C. Too much inlining can also be a problem, since it puts pressure on CPU instruction caches.


C and C++ can be made to generate pretty much the same assembly, sure. I find it much easier to maintain a template function than a macro that expands to a function as you did in the B-Tree code, but reasonable people can disagree on that.

Abstractions can hide bloat for sure, but the lack of abstraction can also push coders towards suboptimal solutions. For example, C code tends to use linked lists just because they are easy to implement, when a dynamic array such as std::vector would have been more performant.
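
For instance, the hand-rolled growable array that C coders would otherwise have to write themselves looks roughly like this (a hypothetical sketch; real code would also want a destroy function and overflow checks):

    #include <stdlib.h>

    /* A tiny std::vector-like growable array of ints, for illustration. */
    struct vec {
        int    *data;
        size_t  len;
        size_t  cap;
    };

    static int vec_push(struct vec *v, int value) {
        if (v->len == v->cap) {
            size_t ncap = v->cap ? v->cap * 2 : 8;
            int *ndata = realloc(v->data, ncap * sizeof(*ndata));
            if (ndata == NULL)
                return -1;  /* old buffer is still valid */
            v->data = ndata;
            v->cap = ncap;
        }
        v->data[v->len++] = value;
        return 0;
    }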

Too much inlining can of course be a problem; the optimizer has loads of heuristics to decide whether inlining is worth it or not, and the programmer can always mark the function as `[[gnu::noinline]]` if necessary. Just because C++ makes it possible for the sort comparator to be inlined does not mean it will be.

In my experience, exceptions have a slightly positive impact on codegen (compared to code that actually checks error return values, not code that ignores them) because there is no error checking on the happy path at all. The sad path is greatly slowed down though.

Having worked in highly performance sensitive code all of my career (video game engines and trading software), I would miss a lot of my toolbox if I limited myself to plain C and would expect to need much more effort to achieve the same result.


Having worked on performance sensitive code (OpenZFS), I have found less to be more.

While C code makes heavier use of linked lists than C++ code, most of the C code I have helped maintain made even heavier use of balanced binary search trees and B-trees than of linked lists. It also used SLAB allocation to amortize allocation costs. In the case of OpenZFS, most of the code operates in the kernel, where external memory fragmentation makes dynamic arrays (and “large” arrays in general) unusable.
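
The intrusive pattern looks roughly like this, assuming the illumos/OpenZFS-style sys/avl.h interface (the struct and names here are only illustrative): the AVL node is embedded in the object, so the tree needs no per-element allocations.

    #include <sys/avl.h>    /* illumos/OpenZFS AVL implementation (assumed available) */
    #include <stddef.h>

    /* Intrusive node: the link lives inside the object itself. */
    typedef struct entry {
        int        e_key;
        avl_node_t e_node;
    } entry_t;

    static int entry_compare(const void *a, const void *b) {
        const entry_t *ea = a, *eb = b;

        if (ea->e_key < eb->e_key)
            return -1;
        return ea->e_key > eb->e_key;
    }

    void example(entry_t *e) {
        avl_tree_t tree;

        avl_create(&tree, entry_compare, sizeof (entry_t),
            offsetof(entry_t, e_node));
        avl_add(&tree, e);
    }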

I think you have not seen the C libraries available to make C even better. libuutil and libumem from OpenSolaris make doing these things extremely nice. Some of the first code I wrote professionally (and still maintain) was written in C++. There really is nothing from C++ that I miss in C when I have such libraries. In fact, I have long wanted to rewrite that C++ code in C, since I find C easier to maintain due to the reduced abstraction.


> Nothing C++ adds improves upon what you can do performance wise in C

Implementations of both languages provide inline asm, so this is trivially true. Yet it is an uninteresting statement.


This is not a convincing argument for C. None of this matches my experience across many companies. In particular, the specific things you cite — excessive dynamic memory usage, exceptions, bloat — are typically only raised by people who don’t actually use C++ in the kinds of serious applications where C++ is the tool of choice. Sure, you could write C++ the way you describe but that is just poor code. You can do that in any language.

For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company. It isn’t compatible with some idiomatic high-performance software architectures so it would be weird to even turn it on. C++ allows you to strip all bloat at compile-time and provides tools to make it easy in a way that C could only dream of, a standard metaprogramming optimization. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.

C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid. In all of this you also failed to make an argument for why anyone should use C. It isn’t like C++ can’t use C code.


This risks becoming a no-true-Scotsman argument, but it is indeed true that there is really no common idiomatic C++. Even the same code base can use vastly different styles in different areas.

Even regarding exceptions, I would not touch them anywhere close to the critical path, but, for example, during application setup I have no problem with them. And yet I know of people writing very high performance applications who are happy to throw on the critical path as long as it is a rare occurrence.


> Sure, you could write C++ the way you describe but that is just poor code.

C++ puts people into a sea of complexity and then blames them when they do not get a good result. The purpose of high-level programming languages is to make things easier for people, not to make them even more likely to fail to write good code and then blame them when they do.

> For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company.

Unfortunately, C++ does not make exceptions optional, and even if you use a compiler flag to disable them, libraries can still throw them. Just allocating memory can throw unless you use the nothrow version of the new operator. Alternatively, you could use malloc with placement new and then manually call the destructor before freeing the memory. As far as I know, many C++ developers who disable exceptions do neither. Then, when a memory allocation fails, their program terminates.

That said, there are widely used C++ code bases that rely on exception handling. One of the most famous is the Windows NTFS driver, which reportedly makes heavy use of structured exception handling:

https://blog.zorinaq.com/i-contribute-to-the-windows-kernel-...

> It isn’t compatible with some idiomatic high-performance software architectures so it would be weird to even turn it on. C++ allows you to strip all bloat at compile-time and provides tools to make it easy in a way that C could only dream of, a standard metaprogramming optimization. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.

I have seen plenty of C++ software throw exceptions under Wine, since Wine prints information about them to the console. It is amazing how often exceptions are used in the normal operation of such software. Contrary to your assertion that everyone turns off exceptions, these exceptions are caught and handled. Of course, this goes unseen on the original platform, so the developers likely have no idea how many exceptions their code throws.

As for excessive dynamic memory allocations, C++11 introduced move semantics to eliminate a major source of them, and that very much was a problem in real code. It still can be, since you need to use std::move consistently and define move constructors. C++ also tends to discourage intrusive data structures (they break encapsulation), which means doing more dynamic allocations than C code does, since heavy use of intrusive data structures in C avoids allocations that would otherwise be mandatory.

> C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid. In all of this you also failed to make an argument for why anyone should use C.

My goal had been to say that performance is not a reason to use C++ over C, since C++ is often slower. Had my goal been different, there is plenty of material I could have used. For example, I could have noted the binary compatibility breakage that C++11 caused in GCC 5.0, the horrendous error messages from types that are effectively paragraphs, the changes in STL definitions across versions that break code, and many other things, but I was not trying to throw stones at what is obviously a glass house.

> It isn’t like C++ can’t use C code.

It increasingly cannot. If C headers use variably modified types and do not have a guard macro providing an alternative for C++ that turns them into regular pointers, C++ cannot use the header. Here is an example of such code that a C++ compiler cannot compile:

https://godbolt.org/z/T5T4Y1n68
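
The guard pattern looks roughly like this (a hypothetical header of my own, not the code behind the link):

    /* matrix.h (hypothetical) */
    #ifdef __cplusplus
    /* Fallback for C++, which has no variably modified types: a flat pointer. */
    void scale(int rows, int cols, double *m, double factor);
    #else
    /* C99: variably modified parameter, adjusted to double (*)[cols]. */
    void scale(int rows, int cols, double m[rows][cols], double factor);
    #endif

Without the guard, the second prototype alone is simply not valid C++, so the header cannot be included from C++ at all.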

C also now has generic selection via _Generic, usually wrapped in preprocessor macros, which is not supported by C++ either:

https://godbolt.org/z/cof14W7vM
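
For example, something along these lines (my own illustration, not the code behind the link):

    #include <stdio.h>
    #include <math.h>

    /* C11 generic selection: picks the right cbrt variant from the argument
       type. Valid C11, but it will not compile as C++. */
    #define cube_root(x) _Generic((x),  \
        float:       cbrtf,             \
        long double: cbrtl,             \
        default:     cbrt)(x)

    int main(void) {
        printf("%f %Lf\n", cube_root(27.0), cube_root(27.0L));
        return 0;
    }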


Unfortunately, Stepanov and the STL are widely misunderstood. Stepanov's core contribution is the set of concepts underlying the STL and the iterator model for generic programming. The set of algorithms and data structures in the STL was only supposed to be a beginning; it was never supposed to be a finished collection. Unfortunately many, if not most, treat it that way.

But if you look beyond it, you can find a whole world that extends the STL. If you are not happy, say, with unordered_map, you can find more or less drop-in replacements that use the same iterator-based interface, preserve value semantics and use a common language to describe iterator and reference invalidation.

Regarding your specific use case, if you want intrusive lists you can use boost.intrusive, which provides containers with STL semantics except that ownership of the nodes is left to the user. The containers do not even need to be lists: you can put the same object in multiple linked lists, binary trees (multiple flavors), and hash maps (although the hash map is not fully intrusive) at the same time.

These days I don't generally need boost much, but I still reach for boost.intrusive quite often.


I have met a number of people who will not use the boost libraries. It has been so long that I have forgotten their reasons. My guess is that it had to do with binary compatibility issues.


Except that nothing forbids me from using two linked lists in C++ via sys/queue.h; that is exactly one of the reasons why Bjarne built C++ on top of C, and also, unfortunately, a reason why we have security pain points in C++.


Yet the C++ community is continually trying to get people to stay away from anything involving C. That said, newer C headers using _Generic, for example, are not usable from C++.


Because C++ was "TypeScript for C": there was plenty of room for improvement that WG14 has refused to act on for the last 50 years.

Yes, most language features past the C89 subset are not supported, besides the C standard library, because C++ has much better alternatives. Why use _Generic when templates are a much saner approach than type dispatching with the preprocessor?

However, that is beside the point: 99% of C89 code, minus a few differences, is valid C++ code, and if the situation so requires, C++ code can be written in exactly the same way.

And let's not forget that most FOSS projects have never moved beyond C89/C99 anyway, so stuff like _Generic is of relatively little importance.


C, unlike C++, does not really force new versions onto you, even if dependencies begin using them. That said, Linux switched to C11. Newer versions of C will gradually be adopted, despite the incompatibilities this causes for C++.

As for WG 14, they incorporated numerous C++isms into C. While you claim that they did not go far enough, I am sure you will find many who would say that they went too far.


I very much doubt it, when someone decides to make full use of recent ISO C in a C library header file.

I claim they aren't focused on what matters: we don't need C++isms in C, we already have C++, and C should have been declared done as a language back in C89.

Anyone that wanted more has always been able to use C++ instead, or Objective-C on Apple/NeXT land.

What we need is for WG14 to finally take security around strings and arrays seriously, not yet another reboot of functions using (ptr, length) pairs.


> I very much doubt it, when someone decides to make full use of recent ISO C in a C library header file.

I have tested this with respect to _Generic, as it was a concern. It turns out that GCC will accept it even under older language versions, which permits compatibility. You might feel that this is wrong, but that is how things are right now.



