> C is sort of a dead end. There is very little innovation there.
C is a small language. There are benefits to that. But it also has a handful of historical oddities. Innovation here means to keep C small while also getting rid of those quirks.
C++ is enormous. Rust is headed in the direction of similar enormity.
C is a small language which fails even at the most basic abstractions (you can't really create a safe and ergonomic "zero-overhead" generic vector type). Due to the inherent complexity of low-level programming, expressive languages in this niche (C++, Rust) have to be reasonably complex for this reason alone.
Zig is an interesting "exception" due to its strong compile-time metaprogramming capabilities, resulting in a small but quite expressive language. But all three have a future. C is here to stay, but I really wouldn't start any new project in it.
C++ -ffreestanding is remarkably usable without the standard library, provided compiler intrinsics are used to fill in some gaps. I'm slowly coming around to using that instead of C for language runtimes.
> C++ is enormous. Rust is headed in the direction of similar enormity
I don't think this is a fair characterization of rust really. For the most part the things on the horizon still for rust seek to reduce complexity by filing down sharp edges that force you to write complicated code. Stuff like GATs seem complicated until you repeatedly slam your head into the lack of them trying to do things that "seem" natural.
C++ on the other hand (after 2011) just never saw an idiom it didn't like enough to throw on the pile and there's little coherence to the way the language has grown in the last decade.
> there's little coherence to the way the language has grown in the last decade.
IMHO it's been incoherent from the earliest times.
The lame exceptions without 'finally', and no consistency in exception types. Then the desperate attempts to make all resources into objects, except that practically no OS calls bothered with this.
The overloading of the shift operators in the standard library. Indeed, operator overloading itself is just a recipe for abuse. You read 'a=b+c' and you literally have no clue what that means.
Multiple inheritance with the brittle semantics.
The awful STL, with its multi-kB error messages (the allocator of the trait of the string of the tree of map of king Caractacus doesn't match ...)
It's no wonder an 'obfuscated C++ contest' never happened the way the obfuscated C one did: the language is unreadable right out of the box.
> You read 'a=b+c' and you literally have no clue what that means.
I never understood that argument. Even in C operators do different things depending on what types you pass it. Two very simple examples:
1. Adding a number to a char* vs. adding a number to an int* (or a pointer to any other larger type). The second implicitly multiplies the offset by the element size. This confused me when I first learned C, after already knowing the concept of a memory address (which is just a number). It was an unexpected abstraction.
2. This regularly bites novices to programming: dividing two numbers. If at least one of them is floating point, you get the 'correct' result; otherwise it is rounded toward zero. To 'fix' it, you have to explicitly cast at least one operand to float or double; the language then implicitly converts the other for you. std::pow went the other way, which is less confusing: it always promotes integer arguments to floating point and returns a floating-point result.
As soon as your language has types and operators, you get operators that do different things based on the types of the values they are applied to. The only new thing that operator overloading adds is that it makes libraries first class citizens.
fwiw I'm with you even though I've gotten very frustrated with c++'s bloat in other ways. The shift operators were probably a poor choice for iostreams but I don't think the idea was inherently unsound, iostreams is just a very bad library in a bunch of ways.
The thing is that in the end, once you have "methods", there are a bunch of reasons you sometimes want to have "infix methods" and for various reasons languages have been reticent to add them as a general concept, so all you get is the operators you have. And it's not like in C++ types are hidden from you most of the time. I guess now that auto is more normalized it happens more but this argument goes back way before auto getting its modern meaning.
But fundamentally, I would rather concat strings with + and I'd rather use normal math operators for vector types like matrices or for combinatorial concatenation. Every time I see code in a language that doesn't allow this, but people write code doing those things anyway, I cringe at the result.
At a more fundamental level, the fact that struct+struct has no meaning to begin with in C makes it fair game to add it as a concept.
And it's worth noting that addition of pointers is only really a trivial op in non-segmented architectures without any kind of pointer tagging anyways. The window on that assumption opened with 32bit archs and will probably close soon as CPU security primitives evolve.
> And it's worth noting that addition of pointers is only really a trivial op in non-segmented architectures without any kind of pointer tagging anyways. The window on that assumption opened with 32bit archs and will probably close soon as CPU security primitives evolve.
It was trivial on char* in the 16-bit segmented days (arithmetic was only applied to the offset part of the pointer, if you had a far pointer at all), unless you absolutely needed the huge memory model, which was discouraged anyway because of its slowness. What it did mean, though, was that code and data lived in something akin to separate address spaces. But portable modern C code usually doesn't use performance tricks like self-modifying code anymore anyway, because even where it isn't problematic it isn't really needed.
The difference is that you know (or can know, or predict) the outcome. In C++, a=b+c could be a simple addition or an operation that takes an hour, allocates 1GB of RAM, opens a socket and requests a JSON document from a server in China. You can't know without looking at the code.
You don't know what the function called "add" does either.
There's no reason for the name "+" to be anymore special than "add" - especially in any language supporting unicode identifiers which allows even more crazy names.
See, framing it as a personal preference makes much more sense. Of course disallowing nontrivial work in operators will also mean you can't concatenate strings in them, so you will have few languages to choose from.
Sorry. I don't know if you are saying that you won't be able to concatenate strings without operators, or if you choose a language just because it allows you to use a "a = b + c" form with strings.
For the first case, of course you can create something like:
str1.append(str2);
For the second... I don't know what to say.
> framing it as a personal preference
But follow my logic for a moment: if "str3 = str1 + str2" concatenates two strings, what should "str3 = str1 - str2" do?
(Yes I know there is no minus operator for std::string)
I never said that it is impossible to work without operator overloading (actually the opposite). Just that it is a very common feature in most programming languages, and I don't understand why an experienced developer would be confused by it. Even in math expressions (where the operators come from), you have this behavior. If you multiply two matrices, the operator does something completely different. The result could be huge and thus, in the computing world, require a huge memory allocation. Also, you can multiply matrices, but you can't divide them. The primary-school level of math you seem to be limiting the world to is not practical.
You can predict the outcome very well. It is a call to an operator, which can be trivial for simple types. But of course if you concatenate two 500MB strings using +, you will have a 1GB allocation. I don't see how that is a problem or even a surprise. Of course you could do something in that operator that nobody would expect to happen inside it. But the same is true if you give a function a bad name. For all intents and purposes, an operator is just a function with a particularly short name and (default) style of how it is called.
Not always. BTW, I don't know if you noticed, but I was exaggerating to make a point.
> Of course you could do something in that operator that nobody would expect to happen inside it
To me, this is the problem. For example, I do embedded. I have to know with a certain precision what is going on in my program.
It's a very different thing to call a badly named or badly written function. That's an unlucky case... I guess?...
But if I use an operator, there might be hidden code there I have to see.
Yes, I know you can use "=" to copy a struct. But also "a = b + c" might be doing something slow, like re-allocating internal pointers and making copies, allocating new objects, etc. I have to read the code to see what it will do. Believe me, I have done it with inherited embedded C++ projects.
I don't do embedded (anymore), so please consider that your use-case and concern is very specific.
But even in C++ you can't redefine what + means for types where it already has a built-in meaning. The only thing you can do is define what it does for custom types. That means when you see the + operator applied to some class or struct type, you should already be aware that this is nothing built-in and is equivalent to a function call. If you are not, you are just not fluent in your tooling. That is a completely different problem: it is something learnable, not a fundamental flaw of the tooling.