I'm in WG14, and my opinion is that there isn't one good way to do strings; it all depends on what you value (performance / memory use) and the usage pattern. C in general only deals with data types, not their semantic meaning. (IOW, we say what a float is, not what it is used for.) The two main deviations from that are text and time, and both of them are causing us a lot of issues. My opinion is that writing your own text code is the best solution and the most "C" solution. The one proposal I have heard that I like is for C to get versions of the functions that use strings that take an array and a length, so as to not force the convention of null termination in order to use things like fopen.
It's been 50 years, so pretty much everything has been considered. In my opinion the mistake was not having arrays decay into pointers; rather, arrays should have been pointers in the first place. An array should be seen as a number of values with a pointer pointing at the first one. I think adding a third version of the same functionality would just complicate things further. (&p[42] is a "slice" of an array.) Another thing I do not like about slices that store lengths is that they hide memory layout from the user, and that is not a very C thing to do.
You are right, sizeof is the other big difference. I think these differences are small enough that it was a mistake to separate the two. The similarities / differences do make them confusing.
An array of pointers to arrays? Basically, a `T**`. C#'s "jagged" arrays are like this, and to get a "true" 2D array, you use different syntax (a comma in the indexer):
int[][] jagged; // an array of `int[]` (i.e. each element is a reference to an `int[]`)
int[,] multidimensional; // a "true" 2D array laid out in memory sequentially
// allocate the jagged array; each `int[]` will be null until allocated separately
jagged = new int[10][];
Debug.Assert(jagged.All(elem => elem == null));
for (int i = 0; i < 10; i++)
    jagged[i] = new int[10]; // allocate the inner arrays
Debug.Assert(jagged[0][0] == 0);
// allocate the multidimensional array; each `int` will be `default`, which is 0
// element [i,j] will be at offset `10*i + j`
multidimensional = new int[10, 10];
Debug.Assert(multidimensional[0, 0] == 0);
Yes, this is what people with pre-C99 compilers that do not support variably modified types sometimes do. It is horrible (although there are some use cases).
I plan to bring such a proposal forward for the next version. Note that C already has everything to do this without much overhead, e.g. in C23 you can write:
int N = 10;
char buf[N] = { };
auto x = &buf;
and 'x' has a slice type that automatically remembers the size. This works today with GCC / clang (with extensions or C2X language mode: https://godbolt.org/z/cMbM57r46 ).
We simply cannot name it without referring to N, and we also cannot use it in structs (ouch).
How is this not a quality of implementation issue? Any implementation is free to track all sizes as much as they want with the current standard.
Either an implementation is forced to issue an error at run time if there is an out-of-bounds read/write, in which case it's a very different language than C, or it's a feature the as-if rule lets any implementation ignore.
Tracking sizes for purposes of bounds checking is QoI and I think this is perfectly fine. But here we can also recover the size with sizeof, so it is also required for compliance:
And I agree that this is a misuse of auto. I only used it here to show that the type we are missing already exists inside the C compiler; we simply can name it only by constructing it again: