Hacker News

I am actually not talking about the lack of fat pointers; that is almost entirely orthogonal to my point. I am talking about the fact that what would be the syntax for passing an array by value was repurposed for automatically decaying into a pointer. This results in a massive and unnecessary syntactic wart.

The fact that the correct type signature, a pointer to a fixed-size array, exists and that you can create a struct containing a fixed-size array member and pass that in by value completely invalidates any possible argument for having special semantics for fixed-size array parameters. Automatic decay should have died when it became possible to pass structs by value. Its continued existence continues to result in people writing objectively inferior function signatures (though part of this is the absurdity of C type declarations making the objectively correct type a pain to write or use, another one of the worst actual design mistakes).

Fat pointers or argument-aware non-fixed size array parameters are a separate valuable feature, but it is at least understandable for them to not have been included at the time.



> The fact that the correct type signature, a pointer to fixed-size array, exists and that you can create a struct containing a fixed-size array member and pass that in by value completely invalidates any possible argument for having special semantics for fixed-size array parameters.

That's not entirely accurate: "fixed-size" array parameters (unlike pointers to arrays or arrays in structs) actually say that the array must be at least that size, not exactly that size, which makes them way more flexible (e.g. you don't need a buffer of an exact size, it can be larger). The examples from the article are neat but fairly specific because cryptographic functions always work with pre-defined array sizes, unlike most algorithms.

Incidentally, that was one of the main complaints about Pascal back in the day (see section 2.1 of [1]): it originally had only fixed-size arrays and strings, with no way for a function to accept a "generic array" or a "generic string" with size unknown at compile time.

[1] https://www.cs.virginia.edu/~evans/cs655/readings/bwk-on-pas...


depending upon how one has structured the code, a less painful way to write the same is:

    typedef char array[5];

    void do_something(array *a) {
        enum { a_Size = sizeof *a };
        memset(*a, 'x', a_Size);
    }
it rather depends upon how painful it will be to create a bunch of typedefs.

Beyond a certain point, if there are too many arrays of the same size with different purposes, my inclination is to wrap the array in a struct, and pass that around (either by pointer or value depending upon circumstances.)

The existence of the decaying form is, if I recall correctly, a backward-compatibility thing from either B or NB; in one or the other, pointers were written in the (current) array syntax form.


It stems from B, because it didn't have either pointers or arrays on the type level. Declaring an array allocated the storage, but the variable itself was still a word-typed pointer to said array. In fact, you could even reassign it!

   foo(a) {
      return(&a[1]);
   }
   bar() { 
      auto a[10];
      a = foo(a);
   }
The decaying system made it mostly work with minimal changes in C.



