While I agree with the points you made, you can't take the zeroth element out of your shopping bag; you take the first one. The array indexing operator gives you access to the nth element of your data store. IMO OP's point is valid.
You created an unfair definition of the array indexing operator. I can just as easily say that the array indexing operator gives you access to the element at index i, where counting starts at 0. That's not an argument.
I'm mad at myself for getting dragged into this, but: leaving aside the fact that array indexes are not continuous in the way ruler measurements are, what does it say that you just called it the 1st inch, not the 0th?
> I have always seen a ruler as a discrete sequence of centimeters even if the physical object is continuous
I have trouble believing this. Does this mean you think of a ruler as a tool to assign lengths (continuous) to discrete intervals, rather than as a tool to (perhaps imprecisely) measure a continuous length?
For me a ruler is something that tells me how many centimeters there are in whatever it is I want to measure - but maybe that's for linguistic reasons? E.g. in French I'd say "y'a combien de centimètres là?" when measuring something long with a tape measure and someone's help, which translates more or less to "how many centimeters are there right now?"
The sequence of the cardinal numbers, i.e. of the equivalence classes of sets having the same number of elements, is 0, 1, 2, 3 and so on, and in any language those words were originally used for cardinal numbers, not for ordinal numbers.
For the sequence of the ordinal numbers, whose purpose is to identify the position of the elements of an arbitrary sequence, any sequence of arbitrary symbols may be chosen and fixed by convention.
Most languages had, at least in the beginning, special words for the first and second elements of a sequence, without any relationship to the cardinal numbers. Even in English that remains true, even if "second" is a more recent substitute for the word used previously. Many languages have special words for the last element and for the element before it. Some languages had special words for the third element and for the third element counting backwards from the last. So in some languages it was possible to identify the elements of a 6-element sequence without using words derived from the cardinal numbers.
However, inventing a very long sequence of words to be used as ordinal numbers, in order to identify positions in sequences with more than 2 to 6 elements, would have been difficult. So in most languages someone noticed that there already was a sequence of words that everybody had to memorize when learning how to count, and which had rules for being extended to any length. The ordinal numbers were therefore derived from the cardinal numbers by a suffix or some other derivation rule.
There is no logical reason for using 1 for the first ordinal position; it is just a historical accident.
The reason is that children have always been taught to count by saying 1, 2, 3 and so on, instead of being taught to recite the sequence of the cardinal numbers from zero.
All languages have always had a word for zero, but those words were normally created by applying a negation to words meaning "something", "one" or the like.
Because of this, the words for zero were not perceived as having an independent meaning and there was no need to learn them separately when the recitation of the cardinal numbers was learnt.
Nowadays we have a much better understanding of the meaning of the cardinal numbers and we are aware that 0 is a cardinal number like any other, so children should really be taught to count 0, 1, 2, 3 ... and not 1, 2, 3, ... like 5000 years ago.
In natural languages there is a huge inertia. Even if one decided that starting tomorrow the ordinal numbers should be 0th, 1th, 2th, 3th, 4th and so on, everybody would still have to know that in older writings the sequence of the ordinal numbers was 1st, 2nd, 3rd, 4th and so on, so changing the convention to a more logical one would bring no simplification.
On the other hand, in programming languages you can ignore the legacy conventions and choose the best ones. Using the sequence 0, 1, 2, 3 ... for ordinal numbers is the best choice for many reasons, which have been explained in the literature many times, e.g. by Dijkstra.
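Roughly, the half-open-interval argument (Dijkstra's note EWD831) is that with 0-based indices a range of N elements is written 0 <= i < N, so lengths and split points need no +1/-1 adjustments. A minimal C++ sketch of that idea (the vector and its values are just placeholders):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{10, 20, 30, 40, 50};

        // Half-open range [lo, hi): its length is simply hi - lo, and
        // splitting it at any m gives [lo, m) and [m, hi) with no overlap
        // and no gap.
        std::size_t lo = 0, hi = v.size();
        std::size_t mid = hi / 2;

        std::cout << "length of [lo, hi)  = " << (hi - lo) << '\n';   // 5
        std::cout << "length of [lo, mid) = " << (mid - lo) << '\n';  // 2
        std::cout << "length of [mid, hi) = " << (hi - mid) << '\n';  // 3

        // With 0-based indices, v[i] is "the element preceded by i others",
        // so the index doubles as an offset and the loop bound is just size().
        for (std::size_t i = lo; i < hi; ++i)
            std::cout << v[i] << ' ';
        std::cout << '\n';
    }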
Choosing to start the ordinal numbers from 1 in a programming language just demonstrates a lack of understanding of what cardinal and ordinal numbers are, a lack of practical programming experience, and a lack of understanding of how the programming language will be translated into machine language.
The first programming language whose compiler-generated machine code I studied, when I was young, happened to be Fortran, which uses indices starting from 1. To this day I remember how ugly and error prone I found all the tricks the compiler was forced to use to avoid, in many cases, extra computations caused by the poor choice of index origin.
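To make the arithmetic concrete (this is only a rough C++ sketch of the idea, not what a Fortran compiler actually emits): with 0-based indexing the address of element i is simply base + i * size, while 1-based indexing forces either a subtraction on every access or a pre-biased base pointer.

    #include <cstdio>

    int main() {
        double a[8] = {0};

        // 0-based: the address of a[i] is base + i * sizeof(double),
        // a single multiply-add with no adjustment.
        int i = 3;
        double *p0 = a + i;

        // 1-based (Fortran-style): the compiler must either subtract 1 on
        // every access, base + (j - 1) * sizeof(double), or pre-bias the
        // base pointer; those are the kinds of tricks mentioned above.
        int j = 4;                 // the "fourth" element, same slot as a[3]
        double *p1 = a + (j - 1);

        std::printf("%p %p\n", (void *)p0, (void *)p1);  // same address
    }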
The point is valid, but the rationale is not. Let me explain:
Caring about 0 based or 1 based indexing is, to me, a sign of someone who struggles with programming in general, or is stuck doing a lot of finicky conversion between the two.
Most modern, higher-level languages have generally abandoned indexing; instead (even C++) they have something like:
"for x in y do z"
1-based indexing is a bit more readable, but in the end, you get hardly any benefit. There are better paradigms and algorithms which don't require indexing at all, and IMO that covers the majority of what programmers are doing anyway. Even if you need to process two same-size collections at once (the majority of the remaining legitimate uses for index-based looping), you are likely working with pre-sorted data and should consider using a zip or pair, which eliminates the need for managing indexes.
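For instance (just a sketch; the containers and values are made up, and std::views::zip assumes a C++23 compiler), both patterns look like this even in C++:

    #include <iostream>
    #include <ranges>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> names{"ada", "bob", "cy"};
        std::vector<int> scores{3, 1, 4};

        // "for x in y do z": no index variable anywhere.
        for (const auto &n : names)
            std::cout << n << '\n';

        // Two same-size collections processed together without managing
        // an index: zip pairs the elements up for us.
        for (const auto &[n, s] : std::views::zip(names, scores))
            std::cout << n << " -> " << s << '\n';
    }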
You say, "but aren't you just imposing your own style on others?" Not really. If we want clean, minimal code, there should be as few references to the underlying architecture as possible; even the fact that we are dealing with a list is an implementation detail (is it actually a list, a linked list, a stream, a dictionary, an event, etc.?). So in the context of implementing a higher-level language, this point is not only irrelevant but shortsighted. If you are creating and looping over lists, you are likely not doing anything interesting, and the whole point of programming in higher-level languages is to do interesting things simply, right?
As for the remaining use case where we do actually want array-based access, you typically find it in high-performance, architecture-aware applications - there we actually want random memory access. We may even want to deal with explicit memory offsets, which is what 0-based arrays are good at (oftentimes the array is syntax sugar and we are literally assigning to/from pointer offsets).
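Concretely, in C and C++ the indexing operator is defined as pointer-offset sugar, so the first element sits at offset 0 (a trivial sketch):

    #include <cassert>

    int main() {
        int buf[4] = {7, 8, 9, 10};

        // buf[i] is defined as *(buf + i), i.e. "the element i slots
        // past the base address".
        for (int i = 0; i < 4; ++i)
            assert(buf[i] == *(buf + i));
    }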
To bring this around to my previous statement: "Caring about 0 based or 1 based indexing is, to me, a sign of someone who struggles with programming in general, or is stuck doing a lot of finicky conversion between the two."
The reason this comes up in the first place is that there is a divide. 0-based arrays are arguably much better for low-level activities, and higher-level languages generally kept them for familiarity's sake. Translating from 1-based to 0-based arrays is no easier than translating from 0-based to 1-based arrays; 1-based arrays provide just as much confusion and less familiarity in these cases. This is not a good thing.
Now I suppose Lua is trying to be something weird: a "high-level" low-level language. Maybe that's fine, but it is weird, and people are right to be put off by the change. If you are looping over bananas in Lua, maybe you're not really using it the way it was intended; if you are doing memory-level access, then it's unnecessary language cruft.