> People will go out of their way to avoid talking directly to machines at a low level
I would put it differently. At 30 bugs per kLOC, I'd prefer that my codebase express a problem and its solution, and as little below that level as possible.
Each well-vetted layer of abstraction between a scientific programmer and the machine's low-level interface eliminates whole classes of bugs that are irrelevant to the problem the user is actually working on.
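A toy sketch of what I mean (my own illustration, not from any study; the library choice is just a stand-in for any well-vetted layer):

```python
import numpy as np

def dot_low_level(xs, ys):
    # Every line here can host a bug class that has nothing to do with the
    # science: off-by-one indexing, mismatched lengths, accumulator type.
    total = 0.0
    for i in range(len(xs)):
        total += xs[i] * ys[i]
    return total

def dot_abstracted(xs, ys):
    # The vetted layer owns those bug classes; only the problem remains.
    return float(np.dot(xs, ys))

print(dot_low_level([1.0, 2.0], [3.0, 4.0]))   # 11.0
print(dot_abstracted([1.0, 2.0], [3.0, 4.0]))  # 11.0
```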
> but rather the discovery that the ratio is pretty stable
The thing is, it isn't stable. It just doesn't depend on the language, which is very surprising. But it varies enormously from one study to another and, AFAIK, nobody has a good set of factors explaining it.
I don't find it that surprising. I think what programming languages (and styles) do is fill up each line of code with information until a roughly constant level of cognitive effort is required to process that line.
At that constant level of effort, we make a certain constant number of mistakes. And that's what I think these studies show.
Some languages are very dense, others break things down into more lines. Some languages care about hard-to-control details of how your computer works, others handle that automatically. Some languages come with built-in validators, others let you write any kind of trash and try to make sense of it.
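To make the density point concrete, here's a made-up Python example of the same computation at two line densities (the numbers are arbitrary):

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Dense: one line carries the filter, the transform, and the reduction.
dense = sum(x * x for x in data if x % 2 == 0)

# Spread out: the same content, but each line carries less to process.
evens = []
for x in data:
    if x % 2 == 0:
        evens.append(x)
squares = [x * x for x in evens]
spread = sum(squares)

assert dense == spread == 56
```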
Personally, I suspect the number of bugs per line is determined by social and psychological factors, and that what changes from one language to another is the amount of effort one has to put into testing and debugging. But, well, none of this is obvious to me.