Well, yes. But they do that by replicating a lot of functionality that might as well be pushed down a level. Because every program whose output people or other systems rely on needs that level of reliability, and yet only very few provide it to a degree where the company behind it would accept liability if it doesn't.
Usually 'working' and 'reliable' get redefined to 'working with what we've tested it with' and 'reliable insofar as our statistics indicate'. Without knowing for sure that you've really covered all your edge cases, you're a typo away from some disaster. Fortunately, most software isn't that important. But for software that is that important, these strategies, even if imposed from the outside rather than embedded in the language, will pay off.
Oh, I am not saying there are no problems. And I don't deny a certain emotional appeal to having safety features provided by the language.
However, great (quality) software is delivered both with those kinds of features and without, and crap software is delivered both with those kinds of features and without. And more importantly, I have seen little to no evidence that having those sorts of features actually substantially changes the statistical distribution of crap/quality software, no matter what we feel should be the case.
People can use these safety features or not, and they can use them well or not. Just like they can use non-linguistic safety mechanisms, such as really good test-suites...or not.
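To make that contrast concrete, here is a minimal sketch in Rust (chosen purely for illustration; neither side names a language). The compiler-enforced `Option` return type stands in for a safety feature provided by the language, while the unit test underneath stands in for the non-linguistic mechanism a good test-suite provides: it only catches the empty-input edge case because someone remembered to write it.

```rust
/// Returns the largest value in the slice, or None when the slice is empty.
/// The Option return type is the language-level safety feature: the compiler
/// forces every caller to deal with the empty case before using the value.
fn largest(values: &[i32]) -> Option<i32> {
    values.iter().copied().max()
}

fn main() {
    // Refusing to handle the None arm here is a compile error, not a
    // runtime surprise.
    match largest(&[3, 9, 1]) {
        Some(v) => println!("largest: {v}"),
        None => println!("no values given"),
    }
}

// The non-linguistic counterpart: a test suite that covers the same edge
// case, but only because someone thought to write this test.
#[cfg(test)]
mod tests {
    use super::largest;

    #[test]
    fn empty_input_yields_none() {
        assert_eq!(largest(&[]), None);
    }
}
```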
Elsewhere, he writes:
> This is where I stop understanding how the rest of the world can work at all.

And so you probably need to upgrade your understanding. If the world doesn't conform to your understanding of it, the thing that's lacking is almost certainly your understanding of the world. Because it does work.
> And more importantly, I have seen little to no evidence that having those sorts of features actually substantially changes the statistical distribution of crap/quality software, no matter what we feel should be the case.
I have. Our company has done a fairly large number of studies of the internals of companies producing software, and the better a company is at the tech, the better it does in the long run.
Note that there is such a thing as 'good enough', and once that bar is cleared I'm fine with cutting a corner here or there to meet a deadline. But I'm not fine with categorically ignoring quality and security in favor of short term wins.