I'm exceedingly well-aware of how prevalent UB is and how "rarely" it actually turns into an issue in practice. The problem is that you have no way of knowing when or if a particular instance of UB will be dangerous. Even if you somehow know the impact today, that can change without warning in the future.
There's a wealth of studies on this subject, like this one [0] documenting cases where undefined behavior leads to miscompilations or examples like [1] where undefined behavior leads to security vulnerabilities. There's a quote from that second link that's deeply applicable here:
> This blog post provides an exploit technique demonstrating that treating these bugs as universally innocuous often leads to faulty evaluations of their relevance to security.
Aside from the contrived examples in the paper, the rest are bugs. The kernel exploit was just a missing NULL check; another bug. Bugs are going to happen, and they're going to have unpredictable consequences. What does that have to do with the language and undefined behavior? These are all just more evidence that you need to know what you're doing if you're going to write code at this level; though really, most people don't need to, because the vast majority aren't writing code where bugs in the form of crashes or security exploits have serious consequences or can't be fixed.
I'm not sure what you're going for by trying to call the examples I linked bugs. Yes...?
The issue is that you can't solve these at the code level. The kernel vuln could be fixed with a NULL check only because the kernel build system explicitly tells the compiler not to delete null checks, a mitigation added after earlier exploits [0] where the language allowed the compiler to remove them.
I don't think it's reasonable to brush these off as things that only affect "serious" code. For one, someone needs to write that important code, and history has repeatedly demonstrated that even the best programmers write UB occasionally. Secondly, "important code" is pretty much the biggest remaining niche for large-scale C development, and C++ to a lesser extent. Very few people are using Ada/SPARK for safety-critical development, for example. Compilers have also become significantly more aggressive at optimizing against UB, and security has become significantly more important, which means this problem is far worse than it was 30 years ago.
[0] http://dx.doi.org/10.1145/2517349.2522728
[1] https://googleprojectzero.blogspot.com/2023/01/exploiting-nu...