CPUs do different things when they know a pointer is invalid. Often the CPU raises some sort of trap that the operating system turns into a signal - on Unix the process typically gets a SIGSEGV and dumps core. Not all invalid pointers can be detected, though, and not all CPUs have the hardware (an MMU) needed to detect invalid pointers at all.
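To make that concrete, here is a minimal C sketch of the usual Unix outcome. The address 0xdeadbeef is just an arbitrary bad address; since the dereference is undefined behaviour, the crash shown in the comments is the common result on a system with an MMU, not a guarantee:

```c
/* Sketch of the common outcome described above: on a typical Unix system
 * with an MMU, dereferencing an obviously bad address makes the CPU fault,
 * the kernel turns the fault into SIGSEGV, and the process dies (usually
 * "Segmentation fault (core dumped)"). Formally this is undefined
 * behaviour, so nothing here is guaranteed. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    volatile int *p = (int *)(uintptr_t)0xdeadbeef; /* arbitrary invalid address */
    printf("about to dereference %p\n", (void *)p);
    return *p; /* typically traps: the kernel delivers SIGSEGV */
}
```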
Exactly. Even though you could specify what happens mechanically on any given system, the results are kind of uncontrollable on most real computers.
Whether something should be UB or implementation-defined requires a value judgement. Strictly speaking there is no real need for UB (assuming the behaviour of the machine can be specified at all), but on the other hand there isn't much value in specifying a situation where all control is lost to the point where implementation details leak (runtime structures get overwritten, etc.).
Since there is (I assume?) an expectation that "implementation-defined" comes with an actual definition, pedantically, leaking implementation details would require all of those details to be codified, and thus set in stone.
It is more that while we could define everything, there are costs. I could track every allocation in a hidden table, then, before following a pointer, verify that the pointer being followed is in the table. However, that is extremely expensive (and don't forget that you might follow a pointer in one thread while a different thread deletes it, so complex race-condition logic is needed here that is tricky to get right).
By making things undefined we can avoid a large amount of complex code to detect a situation that shouldn't happen.
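For a sense of what that hidden-table approach would actually cost, here is a rough C sketch. The names checked_malloc, checked_free, and checked_read are made up for illustration; a real version would also have to handle interior pointers, address reuse, table overflow, and much subtler locking:

```c
/* A rough sketch of the "hidden table" idea: every allocation is recorded,
 * and every dereference first checks that the pointer is still live.
 * Error handling and table overflow are ignored for brevity. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_LIVE 1024

static void *live[MAX_LIVE];                 /* table of live allocations */
static pthread_mutex_t live_lock = PTHREAD_MUTEX_INITIALIZER;

static void *checked_malloc(size_t n) {
    void *p = malloc(n);
    pthread_mutex_lock(&live_lock);
    for (int i = 0; i < MAX_LIVE; i++)
        if (live[i] == NULL) { live[i] = p; break; }
    pthread_mutex_unlock(&live_lock);
    return p;
}

static void checked_free(void *p) {
    pthread_mutex_lock(&live_lock);
    for (int i = 0; i < MAX_LIVE; i++)
        if (live[i] == p) { live[i] = NULL; break; }
    pthread_mutex_unlock(&live_lock);
    free(p);
}

/* Every single read now pays for a lock plus a table scan. The read happens
 * while the lock is held, so a concurrent checked_free in another thread
 * can't free the memory between the check and the dereference - exactly the
 * race mentioned above. */
static int checked_read(int *p) {
    pthread_mutex_lock(&live_lock);
    int found = 0;
    for (int i = 0; i < MAX_LIVE; i++)
        if (live[i] == (void *)p) { found = 1; break; }
    if (!found) {
        pthread_mutex_unlock(&live_lock);
        fprintf(stderr, "invalid pointer %p\n", (void *)p);
        abort();
    }
    int v = *p;
    pthread_mutex_unlock(&live_lock);
    return v;
}

int main(void) {
    int *p = checked_malloc(sizeof *p);
    *p = 42;
    printf("%d\n", checked_read(p));  /* fine: prints 42 */
    checked_free(p);
    printf("%d\n", checked_read(p));  /* caught: aborts instead of silent UB */
    return 0;
}
```

Even this toy version serializes every dereference behind a global lock and a table scan, which is roughly why mainstream C and C++ implementations leave the check out and call the situation UB instead.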
Exactly. While it is definitely harder with a low-level language, it is absolutely doable at acceptable cost in managed languages. In Java, for example, even data races are well-defined to a degree (and OCaml's new multithreaded mode defines data races with even stronger guarantees).