At the lowest level, you don't need to double up; you can use an error-correcting code. If the memories and registers on a chip use a multiple-error-correcting code, then the underlying error rate could be quite high without making any difference in the user-visible error rate.
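For instance, a textbook Hamming(7,4) code stores 4 data bits in 7 and corrects any single flipped bit on read; real memories use wider codes like SECDED(72,64), but the mechanics are the same. A toy sketch in Python (function names are just illustrative):

```python
# Minimal Hamming(7,4) sketch: 4 data bits stored as 7 bits,
# any single bit flip corrected on read.

def hamming74_encode(data):              # data: list of 4 bits
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4                    # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                    # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                    # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

def hamming74_decode(code):              # code: list of 7 bits
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]       # recompute each parity group
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # nonzero -> position of bad bit
    if syndrome:
        c[syndrome - 1] ^= 1             # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]      # extract the data bits

word = [1, 0, 1, 1]
stored = hamming74_encode(word)
stored[4] ^= 1                           # inject one bit flip
assert hamming74_decode(stored) == word  # corrected transparently
```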
Similarly, you could use noisy-network protocols for on-chip wires, so that each signal path doesn't need to be perfect. Again, you don't need to double up; instead you lose a small percentage to coding overhead, plus some latency to encode and decode.
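Something like checksum-and-retransmit, say. A toy simulation (illustrative framing only, not any particular on-chip protocol): the overhead is one check byte per frame plus occasional retries, rather than a duplicate of every wire.

```python
# Noisy-link sketch: frames carry a CRC-8, corrupted frames are
# dropped, and the sender retries until one gets through clean.
# (CRC-8 can miss some multi-bit errors; real links use stronger checks.)

import random

def crc8(data, poly=0x07):               # CRC-8, polynomial x^8+x^2+x+1
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def noisy_wire(frame, flip_prob=0.01):   # each bit flips with some probability
    return bytes(b ^ sum(1 << i for i in range(8) if random.random() < flip_prob)
                 for b in frame)

def send(payload):
    frame = payload + bytes([crc8(payload)])
    attempts = 0
    while True:
        attempts += 1
        received = noisy_wire(frame)
        if crc8(received[:-1]) == received[-1]:   # check passed: accept
            return received[:-1], attempts

data, tries = send(b"hello")
print(f"delivered {data!r} after {tries} attempt(s)")
```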
How would an error-correcting code work for something like a floating-point multiplication? Correcting errors in storage is simple, but correcting errors in computation seems like a significantly harder problem.
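For context, one classical partial answer is an arithmetic residue code, which detects (not corrects) errors: since (a * b) mod m == ((a mod m) * (b mod m)) mod m, a tiny shadow multiplier working mod m can run alongside the full multiplier and flag faults. A sketch for integer multiplication (the floating-point case, with rounding, is exactly where it gets hard); the modulus and function name here are illustrative:

```python
# Residue check on integer multiplication: a cheap mod-M shadow
# computation cross-checks the full product.

M = 251  # check modulus (illustrative; real designs often pick 2^k - 1)

def checked_multiply(a, b, fault=0):
    product = a * b + fault              # `fault` simulates a hardware glitch
    residue = (a % M) * (b % M) % M      # cheap shadow computation mod M
    if product % M != residue:           # note: faults that are exact
        raise ArithmeticError("multiplier fault detected")  # multiples of M escape
    return product

print(checked_multiply(12345, 6789))           # passes the check
try:
    checked_multiply(12345, 6789, fault=1024)  # injected error is caught
except ArithmeticError as e:
    print(e)
```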