It depends on the dynamic range you are working with and the type of operation. Dynamic range is critical for addition and subtraction, but usually not important for multiplication and division (where only the maximum exponent range is of concern).
For example, 1.0 + 1.616e-35 = 1.0 (exactly) in double precision, because the dynamic range is far too wide to encode the sum within the ~16 decimal digits available. The second term simply gets rounded away.
1.0 / 1.616e-35, however, can be encoded successfully, and you will not lose much precision: at most a rounding error in the last digit.
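Both effects are easy to check directly (a quick sketch; the 1.616e-35 value is the Planck-length figure from the example above):

```python
import math

# Double precision carries ~16 significant decimal digits, so adding a
# Planck-scale term (~1.6e-35) to 1.0 is absorbed entirely by rounding.
planck = 1.616e-35
assert 1.0 + planck == 1.0  # the small term is rounded away

# Division only combines exponents, so almost no precision is lost:
ratio = 1.0 / planck
print(ratio)  # ~6.188e34
assert math.isclose(ratio * planck, 1.0, rel_tol=1e-14)
```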
So, to answer your question: double precision is usually sufficient even at the Planck scale, as long as you are not also adding or subtracting terms at much larger scales (like the 1 meter example above).
I immediately wondered this too. Other than seeing that two numbers are strictly equal when a computer evaluates them, how much precision do physicists and mathematicians actually need?
In my experience as a physicist, many things are perfectly fine even in single precision. This is especially true if you're dealing with experiments, because other sources of error are typically much larger.
To give you an example from my line of work (optical communication): we use high-speed ADCs and DACs with an effective number of bits of around 5. While you can't do DSP at 6-bit resolution, anything above 12 bits is indistinguishable. This is in fact exploited by the people designing the DSP circuits used in real systems: they are based on fixed-point arithmetic and run at around 9 bits or so.
While other fields might have higher precision needs, just remember that when you interact with the real world, your ADCs will likely not have more than 16-bit resolution (even if they are very slow), so you're unlikely to need many more bits than that.
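The link between bit depth and achievable signal quality can be sketched with the textbook quantization-noise formula for a full-scale sine wave, SNR ≈ 6.02·N + 1.76 dB (my illustration, not the commenter's numbers):

```python
def quantization_snr_db(bits: int) -> float:
    """Ideal SNR (dB) of a full-scale sine quantized to `bits` bits."""
    return 6.02 * bits + 1.76

for n in (5, 9, 12, 16):
    print(f"{n:2d} bits -> {quantization_snr_db(n):5.1f} dB")
# 5 effective bits gives roughly 32 dB; 16 bits gives roughly 98 dB,
# close to the practical ceiling of real-world converters.
```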
32-bit float in audio has a dynamic range of about 1528 dB. The loudest physically possible dynamic range is around 210 dB, so that's quite a bit of headroom. Real hardware audio converters max out around 22 bits of resolution, so for sampling, the maximum dynamic range is 110 dB to 120 dB on super-spec, top-grade hardware.
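That dynamic-range figure can be reproduced from the IEEE 754 binary32 format itself, taking the ratio of the largest finite value to the smallest *normal* value (a sketch; denormals would stretch it further):

```python
import math

# IEEE 754 binary32 normal range:
max_f32 = (2 - 2**-23) * 2.0**127   # largest finite float32
min_f32 = 2.0**-126                 # smallest positive normal float32
dr_db = 20 * math.log10(max_f32 / min_f32)
print(f"{dr_db:.0f} dB")  # ~1529 dB, matching the figure above to within rounding
```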
Of course, for synthesis you can use the entire dynamic range. But you can't listen to it, because hardware to play back the full resolution doesn't exist. (For 32-bit float it's physically unbuildable.)
64-bit floats are still useful in DSP because there are a few situations where errors recirculate and accumulate, and 32-bit float is significantly worse at that than 64-bit. It doesn't take many round trips for the effects to become audible. In the worst case, DSP code can become unstable and blow up purely from numeric error.
You could go up to 128-bit floats, but the benefits are basically zero.
I wonder what considerations might apply to its use at the subatomic scale.