> Generally speaking, if you think you need more than double precision, what you really want is double precision and a better algorithm. Generally speaking.
Though a lot of the time, the better algorithm is using an error accumulator, so two doubles. This tends to outperform 80-bit extended precision, double-double, or long double arithmetic... but more precision would often also suffice and use the same amount of space.
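A minimal sketch of the two-double error-accumulator idea is compensated (Kahan) summation: the running total is one double, and a second double carries the rounding error lost at each step so it can be fed back in.

```python
def kahan_sum(xs):
    """Sum a sequence of floats with a separate error accumulator."""
    total = 0.0
    c = 0.0  # running compensation: rounding error not yet absorbed into total
    for x in xs:
        y = x - c            # fold the previously lost low-order bits into x
        t = total + y        # big + small: low-order bits of y may be rounded away
        c = (t - total) - y  # algebraically zero; in floating point, the lost part
        total = t
    return total
```

Note that this plain Kahan form can still lose compensation when an incoming term is much larger than the running total; Neumaier's variant handles that case. Ten additions of 0.1 illustrate the benefit: a naive loop drifts away from 1.0, while the compensated sum stays within an ulp or so.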
Error accumulation is basically a way to emulate higher-precision numbers. That's not what I'm talking about; I'm saying you can use an algorithm that accumulates error at a lower rate to begin with.
For example, if you are summing numbers, you can split the list in half and recursively sum each half. This is superior, in terms of error, to a simple left-to-right loop. If you are solving linear equations, you can compute a matrix inverse, but that is awful in terms of error. A better idea is Gaussian elimination with back substitution. Better yet, use pivoting. Better yet, factorize the matrix. Etc.
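The summation example can be sketched directly. Pairwise (recursive) summation keeps the two operands of each addition at comparable magnitude, so the worst-case error grows like O(log n) rather than the O(n) of a left-to-right loop; the base-case size of 8 below is an arbitrary tuning choice.

```python
def pairwise_sum(xs):
    """Recursively split the list and sum each half."""
    n = len(xs)
    if n == 0:
        return 0.0
    if n <= 8:
        # Small base case: a plain loop is fine for a handful of terms.
        total = 0.0
        for x in xs:
            total += x
        return total
    mid = n // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])
```

Same storage, same number of additions, just a different association order; the payoff shows up when summing many terms of similar sign, where the naive loop's running total dwarfs each new addend.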