Hm, perhaps the answer is a subtle definition of "rounding error".
A rounding error is when you have a number, let's say 0.75, and due to rounding it is recorded as 1.00. The "rounding error" is 0.25.
An alternative to rounding to 1.00 would be a mechanism that says "the value is between 0.50 and 1.50". This way, no actual rounding occurs: the mechanism never commits to a single rounded value, so technically there is no "rounding error".
A neat advantage of recording an interval rather than rounding is that the "error" is preserved through arithmetic. If some later code computes x * 100, a rounding mechanism reports "the value is 100", whereas an interval mechanism reports "the value is between 50 and 150". A user who only looks at the output can then see that the error range is wide and something needs fixing, instead of seeing a precise-looking answer that silently suffers from significant rounding error.
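The propagation above can be sketched with a tiny interval type. This is a minimal illustration, not a full interval-arithmetic library; the `Interval` class and its scalar multiplication are assumptions made up for this example:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __mul__(self, k: float) -> "Interval":
        # Scalar multiplication scales both bounds; a negative
        # scalar would flip them, so take min/max to keep lo <= hi.
        a, b = self.lo * k, self.hi * k
        return Interval(min(a, b), max(a, b))

x = Interval(0.50, 1.50)   # "the value is between 0.50 and 1.50"
y = x * 100
print(y)                    # Interval(lo=50.0, hi=150.0)
```

Instead of a single misleadingly precise 100, the result carries its full uncertainty: the width of the interval (here 100) makes the accumulated error visible in the output itself.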