The focus should be on _rational_ numbers. This particular example is all about representation error - precision is implicated, but not the cause.
Ignore precision for a second: the inputs 0.1 and 0.2 are intended to be _rational_. This means they can be represented accurately and finitely (unlike an irrational number like PI). When using fractions they can _always_ be represented accurately and finitely in any base:
1/10=
base 10: 1/10
base 2: 1/1010
2/10=
base 10: 2/10
base 2: 10/1010
The neat thing about rationals is that under the four basic arithmetic operations, two rational inputs always produce one rational output :) This is relevant: 1/10 and 2/10 are both rationals, so there is no fundamental reason that addition cannot produce exactly 3/10. When using a format that has no representation error (i.e. fractions), the output will be rational for all rational inputs (given enough precision, which is not a realistic issue in this case). When we add these particular numbers in our heads, however, almost everyone uses decimals (base-10 positional notation), and in this particular case that doesn't cause a problem. But what about 1/3?
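This closure property is easy to demonstrate with Python's stdlib `fractions.Fraction` (a sketch; any language with exact rationals would do):

```python
from fractions import Fraction

# Exact rational representation: no base-dependent representation error.
a = Fraction(1, 10)
b = Fraction(2, 10)

# The four basic operations on rationals always yield rationals.
print(a + b)        # 3/10, exactly
print(a * b)        # 1/50
print(a / b)        # 1/2

# Contrast with binary floating point, where 1/10 and 2/10
# are not exactly representable:
print(0.1 + 0.2)    # 0.30000000000000004
```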
This is the key: rationals cannot always be represented finitely in floating point formats, but this is merely an artifact of the format and the base. Different bases have different capabilities:
1/10=
base 10: 0.1
base 2: 0.00011001100110011r
2/10=
base 10: 0.2
base 2: 0.00110011001100110r
1/3=
base 10: 0.33333333333333333r
base 2: 0.01010101010101010r
The IEEE 754 format is a bit more complicated than the above, but this is sufficient to make the point.
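You can see the stored binary approximation directly, for example in Python (assuming IEEE 754 doubles, which CPython uses), by asking the formatter for more digits than it shows by default:

```python
# 0.1 cannot be stored exactly in binary floating point; printing
# more digits reveals the nearest representable double.
print(format(0.1, '.20f'))   # 0.10000000000000000555
print(format(1/3, '.20f'))   # 0.33333333333333331483

# The hex form exposes the periodic binary significand directly:
print((0.1).hex())           # 0x1.999999999999ap-4
```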
If you can grok that key point (representation error), here's the real understanding of this problem:
Deception 1: The parser has to convert the decimal string '0.1' into base 2, which produces the periodic significand '1001100110011...' (not accurately stored at any finite precision)... yet when you ask for it back, the formatter magically converts it to '0.1'. Why? Because the parser and formatter have symmetrical error :) This is kind of deceptive, because it makes it look like storage is accurate if you don't know what's going on under the hood.
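The round-trip symmetry is easy to observe (Python sketch; `repr` uses the shortest decimal string that parses back to the same double):

```python
x = float('0.1')            # parser: decimal string -> nearest binary double
print(repr(x))              # '0.1' -- formatter picks the shortest decimal
                            # that round-trips to the same double
print(format(x, '.17g'))    # 0.10000000000000001 -- the stored value leaks
                            # through once you ask for enough digits
assert float(repr(x)) == x  # parse/format errors cancel exactly
```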
Deception 2: Many combinations of arithmetic on simple rational decimal inputs also have rational outputs from the formatter, which furthers the illusion. For example, neither 0.1 nor 0.3 is exactly representable in base 2, yet 0.1 + 0.3 will be formatted as '0.4'. Why? It just happens that the arithmetic on those inaccurate representations added up to the same error that the parser produces when parsing '0.4', and since the parser and formatter produce symmetric error, the output is a rational decimal.
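A quick Python illustration of how the errors sometimes cancel and sometimes don't:

```python
# Neither 0.1 nor 0.3 is exact in binary, yet their errors happen to
# cancel: the sum rounds to exactly the double that '0.4' parses to.
print(0.1 + 0.3 == 0.4)   # True

# With 0.1 + 0.2 the rounding goes the other way:
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004
```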
Deception 3: Most of us grew up with calculators, or software calculator programs. These usually round displayed values to 10 significant digits by default, which is quite a bit less than the maximum decimal output of a double. This conceals the small representation errors output by the formatter after arithmetic on rational decimal inputs, which makes calculators look infallible when doing simple math.
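That calculator-style display rounding can be reproduced in one line of Python:

```python
x = 0.1 + 0.2
print(x)                   # 0.30000000000000004 (full round-trip output)
print(format(x, '.10g'))   # 0.3 -- rounded to 10 significant digits,
                           # the error vanishes, calculator-style
```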
FWIW, their "basic answer" page is the simplest I've seen that neither lies nor omits critical factors in the 0.1 + 0.2 problem. It's probably a good starting point that leaves some lingering questions tempting you to find out more.
If you want a thorough understanding you will want to look at representation-error, rounding-error, error-propagation, why they exist and how they interact.
The interplay between those three forms of numerical error in floating point will also let you see for yourself the limitations of floating point beyond 0.1 + 0.2.
I understand the "problem" from the hardware perspective, but I still don't accept their "basic answer" as reasonable.
> It’s not stupid, just different.
Over the past 50 years, my computer has adapted to how humans normally operate in nearly every other way. Why do they continue to use this system which produces results different from what any normal person expects?
> Computers use binary numbers because they’re faster at dealing with those
Computers are faster at dealing with all-caps ASCII, too, but we've accepted here that micro-optimization is less important than doing what people want. Most of the languages I use have even moved past fixnums. Why have we not improved real arithmetic since 1985?
> Over the past 50 years, my computer has adapted to how humans normally operate in nearly every other way. Why do they continue to use this system which produces results different from what any normal person expects?
Lisps and Lisp-derived languages, like Scheme, have had a proper numerical tower, including rationals, for decades now. Using reals is optional, but using rationals and everything else imposes an efficiency cost, so people make their decision. Implementing rationals in hardware would not necessarily make them more efficient; that is, if you think having rational support in hardware would help, you have to make the case. It isn't an automatic win:
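Python's standard library gives a partial taste of this (a sketch; Scheme's numerical tower is richer and built into the language): exact rationals stay exact until a float enters the computation, at which point the result drops to inexact arithmetic.

```python
from fractions import Fraction

x = Fraction(1, 3)
print(x + Fraction(1, 6))   # 1/2, exact rational arithmetic
print(x + 0.5)              # 0.8333333333333333 -- mixing in a float
                            # "contagiously" demotes the result to inexact
```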
The numerical tower is one of the features of Lisp that the Algol descendants have not yet stolen, and I don’t know why not.
“Efficiency” is tough to believe given all the other inefficient yet nice features that have been universally adopted, like Unicode, variable length lists, bigints, etc. In many dynamic languages, every method call is a hash table lookup, yet we’re expected to believe they don’t use Decimals by default because it would be too slow? In C++ I’d buy that excuse.
> Over the past 50 years, my computer has adapted to how humans normally operate in nearly every other way. Why do they continue to use this system which produces results different from what any normal person expects?
If you look at any low-level hardware, its behavior will be alien and unintuitive to most end users. Your computer has not adapted; only its ability to support sophisticated high-level software that abstracts these things away from end users has improved.
FPUs are no different. Calculators round output to 10 significant figures, and as a result most users think calculators are perfect, and that's usually fine... It's not fine when you do programming, because eventually you will need to understand the fundamental limitations of the hardware, and more generally of finite numerical computation, regardless of implementation.
> Why have we not improved real arithmetic since 1985?
Because most people actually want either fixed-point or floating-point arithmetic. Especially if you only consider the population who is willing to spend money to get better hardware to support their use cases.
> 1. Replace the default format for numbers with a decimal point in suitably high level languages with an infinite precision format.
Unlimited precision in any radix point based format does not solve representation error. If you don't understand why:
- How many decimal places does it take to represent 1/3? (infinite, AKA out of memory)
- Now how many places does it take to represent 1/3 in base 3? (1)
If you are truly only working with rational numbers and only using the four basic arithmetic operations, then only a variable-precision fractional representation (i.e. a numerator and denominator, which is indifferent to the underlying base) will be able to store any number without error (provided it fits in memory). Of course, if you are using transcendental functions or want to use irrational numbers, e.g. PI, then by definition there is no numerical solution that avoids error in any finite system.
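A quick way to see the difference (Python sketch using the stdlib `fractions` module):

```python
from fractions import Fraction

# Ten additions of 1/10 as exact fractions: no error, in any base.
print(sum([Fraction(1, 10)] * 10) == 1)   # True

# The same sum in binary floating point accumulates representation error:
print(sum([0.1] * 10))                    # 0.9999999999999999
print(sum([0.1] * 10) == 1.0)             # False
```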
> Most think of it as base-10 [...] I'm surprised nobody has made a decent fixed point lib that is widely used already.
Note that a fixed radix point alone does not solve the common issues with representing rational base-10 fractions. A base-10 fixed-radix solution would, and so would IEEE 754's decimal64 format, which eliminates representation error when working exclusively in base 10, e.g. finance. But neither is found in common hardware, and neither helps reduce propagation of error due to compounding with limited precision in any base.
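Software decimal arithmetic (e.g. Python's stdlib `decimal` module, a base-10 format in the spirit of the IEEE 754 decimal specs) shows both the fix and the remaining limitation:

```python
from decimal import Decimal, getcontext

# Base-10 storage: decimal fractions like 0.1 have no representation error.
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True

# But limited precision still causes error for values with no finite
# base-10 expansion, so rounding error and its propagation remain:
getcontext().prec = 28
third = Decimal(1) / Decimal(3)
print(third * 3)                  # 0.9999999999999999999999999999
print(third * 3 == Decimal(1))    # False
```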