Maybe my point is uninteresting, but I feel you haven't really understood it.
Combining two FP numbers with arithmetic can produce a result that FP can't represent exactly, and that's possible on every operation.
Combining two decimal numbers with non-division arithmetic never leads to that case. With division, sure, things are bad, but that's more of an exception than a rule to me.
This is because decimal numbers don't come with any inherent limit on digits, and it's a bit strange to tack a qualifier onto someone else's claim before making a counterargument.
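To make that concrete, here's a quick sketch using Python's standard decimal module (the specific values are just for illustration):

```python
from decimal import Decimal

# Binary FP: neither 0.1 nor 0.2 is exactly representable in base 2,
# so a single addition already picks up rounding error.
print(0.1 + 0.2 == 0.3)  # False

# Decimal: the same addition is exact.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Division is the exception: 1/3 has no finite decimal expansion, so the
# quotient is rounded to the context precision (28 digits by default).
print(Decimal(1) / Decimal(3))  # 0.3333333333333333333333333333
```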
Well, in part I didn't understand your point because you didn't mention division as an exception.
But even then, multiplication causes an explosion in digit count if you do it repeatedly.
When you specifically talk about numbers not being in computers, I think it's fair to talk about digit limits. Most real-world use of decimals is done with less precision than the 16 digits we default to in computers. Let alone growing to 50, 100, 200, etc as you keep multiplying numbers together to perform some kind of analysis. Nobody uses decimal like that. Real-world decimal is lossy for a large swath of multiplication.
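For instance, here's a rough sketch with Python's decimal module (the starting value and precision cap are arbitrary):

```python
from decimal import Decimal, getcontext

getcontext().prec = 100  # generous, but still finite

x = Decimal("1.0123456789")  # 11 significant digits
for step in range(5):
    x = x * x  # the exact product of two n-digit numbers needs about 2n digits
    print(step + 1, len(x.as_tuple().digits))
# digit counts roughly double each step (21, 41, 81, ...) until they hit
# the precision cap, at which point decimal starts rounding, i.e. losing precision
```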
I agree that if you're doing something like just adding numbers repeatedly in decimal, and those numbers have terminating (non-repeating) expansions, then you have a nice purity of never losing precision. That's worth something. But on the other hand, if you started with the same kind of numbers in floating point, say about 9 digits long, you could still add a million of them without losing precision.
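A quick way to check that, as a sketch (the specific random numbers don't matter, only the digit counts):

```python
import random

# a million integers of up to 9 digits each
nums = [random.randrange(10**9) for _ in range(1_000_000)]

total = 0.0
for n in nums:
    total += n  # every partial sum stays below 2**53, so each addition is exact

assert total == sum(nums)  # float64 lost nothing
```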
And nobody has brought up irrational numbers, which you dismissed in your other comment.
So in summary: decimal division is usually lossy; decimal multiplication is usually lossy; decimal addition and subtraction are lossless, but with the same kind of source numbers FP is usually lossless too.
> Combining two decimal numbers with non-division arithmetic never leads to that case.
Most numeric types are subject to overflow, which has similar effects. Most of the FP error people encounter is actually division error. For example, the constant 0.1 contains an implicit division: the "tenths" place is defined by division by 10. I think almost all perceptions of floating-point lossiness come from this fact.
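The rounding from that implicit division is visible in the stored value itself; in Python, for example:

```python
from decimal import Decimal

# Decimal(float) reveals the exact binary value the literal 0.1 actually stores
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```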