Using floats is perfectly OK, since fixed-point decimal (or whatever "exact" math operations you pick) will lead to rounding errors anyway (what about multiplying a monthly salary by 16/31, i.e. half a month?).
The problem with floats is that many people don't understand how they work well enough to handle rounding errors correctly.
Now there are some cases where floats don't cut it, and big ones. For example, summing a set of numbers with decimal parts will usually be screwed if you don't round the result. And not many people expect to round the results of additions, because those are "simple" operations. So you get errors in the end.
(I have written applications that handle billions of euros with floats and have found just as many rounding errors there as in any COBOL application)
OK, the salary example was a bit simplified; in my case it was about giving financial help to someone. That help is based on a monthly allowance and then split across the number of allocated days in the month; that's where the 16/31 comes from.
Now for your example, I see that float and decimal give the same result. Since I'm doing financial computations that end in a final figure, I'm OK with 2 decimals, and both your computations work fine.
The decimal module in Python gives you a number of significant digits, not a number of decimals. You'll end up using .quantize() to get to two decimals, which is rounding (so, no advantage over floats).
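To make that concrete, here is a minimal sketch (the $3210.55 amount and the 16/31 split are just reused from this thread's example): with the decimal module, a division still leaves you with a long quotient that you have to quantize by hand.
from decimal import Decimal, ROUND_HALF_UP
# the context precision counts significant digits, so the division below
# yields a long unrounded quotient that still needs an explicit rounding step
salary = Decimal("3210.55")
share = salary * 16 / 31                # many digits, not two decimals
print(share.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 1657.06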
As I said, as soon as you have division/multiplication you'll have to take care of rounding manually. But for addition/subtraction, decimal doesn't need rounding (which is better).
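A quick illustration of the addition case (a sketch with an arbitrary amount and loop count, nothing beyond standard decimal behaviour):
from decimal import Decimal
# repeated addition of an exact decimal amount stays exact: no rounding needed
total = Decimal("0.00")
for _ in range(1000000):
    total += Decimal("0.10")
print(total)   # 100000.00, exactly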
The fact is that everybody says "floats are bad" because rounding is tricky. But rounding is always possible. And my point is that rounding is tricky even with the decimal module.
And about bragging, I can tell you one more thing: rounding errors were absolutely not the worst of our problems. The worst problem is being able to explain to the accountant that your computation is right. That's the hard part, 'cos some computations involve hundreds of business decisions. When you end up on a rounding error, you're actually happy, 'cos it's easy to understand, explain and fix. And don't get me started on how laws (yes, the actual texts) sometimes spell out how rounding rules should work.
total = 0
for _ in range(0, 10000000):
    total += 0.1   # accumulate ten million binary-float copies of 0.1
print(round(total * 1000, 2))
what should this code print? what does it print?
I mean, sure, this is a contrived example. But can you guarantee that your code doesn't do anything similarly bad? Maybe the chance is tiny, but still: wouldn't you like to know for sure?
We agree: on additions, floats are tricky. But on divisions and multiplications, they're not any worse. Dividing something by 3 will end up with an infinite number of decimals that you'll have to round at some point (unless we use what you proposed, fractions; in that case it's a completely different story), as the comparison below shows.
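Here is a small sketch with an arbitrary amount: dividing by 3 forces an explicit rounding step with either representation.
from decimal import Decimal, ROUND_HALF_UP
# dividing by 3 leaves an infinite decimal expansion either way,
# so both the float and the Decimal need an explicit rounding step
print(round(100.00 / 3, 2))                                      # 33.33
print((Decimal("100.00") / 3).quantize(Decimal("0.01"),
                                       rounding=ROUND_HALF_UP))  # 33.33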
No, exact precision arithmetic can do that 16/31 example without loss of precision:
from fractions import Fraction
# salary is $3210.55, held exactly as 321055/100
salary = Fraction(321055, 100)
monthlyRate = Fraction(16, 31)
print(salary * monthlyRate)   # exact result, no rounding yet
This will give you an exact result. Now, at some point you'll have to round to the nearest cent (or whatever), true. However, you don't have to round between individual calculations, hence rounding errors cannot accumulate and propagate.
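Continuing the sketch above, the single rounding step can happen once, at the very end (the cents formatting here is just illustrative):
from fractions import Fraction
salary = Fraction(321055, 100)          # $3210.55, exact
payout = salary * Fraction(16, 31)      # exact intermediate value
cents = round(payout * 100)             # one rounding step, half-to-even
print(f"${cents // 100}.{cents % 100:02d}")   # $1657.06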
The propagation of errors is the main challenge with floating point numbers (regardless of which base you use). The theory is well understood (in the sense that we can analyse an algorithm and predict upper bounds on the relative error), but it is not necessarily intuitive and is easy to get wrong.
Decimal floating-point circumvents the issue by just not introducing errors at all: money can be represented exactly with decimal floating point (barring very exotic currencies), therefore errors also can't propagate. Exact arithmetic takes the other approach where computations are exact no matter what (but this comes at other costs, e.g. speed and the inability to use transcendental functions such as exp).
For binary floating point, that doesn't work. It introduces errors immediately, since it can't represent typical money amounts exactly, and these errors may propagate easily.
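A tiny illustration of that point (nothing here beyond standard float and decimal behaviour):
from decimal import Decimal
# binary floating point already misrepresents 0.10 on input...
print(f"{0.10:.20f}")                    # 0.10000000000000000555
print(0.10 + 0.20 == 0.30)               # False
# ...while decimal floating point stores and adds it exactly
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))   # True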
Of course, if you use fractions then, we agree, no error is introduced or accumulated over the computations, which is better. The code base I'm talking about was Java, 10 years ago. I was not aware of fractions at that time; there was only BigDecimal, which was painful to work with (the reason we ditched it at the time).
It's mostly painful because Java doesn't allow custom types to use operators, which I think was a maybe reasonable principle applied way too strictly. The same applies to any Fraction type you'd implement in Java.
Still, I'll take "verbose" over "error-prone and possibly wrong".