Why would you say it's an error to use a float for currency? I would imagine it's better to use a float for calculations and then round when you need to report a value, rather than accumulate a bunch of rounding errors while doing computations.
It is widely accepted that using floats for money[1] is wrong because floating point numbers cannot guarantee precision.
The fact that you ask is a very good case in point though: Many programmers are not aware of this issue and would maybe not question the "wisdom" of the AI code generator. In that sense, it could have a similar effect to blindly copy-pasted answers from SO, just with even less friction.
[1] Exceptions may apply to e.g. finance mathematics where you need to work with statistics and you're not going to expect exact results anyway.
Standard floats cannot represent very common numbers such as 0.1 exactly so they are generally disfavored for financial calculations where an approximated result is often unacceptable.
> For example, the non-representability of 0.1 and 0.01 (in binary) means that the result of attempting to square 0.1 is neither 0.01 nor the representable number closest to it.
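A quick check in Python (assuming IEEE-754 binary64 doubles, as on typical platforms) makes that concrete:

print(0.1 * 0.1 == 0.01)   # False: the product is not the double closest to 0.01
print(0.1 * 0.1)           # 0.010000000000000002 on a typical build
print(0.01)                # 0.01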
I fail to see your point. Floats are best practice for many financial applications, where model error already eclipses floating point error and performance matters.
You don't want to kick the can down to the floating point standard. Design for deterministic behavior. Find the edge cases, go over it with others and explicitly address the edge case issues so that they always behave as expected.
GCP on the other hand has standardized on units + nanos. They use this for money and time. So a unit would be 1 second or 1 dollar, and the nanos field allows more precision. You can see an example here with the unitPrice field: https://cloud.google.com/billing/v1/how-tos/catalog-api#gett...
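For illustration, here's roughly how such a split could be computed (a small Python sketch; the helper name and values are mine, not from the API):

from decimal import Decimal

def to_units_and_nanos(amount):
    # Split a decimal amount into whole units and nanos (billionths of a unit),
    # in the spirit of the units/nanos convention; illustrative only.
    amount = Decimal(amount)
    sign = -1 if amount < 0 else 1
    magnitude = abs(amount)
    units = int(magnitude)
    nanos = int((magnitude - units) * 1_000_000_000)
    return sign * units, sign * nanos

print(to_units_and_nanos("2.75"))   # (2, 750000000)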
When you are approximating fixed-point using floating-point there is a lot more you need to do correctly other than rounding. Your representation must have enough precision and range for the initial inputs, intermediate results, and final results. You must be able to represent all expected numbers. And so on. There is a lot more involved than what you mentioned.
Of course, if you are willing to get incorrect results, such as in play money, this may be okay.
When did mdellavo say anything about floating point? You can, and should, use plain old fixed-point arithmetic for currency. That’s what he means by “microdollar”.
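A minimal sketch of what fixed-point “microdollar” arithmetic looks like (the constant and values here are made up for illustration):

MICRO = 1_000_000               # microdollars per dollar

price = 19_990_000              # $19.99 expressed in microdollars
qty = 3
total = price * qty             # exact integer arithmetic, no rounding error
dollars, rest = divmod(total, MICRO)
print(f"${dollars}.{rest // 10_000:02d}")   # $59.97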
Using float for currency calculations is how you accumulate a bunch of rounding errors. Standard practice when dealing with money is to use an arbitrary-precision numerical type.
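In Python, for example, the decimal module is a common choice; a two-line comparison (a sketch, not production code):

from decimal import Decimal

print(sum([0.1] * 10))              # 0.9999999999999999 with binary floats
print(sum([Decimal("0.1")] * 10))   # 1.0 -- decimal addition of these amounts is exact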
Because it's an error to use floats in almost every situation. And currency is something where you don't want rounding errors, period. The more I've learned about floating point numbers over the years, the less I want to use them. Floats solve a specific problem, and they're a reasonable trade-off for that kind of problem, but the problem they solve is fairly narrow.
Using float is perfectly OK, since using fixed-point decimal (or whatever "exact" math operations) will lead to rounding errors anyway. What about multiplying a monthly salary by 16/31 (half a month)?
The problem with float is that many people don't understand how they work well enough to handle rounding errors correctly.
Now there are some cases where floats don't cut it, and big ones. For example, summing a set of numbers (with decimal parts) will usually be screwed if you don't round it. And not many people expect to round the results of additions because they are "simple" operations. So you get errors in the end.
(I have written applications that handle billions of euros with floats and have found just as many rounding errors there as in any COBOL application)
OK, the salary example was a bit simplified; in my case it was about giving financial help to someone. That help is based on a monthly allowance and then split in the number of allocated days in the month, that's for the 16/31.
Now for your example, I see that float and decimal just give the same result. Provided I'm doing financial computations of a final number, I'm ok with 2 decimals. And both your computations work fine.
The decimal module in Python gives you a number of significant digits, not a number of decimals. You'll end up using .quantize() to get to two decimals, which is rounding (so, no advantage over floats).
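To make that concrete, a small sketch reusing the 16/31 salary example from upthread:

from decimal import Decimal, getcontext

getcontext().prec = 10                    # precision = significant digits, not decimal places
amount = Decimal("3210.55") * Decimal(16) / Decimal(31)
print(amount)                             # 1657.058065 (10 significant digits)
print(amount.quantize(Decimal("0.01")))   # 1657.06 -- explicit rounding to two decimals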
As I said, as soon as you have division/multiplication you'll have to take care of rounding manually. But for addition/subtraction, decimal doesn't need rounding (which is better).
The fact is that everybody says "floats are bad" because rounding is tricky. But rounding is always possible. And my point is that rounding is tricky even with the decimal module.
And about bragging, I can tell you one more thing: rounding errors were absolutely not the worst of our problems. The worst problem is being able to explain to the accountant that your computation is right. That's the hard part 'cos some computations imply hundreds of business decisions. When you end up on a rounding error, you're actually happy 'cos it's easy to understand, explain and fix. And don't get me started on how laws (yes, the texts) sometimes spell out how rounding rules should work.
sum = 0
for i in range(0, 10000000):
    sum += 0.1
print(round(sum*1000, 2))
what should this code print? what does it print?
I mean, sure, this is a contrived example. But can you guarantee that your code doesn't do anything similarly bad? Maybe the chance is tiny, but still: wouldn't you like to know for sure?
We agree: on additions, floats are tricky. But still, on divisions and multiplications, they're not any worse. Dividing something by 3 will end up with an infinite number of decimals that you'll have to round at some point (except if we use what you proposed, fractions; in that case that's a completely different story).
No, exact precision arithmetic can do that 16/31 example without loss of precision:
from fractions import Fraction
# salary is $3210.55
salary = Fraction(321055, 100)
monthlyRate = Fraction(16, 31)
print(salary * monthlyRate)
This will give you an exact result. Now, at some point you'll have to round to the nearest cent (or whatever), true. However, you don't have to round between individual calculations, hence rounding errors cannot accumulate and propagate.
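Concretely, continuing the snippet above (a sketch; the formatting at the end is just for display):

cents = round(salary * monthlyRate * 100)    # a single rounding step, at the very end
print(f"${cents // 100}.{cents % 100:02d}")  # $1657.06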
The propagation of errors is the main challenge with floating point numbers (regardless of which base you use). The theory is well understood (in the sense that we can analyse an algorithm and predict upper bounds on the relative error), but it is not necessarily intuitive and is easy to get wrong.
Decimal floating-point circumvents the issue by just not introducing errors at all: money can be represented exactly with decimal floating point (barring very exotic currencies), therefore errors also can't propagate. Exact arithmetic takes the other approach where computations are exact no matter what (but this comes at other costs, e.g. speed and the inability to use transcendental functions such as exp).
For binary floating point, that doesn't work. It introduces errors immediately since it can't represent money well and these errors may propagate easily.
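A two-line illustration of the difference (Python, assuming IEEE-754 doubles):

from decimal import Decimal

print(0.1 + 0.2 == 0.3)                                   # False: binary floats err immediately
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True: these amounts are exact in decimal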
Of course, if you use "fractions" then, we agree, no error will be introduced or accumulated over the computations, which is better. The code base I'm talking about is Java, 10 years ago. I was not aware of fractions at that time. There was only BigDecimal, which was painful to work with (the reason why we ditched it at the time).
It's mostly painful because Java doesn't allow custom types to use operators, which I think was a maybe reasonable principle applied way too strictly. The same applies to any Fraction type you'd implement in Java.
Still, I'll take "verbose" over "error-prone and possibly wrong".