
Floating point 101. And it almost never matters.


> And it almost never matters.

I take issue with this. Drift from floating point inaccuracies can compound quickly and dramatically affect results. Sure, if you're just looping over a 1,000-item list it's not going to matter that JavaScript is representing that as a float/double, but in a wide variety of contexts, such as anything to do with money, it absolutely does matter.
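A minimal sketch of the kind of compounding drift I mean (plain JS, nothing domain-specific): summing a million one-cent increments as doubles drifts off the exact total, while integer cents stay exact.

    // 0.01 has no exact binary representation, so each addition
    // carries a tiny error that compounds over the loop.
    let totalDollars = 0;
    for (let i = 0; i < 1000000; i++) {
      totalDollars += 0.01;
    }
    console.log(totalDollars);           // close to, but not exactly, 10000
    console.log(totalDollars === 10000); // false

    // The usual fix: keep money in integer minor units (cents).
    let totalCents = 0;
    for (let i = 0; i < 1000000; i++) {
      totalCents += 1;
    }
    console.log(totalCents / 100);       // 10000, exactly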


Another example is open world games. They have to keep world coordinates centered on the player because for large worlds, the floating point inaccuracy in the far reaches of the world starts to really matter. An example of a game that does this is Outer Wilds.
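A rough illustration of the falloff, assuming positions are stored as 32-bit floats as most engines do (Math.fround rounds a double to the nearest float32 value):

    // Near the origin a millimetre-scale step is representable;
    // a million units out the same step rounds away to nothing,
    // because the gap between adjacent float32 values grows with magnitude.
    const nearOrigin = Math.fround(1.0);
    console.log(Math.fround(nearOrigin + 0.001) === nearOrigin); // false: the step registers

    const farOut = Math.fround(1000000.0);
    console.log(Math.fround(farOut + 0.001) === farOut);         // true: the step is lost

Recentering the world on the player keeps coordinates small, so the representable gaps stay fine-grained where the player actually is.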


caveat: this is not my direct experience so i might be wrong -- but someone who was doing a different master's project at the same time as mine was building a mini on-rails video game and mentioned this.

apparently it's also because of the "what is up?" question.

e.g. in Outer Wilds ... how do you determine which way is "up" when "up" for the player can be any direction?


Oh for 9999999999999999 pennies! I wouldn't care if I got one less than I was supposed to! :-)
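For what it's worth, that amount can't even be stored as written: the literal itself rounds to a neighbouring value in a 64-bit double (easy to check in any JS console).

    console.log(9999999999999999);                       // 10000000000000000
    console.log(9999999999999999 === 10000000000000000); // true
    console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991 (2^53 - 1): beyond this,
                                                         // not every integer is exactly representable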


You say that, and then an army of idiots out in the real world continues to use floats for financial data and other large integers.

I ran into a site that broke because they were using 64-bit Unix nanotime in JavaScript and comparing values that had been silently truncated. You see this in JS, Python, etc. constantly.
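A sketch of that failure mode with made-up timestamps (the values are hypothetical, the shape isn't): nanosecond Unix times need roughly 60+ bits, well past the 53 bits of integer precision a double gives you.

    // As BigInt the two instants stay distinct; pushed through Number they collapse,
    // because doubles near 1.7e18 are spaced 256 apart.
    const a = 1700000000123456789n;
    const b = 1700000000123456790n;
    console.log(a === b);   // false: exact, correctly distinguishable

    const x = Number(a);    // rounds to the nearest representable double
    const y = Number(b);
    console.log(x === y);   // true: two different instants compare equal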


For the JS case, that's really JavaScript's fault, since double-precision float ("number") is the only built-in numeric type, other than BigInt, which has only existed for a few years.


Not just floating point, but 64-bit IEEE 754 specifically. The 53-bit mantissa falls just short of 16 decimal digits, so the last decimal digit of this number can't be represented exactly. 80 bits would suffice for this particular example, but would fail the same way with a longer number.

BTW this is one of the reasons why you should never represent money as a float, except when making a rough estimate. Another, bigger reason is that 0.1 is an infinite repeating fraction in binary, so it can't be represented exactly.
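A quick way to see both points from a JS console (the 80-bit format's 64-bit mantissa is the relevant width there):

    // Decimal digits carried by each mantissa width:
    console.log(53 * Math.log10(2)); // ~15.95 digits for a 64-bit double
    console.log(64 * Math.log10(2)); // ~19.27 digits for 80-bit extended precision
    // ...so a 16-digit value like 9999999999999999 falls just outside
    // the range of integers a double can represent exactly.

    // And 0.1 itself is stored as the nearest representable binary fraction:
    console.log((0.1).toFixed(20));  // 0.10000000000000000555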


For me, I encounter a floating point bug/issue every 4 years or so. So "almost never" sounds about right to me.


I encounter bugs around this semi-rarely, but most of my career has been spent building tools for data analytics. While it's rare that I hit bugs tied to floating point, I frequently need to be aware of floating point math and whether a float is acceptable for a given value. The rareness of the bug has more to do with it being a rookie mistake that won't make it past code review than with "it doesn't matter," as the comment implies.



