> no pattern could ever match a term containing a floating point number?
No equivalence matching on floating point numbers seems like a much healthier approach, yes. I don't expect people to have huge problems with "x greater than 0.0" or "z less than 16.35" but "p is exactly 8.9" suggests muddled thinking.
The floats are very strange. While machine integers are already a bit weird compared to the natural numbers we learned in school, the floats are much stranger, so I expect code which tries to test equivalence on these values is going to keep running into trouble.
We know that humans, including human programmers, do not appreciate how weird floats really are, so I think telling programmers they just can't perform these matches will have a better outcome than allowing them and then confusing people when a match occasionally has totally unexpected consequences, because a+epsilon turned out to be equivalent to b even though mathematically a+epsilon != b.
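A small Rust sketch of the muddle I mean, assuming IEEE 754 binary floats (the constants are just illustrative):

```rust
fn main() {
    // 8.9 has no exact binary representation, so "the" value you get
    // depends on how you arrived at it: the nearest f32 and the nearest
    // f64 are different numbers.
    assert!(8.9_f32 as f64 != 8.9_f64);

    // The classic version of the same surprise.
    assert!(0.1 + 0.2 != 0.3);
}
```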
you have to get rid of order comparisons too, though, or people will just replace a == b with a >= b && a <= b
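i.e. something like this sketch (the function name is made up, but the relation it computes is exactly == for every pair of f64s, nans included):

```rust
// exact equality smuggled in through ordered comparison: a nan operand
// makes both comparisons false, just as == would be false, so this is
// the same relation as a == b for all inputs
fn totally_not_exact_equality(a: f64, b: f64) -> bool {
    a >= b && a <= b
}
```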
also it seems like a compiler attempting to determine whether two pieces of code are equivalent (so it can throw one away) needs to be able to test whether two constants in them could ever produce different computational results; this is important for code-movement optimizations and for reunifying the profligate results of c++ template expansion
similarly, a dataflow framework like observablehq needs to be able to tell whether an observable-variable update needs to propagate (because it could change downstream results) or not; for that purpose it even needs to be able to distinguish different nans. like the compiler, it needs the exact-bitwise-equality relation rather than ordinary arithmetic equality
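in rust terms it wants something like this (the nan-payload construction is just illustrative):

```rust
// exact-bitwise-equality: the relation a change-propagation check wants,
// rather than arithmetic == (which reports every nan as unequal to itself
// and would therefore force a spurious repropagation on every update)
fn bitwise_eq(a: f64, b: f64) -> bool {
    a.to_bits() == b.to_bits()
}

fn main() {
    let nan_a = f64::NAN;
    let nan_b = f64::from_bits(nan_a.to_bits() ^ 1); // same kind of nan, different payload bit
    assert!(bitwise_eq(nan_a, nan_a));  // a value bitwise-equals itself, nan or not
    assert!(!bitwise_eq(nan_a, nan_b)); // and distinct nan payloads stay distinguishable
}
```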
where i think we agree is that floating-point arithmetic is probably a bad default for most people most of the time because it brings in all kinds of complexity most programmers don't even suspect
your comment reads to me as 'i don't understand floating point equality and therefore no human does so no human should have access to it'
but there are more things in heaven and earth than are dreamed of in your philosophy, tialaramex
My fear isn't about floating point equality but about equivalence. As we discussed repeatedly, nothing changed for equality: -0.0 == 0.0, just as -0 == 0 for integers. Equality works fine, though it may be surprising in some cases for floating point values, because sometimes a != a and sometimes b + 1 == b.
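For example (a quick Rust illustration of those three claims; 2^53 stands in for "a sufficiently large b"):

```rust
fn main() {
    // Nothing changed for equality: -0.0 still compares equal to 0.0.
    assert!(-0.0_f64 == 0.0_f64);

    // a != a whenever a is a NaN.
    let a = f64::NAN;
    assert!(a != a);

    // b + 1 == b once b is large enough that 1.0 falls below the gap
    // between adjacent representable values (here b = 2^53).
    let b = 9007199254740992.0_f64;
    assert!(b + 1.0 == b);
}
```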
"Exact-bitwise-equality" is yet a further different thing from equivalence. Is this what Erlang's =:= operator does? Erlang's documentation described it as "Exactly equal to" which is a very silly description (implying maybe == is approximately equal to), presumably there's formal documentation somewhere which explains what they actually meant but I didn't find it.
Presumably Exact-bitwise-equality is always defined in Erlang? In a language like Rust or C++ that's Undefined in lots of cases so this would be a terrible idea. I still think it's better not to prod this particular dragon, but you do you.