He will keep failing to replace IEEE floating point as long as he insists on making NEGATIVE infinity the same as POSITIVE infinity.
Also, the IEEE 754 floating-point standard guarantees the results of addition, subtraction, multiplication, division, and square root to be the exact correctly rounded value, i.e., a deterministic result, contrary to what he says.
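If you want to check the correct-rounding guarantee yourself, here's a quick Python sketch (stdlib only; `math.nextafter` needs Python 3.9+) verifying that 1.0/3.0 really is the nearest representable double to the exact rational 1/3:

```python
import math
from fractions import Fraction

# IEEE 754 requires division to return the correctly rounded result,
# so 1.0/3.0 must be the double closest to the exact value 1/3.
computed = 1.0 / 3.0
exact = Fraction(1, 3)

# The neighbouring doubles on either side of the computed result:
below = math.nextafter(computed, 0.0)
above = math.nextafter(computed, 1.0)

err = abs(Fraction(computed) - exact)
assert err <= abs(Fraction(below) - exact)
assert err <= abs(Fraction(above) - exact)
```

Neither neighbouring double is closer to the true value, which is exactly what "correctly rounded" means.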
Not so sure. I mean, yes, what you say is right, but there are problems nevertheless, see eg Wikipedia:
> Reproducibility
> The IEEE 754-1985 allowed many variations in implementations (such as the encoding of some values and the detection of certain exceptions). IEEE 754-2008 has strengthened up many of these, but a few variations still remain (especially for binary formats). The reproducibility clause recommends that language standards should provide a means to write reproducible programs (i.e., programs that will produce the same result in all implementations of a language), and describes what needs to be done to achieve reproducible results.
Reproducibility is an orthogonal issue to Posits vs IEEE754.
Most developers prefer speed over reproducibility, and are encouraged to use denormals-to-zero, fast-math optimizations, fused multiply-add, approximate square-root instructions, and whatever else is available to get results faster.
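A tiny Python illustration of why those optimizations cost reproducibility: floating-point addition is not associative, so a compiler that reorders a sum (which fast-math flags permit) can change the result:

```python
# Floating-point addition is not associative, so optimizations that
# reorder a sum (as fast-math flags allow) can change the result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

assert left != right
```

Both orderings are correctly rounded step by step; the guarantee is per operation, not per expression, which is why reordering matters.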
The IEEE754 standard provides a guarantee for deterministic results, and many multi-precision and interval arithmetic libraries depend on this guarantee to be true to function properly.
IEEE754 defines unique -infinity and +infinity values, and any "new and improved" standard that breaks this axiom is simply incompatible with all existing floating-point libraries written in the last 30+ years.
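As a concrete illustration (a quick Python sketch), library semantics genuinely lean on the two infinities being distinct: directed limits like atan, and the sign of quotients that underflow to zero, both depend on it:

```python
import math

# IEEE 754 defines distinct negative and positive infinities:
assert math.inf != -math.inf

# atan treats them as limits from opposite directions:
assert math.atan(math.inf) == math.pi / 2
assert math.atan(-math.inf) == -math.pi / 2

# Dividing by the two infinities yields zeros of opposite sign:
assert math.copysign(1.0, 1.0 / math.inf) == 1.0
assert math.copysign(1.0, 1.0 / -math.inf) == -1.0
```

Collapse the two infinities into one and every one of these identities has to be renegotiated.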
"These claims pander to Ignorance and Wishful Thinking." Kahan (main author of IEEE754) on Posits claims.
You're free to voice your own opinion, but I take some issue with people asserting theirs as if they speak for "most developers". Especially if it comes from a new account with a name like "Gustafnot". That doesn't exactly scream "unbiased" to me.
Having written scientific numerical software for decades, having been in situations where I want high speed or I want reproducibility, and having worked with hundreds of developers, I agree wholeheartedly with Gustafnot: the vast majority of developers prefer performance over bitwise reproducibility. Lose a few bits here and there, and most don’t care, because they treat floats as fuzzy to begin with (and they almost never care about reproducibility, since it’s very hard to obtain due to compilers, libraries, etc.). But slow down their code, and they sure notice quickly.
If you really want to claim the opposite, do you have evidence? Or experience that it’s true?
He argues that underflow/overflow are usually caused by bugs in the code. Having so many special numbers means giving up part of the numeric representation (there are a lot of NaN encodings in IEEE-754) and adds hardware overhead to deal with all the special cases. That small hardware overhead can add up when working with thousands of FPUs.
The vast majority of applications doesn't need fine control of the FPU. There will always be hardware for the few applications that need the IEEE-754 features.
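To put a number on "a lot of NaN": in binary64, every bit pattern with an all-ones exponent and a nonzero significand decodes to NaN, which is 2 * (2^52 - 1) distinct encodings. A quick Python check with `struct`:

```python
import struct

# In IEEE-754 binary64, any pattern with exponent bits all ones and a
# nonzero significand is a NaN: 2 * (2**52 - 1) encodings in total.
nan_patterns = 2 * (2**52 - 1)

for bits in (0x7FF8000000000000,   # the usual quiet NaN
             0x7FF0000000000001,   # a NaN with a minimal payload
             0xFFF8DEADBEEFCAFE):  # sign bit set, arbitrary payload
    (x,) = struct.unpack("<d", struct.pack("<Q", bits))
    assert x != x  # NaN is the only value unequal to itself
```

That's roughly 9 quadrillion bit patterns spent on NaN, which is the representation cost he's pointing at.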