
The default PG type for a decimal literal, `numeric`, has almost arbitrary precision.

    select pg_typeof(9999999999999999.0); 
    -- numeric

    select 9999999999999999.0::double precision 
           - 9999999999999998.0::double precision; 
    -- 2
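For contrast, keep both operands `numeric` and the subtraction stays exact:

    select 9999999999999999.0 - 9999999999999998.0;
    -- 1.0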
More interesting, perhaps, is mixing `real` (aka float32) with `numeric`:

    select 9999999999999999.0::real - 9999999999999998.0;
    -- 272564226 (?! can anyone explain?)

    select 9999999999999999.0::real;
    -- 10000000300000000
Wat


The `real` type (float32) has only 24 bits of significand precision, so near 10^16 adjacent representable values are 2^30 (about a billion) apart. Converting `9999999999999999.0`, `10000000000000000`, or even `10000000300000000` therefore yields the same 32-bit float: `10000000272564224` (= 9313226 × 2^30).
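
You can check in SQL that all three decimal literals collapse to the same stored float4 (`t` meaning true; this assumes the usual IEEE 754 round-to-nearest conversion):

    select 9999999999999999::real  = 10000000000000000::real,
           10000000000000000::real = 10000000300000000::real;
    -- t | t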

For some reason Postgres prints it as `10000000300000000`. It uses a heuristic to pick a "pretty" decimal that converts back to the actual stored value, and here it isn't smart enough to pick `10000000000000000`. Some heuristic like this is needed so that something like `0.3` doesn't print as its actual stored value, `0.300000011920928955078125`, which would be confusing.
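
You can watch that in action: on a recent PostgreSQL (12 or later, which prints shortest round-trip decimals), casting the `real` up to `double precision` prints enough digits to show the stored value isn't exactly 0.3:

    select 0.3::real, 0.3::real::double precision;
    -- 0.3 | 0.30000001192092896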

You can check all this here: https://www.h-schmidt.net/FloatConverter/IEEE754.html


    #include <stdio.h>

    int main(void) {
      /* The cast rounds the constant to the nearest 32-bit float,
         10000000272564224, then converts it back to an integer. */
      long long a = (float)9999999999999999;
      long long b = 9999999999999998;  /* stays exact: fits in 63 bits */
      printf("%lld - %lld = %lld\n", a, b, a - b);
      return 0;
    }

produces

    10000000272564224 - 9999999999999998 = 272564226



