The correct answer. In particular, up to 2^24 a float32 behaves like a regular integer, which can be important in some cases. Above that value, integers start to have gaps (not every integer is representable), and you get strange behavior such as (n + 1) + 1 not being equal to n + 2.
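If you want to see it for yourself, here's a minimal C sketch (assuming float is IEEE-754 binary32, which it is on essentially every current platform):

    #include <stdio.h>

    int main(void) {
        float n = 16777216.0f;   /* 2^24: the last point where float32 is exact for every integer */
        float a = n + 1.0f;      /* 16777217 is not representable; rounds back to 16777216 */
        a = a + 1.0f;            /* rounds back to 16777216 again */
        float b = n + 2.0f;      /* 16777218 is representable */
        printf("%.1f vs %.1f, equal: %d\n", a, b, a == b);  /* 16777216.0 vs 16777218.0, equal: 0 */
        return 0;
    }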
But why are 32-bit floats relevant? JS, the language famous for representing ints as floats, uses 64-bit floats. Who puts ints into 32-bit floats and cares about precision loss?
I didn't say JavaScript anywhere in my comment; this has no relation to JavaScript. Rendering is typically done in tiles, on the GPU, and the best precision that can be used across all GPUs is float32. Some GPUs don't implement float64.