The type signature of the function will also enforce that.

You can also use generics to accept a wider variety of numbers, e.g.:

  fn takes_a_float_like_number<T>(number: T) where T: Into<f64> {
    // `number` is now a 64-bit IEEE 754 floating-point value
    // for the rest of this scope
    let number: f64 = number.into();
  }
This will only compile if your type can be converted to an f64 without losing information, since std only provides the lossless conversions. So takes_a_float_like_number(500_u32) will compile but takes_a_float_like_number(500_i64) will not. The error message will be a wee bit obtuse, something like "the trait `Into<f64>` is not implemented for `i64`".
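
For example, a rough sketch of the call site (the exact error wording depends on your rustc version):

  fn main() {
      // u32 -> f64 is lossless, so this satisfies T: Into<f64>
      takes_a_float_like_number(500_u32);

      // This line would not compile; rustc reports roughly:
      // "the trait bound `i64: Into<f64>` is not satisfied",
      // because i64 -> f64 can lose precision, so std provides no impl.
      // takes_a_float_like_number(500_i64);
  }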


How would you improve that error?


"Literal {} is too large for type {}" (or thereabouts) is how this is handled in C/C++ compilers. I think it is enabled by -Wall or -Wpedantic in clang.

In other languages I've seen something like "type {} can't represent literal of value {}" which is a bit more generic and applies to things like floats/signed ints and values that are too small/negative.
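
For what it's worth, rustc already phrases its deny-by-default lint for plain out-of-range literals roughly that way (paraphrasing from memory, so the exact wording may differ by version):

  fn main() {
      let x: u8 = 500;
      // error: literal out of range for `u8`
      //  note: the literal `500` does not fit into the type `u8`
      //        whose range is `0..=255`
  }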



