
Ok, let's assume you know how long an operation is supposed to take, but want to stop it from leaking timing information when it returns faster.

To accomplish this, you would read a high-resolution timer before the operation and again after it, subtract the two, and then sleep for whatever remains of the expected duration. And maybe that might work.

But you're not running in kernel space.

The timing information you'd get from this tactic would be coarse and noisy, and may reintroduce the very signal you were trying to hide. That's especially true if you're running on multiple threads or physical processors and aren't careful to constrain the operation and the measurements to the same core.

All it will do is slow your software down and maybe not actually help.

And that's the best case scenario!

Better to just use constant-time algorithms.



Yeah, that's pretty much what I was asking.

  start = gettime()
  variable_time_crypto_operation()
  end = gettime()
  if (end - start < 1sec)
      sleep(1sec - (end - start)) // loop if sleep() can return early
If the operation takes a couple of ms and the sleep pads it out to 1sec, then how much information would realistically leak here? Sure, the sleep might not be perfect, but I'd imagine the actual timing difference would get lost in the noise of the much longer sleep.
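
For concreteness, here's a minimal C sketch of that padding idea, assuming POSIX clock_gettime and nanosleep; the 1-second budget and the names variable_time_crypto_operation/padded_operation are illustrative, not from any real library:

  #include <errno.h>
  #include <stdint.h>
  #include <time.h>

  void variable_time_crypto_operation(void);  /* hypothetical */

  void padded_operation(void) {
      struct timespec start, end;
      clock_gettime(CLOCK_MONOTONIC, &start);

      variable_time_crypto_operation();

      clock_gettime(CLOCK_MONOTONIC, &end);
      int64_t elapsed = (end.tv_sec - start.tv_sec) * 1000000000LL
                      + (end.tv_nsec - start.tv_nsec);
      int64_t left = 1000000000LL - elapsed;  /* <= 0 if we overran the budget */

      /* nanosleep can be interrupted by a signal, so loop on the
         remainder it reports until the whole budget is used up. */
      while (left > 0) {
          struct timespec req = { left / 1000000000LL, left % 1000000000LL };
          struct timespec rem;
          if (nanosleep(&req, &rem) == 0 || errno != EINTR)
              break;  /* finished, or a real error */
          left = (int64_t)rem.tv_sec * 1000000000LL + rem.tv_nsec;
      }
  }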


Somewhat tangential, but there are much better options if you're looking for opportunities for optimization. This is literally trying to improve efficiency by skimping on safety features, like trying to save on vehicle weight by removing "unnecessary" seatbelts or crumple zones. Eliminating side channels convincingly is very difficult; you're better off taking the tiny performance hit and virtually* eliminating that vector instead of trying to come up with a novel low-density seatbelt.

(I say virtually, because even constant-time crypto isn't bulletproof: GoFetch, a recent Apple M-series CPU vulnerability, broke the "constant" part through a quirk of the prefetcher. Side channels are hard; no need to make them harder.)


Is a constant-time algorithm just an algorithm that completes the entire calculation before returning, even if it knows the correct answer sooner?


"constant-time" isn't a literally technically correct descriptor, it's a short-hand

What matters is that the execution time, which memory addresses are accessed, and so on are independent of any secret inputs.
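
The classic illustration is comparing two byte strings where one is secret. A short-circuiting comparison leaks the position of the first mismatch through its run time, while the usual constant-time idiom touches every byte and never branches on the data. A minimal sketch in C (a standard technique, not taken from this thread):

  #include <stddef.h>
  #include <stdint.h>

  /* Leaky: returns at the first mismatch, so run time reveals
     how many leading bytes of the guess were correct. */
  int leaky_compare(const uint8_t *a, const uint8_t *b, size_t n) {
      for (size_t i = 0; i < n; i++)
          if (a[i] != b[i]) return 0;
      return 1;
  }

  /* Constant-time idiom: always reads all n bytes, accumulates
     differences with XOR/OR, and never branches on the data. */
  int ct_compare(const uint8_t *a, const uint8_t *b, size_t n) {
      uint8_t diff = 0;
      for (size_t i = 0; i < n; i++)
          diff |= a[i] ^ b[i];
      return diff == 0;
  }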


So theoretically, using timers and adding sleeps could solve the problem, but practically there are better ways to achieve the goal of leaking no timing information?


Yes. I wrote this a few years ago as an introduction to writing constant-time algorithms:

https://soatok.blog/2020/08/27/soatoks-guide-to-side-channel...



