
assert()s prove that something isn't happening.

Types prove that something cannot happen.



Types prove that something cannot happen in the formal system the program represents. In the real world, though, radiation can flip memory bits into impossible positions, drive blocks can fail and spit random garbage, IPC messages can get dropped on their way to the receiver, etc.

Sadly, even when your system is pure and elegant and you've proven it all out, you still need runtime asserts (and more advanced things like quorum voting on the results of computations, with the ability to fail nodes experiencing problems) before you can be "sure." Ada wasn't enough for NASA; they needed redundant processors too.

CS is weird as a discipline: we get all the tools of mathematics (digital logic, proof-verification), and all the tools of engineering (rate-of-failure calculations, high-assurance systems)... and then it turns out that the problem-space (ensuring perfect automation over trillions of repetitions of a task) is so difficult that it requires both! Arguing types vs. asserts is a "rabbit-season, duck-season" debate. Anyone who isn't using both a type system and runtime checks is working in the dark.

However, it's worth pointing out that useful runtime checks can be derived from static properties of your code, and a Sufficiently-Smart Compiler[1] could insert them automatically as it compiled your code.

---

[1] not all that rare these days; GHC's stream fusion goes way beyond "sufficiently smart"


Yeah, I use strong typing, assert()s, and 'if (!x) throw ...' everywhere. Nevertheless, I figure if the biggest risk to my code comes from cosmic rays, then I'm doing pretty well.



