
The claim that functions are more secure is quite reasonable, and it stems from the fact that they are simple. You or I could write a contained function, or even a piece of software that verifies a function is contained by scanning its source code. We could do this in perhaps a few weeks of spare time. Compare and contrast to the effort that goes into other containerisation techniques. Docker would take more than a week of spare time to write. It is orders of magnitude more complex.


> Compare and contrast to the effort that goes into other containerisation techniques. Docker would take more than a week of spare time to write. It is orders of magnitude more complex.

That’s because you’re comparing a butter knife to a chainsaw. You can quite easily run an entire process in a manner just as limited as a simple function by using something like seccomp[1], which prevents the process from doing anything except exiting, or reading from and writing to file handles that have already been opened and passed to it.

No need to spend a weekend or whatever writing something to “verify a function is contained”, or anything like that. Just give your process to the OS and tell the OS to contain it.

Heck, if you’re on a system with systemd, all you need is a handful of lines of config, and you’re done. No need to worry about how good your static analysis is (spoiler alert: it’s never good enough), or creating a new language, or reading anyone’s code.
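For instance, a sandboxed service unit might need no more than a few directives like these (the ExecStart path is hypothetical; the rest are standard systemd sandboxing options):

```ini
[Service]
# Hypothetical binary that only computes over its stdio.
ExecStart=/usr/local/bin/compute-job
# Allow only basic read/write-style syscalls; violations kill the process.
SystemCallFilter=@basic-io
# No privilege escalation, no network, read-only view of the filesystem.
NoNewPrivileges=yes
PrivateNetwork=yes
ProtectSystem=strict
PrivateTmp=yes
```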

You use containerisation frameworks when you want your software to be able to interact with the world, but want to be able to carefully analyse and limit how it interacts with the wider world. It’s total overkill if you just want to execute code that performs computations without interacting with other systems.

[1] https://en.m.wikipedia.org/wiki/Seccomp


> You can quite easily run an entire process in a manner just as limited as a simple function by using something like seccomp

Yet most software will not work when you do this, because most software expects to be able to interact implicitly with the wider system. By using functions as the fundamental unit of containment, you force software to explicitly declare everything it depends on as an argument to the function. For instance, if it wishes to listen on a port, it must receive some object representing access to that port as an input, rather than one large implicit object representing every capability of the operating system that then has to be retroactively hacked down to only the required functionality. The result is Docker-like containment of every process, done as simply as if you had used seccomp. It is the best of both worlds.

> spoiler alert, it’s never good enough

This is provably false. Haskell is an instance of static analysis good enough to verify that functions don't have side effects (and additionally that they don't use mutable state). Safe Rust also has more practical methods, though it doesn't fully implement this. Verifying these things is trivial and requires only three conditions: global variables must be immutable, immutable objects must not be convertible to mutable ones, and the mutable state of the operating system must only be exposed through explicit mutable objects representing it.

Safe Rust lacks the third condition.



