
I don't really buy their approach to security, honestly. Trying to fix all bugs is great, but they provide little to prevent unknown bugs from being exploited (pledge is nice for software that opts in to it, but otherwise not so much). I'd love to see them implement something like AppArmor with their approach; it would probably be amazing.

I actually think NetBSD is a pretty interesting alternative, it has some nice security features like veriexec that don't get talked about much.



I think in the past they tried to fix all the bugs, and realized they couldn't, so they started to build all sorts of mitigations in the same vein as the one you see posted here today. As for pledge, and the related mitigations, yes, they're not useful if you don't use them, but I see this as them innovating in the space and giving application developers more tools to build hardened applications.

I see tools like AppArmor as band-aids for problems that shouldn't exist in the first place. The problem with these approaches is that the band-aids tend to break things in unexpected ways, and when that happens they simply get removed and go unused.


> I see tools like AppArmor as band-aids to fix problems that shouldn't exist in the first place.

I fundamentally disagree on that. I think tools like that are amazing at protecting against unknown threats/exploits. They let you lock down software and protect against future unknown exploits, badly behaving software, malicious employees etc. I think something similar should be a part of any OS claiming to be security focused. Basic DAC is woefully insufficient.

On the other hand, the industry has largely found other solutions like sandboxing, but I still think MAC or RBAC or whichever has a place, certainly as part of a defense-in-depth strategy.


> they provide little to prevent unknown bugs from being exploited

They provide plenty of mitigations (https://www.openbsd.org/innovations.html). In fact OP's article is for preventing unknown bugs from being exploited.


They don't provide any mitigations of the sort I was clearly referencing: restricting malicious code or users that already have access to the system and are exploiting insecure software that wasn't built with pledge support.


What kind of mitigations would help here?


SELinux/RSBAC/AppArmor/grsecurity and similar.


These largely require buy-in from applications just like pledge.


They absolutely don't, that's the key difference.

What makes you think otherwise?


You can’t just stick sandboxing around arbitrary apps without them breaking.


The technologies I listed are not sandboxing, as that term refers to a different category of technology.

And you're right, kind of; you need to set the permissions for apps, but that doesn't mean they need cooperation from the software developers. The whole point is that they don't. With those technologies you can lock down complex closed source programs, something not possible with pledge.
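To illustrate, an AppArmor profile is just an external policy file; the binary itself is never modified. A rough sketch for a hypothetical closed-source program (the path and rules here are invented for the example):

```
# /etc/apparmor.d/usr.bin.someapp -- hypothetical binary
/usr/bin/someapp {
  #include <abstractions/base>

  /etc/someapp/**      r,   # config: read-only
  /var/lib/someapp/**  rw,  # its own state directory
  network inet stream,      # outbound TCP only
  deny /home/** rwx,        # never touch user data
}
```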


Those seem to be of the category of “I have a program and I want to restrict what it does” which seems like a sandbox to me. The problem here is that trying to figure out what goes on this list is difficult for arbitrary programs, even when you’re the one writing it. When you’re just applying it to third party software it’s very likely something will not function correctly.


It's not a sandbox, though; it's a different category of technology. You could argue it's a kind of sandbox in concept, but referring to it as one in a technical discussion isn't precise.

> The problem here is that trying to figure out what goes on this list is difficult for arbitrary programs, even when you’re the one writing it. When you’re just applying it to third party software it’s very likely something will not function correctly.

That's why there are things like, for example, SELinux permissive mode, where you run the software as needed and observe the permissions it needs, and then grant it those permissions while denying everything else.
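Roughly, that workflow looks like this (it needs root on an SELinux-enabled host, and the module name is made up for the example):

```shell
# put SELinux in permissive mode: violations are logged, not blocked
setenforce 0

# exercise the application thoroughly, then turn the logged
# denials into a policy module and install it
ausearch -m avc -ts recent | audit2allow -M myapp
semodule -i myapp.pp

# back to enforcing mode
setenforce 1
```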


I mean the typical term used for such things is “mandatory access control” but they always get used to implement a sandbox so that’s what I call them.

Also, watching a program to see what it does is exactly the issue I’m talking about. You’re stuck with whatever behaviors you tested and everything else that you didn’t hit will fail (loudly if you’re lucky, silently if you’re not). There are platforms that do exactly what you’re talking about and believe me working on these rules is miserable. You’ll have reports on your desk like “the profiler doesn’t work anymore” (nobody tested this) or “on desktop controls don’t render anymore” (someone changed the implementation and it needs something you didn’t include in your rules). Again, this is when you control the stack, doing this for arbitrary programs is an order of magnitude harder.


Some implement role based access control or other access control paradigms as well. I just don't think sandbox is a good term, but I see where you're coming from.

I agree initial setup can be cumbersome, but I think it's worthwhile. I'm a fan of RSBAC personally; it's as powerful as SELinux but a lot simpler. If people run in permissive mode and test properly (not just run it and do a few things, but exercise every function exhaustively) before locking down the permissions, it should be good.

Really, it only has to be done once, and I think it's a worthwhile investment given the security gained.

That's what I was saying higher up in the thread, though. OpenBSD is known for having good, simple implementations of complex things like this, so if they were ever interested in implementing a version, it would probably be amazing.


OpenBSD has these mitigations enabled by default at compile time.



