I want that as well, but I don't think it's practical to do that in the Linux desktop ecosystem. Too slow, too much politics. Android does achieve the gist of it, though that required extensive re-engineering of the user space.
The joy of having a properly implemented capability system is that, well, you can create arbitrary capabilities.
You don't need to give a process/component the “unrestricted network access capability” -- you could instead give it a capability to, e.g., “have https access to this (sub)domain only”, where the process wouldn't be able to change stuff like SSL certificates.
EDIT: and to be clear, Fuchsia implements capabilities very well. Apart from low-level stuff, all capabilities are created by normal processes/components, so all sorts of fine-grained accesses can be created without touching the kernel. Note that in Fuchsia a process that creates/provides a capability has no control over where, or to whom, that capability will be made available -- that's up to the system configuration to decide.
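To make it concrete, here's a rough sketch in Rust of what such a narrowly scoped capability could look like. All names are invented and this isn't actual Fuchsia/FIDL code -- it only shows the shape of the boundary: the holder can name a path under one fixed domain and nothing else, while TLS, DNS and proxies stay on the provider's side.

    // Hypothetical sketch: a capability allowing HTTPS GETs against one
    // pre-configured domain only. The holder never sees sockets, DNS,
    // certificates or proxies.
    pub struct HttpsResponse {
        pub status: u16,
        pub body: Vec<u8>,
    }

    /// What the sandboxed process holds: it can name a path, never a host.
    pub trait ScopedHttpsClient {
        fn get(&self, path: &str) -> Result<HttpsResponse, String>;
    }

    /// Provider side: an ordinary user-space component. System configuration
    /// decides which processes this capability gets routed to.
    struct ExampleDotComOnly;

    impl ScopedHttpsClient for ExampleDotComOnly {
        fn get(&self, path: &str) -> Result<HttpsResponse, String> {
            // A real provider would do TLS + HTTP against the fixed domain
            // here; the sketch only models the boundary.
            let _url = format!("https://example.com/{}", path.trim_start_matches('/'));
            Ok(HttpsResponse { status: 200, body: Vec::new() })
        }
    }

    fn main() {
        let client: Box<dyn ScopedHttpsClient> = Box::new(ExampleDotComOnly);
        let resp = client.get("/some/page").expect("request failed");
        println!("status = {}", resp.status); // the app never learned a hostname
    }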
Ok, give me access to a subdomain I control and I'm phoning home: there's no way you can restrict mysubdomain.foo.com/phonehome vs mysubdomain.foo.com/normal. And even if you tried to do path restrictions (which, by the way, you can't unless you're somehow sniffing the encrypted HTTP session), I could arbitrarily side-channel the phone-home data through the normal access anyway.
Also imagine you are trying to run a browser. It's implicitly going to be able to perform arbitrary network access, and there's no way you can restrict it from phoning home aside from playing whack-a-mole, blocking access to specific subdomains you think are its phone-home servers.
That’s why I said “semantic” capabilities aren’t a thing and I’m not aware of anyone who’s managed to propose a workable system.
I imagine one could create a capability such that the app gets a way to shove bits in and a way to get bits out, but no knowledge of the IP address or anything like that. A phone (or set of phones) that is already connected and has no keypad.
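Something like this minimal sketch (Rust, invented names; in reality it would sit behind an IPC boundary): the holder gets a way to push bytes and a way to pull bytes, and that's the whole surface -- no addresses, no sockets, no DNS.

    use std::collections::VecDeque;

    // Hypothetical "keypad-less phone" capability: bytes in, bytes out,
    // nothing else. Whoever wired up the pipe decides where it goes.
    pub struct BlindPipe {
        to_peer: VecDeque<u8>,   // bytes the holder has sent
        from_peer: VecDeque<u8>, // bytes the other end has sent back
    }

    impl BlindPipe {
        pub fn new() -> Self {
            Self { to_peer: VecDeque::new(), from_peer: VecDeque::new() }
        }

        /// Shove bits in.
        pub fn send(&mut self, data: &[u8]) {
            self.to_peer.extend(data.iter().copied());
        }

        /// Get bits out, if the other end has produced any.
        pub fn recv(&mut self, buf: &mut [u8]) -> usize {
            let n = buf.len().min(self.from_peer.len());
            for slot in buf.iter_mut().take(n) {
                *slot = self.from_peer.pop_front().unwrap();
            }
            n
        }
    }

    fn main() {
        let mut pipe = BlindPipe::new();
        pipe.send(b"hello");
        let mut buf = [0u8; 16];
        let n = pipe.recv(&mut buf); // 0 for now: nothing has arrived yet
        println!("received {} bytes", n);
    }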
Ok great. Now I put the phone-home stuff inside the payload. It's a game of whack-a-mole you're bound to lose. Like I said: if I control both endpoints, it's going to be very hard for you to simultaneously give me a pipe connecting them while controlling the set of messages I'm allowed to send.
Sure, like you said, having control of the endpoints you could communicate anything if you can transmit bits. That’s unavoidable.
But don't lose perspective on the benefits of such an architecture. Consider the network access example:
* If your process gets compromised, it won't be able to reach the attacker's C&C server. For that matter, it won't have access to anything else the process didn't already have.
* You wouldn't be able to use plain HTTP. It would be HTTPS only.
* Your process wouldn't need a library to speak HTTP. It would just need to speak the IPC protocol (whose wire format and related details are standardized in Fuchsia, which allows the (de)serialization binding code to be auto-generated).
* You wouldn't be able (for better or worse) to mess with SSL certificates, proxies, DNS resolution, etc.
Consider another example -- file access. Say your app wants to access a photo. It doesn't have access to the filesystem nor to the user's folders -- it only has access to, say, an “app framework services” capability (e.g. services from a UI-capable OS like Android), one of whose “sub-capabilities” is requesting a photo. When your app makes that request, the ‘system’ opens a file-selection GUI for the user to pick a photo. Note that the photo picker GUI runs in a different process; your app doesn't know and can't access anything about it. All that matters is that your app receives an opened file handle in the end. That opened file handle is a capability as well. It would be read-only and wouldn't need to actually exist in any file system. In this example, before handing the file descriptor to your app, the “system” (or whatever process implements the ‘photo-picking’ capability) could process the image to remove metadata, blur faces, offer the user a chance to edit the image (and maybe actually save it to a persistent store), log the access for later review, etc.
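Sketching that flow in code (Rust, with invented names -- in reality these would be separate components talking over Fuchsia IPC): the app holds only a "request a photo" capability and gets back a read-only handle, while the picking, cleaning and auditing happen out of its sight.

    // Hypothetical sketch of the photo-picking flow described above.

    /// A read-only handle capability: the bytes don't have to exist in any
    /// real filesystem -- they're whatever the picker chose to hand over.
    pub struct ReadOnlyImage {
        bytes: Vec<u8>,
    }

    impl ReadOnlyImage {
        pub fn read(&self) -> &[u8] {
            &self.bytes
        }
    }

    /// What the app holds: it can ask for "a photo", nothing more.
    pub trait PhotoPicker {
        fn request_photo(&self) -> Option<ReadOnlyImage>;
    }

    /// Provider side (a separate process in reality): shows the picker UI,
    /// lets the user choose, strips metadata, logs the access, and only then
    /// hands back a handle.
    struct SystemPhotoPicker;

    impl PhotoPicker for SystemPhotoPicker {
        fn request_photo(&self) -> Option<ReadOnlyImage> {
            let raw = vec![0u8; 1024];          // stand-in for the user's pick
            let cleaned = strip_metadata(&raw); // processing the app never sees
            println!("audit: photo handed to app");
            Some(ReadOnlyImage { bytes: cleaned })
        }
    }

    fn strip_metadata(raw: &[u8]) -> Vec<u8> {
        raw.to_vec() // placeholder for real image processing
    }

    fn main() {
        let picker: Box<dyn PhotoPicker> = Box::new(SystemPhotoPicker);
        if let Some(photo) = picker.request_photo() {
            // The app gets read-only bytes and knows nothing about paths,
            // the picker UI, or the user's other photos.
            println!("got {} bytes", photo.read().len());
        }
    }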
(We already have something kinda similar in Android, but the implementation is not from first principles, so it's very complex and prone to issues: it requires an obviously non-POSIX Android userspace using lots of security features from the Linux kernel to sort of implement a microkernel/services-like architecture.)
EDIT: adding the detail that Fuchsia's IPC lang ecosystem has autogen features due to its standardization.
I really don't know what point you're trying to make. I am 100% in favor of capabilities and think they lead to better-decomposed software with better security boundaries (provided the software engineers put in the work to separate components across process boundaries, and the APIs make it convenient to do so).
All I said was that capabilities don't solve the spyware problem, and they largely don't. They help you write software that itself can't be hijacked into uncontrolled spyware by a compromise, but if I am selling you software with “malware” bundled, you're going to have a hard time isolating the functional bits from the “malware” bits (malware here being defined as software acting against the user's wishes and intents).
You’ve extolled the benefits of it and they’re great and I think I largely agree with all of that, but it’s completely irrelevant to my initial point that it’s not a silver bullet for the vendor intentionally bundling malware into the code they’re distributing.
In my experience lots of folks simply won't work with capability systems no matter how good the implementation is or whatever level of security and configuration granularity is provided.
For many people it's just extra friction in search of a use case.
It makes testing a lot easier honestly. Also keep in mind that mobile apps and web apps are fairly capability oriented these days, so I wouldn't say no one will work with it...
I'm just hearing about capability systems today, so your experience is undoubtedly richer than mine, but I'd estimate that we're just scratching the surface re: ways to harm somebody by making their tech behave in surprising ways.
Maybe once those harms are all grown up, we'll find that fancier handcuffs for our software are worth a bit more than "just extra friction."
I am curious what your experience is with capability-based security? They are still incredibly niche (unfortunately), so I've never had a chance to work with one at a job.
It's technically a general-purpose OS. They had a workstation build target some time ago which was used for the desktop use case. They've shipped only for an IoT device so far (the Google Nest Hub).
The main goal would seem to be replacing the core of AOSP, judging by the main work that's being done, but it seems like Google isn't convinced it's there yet.
Hasn't this project been running for (checks notes) almost ten years now? Isn't that enough runway to determine that it's never going to replace AOSP at this rate?
As far as I could tell, its main goal was to have fun writing an OS. At that, it seems to have succeeded for a number of the people involved?
In terms of impact or business case, I'm missing what the end goal for the company or execs involved is. It's not re-writing user-space components of AOSP, because that's all Java or Kotlin. Maybe it's a super-long-term, super-expensive effort to replace the Linux underlying Android with Fuchsia? Or for ChromeOS? Again, it seems like a weird motivation to justify such a huge investment in both the team building it and a later migration effort to use it. But what else?
When I worked at $GOOG, my manager left the team to work on Fuchsia and he described it as a "senior engineer retention project", but also the idea was to come up with a kernel that had a stable interface so that vendors could update their hardware more easily compared to Linux.
Many things that Google did when I was there were simply to have a hedge, if the time/opportunity arose, against other technologies. For example, they kept trying to pitch non-Intel hardware, at least partly so they could have more negotiation leverage over Intel. It's amazing how much wasted effort they have created following bad ideas.
The problem with Fuchsia is it went from that to "We're taking all your headcount and rewriting your entire project on Fuchsia" and then started making deadline promises to upper management that it couldn't fulfill.
They seemed to have unlimited headcount to go rewrite the entire world, just to put it on display assistant devices that had already shipped and succeeded with an existing software stack, which Google then refused to evolve or maintain.
Fuchsia itself and the people who started it? Pretty nifty and smart. Fuchsia the project inside Google? Fuck.
> the idea was to come up with a kernel that had a stable interface so that vendors could update their hardware more easily
Interesting... if that was a big goal, I wonder why they didn't go with some kind of adapter/meta-driver to the kernel that presents a stable interface (and just maintain that).
Yep. It's anyone's guess what's been going on there. Lots of theories out there. IMO Google doesn't consider this a high priority, and the cost of keeping development going, given the number of engineers working on it, is low enough.
Also note that swapping out the core of a widely used commercial OS like AOSP would be no easy feat. Imagine trying to convince OEMs, writing drivers practically from scratch for all the devices (based on a different paradigm), the bugs due to incompatibility, etc.
It really depends. If you have a good, small enough team, and a clear design, with a well defined and limited scope, it shouldn't take that long.
If your team is too large, and especially if you don't know what the use case is, it can take a very long time. You asked for general purpose and fully capable, so you're probably in this case, but I think the desired use cases for Fuchsia could be scoped to way less than general purpose and fully capable: a ChromeOS replacement needs only to run Chrome (which isn't easy, but...), and an Android replacement needs only to run Android apps (again, not easy), and the embedded devices only run applications authored by Google with probably a much smaller scope.
But it also depends on what 'from scratch' means. Will you lean on existing development tools, hosted on an existing OS? Will you borrow libraries where the scope and license are appropriate? Are you going to build your own bootloader or use an existing one?
Yeah, it's brand new as far as it matters in practice (they use existing libraries and the like).
The answer is: not much time. The real question is how long it takes to develop good-quality drivers for a given platform (say, an x64 laptop). How long to port/develop applications so that the OS is useful? How long to convince OEMs, app developers and other such folks to start using your brand new OS? It's a bootstrap problem.
That would be surprising. Where do you get that? I don't mean toy OSes or experiments. Linux, macOS and Windows are still in development and I can't imagine the number of hours invested.
IIRC it didn't take that long to develop first production versions of macOS? A couple of years maybe?
It's not like Fuchsia was supposed to be a "fully capable OS developed from scratch", either? I mean it's "just" the kernel and other low level components; most of the software stack would remain the same as Android/Linux, at least for the time being.
> first production versions of macOS? A couple of years maybe?
Ok, I'll bite. If we're talking classic Macintosh OS, perhaps.[0] macOS? No way. The first Mac OS X was released in 2001, and was in development between 1997 and 2001 according to Wikipedia.[1] But the bulk of the OS already existed in 1997. Mac OS X was a reskin of NeXTStep. NeXTStep was released in 1989, final release 1995, final preview 1997 (just before Apple sold out to NeXT).[2] NeXTStep was in production for quite some time before the x86 version shipped (around '95 from memory). In case you are wondering, I can assure you that NeXTStep was a very capable OS. NeXTStep was in development for a couple of years before the first hardware shipped in 1989. NeXTStep was built on top of Mach and the BSD 4.3 userspace. Mach's initial release was 1985.[3] Not sure how long the first release of Mach took to develop. You can check the BSD history yourself. But I'd say, conservatively, that macOS took at least 14 years to develop.
> IIRC it didn't take that long to develop first production versions of macOS?
If you mean the early 1980s OS, that is not comparable. It probably ran in something like 512K of memory off of a 5.25" floppy disk (or a tape?).
> It's not like Fuchsia was supposed to be a "fully capable OS developed from scratch", either? I mean it's "just" the kernel and other low level components
I don't know the answer, but doesn't the second sentence describe Linux?
The original Mac had 128kB of RAM, a 64kB ROM with a fair chunk of the OS in it, and used 400kB single-sided 3.5" discs. The paltry RAM is generally considered to be the main problem, but the Mac team were working to a target price of $1,500 (which they missed), and that's all they could afford, with the largish ROM being a compensation. A quick unscientific look at Byte's January 1984 issue seems to show 128kB as the base level for IBM PC clones at the time as well, but they didn't have a GUI.
In comparison, the Lisa OS required 1MB of RAM and a 5MB hard disc, hence the eye-watering $10,000 introductory price.
Development on the Mac apparently started in 1979, and it was released in 1984, although the early Jef Raskin-era machine was quite different from the final Steve Jobs-led product.
Android is very unapologetically Linux and it's unlikely anyone seriously proposed doing anything other than use Linux.
Fuchsia was more likely for all the stuff that Google kept experimenting with using Android for, just because it was there rather than because it was a good fit - wearables, IoT, AR/VR, Auto, etc...
Other operating systems have emulated Linux so that you can run Linux userland applications on top of a different kernel, WSL1 and FreeBSD are good examples.
> On mobile devices, the proprietary drivers are in user space.
So, the situation would stay the same regarding that.
> The drivers will probably be back in the kernel
Manufacturers putting drivers back into the kernel would be a step back for them, since it would be more work than just keeping them in user space, given there is a stable driver API.
> and you won't be able to patch the kernel to make it work on unsupported, newer / alternative user space
I think people forget that Fuchsia is more than the kernel; it comprises more things than Linux does. Being based on a microkernel design, the kernel represents just a small surface of the whole 'system interface'. All the rest is in user space. The Android (platform) part of the system will be layered on top of this 'system interface', and will not define all of the user space by itself. This 'system interface' is well defined and documented [0] and, similarly to the Android (platform), there is also a CTS that tries to ensure conformance to it [1]. For example, the software layer implementing the product that is the Nest Hub sits outside the Fuchsia platform, as will be the case with the Android (platform). So a manufacturer implementing a particular feature inside the kernel, or in other components like drivers, will still end up with a device whose 'system interface' is at least compatible with the official Fuchsia one; and if it isn't, but offers something more, we'd all be in the same situation we have today, where the vendor bundles those features in closed blobs anyway.
I think Fuchsia's decoupled system architecture will make using an alternative user space not a big problem, if not even easier than it is with Android-based systems.
But that seems to be exactly what they are doing. They're surely porting the Android runtime to Fuchsia, or even already ported it.
Remember there are Android APKs containing native Linux binaries (made with Android's NDK) that you should be able to run. That is the whole point of Fuchsia's starnix. I think this is even mentioned in starnix's RFC.
Isn't this growth pattern a straightforward consequence of disorderly growth?
When microorganisms divide, the children "appear" in the same location. Our species' growth dynamics kind of have this same property, in that it's easier to build something close to the already established region than far away from it.
So, in the end, looking from far away, the growth pattern is the same.
I think this pattern of exponential growth of human beings is a curse of their intelligence. All other animals end up being controlled by these large ecosystem dynamics because they can't adapt and/or do group work sufficiently. Species with this property are easily influenced/controlled by large scale dynamics.
Maybe a species more intelligent than us wouldn't grow so fast and so disorderly, because it could see the future consequences of its own dynamics more easily. We just happen not to deal well with large temporal scales.
Risking getting downvoted, but I don't want to repeat myself: https://news.ycombinator.com/item?id=43255985