The graphics stack in NT is done in a microkernel fashion: it runs in kernel space, but bugs in it don't (generally) crash the whole OS.
There are a few interviews with Dave Cutler (NT's architect) around where he explains this far better than I can here.
Overall, if you have classic needs and don't care about OSS (whether for auditability, for customizability, or as a philosophical choice about open source), it's a workable option with its strengths and weaknesses, just like the Linux kernel.
Parts of the kernel can be made more resilient against failures, but that won't make it a microkernel. It'll still run in a shared address space without hardware isolation. It's just not possible to get the benefits of microkernels without actually making it one.
Also, Linux being OSS can't be dismissed, because it means it'll have features that Microsoft isn't interested in adding to Windows.
I would recommend looking into chroot-based build tools like pbuilder (.deb) and mock (.rpm).
They greatly simplify the local setup, including targeting different distributions or even architectures (<3 binfmt).
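For reference, the basic flow looks something like this (the package names are made up, but the commands are the standard entry points):

```sh
# .deb side: create a bullseye chroot once, then build source packages inside it
sudo pbuilder create --distribution bullseye
sudo pbuilder build mypackage_1.0-1.dsc

# .rpm side: mock resolves BuildRequires inside a clean, throwaway chroot
mock -r fedora-39-x86_64 --rebuild mypackage-1.0-1.src.rpm
```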
But I tend to agree, these tools are not easy to remember, especially for occasional use. And packaging complex software can be a pain if you fall down the dependency rabbit hole while trying to honor distros' rules.
That's why I ended up spending quite a bit of time tweaking this set of ugly Makefiles: https://kakwa.github.io/pakste/ and why I often relax things, allowing network access during build and the bundling of dependencies, especially for Rust, Go or Node projects.
Fragile against upgrades, tons of unmaintained plugins, an admin panel UX that's a mess where you struggle to find what you are looking for, a half-baked transition to a nicer UI (Blue Ocean) that has been ongoing for years, too many ways to set up jobs and integrate with repos, poor resource management (disk space, CPU, RAM), sketchy security patterns inadvertently encouraged.
This stuff is a nightmare to manage, and with large code bases/products, you need a dedicated "devops" just to babysit the thing and keep it from becoming a liability for your devs.
I'm actually looking forward to our migration to GHEC from on-prem just because GitHub Actions, as shitty as they are, are far less of a headache than Jenkins.
Maybe I have low standards given I've never touched what GitLab or CircleCI have to offer, but compared to my past experiences with Buildbot, Jenkins and Travis, it's miles ahead of them in my opinion.
Am I missing a truly better alternative, or are CI systems simply all kind of a PITA?
I don't have enough experience w/ Buildbot or Travis to comment on those, but Jenkins?
I get that it got the job done and was the standard at one point, but every single Jenkins instance I've seen in the wild is a steaming pile of ... unpatched, unloved liability. I've come to understand that it isn't necessarily Jenkins at fault: it's teams 'running' their own infrastructure as an afterthought, coupled with the risk of borking the setup at the 'wrong time', which is always. In my experience this pattern seems nearly universal.
GitHub Actions definitely has its warts and missing features, but I'll take managed build services over Jenkins every time.
Jenkins was just built in a pre-container way, so a lot of stuff (unless you specifically make your jobs use containers) depends on the setup of the machine running Jenkins. But that does make some things easier, just harder to make repeatable, as you pretty much need a configuration management solution to keep the Jenkins machine config reproducible.
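For what it's worth, the container route looks something like this in a declarative Jenkinsfile (a minimal sketch, assuming the Docker Pipeline plugin is installed; the image and commands are placeholders):

```groovy
pipeline {
    // Run the whole job inside a throwaway container instead of
    // depending on whatever happens to be installed on the Jenkins node
    agent {
        docker { image 'node:20-bullseye' }
    }
    stages {
        stage('build') {
            steps {
                sh 'npm ci && npm test'
            }
        }
    }
}
```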
And yes, "we can't be arsed to patch it till it's a problem" is pretty much standard for any on-site infrastructure that doesn't have ops people yelling at devs to keep it up to date, but that's more a SaaS-vs-on-site benefit than a Jenkins failing.
My issue with GitHub CI is that it doesn't run your code in a container. You just have a github-runner-1 user, and you need to manually check out the repository, do your build, and clean up after you're done. Very dirty and unpredictable. That's for self-hosted runners.
> You just have a github-runner-1 user, and you need to manually check out the repository, do your build, and clean up after you're done. Very dirty and unpredictable. That's for self-hosted runners.
Yeah, checking out every time is a slight papercut, but it gives you control: sometimes you don't need to check anything out, or you want a shallow/full clone. If it checked out for you, there would just be other papercuts.
I use their hosted runners, so I never need to do any cleanup and get a fresh slate every time.
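For context, a minimal self-hosted workflow makes those steps explicit; the labels, clone depth and cleanup command below are just one way to do it:

```yaml
on: push
jobs:
  build:
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 1      # shallow clone; set to 0 for full history
      - run: make build
      - name: clean the persistent workspace
        if: always()          # self-hosted runners reuse the workspace; hosted ones get a fresh VM
        run: git clean -ffdx
```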
Sometimes I wonder if Mozilla didn't get too much money.
With an order of magnitude less money, I think they would have been more focused on improving Firefox rather than trying to diversify with projects like Firefox OS, VPN services or AI.
Even today, given their ~$1.5B in the bank, and at the cost of a really painful downsizing, the interest alone could probably pay for Firefox development focused on standards adherence, performance, quality and privacy.
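Back of the envelope, with assumed numbers: a ~4% yield on $1.5B is ~$60M/year, which at a fully loaded ~$200k per engineer would fund roughly 300 people, a decent browser team.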
Mozilla is not a company trying to reinvent itself to survive. If it becomes irrelevant because the browser becomes irrelevant in the future, that's fine in my book; the organization would have fulfilled its mission.
But it is sad to see it become irrelevant because of mismanagement and lack of focus.
Perl was pretty much first in the wave of interpreted languages from the late '80s and '90s. It set the bar for what to expect from such ecosystems.
But being first meant it picked up some oddities, and the abstractions are not quite right imho.
A bit too shell-esque, especially for argument passing, and the abstractions are a bit too leaky regarding memory management (reference handling feels too C-esque for an interpreted language, and the whole $ % @ & dance is really confusing for an occasional and bad Perl dev like me). The "10 ways to do it" also hurts it: it led to a lack of consistency and almost per-developer coding styles. The meme was that Perl is a "write-only language".
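To illustrate the sigil dance I mean, a toy example:

```perl
my @xs   = (1, 2, 3);   # an array, declared with @
my $x    = $xs[0];      # but a single element takes $, not @
my $ref  = \@xs;        # a reference to the array is a scalar, hence $
my $y    = $ref->[1];   # element access through the reference
my @copy = @{$ref};     # and @{...} to get the array back
my %h    = (a => 1);    # hashes get %, but $h{a} for a single value
```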
But I'm still grateful for what it brought and how influential it was (I joke from time to time that Ruby is kind of the "true" Perl 6; it even has flip-flops!).
In truth, these days, I feel the whole "interpreted languages" class is on the decline, at least on the server. There are a lot of really great native languages that have come up within the last few years, enabled in large part by LLVM. And this trend doesn't seem over yet.
Languages like Rust, Swift, Go, Zig or Odin are making the value proposition of interpreted languages (lower perf but faster iteration) less compelling by being convenient enough while retaining performance. In short, we can now "have our cake and eat it too".
But the millions of lines in production are also not going anywhere anytime soon. I bet even Perl will still be around somewhere (distro tooling, glue scripts, build infra, etc...) when I retire.
Anyway, thank you Perl, thank you Larry Wall, love your quotes.
I think Apple has kind of a culture problem where the whole organization has to look up way too much to its chief to make key decisions.
This could have worked in Jobs' time, because of his personality and vision, plus a rapidly evolving market.
But it was no longer workable once the dust settled, especially with a logistician/bean counter like Tim Cook.
Every bet he made was an abject failure, from the Apple Car to the Vision Pro.
His only success was the M-series Macs, a really good but by no means revolutionary step up in what is now a minority segment of Apple's main market (i.e. internet terminals).
Even the chaos around Apple's AI efforts seems to indicate a clear lack of leadership and vision.
To me, he will probably be remembered as Apple's Steve Ballmer. But even with a Nadella-like replacement, Apple probably needs a good hard look at itself and its internal culture.