If you're talking about something like Ubuntu, there's a tonne of work that goes into making sure it works with, say, a random three-year-old laptop with all the binary blobs in place.
Try getting something like wifi or Bluetooth working on some weird ARM dev board and suddenly there's no vanilla Linux unless you're willing to write device drivers.
> there's a tonne of work that goes into making sure it works with, say, a random three-year-old laptop with all the binary blobs in place.
But that work gets done and "users" of Ubuntu or Debian or Arch get to use it from the sources they trust (aka Ubuntu, Debian or Arch). I'm not claiming to have a full understanding of how every package or kernel module Debian or Arch or Fedora ships works. But I'm trusting Debian or Arch or Fedora for my packages. If it comes to light that Debian or Fedora maintainers had no idea that they shipped malware in a release, then I'll seriously question going with that distribution in the future. And without sounding facetious, that has happened multiple times in the past, especially with Debian. But the times it happened, it was clear to me that it was merely a mistake rather than extreme incompetence or malice.
With Android, you have Google, which despite the general HN rhetoric I personally trust not to ship straight-up malware. But when we're talking about Android, that's usually not what we're talking about; we're often talking about binary ROM blobs from random XDA or RW users. Funny thing is, 20 years ago I'd have shouted about what the difference is between a random anon on XDA or WZBB providing a ROM blob. But now I know better.
Binary blobs are an unfortunate reality and no amount of trust in a company or entity can really solve this.
For the record, Debian and Arch don't work very well on non-standard hardware. I use Arch on my desktop but gave up on using it productively on a new HP laptop.
Heh, and you could argue that laptops are standard these days. More laptops are sold than desktops, and HP is definitely a mainstream brand.
I understand your usage of the word, I'm just pointing out that if the mainstream ain't "standard" anymore... it kind of sets the standard in practice.
That should be true of everything: if no one has written drivers for something, it won't work. But once a driver is added, will it break or be updated for newer versions of Linux?
It depends on how openly the particular drivers are implemented. E.g. over the last couple of years the Nvidia driver situation for cards from the last 3 generations has changed across pretty much all 3 major levels:
1. Originally you had to use the proprietary binary driver to get anything useful to happen with your card at all. Updating the kernel without updating this would more or less lead to having an expensive brick in your PC. Some Wi-Fi adapters fall into the "can't really be updated" category as well. A _LOT_ of ARM shit is like this.
2. nvidia-open came along (still beta for desktop cards at the moment) and it puts enough in the kernel that you can update the kernel without needing an updated binary driver for your card to function.
3. nouveau/nvk have very recently started to come to a decently usable state (i.e. they have reclocking via GSP and somewhat usable graphics API coverage/efficiency) for an even more open driver stack which tracks system updates even better.
If your binary blobs fall into 1/2 then long term upgrades can be anywhere from impossible to unreliable. If they fall into 2/3 they can be anywhere from somewhat reliable to "will be working longer than any sane person would still be trying to update the kernel on that device". E.g. the AMD 7750 is 12 years old but can run OpenGL 4.6 and Vulkan via the latest AMDGPU driver in mainline mesa/kernel.
LTS distros solve this by using LTS kernels and security patching them rather than requiring "actual" underlying OS updates during the version lifecycle.
The reality is that drivers are not added, as you say. Most companies release an out-of-tree BSP targeting a specific kernel version. They often contain blobs and are often not GPL. Linux doesn't support a stable kernel ABI/API (https://www.kernel.org/doc/Documentation/process/stable-api-...) and the only way to avoid the associated issues is to mainline drivers, which most companies don't want to do (don't want to open source their IP, don't want to invest in maintaining it etc.)
Android GKI/KMI addresses the issues related to this. GKI is relatively recent and OEMs don't offer 5+ years of Android updates because they haven't adopted it yet.
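To make the churn concrete, here's a minimal sketch (my own illustration, not from the kernel note; the "demo" module is hypothetical, but the kernel calls and the signature change are real): class_create() dropped its module-owner argument around v6.4, so an out-of-tree module that has to build against kernels on both sides of that change ends up carrying version guards like this, one per moved internal interface.

    /*
     * Fragment of a hypothetical out-of-tree module. The class_create()
     * helper lost its module argument around v6.4, so code built against
     * multiple kernel versions needs guards like the one below, and every
     * similar internal change means another round of patching.
     */
    #include <linux/module.h>
    #include <linux/device.h>
    #include <linux/version.h>
    #include <linux/err.h>

    static struct class *demo_class;

    static int __init demo_init(void)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 4, 0)
            demo_class = class_create("demo");              /* new signature */
    #else
            demo_class = class_create(THIS_MODULE, "demo"); /* old signature */
    #endif
            return IS_ERR(demo_class) ? PTR_ERR(demo_class) : 0;
    }

    static void __exit demo_exit(void)
    {
            class_destroy(demo_class);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");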
> and the only way to avoid the associated issues is to mainline drivers, which most companies don't want to do (... don't want to invest in maintaining it etc.)
They wouldn't have to, that would be done for them, for free, if they got their code into the kernel...
That Linux note about why they don't want a stable API is ridiculous. It shows how unfriendly Linux is to out-of-tree software. They want third-party developers to refactor their working, tested drivers every time they decide to change something.
I think that at least the open source community should move to a "never break things" concept, so that you can write and test code once and never touch it again. Refactoring your code because a kernel or library has changed its interface is just a waste of time that gives nothing in return, often the time of unpaid volunteers who could be doing something useful instead.
This should apply to libraries, applications, programming languages, browser extensions. Take upgrading your Python 2 code to Python 3 as an example: you need to put in a lot of effort while gaining nothing. This is not how software should work. You should write code once and it should work forever without any maintenance.
Correct me if I'm wrong, but I thought that the actual code itself is stable, just that the compiled kernel API/ABI isn't.
So if you open source your drivers and get them accepted into the kernel, then you don't need to rewrite/recompile them for each new kernel version. Just like AMD did with their drivers. And I think this is in part a conscious decision that forces people to open source drivers.
But this is honestly just guesswork based on what I've read, would love to learn more!
The Linux developer community promised almost 20 years ago [1] that no release of the stable kernel will ever break something that worked in a stable kernel before. AFAICT, this promise holds. If you upstream your driver, the community will take care (cf. AMD), if you don't your users experience occasional pain (cf. NVidia).
> So if you open source your drivers and get them accepted into the kernel, then you don't need to rewrite[...] them for each new kernel version.
You don't, but someone does. In the note linked above they give an example of how the USB interface changed from synchronous to asynchronous; this must have required some refactoring in all drivers that used USB.
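Heavily simplified, that kind of rework looks something like the sketch below; the usb_* calls are the real in-kernel API, while the demo_* helpers and the surrounding driver context are made up for illustration.

    #include <linux/usb.h>
    #include <linux/slab.h>
    #include <linux/errno.h>

    /* Old, synchronous style: block until the bulk transfer finishes. */
    static int demo_read_sync(struct usb_device *dev, unsigned int pipe,
                              void *buf, int len)
    {
            int actual = 0;

            return usb_bulk_msg(dev, pipe, buf, len, &actual, 1000 /* ms */);
    }

    /* Newer, asynchronous style: submit an URB, get called back later. */
    static void demo_complete(struct urb *urb)
    {
            /* A real driver would check urb->status, consume
             * urb->actual_length bytes and usb_free_urb(urb) here. */
    }

    static int demo_read_async(struct usb_device *dev, unsigned int pipe,
                               void *buf, int len)
    {
            struct urb *urb = usb_alloc_urb(0, GFP_KERNEL);
            int ret;

            if (!urb)
                    return -ENOMEM;
            usb_fill_bulk_urb(urb, dev, pipe, buf, len, demo_complete, NULL);
            ret = usb_submit_urb(urb, GFP_KERNEL);
            if (ret)
                    usb_free_urb(urb);
            return ret;
    }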
That someone updating the drivers is exactly the same person who makes the change in the subsystem that they depend on. They cannot break what is already in the kernel, but they can break what's outside; it is after all an internal API.
What reasons are there other than corporate shenanigans?
Sure it takes more effort to mainline the driver and address review comments, but if you aren't designing a throw-away product, it's effort well spent.
Those reasons tend to be petty ones that benefit the hardware company and no one else. Seems fair that the hardware company should pay for the consequences of their decisions and not the people advancing the kernel.
I know a lot of people who are unfriendly towards sloppy work, yes. If the driver is of good enough quality, they will be happy to take it off your hands and look after it for you!
As a user, I hate apps that aren't in the repository. It takes effort to make sure they don't clobber something important just because the developer wanted to use the latest and most broken version of some library. Thankfully NixOS allows me to run whatever version of every dependency and then nuke the whole mess once I'm done with it.
That's not going to be fixed. This is a typical M×N problem: there are too many distros, and it would require a lot of work to port every app to every distro. Instead, there should be a "platform" (a set of APIs) that an app can target, so the same app would work on any compatible distro.
To be fair, if the app isn't in the NixOS distro, it's rarely that useful. The only one I had to figure out was a proprietary logic analyser interface, but it wasn't too hard.
> Take upgrading your Python 2 code to Python 3 as an example. You need to put a lot of efforts while gaining nothing.
Just Python 3's string/binary distinction and explicit conversion squashed an entire category of bugs. Yes, it required making explicit what was implicit before, but still, that's a huge achievement in my book; saying that it gains nothing is very, very shortsighted.