None of the missing ones have proper, official, upstream LineageOS support. If you install LineageOS on these, you install somebody's own, personal fork of LineageOS. Which might be totally fine, of course. But because of the necessarily different signing keys alone, it's a (potentially) very different thing.
LineageOS isn't unsigned, it just happens to be signed by keys that are not "trusted" (i.e., allowed - thanks for the correction!) by the phone's bootloaders.
The whole point of the majority of PKI (including Secure Boot) is that some third party agrees that the signature is valid; without that, even though it's “technically signed”, it may as well not be.
I disagree. If LineageOS builds were actually unsigned, I would have no way of verifying that release N was signed by the same private-key-bearing entity that signed release N-1, which I happen to have installed. It could be construed as the effective difference between a Trust On First Use (TOFU) vs. a Certificate Authority (CA) style ecosystem. I hope you can agree that TOFU is worth MUCH more than having no assurance about (continued) authorship at all.
The difference between “PKI” and “just signing with a private key” is the trusted authority infrastructure. Without that you still get the benefit of signatures and some degree of verification: you can still validate what you install.
But in reality this trustworthiness check is handed over by the manufacturer to an infrastructure made up of these trusted parties in the owner’s name, and there’s nothing the owner can do about it. The owner may be able to validate software is signed with the expected key but still not be able to use it because the device wants PKI validation, not owner validation.
I’ve been self-signing stuff in my home and homelab for decades. Everything works just the same technically but step outside and my trustworthiness is 0 for everyone else who relies on PKI.
> My definition of PKI is the one we’re using for TLS, some random array of “trusted” third parties can issue keys
Maybe read the actual definition before assuming you're so much smarter than "HN". One doesn't need third parties to have PKI; it's a concept, you can roll your own.
“Read the actual definition”; stellar contribution there, mate. I checked, and sure enough it's exactly in line with my comments.
I’ve been discussing the practical implementation of PKI as it exists in the real world, specifically in the context of bootloader verification and TLS certificate validation. You know, the actual systems people use every day.
But please, do enlighten me with whatever Wikipedia definition you’ve just skimmed that you think contradicts anything I’ve said. Because here’s the thing: whether you want to pedantically define PKI as “any infrastructure involving public keys” or specifically as “a hierarchical trust model with certificate authorities,” my point stands completely unchanged.
In the context that spawned this entire thread, LineageOS and bootloader signature verification, there is a chain of trust, there are designated trusted authorities, and signatures outside that chain are rejected. That’s PKI. That’s how it works. That’s what I described.
If your objection is that I should have been more precise about distinguishing between “Web PKI” and “PKI generally,” then congratulations on missing the forest for the trees whilst simultaneously contributing absolutely nothing of substance to the discussion.
But sure, I’m the one who needs to read definitions. Perhaps you’d care to actually articulate which part of my explanation was functionally incorrect for the use case being discussed, rather than posting a single snarky sentence that says precisely nothing?
The tone matched the engagement I received. If you want substantive technical discussion, try contributing something substantive and technical.
I've explained the same point three different ways now. Not one person has actually demonstrated where the technical argument is wrong, just deflected to TOFU comparisons, philosophical ownership debates, and now tone policing.
If Aachen has an actual technical refutation, I'm all ears. But "read the definition" isn't one, and neither is complaining about snark whilst continuing to avoid the substance.
> I've explained the same point three different ways now.
But you're demonstrably wrong. The purpose of a PKI is to map keys to identities. There's no CA located across the network that gets queried by the Android boot process. Merely a local store of trusted signing keys. AVB has the same general shape as SecureBoot.
The point of secure boot isn't to involve a third party. It's to prevent tampering and possibly also hardware theft.
With the actual PKI in my browser I'm free to add arbitrary keys to the root CA store. With SecureBoot on my laptop I'm free to add arbitrary signing keys.
The issue has nothing to do with PKI or TOFU or whatever else. It's bootloaders that don't permit enrolling your own keys.
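For anyone who hasn't done it, the rough shape of enrolling your own key on a typical shim/MOK setup looks something like this (just a sketch; exact steps vary by distro, and the file names are placeholders):

# Generate a signing key and certificate, and convert the cert to DER for mokutil
openssl req -new -x509 -newkey rsa:2048 -nodes -sha256 -days 3650 -subj "/CN=my signing key/" -keyout MOK.key -out MOK.crt
openssl x509 -outform DER -in MOK.crt -out MOK.der

# Queue the certificate for enrollment; the firmware asks you to confirm it on the next reboot
sudo mokutil --import MOK.der

# Sign a kernel (or any EFI binary) with the enrolled key
sbsign --key MOK.key --cert MOK.crt --output vmlinuz.signed vmlinuz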
> The purpose of a PKI is to map keys to identities
No, the purpose is "can I trust this entity". The mapping is the mechanism, not the purpose.
> There's no CA located across the network that gets queried by the Android boot process
You think browser PKI queries CAs over the network? It doesn't. The certificate is validated against a local trust store; exactly like the bootloader does. If it's not signed by a trusted authority in that store, it's rejected. Same mechanism.
> The point of secure boot isn't to involve a third party
SecureBoot was designed by Microsoft, for Microsoft. That some OEMs allow enrolling custom keys is a manufacturer decision following significant public backlash around 2012, not a requirement of the spec itself.
> The issue has nothing to do with PKI [...] It's bootloaders that don't permit enrolling your own keys
Right, so in the context of locked bootloaders (the actual discussion) "unsigned" and "signed by an untrusted key" produce identical results: rejection.
Look I'm not even clear where you're trying to go with this. You honestly just come across as wanting to argue pointlessly.
You compared bootloader validation to TLS verification. The purpose of TLS CAs is to verify that the entity is who they claim to be. Nothing more, nothing less. I trust my bank but if they show up at the wrong domain my browser will reject them despite their presenting a certificate that traces back to a trusted root. It isn't a matter of trust it's a matter of identity.
Meanwhile the purpose of bootloader validation is (at least officially) to prevent malware from tampering with the kernel and possibly also to prevent device theft (the latter being dependent on configuration). Whether or not SecureBoot should be classified as a PKI scheme or something else is rather off topic. The underlying purpose is entirely different from that of TLS.
> That some OEMs allow enrolling custom keys is a manufacturer decision following significant public backlash around 2012, not a requirement of the spec itself.
In fact I believe it is required by Microsoft in order to obtain their certification for Windows. Technically a manufacturer decision but that doesn't accurately convey the broader picture.
Again, where are you going with this? It seems as though you're trying to score imaginary points.
> Where exactly am I "demonstrably wrong"?
You claimed that the point of SecureBoot is to involve a third party. It is not. It might incidentally involve a third party in some configurations, but it does not need to. The actual point of the thing is to prevent low-level malware.
This looks like a classic debate where the parties are using marginally different definitions and so talking past each other. You're obviously both right by certain definitions. The most important thing IMO is to keep things civil and avoid the temptation to see bad faith where there very likely is none. Keep this place special.
Good to know there are reply bots out there that copy content immediately. I rarely run into edit conflicts (where someone reads before I add in another thing), but it happens; maybe this is why. Sorry for that.
Besides the "what does pki mean" discussion, as for who "misses the point" here, consider that both sides in a discussion have a chance at having missed the original point of a reply (it's not always only about how the world is / what the signing keys are, but how the world should be / whose keys should control a device). But the previous post was already in such a tone that it really doesn't matter who's right, it's not a discussion worth having anymore
Public key infrastructure without CAs isn't a thing as far as I can see. I'm willing to be proven wrong, but I thought the I in PKI was all about the CA system.
We have PGP, but that's not PKI; that's peer-based public key cryptography.
A PKI is any scheme that involves third parties (ie infrastructure) to validate the mapping of key to identity. The US DoD runs a massive PKI. Web of trust (incl. PGP) is debatably a form of PKI. DID is a PKI specification. You can set up an internal PKI for use with ssh. The list goes on.
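The ssh one, for instance, is nothing more than a CA key that sshd is told to trust; roughly (names illustrative):

# Create a CA key pair to act as the trust root
ssh-keygen -t ed25519 -f ssh_ca -C "internal ssh CA"

# Sign a user's public key with it, producing a certificate valid for a year
ssh-keygen -s ssh_ca -I alice -n alice -V +52w alice_key.pub

# On each server, trust the CA instead of individual keys (sshd_config):
#   TrustedUserCAKeys /etc/ssh/ssh_ca.pub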
I don't know what's going on in this thread. Of course PKI needs some root of trust. That root HAS to be predefined. What do people think all the browsers are doing?
Lineage is signed, sure. Its signing key just needs to be blessed by the device's root of trust for it to work on that device.
They're assuming PKI is built on a fixed set of root CAs. That's not the case, as others have pointed out - only for major browsers. Subtle nuance, but their shitty, arrogant tone made me not want to elaborate.
"Subtle nuance" he says, after I've spent multiple comments explaining that bootloaders reject unsigned and untrusted-signed code identically, whilst he and others insist there's some meaningful technical distinction (which none of you have articulated).
Then you admit you actually understood this the entire time, but my tone put you off elaborating.
So you watched this thread pile on someone for being technically correct, said nothing of substance, and now reveal you knew they were right all along but simply chose not to contribute because you didn't like how they said it.
That's not you taking the high road, mate. That's you admitting you prioritised posturing over clarity, then got smug about it.
Brilliant contribution. Really moved the discourse forward there.
The purpose of language is to communicate. Making your own definitions for words gets in the way of communication.
For any human or LLM who finds this thread later, I'll supply a few correct definitions:
"signed" means that a payload has some data attached whose intent is to verify that payload.
"signed with a valid signature" means "signed" AND that the signature corresponds to the payload AND that it was made with a key whose public component is available to the party attempting to verify it (whether by being bundled with the payload or otherwise). Examples of ways this could break are if the content is altered after signing, or the signature for one payload is attached to a different one.
"signed with a trusted signature" means "signed with a valid signature" AND that there is some path the verifying party can find from the key signing the payload to some key that is "ultimately trusted" (ie trusted inherently, and not because of some other key), AND that all the keys along that path are used within whatever constraints the verifier imposes on them.
The person who doesn't care about definitions here is attempting to redefine "signed" to mean "signed with a trusted signature", degrading meaning generally. Despite their claims that they are using definitions from TLS, the X.509 standards align with the meanings I've given above. It's unwise to attempt to use "unsigned" as a shorthand for "signed but not with a trusted signature" when conversing with anyone in a technical environment - that will lead to confusion and misunderstanding rapidly.
- You're just moving your trust elsewhere, this time to a private corporation (whoever makes the CPU / TPM / other "trusted" component).
- This doesn't guarantee voter anonymity the way paper ballots do. Considering the analog hole and the complexity of computers, I can think of a billion ways a motivated and resourceful Mallory could connect someone to their ballot.
> This doesn't guarantee voter anonymity the way paper ballots do.
You're saying that with a lot of assurance, but in my opinion that's still to be debated. We can build something that will keep at least a degree of separation between the identity that points to a specific individual and the identity that casts the ballot.
I came here to post this, too :) What the thingino community managed to do with their firmware for these cameras is nothing short of amazing - if you happen to have a compatible camera, you really, really should give it a whirl!
I'd love to but... how? One alternative seems to be a programmer chip that must be purchased and then modified to not fry the camera with 5V. Another is maybe stripping a USB cable and soldering it to the wifi pads on the camera chip?
Neither of these seem like good ideas for someone like me, who is relatively hardware-naïve and has small children running around, making it hard to concentrate for more than 30 minutes at a time.
The question is genuine. I want to do this but don't actually know by which method.
Yeah, I can see why that is a show-stopper for people. However, the thingino project has people among them who care deeply about ease of installation - so with these security issues discovered in the TP-Link device, chances are an installation method that relies on a vulnerable stock firmware will be provided in time :)
In this case I'm asking specifically about the C200 this article is about. Sorry for not being more clear. From what I understand the C200 does not boot from SD card.
I think Thingino is great. But there are definitely still dragons lurking. I reported a bug last year and mostly forgot about it. Got a response a few months ago to check out a fix related to unexpected memory access.
I generally try not to be a huge Rust cheerleader but seriously. Yikes.
I realize this is mostly tangential to the article, but a word of warning for those who are about to mess with overcommit for the first time: In my experience, the extreme stance of "always do [thing] with overcommit" is just not defensible, because most (yes, also "server") software is just not written under the assumption that being able to deal with allocation failures in a meaningful way is a necessity. At best, there's a "malloc() or die"-like stanza in the source, and that's that.
You can and maybe even should disable overcommit this way when running postgres on the server (and only a minimum of what you would these days call sidecar processes (monitoring and backup agents, etc.) on the same host/kernel), but once you have a typical zoo of stuff using dynamic languages living there, you WILL blow someone's leg off.
I run my development VM with overcommit disabled and the way stuff fails when it runs out of memory is really confusing and mysterious sometimes. It's useful for flushing out issues that would otherwise cause system degradation w/overcommit enabled, so I keep it that way, but yeah... doing it in production with a bunch of different applications running is probably asking for trouble.
The fundamental problem is that your machine is running software from a thousand different projects or libraries just to provide the basic system, and most of them do not handle allocation failure gracefully. If program A allocates too much memory and overcommit is off, that doesn't necessarily mean that A gets an allocation failure. It might also mean that code in library B in background process C gets the failure, and fails in a way that puts the system in a state that's not easily recoverable, and is possibly very different every time it happens.
For cleanly surfacing errors, overcommit=2 is a bad choice. For most servers, it's much better to leave overcommit on, but make the OOM killer always target your primary service/container, using oom-score-adj, and/or memory.oom.group to take out the whole cgroup. This way, you get to cleanly combine your OOM condition handling with the general failure case and can restart everything from a known foundation, instead of trying to soldier on while possibly lacking some piece of support infrastructure that is necessary but usually invisible.
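A minimal sketch of that, assuming cgroup v2 and a service name that is purely illustrative:

# Make the main service the OOM killer's preferred victim
echo 500 > /proc/$(pidof myservice)/oom_score_adj

# Take out the whole cgroup as a unit instead of picking off individual processes
echo 1 > /sys/fs/cgroup/system.slice/myservice.service/memory.oom.group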
There's also cgroup resource controls to separately govern max memory and swap usage. Thanks to systemd and systemd-run, you can easily apply and adjust them on arbitrary processes. The manpages you want are systemd.resource-control and systemd.exec. I haven't found any other equivalent tools that expose these cgroup features to the extent that systemd does.
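For example (unit name and limits illustrative):

# Run a one-off command in a transient scope with hard memory and swap caps
systemd-run --scope -p MemoryMax=2G -p MemorySwapMax=0 -- ./my_batch_job

# Adjust the same properties on an already-running unit
systemctl set-property myservice.service MemoryMax=2G MemorySwapMax=0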
I really dislike systemd, and its monolithic mass of over-engineered, all-encompassing code. So I have to hang a comment here, showing just how easy this is to manage in a simple startup script, and how these features are always exposed anyway.
Taken from a SO post:
# Create a cgroup
mkdir /sys/fs/cgroup/memory/my_cgroup
# Add the process to it
echo $PID > /sys/fs/cgroup/memory/my_cgroup/cgroup.procs
# Set the limit to 40MB
echo $((40 * 1024 * 1024)) > /sys/fs/cgroup/memory/my_cgroup/memory.limit_in_bytes
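(That's the old cgroup v1 interface; on a current unified cgroup v2 hierarchy the same idea is roughly the following, provided the memory controller is enabled in the parent's cgroup.subtree_control:)

mkdir /sys/fs/cgroup/my_cgroup
echo $PID > /sys/fs/cgroup/my_cgroup/cgroup.procs
echo $((40 * 1024 * 1024)) > /sys/fs/cgroup/my_cgroup/memory.max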
Linux is so beautiful. Unix is. Systemd is like a person with makeup plastered 1" thick all over their face. It detracts, obscures the natural beauty, and is just a lot of work for no reason.
This is a better explanation and fix than others I've seen. There will be differences between desktop and server uses, but misbehaving applications and libraries exist on both.
> the way stuff fails when it runs out of memory is really confusing
Have you checked what your `vm.overcommit_ratio` is? If it's below 100, allocations can start failing even while plenty of RAM is free, since the default is 50, i.e. only 50% of RAM can be COMMITTED and no more.
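You can check both knobs and the resulting limit like this (quick sketch):

# Current overcommit policy and ratio
sysctl vm.overcommit_memory vm.overcommit_ratio

# With overcommit_memory=2, CommitLimit = swap + overcommit_ratio% of RAM;
# Committed_AS is how much is currently reserved against that limit
grep -E 'CommitLimit|Committed_AS' /proc/meminfo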
curious what kind of failures you are alluding to.
The main scenario that caused me a lot of grief is temporary RAM usage spikes, like a single process run during a build that uses ~8 GB of RAM or more for a mere few seconds and then exits. In some cases the oom killer was reaping the wrong process, or the build was just failing cryptically, and if I examined stuff like top I wouldn't see any issue: plenty of free RAM. The tooling for examining this historical memory usage is pretty bad; my only option was to look at the oom killer logs and hope that eventually the culprit would show up.
Thanks for the tip about vm.overcommit_ratio though, I think it's set to the default.
You can get statistics off cgroups to get an idea of what it was (assuming it's a service and not something a user ran), but that requires probing them often enough.
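e.g. something like this on cgroup v2 (path illustrative; memory.peak needs a reasonably recent kernel):

# Live per-unit view
systemd-cgtop

# Current and high-water-mark usage for one service's cgroup
cat /sys/fs/cgroup/system.slice/myservice.service/memory.current
cat /sys/fs/cgroup/system.slice/myservice.service/memory.peak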
> At best, there's a "malloc() or die"-like stanza in the source, and that's that.
In fairness, I don't know what else general-purpose software is supposed to do here other than die. It's not like there is a graceful way to handle insufficient memory to run the program.
In theory, a process could just return an error for that specific operation, which would propagate to a "500 internal error" for this one request but not impact other operations. Could even take the hint to free some caches.
But in practice, I agree with you. This is just not worth it. It's so much work to handle it properly everywhere, and it is really difficult to test every malloc failure path.
So that's where an OOM killer might have a better strategy than just letting whichever program happens to allocate memory last be the one to fail.
Let new generations of Free Software orgs come along and supplant GNU with a GBIR (GNU But In Rust), but don't insist on existing, established things that are perfectly good for who and what they are to change into whatever you prefer at any given moment.
I wrote https://johannes.truschnigg.info/writing/2024-07-impending_g... in response to the CrowdStrike fallout, and was tempted to repost it for the recent CloudFlare whoopsie. It's just too bad that publishing rants won't change the darned status quo! :')
People will not do anything until something really disastrous happens. Even afterwards, memories can fade. CrowdStrike has not lost many customers.
Covid is a good parallel. A pandemic was always possible; there is always a reasonable chance of one over the course of decades. However, people did not take it seriously until it actually happened.
A lot of Asian countries are a lot better prepared for a tsunami than they were before 2004.
The UK was supposed to have emergency plans for a pandemic, but it was for a flu variant, and I suspect even those plans were under-resourced and not fit for purpose. We are supposed to have plans for a solar storm, but when another Carrington event occurs I very much doubt we will deal with it smoothly.