Given how much of a huge pain it was to use PGP / gpg back in the day (for email encryption/signatures, to be precise), it's amazing how easy end-to-end encryption on Signal and WhatsApp is. And how it lacks the drama that somehow always surrounded GPG.
Is it really because trust on first use is good enough for most cases? Or is email somehow so much different than chat? Or was PGP the proof-of-concept, and current e2e encrypted platforms are the v1.0? Or all of the above? Did I miss anything important?
PGP as an ecosystem has always suffered from toolbox-itis: when a tool (especially a cryptographic tool) tries to be everything, it ends up doing nothing particularly well. This got ossified in the form of OpenPGP, and has basically remained the status quo for the last ~25 years.
More generally, the OpenPGP world is in a bit of a double-bind: they can either fix things by breaking compatibility (at which point someone can reasonably observe that there's no good reason not to ditch OpenPGP entirely), or retain compatibility and accept that OpenPGP will never get much better than RFC 4880 and whatever smattering of drafts the GnuPG maintainers agree to implement. One way essentially results in an entirely different tool/standard that happens to be wearing PGP's skin; the other means keeping around misuse-prone and outright broken cryptographic primitives (and bad formats to boot).
(To answer your actual question: email is just a bad substrate for attempting E2EE messaging. Latacora has a great explainer post on why[1]. TOFU is a mostly adjacent concern; trust/identity negotiation is hard, but the thing that makes WhatsApp, Signal, etc. actually work is that they eliminate manual key management and make the right cryptographic choices for the user, rather than expecting the user to hold the tool correctly. In other words: they're misuse-resistant, where PGP as an ecosystem has historically not been.)
I’ve read this response before, and I don’t think it’s great: it makes claims about time-testedness and simplicity that on first glance apply to PGP, but in reality are either outright wrong (the results against MDC instead of true AEAD are in, and it’s a fail) or misleading in their conclusions (a “simple” packet structure that encourages EXPTIME parsing is not actually simple).
The assumption underlying much of the post is that PGP is only used in offline, stateless applications. This would make the arguments stronger, except that it isn’t true[1].
>...the results against MDC instead of true AEAD are in, and it’s a fail...
It's really OCFB-MDC. It's only an authenticated mode when the two things are used together (like GCM). It doesn't provide protection of associated data (the AD part of AEAD), but that isn't something that seems potentially useful for what the OpenPGP standard is used for. I don't know what "true AEAD" means in this context. As a user I only care that the OCFB-MDC mode is actually secure; I am not interested in any philosophical aspects.
The stuff about offline stateless applications only adds to the argument. A version of OCFB-MDC could be used, for say, TLS and would be expected to be secure.
> It doesn't provide protection of associated data (the AD part of AEAD) but that isn't something that seems to be potentially useful for the stuff that the OpenPGP standard is used for.
It isn't sensible to assert this, because people use OpenPGP in all kinds of crazy ways. One of the recurring headaches in applied cryptographic engineering is discovering that people do, in fact, attempt to use PGP for instant messaging (per above), as a TLS certificate delivery mechanism, etc. These are contexts where the use of an AEAD is frequently appropriate.
> As a user I only care that the OCFB-MDC mode is actually secure.
In practice, it has not been[1]. PGP's decision to use MDC instead of a real MAC is a classic example of home-rolled primitives being conceptually algebraically sound but broken in user settings. The solution here is simple: OpenPGP is not special, and should use a MAC or AEAD mode like everyone else does. It also shouldn't release the plaintext until the authentication tag is actually validated, which was another profound historic breakage with MDC.
You posted a link to the email where Trevor Perrin acknowledged that the corrected specification was secure.
>PGP's decision to use MDC instead of a real MAC...
The MDC can be interpreted as a MAC. The hash is first seeded with a MAC key (the "random block"). So a boring old hash MAC. SHA-1 is vulnerable to length extension attacks but that is not an issue here as the attacker never gets access to the state of the hash (everything is encrypted). I guess this could be considered an advantage of MAC then encrypt. The only property the hash requires is that the MAC key will be propagated to the check value in such a way that it is indistinguishable from random. So SHA-1 is wild overkill here.
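The construction described can be sketched in a few lines. This is a toy illustration of the idea, not the actual OpenPGP packet layout: the random prefix plays the role of a MAC key precisely because it is only ever transmitted under encryption.

```python
import hashlib
import os

def seal(plaintext: bytes) -> tuple[bytes, bytes]:
    # The random prefix acts like a MAC key: the attacker never sees it,
    # because prefix, plaintext, and MDC all travel under encryption.
    prefix = os.urandom(16)
    mdc = hashlib.sha1(prefix + plaintext).digest()
    return prefix, mdc

def check(prefix: bytes, plaintext: bytes, mdc: bytes) -> bool:
    # Length extension doesn't apply here: the attacker has no access to
    # the hash state, since everything feeding it is encrypted.
    return hashlib.sha1(prefix + plaintext).digest() == mdc
```

In the real format, additional trailer bytes are hashed as well, and the whole bundle sits inside the OCFB-encrypted packet; the sketch only shows the "seeded hash as MAC" shape being discussed.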
Neither Signal nor WhatsApp is MITM-able. Could you explain why you think they are?
(Signal and WhatsApp also have forward secrecy properties that PGP, as an ecosystem, is nowhere close to providing. PGP very much still operates under the "one master key that you must never ever lose or disclose" paradigm.)
They are trivially MITM-able. Not long ago there was news posted here that telecom employees are selling SIM swaps. There's your MITM vulnerability. People can pay to impersonate others by taking over their phone numbers and consequently their messenger accounts.
This attack will be noticeable: there will be a message warning the user that the cryptographic keys have changed on the other end. You know what else generates these warnings? Reinstalling WhatsApp, switching phones... They're basically noise to users by now due to alarm fatigue. How many users even know what this key stuff means anyway? How many people even use the built-in key verification function, which consists of just scanning a QR code? I've yet to meet a single person who cares about any of this stuff.
Signal acts as a central key server and can present different identity keys to users, so signal itself can middle communications unless there is out-of-band verification of identity keys. This is described here https://www.ndss-symposium.org/wp-content/uploads/2017/09/09... (II. Background) and many other places.
This is conflating TOFU and (active) MITM. If you don’t check your peer’s security number in Signal, you are essentially trusting the key directory to be honest about its identity mapping. This is identical to trusting a PGP keyserver, except that Signal has empirically developed a more secure and private key directory than the PGP keyserver ecosystem.
In other words: you always need to perform OOB identity verification, regardless of your messaging system of choice. There is no way around this; it’s a fundamentally social and UX problem rather than a cryptographic one. But this doesn’t change the fact that, once you have a trusted identity, Signal’s MITM protections are significantly stronger than PGP’s (including forward secrecy, as noted before).
The WoT has been defunct in PGP for over 4 years at this point[1]. It was also never a particularly good solution to this problem, because it (1) was itself largely unauthenticated and blindly trusted by relying parties, much like a normal key directory, and (2) failed to encode what key holding parties were trusted for (the classic example being that I absolutely trust party A to do C for me, but this does not imply that I want to encourage networked party B to trust A for D). And of course the keyserver birthday attacks and DoS, as cherries on top :-)
(I think WoTs can solve these problems. But PGP's implementation has been so far the most ambitious tried by a semi-large audience, and it didn't hold up. We need a better starting point.)
It's hard to prove the whole chain of possibly-tampered hardware and proprietary software hasn't gotten any malicious update to steal private keys at any point
If your threat actor is omnipotent, you’re going to have a very hard time designing a coherent threat model against them. Neither Signal nor PGP nor hiding under the bedsheets will protect you from an adversary capable of directly recording your screen.
(But note: absent something like that, it in fact is possible for Signal to prove that you’re communicating with the same identity you initiated with. That’s the whole point of authenticated key agreement, which is not something that PGP can natively perform.)
Email being different from chat is a big factor and another one is that the entire system in Signal is designed and engineered specifically for the purpose, from the protocol on up. Retrofitting privacy, confidentiality, etc onto email is one of those noble 90's cryptography dreams that turned out to be unrealistic and impractical.
These are the reasons projects such as Wave (by Google) and matrix.org were introduced. But e-mail dies hard. As one can see, it just keeps on adding new bandages to try to make it work. And most users don't care about security or privacy until it is a mess.
I find it hard to believe that you couldn’t retrofit Signal into an email app. Encrypted comms is quite good at sitting on unencrypted untrusted channels and signal is already asynchronous.
The challenge is that there’s no money to be made so you’d need a non profit like Signal to do it to disrupt that industry.
You can make an email-like app out of Signal but nobody would call that email since it won't work with the thing everyone else calls email. You can't just add Signal to actual email and get something with the security and privacy properties of Signal, though.
There is, however, an app like DeltaChat that looks more or less like WhatsApp, uses AutoCrypt for E2E encryption, and uses IMAP/SMTP as its storage and transport media.
Fun fact: the main reason I developed rpgp in the first place was to power the pgp portion of deltachat, as we ran into the same problem of not having a good library-based implementation. And spawning gnupg on a phone was never an option.
Why couldn't you have a browser extension for gmail that checked whether the other end was registered & if it was encrypt the email body before it's sent? You get transparent upgrades for your friends who have the extension installed.
The closest solution I remember to this happened roughly 15 years ago when there was a browser extension that detected pgp on Email bodies and was able to get public keys and encrypt/decrypt emails automatically or with a single click, if I recall correctly.
It felt like a small-team or single-person project. There was a cat-and-mouse game being played against Gmail interface changes, with the encryption being broken every now and then. It required me, a random user, to trust the extension developer(s) with my Gmail and gpg keys. I feared an xz-backdoor-style situation compromising my Gmail account and gpg keys.
And for this extension to succeed I also had to convince all my Gmail friends and contacts to use it, and convincing them to trust the developers of the extension as I had done.
While I felt confident that the extension was perfectly safe, I didn't dare to convince my family and friends to use it because (a) I could not guarantee that the extension wouldn't be compromised in the future and (b) the chance of the extension breaking due to a future Gmail UI change was close to 100%. Also, the more users the extension had, the higher the chances of someone attempting to infiltrate it and plant a backdoor...
If I recall correctly, eventually the developer got tired of the Gmail UI changes and increasing demands of users and moved on, stopping the development.
Anyone with time could try to develop such an extension, but there were drawbacks back in the day...
You can, but if you sent the email to someone that doesn't have the extension, they wouldn't be able to read it, and you have no way to know who has the extension installed. That's the core problem here: pretty much nobody uses encrypted email, so you can't send encrypted emails to them. The problem doesn't exist with Signal, since everyone using Signal uses a client that supports encryption.
Except that it’s no longer email at that point. As others have pointed out: if you want to do things right you need to violate ordinary message and metadata assumptions in SMTP, at which point you’re better off skipping the heartburn and using a modern design.
Nope actually it doesn’t. It makes the case that pgp is a bad implementation but makes the leap to claim that means there’s no way to improve the situation.
For example, you could run a proxy server to deliver the messages between parties. So if you're emailing someone, the browser extension changes the address to be a server you control, sends you the encrypted email that has everything meaningful encrypted, you decrypt the part you're able to (i.e. the intended recipient) and forward that.
As far as either email server is concerned, your email server is sending out and receiving encrypted emails but there’s no metadata to connect anyone. The key handling uses your access to the email account to validate access same as signal uses the phone number.
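The scheme above could be sketched roughly like this (everything here is a stand-in: the proxy address is hypothetical, and the XOR "cipher" substitutes for a real E2E session key):

```python
from email.message import EmailMessage

PROXY_ADDR = "relay@proxy.example"  # hypothetical relay you control

def toy_encrypt(data: bytes) -> bytes:
    # Stand-in for real E2E encryption. XOR is involutive, so the same
    # function also decrypts; a real design would use an actual AEAD.
    return bytes(b ^ 0x42 for b in data)

def wrap_for_proxy(msg: EmailMessage) -> EmailMessage:
    # Seal everything meaningful (real recipient, subject, body); the
    # SMTP servers in between only ever see the proxy address.
    inner = f"To: {msg['To']}\nSubject: {msg['Subject']}\n\n{msg.get_content()}"
    outer = EmailMessage()
    outer["To"] = PROXY_ADDR
    outer["Subject"] = "encrypted"
    outer.set_content(toy_encrypt(inner.encode()).hex())
    return outer
```

The proxy would peel off its layer, recover the real recipient, and forward; only the hop metadata visible to each mail server is the proxy's own address.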
That seems like a pretty close system to how Signal works, and I don't see anything meaningfully different. If you do, can you actually elaborate? I don't feel like the link you pointed me to answers why my proposal would be LARPing security, since the security and threat model feels very similar to Signal's.
Here are some choice quotes I pulled out of that link and why I have problems with the claims made:
> Searchable archives are too useful to sacrifice, but for secure messaging, archival is an unreasonable default. Secure messaging systems make arrangements for “disappearing messages”.
Signal and most apps except Snapchat (which isn't e2e afaik) don't archive by default. But yes, it is true that encrypted webmail and search are incompatible, whereas messaging apps store the data locally and thus can provide search. You'd need a dedicated client that could store all the encrypted messages locally, but people aren't as used to that.
> Some email clients have obscure tools for automatically pruning archives, but there’s no way for me to reliably signal to a counterparty that the message I’m about to send should not be retained for more than 30 minutes.
Talk about security LARPing. Once the message is sent, you've lost all control of it. You're just relying on social contracts: that the app honors the request and that the user is using an unmodified app.
> No matter how good a job one does securing their own data, their emails are always at the mercy of the least secure person they’ve sent them to.
> it's amazing how easy end-to-end encryption on Signal and Whatsapp is
They both use a centralized authority for identification and key sharing, sidestepping the hardest part entirely. Also, they are their own protocol, and as such you don't suffer the encrypted/plain-text-only email dichotomy, sidestepping another pain point.
When you don't care about the decentralized nature of email and can force everything to be encrypted, then the problem becomes much easier.
> it's amazing how easy end-to-end encryption on Signal and Whatsapp is
Amazing that you consider a proprietary thing like whatsapp remotely secure. For all we know it's backdoored.
And signal… not available on fdroid… thus mostly installed via apple/google… how difficult would it be to push a backdoored update to selected individuals?
If Gmail had decided to care about it, the experience in email might be nice too (or at least for Gmail users). But instead it wants to be able to read it to suggest calendar events etc.
i.e. WhatsApp and Signal care about UX and mainstream adoption. Nothing in the email world has tried to do that with PGP.
"State actors" is just a term that usually refers to threats from dedicated and resourceful entities that don't have to worry about funding, being tracked by their government, etc. They usually are in the market for 0days and have dedicated resources to throw at their targets. So you can basically say the same about any tool. If I know that I am being targeted by a state actor while living in the US, the only thing that can help me is going to the FBI. This assumes, of course, that the state actor isn't the US itself and that you know you are being targeted.
On an individual level, if you are being targeted with a tool using a 0day, then you lost before you even knew there was a game.
I was making a poorly informed assumption that Signal's popularity must hinge on a key exchange process that can be MITM'd by authorities with enough privileges on the network to set up a persistent interception operation that spans every access point on the target's network partition.
So for instance, if you are using signal to do an outreach and support operation for a human rights group operating in China, it might be reasonable to assume that some Chinese intelligence orgs can view your plaintext. The strategy would then be to not transmit any information that would allow a decisive action against the group. The opportunity cost of revealing the surveillance has to be greater than the utility of acting on the intelligence. But obviously this would compromise the scope of assistance you could provide.
And the risk is iffy, for a few reasons that I've thought of since seeing your response:
1) PGP key exchange has the same problems when done over the network. Doing foreign outreach via PGP has the same problems.
2) I don't know of any examples of Signal MITM attacks stemming from weak key exchange. Worse, I don't actually know how key exchange works in Signal.
4) Key exchange problems are more of a universal nitpick against all crypto systems, not a full compromise
I'd be curious about your feedback - even if it's a chastisement.
I am also curious if you feel signal has any REAL issues which keep you up at night, if not key exchange.
Yes, phone operating systems are more modern, and implement: sandboxing, hardware security, MAC, app permissions, etc. That’s why they do less. In this category, I would use iPadOS in lockdown mode.
But a phone is a device for the everyday use of an average user. It's not designed to be a security device that protects people against targeted attacks. If your threat model is a state actor, you might be better off with a desktop.

Phones have phone numbers and take unauthenticated input from external sources (messages). There is an opaque baseband chip. The threat model in which they are secure is skewed in favor of the phone provider: essentially you rent their device and share your information. Phones communicate information about the user, such as location, and might conveniently back up user data to external servers by default. There is limited app choice. You can't inspect, select, and customize a phone OS.

On the other hand, I can cherry-pick a desktop OS, even build one by selecting components; see, monitor, and control the software; and run it on hardware from a source I trust.
No, we just don't agree. I've spent a fair bit of time on this problem, and more time than that talking to people who specialize professionally in helping targets of state attacks protect their comms, and the consensus I've heard is that nobody is better off on a desktop platform. If you're using a mainstream desktop/laptop platform, even Belize can afford to buy their way into your device.
I'm not saying this to convince you, just to establish that we are definitely talking about the same thing, and I am definitely not being flippant.
I could think of a few ranging from just asking Apple for a copy of your iCloud backups, asking you personally for a copy of the messages through to exploiting iOS itself.
The context was what are things that those with more or less unlimited resources can do in order to read your signal messages.
There are 3 particular paths that aren't in any way open to most attackers, but will work here.
Saying it impacts all applications isn’t wrong but it’s also not relevant to what was asked.
If whatever the hell you are doing requires a threat model where the NSA is interested in finding out who you are talking with and what you are saying the original comment of signal is inappropriate is 100% accurate.
Ok, sure! Just: this is a thread about why WhatsApp and Signal are so much more successful than PGP, and the argument given was that they sacrificed security to get their popularity. No, that's not it.
WhatsApp is a phone app for messaging. PGP is a protocol for email and file encryption in any operating system. It’s unclear why you are comparing them.
PGP could have a phone app that does easy encryption, such as Protonmail.
IMO, the way GPG was done killed what could have been a decent ecosystem. It's a combination of two factors:
1. Don't roll your own crypto.
2. The available code (gpg) is a colossal pain to use.
So for instance, how does something like KMail deal with gpg? There's no libgpg originally. There's just the gpg tool, so you've got to call it as a sub-process, and it really sucks:
1. You have to deal with process management, multiple filehandles, text parsing, non-trivial interactions, etc.
2. It's slow. You pay startup costs every single time. This is a huge problem on something interactive like a mail client, and it's dependent on things like the amount of keys in the gpg store.
3. gpg has very specific ideas about how it wants to be used, and not everything fits.
Say that you oh, want to do some stats on GPG keys. There's no libgpg to just read an .asc file and get the list of signatures from that, no. You have to call gpg, feed it the key, parse the result. For some things you might actually have to have gpg import the key first. Manage a fake home dir for GPG. Deal with the horrible performance as the keystore grows. A million keys at a gpg invocation per second is going to be around 2 weeks.
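To make the pain concrete, here's roughly what "just call the gpg tool" looks like. This is a sketch: the `--with-colons` field layout follows gpg's machine-readable output format, in which field 10 of a `uid` record is the user ID string.

```python
import subprocess

def list_key_uids(homedir: str) -> list[str]:
    # Every single query pays process startup plus a full keyring load.
    out = subprocess.run(
        ["gpg", "--homedir", homedir, "--with-colons", "--list-keys"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_uids(out)

def parse_uids(colons_output: str) -> list[str]:
    # One record per line, fields separated by ':'. And this is the
    # *easy* case: interactive operations additionally need stdin,
    # stderr, and --status-fd handling on top of this text parsing.
    uids = []
    for line in colons_output.splitlines():
        fields = line.split(":")
        if fields and fields[0] == "uid":
            uids.append(fields[9])
    return uids
```

All of the failure modes the comment lists (process management, text parsing, per-invocation startup cost scaling with keyring size) live in those dozen lines, multiplied across every application that shells out.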
Unfortunately it's only now that gpg is effectively dead that the problem started to get fixed.
Also, at this point GPG is effectively a legacy technology anyway. Modern cryptographic thought considers GPG to be a terrible idea for a whole bunch of reasons that are deeply built into it, so the only solution for that is throwing it out.
I agree that gpg did not age well. If we compare it to a different project with a similar history, curl, it's apparent that gpg chose wrong on several fronts. It should have been a library first instead of a CLI tool. The funny part is that even the library for gpg (gpgme) internally calls the binary.
I've played around with designing a higher level library to OpenPGP once (https://pypi.org/project/pysequoia/) and personally I think it yields more readable, faster and secure code.
> Funny part is that even the library of gpg (gpgme) is internally calling the binary.
Sounds like a great way to transition things in a saner direction. You know it will be bug-for-bug compatible with calling the gpg-binary. Having one blessed text-parser with a lot of eyes on it is much better than everyone rolling their own.
Getting people used to depending on a library instead of shelling out also means it becomes possible to move the library to independent implementations.
Disclaimer: I work on GPG and I am one of the KMail core developer.
The standard way to use GPG is via the gpgme library, which then communicates over the Assuan protocol with the gpg daemon. Gpgme has official bindings for C++, Qt, and Python. In KMail we use the Qt bindings and it works fine.
> Disclaimer: I work on GPG and I am one of the KMail core developer.
Nice work :) Looking forward to seeing further improvements to it, it's still got a few rough edges, but otherwise I really like it.
> The standard way to use GPG is via the gpgme library which then communicates over the Assuan protocol with the gpg daemon.
Yeah, but all gpgme does is call the gpg binary and present a more palatable interface to applications.
That solves the problem as far as the API goes, but performance is still terrible for some uses, and it's still liable to run into all kinds of weird problems in edge cases. But at least for something like mail clients it seems to work well enough.
This is exactly the meme I would try to propagate if I had backdoored every popular crypto implementation. More people should roll their own crypto so they can stumble and learn and eventually get good at it.
This assumes that implementation diversity stymies an attacker, which isn't really true in cryptography. Cryptographic errors have well-understood "shapes," frequently present even when the raw algebra of the implementation is correct; adversaries are delighted when they discover yet another hand-rolled RSA implementation that's susceptible to BB'98 or BB'06.
Rolling your own crypto is obviously a bad idea. It turned out that cryptography implemented by developers who aren't experts in cryptography was sometimes merely obfuscation.
If you want to roll your crypto, become a cryptographer first. That sounds circular, but isn’t.
I guess you are considering the case where a single signer transfers commits over SSH to a single verifier, with both putting the same meaning into the act of transfer as people normally would into a signature. But git is a distributed VCS, with signatures handling many more scenarios: consider multiple committers and multiple signatures, transfer over multiple, different, untrusted channels, multiple verifiers.
I am not aware of any outstanding SHA-1 issues that would require a change in the current RFC 4880 OpenPGP standard. There was an obscure attack that involved generating two keypairs with colliding SHA-1 signatures and getting a third party to sign one of them, but you can just use a different hash (say SHA-256). The SHA-1 used in the MDC portion of the authenticated encryption mode doesn't represent, and is very unlikely to ever represent, any security weakness (the hash used there doesn't require any particular cryptographic properties). SHA-1 is used for the key fingerprint, but collision resistance is not required in general for key fingerprints. An attacker could in theory create two different keys with the same fingerprint, but then they would just own two keys that would be hard to distinguish from one another. You don't sign the fingerprints, you sign the actual public key. In general, it would be a bad idea to specify that the hash used for a key fingerprint required collision resistance, as that would mean the fingerprint would have to be something like an unusable 256 bits long to prevent birthday attacks.
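The birthday arithmetic behind that last point can be demonstrated on a deliberately tiny fingerprint. Here SHA-256 truncated to 16 bits stands in for a short key fingerprint; the search finds a collision in roughly 2^8 attempts rather than 2^16:

```python
import hashlib
import itertools

def tiny_fingerprint(data: bytes, bits: int = 16) -> bytes:
    # Truncated hash standing in for a short key fingerprint.
    return hashlib.sha256(data).digest()[: bits // 8]

def find_collision(bits: int = 16) -> tuple[bytes, bytes, int]:
    # Birthday search: a collision is expected after ~2**(bits/2)
    # attempts, not 2**bits -- here a few hundred, not 65536.
    seen: dict[bytes, bytes] = {}
    for i in itertools.count():
        m = i.to_bytes(8, "big")
        fp = tiny_fingerprint(m, bits)
        if fp in seen:
            return seen[fp], m, i
        seen[fp] = m
```

Scaled up, colliding a 160-bit fingerprint costs on the order of 2^80 work, while matching a specific fingerprint (a second preimage) costs ~2^160; resisting collisions at a 128-bit level would indeed require the 256-bit fingerprint the comment calls unusable.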
There are higher-level implementations that use the dates on signatures to outright reject SHA-1 material, but that gives only limited protection.
Yep. We've got it working with OpenPGP Card devices (Yubikeys, Nitrokeys, etc.). The signing part was actually pretty easy; the decryption required a bit more work, but the maintainer was super responsive (https://github.com/rpgp/rpgp/pull/315).
How does it interface with the cards? IIRC the rust pcsc library used by Sequoia needed C libraries. I've been doing some NFC stuff too and was looking for a pure-rust solution if there was one.
Sequoia aims to be a "complete gnupg replacement" but in many ways it's still the same mindset as gnupg (core devs are former developers of gnupg). Sequoia also got support from VCs and just recently received significant funding from the STF which gives them resources to polish the documentation etc.
Rpgp is much smaller but also very flexible. It's easy to adjust to more specific needs. It's also associated with RustCrypto project and with Rust in general.
In particular, in the presence of an insufficiently wide hash, the absence of padding here means that RSA signature validation is not secure under EUF-CMA. Matt Green has a great post on why and when EUF-CMA matters[1].
(This isn't necessarily this implementation's fault, since PGP seemingly (!) encourages the stripping of padding from signatures. But I can't find another source for whether this is actually encouraged by OpenPGP, or whether implementations just widely allow it.)
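For intuition on why padding matters for EUF-CMA: unpadded (textbook) RSA signatures are multiplicative, so two known signatures can be combined into a valid signature on a message nobody ever signed. A toy demonstration with deliberately tiny textbook parameters (never use keys like this):

```python
# Toy textbook-RSA parameters; real keys are 2048+ bits.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def sign(m: int) -> int:
    return pow(m, d, n)          # no padding: just m^d mod n

def verify(m: int, s: int) -> bool:
    return pow(s, e, n) == m

# Multiplicativity: sign(a) * sign(b) == sign(a * b) mod n.
s1, s2 = sign(12), sign(7)
forged = (s1 * s2) % n           # valid signature on 84, never signed
```

Padding schemes like PSS (or even PKCS#1 v1.5) destroy this algebraic structure, which is exactly the kind of forgery the EUF-CMA definition rules out.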
I am not sure if this is an actual issue; none of the auditors who have looked at this so far have mentioned it being a problem. But I will have to investigate what the exact state is.
According to `git blame`, this was introduced June 2023, i.e. after your audit in 2019. But maybe it was moved from an older piece of the codebase, I didn't dig too deep.
(Looks like the IncludeSec folks did a decent job in 2019. Hi Eric!)
However, I misread this: I thought the padding was being done on the cleartext signing side, but this is padding of the signature itself. So there's some malleability here, but it isn't susceptible to DO'1985. I'll update my top-level comment.
GnuPG has by default started emitting keypairs with a preference for the LibrePGP version of the OCB block cipher mode. That mode is not compatible with what the other faction is doing and is not generally supported in any case. Arch[1] and other distributions have apparently patched this default out.
Is Rpgp emitting any new block cipher modes or generating keys that might cause such emission in the future? The risk here is a sort of incompatibility nightmare where decryption becomes a crap shoot.
While rpgp is slowly gaining support for this format (to aid users in being able to talk to anyone they want), it will keep emitting the broadest compatible format by default for a while, until the whole ecosystem has upgraded. The aggressive stance from gnupg is really hurting people and removes one of the biggest benefits of pgp: broad interop between compliant implementations.
Each faction has a seemingly legitimate position. My understanding is that there are no cryptographic weaknesses in the existing block cipher mode (SEIP) so there are no downsides to just doing that indefinitely.
Rpgp is great (we're currently using it for a better git signer with smartcards) but I wonder why is it trending right now at HN? Maybe because it's currently #1 in the test suite? https://tests.sequoia-pgp.org/
Indeed. There were some discussions over choosing MPL, but the parent foundation was more aligned with GPL, and after it collapsed it seems the idea of relicensing to MPL was dropped.
Do note that sequoia and rpgp are different in many aspects so the licensing is just one thing.
unless you’re stating crypto primitives should only be written in assembly
Yes, that's exactly what I'm stating.
Have a look at openssl, boringssl, nspr, etc. They all implement the core modular arithmetic for RSA and the s-box table for AES using assembly language. There is no reliable way to prevent a C compiler from "optimizing" your constant-time code into non-constant-time code.
another rust TLS implementation
rustls uses assembler (from boringssl) for these routines. It is not 100% rust, and that's a good thing.
Because you're missing that different CPUs behave differently given the same instructions: your "solution" to timing attacks doesn't solve anything until you force every device to have the same identical CPU.