So much vitriol. I understand it's cool to hate on EA after the SBF fiasco, but this is just smearing.
The key to scientific thinking is empiricism and rationalism. Some people in EA and lesswrong extend this to moral reasoning, but utilitarianism is not a pillar of these communities.
Empiricism and rationalism both tempered by a heavy dose of skepticism.
On the other hand, maybe that is some kind of fallacy itself. I almost want to say that "scientific thinking" should be called something else. The main issue is the lack of experiment: using the word "science" without experiment leads to all sorts of nonsense.
A word that means "scientific thinking, as much as possible, without experiment" would at least embed a dose of skepticism in the process.
The Achilles heel of rationalism is the descent into modeling complete nonsense. I should give lesswrong another chance I suppose because that would sum up my experience so far, empirically.
EA to me seems like obvious self serving nonsense. Hiding something in the obvious to avoid detection.
As far as I am aware, there is no way to stop malicious tags without modifying the protocol to authenticate the messages being broadcast as originating from a genuine tag. [1]
Making a tag that is not trackable is currently as easy as flipping a bit in the BLE advertisement. The same message is broadcast to all phones, but yes, a tag could also produce multiple identifiers and evade detection. [2]
[1]: Section 8 of "Abuse-Resistant Location Tracking: Balancing Privacy and Safety in the Offline Finding Ecosystem". https://eprint.iacr.org/2023/1332.pdf
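To make the "flipping a bit" claim concrete, here is a toy sketch of a Find-My-style BLE advertisement payload. The byte layout follows public reverse engineering of the offline-finding broadcast (e.g. the OpenHaystack project); the choice of which status bit to flip, and what a scanner infers from it, is an illustrative assumption, not something taken from the paper.

```python
# Toy sketch: assemble a simplified offline-finding advertisement and
# toggle one bit in the status byte. Layout per public reverse
# engineering (OpenHaystack); the flipped bit's semantics are an
# illustrative assumption, not Apple documentation.

def build_advertisement(pubkey_tail: bytes, status: int) -> bytes:
    """Assemble a simplified offline-finding advertisement payload."""
    assert len(pubkey_tail) == 22
    return bytes([
        0x1E,        # AD structure length
        0xFF,        # AD type: manufacturer-specific data
        0x4C, 0x00,  # company ID: Apple
        0x12,        # offline-finding payload type
        0x19,        # payload length
        status,      # status byte (battery level, state flags)
    ]) + pubkey_tail + bytes([0x00, 0x00])  # key bits / hint (simplified)

adv = build_advertisement(bytes(22), status=0x10)
# Toggling a single status bit changes what a scanning phone infers
# about the tag's state, while the rest of the broadcast is unchanged.
evil = bytearray(adv)
evil[6] ^= 0x20  # flip one bit in the status byte
assert bytes(evil) != adv and evil[7:] == adv[7:]
```

The point is that the "genuine" and "malicious" payloads differ in one bit; nothing in the broadcast is authenticated, so receivers cannot tell them apart.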
This is such a charade. Making "invisible" airtags is trivial [1], and I wouldn't be surprised if such airtags are being manufactured en masse.
We allowed the creation of a global tracking network under the false pretense of privacy. The entire Find My security model falls apart when considering "malicious" tags, and Apple knew about this from the start.
In the security world, it seems accepted that no security measure is a silver bullet that's impossible to get around.
Rather, best practice is to compose many layers of security measures, which together raise the level of effort an attacker must expend to exploit people.
I would argue that it is not misleading -- the website domain is, after all, "breakingthe3ma.app".
The title of the paper presents a more academic angle, and is intended to highlight what the "learned lessons" are, but let's not forget that Threema was vulnerable to our attacks for 10+ years.
> well-reviewed zero-footgun nacl.SecretBox()-style thing for this use case, but there simply isn't.
You'd be surprised, but I've seen designers who managed to shoot themselves in the feet with SecretBox() calls alone.
Anything more complex than using a library that does the crypto for you calls for an external/crypto team review.
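One classic way to shoot yourself in the foot with a SecretBox-style API is nonce reuse. Below is a toy model of that failure: XOR with a hash-derived keystream stands in for the real stream cipher (SecretBox uses XSalsa20-Poly1305), but the algebra of the leak is the same. This is an illustration, not real cryptography.

```python
# Toy model of the classic SecretBox footgun: reusing a nonce.
# XOR with a keystream stands in for the real stream cipher; the
# algebra (and the leak) is the same with XSalsa20.
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    ks = keystream(key, nonce, len(msg))
    return bytes(a ^ b for a, b in zip(msg, ks))

key, nonce = b"k" * 32, b"n" * 24
c1 = encrypt(key, nonce, b"attack at dawn!")
c2 = encrypt(key, nonce, b"retreat at 9pm!")
# With a repeated nonce the keystreams cancel: c1 XOR c2 equals
# p1 XOR p2, leaking plaintext structure without the key.
xor = bytes(a ^ b for a, b in zip(c1, c2))
assert xor == bytes(a ^ b for a, b in zip(b"attack at dawn!", b"retreat at 9pm!"))
```

The library call itself is "correct" in both encryptions; the footgun is entirely in how the caller manages nonces.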
Interesting! While you're here, is there anything you can add to the general debate about Telegram's crypto design? I see a lot of people here disparaging it, but nothing really from Telegram to explain what choices were made or why.
To put it in Igor's words, it is like "somebody baked a cake following a recipe, but without ever having tasted or seen a real cake".
The crypto design is brittle, but the practical attacks are somewhat limited. The reason it's so disparaged by cryptographers is that it ignores several decades of cryptographic advances -- the whole saga of attacks on SSL / TLS<=1.2 taught us that key separation and clear protocol composition boundaries are important, but Telegram fails disastrously at both.
Security proofs should be made before a protocol is used, not as an afterthought.
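"Key separation" here means deriving an independent subkey for each purpose from one master secret, so that a flaw in one role of the protocol cannot be replayed against another. A generic illustration using HKDF-style labeled expansion follows -- this mirrors the discipline TLS 1.3 enforces with its key schedule; it is not MTProto's actual derivation.

```python
# Generic illustration of key separation: derive independent subkeys
# for distinct purposes from one master secret via labeled HKDF-style
# expansion (RFC 5869). Not Telegram's actual key schedule.
import hmac, hashlib

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

master = hashlib.sha256(b"shared secret from the handshake").digest()
enc_key = hkdf_expand(master, b"app v1 encryption")
mac_key = hkdf_expand(master, b"app v1 authentication")
# Distinct labels yield independent keys: compromising or misusing one
# context tells an attacker nothing about the other.
assert enc_key != mac_key
```

The labels double as protocol composition boundaries: a ciphertext produced under one label can never be confused with data from another context.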
The real reason why I would not recommend Telegram is that chats (by default) and group chats (by necessity) are not encrypted. Telegram's servers will eventually be breached by someone. A malicious actor will be hired as a software engineer, or as an intern. When this happens, everything you ever wrote in Telegram will be plaintext at their disposal -- unacceptable in 2022, post-Snowden.
This speaks volumes about the need for standardized encrypted cloud storage protocols.
It always surprises me how fragmented the entire space is:
Syncthing "untrusted devices" support is still experimental, Nextcloud does support encryption, but it's hard to judge how trustworthy it is. Gocryptfs and ecryptfs should be solid, but they are hard to use in a browser or on mobile. Resilio, Borg, Tarsnap, EteSync -- yet more protocols, and without clear security analyses.
Same holds for commercial cloud operators: support for client-side encryption is starting to appear (Google Drive), but without an open, standardized client you still need to trust software from the cloud provider, which largely defeats the point of encrypting in the first place.
Tarsnap does not actually look bad. But any client-to-server protocol that is not TLS1.3 will make cryptographers twitch, and (as noted in the documentation pages) compression is bound to offer a side-channel attack (if only an impractical one, with hundreds of queries per recovered byte).
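The compression side channel mentioned above can be demonstrated in a few lines. This is a toy, CRIME-style oracle: when attacker-controlled data is compressed alongside a secret, the compressed length leaks whether a guess shares a prefix with the secret. The secret and guesses here are made up for illustration.

```python
# Toy CRIME-style demonstration of the compression side channel:
# the length of compress(guess || secret) reveals whether the guess
# matches part of the secret, one observable bit per query.
import zlib

SECRET = b"session=7f3a9c"  # hypothetical secret the attacker wants

def oracle(guess: bytes) -> int:
    """Length of the compressed (guess || secret) blob."""
    return len(zlib.compress(guess + SECRET))

right = oracle(b"session=7f3a")  # shares a long prefix with the secret
wrong = oracle(b"abcdefghijkl")  # same length, no shared content
# The matching prefix is replaced by a short back-reference by DEFLATE,
# so the correct guess compresses to a shorter output.
assert right < wrong
```

This is why recovering a secret this way takes many queries (one per confirmed guess), which matches the documentation's "impractical, hundreds of queries per byte" caveat.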
TLS 1.3 is definitely better than previous versions. Note however that it wasn't published until 2018; Tarsnap's transport layer has been in use since 2007, before even TLS 1.2 was published. If I had used TLS at the time, it would have been TLS 1.1. Hopefully you agree that would have been a bad thing?
I mean, TLS 1.1 isn't a good thing, but which <TLS1.3 bugs actually would have impacted Tarsnap? SMACK, maybe? Probably not POODLE, given the ciphersuites you'd have locked down to. Not BERserk (you'd never use NSS). The TLS BB'98 attacks didn't hit any library you'd actually use. No Triple Handshake, since you wouldn't do renegotiation. No BREACH, TIME or CRIME (they don't fit Tarsnap anyways). No RC4 (lol). No Lucky13, for the same reason as no POODLE. No BEAST, because you don't do Javascript. And now we're back to 2007 (or pre-2007) for attacks on TLS.
It's possible that I could have taken TLS 1.1 and removed all the broken parts, sure. I mean, that's pretty much what TLS 1.3 is.
But frankly I trust my ability -- both now and in 2007 -- to use standard cryptographic algorithms to build a new protocol far more than I trust my ability to remove all the crap from TLS 1.1.
Heartbleed isn't a TLS vulnerability any more than an overflow in GnuTLS is.
The threshold question is, "could this vulnerability be reasonably expected to recur in independent implementations of the protocol?"
As for stripping back TLS 1.1 --- it wouldn't take much more than simply picking a single ciphersuite and requiring TLS 1.1. You wouldn't need to know, for instance, about export ciphers.
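The "pick one ciphersuite and pin the version" idea has a modern analogue in today's APIs (TLS 1.1 itself can no longer be negotiated by current OpenSSL, so this is the contemporary equivalent, not a 2007 recipe). A sketch using Python's ssl module:

```python
# Modern analogue of "require one version, one ciphersuite": pin the
# minimum TLS version and restrict the TLS 1.2 cipher list to a single
# suite. (TLS 1.3 suites are managed separately by OpenSSL.)
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2      # refuse anything older
ctx.set_ciphers("ECDHE-ECDSA-AES256-GCM-SHA384")  # exactly one TLS 1.2 suite

names = {c["name"] for c in ctx.get_ciphers()}
assert "ECDHE-ECDSA-AES256-GCM-SHA384" in names
```

Pinning like this removes entire attack classes (export ciphers, RC4, CBC padding oracles) without having to understand each one individually.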
Noise is used today in several high-profile projects:
WhatsApp uses the "Noise Pipes" construction from the specification to perform encryption of client-server communications
WireGuard, a modern VPN, uses the Noise IK pattern to establish encrypted channels between clients
Slack's Nebula project, an overlay networking tool, uses Noise
The Lightning Network uses Noise
I2P uses Noise
There's a bunch of them, but part of the point of Noise is to be extremely prescriptive in order to simplify implementation. WireGuard is based on Noise, but has a lot more than just Noise in it.
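For reference, the IK pattern that WireGuard builds on, in the Noise specification's own notation (s = static key, e = ephemeral key, two-letter tokens are DH operations, and the pre-message line before "..." means the responder's static key is known in advance):

```
IK:
  <- s
  ...
  -> e, es, s, ss
  <- e, ee, se
```

The prescriptiveness is visible here: the pattern fixes exactly which keys are sent and which DH operations are mixed into the handshake state, leaving implementations little room to improvise.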
>This speaks volumes about the need of standardized encrypted cloud storage protocols
I worry this would just mean that government actors need to find only one vulnerability to gain open access to a standardized web -- and those vulnerabilities will exist; nothing is perfect. I genuinely believe a non-standardized approach is more effective, even if each system is individually more vulnerable.
Android vs iOS presents a decent analogy here. Finding an iOS device exploit may cost more money, but it grants access to such a massive number of devices that it is worth it.
This isn't how cryptography engineers think about cryptography. It isn't like a PHP program, where there's inevitably going to be some bug found somewhere, and you do what you can to find as many as you can and react responsibly when more are found later; cryptography engineers use formal methods (among other things) to foreclose on vulnerabilities. The vulnerabilities documented in this paper are "own goals", not cryptographic inevitabilities.
For instance, the weird authentication scheme that gives rise to the RSA key recovery attack --- that problem is what PAKEs are for.
>This isn't how cryptography engineers think about cryptography.
That is my point exactly: it should be how they think about it. Attacks on the cryptographic math itself are only a single vector. The software implementation will have holes, and beyond that -- from the hardware at the chip level to the firmware running on it -- there are vulnerabilities well outside the math itself.
These are cryptography researchers, talking about cryptography vulnerabilities. The premise, both of the paper and to the comment upthread, is: we should have fewer cryptography vulnerabilities, and could accomplish that by not having people come up with random authentication and key escrow protocols.
I'd argue the opposite, actually. Device owners pay a much higher cost maintaining multiple, incompatible devices, each requiring its own upgrade procedure, means of notification, etc.
In addition, much of the security in a fragmented landscape tends to be "security through obscurity". A small player in a market can still have all sorts of issues that a state-funded actor can find via analysis and exploit, and it's much less likely that a good-faith researcher will find and disclose the issue, given the small install base.
I completely agree, that was one of our main goals with Etebase (protocol behind EteSync).
For whatever it's worth, we recently had an external analysis of the protocol done at EteSync. Though even before it, we (intentionally) only used known and common primitives (from libsodium) to ensure a solid base in both the cryptographic schemes and the actual implementations.
rclone already provides such a client and it is fully open source. In general, to have a zero-trust system, you need to have client and server developed by independent parties.