winterdeaf's comments | Hacker News

So much vitriol. I understand it's cool to hate on EA after the SBF fiasco, but this is just smearing.

The key to scientific thinking is empiricism and rationalism. Some people in EA and lesswrong extend this to moral reasoning, but utilitarianism is not a pillar of these communities.


Empiricism and rationalism both tempered by a heavy dose of skepticism.

On the other hand, maybe that is some kind of fallacy itself. I almost want to say that "scientific thinking" should be called something else; the main issue is the lack of experiment. Using the word "science" without experiment leads to all sorts of nonsense.

A word that means "scientific thinking, as much as possible, without experiment" would at least embed a dose of skepticism in the process.

The Achilles heel of rationalism is the descent into modeling complete nonsense. I should give lesswrong another chance I suppose because that would sum up my experience so far, empirically.

EA to me seems like obvious self serving nonsense. Hiding something in the obvious to avoid detection.


As far as I am aware, there is no way to stop malicious tags without modifying the protocol to authenticate the messages being broadcast as originating from a genuine tag. [1]

Making a tag that is not trackable is currently as easy as flipping a bit in the BLE advertisement. The same message is broadcast to all phones, but yes, a tag could also produce multiple identifiers and evade detection. [2]
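A minimal sketch of what such an evasive broadcast could look like, using the payload layout reverse-engineered by the OpenHaystack project (the byte offsets are assumptions based on that reverse engineering, not an official spec):

```python
import os

# Find My-style BLE advertisement payload (OpenHaystack layout).
# A genuine tag rotates its advertised public key on a slow schedule,
# which is what lets nearby phones correlate repeated broadcasts and
# raise a tracking alert.
def findmy_adv(pubkey_tail: bytes) -> bytes:
    assert len(pubkey_tail) == 22  # bytes 6..27 of the EC public key
    return bytes([
        0x1E,        # length of this AD structure (30 bytes follow)
        0xFF,        # manufacturer-specific data
        0x4C, 0x00,  # Apple's company identifier (little-endian)
        0x12, 0x19,  # Offline Finding type and payload length
        0x00,        # status byte (battery level, etc.)
    ]) + pubkey_tail + bytes([0x00, 0x00])  # remaining key bits + hint

# An "invisible" tag simply advertises a fresh pseudo-random key in
# every broadcast: each packet then looks like a brand-new tag, so no
# single identifier persists long enough to trip the safety alert.
evasive_packets = [findmy_adv(os.urandom(22)) for _ in range(3)]
```

Nothing in the receiving phones can distinguish these packets from three distinct legitimate tags, which is the crux of the problem.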

[1]: Section 8 of "Abuse-Resistant Location Tracking: Balancing Privacy and Safety in the Offline Finding Ecosystem". https://eprint.iacr.org/2023/1332.pdf

[2]: "Track You: A Deep Dive into Safety Alerts for Apple AirTags". https://petsymposium.org/popets/2023/popets-2023-0102.pdf


The broadcast isn't signed by some kind of hardware key?


This is such a charade. Making "invisible" airtags is trivial [1], and I wouldn't be surprised if such airtags are being manufactured en-masse.

We allowed the creation of a global tracking network under the false pretense of privacy. The entire Find My security model falls apart when considering "malicious" tags, and Apple knew about this from the start.

[1]: https://github.com/Guinn-Partners/esp32-airtag


In that case, couldn't someone just make a tracker tag using GPS and a mobile connection?


GPS+mobile tracker:

- 4x the cost plus ongoing fees

- 2x larger

- can’t buy it at any big box store

- relies on a third party who can also see what you’re tracking

- battery limited to weeks, not years

Accessibility and features make AirTag-style tags way more compelling.


These devices are way more insidious, though; a GPS/mobile tracker would face more limitations, since it is harder to hide and less mobile.


Have you heard the term "defense in depth"?

In the security world, it seems accepted that no security effort is a silver bullet that's 100% impossible to get around.

Rather, it seems best practice to compose many layers of security efforts, which together raise the level of effort an attacker needs to exploit people.

So I think it's unfair to say this is a charade.


I would argue that it is not misleading -- the website domain is, after all, "breakingthe3ma.app".

The title of the paper presents a more academic angle, and is intended to highlight what the "learned lessons" are, but let's not forget that Threema was vulnerable to our attacks for 10+ years.


I would recommend reading the article and not just the title and url. Editorializing also plainly violates HN guidelines.


> well-reviewed zero-footgun nacl.SecretBox()-style thing for this use case, but there simply isn't.

You'd be surprised, but I've seen designers who managed to shoot themselves in the foot with SecretBox() calls alone. Anything more complex than using a library that does the crypto for you calls for an external/crypto team review.
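The classic SecretBox footgun is nonce reuse. A toy illustration (NOT real crypto; the "keystream" here is a stand-in built from SHA-256) of why reusing a (key, nonce) pair with any stream-cipher-based box is fatal:

```python
import hashlib

# Toy counter-mode keystream, standing in for the real stream cipher
# inside a SecretBox-style construction.
def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 32, b"n" * 24
m1, m2 = b"attack at dawn", b"defend at dusk"
c1 = xor(m1, keystream(key, nonce, len(m1)))
c2 = xor(m2, keystream(key, nonce, len(m2)))  # same nonce: the bug

# XORing the two ciphertexts cancels the shared keystream, handing the
# attacker m1 XOR m2 without the key ever being touched.
assert xor(c1, c2) == xor(m1, m2)
```

With real message traffic, m1 XOR m2 plus language statistics is usually enough to recover both plaintexts.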


The same research group working on the Telegram MTProto security analysis is behind these attacks on MEGA!

(I should add: disclosure, I work there too.)


Interesting! While you're here, is there anything you can add to the general debate about Telegram's crypto design? I see a lot of people here disparaging it, but nothing really from Telegram to explain what choices were made or why.


To put it in Igor's words, it is like "somebody baked a cake following a recipe, but without ever having tasted or seen a real cake".

The crypto design is brittle, but the practical attacks are somewhat limited. The reason it's so disparaged by cryptographers is that it ignores several decades of cryptographic advances -- the whole saga of attacks on SSL / TLS<=1.2 taught us that key separation and clear protocol composition boundaries are important, but Telegram fails disastrously at these. Security proofs should be made before a protocol is used, not as an afterthought.
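Key separation is cheap to get right. A sketch of the standard hygiene (an HKDF-Expand-style construction over HMAC-SHA256, labels chosen here purely for illustration): one master secret never does double duty; every use gets its own key, bound to a distinct label, so a flaw in one component cannot be leveraged against another.

```python
import hmac, hashlib

# HKDF-Expand-style labeled derivation: same master secret, unrelated
# subkeys per purpose. The labels act as domain separators.
def derive(master: bytes, label: bytes, length: int = 32) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(master, block + label + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

master = b"\x00" * 32  # toy master secret
enc_key = derive(master, b"example v1: encryption")
mac_key = derive(master, b"example v1: authentication")
assert enc_key != mac_key  # different labels, cryptographically unrelated keys
```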

The real reason why I would not recommend Telegram is that chats (by default) and group chats (by necessity) are not encrypted. Telegram's servers will eventually be breached by someone. A malicious actor will be hired as a software engineer, or as an intern. When this happens, everything you ever wrote in Telegram will be plaintext at their disposal -- unacceptable in 2022, post-Snowden.


This speaks volumes about the need for standardized encrypted cloud storage protocols.

It always surprises me how fragmented the entire space is: Syncthing's "untrusted devices" support is still experimental; Nextcloud does support encryption, but it's hard to judge how trustworthy it is; gocryptfs and eCryptfs should be solid, but they are hard to use in a browser or on mobile; Resilio, Borg, Tarsnap, EteSync -- yet more protocols, and without clear security analyses.

Same holds for commercial cloud operators: support for client-side encryption is starting to appear (Google Drive), but without an open, standardized client you still need to trust software from the cloud provider, which mostly defies the point of encrypting in the first place.


> Resilio, Borg, Tarsnap, EteSync -- yet more protocols, and without clear security analyses.

I did analyse the security of Tarsnap as I was writing it, for what it's worth.


Tarsnap does not actually look bad. But any client-to-server protocol that is not TLS1.3 will make cryptographers twitch, and (as noted in the documentation pages) compression is bound to offer a side-channel attack (if only an impractical one, with hundreds of queries per recovered byte).
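The compression side channel is easy to demonstrate. A CRIME/BREACH-style sketch (the secret and the guesses are made up for illustration): when attacker-controlled input is compressed alongside a secret, a guess that matches the secret produces a back-reference and a shorter output, so the ciphertext length alone leaks whether the guess was right.

```python
import zlib

SECRET = b"secret=7031x"  # hypothetical secret co-compressed with attacker data

def leak_len(attacker_input: bytes) -> int:
    # Models a channel that compresses secret + attacker input before
    # encrypting; encryption hides content but not length.
    return len(zlib.compress(SECRET + attacker_input))

# A correct guess is absorbed by a long DEFLATE back-reference, so it
# compresses strictly better than a wrong one of the same length.
assert leak_len(b"secret=7031x") < leak_len(b"secret=QQQQZ")
```

Iterating this byte by byte is what turns a length oracle into full secret recovery, hence the "hundreds of queries per recovered byte" figure.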


Using TLS makes this particular cryptographer twitch.


Not GP, but a wannabe level 3 [1] cryptographer.

Why does TLS make you twitch? Does that apply to TLS 1.3?

[1]: https://loup-vaillant.fr/articles/rolling-your-own-crypto


TLS 1.3 is definitely better than previous versions. Note however that it wasn't published until 2018; Tarsnap's transport layer has been in use since 2007, before even TLS 1.2 was published. If I had used TLS at the time, it would have been TLS 1.1. Hopefully you agree that would have been a bad thing?


I mean, TLS 1.1 isn't a good thing, but which <TLS1.3 bugs actually would have impacted Tarsnap? SMACK, maybe? Probably not POODLE, given the ciphersuites you'd have locked down to. Not BERserk (you'd never use NSS). The TLS BB'98 attacks didn't hit any library you'd actually use. No Triple Handshake, since you wouldn't do renegotiation. No BREACH, TIME or CRIME (they don't fit Tarsnap anyways). No RC4 (lol). No Lucky13, for the same reason as no POODLE. No BEAST, because you don't do Javascript. And now we're back to 2007 (or pre-2007) for attacks on TLS.


It's possible that I could have taken TLS 1.1 and removed all the broken parts, sure. I mean, that's pretty much what TLS 1.3 is.

But frankly I trust my ability -- both now and in 2007 -- to use standard cryptographic algorithms to build a new protocol far more than I trust my ability to remove all the crap from TLS 1.1.

(Did you deliberately not mention heartbleed?)


Heartbleed isn't a TLS vulnerability any more than an overflow in GnuTLS is.

The threshold question is, "could this vulnerability be reasonably expected to recur in independent implementations of the protocol?"

As for stripping back TLS 1.1 --- it wouldn't take much more than simply picking a single ciphersuite and requiring TLS 1.1. You wouldn't need to know, for instance, about export ciphers.
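For concreteness, a sketch of those two config knobs using Python's `ssl` module (with TLS 1.2 standing in for TLS 1.1, which modern stacks rightly refuse to negotiate): pin the protocol floor and select exactly one TLS<=1.2 ciphersuite.

```python
import ssl

# Pin the protocol version floor and a single ciphersuite; a peer
# offering anything else simply fails the handshake.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE-RSA-AES256-GCM-SHA384")  # exactly one TLS<=1.2 suite
```

This removes the negotiation surface (export ciphers, downgrade dances) without writing a TLS stack.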


That seems like the wrong question. My options were "write my own protocol" or "use openSSL" -- writing my own TLS stack was never on the table.


Right, I get that, but you could have done the two config things I just mentioned with OpenSSL.

I get why you didn't use OpenSSL. The normal thing for someone like you to do in 2022 would be to use Noise.


What are real world implementations of the Noise Protocol? https://github.com/noiseprotocol/noise_spec/blob/v34/noise.m...

A quick search shows the WireGuard protocol, but I am not sure how much of the WireGuard protocol is the same as the Noise Protocol.

https://www.wireguard.com/formal-verification/ https://www.wireguard.com/papers/wireguard-formal-verificati...

  The WireGuard protocol is extensively detailed in [2], which itself is based on the NoiseIK [3] handshake.


I found a page by Duo Labs listing Noise in Production.

https://duo.com/labs/tech-notes/noise-protocol-framework-int...

  Noise is used today in several high-profile projects:
    WhatsApp uses the "Noise Pipes" construction from the specification to perform encryption of client-server communications
    WireGuard, a modern VPN, uses the Noise IK pattern to establish encrypted channels between clients
    Slack's Nebula project, an overlay networking tool, uses Noise
    The Lightning Network uses Noise
    I2P uses Noise


There's a bunch of them, but part of the point of Noise is to be extremely prescriptive in order to simplify implementation. WireGuard is based on Noise, but has a lot more than just Noise in it.


Yes, of course. I was just confused because it seemed like you were saying that even the new version of TLS was bad.


>This speaks volumes about the need of standardized encrypted cloud storage protocols

I worry this then means government actors will only need to find one vulnerability to have open access to a standardized web, and those vulnerabilities will exist; nothing is perfect. I genuinely believe a non-standardized approach is more effective, even if each system is more vulnerable individually.

Android vs. iOS presents a decent parallel here. Finding an iOS device exploit may cost more money, but you get access to such a massive number of devices that it is worth it.


This isn't how cryptography engineers think about cryptography. It isn't like a PHP program, where there's inevitably going to be some bug found somewhere, and you do what you can to find as many as you can and react responsibly when more are found later; cryptography engineers use formal methods (among other things) to foreclose on vulnerabilities. The vulnerabilities documented in this paper are "own goals", not cryptographic inevitabilities.

For instance, the weird authentication scheme that gives rise to the RSA key recovery attack --- that problem is what PAKEs are for.


>This isn't how cryptography engineers think about cryptography.

That is my point exactly: it should be how they think about it. Attacks on the cryptographic math itself are only a single vector; the software implementation of it is going to have holes, not to mention everything beyond it -- from the hardware at the chip level to the firmware running on it, there are vulnerabilities well outside the math itself.


These are cryptography researchers, talking about cryptography vulnerabilities. The premise, both of the paper and to the comment upthread, is: we should have fewer cryptography vulnerabilities, and could accomplish that by not having people come up with random authentication and key escrow protocols.


I'd argue the opposite, actually. Device owners pay a much higher cost in maintaining multiple, incompatible devices that each require their own upgrade procedure, means of notification, etc.

In addition, when things are fragmented, a lot of the security tends to become "security through obscurity". A small player in a market can still have all sorts of issues that a state-funded actor can find via analysis and exploit, and it's much less likely that a public actor will find and disclose the issue, given the small install base.


Nextcloud's E2E encryption is at best half-baked, with very limited features.

The website makes it look like it's designed as a core part of the product, while in reality it's an afterthought, and it's behind on updates too.

As good as Nextcloud sometimes is, parts of it feel very legacy and unmaintained.


I completely agree; that was one of our main goals with Etebase (the protocol behind EteSync).

For whatever it's worth, we recently had an external analysis of the protocol done at EteSync. Though even before it, we (intentionally) only used known and common primitives (from libsodium) to ensure a solid base in both the cryptographic schemes and the actual implementations.


Restic had some professional cryptanalysis, and was very well received.


rclone already provides such a client and it is fully open source. In general, to have a zero-trust system, you need to have client and server developed by independent parties.

