This is nice for protecting against compelled legal disclosure in the US, but Google could still serve JS in the future (targeted at specific users only, if they want) to capture the password/key. This is the problem with repeatedly downloading a client from the same party that holds the ciphertext: it can never be inherently trusted. The same thing happened long ago with Hushmail, which used a Java applet to do encrypted email -- at some point they chose to serve a "special" applet to some users.


You're not wrong, but is there any common client software today that isn't susceptible to rogue updates?


It is trivial to serve different JavaScript to different clients on a request-by-request basis, making it very unlikely that highly targeted malicious JS will be detected.
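To illustrate how little it takes, here's a toy sketch of per-request targeting (a hypothetical Flask handler; every name and file in it is made up, not anyone's real code):

    # Hypothetical sketch: the party serving the "client" decides, per request,
    # which build each user receives. All names/files here are illustrative.
    from flask import Flask, request, send_file

    app = Flask(__name__)
    TARGETED_USERS = {"alice@example.com"}  # assumption: server-side target list

    @app.route("/client.js")
    def client_js():
        user = request.cookies.get("session_user", "")
        if user in TARGETED_USERS:
            # Backdoored build that also exfiltrates the passphrase/private key.
            return send_file("client-targeted.js")
        # Everyone else, including any auditor, gets the clean build.
        return send_file("client-clean.js")

An auditor fetching /client.js from their own account would never see anything wrong.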

Most thick-client update mechanisms make this more difficult (though certainly not impossible), greatly increasing the odds that embedded malware will be detected.

Signal, for example, is built reproducibly, so you can diff the source of each new update and then verify that building that source yields the same binary that gets distributed. This gives you very good assurance that your audit of those diffs applies to the binary actually on your phone.
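At its core that verification is a byte-for-byte comparison. A toy sketch (Signal's actual process is more involved, e.g. it has to normalize signing metadata, and these paths are placeholders):

    # Toy sketch: hash your locally built artifact and the distributed one.
    # Paths are placeholders; real reproducible-build verification also has to
    # account for signing metadata before comparing.
    import hashlib, sys

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    local = sha256(sys.argv[1])       # e.g. the build you produced from source
    published = sha256(sys.argv[2])   # e.g. the binary pulled from your device
    print("local:    ", local)
    print("published:", published)
    sys.exit(0 if local == published else 1)

If the two match, the code you audited is the code you're running.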


Thanks. I understand autoupdate and web infrastructure, as well as the Ken Thompson hack, which collapses all of these cases into the same threat. The question was rhetorical.

I doubt that someone who has invested the time to develop code-auditing skills also values their own time so little that they'd audit and build their own chat client. And if they're willing to farm the auditing out to someone else, or to vouch for it to other users of the app, then they've lost the plot.

Not saying such a person couldn't exist. But the intersection in the Venn diagram seems small.


This gross oversimplification leaves room for nothing but a yes/no answer, and the answer is not binary. Like the parent said, delivering the client on every single request makes a rogue update trivial and possibly targeted. Having your software come from, e.g., a Linux distribution with a bi-annual release cycle, and/or one that focuses on security, makes a rogue update less likely. Not impossible, but still more difficult.


As long as the yes/no doesn't have an inversion error, I'm fine with the coarse granularity. Unless you're targeted by an APT, the threat model would consider all of this to be noise. It's just odd to single out Gmail E2E of all things for this reason in a top-level HN comment on this article. Within a reasonable confidence interval, it's irrelevant to anyone already concerned about using Gmail for their high-security data (because they aren't using Gmail!), and irrelevant to everyone else because they don't care (because it's just their email).

I'm way more worried about all the dirty little fingers on the hundreds of Rust crates, Python packages, Java libraries, and NPM packages that get slurped, unreviewed, into so much software these days. (But I'm still not actually going to do anything about it.)


The cross-section gets much larger when you expand "person" to companies. I probably wouldn't bother for chat, but absolutely would (and have) for smart contracts and wallet software. (It was a multi-person effort within an organization.)



