The two exploits in the subject line are completely separate things. The main focus of the linked article at ps3hax.net is a key which is used to HMAC-authenticate the service-mode dongle. The source code for that is linked below.
The more interesting hack was announced at 27c3. A team of Wii hackers has discovered Sony's main boot-signing private key. This is like discovering Verisign's private key -- you can now issue any SSL cert you want. They can sign any hypervisor they want, which leads to running any code you want.
They were able to do this because (surprise) there was a crypto mistake in the implementation. Two (or more) ECDSA signatures were generated with the same secret nonce. Apparently Sony doesn't read our blog, because we discussed this flaw before:
The cool thing about this flaw is that the private key is not present in the PS3 anywhere. It (probably) only exists at some locked down code-signing center. However, a software flaw in the way it generated the signatures was effectively painting the private key on the side of every signed code module released.
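For the curious, the algebra behind the attack is short enough to sketch. With a repeated nonce k, both signatures share the same r, and subtracting the two signing equations cancels the private key term, leaving k in the clear. A toy illustration in Python (all values are made up except the group order n, which is P-256's; no real curve math, just the signing equation):

```python
# Toy demonstration of the DSA/ECDSA repeated-nonce attack.
# Uses the schoolbook signing equation s = k^-1 (m + x*r) mod n.
# Requires Python 3.8+ for pow(a, -1, n).

n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

x = 0x1234567890ABCDEF   # private key (toy value)
k = 0xDEADBEEF           # the nonce that gets reused
r = 0xCAFEBABE           # r is derived from k, so reusing k repeats r

def inv(a):
    return pow(a, -1, n)

def sign(m):             # s = k^-1 (m + x*r) mod n, with the SAME k
    return (inv(k) * (m + x * r)) % n

m1, m2 = 111, 222        # hashes of two different messages
s1, s2 = sign(m1), sign(m2)

# The attacker sees (r, s1, m1) and (r, s2, m2). Subtracting the two
# signing equations cancels x*r, revealing k; then x falls out:
k_rec = ((m1 - m2) * inv(s1 - s2)) % n
x_rec = ((s1 * k_rec - m1) * inv(r)) % n

assert k_rec == k and x_rec == x
```

This is exactly why the signing center's key can leak even though the key itself never leaves the vault: every pair of signatures generated with the same k is, in effect, a copy of the key.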
I know enough about crypto to know that I need to stay away from it. I can tell people about common pitfalls and all that, but I'm nowhere near good enough to stay out of all of them myself. It's times like these that I'm happy to be on the side of analyzing this stuff rather than developing it.
It's definitely easier to look for known flaws, such as obvious spec violations like this ECDSA signing flaw. It's much harder to review a high-assurance system and be sure you've anticipated all possible ways something might fail years down the road.
One point I haven't made recently is that crypto review is very expensive in both time and money. So the design approach to security problems should be roughly:
1. Avoid crypto if possible. Store data on the server, for example. Doing this correctly is orders of magnitude easier than developing crypto protocols.
2. If using crypto, use something high-level. GPG is a great example of a bundle of crypto primitives with a well-understood protocol for encryption, integrity protection, and key management. PGP has been around for 20 years now.
3. If none of the above works, develop a custom crypto protocol. But budget 10x as much for review as for design/implementation: if you spend a week and $10,000 developing it, spend 10 weeks and $100,000 reviewing and improving it. This includes external review, not just internal. This goes for everyone, even "experts".
My main point is: "Crypto costs a lot. Are you sure you want to pay for it?" Because if you do implement custom crypto, you (or your users) will pay for it one way or another.
Part of the problem is that most people who have to use it aren't mathematicians. And even those of us who have math degrees may not realize every single one of the assumptions or requirements that go into each operation (especially if they change due to new attacks being discovered).
So just knowing that something is dangerous isn't the same as having everything you need to deal with the danger.
I'm surprised that I've never heard of a cryptography wiki where people try to list, in simple terms, all the requirements for using each algorithm or technique securely. E.g., a page on each technique or algorithm telling you which numbers must be random, unpredictable, or never repeated; or telling you that if a person has these numbers, they can do X, Y & Z; or that if you don't verify that this padding is right, people can forge messages. With citations linking to the attacks.
Incidentally, I'm hoping that I'm wrong and there really is such a thing out there, somewhere.
That knowledge is spread out over a bunch of people who bill upwards of $500/hr. What's more likely to get built is a wiki that tells people just enough to feel comfortable while writing dangerously brittle crypto code --- to say nothing of the people who will get the recommendations on such a wiki wrong.
It's also all downside. No one person has all these details. The primary contributor to such a wiki is inevitably going to miss horrible faults. If you're that guy, why bother?
> The primary contributor to such a wiki is inevitably going to miss horrible faults. If you're that guy, why bother?
Even those $500/hr experts miss horrible faults from what I can see. I mean, you were telling me several months ago on HN how hypervisor secured game consoles were one of the few places where we see high-end security in the consumer space and that we can't just assume that the hackers will always win.
But the way I see it, there aren't any magic bullets in the security world. Security is a Red Queen problem. In other words, you have to keep running as hard as you can just to avoid falling behind. If you stop running, someone will eventually catch up to you.
> No one person has all these details.
That sounds like an incentive to collaborate to me. One might expect people to want to gather and organize important details like that. The fact that any such endeavor would inevitably be incomplete and require updates should go without saying.
The main problems would be starting with enough information for people to want to contribute to it and having moderators who were good enough to vet contributions for accuracy.
I don't think the thing you want to have happen is going to happen.
I think what could happen is, for lack of a better term, a half-assed wiki that leads people to write broken e=3 RSA implementations and feel safe doing it.
"In DSA, the k value is not a nonce. In addition to being
unique, the value must be unpredictable and secret. This
makes it more like a random session key than a nonce. When
an implementer gets this wrong, they expose the private
key, often with only one or two signatures."
There is no standard term for a nonce that must remain secret, hence I believe "secret nonce" is the best I can do. The term "nonce" is general enough that a nonce can be public; it need only be unique.
I take great pains to create a reasonable term where none exists because I agree terminology and consistency are important in crypto. If you have a better term, I'm happy to hear it.
I believe calling it simply a "secret" or "random secret" would lead to less confusion. But my point wasn't so much to prove you wrong as to point out that the issue was more subtle than "Sony didn't read the crypto manual".
I liked your comment in general, but I think the jab at Sony for not reading your blog and the parenthetical (surprise) insinuate a level of boneheadedness that's unwarranted. Maybe that's just my reading of it.
"Random secret" doesn't capture the fact that it must not be reused across messages. A session key ("random secret") can be reused to encrypt multiple messages, a DSA secret nonce can't.
You need three concepts: unique (used only once, ever), unpredictable (pseudo-random), and secret (never revealed to anyone, before or after use).
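To illustrate why "unpredictable" matters independently of "unique": even a k that is never reused leaks the private key from a single signature if an attacker can guess it. A toy sketch with made-up numbers, using the schoolbook DSA equation s = k^-1 (m + x*r) mod n:

```python
# Toy illustration: if the per-message value k is predictable, one
# ordinary DSA/ECDSA signature reveals the private key x.
# Requires Python 3.8+ for pow(a, -1, n).

n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 group order
x, k, r, m = 0x1337, 0x42424242, 0xABCDEF, 999  # toy key, nonce, r, message hash

s = (pow(k, -1, n) * (m + x * r)) % n           # s = k^-1 (m + x*r) mod n

# The attacker sees (r, s, m) and has guessed k; x falls right out:
x_rec = ((s * k - m) * pow(r, -1, n)) % n
assert x_rec == x
```

This is the same failure mode as the Debian PRNG break: the nonces there were unique, but drawn from such a small space that they were guessable.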
You should read the DSA spec, FIPS 186-3, section 4.5. They call this parameter the "Per-Message Secret Number", which doesn't capture the fact it needs to be unpredictable. (Later in that section, they mention it is "random", but the name of the parameter doesn't have that notion.)
My surprise is not that Sony didn't read our blog, but that they didn't read FIPS 186 when implementing their concrete-vault root signing tool. "Per-message" is spelled out right there in the title of section 4.5.
"Nonce" means "a token that is used once." It's sort of meaningless to say "used the nonce twice"; the only way around it is rigorously formal, yet even more circuitous and confusing, speech:
"Two (or more) ECDSA signatures were generated with a constant value provided where a secret nonce value should go, making it not effectively a nonce."
The source code for the dongle HMAC key:
https://github.com/winocm/ps3-donglegen

Our earlier post on the DSA/ECDSA nonce requirements:
http://rdist.root.org/2010/11/19/dsa-requirements-for-random...

And before that, we discussed a variant of this attack when the Debian PRNG was broken:
http://rdist.root.org/2009/05/17/the-debian-pgp-disaster-tha...

And people still think crypto isn't dangerous?