I'm shocked he didn't even try to contact Onity. He's trying to ascribe motivations and likely behavior based solely on his reading of the lock market - we have no idea whether Onity would have acted this way. Perhaps they'd have scrambled to contact hotels? Or maybe they'd have disregarded it.
We have no way of knowing because he never attempted to do the right thing.
Don't we have a good idea how they would have responded from how they've actually responded? It took four months and a huge public backlash before they acquiesced to demands for replacements.[1]
How they responded after public disclosure says little about how they would have responded to notification with a deadline for public disclosure.
I understand the various incentives for Onity, and I think a strong one is this: given six months to address the issue, knowing it will be made public, they would have reason to take at least some proactive steps, such as notifying large customers or having a mitigation plan ready upon public disclosure.
His whole blog entry is a lot of handwaving to cover up the fact that he never even gave them a chance to do anything like the right thing.
He makes an important point though: security researchers have been sued by vendors while trying to go through the traditional process of responsible disclosure, and the approach can easily come off as blackmail[1]. Simply disclosing to the public avoids that possibility, because the researcher was clearly not trying to personally get something from the vendor.
That being said, I think this kind of thing should be covered by whistleblower protection laws (I don't know if it's ever been tested)... although it seems those are only enforced when it's convenient.
So while I think he may have reached the right conclusion, I don't think it was for the right reasons. If sufficient protections for disclosers are in place, this should be a relative non-issue (though it makes sense to adjust the disclosure window based on the ease and risk of exploiting the vulnerability, in order to apply pressure for an expedited fix).
[1] Given there's no payout for the researcher other than having the vulnerability fixed, it is conceivably not too hard to defend against. That doesn't change the fact that they can be sued, which is expensive, time-consuming, and stressful.
The best protection for the discloser is simply to do so anonymously, which shouldn't be difficult for someone like this.
Alternately, do so 'legally anonymously', perhaps by having the EFF approach the company and say: "We have in our possession information on a security vulnerability in your product. We want to give you that information. In six months it will be made public. We ask for and want no compensation or consideration at all."
That's it. There exist methods to do this safely; Daeken could have done it, and didn't.
edit:
In particular, I'd rather know that when I stay at a hotel with an Onity lock, I should take security precautions instead of remaining in ignorance. I would rather know that there's a general public pressure via the media to change from insecure locks to secure locks. I would rather that hotel managers know about this via the 6-o'clock news rather than in some mailing with a PR spin.
Consider the Therac-25 case and the problems the vendor exhibited in fixing it. That behavior is endemic, and I'd rather be a consumer in the know than trust to the chancy kindness of a corporation whose interest is not per se aligned with my personal interests.
Standard procedure is to give the vendor a reasonable amount of time to fix the problem, then go public if they don't. It's not like there was some big hurry to go public, here. As he notes, the problem has been around for decades and he personally knew about it for years beforehand. Giving the company a few months to get their shit together, if they so chose, wouldn't make much of a difference in the worst case, and could have turned out a lot better.
I think he read the situation correctly, given the response when the information was actually released. If he'd gone to Onity beforehand, it's pretty clear they'd have tried to sweep it under the rug, especially since it's highly probable they already knew about such an obvious security hole. I don't know how the law works, but it seems to me that if he had gone to Onity, they might have sued to keep it quiet. It's happened before.
It is entirely likely that had he given them notice and then 6 months to prepare, they would have spent the entire 6 months suing him and trying to get him silenced by court order. The problem would then still exist but he would be bankrupt and unable to warn anyone.
Our legal structure creates situations where the "right thing" sometimes has to be balanced against the potential for personal ruin, so it turns into the "rightest" thing given the circumstances.
Anyone who's dealt with Onity and is familiar with their business practices would immediately confirm that they not only didn't deserve disclosure but they likely would have done nothing with it. One need only look at the evidence of what they did in the face of non-disclosure to confirm that.
Everyone messes up. Every exploit can be made into an easy-to-distribute tool. As an industry, the idea is to help the people who make these mistakes fix them as soon as possible so users don't get hurt.
The problem is that the incentive here is for Onity to NOT disclose anything and keep on going like nothing happened. They'll get around to replacing them one day, and it hasn't caused any huge scandal yet, so what the hell right? It can wait! The alternative: a lot of bad press and millions of dollars in hardware fixes. Contrast that with a software fix delivered through the internet, instantly fixing the hole.
Then there's the matter of whether Onity seems like a trustworthy company that would do the right thing. A company whose ONE job is to make electronic locks, but which still has an obvious security hole in its system, is either 1. really stupid or 2. aware of it and has done nothing. A security hole in a beast like Windows (whose main purpose is not security, btw) I could understand and sympathize with. A lock is not nearly as complex. Either way, I wouldn't trust them to do the right thing.
If you make hotel locks, it's your job not to mess up, and if you do, to find and fix the problem before anyone else does. Other people's safety relies on you.
"Everyone messes up" is fine if your job is creating systems where security is largely a secondary concern. If security is the whole point of your product in the first place, and your product is actually completely insecure, that's not messing up, that's gross incompetence.
It doesn't matter. The linked break-in was half a year after the presentation. If Onity hadn't fixed the lock by then, more time wouldn't really have helped.
My mother-in-law was in a hotel room on a business trip just in the last 3 weeks where they had the lock vendor reprogram the door while she was in the room because of "some hacker."
I had no way of asking her if it was Onity after the fact, but whoever it was is actively working on the problem. It takes a bit longer to push out something to millions of non-networked devices than it does to just fix something on the App Store.
If you look through my past comments I've been very critical of his "disclosure plan" myself. I agree with you completely. This blog post did nothing but reinforce my belief that he made a mistake. He wrote a lot of words that weakly defended his almost indefensible position.
I can only conclude he really does believe he did the right thing. I hope the people whose rooms were broken into see it that way too.
Yeah, I didn't consider that angle. However, that (much like Onity's original approach to the problems) leaves the independent hotels out in the cold. Definitely a decent option, though.
While fixing all the locks will be costly and time-consuming, perhaps a far smaller number of specialized 'tripwire' fix boards can be sent to affected hotels, boards which track (and perhaps report instantly by radio) attempts to use the hack. For example, one such device could be added per floor.
Even if crooks only very occasionally trip the specialized replacements, they would alert hotels to active exploitation, allowing either immediate apprehension or timely review of security footage. Shifting the risk/reward for criminals could buy time for a more complete but gradual fix.
Once you've already spent the time and money to pull the locks off the doors and replace/add a circuit board, you may as well just drop in the fixed board. That prevents the main vulnerability from functioning, though there's no fix at all for the encryption flaw (but no one has exploited that, nor will they in the near future, if I had to guess).
Not only that, but seeing as the locks don't communicate in any way besides the maintenance port, there's no way to know if they've been tripped without reading them at some regular interval.
The replacement board, in the hypothesized special 'tripwire' assembly, would have added radio-reporting. You only need a few of these super-locks, randomly added to the population of vulnerable locks, to catch any exploitation at scale. That'd both curtail the losses and deter future copycats. ("While you're rifling through the guest's belongings looking for valuables, security is already on its way.")
Sure, but my point is that fixing all the locks in a hotel is expensive.
Fixing just a few per hotel (perhaps 1 per floor), with a tripwire device that reports attempts at exploitation, might be possible at 1/100th the cost. (Most locks aren't even touched.)
Still, anyone exploiting the vulnerability at scale would soon trigger a tripwire. At the very least that lets the hotel know exploitation has begun, and it might help apprehend the burglars almost instantly.
I doubt crooks who think they have a master key will stop at just a few rooms. And once the use of the tripwires to catch a crook is reported, alongside the stories of the vulnerability itself, the expected return to this hack drops way, way down. Even criminals respond to relative risk/reward.
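To make the deterrence argument concrete, here's a rough sketch of my own (illustrative, not from anyone's actual deployment data): if a fraction f of a hotel's locks are tripwire units and a burglar tries k doors without knowing which locks are instrumented, the chance of setting off at least one alarm is 1 - (1 - f)^k.

```python
# Rough model of tripwire coverage: if a fraction f of locks are
# tripwire units and a burglar tries k doors at random, the chance
# of tripping at least one alarm is 1 - (1 - f)^k.
# Numbers are illustrative; f = 0.05 roughly matches "1 per floor"
# on a 20-room floor.

def detection_probability(f: float, k: int) -> float:
    """Probability of tripping at least one sensor in k independent attempts."""
    return 1.0 - (1.0 - f) ** k

for k in (1, 5, 10, 20):
    print(f"{k:2d} doors tried -> {detection_probability(0.05, k):.0%} chance of detection")
```

Even at 5% coverage, anyone working through a floor or more of rooms is more likely than not to be detected, which is exactly the risk/reward shift being described.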
Even if the risk of independent discovery was high, announcing the vulnerability to the public without contacting the vendor, and without giving them the opportunity to at least prepare a response plan (even if you think that plan would be inadequate), ensured a thoroughly dramatic media response that made certain anyone with the slightest desire to break into hotel rooms now knew how.
EDIT: Yep, you're right. I don't know how I missed that given that I skimmed the article twice. Sorry.
> The fact that 'contact Onity, then disclose publicly after a reasonable period of time' is nowhere on his list just blows my mind.
That's the very first thing on the list. Quote: "The standard 'Responsible Disclosure' approach would be to notify Onity and give them X months to deal with the issue before taking it public."