Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

If your OEM can be coerced into pushing a backdoor in an OTA update, maybe our software habits are to blame.

We'll always be powerless to stop top-down attacks like this until we demand real audits and accountability in the devices we own. Shaming the UK only kicks the can down the road and further highlights the danger of trusting a black box to remain secure.



That’s the trick. We don’t own the devices. We merely license their use. No root, no ownership.

People have been warning of this outcome for years and years. Stallman was right and all that. We got laughed out of the room and called paranoid weirdos.

Ever since smartphones were a thing it’s been obvious that this is where we were heading.


I mean, arguably you don't have ownership over anything without a legal right enforced by a government, and every kind of property has exceptions. You think you own a house, but see how long that lasts if you stop paying property taxes. You only own a car until, say, you damage someone else's car without insurance and it gets seized. These are crude examples, so don't read too deeply into them, but they outline my point: licensing the use of things is very often as close as we get to owning just about anything. Not saying that's good or bad, just that that's kind of how it works.


When a company has the ability to push OTA updates to a device locked down with trusted computing, it's not even a backdoor at that point, it's a frontdoor.

I agree political action here is totally fruitless. The UK government and Apple could already be cooperating and you would have no way of telling the difference.


> When a company has the ability to push OTA updates to a device locked down with trusted computing, it's not even a backdoor at that point, it's a frontdoor.

Ideally, everything that runs outside of an app sandbox would be 100% open source. Anything short of that is not sufficient to give people full confidence against a backdoor. (Even that relies on people paying attention, but it at least makes it possible for someone other than a company whistleblower to catch and flag a backdoor.)


I think so too. That should include fully free and open specifications of the hardware, FOSS for all software running outside the sandbox, and probably FOSS for most of what runs inside the sandbox as well. Other measures are needed beyond this alone, but it will be a very important part.


I'll go even further and bring up Trusting Trust: the whole chain needs to be open source and verifiable, and you need to be able to compile each and every part of it yourself.
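
For the curious, here's a minimal Python sketch of the diverse double-compiling check Wheeler proposed as a countermeasure to Trusting Trust: build the compiler's source with an independent trusted compiler, then rebuild the source with both results and compare. All the binary names and paths below are hypothetical placeholders.

    import hashlib
    import subprocess

    def sha256(path: str) -> str:
        """Hash a build artifact so independent builds can be compared."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def compile_with(compiler: str, source: str, output: str) -> None:
        # Invoke a compiler binary on the compiler's own source.
        # The flag syntax is illustrative; real compilers differ.
        subprocess.run([compiler, source, "-o", output], check=True)

    # Stage 1: build the compiler source with a different, trusted compiler.
    compile_with("./trusted-cc", "compiler.c", "stage1")
    # Stage 2: rebuild the same source with the stage-1 result.
    compile_with("./stage1", "compiler.c", "stage2")
    # Reference: rebuild the same source with the distributed, suspect binary.
    compile_with("./suspect-cc", "compiler.c", "reference")

    # With a clean suspect binary and a deterministic build, stage2 and
    # reference are bit-identical; a mismatch means one of the compilers
    # inserted something that isn't in the source.
    print("ok" if sha256("stage2") == sha256("reference") else "MISMATCH")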


Open source alone isn’t enough. You also need a way to build and deploy the code yourself.


Agreed. And demonstrated reproducibility, so you can show that what you built is bit-for-bit identical to what's shipped.
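
Once the toolchain is deterministic, the verification step itself is trivial; something like this Python sketch (file names made up for illustration) is all it amounts to:

    import hashlib

    def digest(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Stream in chunks so large firmware images don't have to fit in RAM.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    mine = digest("firmware-built-locally.img")   # built from the tagged source
    theirs = digest("firmware-as-shipped.img")    # pulled from the OTA server

    # Matching digests mean the shipped binary corresponds to the published
    # source; anything else means the build embeds something the source
    # doesn't show (or the toolchain isn't deterministic yet).
    print("reproducible" if mine == theirs else f"DIVERGED: {mine} vs {theirs}")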


> you would have no way of telling the difference

If only specific individuals are targeted, I agree. But if it's pushed to all users, wouldn't we expect a researcher to notice? Maybe not immediately, so damage would be done in the meantime, but sooner rather than later.


> But if it's pushed to all users, wouldn't we expect a researcher to notice?

Think of the security a games console has: every download arrives encrypted, all storage is encrypted, RAM is encrypted, and security hardware in the CPU makes sure everything is signed by the corporation before anything gets decrypted, all to prevent cheating and piracy.

Modern smartphones are the same way.

We can't expect independent researchers to notice a backdoor when they can't access the code or the network traffic.
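
To make that concrete, here's a toy model in Python of the sign-then-gate-decryption chain described above, using the `cryptography` package with key handling heavily simplified. Anyone sniffing the network sees only ciphertext and a signature; only a device holding the factory-provisioned keys can look inside.

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Vendor side: the signing key stays with the corporation; the matching
    # public key and the payload key are burned into the device at the factory.
    vendor_signing_key = Ed25519PrivateKey.generate()
    device_trusted_pubkey = vendor_signing_key.public_key()
    payload_key = AESGCM.generate_key(bit_length=256)

    # The vendor encrypts an update, then signs the ciphertext.
    nonce = os.urandom(12)
    ciphertext = AESGCM(payload_key).encrypt(nonce, b"firmware v2 blob", None)
    signature = vendor_signing_key.sign(ciphertext)

    # Device side: the signature check gates decryption, so nothing unsigned
    # by the corporation is ever decrypted, let alone executed.
    try:
        device_trusted_pubkey.verify(signature, ciphertext)
    except InvalidSignature:
        raise SystemExit("unsigned blob: refuse to decrypt or run")
    plaintext = AESGCM(payload_key).decrypt(nonce, ciphertext, None)
    print("installing:", plaintext)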


How long was Heartbleed exploitable? How many people looked at that code? Now take the source away and make the exploit intentional.
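
For reference, the whole bug class fits in a few lines. This is a simplified Python simulation, not OpenSSL's actual C code: the handler trusts a length field from the request instead of the size of the payload it actually received.

    # Process memory modeled as one flat buffer: the received payload sits
    # right next to someone else's secrets.
    memory = bytearray(b"PING-PAYLOAD" + b"secret_session_key=hunter2;another_users_cookie")
    PAYLOAD_LEN = len(b"PING-PAYLOAD")

    def heartbeat_buggy(claimed_len: int) -> bytes:
        # Bug: echo back `claimed_len` bytes, trusting the request's length
        # field instead of the payload actually received.
        return bytes(memory[:claimed_len])

    def heartbeat_fixed(claimed_len: int) -> bytes:
        # Fix: reject requests whose claimed length exceeds the real payload,
        # which is essentially the bounds check the OpenSSL patch added.
        if claimed_len > PAYLOAD_LEN:
            raise ValueError("length field exceeds received payload")
        return bytes(memory[:claimed_len])

    print(heartbeat_buggy(60))   # leaks the adjacent secrets
    try:
        heartbeat_fixed(60)
    except ValueError as err:
        print("rejected:", err)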



