Maybe I'm misunderstanding something, but we are talking about non-rooted consumer Android, right? Google already has complete control over those systems. That hypothetical oppressive regime could simply ask them to embed their spyware in the next OS update (or have them remote-install a spy app of their choice via the normal Play Services process[1]).
What more potential for abuse would this change bring that isn't there already?
I don't understand either. It's not like they don't also control the code that performs the signature verification.
I don't really know how app submission/building/signing works on Android, but I'd say the main issue it could add is that the application could be tampered with between the moment it's built and the moment it's signed by Google.
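For concreteness, a quick sketch of what signing and verification look like from the developer's side, using `apksigner` from the Android build-tools (the keystore and file names here are illustrative). With Play App Signing, the developer signs with an upload key and Google re-signs with the release key, which is exactly the window between build and final signature discussed above:

```shell
# Sign an APK with your own key (keystore path/name is illustrative):
apksigner sign --ks my-release-key.jks --out app-signed.apk app-unsigned.apk

# Verify the signature and print the signing certificate's digests,
# which can be compared against a fingerprint the developer published:
apksigner verify --print-certs app-signed.apk
```

Note that on-device verification only proves the APK matches whatever key Google (or the installer) ultimately trusts, not that the bytes match what the developer originally built.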
It's surprising to me how pervasive this idea is that app code can be completely trusted because a developer signs it at some point.
All those super secure end-to-end encrypted messaging apps are just one automatic app update away from uploading any locally stored conversations and keys. And turning off automatic updates isn't enough, Google and Apple have ways to force an update if required.
Maybe for now Apple/Google can convince the courts to not coerce them into using this power, but it is present.
Personally I would say App Signing is the smallest worry. Far bigger are:
1. Mandatory Xcode for iOS build/debug
2. Mandatory yearly payments to Apple if you want to release an iOS app
3. Mandatory DRM for iOS apps, even for Open Source apps
4. Horrible Apple certificate/provisioning profile system if you have multiple devices
Complete user freedom includes the freedom to activate the orbital footgun array (e.g. kernel-level debuggers); it is inherently unsafe.
So we can't have both at the same time.
But what we can have is more stops along the safe-powerful gradient, and safe APIs/wrappers for the most popular use cases that are currently implemented through unsafe means.
I mean, doesn't Android already have that middle ground? If I install F-Droid, I have to assess whether I trust it, but thereafter I delegate trust to them and can install from their repos without having to re-decide every time.
> To a certain extent yes, but delegation at the store level is extremely coarse grained.
I suppose that's true. I'm curious what level of precision you would prefer to see? Accepting a developer's key?
> Also, no automatic updates.
Only because Google decided to be uncooperative in their builds. If you control your own system partition, I believe you can still inject the privileged extension and have this. Google has also implied that this might change with Android 12 (https://android-developers.googleblog.com/2020/09/listening-...).
Apple never sufficiently instruments code during app review; the curation doesn't actually prevent malware, it just lets them remove it once it's well known.
If this weren't the case, you wouldn't hear about people sneaking behavior Apple doesn't like into published apps.
This is exactly how iOS works today; in fact Apple recommends you upload your LLVM IR to them and tag your assets as well, so they can recompile and recombine your apps for hardware whose existence you don't even know about yet. Which is nice if you trust Apple…if you don't, then it is very difficult to verify that what you're downloading from the App Store is actually what you submitted to the company. With re-signing and FairPlay and all the wrappers that Apple applies, it is really difficult to do any sort of verification here :(
Not easily, unfortunately. Accessing the app files themselves is usually not possible on a normal iOS device, and even then they are encrypted with FairPlay DRM (which is easy to reverse, but only on a jailbroken device).
I believe the main reason it was incompatible was a EULA that had incompatible sections and was unwaivable; those parts are gone now, so whether it's still incompatible is not clear.
IANAL and I have no involvement in the Apple ecosystem, so take with a grain of salt:
My understanding is that GPLv3 requires that anyone who gets a binary can also get the source to it, and can then build and run that source on the same device. Even if Apple now allows distribution of software that demands to also share its source code, it's my understanding that you can't build that code and run the result on your iPhone without either rebuilding/reinstalling every 7 days or paying Apple. That certainly seems to be against the intention of the license, although I admit it may technically squeak by the exact requirements.
I’m starting to question the whole app store distribution mechanism. The web is clearly not the best platform for mobile, but as mobile developers, maybe we should take a stand and only develop for the web, pushing mobile OS manufacturers to improve their platforms for the web instead of having us cooperate to publish in their own private gardens.
Native apps on these devices are locked down to the point that they don't really provide any advantage over the web (and it's extremely rare that the few advantages they do provide, such as guaranteed caching, are actually used in a helpful way).
Their ads are much harder to block, and they have better integration with the OS for things like the camera, clipboard (even if we have seen apps take too much clipboard access), notifications, etc. However, the fact that you can't block in-app ads without a network-level ad blocker means they're probably here to stay.
Are you sure the “web” is not already largely Google’s private garden? They basically control every aspect of the web now. Nothing your webapp does or sees for a large majority of its users is outside of Google’s vision...
If Google has their way, a big chunk of the web will be absorbed by AMP. I don't understand why so many talented people at Google continue to support this kind of takeover. It goes completely against the hacker ethic that built the internet.
>I don't understand why so many talented people at Google continue to support this kind of takeover.
It's easy to talk yourself into a position that boils down to "I trust us."
Not only are people uninterested in designing around that, most people don't think of it as a problem: partially because they don't always engage in systems thinking to that extent, and also because it's grueling to admit that your ability to make decisions should not be trusted.
Sure, it's easy to say "I don't trust myself" when asking someone else for a code review. It's expected that you'll miss some things, and motives are not really in question. Removing the ability to change goals after the fact, mislead, or break promises is different in kind.
> Anyone who had an old school hacker ethic retired from Google years ago.
Some of the Go developers fall into this category, and they aren't retired yet.
However, they are walled off inside one particular part of Google, and it's a part that is carefully isolated from the kinds of things that raise the issues under discussion.
Yes, it’s true that the faceless corporation would allegedly not care, but there are still human beings working there who would benefit from keeping the open web alive (so would their grandmothers, grandchildren, friends, etc). Google isn’t completely run by Skynet yet.
I would expect someone with principles to behave differently. There certainly are principled people at Google (the recent-ish google walkouts come to mind). Maybe the principles I'm thinking of are no longer valued at Google.
Also, in (most?) Linux distros it's easy to add additional keys to the system's "trusted" list or bypass it completely (AFAIK, `dnf localinstall ./foo.rpm` doesn't care at all about signatures), which makes a significant difference in user freedom IMO.
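To make that concrete, here is roughly what key management and signature bypass look like on an RPM-based distro (the key and package file names are illustrative; whether local installs are checked depends on `localpkg_gpgcheck` in `dnf.conf`):

```shell
# Add an extra key to the system's trusted RPM keyring:
sudo rpm --import ./my-vendor-key.asc

# Inspect which signature (if any) a local package carries:
rpm -K ./foo.rpm

# Install a local package; by default dnf does not enforce GPG checks
# on local files unless localpkg_gpgcheck=1 is set in dnf.conf:
sudo dnf install ./foo.rpm

# Or bypass signature verification explicitly:
sudo dnf install --nogpgcheck ./foo.rpm
```

The point being: on a typical Linux distro the trust list is user-editable and the checks are user-bypassable, which is the freedom being contrasted with the mobile platforms above.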
[1] https://www.quora.com/How-Google-play-remote-installation-of...