Even if the app is open source with reproducible builds, you have to trust the OS. Are you going to audit every line of the entire operating system? If you do that, are you going to audit every transistor in the SoC to ensure the hardware isn’t compromised in some way? No one person can do that. Even if a team of people could do it, do you trust the other people? At some point, you have to trust _something_. If you don’t, you simply can’t use computers.
What you're saying isn't wrong, but it misses the point. I'm not saying, "you have to trust someone so you shouldn't use it". I'm questioning whether the E2EE is adding anything or if it's security theatre.
Take the case of WhatsApp. Without E2EE, you have to trust the platform (Apple) and the people behind the app (Facebook). With E2EE, you still have to trust Apple, but you also still have to trust Facebook, since Facebook controls the endpoint and you have no insight into what it does or if there are any back doors. The same applies to Signal obtained from the App Store or (AFAIK) Google Play. And Apple's E2EE stuff is obviously not adding anything, since we're already trusting them as the OS vendor.
Something like Signal on the desktop is different, though. There, we still have to trust the OS vendor, but we (and anyone else) can inspect the source code, verify that it does what it's supposed to do and contains no back doors, compile it for ourselves, and then exchange messages without trusting Signal. In the case of Signal on the desktop, E2EE lets us avoid trusting the Signal organisation, which has value even though we still need to trust our OS and our hardware.
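The compile-it-yourself argument above can be sketched concretely: reproducibility means the artifact you build from the audited source hashes identically to the distributed binary. This is a hypothetical illustration; the filename and the "official" hash are placeholders (the hash shown is that of an empty file), not real Signal release artifacts:

```python
import hashlib

# Placeholder for a vendor-published release hash. This particular value is
# the SHA-256 of an empty file, NOT a real Signal build.
OFFICIAL_HASH = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path):
    """Return the hex SHA-256 digest of the file at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for the artifact we compiled ourselves from the audited source
# (an empty file, to match the placeholder hash above).
with open("signal-desktop.bin", "wb"):
    pass

if sha256_of("signal-desktop.bin") == OFFICIAL_HASH:
    print("MATCH: our build reproduces the distributed binary")
else:
    print("MISMATCH: the distributed binary differs from the audited source")
```

If the hashes match, auditing the source is equivalent to auditing the shipped binary, which is exactly what lets you drop the app vendor from the list of parties you must trust.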
You could argue that we should simply trust the Signal organisation. That's not unreasonable, but then E2EE is no better than ordinary client-server encryption à la TLS plus the promise of a solid no-logging policy.
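The distinction between transport encryption and E2EE can be made concrete with a toy model. XOR with a shared key stands in for a real cipher here; this illustrates who can read what, not how to do cryptography:

```python
# Toy model (NOT real crypto): transport encryption vs end-to-end encryption.
# XOR with a repeating key stands in for an actual cipher.

def xor(data: bytes, key: bytes) -> bytes:
    """'Encrypt'/'decrypt' by XORing data with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

alice_bob_key = b"end-to-end-key"   # known only to Alice and Bob
alice_srv_tls = b"tls-key-1"        # Alice <-> server transport key
srv_bob_tls = b"tls-key-2"          # server <-> Bob transport key

msg = b"meet at noon"

# TLS only: the server strips Alice's transport encryption, so it holds
# the plaintext before re-encrypting toward Bob.
on_wire = xor(msg, alice_srv_tls)
seen_by_server = xor(on_wire, alice_srv_tls)      # server reads the message
assert seen_by_server == msg
assert xor(xor(seen_by_server, srv_bob_tls), srv_bob_tls) == msg

# E2EE (still carried inside TLS): stripping the transport layer leaves the
# server holding ciphertext it has no key for.
inner = xor(msg, alice_bob_key)
on_wire = xor(inner, alice_srv_tls)
held_by_server = xor(on_wire, alice_srv_tls)      # ciphertext, not plaintext
assert held_by_server != msg
assert xor(held_by_server, alice_bob_key) == msg  # only Bob can recover msg
```

In the TLS-only path, the "no-logging policy" is the only thing standing between the server and your plaintext; in the E2EE path, the server never holds it in the first place.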
(We could also discuss the feasibility of verifying that there are no back-doors purely from reading source code and how well a back-door could be hidden. But that's sort of a tangent.)
This does not take away from the parent's point. You need to trust something, yes; we just don't trust Google in particular in that whole chain of trust.
I haven't said I'm not willing to trust Apple. The Apple example is included exactly because you have to trust Apple, whether they use E2EE or not, so E2EE doesn't make a difference and is security theatre.
E2EE has value in the case where it (at least in principle) allows you to remove something from the list of trusted entities. Signal on the desktop is one such case, where E2EE lets us avoid trusting the Signal organisation.
I’m not trying to protect myself from Apple. Apple doesn’t have a reason to lie. I want to protect myself from the police. I trust Apple much more than I trust law enforcement. And I doubt it could remain secret for very long if Apple were forced to compromise its encryption.
Besides, they can afford better lawyers than Signal.