> It requires a human to actually look at the contents of the unsafe block.
Yes, that's an incredible capability - that a human can actually look at a small, delimited block to find the bugs, rather than having to audit the whole codebase, is huge.
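To make that concrete, here's a minimal sketch of the kind of delimited region a reviewer actually audits; the `// SAFETY:` comment is the convention used in std, and clippy's `undocumented_unsafe_blocks` lint can require it:

```rust
/// Returns the first byte, or `None` for an empty slice.
fn first(v: &[u8]) -> Option<u8> {
    if v.is_empty() {
        return None;
    }
    // SAFETY: we just checked that `v` is non-empty, so index 0 is in
    // bounds and `get_unchecked(0)` cannot read past the allocation.
    Some(unsafe { *v.get_unchecked(0) })
}
```

The reviewer's whole job is to check the block against the invariant stated in that comment.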
> Most of the time those humans are even rarer than those who validate that unit tests are properly written.
Maybe, maybe. Is such a reviewer rarer than fuzzing with sanitizers, though? Or rarer than serious sandboxing efforts? When unsafe has been used gratuitously in Rust libraries, the community has pushed back, to a fault.
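For what it's worth, a fuzz harness is only a few lines with cargo-fuzz, which builds with AddressSanitizer by default; `my_crate::parse` below is a hypothetical stand-in for whatever API exercises the unsafe code:

```rust
// fuzz/fuzz_targets/parse.rs
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Throw arbitrary bytes at the API; AddressSanitizer catches any
    // out-of-bounds access or use-after-free inside unsafe blocks.
    let _ = my_crate::parse(data);
});
```

Then it's just `cargo fuzz run parse`.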
I don't know what it is you're trying to argue here, though. My questions were really targeted at someone at Google who can speak more directly to what I've been seeing for years.
I am trying to argue that security exploits don't go away just by using a safer systems programming language.
Someone still needs to take care of the remaining 30%, while the unsafe blocks can still account for the other 70% (the memory-safety share) if no one bothers to actually validate them.
Actix Web is a very good example of how the community still has to learn to deal with such issues, and of how effective that security would really have been if its unsafe code hadn't been validated by fellow humans.
Yes, it is definitely better than C, C++, or Objective-C, just not zero effort to keep it safe.
If >95% of your codebase doesn't feature unsafe, your rare human with the review skills only has to look at the remaining 5%, maybe 10%, in order to understand what that 5% is doing.
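As a sketch of why that division of labor works, assuming the usual layering where unsafe is confined behind a safe wrapper (illustrative only; real code would just call `Vec::resize`):

```rust
/// Safe public API: the other ~95% of the codebase calls this and
/// never sees `unsafe` at all.
pub fn zeroed_buffer(buf: &mut Vec<u8>, len: usize) {
    buf.clear();
    buf.reserve(len);
    // SAFETY: `reserve(len)` guarantees capacity >= `len`, and
    // `write_bytes` initializes exactly `len` bytes before `set_len`
    // exposes them.
    unsafe {
        std::ptr::write_bytes(buf.as_mut_ptr(), 0, len);
        buf.set_len(len);
    }
}
```

Your rare reviewer audits the block against its stated invariants; everyone else only ever sees the safe signature.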