Why is it “quite obvious” at all? How is arguing for being careful “such a stupid argument it’s really just not worth anyone’s time to entertain”?
Someone in this discussion has an insane amount of blind faith in a technology that, in this case, literally killed a pedestrian, and it’s not the people arguing for just consequences.
Are you arguing that a machine does not have better reaction times than a human being? Are you arguing that a machine can fall asleep, drink and drive, panic in a high stress situation?
Aren't you the same person who called for holding the developer liable for writing software with a bug? Are you accusing the developer of promising something that is impossible (not hitting a pedestrian in a crosswalk?) or simply implementing it wrong?
It's worth pointing out that we have no idea yet who is at fault in this accident. It could easily be someone who simply walked out in front of traffic when they weren't paying attention.
"Are you saying X" is a pretty aggressive way to frame your argument.
The above poster seems pretty clear that it is NOT obvious that cars will necessarily drive more safely than humans on average, in the same way it is NOT obvious that we will ever have Artificial General Intelligence.
These are very complicated problems, and the machines are currently (significantly) worse than human drivers, so I think it's fair to question the argument that "everything will work out eventually."
The answer to your first “question”, of course, is that it depends on the machine: how it’s built, how it’s programmed, and the context it operates in. Machines can have much faster reflexes, or they can freeze.
Theoretically it could be otherwise, perhaps, though the human brain has an extremely parallel pattern-matching engine honed by about half a billion years of evolution.
Realistically, the self-driving system will be made of layered distinct components that all add latency. This is how we build both hardware and software. An image is sensed, it gets compressed, it gets passed along the CAN bus, it gets queued up, it gets decompressed, it gets queued up again, object detection runs, the result of that gets queued up for the next stage... and before long you're lucky if you haven't burned a whole second of time.
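To make that concrete, here is a minimal Python sketch with entirely made-up per-stage numbers (none of them come from a real vehicle) showing how a layered, queued pipeline like the one described above accumulates latency:

```python
# Hypothetical per-stage latencies (milliseconds) for a layered perception
# pipeline: sense -> compress -> bus -> queue -> decompress -> queue ->
# detect -> plan -> actuate. Purely illustrative numbers.
PIPELINE_STAGES_MS = {
    "image capture / sensor readout": 33,   # one frame at ~30 fps
    "compression":                    10,
    "bus transfer":                    5,
    "input queue":                    15,
    "decompression":                  10,
    "second queue":                   15,
    "object detection":               50,
    "planning queue + decision":      40,
    "actuation (brakes/steering)":   100,
}

total_ms = sum(PIPELINE_STAGES_MS.values())
print(f"End-to-end latency: {total_ms} ms ({total_ms / 1000:.2f} s)")
# With numbers in this ballpark you are already near ~0.28 s before the
# vehicle physically starts to respond; a few extra queueing hops can
# push that toward the "whole second" the comment worries about.
```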
Machines can drive aggressively.
There was a university that had self-driving cars do parallel parking... by drifting. Driving along, the car would find a parking spot on the other side of the road. It would steer hard to that side, break traction, swing the rear of the vehicle around sideways through a 180-degree turn, and finally skid sideways into the spot. The car did this perfectly.
That kind of ability is something I personally don't have. I would consider a self-driving car that could do this to be a more capable driver than I am. If I'm paying, and that kind of driving is my preference, I expect to get it.
I really don't want you to get your wish. There's no need to invest in flashy self-driving stunt driving; building a car that can get you from A to B safely and in a reasonable time frame is all we should be aiming for.
That sort of drifting parallel park might work most or nearly all of the time, but if road conditions are poor and the car loses traction, it becomes a lot riskier.
A camera feed going straight to the neural network doesn't add much latency, and the network doesn't take long to process a frame and make a decision. Humans need at best about half a second, and at worst several seconds, to recognize, process, and act. These systems are designed to respond quickly; they do not have a second of latency.
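For contrast with the earlier pipeline sketch, here is a similarly hypothetical sketch of the tighter path this comment describes, where the frame goes more or less straight to the network. The numbers are assumptions, not measurements:

```python
# Hypothetical latencies (milliseconds) for a direct camera-to-network path
# with minimal buffering. Inference time varies widely with hardware/model.
DIRECT_PATH_MS = {
    "sensor readout":       33,   # one frame at ~30 fps
    "preprocessing":         5,
    "neural-net inference": 30,   # assumed figure
    "decision / command":    5,
}

machine_ms = sum(DIRECT_PATH_MS.values())
human_reaction_ms = (500, 2500)  # roughly 0.5 s best case to several seconds

print(f"Direct pipeline: ~{machine_ms} ms")
print(f"Human driver:    ~{human_reaction_ms[0]}-{human_reaction_ms[1]} ms")
# Whether a production system looks more like this sketch or the queued,
# multi-stage pipeline above is exactly what the two comments disagree about.
```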