
1.3 million people a year die in car crashes. Humans are woefully unqualified to pilot heavy machinery on a daily basis. Tesla’s Autopilot reduces crashes by 40% according to the NHTSA, so I can’t agree with your binary argument.

Whether you trust others is immaterial; statistics will be the final arbiter. If Autopilot still causes fatal accidents, but fewer fatal accidents than humans alone, how could you argue against such a safety system? What of the lives that would go unsaved if we demanded an entirely fault-proof system before deployment? Who are you (not you specifically, but the aggregate) to take those lives away out of irrationality?
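
To make that trade-off concrete, here is a minimal back-of-envelope sketch in Python. Only the 1.3 million baseline and the claimed 40% reduction come from this thread; applying that reduction directly to fleet-wide fatalities is an illustrative assumption, not a real-world estimate:

    # Back-of-envelope comparison of expected road deaths. The 1.3M baseline
    # and the 40% figure are from this thread; applying the 40% directly to
    # worldwide fatalities is an illustrative assumption only.
    baseline_deaths_per_year = 1_300_000   # worldwide, human drivers
    claimed_reduction = 0.40               # NHTSA figure cited above

    deaths_with_system = baseline_deaths_per_year * (1 - claimed_reduction)
    lives_saved = baseline_deaths_per_year - deaths_with_system

    print(f"Expected deaths with the system: {deaths_with_system:,.0f}")  # 780,000
    print(f"Expected lives saved per year:   {lives_saved:,.0f}")         # 520,000
    # The system still "causes" 780,000 deaths, yet demanding a fault-free
    # system before deployment would forgo the 520,000 lives saved.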



> Tesla’s Autopilot reduces crashes by 40% according to the NHTSA

This claim was immediately called into question when it was first published, and the NHTSA is currently facing a FOIA lawsuit for refusing to release the underlying data to independent researchers.

http://www.safetyresearch.net/blog/articles/quality-control-...

https://jalopnik.com/feds-cant-say-why-they-claim-teslas-aut...


As noted in your citation, Tesla requested that the data it provided be kept confidential (not an uncommon request), and the NHTSA granted the request. Whether the statement from the regulatory agency can be independently verified is immaterial.


> Whether the statement from the regulatory agency can be independently verified is immaterial

You believe independent verification of study results is "immaterial"? OK.


That I do not have access to the raw data does not make the resulting findings any less true (see: proprietary hurricane models).

The NHTSA made a determination and I’m using it as a data point.


OK, and all I said in my comment was that the NHTSA was asked for elaboration and proof -- because its findings seemed curious with respect to other study results -- and so far it has declined further explanation. This may be relevant information for anyone who sees you using the NHTSA's claim as a premise.


Fair point. I upvoted your posts; thinking about it further, it’s a legitimate line of inquiry.


I think I normally would have given Tesla the benefit of the doubt. But after the misleading, weasel-worded data they presented to defend Autopilot in light of the recent fatal accident [0], I think the onus is now on them to provide more concrete proof.

[0] https://news.ycombinator.com/item?id=16722500


>> Tesla’s Autopilot reduces crashes by 40% according to the NHTSA

So does any car with automatic emergency braking and/or forward collision warning (links below).

The problem here is that this Tesla drove head-on into the gore point. And previously, it drove right into the side of a huge truck. So... can it be trusted? Your call.

https://www.consumerreports.org/car-safety/automatic-emergen...

" IIHS data show rear-end collisions are cut by 50 percent on vehicles with AEB and FCW. "


"drove head-on into the gore point" "drove right into the side of a huge truck"

A lot of human-caused accidents sound just as silly when stated this simply, so it's not relevant to the comparison.

More to the point, I think each of these deaths would be exactly as tragic, no more and no less, if the sequence of events had been complicated.


I am human, and any product built for me will have to take into account my idiosyncrasies. This includes my unwillingness to drive on the same road with a car that might at any moment swerve into me because of weird software.

It may be irrational, but I can forgive a human; I cannot forgive an AI.


Wasn't that with the old, better Autopilot?



