
I'm not saying what the better option would be (because I don't know), but many people approach the problem from a very myopic point of view.

Adopting Lidar would of course provide Tesla with higher-quality input for their self-driving model. But the quality of the input isn't the whole equation; you need to process it as well. In other words, adopting Lidar would incur costs not only on the hardware side but also on the software side, which of course would result in more expensive cars. More expensive cars means fewer cars sold, and fewer cars sold means less data, which in turn means less input.

Does this result in a worse model? Again, I don't know, but I do know that the issue is more complicated (and not only because of the reasons I mentioned here) than many people seem to think.



You have it wrong. Processing LIDAR is way, way more computationally efficient than processing camera footage into a 3D model. LIDAR feeds you direct distance and speed data. Cameras of course do not, meaning you have to try to compute it, something that is very hard and error-prone even for the most powerful computers ever created (human brains).
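The asymmetry described above can be sketched in a few lines (a toy illustration, not real AV code; the function names and numbers are made up for the example):

```python
import math

# A lidar return already *is* a range measurement: turning it into a
# 3D point is a single trigonometric step.
def lidar_point(range_m, azimuth_rad, elevation_rad):
    """Convert one lidar return (range, azimuth, elevation) to (x, y, z)."""
    horiz = range_m * math.cos(elevation_rad)
    return (horiz * math.cos(azimuth_rad),
            horiz * math.sin(azimuth_rad),
            range_m * math.sin(elevation_rad))

# A camera pixel carries no depth. With two calibrated cameras you can
# recover it from disparity (Z = f * B / d), but only after the hard part:
# matching the same point across both images, which is where the heavy
# computation and the errors live.
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from stereo disparity; assumes the pixel match already succeeded."""
    if disparity_px <= 0:
        raise ValueError("no match or point at infinity: depth unknown")
    return focal_px * baseline_m / disparity_px
```

The point of the sketch: the lidar path is pure geometry per return, while the camera path hides an expensive and failure-prone correspondence problem behind that `disparity_px` argument.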


You aren't wrong, but my assumption was (perhaps incorrectly?) that they would add Lidar sensors in addition to their cameras, not replace them.


Basically every AV company except Tesla is doing cameras + LIDAR. Tesla decided to do camera-only for definitely-not-cost-cutting-reasons.


> “… camera-only for definitely-not-cost-cutting-reasons.”

Is this sarcasm?


Yes



