> I'm googling, and not finding a reference here. As far as I can tell not one of these manufacturers (or anyone else) is shipping a consumer vehicle with a LIDAR sensor. You're just citing press releases about plans, I guess?
XPeng P5, XPeng G9, Nio ET7 are all shipping with lidar. You'll also see them in many other models starting this year: Mercedes S-class/EQS w/ Drive Pilot, Volvo EX90, Volvo XC90, Audi A6L/A7L/A8L/Q8, Polestar 3, etc. The list is growing [1].
> 2. Depth info from camera devices is proven sufficient. Again, Teslas don't hit stuff due to sensor failures, period (lots of nitpicking in this thread over that framing, here I'm being a bit more precise).
Teslas absolutely hit stuff. There are plenty of examples on Reddit and YouTube of FSD hitting curbs, bollards, and other objects. We also don't have reliable accident numbers for Tesla because they actively withhold them: they skirt CA DMV reporting rules, and their methodology for what counts as an accident is questionable (they don't count incidents where the airbags don't deploy).
> 3. All the remaining hard problems are about recognition and decisionmaking, all of which gets sourced from vision data, not a point cloud.
This is wrong, too. Object detection and decision making use fused input from cameras, lidar, and radar, not cameras alone. There's plenty of literature on sensor fusion and how it's applied throughout the perception and behavior-prediction stack.
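To make the point concrete, here's a minimal sketch of the simplest form of sensor fusion: combining a noisy camera-based depth estimate with a more precise lidar range by inverse-variance weighting. This is an illustrative toy, not any vendor's actual pipeline (real stacks fuse full point clouds and detections with Kalman filters or learned models).

```python
def fuse(camera_depth_m, camera_var, lidar_depth_m, lidar_var):
    """Fuse two depth estimates by inverse-variance weighting.

    Each sensor contributes proportionally to its confidence (1/variance),
    so the noisier camera estimate is pulled toward the lidar reading,
    and the fused variance is lower than either input's.
    """
    w_cam = 1.0 / camera_var
    w_lidar = 1.0 / lidar_var
    fused = (w_cam * camera_depth_m + w_lidar * lidar_depth_m) / (w_cam + w_lidar)
    fused_var = 1.0 / (w_cam + w_lidar)
    return fused, fused_var

# Monocular depth is typically far noisier at range than lidar,
# so the fused estimate lands close to the lidar measurement.
depth, var = fuse(camera_depth_m=42.0, camera_var=4.0,
                  lidar_depth_m=40.0, lidar_var=0.04)
```

The key property: a camera-only system is stuck with the camera's error bars, while a fused system gets an estimate strictly better than either sensor alone.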
> The conclusion being that LIDAR isn't worth it. It's not giving significant advantages.
Only if you lack the ability to do sensor fusion, or if you've already falsely promised customers that their cars are capable of fully autonomous driving and painted yourself into a corner. Only one company stands out here.
[1] https://cnevpost.com/2023/04/21/shanghai-auto-show-nearly-40...