Under the Hood of Luminar's Long-Reach Lidar (ieee.org)
58 points by sohkamyung on July 6, 2017 | 43 comments


The article says Luminar's advantage comes largely from wavelength: the longer wavelength they use is absorbed more strongly by water, meaning it won't penetrate to the retina, even at higher power, which allows them to have much better range and resolution.

But what about the weather? The atmosphere often contains water. Does anyone know Luminar's performance in fog or rain?

I'm always worried self-driving cars will be set back by having so much of their development take place in sunny California.


Luminar's lidar beams will be absorbed by water rather than scattered, so fog will effectively look black to it rather than white, and it will be just as unable to see through rain or fog as conventional lidar or your eye.
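One way to see why fog defeats lidar whether the loss is absorption or scattering: the round-trip return falls off exponentially with the atmosphere's extinction coefficient (Beer-Lambert). A toy sketch, with made-up illustrative coefficients rather than measured 905 nm or 1550 nm values:

    import math

    def return_fraction(extinction_per_m, range_m):
        """Fraction of transmitted power surviving the round trip to a target
        at range_m (Beer-Lambert attenuation only; ignores beam spreading
        and target reflectivity)."""
        return math.exp(-2.0 * extinction_per_m * range_m)

    # Illustrative extinction coefficients in 1/m -- NOT measured values.
    for label, alpha in [("clear air", 0.0001), ("dense fog", 0.03)]:
        print(f"{label}: 200 m return fraction = {return_fraction(alpha, 200):.1e}")

Either way the return from a 200 m target in dense fog is negligible; the difference is just whether the fog itself scatters light back (looks "white") or swallows it (looks "black").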

Thankfully if it's going to be developed in SF they'll be getting a lot of experience with fog.


True. For details on relative absorption, scattering, and rangefinding performance at 905 nm vs 1550 nm, see https://www.degruyter.com/downloadpdf/j/oere.2014.22.issue-3...


200m for a mechanically scanned LIDAR isn't a big deal. There are older flash LIDARs that can reach 400m with a 9 degree field of view.[1] Flash LIDARs have a field of view vs. range tradeoff, since they illuminate the entire field of view at one time. Range gated imagers, another form of flash LIDAR, have ranges out to kilometers.[2] (Those guys are using a lot of power, but it's spread over a wide area.)

The eye safety problem can be overcome by making the outgoing beam bigger. What matters is how much energy comes through a hole about 1/4" in diameter, the size of the pupil of the human eye. If you enlarge the outgoing beam, by running it through a collimator backwards, the energy per unit area decreases. Expand a beam to a 2" circle and you have reduced the power per unit area by a factor of about 50. The risk is to someone coming up to the thing and staring into the beam, not out at range where energy per unit area is much lower.
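That factor comes straight from the ratio of the beam's cross-sectional area to the pupil's. A quick sanity check, assuming a ~7 mm pupil (roughly the 1/4" figure above) and a 2" expanded beam:

    # Power per unit area scales inversely with beam cross-sectional area,
    # so a fixed-size pupil intercepts correspondingly less energy.
    pupil_diameter_mm = 7.0    # dark-adapted human pupil, roughly 1/4 inch
    beam_diameter_mm = 50.8    # 2 inch expanded outgoing beam

    reduction = (beam_diameter_mm / pupil_diameter_mm) ** 2
    print(f"power per unit area down by a factor of ~{reduction:.0f}")  # ~53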

Cost remains a bigger issue than range. Give it 2-3 years.

[1] http://www.advancedscientificconcepts.com/products/older-pro... [2] https://www.youtube.com/channel/UCLrAizlR4ry9Nu6A7BXBBdg


Are there any cumulative effects regarding eye safety?

As in: if hundreds of cars in your field of view, say a traffic jam in the opposing lane, are all firing LIDAR, would it add up to trouble?


Excellent question. I foresee a booming market in lidar-blocking sunglasses.


I wonder how this affects non-human eyes.


Why use lasers at all? My brain doesn't need anything like a laser to figure out where objects are in relation to me. Is there something about a human eye that a camera can't reproduce for a machine?


In my understanding, your brain does two things automatically that are currently difficult for computer vision: fast and accurate object recognition, and tracking an object by following its history over time.

Currently CV can be good or fast, but not really both.

Also, contrariwise, your brain is actually terrible at depth perception and object recognition, and employs many tricks to approximate both.[0]

---

I don't have any special knowledge on this, but I wouldn't be surprised if self-driving car companies are using Lidar to train CV algorithms.

---

0. http://www.opticalillusionsportal.com/wp-content/uploads/201...


You have good company with that opinion. No less than Sebastian Thrun will tell anyone who will listen that he is sorry he got the self-driving car world hooked on Lidar point clouds. The Udacity self-driving car class is largely CV based. Cheap cameras and good CV will win eventually IMHO.


The key word is "eventually". CV is making huge progress lately, but it is nowhere near human level yet. My guess would be that the first generation of self-driving cars will use Lidar because it means shorter time-to-market. Later manufacturers will be able to supplement Lidar more and more with cameras + CV. And maybe microphones, thermal cameras,...?


Assuming the LIDAR manufacturers do not achieve regulatory capture to lock out competing tech.


Also, our eyes are astonishingly good cameras - high dynamic range, excellent white balance control, high speed autofocus, high bit depth.


Eh, not really, plenty of other animals and machines do all of your examples much better than humans. Our eyes are mediocre actually. It's our visual cortex that is superior.


The human eye has much higher sensitivity range and resolution than any sensor I know of.

The human eye does the equivalent of what you might call supersampling in computer terms. The eyes make very subtle focus shifts to change the perception of the same object and get a more detailed picture of something that is difficult to see. No mechanical lens that I know of can move that fast or that accurately, but you can achieve something similar with multiple cameras.

The brain also filters out extraneous data and interpolates missing data.

Human vision is far from perfect, though. Light changes that exceed your sensitivity range will blind you. Vision is impaired by darkness.

The filtering and interpolation mechanisms of the brain are error prone and the brain often does not know when it is wrong. You may see something and never know it wasn't real, or don't see something and never know it was there.


> The filtering and interpolation mechanisms of the brain are error prone and the brain often does not know when it is wrong. You may see something and never know it wasn't real, or don't see something and never know it was there.

But it's hard to say necessarily that it's not optimal -- that is, that a digital version could necessarily do better. We are biased by our priors, just as a digital pattern matcher is biased by its training set. And in low light, we try to find patterns that we recognize in the noise, reconstruct the missing parts, etc., just as a computer would.

A digital system might eventually have better resolution and be able to do better on some scale of precision and performance, but I suspect that most attempts to improve performance by introducing "supersampling" and pattern matching and other forms of inference will always result in similar errors to the brain. Perhaps different in character due to differences in the training set and algorithms, but of a similar nature.


This comes up basically every time Tesla's Autopilot is discussed on HN, since they rely mostly on cameras, except that they also have a front radar. In the end, we want self-driving cars to be better than human drivers, so they should also handle situations that humans struggle with and additionally have some redundancy. Some things that are difficult for CV, detecting objects in the dark for example, are trivial for Lidar. Radar, on the other hand, can provide accurate measurements of the distance and speed of other vehicles, also independent of lighting conditions.
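For a sense of why radar gives distance and speed so directly: range falls out of echo time of flight and radial speed out of the Doppler shift. A minimal sketch of the textbook relations (the 77 GHz carrier is just a common automotive band, not a claim about any particular unit):

    C = 299_792_458.0  # speed of light, m/s

    def range_from_delay(round_trip_s: float) -> float:
        """Target range from the echo's round-trip delay."""
        return C * round_trip_s / 2.0

    def speed_from_doppler(doppler_hz: float, carrier_hz: float) -> float:
        """Relative (radial) speed from the Doppler shift of the echo."""
        return doppler_hz * C / (2.0 * carrier_hz)

    print(range_from_delay(1e-6))            # ~150 m for a 1 microsecond echo
    print(speed_from_doppler(14_000, 77e9))  # ~27 m/s (~60 mph) closing speed

Neither relation cares whether it is day or night, which is the point about lighting independence.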


Yes. The human eye is attached to the human brain, which we haven't yet been able to reproduce for the machine.


Maybe a good example here is that most humans wouldn't try to avoid a plastic shopping bag floating in front of a car.

Or our ability to interpret facial expressions of pedestrians to help gauge if they might walk out in front of the car.


> the human eye is attached to the human brain

And those human brains are attached to human arms which navigate Americans into 32,000 motor vehicle deaths a year [1].

[1] https://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_i...


Sure, but that's not what ianai asked. The question was what makes an eye better than a camera. You are comparing the human brain to a hypothetical future ideal machine brain. Replace every single driver on the road today with current state of the art machine brains operating under exactly the same conditions, and that 32k death toll will look good in comparison.


OP asked why we need more sophisticated hardware for self-driving cars than human drivers possess. My response is that the status quo is not good enough for an emerging technology. Self-driving cars are not politically viable if they kill a hundred people a day. They may not be viable if they kill 1/10th that number. That's why we need better sensors for them than we possess.


There are a number of disadvantages to human drivers. They can only look in one direction at a time, they get tired or distracted, etc. But there's no beating a human (yet) when it comes to taking visual data and trying to figure out the physical structures generating it.


Brains like yours are more advanced than the computers they are hooking these up to, and those brains still cause a lot of fatalities every year.

Why make the problem harder than it needs to be? Why not shoot for superhuman? Who cares if the submarine swims like a fish?


Lot of good comments here. CV is heading in that direction, but in addition to the other problems mentioned, the needed data bandwidth rises exponentially. My own take is that we go broad in some ML work where we should go vertical (I believe the same thing that makes you jump when you see a (non) snake is what your brain is using for image-fusion feature detection).

However, the real problem not mentioned is parallax. We understand some bits of it, but our brain does a lot of high-level manipulation of the data. In the real world, two or more visual sensors see a point at a given angular relationship from two completely separate incidence angles, but the "same" point is not the same thing: take an index card, paint one side red and the other green, and look at it edge-on from far enough away that each eye sees a different side, yet close enough for parallax. It sometimes produces interesting illusions :-)

When you get to complex 3D structures and parallax, the problem gets a lot nastier, because after all the work to get the data you want, you are then forced to throw offending bits of it away and start making educated guesses. My own take is "You do not see with your eyes, you see with your mind" :-)


I think a simpler answer than the others given is: because we can, and we don't think it'll make the cars less safe.

We add ultrasound range sensors for parking because it helps, even though it's possible to park despite not being bats. You also could still drive with reduced vision, but it'd be better if you didn't.

Would we drive better if we had accurate distance sensors?

So we theoretically may not need to, but they help with the current problems we have.


The main use of lidar is very fast obstacle detection. Other popular methods for detecting obstacles are ultrasonics and vision.

The problem with ultrasonics is latency and limited range, as well as a lack of accuracy.

To detect obstacles with vision, you generally need a stereo camera and have to generate a disparity map (you can do it with a moving mono camera too, but it's much harder and even less reliable). Confidence drops quickly as objects get farther away and as lighting conditions, reflections, etc. get more complex. The main problem is that popular algorithms rely only on geometry to compute depth and don't take into account knowledge of the world. For example, in an area of specular reflection, geometry will fail even though one could reasonably estimate depth by assuming some continuity in the shapes of real-world objects.
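For reference, that geometry-only pipeline boils down to block matching plus depth = focal_length * baseline / disparity. A minimal sketch using OpenCV's StereoBM; the file names, focal length, and baseline are placeholders, not values from any real rig:

    import cv2
    import numpy as np

    # Rectified stereo pair (placeholder file names), 8-bit grayscale for StereoBM.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching is purely geometric: it has no idea what the objects are.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point x16

    focal_px, baseline_m = 700.0, 0.12   # placeholder calibration values
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_px * baseline_m / disparity[valid]

    # Low-texture or specular regions yield no valid disparity at all, which is
    # exactly the failure mode described above.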

We humans are very efficient at figuring out depth even with a single eye, and that's purely because we can do very quick and accurate object segmentation and combine it with our knowledge of what those objects are (semantics) as well as our internal map of the world around us. For example, when you see a truck on the road, even with one eye, you have a fair estimate of what size trucks usually are and how big one should appear at 100 ft away vs. 1000 ft away. You'd be able to do this even if you could see the truck only partially, from a side angle or from behind.

There are no known algorithms that come anywhere close to human-level accuracy and speed for depth estimation, and consequently obstacle detection, using just vision in a variety of lighting conditions while moving at 70 miles/hr. Lidars, on the other hand, are super easy to implement, fast, and very accurate. So unless a major breakthrough comes along, lidars are really the only choice right now for fast and accurate obstacle detection. Remember, even a tiny error rate could be fatal when you have millions of cars covering millions of miles every day.
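To illustrate why the lidar route is comparatively straightforward: every return is already a metric 3D point, so a crude obstacle check can be plain geometric filtering. A toy sketch with made-up thresholds:

    import numpy as np

    def obstacles_ahead(points_xyz: np.ndarray,
                        corridor_half_width_m: float = 1.5,
                        max_range_m: float = 100.0,
                        min_height_m: float = 0.3) -> np.ndarray:
        """Return points that look like obstacles in the forward corridor.
        points_xyz: (N, 3) array in the vehicle frame, x forward, y left, z up."""
        x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
        mask = ((x > 0) & (x < max_range_m)
                & (np.abs(y) < corridor_half_width_m)
                & (z > min_height_m))      # crude ground removal by height
        return points_xyz[mask]

No depth-estimation step is needed before a check like this, because each point already is a depth measurement.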


Lidar works at night?


Passive lidar/radar/sonar is a harder problem than active.


How do Lidar devices deal with potentially dozens of other Lidars operating in the same area?


You can have all the sensor-tech in the world, but self-driving cars aren't going to take off until you have standardized car-to-car communications and smart roadways designed for self-driving cars, such as pre-mapped lanes and surfaces as well as smart roads communicating road-condition and obstruction information.


Car-to-car communications are not necessary for automatic driving. They may even be undesirable, as a potential source of bad info. Google/Waymo doesn't need them. They make the point that there are lots of things in the environment, from bicyclists to moose, that aren't going to cooperate with the system.

The need for super-detailed maps is probably temporary. A reasonable goal is to have the ability to drive anything with a basic map (Google StreetView level) at slow speed, then retain info to do it again at faster speed. Some of that mapping info may later be uploaded for use by others.


I really hope they get self driving cars working with nothing more than current GPS maps, plus lots of sensors. Considering the drive I took this weekend, past actively changing construction zones, landslides, along infrequently traveled roads (and even a logging road), I wouldn't want to trust my life to stale data.


We did all that in the DARPA Grand Challenge over a decade ago. Most of the problems today involve dealing with other road users.

(Some of the self-driving car projects today are less capable on bad roads than the off-road systems of 2005. They don't have to be; it just comes from focusing on lane following on freeways as the primary goal.)


I'd love to see self-riding multi-legged robots, like an advanced version of Boston Dynamics' AlphaDog. You don't even need roads for them. With some kind of super-stability they could become more comfortable than wheeled vehicles. Perhaps we'll have human-carrying drones a bit sooner, though. Still, in a somewhat more distant future, roads might become obsolete.


It's been tried a few times. Early versions were really clunky. Later versions were usually art projects.

* General Electric Walking Truck (1969) [1]

* Adaptive Suspension Vehicle (1984) [2]

* Korean giant mecha (2016) [3]

The BPG Motors Uno transforming motorcycle seemed a good idea, but they gave up on it.[4]

[1] https://www.youtube.com/watch?v=ZMGCFLEYakM [2] https://www.youtube.com/watch?v=DIiD1JimBXQ&feature=youtu.be... [3] https://www.youtube.com/watch?v=bqZWNn5qZ7U [4] https://www.youtube.com/watch?v=odI4WaYEcCU


Riding one of those, galloping along at 70 mph, would be quite something. (I guess it's the legged equivalent of a motorbike?) You'd have to have a lot of trust.


We do car-to-car communications already via signaling. Bad actors also exist right now and are handled via regulations. The same will happen under autonomous systems.

The final check will be sensors, but the system relies on cooperation.


A "digital rail" with centralized traffic control is much safer and much more practical.

Autonomous is a dead end. People make the mistake of looking at early progress (starting from a zero baseline) and thinking it will not stop until it is perfect.

It is like when we went from 100 MHz to 1 GHz chips and people were saying "in 10 years we will have 10 GHz chips", but in reality we never will.


Sure, but that means separated roads without pedestrians, cyclists and non-autonomous vehicles. Not really politically possible. And it's an incompatible system, so you don't have any benefits until you've sunk a huge ton of money - a total non-starter in most countries.


I would settle for a car that drives exactly as well as I do, and none of that stuff is required to meet that goal.


You communicate with other cars via signaling. Road conditions are transmitted to you via signs.


And? Why wouldn't my automated car do that?



