
There aren't that many novel scenarios on the road. Sure, Google can't program around the possibility of an airplane falling on you, but how often does that happen?

It doesn't have to be perfect. Just very good and improving. Some time ago Google shared a gif of a wheelchair chasing a duck in the middle of the road. The car didn't understand it, so it just stopped. Good enough for me.

Obviously, they have a lot of work ahead of them, but don't be so pessimistic. Most people are shitty drivers (myself included); we aren't impossible to improve upon.



Novel scenarios might not be common compared with miles of traffic-following drudgery, but even really bad human drivers deal with novel scenarios on the road more often than they have accidents.

Stopping might be a sensible safety protocol in some situations, but it isn't in others (not to mention the situations where the car may stop too late because it doesn't actually recognise that a novel scenario is about to occur).

So if you want Level 5 driving, rather than a very impressive demo that's only safe provided a human watches it attentively enough to take emergency measures and can override it when it decides it can't process a situation well enough to proceed, you need the AI's judgement about how to react to a huge number of edge cases to be pretty damn close to perfect.


Except self driving cars are paying full attention all the time and can react significantly faster, so the difference in stopping distance is dramatic. Remember, a human won't even begin to react for the first 0.25 seconds in any emergency situation, and that's the best case.

So, self driving cars can simply be very cautious without seeming to.
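
To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The speed, deceleration and reaction times are illustrative assumptions, not figures from any real vehicle:

    # Rough comparison of stopping distances for different reaction times.
    # All numbers here are illustrative assumptions.
    def stopping_distance(speed_mps, reaction_s, decel_mps2=7.0):
        """Distance covered during the reaction time plus the braking distance."""
        return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

    speed = 13.4  # ~30 mph, in metres per second
    for reaction in (0.1, 0.25, 1.5):  # sensor latency vs. best-case vs. typical human
        print(f"reaction {reaction:>4}s -> {stopping_distance(speed, reaction):.1f} m")

At an assumed 30 mph with roughly 0.7 g of braking that works out to about 14 m, 16 m and 33 m respectively, which is the "dramatic" gap being described.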


I agree that a self driving car's response time can sometimes compensate for its lack of the general intelligence needed to anticipate visible roadside activity developing into a hazard, or non-routine situations in which another driver might cut into its lane. But lightning reflexes aren't going to eliminate situations in which buggy, late or nonexistent responses to things an AI hasn't been trained to deal with endanger other road users, especially when those other road users don't have lightning reflexes themselves.


Can you give an actual example? Because not being able to identify something isn't necessarily an issue, as long as the car notices that something is there and that it should not hit it.

EX: I am sure the car had no idea what this was: https://youtu.be/Uj-rK8V-rik?t=26m11s but as long as it can tell it's bigger than a bread box, and so it should not hit it, that's enough.


The obvious problem scenario is when unpredictable evasive action puts an autonomous vehicle into the path of other drivers (with human response times). That's when very sharp responses to an uncategorised "obstacle" that's actually a drifting plastic bag or a reflection cause more problems than they solve.

Similarly, instantaneous harsh braking might help an AI save the small child it didn't anticipate might chase the football that flew past moments earlier, but a human capable of grasping that footballs are associated with pedestrians making rash decisions might have braked early and gently enough to not get bumped by the car behind. (If they didn't, they might find their late reaction blamed for the accident and possibly even get prosecuted for driving without due care and attention).

The UK requires every learner driver, before they can take the full driving test, to sit an exam of CGI "developing hazard" clips in which they're scored on how quickly they identify things that might happen. I'm sure a key focus of the teams the article discusses is teaching the AI similar cases, like gently slowing when a football-shaped object moves near the road (which is likely far from the most difficult or obscure novelty to teach an AI to handle). But the problem space of novelties humans handle by understanding what things might be and how/if they are likely to move isn't small, nor is it one there's good reason to believe plays to AI's strengths.

(Meta: not sure why you're being downvoted, your contribution seems constructive and on topic to me)


A bag* or child running into a street is not an unusual event, and a car is not going to 'evade' into another car. Rare events are the things people don't see across multiple human lifetimes, not just something you don't see every month.

Which IMO is what's missing from the debate: 'unusual' has to be judged against the ~10+ million miles of training data these things will have before they're in production. Truly rare events are clearly out there, but I doubt people are going to react well to, say, someone falling from an overpass onto the road either. So it's that narrow band of really odd events that a person would still respond to correctly that's the 'problem'.

PS: Of course the bag incident might relate to a bug, which is likely. But IMO that's a completely different topic.
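
To make the "~10+ million miles of training data" point concrete, a quick illustrative calculation; the per-mile rates below are made-up round numbers, not measured figures:

    # How often would an event of a given per-mile rate show up in a fleet's
    # training data? Mileage and rates are invented round numbers.
    training_miles = 10_000_000  # ~10M miles, per the figure above

    assumed_rates_per_mile = {
        "bag blowing across the road":     1 / 10_000,
        "child chasing a ball":            1 / 1_000_000,
        "person falling from an overpass": 1 / 1_000_000_000,
    }

    for event, rate in assumed_rates_per_mile.items():
        print(f"{event}: ~{rate * training_miles:g} expected occurrences")

Anything that shows up hundreds or thousands of times is plausibly "in the training data"; it's the events expected to appear roughly zero times that form the narrow band described above.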


> A bag* or child running into a street is not an unusual event, and a car is not going to 'evade' into another car

Opting to drive around a (stationary, visible from a distance) bag in an unpredictable manner is literally how Waymo's first "at fault" accident occurred...

The point is that a human has a concept of a "ball" linked to the concept of "children play football" and an understanding that if one sees the former, one should be prepared for the latter to burst onto the road from behind the partially-obscured roadside. Appropriate action probably involves easing off the accelerator and lightly tapping the brake so the car behind gets a hint that you might have to stop suddenly if a child emerges from behind a bush. An autonomous car which fails to anticipate, even though it's lightning fast at slamming the brakes on, is going to get rear ended a lot more.

The neural network of a self driving car might be able to classify small coloured spheres in the vicinity of the roadway as balls, and the AI will certainly have been taught the concept of a human-shaped obstacle moving across the roadway being a "need to stop" situation, but it is unlikely to "learn" the association between the two through a few tens of millions of miles of regular driving, because only a very small proportion of "need to stop" events involve balls (and only a very small number of sightings of spheres moving in the vicinity of the roadway result in "need to stop" events).

Of course, you can hard code a machine to respond to ball-shaped objects moving near roads by slowing down, and you can construct a huge number of artificial test scenarios involving balls to teach the AI the association between balls and small children. But either of these options involves engineers envisaging the low frequency hazard and teaching it enough permutations of the sensory input for that hazard for it to be able to anticipate it (and there's a balance to be struck, because nobody wants a paranoid AI which drives through the city braking every time it sees something its neural network identifies as a pedestrian, or the front of a parked car protruding from a driveway).

Suffice to say, we take for granted our ability to know how to react to things like children chasing balls, staggering 4am drunks, tiny puddles the car in front just drove through versus raging torrents of water through the usually safely navigable ford, sandbags versus shopping bags, vehicles laden down with loads which look like things which are not vehicles, and people frantically gesturing to stop.
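
As a toy illustration of the "hard code a response to ball-shaped objects" option above, something like the following rule could be layered on top of a perception system. The class names, detection format and thresholds are all invented for this sketch; no real AV stack is being described:

    # Toy anticipation rule: ease off pre-emptively when a ball-like object
    # is detected near the roadway, instead of waiting for an emergency stop.
    # Labels, fields and thresholds are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str          # e.g. "ball", "pedestrian", "plastic_bag"
        distance_m: float   # distance from the planned path

    def anticipatory_speed_cap(detections, current_cap_mps):
        """Return a (possibly reduced) speed cap given nearby detections."""
        cap = current_cap_mps
        for d in detections:
            if d.label == "ball" and d.distance_m < 20.0:
                # A ball near the road hints a child may follow it.
                cap = min(cap, 4.0)  # crawl speed, illustrative value
        return cap

    print(anticipatory_speed_cap([Detection("ball", 12.0)], current_cap_mps=13.4))

The hard part, as noted, is that every such rule has to be anticipated and tuned by an engineer, and making it too eager gives you exactly the paranoid car that brakes for every sphere and driveway.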


> Opting to drive around a (stationary, visible from a distance) bag in an unpredictable manner is literally how Waymo's first "at fault" accident occurred...

There wasn't anything unpredictable about the behavior of the Google SDC. From the official statement: https://www.engadget.com/2016/02/29/google-self-driving-car-...

"Our car had detected the approaching bus, but predicted that it would yield to us because we were ahead of it.

Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day."


Except Waymo is still in the R&D stage where this stuff is being worked out. The bag issue is exactly the kind of thing that will be fixed before production, because it's in their training data. It's really the 'unknown unknowns' that will be the problem, and they are going to be really odd.

As to the ball case, it does not need to stop, just slow down if it's possible for a kid to run out fast enough to be a problem; i.e. on residential streets, where it should not be going fast in the first place, not on a highway where there are no major obstructions. So again, what you're describing is in the 'to be dealt with' pile.

I don't mean it's working right now, just that they are going to hold off on mass production until it's working.



