Drivers react to Tesla’s full self-driving beta release (arstechnica.com)
129 points by elsewhen on Oct 31, 2020 | 334 comments


"Move fast and break things."

I really don't understand how a company can get away with using its customers as QA when they sell products that can kill people. It's one thing if it's a website for throwing virtual sheep at one another; nobody really cares if your Facebook feed omits an update or two. It's another thing if a software bug means that you die. Or at least it should be - I'm amazed at the number of people who seemingly don't care.

I also wonder what this is doing to Tesla's brand reputation. I was just in the market for solar panels and Tesla's offering (both the panels and the solar roof) was very, very attractive on price & aesthetics, but their reliability record with their cars made me think twice about buying a 25+ year product that keeps my home powered and dry.


> I really don't understand how a company can get away with using its customers as QA when they sell products that can kill people

There's a term for it: "normalization of deviance"[1]. A highly risky thing is done and doesn't fail as soon as expected. The people doing it assume that means the risk is lower than they thought, but it isn't; they just happened to get lucky.

I'm sure Elon knows this, so he probably believes their tech can advance enough before their luck turns. I personally think he's dead wrong on this, but it seems like a lot of Silicon Valley Tesla owners who sleep or read their phones while on Autopilot accept it at face value.

[1] - https://en.wikipedia.org/wiki/Normalization_of_deviance


> There's a term for it: "normalization of deviance"

I believe this is not entirely correct: the parent asks how a company can get away with something, while the term you cite describes what happens inside the company itself.

The main question should be: why do we (the government) even allow companies to do something like this?


Say his name, Walter Huang


Obviously I have no idea what's in their heads, but a plausible train of thought goes like:

Legal: If you sell it as a drive assist and not a driver replacement, you're off the hook, with a high probability. It's on the drivers if they get complacent. There may even be precedents with similar products for trucks.

Moral: pushing this now will very likely kill some people who would otherwise be alive (drivers or pedestrians). But it's very unlikely to be carnage - I'd dare say even impossible. On the other hand, moving the technology forward and directly into widespread commercial use will save people, and likely even... what's the equivalent word for "carnage", but where people are saved? A miracle.

Branding: well, there you have a real risk. Public opinion is fickle, and a particularly gruesome accident might do some real damage, both to Tesla and to self driving technologies in general. But then, they do have a high risk high reward mentality, and the chance is good that the first bad accident (statistically inevitable) will come after the advantages are proven and well established.


> On the other hand, moving the technology forward and directly into widespread commercial use will save people

This is the only rationale I can come up with for Tesla's approach. The tenets go like this:

1) The ultimate realization of self driving will be successful, and will have profoundly positive impacts on society (e.g. safety, urban planning, ease and availability of transportation, economic)

2) The path to that ultimate realization contains many local optima traps

3) The only way to push past those local optima is by acquiring large amounts of real world automated driving data, using systems in production

From a technical standpoint, I'm disinclined to disagree with them on the necessity of large amounts of real world system behavior data as a key ingredient.

Personally, I also think we (humans) have become way too culturally risk averse. Or, to put the calculus another way: if I have a credible belief that I can save some number of future humans, what is the morally acceptable number of current humans to place at risk and/or kill?

Imho, that's a grave, serious decision, but not one for which the morally absolute answer is "zero, always."

From a purely ethical standpoint (leaving legal aside), the crux would seem to be informed users. Not all Tesla drivers are likely capable of understanding the current SotA, and I'd hope (and expect) that Tesla's phased rollout of the beta is informed by previous user interaction with assisted driving.


> if I have a credible belief that I can save some number of future humans, what is the morally acceptable number of current humans to place at risk and/or kill?

> Imho, that's a grave, serious decision, but not one for which the morally absolute answer is "zero, always."

I think that what we've learned throughout human history is that no one should be trusted and given the power to make such decisions. It has invariably led to disasters. The very idea that someone can decide to put someone else's life in danger for potential future advances gives too many perverse incentives - chief of which, rationalization that "more people will be saved" even as the experiment crashes and burns.


Bullshit. We've done that countless times, to good effect.

When it's for the greater good, or by the winners, we conveniently omit it from history.

When it's for the greater evil, or by the losers, we publicize it as a cautionary tale of moral hubris.


War aside, what’s an example of innocent lives being sacrificed for some sort of greater good that got omitted from official history?


>> War aside, what’s an example of innocent lives being sacrificed for some sort of greater good that got omitted from official history?

Example 1: Vaccines

This is an interesting example because the winners and losers seem to be proportionately spread across almost all of society (except anti-vaxxers, who benefit from herd immunity but don't bear the minute risk of vaccination). See also: https://www.hrsa.gov/vaccine-compensation/index.html

Example 2: War on Crime / Safety

Another example would be the war on crime. Liberties and lives are sacrificed in the name of safety for others. Unfortunately, the liberties/lives sacrificed fall disproportionately on one group, and those protected fall disproportionately into a different group. If you look at the 2nd order effects (impoverishment, single-parent households, effects on education and future), lots more lives are lost, again in a subset of society.


This is of course highly controversial, but deaths in revolutions or shortly after that led to eventually much better living conditions might be considered, depending on your political views, to be such a case.


Cars. Cars kill hundreds of thousands of people each year. But we accept it, for the greater good of modern civilization.


That's not an example of putting this kind of decision power in the hands of one person. It is a complicated history that likely started with almost no fatalities, and steadily increased as speeds increased, with no clear point where a line was drawn.


Not omitted, but the deaths and/or illnesses resulting from drug trials are almost never widely publicized. Conversely, any improvement brought by a new drug will be drummed up everywhere (and for just reason).


Every time a human gets in a car, they put someone else's life at risk.


> The ultimate realization of self driving will be successful, and will have profoundly positive impacts on society

This is clearly possible in principle. After all, fully autonomous vehicles can have much more capable sensor systems than any human driver, can process the input from those sensors faster, and can then apply any necessary adjustments to the vehicle's behaviour almost instantaneously, all without ever getting tired or distracted. No human driver will ever match that potential.

The problem I have with this whole argument, and thus the reason I'm not sure the moral/ethical arguments stand up either, is that we don't know how to reliably write excellent software yet.

The entire argument for autonomous driving rests on the premise that we will at some point in the usefully near future reach a situation where the cars drive themselves much better than we humans collectively do. But there is, to the best of my knowledge, no evidence so far that we as a society can actually create the necessary software yet.

Until we can, any autonomous vehicle control system will be at risk of life-threatening bugs. In particular, it will be at risk of widespread failures of the same type that could cause much more harm than any single human driver's failure could, whether caused by defects in the software alone or by a malicious actor exploiting a security vulnerability. That is a huge risk that is getting far too little attention so far IMHO.


But humans too have life-threatening bugs, especially when combined with an automobile. Car+driver combinations have blind spots, for example. Human drivers get drunk, get distracted, and spill coffee in their laps. I think autonomous cars will kill people, and they will kill people in situations where human drivers would not. But I'm finding it increasingly easy to imagine that autonomous driving will save lives overall.


Yes, human driving is far from perfect and sometimes that has tragic consequences. But there are two issues at play here.

The first is whether we expect the autonomous vehicles to become safer than the population of human drivers they would be replacing under normal circumstances. This is a population measure, and it's about overall safety and averages, and given enough time and data it seems a realistic outcome.

The second is the risk of a single catastrophic failure, which is a risk that doesn't generally exist with today's human-controlled vehicles. It is no exaggeration to say that the kinds of autonomous vehicle technologies we are conceiving here have the potential, if they go wrong, to cause more harm than any WMD ever deployed in war. And not only the theoretical potential, but given the current state of our software development abilities, a non-trivial chance of it actually happening in practice.

Until we've dealt with the latter problem, which I don't believe we currently know how to solve, any argument based on the long-term superiority of autonomous vehicles over the capabilities of any human driver is going to be fundamentally flawed.


> Personally, I also think we (humans) have become way too culturally risk averse.

This is what it really comes down to.

Imagine if we invented the car today in our current cultural climate. I don’t think it would be possible for humanity to reap its advantages.

Back when it was invented, people actually had to drive the things around on public roads that had a bunch of pedestrians walking haphazardly across them continuously.

I was an autopilot skeptic, and it’s entirely for this reason. I didn’t think it would be possible for anyone to actually do what needed to be done to get this technology across the line. That is to say, give it to customers before it is safe.

If Tesla is actually able to keep doing this without it getting killed politically, then I think it actually will happen, and that gives me some hope for humanity more generally.


The car didn't replace walking, it replaced carriages, which were easily as heavy and could probably go much faster than the first cars. Sure, the horses would generally try to avoid obstacles, but horses pulling a heavy carriage at speed are not infinitely maneuverable. So, I doubt that cars would have ever seemed like a huge risk initially simply because it wasn't significantly different from what came immediately before it.


“The only way to push past those local optima is by acquiring large amounts of real world automated driving data, using systems in production”

That, I don’t see ever being true. What would you get from that that you couldn’t get from “large amount of real world human driving data, using systems in production”?


If I understand your point (why not backseat systems?) then I'd say it fails to capture the complexity of feedback from the automated systems.

Or to put it another way, every time the driver's inputs diverge from the automated system's inputs, the test run is invalidated. You're no longer getting results of what the automated system did.


Try any system ever that has had a v2. Engineering is not omnipotent.


> Legal: If you sell it as a drive assist and not a driver replacement, you're off the hook, with a high probability

In Germany they lost a case about their Autopilot advertising. So they're not even legally in the clear.


What was the case about? If it was in the opposite direction (that they branded a drive assist system as an autopilot) then it might actually help them now. "Look, we even have a court decision saying it's not an autopilot".


> ... using its customers as QA when they sell products that can kill people

They aren't just using their own customers, they are using all of us.

The cars won't be just crashing into inanimate objects, but school buses and ambulances. The QA is being done to all of us and our loved ones near their cars.


Tesla runs their new releases for millions of miles on the wide fleet in shadow mode before they release to limited beta. This is not an untested release in the way the article makes it seem.


That's not as useful as it might sound. If a particular action the driver didn't take would lead to results that didn't happen, then shadow mode will have a lot of trouble knowing that. In general it's a problem called "off policy evaluation" and it's really hard. Tesla has to estimate what would have happened counterfactually, had the bot been driving, instead of seeing what happens with the bot driving. A million miles of this shadow mode evaluation isn't worth nearly as many miles of actual self driving.
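To make the off-policy problem concrete, here's a toy sketch of why shadow-mode logs go stale after the first disagreement (all names and thresholds here are hypothetical, not Tesla's actual pipeline):

    # Toy shadow-mode evaluator: compare the bot's proposed action to
    # what the human actually did, frame by frame.
    DIVERGENCE_THRESHOLD = 0.2  # hypothetical normalized steering difference

    def evaluate_shadow_run(frames):
        """frames: list of (human_action, bot_action, observed_outcome)."""
        comparable = []
        for human, bot, outcome in frames:
            if abs(human - bot) > DIVERGENCE_THRESHOLD:
                # From here on, the recorded world is the one the *human*
                # created. The bot's counterfactual trajectory (and how
                # other drivers would have reacted to it) was never
                # observed, so later frames can't be scored directly.
                break
            comparable.append((bot, outcome))
        return comparable  # often only a short prefix of the full drive

Everything after that first divergence needs off-policy estimation (importance sampling, a learned simulator, etc.), which is exactly the hard part.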


> If a particular action the driver didn't take would lead to results that didn't happen

I'm having a hard time parsing your double negative, but when shadow mode is running, its choices are back-tested in real time against the driver's choices, so I'm not sure how this fails to replicate what a real driver would have done in such scenarios.


Here’s an example. I’m on the freeway in my Tesla and, instead of staying behind a car at a safe following distance as predicted, I change lanes and speed past. Was it because 1) the car was doing something I perceived as dangerous and I moved to a safer position, or 2) I’m in a freaking Tesla and I can


Tesla's shadow mode is collecting real data up until the point where the car's decisionmaking process and the human driver disagree. Everything after that is simulated fantasy. And since human drivers aren't necessarily going to do "the right thing" in all circumstances (there isn't necessarily a single 'right thing') you can't treat 'car-disagrees-with-human' as an error case.


Other cars respond to your actions, so the moment your actions deviate from the bot the reactions of the other cars are no longer valid to test against.


Backtesting a driving model, it’s very hard to tell what would have happened, had the car deviated at all from what the driver did.

e.g. You see the driver swerve for no apparent reason, did the driver simply drive more recklessly than the car would, or did they avoid an obstacle the model didn’t catch?


And yet there are several recent videos of Tesla cars doing crazy stuff on the roads with this system.


Wait till you see the videos of humans doing crazy stuff on roads in cars!


If humans did the crazy stuff on the road at the same rate as we've seen from the few select hours of FSD driving uploaded this week by Tesla owners (who are fans, and more likely trying to show it at its best), we'd all be dead.


Do people randomly brake at high speeds for no reason? Tesla cars do:

https://m.youtube.com/watch?v=xj73Q3lWvFM


All the time. Witness the effects of cellphones/children/food/spilled food/makeup/jarring news/exciting music/significant others/attractive people/gawking at random bits of interest. Also, brake-checking just 'cause, and for insurance fraud.


If what you're proposing is that "success" in this experiment is that every Tesla now drives at least as well as the absolute worst-case human driver, then you've pretty much articulated why this experiment is so dangerous.


The thing is these are not the "absolute worst-case human driver", these are the absolute average human drivers.

People get easily distracted, get excited, get tired, all of which are conducive to bad decision-making.

Many human drivers are downright dangerous, but that's not a reason to add dangerous machine drivers on the road, either.


The adaptive cruise control in my Honda is also bad about braking without good cause. Of course, they don’t market it as “self-driving”.


Automatic emergency braking also has false positives.


Who is legally responsible for the car doing something crazy? Will engineers or executives go to jail?


How does that explain the persistent issue with phantom braking?

https://m.youtube.com/watch?v=xj73Q3lWvFM

It's been a problem for years now.


And it's scary as hell. Had it happen to me twice in a 600-mile highway roundtrip this summer. Was very happy nobody was following me closely, because the combination of regen brakes and disc brakes really stops the car quickly.


Still orders of magnitude safer than human drivers.


Drunk drivers and human mistakes are two things I accept in traffic (or rather - laws and human behavior both show we don’t accept them). They’re factored into my risk.

And FSD isn’t orders of magnitude safer. They are barely as good as human drivers and only in some conditions.

For the worst situations they aren’t comparable at all (since they won’t drive at all)

This must be repeated over and over for some reason: people will never accept self driving cars that are just as good or bad as human drivers. Nor should we. It’s a much too low bar.

Self driving cars will need to be orders of magnitude safer than human drivers to be even remotely acceptable.


If it's consistently as good as an average human that's still better than an average human. That's because robots don't get tired, hungover, angry, bored, drunk, or get an eyelash in their eye at a really bad time.


Yes. But again people won’t accept that. People will rather be killed more often by drivers that feel guilt, go to prison, have strokes or poor eyesight than less often by a machine that does not have all of those flaws but also none of the feelings and responsibilities.


I’d be thrilled if self-driving cars could meet that bar, because then I can feel at least as comfortable napping in the car as I do driving it. And unlike humans, I have the expectation that my car learns when other cars crash, so I have a lot of reason to expect that once the system rolls out it will continue to improve.

I hate almost everything about Tesla and their business model but the idea of full automatic driving, once vetted to even a bare minimum level of approximate human parity, is enough to make me consider one anyway.


Let's do a simple test:

You are in between two walls and there are two straight lanes. Lane 1 has a Tesla on FSD coming in at full speed. Lane 2 has a human driver coming in at full speed. You can choose to be in one lane. The FSD and the human can both see you only 50 meters out, and if they brake as soon as they see you, you will survive.

Which lane will you choose?


Well, a Model S can just barely stop in 50 meters at 60mph, while some other luxury and sporty cars in the segment can stop in 35 meters or less. And that’s only 60mph, hardly “top speed” or even a normal highway speed. So regardless of driver assistance features, I’d choose a car that at least has a chance to physically stop, assuming instant reflexes. 50 meters is probably much too small a distance for this thought experiment.


I don't know my cars that well, but an alert driver will take at least 300ms to begin reacting. That's over 8m at 60mph (not sure why we're combining unit systems here). So I'll stand in front of the Model S in this particular scenario and hope to get away with 'mere' broken legs/pelvis. Give the human more time and I'd probably take the human, not because that's necessarily the more reliable option, but because its distribution of outcomes is better known to me.
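For anyone who wants to sanity-check those numbers, a back-of-envelope sketch (the 7 m/s^2 deceleration is my assumption, roughly a hard stop on dry pavement):

    # Stopping distance = reaction distance + braking distance.
    V = 60 * 0.44704        # 60 mph in m/s (~26.8 m/s)
    REACTION_S = 0.3        # alert human reaction time, seconds
    DECEL = 7.0             # assumed braking deceleration, m/s^2

    reaction_dist = V * REACTION_S       # ~8.0 m before braking starts
    braking_dist = V ** 2 / (2 * DECEL)  # v^2 / 2a -> ~51.4 m
    print(reaction_dist, braking_dist, reaction_dist + braking_dist)

An automated system with near-zero reaction time only saves that first ~8 m; the physics of braking dominates either way.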


Most, if not essentially all, cars in that category have emergency automatic braking, so the Model S still isn't the best choice.


It absolutely isn't.


Hate to say it but I think you’re in the minority with that opinion. Reaction from most people has been “woah”.

Also, there is no moving fast and breaking things here in real terms. Drivers have to pay attention while driving, and Tesla is strictly monitoring this during the beta. You also have to specifically opt in to the beta software. 99% of Tesla owners do not have this functionality.


I think we are mainly silenced by the hype folks who can't not tell the rest of the internet about their excitement. I'd like to think OP's opinion is the majority; hard to tell on a startup forum where most folks are hodling TSLA.


I would be astonished if a majority of people on HN have Tesla stock, for what it's worth.


Would you only be astonished if they directly hold it, or also if it's via a fund or three?

(I think you're probably right they don't, HN international and diverse enough, but I wouldn't be astonished.)


I meant "directly hold" (since I intended to reply to the parent's comment about "hodling").


I would guess that most people on here are cautiously optimistic. Given the progress that has been made over the past decade on all things AI my priors are telling me this will eventually work.

Note that it doesn’t mean I love it; I just expect that they will figure it out.


This is exactly how tech demos for all sorts of AI have gone for decades: the initial impression is "wow, I can't believe it's doing so well!" But the positive impression quickly dissipates once there's real-world use of the product.

Tech demos have a way of setting good impressions that are not realistic. I have a feeling that's exactly what is going on right now, but a few more close calls with crashes and the shine wears off.


I'm not sure what you're looking at here. There's no "tech demo" going on here. It's a new beta replacing an older, quite well-functioning SAE level 2 system. This is not frozen in stone. In fact, Tesla is updating it every ~5 days right now. There are many people who post daily videos of how Tesla handles a specific road near them, and you can see it improve from update to update.


Ironically, deep learning might finally be the solution to this, in a convoluted way.

Previous AI systems generally fell over when they hit the final 2% of real world scenarios, unable to cope with the explosion of complexity.

In contrast, modern AI finally provides a toolkit for "last mile AI" (proverbially speaking), by allowing incorporation of those scenarios without breaking the entire previous model.


All deep learning implementations to date fail on the last 2% of real world cases. Why would deep learning be the solution for its own shortcomings?


Failing on the last 2% (or whatever small number we want to set) isn't a shortcoming of deep learning: it's a shortcoming of all prior knowledge systems. Humans included.

And the barrier to surpassing that problem has been hardware (now close to solved) and calendar- & person-time.

The problem with 1970s AI (ignoring hardware) isn't that we couldn't build useful AI, but that we couldn't afford to build useful AI in most domains.

Calendar time costs money. People time costs money.

Deep learning and derivatives finally provide a solution where we can substitute machine time and data volume for calendar and people time. As a result, we can afford to do things in domains we couldn't before.


> Failing on the last 2% (or whatever small number we want to set) isn't a shortcoming of deep learning

Citation needed. Deep learning is used to power some of the biggest money making machines in the world (like google or facebook ads) and they're still bad. It's also used to power products like voice recognition and it'll still add dog to my shopping list instead of soap.


I would counter that those systems are as good as they need to be. Google and Facebook have realized advertisers are dumb. So why waste a ton of people time optimizing past what moves the revenue needle?

Where I'd expect to see interesting developments (hopefully published openly) is in sentiment analysis / disinformation identification, as companies begin to see it as an existential threat to the unregulated nature of their businesses and fund accordingly.


Forget this 2% business. How does it do in snowy conditions at night? Because in some places, that's around 12.5%.


You can't go "woah" for 10 years. Self-driving cars are not delivering. One more death and people will be calling to burn them.

One should do sentiment analysis on these discussion threads over the years to see how the attitudes have changed.

A much less dangerous version of this was the VR hype train


Tesla's unregulated and (some might say) reckless roll-out of self-driving technology could indeed be the biggest threat to the technology's realization.

As you point out, all it will take is a few fatalities for public opinion to harden against self-driving, and laws will start banning or severely restricting it. Rather than being wowed by Tesla's gung-ho approach, boosters of self-driving technology should be angered by it.


Totally disagree - the public is complacent about death insofar as it is not reported on by the media. If the media reported on daily accidents, and the carnage they bring, nobody would drive. In general, unless you assume this is going to be so bad that it can't be ignored, your prior should mostly be informed by how much you think the media will try to capitalize on a moral panic about self driving cars or not.


Wait. You disagree that the media will try to capitalize on a moral panic?


I phrased it poorly. I think the odds of the media turning self driving cars into a moral panic is quite low. I don’t think such a panic will materialize organically.


I don’t buy the FUD. By all measures driver assist programs make the road safer. These cars are not out on the streets massacring people like human drivers are. So meh.


What measure are you talking about here? Has Tesla published any stats about FSD?

The other stats in this thread make it sound like Tesla is not even in the same league as other attempts at self-driving.


Yes, they have, lots of public data. The latest shows that driving with Autopilot on is just slightly better than a human re: safety (but Teslas overall are about 10x safer than the average car already).


Tesla has not provided any numbers supporting that, and what numbers they have provided are so obviously flawed as to be either designed to mislead or produced by someone utterly ignorant of statistics. I’m assuming the former, but who knows.


But we are talking about FSD, not autopilot, a very different beast.


You’re asking for evidence that can’t possibly exist yet. They’ve only had the FSD beta out for a week, and only for a tiny subset of the fleet. It takes time to accumulate accurate stats and publish them. According to the NHTSA, there is one accident for every 479,000 miles driven. For Teslas with automatic safety features disabled (no AEB), the accident rate is 1/3rd the NHTSA average.[1] They’ll need tens or hundreds of millions of miles driven to get accurate safety figures.

1. https://www.tesla.com/VehicleSafetyReport
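As a rough check on why so many miles are needed: if you model accidents as a Poisson process, the relative error of an estimated accident rate shrinks like 1/sqrt(observed accidents). A sketch (the 10% precision target is my assumption):

    # How many miles to pin down an accident rate to ~10%?
    BASELINE_RATE = 1 / 479_000   # NHTSA: one accident per 479k miles
    TARGET_REL_ERR = 0.10         # assumed precision target

    # Relative std. error of a Poisson rate ~ 1/sqrt(k) for k events,
    # so we need k ~ (1/err)^2 observed accidents.
    k = (1 / TARGET_REL_ERR) ** 2        # ~100 accidents
    miles_needed = k / BASELINE_RATE     # ~48 million miles
    print(f"{miles_needed:,.0f} miles")  # 47,900,000

And showing a new system is several times safer than baseline takes even more miles, since a safer system produces its ~100 accidents more slowly.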


Many other FSD companies are doing testing with professional safety drivers, sometimes on closed courses or in specific cities with well-understood road systems. Presumably Teslas has done the same thing, and they have data from that? If they don't have such data, that means they're trying to develop this data using amateur Tesla owners on public streets all over the country, which seems quite a bit less safe.


Other manufacturers all rely on high resolution maps for anything approaching self driving. This is why Waymo is geolocked to a single city.


This is a beta release; you have to be quite gullible to think anyone would deploy untested software on the roads.


Thousands of people die each year because of stupid mistakes by human drivers. From a utilitarian perspective, pushing for self driving as fast as possible - in a manner that some might even judge to be reckless - is a net positive.


I'm surprised people have this opinion, on HN out of all places. In my view this technology shouldn't be allowed on public roads at all. Not even a little bit. Either it's 100% safe under all circumstances or it shouldn't be allowed. And Tesla definitely shouldn't be allowed to call it autopilot or "full self driving" when it very clearly isn't.

And yes, human drivers might be statistically worse - but just because this might be statistically better(and I'm not even sure that's actually the case here) it doesn't mean it should be allowed.


Isn't this position tautologically the case of letting the perfect get in the way of the good?

I'm not saying Tesla is making a good move here, or that they aren't being careful enough. I don't know yet & am not familiar with how the engineers feel about the development of the product & how bad the corners they cut during development are. That being said, the standard you've laid out would not make for a good world to live in. I don't know if the tech equipped in the car is sufficient.

A huge statistical difference in quality should be sufficient to go with automated cars even if they don't make the same mistakes as humans or sometimes make mistakes humans don't. "Statistical difference" isn't about a coarse level analysis either but situational. If they get into fewer road accidents but kill pedestrians at a larger rate, that's a problem. The problem is the news media only reports the top-line number because analysis is hard.


Industrial automation that runs the world at large in thousands of ways isn't 100% safe under normal circumstances, much less "all" and yet we have almost entirely automated elevators, high voltage relays, medical equipment, nuclear reactors, autopiloted airplanes, and gas powered furnaces, to name just a few.

Most of this automation operates under certain assumptions about its environment and in some cases shuts itself down in a controlled fashion to fail safe in ways that may not be available to cars on roads, but these are not new ethical dilemmas.

But to pick just one example, automated saw stops and controllers aren't 100% effective at preventing injuries, but they're still a huge safety improvement over the human population at large just winging it.


Should everyone on HN have the same opinion? I didn’t get that memo.

Nothing is 100% safe. By your logic we’d have no medicine. Have you ever read the side effects of some medicines?


> By your logic we’d have no medicine. Have you ever read the side effects of some medicines?

You picked the worst example possible.

Medicine is one of the most tightly regulated fields in the world, with huge upfront costs and approval processes spanning years or even decades. E.g. the research and approval process for a new antibiotic costs multiple billions of dollars before it can be sold.

What you see now wrt the Covid-19 vaccine is an international effort that is fast-tracking everything. Regulators are willing to overlook a lot for it to come to market as fast as possible and it's still taking a year or more before it's approved for use on everybody.


I would actually say it's a very good example. That no matter how tightly and strenuously we regulate the safety of a product, 100% safety is an unattainable goal and that given sufficient benefit there is an acceptable level of risk of grave outcomes, including death. This is from a publication on the risk/benefit evaluation of new medical devices from the FDA:

Uncertainty – there is never 100% certainty when determining reasonable assurance of safety and effectiveness of a device. However, the degree of certainty of the benefits and risks of a device is a factor we consider when making benefit-risk determinations.

pg 11 - https://www.fda.gov/media/99769/download


Regulated doesn't mean safe. It means a little, but not a lot, compared to the internal practices of the companies under regulation.

And regulation in no way - in no way - addresses the fact that medicine isn't close to 100% safe before adoption, and neither should self-driving cars be. As long as they are safer than humans, that's enough. Any other conclusion will cost lives.


If technology is required to be 100% safe under all circumstances, then why aren't humans held to the same standard? For humans, we have a driving test that measures readiness for driving. It includes a margin of error for new drivers and there's also an expectation new drivers suck more than experienced ones (e.g., indirectly via higher insurance rates).

I'm having a hard time understanding your position as to why the technology should not be allowed, even if it were to perform statistically better than humans. Is this more of a philosophical perspective?


What else not 100% safe shouldn't be allowed? Passenger planes? Cars in general? Cancer medicine?

Why wouldn't a community of engineers welcome a statistical improvement over meat drivers?


If that's the standard, full self-driving won't become a thing for another 50 years at minimum, if not longer: not only is it a crazy high target, the lack of data and improvements brought by real-world usage would hamstring development — there's only so far test tracks and single-city pilots can go.

It's a lot like the "old space" approach to spacecraft engineering, spending decades and hundreds of billions of dollars to develop a rocket that would've been cutting edge half a century ago, all in pursuit of making the initial product perfect.


>Either it's 100% safe under all circumstances or it shouldn't be allowed.

This would be a much more compelling argument if the technology in question weren't replacing an existing system that is frequently lethal. I do agree that it needs more testing and possibly regulation.


I'm surprised that people have this opinion, especially on HN out of all places. Seems like most people here are aware of how dangerous human drivers are and the vast possibilities of technology. Seems like you're just needlessly scared since you're arguing for killing people?


> Either it's 100% safe under all circumstances or it shouldn't be allowed. And Tesla definitely shouldn't be allowed to call it autopilot or "full self driving" when it very clearly isn't.

Then no human should be driving on the roads either or at least every vehicle should have a breathalyzer in it to check your blood alcohol level before the engine can turn on. You're being unreasonable.


I’m trying to think of a comparison and I can’t come up with one. Something where we’ve relinquished control to something else that we’ve designed. Maybe trusting robots to build the cars themselves or something akin to that.

Either way, it’s definitely new territory for us. I am sure there will be debates on this forever but I foresee reasonable arguments by both sides.


Elevators. Automatic doors. Fly-by-wire. Airplane autopilot.

Heck, modern cars are full of things that control aspects of driving for you: Cruise control. Automatic emergency braking. Brake assist. Antilock brakes. Traction control. Many of these features are so beneficial to safety that they are mandatory for new cars.


If the goal is to prevent traffic fatalities then funding self-driving research must be one of the least effective methods. Tens of billions of dollars have been spent and no lives have been saved.

If we could achieve cycling usage at the level of the Netherlands, mass transit usage on the level of Japan, and road safety on the level of Norway then we would cut road deaths by 90%. These are ambitious goals but can actually happen and are not an open-ended research project.


How do you know no lives have been saved? I've seen footage of Teslas autobraking to avoid collisions ahead that happen a couple of seconds later. Does that not count?


Well, the government isn't exactly funding self-driving very much. Indeed, if the government actually had such goals, then funding cycling paths and mass transit routes would certainly be better, but the government doesn't really have such goals. So companies develop technology that fits within the government-funded systems that exist (namely, the government heavily funds building highways and roads), so the systems must be built for that application. Feel free to encourage politicians to change their ways, but until that happens, automated vehicle research will continue.


Cycling usage and mass transit are strongly related to the culture of society. I believe funding self-driving is more cost-effective than getting Americans to get rid of their cars and use mass transit (and I say this as a non-American that doesn't have a car, and uses mass transit, bike and walking extensively).


>These are ambitious goals but can actually happen and are not an open-ended research project.

can actually happen *if you live in an urban area


Cool, would you volunteer to be one of the extra casualties in order to make self driving work? If not, then why volunteer others?


Has anyone actually been killed by any Tesla vehicle that was driving automatically (that wasn't in the car itself)? Even on the earlier SAE level 2 "Autopilot" system? I don't remember hearing of any personally.


I don't recall any incidents where a third party was killed due to autopilot, however, I don't believe it's reasonable to extrapolate the general case performance based on prior special case performance, especially when the prior special case performance did in fact result in deaths that would clearly have been avoided by a human driver.


Indeed, but we won't have general case performance until it is acquired. And the statistics show that the prior special case performance did reduce accidents compared to without using the system even if some accidents happened that would not have otherwise happened.


This argument is silly, since it assumes that self driving is somehow both the fastest and most efficient way to save people's lives. However, we have working technologies we could implement today that could save lives (breathalyzers for cars, measures of visibility plus warnings, etc). Self driving efforts related to saving lives miss the opportunity costs of doing so.


There's no reason both paths couldn't be pursued.


That's only true once the technology seems to be better than human driving. That doesn't seem to have happened so far, and is definitely not true of Tesla's efforts.


By the same logic, in March we should just have started distributing whatever attempts at Covid vaccines companies thought might be promising. Sure, it might kill a lot of people, but think how many others could be saved! It's reckless to require demonstrated safety and efficacy of drugs!

What's missing, using this analogy, is that it is only a net loss to slow down the vaccine that actually works, which we don't know a priori. But if you fast track something that kills a bunch of people, then it might have been a waste. You might not learn enough for it to be worth the cost.

What if we push out Tesla's cars and all we learn is that vision-based neural networks with the particular models Tesla's using today aren't good enough? How many lives would you sacrifice to learn that, when a big investment in a more careful testing strategy might say the same thing?

To say nothing of the criticisms of the utilitarian perspective. If we believed in it as a society, we'd round up the poor/unskilled/uneducated for mandatory medical experimentation, but thankfully that idea is horrifying.


To be a safe driver you have to be able to predict what the people around you are going to do. Simply reacting is not enough. I don't think that it's a coincidence that the guy selling self-driving cars is the same guy who goes around exaggerating the current abilities of AI.

I too am shocked that anyone is ok with this, given the auto industry's history of fighting against safety and transparency.


We humans are terrible at predicting the behavior of other humans, we have unconscious biases and our own behavioral patterns that regularly cause us to misinterpret data and make the wrong decision. Quite honestly an objective AI that also has faster reaction times is likely to be a better driver than any human, it’s just a matter of training data.

Note: I’m not going to say Tesla isn’t being at least moderately reckless with its rollout of self-driving features, just that I firmly believe even a less than ideal AI with good enough training and sensor data can trounce the safety record of human drivers.


> AI with good enough training

This is the crux of the issue. If we had AI that could outperform human drivers and make our roads safer then I'd agree that we should use it. But I see no indication that this is the case, or that it's even technologically possible at this point.

Tesla can make whatever claims they want, but until their self driving systems have been independently and rigorously tested they shouldn't be anywhere near public roads. Have such tests happened yet? The article doesn't mention any, so I assume that they haven't.


This is exactly why regulations such as professional engineering licenses were put into place. You don’t want someone with that mentality building a bridge.


No, this is exactly why vehicle regulations were put into place.

Regulate and validate the vehicles and their software updates, not the engineers.

Edit: Or get used to our field moving as slowly as the bridge building field.


Huh? Bridges stopped failing when engineers were held responsible. That’s a pretty fast change.

Holding people responsible doesn’t decrease or slow down innovation. It alters the incentives to align the innovation with public benefit. If properly caring for society is too slow for you, you’re part of the problem.


> Bridges stopped failing when engineers were held responsible.

Oh did they?

https://en.wikipedia.org/wiki/List_of_bridge_failures#2000%E...

Edit:

> Holding people responsible doesn’t decrease or slow down innovation.

Citation required. This is strongly non-intuitive.


> Oh did they?

First one only sagged, no catastrophic failure, no deaths, no injuries.

A few more on that list were over a hundred years old. Not sure when we started holding engineers responsible.

Then you have the bridges rammed by ships, terrorism, construction accidents, a wooden suspension bridge in the middle of nowhere that had an upper limit of three and was used by seventy, etc. .

Seems as if the bridges themselves have held up fairly well.


Perhaps a less conservative bridge development culture would have resulted in more innovative bridges that could have avoided those incidents?


History shows that less conservative bridge culture will reduce costs at the detriment of safety.


Engineering education is designed to ensure you know better than risking lives for fun or profit.

It doesn't prevent you from trying to introduce new technology as long as risk is acceptable. Otherwise we'd still ride horse carriages (with special horses bred to be slow).


> I really don't understand how a company can get away with using its customers as QA when they sell products that can kill people.

Stuff like that is only possible in the US. It would be unthinkable in the EU. Maybe in the UK, but you know, they are no longer ...


In the long term this will cause EU auto makers to get their self driving tech from US companies (either through licensing or partnerships). VW has already partnered with Ford. Most of the other european auto makers are getting their tech from Aptiv. When it comes to self driving, none of the EU car manufacturers can compete with Waymo, GM (Cruise), or Tesla. Even when they manage to build really cool stuff, it gets shut down by regulators.[1]

1. https://europe.autonews.com/automakers/audi-bmw-others-frust...


Maybe EU auto makers just wait, meaning they work on something until they can put something on the market that works. After watching some hours of YouTube footage I can say one thing for sure: Tesla's FSD does not.


By that time the US companies will have technology with a track record of real world use. The EU companies will have an easier time buying that tech than building their own.

This is analogous to banning airplanes until they can be proven safe to fly in. The countries that don’t ban them will lose more lives early on, but they’ll have a technological advantage. The place where this analogy breaks down is that hundreds of people are killed by human driven cars every day. Every year cars kill 120,000 people in the US & Europe. If we can develop and deploy this technology a year sooner, we can save tens of thousands of lives.


I think only the Tesla approach wouldn't fly in Europe; with trained safety drivers it should hold up. Last year there was some announcement by Renault and Waymo for something in Paris, though it's hard to tell what it actually meant: https://techcrunch.com/2019/10/11/waymo-and-renault-to-explo...


Tesla Autopilot is already used all over the EU.


The underlying motivations explain all: this isn't about improving automobile safety anymore (if it ever was): it's about recognizing deferred revenue from this feature (which has been sold for years) and trying to keep up, from a marketing perspective, with Waymo and Cruise. That's why they're not using trained safety drivers, but fanbois.

Waymo is at least as safe as the average human driver, while Tesla has unleashed literally worse-than-drunk drivers upon us all. Heaven help us.


The ethics of this are heavily influenced by how much you believe a) this rollout will result in a net decrease in automobile accidents and b) how much we should assume drivers will both be informed and intervene if the system missteps. If you believe both are unlikely, or you believe engineers should have a "do no harm" ethics similar to medicine, this would be unacceptable. But if you believe both are highly likely, and/or think engineering ethics should focus on harm minimization, this makes sense to do, and under certain assumptions it becomes unethical not to roll it out.

As a concrete example: assume that 99% of the accidents the system would otherwise cause will be prevented by driver interventions, and that overall the system will reduce the likelihood of an accident by 20%. The question you have to ask yourself is: if we could have saved 1 in 5 of the people who will be harmed over the next 6 months, should we hold off and wait until only 1 in 200 will be harmed, in part due to their own negligence? As you can see, the assumed probabilities matter a lot, so I'd be curious how one can come up with good projections. The only case where these probabilities don't matter is if you believe that engineers ought never to create harm where none would have existed otherwise, but when it comes to self driving cars, that's an impossibility, since it assumes perfect autonomy.
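A toy expected-harm calculation with those assumed numbers (every parameter here is the hypothetical one above, not real data):

    # Expected harm over 6 months under the stated assumptions.
    baseline_harms = 1000      # hypothetical harms with human-only driving
    reduction = 0.20           # assumed: system cuts accident likelihood 20%
    intervention_rate = 0.99   # assumed: drivers catch 99% of system errors
                               # (already baked into the 20% net reduction)

    harms_with_rollout = baseline_harms * (1 - reduction)  # 800
    harms_while_waiting = baseline_harms                    # 1000

    # Waiting costs the difference:
    print(harms_while_waiting - harms_with_rollout)         # 200, i.e. 1 in 5

On those assumptions, every 6 months of delay forgoes that 1-in-5 reduction; assume the beta adds accidents instead, and the sign of the conclusion flips, which is the whole point about projections.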

In general, if you buy into the theory that "human + AI" is always going to be smarter than just AI, then arguably even in a world with great autonomous driving, the best system will be one that has both an AI and a human. And that is what we have here, so it's more a question of if AI quality is sufficient or if this is just a bad idea in general given the natural tendencies of humans to be poor co-pilots.


> I really don't understand how a company can get away with using its customers as QA when they sell products that can kill people.

They already claim that drivers must pay attention to the road in order to prevent accidents, and this has been the status quo in the automotive world for a century or so. Selling products that can kill if misused is not remarkable.

This could turn ugly if it's shown that enabling self-driving results in many more accidents, but there is no data to support this.


> get away with [...] products that can kill people

if your moral premise is saving lives, your conclusion does not follow. vehicles are a leading cause of death and injury, so this omelette is worth breaking a few eggs, no?

AV has the potential to save a lot of lives. of course, there will be effects in other markets like insurance, new car sales, road repair, junk/salvage yards, tow trucks, etc. what the net economic effect will be is up for debate.


False dichotomy fallacy. AVs are great. There's a way to test them that doesn't involve killing people: you put a safety driver in the car, make sure that they're attentive at all times, and record what the car's software does vs. what the driver has to do. Then you gradually ramp up to more sophisticated scenarios as the software gets better.

Waymo and Cruise do it this way, and their software gets better without killing anyone. Uber and Tesla figured they'd play it fast and loose and killed people.


You're assuming eggs need to be broken in the first place, which is not at all proven. There are more options than what Tesla is doing - just look at Waymo as a counterexample of actually prioritizing safety over public release speed.


If you want the amount of data necessary, you do need miles driven. The millions of extra miles may very well be what it takes to put Tesla over the line. Sure, you could do that in a controlled environment, but it would take a lot longer, and people would die in the meantime.


This argument makes sense, if you can assume that Tesla will actually deliver functional FSD system as a direct result of their approach. There's no data to support it.


You're claiming "even though Tesla's self driving might be better/safer than human driving today, and will in aggregate save lives over the present state, there's this other approach which could save even more lives!"

It's not certain that Waymo's approach will be better, especially at scale (and in cities other than Phoenix). We don't have that kind of data yet. And even if it is better, Tesla's approach is still a Pareto improvement over now. The perfect is the enemy of the good.


Shortcuts don't necessarily get you there sooner either. Tesla's approach might be risk without the reward. It's not a deaths-vs-FSD-date dial you can just turn (not even a dial you have to turn).


Well, if it's worth breaking some eggs, then it should at least be Tesla paying for them, no? They seem to want to have it all: advertise their product as Autopilot (not drive assist), use their customers and everyone else as guinea pigs, and take the data to improve their systems; but if things happen, it's the driver's fault for not being attentive, and they have to take financial (and moral) responsibility.

If they really thought some eggs need to be broken to get _their_ systems up to speed (because they are using the data, nobody else), they could just come forward and say it and, more importantly, pay for the consequences.


AV has that potential in principle, sure. It hasn't realized it yet, particularly in Tesla's implementation.


> they sell products that can kill people

All car manufacturers sell products that can kill people.


> All car manufacturers sell products that can kill people.

Yes, that's exactly why most of them have much better QA than Tesla.


The key difference is that only Tesla ships one that will autonomously kill other people.


Except tons of other manufacturers have SAE level 2 systems that autonomously follow road curves and will run straight into unexpected objects. GM Super Cruise will kill people all the same.


Super Cruise is geofenced to known highways. What "road curves" are you talking about?


Presumably the curves in roads.


Well, I own a Tesla Model X. When I'm with my family I don't engage self-driving mode; nobody forces us to do it. And when I do, it's in normal situations like Highway 101.


>nobody forces us to do it.

And where do the other people on the road factor into the decisions of you and your 5000+ lb vehicle?


Just watch out for left-hand exits like the HOV lanes at the Hwy 85 split going southbound.

Let's start "Falsehoods self-driving programmers believe about roads:"

All highway exits are to the right from the right-hand lane. If the left lane line curves away, the road is curving and you should follow it.


The only stats I have seen on this show that the autopilot reduces the rate of collisions. So it's not really a product that kills people, but the opposite. The stats came from Tesla, though, so I can't rule out the possibility that they are incorrect.


Tesla stats are extremely misleading and deliberately slanted toward favoring Tesla. Unfortunately, it's not really possible to find real stats since they're hidden by individual OEMs and insurance companies. But you would need to compare comparably priced modern cars with standard ADAS features to Teslas with Autopilot to get a datapoint worth talking about for the general case. You would also need to look at where Autopilot is engaged (mostly highways where accidents are rare in general) vs. not engaged (mostly city streets where accidents are common) to draw any real conclusions about safety.


> Tesla stats are extremely misleading and deliberately slanted toward favoring Tesla. Unfortunately, it's not really possible to find real stats since they're hidden by individual OEMs and insurance companies.

You realize that those two statements are in conflict with each other, right? If they're misleading and favoring Tesla, then you need contrary stats to state that. However, you state that they're not available, so your previous statement can only be a lie.


Sorry, that’s not a logical conclusion. Just because real stats are hard to come by does not mean that Tesla’s stats are true.


But until you have evidence that they aren't true, what reason do you have to believe they aren't true? All you're really arguing here is that you personally believe Tesla is a "bad" company so you should disbelieve anything they say. You do realize that's not at all convincing and just shows you're personally biased, right?


But it does mean that you don't know that your first statement is true.


In Tesla's most recent report, it turns out that compared to Teslas using plain-old-human drivers, Teslas using Autopilot driver assist functions were 2x more likely to get into an accident, and Teslas using FSD were 3x more likely to get into an accident.

Any way you cut it, Autopilot and FSD are statistically worse than human drivers.

This is the only data Tesla has released comparing similar cohorts of vehicles (i.e., modern luxury cars). The only stats in which Tesla beats the average driver is where they include all cars ever put on the road, including semi trucks...but under those models every other modern car is also better than average.

[EDIT: And this is the Forbes article on Tesla's report. There's also a Cleantechnica post on the same report. https://www.forbes.com/sites/bradtempleton/2020/10/28/new-te...]


I’m pretty sure you misread the article. City driving is 3x more likely to result in an accident compared to highways, not self-driving. The numbers are one accident every 1.7M miles for Autopilot in city conditions vs an accident every 500k miles global average (including highways, for which Tesla’s ratio is 1 in 5 million miles).
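Putting the figures quoted in this thread into one unit makes the disagreement easier to see (these are the numbers as cited above, not independently verified):

    # Accidents per million miles, from the figures quoted in this thread.
    rates = {
        "Autopilot, city (per the report)": 1 / 1.7,  # 1 per 1.7M miles
        "NHTSA global average":             1 / 0.5,  # 1 per 500k miles
        "Tesla, highway":                   1 / 5.0,  # 1 per 5M miles
    }
    for name, per_million in rates.items():
        print(f"{name}: {per_million:.2f} accidents per million miles")

On that reading, the Autopilot city figure beats the global average; whether those are comparable cohorts is the real argument.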


No, I read the report. You've inverted the numbers.

If Tesla FSD was 3x safer than human driving Musk would be bragging about it. The fact that he's not indicates that my interpretation is the correct one.


Again, I don’t know what you’re looking at. Who’s claiming it’s 3x safer now? Where are you getting these “interpretations”? Just read the article carefully.


> [...] made me think twice about buying a 25+ year product that keeps my home powered and dry.

does it even become profitable before the 25 year mark in the current market?


This comment strikes me as completely disingenuous.

(1) None of the companies involved have ever had the motto "move fast & break things." Zuckerberg coined that philosophy for Facebook. It's foolish to apply it to an entire industry.

(2) They are no more "using its customers as QA" than pharmaceutical companies do when gradually ramping up trials that also kill people. There has been heavy testing before this point, and it seems entirely reasonable to scale up testing before a general release.


and that's why it's not allowed in the EU


You have to continue to leave your hands on the wheel and pay attention.


Just like having a trained bear chauffeur you around. Except people see that as a bad idea.


It’s not a public beta. The select few drivers know what they are getting into.


It’s strange that people would rather accept a small chance of crashing themselves than a smaller chance of the car crashing for them.

If Tesla is going to be a safer driver than me, it’s already a good investment. If a software bug means I die, well, human error might too.


I'm less concerned about the Tesla driver dying (since it means they weren't attentively monitoring their car, right?) and more concerned about the driver they t-bone or hit head-on due to a software bug.


They're selling safety statistics. I find that unethical. You don't say to a widow "but think about how many times I could have killed someone and didn't".

Also note Musk's doublespeak on the topic: he was the first to talk (even boast) about how the structural design made the car the safest of all, but now safety is not as essential and it's OK to have some accidents. And it will improve their machine learning data set, right! /s


That would be an argument worth having if a Tesla with FSD were actually safer than a comparable modern luxury car driven by someone in the same demographics, or even just compared to a Tesla without FSD. We aren't even at that argument yet.


That's only the case if this new Autopilot iteration is actually a safer driver. There are many who doubt that.


It clearly isn’t. Everybody should ask themselves, “How often do I have a near-miss on the road that another human in the car with me prevented by warning me?”

I’ve had that happen a few times in the decades I’ve been driving. It appears to be a regular occurrence in this software.


Cars driven by humans kill people at a higher rate than this shitty AI, why doesn't anyone care then?

I don't understand this point of view. Cars WILL kill people with AI or humans driving. AI is not great right now, but it's already way better than humans in several situations.

Then people get spooked by bugs that the AI brings because they're not "normal" accidents, and think it's a bad idea to continue to train the AIs.

But why? Why is a "beta" AI killing people worse than the normal everyday drunk/distracted/bored/texting human driver?


> AI is not great right now, but it's already way better than humans in several situations.

Every stat that I've seen that comes out with the machines better than the people isn't a like-to-like comparison. Every like-to-like comparison I've seen has the machines still worse than people, so we don't even need to have that argument. Beware the propaganda.


Every stat? Wow. I'd be very interested in reading some of this material if you could link to some.

If you take a shitty driver and compare it to an AI, I doubt the AI will perform worse when it comes to preventing accidents.

Sure, a skilled human driver will likely perform better than this early stage AI, but if even 1% of the bad drivers out there switch to an AI, that's a net positive in my view. Not to mention all the other risky situations involving alcohol, sleep deprivation, etc. where an automated response can save lives.

You didn't address my main point though. Why is a driverless accident worse than a drunk driver?


I don't believe you can claim that humans kill more people than shitty AI since we haven't seen a world in which shitty AI is driving around unassisted. Given that Uber/Waymo/Cruise all have safety drivers working, it's hard to say how many fatalities would have been caused if those safety drivers weren't there since that would require predicting the future at the moment the safety driver disengages the AI.


By “cars kill people” do you mean car defects or human drivers? If the former, do you have a source for this claim?


I meant cars driven by humans. Edited right after I posted.


From the article: "These YouTube videos underscored how important it is for drivers to actively supervise Tesla's new software. Over the course of three hours, the drivers took control more than a dozen times, including at least two cases when the car seemed to be on the verge of crashing into another vehicle."

In a way, it's good that the Tesla system sucks so badly. If they had a disconnect rate of one per month, drivers would trust the thing.

The other guys, disengagements per 1,000 self-driven miles in California:

    Waymo,  0.076, or one per 13,000 miles.

    Cruise, 0.082, or one per 12,000 miles.
Not clear how many of those would have resulted in a crash, as opposed to just stopping. Probably not many, the California autonomous DMV reports indicate. US humans have a crash rate of one per 508,000 miles driven.
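
(For anyone checking the arithmetic, the conversion from the DMV's reporting unit is just division; a tiny sketch:

    # disengagements per 1,000 miles -> miles per disengagement
    for name, per_1000 in [("Waymo", 0.076), ("Cruise", 0.082)]:
        print(f"{name}: one per {1000 / per_1000:,.0f} miles")
    # Waymo: one per 13,158 miles; Cruise: one per 12,195 miles

rounded above to 13,000 and 12,000.)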

Anyway, Tesla does not have "full self driving". They have slightly better Level 2. Not even Level 3. You'd think that by now they'd have hands-off freeway driving totally automated to a better than human level of safety. But no.


> Not clear how many of those would have resulted in a crash

Waymo just published an extremely detailed paper explaining exactly how many of their disengagements would have resulted in a crash, among other things. It's very interesting reading and shows how far Tesla has to go.

Summary: https://www.theverge.com/2020/10/30/21538999/waymo-self-driv...

The paper: https://storage.googleapis.com/sdc-prod/v1/safety-report/Way...


Makes me wonder:

I have two driving modes: not autopilot, but not as aware as I could be; and then tracking everything I can, because something odd is going on and I'm trying to avoid a bad scenario. Perhaps I'm a bad driver, but I suspect this is normal: you're in a low-attention mode most of the time, but if the car a lane over starts to change lanes too close, you go into high-alert mode.

I wonder if there are studies on how often humans need to do this sort of "disengagement" for lack of a better term? Not sure how you'd measure it, perhaps with more eye movement than normal? Dunno.


This is reporting on a limited beta release. California’s official numbers to compare are

   Tesla, 0.076 per 1,000 miles, or 1 per 13,219 miles
That was from 2018 though - AP has received hundreds of updates in the meantime but they stopped testing in California.


If you believe those numbers, they got incredibly unlucky in every video I've seen: the failures just happened to occur while the camera was rolling.


The “problem” is Tesla has crossed 3 BILLION miles driven, so you still see a fair number of incidents (50k+ so far if you extrapolate those numbers, though likely fewer).


And yet the FSD beta has only a few hundred miles on YouTube, with tons of disengagements.


Waymo's system is limited to a single city that has been fully mapped into the Waymo system. Cruise is limited to highways only (and not all highways) and is only SAE Level 2.

So you're comparing Apples to Giraffes right now, let alone oranges.


People will really keep saying the Tesla system sucks up until they solve autonomy lol. They're the farthest ahead and it's not even close.


Sure, but it'd be nice if in the meantime Tesla didn't call it 'full self driving', because it's nothing of the sort. It's a pretty good lane assist that requires constant human supervision and only works in a small subset of environments.


You really shouldn't trust those disengagement statistics much at all. It's totally up to the individual companies what counts as a "disengagement," how to define it, and how to report it.


It’s not as simple as “the driver can take over if they need to.” In many cases it takes a human too long to recognize the system doing something dangerous and to intervene. This is going to end badly.


Indeed, recognizing the problem takes vital time and taking over to execute a maneuver is way harder than executing the same maneuver while you're already in control.


But think of the upside -- they could create serious value for their shareholders!


> “Oh Jeeeesus”

> In another video, Brandon's Tesla was making a left turn but wasn't turning sharply enough to avoid hitting a car parked on the opposite side of the cross street. "Oh Jeeeesus," Brandon said as he grabbed the steering wheel and jerked it to the left. "Oh my God," Brandon's passenger added.

Maybe it's just me, but this screams clickbait and low effort. The overall article is fine, but I nearly stopped reading after this. Is this kind of writing really necessary nowadays?


I think given the high stakes of killing yourself or other people, we should be brutally unforgiving in highlighting major failures of killer technology.


Fair enough. But on the other hand, if our headlines were like this:

> OOOOH MY GOOOOSH! - Data scientist REACTS to possible DATA BREACH by HAXORS

... I doubt it would help highlight the need for security. On a more general level, I personally don't think it's worth trading reputation for attention via clickbait. But it might be the better option; I honestly don't know.


In that case a computer isn't nearly killing someone. Very obviously not a fair comparison.


I'm not saying grabbing attention is bad - I fully agree that the topic needs it. But those warnings might be in vain because you're not taken seriously.

Imagine BuzzFeed started warning about a serious issue: sure, they'd get the attention, but with their reputation it's going to be hard to get people to take the point seriously. And I'd hate to see Ars Technica go in this direction.


And in the videos shown the vehicles are not going fast enough to kill any occupants either.


“Killer Technology” - the Tesla marketing team is going to use that now.


What kind of writing? Aside from the quotes, the description is pretty neutral. Do you believe the passengers said those things?


I do believe they said this, and even if not, it doesn't matter. But one could have left this paragraph out without making the article any worse. In fact, I'd say describing a passenger's reaction that closely actually distracts from the gravity of the situation.


Considering the seriousness of the failures and the overwhelming Tesla marketing downplaying the failures... Yes, yes it is.


> Is this kind of writing really necessary nowadays?

Not on HN usually.


I agree, it is clearly a clickbait title, with the keyword "react" imitating that type of YouTube video title. Then again it might be tongue-in-cheek, but since it fits the content perfectly, it's hard to tell.


Timothy B. Lee only posts articles that are negative on Tesla, so yes, it's clickbait-y. I really wish Ars Technica would get a better auto writer.


Terrifying scenarios are described in the article. This is obviously nothing more than (very) advanced lane assistance, and it absolutely requires fully observant drivers who might have to take over the wheel in a fraction of a second.


Humans aren't designed to monitor something that requires no action for a long time and then suddenly requires attention, nor are we good at taking over control of something at the last moment. You're not in the "flow" at that point.

The article is correct that expecting average drivers to do this without training is a high risk move. I've given flight instruction, taking over a landing 20ft off the ground is way harder than landing a plane yourself. And that's with a lot of training, not just an average driver being put in a place to supervise Tesla autopilot with no training at all.


In practice it doesn’t work like this.

Current Teslas allow Autopilot on city streets; it just tries to go straight through intersections (with FSD it also tries to take turns). It stops at lights and stop signs, by the way. So the way you use it is: you engage it on the long stretch, where it knows what to do, and monitor it at the intersection. If it tries to take the wrong lane at the intersection, you intervene. On simple intersections it goes through just fine. All this without FSD Beta.

Is it autonomous? No. Is it way less tiring to drive a car when you don't have to do the lane keeping, speed control, and traffic light stops yourself all the time? Absolutely.


Having to be prepared to take over at any second is more tiring than doing the other stuff myself. You have to be even more alert because you're changing mental states.

For most people, lane keeping and speed control is ingrained enough they do it subconsciously.


Have you actually tried it? Because the entire point I'm making is that it's not like this in practice. You get used to it quickly and learn when you need to monitor it and be ready, and when you can relax.


But "knowing when you can relax" is the dangerous part. That's what makes humans unsuitable for monitoring something that goes right 99% of the time. Tesla's can swerve into an obstacle or oncoming traffic at any moment, it only happens maybe once per 100.000 driving hours, but at that moment the human behind the wheel is probably in the relax mode and completely incapable of taking over.

That's not just theory, the article describes there have been 3 serious accidents already with "self driving" Teslas that crashed while being monitored by the driver.


But how is that different from humans? They make mistakes too! All that matters is the rate of mistakes.


> Humans aren't designed to monitor something that requires no action for a long time and then suddenly requires attention

Isn't that exactly the definition of cruise control?

I'm sorry, but I won't be driving a car without cruise control anymore: cruise control saves fuel and is much more comfortable.


Cruise control doesn't require you to drop your attention to driving. It simply relieves you of the need to keep your foot on a pedal.


Cruise control + lane keeping, perhaps. Which is why I think lane-centering driving assists are dangerous and the limit should be lane-departure warnings (a buzz in the steering wheel when close to the lane edge, with no steering to follow the lane).


BMW has a good middle ground, resistance increases if you go to the edge of the lane and if you almost touch the lines the wheel vibrates. That means you're still consciously steering but it's easier because the car has a tendency to follow the lanes just as any wheel has a tendency to return to center after a turn. But if you let go of the wheel it wouldn't keep a lane.


Makes sense. Making these systems behave well doesn’t seem so hard if it’s starting from the correct principle: never encourage the driver to not drive the vehicle. Assist, but don’t take over.


With cruise control you’re still very much the one who is driving and paying attention.


OK, I can't imagine myself dropping attention from driving. As a member of a national roadkill-prevention awareness org, and having lost close friends on the road, I would never have imagined that it would require you to drop attention.

I imagined it was something you could enable and then disable by just braking, like cruise control or auto-parking on Toyotas.

But now I see how complex it becomes because of the wheel; even Tesla owners in the comments of the article don't like to take their hands off the wheel.


The more automated cars become, the more drivers will start to trust them and pay less attention - especially when it’s advertised as full self driving.

Prime example would be the Uber test driver who felt she could trust the car enough to watch a TV show on her phone, when she was supposed to be supervising it.

Plus, our attention naturally starts to wander when the task at hand is not that engaging.


My understanding was that she was looking at a dashboard screen slightly before crashing into the bicyclist, not a cellphone?


> this is obviously nothing more than (very) advanced lane assistance

Is that really obvious? It looks like the system is aware of much more than just lanes. A lane assist system doesn't stop for red lights or stop signs, or make left turns into a side road after waiting for oncoming traffic to clear.


True. On the other hand, it also fails to handle lanes properly:

https://youtu.be/wPJc9_gJHtM?t=431


When Elon Musk said real-world testing was needed, I assumed it would be Tesla test cars with trained ‘drivers’, not beta-testing guinea pigs.

Given that there are regular stories of people playing phone games and crashing whilst using the current lane-assist tech, are we safe?

‘Real-world testing was needed to uncover what would be a "long tail" of problems, he added.’

https://www.bbc.com/news/technology-53349313


> When Elon Musk said real-world testing was needed, I assumed it would be Tesla test cars with trained ‘drivers’, not beta-testing guinea pigs.

Cynical view - it's being tested by twitter users who are on good terms with a company, with an explicit goal of generating more hype for Tesla.


It's definitely going to be real-world drivers who opt in at their own risk. My understanding is that Tesla views its large number of cars as a test ground to get to Level 5 autonomy. It will slowly roll out more and more autonomous functionality and incrementally get there. You simply cannot solve the tail end of issues with test driving alone.


> drivers who opt in at their own risk

If it was actually just at their own risk, it would be substantially less bad. They opt in at their own risk, and at the risk of those around them.


I learned that Teslas had some self driving functionality by seeing obviously inattentive drivers in them.

While on the road I try to stay as far away from any Tesla as I can.


Tesla should follow Waymo's lead and release statistics on collisions while self-driving, and also the number of simulated counterfactual collisions avoided by the human taking over [1]. Based on the multiple incidents from the individual drivers cited in the article, it sounds like the numbers wouldn't be very good.

1: https://storage.googleapis.com/sdc-prod/v1/safety-report/Way...

Disclaimer: I work for Google but am not involved with Waymo. Opinions my own.


> Tesla should follow Waymo's lead and release statistics on collisions while self-driving

Tesla does release statistics on collisions while self-driving.

https://www.tesla.com/VehicleSafetyReport

> In the 3rd quarter [of 2020], we registered one accident for every 4.59 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.42 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.79 million miles driven.
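
Taking those numbers at face value (with no adjustment for road type, vehicle age, or driver demographics, which is the usual objection to them), the implied relative rates work out as below:

    # Accidents per mile implied by the quoted Q3 2020 report figures.
    miles_per_accident = {
        "Autopilot engaged":        4_590_000,
        "active safety only":       2_420_000,
        "no Autopilot, no assists": 1_790_000,
    }
    baseline = miles_per_accident["no Autopilot, no assists"]
    for mode, mpa in miles_per_accident.items():
        # Higher ratio = more miles between accidents than the baseline group.
        print(f"{mode}: {mpa / baseline:.2f}x baseline miles per accident")

That comes out to roughly 2.56x for Autopilot, before any correction for where Autopilot is actually engaged.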


This is good, I had forgotten they released that data, but I was still asking for a lot more. For example, Waymo bucketed accidents by severity and went down to very low severity (like a pedestrian bumping into the side of a stopped car) whereas on that site Tesla is essentially only reporting airbag deployment counts. (To Tesla's credit, they have more time granularity than Waymo.)

I do think the counterfactual "how many times did the human driver taking over prevent a crash" data is really important to have. In the article, the scary near misses they talked about fell into that category rather than into the "actual airbag deployment" category.


> I do think the counterfactual "how many times did the human driver taking over prevent a crash" data is really important to have. In the article, the scary near misses they talked about fell into that category rather than into the "actual airbag deployment" category.

It would be nice to have; however, I think it misrepresents things. If the end result of having Autopilot on is that the driver, on average, crashes less, then it's an improvement, even if the driver has to regularly intervene. What's important is the end effect, not how it happens.


Sure, but if you want to estimate how far they are from level 4 autonomy then the counterfactual data would be very useful.


You don't have to read Tim Lee's writing to know how the latest FSD performs. There are lots of videos on YouTube, and new ones posted every day. My personal opinion is that this is a quantum leap over the previous implementation. Much respect to the engineers at Tesla for this amazing accomplishment.

https://www.youtube.com/results?search_query=tesla+fsd+beta


"Full self-driving"

> Tesla ... says it's not intended for fully autonomous operation. Drivers are expected to keep their eyes on the road and hands on the wheel at all times.

If Tesla rolls out a product where you have to pay close attention and be ready to take over, and calls it full self-driving, that seems like a massive case of fraud. I'd be pissed if I'd bought the FSD package.


On the other hand, I am okay with that requirement, but the term becomes as diluted as smartphone / smart TV / smart car... Don't call it self-driving; call it smart cruise control or something.


1. It's not even available to 99% of people right now.

2. It's a beta.

3. People bought into it early when the price was lower. It's a subscription to the feature set as it develops, not a guarantee that it works perfectly the day you pay for it.


It’s a beta release


By now it should be obvious—and this is purely descriptive of what I see playing out, not prescriptive—that self-driving tech will have to clear a higher bar than just statistically better in order to become mainstream.

Not only because anecdotes carry more weight than statistics in the media—as this story illustrates—but because people are much less afraid of bad things happening, as long as they're in control when it happens.

Also, I suspect everything associated with fuel-burning cars is by now irrevocably linked with cultural ideals of individualism and freedom. Resistance to EVs and self-driving tech will continually materialize, as if out of thin air, regardless of how well documented the benefits are. It's going to be an uphill battle for the foreseeable future.


Anecdotes are data. They're a different kind of data, one that usually leads to investigations resulting in more rigorously gathered data.

Multiple drivers are reporting multiple incidents of varying severity in the short time (a week?) that the "full" FSD beta has been available. That's a huge concern, and it absolutely needs to be followed up on.


The software isn't widely available. It's available to a tiny set of Tesla's "closed beta" customers. Tesla picks and chooses who it sends it to right now; basically, it's only being sent to large Tesla stockholders or non-employees Tesla has worked with in the past.


But also, there is no way Tesla FSD is anywhere close to being statistically better than normal drivers, considering what we see in the videos.

The reason I make this obvious statement is that you are making it seem like the article is relying on anecdotes while statistics demonstrating safety exist somewhere.


I'm not so sure. It seems equally if not more likely to me that people will eventually take a ride in a self driving car, be impressed since that's the statistically likely outcome, and slowly get used to it. Accidents from robocars will become things that happen to "other people" like all things that ultimately boil down to random chance.


I see your point. Using air travel as a precedent, there's a small subset of people who absolutely refuse to fly, and a small subset with zero qualms about it. Between those extremes, there's a range of folks who participate with varying levels of anxiety. I can see that becoming the endgame here.


What will it take to stop them? I believe it's a settled matter that their self-driving is fundamentally flawed due to relying on ML and cameras instead of lidar. Will it take people dying for the government to step in? Lawsuits? I don't think our politicians are well versed enough in how this works, or in the level of risk involved, to stop letting this kind of thing happen.


Is that a permanently settled matter? When humans drive, they are relying almost exclusively on visual inputs.

I don’t see a reason to believe that humans are so far superior at visual processing that no self-driving based on video could best them and many reasons to think that we’re inferior in certain aspects of the task.


1) Human eyes are superior to any camera system in a Tesla, both in resolution and dynamic range. 2) The human visual system is backed by the equivalent of strong, general AI that intuits and understands the entire world around the vehicle and all objects interacting with it. It is in theory possible for a video-only system to work, but ML is nowhere near the level of development needed for that to be a reality.


Sure they're superior, but you don't decide whether something is flawed by whether it's superior or not.

Our road systems are not designed for the full dynamic range of the human eye, which is why we drive with headlights and we have street lights on highways rather than relying on night vision for all driving.

> The human visual system is backed by the equivalent of strong, general AI that intuits and understands the entire world around the vehicle and all objects interacting with it.

I think the human "general AI" vastly overestimates its ability to drive and respond to conditions that suddenly appear while driving.

> It is in theory possible for a video-only system to work, but ML is nowhere near the level of development needed for that to be a reality.

[citation needed]


Look at the car's display in this video: https://youtu.be/RN5Qoei7v1k?t=2281

The "lane" in front of it is wildly moving around, so it's definitely misinterpreting the visual inputs.


As the owner of a Tesla: the rendering on the screen and what the car actually does with the inputs don't always correlate. The rendering is purely for your own viewing pleasure. Hell, there's a disconnect where if you have the new Autopilot computer but the old entertainment screen computer, the screen doesn't even have the capability to render anything worthwhile, but you still have self-driving.


But in this video, as soon as the FSD got engaged the car started driving towards the train... It doesn't look like the car had much understanding of what was going on.


I've seen this several times in videos with the new FSD: when you enable it suddenly, for example when you're already stopped at a stop sign, it's not sure what state it's currently in or why it's stopped, and it assumes it should drive forward when you enable it from a stop, as long as there are no physical objects in front of you.

If you want to see longer FSD videos check out https://www.youtube.com/user/pilotjc78


The driver enabled AP at that very moment and disabled it half a second later; you can hardly draw any conclusions from that.


Given that the rendering on the screen is supposed to be showing the inputs the car thinks it is receiving, it is even more concerning to learn that what the car actually does isn't correlated with the inputs it is receiving.


That seems like intentionally tortured logic. Why would it be a given for the rendering to show the inputs as received, rather than being an entertaining caricature loosely related to them? Surely you don't think the animations of the car's battery being charged are realistic in any engineering or physical sense.


Because Tesla says that it is a rendering of what the car thinks it is seeing, meaning that the rendering is supposed to roughly correlate with the information upon which the FSD systems are acting.

In contrast, the animations of a car battery being charged don't claim to be realistic in any sense, and they deliberately attempt not to be realistic: oversized images of batteries or other standardized iconography, stylized arrows or other indicators of charging "flow", etc.


I'm distracted by how unsafe the driver's use of the touchscreen is in that video.


In fact, I'd argue the opposite is true. It's settled that the endgame of self-driving will rely only on visual input, since that is all humans need.


ML shows no signs of achieving intuition, which allows us to predict various scenarios in real time and adjust accordingly.


Does “instead of LIDAR” mean that LIDAR solves that in some significant way?


With camera-based sensing, you need AI to interpret the inputs in addition to AI for deciding how to react to those inputs. With LIDAR, the job of detecting objects seems to be much simpler.

To take an example that is relatively simple to work around: you need AI to distinguish lens flare from an object approaching or departing.


With LIDAR the job of detecting some objects becomes much simpler but for other objects becomes impossible.


Nope. I don’t think we’ll have Level 5 autonomy with safety on par with human drivers until we have general AI. At that point, whether or not that AGI had LIDAR supplementing its visual inputs becomes irrelevant.


Doesn’t Waymo already have Level 5 autonomous vehicles operating in the Phoenix area? So far they’ve had about 65,000 driverless miles[1] (and 6 million miles with safety drivers). That number should increase pretty quickly now that they’re scaling up their service. So far it seems like the main problem is that they get rear-ended by inattentive human drivers.

1. https://www.engadget.com/waymo-indepth-details-selfdriving-a...


ML is intuition. Consider the difference between an expert system solution to chess and an ML solution to go. Go is all intuition.


>> Is that a permanently settled matter? When humans drive, they are relying almost exclusively on visual inputs.

I really don't understand how so many smart people keep repeating this argument. It's one of the most flawed arguments, and it was used by Elon Musk to excuse his inability to retrofit lidars into every single car.

We've also got a single brain, so why can't a computer do what we do?


You could short Tesla stock. It is well proven that the free market will correct mistakes like this. /s


It will take a few serious accidents that most humans would have avoided.


I think I get less interested in self-driving cars the closer we seem to get to them. It honestly just stresses me out. Driving is something I enjoy doing. I’m not sure I will ever feel comfortable taking my hands off the wheel. This has little to do with how well the tech works in tests or anecdotally; I think I just don’t want this.


I'm a staunch grumpy old man, but our minivan has lane keeping, automated emergency braking, and radar cruise control. The lane keeping and emergency braking are just there and mostly help out (although apparently I like to drive on the line when exiting a freeway, so it's kind of annoying there). Radar cruise would be nice, except I don't do much long-distance driving, and I don't like the following-distance options; a different car might offer options that suit me better.

I wouldn't like to be in a car that drives like either the Tesla or the Waymo cars, though: the Teslas are too aggressive, and the Waymo cars are not aggressive enough (which is generally good and safety-aware, but would still be frustrating to ride in).


> I think I get less interested in self-driving cars the closer we seem to get to them. It honestly just stresses me out.

That's probably a reasonable response. These things have the nasty feature that they both require less input from the driver the more sophisticated they get, but also that they require more _attention_ from the driver the more sophisticated they get (a boring old cruise control system won't abruptly swerve into a concrete road divider, but a high Level 2 system will). And humans just don't work like this. "Don't do anything, but be ready to take control within a second or so when it does something insane" is not something we're good at.


For dense urban areas (low speed, lots of stopping) I could see Waymo's tiny bubble cars working. A tiny taxi bubble.


I agree. I think I’d feel differently if the vehicles were more like trains on a track, if the roads had some added features that guided the vehicle. I just don’t think I’m down with the autonomous approach where the car is trying to make all the decisions I do.


Bet it's only named "Full Self-Driving" to get out of their obligations to customers who bought that package.


You can drive yourself anywhere!


Tesla is very "thrifty" with their FSD software. Other companies need to hire safety drivers, but Tesla just foists that labor onto their paying customers, and any liability to boot.


Who very willingly enable the option after several warnings that the software is beta. In fact tons of people are begging to get access.


Either the human should never have to intervene (can sleep) or the human should never be allowed to take their eyes of the road.

Any level between that is dangerous, irresponsible, and should be banned from use on public roads.

Which leads to an interesting question or two: a) why isn’t it? b) what will happen to Tesla and Uber once it’s clear that “Unsupervised driving” will remain a decade away for at least another decade?


> why isn’t it?

Reactionary regulation. Once a few high Level 2 driver assist systems drive into schoolbuses or something, expect to see that change.


Without commenting on this particular technological advance, it did make me think of the early days of the automobile:

https://amp.detroitnews.com/amp/26312107

My key takeaway is regulation of advanced technology lags waaaaaaay behind the disruption caused by the technology.


Is it really possible to regulate without any experience at all?

I thought regulation was something you could define based on a sufficiently widespread practice.


>Is it really possible to regulate without any experience at all ?

Yes. You can absolutely set safety metrics and boundaries. They might not be perfect, but as you get experience you refine those boundaries.

Just like setting alarms in web services -- you have a sense of roughly where the performance should be, and you set some alarms. You iterate on those alarms as you get data, but you still absolutely have a sense of what's roughly appropriate from the start.


I can guarantee you that if Tesla's auto-driving starts killing people, there will be near-instant regulation. For that reason alone, supporters of auto-driving technology should be aghast at what Tesla is doing.


It has already killed three people. The question is how many more will it kill before real regulation starts hitting? Waymo has been conservative, which is the right way to go about this.

The number of outright faulty actions I'm seeing on these videos is terrifying. Pedestrians and other cars on the road didn't ask to be part of this live fire experiment.


Each year ~1.3 million people are killed on the road worldwide (interesting to compare to COVID deaths).

Giving people the option to sacrifice their lives to contribute to solving this problem is an amazing thing when you think about how many millions of people the capability could save over the next century.

These are the risk taking explorers of the modern era.


It’s not just the Tesla owners who can die when things go wrong.


Has a Tesla on autopilot ever killed a non-involved person?


That’s true of most all car crashes. Also, alcohol.


Yes, but we don’t call drunk drivers “the risk taking explorers of the modern era.”


> Giving people the option to sacrifice their lives to contribute to solving this problem...

Given that that's not how this is being marketed, I don't think the people "sacrificing their lives" would necessarily agree with that.


You're assuming those sacrifices will not have been in vain. What if Tesla fails to create a self driving system that's superior to human driving? What then?


Tesla shows us a high-quality mockup[1]. The system has all the features, but nothing really works.

[1] https://en.wikipedia.org/wiki/Mockup


This thing is obviously not ready to drive properly:

https://youtu.be/RN5Qoei7v1k?t=2218

It can't even figure out which traffic lights apply to it.


The guy disengaged it just as it started accelerating. It's sometimes slow to respond to green lights right now. Give it a few more weeks and it'll be fine.

Look at how it responds to pedestrians here: https://youtu.be/RN5Qoei7v1k?t=587

And his quote: > Improvements across the board, turned into Target and took me to the front! Drove around East Sac, through Sacramento State, Midtown, and Downtown. This release definitely was far less interventions for me.

Also, this guy in general gives a lot more vocal reaction than I would personally give in a similar situation, so it makes it look and feel worse than it actually is.


Unbelievable that the guy has to swipe and type on the center console while driving. Touch screens in cars should be banned and Tesla held liable for all accidents that happen while interacting with the console in existing vehicles.


Wow, that's like 10 disengagements in just a couple of minutes, and it almost drives through a train boom barrier and does some scary stuff with a pedestrian...


I missed that on the first watch but you're right, the driver engages the self-driving mode in front of the train barrier and it immediately accelerates towards it.

This system is really bad.


Without releasing data, Tesla is forcing analysts to focus on anecdotes, which suffer from all the usual problems like selection bias. What we need to know:

- How often do drivers intervene?

- How often do drivers fail to intervene and the system causes an accident?

- How often does the system take an action which has a high likelihood of having prevented an accident?

- How often does the system do so when it seems likely a human driver would have failed to do so?

- Integrated together, what are the expected dynamics of these probabilities and their net impact, insofar as releasing the system more widely creates training data to help improve them more quickly over time?

It could very well turn out to be the case that this system is purely positive, strongly net positive, or neutral in harm reduction. The question then is, given that, what ethics should inform its release: is putting the stress on the driver sufficient if it, say, saves 1,000 lives in the next six months and harms no one in exchange, other than some drivers having to intervene and endure moments of stress? What if it will save 1,000 and 10 people will be harmed by failing to intervene? What if it's an even swap, harming people who fail to intervene in exchange for saving people who would otherwise inevitably have been lost to accidents they couldn't have prevented?
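
As a rough sketch of what that integrated accounting could look like (every number below is a placeholder, not a real figure):

    # Toy net-impact model for a supervised driver-assist rollout.
    fleet_miles        = 1_000_000_000   # miles driven with system engaged (assumed)
    prevented_per_mile = 1 / 2_000_000   # crashes the system avoids (assumed)
    caused_per_mile    = 1 / 4_000_000   # crashes from failed interventions (assumed)

    prevented = fleet_miles * prevented_per_mile   # 500 crashes avoided
    caused    = fleet_miles * caused_per_mile      # 250 crashes caused
    print(f"net impact: {prevented - caused:+.0f} crashes")

The data decides only this arithmetic; whether a positive net justifies the crashes the system itself causes is exactly the ethical question above.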

Saying "this is wrong to do" based upon anecdotes is dumb analysis. Saying "this is wrong to do" based upon an absolutist form of ethics is fair, but it also means you reject that we should be trying to solve self driving. If you think it's wrong to do and are not doing either, you ought to articulate what scenario in terms of data would justify the action vs not, even if that data is unknown right now.


> It could very well turn out to be the case that this system is purely positive, strongly net positive, or neutral in harm reduction

It could also turn out to be negative, right? Certainly the anecdotes shown here would point much more to that - for example, its inability to turn sharply enough is indicative of some deep flaws.

> If you think it's wrong to do and are not doing either, you ought to articulate what scenario in terms of data would justify the action vs not, even if that data is unknown right now.

It's wrong to do as long as you rely on CUSTOMERS to ensure safety. There is no further argument to be had: as long as the system is not PROVEN to be safer than a human driver even in the absence of human attention, it can't be put into the hands of customers.

That is not to say that they shouldn't be developing it, and paying and training people to operate it safely. That is what all other companies are in fact doing - taking a safe and responsible route, which is apparently too much to ask of the most highly valued car company in the world.


It certainly could be negative. I didn't list those outcomes not because I think they're impossible, but because I was responding to the negative perspectives on this story, which in many cases presume the outcomes I listed are not possible.

Saying "there is no further argument to be had" is just a way of shutting down debate. Why exactly is asking customers to opt-in to testing this somehow off the table? I'm not saying you're wrong, but you haven't given an argument and just are stating your belief as if it is some immutable fact.


> Why exactly is asking customers to opt-in to testing this somehow off the table?

Because they are likely to not just harm themselves, but also others. And also because they are not given access to enough information to be able to make an informed decision on whether the technology is safe to use, under what parameters.

Not to mention that the lack of accountability this creates for the company introduces perverse incentives - the company is getting free labour, and the more it hides about real stats, the more free labour it is likely to get.

This is all pretty well settled ethics - it is for the same reason that we don't allow companies to run beta tests for potentially life-saving drugs. This is long settled both legally and ethically, and the self-driving case is essentially identical, so why reopen that debate?


I don’t agree this is settled ethically unless you can come up with a better example. This leaves full agency in the hands of the driver throughout the application, unlike a drug, which is a point-in-time decision that leads to a long chain of events outside the patient's control. A better example would be something where a manual operator was displaced by an automated one, especially where the operator was a non-professional and the use of the thing is risky and already kills many people. Perhaps there is an analog in trucking or machinery.


In most cases, drugs need to be taken regularly for a period of time to have any observable effect, positive or negative. Sure, it's still much less immediate than a driving decision, but it's also not simply a point-in-time decision.

More importantly, though, I believe that the statistical characteristics are more relevant here. I have no idea how much risk I'm exposing myself to by activating the Tesla "FSD", because (1) I don't know how much I can trust myself to switch from supervision to split-second control, and (2) I have no idea how badly this has been going for other Tesla drivers, or how likely the Tesla is to make really poor decisions (will it stop before the intersection, or plow directly into traffic? What percentage of the time will it do either?).


Tesla released data for Autopilot; the FSD beta has been out for ~1 week, so there's not much real data yet.


Yeah, I'm not arguing that Tesla should have done something differently so much that I'm encouraging them to make sure they get data out ASAP.


It's not enough to know rates. If you know incidents per day, but don't know hours driven per day, you can't compare safety of two modes of driving. One mode might just be parked more often, and that's why it has fewer incidents per day.

Similarly, if you know interventions per mile, but don't know what fraction of those miles involved challenging circumstances, you can't compare two modes. Maybe people only turn self-driving on in particular circumstances. Or maybe Tesla opened the beta to people in easy locations. Apples-to-apples safety comparisons require a lot of detailed data when they aren't part of a randomized trial (which is still hard to do correctly).
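
A trivial example of why the denominator matters (numbers made up):

    # Raw incident counts invert once you normalize by exposure.
    mode_a_incidents, mode_a_miles = 10, 100_000
    mode_b_incidents, mode_b_miles = 5, 10_000

    print(mode_a_incidents / mode_a_miles)  # 0.0001 incidents per mile
    print(mode_b_incidents / mode_b_miles)  # 0.0005: fewer incidents, 5x worse per mile

And that's before you stratify by how challenging the miles were.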


The system has been out and available for a tiny selection of drivers for 1 week. How can there be statistics?


> How often do drivers intervene?

Extremely often, if the videos linked in the article are anything to go by.


Would we all want a world where every car drives itself? Yes.

Are we anywhere close yet? No.

Then why is Tesla calling it Full Self-Driving? Because it's marketing, and more money can be made on updates.

Is it wrong to call something full self-driving when it obviously is not full self-driving? Well, duh!


Ok, so. I will say it again to all the people out there who think they can accumulate millions of miles of trials and use statistics to eventually declare a driverless transportation system is safe: stop. What you are doing is morally and technically wrong. You have to prove safety by design, and this design must be fail-safe. It’s not something you can refactor in software or test on users.


That's a great ideal to strive for, but it's unrealistic. Driving is unsafe. People die all the time on the roads. The benefits of driverless cars mean that they will never be held to a higher standard than human drivers.


Driving is unsafe when humans drive because humans make mistakes. It should not be unsafe when machines drive; otherwise, what is the benefit of using them? Driverless cars can, must, and will be held to a higher standard.

But this is not a lofty ideal; it has been done before, and in the field of transportation. Trains used to crash all the time. Driverless trains don't. Why? Because fail-safe systems have been added to railroads that nearly totally prevent crashes.

Don't take my word for it: the bar for rail safety is defined in well-documented standards (namely EN 50129, EN 50126, and EN 50128). It is a 10^-9 hourly probability of fatal failure. You can also look up "CBTC" on the Internet. This is the reference bar for driverless.
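
To put 10^-9 per hour in perspective, here's a back-of-the-envelope calculation with my own assumed fleet size and usage:

    # What a 1e-9 hourly probability of fatal failure means at fleet scale.
    rate_per_hour = 1e-9
    fleet_size    = 1_000_000   # vehicles (assumed)
    hours_per_day = 1           # operating time per vehicle per day (assumed)

    per_year = rate_per_hour * fleet_size * hours_per_day * 365
    print(per_year)  # ~0.37 expected fatal failures per year, fleet-wide

That is roughly one fatal system failure every three years across a million vehicles. Compare that to what the FSD beta videos show.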

The real reason Tesla and friends don't want to go there is that fail-safe systems like CBTC require some form of collaboration with devices installed in the ground infrastructure, meaning Tesla would have to accept becoming part of a system, and not "the" system. Obviously a paradigm change for the car industry.

How long it will take the public authorities to understand that they must step in to force systemic collaboration and standards between infrastructure and cars to ensure driverless safety, I don't know, but I think it's time to call bullshit on the concept that you can have an acceptably safe driverless car system without it.


Given that our legal system tends to be retroactive, and Tesla seems to have pushed all legal liability onto the drivers, our only hope is that the first victim of the FSD beta manages to sue the driver into oblivion and puts up a barrier to drivers using software that the manufacturer won't take responsibility for.


Can we get a law that cars with the beta self driving need a big "DANGER STUDENT ROBOT DRIVER" sign on them?


I agree that this whole experiment has been crazy, but the thing people don't realize is that there is a vast difference in performance and safety between the original FSD beta version and the latest one from a few days later: up to 1/3 fewer disengagements, thanks to fixes for low-hanging bugs.


The lane selection for turns still seems to be quite poor.


> An experienced human driver can drive for thousands of miles without making a serious mistake

Absolute bullshit. 99% of drivers will frequently commit a series of small mistakes on an everyday <5km drive.


Try to be more civil, please.

And he said "a series mistake", presumably meaning something resulting in a crash. The statistics back him, in that case (1 in 500k miles, someone else mentioned).


In which case Teslas incident rate is one every 5 million miles.


> > An experienced human driver can drive for thousands of miles without making a serious mistake

> Absolute bullshit.

It's not, though.

> 99% of drivers will frequently commit a series of small mistakes on an everyday <5km drive.

Perhaps, but the vast majority of such series of small mistakes don't add up to a serious mistake, so that's not relevant to the claim it was offered to rebut.


Tesla statistics say there's 1 accident every ~4.5 million miles driven on autopilot and they release those statistics. https://www.tesla.com/VehicleSafetyReport


Is the trick here that the driver is responsible in case of accident? That would be evil genius.


I guess that's why it's still beta!

Kudos to the driver ++


Those are going to be good arguments for the gun lobby people.


I don't get it, why?


With this release Tesla has become a very attractive platform for assassins; if your target drives a Tesla it can be subverted and their death will look like an accident.


"Beta" has no place on the public road.

I'm gonna be so fucking pissed if I die due to Tesla's flawed risk/reward assessment, or worse yet, my kid does. I bet Musk won't lose a moment's sleep over it, though.


>"I really don't understand how a company can get away with using its customers as QA when they sell products that can kill people."

Since when did this ever stop companies when there is money to be made? Driving cars in general kills tens of thousands of people per year, and so do many other activities. For governments it is just a question of how much money vs. how many people killed.


The main problem is that Tesla Autopilot uses PyTorch, which is a prototyping-and-toying library designed by an intern, as compared to TensorFlow, which was designed from the ground up targeting realistic applications such as this one.


I'd be very curious to hear some elaboration on this claim; it's the first time I've ever seen it.

From my point-of-view, the fact that TensorFlow completely scrapped its entire ≤v1.0 graph-based API for a PyTorch-style eager execution API for v2.0 onwards does not suggest that it was “designed from the ground up” in any way, shape, or form. We're talking a total rewrite of the entire frontend codebase going from v1.0 to v2.0, and a total abandonment of all v1.0 style code. TF 2.0 is basically a brand new language.

Under the hood it's all the same BLAS/LAPACK operations anyway. I've found TF superior to PyTorch for stats methods and probabilistic programming in general (far richer API), but for standard linear algebra the two are equivalent performancewise, with PyTorch having the superior API IMHO.
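
For anyone who hasn't watched that transition, here's a simplified sketch of the API break; the first snippet is the old TF 1.x graph-and-session style, the second is the eager style that both PyTorch and TF 2.x use:

    # TensorFlow 1.x: declare a static graph, then execute it in a session.
    import tensorflow as tf  # assumes TF 1.x
    x = tf.placeholder(tf.float32, shape=(None, 3))
    w = tf.Variable(tf.ones((3, 1)))
    y = tf.matmul(x, w)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))

    # PyTorch (and TF 2.x eager mode): operations execute immediately.
    import torch
    x = torch.tensor([[1.0, 2.0, 3.0]])
    w = torch.ones(3, 1)
    print(x @ w)  # runs right away; no graph, no session

None of the session-style code survives in a TF 2.x program, which is what I mean by a total rewrite of the frontend.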


Just being easy to use doesn't mean it has a solid back end. I agree PyTorch is the easier-to-comprehend UI for the average person.


Completely irrelevant, since the model runs on a custom chip. No part of the PyTorch code ever actually makes it into production.


Comma.ai recently switched their entire codebase from TensorFlow to PyTorch; how do you explain that?


If you don't qualify what you mean by "AVs will kill people" it's not useful. People can, and do, accidentally and intentionally, step in front of vehicles. They also intentionally crash vehicles. They won't stop because of AVs.

Even the fact of suicide by vehicle is complex: AVs can prevent some suicide by driver but possibly raise the risk of pedestrians committing suicide by vehicle. One reason suicide by vehicle happens is because it can conceal the intent to commit suicide. This is compounded by suicide being a taboo subject. This alone will complicate and possibly delay adoption of AVs.

Millions of innocent people are injured every year and about a million die every year in road accidents. Cutting that number significantly would have roughly the same impact on global well-being as curing a dread disease.

There are potentially hundreds of millions of new customers for vehicles as developing nations grow their middle class. If humans drive those vehicles, hundreds of thousands more will die every year.



