It's really quite sad that the regulation of autonomous vehicles has been so slow to come along. Public roads are filled with other drivers, passengers, and pedestrians who did not consent to be part of a large-scale beta test for partial driving automation that could fail at any time. I believe this is a case where self-driving software should be illegal by default until proven safe. Most companies in this industry, thankfully, seem to be moving carefully and rolling out their products conservatively; Tesla seems to think "move fast and break things" is an appropriate motto for 5,000 lb projectiles on public roads.
I partly agree, but it's also sad that we need to add to the bureaucracy just to insist that professionals act professionally. As you point out, most companies are doing fine. But it wasn't just Tesla playing fast and loose; as far as I'm concerned Uber execs should be doing time for negligent homicide: https://www.npr.org/2019/11/07/777438412/feds-say-self-drivi...
> it's also sad that we need to add to the bureaucracy just to insist that professionals act professionally
That's how bureaucracy and laws form. Individuals (people or companies) do X. The rest of society doesn't like it. They outlaw or regulate it. That's why the rivers no longer catch on fire in the US.
Sure, but I'll note the other companies are self-regulating on this. Regulation tends to slow progress, so what I'd rather see is what places like Waymo are doing: acting like responsible adults so regulation isn't forced upon the whole industry prematurely.
Thing is, it wasn't just one incident; it was just the one incident that resulted in a death.
When Ubers started self-driving it took just a few hours before there were videos on Twitter and YouTube of them driving right through red lights without a care in the world.
Let's not give them too much credit here. I think Uber's halt had more to do with Uber's change in CEO and the indictment of the guy who ran their self-driving car program. Plus the fact that it was a giant money sink with no short-term return being run in a company that has never been profitable and can no longer raise infinite investor money.
I don't think it matters whether or not you think Uber was acting responsibly or reflexively. The point is their program is done while Tesla's, which has seen far more fatalities, continues unabated.
It may not matter to you, but it definitely matters to me and to my point about professionals acting professionally. Uber is not a good example of responsible self-regulation; it's instead about them getting reined in by other circumstances.
If every driver on the road today was never drunk/distracted/enraged I would agree, but the reality is that humans driving cars kill other people every single day. We should fix this with a better system. Tesla and Waymo seem to be making progress. I don't expect them to be perfect but in the long run this will save lives.
You seem to be implying that in the short run it's ok for them to kill some extra people. One, I don't think that's necessary; Waymo is a good example of how a safer approach is also apparently no less effective. Two, you're presuming that we will get to self-driving cars that are economically viable and safer than current human-driven ones, something that is not a given. And three, it's not clear to me who gets to decide exactly how many unwilling people should be sacrificed on the altar of technological progress, but I hope it's not us and it sure shouldn't be Musk.
Here's a video of a Waymo car in Phoenix. Around min. 17 it gets confused by a cone and stops in the middle of a two-lane road, hugging both lanes.
Then Google sends support, but just as they arrive, the car drives away from them.
They were lucky it didn't lead to an accident, and as long as they keep the fleet to 600 cars, then yeah, the accident rate will be much lower than that of Tesla's AutoPilot, which ships in more than a million cars.
My point is not to rag on Waymo, just to inject some reality.
We don't have a choice between "safe and unsafe" way of developing self-driving software.
We have a choice between "test software we know can't handle all situations on real roads and make it better based on that testing" or "we'll never have self-driving software".
Except the other option will more likely be: "U.S. doesn't allow testing of self-driving software on real roads and a Chinese company will develop it and will capture a trillion-dollar market in U.S."
The Waymo car in that video drove extremely safely given that it was confused. It was conservative and thought it might have seen an obstacle, so it stopped. Did this inconvenience other drivers? Absolutely. But it was not a major safety risk. In fact, slowly coming to a stop is the legal and correct thing for a confused or impaired human driver to do. In comparison, Teslas seem to rapidly and suddenly brake for no explainable reason while traveling at high speeds, and do so routinely. Further, Teslas have other safety issues which are indicative of sloppy design (such as the fly-by-wire passenger doors that will trap back-seat occupants in the car if the electronic system is disabled). This is a failure of Tesla specifically, and regulating to stop it wouldn’t really slow down others like Waymo.
> ”Teslas seem to rapidly and suddenly brake for no explainable reason”
The reason is widely known. Phantom braking is caused by rogue radar reflections that confuse the car into thinking there’s an obstruction in its path, activating the AEB automatic braking.
The real question is, why does it happen more often with Teslas than with other cars equipped with radar AEB? Maybe Tesla’s is just more sensitive.
Exactly. There's a big difference between approaching this problem with a "first do no harm" perspective and a "move fast and kill a few people" perspective.
And this part from the previous poster strikes me as a big problem: "They were lucky it didn't lead to an accident and as long as they keep the fleet to 600 cars then yeah, the accident rate will be much lower than Tesla's AutoPilot, shipped in more than a million cars."
That seems like an excellent reason to keep the number of active cars very, very small. Rather than, as stated, an excuse for shrugging at a death rate at least 1667 times higher.
100% agree! And I don’t think the approach needs to be “first, do no harm”. I would be very happy with “move at a normal pace and do your best not to kill anyone.”
But “move fast and kill people” is ludicrous and it’s exactly what Tesla is doing.
> We have a choice between "test software we know can't handle all situations on real roads and make it better based on that testing" or "we'll never have self-driving software".
This is like saying that we'll never have a cure for cancer if we can't experiment on the public without their consent.
The bar for medication, at least, is proving safety first before testing on large numbers of people and allowing the public to buy it.
> We don't have a choice between "safe and unsafe" way of developing self-driving software.
It's also entirely possible Waymo hasn't achieved peak perfection in its software development practices despite doing better than Tesla, and that another entity could do it more safely.
How many unwilling people are sacrificed because we won't ban alcohol and all impairing drugs without exception and then enforce those bans with immediate, Judge Dredd-style summary execution?
I bet you that'd reduce traffic fatalities dramatically too.
How far do you want to go to 'save lives'?
I guarantee you with 100% certainty I can design a society that will 'save lives' at every turn for every single activity, and I can guarantee you with 100% certainty you wouldn't want to live in it.
It doesn't, as it compares general rates with self-selected "good driving conditions" as defined by the software: only highways, only good enough weather, only a good enough maintenance state of the car.
Your notion is that everything is fine because according to Tesla, a company with a leader known for telling whoppers, they are killing fewer people on net?
Even if we trust them on that stat, which I certainly don't, that still doesn't mean they aren't killing people unnecessarily.
I can't withdraw my consent to share the road with intoxicated or distracted drivers. That's just a fact of life. That doesn't mean I shouldn't be able to withdraw my consent to share the road with Beta software.
Plenty of businesses are able to build very effective safety mechanisms for motor vehicles without subjecting the general public to Beta software. To me this is a case where the ends do not justify the means.
I understand your point, but I would frame it stronger: Indeed, we, as a community, have withdrawn our consent to share the road with intoxicated drivers. You break laws if you drive intoxicated. That beta software doesn't break any laws, but maybe it should.
What if we made driving tests far more difficult (otherwise, you can drive a 50(?)cc moped that tops out at 35mph)? What if we had government subsidize rides home after going out drinking? There are a lot of solutions that don't involve "let companies test 5,000 lb autonomous vehicles on public roads." Heck, those companies have enough money; let them buy a city and test on the now-private streets.
> Tesla and Waymo seem to be making progress. I don't expect them to be perfect but in the long run this will save lives.
Let's assume for a second that the unstated assumptions I was alluding to above are correct: this is the only way to make a better system, and that system has to be tested on public roads. Why do we allow two private companies to reap the inevitable huge monetary rewards when we all pay for that system with an increased risk of dying in the meantime?
nah. just because some drivers do stupid things does not mean that we should allow this "FSD" travesty. The thing is: if you're a human driver and you screw up you pay the consequences. It's codified in the law and you are fully aware that there are consequences if you don't follow the rules.
Now, if you drive one of these "FSD" cars and it ends up in an accident where people die, who is responsible? Are you responsible? Is the car manufacturer responsible? Do we just write it off as a freak accident with a 1 in a million chance of happening again?
> Most companies in this industry, thankfully, seem to be moving carefully and rolling out their products conservatively;
Having used other products, I think this is objectively not true. I've seen "Pro" pilot accelerate itself into its own collision warning and randomly fail at basic curves.
I've seen video of Ford Copilot failing due to glare and jerking toward trees.
For some reason, Tesla just gets more attention. Some of the other beta-like behaviors are actually worse.
GM SuperCruise right now, and Ford is claiming that their "Blue Cruise" product will be able to do it soon. There may be others, but those are the two that I'm familiar with.
I've also been observing a general trend of reviewers rating the performance of new vehicles based in part on how long it can go without nagging the user to hold the wheel. sigh
I think Tesla should definitely be accountable for their software and the damage it causes, but overly risk-averse regulations will push out self-driving cars by a decade, maybe indefinitely.
I agree with your point that government regulation is fundamentally reactive to private sector innovation, hence lagging. That being said, this particular issue of autonomous driving has been a hot topic for the better part of a decade now and I would like our governments to tackle it.
>I believe this is a case where self driving software should be default illegal until proven safe.
If we go with your suggestion, what would you consider "proven safe"?
Autopilot has been running in hundreds of thousands of cars, has driven several billion miles, and we still don't have enough data to prove whether it is safer or more dangerous than an average human driver. Accidents are rare and we therefore need a huge amount of data before we can be confident that the accident rates we are seeing are predictive. I have no idea how you collect that type of data without them being tested on real roads with other real drivers.
There is zero context on that video that tells us what is happening there. We have no idea if that is the Autopilot failing or the emergency braking failing. We also have no control group to tell us what percentage of humans would stop short of that dummy. It is inexact due to Twitter's video player not showing fractions of a second, but it looks like there were approximately 2 seconds between when the dummy started moving forward and when it was hit by the car. The average human time to braking is 2.2-2.3 seconds[1]. Is the car even failing that test in comparison to a human?
It also isn't clear from watching that video what the safest and therefore desired behavior should be in that situation. A self driving car is obviously not going to prevent all accidents, so it is a question of minimizing potential harm. We don't want a car to aggressively brake whenever someone at a street corner takes a step towards the road. We therefore need to balance the chance of a person stepping into the path of the car with the risk of braking when it is unnecessary and causing a rear end collision. The problem in the linked thread is overaggressive braking so forcing the car to pass a test that rewards overaggressive braking would only make that specific example worse.
That leads back to my point about needing a huge amount of data. You can't just run a car through an obstacle course to know whether it is safer or more dangerous than a human. You need to have it interacting with unpredictable humans and you need to do it repeatedly before you can confidently predict whether it is safer or more dangerous than a human.
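To give a feel for just how slowly that confidence builds, here's a rough sketch with made-up accident counts (not real Tesla or NHTSA data; the helper name is just for illustration):

    # 95% confidence interval on an accident rate, given how many accidents
    # were actually observed over how many miles (exact Poisson interval).
    from scipy.stats import chi2

    def rate_ci(observed, miles, conf=0.95):
        lo = chi2.ppf((1 - conf) / 2, 2 * observed) / 2 if observed else 0.0
        hi = chi2.ppf(1 - (1 - conf) / 2, 2 * (observed + 1)) / 2
        return lo / miles, hi / miles  # accidents per mile

    # same point estimate (1 per million miles), very different certainty
    for observed, miles in ((10, 1e7), (1000, 1e9)):
        lo, hi = rate_ci(observed, miles)
        print(f"{observed:>4} accidents over {miles:.0e} miles: "
              f"[{lo * 1e6:.2f}, {hi * 1e6:.2f}] per million miles")

Ten observed accidents leave you unable to say whether the true rate is half or nearly double your estimate; it takes on the order of a thousand observed events, i.e. something like a billion miles, before the interval gets tight.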
> We also have no control group to tell us what percentage of humans would stop short of that dummy.
IIRC, Subaru and other companies pass these simple emergency braking tests 100% of the time.
That test was a Chinese test IIRC, but the software doesn't change between countries. Similar tests have been done here in the USA by insurance groups to set insurance rates, but there's no government-mandated test for what "emergency braking" really means. Before you sell the feature to the public, let's actually have a government-mandated test similar to that video.
You shouldn't be allowed to call your stuff "autopilot" or "full self driving", or "emergency braking" or "pedestrian avoidance" (or some other set of words) unless you can... you know, avoid pedestrians and emergency brake in a well-controlled test.
Avoiding balloon people is enough. But it's a well-known fact that Tesla repeatedly fails at these simple tests, when other groups (ie: Mobileye group / Mobileye hardware) manage to emergency brake in time.
The issue is that 3rd party non-government groups (ie: IIHS) are the ones running these tests. There's no advocacy group for US consumers as far as I can tell. IIHS is primarily about serving their master (insurance companies).
Don't get me wrong: IIHS is doing good work here. But it's not their job to protect the consumer.
EDIT: I got my sources mixed up. Tesla apparently passed the IIHS test.
>IIRC, Subaru and other companies pass these simple emergency braking tests 100% of the time.
They do not pass 100% of the time unless you have a very narrow definition of "these simple emergency braking tests". No emergency braking system is foolproof.
>But its a well known fact that Tesla repeatedly fails at these simple tests,
You say this while at the same time the source you include has Tesla in the middle tier of results.
Either way, my point is not that Teslas are safe or that they perform well on this test. The point is that this test does not tell you whether a car being driven by Autopilot is safer than a human.
> They do not pass 100% of the time unless you have a very narrow definition of "these simple emergency braking tests"
Let's get them working consistently under well-defined, standard, simple emergency braking tests before worrying about the real world.
Like not hitting a balloon dressed up as a pedestrian during clear skies in sunny weather. I don't care about rainy days until we get the bright / sunny weather figured out.
>Let's get them working consistently under well-defined, standard, simple emergency braking tests before worrying about the real world.
Automatic emergency braking is the exact wrong feature to use for your example. Either the driver sees the pedestrian, stops in time, and the automatic emergency braking is of no use or the driver would have hit the pedestrian and any effort from the automatic system is a benefit. This is the type of feature that should be deployed as soon as possible assuming it is not tuned too aggressively to stop at false alarms.
> we still don't have enough data to prove whether it is safer or more dangerous than an average human driver.
I would point out that given the dangers involved with accidentally turning over a million vehicles into autonomous 5,000 lb missiles, erring on the side of caution seems fine. The benefits are quite low: if the autopilot had been on since inception, between 20 and 100 lives would have been saved (I accept your "several billion miles" number and point out that the average fatality rate is 1.1 per 100 million miles driven, but that is based on averaging in 40-year-old cars with fewer safety features and shrinks every year). The costs could be astronomical: a simultaneous failure (security, mistraining, date bug, whatever) could result in hundreds of thousands or millions of deaths.
Which means there are three errors to consider. (1) Obviously, some things (bugs, exploits) are unknowns and there will always be an inherent risk there. I would say that these risks may forever make self-driving cars too risky. (2) It is difficult to come up with any actual test of driving skills. This is especially true because any test will suddenly become the target, so we have to have the test cover everything. (3) Actual driving errors: both of the above assume that the AI can drive as well as a person. That's obviously difficult to do. And we would need to see a huge improvement to justify adding a new risk factor.
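To sanity-check that "20 to 100 lives" range, here's the back-of-the-envelope using only the numbers already in this thread (illustrative only; "several billion" is taken as 2-9 billion miles):

    # Upper bound on lives saved: fatalities a typical human fleet would have
    # had over the same mileage, assuming Autopilot prevented every one of them.
    FATALITIES_PER_MILE = 1.1 / 100_000_000   # ~1.1 per 100M miles, as above

    for autopilot_miles in (2e9, 5e9, 9e9):   # "several billion" miles
        expected_human_deaths = autopilot_miles * FATALITIES_PER_MILE
        print(f"{autopilot_miles:.0e} miles -> at most ~{expected_human_deaths:.0f} lives saved")

Which is exactly where the 20-100 figure comes from.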
>The benefits are quite low: if the autopilot had been on since inception between 20 and 100 lives would have been saved
This is only the case if you look at the current system as the finished product. The biggest benefit is that it gets us closer to a true self driving system. That would not only save millions of lives, but it would revolutionize logistics and economics of transportation which can in turn reshape society.
>The costs could be astronomical: a simultaneous failure (security, mistraining, date bug, whatever) could result in hundreds of thousands or millions of deaths.
I have no idea what scenarios you are imagining that could lead to "hundreds of thousands or millions of deaths." Almost every Autopilot death in the US makes national news. There is no way hundreds of people could die without there being some type of intervention in the system.
> Autopilot has been running in hundreds of thousands of cars, has driven several billion miles,
Has it really though?
> I have noticed for me at least it started happening after I updated at 2021.4.18.11
This implies that the functionality of AutoPilot is constantly changing, presumably meaning each version has thousands of miles rather than AutoPilot having 'several billion miles'. It doesn't seem like you can trust past performance if the users are to be believed.
My assumption is that OTA updates won't be allowed once this stuff starts requiring certification.
I think the first half of your comment is a pointless semantic debate. The Autopilot system has driven billions of miles. Those miles obviously all aren't equally relevant. The older miles lose value as the hardware or software changes. However those miles don't all become worthless anytime there is any software update.
>My assumption is that OTA updates won't be allowed once this stuff starts requiring certification.
It is unclear whether this would actually be safer or not. I am reminded of how both Tesla[1] and Toyota[2] had similar software problems with their antilock brakes. Both companies had a software fix relatively quickly. Tesla deployed the fix immediately to cars through OTA updates. Toyota issued a voluntary recall meaning its cars wouldn't be updated to the fixed software for months, years, or potentially ever.
it is a bit scary to think we'd have to lobby government to allow any new invention to prove "safety". Tesla has indicated why it needs to enable these features in order to collect data to improve them; it has also shown that driving under autopilot is already reducing accidents (compared both to driving without it and to national stats for all vehicles), and yet you call for an entire ban instead of thinking constructively. with the vision stack there will be improvements to the driver attentiveness checks. that would seem to mitigate abuse of the features, which are clearly meant to be supervised at all times by the operator.
It's quite simple, really: any software that pretends that it can be in control of a car should be subject to the same kind of test that ordinary drivers have to do before being allowed to take to the road. Then, the manufacturer should assume all liability for errors made by their product. Just like a real person would. They can choose to insure or self insure.
Under all circumstances, over all of the miles driven. Tesla is very good at spinning things but in an apples-to-apples comparison they are doing much worse than those ordinary drivers.
Where do you have an apples-to-apples comparison? As far as I know, this does not exist.
NHTSA is investigating Tesla, some 20-30 accidents. If they find that Tesla is doing something seriously dangerous, I'm sure they will force Tesla to take corrective actions.
Cherry picking the one line that can be twisted like that is a bit silly don't you think?
Let's start with this segment of the title:
"Teslas Aren’t Safer On Autopilot"
Then, further down:
"Teslas are not safer with Autopilot on"
"Of the 2.1M miles between accidents in manual mode, 840,000 would be on freeway and 1.26M off of it. For the 3.07M miles in autopilot, 2.9M would be on freeway and just 192,000 off of it. So the manual record is roughly one accident per 1.55M miles off-freeway and per 4.65M miles on-freeway. But the Autopilot record ballparks to 1.1M miles between accidents off freeway and 3.5M on-freeway.
In other words, about 30% longer without an “accident” in manual (with forward collision avoidance on) or TACC than in Autopilot. Instead of being safer with Autopilot, it looks like a Tesla is slightly less safe.
But not a lot less safe. And if the predicted 3:1 ratio of accidents freeway to non-freeway is too high, it might even be about the same. But almost certainly not 1.5 times better as Tesla’s numbers imply."
Which I think is a pretty fair and even handed evaluation of the available data.
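For anyone who wants to check that ballpark, here's my reconstruction of the article's arithmetic (the 3:1 off-freeway vs freeway risk assumption and the mileage splits are the quoted ones; the function name is mine):

    # Back out per-domain miles-between-accidents from a blended figure,
    # assuming off-freeway driving is `ratio` times as accident-prone per mile.
    def per_domain_miles(freeway_miles, off_miles, ratio=3.0):
        freeway_rate = 1.0 / (freeway_miles + ratio * off_miles)   # accidents per freeway mile
        return 1.0 / freeway_rate, 1.0 / (ratio * freeway_rate)    # (freeway, off-freeway)

    # manual (with collision avoidance / TACC): 2.1M miles per accident,
    # split ~840k freeway / 1.26M off-freeway
    print(per_domain_miles(0.84e6, 1.26e6))    # ~ (4.6M, 1.5M) miles per accident

    # Autopilot: 3.07M miles per accident, split ~2.9M freeway / 192k off-freeway
    print(per_domain_miles(2.9e6, 0.192e6))    # ~ (3.5M, 1.2M) miles per accident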
You are comparing Tesla on Autopilot vs. no Autopilot, while I was responding to your claim that Teslas on Autopilot are doing worse than ordinary drivers.
> Tesla (...) are doing much worse than those ordinary drivers.
That was response to:
> Those successfully tested ordinary drivers also kill many people every year.
The not-apples-to-apples comparison indicates that Teslas are half as likely to cause an accident/crash when on Autopilot, and even less likely when not on Autopilot.
I find it also highly likely that Autopilot slightly increases the chances of accident right now.
> ”I believe this is a case where self driving software should be default illegal until proven safe.”
You have to remember that human drivers are extremely dangerous, and cause millions of deaths and serious, life-changing injuries worldwide every year.
Any regulations that hold back the development of self-driving software would likely be counter-productive. It’s a bit like arguing that we should have held back Covid-19 vaccines because a small number of people had blood clots or heart inflammation.
no one in government is qualified to really understand ML and to regulate that stuff. Does the government employ people at the level of Tesla in ML? It's quite the problem... when the best minds are in the tech enclaves, not working for the public interest.
We don't need any understanding of AI/ML in government to effectively regulate the autonomous vehicle industry. Design an appropriate set of tests, make companies run the gauntlet, and only approve the ones that pass. The test criterion is simple: does this software meaningfully and statistically significantly reduce the risk of accident/harm/death compared to the average human driver? Add caveats and conditionals as you wish for conditions/weather etc., but it's fundamentally a black-box test with no knowledge of technical internals required.
And how does one prove something is true "statistically"?
By doing it many times. So many times that you can be statistically confident of the result.
Statistically, there is ~1.4 deaths per 100 MILLION miles driven.
To prove, statistically, that software is as good, it would have to drive at least 1 billion miles. And yes, it would kill 14 people in the process (or more, if it's worse than humans; or fewer, if it's better).
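A stylized illustration of why the mileage has to be that large (assumptions are mine, it ignores every confounder raised elsewhere in the thread, and the function is hypothetical, not any regulator's actual test):

    # Treat fatal crashes as a Poisson process at the ~1.4-per-100M-miles human
    # baseline quoted above, and ask whether a system that truly is 20% safer
    # could even be shown to be safer after N miles.
    from scipy.stats import poisson

    HUMAN_RATE = 1.4e-8   # deaths per mile

    def looks_safer(miles, true_rate, alpha=0.05):
        expected_if_human = HUMAN_RATE * miles
        observed = int(true_rate * miles)   # idealized: exactly the expected count
        # one-sided test against "no safer than a human"
        return poisson.cdf(observed, expected_if_human) < alpha

    for miles in (1e8, 1e9, 1e10):
        print(f"{miles:.0e} miles: conclusive = {looks_safer(miles, 0.8 * HUMAN_RATE)}")

Even the full billion miles, with its ~14 expected deaths, can't distinguish a modest 20% improvement from noise under those generous assumptions; you need something on the order of ten billion.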
Tesla is already doing what you say car companies should be doing, i.e. statistically testing the software.
Except you don't like it and think there's a magic fairy test that will show something "statistically" without driving a statistically significant number of miles.
I love how people think all it takes to excuse potentially lethal undefined behavior is that it's documented somewhere:
> This sounds to be precisely the type of roads where autopilot/autosteer should not be used - it is not limited by entry and exit ramps. If autopilot/autosteer is used in circumstances where it is not for use, then from time to time it is going to perform in an undefined way.
> Using ANY tool outside its operating environment will be dangerous. Put your experience down to user error.
Given that the Tesla can tell the difference between road types (as evidenced by the fact that it applies different speed caps to divided highways, and will only enable Navigate-on-Autopilot on interstates), if Autosteer is only designed for such roads it seems odd that it allows drivers to enable it anywhere it can parse the road lines.
Tesla's own marketing[1] conflicts with their documentation, then:
> Build upon Enhanced Autopilot and order Full Self-Driving Capability on your Tesla. This doubles the number of active cameras from four to eight, enabling full self-driving in almost all circumstances, at what we believe will be a probability of safety at least twice as good as the average human driver. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat. For Superchargers that have automatic charge connection enabled, you will not even need to plug in your vehicle.
The following is almost unchanged on Tesla's current Autopilot page[2]:
> All you will need to do is get in and tell your car where to go. If you don’t say anything, the car will look at your calendar and take you there as the assumed destination or just home if nothing is on the calendar. Your Tesla will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed. When you arrive at your destination, simply step out at the entrance and your car will enter park seek mode, automatically search for a spot and park itself. A tap on your phone summons it back to you.
Selling people self-driving "capability" is like selling them "access" to healthcare. Great for marketing because it completely obscures how hard if not impossible it can be to actually utilize the thing you have the "access" or "capability" to, or how available it is to you in the first place.
This is all written in terms of future capabilities of the system that it doesn’t have today, but that’s extremely unclear to many people who buy Teslas.
Tesla represents that the current cars they are selling will achieve these functions with a software update to be delivered at some future date. I do not personally believe that to be true.
To be clear, Tesla is representing that a car you can buy today will one day drive itself. It would be very hard to represent that a car sold today will someday magically develop fusion capabilities. Again, I don’t happen to believe that current Tesla models will achieve what is stated in the marketing copy.
What Tesla is saying is even weaker than that. They're not saying they'll ever provide such software. They're really just selling you a car with the claim, "its hardware is sufficient for self-driving; now all you need to add is the software." Or to put it another way, "if we ever release self-driving software, it'll work on your car" (or otherwise presumably they'd shoulder the cost for being wrong about this guarantee).
That's still more than they could say for fusion drives, but it doesn't imply they will ever provide you with any self-driving software. Just that they believe it's possible to write such software.
Well, there’s nothing in their marketing copy that says that any of this is in doubt - it’s all in terms of what the car will do, not what it might do someday. This is all associated with Tesla’s $10k “full self driving” package that they are selling today with the promise of future capabilities.
Put another way, there are a lot of people who are going to be really pissed when they realize that their car will never do those things, and they paid $10k for practically nothing (ok, technically they get autopilot and the ability for the car to stop at red lights today).
> Well, there’s nothing in their marketing copy that says that any of this is in doubt
I think this bit is meant to cover their rear: "The future use of these features without supervision is dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions."
Translation: we don't/can't actually guarantee this will ever happen, and if it does, it may depend on where you're located.
Exactly. If Tesla and Musk didn’t aggressively market their cruise control as "Full self driving, only a few months from Level 5!" then nobody would be criticising them.
The marketing is certainly a big part of it, but I think it's a mistake to think there aren't significant issues here that would remain even with better marketing.
One big problem is the sheer unpredictability of what's going to happen the next moment. If you turn on cruise control on the wrong street, its behavior is still going to be entirely predictable. You're simply not going to get suddenly erratic behavior just because you pressed the cruise-control button. Autopilot on the other hand seems to actively make the situation far more unpredictable than it would otherwise be, increasing the dangers people find themselves in. That's just horrible by any standards I can consider sane, no matter how it's marketed.
Another (somewhat related, somewhat distinct) problem is that a system that works (say) 99% of the time and fails 1% of the time can easily be more dangerous than one that works 20% of the time and fails 80% of the time. (Obviously I made up the numbers, but adjust accordingly.) If you have to constantly correct something, then you're used to it; it's second nature to you. But if you might get used to it working too well... then as a human you won't be necessarily ready for the rare cases, especially when they're more difficult to predict. So an autopilot that works 'better' than something like cruise control can easily end up being more dangerous because of this, and it would be a mistake to hold them to identical standards.
You’re right. This also is made even worse with OTA updates since a user might have learned one type of situation response only to learn the hard way that it just changed.
So I guess I have to take back my earlier comment then, they’d still be criticised!
I’m not hating on Tesla, they make beautiful cars and tremendously fun to drive. If I had $50k to spend frivolously I’d buy one in a heartbeat (without “FSD”!).
Well it seems that the commenters on the Tesla Motors Club really should not put any trust in a beta product feature and must keep all eyes on the road at all times when driving.
Seems like Tesla manufactured a reality distortion field with FSD to market it like a fully self-driving vehicle, when that is nowhere near the case, as it is still admittedly Level 2 cruise control. Not the over-promised 'Level 5' FSD.
I'm not going into the pricing of this, since that is a complete scam and on this, Tesla really took their customers for a self-driving ride for years on a beta product feature that still doesn't work as advertised.
“Hi mate. I had this now a few times and I learn to expect it and step the pedal ahead. Which software version you are at the moment? I have noticed for me at least it started happening after I updated at 2021.4.18.11.”
Discussed the same way PC gamers might talk about a graphical glitch and whether or not the latest GPU drivers improved things or made it worse. But with a few thousand pounds of metal and bystanders who don't know they're in the game.
Human drivers aren't perfect, but their failure rate is known and predictable. The right software update, though, could have a bug that turns the whole fleet of autonomous cars into murder tanks overnight, invalidating all prior collected safety statistics and expectations.
There's nothing that could make the whole population of human drivers unsafe all at once, we know the upper bound of the destruction we can cause. We've spent a century earning that predictability. With software, we'll never have it.
The funny thing is I have a similar issue with an update on my Model S - I know the exact spots where phantom braking engages and just start turning off Autopilot in those spots.
For braking: as a following car, you're technically following too closely if you can't come to a complete stop when the car in front of you does.
Every car is capable of doing this; every car is one transmission / differential / electric motor lockup away from stopping its drive wheels immediately.
In an internal combustion engine there are a few other possibilities, including a thrown piston or rod, or a significant lack of oil.
>Every car is capable of doing this; every car is one transmission / differential / electric motor lockup away from stopping its drive wheels immediately.
yeah, but these risks of mechanical lockup are usually better mitigated against because they're an understood risk.
Driveshaft lockups can occur, generally because an appliance on either side of them failed -- certain manufacturers mitigate this by very simply using mechanisms that are unlikely to ever break, and some manufacturers go an extra step and use BreakBolt style fasteners that will destroy themselves in the event of a lockup that torques them harshly.
An engine seize event doesn't stall or stop a vehicle that uses a CVT (most low-end 'automatic' cars on the market), it just creates additional engine braking pressure -- and before CVTs were in common use we used traditional torque converter automatics that failed similarly.. this is one of the major features of a hydro-mechanical linkage element like a torque converter.
Many CV axle designs now incorporate straps to limit movement during a failure, to prevent wheel lock or steering lock.
New style manuals can automatically engage the clutch during an engine seize event, limiting the physical reaction from the car during the event.
Shit still happens, wheel locks are uncommon but they still exist; but I think it's safe to say that people are waiting to hear what Tesla's engineering answer will be to the whole 'suddenly-stopped-car' problem.
Brake testing and tailgating are examples of aggressive driving, and both may be punishable as vehicular assault, careless or reckless driving, or vehicular homicide if death occurs.
Because it generally works really well, and most of the phantom braking is generally 5-10 mph drops that aren't as dramatic as the one in the forum post being referenced.
Twice in the last year (>13k miles total). One of them involved a truck pulling a fifth wheel that crept into my lane, but not far enough to warrant an emergency braking response.
The other was a similar situation involving a vehicle merging in a short merge lane. I think the car was anticipating that the person was merging due to the short space that they had, but in reality they were slowing to wait on me to pass.
Both involved the car jumping immediately to a full-on panic-braking response when no collision was imminent or likely. To be clear, I don't know that these involved dropping >10 mph either... but it definitely would have if I hadn't intervened.
By phantom braking, they mean a drop of 10-20 mph to add a safety margin in the potentially hazardous scenario of an oncoming car drifting into your lane of travel.
It is scary to people because the Tesla makes an unsettling alert noise at the same time the vehicle speed changes slightly.
The only way this would result in an accident is with someone else speeding and tailgating. I'll get my tiny violin and 360 dashcam ready for that one.
20 mph is not a "slight" change. That's easily a full third of your speed on a freeway (or a quarter if you're speeding), literally the difference between a street and a freeway. And that's easily half your kinetic energy you'd be losing.
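Since kinetic energy scales with the square of speed, a quick sketch bears that out (the helper name is just for illustration):

    # Fraction of kinetic energy shed by a 20 mph drop (KE ~ v^2).
    def ke_fraction_lost(v_initial_mph, v_final_mph):
        return 1 - (v_final_mph / v_initial_mph) ** 2

    print(f"60 -> 40 mph: {ke_fraction_lost(60, 40):.0%} of kinetic energy gone")   # ~56%
    print(f"80 -> 60 mph: {ke_fraction_lost(80, 60):.0%} of kinetic energy gone")   # ~44%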
> The only way this would result in an accident is with someone else speeding and tailgating
Surely you can't pretend you've thought of every possible scenario this could turn into an accident?
If nothing else, people can momentarily miss the fact that the car in front of them randomly slowed down 20mph, especially when other cars are maintaining their speeds. Doubly so if they're glancing somewhere else to change lanes or watch out for other dangers or whatever. A million things can go wrong that I can't pretend to think of, and I'm shocked you just so confidently ruled them all out. As I see it, just because some of us are mortals, that doesn't excuse letting small mistakes bare our mortality in the middle of a highway.
> The only way this would result in an accident is with someone else speeding and tailgating.
Let's examine a Tesla traveling at 60 mph that starts full-on braking down to 40 mph, where the person behind them (1) is traveling at the same 60 mph, (2) is maintaining the recommended separation for 60 mph, and (3) has instant braking (bonus condition). The driver behind has 2.5 seconds to react to avoid a collision. The average reaction time for a driver who is expecting braking soon is 1.5 seconds, but this "phantom braking" deceleration comes out of nowhere. And the average car doesn't have instantaneous speed changes.
In fact, an F-150 with similar automatic, computer-activated braking would be a coin flip to avoid a collision, certainly a matter of a couple of feet.
I've never heard of a better case that even these "safety features" should be disallowed on public roads.
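To put rough numbers on how fast the margin disappears (my own simplification, not the poster's exact model: treat the lead car's 60-to-40 drop as effectively instantaneous, which is the worst case for the follower):

    # Gap eaten while the following driver has not yet reacted: they keep
    # closing at the full 20 mph speed difference until they touch the brakes.
    MPH_TO_MS = 0.44704

    closing_speed = (60 - 40) * MPH_TO_MS    # ~8.9 m/s
    two_second_gap = 60 * MPH_TO_MS * 2      # ~54 m, the textbook gap at 60 mph

    for t_react in (1.0, 1.5, 2.0, 2.5):
        gap_lost = closing_speed * t_react
        print(f"react in {t_react:.1f}s: ~{gap_lost:4.1f} m of a "
              f"{two_second_gap:.0f} m gap gone before braking even starts")

And that's with the textbook two-second gap; at the shorter gaps people actually keep in dense traffic, a 2.5-second reaction has already eaten most of the cushion.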
Maybe the solution is an ecosystem of supplemental AI that you can pick and choose from to compensate for the undesirable characteristics of Autopilot. Call it “Driveware”.
I'm making a recommender system for suites of disparately-designed, disconnected driving AIs that seem to work well with each other.
So you put in that you have a Tesla on FSD update 2021.7.20.123 and the supplemental AIs you want to be sure to use, and my AI will recommend you 3-5 other AI systems that receive good reviews with your existing choices. Thumbs up if you like it, thumbs down if not, if your car is totaled, that's an auto-downvote. Looking for funding.
There isn't really a 'Tesla community' though, it's called 'Traffic' and we're all part of it, including those people that do not have Teslas, such as pedestrians, cyclists, motorcyclists and other vehicle operators. And that's why everybody should take great care to keep each other safe, not just those people that are currently enclosed in 2+ ton steel enclosures.
My point is that government regulation of auto steer, driver monitoring, and other adas features won’t help with the more dangerous phantom braking issues. And these aren’t talked about much at all with Tesla or any other cars that may have similar issues.
Agreed. I got rid of a late model Mercedes on account of phantom braking, which I considered endangering other drivers on the road as well as myself (and possibly other occupants). This crap should be illegal until they have it working to a degree where you don't have a near accident on account of your 'safety' feature every few thousand km.
I can’t tell if I crashed my Tesla or if the Tesla crashed me. I don’t have any way of figuring out what happened. One second I was driving down the road, the next I was running parallel into the guard rail. It happened when another car merged onto the highway. I was bullish on the technology, and I still have PTSD from the wreck, which happened in 2019.
Tesla doesn’t just peddle super car capabilities with FSD, but just recently they seem to support the idea it can travel through dangerous flooded waters. [1]
Hmm I do remember Musk joking awhile back about someone using their car as a boat. I can see why the above commenter might perceive that as real advertising since Tesla's advertising truly is word of mouth using social media.
More distressing is the above commenter described a nightmare scenario which went unaddressed. I guess that's your right. I wonder what you think about that part of their comment and whether it is "believable" or not. I find many aspects of messaging coming from Tesla, either the company itself or supporters, as unbelievable.
“We def don't recommended this, but Model S floats well enough to turn it into a boat for short periods of time. Thrust via wheel rotation.“
“If curious abt TSWLM car, am still planning to do a sports sub car that can drive on roads. Just a side project. Limited market potential :)“ Musk, 10:29 am 6/19/16
The Spy Who Loved Me car, which is a reference to an amphibious car. There is more in this article too.
Safety regulations for E/E systems (which includes electrical and software components) in automotive are done via "self-certification" and adjudicated via post-facto liability suits. There's no FAA or FDA like regulator for automotive currently. As such, I'm not sure who would/could step in and regulate Tesla currently.
the attitude in a lot of the comments there is bizarre...
in most of the world i'd hope an accident caused by using these features is the driver's fault for switching them on and not 'driving with due care and attention'
all of the stuff about it being "safe if" misses the core concept of what safety means if you take it seriously, which you always should when driving. safety and stability aren't something you get by ignoring edge cases, it's about handling them so robustly you can't have problems.
people who have never experienced problems can /maybe/ justify using these features... but what they say on the tin should be enough to not want to use them.
giving beta features to regular consumers is fine. doing it when those features can cause injury or death is absolutely abhorrent behaviour that should be prosecuted.
allowing software updates for something like this is pretty counter to safety practises too, and the QA process clearly has too many problems to be reliable... not that it should be relied upon anyway on the timescales involved here.
too many of these tech companies 'pushing the limits' are doing so by ignoring the law, and it costs lives. when individuals do this they are fined and/or imprisoned, and rightly so.
the idea that a company faces no accountability is not a failing of the law, but a failing of those whose duty it is to enforce it.
> in most of the world i'd hope an accident caused by using these features is the driver's fault for switching them on and not 'driving with due care and attention'
If I can die because a driver has a big red "kill" button in their car, I'm not going to blame them for pressing it. I'm going to blame the manufacturer and regulators. That button is just too damn dangerous for humans.
Similarly, there's a reason we don't put a "nuke" button on the president's desk directly guarding missiles and then blame him for pressing it at the wrong time. Some things are just too damn dangerous to put in a single human's control, let alone so easily.
Sure, I'd also blame the driver if they pressed it intentionally. I can imagine it happening by accident. I know I've even accidentally hit the gear shift on a car as a passenger.
Prior to the last update my M3 w/radar would emergency brake anytime a semi truck got even close to the line. Not safe at all when doing 85 down I5. They did fix it but now I don’t trust it at all.
Yesterday while on Navigate on Autopilot it veered to the left on the off-ramp and almost crossed the line. It took a lot of force to stop the wheel. They’re making it really aggressive, which can be good, but it shouldn’t rip the wheel out of your hands.
I just ignore my autopilot completely and enjoy my Tesla for the fine car it is. Autopilot feels like being on a roller coaster put together by drunk carnies.
I have two kids, currently 3 and 6. I've shown them some of the videos of Autopilot freaking out and doing nonsensical things (like "braking at the moon" because it can't tell the difference between a low, yellow moon and a yellow light).
Both of them have quite a bit of experience with Power Wheels and other random wheeled vehicles around the property.
I think I'd trust my six-year-old on the road over Autopilot, given "weird things." At least her vision processing system is capable of going "... huh, that's weird..." instead of "I know what that thing is!" when something is nonsensical. And she has object permanence, combined with a sense of road speed.
We occasionally see weird optical illusions and have to work through them, but she understands the difference between a stoplight and random lights in the night sky (which, around here, are often enough crop dusters).
Maybe it is my age and millennials or the generation after will trust self-driving cars. I’ve seen computers fail so many times in my lifetime that I just can’t go 70 and trust the car to stay in the lines. Straight is ok but when the curves come…
They are talking about Traffic Aware Cruise Control (TACC), not actually AutoPilot as most people think would be the case.
I love our Tesla, but the TACC is not that great; the most annoying part is when passing cars on the freeway it sometimes all of a sudden thinks the car next to me that I'm passing is in the same lane, and then phantom brakes while trying to pass. It is disappointing.
> it sometimes all of a sudden thinks the car next to me that I'm passing is in the same lane, and then phantom brakes while trying to pass
For reference, I've had a Honda with adaptive cruise control for about 10 months, and it's done that once. But I don't think it is designed to brake that hard no matter what.
I had a late model Mercedes with radar and automatic emergency braking. After it tried to kill me twice I sold it. The dealer checked the system and pronounced it A-OK, which was enough reason for me to get rid of the car; if it had been pronounced broken I could have lived with it. But if that's the kind of quality they ship, then I don't want to be responsible for it.
That makes it even more concerning, because TACC is a subset of Autopilot (it's Autopilot without autosteer), and it's the easier subset (many modern cars have their own versions of TACC and they work very well, speaking from personal experience).
As I understand it, phantom braking is a result of inconsistent sensor inputs from the forward radar, specifically with things like highway underpasses. This is one of the reasons Tesla is moving to a full vision stack. Currently the new vision-only update is only out to FSD beta testers. Hopefully the phantom braking issue will go away once highway Autopilot is updated.
No, it's because their world model stability & consistency, and sensor input integration into the model is shit.
Of course it's not easy. But nobody forced them to release this half-baked steaming pile.
Full visual won't fix the fundamental problem. At best it decreases the cost and energy requirement (but then they'll likely use that for more processing anyway, to try to squeeze more information out of the remaining sensors).
Well, unwanted auto-braking beats failure to brake.
I can see the day coming when a piece of trash falls on a road, and the car seeing it auto brakes, and the car behind it auto-brakes, all the way back to some antique without auto-brake, which rear-ends the last automated car and takes the blame.
But sometimes it doesn't end with 'taking the blame'. Sometimes it ends up with a hospital ride or a funeral, sometimes more than one. Unwanted auto braking is unacceptable and failure to brake is unacceptable. You don't get to choose one bug over the other.
and you have to pay $199 monthly for the self-driving feature, or $10,000 without a subscription [1]. Couldn't find the EULA for the feature; I wonder what the terms are.
I refuse to use it, and even the cruise control is downright annoying. Braking at the entrance of a corner, in a 55 zone, that could easily be taken in this Model Y at 90-100.
Of course, it’s not my 2010 4Runner, which would hit the brakes in a corner if it detected a wheel slipping, which would pitch me into an outside lane. I about had three head-ons before I got rid of it.
The complaints about Tesla Autopilot reminds me of the complaints of Wikipedia in its early days. People were saying you can never rely on Wikipedia and should use a 'real' encyclopedia. But Wikipedia mistakes got corrected and it got better over time; as will Autopilot.
If artificial intelligence were at all smart it would be a partner & be able to communicate with us. But instead we have these ridiculous black box machines, which purport to replace not participate in intelligence.
Tesla have rolled back on a few things, e.g. passenger monitoring, so hopefully we will get standard cruise control soon too. Sometimes I want to take responsibility for observing side traffic and let the car just handle front and back only.
I know these kinds of situations are frightening, but as long as AutoPilot is significantly safer, statistics-wise, over similar driving, it's hard to argue otherwise.
The AutoPilot simply cannot be statistically significantly safer. Even if it had been enabled on every car that has it, and on the whole time, it could have saved only a few lives: not enough to be statistically significant even if it dropped the count to zero.
Of course take it with a grain of salt since this is Tesla as the source, but they show significantly fewer accidents when using AutoPilot. Of course, this is missing controls on the data, so I don't expect it to be as good as they state.
I wouldn't mind the electric powertrain but the giant ipad display and gadgets would piss me off. I don't want the distractions. My ideal anti-tesla could include advanced safety features and driver assistance as long as they stay out of the way and enhance the driving experience rather than replace it. If people don't want to drive they should take public transport.
I would love, for once, for someone in one of these anti-Tesla threads to explain how we will ever get to L5 self-driving without the equivalent of TSLA's FSD program.
OK. It is far more likely to happen by the programs being executed by companies like Waymo and Cruise, who are deploying vehicles with far more sensors and compute capability than Tesla, operated by trained safety drivers. Waymo has already deployed cars logging Level 4, which are giving rides to people in Phoenix every day. Tesla, meanwhile, logged ZERO test miles at anything above Level 2 in 2020.
As I was driving home the other day, heading west down the highway, my Tesla Model 3 duly warned me that its front-facing cameras were blinded by the setting sun and temporarily non-functional. If the car had been operating in Autopilot, it would have disengaged and put me back in command. If it had been operating in Level 3 or higher autonomy, perhaps driving my kids home from school, I have no idea what would have happened. I do not believe Tesla will get to Level 4 or 5 autonomy with the current cars on the road, despite what Elon has promised.
>explain how we will ever get to L5 self-driving without the equivalent of TSLA's FSD program
Is it too hopeful to think that it won't take a lot of human blood sacrifice to get there?
Is it cynical for me to think that if it does require human blood, then maybe I don't want to go down that research path -- and at best I don't want to be in the testing group?
A million people die every year, almost all due to human error. We need this technology advancement as soon as possible. Also, while you can see some accidents with Tesla Autopilot, the crash stats seem to indicate that Tesla cars are doing a great job preventing accidents.
> the crash stats seem to indicate that Tesla cars are doing a great job preventing accidents.
No, they don't. Tesla won't release enough data to allow anyone else to evaluate it, so you're stuck with their bulk statistics, which generally speaking compare middle aged (safe) drivers, in luxury cars (safe), in well marked road conditions (safe) during good weather (safe) with "all accidents, for all vehicles, for all drivers, in all road conditions, in all weather."
I’m not sure how we test such a feature well enough to deploy it widely.
But I guess the thing is, AutoPilot is using a human as the backup. The human is saving the car from making mistakes. The very thing that is supposedly worse than the car is the only thing capable of making certain decisions well enough when things get a bit dicey.
I appreciate the enthusiasts paying to be beta testers. I hope to profit from their experience in a few years when a car with a real autopilot is available. I just wish they didn’t have to endanger innocent others in the process.
The problem is that "a few years" is way too optimistic. Fully autonomous driving requires a level of situational awareness that computers can't yet touch. Everybody looks at the progress we've made so far and assumes it will continue at the same pace until all the problems are gone, but I don't see how that's possible. It's like assuming Moore's Law would have given us 10Ghz processors by now.
It is completely acceptable for those who want to pay to be beta testers to take risks. I would rather we have fewer bicyclists killed by people sleeping while their “autopilot” drives.
Are they testing it in private 'beta' streets, with 'beta' people who won't be killed? Or is this person going to be in opposing traffic on my nearby freeway?
Another end user mistaking the marketing hype behind 'AutoPilot' as 'Full Self Driving' (e.g. that Level 5 driving feature).
As other users in that forum thread point out, the current software is more like enhanced cruise control, lane keeping, and automatic braking assist. It is also //only intended// for //limited access highways and freeways//, e.g. onramp/offramp limited.
If the enhanced cruise control in my non-Tesla routinely slammed on the brakes out of nowhere, I’d be pissed about that too.
Tesla’s marketing is optimized to convince users that the software is better than it is, and the software’s failure modes align horribly with the worst case: when things are working, everything is smooth, but without warning the car will suddenly decide to make a dangerous active change (swerving, braking, etc) in a way that can kill a driver who is even partially distracted.
Put the above together and it starts to look like Elon has created a perfect storm for traffic accidents.
I'm not justifying how Tesla's assistance works as I've driven a rental and it really is glorified cruise control right now.
That being said, I have a Honda with assistive braking and it is terrible. Shadows, pot holes, ghosts all seem to cause it to brake and not in predictable ways which is frustrating. I only leave it on, because I imagine one day I won't quite react fast enough to someone in front of me and it will save me the pain of rear ending someone.
But it definitely almost caused an accident when it slammed on the brakes on a highway off-ramp and got me rightfully honked at by the car behind me.
What's worse is I'm not even sure if mine can receive updates at all. Maybe at the dealership.
I have a Subaru with assistive braking (EyeSight) and it does the exact same thing on one very specific off-ramp. My best guess is that it's detecting the big yellow "turn" sign right before the loop; it's about the right size to look like the back of a car to the primitive detection system.
That's not a solution for you, just thought I would share commiseration. Your experience with it is slightly worse than mine has been but I leave mine on for the exact same reason.
IMO, Tesla is a bit worse, but I agree with this in principle. I've had other cars phantom brake in similar situations.
The worst phantom brakes that I've had from AP were cases where a car really was creeping into my lane a bit. It's just that hard braking wasn't the right fix for the situation, as a slight swerve was safer.
There's no "slam" on the brakes; you're being sensational.
It's a slowdown, typically within the range of regeneration, not even touching the brakes, and it's nowhere close to squealing tires or the dramatic scenario you describe.
I own a Tesla Model 3 as my daily driver. Phantom braking works exactly the way I described it. And it's not hard to understand why: the intent of the system (like all other emergency braking systems on modern cars) is to avoid a collision with a perceived object in front of the car. A gentle regenerative slowdown wouldn't even make sense in that scenario.
Phantom braking behaves the way a human driver would if they were attempting to stop short of an impending front collision. It just so happens that the Tesla is happy to engage that behavior in all kinds of fun scenarios, like if somebody leans an inch into your lane, or a bridge has a shadow it doesn't like, or it decides the potholes are a force field in front of your car.
I’ve experienced it on highways, but as I noted in a parallel comment, if Tesla thinks Autosteer isn’t viable on non-highways, it’s odd that they make it available. They have no trouble identifying road type and using it to determine speed restrictions for autosteer, and no problem disabling autosteer if they can’t ID the lines.
Maybe the phantom braking is so harsh in your car it's giving you amnesia, because it's a well-understood Tesla failure mode on the Teslamotors subreddit, TMC, and every other Tesla owner forum I've seen.
They need to compare against cars in the same class, not the whole industry. Also, I'm not sure deaths per mile is the right metric. What matters is the accident rate while the system is actually engaged.
Also, the data collection is inherently skewed: people drive manually in the more dangerous situations, so you pretty much get statistics on driving in a straight line for hundreds of kilometers, excluding the actually problematic intersections, pedestrians crossing in front of or behind the car, and so on.
Yes, autopilot is just cruise control. No other cruise control I've used before violently brakes on the highway for no reason. I don't think expecting it to not do that is unreasonable.
That's the exact reason a German court barred them from using that very label in advertising material last year. They need to start calling it what it is: driver assistance.
If by "mistaking marketing hype" you mean "believing the actual words that they were told", I guess. But I think that goes well beyond the sort of puffery [1] that is legal because nobody believes it. Indeed, I'd argue that Tesla's valuation is precisely based on people believing it.
To be fair, it is also referred to as such in the car UI and in the disclosures that appear before you enable it (lane keeping vs. auto steer isn't too different in my opinion; both terms imply a regular competent driver, not some "always perfect" driver).
In other cars, similar features are restricted to when 1) the vehicle's sensors can discern clear lane lines and 2) the vehicle is traveling above a certain speed. Otherwise they're automatically turned off.
This seems like a pretty easy way to ensure the features are not misused. If you're going under a certain speed, or lines aren't painted (or they're unclear), you're more likely not on the type of road, or in the kind of situation, recommended for these features.
Also something to remember: actual pilots who use real autopilot in planes have countless hours of training in how to properly integrate its use into the operation of the plane.
In Teslas too, just not the speed part. I don't think you can assume highways are always above 45 miles an hour, especially if you've ever been on an interstate with lanes closed during rush hour.
That's what I mean though: these features are not really ideal for traffic that isn't flowing smoothly. Rush hour with a lane closed is a scenario where the driver should be fully aware & in control. Behavior is often unpredictable in such cases with tightly spaced cars and frustrated drivers.
I've been in such traffic, and the only issue is that TACC's 1-2 car spacing lets more people into my lane than I usually allow, but otherwise it does fine. Tesla's TACC (compared to Honda's) is even better in this sense, since it often accelerates at the same time as the car in front of it, while Honda's will wait for a one-car gap before continuing on its own.
Ok, fair, whatever. But can one use a Tesla with normal radar cruise control? Systems like that have been super robust for over a decade, if not two. And can one disable all these proactive safety features that don't seem to work?
Yes, you can, that’s called traffic aware cruise control (TACC), and is activated with a single pull on the stick. TACC absolutely does exactly what is mentioned in this article - it suffers from phantom braking where it hauls the car to a stop suddenly in the middle of the highway. Yes, other cars with radar cruise control don’t seem to do this.
No, it's not. I know exactly what my car is going to do when I turn on cruise control no matter the circumstances. It's predictable as hell. With autopilot you have no clue what's going to happen a split second later if you use it on an unblessed street.
Falsely advertised and packaged as 'Full Self Driving' by Tesla for years, but recently admitted to be Level 2 cruise control rather than the 'ambitious' so-called Level 5 that was repeatedly promised by last year.
This is more like 'Fools Self Driving' I'm afraid. Especially with messing around with beta software that doesn't function properly as advertised.
It's fascinating (and really quite unfortunate) how often articles driven by what I'd call "hate upvotes" make it to the front page of HN. We're awash in links to pessimistic anecdotes that suit a narrative driven by dislike of a certain company or person, but which don't provide any useful knowledge. I think our time is probably better spent on more positive and insightful articles.
“Pessimists are usually right and optimists are usually wrong but all the great changes have been accomplished by optimists.” - Thomas Friedman
Your comment is not substantial. If you have a reason why Tesla should not be responsible for selling half-baked software as "Full Self Driving", then please share it.
Otherwise your assumption that it's just a hate-vote is just your assumption.
For me personally, any article that helps to define the risk of driving on the same roads as Teslas is important. I didn't opt in to this greed-fueled lunacy and don't want to die because of it.
This is a strange take. It's preferable to be exposed to a pocket of criticism to get a full picture. In the wider world, Musk can do no wrong, at least in his business endeavours.