Though censored in some news reports, the vehicle has stickers with the text www.bmw.com/autonomousdriving. Apparently an autonomous driving test vehicle, although it is unclear whether autonomous driving was active at the time.
One BMW spokesperson said: "We are currently investigating the exact circumstances. Naturally, we are in close contact with the authorities. One thing is already clear: The BMW vehicle involved was not an autonomous driving vehicle."
Maybe it was a vehicle with additional equipment for collecting data/imagery that was still completely driven by a human driver. That would explain the stickers.
My understanding is that in most of the civilized world (not in the US), the norm for media is not to report defamatory information before its veracity has been established.
It’s a good thing when media shows some restraint not to make unsubstantiated claims that are impossible to fully retract in the public consciousness.
Note: This doesn’t mean media shouldn’t report on anything short of a conviction, but it probably shouldn’t report in a defamatory way on something that may not even be reportable news at all (i.e. an ordinary fatal car crash).
I’m sorry, but you’re blowing this out of proportion.
Wanting the media to include the brand of the car involved in the crash (a totally normal thing to do) is not equivalent to withholding the names of murder suspects. The brand is a fact, regardless of whether the self-driving features caused the crash.
If you read carefully, they are talking about the difference in responsibility between Level 2 and Level 3 autonomous driving, with the explicit remark that the driver is fully responsible in a Level 2 vehicle and that the vehicle in question is Level 2, not Level 3. They fully avoid saying whether the autonomous driving was active at the time of the accident or had anything to do with it. It really reads like something has gone wrong and they want to avoid bad publicity and shift the blame onto the driver by tiptoeing around the real issue of whether their autonomous driving software caused the accident. Instead they are trying to redefine the general notion of "autonomous" to mean "Level 3", which their vehicle isn't.
Translation of the part of the article I'm talking about:
BMW said on Tuesday: "The car has a Level 2 driver assistance system, which is already shipped today in normal consumer vehicles and which supports the driver if they so desire. With Level 2 vehicles, the driver is generally always responsible." Only with highly automated vehicles of Level 3 may the driver, under certain circumstances, delegate driving fully to the vehicle.
BMW said: "At the moment we are researching the exact circumstances. Of course we are in a close exchange with the authorities. But it is already certain: The BMW involved was not an autonomously driving vehicle".
EDIT: They have existed since the 1970s on closed tracks. I remember that during my work in the 1990s at DASA (DaimlerChrysler Aerospace) they had them on the airfield in Friedrichshafen.
You can buy a ride on the Bezos Vomit Comet up to the Kármán line, but that does not make you a martian - or that contraption an orbital rocket.
There is no self-driving car in existence today that can do what most people understand by driving. Waymo is just a simulation of what a self driving car should do, achieved by painstakingly mapping every square inch of street in a very limited area. It's a ruse achieved by brute-forcing, not self-driving. Not even their makers trust it to be self driving, assigning back-up human drivers in unpredictable service areas.
Serious question: will we demand perfection from autonomous cars before we allow them out of “test” status?
How many autonomous miles without an accident will be sufficient to prove safety?
I am not diminishing this tragedy; I can only imagine the grief of the families involved and my heart goes out to them.
I am also NOT making the “have to break a few eggs to make an omelette“ argument; clearly something went wrong and I look forward to understanding what led to this accident so we can avoid it in the future. The article left out lots of facts like whether the car was truly autonomous; whether it was in autonomous mode; whether it had been approved for that; whether the driver failed to supervise it appropriately, etc.
Note that the Concorde had a perfect safety record and was the safest (statistically) commercial airframe until its one and only accident, after which it was the least safe airframe [1] by passenger deaths per air mile.
I own a Tesla and love it but the autopilot function is not even close to ready for autonomous driving, in my opinion. I love autopilot and use it every time I drive but it has trouble with even simple problems like two lanes merging down into one and the center line disappearing. I don’t have direct experience with other autonomous cars and am aware of the criticisms of Tesla’s non LIDAR approach.
“Autonomous miles without accident” is not the only metric. What matters is also how the algorithm fails.
To give an example: if the autonomous miles-without-accident figure is two times better than humans', but every accident tends to involve dark-skinned children and women, then the algorithm can have better safety statistics but still be rejected by humans.
Safety is a high dimensional problem that cannot be collapsed into one metric.
So if there are fewer deaths overall, including among black kids, but all the deaths are black kids, that's not ok? I get your larger point but this one seems like a really poorly reasoned example.
If black kids are dying because the car accelerates in school zones and swerves directly into black kids at high speeds, you can imagine that would obviously be not okay no matter what the statistics are.
My point is that a single metric like “deaths overall” is insufficient. The failure modes need to be reasonable and sympathetic to a human.
The premise of GP may have been slightly different. If a safety improvement reduces child pedestrian deaths by 100% in groups A, B, C, and D and by 50% in group E, that safety improvement should be welcomed, even though all child pedestrian deaths are now in group E (even though that group is also better off).
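To make that arithmetic concrete, here is a minimal illustrative sketch in Python (the group labels and baseline numbers are invented purely for the example):

    # Hypothetical baseline: child pedestrian deaths per year, by group (made-up numbers)
    baseline = {"A": 10, "B": 10, "C": 10, "D": 10, "E": 10}

    # Hypothetical safety improvement: 100% reduction in groups A-D, 50% in group E
    reduction = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0, "E": 0.5}

    after = {g: baseline[g] * (1 - reduction[g]) for g in baseline}

    print(sum(baseline.values()))  # 50 deaths before the improvement
    print(sum(after.values()))     # 5 deaths after -- fewer overall, but all of them in group E

Every group is better off, yet all remaining deaths are now concentrated in one group, which is exactly the tension the thread is arguing about.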
Perhaps it could be improved by being explicitly programmed to kill 25% as many children in groups A-D vs the original baseline. Then let activists argue why that figure should be doubled.
That's what "equality of outcome" is; it's not about better it's about equality. When equality of outcome is implemented by raising everyone to a better outcome then it's a fine thing but that's not inherent to the underlying concept.
Testing shows that Teslas specifically have trouble identifying young (short) children as people to be avoided[0]. To your point, if Tesla were able to prove in the future that they actually do result in fewer deaths, but that the deaths which remain were heavily concentrated in young children, I think people would have a serious problem with that, regardless of whether the overall count of children dying was higher or lower. It's irrational, but that's how humans are.
As you may have unintentionally alluded to with your reference to Concorde, there is no hard number or line. It's purely a public perception thing. There is no definable threshold that is "good enough".
The tech needs to stick around long enough and proliferate to a large enough cross section of society to become mundane (something Concorde failed to do). The people who are not satisfied will never be satisfied by an improvement in the numbers. What will change their minds is when it becomes clear that nobody else cares and they're just yelling at clouds. Realistically what that means is a generation has to come of age after the tech has proliferated and accept it by default and don't see it as pushing the limits of what's reasonable. From there the acceptance gets back-ported.
> Serious question: will we demand perfection from autonomous cars before we allow them out of “test” status?
I don't think anyone is demanding perfection, but they are quite reasonably demanding that autonomous driving doesn't cause any accidents that wouldn't have happened if a human was driving (not that this is clear from the accident in the article).
The bar is simply 'better than a person'. If a car apparently changes lane and drives into another car without the cause being clear, it's not there yet.
> quite reasonably demanding that autonomous driving doesn't cause any accidents that wouldn't have happened if a human was driving
That's totally unreasonable. If a self-driving car prevents 10,000 accidents that human drivers would have caused, but causes 20 accidents that human drivers would have prevented, we should accept it without demanding further safety improvements first.
They need to be significantly safer. Not because humans are particularly great, but as a human I understand the psychology of other drivers. Faced with another driver, I know I can usually predict their failure modes better and compensate accordingly. This goes completely out of the window with self-driving cars, whether I'm driving and must deal with them or in situations where all the cars are autonomous. And this is all particularly bad when I'm a pedestrian.
Truthfully I despise the way this is going. Why can't we just massively reduce car use?
In my mind, I would be happy with 75% of the risk of human driving.
That is enough to be clearly better and clearly safer. It is a big enough margin to prevent lying with statistics ('oh, we only count highway miles', or 'yeah, that statistic included the young drunk drivers').
It also isn't too unrealistic. If (to simplify things) you imagine that every accident has two cars involved and is the fault of just one of them, then a perfect self-driving car that makes no mistakes will still be involved in half as many crashes (i.e. 50% of the risk). I therefore don't believe it to be possible to do better than that while most cars are still driven by humans.
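A back-of-the-envelope sketch of that simplification (assuming every accident involves exactly two cars, exactly one of which is at fault, and exposure is otherwise equal):

    # Per some unit of driving, a human driver both causes crashes and gets hit
    # by crashes that other drivers cause. Normalize the at-fault rate to 1.
    human_at_fault = 1.0
    hit_by_others = 1.0   # chance of being the innocent party in someone else's crash

    human_total_involvement = human_at_fault + hit_by_others   # 2.0

    # A hypothetical perfect self-driving car never causes a crash,
    # but can still be hit by the human drivers around it.
    perfect_av_total_involvement = 0.0 + hit_by_others          # 1.0

    print(perfect_av_total_involvement / human_total_involvement)  # 0.5 -> 50% of the risk

So under these simplifying assumptions, even a flawless system bottoms out at roughly 50% of the human crash-involvement rate as long as most other cars are human-driven, which is why 75% is a demanding but not impossible target.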
I would be completely happy with my government telling car companies they can do whatever they like as long as their accident rate per passenger mile remains under 75% of the nationwide average. I'd be happy with that instead of car standards too - if your manufacturer has a safety record exceeding that, they no longer need to do crash tests, brake performance tests, meet headlight regulations, whatever.
Alice says "I never drive drunk, so the accident bar for self-driving cars to clear shouldn't take accidents caused by drunk driving into account." Bob says "I never use my phone while I'm driving, so the accident bar for self-driving cars to clear shouldn't take accidents caused by phone use into account." Carol says "I never drive exhausted, so the accident bar for self-driving cars to clear shouldn't take accidents caused by exhausted driving into account." And so on, since for every bad driving behavior, there's probably at least one person who's never done it. If they all get what they want, then self-driving cars will have to be perfect before they're allowed, even though they could save a lot of lives by going into service long before they're perfect.
That's a standard no human driver could possibly achieve, for the simple fact that no human has driven a billion miles. I don't think setting such standards, which will be close to impossible to achieve (and/or take centuries), is helpful.
On highways, the number of human injuries is a lot lower than you think. Most accidents are on surface streets, which is not where most "self-driving" features are purported to be usable.
The reason why I'm against self-driving cars is simply the fact that they're cars at all. I sincerely have no idea why we are collectively spending so much time and resources to avoid the solution we've had for years: public transportation. I would much rather live in a world where I have to deal with the minor inconvenience that is waiting 20 minutes for a train to arrive but get to my destination cheaply and easily than all the insane bureaucracy and overhead that comes with cars existing at all.
> I would much rather live in a world where I have to deal with the minor inconvenience that is waiting 20 minutes for a train to arrive but get to my destination cheaply and easily
I'm much happier to put up with the expense, bureaucracy, and overhead of owning a car than I would be if every trip, every day of my life took 20 minutes longer (and for a lot of places I go, 20 minutes is an underestimate). If you'd choose the opposite, that's fine, but don't do things like banning cars that would restrict my choice. That's just as unfair as banning public transit would be.
We're in agreement then. My ideal world is one where we have cities designed for public transportation (including walking), which would eliminate, or at the very least reduce, the problems those methods of transportation currently have for everyone, while the current system provides somewhat efficient transportation but only for those who can afford it. I think spending money and human resources on improving cars instead of public transportation just exacerbates that problem, so it frustrates me on a personal level to see that this is the direction we're headed.
Self-driving cars could be the public transportation of the future. Like taxis at a fraction of the price. It would be something that would dramatically reduce the need for private cars in sparsely populated areas, and even reduce the need for short- and medium-length flights.
I love public transportation, but the possibility of low cost taxis is very exciting.
If you like public transportation, then I’m delighted. I hate it and don’t want it forced on me. I think that ultimately autonomous cars are likely to replace much of public transportation, or at least be a large part of such a system.
The problem is we're currently living in the opposite situation, one which is so normalized most people don't even realize it. We have entire cities built around and for cars instead of people, with public transportation treated as an afterthought and just walking to places not even considered as a possibility. I wouldn't want to force public transportation on people either, but pretending the alternative of forcing cars on everyone else is better or desirable, or that it isn't what we already do, isn't productive.
Robots killing people are much scarier than humans killing people.
Unfeeling monsters taking lives are much more frightening because we have zero mental models for what motivates them and therefore no way to predict or protect ourselves from them when we encounter them. Statistics don’t matter.
Psychopaths are scarier than normal people even when they have the same outcomes.
We’re just desensitised to motor vehicle accidents. Self driving car accidents are still novel. A million people killed worldwide per year in motor vehicle accidents is just a statistic.
BMW also uses those stickers on vehicles that just collect data for BMW's autonomous driving program (they have the sensors, but not the software to actually self-drive).
Those stickers don't prove anything, especially if BMW themselves deny it. You can find those stickers on support-vehicles without any autonomous features as well. There will be more press coverage anyway and we'll find out soon.
A spokesman for the Munich group [BMW] said:
"The vehicle has level 2 driver assistance systems, which are already installed in production vehicles and support the driver if desired. With Level 2 vehicles, the driver is always responsible."
A statement which seems vague by choice.
For example, there could have been development software or hardware prototype sensors installed on this vehicle that malfunctioned.
But still, even if it had been a fully fitted Level 5 self-driving car, it is still a test car and the test driver is explicitly there to take over in cases where it fails.
In any case, the driver here is still fully responsible.
The phrasing in the article makes it a bit confusing whether it has actually been established that it was running autonomously or not. (Other articles suggest that's currently unclear / hasn't been announced.)
The article calls it an E-test car, but it also says it had a 43-year-old driver and 4 passengers (aged 31, 47, 42, and a baby). Is it a test car, which I'd expect would just have 1 or 2 employees monitoring everything, or a car being used by civilians?
If you zoom in on the picture, the logo in the center of the car's rim is BMW's logo...
… and even as an employee you are not allowed to share unofficial things with relatives, nor to take them along for a ride.
I used to hitch rides in such cars to meetings, because those "colleagues" had the same route to the same meeting. But officially, I wasn't allowed in.
Fun headline, given it's not even clear whether the driver was driving or not.
> Whether the vehicle was driven by the 43-year-old driver or not is the subject of the investigation and is currently unknown, a police spokesperson said on Tuesday. The cause of the accident remains unclear.
According to the article, the autonomous car (so far unclear whether it was manually controlled at the time of the accident or not) hit car A (old lady), car A subsequently hit car B, car B went off the road, hit a tree and caught fire. After hitting car A, the autonomous car hit car C head-on (I assume that's what you see in the picture), and the driver of car C died (no surprise when looking at the picture). Pretty fucked up, even more so if the autonomous car was actually driving autonomously.
For what it's worth, the autonomous car in question seems to be a BMW (hard to tell with the wreckage).
The logo on the center of the front wheel rim is unmistakably BMW's logo.
Edit: Also, car B caught fire and its driver died; the BMW crashed head-on into car C, but the article says the occupants of the BMW and car C were seriously injured.
Driving in general is inherently dangerous. For an informed opinion on the matter, we need crash rates per driven distance with and without autonomous driving activated. Preferably by manufacturer.
I do agree with that. We should wait until we have the details to make up our minds about this.
One of the problems I have with self-driving is that everyone on the road can judge, to a degree, how others will behave given certain circumstances. What self-driving cars will do is anyone's guess. And self-driving's track record so far is underwhelming, to put it mildly.
Can't read German. Does anyone have any input on the following questions? Isn't a driver meant to constantly monitor a self-driving car and manually take over? Do we know if this person had this ability? Can a person react quickly enough to prevent an accident?
I can't answer, but I can't imagine myself being focused enough to be able to react really quickly to an immediate danger on the road if I'm not actively driving the car. I doubt most people would be able to do this.
I don't think I'm going to use a self driving car in self driving mode as long as I am expected to watch out for danger and take over if needed. If I need to constantly monitor, I might as well drive. Self-driving is only compelling if this means I can free some time for me while the car is driving, or at least make driving safer. Otherwise, it certainly just makes driving more boring and more dangerous.
Until then, I'm more a proponent of making driving easier by removing the manual gears so my focus can be better directed toward the road and less toward the gears. Engines do it better than me anyway.
Exactly. If the self-driving is good enough, after some time you will reduce your attention and will not be able to quickly take over if the car suddenly fails to self-drive in a corner, for example.
There are plenty of other industries - CCTV monitoring, pool lifeguards, pilots using autopilot, airport x-ray operators - and they all know it's incredibly hard for humans to remain vigilant for hours on end when there's no action.
I think it's pretty well known in the self-driving car industry that safety drivers do a great job if they have to intervene several times an hour. And they're good at getting the vehicle unstuck if there's e.g. a flat tyre. But for a system that only needs an intervention once a week? The safety drivers often won't react fast enough - they're just there to take the blame.
And all your examples are trained professionals. In the case of everyday self-driving we are talking about the average Joe; I know for sure I wouldn't be able to jump in in a split second.
Right now it's not even clear whether the car was in self-driving mode. It could well be the case that this was a "normal" accident. The news headlines are pretty yellow (sensationalist) sometimes...
It's the internet, so Skynet, if you are listening, I hereby want to ask you to forgive my nasty behaviour towards every computer (seriously, I never wanted to actually use that hammer!), fridge, or coffee machine!
The NTSB (not the FAA) handles the part of aircraft accident investigations that you're likely interested in. They then make recommendations to the FAA on matters of aircrew or aircraft certification or other deficiencies that were in the causal chain of the accident.
They also don't travel to the vast majority of aircraft accidents, even ones with fatalities. They do travel to jet (or airline turboprop) accidents with fatalities, but small general aviation aircraft crash fairly regularly with only a small portion of those crashes getting any meaningful investigation from the NTSB.
I should have elaborated, but I sort of just tossed my post on here quickly. I was imagining if we get to the point that self-driving cars become fully autonomous, and let's say 100% of cars on the road are self-driving -- what happens if there's a crash? Theoretically that shouldn't happen, right? If it did, I would want an independent body to investigate.
One dead and nine seriously injured in accident with test vehicle
Four rescue helicopters and 80 firefighters were deployed: in an accident on the B28 in the Reutlingen district, a young man died and several people were seriously injured and taken to hospital.
A test vehicle with modern assistance systems was involved in a serious accident with one fatality and nine seriously injured on the B28 in Baden-Württemberg. Whether or not the vehicle was driven by the 43-year-old driver is the subject of investigation and is currently unknown, a police spokesman said Tuesday. The cause of the accident is reportedly unclear.
The 43-year-old was driving an autonomous e-test vehicle near Römerstein in the Reutlingen district on Monday afternoon, according to police. In a curve, the car veered into the oncoming lane for as yet unclear reasons and grazed an oncoming car there. The 70-year-old driver subsequently crashed her car head-on into another car. This car left the road and immediately caught fire.
In addition, the test car reportedly crashed head-on into another car. The 33-year-old passenger of the car suffered fatal injuries as a result of the violent impact.
Tübingen traffic police and an expert investigate
According to police, the 43-year-old's test car also carried two men aged 31 and 47, a 42-year-old woman, and a one-and-a-half-year-old child. According to the report, they were seriously injured - as were other people involved in the accident - and taken to hospitals.
On Tuesday, the car manufacturer BMW contradicted the police's information that the accident car was "an autonomous e-test vehicle". A spokesman for the Munich-based company said, "The vehicle has Level 2 driver assistance systems, which are already installed in production vehicles today and support the driver on request. With Level 2 vehicles, the driver always remains in charge as a matter of principle."
Only in highly automated vehicles from Level 3 upward may the driver delegate driving completely to the vehicle under certain conditions. A police spokesman did not comment on this when asked.
The investigation into the accident has been taken over by the Tübingen traffic police together with an expert. The rescue service was on the scene with four rescue helicopters and ten ambulances, among others. In addition, 80 firefighters with 15 emergency vehicles were deployed. As a result of the accident, the federal highway 28 was fully closed for several hours. Police estimate the damage to the vehicles involved at around 180,000 euros.
Editor's note: An earlier version of this article stated that the accident involved an autonomous vehicle. Following a statement from BMW, we have changed the wording. In addition, the gender of the 70-year-old has been corrected.
Translated with www.DeepL.com/Translator (free version)
The translation is accurate. "Bei Level-2-Fahrzeugen bleibt die Fahrerin oder der Fahrer grundsätzlich immer in der Verantwortung." can be translated this way. The only nit I would pick is about the translation of "grundsaetzlich" by "as a matter of principle", it could also be read to mean "generally", "usually", "always" or "basically". In this case, they probably wanted to say "always", but there would also be the legalistic meaning of "grundsaetzlich" which is "normally, but with exceptions".
All in all, no matter the above nits, they are obviously trying to shift blame away from their vehicle.
This was a test car (driven by a 43yo)
> In einer Kurve kam das Auto aus bislang unklaren Gründen auf die Gegenfahrbahn und streifte dort ein entgegenkommendes Auto
For unknown reasons, the test car drifted into the oncoming lane in a curve and grazed (sideswiped?) an oncoming car.
> Die 70 Jahre alte Fahrerin stieß in der Folge mit ihrem Wagen frontal in das Auto eines 32-Jährigen.
The aforementioned oncoming car (driven by a 70-year-old woman) then crashed head-on into another car (driven by a 32-year-old).
> Das Auto des Mannes kam von der Fahrbahn ab und fing Feuer.
That latter car went off the road and burst into flames.
> Im weiteren Verlauf prallte der E-Testwagen zudem frontal in ein weiteres Auto
The test car then ran into yet another car frontally.