A friend demoed his FSD-enabled Model 3 to me the other day.
In a 5-minute midday drive on the quiet, well-kept streets in his ordinary suburban neighborhood, it tried to pull in front of an oncoming car while making a right turn onto a road, and tried to turn in front of an oncoming car when trying to make a left turn off said road. Both times the Tesla initially seemed like it was going to wait for the oncoming traffic - then lurched in front of it at the last second. My friend had to jump on the brakes to avoid an accident.
In both cases the other car was clearly visible and in fact, the only other traffic in sight. You couldn’t have come up with a simpler test case if you tried.
What a joke. Tesla is nowhere close to FSD. My bet is they have an NN that can recognize lanes, stop signs, street lights, and (most of the time) other traffic, plus some simple control logic on top of that. Nowhere close to FSD.
It's because they've convinced themselves that without full self-driving Tesla is worth nothing. Certainly it's the only thing left to justify the stock price.
It works. FSD works. Tesla will slowly push updates and charge for a subscription.
This is what Apple has done. People will soon jailbreak their Tesla. I give it roughly 6 years for someone to get mad and hack it.
More accurately, this shows an FSD beta tester deliberately allowing his vehicle to head into oncoming traffic.
The beta is not just level 2; it is a privileged system of trust to enable learning. A privilege that this user will likely no longer be afforded.
That being said, I am an FSD beta tester who lives between two one-way streets, and I can confirm it is laughably bad at one-way streets.
This week, it tried to change lanes into an oncoming reversible lane as if it were a turning lane.
One block from my house, the FSD beta comes to a complete standstill 10 meters before a stop sign, waiting indefinitely for completely parked cars to clear what it believes is the left lane before turning left.
When the left side of a one-way is clear and cars are parked on the right, it will weave toward the parked cars as if "staying right" makes sense for any reason in this context.
But most annoyingly, it will completely ignore signs like "do not enter" and "no turn on red", not as though it's a bug, but as though it doesn't even attempt to read them.
Of course, as a beta tester it is not difficult to prevent it from doing these unsafe things, since it's going the actual speed limit and I keep my hands on the wheel, unlike this presumably former tester.
You're not a tester at all. You're a user to whom an irresponsible company has given access to incomplete software intended to control a powerful machine in close proximity to the general public.
There are no test protocols. There's no repetition to isolate the cause of a malfunction. There's no communication with the actual engineers. There's no meaningful feedback loop at all. Most importantly, there's no corporate insurance, so when it goes wrong any unwitting members of the public who are impacted (sometimes literally) are stuck trying to recover their costs from some random person.
The claim that there's any kind of actual learning going on from public "testing" has also yet to be supported. Low-quality testing as noted above produces low-quality data, and while enough low-quality data can sometimes be useful for spotting trends across a large group, autonomous driving is a domain where specifics matter.
Tesla is using the "FSD beta" process primarily for marketing hype, but also to create a group of people who will be angry about having their toys taken away when regulators finally catch up.
I'll give you the benefit of assuming ignorance rather than malice, but the FSD beta is a program that is selective based upon absurdly high driving standards, and from which it is easy to be kicked out even with responsible use.
"There is no repetition, (..) protocols (..) communication with engineers (..) meaningful feedback loop"
Yes, there is. There is a clear user agreement protocol that the video in this post violated. If your hands are on the wheel, it is impossible for the car to turn 90 degrees in a direction you didn't intend. The slightest wheel pressure would overcome this maneuver attempt.
There is also the inverse feedback loop for engineers. The in-cabin camera constantly monitors the driver's eye movement and will kick the tester out of the program completely if the system disengages based upon the driver looking at the console or his phone or closing his eyes.
Moreover, communication of malfunctions happens automatically with disengagement, and can also be initiated directly by the tester through the report button. Both of these channels create simulation test cases which are actively used in a continuous feedback loop to train the AI.
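To make the enforcement mechanism concrete, here is a toy sketch in Python of the strike policy described above. This is not Tesla's actual code; the strike limit and all names are assumptions for illustration.

```python
# Toy sketch of the strike policy described above (NOT Tesla's code).
# MAX_STRIKES and all names here are assumptions for illustration;
# Tesla's published strike limit has varied across releases.
MAX_STRIKES = 5

class BetaTester:
    def __init__(self):
        self.strikes = 0
        self.enrolled = True

    def on_disengagement(self, driver_was_attentive: bool):
        """Called when the system disengages. A disengagement while the
        cabin camera judged the driver inattentive (eyes on the console
        or phone, eyes closed) earns a strike; enough strikes removes
        the tester from the beta."""
        if not driver_was_attentive:
            self.strikes += 1
            if self.strikes >= MAX_STRIKES:
                self.enrolled = False  # kicked out of the program
```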
> I'll give you the benefit of assuming ignorance rather than malice
Neither ignorance nor malice, just stating the facts.
> but the FSD beta is a program that is selective based upon absurdly high driving standards, and from which it is easy to be kicked out even with responsible use.
The criteria being tested in Tesla's game to get access to the beta have nothing to do with whether you would be a good tester. Whether or not you can drive the car in a way that makes it happy reveals nothing about the value of the information you could offer if they were running an actual test program.
> Yes, there is. There is a clear user agreement protocol that the video in this post violated.
That's not what I mean by a test protocol. When actual testing is performed by engineers employed by the various vendors in this market, they have a plan for exactly what they're testing, where, and how. The drivers have intent and are actively kept aware of the known bugs and weak spots to look out for. In most cases the test driver is not the engineer monitoring the testing; the engineer rides in the passenger seat so they can be fully focused on what the self-driving system is doing while the driver stays focused on ensuring safety.
As for repetition: when actual testers get an unexpected result, they re-run the test segment to produce more data, hopefully replicating and isolating the problem.
Communication with actual testers is two-way. Sure, sometimes the problem is obvious, but often there's benefit from further explanation. Either way, after developing a fix you need to verify that it actually worked, and we're back to the protocols and repetition.
> There is also the inverse feedback loop for engineers. The in-cabin camera constantly monitors the driver's eye movement and will kick the tester out of the program completely if the system disengages based upon the driver looking at the console or his phone or closing his eyes.
Actual enforcement of this started just over a month ago with FSD Beta 10.12.2. The cameras had been present for a few years but were only activated to even nag the driver roughly a year prior, and they could be blocked with no consequence until this recent update.
> Moreover, communication of malfunctions happens automatically with disengagement, and can also be initiated directly by the tester through the report button. Both of these channels create simulation test cases which are actively used in a continuous feedback loop to train the AI.
Again, see what I said about low-quality data. Individual event reports generated by random people with no test plan, no training, no guidance, nothing but their own idea of what's right and wrong, have a very limited SNR. Likewise for disengagements, especially when Tesla has historically been known for its leniency regarding driver monitoring and limits on where its systems can be used, while encouraging misunderstanding of their capabilities.
I think the OP just coined the phrase "Tesla Privilege" for egregious acts of attempted vehicular manslaughter...
Edit: since humor isn't the done thing here, or for the subject, the actual quote is "a privileged system of trust to enable learning." But that would be less pithy.
Let me know when the beta has a casualty rate above zero. Until then, enjoy the "human privilege" of driving 10+ mph over the speed limit, cutting off traffic, weaving out of the lane, brake-checking highway traffic because work was rough, and the other myriad human privileges that killed 42,915 Americans in 2021, along with your smarmy misquotes to justify it all to yourself.
If you care for the math, even in its current state it is saving lives. Things like braking early to cause a fender collision rather than a door collision are remarkable superhuman feats, but beyond that, the mere fact that it is always paying attention in every direction, and has no ego, no sense of tardiness, urgency, efficiency, selfishness, aggressiveness or defensiveness, makes it really easy to kill fewer people than human drivers do.
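To put rough numbers on the fender-vs-door point, here is a minimal kinematics sketch with assumed values; it is not Tesla's algorithm, just the geometry of the claim. Braking reduces how far the car has traveled by the moment the intruding vehicle arrives, which shifts the strike point toward the front bumper and away from the occupant compartment.

```python
# Minimal kinematics sketch of the fender-vs-door claim above.
# All numbers are assumptions, not Tesla's algorithm or data.

def impact_metres_behind_bumper(v_ego, decel, t_hit, x_cross):
    """Where along our body the crossing car strikes at time t_hit,
    measured in metres behind our front bumper (we start at x = 0)."""
    t_stop = v_ego / decel if decel > 0 else float("inf")
    t = min(t_hit, t_stop)  # stop integrating once we come to rest
    x_front = v_ego * t - 0.5 * decel * t * t
    return x_front - x_cross

# Assumed scenario: a red-light runner 12 m to our side closes at
# 15 m/s and will cross our path 8 m ahead; we are doing 14 m/s.
v_ego, x_cross = 14.0, 8.0
t_hit = 12.0 / 15.0  # 0.8 s until the crossing car enters our lane

print(f"coasting:        hit {impact_metres_behind_bumper(v_ego, 0.0, t_hit, x_cross):.1f} m behind bumper (door zone)")
print(f"braking 6 m/s^2: hit {impact_metres_behind_bumper(v_ego, 6.0, t_hit, x_cross):.1f} m behind bumper (fender zone)")
```

With these assumed numbers, coasting puts the impact about 3.2 m behind the bumper (the doors), while braking at 6 m/s^2 moves it to about 1.3 m (the front fender).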
The braking to avoid collisions is both standard on a lot of cars and not part of the FSD Beta.
Additionally, even though the FSD Beta is able to detect objects more accurately across a wide view, that doesn't translate into safety yet, for a few reasons:
- the driving behavior is inconsistent and will often still ignore things it detects, because the hardcoded logic and ML models don't know what to do with them
- Tesla doesn't have the strongest sensor coverage. In fact, among higher-end cars it actually lags behind most competitors, with quite significant blind spots right in front of the hood and (depending on the model) no rear- or forward-facing radar, leaving vision blind spots that radar could see around by bouncing
I say this as a 2020 Model Y owner: the FSD is way further behind than what you describe.
Your example still allows the car to strike the door directly. It is also based upon radar data rather than a 3D simulation, so it could also result in a rollover with certain geometries. I suspect that's why pre-sense 360 is not available as an option on any of the non-e-tron Audis at my local dealership, and why even on the e-tron that includes it, it is not advertised as a side-impact safety feature anywhere, while the documentation clearly explains its performance in front and rear collisions.
If you keep watching the video, you'll see you're again incorrect.
The first section of the video shows the car in a fixed stationary position, demonstrating that it can activate safety features even while immobile.
At the end of the video, they show how it brakes when it detects a laterally incoming object around a blind corner, preventing a door collision altogether.
This isn't unique to Audi either; other brands like Volvo have oncoming-traffic detection around corners and will brake to avoid it.
Again, Tesla actually lacks some sensors that these other brands have, which gives Teslas a greater blind spot. E.g. most of these cars can "see around corners" whereas a Tesla cannot. This is in part because Tesla has removed radar (and the ultrasonic sensors are too short-range on any vehicle). Radar was what allowed bounce detection around obstacles that obscure pure vision-based approaches.
Braking early to avoid a collision is a standard feature of modern vehicles. Even my very-not-cutting-edge, definitely-not-top-of-the-line 10-year-old Volvo has it! (Admittedly Volvo tends to pack more safety features into their vehicles than other manufacturers for a given target market / price point, iirc.)
Tesla gets no bonus points for having collision avoidance, pedestrian detection or any of the now standard safety features that are available on every modern vehicle that targets the same markets.
I did not describe emergency automatic braking. I described an unavoidable side collision, e.g. someone running a red light, and the car determining how to control that accident to prevent injury.
No other manufacturer uses side cameras plus continuous 3D mapping, localization, and physics projections to do things like this.
And despite the -4 karma tax for speaking on behalf of Tesla, the statistics DO show that no other manufacturer is as good at it.
My Honda minivan has helped me brake in a sudden and emergency way on several occasions.
Of course, Honda doesn't sell something called "full self driving" which might lull me into a false sense of security, human nature being what it is and all.
> More accurately, this shows an FSD beta tester deliberately allowing his vehicle to head into oncoming traffic.
It does not seem deliberate to me, from watching the video. While stopped at an intersection, the Tesla:
- displays it's turning left but signals right
- then displays left and signals left
- starts moving, and suddenly displays right and signals right, making a right turn
That right turn was onto the one way street, and the driver sounds (understandably) surprised as soon as he realizes he's facing oncoming traffic.
If your hands are on the wheel, as is required of testers, it is impossible for the vehicle to turn in a direction you did not intend. The slightest pressure of human direction takes priority over any route planning by the AI.
What you are seeing here is a human mistake, sorry. No one is saying that FSD beta is level 3, and treating it as such is a clear human error. Akin to a human allowing cruise control to run a red light.
> The slightest pressure of human direction takes priority over any route planning by the AI.
> What you are seeing here is a human mistake, sorry.
This is an enormously different claim from "deliberately allows the car to drive into oncoming traffic". It's also tautological given the intended operating mode of Tesla. I've heard this line plenty from all manner of Tesla fanatics, but it's deeply dishonest sophistry that doesn't at all address the actual complaint.
Which is: the Tesla's display and motion were erratic and incorrect enough that even a reasonably-attentive driver would easily end up putting himself and others in danger.
You can argue the merits of whether this should be allowed. There's reasonable disagreement on whether this is feasible in general, or at scale. And further disagreement about whether "play stupid games, win stupid prizes" should apply to endangering the lives of other road users. But please don't flatly lie about what's happening in the video by describing it as "deliberately allowing the vehicle to head into oncoming traffic" when it's extremely clear from the video that he was unaware it was doing so, in large part due to the absurdly erratic performance of the system on that turn.
You are again moving the goalposts. We seem to disagree about how attentive one can expect a driver to be, across long periods. IMO, decades of science on driving are on my side. But resolving that disagreement isn't necessary: even if you're correct, the word "deliberately" doesn't mean "didn't notice because he should have been more attentive".
When I say your initial comment is flatly lying, it's this "deliberate" claim I'm referring to, as the video makes it clear how surprised he is once he notices the Tesla has turned him onto a one-way.
I'm sure owning TSLA has been painful recently, and as I said there's room for disagreement about whether their overall strategy is feasible. But there's no excuse for complete fabrication about the contents of the posted video.
The arrogant way in which you are speaking just indicates to us you are being dishonest and you are discussing this in bad faith. Perhaps you are a paid employee of Tesla.
With every release you can tell it's learning a little more about general driving. I would call it a responsible but paranoid adult driver if you are driving in the suburbs or a city with well-marked roads and controls. The problem is that it doesn't seem to even attempt certain human logic patterns, and I'm not sure a generalized approach will ever be able to infer these kinds of nuances.
At AI day and in shareholder meetings they go over this regularly. Last year they described having 16 AI models for handling various situations, with the goal of eliminating all code logic and moving to a single-stack AI model capable of all situations.
>The car opted to turn right — straight up the wrong way of a one-way street in San Francisco (around the 6:35 mark below).
Not sure where they got San Francisco from; this took place in Seattle. The video description they link to plainly says this:
>Here's a vid of Tesla's FSD Beta driving through Seattle
Funnily enough, this is the same guy whose Tesla lurched towards pedestrians crossing an intersection last year https://twitter.com/TaylorOgan/status/1542555188279517184 and once he started getting flak he copyright-struck the Twitter videos and deleted the video from his YouTube account.
I wonder why Tesla doesn't seem to make use of a national map of roads to know about one-way streets, turn lanes, and so on. It'd be a valuable addition to 'blunder around trying not to hit anything' or whatever their current algorithm is.
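For what it's worth, the one-way data is freely available. Here is a minimal sketch that queries OpenStreetMap's public Overpass API for the oneway tag on nearby streets; the endpoint and tags are real OSM conventions, while the coordinates and function name are just examples.

```python
# Minimal sketch of the map-lookup idea above: ask OpenStreetMap's
# Overpass API which nearby streets are tagged one-way. Endpoint and
# tags are standard OSM; coordinates are an arbitrary example point
# in downtown Seattle, where the video was filmed.
import json
import urllib.parse
import urllib.request

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def oneway_streets_near(lat, lon, radius_m=50):
    """Return (name, oneway) pairs for tagged roads near a point."""
    query = f"""
        [out:json];
        way(around:{radius_m},{lat},{lon})["highway"]["oneway"];
        out tags;
    """
    body = urllib.parse.urlencode({"data": query}).encode()
    with urllib.request.urlopen(OVERPASS_URL, data=body) as resp:
        elements = json.load(resp).get("elements", [])
    return [(way["tags"].get("name", "?"), way["tags"]["oneway"])
            for way in elements]

for name, oneway in oneway_streets_near(47.6062, -122.3321):
    print(f"{name}: oneway={oneway}")
```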
Another piece of tech that will only ever be available to 'rich white' people, meaning a reach of 50M urban users in the western world.
I grow to dislike this sort of tech. We should aim to close gaps in our world not open them.
Please take race out of it; you're just perpetuating the prejudice that anyone not white is poor. That just hurts non-white rich people in situations around rich white people. It will make them feel like they don't belong, which I'm sure is not your intent.
I know your sentence means well, but something to keep in mind.
Do regular drivers actually take their hands off the wheel and expect the car to drive them around in an urban road network? And they have peace of mind while doing so?
Worse, it appears there are at least some software engineers who do this. It’s perplexing to me that anyone who works in this industry would give any software control over their life.
True story:
About three months ago, I saw a Tesla exit a fast food establishment and pull in front of a normal big yellow school bus. The school bus slammed on its brakes and blasted the horn.
[edit] To be fair, the incident happened on a weird S-shaped part of the road.
"I got to admit, Tesla's self-driving software is getting really impressive, and I've been using it all the time. Here's a vid of Tesla's FSD Beta driving through Seattle (version 10.11.2), and doing the monorail test 3 times. It's so fun to watch the car get better. Feels like I'm living in the future every time I turn on FSD Beta or Autopilot :)"
This is what the YouTuber himself said. The headline, however, is trying to attract more eyeballs. The short seller looks desperate now!
Tesla fired the first employee to report an FSD crash. They knew about his channel for years and he disclosed it to them, and then they fired him a couple weeks after he posted a minor crash video.