
I'll give you the benefit of assuming ignorance rather than malice, but the FSD beta is a selective program with absurdly high driving standards, from which it is easy to be kicked out even with responsible use.

"There is no repetition, (..) protocols (..) communication with engineers (..) meaningful feedback loop"

Yes, there is. There is a clear user agreement protocol that the video in this post violated. If your hands are on the wheel, it is impossible for the car to turn 90 degrees in a direction you didn't intend; the slightest wheel pressure would overcome this maneuver attempt.

There is also the inverse feedback loop for engineers. The in-cabin camera constantly monitors the driver's eye movement and will kick the tester out of the program completely if the system disengages because the driver was looking at the console or their phone, or closing their eyes.

Moreover, malfunctions are communicated automatically upon disengagement, and can also be reported directly by the tester using the report button. Both of these channels create simulation test cases which are actively used in a continuous feedback loop to train the AI.



> I'll give you the benefit of assuming ignorance rather than malice

Neither ignorance nor malice, just stating the facts.

> but the FSD beta is a program that is selective based upon absurdly high driving standards, from which it is easy to be kicked out even from responsible use.

The criteria tested in Tesla's game to get access to the beta have nothing to do with whether you would be a good tester. Whether or not you can drive the car in a way that makes it happy reveals nothing about the value of the information you could offer if they were running an actual test program.

> Yes, there is. There is a clear user agreement protocol that the video in this post violated.

That's not what I mean by a test protocol. When actual testing is performed by engineers employed by the various vendors in this market, they have a plan for exactly what they're testing, where, and how. The drivers have intent and are actively kept aware of the known bugs and weak spots to look out for. In most cases the test driver is not the engineer monitoring the testing; the engineer rides in the passenger seat so they can be fully focused on what the self-driving system is doing while the driver stays focused on ensuring safety.

As for repetition: when actual testers get an unexpected result, they re-run the test segment to produce more data, hopefully replicating and isolating the problem.

Communication with actual testers is two-way. Sure, sometimes the problem is obvious, but often there's benefit from further explanation. Either way, after developing a fix you need to verify that it actually worked, which brings us back to the protocols and repetition.

> There is also the inverse feedback loop for engineers. The in-cabin camera constantly monitors the driver's eye movement and will kick the tester out of the program completely if the system disengages based upon the driver looking at the console or his phone or closing his eyes.

Actual enforcement of this started just over a month ago with FSD Beta 10.12.2. The cameras had been present for a few years, but were only activated (and then only to nag the driver) roughly a year prior, and could be blocked with no consequence until this recent update.

> Moreover, communication of malfunctions is both automatic with disengagement, but can also be communicated directly by the tester through the use of the report button. Both of these channels create simulation test cases which are actively used in a continuous feedback loop to train the AI.

This goes back to what I said about low-quality data. Individual event reports generated by random people with no test plan, no training, no guidance, nothing but their own idea of what's right and wrong, have a very limited SNR. Likewise for disengagements, especially when Tesla has historically been known for its leniency in driver monitoring and in limiting where its systems can be used, while encouraging misunderstanding of their capabilities.





