
I'd also add that, generally, to break the ~$150-180k threshold you need to look for companies that offer RSUs (stock).


I can relate to all of this. One consequence for me has been lower visibility from my managers. I'm more productive by focusing on the work of my team and helping others, but by not participating in the politics I am perceived as a lower contributor than someone who does the opposite. It's still worth it for me; I've just given up on the goal of being perceived as the top contributor.


It's not a brake check if the person in front of you is slowing down and your braking is to maintain safe distance. A brake check is braking for the purpose of affecting those behind you. It seems clear he means it is braking due to the slowdown ahead.


> A brake check is braking for the purpose of affecting those behind you.

Intent doesn't matter to the person in the other car, who is likely relying on your behavior to some extent to understand the traffic ahead. If your driving looks like a brake check and brings with it all of the unsafe conditions associated with brake checking, then it's a brake check.


Of course, the unsafe conditions aren't associated with braking, but with the failure of the trailing car to leave adequate room for their own braking. The accident here would happen whether the leading car had a justifiable reason to brake or not, and the trailing driver would be responsible in either case.


Unfortunately that just shows the person following was not doing so at a safe distance - any car you're driving behind could perform emergency braking at any moment.


^ It's this. I haven't been honked at yet, but I have felt a little apologetic to people behind me. I recently updated my follow distance from 3 to 4 and it's helped tremendously.


Sure, but if you're doing it far later than a human would have done it, then you're really screwing with the person behind you who isn't an AI with instant response times.


This is why the law requires people to follow at a safe distance. If you're only leaving room for "instant response times" then you had better have instant response times because you're liable.


The question isn't who's legally liable. The question is whether you're doing things more unsafely than you need to be. You don't have to be legally at fault to be doing something that's unsafe for other drivers to deal with. Sudden braking at highway speeds, especially for no actual reason, even if the person behind you is following at a safe distance, is rolling the dice that every car behind you for a while is driving at full attention.

"I require every other human driver to be driving correctly so that my AI-car may drive unsafely" seems like a bad bet. I'd also imagine that, if as a human, I just slammed on my breaks for no reason on the highway, I would be found to be at fault for an accident assuming the car behind me wasn't directly tailgating me.


You seem to view safety as some binary that doesn't account for frequency or severity of incidents. Your framing suggests you think it's better to continue with human drivers and the commensurate 40k lives lost each year than to use an AI that has even the slightest possibility of causing even the least significant accident irrespective of whether or not any accident occurs in practice.

I suggest it's better to compare a given AI with humans in terms of fatalities caused per million miles driven. If an AI performs a little better than humans, it should be legalized; if it performs dramatically better, it should be mandatory.
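
As a back-of-the-envelope illustration (a minimal sketch; every figure below is a made-up placeholder, not real fleet data, and a serious comparison would also need to control for road type, weather, and disengagements):

    # Sketch of a per-million-miles fatality-rate comparison.
    # All inputs are hypothetical placeholders, not real Tesla/NHTSA figures.

    def fatalities_per_million_miles(fatalities, miles_driven):
        return fatalities / (miles_driven / 1_000_000)

    human_rate = fatalities_per_million_miles(40_000, 3.2e12)  # rough US-scale guess
    ai_rate = fatalities_per_million_miles(5, 5.0e8)           # made-up AI fleet

    print(f"human: {human_rate:.4f} fatalities per million miles")
    print(f"AI:    {ai_rate:.4f} fatalities per million miles")
    print("AI looks safer" if ai_rate < human_rate else "AI does not look safer")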

Of course, this is where we need more data and greater transparency so we can answer these questions.


I'm not trying to say that at all. What I'm saying is that we're at an awkward time now, where this sort of quick AI-assisted braking is especially dangerous because fallible humans are most of the rest of the drivers on the road. In an all-FSD world, this wouldn't really be a problem.

You're putting a lot of words in my mouth and assuming I'm against working on AI driving because one person might ever die. All I'm trying to point out is that it's pretty worrying to have a system that could cause a highway-speed accident because of a well-known and decently common bug. I'd be equally worried if it came out that some other decently selling model of car randomly engaged its ABS.

I wrote my OP here because the parent poster was casually talking about "not really brake checking people" as if that's just normal behavior that's a part of R&D, instead of an AI accidentally emulating dangerous aggressive driving patterns that FSD is supposed to do away with. I'm not trying to ban FSD research or anything. I want this improved! It's just scary when people excuse dangerous behaviors by FSD systems because it's otherwise safer.

The other issue is that more data and greater transparency are both not things Tesla seems to have any interest in providing anyone, so while this may get fixed, it's not really pushing the industry forward all that much if no one other than Tesla is going to benefit. There are plenty of mentions in this thread of this sort of issue happening on other cars and adaptive cruise control systems that could benefit from an improvement for the betterment of all drivers, but instead "not brake-checking people" will probably be an unexplained feature improvement in some FSD patch.


If a car is tailgating me (less than one car length of space behind me above 30 mph), and my Tesla sees a ghost and brakes hard, and the car behind me rams into me, then that is completely on them. It's on me if they were further away than that.


Sure, but fault aside, you still just got into a highway-speed car accident. I don't get to decide if the person behind me is a reckless driver or not, and "but the law says it's not my fault" doesn't do away with any injuries or damage to my car that happens because of it. There are plenty of completely legal things you can do that will create unsafe situations on the road, and you get a mark on your insurance for getting in the accident whether or not it's your legal fault that it happened (as I found out when I lost the front half of my car to flying road debris).


The only thing keeping me from being done is the fear/suspicion that the healthcare system is on the verge of collapse.

Frontline workers are not being compensated well enough. This means fewer people taking the career path and increased burnout.

We are in for a world of hurt and the only thing I can do is try to not be sick when the hammer falls.


This is why I got the vaccine. I'm personally not afraid of COVID. But the healthcare system shutting down to the point where we have patients dying in the hallways? That scares me.

Too many kids want to become lawyers or software developers instead of doctors and nurses. With an ageing population that's not good.


I think being generous with these arguments hurts more than it helps. The reporting rules are mandatory for COVID-19 vaccines because they were authorized under emergency use.

"Healthcare providers who administer COVID-19 vaccines are required by law to report the following to VAERS" ... "Death" - https://vaers.hhs.gov/reportevent.html

Even someone who dies in a car accident gets recorded as a death. There is no judgement call made as to whether the vaccine caused it at all; the report is required by law.

https://wonder.cdc.gov/controller/saved/D8/D159F823

VAERS is a dataset that is meaningless on its own, especially in the case of the COVID-19 vaccine. It is meant to be compared with other datasets to uncover trends which could indicate a problem with the vaccine.
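
As a rough illustration of what "compared with other datasets" means in practice, here's a minimal observed-versus-expected sketch; every number below is a hypothetical placeholder, and a real pharmacovigilance analysis would use actual background rates, person-time at risk, and proper statistics:

    # Observed-vs-expected sketch for a passive reporting system like VAERS.
    # All numbers are hypothetical placeholders for illustration only.

    vaccinated_people = 200_000_000      # hypothetical vaccinated population
    background_death_rate = 0.009        # hypothetical all-cause deaths/person/year
    window_years = 30 / 365              # e.g. count deaths within 30 days of a dose

    # Deaths expected in that window from background mortality alone,
    # vaccine or no vaccine.
    expected_deaths = vaccinated_people * background_death_rate * window_years

    reported_deaths = 10_000             # hypothetical raw report count

    print(f"expected by chance: {expected_deaths:,.0f}")
    print(f"reported:           {reported_deaths:,}")
    # A raw count below the background expectation is not evidence of harm;
    # a large, consistent excess would be a signal worth investigating.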

