Because kilo- already has a meaning. And both usages of kilobyte were (and are) in use. If we are going to fix the problem, we might as well fix it right.
I had a similar problem a few years back. Our decades-old system had a poor architectural decision baked in that was causing major performance issues in some cases. Fixing it within the core was straightforward, but ended up violating an assumption that almost every module we had made.
After two attempts at fixing it failed (due to getting hopelessly out of sync with mainline), we put the fix behind a flag and updated our CI to run the test suite with both modes of the flag, and to only fail for new test failures.
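For the curious, the "only fail for new failures" gate was roughly the following idea. This is a simplified sketch, not our actual script; it assumes a pytest suite, the flag exposed through a hypothetical NEW_CORE environment variable, and a known_failures.json baseline file:

    import json
    import os
    import subprocess
    import sys
    import xml.etree.ElementTree as ET

    def failing_tests(flag_value: str) -> set[str]:
        """Run the suite with the flag set one way and return the failing test ids."""
        report = f"report_{flag_value}.xml"
        subprocess.run(
            ["pytest", "--junit-xml", report],
            env={**os.environ, "NEW_CORE": flag_value},
            check=False,  # a red suite is expected here; we inspect the report instead
        )
        failures = set()
        for case in ET.parse(report).iter("testcase"):
            if case.find("failure") is not None or case.find("error") is not None:
                failures.add(f"{case.get('classname')}::{case.get('name')}")
        return failures

    def main() -> int:
        # Failures we already know about and tolerate while the migration is in flight.
        baseline = set(json.load(open("known_failures.json")))
        new_failures = set()
        for flag_value in ("0", "1"):  # run the whole suite in both modes of the flag
            new_failures |= failing_tests(flag_value) - baseline
        if new_failures:
            print("New test failures:", *sorted(new_failures), sep="\n  ")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The important bit is the diff against the baseline: pre-existing failures in either mode don't block a merge, only regressions do.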
Generally speaking, liability for a thing falls on the owner/operator. That person can sue the manufacturer to recover the damages if they want. At some point, I expect it to become somewhat routine for insurers to pay out, then sue the manufacturer to recover.
I'm guessing that's a fairly city viewpoint. My car is set up with a roof rack and carries a lot of other gear I want. I'm regularly in places without reliable cell service. Visiting friends can easily be an hour's drive.
Yes, a city viewpoint. I usually just walk, but when I don't I most often take the subway, not even Uber. Though I feel like in Toronto the subway, or some part of it, is closed or under maintenance way too often. It's not very reliable.
Talk to anyone from the midwest about not owning a car and they'll laugh you out of the room.
Well, unless you're proposing they switch to ATVs and snowmobiles, in which case some people can technically get by without a traditional automobile.
If you take off the conspiracy hat, you will see that there are many advantages to not owning a product. Such as that the vendor's incentives are better aligned with yours. For example, if the thing breaks, it is in __their__ best interest to fix it (or to not let it break in the first place). This also has positive implications for sustainability.
It’s also in their best interest to set the price so as to maximize their own profits. If switching costs or monopoly power allow them to set a higher price, they will do so.
Have we learned nothing from a decade of subscription services?
Especially Adam Smith. The claims are scattered throughout The Wealth of Nations, but he hated them with specificity.
He said they raise prices and lower quality, misallocate capital, and corrupt politics, among other things.
All Tesla vehicles require the person behind the steering wheel to supervise the operations of the vehicle and avoid accidents at all times.
Also, even if a system is fully automated, that doesn't necessarily legally isolate the person who owns it or sets it in motion from liability. Vehicle law would generally need to be updated to change this.
But that might be considered a legal trick. Suppose that, when you pay for a taxi, the standard conditions of carriage made it your responsibility to supervise the vehicle's operation and alert the driver so as to avoid accidents. Would the taxi driver and taxi company be able to escape liability through that formalism? Probably not. The fact that Tesla makes you sign something does not automatically make the signed document valid and enforceable.
It may be that it is; but then, if you are required to be watchful at all times, and able to take over from the autonomous vehicle at all times, the autonomy doesn't really help you all that much, does it?
> The law makes the driver of a vehicle liable for the operation, as it always has.
So, either those Teslas don't really self-drive (which may be the case, I don't know, but then the whole discussion is moot), or they do, in which case the human wasn't the one driving and may thus avoid liability.
Then of course there is the possibility that the court might be convinced the car was being driven collaboratively by the human and the car/the computer, in which case Tesla and the human might share the liability. IANA(US)L though.
All Teslas are level 2 ADAS and require the human behind the wheel to monitor the vehicle and intervene when necessary.
> or they do, in which case, the human wasn't the one driving and may thus avoid liability.
That is not legally true. Automation does not absolve someone from liability. Owners of a piece of machinery have liability just by being the owner and placing it into operation.
Forget about cars for a second -- we already have many products that are entirely automated, for example: an elevator. If you own a building with an elevator, and it hurts someone, the building owner is absolutely going to be sued over it, and "oh, it's automated" isn't a get-out-of-court-free card.
There are still responsibilities that the owner has: did they properly maintain it? were they aware of an issue but decided to operate it anyway? were they in a position to intervene and avoid the accident, but failed to do so?
They say they will, but until relevant laws are updated, this is mostly contractual and not a change to legal liability. It is similar to how an insurance company takes responsibility for the way you operate your car.
If your local legal system does not absolve you from liability when operating an autonomous vehicle, you can still be sued, and Mercedes has no say in this… even though they could reimburse you.
I would be surprised if that was what they were actually looking at. They are an established insurance company with their own data and the actuaries to analyze it. I can't imagine them doing this without at least validating a substantial drop in claims relating to FSD capable cars.
Now that they are offering this program, they should start getting much better data by being able to correlate claims with actual FSD usage. They might be viewing this program partially as a data acquisition project to help them insure autonomous vehicles more broadly in the future.
They are a grossly unprofitable insurance company. Your actuaries can undervalue risk to the point you are losing money on every claim and still achieve that.
In fact, Tesla Insurance, the people who already have direct access to the data, loses money on every claim [1].
That is why we do long and expensive trials before approving any medication for use.
Having said that, we have been medically lowering people's cholesterol levels for decades, and the evidence seems pretty clear at this point that it is a net health benefit to those for whom treatment is indicated.
It is not at all obvious that targeted gene editing would be more disruptive to the body compared to flooding the body with a drug that happens to interfere with the one part of the process that we found a drug to interfere with.
Particularly if we are editing the gene to match a form that is already present in much of the population.
Some issues could only become evident over a period of hundreds of years with gene editing. That's longer than any medical trial I'm aware of. And mistakes made would be difficult, if not impossible, to undo.
If medications can already do what's required for cholesterol issues, why wouldn't we continue to use them rather than making some change to affect a complex balance that could cause problems over very long timescales?
If we were to be editing a specific gene to match what the wider population has, then I'd be more ok with that.
High cholesterol is well documented to be heritable. Perhaps more relevantly, even if they would work, lifestyle changes have a significant patient compliance problem, which significantly reduces their effectiveness.
There is a more reasonable argument one could make, which is that we have well-tested and effective drugs available today for managing cholesterol. Any new treatment would need to clear the bar of being better than those (in at least some circumstances) to be put into wide use. That bar may be cleared by the fact that existing treatments often have adverse side effects.
Further, the one-time treatment aspect is actually a demerit in some ways, as one cannot stop the treatment if there is an adverse effect. This means that the safety profile would need to be much better than is typically required, and proven over a longer timeline.
Of course, these are all concerns about approval and widespread deployment. We are still in the early human trial phase, where much more risk is accepted (subject, of course, to ethical guidelines).
Genetic treatments have the scope to not only have unintended consequences, but unintended consequences that can last over generations of people. I am in favour of them for some things, but we need to tread very carefully with the technology.
It would give you a certificate chain which may authenticate the onion service as being operated by whoever it purports to be. Of course, depending on context, a certificate that is useful for that purpose might itself be too much of an information leak.
DV certificates (which Let's Encrypt provides) offer no verification of the owner. EV certificates for .onion could actually be useful, though one generally has to pay for an EV cert.
I've seen another type of "let projects fail" in my career, done by middle managers in a large project. Essentially it takes the form of them saying "the larger project we are working under is probably going to fail. When it does, I want our component to be useful for whatever comes next". And, the surprising thing is that this often worked: the project itself failed, but most of the work done on it still ended up being used.
Letting people fail and letting projects fail seem fairly different to me (at least for large projects).
There have been a bunch of times in my career where I've allowed people under me to "fail". Oftentimes, an individual failing at something is just not that expensive, while being highly educational. Sometimes, it turns out that their approach actually worked, and we as a group gained a new bit of institutional knowledge.