Usually you have to provide another piece of information, like the first 5 letters of the last name or something. It's definitely not good that they show you a name just by putting in an account number.
Australia put a cap on interchange fees in 2017 that caused a drop in card rewards. Merchants argued that the lower interchange costs would be passed on to consumers. However, it appears to have only significantly affected consumer rewards.
It's interesting that the car was exceeding the speed limit of 35 mph. I would have assumed the car would stay at or below the speed limit. Who gets the speeding ticket in this case? And does the extra 5 mph affect the reaction time enough that it could have noticed and taken evasive action?
Whoever owns the algorithm. Or at least whoever's name the license/permission was issued in. If it's an organization, the top management who signed off on this has to take the blame.
Legally, the person behind the wheel was still the driver. They are responsible for both the speeding and for killing a pedestrian. At this stage it's no different than using cruise control - you are still responsible for what happens.
I really hope you're wrong. If the legal system doesn't distinguish between cruise control and SAE level 3 autonomy, the legal system needs to get its shit together.
No, that's bullshit. It's physically impossible for a human to intervene on the timescales involved in motor accidents. Autonomy that requires an ever-vigilant driver to be ready to intervene at any second is literally worse than no autonomy at all, because if the driver isn't actively driving most of the time, their attention is guaranteed to stray.
I agree with you - but that's literally the stage we're at. What we have right now is like "advanced" cruise control - the person behind the wheel is still legally defined as the driver and bears responsibility for what happens. The law "allows" these systems on the road, but there is no framework out there which would shift the responsibility to anyone else but the person behind the wheel.
>> It's physically impossible for a human to intervene on the timescales involved in motor accidents.
That remains true even without any automatic driving tech - you are responsible even for accidents which happen too quickly for anyone to intervene. Obviously if you have some evidence (dashcam) showing that you couldn't avoid the accident you should be found not guilty, but the person going to court will be you - not the maker of your car's cruise control/radar system/whatever.
Currently have two cars: a '14 Mazda3 with AEB, lane departure alert, radar cruise, BLIS, and rear cross-traffic alert - and an '11 Outback with none of that (but DSC and ABS, as well as AWD).
The assists certainly help more than anything else, so I feel that the Mazda is much safer to drive in heavy traffic than the older Outback.
The cruise only has autonomy over controlling the speed, including applying the brakes, but it is still autonomy. Of course, since my hands never leave the wheel, it may not fit with what you have in mind.
Having said that, Mazda (or Bosch?) really nailed their radar: it has never failed to pick up motorbike riders, even though the manual warns us not to expect it to work.
I feel more confident in a system where the ambition is smaller, yet execution more solid.
Fwiw I also tested the AEB against cardboard boxes, driving into them at 30 km/h without touching the accelerator at all, and came away very impressed by the system. It intervened so last-second I felt for sure it wasn't going to work, but it did - the first time there was a very slight impact, the next two were complete stops with small margins.
This stuff is guaranteed to save lives and prevent costly crashes (I generally refuse to use the word "accident") on a grander scale.
Bullshit?? It may be autonomous, but these cars are still far away from driverless. YOU get in the car, you know the limitations, you just said you even consider yourself physically incapable of responding in time to motor accidents, and that the safety will be worse than a non-autonomous car. Sounds to me what's bullshit is your entitlement to step into an autonomous vehicle when you know it diminishes road safety. Autonomous vehicles can in theory become safer than human drivers; what is bullshit is that you want to drive them now, when they are strictly not yet safer than a human, but do so without consequences.
I attended an Intelligent Transport Systems (ITS) summit last year in Australia. The theme very much centred around Autonomous Cars and the legality, insurance/liabilities and enhancements.
There are several states in the USA that are more progressive than others (CA namely). But with many working groups in and around the legal side, the uncertainty will hopefully become a thing of the past.
In Australia, they are mandating that by some year soon (don't have it on hand), achieving a 5-star safety rating will require some level of automation to exist. Features such as lane departure or ABS will become as standard as aircon.
Assuming ABS means "Anti-Lock Braking System" in this context, isn't that already standard? I can't think of a (recent) car with an ANCAP rating of 5 that doesn't have ABS. I'm not sure I would even classify ABS as automation in the same way that something like lane departure is automation. ABS has been around (in some form) since the 1950s, and works by just adjusting braking based on the relative turning rates of each wheel. Compared to lane departure, ABS is more like a tire pressure sensor.
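The wheel-slip logic described above can be sketched roughly like this. This is an illustrative simplification, not any vendor's actual controller; the function name and the 20% slip threshold are invented for the example:

```python
def abs_should_release(wheel_speeds_mps, vehicle_speed_mps, slip_threshold=0.2):
    """Return, per wheel, whether ABS should momentarily release brake pressure.

    A wheel is treated as locking when its speed falls too far below the
    vehicle's reference speed, i.e. its slip ratio exceeds the threshold.
    """
    decisions = []
    for w in wheel_speeds_mps:
        if vehicle_speed_mps > 0:
            slip = (vehicle_speed_mps - w) / vehicle_speed_mps
        else:
            slip = 0.0
        decisions.append(slip > slip_threshold)
    return decisions

# Front-left wheel has nearly stopped turning while the car is still moving:
print(abs_should_release([2.0, 14.5, 14.8, 14.6], 15.0))
# -> [True, False, False, False]: pressure is released only on the locking wheel.
```

Real controllers pulse pressure many times per second and estimate the reference speed rather than reading it directly, but the comparison of each wheel against that reference is the core idea.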
Does this responsibility stay with the driver, despite this clearly being an Uber operation? Aside from the victim, did self-driving tech just get its first, uhm, "martyr"?
By law (and please correct me if I'm wrong), the driver of the vehicle is responsible for everything that happens with the vehicle. Why would it matter if the vehicle is owned by UPS, Fedex, PizzaHut or Uber? Is a truck driver not responsible for an accident just because they drive for a larger corporation?
Let me put it this way - my Mercedes has an emergency stop feature when it detects pedestrians in front of the car. If I'm on cruise control and the car hits someone, could I possibly blame it on Mercedes? Of course not. I'm still the driver behind the wheel and those systems are meant to help - not replace my attention.
What we have now in these semi-autonomous vehicles is nothing more than a glorified cruise control - and I don't think the law treats it any differently (at least not yet).
Now, if Uber(or anyone else) builds cars with no driver at all - sure, we can start talking about shifting the responsibility to the corporation. But for now, the driver is behind the wheel for a reason.
The San Francisco Chronicle late Monday reported that Tempe Police Chief Sylvia Moir said that from viewing videos taken from the vehicle “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway." (bit.ly/2IADRUF)
Moir told the Chronicle, “I suspect preliminarily it appears that the Uber would likely not be at fault in this accident,” but she did not rule out that charges could be filed against the operator in the Uber vehicle, the paper reported.
Driving rules in the UK changed, at least a decade ago, so that there is no 10% margin. Speedometers are required by law never to read under the true speed, and they are more reliable now. So if you're going 36 mph then you'd be fined.
On top of the speedometer it has the GPS speed to compare against as well; I can't see how there is any excuse for being over the limit.
The quoted stats from UK advertising were that at 40mph 80% of pedestrians will die from the crash, at 30mph 20% will die.
Had the car been doing just under the limit, e.g. 33 mph, then there's a much better chance that the woman would have survived.
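Interpolating linearly between the two quoted figures gives a rough sense of what a few mph is worth. This is a crude assumption - the real risk curve is nonlinear - but it illustrates the point:

```python
def fatality_risk(speed_mph):
    """Linear interpolation between the quoted UK campaign figures:
    20% pedestrian fatality risk at 30 mph, 80% at 40 mph."""
    return 0.20 + (speed_mph - 30) * (0.80 - 0.20) / (40 - 30)

for v in (30, 33, 38, 40):
    print(f"{v} mph -> {fatality_risk(v):.0%}")
# 30 mph -> 20%, 33 mph -> 38%, 38 mph -> 68%, 40 mph -> 80%
```

Even on this simple model, the difference between 33 mph and 38 mph is roughly 30 percentage points of fatality risk.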
I cannot find a reference to backup your claim of the 10% + 2mph margin having been axed. In fact I remembered the Chief Constable calling for the end of it recently (implying it was still being used):
So when the road sign says 35mph it means the official speed limit is exactly 38.5mph?
Because sometimes that 10% is argued as a margin of error for humans supposedly not paying attention how fast they're going, but if that's the case then there's really no reason why the robot shouldn't drive strictly under the speed limit.
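For reference, the "10% + 2 mph" figure mentioned in this thread comes from ACPO guidance (police enforcement practice, not a statutory limit), and it works out like this:

```python
def enforcement_threshold_mph(limit_mph):
    """ACPO guidance: enforcement typically starts at
    10% over the posted limit, plus 2 mph."""
    return limit_mph * 1.10 + 2

print(enforcement_threshold_mph(35))  # 40.5
print(enforcement_threshold_mph(30))  # 35.0
```

So a 35 mph sign with the full guidance margin applied would mean enforcement from about 40.5 mph - but the legal limit is still 35, and a robot has no reason to rely on that margin.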
If you explicitly programmed a fleet of robots to deliberately break the law, then I don't think a fine for the first robot that gets caught is consequence enough, while the programmers adjust the fleet's code to not get caught again.
Consequences should be more severe if there's a whole fleet of robots programmed to break the law, even if the law catches the first robot right away and the rest of the fleet is paused immediately.
Should be noted that speedometers display a higher number than the actual speed. So if a cop flags a driver at 38.5 mph, there's a good chance their speedometer showed 40+ mph.
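The over-read described above is bounded by regulation; under UNECE Regulation 39 (which applies to cars sold in the UK and EU), the indicated speed must never be below the true speed and may exceed it by at most 10% of the true speed plus 4 km/h. A quick sketch of those bounds:

```python
def indicated_speed_bounds_kmh(true_kmh):
    """UNECE Reg 39: indicated speed must never under-read, and may
    over-read by at most 10% of the true speed plus 4 km/h."""
    return true_kmh, true_kmh * 1.10 + 4

# True speed of 62 km/h (~38.5 mph):
lo, hi = indicated_speed_bounds_kmh(62.0)
print(lo, hi)  # the dash may legally show anywhere from 62 up to ~72 km/h
```

So a driver genuinely doing 38.5 mph could indeed be looking at a needle showing well over 40 mph.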
It doesn't help that almost all the fascia locks on each vendor's machines are a standard key. With that standard key, you have full access to the computer or embedded device drive.
Nowadays the communication link to the dispenser is encrypted, making swapping the hard drive useless. The real problem is that the machines aren't replaced very often, so there are quite a few old models out in the field that are susceptible to these sorts of attacks.
I've been following the Ethereum/DAO situation since the drain of funds occurred. I currently do not own any cryptocurrencies, so this has been my first real look at the communities surrounding Ethereum and Bitcoin. In layman's terms, I am under the impression that these crypto coins are to be treated the same as cash. If I hand someone $10,000 in USD and they disappear and/or spend the money without fulfilling their side of the contract, I'm out of luck. Sure, I could take them to court, but if I can't find them (or I sent my crypto coins to another country) or they can't repay, my coins are still gone. I know there are different schemes to handle this situation, such as coin burn, escrow, etc. But, barring those methods, once a coin is spent it's forever spent, same as a crisp $100 I hand to some door-to-door salesman who never comes back.
After dipping my toe into the crypto currency world, learning about the DAO and Ethereum, there are a few things that bother me about this whole situation.
1.) Rolling back (voiding) transactions, which seems to go against the concept of cryptocurrencies in general.
2.) The conflict of interest between the DAO, Slock.it, and core Ethereum developers, and their hefty co-mingled investments in the DAO.
3.) The labeling of this person as a "thief" when they played by the rules set forth by the DAO contracts. Maybe not the spirit, but courts generally decide that. My understanding is that "the code is the law" and is final. (I don't know how you code the spirit, by the way.)
4.) The non-inclusive "democratic" vote of the Ethereum community on the hard fork that appears questionable.
5.) The different rules for reversing a "theft" depending on whose funds were taken (see #1 and #2) or how much was involved.
6.) The centralized decision making on the fork by a small number of people who stand to lose substantial funds based on a poorly understood investment vehicle.
This whole situation occurred because of some poorly implemented code, and now there's a whole lot more code being deployed in a hurried manner. If there is another unnoticed, unintentional way to run code that is not within the spirit of a contract in the future, and no core Ethereum or Slock.it funds are involved, will another hard fork occur? Will they legally be forced to do it? It seems as though the risk was contained to the DAO's funds and now, with the hastily rolled out fork, the whole Ethereum market is put at risk.
This. To me, (1) and (3) are amazingly, agonizingly ironic given the context of the situation, even if the outcome is not entirely surprising knowing how the voting is done and how centralized mining pools make things (as well as point (2)). The hypocrisy of the whole thing is astonishing to me.
>[…] and now there's a whole lot more code being deployed in a hurried manner.
This made me wonder: could the added code itself introduce bugs into the Ethereum protocol, allowing for even worse exploits than the one affecting the DAO? (Worst case scenario: unrestricted draining of any smart contract, or something like that.)
I’m not knowledgeable enough to answer myself, but maybe somebody is familiar enough with the code to say?
It looks like Gigster will invest (money, time?) in some companies from a predetermined fund, not Gigster itself. In turn, they will share 1% of their fund equity with all Gigster developers who were active at the time, paying out when a fund company or Gigster liquidity event occurs. They will allocate rights to returns to active Gigster developers over a 1 to 5 year period.
It looks like, as a Gigster developer, you would get 1% divided by the total number of active Gigster developers. If there were 500 Gigster developers over that 1-to-5-year period, you'd get 1%/500 = 0.002% of the fund's proceeds from a liquidity event, subject to Gigster's weighting mechanism. If the fund's equity stake was 20% in a company that exited for $100 million, that's $20 million to the fund, of which your 0.002% would be roughly $400, depending on how Gigster determines your weighted percentage.
It's not really even in the same league as a 401k or pension, contrary to what the page says.
Calling it equity is a bit confusing. On the page it states at the top:
"Gigster Fund provides our freelancers with access to equity from Gigster and select companies in our client portfolio."
At the bottom it states with regards to owning equity:
"No. Direct equity ownership or indirect ownership through a limited partnership has complex tax & legal implications, and the SEC limits the number of shareholders a corporation may have which would make direct ownership impossible after a certain number of freelancers".
And
"No. Direct equity ownership or indirect ownership through a limited partnership has complex tax and legal implications. In the United States, for example, Gigsters would be required by law to be accredited investors."