I'm curious what the reasoning was behind approving a measure with such a high risk of backfiring very badly for VW. Was it something approved higher up, or something an engineering team quietly hacked into place to meet an emissions target?
All it takes is for a team and their direct managers to collude when oversight and review are lacking. As few as 5 people could have known about this; or possibly it went all the way to the top, but given the risks I very much doubt that. It's one thing to have errors or bugs, quite another to deliberately mislead the authorities on a major benchmark for vehicle approval. That's way beyond the gray zone.
I reasoned that it would have to be 2 people, or else any sensible person would know it would eventually get out: the executive who had to achieve some engine performance goal (but couldn't handle the software manipulation himself), and the software engineer (who wouldn't directly have to answer for engine performance goals).
"Two may keep counsel when the third's away." --Shakespeare, Titus Andronicus
I see the initial code of this hack as just a beginning. Then someone needs to test it with actual cars, maybe tweak it a bit, and then make sure it gets deployed on the selected car models while ensuring it works without breaking anything else. Moreover, this hack was in place for a few years... On the other hand, I'm not from the industry, so maybe it would actually be much easier.
This. There is also a pretty wide variation in skill level between different doctors. With only one arbitrarily selected opinion, the chances of getting advice from the wrong end of the spectrum are higher.
There's no shortage of conspiratorial comments and posts about Russia and China on HN either. There's something about conspiracies that captures our imagination, but there needs to be at least some basis in fact.
Sounds pretty circumstantial. Adobe, for example, has had many security vulnerabilities in Flash over the years. I doubt that they were intentional back doors.
This is getting into conspiracy territory, but we are talking about a government that intercepts networking equipment while it's being shipped, disassembles it, installs hardware back-doors, and then delivers it. Strong arming any US software company that has a near universal install base isn't really a stretch of imagination.
I believe most AV vendors, if/when persuaded by powerful agencies, won't need to introduce a specially crafted backdoor.
The average antivirus product has everything necessary already: personal licenses as a way to identify the specific machine or person, automatic streaming software updates as a way to deliver the payload, and almost unrestricted privileges on the target system, enough to infiltrate and stay concealed.
I agree with you on the Adobe software issues, particularly PDF reader. But you raise a question: how can we tell if a security vulnerability is an intentional back door or a goof? It's pretty easy to say "intentional" in some cases, and "goof" in others, but what about the vast majority that will inevitably lie between the easy-to-tell ends of the spectrum?
As an example, there's still room for argument about the Dual_EC_DRBG algorithm, and RSA making that the default PRNG for some or all of their products. RSA denies taking money for it. Nobody can make an airtight case for the NSA deliberately weakening it. Yet we still all kind of view Dual_EC_DRBG with suspicion.
The RSA deal was $10M, and this was after its acquisition by EMC, so $10M out of $25 billion in revenue makes it a bit of an odd sum for introducing a backdoor.
Not that RSA hasn't done something like this in the past, but that was public. Back when encryption software could not be exported, RSA came to an agreement with the US government to export its 64-bit encryption: it would use a 40-bit private key and append an additional 24 bits to each message, transmitted in clear text, which completed the key to its 64-bit size.
This was a government-mandated "work reducer": the NSA, if need be, could decrypt the message, since it had the ability to break 40-bit encryption and the remaining 24 bits of the 64-bit key were known for each message.
This wasn't hidden; it was even announced at a conference, with great pride, that RSA could now export its mail encryption suite to Europe.
Germany made a fuss about this 5 years after the fact, but everyone pointed out that, well, they announced it at a conference.
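For the curious, the scheme described above is just bit arithmetic. Here's a toy sketch of it (not actual RSA code, and whether the cleartext part was the low or high 24 bits of the key is an assumption for illustration):

```python
import secrets

KEY_BITS = 64
CLEAR_BITS = 24                       # appended to each message in clear text
SECRET_BITS = KEY_BITS - CLEAR_BITS   # 40 bits left to brute-force

def split_export_key(key: int) -> tuple[int, int]:
    """Split a 64-bit key into a 40-bit secret part and a 24-bit
    part that travels in the clear alongside the ciphertext."""
    clear_part = key & ((1 << CLEAR_BITS) - 1)
    secret_part = key >> CLEAR_BITS
    return secret_part, clear_part

def recover_key(secret_part: int, clear_part: int) -> int:
    """Recombine the parts. An eavesdropper already holds clear_part,
    so only the 2**40 secret space remains to search."""
    return (secret_part << CLEAR_BITS) | clear_part

key = secrets.randbits(KEY_BITS)
secret, clear = split_export_key(key)
assert recover_key(secret, clear) == key
assert secret < 2**SECRET_BITS and clear < 2**CLEAR_BITS
```

The point of the design is visible in the numbers: a 2**64 search becomes a 2**40 one, which was within reach of the NSA's hardware at the time.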
People are trivially easy to bribe. The KGB bought an FBI agent for 22 years for a total of only $1.4M.[1] And an army intelligence officer for only $250K over 25 years.[2]
It's strange to think that corporate employees would be that much harder to corrupt, especially for their own country.
Corrupting an RSA employee, sure; making a deal with a corporation for a measly $10M, nope.
Human assets are a different story: $1.4M, and even $250K, while not enormous sums, are still quite a lot of money.
Those assets are usually developed by other means; in most cases the money is largely irrelevant. Even if the asset refuses to take money, tradecraft mandates that they be forced to take it, just to leave a money trail they can then be threatened with if they no longer wish to comply.
Additionally, being paid makes the asset more invested in their duty, because it creates a link, as with a would-be employer, and lets them quantify their assignment with positive reinforcement, no matter how big or small it is.
So money paid to long-term assets isn't really a bribe. An initial sum might be used to turn the asset in the first place, but that also usually requires them to be in a position to need it, e.g. gambling debts, medical bills, etc.
Generally, assets that can be bribed will not be farmed in the manner of the cases you've mentioned; people who can be easily bribed cannot be trusted, which isn't a trait you want in an asset.
Eh, are we sure the $10M was the only thing being paid? Perhaps the NSA sweetened the deal for key decision makers.
Even without that, it's free money for a benign-sounding reason, while doing the security services a favour. "Hey, we've got this new crypto thing that's amazing, but people don't believe us. Add it to your product and we'll give you token compensation. And we'll make a note of what great guys you are."
Yet he got away with it! They only popped him after he got greedy in retirement and they set up a sting!
I know of employees busted for internal schemes they cooked up (it was pretty cool working with BigCorp to set up an international sting to get them). It simply cannot be that hard to find people who need or want money and get them to compromise things for relatively small amounts of money.
"To solve latency, Amazon built Availability Zones on groups of tightly coupled data centres. Each data centre in a Zone is less than 25 microseconds away from its sibling and packs 102Tbps of networking."
25 microseconds at the speed of light (best case, through a vacuum; through fiber is significantly slower) is ~4.7 miles, and based on the quote, that is the furthest they are apart. If your buildings are within 1-2 miles of each other, they're essentially the same facility.
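The arithmetic above checks out; here's the back-of-the-envelope version, including the common approximation that light in fiber travels at roughly 2/3 of c:

```python
# Sanity-check the 25 microsecond figure quoted by Amazon.
C_VACUUM = 299_792_458    # m/s, speed of light in vacuum
FIBER_FACTOR = 2 / 3      # light in fiber travels at roughly 2/3 c
LATENCY_S = 25e-6         # one-way latency between sibling data centres

vacuum_km = C_VACUUM * LATENCY_S / 1000
fiber_km = vacuum_km * FIBER_FACTOR
vacuum_miles = vacuum_km / 1.609344

print(f"vacuum: {vacuum_km:.1f} km ({vacuum_miles:.1f} mi), fiber: ~{fiber_km:.1f} km")
# prints: vacuum: 7.5 km (4.7 mi), fiber: ~5.0 km
```

So through actual fiber the upper bound is closer to 3 miles, which supports the "essentially the same facility" reading.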
Sure, it's not geographically redundant, but nobody in this thread claimed it was. DinkyG disputed your "one datacenter" claim, which does appear to be false.
Given that they created the account to comment in the DynamoDB thread, I'd guess they're a DynamoDB developer, but that doesn't invalidate anything they've said in this thread -- they even provided a 3rd party source.
This is incorrect. If you look for news articles about Amazon constructing data centers or buying facilities you'll notice that they have multiple data center facilities in each region.
I didn't refer to Homo naledi in my comment, but nonetheless the article does touch on the fact that Homo erectus was a widespread species across Africa-Eurasia, albeit after naledi was around.
Are you using a recent version of Windows (8, 8.1, or 10)?
Do you have automatic updates enabled?
Do you have standard Windows features such as User Account Control (UAC) enabled?
Do you use the computer with a standard user account as opposed to an administrator (root access) user account?
If the answers to all these questions are yes, I'd say you don't need an antivirus solution. Don't listen to the scaremongers. Microsoft has got you covered.
Honestly, I haven't had antivirus in 10 years and I haven't had a problem (although my Windows usage has declined over the years... I mainly just stick to Steam now).
Run updates, don't use browser plugins, stick to applications you trust, and stay away from seedy-looking sites when downloading common software (SourceForge comes to mind).
I agree, but only if one uses NoScript and RequestPolicy, never torrents software, etc.
It's not so much that MS has you covered as that AV vendors don't really catch new malware that has been mutated, packed, or otherwise obfuscated. So a more in-depth defense is better.
Not to mention that, hilariously enough, AVG is literally selling user data now, which is exactly what antivirus is supposed to protect against in the first place.
Depends what you're doing. If you like to play with a lot of risky torrents, then Windows Defender may not suffice. I also don't think Windows Defender does a great job of protecting you against infected removable media, either. Avira seems to be pretty good at all of that, and lightweight.
For risky websites, a combination of Chrome, WOT, uBlock Origin, HTTPS Everywhere, and Sandboxie and/or Malwarebytes Anti-Exploit (zero-day protection) should suffice.
Using a Standard (non-Admin) Windows account and being up to date goes without saying.
If you don't, and you don't do web browsing from inside Windows, then I can't imagine the need for anti-virus.
90% of attacks are Trojan horses (usually fake or embedded in pirated software) and the remaining 9.9% are browser attacks.
I doubt anyone is defeating your firewall/NAT box to get a direct connection to your windows machine, and even if they did they'd have to find a service they can exploit.
What are you using to run the VM? I've always had issues integrating the host and guest VM nicely in Windows - getting copy-paste working properly, resizing the window, etc.
How is this the same as having to run anti-virus software because the system's (i.e., Windows's) security model is broken?
> jailbreak iOS
Not sure why iOS is even relevant to my comment, since it isn't built on Linux (or even Unix).
> Security starts with the user.
This is true; a user who is bound and determined to hose their system can do it no matter what protections are in place.
But that's irrelevant to the point under discussion, which is how people who do not want to hose their system can keep it secure. On Windows, you have to run anti-virus software (and even the protection that provides is not foolproof), because the system's security model is broken. On Linux, the system's security model is functional to begin with, since unlike Windows, the system was designed that way from the ground up. So you don't need to run anti-virus software, and hence you don't have to worry about what information that software, which has a privileged position on your system, might be sending to others.
> Windows security is pretty good when running as a normal user and having UAC turned on on its full level and binaries validation.
Do you still need to run anti-virus software in this configuration?
> UNIX does have a better security model configuration out of the box, but is just as unsafe for the regular users that just dump stuff into their PCs
Again, I agree, if a user wants to hose their system, Unix won't prevent them. But anti-virus software won't prevent them either.
My point is, what about the user that doesn't want to hose their system? On Linux, it's very simple: use your package manager to install software, and don't run anything that wasn't installed that way.
You don't need an anti-virus if you are only running software from trusted sources, just like in Linux.
Just that "trusted sources" in Windows means not installing pirated software or that thing a friend gave you because it was so cool. Or going to shady internet sites.
All things that will hose a Linux system as well.
Linux package managers are nice until one needs something that isn't there, as happens to most average users who don't care about FOSS and won't force themselves onto alternatives.
And I never saw a UNIX that allows preventing users from installing software locally, as Windows does with Active Directory group policies. Although I bet there are some third-party commercial offerings for that.
> You don't need an anti-virus if you are only running software from trusted sources
What does "trusted sources" mean in the Windows world? Microsoft itself has shipped virus-infected CD-ROMs in the past.
> Linux package managers are nice until one needs something it isn't there
My sense is that, while this can happen, it's less likely to happen with the major Linux distros than it is with Windows. Major distros have tons of software in their package managers.
> I never saw a UNIX that would allow to prevent users to install software locally, as Windows does with Active Directory group policies
Um, you do realize that all it takes is not putting the user in the "sudoers" or "wheel" group (depending on the distro), right? This is routinely done in settings where only sysadmins are allowed to install software, such as universities. You certainly don't need anything as heavyweight as Active Directory group policies.
> What does "trusted sources" mean in the Windows world? Microsoft itself has shipped virus-infected CD-ROMs in the past.
Do you also read OpenSSH and Bash source code looking for security exploits?
> My sense is that, while this can happen, it's less likely to happen with the major Linux distros than it is with Windows. Major distros have tons of software in their package managers.
Quantity != the one piece of software that a user won't do without.
> Um, you do realize that all it takes is not putting the user in the "sudoers" or "wheel" group (depending on the distro), right? This is routinely done in settings where only sysadmins are allowed to install software, such as universities. You certainly don't need anything as heavyweight as Active Directory group policies.
I can install whatever software I want under $HOME; there is nothing preventing me from doing that.
> Do you also read OpenSSH and Bash source code looking for security exploits?
I don't personally, no. But I'm confident that there are experts doing so, and that when they find an issue, it is publicized and fixed quickly, because it's considered an extraordinary and urgent event, and allowing it to continue unfixed would be unacceptable. When MS shipped virus-infected CD-ROMs, nobody thought it was unacceptable, or even abnormal.
However, if you're confident enough in Windows' security features to run without anti-virus software, that's fine. My sense is that the vast majority of Windows users are not. But the vast majority of Linux users are.
> Quantity != Software X that user won't do without.
You're going to have to give specific examples, because I just don't see this as a significant issue that users who don't want to hose their systems have to deal with on Linux. I've never come across any software I needed as an ordinary user that I couldn't find in my Linux distro's package manager. (As a programmer, I have, but that's a different case.)
> I can install whatever software I want under $HOME
Which comes under the heading of users who want to hose their systems. If you don't want to hose your system, just don't do that.
(As an aside, I think you can actually lock down executable permissions in $HOME with SELinux. But I haven't tried it myself.)
Not a given. Where I work, most of the reports I get from security admins regarding compromised hosts (found to be port-scanning, attacking other hosts, etc.) are for Ubuntu systems. You still have to secure any services you're running and take basic common-sense precautions.
And any malware maker and his dog knows to bypass it before releasing his new malware. How many stories have you heard of Windows Defender stopping ransomware? That's right: ZERO.
Even with AV you could be infected for months without ever knowing. All it takes is getting infected by anything that hasn't made it into the (often out-of-date anyway) definitions.
The old school Unix method works very well: Keep a list of all changes made from the base install, then periodically swap the disk out for a blank one, follow your documentation and restore non-executable user data from backup. Also has the benefit of regularly validating your documentation and testing your backups, and allows easy rollback by following the same process for major OS updates or hardware upgrades.
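One way to keep that "list of all changes" honest is a hash manifest of the files you've touched. A minimal sketch (the tracked paths and manifest location are placeholders, not part of any particular tool):

```python
import hashlib
import json
import pathlib

def file_hash(path: pathlib.Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_manifest(paths, manifest_file):
    """Snapshot the hashes of every tracked file after a documented change."""
    manifest = {str(p): file_hash(pathlib.Path(p)) for p in paths}
    pathlib.Path(manifest_file).write_text(json.dumps(manifest, indent=2))

def check_drift(manifest_file):
    """Return the tracked files that changed (or vanished) since the snapshot."""
    manifest = json.loads(pathlib.Path(manifest_file).read_text())
    drifted = []
    for name, digest in manifest.items():
        p = pathlib.Path(name)
        if not p.exists() or file_hash(p) != digest:
            drifted.append(name)
    return drifted
```

Run `record_manifest` after each documented change; anything `check_drift` later reports that isn't in your notes is unexplained and a candidate for the next disk swap.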
Why did you switch from Bitdefender? I'm forced to "sysadmin" my family's Windows systems ... and last I looked, Bitdefender seemed to do a good job. Was I wrong to recommend?