Not sure how this works, but if I were writing something that checks the "goodness" of passwords, I'd want to check it against a large, sorted list of, say, the 100 million most common passwords (since almost any attacker would start with a dictionary attack, and 10-100 million passwords is decently quick to run against a weak key derivation function). Such a database would be huge, so you'd prefer to do that check server-side.
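Something along these lines is what I have in mind; a rough sketch only, and the file name/format are my own assumptions, not anything the site in question actually does:

```python
# Rough sketch: server-side membership check against a large, pre-sorted list
# of common passwords via binary search. "common-passwords-sorted.txt" is a
# hypothetical file with one password per line, already sorted.
import bisect

with open("common-passwords-sorted.txt", encoding="utf-8") as f:
    COMMON = [line.rstrip("\n") for line in f]

def is_common(password: str) -> bool:
    """Return True if the candidate appears in the sorted common-password list."""
    i = bisect.bisect_left(COMMON, password)
    return i < len(COMMON) and COMMON[i] == password

# e.g. reject the password, or heavily penalise its score, if is_common() is True
```

For ~100 million entries you'd probably reach for a proper datastore or a bloom filter rather than an in-memory list, but the idea is the same.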
Look at the list of things they check. It shows you how they calculate the score for the password. The actual scoring is in fact done in JavaScript, which you can see when you inspect the page.
They want the password on the server side purely for statistics or some other reason that has nothing to do with scoring how "secure" the password is.
True, an account should be disabled after x number of bad guesses. But for me, securing against a "brute force" attack is more a case of defending the hashes from a db dump against cracking. It's easy to churn through vast numbers of hashes in no time at all these days. Here things like hashing algorithm speed, number of iterations, unique salting and original password length all have a part to play.
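For what it's worth, here's a minimal sketch of those ingredients (slow algorithm, iteration count, unique salt) using Python's standard library; the specific parameter choices are just illustrative assumptions on my part:

```python
# Minimal sketch of slow, salted password hashing with PBKDF2.
# The iteration count and salt length here are illustrative, not a recommendation.
import hashlib, hmac, os

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes, int]:
    salt = os.urandom(16)                       # unique salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest, iterations             # store all three alongside the user

def verify_password(password: str, salt: bytes, digest: bytes, iterations: int) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison
```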
Completely agree. For example, I use two-factor authentication on every account that offers it.
Regarding limiting passwords to 12 characters: the sites in question are putting their users at risk, and it's very likely they aren't storing passwords correctly, leaving anything you put in that password box vulnerable anyway.
It was a larger number at first, but try remembering say 10 words in a row over a long period of time; most people find that difficult.
If hashing is implemented properly (i.e. it's slow, with a large number of iterations) then passphrases shouldn't have to be that long. And if it isn't, then pretty much anything you use will be about as bad as anything else (some people are still using MD5, for example!!) :)
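Rough back-of-the-envelope numbers behind that claim; the guess rates and the 7,776-word Diceware-style list below are assumptions for illustration, not measurements:

```python
# How long exhausting a 4-word passphrase takes under a fast hash vs a slow KDF.
WORDS_IN_LIST = 7_776                         # Diceware-style word list (assumed)
PASSPHRASE_WORDS = 4
keyspace = WORDS_IN_LIST ** PASSPHRASE_WORDS  # ~3.7e15 combinations

FAST_HASH_GUESSES_PER_SEC = 10_000_000_000    # e.g. unsalted MD5 on GPUs (assumed)
SLOW_KDF_GUESSES_PER_SEC = 10_000             # heavily iterated KDF (assumed)

day = 60 * 60 * 24
year = day * 365
print(keyspace / FAST_HASH_GUESSES_PER_SEC / day)    # ~4 days to exhaust
print(keyspace / SLOW_KDF_GUESSES_PER_SEC / year)    # ~11,600 years to exhaust
```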
I don't see the harm in leaving the option in, given that for some applications a longer passphrase is critical (not everything can use iterated hash schemes: for instance if you want to deniably encrypt a hard drive partition to look like random data).
EDIT: also, 'explicit' is a nice touch, makes some pretty memorable passphrases, but I hope you're not taking from a small list of profanities, since that would seriously diminish the entropy. Be sure to factor in the (probably) much smaller number of possible 'explicit' passphrases when doing entropy calculations.
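To put some rough numbers on it (both list sizes below are made up purely for the example):

```python
# Entropy of a passphrase built from independently chosen words:
# each word contributes log2(list_size) bits.
import math

def passphrase_entropy_bits(list_size: int, num_words: int) -> float:
    return num_words * math.log2(list_size)

print(passphrase_entropy_bits(7_776, 4))   # ~51.7 bits for a Diceware-style list
print(passphrase_entropy_bits(100, 4))     # ~26.6 bits: far easier to brute force
```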
I'm the guy who originally contacted Troy Hunt about this, as he mentions in the blog post.
What annoys me is that I'm a very young developer, and I've only really just become interested in security (12 months ago I didn't even know what hashing was!!!), yet there are developers out there with years and years of experience, making huge sites for the likes of Tesco and TopCashBack for vast sums of money, and they don't think about incorporating even the simplest foundations of internet security that a novice like me would implement without even thinking! How is this possible?! If I'm doing it in tiny little PHP sites with 1 unique visitor ever, why are these 'experts' not doing it in their huge corporate sites with hundreds of thousands of users a month?!
I could cite multiple reasons, but the main one I see is that the people who pay for and specify the requirements of a project are often nontechnical and don't delegate any decision making authority to technical personnel, no matter how much talent is in the pool. Repeated attempts to bring up issues are met with blank stares, as if a foreign language is being used (and to nontechnical people, it is). I've gotten calls at 5 PM on a Friday that "Our new site (hosted on a third party server) is going live on Monday! We need a domain and an SSL certificate." In a corporate or institutional environment, bureaucratic obstacles can make such a request impossible. Throw in a VIP who won't budge, and you're about to ruin the weekend for multiple employees who will remain gun-shy about disclosing any problems in the future.
Years and years of experience? Often not, and that's speaking from years and years of experience!
Vast sums of money? Yes, at least the outsourcing vendors who churn this sort of thing out.
Unfortunately you're the exception, Mark, so good on you for that. Well, I mean unfortunate for the greater web-using population, but very fortunate for you!
To expand further on those points with a personal anecdote... I worked on a "big website" for a very big government department. I was still in university at the time, had about a year's experience with the platform, and was earning close to callcenter wages.
The government only contracts to companies on a particular whitelist, which we weren't. They paid a company something in the region of £300,000 for the website.
That company then outsourced 100% of the work to the tiny (single digits employee count) company I was at.
They paid us £40,000.
My share of that works out at about £2,000.
The fact that something cost half a million dollars in no way implies any kind of quality, or that the people working on the software will know what they're doing. (Really: another horror story involved a website that only needed to work in IE6... and had been built and tested purely in Firefox.)
Businesses care about loss, not security as an intrinsic merit. They will invest in it only to the extent that problems are actively costing them sales, raising expenses like insurance, or that it's required to comply with some sort of regulation (government, credit card processor, etc.).
Something positive, like a site redesign perceived as increasing customer appeal, or something negative, like an expensive legacy server upgrade, often trumps security unless it clearly poses a future risk. This is also why many large companies tend to follow the trailing edge: the main concern is usually avoiding a lawsuit alleging that they were significantly worse than everyone else in their industry.
Because huge corporations often prioritize other things over technical expertise, and experts often prefer to work in settings other than huge corporations.
This is not always the case. Lots of marketing/PR companies started off in the pre-Internet days, simply added websites to their list of services, and relied solely on the experience of their devs to handle the rest. I was a contractor for such a company, and to cut costs (read: to save on paying me) they hired a junior developer with no formal education, who specialized in copy-paste programming, to do many of their large sites. He's since left, but the damage he's done still lives on in the countless sites he built for their clients, which are full of horribly insecure code (especially #1 on the HTTPS no-no list).
The good news is that, following his departure, I got plenty of work doing clean-up jobs on all the sites he built. :)
In any sufficiently large organization, the people who make the decisions and the people who carry them out are separated by distance, background, experience, or other factors. This is why you see so much dysfunction in large organizations, and it extends to politics and other things that affect you as well. Look for it and you'll start seeing it everywhere.
Good question.
Why doesn't Chrome decompress it when the server's header indicates that it should?
Yes, it's unneeded and discouraged, but if developers do send gzipped content such as images (which, as we all know, including the blogger, they definitely shouldn't!), surely Chrome should just go ahead and decompress it as normal?
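For context, this is the header combination I mean; a minimal sketch using Python's built-in http.server (the file name is hypothetical, and this isn't the setup from the post), where the body is pre-gzipped and Content-Encoding: gzip is what's supposed to tell the client to decompress before rendering:

```python
# Serve a gzip-compressed image body with Content-Encoding: gzip.
# A conforming client should decompress it transparently before rendering.
import gzip
from http.server import BaseHTTPRequestHandler, HTTPServer

class GzipImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with open("logo.png", "rb") as f:          # hypothetical image file
            body = gzip.compress(f.read())         # compress on the fly
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.send_header("Content-Encoding", "gzip")   # client is expected to decompress
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), GzipImageHandler).serve_forever()
```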