
Something I’ve been thinking about, especially since the CrowdStrike debacle: why do major distributors of infrastructure (Microsoft in the case of CrowdStrike, DHS/TSA here) not require that vendors with privileged software access pass some sort of software distribution/security audit? If FlyCASS had been required to undergo basic security testing, this (specific) issue would not exist.


They often do. The value of those kinds of blanket security audits is questionable, however.

(This is one of the reasons I'm generally pro-OSS for digital infrastructure: security quickly becomes a compliance game at the scale of government, meaning that it's more about diligently completing checklists and demonstrating that diligence than about critically evaluating a component's security. OSS doesn't make software secure, but it does make it easier for the interested public to catch things before they become crises.)


Well, the value is OK, if the audits are taken seriously.

Also, any certificate bears the certifying company's name. We can always say "company A was hacked despite having its security certified by company B," so company B at least shares some of the blame.


In practice, most commercial attestations/certifications contain enough weasel language that the certifier isn't responsible for anything missed (i.e. reasonable effort only).

But yes, there are many standards for this (e.g. SOC 2 Type II reports).

In defense of their utility, the good ones tend to focus on (a) whether a control/policy for a sensitive operation exists at all in the product/company & (b) whether the controls that are implemented are effectively adhered to during the audited period.


That’s not really how they work. The auditor attests that they were provided with evidence that the systems/business units audited were compliant at the time of auditing. That doesn’t mean that the business didn’t intentionally fake the evidence, or that the business is compliant at any time subsequent to the assessment.

An auditor would certainly have some consequences if they were exposed for auditing negligently.

This is how the PCI SSC manages to claim that no compliant merchant/service provider has ever been breached: they assume that being breached means the breached party was non-compliant at the time of the breach. That's probably technically true, but it's misleading about what the claim actually means.


We're talking about getting a judgement in the court of public opinion, not a court of law, and no one is exempt from the former.


Many live in a special, labelled class that cannot be criticized.


Yes, certifiers are not responsible in the legal sense, but nothing stops us from posting crap about them on the internets.


> The value of those kinds of blanket security audits is questionable,

You're totally right. Why are people afraid to say that they're worthless? Why caveat or equivocate?

Adversaries in computer security do not mince words.


“Worthless” is quite a strong claim. There isn’t much work I’ve encountered that’s truly “worthless”, even though bad work can make me quite upset. Anyways, that’s why I would often caveat.


Mandatory audits by accredited auditors as a condition of market participation inevitably create a market for accredited auditors who don't uncover too much but ensure all the checkboxes are ticked. Much of the security industry is actually selling CYA, not actual security. The same dynamic is why, when buying a home/boat/car, you should get your own inspector rather than blindly trusting the seller's.


I'll say they are worthless, because most of the time they drag time away from things that could actually improve security. For example, at $LastJob we spent a ton of time on SOC2 compliance, and despite having applications with known vulnerabilities, we got hacked and ended up all over the news. Maybe instead of spending all that time getting SOC2 compliance finished, we could have worked on upgrading those apps.

Actually, I doubt they would have upgraded the apps; they'd have pocketed the profits instead. But SOC2 provides cover instead of real change.


SOC2 covers a set of vectors (mostly social/separation of controls from what I’ve seen), and you were attacked on another vector.

Maybe the org prioritized poorly and sucks overall, but that doesn’t mean SOC2 or compliance generally is worthless.


>SOC2 covers a set of vectors (mostly social/separation of controls from what I’ve seen)

THAT WAS THE PROBLEM. My bad, I thought most hacks were due to poor software management, but I'm glad SOC2 truly addressed the real problem.


I don’t understand your hostility. Internet strangers are responding to your comments in good faith.


In this particular case it was worthless. If you have known vulnerabilities, deprioritize that work to spend time on SOC2, and get hacked because of it… SOC2 was worthless. The whole point is security assurance; when you get hacked, you've proved the opposite of security assurance.

But you also gotta have the balls to stand up to the guy pushing SOC2 and say: no, there are known vulnerabilities, we are patching those first, and then we are doing SOC2. The way I frame it is "we know we have critical vulnerabilities; we don't need to go hunting for more until we fix them. Once we fix them, we go looking for other ways to improve our security posture." And if the CEO still insists (a big client requires it, so we're doing SOC2 simultaneously), you say fine, then hire a security consultant so we can go twice as fast. And if he refuses, you quit, because fuck that place.


That’s some bad prioritizing there Lou.


I’d rather understate a medium-confidence opinion than overstate it.


Because it's better than nothing to have independent organizations reviewing systems and other organizations. Saying otherwise is like saying penetration tests are useless because you cannot prove security with testing.


Even if these government security audits are checkboxes, don't they require some nominal pentesting and black-box testing, which test for things like SQL injection?

That should have caught this type of exposure?
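
For context, the FlyCASS flaw was reportedly a textbook login-form injection, exactly the kind of thing a nominal black-box scan flags. Here's a minimal sketch of the vulnerable pattern and its fix, using C with sqlite3 as a stand-in (FlyCASS's real stack isn't described here, and the table/column names are invented):

    /* Sketch only: sqlite3 stands in for whatever database FlyCASS
       actually used; "users", "name", and "pw" are invented names. */
    #include <stdio.h>
    #include <sqlite3.h>

    /* VULNERABLE: user input is concatenated into the SQL text, so a
       username like  ' OR 1=1 --  turns the WHERE clause into a
       tautology (the -- comments out the password check entirely). */
    int login_vulnerable(sqlite3 *db, const char *user, const char *pass)
    {
        char sql[512];
        snprintf(sql, sizeof sql,
                 "SELECT 1 FROM users WHERE name = '%s' AND pw = '%s';",
                 user, pass);
        sqlite3_stmt *st;
        if (sqlite3_prepare_v2(db, sql, -1, &st, NULL) != SQLITE_OK)
            return 0;
        int ok = (sqlite3_step(st) == SQLITE_ROW);
        sqlite3_finalize(st);
        return ok;
    }

    /* FIXED: bound parameters keep input as data, never as SQL syntax. */
    int login_safe(sqlite3 *db, const char *user, const char *pass)
    {
        sqlite3_stmt *st;
        if (sqlite3_prepare_v2(db,
                "SELECT 1 FROM users WHERE name = ? AND pw = ?;",
                -1, &st, NULL) != SQLITE_OK)
            return 0;
        sqlite3_bind_text(st, 1, user, -1, SQLITE_STATIC);
        sqlite3_bind_text(st, 2, pass, -1, SQLITE_STATIC);
        int ok = (sqlite3_step(st) == SQLITE_ROW);
        sqlite3_finalize(st);
        return ok;
    }

Any scanner that throws a stray single quote at a login form trips over the first version; that's roughly the bar even a "nominal" test sets.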


It may not apply to this specific incident, but pen-testing only ensures you meet a minimum standard at a specific point in time.

I almost feel I could write novels (if only I had time and could adequately structure my thoughts!) on this and adjacent topics. But the simple fact is that the SDLC in a lot of enterprises/organizations is fundamentally broken, and unfortunately a huge portion of what breaks it occurs long before a developer even starts bashing out code.


In the case of Microsoft/CrowdStrike, isn't this exactly the opposite of what HN rallies against? The users installed CrowdStrike on their own machines. Why should Microsoft be the arbiter of what a user can do to their own system?


They automatically occupy that position because, in practice, no user of a Microsoft system can audit the entire "supply chain" of that system, unlike one built from open-source components. Any "control" someone has over "their own" system is ultimately incomplete when a company owns and controls the operating system itself and has the sole power to both fix and inspect it.


>no user of a microsoft system can audit the entire "supply chain" of that system,

Yes you can: you can access the source code to audit it.

https://en.wikipedia.org/wiki/Shared_Source_Initiative


Microsoft determines who they give root-access signing keys to.


Because the EU required them to.


I've read that story; it inspired my question. Such a requirement wouldn't be out of bounds under the regulation.


Money. Eventually the lobbyists would make it so cumbersome to get the certification that only the defense industry darlings would be able to do anything. Look at Boeing Starliner for an example of how they run a “budget”.


They do. But market forces have pushed the standards down. Once upon a time a "pen test team" was a bunch of security ninjas who showed up at your office and did magic things to point out security flaws you didn't know were even a thing. Now it is an online service done remotely by a machine running a script, looking for known issues.


"I made my fortune with nmap, you can too."


Great, now my YouTube recommendations are also on HN...


Unfortunately, we're in kind of the worst of all possible worlds here too. Not only do we want to "automate" these kinds of tests, but governments have bought into the "security through obscurity" arguments of tech giants, so the degree to which these automations can even be meaningfully improved is gated, in practice, by whether whoever owns the tech approves of some auditor (automated or human) even looking at it. The author of this article runs a serious risk of retaliation just by looking into this.


Part of the reason CrowdStrike has this access, and why MS wasn't allowed to shut such vendors out with Vista, was a regulatory decision, one where regulators argued that somebody needs to do the job of keeping Windows secure in a way that a biased Microsoft can't.

So, I guess you could have some sort of escrow third party that isn't Crowdstrike or MS to do this "audit"?

Or see this for a much better write-up: https://stratechery.com/2024/crashes-and-competition/


MS could have provided security hooks similar to BPF on Linux, and to the equivalent mechanisms on Apple platforms, rather than letting CrowdStrike run arbitrary buggy code at the highest privilege level.
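
To make the comparison concrete, here is a minimal sketch of what a BPF-style hook looks like on the Linux side of the analogy (the program and names are illustrative, not anything Microsoft or CrowdStrike ships). The point is that the kernel's verifier proves memory safety and termination before the program is allowed to load, so a buggy sensor fails at load time instead of taking the machine down:

    /* Sandboxed observer attached to the execve tracepoint. The in-kernel
       verifier rejects unsafe programs at load time; contrast with a
       kernel driver, where one bad pointer dereference panics the host.
       Typical build: clang -O2 -target bpf -c exec_watch.bpf.c */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("tracepoint/syscalls/sys_enter_execve")
    int watch_exec(void *ctx)
    {
        char comm[16];
        bpf_get_current_comm(comm, sizeof(comm)); /* current task's name */
        bpf_printk("exec observed by %s", comm);  /* observe, not enforce */
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";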


Crowdstrike configured Windows to not start if their driver could not run successfully.

That's not the default option for kernel drivers on Windows, so this was an explicit choice on Crowdstrike's part.
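
For anyone curious about the mechanism: a Windows driver's start type and error-control level are set by its installer through the service control manager, not by Windows itself. A hypothetical install call that makes the boot-critical choice would look roughly like this (the service name and path are invented for illustration, not CrowdStrike's actual installer):

    /* Sketch of a kernel driver registering itself as boot-critical.
       SERVICE_ERROR_CRITICAL tells Windows to treat a load failure as
       fatal to the boot attempt (falling back to last-known-good);
       SERVICE_ERROR_NORMAL would just log the failure and keep booting. */
    #include <windows.h>

    BOOL install_boot_critical_driver(void)
    {
        SC_HANDLE scm = OpenSCManagerW(NULL, NULL, SC_MANAGER_CREATE_SERVICE);
        if (!scm)
            return FALSE;

        SC_HANDLE svc = CreateServiceW(
            scm,
            L"ExampleSensor",                 /* hypothetical service name */
            L"Example Sensor Driver",
            SERVICE_ALL_ACCESS,
            SERVICE_KERNEL_DRIVER,
            SERVICE_BOOT_START,               /* load during early boot */
            SERVICE_ERROR_CRITICAL,           /* the explicit choice here */
            L"\\SystemRoot\\System32\\drivers\\examplesensor.sys",
            NULL, NULL, NULL, NULL, NULL);

        if (svc)
            CloseServiceHandle(svc);
        CloseServiceHandle(scm);
        return svc != NULL;
    }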


They could have, but the timeline the regulators gave Microsoft to comply was incompatible with the amount of work required to build such a system. With a legal deadline hanging over their heads, Microsoft chose to hand over the keys to their existing tools.


^ This statement cannot be accepted without proof. It sounds outlandish and weird. Which regulator? Under what authority? Also, Microsoft doesn't listen to ANYBODY.


I've seen this stated before, but I haven't been able to find reliable data on when regulators required Microsoft to provide the access that they provided, or whether there's been time to provide a more secure approach. Do you know?


Crowdstrike could have included a BPF interpreter in their driver and used it for all the dangerous logic.


Replied in another comment, but I'm aware of the regulation that made Microsoft give access. To my knowledge, though, there's nothing in the regulation that stops them from saying "you have to pass xyz (reasonable) tests before we allow you to distribute kernel-level software to millions of people."


So, all companies must gatekeep like Apple? By law?


I've delivered software to the US government. My software has always been required to undergo security auditing.


Oh, they usually do require some kind of proof of security certification. However, the checkbox audits used to get those certs, and the kinds of solutions employed to tick the boxes, are the real problem.


I do believe that is the point of having things like FedRAMP and StateRAMP.

Your company must meet those requirements to become a vendor for certain agencies, or even to respond to an RFP from a governmental agency.


Sigh. The company is a different problem than the product. Sally in accounting, who has PII on her desk, is a totally different problem than the team that wrote insecure code 15 years ago.


Of course they require that.

Now, why wasn't the requirement enforced? Or why didn't the audit turn this up? Good questions.

But all of those are going to have some kind of requirement, e.g. FedRAMP.


Good to know; I didn't know this program existed, but it makes a lot of sense that it does. Why it wasn't enforced is now an incredibly big question.



