
Once you realize this sort of shallow, automated audit exists to grease the wheels of some security theater operation, this sort of back and forth makes sense.

False positives don't waste the security team's budget, and are the only surefire way to justify ongoing expenditures on the scanning tool.



So somehow false positives negate all the actual positives caught and corrected? The only true solution is what? Manual audit of everything by some perfect human security practitioner? I suppose the same applies to automated development tools then.

I will concede there probably are some firms out there acting poorly that way. When aren't there? Humans, sigh. But by and large, automation, with all its inherent problems, is required.


I was in this industry for a while (I did 5 years full time for a security company).

> I will concede there probably are some firms out there acting poorly that way.

This is a hilarious take. Here's mine: The vast majority of these firms are here for liability. They do not provide security, they provide security theater so that if/when something goes wrong, the client can claim to have followed best practices, and that it's not their fault.

This style of theater is RAMPANT in the industry. Literally most of them come in with some version of a shitty automated tool, often with false positives in the 95%+ range, and then you check off checkboxes to make legal happy, and ensure that your insurance (or your customer's insurance) will pay if you get compromised.

This is not limited to small companies, but is instead mostly how large banks, credit firms, and large enterprises work.

The entire damn show is for liability, and half of these "security professionals" can't do anything other than read the message coming out of their tool. They are utterly incompetent when it comes to applying logic to figure out whether a specific use of a "risky" tool/language feature is a genuine problem, vs a god damned fucking regex match from their tool on something completely unrelated.
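That "regex match" failure mode is easy to sketch. This is a hypothetical toy, not any real tool, but it's representative of how a pattern-only scanner flags things with zero context:

```python
import re

# Hypothetical sketch of a pattern-only "scanner": a bare substring match
# with no understanding of context, so comments, log strings, and unrelated
# text all trigger findings just as readily as actual risky code.
RISKY = re.compile(r"strcpy")

def scan_line(line: str) -> bool:
    """Flags any occurrence of the pattern, context be damned."""
    return bool(RISKY.search(line))

print(scan_line("strcpy(dst, src);"))                   # maybe a real finding
print(scan_line("/* replaced strcpy with strlcpy */"))  # false positive: a comment
print(scan_line('log("strcpy-style bugs are bad");'))   # false positive: a string
```

All three lines come back "vulnerable", and only the first one could conceivably be a finding worth a human's time.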

I went in as a naive young dev, I came out with ZERO respect for this industry, and a healthy dose of skepticism around anything these folks are saying.

Security researchers? Generally fine (although there's a new breed of them that simply submits inane non-vulnerabilities over and over to attempt to get bug bounties from large companies).

Security advice from open source devs? Generally top notch, listen to it.

Security advice from that contractor your company paid? Expect to have 95% false positives, and they'll miss the 2 places you know actually have issues. But it's ok because you'll check all the boxes on their sheet and legal is happy.

---

I'm waiting for insurance companies to wise up and stop covering companies who are breached. Prices are already shooting way up, since it turns out security theater does a very bad job stopping real threats, and that's what this industry is right now. Might as well be the TSA of software: gonna frisk you real good, and then flunk every fucking test.


My project's client, a major local bank, mandates pen testing before we go live with the product that we're building.

Due to time and resourcing constraints, and a fixed launch date (don't ask), we had to significantly reduce scope. We are currently launching with probably 30% of the original scope. Once live we'll iteratively add features and grow our product to bring it back to the original vision.

Will we have to do pen testing again in the future? Nope. Even though a lot of our planned post launch features involve integrations with 3rd parties and the exchange of sensitive personal data.

Pen testing is 100% a box-ticking exercise in my opinion. And most of the shops that offer this service are set up to help large corporates appease legal.

I really don't like pen testing as a practice and seriously dislike when it's a go live impediment.


What you’ve described is an issue with the bank’s procedures, not pentesting.


> I really don't like pen testing as a practice

could you elaborate on this a bit? what's wrong with it? to me it seems the only logical thing to spend external security budget on. (or internal red team if the company is so big it can afford it.)


Thank you, that is a very interesting take. I've always played with the idea of pivoting into the security industry because playing hack the box and the like is something I do in my free time and enjoy. Maybe I'll just do some bug bounties or something for fun instead. I think it's a bit different in Europe though at least anecdotally there seems to be more code audit and the like (more product security) instead of pen testing (more network/infrastructure security).


I'd say don't let yourself be discouraged by GP. Just look into a company before you apply. Many have public reports and/or security research, both of which you could use as indicators.

Here's a repo with lots of public reports by various consultancies, you could use that as a starting point: https://github.com/juliocesarfort/public-pentesting-reports


This post is amazing. Thank you. This captures my thoughts on this problem space very well!

Security vs compliance is real. I love how you just map "compliance = security theater" because that is really the best way to describe it. The TSA bit has me in tears!


Back in the late 90s, I was working at a small web hosting company (take note). One day, a 500+ page report from a recent PCI compliance check landed on my desk. It was nothing but "OMFG! DOMAIN1 RESPONDED TO PING! YOU WILL BE PWNED! OMFG! DOMAIN1 HAS DNS RECORDS! YOU WILL BE PWNED! OMFG! DOMAIN1 HAS A WEB SERVER RUNNING! YOU WILL BE PWNED!". Over and over again. For every domain we hosted. Complete and utter garbage report.

Better: just summarize the IPs scanned and report back which services were found running on said IPs. Then in an appendix, list why each service is (or might be) problematic. "Ping? Attackers might be able to figure out your network topology and that is bad because blah blah blah blah."

But 500 pages of this automatic breathless garbage? Utter trash.


The problem isn't that audits are inherently worthless, it's that most of these tools are very low-quality implementations of the concept.

In one of my past jobs I was an early-mover on doing a lot of our ops on Linux and the audit tools had no concept of the backport security model whatsoever. If you were running on some kind of LTS distro rather than a current distro, it would see "gosh you're 3 minor versions behind, you have a ton of unpatched vulnerabilities!", but in actuality you didn't, because the fixes got backported into security releases (the "31" in something like "3.1.22~ubuntu31" for example). But the tool was dumb and it just had a table that said "anything under 3.4.14 is vulnerable" even when it wasn't necessarily.
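The mismatch is easy to demonstrate. Here's a hypothetical sketch (package version strings and the CVE id are invented for illustration) of what the naive check does versus what actually matters on an LTS distro:

```python
# Hypothetical sketch: why "upstream version < first fixed version" is the
# wrong check on an LTS distro. A package version like "3.1.22-1ubuntu31"
# keeps the old upstream version (3.1.22), but the distro revision
# ("1ubuntu31") may already carry the backported CVE fix.

def naive_scanner_flags(installed: str, first_fixed: str) -> bool:
    """What the dumb tool does: compare upstream version numbers only."""
    upstream = installed.split("-")[0].split("~")[0]
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(upstream) < as_tuple(first_fixed)

def distro_changelog_fixes(changelog_cves: set, cve: str) -> bool:
    """What you actually need to ask: did the distro's security update
    (per its changelog / security notices) fix this specific CVE?"""
    return cve in changelog_cves

installed = "3.1.22-1ubuntu31"
print(naive_scanner_flags(installed, "3.4.14"))  # tool screams "vulnerable"
# ...but the distro revision's changelog says otherwise (invented CVE id):
print(distro_changelog_fixes({"CVE-2014-0001"}, "CVE-2014-0001"))
```

A real check would parse the distro's changelog or security-notice feed rather than a hardcoded set, but the point stands: the upstream version number alone tells you nothing on a backport-based distro.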

OP's "log4j 1.x is not vulnerable to this in the first place" is a similar thing where either someone forgot to put in a value for "min vulnerable version" or the tool doesn't understand the concept of min-version at all. To be fair that can be genuinely tricky in java, best-case you are reading manifest files to try to pick out a version, but some legacy stuff doesn't have manifests, especially very legacy fat-jar stuff. Yes, that stuff didn't go away, it's still out there in places!
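That best-effort manifest reading really is as fiddly as it sounds. A minimal sketch (assuming the jar even ships a usable manifest, which legacy fat jars often don't):

```python
import zipfile

def jar_implementation_version(path: str):
    """Best effort: read Implementation-Version from META-INF/MANIFEST.MF
    inside a jar (jars are just zip files). Returns None for legacy/fat
    jars with no usable manifest -- exactly the case where a scanner has
    to fall back to guessing from the filename."""
    try:
        with zipfile.ZipFile(path) as jar:
            manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8", "replace")
    except (KeyError, zipfile.BadZipFile, OSError):
        return None
    for line in manifest.splitlines():
        if line.lower().startswith("implementation-version:"):
            return line.split(":", 1)[1].strip()
    return None
```

Even this is optimistic: shaded/fat jars merge many manifests into one, so the version you read may belong to a different bundled library entirely.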

And to be clear this was a long-running issue... it wasn't an "oh, our tool doesn't understand LTS, I get it" moment after which they left me alone... every month they'd be back at me with a new list of vulnerabilities, even though I repeatedly explained to them that I had set up apt-get to auto-upgrade every night and reboot, so we were running whatever the latest security backports were available, and that just like the last 10 times these were yet more false positives.

And again, this isn't just "we have to run it by you no matter what" and they leave you alone either. What security really wanted was for me to run windows and manually install the updates and get back in line with everyone else, even though that would have been a less secure outcome than my locked-down linux boxes. But this wasn't my day-job at the company so to speak, it was helping out another project with some ops and it needed to be as low-effort as possible (and to be fair their requirements weren't steep, auto-patch-and-reboot was fine by them and never caused any breakage during my tenure there).

The root problem is that these sorts of tools aren't a substitute for an actual security culture, they're often a symptom of a compliance requirement and the whole thing turns into a box-checking exercise. Someone is on their ass about this because of contractual requirements or PCI requirements, and they bought the cheapest thing off the shelf that would check the box, and they are implicitly showing you here that they don't even understand how to interpret the output of that.


> The problem isn't that audits are inherently worthless, it's that most of these tools are very low-quality implementations of the concept.

I think the problem is that the current audits are inherently worthless. I've never seen another industry that would accept a 95% false positive rate from a tool. And that's on the low end in my experience (I've done 4 major codebase audits, and worked in the security industry for 5 years).

Tools that routinely spit out 1000-plus vulnerabilities, and 6 months later you've checked off all of them without a single valid vulnerability in the list (but 3 you found yourself during that time that were missed entirely by the shitty regex that is really the entire tool in question).


> current audits are inherently worthless

And that's not helped by the warped incentives. Security has to be, fundamentally, assessed as a holistic thing and even highly skilled technical people can't really do that with modern systems. Now you add in auditors, a profession populated by glorified accountants, working off of gargantuan checklists.

You invite and reward mediocrity. At best.

Anyone good enough to actually understand and able to work with the technology will find a job in the industry, for twice the pay. And the same applies to regulators. They can't retain technically skilled staff, so they are filled with accountants. As a result we get rules and regulations, written by technically incompetent accountants, aimed at technically incompetent accountants.

To fill the gaps we have shitty, false-positive ridden, noisy, outright useless tools that generate reports intended to satisfy accountants. Security has next to nothing to do with that.

And to top it off, we have an entire industry riddled with so-called security engineers who know how to run a tool (with some marginally helpful StackOverflow pointers) and export a report, but can't for their life actually interpret, let alone VERIFY, any of it.

That has as much to do with security engineering as burning ants with a magnifying glass has to do with biology.


You could set your build up to not auto-download questionable stuff from untrustworthy machines on the Internet.
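One common mechanism for this is checksum pinning: the build keeps a known-good hash per dependency and refuses anything else (real builds would use their tool's native support for this, e.g. pip's hash-checking mode or Gradle's dependency verification; the sketch below is a hypothetical illustration of the idea):

```python
import hashlib

# Hypothetical sketch of checksum pinning: the build holds a map of
# artifact name -> expected SHA-256 and rejects anything that is either
# unpinned or doesn't match, instead of trusting whatever a remote
# repository happens to serve today. Names and hash are invented.
PINNED = {
    "libfoo-1.2.3.jar": "a" * 64,  # placeholder hash for illustration
}

def artifact_ok(name: str, data: bytes) -> bool:
    expected = PINNED.get(name)
    if expected is None:
        return False  # unpinned dependency: reject outright
    return hashlib.sha256(data).hexdigest() == expected

print(artifact_ok("evil-0.0.1.jar", b"payload"))  # unpinned, rejected
```

The same principle applies whether the artifacts are jars, wheels, or npm tarballs: the build fails closed instead of auto-downloading whatever is newest.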




