Firing someone for an honest mistake is a great way to make all of your employees afraid to push changes out of concern that they'll be next (although to be fair this was a pretty big mistake).
Things like this are usually a systemic failure rather than something 100% attributable to a single person.
There are magnitudes of mistakes. I can drop prod 4 times a day on a project that's being used by 4 old grannies to sync their Excel file and get away with it. Or I can get scolded for 5 minutes of downtime at 6 am.
There is a question of responsibility here, and it is up to management to triage the fallout of this "mistake".
>before I went to elite companies, where it is quite normal for people to live-and-breathe software, at almost all hours.
Honest question: Do they actually _want_ to live-and-breathe software, or do they work in a highly competitive and highly compensated environment where doing that is implicitly required?
Definitely a mix, though I agree with you that the majority fall under, "they work in a highly competitive and highly compensated environment where doing that is implicitly required."
>It is true IF you live and work in the Bay Area, Seattle, and TLV - which represent the bulk of tech industry employment.
Is that actually true (the bulk of people in the tech industry are working in "big tech" or startups)?
I don't know if there's any hard data around this, but my understanding has been that people working for these types of companies are maybe a single digit percentage of all tech workers (if that).
People working for those companies are certainly the most vocal online, though, which maybe skews perception.
Same here. A local grocery store and several other local businesses got bought out and demolished so Amazon could build a new Fresh store.
I guess Amazon pulled out of the project halfway through, since for the last ~2 years there's been a half-finished building just sitting there completely abandoned in our town center.
This is why you shouldn't waste your money on expensive "consultants" like this guy.
We've had 100% success in reducing Dependabot noise by disabling it in our repos. Why should we pay this guy to configure it for us and still end up with Pull Requests being opened?
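To be fair, disabling it isn't the only option short of hiring a consultant. Most of the noise comes from one-PR-per-dependency on a daily cadence, and the config itself can batch that down. A sketch (keys are from GitHub's `dependabot.yml` schema; the group name and thresholds are just illustrative):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "monthly"        # batch updates instead of daily PRs
    open-pull-requests-limit: 3  # cap how many PRs can pile up
    groups:
      minor-and-patch:           # illustrative group name
        update-types:
          - "minor"
          - "patch"
```

With grouping, low-risk minor/patch bumps arrive as one combined PR per month instead of a steady drip.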
Aren't the shape/size/placement/etc. of human teeth fairly unique across different individuals? At least unique enough to use dental records to identify bodies.
I don't see it mentioned in TFA, but if new human teeth can be grown, is it expected that the new ones will just grow in "correctly" to fit a person's mouth?
> At least unique enough to use dental records to identify bodies.
Yes but in comparative dental analysis they use ante-mortem dental records to compare with post-mortem remains. It's not like DNA where you can record it once and then use that to match samples decades later in a database. In order to have a high confidence in a match, recent x-rays and records of dental work like fillings, crowns, etc. work best.
And no, it is not expected. It's one of the primary challenges with bringing these kinds of drugs to market, as hyperdontia is already relatively common among humans (I had an incisor growing at the roof of my mouth an inch behind my row of teeth). Most successful applications of these tooth regrowth drugs tend to place them near the root of missing teeth, in the hope that the cellular growth signaling mechanisms are still working.
>The thing to do is to monitor your dependencies and their published vulnerabilities, and for critical vulnerabilities to assess whether your product is affected by it. Only then do you need to update that specific dependency right away.
The practical problem with this is that many large organizations have a security/infosec team that mandates a "zero CVE" posture for all software.
Where I work, if our infosec team's scanner detects a critical vulnerability in any software we use, we have 7 days to update it. If we miss that window, we're "out of compliance", which triggers a whole process that no one wants to deal with.
The path of least resistance is to update everything as soon as updates are available. Consequences be damned.
You view this as a burden, but (at least if you operate in the EU) I’d argue you’re actually looking at a competitive advantage that hasn't cashed out yet.
Come 2027-12, the Cyber Resilience Act enters full enforcement. The CRA mandates a "duty of care" for the product's lifecycle, meaning if a company blindly updates a dependency to clear a dashboard and ships a supply-chain compromise, they are on the hook for fines up to €15M or 2.5% of global turnover.
At that point, yes, there is a sense in which the blind update strategy you described becomes a legal liability. But don't miss the forest for the trees, here. Most software companies are doing zero vetting whatsoever. They're staring at the comet tail of an oncoming mass extinction event. The fact that you are already thinking in terms of "assess impact" vs. "blindly patch" already puts your workplace significantly ahead of the market.
The CRA, unfortunately, also has language along the lines of "don't ship with known vulnerabilities" without defining who determines what counts as a vulnerability, or how, so I fully expect this no-thoughts-only-checkboxes approach to increase with it. (There's already a bunch of other standards that can be imposed on organizations from various angles and that essentially force updates without any consideration of the risk of introducing new vulnerabilities or supply-chain attacks.)
At my previous job we did continuous deployment and had a weekly JIRA ticket where an engineer would merge Dependabot PRs. We scanned everything in our stack with Trivy to stay aware of security vulnerabilities, and had processes to ensure they were always patched within 2 weeks.
I really dislike that approach. By now we evaluate high-severity CVEs ASAP as a group to figure out whether we are affected and whether mitigations apply. Then there is the choice of crash-patching and/or mitigating in parallel, updating fast, or just prioritizing that update more.
We had like 1 or 2 crash-patches in the past - Log4Shell was one of them, and blocking an API no matter what in a component was another one.
In a lot of other cases, you could easily wait a week or two for directly customer facing things.
This isn’t a serious response. Even if you had the clout to do that, you’d then own having to deal with the underlying pressure which led them to require that in the first place. It’s rare that this is someone waking up in the morning and deciding to be insufferable, although you can’t rule that out in infosec, but they’re usually responding to requirements added by customers, auditors needed to get some kind of compliance status, etc.
What you should do instead is talk with them about SLAs and validation. For example, commit to patching CRITICAL within x days, HIGH with y, etc. but also have a process where those can be cancelled if the bug can be shown not to be exploitable in your environment. Your CISO should be talking about the risk of supply chain attacks and outages caused by rushed updates, too, since the latter are pretty common.
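The SLA-plus-waiver process can be sketched as a tiny policy check. Everything here is hypothetical (the day counts, severity names, and function names are made up to make the idea concrete, not any standard's requirements):

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical SLA windows (days to patch) per severity; the real
# numbers are whatever you negotiate with your security team.
SLA_DAYS = {"CRITICAL": 7, "HIGH": 30, "MEDIUM": 90}

def patch_deadline(severity: str, published: date) -> Optional[date]:
    """Deadline for patching, or None if no SLA applies (e.g. LOW)."""
    days = SLA_DAYS.get(severity)
    return None if days is None else published + timedelta(days=days)

def out_of_compliance(severity: str, published: date, today: date,
                      waived: bool = False) -> bool:
    """'waived' models a finding shown not to be exploitable here."""
    if waived:
        return False
    deadline = patch_deadline(severity, published)
    return deadline is not None and today > deadline
```

The point is the `waived` escape hatch: the SLA clock only matters for findings that actually apply to your environment, which is exactly the nuance a blanket "zero CVE" policy throws away.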
SOC2/FIPS/HIPAA/etc don't mandate zero-CVE, but a zero-CVE posture is an easy way to dodge all the paperwork that would be involved in exhaustively documenting exactly why each flagged CVE doesn't actually apply to your specific scenario (and then potentially re-litigating it all again in your annual audit).
So it's more of a cost-cutting/cover-your-ass measure than an actual requirement.
There are several layers of translation between public regulations, a company's internal security policy and the processes used to enforce those policies.
Let's say the reg says you're liable for damages caused by software defects you ship due to negligence, giving you broad leeway in how to mitigate risks. The corporate policy then says "CVEs with score X must be fixed in Y days; OWASP best practices; infrastructure audits; MFA; yadda yadda". Finally, the enforcement is done by automated tooling like SonarQube, Prisma, Dependabot, Burp Suite, ... and any finding must be fixed with little nuance, because the people doing the scans lack the time or expertise to assess whether any particular finding is actually security-relevant.
On the ground the automated, inflexible enforcement and friction then leads to devs choosing approaches that won't show up in scans, not necessarily secure ones.
As an example I witnessed recently: A cloud infra scanning tool highlighted that an AppGateway was used as a TLS-terminating reverse proxy, meaning it used HTTP internally. The tool says "HTTP bad", even when it's on an isolated private subnet. But the tool didn't understand Kubernetes clusters, so a public unencrypted ingress, i.e. public HTTP, didn't show up.
The former was treated as a critical issue that must be fixed asap or the issue will get escalated up the management chain. The latter? Nobody cares.
Another time I got pressure to downgrade from Argon2 to SHA2 for password hashing because Argon2 wasn't on their whitelist. I resisted that change but it was a stressful bureaucratic process with some leadership being very unhelpful and suggesting "can't you just do the compliant thing and stop spending time on this?".
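To make the downgrade concrete, here's a minimal sketch of the difference (Python stdlib only; `scrypt` stands in for Argon2, since Argon2 needs a third-party package like argon2-cffi, but it has the same memory-hard property that a bare SHA-2 digest lacks):

```python
import hashlib
import hmac
import os

def sha256_hash(password: bytes) -> bytes:
    # What the whitelist asked for: one cheap, unsalted digest.
    # An attacker with a GPU can test billions of guesses per second.
    return hashlib.sha256(password).digest()

def kdf_hash(password: bytes, salt: bytes) -> bytes:
    # Memory-hard KDF: with n=2**14, r=8 each guess costs ~16 MiB of
    # RAM plus many sequential passes, which is what makes offline
    # brute-force expensive (the property Argon2 also provides).
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)

salt = os.urandom(16)
stored = kdf_hash(b"hunter2", salt)
# Verify with a constant-time comparison to avoid timing leaks.
assert hmac.compare_digest(stored, kdf_hash(b"hunter2", salt))
```

"SHA-2 is on the whitelist" is true for integrity checks, where a fast hash is exactly what you want; for password storage, fast is the vulnerability.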
So I agree with GP that some security teams barely correlate with security, sometimes going into the negative. A better approach would be to integrate software security engineers into dev teams, but that'd be more expensive and less measurable than "tool says zero CVEs".
>Welcome to America where you must watch the kid every second until they turn 18
This must be a regional thing?
I live in New England and I always see kids out and about with no adults around supervising. Especially from 1-3PM on weekdays when school lets out. Maybe a side-effect of walkable infrastructure.