In my career, I've seen more 5k+-core fleets running Ubuntu in prod than not. Industries include healthcare, US government (and government contractors), marketing, and finance.
I'd say about 2/3 of the places I've worked started on Linux, with no Windows precedent beyond workstations. I can't speak for the experience of the founding staff, though; they might have preferred Ubuntu due to Windows experience--if so, I'm curious why, and what the two have to do with each other.
That said, Ubuntu in large production fleets isn't too bad. Sure, other distros are better, but Ubuntu's perfectly serviceable in that role. It needs talented SRE staff making sure automation, release engineering, monitoring, and de/provisioning behave well, but that's true of any you-run-the-underlying-VM large cloud deployment.
I interpreted the post as saying the support emails were legitimate but opened fraudulently (or at least some were), as pretext for the phishing phone call.
All sorts of reasons, but this isn't a left-pad situation. Axios's functionality is something provided by a library in a lot of languages (C/C++ with libcurl and friends, Python with requests, Rust with reqwest, and so on).
That's not to say it's inherently necessary for it to be a third-party package (Go, Ruby, and Java are counterexamples). But this isn't a proliferation/anemic stdlib issue.
Pinning, escrowing, and trailing all help, but I'm not sure "this step will be eliminated" is inevitable.
Package manager ecosystems are highly centralized. npm.org could require MFA (or rate limits, or email verification, or whatever) and most packagers would gripe but go along with it. A minority would look for npm competitors without that requirement, and another minority would hack/automate MFA and remove the added security, but the majority of folks would benefit from a centralized requirement of this sort.
Yup. As someone who's been on both the eng and security side, you cannot improve security by blocking the product bus. You're just going to get run over. Your job is to find ways of managing risk that work with the realities of software development.
And before anyone gets upset about that, every engineering discipline has these kinds of risk tradeoffs. You can't build a bridge that'll last 5,000 years and costs half of our GDP, even though that's "safer". You build a bridge that balances usage, the environment, and good stewardship of taxpayer money.
Requiring a human-in-the-loop for final, non-prerelease publication doesn't seem like that onerous of a burden. Even if you're publishing multiple releases a day on the regular (in which case ... I have questions, but anyway) there are all sorts of automations that stay secure while reducing the burden of having to manually download an artifact from CI, enter MFA, and upload it by hand.
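As a concrete sketch (not any particular project's release tooling; npm's real `--otp` flag does pass a 2FA one-time code with the publish): everything up to the final upload is automated, and only the last command touches a human.

```python
import getpass
import subprocess

def publish_cmd(tarball: str, otp: str) -> list[str]:
    # npm publish accepts a one-time 2FA code via --otp.
    return ["npm", "publish", tarball, f"--otp={otp}"]

def publish(tarball: str) -> None:
    # CI can build, test, and pack the artifact; only this last step
    # needs a human with an authenticator in hand.
    otp = getpass.getpass("npm one-time password: ")
    subprocess.run(publish_cmd(tarball, otp), check=True)
```

That's a far smaller burden than manually downloading the artifact from CI and uploading it by hand, while keeping an attacker who owns your CI from publishing on their own.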
That's a good thing (disruptive "firebreak" to shut down any potential sources of breach while info's still being gathered). The solve for this is artifacts/container images/whatnot, as other commenters pointed out.
That said, I'm sorry this is being downvoted: it's unhappily observing facts, not arguing for a different security response. I know that's toeing the rules line, but I think it's important to observe.
This is the right answer. Unfortunately, this is very rarely practiced.
More strangely (to me), this is often addressed by adding loads of fallible, partial caching for package managers (in e.g. CI/CD or deployment infrastructure) rather than by building and publishing ephemeral per-user/per-feature packages to an internal registry for dev/testing. Since the latter's usually less complex and more reliable, it's odd that it's so rarely practiced.
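The ephemeral-package approach mostly needs a naming convention: derive a unique prerelease version per branch/build, publish it to the internal registry, and let the normal package manager resolve it. A sketch of the version derivation, with illustrative names (not any specific tool's scheme):

```python
import re

def ephemeral_version(base: str, branch: str, build: int) -> str:
    # Collapse anything that isn't semver-safe (slashes, spaces, '#', etc.)
    # into hyphens, so "feature/retry-logic" becomes a valid prerelease id.
    ident = re.sub(r"[^0-9A-Za-z-]+", "-", branch).strip("-").lower()
    return f"{base}-{ident}.{build}"
```

Because prerelease versions sort below the base release, normal consumers never pick them up by accident; only a dev who explicitly requests `1.4.0-feature-retry-logic.3` gets it.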
I like your dream. I think financial incentives make it unlikely, though. The writing's been on the wall for user-friendly general computing OSes for a while, I think. So Microsoft's incentive is to treat Windows like a loss leader (even if it's not) and use it as a funnel for services/subscription revenue from their other products.
I hate that/wish it weren't so, but I think the last ~15y of M$ decisionmaking makes a lot of sense in that context.
Another aspect to this: I really doubt consumers would move to Linux if there were any paywall or "donate for more features" aspect to it. Something that isn't emphasized much is how much OSS/Linux work is done by big corporations for goals that aren't aimed at small-scale users; it's a happy byproduct that many aspects of their systems may run better just by swapping the OS, all free to them. Similarly, Valve's efforts seem tightly focused on what matters to their products/services, and being available to everyone is a byproduct.
The Windows cost gets hidden/de-emphasized when buying a PC, or users just ignore it, which seems to be below MS's pain tolerance for lost revenue on those users. If there were a price of admission to Linux, such that any company devoting resources to it couldn't treat the work as a loss leader for something else, migrating users over would be an even tougher struggle. (And right now, most people moving to Linux are likely enthusiasts to some degree.)