Hacker News

A sane DevOps workflow is built on declarative systems like NixOS or Guix System, definitely not on a VM infra that in practice is rarely up to date, full of useless deps, running on a host that is equally out of date, with the entire infra typically neither much managed nor manageable, and with an immense attack surface...
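To make "declarative" concrete: a minimal sketch of a NixOS host definition, where the hostname, service, and paths are made-up examples. The point is that the whole machine is described in one file, and rebuilding from it reproduces the system exactly, with nothing on it that isn't listed.

```nix
# Hypothetical NixOS host -- names and paths are illustrative only.
{ config, pkgs, ... }:
{
  networking.hostName = "web01";

  # The service and its config live in the declaration, not in ad-hoc
  # changes made on the running machine.
  services.nginx = {
    enable = true;
    virtualHosts."example.org".root = "/var/www/example";
  };

  networking.firewall.allowedTCPPorts = [ 80 443 ];

  # No stray dependencies: only packages listed here end up installed.
  environment.systemPackages = with pkgs; [ git ];
}
```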

VMs are useful for those who live on the shoulders of someone else (i.e. *aaS), which is anything but secure.



I'm not sure what you're referring to here?

Our cloud machines are largely VMs. A deployment means building a new image and telling GCP to roll it out as machines come and go with scaling. The software is up to date, and dependencies are managed via Ansible.
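The kind of dependency management described above might look roughly like this playbook sketch; the host group and package names are assumptions, not the poster's actual setup.

```yaml
# Illustrative Ansible playbook -- "web", nginx, python3-venv are
# placeholder assumptions, not the commenter's real inventory.
- hosts: web
  become: true
  tasks:
    - name: Keep base system packages current
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Ensure the application's runtime dependencies are present
      ansible.builtin.apt:
        name:
          - nginx
          - python3-venv
        state: present
```

Baking this into the image at build time, then replacing whole machines instead of mutating them, is what keeps the fleet consistent as instances come and go.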

Maybe you think VMs means monoliths? That doesn't have to be the case.


That's precisely the case: instead of owning hardware, which per machine is a kind of monolith (even counting blades and other modular solutions), you deploy a full or half-full OS to run just a single service, on top of another "OS". Yes, this is the cloud model, and it is also the ancient and deprecated mainframe model, with much more complexity added, no unique ownership, and an enormously big attack surface.

Various experience reports show that the cloud model is neither cheaper nor more reliable than owning iron; it is just fast, because you live on the shoulders of someone else. A speed you will pay for at some unknown point in time, when something happens and you have zero control over it.

DevOps, meaning the Devs taking over Ops without the needed competences, is a modern recipe for a failing digital ecosystem, and we have witnessed that more and more in various "biblical outages": Roomba devices bricked by an AWS mishap, cars from a certain vendor with a slew of RCEs, payment-system outages, and so on. A resilient infra is not a centrally managed "decentralized" infra; it is a vast and diverse ecosystem interoperating through open and standard tools and protocols. Classic mail or Usenet infrastructure is resilient; GMail backed by Alphabet's infrastructure is not.

What if Azure collapses tomorrow? What's the impact? What's the attack surface of living on the shoulders of someone else, typically much bigger than you and often in another country, where getting even basic legal protection is costly and complex?

Declarative systems on iron mean you can replicate your infra ALONE, on your own iron. With VMs you need far more resources, you do not even know the entire stack of your infra, and you essentially cannot replicate anything. VMs/images are still built the classic '80s-style semi-manual way, with some automation written by a dev who barely knows how to manage his or her own desktop, and others use it carelessly because "it's easy to destroy and restart". As a result we have seen production images with some unknown person's SSH authorized keys, because to be quick someone picked the first ready-made image from a Google search and added just a few things. We are near the level of crap of the dot-com bubble, with MUCH more complexity and weight.


(note .. use 'which' not 'witch', quite different words)

Not sure if you mentioned it, but cost and scaling are an absurd trick of AWS and others. AWS is literally thousands, and in some usage cases even millions, of times more expensive than your own hardware. Some believe that savings on employee costs make up for it, but that's not even remotely close.

Scaling is absurd. You can buy one server worth $10k that can handle the equivalent of thousands upon thousands of AWS instances' workload. You can buy far cheaper servers ($2k each), colo them yourself, have failover capability, and even have multi-datacentre redundancy, immensely cheaper than AWS. Thousands of times cheaper. All with more power than you'd ever, ever, ever scale to at AWS.
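Whether the gap is 20x or 1000x depends entirely on the workload, but the shape of the arithmetic is easy to sketch. Every number below is a made-up assumption for illustration (server price, lifetime, colo rent, instance count, hourly rate), not a quoted price from any provider.

```python
# Back-of-the-envelope cost comparison -- all figures are hypothetical.

def monthly_colo_cost(server_price, lifetime_months, colo_rent):
    """Amortized hardware cost plus colocation rent, per month."""
    return server_price / lifetime_months + colo_rent

def monthly_cloud_cost(instances, hourly_rate, hours=730):
    """Flat on-demand cost for a fleet of identical instances
    (730 = average hours in a month)."""
    return instances * hourly_rate * hours

# Assumed: a $10k server amortized over 5 years, $150/month colo rent,
# versus 40 on-demand VMs at an assumed $0.20/hour.
colo = monthly_colo_cost(server_price=10_000, lifetime_months=60, colo_rent=150)
cloud = monthly_cloud_cost(instances=40, hourly_rate=0.20)

print(f"colo:  ${colo:,.2f}/month")
print(f"cloud: ${cloud:,.2f}/month")
print(f"ratio: {cloud / colo:.1f}x")
```

Even with these deliberately modest inputs the fleet costs well over an order of magnitude more per month than the owned machine; plug in the "one big server replaces thousands of instances" scenario from the comment above and the ratio explodes accordingly.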

All that engineering to scale, all that effort to containerize, all that reliance upon AWS and their support system.. unneeded. You can still run docker locally, or VMs, or just pound it out to raw hardware.

So on top of your "run it on bare metal" concept, there's the whole "why are you wasting time and spending money on AWS" argument. It's so insanely expensive. I cannot repeat enough how insanely expensive AWS is. I cannot repeat enough how AWS scaling is a lie when you don't NEED to scale on local hardware. You just have so much more power.

Now.. there is one caveat, and you touch on this. Skill. Expertise. As in, you have to actually not do Really Dumb Things, like write code that uses thousands of times more CPU than the task needs, or write DB queries or schemas that eat up endless resources. But of course, if you do those things on your own hardware, in DEV, you can see them and fix them.

If you do those in AWS, people just shrug, and pay immense sums of money and never figure it out.

I wonder, how many startups have failed due to AWS costs?


> use 'which' not 'witch', quite different words

Thanks, and sorry for my English; even though I use it for work I do not normally use it conversationally, so it's still quite poor...

Well, I'm not talking specifically about AWS, but in general living on someone else's infrastructure costs much more in OPEX than it saves in CAPEX, and it is a deeply critical liability, especially once we start developing against someone else's API instead of just deploying something "standard" we can always move unchanged.

Yes, technical debt is a big issue, but it is a relative one, because if you can't maintain your own infra you can't be safe anyway; the "initial easiness" means a big disaster sooner or later, and the later it comes the more expensive it will be. Of course a one-person startup can't have offsite backups, geo-replication and so on on its own iron, but using the MINIMUM of third-party services, staying as standard and vendor-independent as possible until you earn enough to own the iron, is definitely possible at any scale.

Unfortunately it's something we have almost lost, since Operations essentially no longer exists except at a few giants. Devs have no substantial ops skill, since they come from "quick" full-immersion bootcamps where they learned only to do repetitive things with specific tools, like Ford assembly-line workers able only to turn a wrench. And most management still fails to understand IT for what it is: not "computers", as telescopes are to astronomers, but information, as stars are to astronomers. This toxic mix has allowed a very few to reach hyper-big positions, but they are starting to collapse because their commercial model is technically untenable, and we are all starting to pay the high price.


VMs are useful when you don't own or rent dedicated hardware. Which is a lot of cases, especially when your load varies seriously over the day or week.

And even if you do manage dedicated servers, it's often wise to use VMs on them to better isolate parts of the system, aka limit the blast radius.


Which is a good recipe for paying much more while thinking you're smart and paying less, being tied to some third party's decisions for anything you run, having a giant attack surface, and so on...

There are countless lessons about how owning hardware is cheaper than not, countless examples of "cloud nightmares", countless examples of why a system needs to be simple and securely designed from the start, not "isolated". But people refuse to learn, especially since, as mere employees, living on the shoulders of someone else means less work to do, and managers typically do not know even the basics of IT well enough to understand.



