Completely agree. The whole compile chain for most software, the reliance on linked libraries, implicit dependencies like locale settings changing behavior, and basically decades of weird accidents and hacks to get around memory and disk size limits can be a nightmare to deal with. Even with slow dynamic languages, or modern frontend bundlers, all the implicit C extension compilation and dependencies can still be a pain.
The list goes on and on; it’s bizarre to me to think of this as the true, good way of doing software and to think of docker as lazy. Docker certainly has its own problems too, but it does a decent job of encapsulating the decades of craziness we’ve come to heavily rely on. And it lets you test these things alongside your own software when updating versions and be sure you run the same thing in production.
If docker isn’t your preferred solution to these problems that’s fine, but I don’t get why it’s so popular on HN to pretend that docker is literally useless and nobody in their right mind would ever use it except to pad their resume with buzzwords.
When library versions start to cause problems, like V3.5 having a bug so you need to roll back to V3.4... that's when ./configure && make starts to fall apart.
Yeah, it happens with .so files, .dlls ("dll hell"), package managers and more. But that's where things like containers come in to help: "I tested Library Foo version V3.4 and that's what you get in the docker image." No surprises from Foo V3.5 or V3.6... you get exactly what the developer tested on their box.
Be it a .dll, a .so, a #include library, some version of Python (2.7 with import six), some crazy version of a Ruby Gem that just won't work on Debian for some reason (but works on Red Hat)... etc. etc.
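To make that concrete, the whole trick is pinning the exact version that was tested. A minimal sketch of such a Dockerfile (libfoo1, the 3.4-1 package revision and the base image are hypothetical names, just for illustration):

    # Hypothetical package name/version; the point is pinning exactly what was tested.
    FROM debian:12-slim
    RUN apt-get update \
        && apt-get install -y --no-install-recommends libfoo1=3.4-1 \
        && rm -rf /var/lib/apt/lists/*
    COPY myapp /usr/local/bin/myapp
    CMD ["myapp"]

You rebuild the image when you consciously decide to move to 3.5, not because whatever machine ran the build happened to have it installed.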
There are basically two options: maintain up-to-date dependencies carefully (engineer around DLL hell with lots of automated testing and stay well-versed in the changelogs of your dependencies), or compile a bunch of CVEs into production software.
There really isn't any middle ground (except to not use third-party libraries at all).
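If you take the first option, the usual discipline is to pin everything and let CI flag known vulns. A rough sketch, with tool names that are purely illustrative (any lockfile plus CVE scanner combination works the same way):

    # Illustrative only; use your ecosystem's equivalent lockfile and scanner.
    pip freeze > requirements.lock    # pin the exact versions you actually tested
    pip-audit -r requirements.lock    # fail the build if a pinned dep has a known CVE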
That assumes you are using free software without a support contract, where no vendor has an incentive to maintain long-term support for libraries by backporting only security patches and never adding features to old versions. I understand this goes against the culture of only using the latest version, or of "fixing" vulns by upgrading to a more recent release (which may have a different API, or untested changes to existing APIs).
That makes sense for a hobbyist community but not so much for production.
In a former job we needed to fork and maintain patches ourselves, keeping an eye on the CVE databases and mailing lists and applying only security patches as needed rather than upgrading versions. We managed to be proactive and avoid 90% of the patches by turning stuff off or ripping it out of the build entirely. For example, with OpenSSH we ripped out PAM, built it without LDAP support, no Kerberos support, etc., and kept patching it when vulns came out. You'd be amazed at how many vulns don't affect you if you turn off 90% of the functionality and only use what you need.
We needed to do this as we were selling embedded software that had stability requirements and was supported (by us).
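For a sense of what "ripping it out of the build" looks like, it's mostly compile-time switches. A rough sketch against OpenSSH's autoconf build, with flag names that are illustrative only (verify against ./configure --help for your version):

    # Illustrative flags; check ./configure --help before relying on these.
    # Fewer features compiled in means fewer vulns can apply to your build.
    ./configure --without-pam --without-kerberos5 --without-selinux
    make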
It drove people nuts as they would run a Nessus scan, do a version check, then look in a database and conclude our software was vulnerable. To shut up the scanners we changed the banners, but people would still do fingerprinting, at which point we started putting messages like X-custom-build into our banners and explaining to pentesters that they need to actually pentest to verify vulns rather than fingerprinting and doing vuln DB lookups.
Point being, at some point you need to maintain stuff and have stable APIs if you want long-lasting code that runs well and addresses known vulns. You don't do that by constantly changing your dependencies; you do it by removing complexity, assigning long-term owners, and spending money to maintain your dependencies.
So either you pay the library vendor to make LTS versions, or you pay in house staff to do that, or you push the risk onto the customer.
Aren't we conflating compile complexities with runtime complexities here? There are plenty of open-source applications that offer pre-compiled binaries.
That difference isn't as black and white as you're making it out to be; sometimes it's just a design decision whether certain work is done at compile time or at runtime. And both kinds of issues, runtime or compile-time, can be caused by the kinds of problems I'm talking about, like unspecified dependencies.
This is why I wish GitHub actually allowed automated compilation. That way we could all see exactly how binaries are compiled and wouldn't need to set up a build environment for each open source project we want to build ourselves.