For many people, self-hosting implies a personal server; it's not really "development" but it's not "production" either. In that context, many find k8s and other PaaS-style tools too heavyweight, so Docker Compose is pretty popular. For more production-oriented self-hosting there are newer tools like Kamal, but it will take a while for them to catch up.
I've managed to keep my personal server to just docker, no compose. None of the services I run (Jellyfin, a Minecraft server, prowlarr, a Samba server [this is so much easier to configure for basic use cases via Docker than the usual way], pihole) need to talk to one another, so I launch each one separately with a shell script. Run the script for a new service once, and Docker takes care of restarts on system reboot or if the container crashes; I don't even have to interact with my base OS's init system or really care about anything on it. When I want to upgrade a service, I destroy the container, edit the script to specify the newer version, and run the script again. Easy.
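That workflow can be sketched as a tiny per-service script like the one below (a hypothetical example; the service name, ports, volume paths, and version tag are all placeholders, not the commenter's actual setup). The key piece is `--restart unless-stopped`, which makes the Docker daemon itself handle crashes and reboots without touching the host's init system:

```shell
#!/bin/sh
# jellyfin.sh - hypothetical one-service launcher (names/paths are examples).
# To upgrade: edit VERSION below, then re-run this script.
VERSION=10.9.7

# Remove the old container if it exists; ignore the error if it doesn't.
docker rm -f jellyfin 2>/dev/null

# --restart unless-stopped: Docker restarts the container after a crash or
# host reboot, unless it was explicitly stopped by the operator.
docker run -d \
  --name jellyfin \
  --restart unless-stopped \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  jellyfin/jellyfin:"$VERSION"
```

Running it once sets the service up; the same script doubles as the upgrade path because `docker rm -f` tears down the old container first.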
> For many people self-hosting implies a personal server; it's not really "development" but it's not "production" either.
There's Docker swarm mode for that. It supports clustering too.
It's nuts how people look at a developer tool designed to quickly launch a preconfigured set of containers and think it's reasonable to use it to launch production services.
It's even more baffling how anyone looks at a container orchestration tool and complains it doesn't backup the database they just rolled out.
> In that context many people find k8s or other PaaS to be too heavyweight so Docker Compose is pretty popular.
...and proceed to put pressure to shitify it by arguing it should do database backups, something that even Kubernetes steers clear of.
The blogger doesn't even seem to have done any research whatsoever on reverse proxies. If he had, at the very least he would have eventually stumbled upon Traefik, which in Docker solves absolutely everything he's complaining about. He would also have researched what it means to support TLS and why this is not a container orchestration responsibility.
Quite bluntly, this blog post reads as if it was written by someone who researched nothing on the topic and decided instead to jump to
I'm curious how that last sentence was going to end.
Let's say I agree with you and that TLS termination is not a container orchestration responsibility. Where does the responsibility of container orchestration end and TLS termination begin? Many applications need to create URLs that point to themselves, so they have to have a notion of the domain they are being served under. There has to be a mapping between whatever load balancer or reverse proxy you're using and the internal address of the application container. You'll likely need service discovery inside the orchestration system, so you could put TLS termination inside it as well and leverage the same mechanisms for routing traffic. It seems like any distinction you make is going to be arbitrary and basically boil down to "no true container orchestration system should care about..."
In the end we all build systems to make people's lives better. I happen to think that leaving backups and port management as an exercise for the deployment team raises the barrier for people who could be hosting their own services.
I could be totally wrong. This may be a terrible idea. But I think it'll be interesting to try.
> If he would have done so, in the very least he would have eventually stumbled upon Traefik which in Docker solves absolutely everything he's complaining about
I'm aware of Traefik, I ran it for a little while in a home lab Kubernetes cluster, and later on a stack of Odroids using k3s. This was years ago, so it may have changed a lot since then, but it seemed at the time that I needed an advanced degree in container orchestration studies to properly configure it. It felt like Kubernetes was designed to solve problems you only get above 100 nodes, then k3s tried to bang that into a shape small enough to fit in a home lab, but couldn't reduce the cognitive load on the operator because it was using the same conceptual primitives and APIs. Traefik, reasonably, can't hide that level of complexity, and so was extremely hard to configure.
I'm impressed at both what Kubernetes and k3s have done. I think no home lab should run it unless you have an express goal to learn how to run Kubernetes. If Traefik is as it was years ago, deeply tied to that level of complexity, then I think small deployments can do better. Maybe Caddy is a superior solution, but I haven't tried to deploy it myself.
If you want an HTTPS ingress controller that's simple, opinionated, but still flexible enough to handle most use cases, I've enjoyed this one:
https://github.com/SteveLTN/https-portal
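For context, a minimal https-portal setup looks roughly like the sketch below, based on that project's README (the domain, upstream service name, and port here are placeholders): you point `DOMAINS` at an internal service and it handles Let's Encrypt certificates and TLS termination for you.

```yaml
# Hypothetical docker-compose sketch for https-portal; adjust DOMAINS and
# the upstream service to match your actual deployment.
services:
  https-portal:
    image: steveltn/https-portal:1
    ports:
      - "80:80"
      - "443:443"
    environment:
      # "external domain -> internal upstream"; placeholders shown here.
      DOMAINS: 'example.com -> http://app:3000'
      STAGE: production   # request real (not staging) Let's Encrypt certs
    volumes:
      - https-portal-data:/var/lib/https-portal   # persist issued certs

  app:
    image: my-app:latest   # placeholder for the service being exposed

volumes:
  https-portal-data:
```

The appeal is that the whole TLS story is one environment variable, rather than a routing DSL.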
> Let's say I agree with you and that TLS termination is not a container orchestration responsibility.
It isn't. It's not a problem, either. That's my point: your comments were in the "not even wrong" field.
> (...) It seems like any distinction you make is going to be arbitrary and basically boil-down to "no true container orchestration system should care about..."
No. My point is that you should invest some time into learning the basics of deploying a service, review your requirements, and then take a moment to realize that they are all solved problems, especially in containerized applications.
> I'm aware of Traefik, I ran it for a little while in a home lab Kubernetes (...)
I recommend you read up on Traefik. None of the scenarios you mentioned are relevant to the discussion.
The whole point of bringing up Traefik is that its main selling point is support for route configuration through container labels. It's the flagship feature of Traefik. That's the main reason why people use it.
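To make the label-based configuration concrete, here's a minimal sketch using plain `docker run` (the hostname and container names are placeholders; `traefik/whoami` is a small demo image commonly used for this). Traefik watches the Docker socket and builds its routing table from labels on the containers themselves:

```shell
#!/bin/sh
# Hypothetical sketch: Traefik routing to a container purely via labels.
docker network create web 2>/dev/null

# Traefik itself: watch the Docker socket, listen on port 80.
docker run -d --name traefik --network web \
  -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  traefik:v3.0 \
  --providers.docker=true \
  --entrypoints.web.address=:80

# A backend service. The label alone tells Traefik how to route to it;
# no Traefik config file mentions this container anywhere.
docker run -d --name whoami --network web \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
  traefik/whoami
```

Adding a new service is just launching another labeled container; Traefik picks it up without a restart.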
Your non sequitur on Traefik and Kubernetes also suggests you're talking about things that haven't really clicked with you. Traefik can indeed be used as an ingress controller in Kubernetes, but once deployed you do not interact with it. You just define Kubernetes services, and that's it. You do interact directly with Traefik if you use it as an ingress controller in Docker swarm mode or even docker-compose, which makes your remark even more baffling.
> I'm impressed at both what Kubernetes and k3s have done. (...) If Traefik is as it was years ago,(...)
Kubernetes represents the interface, as well as the reference implementation. k3s is just another Kubernetes distribution. Traefik is a reverse proxy/load balancer used as an ingress controller in container orchestration systems such as Kubernetes or Docker swarm. The "level of complexity" is labeling a container.
Frankly, your comment sounds like you tried to play buzzword bingo without having a clue whether the buzzwords would fit together. If anything, you just validated my previous comment.
My advice: invest some time reading up on the topic to get through the basics before you feel you need to write a blog post about it.
On my personal home server I abuse Docker Compose so I don't have to type out a huge command line to spin containers up and down; I can just run "docker compose up -d" and be done with it.
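That trade is easy to see side by side: all the flags that would otherwise live in a long `docker run` invocation move into a compose file (a hypothetical example below; the service and paths are placeholders), and the command line shrinks to one short, repeatable incantation.

```yaml
# docker-compose.yml - hypothetical example; service name and mounts are
# placeholders. Everything that would be `docker run` flags lives here.
services:
  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - "53:53/udp"
      - "8080:80"
    volumes:
      - ./pihole/etc-pihole:/etc/pihole
```

With this in place, `docker compose up -d` creates or updates the container, and `docker compose down` tears it down, regardless of how many flags the service needs.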