On CentOS and similar distros I agree. But on Debian-like distros podman has too many rough edges, and the packaging isn't something you can easily contribute to on GitHub. I find this sad, since podman is really good but there isn't much effort going into improving the UX on Debian distros.
How do you handle load balancing of inbound traffic? Do you use a pod running Traefik or similar? How do the pods communicate when they are deploying, unavailable, busy and so on?
I guess I could get this from your site but there's a LOT of information on the first pages there and possibly not what I am looking for.
We use haproxy pods for inbound traffic, they perform TLS termination and simply pass the traffic over to a local proxy pod.
The haproxy pod (and all other pods) communicate with each other over this local proxy service which is running on each host.
We have a very simple and robust overall architecture, where each pod allocates a specific virtual port and the proxy will try each host for that pod (and remember the status), meaning we don't need to keep track of global routing tables or update each host's iptables (shivering) when pods come and go. We don't use iptables.
If some instance of a pod is unavailable the proxy will seamlessly try another instance of the pod.
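The "try each host, remember status, fail over seamlessly" behaviour described above could be sketched roughly like this. To be clear, this is my own illustrative sketch, not Simplenetes code: the host list, port number, and the `probe` function are all made-up stand-ins (a real probe would be a TCP check such as `nc -z`), kept fake here so the sketch runs anywhere:

```shell
#!/bin/sh
# Hypothetical failover sketch: try each host for a pod's virtual port
# and route to the first one that answers. probe() is a stand-in for a
# real TCP health check; hosts and port are illustrative assumptions.

HOSTS="10.0.0.1 10.0.0.2 10.0.0.3"
PORT=61000   # the pod's allocated virtual port

probe() {
    # Fake status table: pretend only the second host has a live instance.
    [ "$1" = "10.0.0.2" ]
}

find_backend() {
    for host in $HOSTS; do
        if probe "$host"; then
            printf '%s\n' "$host"
            return 0
        fi
    done
    return 1
}

if backend=$(find_backend); then
    echo "routing to $backend:$PORT"
else
    echo "no instance of the pod is reachable" >&2
fi
```

A real implementation would also cache which hosts recently failed so it doesn't re-probe dead hosts on every request, which is roughly what "remember status" refers to above.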
Initially we tried to configure haproxy to do all this proxying for us, but that was asking too much of it.
Good to know is that Simplenetes is still in beta.
Indeed, the goal has been to keep the number of moving parts down as much as possible, so that it is easy to understand the full cluster and how to troubleshoot it. But of course, it still requires knowledge about the architecture to do so.
How was your experience writing that much Bash code?
I wonder what tools there are currently (I noticed it's in beta) to get an overview of the state of the cluster: maybe what is talking to what, how much bandwidth they use, etc. (I don't know what one would need to know.)
Thanks for asking :)
Writing this much Bash is quite straining because there is a lot of typing, but it is also liberating in the sense of coding very close to the OS (utils).
Also, most of it is not written in Bash; it's written to the POSIX standard, which is even more spartan but is then also compatible with Dash and Ash (BusyBox), which is good because Bash is not always available.
To make Simplenetes we used another tool we also created, which is meant for writing shell script apps and performing agent-less automation. It is called Space.sh [1]
About tools for getting an overview of the cluster: for now there is only the command-line tooling, which does part of the job, but tools for analyzing traffic and such have not been created yet.
Ah; since the title was "17k" lines, and the directory was named "includes", I presumed those were includes.
So that would seem to say the title is also counting that compiled version, when in fact it is much simpler. But then also, why is the compiled version longer than the sum of the files in includes?
The line count is a rough overestimate, summing the source of the three projects involved (simplenetes, simplenetesd and podc), and it also includes comments and blank lines.
The compiled output pulls in some dependency modules which is reusable code (STRING, etc). So it's not clear where to draw the line :)
The reason the compiled output gets bigger is that the compilation process (`make.sh`) also compiles in actions, which I use when connecting agentlessly over SSH to manage the nodes. This tough part is handled by Space.sh [1]
Coding in shell sure is a challenge. I try to avoid bashisms where I can, which is why most scripts are .sh, not .bash, since they also run with dash/ash (and why I prefer `[` over `[[`).
The only Bash requirement we have is `podc` (the pod compiler), since it parses YAML docs; I couldn't pull that one off without the more powerful Bash.
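To illustrate the portability point about avoiding bashisms: the snippet below sticks to POSIX constructs and so runs unchanged under dash, ash (BusyBox) and bash. The variable names are just for illustration:

```shell
#!/bin/sh
# POSIX-only constructs: run identically under dash, ash and bash.
name="world"

# POSIX test: single [ ... ], quoted expansion, = rather than ==.
# The bashism equivalent, [[ $name == world ]], fails under dash/ash.
if [ "$name" = "world" ]; then
    echo "hello $name"
fi

# POSIX parameter expansion instead of Bash-only features like
# arrays or ${name^^} (uppercasing):
prefix=${name%ld}      # strip shortest matching suffix -> "wor"
echo "prefix: $prefix"
```

Tools like ShellCheck can flag bashisms automatically when a script declares `#!/bin/sh`.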
The code base as a whole absolutely needs tests and ShellCheck all over the place to be labelled as mature in any sense.
Our other large shell project is Space.sh [1], where we go overboard on testing: each module is tested under a bunch of distributions. [2] [3]
Next step would be to do the same for Simplenetes.
The fact that in shell the only way to protect a caller's variable from functions further down the call stack is to redeclare/shadow it as "local" brings some murky waters.
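The murky waters above come from shell's dynamic scoping, which the following sketch demonstrates (my own example, not project code). Note that `local` is not strictly POSIX, but dash, ash (BusyBox) and bash all support it, which is why it can be relied on in practice:

```shell
#!/bin/sh
# Dynamic scoping: a callee writes straight into the caller's variable
# unless it shadows the name with `local`.

careless() {
    status="clobbered"           # no local: overwrites the caller's variable
}

careful() {
    local status="scratch"       # shadowed: writes stay inside this call
    status="still scratch"
}

status="original"
careless
echo "after careless: $status"   # the global was overwritten

status="original"
careful
echo "after careful: $status"    # the global survived intact
```

So the burden is inverted compared to lexically scoped languages: every function must remember to declare its own temporaries `local`, or it silently leaks writes into whoever called it.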
BUT, isn't it amazing what is actually possible to do with Shell??
For me Simplenetes in the long run is not about it being written in Shell, I'm perfectly happy rewriting it in Rust if we get traction on it.
Simplenetes is primarily about having a simpler architecture of what a container cluster is and is not. I think many projects just get too complex because they want to fill every single use case out there, while the interesting part is saying no to things.
Seems like my click-baity title brought out some strong feelings :)
I do actually like Kubernetes. I recommend it to clients, I assess candidates who are to work with it, but I do really think it is a Beast. Because it is.
Kubernetes is like C++, extremely useful but it just keeps growing and nobody really knows all of it.
While Simplenetes is like... Lua: batteries not included, your boss will not approve, it doesn't hit the buzzspot, but some people will secretly use it and be happy about it.
Thanks!!
We do use it in production; it is however fresh from the keyboard, so we're giving it some time before putting sensitive stuff in there.
About the state, it's perfectly doable, but requires some more Ops than k8s and isn't as flexible, naturally.
As a FB software engineer you can probably work at any company you want to, so one must assume you want to work at Meta. And why is this?
Serious question here, please respond.