I'm running NixOS on some of my hosts, but I still haven't fully committed to configuring everything with Nix, just the base system, and I prefer docker-compose for the actual services. I do something similar with Debian hosts using cloud-init (Nix is a lot better, though).
The reason is that I want to keep the services in a portable/distro-agnostic format and decoupled from the base system, so I'm not tied too much to a single distro and can manage them separately.
Ditto on having services expressed in more portable, cross-distro containers. With NixOS in particular, I've found the best of both worlds by using podman quadlets via this flake: https://github.com/SEIAROTg/quadlet-nix
If you're the one building the image, rebuild with newer versions of constituent software and re-create. If you're pulling the image from a public repository (or use a dynamic tag), bump the version number you're pulling and re-create. Several automations exist for both, if you're into automatic updates.
To me, that workflow is no more arduous than what one would do with apt/rpm - rebuild package & install, or just install.
How does one do it on nix? Bump version in a config and install? Seems similar
Now do that for 30 services plus system config such as firewall, routing (if you do that), DNS, and so on and so forth. Nix is a one-stop shop to have everything done right, declaratively, and with an easy lock file, unlike Docker.
Doing all that with containers is a spaghetti soup of custom scripts.
Perhaps. There are many people, even in the IT industry, who don't deal with containers at all; think of Windows apps, games, embedded stuff, etc. Containers are a niche in the grand scheme of things, not the vast majority like some people assume.
Really? I'm a biologist, just do some self-hosting as a hobby, and need a lot of FOSS software for work. I've experienced containers as nothing other than pervasive. I guess my surprise just stems from the fact that I, a non-CS person, even know about containers and see them as almost unavoidable. But what you say sounds logical.
I'm a career IT guy who supports businesses in my metro area. I've never used Docker nor run into it with any of my customers' vendors. My current clients are Windows shops across med, pharma, web retail, and brick-and-mortar retail. Virtualization here is Hyper-V.
And this isn't a non-FOSS world. BSD powers firewalls and NAS boxes. About a third of the VMs under my care are *nix.
And as curious as some might be about the lack of dockerism in my world, I'm equally confounded by the lack of compartmentalization in their browsing: using just one browser, and that one w/o containers. Why on Earth do folks at this technical level let their internet instances constantly sniff at each other?
Self-hosting and bioinformatics are both great use cases for containers, because you want "just let me run this software somebody else wrote" without caring what language it's in, hunting for RPMs, etc.
If you're e.g: a Java shop, your company already has a deployment strategy for everything you write, so there's not as much pressure to deploy arbitrary things into production.
Containers decouple programs from their state. The state/data live outside the container, so the container itself is disposable and can be discarded and rebuilt cheaply. Of course, there need to be some provisions for when the state (i.e., the schema) needs to be updated by the containerized software, but that is the same as for non-containerized services.
I'm a bit surprised this has to be explained in 2025, what field do you work in?
First I need to monitor all the dependencies inside my containers, which is half a Linux distribution in many cases.
Then I have to rebuild and mess with all potential issues if software builds ...
Yes, on the happy path it is just a "docker build" which updates stuff from a Linux distro repo and then builds only what is needed, but as soon as the happy path fails this can become really tedious really quickly, since everyone writes their Dockerfiles differently, handles build steps differently, uses different base Linux distributions, ...
I'm a bit surprised this has to be explained in 2025, what field do you work in?
It does feel like one of the side effects of containers is that now, instead of having to worry about dependencies on one host, you have to worry about dependencies for the host (because you can't just ignore security issues on the host) as well as in every container on said host.
So you go from having to worry about one image + N services to up-to-N images + N services.
Just that state _can_ be outside the container, and in most cases should. It doesn't have to be outside the container. A process running in a container can also write files inside the container, in a location not covered by any mount or volume. The downside (or upside) of this is that once you take the container down, that data is basically gone, which is why the state usually does live outside, like you're saying.
Your understanding of not-containers is incorrect.
In non-containerized applications, the data and state also live outside the application, stored in files, a database, a cache, S3, etc.
In fact, this is the only way containers can decouple programs from state: the application has to have already done so. But with containers you have the extra steps of setting up volumes, virtual networks, and port translation.
But I’m not surprised this has to be explained to some people in 2025, considering you probably think that a CPU is something transmitted by a series of tubes from AWS to Vercel that is made obsolete by NVidia NFTs.
I don't think that makes 100 / 100 the most likely result if you flip a coin 200 times. It's not about 100 / 100 vs. another single possible result. It's about 100 / 100 vs. NOT 100 / 100, which includes all other possible results other than 100 / 100.
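To put numbers on this, here's a quick Python sketch (the figures are my own arithmetic, not from the thread; `p_exact_heads` is just an illustrative helper):

```python
from math import comb

def p_exact_heads(n, k):
    """Probability of exactly k heads in n fair coin flips: C(n, k) / 2^n."""
    return comb(n, k) / 2 ** n

p_100 = p_exact_heads(200, 100)

# 100/100 is the single most likely outcome of 200 fair flips...
assert all(p_exact_heads(200, k) <= p_100 for k in range(201))

# ...yet landing exactly on 100/100 is still far less likely than not.
print(f"P(exactly 100 heads) ~= {p_100:.4f}")      # ~0.056
print(f"P(anything else)     ~= {1 - p_100:.4f}")  # ~0.944
```

So "most likely single outcome" and "likely outcome" are very different claims, which is the point being made above.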
In statistics, various examples (e.g., coin flips) often stand in for other activities which might prove expensive or infeasible to make repeated tries of.
For "coin flips", read: human lives, financial investments, scientific observations, historical observations (how many distinct historical analogues are available to you), dating (see, e.g., "the secretary problem" or similar optimal stopping / search bounding problems).
With sufficiently low-numbered trial phenomena, statistics gets weird. A classic example is the anthropic principle: how is it that the Universe is so perfectly suited for human beings, a life-form which can contemplate why the Universe is so perfectly suited for it? Well, if the Universe were not so suited ... we wouldn't be here to ponder the question. The US judge Richard Posner makes a similar observation in his book "Catastrophe: Risk and Response", tackling the common objection to doomsday predictions: that all have so far proved false. But of all the worlds in which a mass extinction event wiped out all life prior to the emergence of a technologically-advanced species, there would be no (indigenous) witnesses to the fact. We are only here to ponder the question because utter annihilation did not occur. As Posner writes:
> By definition, all but the last doomsday prediction is false. Yet it does not follow, as many seem to think, that all doomsday predictions must be false; what follows is only that all such predictions but one are false.
-Richard A. Posner, Catastrophe: Risk and Response, p. 13.
I'm not sure where you're going with this, but since they have actually researched how it grows, I think it's more likely your calculations/assumptions are incomplete.
For example:
> Energy needed to grow 1g of microbial biomaterial
based on what?
Edit: Maybe you meant that radiation alone wouldn't be enough for that growth, so there'd be other components that it's helping with.
Don't do this, and don't then share the resulting numbers as fact publicly without disclosing you just asked a chatbot to make up something reasonable sounding.
If the chatbot refers to a source, read the source yourself and confirm it didn't make it up. If the chatbot did not refer to a source, you cannot be sure it didn't make something up.
The property measured in the source you linked, "enthalpy of formation", is not the same as the energy required to grow 1g of biomatter. One clue of this is that the number in the paper is negative, which would be very strange in the context you requested (but not in the context of the paper). For the curious: "A negative enthalpy of formation indicates that a compound is more stable than its constituent elements, as the process of forming it from the elements releases energy"
You're feeding yourself (and others) potentially inaccurate information due to overconfidence in the abilities of LLMs.
> If i understand that correctly the "energy required to grow" would be bigger than the "enthalpy of formation"?
They are almost completely unrelated concepts. The enthalpy of formation from the paper is the energy that would be released if you assembled all the molecules in the biomatter from their constituent elements, e.g. the energy released if you took pure hydrogen and pure oxygen and combined them into 1 gram of water. But the fungus takes in water from the environment to grow; it does not make its own water from pure hydrogen, and it certainly does not generate free energy by growing larger. With some margin for error in my understanding, since I'm not a chemist (but neither are you, and neither is the chatbot).
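As a rough illustration of why the sign matters, here's the textbook number for water worked through in Python (my own arithmetic using a standard reference value, not a figure from the paper being discussed):

```python
# Standard enthalpy of formation of liquid water (textbook value):
# H2(g) + 1/2 O2(g) -> H2O(l), dHf ~ -285.8 kJ/mol
dHf_water_kj_per_mol = -285.8
molar_mass_water_g_per_mol = 18.015

# The NEGATIVE sign means forming water from its elements RELEASES energy.
# Per gram, that's roughly:
released_kj_per_g = -dHf_water_kj_per_mol / molar_mass_water_g_per_mol
print(f"~{released_kj_per_g:.1f} kJ released per gram of water formed")

# None of this says anything about the energy an organism must spend to
# assemble 1 g of biomass from the food and water it takes in from outside.
```

A negative enthalpy of formation measures stability relative to the elements, not a "cost to grow", which is why it can't be plugged in as "energy needed to grow 1 g of biomaterial".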
> It was really just food for thought.
It was more poison than food, since you just parroted randomly generated misinformation from the chatbot and passed it off as authentic insight.
Um, right, I didn't think of that: if you burn an organism you get down to its core components, but the organism was not originally made from those core components.
The core idea was not generated by a chatbot. Neither was the article I gave (that was my own googling).
The core idea (that there is a requirement for and an availability of energy, and that the two may differ) came from my brain. Not that I personally think the origin of an idea matters to its value.
General rule of thumb: If you're going to ask an LLM and then make a post based on that, simply don't post it. If we wanted a randomly generated take on this, we would just ask an LLM ourselves.
There wasn't much about the energy equation there. And since it's just a conversation with Gemini pasted here, I'm not sure how much to trust it; it feels lazy and disjointed.
Across ~10 jobs or so, mostly as an employee of 5-100 person companies, sometimes as a consultant, sometimes as a freelancer, but always with a comfy paycheck compared to any other career, and never as taxing (mentally or physically) as the physical labor I did before I was a programmer, and that some of my peers are still doing.
Of course, there are always exceptions, like programmers who need to hike to volcanoes to set up sensors and whatnot, but generally, programmers have one of the most comfortable jobs on the planet today. If you're a programmer, I think it should come relatively easy to acknowledge this.
Software engineering just comes really easily to my brain, somehow. Most of my days are spent designing, architecting, and managing various things; it takes time, but at the end of the day I don't feel like "Ugh, I just wanna sleep and die", probably ever. Maybe after we've spent 10+ hours trying to bring a platform back after production downtime, but on a regular day? My brain is as fine as ever when I come home.
Contrast that with working as an out-call nurse, which is not just physically taxing, since you need to actually use your body multiple times per day for various things, but people (especially when you visit them in their homes, it seems) can be really mean, weird, and just draining on you. Not to mention when people get seriously hurt and you need to be strong while they're screaming in pain; and finally, when people die, even strangers, it's really taxing no matter what methods you use to try to come back from that.
It's just really hard for me to complain about software development and how taxing it can be, when my life experience put me through so much before I even got to be a professional developer.
I've never done anything like road/construction work. But I've done restaurant work, being on my feet for 8+ hours per day... and mentally, it just doesn't compare to software development.
- After a long day of physical labor, I come home and don't want to move.
- After a long day of software development, I come home and don't want to think.
Comfortable and easy, but satisfying? I don't think so. I've had jobs that were objectively worse that I enjoyed more and that were better for my mental health.
Sure, it's mostly comfy and well-paid. But like with physical labor, there are jobs/projects that are easy and not as taxing, and jobs that are harder and more taxing (in this case mentally).
Yes, you'll end up in situations where peers/bosses/clients aren't the most pleasant, but compare that to any customer-facing job and you'll quickly be able to shrug those moments off: what you face only seldom, countless people face on a daily basis. Give it a try, work in a call center for a month, and you'll acquire more stress during that month than during even the worst-managed software project.
When I was younger, I worked doing sales and customer service at a mall. Mostly approaching people and trying to pitch a product. Didn't pay well, was very easy to get into and do, but I don't enjoy that kind of work (and many people don't enjoy programming and would actually hate it) and it was temporary anyway. I still feel like that was much easier, but more boring.
That sounds ideal! I used to be a field roboticist where we would program and deploy robots to Greenland and Antarctica. IMO the fieldwork helped balance the desk work pretty well and was incredibly enjoyable.
They do. And yes, choosing a good distribution will help.
But the fact that most servers run Linux isn't indicating it's the best choice.
Most desktops run Windows - and this doesn't mean it's the best desktop OS :-)
> But the fact that most servers run Linux isn't indicating it's the best choice
True, but server choice is typically made by professionals, while desktop choice typically isn't. So people measure those two by an (imo correct) double standard.
> Taiwan is different: the vast majority of people there are ethnically Chinese, so reunification is seen as an absolute necessity.
How does that make it a "necessity"? It's not for China to decide. This is the reasoning Russia uses when invading neighboring countries: to "protect" Russian people and claim that <insert part of country> is Russian anyway and wants to be annexed (which still wouldn't make it right). If someone wants to join Russia, they should move to Russia.
(Or maybe it could happen through some longer and slower political process. And the country as a whole should agree, with a lot more than 50% agreeing, to a unification.)
> The Chinese way of thinking is that only after a group has been fully Sinicized (language, culture, identity) can they be considered “one of us.”
Like above, I hope you're not implying that country #2 having a culturally similar people somehow gives country #1 power over country #2's sovereignty.
> How does that make it a "necessity"? It's not for China to decide? This is the reasoning Russia uses when invading neighboring countries. To "protect" russian people and claim that <insert part of country> are russians anyway and want to get annexed (still wouldn't make it right). If someone wants to join Russia, they should move to Russia.
The difference is that Taiwan only exists because the losers of the Chinese Civil War ran away to it, and the winners (the CCP) were not allowed by the US to finish the job. So for the CCP, Taiwan has always been a problem left to resolve, an American thorn in their side. It was among the main reasons they joined the Korean War: the monumentally dumb MacArthur publicly praised and supported Chiang (the leader of the losing side of the civil war, the KMT), which fed CCP fears that the US would use the Korean peninsula as a springboard to attack them and install Chiang back in power.
So while, in my personal view, self-determination trumps those concerns, I can totally see where China (the CCP) is coming from. Especially given a very aggressive American stance against them, why would they want to leave a runaway province very friendly to the US out there?
For Americans, imagine the Confederates had run away to Puerto Rico, forcibly assimilated the locals, and become very friendly with Russia. For the French, that a Bonaparte were ruling Corsica while being friendly with the big bad wolf (depending on the era, the Brits or maybe the Russians). And on and on.
Thanks for the context. I don't really know the Taiwan situation well.
My main gripe was mostly around the perceived reasoning that ethnicity or culture of some people would make it more okay to try to annex, or invade, anything.
> When it comes to Japan in particular, the deepest desire in many Chinese hearts is for Japan to start a war first—so China can finally settle the historical score once and for all. But even in that scenario, turning Japan into “part of China” is not on the table.
From GP. That is also a bit worrying to me. Who decides what the fair "historical score" is? But mostly, people shouldn't wish for war or use past wars as a justification for new ones. This is more complicated than ethnicity or culture, but it's dangerous, and people should just learn to let go or it never stops.
False flag attacks are a thing and have been used many times as a pretext for an attack. Russia has done it. Russia also often uses history as an excuse for new wars. I'm sure it's always possible to dig out some rationalization. The result is mostly more suffering of innocent (who might not have even been born during the cited conflict).