Hacker News | Phlebsy's comments

The problem is that on an existing install of any other distro you cannot just willy-nilly try out entirely different desktop environments/window managers/audio frameworks and be certain everything will work exactly as it did once you remove them, especially as an only moderately knowledgeable user who won't know every single piece of config that needs to be changed back. Unless you're trying everything new out on a fresh install, there's a big risk.

NixOS gives you that just by opting in to using it. AI speeds up config changes and helps translate your existing knowledge to a tool you're trialing on other distros too, but it really shines with NixOS, where you don't even have to care what it messes up while you're trying something new. You just revert, and you know that nothing done to configure that new thing - which likely would have broken your existing configuration on other distros - has persisted.
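As a sketch of what that opt-in looks like (exact option paths shift between NixOS releases, so treat these as illustrative): trying a new desktop environment is a couple of declarative lines, and reverting is deleting them and rebuilding.

```nix
# /etc/nixos/configuration.nix -- trial a new desktop environment
services.xserver.enable = true;
services.xserver.desktopManager.gnome.enable = true;  # the thing on trial
# Revert: delete these lines and `nixos-rebuild switch`, or boot the
# previous generation from the bootloader menu -- nothing persists.
```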


Here is a simple workflow with mutable systems like Fedora that I think a lot of people are missing. AI could also be brought into this workflow for those who want that:

(1) Take a snapshot of your current system (snapper+btrfs on Linux, bectl on FreeBSD+ZFS)

(2) Make destructive changes like installing a new window manager, some drivers, etc.

(3) If everything worked out well, continue

(4) If something failed badly, restore the snapshot from (1) -- your system is as good as before
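With snapper on a btrfs root, the steps above might look like this (the config name "root", the package, and the snapshot number are illustrative):

```
# (1) snapshot before experimenting
$ sudo snapper -c root create --description "before sway install"

# (2) make destructive changes
$ sudo dnf install sway

# (4) if it went badly, find the snapshot and roll back to it
$ snapper -c root list
$ sudo snapper -c root rollback 42   # then reboot
```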

This workflow replicates many of the benefits of NixOS without the complex Nix scripting that is often needed.

Of course, a declarative and textual rendition of the configuration is better than bash commands entered on the command line but sometimes you don't need that level of precision.


It’s like saying you don’t need a version control system for coding, as you can just make a copy of your sources before making important changes.

A snapshot of your build folder, not even the sources. This is my other problem with mainstream distros: extending them is completely opaque. NixOS is source-based, and anything and everything can be updated by the user. Need some patch from the kernel ML? One line of code. Need a bug fix in your IDE that hasn't landed in a release? One line of code.
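For the kernel case, "one line" is barely an exaggeration; on NixOS it's roughly this (the option is a real NixOS module option; the patch file is a stand-in):

```nix
# configuration.nix: apply a patch picked up from the kernel mailing list
boot.kernelPatches = [ { name = "my-fix"; patch = ./my-fix.patch; } ];
```

The rebuild then compiles a patched kernel for you, with no separate packaging step.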

There is no distinction between package maintainers and end users. They have the same power.

Meanwhile, I don't expect Debian users to ever write a package themselves or to modify one.

In NixOS you do it all the time.


FWIW... I have modified packages on Fedora and installed them. The workflow is very simple... of course, not as simple as NixOS but here goes:

  # clone the package definition
  $ fedpkg clone -a <name of package>
  $ cd <name of package>

  # install build dependencies
  $ sudo dnf builddep ./nameofpackage.spec

  # now add your patches or modifications

  # build the package locally
  $ fedpkg local

  # install the locally modified package
  $ sudo dnf install ./the-locally-built-package.rpm


Arch Linux also has a long history of people writing their own package specs (AUR) and is relatively simple too of course.

Let me put it differently. The documentation of NixOS treats package maintainers and users as kind of equal.

This has benefits and downsides. The benefit is that everyone is treated as a power user. The downside is that power users are horrible at writing docs, and this philosophy is my main theory for why NixOS docs are so... bad.

Fedora (and RHEL) end-user and developer docs are written for quite different audiences.


Yes I just replied to your other comment with the same observation. It reminds me of an article by Paul Graham, I forget which, who expressed the difficulty of explaining to programmers who lack an abstraction just how good the abstraction is. Anything you can do with NixOS, you can do with any distribution, because it isn't magic. But somehow, more stuff becomes possible because it gives you a better way to think.

(As for why the docs are so bad, I think it's because of the lack of good canonical documentation. There are too many copies of it. Search engines ignore the canonical version because it's presented as one giant document. Parts of the system aren't documented at all and you have to work out what you've got by reading the code. The result is that you have no idea what to do if you want to improve the situation - it seems like your best option is to create new documentation. And now you have the same basic level of documentation that didn't help the first hundred times it was rewritten. And I don't really think submitting a PR to nixpkgs is exactly user-friendly, so it probably discourages people from doing the "I'm just trying to understand this, so I'll fix up the documentation as I learn something" thing.)


Bye bye getting automatic upgrades to that package.

Yes, I think you've hit the nail on the head. I tend to view NixOS not as a distribution, but as a distribution framework. The system configuration is the source for an immutable distribution as much as it is system configuration.

You're in no way bound by decisions of the nixpkgs contributors: as you say, we can add a patch. Or we can also decide we totally disapprove of the way they've configured such-and-such a service and write our own systemd service to run it.

Anyone can write a local debian package which adds a patch, and build and install it. And anyone can write a systemd service and use it instead of the distribution's systemd service. But on NixOS, these are equal to the rest of the system rather than outside it. Nixpkgs is just a library which your configuration uses to build a system.
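A sketch of the "write our own systemd service" case, using the standard NixOS module options (the service and its flags are hypothetical; `pkgs` is in scope in an ordinary configuration.nix):

```nix
# configuration.nix: our own unit, a peer of everything nixpkgs generates
systemd.services.my-such-and-such = {
  description = "such-and-such, configured our way";
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    ExecStart = "${pkgs.such-and-such}/bin/such-and-such --our-flags";
    Restart = "on-failure";
  };
};
```

This unit is built into the system closure exactly like the distribution's own units, so a rollback removes it along with everything else.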


I like your analogy and it does make sense.

But note that I did caveat my suggestion: "Of course, a declarative and textual rendition of the configuration is better than bash commands entered on the command line but sometimes you don't need that level of precision."


That’s a great way to get one of the benefits of Nix. But you still can’t check that snapshot into version control, share it with all your machines, etc.

You're right ... you can't check that snapshot into version control and share it with your machines, etc. When you need that level of control and need to scale your configuration to other machines, NixOS sounds like the right choice. If it's for your own machine and you just want to try out a new window manager non-destructively, use snapshots.

Fedora also offers immutable distros which are (I've heard) much more user-friendly than Nix. Sure you can make a hacky pseudo-immutable workflow on a mutable distro but that's literally more effort for a worse result.

Actually, desktop environments are entirely modular, and even audio stacks are just a few packages and a few services to enable.

Man, I still remember what a pain the migration from PulseAudio to Pipewire was. Sure, it's only a couple packages, disabling a few services, enabling a couple others. But I had to do this almost on the daily, while bugs in Pipewire/Wireplumber were still getting ironed out and were rendering my audio stack temporarily unusable.
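For contrast, on NixOS that same swap is a handful of declarative toggles (option names as of recent nixpkgs releases; the point being that reverting them is a single rebuild):

```nix
# PulseAudio out, PipeWire in
hardware.pulseaudio.enable = false;
services.pipewire = {
  enable = true;
  pulse.enable = true;  # PulseAudio compatibility layer
  alsa.enable = true;
};
```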

> What this meant was that instead of leaving nitpicky comments, people would just change things that were nitpicky but clear improvements. They'd only leave comments (which blocked release) for stuff that was interesting enough to discuss.

This is my dream; have only had a team with little enough ego to actually achieve it once for an unfortunately short period of time. If it's something that there's a 99% chance the other person is going to say 'oh yeah, duh' or 'sure, whatever' then it's just wasting both of your time to not just do it.

That said, I've had people get upset over my merging their changes for them after an LGTM approval, when I also find letting it sit to be a meaningless waste of time.


I would settle for accurate estimates being a requirement if sticking to the estimate and allocations is as well. Every project I've been a part of that has run over on timeline or budget had somebody needling away at resources or scope in some way. If you need accuracy to be viable, then the organization cannot undermine the things that make it possible to stay on track.


Also, if you need accuracy, stay away from questionable vendors of third-party products as much as possible, since they are chaos generators on any project they're involved in.

In my work we have our core banking system, designed in the '80s on top of an Oracle DB, so everything is just boxes around it, with corresponding flexibility towards modern development methodologies. The complexity of just making a trimmed copy of the production servers for, say, a user acceptance test phase is quite something - connecting and syncing to hundreds of internal systems.

Needless to say, estimates vs. reality have been swinging wildly in all directions since forever. The processes, red tape, regulations and politics are consistently extreme, so from a software dev perspective it's a very lengthy process, while the actual code changes take an absolutely tiny share of the whole project's time.


I get ~weekly crashes using an Nvidia card with arch/hyprland, but honestly it's less problematic for me to deal with than windows updates. I can format and rebuild my machine from scratch in less time than windows takes to download and perform an update.

Flawless experience on non-nvidia hardware though.


That's arch/hyprland though. You're even making it harder for yourself than it needs to be.


That's understating it. There's no amount of skill that will render that setup stable - it's baked into the way those projects are managed.


That's why I keep using Gentoo and X11 to handle my three GPU setup. An Intel iGPU, an AMD dGPU on the same package as the Intel CPU and a RTX 4060 Ti eGPU connected through Thunderbolt.


Only have issues with it on my machine with an Nvidia card. Understand that it can be unstable and accept that when it happens - but with AMD/integrated graphics I don't have the same problem.

Either way, only serves to further the point that Linux is in a pretty good place and the experience should only be better on more stable options.


I don't have that problem with Arch+COSMIC, which has the tiling you get with Hyprland but without the overly complex configuration. You can also switch to floating windows with one button if needed.


Sure, if the warning levels are poorly tuned I might configure my LSP to ignore everything and loosen the enforcement in the build steps until I'm ready to self-review. Something I can't stand with TypeScript, for example, is when the local development server has rules as strict as the production builds. There's no good reason to completely block doing anything useful whatsoever just because of an unused variable, unreachable code, or because a test that is never going to get committed dared to have an 'any' type.
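One way to get that split in TypeScript is a dev-only tsconfig that relaxes the noisy checks while CI keeps the strict base (the two-file pattern and file names are illustrative; the compiler options are real):

```jsonc
// tsconfig.dev.json -- pointed at by the dev server only
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "noUnusedLocals": false,
    "noUnusedParameters": false,
    "allowUnreachableCode": true
  }
}
```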


An example I like to use is groups that put their autoformatter into a pre-commit hook. Why should I be held to the formatting rules before I send my code to anyone?

I'm particular about formatting, and it doesn't always match group norms. So I'll reformat things to my preferred style while working locally, and then reformat before pushing. However, I may have several commits locally that then get curated out of existence prior to pushing.
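A sketch of that flow with plain git (the formatter and branch names are stand-ins):

```
# work in personal style, commit freely
$ git commit -am "wip: my-style formatting"

# before sharing: curate the local history, then apply group style once
$ git rebase -i origin/main
$ black . && git commit -am "Apply project formatting"
$ git push
```

The formatter only ever runs on what actually leaves the machine, which is the only code anyone else is held to.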


> just

Or you can just learn a handful of puzzle patterns in exchange for more job opportunities that would have the potential for higher overall pay. Seems like a fair trade to me.


This is a good way to frame it. I have no issue with people who choose to do this, but I choose not to.


It just feels obstinate to me. Most people will jump through all sorts of bureaucratic/performative hoops when they're in a job to keep it or angle for promotions/minor raises, but this one, which has a much higher average RoI, turns them off. If you put your foot down on that sort of thing too, then fair enough, I suppose.


I have been told that I’m obstinate before :)

To be fair though, I don’t really want a Big Tech job. Several of the FAANGs, especially Facebook, are morally objectionable to me and I would switch careers before working for them. Most others have shitty working conditions with in-office policies, open office layouts, etc, that are detrimental to me getting work done.

So it’s not just about the financial RoI for me.

And I think I’m at least consistent: I’ve never been one to jump through hoops for raises or promotions either.


I'm a fan of writing tests that can run either way. Write your tests first such that they can be run against the real dependencies. Snapshot the results to feed into integration-test mocks for those dependencies so that you can maintain the speed benefit of limited test scope. Re-run against the real dependencies at whatever interval you feel is right to ensure that your contracts remain satisfied, or just dedicate a test per external endpoint on top of this to validate that the response shape hasn't changed.

The fundamental point of tests should be to check that your assumptions about a system's behavior hold true over time. If your tests break, that is a good thing. Your tests breaking should mean that your users would have a degraded experience at best if you tried to deploy your changes. If your tests break for any other reason, then what the hell are they even doing?


Why do users insist on using tools that can get them fired? This is either a policy or culture problem, not an AI problem.

Hell, just make a macro to replace all the funny anachronisms and a prompt that can reference your own writing style to massage the output.


Coupled with weeks if not more of regularly scheduled sleep deprivation so you never actually recover from any of those hard days.


Same. I have a tmux-sessionizer-like script that, when I open a new tmux session for a project, will automatically build out all the standard window and pane setup for the type of project it is: start neovim or rider, the dev server/browser if it's hot-reloadable, a test daemon to rerun tests on uncommitted file changes, etc.

If it's something I'm going to need most of the time when I open that project type, then I automate it from bash scripts using simple identifiers like whether a go.mod, package.json, *.sln, etc. exist. If you want to get even fancier, you could make scripts specific to each repo with a fallback, or make it search the existing sessions and close out any competing ones that would use the same ports or images. It's one of those things that truly does save 30 seconds multiple times a day, with minimal setup time for new projects once you know how you always structure your dev environment.
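The project-type sniffing can be as small as a few marker-file checks; a minimal sketch (function name and type labels are my own, as is the window-layout mapping in the comments):

```shell
#!/usr/bin/env sh
# Guess a project's type from marker files so the sessionizer
# can pick the right window/pane layout for it.
detect_project_type() {
  dir="$1"
  if [ -f "$dir/go.mod" ]; then
    echo go        # e.g. windows: editor + `go test` watcher
  elif [ -f "$dir/package.json" ]; then
    echo node      # e.g. windows: editor + dev server + browser
  elif ls "$dir"/*.sln >/dev/null 2>&1; then
    echo dotnet    # e.g. windows: rider + test daemon
  else
    echo generic   # plain shell + editor
  fi
}
```

The sessionizer then switches on the result to decide which `tmux new-window`/`send-keys` commands to issue for that session.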


