I'm a full-stack developer with over 15 years of experience in web development, and I've been the team lead on numerous successful projects. I have a strong background in Python and JavaScript, and experience with a wide range of technologies.
I used to use Ubuntu for everything and was very happy in the massive Ubuntu ecosystem, with a .deb for everything, until I was forced to use Snap a few releases ago to install even some basic utils I used on a daily basis. Snap then constantly hung on installs, or just broke for no apparent reason and wasted my time debugging it. It didn't "just work" anymore.
Swapped to Arch and haven't looked back. Arch took a lot more work to get set up, but once it was, it's been pretty invisible, which is how I like my OS to be.
I'd probably happily swap back to Ubuntu if I read somewhere that Snap was removed entirely/canceled or something.
I used to defend snap with the idea that it makes sense for some apps, i.e. "firefox" where updating firefox while its running is bad news.
Except, snap can't update applications while they are running!
i.e. try to do a snap refresh while firefox is running, nothing to update, because its running, quit it wait for all processes to die and then refresh, and it will update and cause you to wait 30-60s while it does it, and then you can restart firefox.
This has to be one of the most idiotic design decisions ever made by a containerized application system, showing the designers don't understand containerization at all.
one of the primary points of containerization is the ability to have multiple copies of an application running in parallel using different "application images". One should be able to upgrade (i.e. install a new image in parallel to the old one) without disrupting existing execution environments.
Basically all firefox has to do is
1) if a container is running for the current user, execute firefox in its context
2) if no container is running for the current user, create a new container
3) every so often, garbage collect old images that don't have containers running for them.
These concepts are so simple, even Docker basically does it!
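The three steps above can be modeled in a few lines. This is a minimal, self-contained sketch of the bookkeeping only; all class and method names are hypothetical, and nothing here touches real snap or container machinery:

```python
class ImageStore:
    """Toy model of the launch policy: reuse running containers,
    start new containers from the newest image, GC unused images."""

    def __init__(self):
        self.images = {}      # version -> installed image (here, just the version)
        self.containers = {}  # user -> version their running container uses

    def install(self, version):
        # Installing a new image never disturbs running containers.
        self.images[version] = version

    def latest(self):
        return max(self.images)

    def launch(self, user):
        # 1) if a container is running for this user, execute in its context
        if user in self.containers:
            return self.containers[user]
        # 2) otherwise create a new container from the newest image
        self.containers[user] = self.latest()
        return self.containers[user]

    def quit(self, user):
        self.containers.pop(user, None)

    def gc(self):
        # 3) garbage-collect images with no running containers,
        #    always keeping the latest for future launches
        in_use = set(self.containers.values()) | {self.latest()}
        self.images = {v: v for v in self.images if v in in_use}
```

Running through the scenario from the thread: a user on the old image keeps it until they quit, a new launch gets the new image, and the old image is only collected once nobody runs it.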
> i.e. try to do a snap refresh while firefox is running, nothing to update, because its running, quit it wait for all processes to die and then refresh, and it will update and cause you to wait 30-60s while it does it, and then you can restart firefox.
Oh, it's worse than that. When there is an update available but firefox is running you get:
> % snap refresh
> All snaps up to date.
It doesn't even tell you there is an update and that you need to close firefox to get it. In fact, it tells you there isn't one.
Can Firefox itself, when installed via Snap, notify the user that it needs security updates? If so, can users effectively do that, or do they have to know this incantation of closing, waiting, running a terminal command, and restarting?
That’s a step backwards. The whole point of package managers is that the system manages software updates and the software itself can be completely oblivious to that.
Done properly, Snap would download and install the new version, point launchers to it, and prompt the user to restart running snaps.
When a statement has many examples it is correct to use "e.g.," which is an abbreviation of _exempli gratia_, which in turn is Latin for "for example." For example: There are many emotions a person may feel, e.g., happiness.
When a statement has only one logical conclusion it is correct to use "i.e.," which is the abbreviation of _id est_ which in turn is Latin for "that is." For example: It was the same colour as a clear summer's day sky, i.e., blue.
My mnemonic is: use e.g. when you could say “for eg-zample”, and use i.e. when you could say “that is” — i.e. to is. Admittedly, the second half isn’t as good as the first. : )
jeeze please stop! e.g. and i.e. both have come to interchangeably and loosely mean "for example" in the very broadest interpretation. Why? Because way back when people who wrote those things did agree on a specific and distinct meaning for each, a lot of other people who didn't share that understanding co-opted those abbreviations to mean "for example". That interpretation has now exploded in popularity. At this point that simple and shared definition for both is overwhelmingly used and understood by the vast majority of the population - which makes it "right". A historically held understanding of how a word, phrase, or abbreviation was commonly used does not mean that historical belief is right today. That's not how language works. It evolves and that is a good thing. Grammar prissiness is both misguided and futile because language will continue to beautifully evolve no matter how much you try to label that evolution as "incorrect".
> e.g. and i.e. both have come to interchangeably and loosely mean "for example"
It seems you live on a completely different planet to me... "i.e." simply does not mean "for example", and no one who has a good, academic level of English would ever think so.
I'm not saying anything about what I do or don't know about the meaning of i.e. vs e.g., I'm just saying stand in front of any mall in America and poll people as they go in:
(Q1) "Do you, even occasionally, ever use e.g. or i.e. when speaking or writing?" Almost all people will answer yes.
(Q2) "Can you correctly define these abbreviations and explain their correct usage?" 1 in 10 will answer correctly.
(Q3) You (the interviewer), use one or the other in a sentence, ask people to correctly paraphrase your meaning to see if, in spite of their lack of academic understanding, they are still perfectly capable of understanding you. 9 in 10 will answer correctly.
This, almost by definition, represents linguistic evolution. And that is ok. I would go so far as to argue that "academic level of English" should be rephrased as "academic style of English", and that that style has zero relationship with any notion of "correctness" at all. Telling the 9/10 from Q2 that they are "wrong" is, again, very misguided and pretty jerky honestly.
> Telling the 9/10 from Q2 that they are "wrong" is, again, very misguided and pretty jerky honestly.
I disagree. I do see your point about linguistic evolution. But I don't think it applies here.
In my native tongue, the word for "and" and the word for "to" (the infinitive marker) are very similar. As a result, tons of people mix the two up. But probably not even the most progressive and liberal linguist would agree that this represents linguistic evolution. It is pretty much universally understood to be symptomatic of a poor technical understanding of the language.
I think the same applies to "i.e." vs "e.g.". They are both used predominantly in academic style or level (whichever you prefer) of English. And in that context, their respective meanings are often quite important for understanding the precise details of a text.
I don't think that paraphrasing a text is a good test here btw - even with an A1 or A2 level of English you can get a pretty good rough understanding. Besides, paraphrasing often loses the precise meaning, which I would argue is to answer incorrectly. Logical hierarchies and implications really do matter when it comes to conveying information, and if not everyone understands the subtleties of the language, the go-to response should be "more education is needed" rather than "let's give up and have all words mean the same thing".
A more appropriate preamble to the gp comment might be "It might be of historical interest to you that e.g. originally stood for <blahblah> and meant <blahblahblah>, and i.e. <blahblahblah>". It would be even more interesting to include some information about the history of their usage and the influences leading to their modern (and perfectly acceptable) usage.
And, in case you think I'm being pedantic here, I'm not making a stink just because I think you are misguided, I'm making a stink because grammar prissiness is a favorite pastime of liberals (which I am BTW) but the real social impact of grammar prissiness is social classism (putting it nicely) and gross racism (putting it less nicely). You nitpick the grammar of your fellow tech bros on HN (which I also am) but when someone goes so far as to say "I ain got nuttin but deeze bags uh sasage", they get this condescending attitude on full blast like "OMG so uneducated and uncultured!!" while failing to recognize that that manner of speaking is a dialect and is most commonly associated with our poorest and most disenfranchised peers. You may not mean it this way but it is equivalent to saying "white rich conformist == right, poor and different == trash".
Firefox is a particularly interesting case because it has a bunch of issues that only exist when it's installed via snap. I just recently ripped out the default snap version and installed via the official PPA, and Firefox seems to be running much smoother now.
For basically any app I use that's installed via snap, I eventually run into a game-breaking issue and have to remove the snap version and find a normal .deb to install from. Whether it's customization issues (setting up custom fonts in VS Code didn't work) or stability issues (the tab bar in Firefox windows freezing up), there always seems to be some kind of problem.
Also, there is the funny thing that when you update to 22.04 and Firefox is replaced by the snap version, you lose access to your previous Firefox account on the computer! The account isn't deleted... If you remove the snap version and install the normal version from a PPA, you get your account back with all its settings.
They seem to have fixed that, at least. I had the same problem with a computer I updated immediately, but not with one that I waited a bit on. Incidentally, you can fix this by copying your .mozilla directory into the correct place in the snap directory in your home folder.
Snap makes sense, but it'll take time for the dust to settle. Just like Unity / Wayland.
Applications like Firefox, IntelliJ, Chromium, etc. shouldn't be tied to system dependencies managed by apt. They should be entirely monolithic and hermetic.
Long term this deb vs snap approach will make sense and be good for Desktop Linux.
Upstart predates systemd by four years. It solved real problems at the time and they kept it going until Debian transitioned to systemd. Unlike systemd, upstart supported sysvinit scripts, so it made sense to transition when Debian did. Bringing that up is maybe not the best example you could choose to bash them with.
I am not saying anything about the decision to _start_ using upstart... All I am saying is that in the past, Ubuntu made the right decision multiple times and switched from Canonical-only software to a more common alternative once that alternative became widely used.
And I think that it's a grand time to slowly wind down snaps and switch to Flatpak. The Linux ecosystem does not need more fragmentation, and given the proprietary nature of the snap store, there is approximately 0% chance that anyone except Ubuntu derivatives would adopt it.
Maybe snap was better than Flatpak back in the beginning; I can easily believe that when snaps were introduced, they were better than flatpaks. I am not going to judge. All I want is to see the right decision made going forward.
Fwiw, systemd actually supports sysvinit scripts pretty well. You can mix and match, and write unit files that depend on init scripts or vice versa. And you can manage them all with systemctl.
No. When we have flatpak and appimage, snap makes zero sense. I just see it as a way to implement a centralized walled garden into the OS, and don't install Ubuntu systems even in VMs for five minute test drives.
The way Canonical rolled it out is flat out insulting to the free software community and distro culture, IMHO.
It's too late here to write about its technical problems, so I'll leave it at that.
I don’t disagree, but that won’t change the fact that snap completely broke my Firefox workflow, and doesn’t gracefully allow me to restart Firefox (instead, tabs rather inconsistently crash unexpectedly).
Snap is immature, and not ready for production deployment. The way Ubuntu/parent company have pushed snap has been damaging to its image.
> it'll take time for the dust to settle. Just like Unity / Wayland
There's a particular irony to this statement. Before adopting Wayland, Canonical tried to do their own thing with Mir[1]. Years later they gave up and reluctantly adopted Wayland. Their investment in Snap rhymes with this. Snap is an inferior competitor to Flatpak, and in the end it's likely the dust will settle for Snap the same way it did for Mir—in the dirt.
Canonical as an organization has some serious NIH syndrome, and they are putting "Linux on the Desktop" through a lot of unnecessary pain.
I don't think Snap is an inferior solution, as much as Canonical is trying to put Snap where it doesn't belong.
AFAICT, Snap is a pretty good solution for server-side apps. This is pure speculation (and I don't even know if the timelines line up), but I suspect Canonical developed Snap for server apps, but something made it economically non-viable (maybe the popularity of Docker?), so they needed to generate money through it some other way, and they decided to do that by using it as a workstation app distribution mechanism, and by locking it down for enterprise.
"Server apps" can just be systemd services with isolation configured to your liking. Even if it's necessary that the service include some arbitrary subset of userspace, that can still be done with systemd portable services.
throwup is referring to Mir the display server that implemented Mir the display protocol, because Canonical didn't understand Wayland, didn't talk to anyone that did, and so decided they needed their own protocol.
Actually, this brings up a good question: why can't dpkg/.deb just support having multiple library versions? My understanding is that it is not supported, but if it were, couldn't that obviate the need for snap?
Many people are unaware that "so naming" [0] solves the problem of having multiple versions of the same library.
My Debian installation regularly has two or three versions of the same library (e.g. libx264) installed while packages update and start to use newer versions of the same library.
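To illustrate the idea, here is a small simulation; the file contents below stand in for real shared objects, and the version numbers are just examples. Two soname-versioned files sit side by side, and old and new consumers each resolve the exact name they were linked against:

```python
import os
import tempfile

libdir = tempfile.mkdtemp()

# Two ABI versions of the "library" installed side by side.
for soname in ("libx264.so.160", "libx264.so.163"):
    with open(os.path.join(libdir, soname), "w") as f:
        f.write(soname)  # stand-in content for the actual shared object

def resolve(soname):
    # What the dynamic linker effectively does: look up the exact
    # soname a binary recorded at link time.
    with open(os.path.join(libdir, soname)) as f:
        return f.read()

# A package built against the old ABI keeps resolving the old file,
# while newly built packages resolve the new one.
old_consumer = resolve("libx264.so.160")
new_consumer = resolve("libx264.so.163")
```

Because each consumer asks for a specific versioned name, neither install disturbs the other, which is exactly why a distro can carry two or three versions of the same library during a transition.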
I don't think user facing programs needs to be hermetic. If you want to limit behavior of a software there's always AppArmor and SELinux.
Trying to make a set of applications that can't interact, on a (desktop) environment built on the promise of cooperation and inter-application communication, is backwards from my perspective.
I understand that the browser is a significant vector for attacks, but there are other and more elegant ways to counter these attacks, and these can be layered from application itself to kernel and to hardware. This layered approach is more integrated, universal and applicable to a broader surface area in the OS and application stack.
<rant> Romanticizing isolation and immutability, and trying to apply them everywhere in the software stack, is a big step backwards in usability and productivity. These technologies are useful in some (mostly server/service) scenarios. Trying to apply these principles everywhere is akin to only having a hammer and seeing everything as a nail. Just because they're easy doesn't make them the correct and best solution for anything and everything. Maybe we shouldn't be that lazy, and should instead try to create more useful and transparent user sandboxes built on cgroups, SELinux and AppArmor that work seamlessly with package managers and traditional distro layouts. </rant>
I personally hope AppImage wins over snap, it seems to be the best of its class, and isn't so centralized. It would still need to be paired with a package manager for updates, of course
I've been using Wayland happily for the last few years. I'm missing colour management but I know it's (slowly) being worked on and I can wait, as everything else I need works better than it did for me under X11. Can't say the same for Snap. Wayland has become progressively better over the years. Snap just seems to get progressively more intrusive as they force it more and more. Longstanding complaints go unaddressed. Flatpak has been more or less pleasant. Feels very similar to the Mir/Wayland days to me.
I've been on Ubuntu 22.04 for a few weeks now, and every now and then Firefox will create a popup notification that says "Firefox needs to be closed to update (12 days left)". When I close it and re-open, Firefox is still not updated. Even when I go to the Software Center and update all apps, it apparently doesn't update snaps.
I'm not entirely sure if Firefox is updateable via the UI. It's truly awful; and that's coming from an Ubuntu lover.
>Yup, you need to shutdown firefox, open a terminal, run ‘sudo snap refresh’, wait for it to complete, then re-open firefox.
No, you don't need to do this at all. You need to shut down Firefox, open a terminal, run 'sudo snap remove firefox', then 'sudo apt install firefox', wait for it to complete, then re-open Firefox.
Oh god, I have yet to figure out what it even wants. I assume it is asking me to close firefox to run an update, but I do and nothing is run. And apt upgrade doesn't somehow catch it?
It's amazing that they're managing to make it so confusing that someone who wrote .xsession files can no longer understand it without doing a bunch of research. This shouldn't be tricky... that message, in particular, is absurd. Why would you show that and then not have a button that says "update"?
> I used to defend snap with the idea that it makes sense for some apps, i.e. "firefox" where updating firefox while its running is bad news.
> Except, snap can't update applications while they are running!
What's worse is the unblockable notifications.
And I just de-snapped my xubuntu 22.04, and doing so upgraded firefox-snap v113 to their repo v116! I mean, I thought the purpose was to be fully updated. Evidently not.
Flatpak updates work pretty much the same way. The new version is installed into a parallel directory and takes effect the next time the application is restarted.
So I'm not nuts. My work laptop recently changed from Fedora using flatpak to Ubuntu using snap and with flatpak, I'd get a couple notifications "Hey, XYZ was updated" and on the next reboot, it was new. With snap, I have to shutdown the application, update it in a console and relaunch it. Meh.
If it was that simple, why wouldn't the deb package work like this? There is no technical reason a new version of a software can't execute in parallel to the previous.
Firefox can't do that. The simplest reason being that it is stateful and keeps records of history, bookmarks and whatnot which not only can't be accessed concurrently, but the specific storage model chosen can only be forward migrated. Accessing this data from two different versions of Firefox would inevitably lead to corrupt data, even when concurrency is not an issue.
That's just the obvious reason with profiles. But there are also various other issues where Firefox interacts with the plugin system and other external data. All of these could probably be fixed, but it would require a radical redesign of the software which right now no one has stepped up to do.
The least problematic method of using Firefox on Linux for most users is probably the official tarball. It's the trivial method of unpacking a software and running it from the destination folder. It has implications for freeness and DRM, and the obvious problems bundled dependencies bring, but Firefox is unusually well maintained software with the manpower to make it work.
And containerization or productification on top of Firefox will only make it more complicated and worse supported than upstream is.
Simple binaries, yes, but firefox installs a whole lot of dynamically read stuff that is specific to the version installed (this was more of an issue in the xul days, but I think it still exists).
i.e. you install a new firefox, the data on disk (i.e. file xyz) is no longer what the running firefox version expects, and bad things happen. It could cache everything in memory (I guess) and never have to read anything from disk after it starts up (relative to its install; cache/cookies in the local profile shouldn't matter in regards to disk).
So when you update the deb, you are replacing file 'xyz', and the old xyz no longer exists. This is as opposed to unlinking and replacing "/usr/bin/firefox": even though unlinked, the old binary will still exist (for paging purposes) until every execution of it terminates, and only then will it actually be removed from the file system.
This is where containerization can help on a "multi user" system: if everyone ran firefox in a container, every time they started it fresh they would get the most up-to-date copy installed. (In a non-"multi user" setting it's less important, but it can still have some value. Quotes because I mean multiple simultaneous graphical desktop users, which, if we're honest, most desktop linux users aren't doing.)
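The unlink behaviour described here is easy to demonstrate. In this sketch, an open file handle stands in for a running process holding the old binary, and the file path is made up for the demo:

```python
import os
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "firefox-bin")  # hypothetical path, for illustration
with open(path, "w") as f:
    f.write("old version")

fh = open(path)             # a "running process" holds the old file open
os.unlink(path)             # the upgrade removes the directory entry...
with open(path, "w") as f:  # ...and installs the new version in its place
    f.write("new version")

old = fh.read()             # the unlinked file is still fully readable here
fh.close()                  # ...and only truly disappears once this closes
new = open(path).read()     # fresh opens see the new version
```

The open handle keeps reading the old content even though the name now points at the new file, which is exactly why a running binary survives a deb upgrade while its version-specific data files, replaced by name, do not.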
> one of the primary points of containerization is the ability to have multiple copies of an application running in parallel using different "application images". One should be able to upgrade (i.e. install a new image in parallel to the old one) without disrupting existing execution environments.
Would you like to run a completely new Firefox with a completely new profile whenever you update it? You do know that you can't use the same profile even with the same version of Firefox in two separate Firefox instances concurrently, right? If the updater was aware of snap and snap aware of the application it's updating, you could have a graceful update procedure that serializes the current state and reloads the new container, but that's a lot of hoops to jump through, and the jumping must be coordinated between multiple stakeholders.
No, but the issue here seems to be a problem with the underlying snap system, not with the concept. It should be possible to have multiple containers that, when run, access the same persistent data. So the workflow would be:
1. Install and start Firefox. There is one container and one directory full of profiles.
2. Start the upgrade. This creates a second Firefox installation (container) sharing the same profile directory.
3. Try to run Firefox without quitting the old one. Does nothing because the runtime is smart enough to know that the old one is running. Maybe pop up a message suggesting restarting Firefox.
4. Quit Firefox. Start it again. The new one runs. All the data is still there.
5. The old installation goes away.
The runtime has plenty of flexibility in how to make this work. There could be multiple containers. There could be one container with multiple parallel versions. There could be one container with a staged upgrade that can swap in essentially instantaneously once the container is idle (although this may fall apart on multi-user systems or where the containerized program regularly has multiple running copies of itself at once, e.g. a program like bash).
But the fact that snap apparently has trouble with this seems a bit embarrassing.
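The five steps above can be modeled as a tiny state machine; the class and attribute names here are hypothetical, and this models only the policy, not real snap internals:

```python
class StagedApp:
    """Toy model of a staged upgrade: two installs can coexist, they
    share one profile directory, and the new install only takes over
    once the old one has fully quit."""

    def __init__(self, version):
        self.current = version  # the install users actually run
        self.staged = None      # a parallel install, waiting its turn
        self.running = 0        # live processes of the current install
        self.profile = {}       # shared persistent data (bookmarks, etc.)

    def stage(self, version):
        # Step 2: install the upgrade alongside; nothing is disrupted.
        self.staged = version

    def start(self):
        if self.running == 0 and self.staged is not None:
            # Steps 4-5: nothing is running, so swap in the new
            # install and let the old one go away.
            self.current, self.staged = self.staged, None
        self.running += 1
        return self.current     # step 3: while running, keep the old one

    def quit(self):
        self.running -= 1
```

A quick walk-through: start v1, stage v2 while v1 is running, and every launch keeps returning v1 until the last process quits; the next start returns v2, with the shared profile intact.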
And snapd has all of the information it needs to do all of this. "Is Firefox running?" is a tricky question for a traditional package install, because there are a number of ways to launch a program and "ps | grep firefox" isn't quite right. But that's not the case for snapd! It is the executor, and all launches of Firefox have to pass through it. If you launch Firefox while it's running and there's an update queued, it should be able to pass that on to the old version, while still being able to handle "this program is exiting and has an update queued, so let's update it now".
Every critical piece of information is available to Snapd, and yet, they have chosen the wrong and broken implementation.
> if a container is running for the current user, execute firefox in its context
So there is no need for snap to do fancy serialization; just have the new and old firefox ready, and keep using the old one while it has running processes. And if the old one exits for any reason, start the new one instead.
That's all snap has to do! But it does not, because it really likes to make strange technical decisions for some reason.
(For "coordination", snap could create a common file in the old app dir, like "bin/new_version_available", and if firefox detects it, it shows that "your browser needs to be updated" button. I would not be surprised if something like this already exists in firefox. But I doubt snap would use this approach because it is not complex enough for it :) )
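The marker-file coordination suggested here is trivial to sketch. The marker name is the commenter's hypothetical, not a real snap or Firefox mechanism:

```python
import os
import tempfile

# Hypothetical layout: the "old install" directory the app runs from.
app_dir = os.path.join(tempfile.mkdtemp(), "bin")
os.makedirs(app_dir)
marker = os.path.join(app_dir, "new_version_available")

def update_pending():
    # What the running app would poll every so often to decide
    # whether to show its "your browser needs to be updated" prompt.
    return os.path.exists(marker)

before = update_pending()   # nothing staged yet: no prompt
open(marker, "w").close()   # the updater stages a new version
after = update_pending()    # now the app knows to prompt for a restart
```

The appeal of this scheme is that the updater and the app never need a shared protocol beyond one agreed-upon file name.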
They're not asking for multiple uses of the same profile. Assuming one profile, then the old image would keep being used until firefox shuts down entirely, and once that happens the new image would be used the next time firefox is opened.
in the specific case of firefox, two different instances of firefox (no matter their version) cannot both use the same firefox profile at the same time (i tried).
You either have to ask the user to restart firefox (that would be the sane choice) or corrupt user data.
But i agree that snap sucks. Snap sucks particularly bad because they also push it where it's not only unnecessary but also detrimental to the user experience.
Simple dumb example: I had gnome-calculator in my toolbar. I sometimes need to make basic calculations; it's easy and I keep it handy. I realized that gnome-calculator (a 500k binary) was taking 30 fcking seconds to appear. Turns out it was "updated" to a snap, and now launching it meant stuff being mounted all around and snap doing whatever it wanted with my time.
I thought it was really funny when I tried to update some stuff with the snap gui, and it threw the error - "Can't update snap-store, the application is running" or something like that
My first encounter with snaps was launching gcalc, the basic built-in calculator app.
At some point after an update, launching that app would take multiple seconds. How could the most basic gtk app be so slow to launch?
Investigating further, I found the update had transformed the normal gcalc package into a snap, probably just to prove how great snaps could be for desktop apps?
What a bad way to introduce your latest tech, Canonical. From then on, my opinion of snaps was very negative, and I still do everything possible to avoid them. Even if they are technically great or more secure, Canonical messed up their introduction and is now stuck pushing a dead horse.
My last Kubuntu update (to 22.04) broke because, I think, I'd disabled snaps and the update required snap installs (including Firefox). The update had an "install snap version of Firefox" stage with the only option being OK/Yes... so the install broke. They were so up themselves that they removed the option to cancel the snap install, despite it not being for essential elements of the OS; I had to kill the installer.
My first brush with fat packages was Digikam ceasing to produce .deb packages; the replacement added so much overhead I couldn't run it on my (admittedly weak) system. The forced use of the Firefox snap lost configuration, then failed to work properly (permission problems)... It's like how the use of file managers via sudo was stopped, seemingly without a proper plan to replace it; make the new tech first, then deploy it when it works! It's like going back to Windows, with fragmented app sources and forced updates that wreck everything.
I'm not sure where to move to next, I came to Ubuntu years ago because I no longer wanted to do the config work necessary to run Slackware. Looks like Mint/Arch/Debian are contenders.
Firefox is the only thing I had problems with, and this was quickly fixed by adding the Mozilla PPA. I haven't run into any snap problems (I apt-get purge'd snapd).
Tested Kubuntu after frustration with GNOME 3+ and over a decade on Ubuntu, but was very disappointed. Someone more knowledgeable than me strongly recommended KDE Neon (the Ubuntu LTS-based distro by KDE themselves) over Kubuntu. Me, I figured I don't need K apps and a full-fledged desktop environment; I ended up going back to macOS for now.
That was also my first introduction to snap. Spent a while trying to figure out why the calc app was so slow all of a sudden. Switched to popos eventually.
I also use Ubuntu a lot, and while I'm disappointed about this Snap thing, I'm still not looking for alternatives.
At least until 22.04 I could dodge snaps pretty easily, but now that Firefox is a snap package, things got worse, at least for me. I gave it a try and, while the experience was not as bad as I feared, I soon encountered problems with things like trying to use a smartcard reader for signing documents, something that used to work out of the box with the non-sandboxed version of Firefox and is now broken with snaps (or at least it was some weeks ago).
I ended up downloading a Firefox tarball from the official site. I know you can use a PPA instead, but I don't feel comfortable getting software as critical as a web browser from a third-party Ubuntu PPA.
Firefox ESR is available in a PPA that is maintained by Mozilla. Even though that's a third party, that's as close to an official package as it gets.
I'm just hoping that it will remain available until this snap thing subsides, or until I make the switch to GNU Guix or something.
Edit: It seems that the latest stable Firefox is available, too.[1] I thought it was only ESR and that Ubuntu's switch to snap was requested by Mozilla. I'm confused.
It seems like they've been including stable, non-ESR builds for at least the last few releases (I only went back a few pages of previous builds). Launchpad is confusing for me, but my impression is that they're using that PPA to build the stable release using the Launchpad service and then packing that build as a snap. It seems like they're just uploading the snap package rather than building it using Launchpad.
If that's the case, maybe it's a limitation of Launchpad?
It's the opposite of the way things should be. Package managers should have the most utility for applications that update frequently, because isn't the raison d'être of a package manager to make updating programs easier?
Some distros manage to get it right. I haven't had any trouble using OpenSUSE Tumbleweed's packaged Firefox.
I run Debian stable on my workstation, and manage most of my packages with `apt`. As such, everything "just works," and flawlessly.
For the handful of packages that I want to upgrade aggressively, I manage them with Nix[1].
I've been doing this for maybe two or three years, and despite a little extra complexity, it feels like a best-of-both worlds scenario. I get the rock-solid stability of Debian stable, but the one or two packages that need to stay cutting-edge are able to do so.
I also don't have trouble with package conflicts, because the entire point of Nix is to prevent those kinds of problems.
The package repository should be the off-the-shelf items in the grocery store, while installing separately is the big-ticket and custom made items. Installing apps separately doesn't scale very well, but it makes sense to just do it for firefox and certain others. IMO.
Firefox is "big ticket", but not "custom made"; one firefox package or installer should suit everybody fine. I think package managers can do very well with Firefox if the package is managed well.
Exactly. I use Linux mint (without snaps) and Firefox works great. It updates pretty frequently through mint’s software update tool, which is just running apt and installing package updates.
All this talk of snaps and using PPAs sounds like an unnecessary headache. What’s the point of a package manager if it doesn’t keep your packages up to date?
You're right, it's not the fact that it updates frequently. It's when the distro package is an inferior option, either because of its version or its build options.
How do you handle updates? Do you download the new version from the official site from time to time and replace the old one? I know Firefox can "auto-update" itself but I guess it won't (or shouldn't) be allowed to modify things under "/opt" when running as a user process.
I wish Pop OS would tackle KDE. It's a much better DE (particularly if you like to customize), but there are very few (if any) distributions which have an opinionated tuned experience out of the box.
KDE would be superb if it weren't so buggy. KDE is aimed in the right direction, trying to do and be all the things a desktop environment should do and be. But unfortunately the implementation is frequently deficient. Even seemingly basic things frequently break, like re-applying mouse preferences when you unplug and replug your mouse: https://bugs.kde.org/show_bug.cgi?id=435113
A couple more recent annoyances, of many: Plasma looks great but frequently my panels go missing and don't reappear until I restart plasma shell. Dolphin has all the functionality I want and looks great, but frequently fails to watch directories for new files, forcing me to F5. Lots of little things like this turn into death by a thousand papercuts.
That might be the case, but is there an alternative with a similar feature set?
The latest GNOME versions are even buggier, and when it does work, it feels like a toy project compared to KDE. It's baffling to me that it's been years since the v3 controversy, yet the project still doesn't feel as mature and polished as KDE Plasma.
All v2 forks also feel like someone's hobby project to bring disparate tools together, rather than a cohesive desktop. I'm sure some MATE and Cinnamon fans would disagree, but IMO the modern desktop experience has moved on since 2010.
Similarly with LXDE, Fluxbox and a myriad of others. These are great for certain environments, but none of them can compare to the polish that KDE has, let alone something like macOS.
To be fair, I'm not bashing on minimalistic or window manager-only setups. I've been using bspwm on Void Linux for many years now as my main working environment, and wouldn't change it for the world. But when I want to stop being productive, consume content, and not have to troubleshoot issues on my machine, I prefer to use a more featureful environment with a friendlier UI. Since macOS is out of the question, and Windows is purely for gaming (and even that is going away, thanks to Valve), KDE is pretty much my only choice on Linux. Which I've been enjoying for a few months now on NixOS.
But I do wonder if there is really a DE that can compare to it. AFAIK more ambitious projects like elementary OS, Deepin and Solus are still experimental, to the point where I don't even keep track of their updates.
> The latest GNOME versions are even buggier, and when it does work, it feels like a toy project compared to KDE.
This is an interesting viewpoint. My ideal DE is one that is "invisible", so to speak, and I've found that GNOME comes closest to it. I want to launch whatever program I'm using and focus on its window. KDE has too much chrome, too many elements, so many options. It feels too "busy" to me. Ditto its apps, e.g. comparing Okular to Evince. Too many buttons, too many UI elements (I know I can remove them, but I don't want to have to, and all the functionality they expose I can do in Evince using keyboard shortcuts), and all I'm interested in is the document I'm reading.
To each their own, I guess, I just wanted to put forth the alternative viewpoint.
That's fair, I can see how KDE is busier. But if I wanted to use a DE that's "invisible", I wouldn't pick GNOME either. All the animations, for one, would be too distracting, though I suppose you can turn these off.
Like I said, I use a WM-only setup for productive work to avoid any distractions, but when I want to consume content, connect peripherals and prioritize ease of use over efficiency, then I prefer to use a more polished DE. I enjoy KDE precisely because it gives me all the options to customize it however I like, so I prefer having access to all the knobs to do so, which GNOME purposefully hides. Other features like desktop widgets are a nice bonus.
KDE is not perfect; it sometimes freezes on me, which might be specific to my hardware or due to an unrelated driver or Xorg issue. This is why I wanted to know if there are alternatives with the same feature set. GNOME, unfortunately, isn't it.
I'm afraid I don't have a great answer. Right now I'm using LXQt with Kwin as my window manager; LXQt replaces Plasma and solves a lot of my problems with KDE, but isn't as slick as Plasma when Plasma is working. I replaced Dolphin with PCManFM, which isn't quite as slick either but it's adequate and seems to reliably watch directories for updates. I've stuck with some KDE applications, like Okular which has never disappointed me.
Yeah, hence my wish that somebody with strong opinions, financing and some energy would adopt KDE as the main DE for their distro.
There are some very prolific people involved in the KDE project, but from my understanding some of the largest KDE distros, e.g. Kubuntu, are conservative and slow to contribute anything.
My prediction is that Gnome is gonna leave KDE in the dust.
GTK 4 along with Gnome’s decision to limit customization is a massive step forward IMO.
Gnome will be able to build a consistent and stable desktop and developers building apps using GTK will not be bombarded with bugs caused by random user customizations, which are usually hard to reproduce, hard to debug, hard to fix, affect a small number of users, and are likely simply frustrating to deal with for devs.
But, maybe that’s not the right choice. Great thing about Linux is that if that’s the case Gnome will wither away and KDE (or other DEs) will quickly surpass it.
I'm myself using Cinnamon because I find GNOME 3+ too different from what I'm used to and KDE too clunky. I like my software to stay simple and not overwhelm me with thousands of options.
>I'm a big fan: it's like Ubuntu, but with much of Canonical's divisive changes reverted, and with a few other improvements
Hmm, I assumed that PopOS pretty much just added their own extra stuff to an Ubuntu install. I didn't realize they got rid of the default snap cruft?
Anyways, the last time I used Pop it had this weird upgrade-to-nowhere thing happen where it prompted me to update and then the update files were nowhere to be found.
Recently converted to their hardware after a rather long line of Macbooks. Completely happy - everything. just. works. First Linux laptop I've had where I didn't need to fiddle with anything, way less than macOS.
Regarding their business model, they do design the laptops (even if they don't produce them themselves), but I get the impression their competitive edge comes from two different things:
1. Branding/marketing (to non-mainstream users, but we're talking about them right now, so I guess it's working)
2. Software: Ensuring everything works smoothly. It's not just firmware and drivers - having their own distribution seems quite key, otherwise it's trickier to cater to _their_ users needs. Keeping it close to the most popular distributions seems smart to me.
Pop OS clearly is a big deal for (2), but I think it even contributes to (1). Since they make money (apparently sustainably so) from selling hardware, and given that it appears Pop OS is an important driver of that business, I'd wager it's not going anywhere as long as they have a working hardware business.
They seem to. They sell computers (with Ubuntu or Pop!_OS). I'm at least the 2nd one to get one at my work (that I know of). They're not "cheap", and if you know what they are and what you're doing you could just buy a similar model for a bit less and install Linux on that. (They're rebadged Clevos; I've replaced a fan on one.)
But if you buy from them it comes with Linux pre-installed (yeah!) which saves time and stress. It was easy and they do support you, and my 2 experiences with support were very good (OS and hardware). Plus matte screens.
I was skeptical about yet another distro, but after using it I kinda like Pop!_OS despite the name. It's my daily driver now. The "Pop Shop", where it does upgrades and software installs, is decent enough. It seems to let me choose "Flatpak (Flathub)" or "debs" for most software. I haven't had any issues with either, though looking through my list, it's VS Code and GIMP that are my Flatpak installs.
One great thing about Ubuntu and variants is that it’s got a lot of software that works well with it.
I've had a System76 laptop for a few months now and I'm pretty happy with it. They've done a good job with Pop!_OS; it's the only distro I've found that has made GNOME usable for me out of the box.
The only downside over my previous laptop (Dell XPS) is that I can no longer run 2 external monitors, the System 76 hardware just doesn't seem to be able to handle it.
It's a little weird, but they seem to exist only to sell to resellers, although clevo-computer.com seems to order a whole bunch.
In my case the "Clevo" ID is on the sticker on the back (along with a lot of other info). I think System76 specs the parts that work with Linux and orders the machines that way. I have no idea how the supply chain works in this case.
For me it is just Firefox that I don't want in snap form. The Mozilla team still puts out a .deb version via PPA [0]. You should probably also pin to this source so that the snaps don't get installed [1].
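For the curious, the pin from footnote [1] is just a small text file under /etc/apt/preferences.d/. The snippet below is the commonly circulated version for the Mozilla Team PPA (the origin string is an assumption on my part; verify yours with `apt-cache policy` before relying on it):

```
# /etc/apt/preferences.d/mozilla-firefox  (illustrative example)
Package: *
Pin: release o=LP-PPA-mozillateam
Pin-Priority: 1001
```

A priority above 1000 makes apt prefer the PPA's deb even over the transitional Ubuntu package that would otherwise pull the snap back in.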
I agree, and it's a little depressing to have the same "now I have to go fix that" relationship with two OSes, Ubuntu and macOS (I guess if I used Windows more than occasionally it'd be three for three). Since I did all that on 22.04, I am a little nervous that it will get clobbered if I upgrade to 22.10...
> Swapped to Arch and haven't looked back yet, Arch took me a lot more work to get set up but once it was it's been pretty invisible, which is how I like my OS to be.
I think that Fedora Workstation[1] is a #1 alternative to Ubuntu in terms of smoothness and ease of use. And if .deb is a requirement – then simply Debian.
Kind of creepy, but you just described, step for step, how I ended up running Fedora's KDE Spin on all my boxes over the past year. I really wanted to like Manjaro, I mean the AUR is incredible and their visual design even uses all my favorite colors, but despite my best efforts it felt alien to me in a way I can't well articulate.
Now enter Fedora for the past 4-5 months and I have to say I'm rather impressed. In particular, their package archives seem to keep current with a lot of the software I rely on far better than Debian/Ubuntu. Using dnf feels much more familiar than pacman ever did, and as of now I feel like my search for a daily driver has ended. I would recommend anyone else that's not happy with the experience of Ubuntu anymore to do likewise and see how Fedora feels in its place.
I went Fedora because of someone's recommendation on HN when complaining about some issues I was having with other distros. But I too feel like my search for a daily driver has ended!
> Fedora is very much NOT an all-ready-out-of-the-box experience, unless you are a FLOSS dev.
Nah it's pretty close if you have an integrated Intel or AMD system, you really don't need closed-source drivers for much except Nvidia these days. Chrome is in the non-free Fedora repositories (or can be installed easily from the website with an .rpm) and that's all most normal users need.
Do they still restrict nonfree audio/video codecs?
I would use fedora, but I want my repository set limited to trusted sources only. Core repositories are RedHat endorsed, afaict the user managed ones are not.
I want the ability to say that packages are from maintainers that are well trusted in a court of law. I cannot do that with fedora due to this, Ubuntu seems to be my only solution and it’s rapidly becoming unusable (I don’t hate snap, but it’s broken my workflow).
Most websites people care about use open codecs these days (Google and Netflix use VP9 and AV1, both are open and royalty-free).
Never had an issue with font rendering. And cutting edge being too cutting edge might be an issue with some dev things but having up to date Gnome and apps is fine.
People care about fans spinning up and batteries draining. Video not being hardware accelerated is a bad ootb experience and the type of reason Ubuntu became so big.
You don't need to be a developer; the average computer enthusiast is capable of googling the matter and figuring it out. It may not be appropriate for the "colloquial grandmother" sort of user, but you certainly don't need to be a computer programmer to figure it out. There's a lot of ground in between those.
My dad is a retired accountant and a computer enthusiast since the 80s. Never a programmer, but he does this kind of stuff. Has managed his own linux installations for about 15 years.
Nobara Project[1] helps with this. It is somewhat gaming-on-linux focused, but for some, like myself, that's a win. I use it on both my desktop (Intel CPU, AMD GPU) and my Thinkpad T480: works great on both.
Another good alternative could be openSUSE: https://www.opensuse.org/ - also pretty smooth, with user-friendly configuration tools, and no snap nonsense. WSL users might also be interested in using that for their distro instead of Ubuntu.
I used to use openSUSE around 2010, and had a very pleasant experience with it. Things just worked, and their YaST tool was very handy. I used to joke that openSUSE was the Mercedes of Linux distros :).
I am wondering why it isn't more popular. Is there any sentiment/experience people would like to share about this distro?
It's the second biggest Linux company behind Red Hat, so I think it's reasonably popular. It's also been a while since I used it, but I'll also point out that it's developed primarily in Germany, so it's entirely possible it's just not as popular in the English-speaking community.
However, the licensing agreement with Microsoft put many people in the open source community off, so I think that's contributed to its decline among hobby users.
Ubuntu does a pretty good job marketing-wise; many people start their journey with Ubuntu and never look for other distros. Or, on the other hand, they end up on advanced/continuous-maintenance distros like Arch Linux.
I'm a "power" Ubuntu user, strongly anti-snap, but Ubuntu doesn't force users to use snaps.
22.04 has been very invasive by defaulting Firefox to a snap (there is still an alternative though, which is easy and well known, but still), but the repositories are still there in apt/deb format, with all the usual (thousands of) programs.
There may be a few exceptions, but they're very likely programs that have never been in the standard repositories; Subsync is an example. This is a decision of the developers more than of Ubuntu.
I can see another case being programs whose developers did not want to update dependencies, but I've never found such a case.
That said, I'm definitely afraid of a real push for snaps, although I'm not sure if Ubuntu itself will be involved more. At the end of the day, containerized packages take space, and there's only so much that a distribution can provide (before enraging users).
Same here. Ubuntu user since the beginning (2004). Switched to Arch half a year ago, will never look back.
I don't understand why Ubuntu tries so hard to scare away users from what used to be a solid OS with things like Mir, Unity, snap... even now the modified GNOME it comes with by default. I don't get it.
I think that they are looking for a way to make money. A snap app store managed by them could be a source of cash. I'm still on 20.04 and maybe I'll switch to Debian or PopOS. I need to find the time to experiment. I'll buy a second SSD and dual boot.
With open source apps though? My guess is the idea of a store is a commonly known UX (App store, Play store.) And the snaps give you a way of installing more updated versions of software than the package repos provide.
I'm fine with the repos. After all, what am I using? Firefox (I know how to switch to the PPA), Thunderbird (with the features that were there 10+ years ago), LibreOffice. Slack is a snap and customers use it. Server-side stuff is usually Docker. Languages have their own package managers or an asdf plugin. Emacs is apt-get. Is GIMP still an apt? It seems they are going for Flatpak https://www.gimp.org/downloads/ Not a fan of that either.
Mint is preconfigured Ubuntu that avoids a few user-hostile choices. Techies talking about choosing a Linux seem to careen between Ubuntu/Mint dead-simple consumer stuff and then the other way to Arch, when they decide that they're too expert for Ubuntu (then to Macs, after they have one very rough day with Arch.) Debian is nice, and simple. Takes a little bit of configuring to taste, but configuring the next Debian how you like isn't going to be much different than configuring the last Debian how you liked. Keep a text file that reminds you of any weird config changes you make and why.
It also upgrades in place nicely. Excellent distro for Ubuntu and Mint to base themselves on.
Installing Arch with the cli installer [1] is pretty straightforward these days. It only becomes a lot more work if you have very specific preferences on partition layouts, filesystems and bootloaders. In my experience, changing these things on Ubuntu was even more difficult and less documented.
I haven't tried Arch, but in my experience changing those things in Ubuntu setup is simple and can be done in the GUI installer - I do it every fresh install. I haven't tried changing the bootloader; haven't had the need to.
There are many configurations that aren't (easily) possible with the GUI. The average user isn't going to care if they use LVM on LUKS or LUKS on LVM, but if you do, Arch makes it easy.
I personally prefer using EFISTUB instead of a bootloader, and this was very difficult to do on Ubuntu last time I tried.
As Ubuntu gradually snapifies more and more stuff, Mint is either going to start falling behind or have to redo Ubuntu's packages (reducing any benefits of the Ubuntu heritage...).
Mint might be an OK stopgap, but I wouldn't call it a long-term solution to Ubuntu's snap infestation. Just like you won't get a good OS by playing whack-a-mole with the various workarounds you have to do to avoid Windows' latest user-hostile bullshit.
Yeah, setting up Arch for the first time takes time. For newbie users it is definitely not an appealing process, and I would not recommend it to them. It's not a long process, but you do everything yourself, all in the terminal (no GUI installer).
I learned a lot about Linux when I decided to try Arch. And like OP, I also never looked back :)
My experience with Arch (which I tried after I'd used Linux Mint) was that for the first few weeks I was constantly consulting the Arch Linux Wiki to get anything done. -- I mean, yes, the emphasis is on "learned a lot".
I’ve been using EndeavourOS lately. It’s less of a distinct distro and more a really convenient Arch installer to get to a usable system quickly. And without all the extra BS of Manjaro — once you’ve installed it, you’re basically just running Arch. It installs packages directly from Arch, etc.
I use OpenSUSE Tumbleweed which is also rolling release, and the one thing that bugs me is that sometimes your tools are so cutting edge that it makes it hard to get older stuff. For example it's super easy to get python 3.10 working in Tumbleweed but super hard to get python 3.6 working.
I needed specifically Python 3.6 to test something, and man, that was hard to get working in Tumbleweed. First of all, it is not available in the standard repos or pre-built on Python's website. Second of all, downloading the 3.6 source and building it fails because (apparently) the GCC version I have is too new and creates problems with the `-O3` flag passed during building. So now I have to install an older version of GCC that plays nice with the Python 3.6 source code...
Eventually I just said "screw it" and used the docker python:3.6 image. Not quite as convenient as having a native python3.6, but "good enough". Not sure what I would do if that image became unavailable.
I started down that route as well but was running into the same gcc issues as when compiling manually. I found this issue https://github.com/pyenv/pyenv/issues/2046 but the workaround didn't immediately work for me and I didn't dig any deeper.
I've also been using TW for a while now (6 or so years) and I've taken to using GNU Guix as an environment manager when I need a specific version of some software.
Canonical really showed their colors with Snap I think. The base concept is obviously one desired by many users, as evidenced by the existence of alternatives like Flatpak and AppImage which both(?) predate it, not to mention ROX AppDirs, GNUStep Application Bundles, and a few others.
But Canonical insists on their own implementation which is entirely controlled by them. Only they have a Snap repo, they follow the hated Windows model of forced updates, etc.
Someday I think Ubuntu will have to give up Snaps to remain relevant and switch to (probably) Flatpak, but they're going to lose a lot of users in the meantime.
I honestly hope not. I like the fact that other distros are trying things themselves. If you don't like it, just switch to a different one. Whats the point of different distros if they all do the same thing?
I strongly recommend Pop!_OS as a great solution for 'snapless Ubuntu'. I buy System76 machines with it, but also install it on a bunch of my other non-System76 machines. The whole family uses it.
I second this. I understand how hard it may be to maintain software these days, and that’s the whole point of snap, but the way it is implemented seems broken. It can simply break the whole system, it is like it uses one huge semaphore for all ‘transactions’ it commits in the ‘snap database’ (whatever it is) and if the transaction doesn’t complete, the whole system stalls. Only recourse is reboot.
Don’t get me wrong, snap is like a great conceptual ideal, but it seems to work only for Ubuntu/Canonical developers (or people with access to them), no one else. It is not well programmed/thought out for some reason. Pretty stupid issues, and the read-only fs thing is also weird: how can one test things on the go?
> I understand how hard it may be to maintain software these days ...
I can't see it being harder than during the forming years with ever-changing libs and tools. Haven't seen a new end-user desktop app on Linux for over a decade either.
Yes, I agree. However, this is the main excuse Canonical devs cite when asked why they wouldn’t provide a *.deb package. When one pushes more, they do not elaborate and fall back on the maintainability excuse.
Last time I poked at Desktop Linux I tried all three of the major solutions for this, and the only one I could tolerate was AppImage, but its ecosystem seemed a bit weak and its official (I think?) site & repo were... not confidence-inspiring.
You don't have to use Snap at all. I've been using latest Ubuntu for a few years and I just uninstall it after every major upgrade.
I do use Flatpak for a lot of stuff; it's better than Snap in many ways: it's faster, you can upgrade while apps are running, it allows third-party repos, etc...
They make it nearly impossible to avoid snap. Every couple of upgrades, they unconditionally run a script that forcibly removes the Firefox deb and installs the snap. Nothing I've tried has been able to prevent this. It seems to be hard coded somewhere.
So it's kind of like saying "well you don't have to stay in the pit" except you keep pushing me in the pit every time I climb out.
I'm not finding this to be the case, on every computer I now uninstall snapd completely and use the firefox-esr PPA (mentioned elsewhere), and I am able to apt-get dist-upgrade regularly with no problems.
And also, you do not need administrative privileges to install Flatpak apps; Snap mandates administrative privileges. The only downside is that accessing a framework's command-line utilities is difficult and tricky in Flatpak.
Let me start by saying Debian rolling release is the truth.
That being said, I worry that the archaic community and contributor tooling behind Debian will be a barrier that leads to its demise. Makes me sad, but it is what it is. We are all getting old, and to us this interface looks normal, but to new people and those making the apps that the kids depend on, it looks like a shitshow of antiquated code.
Nobody wants to interact with savannah, it's ugly, move those projects somewhere else.
Nobody wants to use mailing lists or byzantine social arrangements through IRC just to get their package submitted.
I've got to be real with you, not a lot of people give a shit about 'free' beyond price either. Make non-free the default and put 'free-only' as the apt option. I love free software and it hurts me to say it, but there's a reason why AUR is killing the game right now. It's easy to use, easy to enter, easy to see people using your code in a mainstream OS without having to opine to holy neckbeards for the mercy to allow your package to be published.
Debian is a great OS, and I'd happily use it for the rest of my life if given the choice, but reality is staring me in the face that this won't last in its current form, and its current form won't be maintained if anything changes. It's a catch-22.
Debian has hidden appeal. A distribution's tooling tends to center around the era when the project started out, and it takes a group of mixed generations to do slow reform. If you're worried, you can get involved.
Debs are very easy to make and their structure is easy to understand for anyone who's been using Linux for at least a couple months. They're just tarballs with a couple special text files. What's hard is Debian's official tools to make debs, which are geared towards people who want to become maintainers of packages in official Debian repos. If you don't aim to become one, there are third-party tools which make it easy, such as cargo-deb or holo-build. If you don't want to use them, you may also even write a shell script which will produce a good package. Just use lintian to double-check whether everything is alright.
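To make the "just tarballs with a couple special text files" claim concrete, here's a rough sketch in Python that assembles a minimal .deb by hand using only the standard library. (Strictly speaking, a .deb is an ar(1) archive wrapping a version marker plus two tarballs; the package name and paths below are made up for illustration, and for a real package you'd use dpkg-deb or one of the third-party tools and double-check with lintian.)

```python
import io
import tarfile
import time

def ar_member(name, data):
    # Classic ar(1) header: name(16) mtime(12) uid(6) gid(6) mode(8) size(10) + "`\n"
    header = "{:<16}{:<12}{:<6}{:<6}{:<8}{:<10}`\n".format(
        name, int(time.time()), 0, 0, "100644", len(data)).encode("ascii")
    padding = b"\n" if len(data) % 2 else b""  # ar members are 2-byte aligned
    return header + data + padding

def tar_gz(files):
    # Build an in-memory .tar.gz from a {path: bytes} mapping.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tf:
        for path, content in files.items():
            info = tarfile.TarInfo(path)
            info.size = len(content)
            tf.addfile(info, io.BytesIO(content))
    return buf.getvalue()

# One of the "special text files": the control metadata.
control = tar_gz({"./control": (
    b"Package: hello-example\n"
    b"Version: 1.0\n"
    b"Architecture: all\n"
    b"Maintainer: Example <ex@example.org>\n"
    b"Description: minimal hand-rolled demo package\n")})

# The payload tarball: files unpacked onto the filesystem on install.
data = tar_gz({"./usr/local/bin/hello-example": b"#!/bin/sh\necho hello\n"})

# A .deb is just these three ar members, in this exact order.
deb = (b"!<arch>\n"
       + ar_member("debian-binary", b"2.0\n")
       + ar_member("control.tar.gz", control)
       + ar_member("data.tar.gz", data))

with open("hello-example_1.0_all.deb", "wb") as f:
    f.write(deb)
```

The point is only that the container format is trivial; the hard part, as the parent says, is Debian's official packaging tooling and process, not the file format.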
RPM on the other hand is an over-engineered mess: <https://xyrillian.de/thoughts/posts/argh-pm.html>. Tangentially, YUM and DNF were the slowest package managers I've ever used. They're a real pain.
> RPM on the other hand is an over-engineered mess: <https://xyrillian.de/thoughts/posts/argh-pm.html>. Tangentially, YUM and DNF were the slowest package managers I've ever used. They're a real pain.
I like openSUSE, but man, zypper is possibly the slowest package manager ever!! It can't even download in parallel. I think they are now moving to dnf.
Making the package generally isn't very difficult in comparison to flatpak, appimage, rpm. The problem is the old dusty path to getting a mentor/sponsor for your little insignificant utility to get introduced into the repos. Every doc/page you see will have swaths of (sometimes old) information that outlines endless procedure and arcane incantations required to find someone who may be online and not idle, who might just vouch for you... or they might endlessly critique your code and license.
This whole process worked great back in the day, but now the kids will just go somewhere else or release outside of repositories.
I know Ubuntu loves the smell of their own code, but part of me thinks PPAs were Canonical's way to address how difficult and crusty the process to get software into Debian is. To this day, if you can get into the Debian repos, Ubuntu will include you in theirs 'for free'.
I feel so bad saying this.. Debian was supposed to be The One, imo. I've been at this linux thing since the mid 90s, I could not have predicted the current state of things decades ago. It truly disappoints me that things have gone this wrong with modern Linux, but I can't say it had a chance to play out any other way. You see the same with cryptocurrency (gasp!) where in order to achieve legitimacy in the eyes of the opposition a novel industry sought to emulate the legacy systems enough to be familiar and charm the opposing users, but ended up gaining users sold on that dream who lo and behold demanded an effigy to their old systems. We brought this on ourselves.
Now we have Ubuntu aiming to be the new Microsoft at an org level, aesthetically emulating MacOS at a UI level, and meanwhile pretending to be a crossplatform user experience perfect for phone and desktop. I call bullshit. Ubuntu took the dark path after 10.04 and 10.10 when Ubuntu Netbook Remix first reared its ugly head. Someone misunderstood the instructions, fed it after midnight, and now it's achieved its final form- Unity.
This is 100% our fault, my generation's fault. We didn't keep with the times and I don't see that changing fast enough.
No, the recent change [0] was to add non-free firmware to the official installer (instead of relegating it to a separate unofficial installer). GP, however, was talking about the apt repository: the non-free apt repository is disabled by default. This hasn't changed and is unlikely to ever change. The non-free apt repository can be enabled via a trivial /etc/apt/sources.list edit.
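For readers who haven't done it: the edit really is a one-liner. As an illustration (assuming Debian 12 "bookworm"; the separate non-free-firmware component only exists as of bookworm, and older releases fold firmware into non-free), the sources.list line would look something like:

```
deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
```

followed by an `apt update`.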
So that means you can end up installing some non-free firmware, potentially buggy/broken, and get zero updates for it without manual intervention? I don't think I would have expected that. Is it such a leap to now also enable the non-free repo by default? Seems like an inconsistent position to an outsider (user).
> So that means you can end up installing some non-free firmware, potentially buggy/broken, and get zero updates for it without manual intervention?
The alternative is that they don't install any firmware, your installation is broken or crippled, on another computer you search desperately for the correct firmware, which when you find it (on some byzantine and bitrotted manufacturers' website) has some impenetrable process that only works on Windows to install, is often buggy and broken, and there's never been an official update. If you need to manually intervene you might as well reinstall the OS because you didn't take any notes on how you did it last time, the manufacturers website has disappeared, and you're downloading the updated driver that might fix the bug from all-the-drivers.disco or a Discord server run by H4r&dw3r3-wiz4r&d88 who was probably not born in 1988.
For me, there's a huge leap from installing drivers that you're necessarily going to install anyway (because you're installing on the box that has the hardware) to adding optional stuff in. As long as it doesn't install the drivers silently, but installs them loudly. I'd love to even see a EULA pop up that indemnifies Debian from problems stemming from the nonfree drivers, to provide the appropriate level of intimidation and aversion. I would also request exclamation points, possibly a bright red or yellow color, maybe a blink.
I wouldn't be entirely against exclamation points, bright red or yellow, and blinking, for enabling nonfree repos, but there isn't exactly a high bar to jump over currently. The bar seems exactly high enough to keep everybody from complaining for very long.
The alternative I was hoping for was a non-free (firmware only?) repo was enabled by default to support the non-free software that was installed by default.
The alternative of not installing required non-free firmware is not an option we need to go back to.
I'm interested to know: What made you choose Arch over Fedora?
I had a somewhat similar experience to you and tried Fedora first and it works well. Fedora seems to follow the same "just work" direction that Ubuntu aimed/aims for, minus some aspects I wasn't a fan of. Since Fedora worked for me, I never bothered checking out any other distributions. Was there a big draw to Arch over some of the others, or was it just the first one you tried out and stuck with?
Speaking for myself - Red Hat's package manager cratered on an upgrade in the past (early 2000s) when it ran out of user space, and my investigation was that I hit a pretty core issue with the package manager design itself.
Since it isn't something where I can easily test if the problem still exists, I just avoided RPM completely. That means I have a knowledge and learning benefit in using other systems, and even in learning new (simpler) systems over investing in Fedora.
The split between Fedora and RHEL is also not IMHO as clearly advertised as with other distributions, so I do worry that I will hit a very expensive support wall that might also require me to switch distributions.
After 15 years I switched to Mint Cinnamon, and it's quite nice now. It is easier than Ubuntu because removing snap is no longer a task.
Next up I need an alternative to Firefox; it's very disappointing how much has to be configured to turn off all the bullshit analytics and "colorways", and to get a blank new tab page.
My main gripe with Snap was the loss of configuration whenever it decided to update a package on its own. Imagine running OwnCloud as a Snap and losing all metadata when OwnCloud updated itself for some reason.
The problem is when you instruct a colleague to check something using nmap, they install nmap using snap and are surprised it doesn't work, while you are left wondering what the hell is wrong with their computer, oblivious to the footgun Canonical put into their distro, until you remember again after a decade+ of not using Ubuntu. Canonical shouldn't push out broken things like that. It's supposed to be a distro where stuff Just Works(TM).
Only Firefox is shipped as a snap on a default install.
But Firefox as packaged by Ubuntu was always gimped, as they didn't enable certain optimizations like PGO/LTO, so the official tar.gz from Mozilla was superior anyway. (They fixed this recently for the snap.) I think it's similar for Chromium.
So I didn't use snaps, and I've never been in a situation where I needed to.
Firefox isn't the only thing that has been switched off .deb and onto a snap. LXD, for example, has been shipping as only a snap for years.
But even if it were only firefox, telling people they should download tarballs instead of having their distro ship packages is really missing the point.
I've loved Ubuntu for a while as well, but snaps are making me want something new - any suggestions for a "I'm middle aged with children so I need an it just works distro"?
> any suggestions for a "I'm middle aged with children so I need an it just works distro"?
The latest and greatest recommendation is now openSUSE Tumbleweed (rolling yet stable). Fedora was the recommendation of 2021. Manjaro was the recommendation of 2020. Try one and see if it works on your hardware. If not, pick the next one. :)
Seconded. I am also a fan of Linux Mint for very similar reasons. I ended up recently switching from Mint to Pop, though, because I like modern Gnome more than Cinnamon (sue me).
Mint was pretty much rock solid on my Dell XPS. It just worked. Pop has been a bit more exciting, but still a great distro!
I was in the same boat and switched to Fedora 6 months ago on desktop and laptop, the Cinnamon spin. It gets out of the way and up until now has just worked. No hardware issues except for the fingerprint reader.
I recently discovered ZorinOS, and while I didn't use it long enough to recommend it strongly, I found their philosophy of offering a simple, well-integrated system interesting.
I have an Ubuntu server and needed an app that is not in the repos. I found a snap of it and it worked great (it's a CLI app), so there are good use cases for this tech. It probably needs more optimization, but hopefully it does not get killed; some competition is good.
Btw, I used Arch back when I had time and tinkering with software was still pleasurable (I could do a full dual-boot install without any documentation in one hour, but that kind of stuff displeases me these days).
I think I was bitten by snap (Firefox refusing to start). I just don't have the mental energy to learn what snap is and why I can't or shouldn't just use apt-get like I always have. Or do I have to use both now?
(edit: if you want to quickly answer great, but I'm really putting my frustrations into words)
I always uninstall snapd first thing, as well as GNOME (I use KDE but the Kubuntu 22.04 installer crashes if using full-disk encryption). Apart from Firefox, for which I had to enable the Mozilla PPA, I haven't really run into anything where I was required to use snap.
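For anyone wanting the same setup, here's a rough sketch of the commands involved. This is an assumption-laden outline of the commonly documented route (the PPA name, pin origin, and file path are the widely used ones, not taken from this thread), so double-check against current Ubuntu documentation before running:

```shell
# Remove the snap-packaged Firefox, then snapd itself (assumes Ubuntu 22.04+)
sudo snap remove firefox
sudo apt purge snapd

# Enable the Mozilla team PPA, which ships Firefox as a classic .deb
sudo add-apt-repository ppa:mozillateam/ppa

# Pin the PPA above the Ubuntu archive; otherwise apt prefers the
# transitional "firefox" package that reinstalls the snap
printf 'Package: firefox*\nPin: release o=LP-PPA-mozillateam\nPin-Priority: 1001\n' \
  | sudo tee /etc/apt/preferences.d/mozillateamppa

sudo apt update
sudo apt install firefox
```

The pin file is the step most people miss: without it, `apt upgrade` will quietly pull the snap back in on the next Firefox release.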
Does Arch still sometimes just break on updates, requiring manual intervention? That's what turned me off last time I used it, especially coming back to a machine you haven't used in a while and trying to update it. But that must've been over half a decade ago by now, so I don't know how things have changed.
Good luck on your link shortener project! I made an open source link shortener many years ago and eventually threw in the towel and quit. The problem domain for link shorteners is one of anti-spam and abuse. Charging for it makes complete sense to try and weed this out but having any amount of "free" I found to be a hotbed of reports thrown against my hosting provider and registrar for supporting spam. I found myself getting quickly blacklisted everywhere since spammers love to use new shorteners for bypassing their own blacklists.
I've not looked into the problem again in 10+ years since. If I did I would 100% skip the free plan.
Choose what fits your field. If you're doing ML you probably want Python. If you're doing web you probably want JavaScript. If you want fast native apps then Go, Rust, C++, C, and the rest based on what libraries you think you'll need and what you enjoy the structure of most.
I personally have had a career in Python for over 10 years at this point. What made me choose Python as my go to language was my friends and peers making fun of me using PHP for server side web development. Ruby and Python (Rails and Django) were the cool kids. So peer pressure could also help you with your decision as it did mine!
At this point in my career I care less about my language and more about what libraries are available as those determine how fast I can have an MVP. I'd determine what you are trying to build and come up with a list of requirements and see what libraries are already available for you to build off of.
Open source pet projects are a great starting point for learning. Come up with an idea, even if it's been done before, and build it.
And this is when comments in code are important! Any random numbers without a source are immediately suspect to me, especially in something that needs to be secure. It will save your coworkers and peers time trying to figure out why they're there.
It’s an incantation that’s propagated for 50+ years because it’s minimal and effective. Over time, it’s been fully distilled to those properties.
Since comments aren’t essential to being minimal and effective, they don’t survive the distillation.
Think of it like a clever gist that got pasted and shared a hundred times. Even if the original source had explained every step in great detail, with inline comments and deep explanatory discourses and citations to prior art and etc, they’d eventually get trimmed away as fat as people repeatedly prune it down to some “important” bits pasted into their own copies and then later share those trimmed copies, ad infinitum.
Since it's an LCG, it does not matter how it was derived as long as the randomness properties are known: cycle length, behavior from a given set of initial states, and dispersion properties. Perhaps also performance.
These should be documented.
It's a rather weak PRNG with a short cycle, so the suspicion is that it was made for particular dispersion properties, such as for a hash table of particular data and size, or some other bucketing algorithm.
It is interesting -- 5/9 of the books listed there are clearly numerics textbooks, so it isn't surprising they talked about a non-cryptographic PRNG. Curious about the other 4. But maybe this shows up as a "here's why you should be careful what type of RNG you are using" type example.
When you find this algorithm, with or without comments about where the numbers come from, at a point that should be secure, you should be more than suspicious. In any case, this is not a secure PRNG.
For PRNGs like this, constants are often chosen by guess and check. This algorithm (an LCG) has a bit more theory, so these might not have been chosen that way, but the author of the code probably didn't have any insight either.
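To make the discussion concrete, here's a minimal LCG sketch in Python. The default multiplier and increment are the well-known Numerical Recipes constants, not necessarily the ones from the code under discussion; the small-modulus demo uses the Hull-Dobell theorem to show how documented constants make the cycle length checkable:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x -> (a*x + c) mod m.

    Defaults are the Numerical Recipes constants. All quality
    properties (period, dispersion) depend entirely on (a, c, m),
    which is exactly why their origin should be commented.
    """
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# Hull-Dobell theorem: the period is the full m iff
#   1) c and m are coprime,
#   2) a - 1 is divisible by every prime factor of m,
#   3) a - 1 is divisible by 4 if m is.
# With a=5, c=3, m=16 all three hold, so every state appears once:
small = lcg(seed=0, a=5, c=3, m=16)
period = [next(small) for _ in range(16)]
assert len(set(period)) == 16  # full cycle of length 16
```

Picking constants that merely "look random" breaks these conditions easily, which is the practical argument for citing where the numbers came from.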
My only issue with this is that I can go to my password manager and I have like 10 different Microsoft accounts over the years, some of which seem to be merged, some of which are not, some are from Microsoft acquired entities like Skype and all of them confuse me.
I seem to be able to use some of them to login to the same merged account and others don't work anymore at all.
Microsoft's entire auth ecosystem since Live has been a confusing mess for me especially when they acquire someone and start bringing them into the fold.
I tend to just avoid it, I'm not deep into the Microsoft ecosystem and rather not have the headache at this point.
Oh man, I always find it funny in a slightly schadenfreude sort of way when someone swaps to a new platform, goes to talk about how great that new platform is, and then the platform can't handle basic load.
That being said I do use Jamstack quite extensively, it can trivially handle huge loads due to it being mostly just static files that can be served through CDNs. This seems like it has nothing to do with Jamstack but it's still funny.
I should have moved my blog over to Jamstack.
I put this link out and then realised there was a magic link exploit in the ghost version I was using. I took it down to make some emergency changes...DOH!
I honestly miss people building random stuff on the internet to showcase unique information on things like Geocities. They were almost always "ugly" but also always visually entertaining.
I remember skimming through hundreds of car Geocities sites by enthusiasts who had no clue how to build a website but wanted to show off their rides. It was always a rollercoaster of poor UX and bad quality images but something was also fun about finding the random image that was a link to some internal page that had more random links on it to more and more and more.
Now everyone just posts info on curated pre-designed platforms and blogs like Facebook or Blogger which takes out half of the appeal for me, I instantly don't care about anything posted on Facebook and don't have a Facebook account. There is some good content on Blogger and the likes but there's nothing fun about most of the posts.
I clearly have been visiting the wrong sites and probably could dig up similar content to what I miss. It could also just be me being old and grumpy.
Well it's dangerous to give people the tools to design themselves - it's a rare skill and very difficult to blend it all together into a cohesive community! I think many startups of the 2010s saw Myspace design as a disaster.
We're hoping that by having some default color palettes and some common patterns and fonts - as well as limits to paddings, margins and border sizes - that it helps people stay in the ballpark of what a Multiverse post looks like. (Kind of like how you can spot something made with LEGOs from a distance.)
But whatever - creating ugly things should be part of the world. Looking forward to the future of 'ugly', 'bad', 'shoddy' design on the Internet. <3
> Well it's dangerous to give people the tools to design themselves
Having built a site builder before, I think you are on the right track. You're right to limit the design scope people have access to in order to manage Multiverse's brand image; I wouldn't recommend otherwise. What you've got is much more a platform than a blog/site builder, and the network effect is going to rely on people wanting a "Multiverse" site, and that's going to be a particular aesthetic.
In general though, there is a saying in music when someone makes a mistake: "It's only music, nobody died". I think the same could be applied to the web in general, and personal sites especially. Things got reaaaally boring after the "Flat design" trend of the 2010s powerwashed the web of anything remotely interesting, and as much as it brought in good UX and design patterns it also stripped out a lot of character. It'll be nice to see people try on their design pants again.
Wow - this is a very encouraging post. Thank you! If you happen to still be reading - what was the site builder you worked on? Have you written anywhere about your experience on that project?
Absolutely agree with you as well. I do think it’s “dangerous” - or perhaps it’s better to just call it a “tradeoff” - because you give people greater power over the design of the site. But obviously a tradeoff we’re all willing to make.
This comes down to the fact that learning design, UI, and UX is far harder than learning to code. And that nobody who uses a tool cares how good or bad the code is if it works okay.
Whenever I have looked into learning the design side, I've found it far more difficult to find a good set of resources to go by.
Whenever I see people on here longing for the web of old (surprisingly often) I point them to a great little game called HypnoSpace Outlaw. You play a web enforcer in an alternate reality 90's internet simulator.
appreciate what you've expressed @overshard, it resonates
i dont think pre-designed sites and blogs are bad, but I think if/when they are the only (easy/popular) options out there, that's when it feels a little unfortunate
fun and casual is something that we strive for -- a lot of deep and rich things come out of playing and being playful. we hope that we can help contribute a small step toward that
I don't think it's that, I think you're describing the electronic version of an old neighborhood with character versus a modern suburban planned community. Some people genuinely prefer the latter, but there's not much of a choice both online and in the real world these days, so we're all stuck with it.