Arch Linux is so powerful thanks to its amazing community. You get access to over 70k packages with one command (official repos: pacman; AUR: yaourt). Whatever you're looking for, it's there.
I've run the same setup for almost 7 years without any issues, and it's always fast, always reliable and productive. I prefer Xmonad over pretty stuff like KDE, though, just like many Arch users. It's not beautiful and there is a learning curve, but once you get used to it, you can use it for years, just like me and lots of other people. And it'll give you stability that lasts for years.
Recently I created a distro based on Arch to attract more pragmatic developers to Arch. My distro is called Happy Hacking Linux, and unlike Arch, it comes with a ready-to-use desktop. It's a niche distro made just for pragmatic developers, so its installation wizard starts by asking where your dotfiles are located and sets up a system where you can immediately start building stuff. Last January, I converted my dad's old desktop PC into my personal development workspace within 30 minutes. And that useless machine was running my workspace as fast as a brand new Mac with bloated OSX. If you're curious, check it out: http://kodfabrik.com/happy-hacking-linux
When you overwrite the disk, you say to use dd if=/dev/random, but the reason this is much slower is that /dev/random blocks. Use /dev/urandom, which doesn't block -- and before you ask, yes, this is still secure.
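For illustration, a wipe along those lines looks something like this (/dev/sdX is a placeholder for the target disk -- triple-check it before running):

    # overwrite the whole target disk with pseudorandom data;
    # /dev/urandom never blocks, unlike /dev/random
    dd if=/dev/urandom of=/dev/sdX bs=4M status=progress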
Or you can use dm-crypt: create a temporary container with a random key and wipe through it. This will be a bit faster than the above. See the Arch wiki for details[1]
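A sketch of that approach, roughly the recipe the wiki describes (device name is a placeholder):

    # map the disk as a plain dm-crypt device keyed from /dev/urandom,
    # then zero the mapping; the disk itself fills with ciphertext
    cryptsetup open --type plain -d /dev/urandom /dev/sdX to_be_wiped
    dd if=/dev/zero of=/dev/mapper/to_be_wiped bs=4M status=progress
    cryptsetup close to_be_wiped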
On the subject, is there any evidence that wiping with anything more complicated than zeros is actually helpful?
Are there practical attacks where you take a zeroed hard drive and derive data from it that you wouldn't have been able to derive had it been written over randomly?
Or, you can use the (S)ATA Secure Erase[1] command, which was designed from the start to securely wipe a hard disk and should handle wiping reserved space etc. properly.
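Roughly, using hdparm (/dev/sdX is a placeholder; if the drive shows as "frozen", a suspend/resume cycle usually unfreezes it):

    # 1. check the drive supports Secure Erase and isn't frozen
    hdparm -I /dev/sdX | grep -A8 Security
    # 2. set a temporary user password (required by the ATA spec)
    hdparm --user-master u --security-set-pass p /dev/sdX
    # 3. issue the erase; the firmware wipes everything, including reserved areas
    hdparm --user-master u --security-erase p /dev/sdX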
Apparently, this is not always implemented correctly on SSDs. It should take quite some time, but if it finishes after a few seconds, it probably didn't really wipe your SSD securely.
Most SSDs are self-encrypting by default, and all Secure Erase has to do is wipe the encryption key and generate a new one, which takes no time and does effectively and securely wipe the entire disk.
Arch just costs me too much time doing sysadmin work and keeping up with breaking changes (it's a rolling distro, so all kinds of system configurations are possible, many of them untested or just bleeding edge).
If you like using an unstable bleeding edge system where you have to configure everything yourself by hand with absolutely zero tolerance for user error, use Arch.
If I'm on a tight schedule doing mission-critical projects on my work computer, with hard deadlines and Murphy's Law involved, forget it.
If you're curious, I'm currently using Mint with Cinnamon.
Cannot agree. I have 5 machines on Arch (a notebook, a desktop, three servers), and the administrative cost is pretty much negligible (unless, of course, I decide to install a new application or something like that). Maybe 3 minutes a day total for interacting with the guided auto-update cronjob and reading the system health report.
I thought the same thing, but I set up Arch on my main machine a couple of months ago, and apart from the setup pain, it has been a much smaller hassle than I anticipated. It's pretty stable; no breakage so far. I love it!
But I do admit the initial setup was pretty intense; I loved the learning experience.
I'm also using Cinnamon, on Arch. It's quite clean and nice. :)
> I thought the same thing, but I set up Arch on my main machine a couple of months ago, and apart from the setup pain, it has been a much smaller hassle than I anticipated. It's pretty stable; no breakage so far. I love it!
As an Arch user of ~5-6 years, I do have some advice outside the obvious realm of "keep regular (working) backups" that can help minimize future headaches (not necessarily for you since you've likely already encountered these points; mostly for others who may be interested in dabbling with Arch):
1) Update often (at least once every 1-2 months; try not to let it get more than 6 months out of date), and keep a close eye on the news items -- or, better yet, subscribe to the arch-announce mailing list. You'll get a heads-up on anything that could be a potentially breaking change. When breaking changes come down the turnpike, don't panic: the Arch maintainers post upgrade instructions walking you through the process along with the appropriate news entry, and it's usually fairly straightforward. There are also the forums if you get into a bind, and there are tons of great people willing to help (but search first!).
2) Keep your configurations up to date (either with `yaourt -C` or by examining the .pacnew files by hand with vimdiff or similar; see the sketch below). This isn't really Arch-specific, but because Arch is a rolling release distribution, new versions sometimes arrive fast enough that existing configurations fall out of date and present unique challenges. It all depends on the upgrade policy of the upstream packages, though; usually it isn't a problem.
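For instance, assuming pacdiff is available (it ships with pacman, or with the pacman-contrib package on newer systems):

    # list leftover .pacnew/.pacsave files under /etc
    find /etc -name '*.pacnew' -o -name '*.pacsave'
    # or review and merge them interactively (vim -d by default)
    pacdiff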
Fortunately, the lion's share of "breaking" changes you'll encounter will be minor. The most recent one you've probably run into by now is the certificate path issue with ca-certificates. I suspect most of the major architectural changes are in the past at this point (most of the disruption came with systemd), but keeping your system relatively up to date is a great way to inoculate it against unnecessary future effort. It's always possible to upgrade ancient installs if you let them lapse (I should know!), but it's better to avoid that if possible.
I hope you continue to enjoy Arch for many years to come!
One concern I've had with Arch was how they would handle the big C/C++ library ABI changes. It only seems to happen about once a decade, but when it does it is a giant pain. Last time this happened while I was using Debian testing you had to pin a ton of packages for many months while everything moved over to the new ABI. How does Arch manage these core library ABI changes?
Arch is currently transitioning to GCC 7 and OpenSSL 1.1.0 (yes, both).
The GCC transition is a straight-up replacement of packages: gcc and those built with the new gcc. Some AUR packages may need to be rebuilt, but there's broad agreement in the community that that's something an AUR user must already know how to do.
The OpenSSL transition is keeping both versions around for a while, shunting the older version from package `openssl` to `openssl-1.0`, which will eventually go away too.
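In practice the two coexist side by side; something like this (the binary name below is hypothetical):

    # the compatibility package ships only versioned libraries,
    # so it doesn't conflict with the new openssl
    pacman -S openssl-1.0
    ldd /usr/bin/some_old_tool | grep libssl   # still linked against 1.0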
I guess the GCC 6 to 7 transition was not a big deal, partly because Arch tries to stick with upstream code and rarely adds patches of its own.
> The OpenSSL transition is keeping both versions around for a while, shunting the older version from package `openssl` to `openssl-1.0`, which will eventually go away too.
In my experience, this hasn't been a terribly unsettling transition and has gone quite smoothly except for a few individuals unlucky enough to update during a mirror turnover (or so I gathered from the forums as their problems often went away after updating again a short time later).
Mostly, insofar as OpenSSL 1.1.x vs. openssl-1.0 is concerned, I've only had problems with projects that make heavy use of cmake, as there's no easy workaround outside either a) creating/linking a few directories where cmake expects to find OpenSSL 1.0, or b) patching the FindOpenSSL macro (my preferred approach, because it avoids polluting the file system). But I don't think this is a problem Arch needs to resolve, as it's most certainly an upstream issue and only affects AUR packages (and, as you said, AUR users should already have some experience in this area anyway!).
You're absolutely spot on with that last statement--one of the reasons I love Arch is because package maintainers keep things as close to upstream as possible and GCC updates have largely been painless. Certainly more so than I recall under Gentoo.
Now that I think about it, the few pain points I've encountered with less frequently updated Arch installations have almost always involved multiple incompatible changes to core/filesystem layered with pacman updates. There are others, sure, but those are the ones that stick out in my mind the most. It's better to avoid them entirely by updating regularly, of course, but it happens.
Yes, I remember once needing to print something on my laptop and realizing I hadn't set up printing yet on my Arch system (a non-trivial process that requires at least 30 minutes of reading the wiki) because I hadn't printed anything in years. The thing I was printing was an important legal document for my startup, and it was very frustrating that my co-founders on their Macs had all printed and signed their copies right away while I hadn't.
Now, if I want to print something on Mint/Ubuntu, all I have to do is just plug the printer into my USB port, and it Just Works.
We also had an intern who could code but had never used Arch, and he spent the majority of his first month learning how to set up and use it. Not a very productive use of his time.
Arch is fine if you're looking for a hobby distro (I once had a showstopper where I was trying to work remotely and couldn't connect to wifi, but I had no wifi to read the wiki and figure out how to troubleshoot it, so I was screwed -- after all, who remembers how to use wpa_supplicant from the command line by heart?), but for getting shit done, it's far from ideal.
This was a simple auth-based network too, where my Mac-using co-founders just typed in the username and password they got from the receptionist and were good to go. Meanwhile, I was still trying to figure out what the hell mschapv2 is, unable to get any real work done for at least half an hour. By the time it was working (I was using the GUI wifi frontend recommended on the Arch wiki, and it still required me to manually select from every possible permutation of encryption instead of autodetecting like any friendly OS does for you -- how would the receptionist possibly know whether the network uses WPA2 Personal, WPA2 Enterprise, ...?), my brain was already worn out from the troubleshooting and frustration, so I really wasn't able to write as much code as I wanted -- and we had to leave shortly after anyway, so effectively, I got nothing done.
I also remember Arch choking on connecting to a wifi network that had an ampersand (&) in the password. I couldn't use NetworkManager because I needed to connect to wifi from the command line to unbreak Xorg (an NVIDIA driver update had hosed it), and while I had been perfectly able to connect to this network from the GUI (it was the only one I had access to in the vicinity -- this was a desktop computer), I just couldn't connect to it from the command line no matter what escape sequences I tried (quotes, \&, etc.). No amount of troubleshooting -- Googling and reading the Arch wiki and manually copying and pasting on my 4" phone screen (since I had no Internet access otherwise) -- could fix it. Even the guys on Arch Linux's IRC channel were stumped.
I agree Arch should not be used for mission critical stuff.
I have been in situations similar to yours. I run Arch, and I've had to deal with printers (and more exotic hardware) under urgency, I've had to connect to weird WiFi networks where the network admin was some guy who knew only how to click buttons and take screenshots, and I've absolutely had to figure out how to do things from a cli without internet access.
----
About the printing business: if you know you don't have something set up in Arch, and you know it will probably take a while to do, and you don't want to spend that time ... you should definitely use something else for it while you can. And that's before all the usual caveats of getting printers set up and running on any OS.
About the intern: why would you even ask an intern to set up and use Arch when all they needed was a working Linux distro? That's just using the wrong tool for the job. It's almost like asking an intern to learn how to drive simply to travel to the office every day.
About the network auth:
1. The GUI tool you used is the same as on every Linux distro: NetworkManager (or a NetworkManager client from your DE, as KDE provides).
2. If it was WPA2 Personal, you'd have just a password. If it was WPA2 Enterprise, you'd definitely have something more than just a password; and enterprise Wi-Fi is convoluted enough that it requires detailed steps even on Windows 7. The NetworkManager GUI never asks for a username when WPA2 Personal is selected (because that wouldn't make sense).
About your last issue:
1. NetworkManager is the network management software all Linux distros with a GUI use.
2. NetworkManager now has a CLI (nmcli), but perhaps that wasn't the case then.
3. It is always wise for a system administrator to know their tools, or at least how to use them. `wpa_supplicant`'s manpages could have told you what to do when you couldn't reach the Arch wiki (see the sketch below).
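For reference, the classic minimal flow for a WPA2-PSK network from a bare console looks roughly like this (interface name, SSID, and passphrase are placeholders; note the single quotes, which also keep characters like & away from the shell):

    # generate a network block from the SSID and passphrase
    wpa_passphrase 'MySSID' 'pass&word' > /etc/wpa_supplicant/wlan0.conf
    # associate in the background, then get a DHCP lease
    wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wlan0.conf
    dhcpcd wlan0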
I'd pre-configured my Arch system for the day I'd need to connect a printer to it. When I needed it, I connected it and it worked. When it didn't, and it was urgent, I asked someone else to print from their system. That's a common scenario with Windows 7 too.
When a network gave me trouble, I always debugged it and solved it.
When I needed to figure out how to do something on the cli without internet, I relied on the manpages and the `--help` outputs and my sysadmin knowledge in general.
And I'm okay with that, because it gives me the freedom to have my system my way and because I can do that.
When I'm asked by someone to pick a distro for them, if they do not want to manage their system to such low levels, I recommend Ubuntu. It's just a matter of picking the right tool for the job.
That's my experience too—I used to use Arch, but switched to Debian because I have better things to do with my time. I know quite a few people IRL who have made the same switch (or to Ubuntu).
For me, the rolling release model makes Arch not the most suitable distro for development. Lately I had to fight with the OpenSSL upgrade to 1.1.0 because some Rust libs I used no longer worked. Downgrading is not an option with pacman, so I had to resort to compiling in Docker containers, and that's not the most comfortable dev environment. I'm going to migrate to NixOS or GuixSD this weekend; their package managers are a real breakthrough in package management.
I used to install Arch by hand; then I discovered the ArchFI and ArchDI scripts. They do every basic step of the install wiki, and if you need to, you can do some things manually from another console.
With those, I go from an empty machine to a fully functional system with total control over (and feedback on) what is installed (the Arch way) in about one hour -- most of the time is spent waiting for packages to download and install.
:) Well, the nice thing about Arch is that it doesn't tell you what you can and can't install on your system... that is pretty much entirely up to the user.
To be fair, KDE4 _was_ a disaster when it was released, but in time it came to be as stable and usable as the much-loved KDE3.
The path to Plasma was also somewhat rocky (but nowhere near as bad as the KDE3-to-KDE4 transition).
I used to enjoy using KDE, but I feel they lost their way and became overly attracted to "shiny new things" like workspaces, activities, or the semantic desktop -- virtually everything being dependent on Strigi, then Akonadi, then Nepomuk, then Baloo, or ...
The real bugbear is that every time they ditch everything for a new metaphor, so much of the previously working and reliable software is ditched or obsoleted along with it.
It doesn't suit my needs to keep re-configuring my computer, software, and workflow, so I stopped using it.
I do think it's great and valuable that it's there as an option though.
Akonadi was the main reason I ditched KMail for Thunderbird six years ago. I recently went back, and it's marginally better. But it's still pretty outrageous that a service providing only one user-visible extra feature (mail checking and calendar reminders as a background service) costs 330 megabytes of RAM -- most of that for the MySQL instance it uses. No idea why they need anything more than SQLite for a mailbox with maybe 100 mails. Stopping Akonadi is a thing I just have to do before launching Minecraft on my notebook.
    $ free; akonadictl stop; sleep 15; free
                  total     used     free   shared  buff/cache  available
    Mem:        3934720  1227872  1296492   253868     1410356    2203068
    Swap:             0        0        0
                  total     used     free   shared  buff/cache  available
    Mem:        3934720   897308  1626924   249452     1410488    2538072
    Swap:             0        0        0
I'm still sticking with KMail for now, though. The UI is a bit nicer (esp. w.r.t. GPG), and mail checking as a background service is nice.
I have to admit, this is one of the things I miss most whenever I boot to Windows. Explorer lacks an awful lot of the conveniences present in Dolphin.
Dolphin can be a bit unstable and has some UI warts, but tabs/tab views, the drag-and-drop copy/move/link dialog, and tree view in the file list pane are some of my favorites. Right-click copy to/move to are also hugely useful.
Dolphin rocks! Dual panels, tabs, filtering, and F4 for the built-in shell, which stays in sync with the current folder... and then there are the additional columns in table view, where you can see image resolution (OK, Explorer has that too) or the number of lines in text files...
BTW, there's also the additional information during long file operations (like bandwidth used, etc.), and (I believe) it's the only system I know of where I can pause a file transfer operation and resume it later...
> BTW, there's also the additional information during long file operations (like bandwidth used, etc.), and (I believe) it's the only system I know of where I can pause a file transfer operation and resume it later...
I think Explorer does that too (bandwidth; don't remember about pause--I think it has that feature), but Dolphin certainly had it first. The task tray integration for background copying is also quite nice and useful if you're on a different virtual desktop (to me, anyway!).
You're absolutely right: Almost everything else is anemic compared to Dolphin!
Question for all the Arch peeps: what's the goal/purpose of Arch? As an outside observer, it seems needlessly complicated, with most of the complexity being specific to Arch. Why not just run Slack or Debian if you want a barebones full distro? Slack and the BSDs are closer to old-school UNIX. Arch is after my time; am I missing something?
Latest stable software (this is such a massive deal), it's as lean as you want it to be, and you have the AUR, where you can install pretty much any piece of software on the internet with one command. Once you get it running (which doesn't take long once you know what you're doing), it does exactly what you want it to do, is always running the latest and greatest, and in my experience lets me get what I want done faster than anything else.
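For instance, with an AUR helper like yaourt already installed (the package name is just an example):

    # one command fetches the PKGBUILD, builds, and installs from the AUR
    yaourt -S spotify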
Slackware and Debian are nowhere near as current as Arch, and have nothing comparable to the AUR.
The main downside of Arch is that you want to be updating somewhat regularly (every couple of months at most). This isn't an issue for the desktops/laptops I use regularly, and I run Arch on my home server, which gets updated around monthly, but it's not "set and (almost) forget" like Debian stable or CentOS.
EDIT: I run Arch on my work laptop and my home server (which runs my blog and some services) and it's never fallen over in several years. Occasionally updates will require me to do something, but if there's an update that requires manual intervention they post about it on archlinux.org.
For the "you have to update it every once in a while" problem, I built a small cronjob that automates the updates but allows me to intervene. Source is at [1]; it talks to me via XMPP. [2]
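The details are in the linked source; the core idea is roughly this (a sketch, assuming checkupdates from pacman-contrib and something like sendxmpp are available; the recipient address is a placeholder):

    #!/bin/sh
    # nightly job: list pending updates and ping me over XMPP
    # so I can decide whether to intervene
    updates=$(checkupdates)
    [ -n "$updates" ] && printf '%s\n' "$updates" | sendxmpp admin@example.org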
I consider myself a power user, and I've found Arch to be the only distro that doesn't get in my way. Everything is minimalist. My favorite feature is Arch's build system. It's so simple that it's actually a pleasure to write PKGBUILD files for every little thing I want installed. In the 2.5 years of my Arch installation, I've never copied files directly to /usr, because it only takes a few minutes to whip up a new package and let the package manager take care of conflicts etc. for me.
For me it's the ease of keeping up to date with the latest-and-greatest versions of packages (NodeJS, GCC, CMake, etc.). The latest Ubuntu/Debian typically gives you TWO-YEAR-OLD versions of common software, and upgrading packages on Ubuntu/Debian feels cumbersome (add a PPA, compile yourself). It even feels easier to stay up to date on Windows...
The lack of layers of configuration is subjectively simpler and clearer for me. I'd rather read 'man someprogram.conf' and use 'vi' than try to understand how some other package generates its configuration file.
Yes, it's not as stable. Sometimes stuff breaks. I'd rather fix it now than later.
Debian is not kept as up to date as Arch. There's also no AUR equivalent. The AUR had Steam for Linux packaged about 2h after the official release, for example, with people adding the missing dependencies very shortly after.
Also, not having many prescribed defaults is sometimes nice. I want KDE with this display manager / lock screen, and I can have it without uninstalling some metapackage that will make updates harder in the future (like gnome-desktop on Ubuntu).
I haven't used Slack for over a decade, so I can't really comment.
Also, until recently it was one of the few distros with a nicely integrated grsec kernel. :-(
Well, Debian stable -- but you can sync up to testing, unstable, or experimental (if you want bleeding edge) for newer packages and have a rolling release.
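A sketch of what that looks like (assuming a stock sources.list; components may vary):

    # /etc/apt/sources.list -- track "testing" instead of a release codename
    deb http://deb.debian.org/debian testing main
    # then bring the system up to it
    apt update && apt full-upgrade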
I'm running Kubuntu right now -- not that I'm a huge Ubuntu fan, but it has the latest KDE Plasma desktop with the Debian package manager, if you just want a KDE that works with minimal effort plus a kick-ass package manager, and you can upgrade without reinstalling, unlike Fedora. I know how to do stuff manually; I'm old, lol. I've used Linux as my main desktop OS for years -- more years than I like to think about, because it just reminds me how old I am now. It's nice to just be able to install a distro and be up and running in a few hours instead of a few days. You can add grsec (or anything else kernel-related) to any distro; just compile a kernel. If you're going for the retro thing, Slack and the BSDs are the way to go. I guess what I'm getting at is: what need is Arch trying to fulfill?
As you said, you can get grsec in any distro by recompiling the kernel. In Arch it was available immediately as a package; people took care of updating it and applying the new releases.
Just install it yourself; I don't get what the big deal is. If you have to edit a bunch of config files, then just do it yourself. If you want something straightforward, there are better options. If you want something closer to traditional *NIX, there are better options. If Arch is your thing, that's cool, but as an oldish (41) Linux user who's been using Linux primarily since the late 90s, I'm not sure what Arch is going for. I'm not trying to be harsh, but honestly I just don't quite get what Arch is doing. It seems to introduce an added level of complexity that's specific to Arch. If you're into that sort of thing, there are better options. If you're looking for something easier, there are better options. I just don't get it. lol
I said grsec was available prebuilt in a few distros. Debian was in that group as well.
It may not be for you. I ended up packaging new libraries every few weeks on other distros, Debian experimental included. Arch solves that problem for me. I guess Gentoo, NixOS, and others would do that as well, but this one's my choice.
You're mixing names. The community repo is different from the AUR: the first has rules, the second is a free-for-all.
The AUR Steam package was missing dependencies in a way that left specific installed apps not functioning right, but that's not something you could figure out from the binary itself. For example, Portal depended on texture compression that wasn't included by default. Without buying everything in the Steam store, you wouldn't know that.
The good thing was that it was corrected within a day. How long does a typical distro bug report take before it's even looked at?
This is also Arch, but I just yesterday reported a bug to the distro maintainers because the upgrade to Perl 5.26 broke my monitoring. A fixed package was published into the repo within 3 hours: https://bugs.archlinux.org/task/54322
I guess I got lucky though. A turnaround time that short for community projects also depends on whether the contributor in question happens to be at his desk at that point.
He's talking about the AUR not the official community repo. It was tested on the AUR and the feedback was integrated to identify the dependencies. It's since graduated from the AUR to the official community repo.
The AUR has pretty much everything, community maintained. The arch build system makes it trivial to package something no matter where it comes from. You can use it to repackage a .deb or a .rpm (it'll even download it from its official home and verify checksums), or a project from a git repo, for specific releases, or tracking a branch, or what have you.
There's all kinds of software on the AUR and it's all up to date. The official repos also move very fast, compared to other distros.
Read about a new feature in a recent version of some software you use? Just update, it's probably already landed in arch. You don't have to wait six months for a release window to install software that's already ready.
I didn't say "official community repo" but "community repo", because I don't know the specifics of Arch.
Then you call this repo, which I must not name the community repo, "community-maintained". So it is a community repo.
All while people in this thread advertise Arch because this community non-community repo is so awesome. But when things break, people aren't allowed to take that into account.
If you're trying to lure people in with the AUR, at least own its failures, not only its successes.
Nothing you've pointed at is a failure; it's simply the process. How can you call something brought up in two hours, rapidly tested, and brought to 100% shortly thereafter a failure? How can using the software you want immediately after its release ever be called a failure? Success is waiting two months for a release window? No thank you.
I simply find it remarkable that someone actually chose this failure for their advertising. There are certainly many instances of quick turnaround that don't center on failure.
Packages go to the 'testing' repository to allow checking for breakage. They usually stay there only a short time (a week or so, often less).
It's funny how you complain about Arch being "needlessly complicated" and then suggest using Debian instead. I build packages for Arch all the time (mostly for personal use). The one time I tried to build a Debian package, I gave up after spending multiple hours going through all the ceremony involved.
I'm not saying that Debian is hopelessly overengineered. All the complexity is there for a reason: Debian strives to cover 100% of what everyone needs. Arch is content with achieving 95% if that means avoiding the huge part of the complexity that comes from catching the last 5%.
For example, some Debian packages use a dialog system to set themselves up during installation. Arch avoids that complexity and instead expects the user to read the wiki on how to configure the application in question.
Plus, makepkg is awesome. If, say, I need a specific font installed, I have no trouble whipping up a short PKGBUILD that downloads, checks, and installs the files, giving me the option to manage it with pacman.
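To illustrate, a minimal sketch of such a PKGBUILD (the package name, URL, and file are all made up for the example):

    # PKGBUILD -- every value here is illustrative
    pkgname=ttf-somefont
    pkgver=1.0
    pkgrel=1
    pkgdesc="Some font, packaged so pacman can manage it"
    arch=('any')
    url="https://example.org/somefont"
    license=('custom')
    source=("https://example.org/somefont-$pkgver.ttf")
    sha256sums=('SKIP')  # put the real checksum here so makepkg verifies the download
    package() {
      install -Dm644 "somefont-$pkgver.ttf" \
        "$pkgdir/usr/share/fonts/TTF/somefont.ttf"
    }

Then `makepkg -si` builds it and installs it through pacman, so it can be removed cleanly later.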
Its simplicity seems so essential to me that I can't imagine why there are so few distros providing tools like it.
The truth is almost exactly the opposite of what you say, actually. Yes, Arch is more difficult to install and configure the first time, because it requires you to actually understand the steps it takes. But the top-quality documentation really makes that not so difficult.
Arch is actually much _easier_ to maintain afterwards (once installed and configured) than almost anything else. This is because it does not add layers of useless abstraction, very much unlike DEB- and RPM-based distros. Packages are very close to upstream, with only minimal patches when really necessary, unlike RPM/DEB distros, which heavily customize and patch most packages. Packages are also very easy to create -- you can learn to do that in minutes -- again very much unlike DEB and RPM, which are needlessly overengineered and have whole ecosystems of helper scripts aiming to tame that complexity, scripts you have to spend a lot of time learning as well.
It's only complicated the first time.
My setup has been running just fine for 5+ years.
Compare that to when I used Ubuntu: I had to upgrade every 6 months, and suddenly I'd get a headache because all my custom configuration stopped working, or a custom repository broke because its maintainer went missing.
Slackware is fine, but Patrick and a bunch of guys are only a small team, and custom packages can only do so much. Bless him, and may the Guinness always flow freely.
Debian is problematic because its libraries are usually old.
I love Arch, personally. Have been using it for quite a few years now as my desktop and love the rolling release paradigm.
That said, when setting up machines for others I pick the current LTS release of Kubuntu and install unattended-upgrades to ensure security updates get maintained. It provides a much better ongoing maintenance story for machines that are less "pets" and more "tools".
§
Due to a disk crash, I recently had to reinstall Arch for the first time in quite a few years. IMO the install process has become more opaque. I seem to remember the wiki used to have a newbie install guide that was really helpful and explained a lot of things as you went, or at least covered the common practices and the pros and cons of different options. That seems to be gone; anything that might be an opinion has apparently been excised from the install instructions, and you're left to stumble around different pages a lot more, which is a bit of a shame.
The reason I like it is because I can build my OS from the ground up. And Arch makes this super easy with pre-built packages and an awesome package manager.
With most other distros, you download a pre-built OS and then remove all the stuff you don't want. And add the stuff you do want.
With Arch you just add all the things you want. You know exactly what is installed, and where/how it is configured because you did it yourself.
It's also much easier to switch between DE and/or WM with Arch than most distros.
> Why not just run Slack or Debian if you want a barebones full distro? Slack and BSD's are closer to old school UNIX. Arch is after my time, am I missing something?
I can't speak for other Arch users but to me, selecting it as my primary OS has been a natural evolution. I first cut my teeth on OpenBSD. Probably not the best choice, but it was the late 90s when I was first exposed to Unix-like OSes, and a friend of mine was something of an OpenBSD evangelist at the time. I then migrated to FreeBSD, and later (around 2005?) to Gentoo. I tried numerous other distros around that period and never really found one I liked (one that felt, well, BSD-ish) more than Gentoo. I admit I only dabbled in Slackware, but Gentoo made more sense to my BSD-addled mind--perhaps because portage was at least passingly analogous to the ports collection, which I still very much admire.
Of course, having to rebuild literally everything on the system or use unofficial binary overlays just for binary packages grew increasingly more frustrating, particularly for desktop use, and I lamented about this on Slashdot many years ago. A kind soul suggested Arch and persuaded me to try it, suggesting that it was close enough to Gentoo that I may enjoy it. I did, and I've been using it ever since--probably for about 5-6 years. I suspect its relative similarity to Gentoo has probably changed with systemd--for better and worse--but one of the things that's simultaneously both very exciting and frustrating is Arch's constant evolution. I can honestly say that's definitely for the better.
It is still possible to build a minimal system in Arch fairly easily, so I'm not quite sure I follow why this may not qualify as "barebones." I'll grant that the absolute minimal base image "out of the box" will still weigh in around 330-360 megs, but I have had limited success reducing that footprint for abusing containers via custom builds (I seem to recall it was necessary to remove the Perl dependency in OpenSSL to shave off a fairly sizable chunk--that may not be true anymore).
The other side of the coin is that it's similar enough to Gentoo (with PKGBUILDs) that you can very quickly build your own custom packages, but it's also a binary rolling release system which gives you access to fairly new things relatively quickly. I'm aware there's also Aptosid, but I'm not a huge fan of Debian-based distros for my own personal use (preference, mostly).
If I had to bet, I'd say a good chunk of Arch users post-2012 are in the same boat as me: they're recovering Gentoo users. Therein may lie a better answer.
> As an outside observer it seems needlessly complicated and most of the complexity being specific to Arch.
Ostensibly, that may seem true until you scratch the surface. One of the things that's important to understand about Arch is that its maintainers ship upstream packages with minimal changes. Everything you receive is more or less packaged as the developer intended. This can lead to some inconveniences if you want things mostly preconfigured (or configured through an ncurses GUI), but I'd submit that the Arch wiki is an absolutely fantastic answer to this. This design choice follows, at least spiritually, the BSDs and Gentoo, and is also one of the reasons why Debian/Ubuntu's default decision to start services (MySQL, etc.) immediately after installation, without providing the option to configure them first, befuddles me.
The other side of the coin is that at least some of that complexity isn't strictly due to Arch-specific choices as much as it is these defaults I mentioned. The init system is entirely systemd with no overlays and no compatibility layers outside what systemd itself provides. Once you learn pacman and how to use PKGBUILDs there really isn't anything that deviates substantially from any other systemd-based distro. I'll agree there are some sharp corners here and there, sometimes due to choices made during upgrades/migrations (OpenSSL's recent update comes to mind), but I suspect at least some of your observations may simply be due to what you're familiar with versus what Arch provides.
Out of curiosity, what is it that you feel is "needlessly complicated?"
Thanks for the detailed response. I just don't know myself. I'm sure looking at the issue through the lens of my old-ass perceptions (yeah, I'm old) is a factor. I read the guide in the article and other stuff online about Arch, and it seems to involve a significant amount of manual work. I came from an era when there was no other option; you just had to do it. It seems to me that much of setting up Arch involves reading guides and copypasta -- things that other distros just do. If you're into the whole old-school thing, to me, there are better options. If you want to put in work and do some cool stuff to learn, there are better options. I'm not hating on Arch; I just don't exactly get it. If Arch is what got you into the whole Linux/open-source thing, then more power to you. Don't take it as an attack. :)
There's nothing wrong with preferences. It's why I don't care much for Debian-based distributions (Debian/Ubuntu make some weird choices, IMO, some for historical reasons and some because it's just the way they do things). Although I will counter that Arch is substantially less "manual work" than certain other popular distros, like Gentoo! Likewise, much of the Arch wiki centers on third party software--and sometimes more advanced installation options like mdadm, etc. (Although I will concede that the latter is much easier to configure under Ubuntu during installation; the difference is that under more manual distros, you're essentially forced to learn the tool at an earlier stage.) To be honest, I don't think it's a matter of "better" options for learning/manual installations as much as there are different options (Gentoo is more manual; LFS would be "better" in terms of understanding ground-up distributions).
Please don't take me as being contrarian! I just happen to really like Arch. I'm also fond of Gentoo and FreeBSD, and begrudgingly use Ubuntu for situations it's better suited for, because sometimes you don't want to upgrade to the newest version of a package (Arch's recent OpenSSL bump from 1.0.2 to 1.1.x can cause some issues).
Anyway, I appreciate your opinions! Everyone's different and has different preferences for different reasons (past experiences, patience levels, familiarity, etc)--and that's a good thing! Arch certainly isn't for everyone, but I think it fills the rolling release without long build cycles (with just the defaults, please) niche quite well. However, I still recommend Ubuntu for newcomers or for people who simply don't want the headache!
Almost agree 100%, but why would you encrypt the whole disk? There's no need for that: the OS is 100% free software, available to anyone. Encrypt your home directory. Ubuntu has this figured out quite nicely, IMHO.
Encrypting the full disk is easy, fast, and you don't have to think about whether e.g. some unencrypted log file is going to have your password in it because you accidentally typed your password into the username box. I use full disk encryption on Debian and macOS and it was easy to set up on both systems.
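For anyone curious, the core of a dm-crypt/LUKS setup really is only a few commands -- a sketch with placeholder device names (the boot loader and initramfs still need configuring on top of this):

    # format the root partition as a LUKS container, open it,
    # create a filesystem on the mapped device, and mount it
    cryptsetup luksFormat /dev/sdX2
    cryptsetup open /dev/sdX2 cryptroot
    mkfs.ext4 /dev/mapper/cryptroot
    mount /dev/mapper/cryptroot /mnt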
Encrypting the home directory ensures the privacy of your data should someone get their hands on your machine. Under the same scenario, full disk encryption would also give you the guarantee that the system hasn't been tampered with.
That's profoundly incorrect: full disk encryption provides no guarantees that your system has not been tampered with. I don't see how that would even be possible.
If the machine is turned off and only the home directory is encrypted, I am at liberty to patch the kernel or binaries, update your package manager or dns settings, or anything in between really. I can do none of that with FDE (assuming I don't already know the key of course).
Once "they" have your machine you have to assume it is compromised. If they have the ability to install kernel patches or custom binaries, they most certainly have the ability to install monitoring hardware and/or modify your bios. That is, no matter what you encrypt your machine is compromised and your only recourse is to recover the encrypted data and move on to a new system. Here "they" is someone who would take your machine, modify it, and get it back to you maybe without your knowing.
If you are just concerned with identity thieves and basic asset protection then as long as that stuff is encrypted you are fine, whether that is whole system, home directory, ~/Private directory, or just those specific files.
It can't be turtles all the way down. If you encrypt /, then you're still not encrypting the kernel and I win. If you encrypt the kernel, then you're still not encrypting GRUB and I win.
There's no way to prevent anyone from tampering with your system, or even to make any tampering evident. The best you can do is a cost-benefit analysis and risk analysis of the different options.