SSH should not become a different user; it should call something like `/bin/login` which uses PAM for authentication and is capable of starting user sessions.
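For reference, a minimal sketch of the division of labour being described: login(1) acts as a thin PAM client, and the PAM stack handles both the authentication and the session halves. Something like /etc/pam.d/login (the "system-auth" include is a Red Hat convention; Debian-family systems use common-auth and friends):

    auth     include  system-auth   # verify the user (password, token, ...)
    account  include  system-auth   # is this account allowed in at all?
    session  include  system-auth   # set up the user session (env, limits, ...)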
Because I go long periods of time without internet access, and I don't want to have to "sudo apt install" a fucking thing, ever. Especially not a tiny utility that is all of 172k in size, that I might need for something. Understand?
I want EVERYTHING that I might use installed AT ALL TIMES, FROM DAY ONE, so that I can IMMEDIATELY USE IT when required.
This is only one of many reasons why I abandoned the giant dumpster fire that is mainstream Linux. I do not agree with their idiotic philosophy, on practically every level.
You've now discovered that there are sections of God's Green Earth that you never knew existed! One of many benefits of stepping outside the Matrix for a moment.
I would never ever install your distro for this reason alone.
Someone has already pointed out that old/deprecated/obsolete software like a telnet client represents tech debt.
Removing the telnet client was, in part, a recognition that its complementary server was deprecated and unsafe. If everyone was transitioned to ssh and nc, [and custom MUD clients], why keep telnet around?
Any software like this represents tech debt and a support burden for the upstreams and distros which carry them. You have unnecessarily assumed a burden in this way.
Furthermore, ask the maintainers of OpenBSD or any hardened OS about attack surfaces. The more software you cram into the default distribution, and the more bundled features an OS or system has, the more you multiply your potential vulnerabilities, your zero-days, and your future CVE/patch updates.
Especially in the face of growing supply-chain attacks and LLM-automated vulnerability disclosure. Your focus should be on limiting attack surface in every regard.
It is good practice for everyone to uninstall unnecessary apps and software, whether you use Android, iOS, macOS, Linux, BeOS, Plan 9, or Inferno. Do not install and maintain software that you do not use or need. It will come back to bite you.
> Furthermore, ask the maintainers of OpenBSD or any hardened OS about attack surfaces.
OpenBSD still ships with telnet.
Their developers don't entertain nonsense virtue signaling about things that are "unsafe," and they know their users are not idiots who need to be coddled.
Hammers and matches are unsafe if you use them wrong.
> I would never ever install your distro for this reason alone.
And you are? I'm completely mystified as to why you'd think I would care. I built this distro for me and my people, not you. That's the whole point. We're getting off this ride.
> Someone has already pointed out that old/deprecated/obsolete software like a telnet client represents tech debt.
Not a subscriber to this religion. There is nothing about new software that inherently makes it safe, and nothing about old software that inherently makes it vulnerable.
New flaws are introduced all the time, and old bugs do get found and fixed.
I can patch old code. I can't guarantee that new code doesn't contain bugs.
The ONLY way to ensure code is flawless is through validation--mathematical proof. When you have devised a proof framework that I can use across my distro, get back to me. At this time you're nowhere near that level, and are therefore unqualified to lecture anyone about security.
> Removing the telnet client was, in part, a recognition that its complementary server was deprecated and unsafe.
Unsafe? On my personal LAN? I think not.
You don't get to just 'deprecate' things that I might need, or want to use for perfectly valid reasons.
That's the entire point of my distro: computing the way I WANT IT, not the way Ubuntu wants it.
> If everyone was transitioned to ssh and nc, [and custom MUD clients], why keep telnet around?
Because it's 172 kilobytes. Contrast with the giant bloated carcass of everything else they shove in there that's oh-so-needed by the herd.
> Any software like this represents tech debt and a support burden for the upstreams and distros which carry them. You have unnecessarily assumed a burden in this way.
I'm a distro maintainer. Hello? Telnet represents ZERO maintenance burden for me. There are no operators standing by on hotlines to "support" any of this. It's a 172 kilobyte utility.
> Furthermore, ask the maintainers of OpenBSD or any hardened OS about attack surfaces. The more software you cram into the default distribution, and the more bundled features an OS or system has, the more you multiply your potential vulnerabilities, your zero-days, and your future CVE/patch updates.
Nobody can magically teleport themselves inside my computer and compromise my telnet client. Nobody is injecting packets into my LAN.
> Especially in the face of growing supply-chain attacks and LLM-automated vulnerability disclosure. Your focus should be on limiting attack surface in every regard.
You're concerned about supply chain attacks, so your mitigation is...doubling down on getting the Latest Updates to everything? Because new code is inherently good.
Telnet has to go--way too risky to keep that around--but KDE/Gnome/systemd/dbus/etc stays?
'traceroute' is useless and dangerous, but let's keep the giant Qt framework with its vendored copy of Chromium? (That's Qt 5 and Qt 6, each with a vendored Chromium, mind you.)
Chromium, by the way, itself represents tens of gigabytes of code/data now inside its repository, with 'third party' directories vendored three or even four levels deep. But a 72k traceroute utility is likely to be packed with security flaws and should be avoided.
> It is good practice for everyone to uninstall unnecessary apps and software, whether you use Android, iOS, macOS, Linux, BeOS, Plan 9, or Inferno. Do not install and maintain software that you do not use or need. It will come back to bite you.
This is a completely wrong and misleading theory of security you are proposing here.
I devised this new distro exactly because I was tired of my computing experience being shaped and controlled by clueless kids with intellectually bankrupt arguments and/or wolves in sheep's clothing.
You talk about me, my, mine, my network, my computer. But you're promoting a "distro". That means you're distributing software. It's not yours anymore.
Attackers on a network will use techniques to "pivot". Once a "foothold" is established then they scan for other places to attack. They will indeed get inside "your" computer, or router, and then compromise your telnetd.
It comes back to the liberty of swinging your arms vs. the proximity to my nose. If your distro is connected to a network, then you're responsible and accountable for security issues that result. There are thousands of distro kiddies sending out their favorite flavor of Linux, but how many audited it like Theo de Raadt?
You don't seem to understand the CVE under discussion. It doesn't even affect telnet(1). Practically nobody runs telnetd(8) anymore since the introduction of encryption, ssh, and the like. MUD players use MUD clients. Network admins use nc(1). The reason "telnet" was deprecated is that it's just not really useful anymore without its complementary service. telnet(1) isn't inherently dangerous; it's just superfluous, and distros pretty much evaluated that it wasn't worth hanging on to.
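(To make the nc(1) point concrete - hostnames here are placeholders:)

    $ nc -v mud.example.net 4000                            # interactive TCP session, telnet-style
    $ printf 'GET / HTTP/1.0\r\n\r\n' | nc example.com 80   # poke a plaintext service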
As for "traceroute", I'm not sure it's "useless or dangerous", but it can be misleading and definitely superfluous. It is widely misinterpreted by novices trying to prove something about their WAN connectivity. It misrepresents network topology and doesn't work real good with modern equipment or protocols. It was a judicious decision to bundle it with network debugging tools, because not everyone needs to debug networks. Especially the ones who believe that they can.
I would say that any network debugging tool available to you is also useful to attackers with a foothold. A "living off the land" attack will leverage your telnet client, run traceroutes on your network, and use all the software cruft that you didn't uninstall! I am pretty sure there are distros that simply don't ship development environments, C compilers, or various interpreters anymore, and it is for this reason: they are not inherently insecure or vulnerable, but "living off the land" will weaponize them every time.
However, I must concede that your temperament and tone are well-suited to being a distro administrator. You remind me of Linus Torvalds vs. Andrew Tanenbaum, or Theo de Raadt vs. FreeBSD. Perhaps Scott Adams vs. the world. Carry on, good sir.
> I have a 21 yo car and a 12 yo car and will eventually have to get something 'modern' (worse) that forces spyware/subscriptions on me just to get from point A to point B
I daily a 30 year old car. There's a sweet spot of reliability, safety, and comfort (probably the early-to-mid 2000s) that, in theory, you should never have to buy a vehicle outside of, newer or older. There will always be clean old cars in good shape you can buy; you don't need a new vehicle.
Unless you can't buy gasoline anymore. But that's still quite a long ways away imo.
Luckily I don't drive much. The one car is just falling apart and is ridiculously expensive to maintain (don't buy a used Mercedes, ever). The other is just not fun to be in, but at least it gets reasonable gas mileage. I really want an electric vehicle, especially the way things are going right now, but buying anything built in the last 10 years is just depressing at best, and average EV lifespan is much lower than ICE lifespan due to battery life. Ah well. Maybe this will push me to an electric scooter for all the in-town travel, and I will only need a 'real car' once a month or so.
> He finally found a safe spot and successfully pitted the car to a stop.
No such thing as a safe spot to PIT someone, ever, let alone while they're asleep at the wheel. This is a great example of why people hate all cops; anyone with two brain cells to rub together would get in front of the car and gradually slow to a stop.
I agree with the recommendation that you yourself replied to: move in front of the vehicle and gradually slow to a stop, with lights and sirens optional but recommended.
I am thrown by the question of "What else should have been done?", though, after the grandparent made an explicit recommendation.
Damn, is this the first time ever the east coast is doing better than Colorado? We’ve had record snowfalls all over Quebec, I spent all day last Friday skiing in a foot of fresh powder. Unheard of on the ice coast*.
*not literally. But still, crazy amount of snow this year so far
I have a decent amount of second hand experience with used cars, through my brother who is a mechanic and spent a number of years working at a used car dealership. Hyundai/Kia is the only company he ever had to do engine replacements for at said dealership, and he did dozens. All under 200k km (frequently right after the extended warranty on the engines ran out, and occasionally on the second or even third engine for the vehicle). These are cars with good service history and otherwise in excellent condition. Sometimes cars they got on trade, sometimes purchased from auctions, sometimes customer cars (after they were sold). No rhyme or reason, just a genuinely bad design that was “fixed” but never fixed.
The only other universally-bad major component is JATCO CVT transmissions. I think his record was an Infiniti QX60 that had 95k km and a blown transmission. Most small vehicle/sedan CVTs he did were in the 160-190k km range, with some lasting as long as 250k km. And of course they were not repairable, since even if parts were available, the entire thing grenades leaving basically nothing left to rebuild.
Point being, “one engine issue due to a manufacturing flaw” is drastically underselling the issue, at best. It is an incorrectly-engineered engine that fails prematurely when built within specification, except when the tolerance stackup lines up in your favour and you perform much more frequent maintenance than prescribed. Oh and the affected engines were manufactured over about 15 years (and there’s signs that their current GDI 4-cylinders are still affected).
The Theta II is installed in millions of vehicles. If the failure rate were even close to 20% within 30k miles, dealers would be overwhelmed.
It may not be as well designed as it should be, and that may be responsible for some failures when combined with poor maintenance. I think the manufacturing issues bumped it to a percentage that is noticeable.
However, it can't be a significant amount, because that would collapse the dealer network.
I mean, if you’re in the top 3 percent of anything, yes that’s pretty good, but not unbelievably so, especially in the field of chess. If for instance you randomly put together a classroom full of chess players, there’s decent odds one of them is better than top 3%. Two classrooms and it’s almost a certainty.
Put another way, looking at chess.com users, there are ~6 million people who would count as the top 3 percent. Difficult to achieve, yes, but if 6 million people can achieve it, it’s not really a “humble brag,” it’s just a statement.
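(If you want the arithmetic: assuming independent draws and an assumed class size of 25, one classroom is roughly a coin flip and two are nearly four-in-five:)

    # Chance a random group of n players contains someone above the 97th percentile
    for n in (25, 50):                   # one classroom, two classrooms
        print(n, round(1 - 0.97**n, 2))
    # -> 25 0.53   50 0.78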
It made me smile to hear “I’m only 97th percentile” isn’t a humblebrag. You may be employing an old saw of mine: you can make people* react however you want by leaning on either percentages or whole numbers when you shouldn’t.
* who don’t have strong numeracy and time to think
> and wireguard is about as easy a personal VPN as there is.
I would argue OpenVPN is easier. I currently run both (there are some networks I can’t use UDP on, and I haven’t bothered figuring out how to get WireGuard to work over TCP), and the OpenVPN initial configuration was easier, as is adding clients (DHCP, pre-shared cert + username/password).
This isn’t to say WireGuard is hard. But imo OpenVPN is still easier - and it works everywhere out of the box. (The exception is networks that only let you talk on 80 and 443, but you can solve that by hosting OpenVPN on 443, in my experience; see the sketch below.)
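(The relevant server-side knobs - these two directives are standard OpenVPN; everything else in the config is elided:)

    # server.conf fragment - listen where restrictive firewalls still allow traffic
    port 443
    proto tcp
    # sharing 443 with a real web server additionally needs the port-share directive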
This is all based on my experience with opnsense as the vpn host (+router/firewall/DNS/DHCP). Maybe it would be a different story if I was trying to run the VPN server on a machine behind my router, but I have no reason to do so - I get at least 500Mbps symmetrical through OpenVPN, and that’s just the fastest network I’ve tested a client on. And even if that is the limit, that’s good enough for me, I don’t need faster throughput on my VPN since I’m almost always going to be latency limited.
Fairly frequently, 6kVA UPSs come up for sale locally to me, for dirt cheap (<$400). Yes, they're used, and yes, they'll need ~$500 worth of batteries immediately, but they will run a "normal" homelab for multiple hours. Mine will keep my 2.5kW rack running for at least 15 minutes - if your load is more like 250W (much more "normal" imo) that'll translate to around 2 hours of runtime.
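(Back-of-the-envelope on that scaling - inverter overhead is why the realistic figure lands nearer 2 hours than the naive 2.5:)

    usable_wh = 2500 * (15 / 60)   # 2.5 kW held for 15 min => ~625 Wh usable
    print(usable_wh / 250)         # => 2.5 h naive runtime at a 250 W load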
Is it perfect? No, but it's more than enough to cover most brief outages, and also more than enough to let you shut down everything you're running gracefully after you've used it for a couple of hours.
Major caveat, you'll need a 240V supply, and these guys are 6U, so not exactly tiny. If you're willing to spend a bit more money though, a smaller UPS with external battery packs is the easy plug-and-play option.
> How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.
At the end of the day, it's very hard to argue you need perfect uptime in an extended outage (and I say this as someone with a 10kW generator and said 6kVA UPS). I need power to run my sump pumps, but that's about it - if power's been out for 12-18 hours, you better believe I'm shutting down the rack, because it's costing me a crap ton of money to keep running on fossil fuels. And in the two instances of extended power outages I've dealt with, I haven't missed it - believe it or not, there's usually more important things to worry about than your Nextcloud uptime when your power's been out for 48 hours. Like "huh, that ice-covered tree limb is really starting to get close to my roof."
This is a great example of how the homelab bottomless pit becomes normalized.
Rewiring the house for a 240V supply and spending $400 + $500 to refurbish a second-hand UPS to keep the 2500W rack running for 15 minutes?
And then there's the electricity costs of running a 2.5kW load, and then cooling costs associated with getting that much heat out of the house constantly. That's like a space heater and a half running constantly.
Late reply I know, but I wanted to clear up that I don’t want to normalize a 2.5kW homelab. Usually when talking to people about it I refer to it as “insane.” But having an absolutely insane amount of compute and RAM is fun (and I personally find it genuinely useful for learning, in particular in terms of engineering for massive concurrency), and I can afford the hydro, so whatever. To match the raw compute and RAM with current-gen hardware, you only need maybe 500W - you’ll just be spending a shitload of money up front, instead of over time on hydro. (To match my current lab’s utilized performance, I’d need at least 2 servers, one with a ~Threadripper 7955WX and 256GB of DDR5, and another with an Epyc 9475F and 1TB of DDR5. That would put me somewhere in the neighborhood of $35k? Ish?) It costs me about $115/month to run the rack right now (cheaper than my hot tub), and cooling is free in the winter (6~7 months of the year), so the break-even is loooooong term. And realistically, $100ish a month isn’t crazy, considering I self-host basically everything - the only services I pay for are my VPS to run my mail server, and AWS S3 Glacier for backup-of-last-resort.
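(The break-even arithmetic, taking those numbers at face value and ignoring the hydro the replacement hardware would itself draw:)

    print(35_000 / 115 / 12)   # ~25 years to break even at $115/month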
Again, not trying to normalize 2500W, most people don’t need that (and I don’t really either), but I do make good use of it.
As for “rewiring the house for 240V”, every house* in Canada and the US is delivered “split-phase” 240V (i.e. 240V with a centre-tapped neutral, providing 120V from either leg to neutral, or 240V leg to leg), and many appliances are 240V (dryers, water heaters, stoves/ranges/ovens, air conditioners). If you have a space free in your breaker panel, adding a 240V 30A circuit (sizing arithmetic sketched below) should cost less than $1k if you pay an electrician, and can be DIY’d for like $150 max, unless you have an ancient panel that requires rare/specialty breakers or the run is very long. It’s far from the most expensive part of a homelab unless you’re running literally just a Raspberry Pi or something.
*barring an incredibly small exceptional percentage
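(Why a 30A circuit is plenty for a 2.5kW rack, using the usual 80% continuous-load rule of thumb from the NEC/CEC; wattages are from the posts above:)

    circuit_w = 240 * 30         # 7200 W nominal circuit capacity
    usable_w  = circuit_w * 0.8  # 5760 W for continuous loads (80% rule)
    print(usable_w >= 2500)      # True - lots of headroom for the 2.5 kW rack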
I agree with you. My use case doesn't call for perfect uptime. Sounds like yours doesn't either (though you've got a pretty deep pit yourself, if 240V and a generator weren't part of the sump plans and the rack just got to ride along (that's how it worked for me)).
But that doesn't mean it's for us to say that someone else's use case is wrong. Some people self-host a Nextcloud instance and offer access to it to friends and family. What if someone else is hosting something important on there and my power is out? My concerns are elsewhere, but theirs might not be.
My point was simply that different people have different use cases and different needs, and it definitely can become a bottomless pit if you let it.
For me: IPMI, PiKVM, TinyPilot - any sort of remote management interface that can power a device on/off and comes back up automatically when power is available, so you can reasonably always access it. Having THAT on the UPS means you can power down the compute remotely, and also power it back up remotely. It means you never have to send someone to reboot your rack while you're out of town, and you don't shred your UPS battery in minutes by having the server auto-boot when power is available. Eliminates reliance on other people while you're not home :tada:
But again, not quite a bottomless pit, but there are constant layers of complexity if you want to get it right.
> though you've got a pretty deep pit yourself, if 240V and a generator weren't part of the sump plans and the rack just got to ride along (that's how it worked for me)
Generator was a requirement for the sump pump. My house was basically built on a swamp, so an hour in spring without it means water in the basement. Now admittedly, I spent an extra couple hundred bucks to get a 240V generator with higher capacity than strictly necessary, but it was also roughly the minimum amount of money to spend to get one that can run on gasoline or propane, which was a requirement for me. 240V to the rack cost me $45, most of that cost being the breaker (rack is right next to the panel).
> What if someone else is hosting something important on there and my power is out? My concerns are elsewhere, but theirs might not be.
I host roughly a dozen services that have around 25 users at the moment, but I charge $0 for them. I make it very clear: I have a petabyte of storage and oodles of compute, feel free to use your slice, and I’ll do my best to keep everything up and available - for my own sake (and I’ve maintained over 3 nines for 8 years!). But you as a user get no guarantee of uptime or availability, ever, and while I try very hard to back up important data (onsite, offsite split to multiple locations, and AWS S3 Glacier), if I lose your data, sucks to suck. So far most people are pretty happy with this arrangement.
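(For scale, “three nines” is a roomier budget than it sounds:)

    print(365.25 * 24 * 0.001)   # ~8.77 h of downtime per year still clears 99.9%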
I couldn’t possibly fathom worrying about other people’s access to my homelab during a power outage. If I wanted to care, I’d charge for access, and I’d have a standby generator, multiple WANs, a more resilient remote KVM setup, etc. But then I’d be running a business - just a really shitty one that takes tons of my time and makes me little money. And is very illegal (for some of the services I make available, at least), instead of only slightly illegal.