I can imagine that using eBPF would be faster, but I never really thought of SELinux as slow myself. I guess it's because of all the files that need to be opened, and the cost of updating policy.
They probably mean that SELinux is slow to use in hyperscale environments; it was designed for traditional servers that don't change often.
It's interesting to see my old pal SELinux be replaced.
My philosophy regarding AI is that you should never have it do something you couldn't do yourself.
Of course people break this rule, or the concept of vibe coding wouldn't exist. But some of us actually get a lot of value from AI without succumbing to it. It just doesn't make sense to me to trust a machine's hallucinations for something like programming code. It fabricates things with such confidence that I can't even imagine how it would go if I didn't already know the topic I had it work on.
I'm working on a serious embedded app written in C, and Opus has been invaluable to me. I don't consider myself a C developer, but by carefully reviewing the changes and making lots of my own contributions, I'm finding that I've progressed from junior to intermediate C comprehension. A lot of the idioms are still fuzzy, but I no longer find it intimidating. That's wonderful, because learning C has been something I'd put off for 40 years and microcontrollers were the thing that forced my hand.
I think that there's a real rift between people who use LLMs to rough out large swathes of functionality vs people who took the "vibe coding" brain fart way, way too literally. I'm kind of horrified that there are people out there who attempt to one-shot multiple copies of the same app in different instances and then pick the best one without ever looking at the code because "vibe coding". That was always supposed to be a silly stupid thing you try once, like drinking Tide pods or whatever the kids do for fun... not something people should be debating a year later.
But I have written C in the past, almost 20 years ago, and everything seemed to work fine... until the memory leaks.
Of course, today I would just ask the AI why my program is leaking memory. I think you have a point: AI would be sort of like having a mentor help you find bad practices in your C code.
You've inspired me to maybe try my hand at Rust, something I've been wanting to do since I heard of it.
Same here. I can read and understand most of it, but not enough to debug it. And outsourcing that task to Claude is like taking a long winding path through thick, dark woods.
Perl was my first favourite programming language; I used it for most things from 1999 until switching to Python in 2012.
I still find the occasional Perl script in my current job, usually while going through someone's legacy infrastructure, and I always have the same reaction: "phew, I'm glad I switched to Python".
That reaction has nothing to do with the culture, it's 100% technical.
Just a few of the main points: I don't know why Perl coders were so averse to comments; it's almost like some of us took a perverse pleasure in producing the most illegible code possible.
It's like a stream of someone's consciousness.
I used to take pride in being fluent in PCRE, as well as some other dialects, and looking through an old Perl script you can easily see why: it's used on every tenth line. And it always strikes me with a sense of relief when I realize all those instances of regex are solved in a more OOP/Pythonic way today. Regex is something I reserve for edge cases.
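To make that concrete, here's a rough sketch of the kind of thing I mean (the filename and pattern are made up purely for illustration): the same little extraction done once with a regex, the way my old Perl-trained reflexes would, and once with pathlib and plain string methods (Python 3.9+ for removesuffix).

    import re
    from pathlib import Path

    # Made-up example: pull the version out of a name like "backup-2.14.3.tar.gz".
    name = "backup-2.14.3.tar.gz"

    # The old reflex: reach for a regex.
    match = re.search(r"-(\d+\.\d+\.\d+)\.tar\.gz$", name)
    version_regex = match.group(1) if match else None

    # The more Pythonic route: let pathlib and str methods do the slicing.
    stem = Path(name).name.removesuffix(".tar.gz")   # "backup-2.14.3"
    version_plain = stem.rsplit("-", 1)[-1]          # "2.14.3"

    assert version_regex == version_plain == "2.14.3"

Neither is hard, but the second one I can still read five years later without re-parsing the pattern in my head.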
We have similar backgrounds, and I totally agree with your k8s sentiment.
But I wonder what this solves?
Because I stopped abusing k8s and started using more container hosts with quadlets instead, using Ansible or Terraform depending on what the situation calls for.
It works just fine imho. The CI/CD pipeline triggers a podman auto-update command, and just like that all containers are running the latest version.
Great setup! Where Uncloud helps is when you need containers across multiple machines to talk to each other.
Your setup sounds like single-node or nodes that don't need to discover each other. If you ever need multi-node with service-to-service communication, that's where stitching together Ansible + Terraform + quadlets + some networking layer starts to get tedious. Uncloud tries to make that part simple out of the box.
You also get a reverse proxy (Caddy) that automatically reconfigures itself depending on which containers are running on which machines. You just deploy containers and it auto-discovers them. If a container crashes, the configuration is automatically updated to remove the faulty container from the list of upstreams.
Plus a single CLI you run locally or in CI to manage everything, distribute images, and stream logs. A lot of convenience that I'm putting together to make the user experience more enjoyable.
But if you don't need that, keep doing what works.
Technically I could already let my web proxy discover my services today, but I refuse to have Traefik (in my case) running as the same user as my services. I prefer to only let them talk over TCP/IP and configure them dynamically with Ansible instead.
It always amazed me that people used that feature in Traefik or Caddy, because it essentially requires your web proxy to have container access to all your other services. It seems a bit intimate to me, but maybe I'm old school.
I'm helping a company get out of legacy hell right now. And instead of saying we need microservices, let's start with just a service oriented architecture. That would be a huge step forward.
Most companies should be perfectly fine with a service oriented architecture. When you need microservices, you have made it: that's a sign of a very high level of activity from your users, a sign that your product has been successful.
Don't celebrate before you have cause to do so. Keep it simple, stupid.
> And instead of saying we need microservices, let's start with just a service oriented architecture.
I think the main reason microservices were called “microservices” and not “service-oriented architecture” is that they were an attempt to revive the original SOA concept at a time when “service-oriented architecture” as a name was still tainted by a perceived association with XML and the WS-* series of standards (and, ironically, often with systems that supported some subset of those standards for interaction despite not really applying the concepts of the architectural style).
Service oriented architecture seems like a pretty good idea.
I've seen a few regrettable things at one job where they'd ended up shipping a microservice-y design but without much thought about service interfaces. One small example: team A owns a service that runs as an overnight job making customer-specific recommendations that get written to a database, and then team B owns a service that surfaces these recommendations as a customer-facing app feature and reads directly from that database. It probably ended up that way because team A had the data scientists, team B had the app backend engineers for that feature, they had to ship something, and no architect or senior engineer put their foot down about interfaces.
That'd be a pretty reasonable design if team A and team B were the same team, because then they could regard the database as internal, with no way to access it except by going through a service with a well defined interface. Failing that, without a well defined interface to decouple implementation changes from consumers, it's hard to evolve the schema of the data model in the DB, especially when the consuming team B has its own list of quarterly priorities.
Microservices and their alternatives aren't really properties of the technical system in isolation; they also depend on the org chart and which teams own what parts of the overall system.
SOA: pretty good, microservices: probably not a great idea, microservices without SOA: avoid.
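To sketch what a well defined interface buys you in that team A / team B situation (all names, fields, and data here are hypothetical, just for illustration): team B codes against a small accessor owned by team A instead of against team A's table, so team A can reshape the storage without breaking team B.

    from dataclasses import dataclass

    # Hypothetical contract owned by team A; team B only ever sees this shape.
    @dataclass
    class Recommendation:
        customer_id: str
        product_id: str
        score: float

    # Team A's implementation detail: a dict standing in for the nightly job's
    # database table. Tomorrow it could be a different schema or a different store.
    _RECS_TABLE = {
        "cust-42": [("prod-7", 0.91), ("prod-3", 0.66)],
    }

    def get_recommendations(customer_id):
        """The interface team B calls; the storage behind it can change freely."""
        rows = _RECS_TABLE.get(customer_id, [])
        return [Recommendation(customer_id, pid, score) for pid, score in rows]

    # Team B's consuming code never touches the table directly.
    for rec in get_recommendations("cust-42"):
        print(f"recommend {rec.product_id} ({rec.score:.2f}) to {rec.customer_id}")

The point isn't the code itself; it's that the schema behind get_recommendations is now team A's private business, whether the two teams share a process, a repo, or nothing at all.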
For anyone unfamiliar with SOA, there's a great sub-rant in Steve Yegge's 2011 Google platforms rant [1][2] focusing on Amazon's switch to service oriented architecture.
That's the right approach. This is what the article suggests:
>> For most systems, well-structured modular monoliths (for most common applications, including startups) or SOA (enterprises) deliver comparable scalability and resilience as microservices, without the distributed complexity tax. Alternatively, you may also consider well-sized services (macroservices, or what Gartner proposed as miniservices) instead of tons of microservices.
I'm curious, and a specific list of problems and pain points (if, and it's a big if, everyone there agrees on what they are) can help more clearly guide the decisions about what the next architecture should look like: SOA, monolithic, and so on.
Say more? That sounds like an old codebase and poor programming practices. What about the codebase and your struggles with it suggests that a rearchitecture is in order? What about it suggests that microservices might be the right solution?
I apologize if that sounds critical; it's not meant to be. Microservices/SOA are often the best available solution given human and technical constraints. I'm not skeptical, just curious.
I've been using Gemini CLI for months now, mainly because we have a free subscription for it through work.
Tip 1: it consistently ignores my GEMINI.md file, both global and local, even though it always says "1 GEMINI.md file is being used", probably because the file exists in the right path.
Tip 12: I had no idea you could do this; seems like a great tip to me.
Tip 16 was great, thanks. For some reason I've been restarting it every time my environment changes, or having it run direnv for me.
All the usual warnings about AI apply to Gemini CLI: it hallucinates wildly.
But I have to say Gemini CLI gave me my first really fun experience using AI. I was a latecomer to AI, but what really hooked me was when I gave it permission to freely troubleshoot a k8s PoC cluster I was setting up. Watching it autonomously fetch logs and objects and troubleshoot until it found the error was the closest thing to getting a new toy for Christmas I've had in many years.
So I've kept using it, but it is frustrating sometimes when the AI is behaving so stupidly that you just /quit and do it yourself.
Thanks for sharing. Gemini CLI doing live troubleshooting for a K8s cluster is surreal. I am keen to try that out, since I have just created RKE2 clusters.
I've been using OpenBSD and PF for nearly 25 years (PF debuted in December 2001). Over those years there have been syntax changes to pf.conf, but the most disruptive were early on, and I can't remember the last syntax change that affected my configs (mostly NAT, spamd, and connection rate limiting).
During that time the firewall tool du jour on Linux was ipchains, then iptables, and now nftables, and there have been at least some incompatible changes within the lifespan of each tool.
PF is also from 2001, but its roots go further back; I once used a very PF-like syntax on a Unix firewall back in 1997. I forget which flavour of Unix it was, maybe Solaris.
Either way, I don't think there is any defense for the strange syntax of iptables, the chains, the tables. And that's coming from a person who transitioned fully from BSD to Linux 15 years ago and has designed commercial solutions using iptables and ipset.
Funny, I'm sitting here in a dark Nordic country, having turned off my lights to vacuum about 15 minutes ago, with the Dyson laser, lol. I open HN and the first thing I see after my eyes adjust to the flashbang is this link at the very top of the page. :D
They're more expensive than IKEA glasses, for example, but I swear by Duralex.
Ever since an IKEA glass spontaneously exploded on my desk.
I suddenly remembered we used Duralex at school in Sweden in the 90s, so I ordered brand new Duralex glasses and no explosions yet.
I imagine if they were being used in schools, with 13-17 year olds, being washed every day for years until they were covered in scratches, they must be pretty tough.