Having tried Kubernetes on Raspberry Pis once (my homelab is all SBCs — not all Pis, but all ARM), there are two major pitfalls:
1. K3s may not be too heavy, but basically anything that runs on it is. For all but the most basic jobs, I was still looking at one, or maybe two, apps per SBC. Couple that with 10/100 networking on many of these boards, and there's a lot of extra time/latency spent just on chatter across the network.
2. It's relatively rare that you're using _just_ the SBC - more likely, you have some peripherals or some local storage attached. That limits where your pods can schedule, but because of #1, you still end up with dedicated nodes for dedicated functions anyway.
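For context on #2: in Kubernetes the usual way to express "this pod must land on the board with the hardware" is a node label plus a `nodeSelector`. A minimal sketch of that constraint (node name, label, and pod spec are all hypothetical, not from my actual setup):

```shell
#!/bin/sh
# Label the node that physically has the flash drive, e.g.:
#   kubectl label node pi-flash storage=big-flash
# Then any pod needing that storage has to select the labeled node.
# Written to a local file here rather than applied, since this is a sketch.
cat > transmission-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: transmission
spec:
  nodeSelector:
    storage: big-flash    # pod only schedules where this label is set
  containers:
  - name: transmission
    image: lscr.io/linuxserver/transmission
EOF
```

Which works, but it illustrates the problem: once every storage- or peripheral-bound pod is pinned like this, the scheduler has very little left to actually decide.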
I ended up pulling the plug on that project early and went back to scheduling workloads manually. For instance, I have Transmission and Squid running on the Pi with the big flash drive. InfluxDB and Grafana live side by side, and also live next to the SDR (I take in some metrics by radio). It's all still containerized, just with manual `docker run`s and versioned config files instead of Kubernetes.
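For anyone curious what that pattern looks like, here's a minimal sketch of one such versioned run script; the mount path, port mapping, and flags are my assumptions for illustration, not details from the setup above:

```shell
#!/bin/sh
# Hypothetical per-service run script, kept in version control alongside
# the service's config. One script per container, one container per role.
set -eu

# Assumed mount point for the big flash drive (made up for this sketch).
DATA_DIR=/mnt/flash/transmission

# Printed rather than executed here so the script doubles as documentation;
# drop the echo to actually (re)create the container.
echo docker run -d \
  --name transmission \
  --restart unless-stopped \
  -p 9091:9091 \
  -v "$DATA_DIR":/config \
  lscr.io/linuxserver/transmission
```

The `--restart unless-stopped` flag gets you the "keep it running across reboots" behavior that is otherwise one of the main things an orchestrator provides.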
Resource intensive? I run Nomad on a bunch of Raspberry Pis at home; honestly, I'm not sure what orchestrator is more lightweight than Nomad.