Well, there are more and less important parts of a car. I wouldn't bat an eye at 3D-printed dash parts or, at the extreme, a cup holder. But on the flip side, anywhere there is heat is potentially bad for anything 3D printed that isn't metal or some hard-to-print high-temp material, and anywhere mechanical robustness = safety is a spot where you want something very well tested, not "I printed it and it looks right".
But it didn't fail because of stress. It failed precisely because it was made from the wrong material. If the exact same part had been injection molded from the same material, it would melt too.
k3s makes it easy to deploy, not to debug any problems with it. It's still essentially adding a few hundred thousand lines of code to your infrastructure, and if it's a small app you need to deploy, also wasting a fair bit of RAM.
K3s is just a repackaged, simplified k8s distro. You get the same behavior and the same tools as you have any time you operate an on-premises k8s cluster, and these, in my experience, are somewhere between good and excellent. So I can't imagine what you have in mind here.
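To make the "same tools" point concrete, here's a minimal sketch in Python using the official kubernetes client. The kubeconfig path is k3s's default; the deployment name and image are arbitrary placeholders, not anything from the thread:

    # Standard Kubernetes Python client pointed at a k3s cluster; nothing
    # here is k3s-specific except the default kubeconfig location.
    from kubernetes import client, config

    # k3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml by default
    config.load_kube_config(config_file="/etc/rancher/k3s/k3s.yaml")

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="hello"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "hello"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "hello"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="hello",
                            image="nginx:alpine",  # placeholder image
                        )
                    ]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )

The same script, unmodified except for the kubeconfig path, runs against any conformant k8s cluster, which is the whole point.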
"It's still essentially adding few hundred thousand lines of code into your infrastructure"
Sure. And they're all there for a reason: it's what one needs to orchestrate containers via an API, as revealed by a vast horde of users and years of refinement.
> K3s is just a repackaged, simplified k8s distro. You get the same behavior and the same tools as you have any time you operate an on-premises k8s cluster, and these, in my experience, are somewhere between good and excellent. So I can't imagine what you have in mind here.
...the fact that it's still k8s, which is a mountain of complexity compared to nearly anything else out there?
> I saw nothing above what Apache Spark+Hadoop with _consistent_ object stores already offers on Amazon (S3), Google Cloud (GCS), and/or Microsoft (Azure Storage, ADLS Gen2)
It was very simple to set up, and even if you just leased a bunch of servers off, say, OVH, it was far, FAR cheaper to run your own than to pay any of the big cloud providers.
It also had pretty low requirements. Ceph can do all of that, but the setup is more complex and the RAM requirements are far, far higher.
MinIO is far less complex than getting the same functionality out of a Ceph stack.
But that's an advantage mostly for small companies and the hobbyist market; a big company either has enough need to justify running a big Ceph cluster, or it buys storage as a service.
MinIO is literally "point it at storage(s), done". And at far smaller RAM usage.
Ceph is mon servers, OSD servers, and then RADOS Gateway servers on top of that.
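For contrast, a sketch of what "point it at storage(s), done" looks like in practice. Assumptions: a local MinIO started with `minio server /data`, its default dev credentials and port, and a made-up bucket name. Any stock S3 client (boto3 here) talks to it directly:

    # Talking to a single-process MinIO with a stock S3 client.
    # Server side (shell, for reference): minio server /data
    # Endpoint, credentials, and bucket name are assumptions/defaults.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",   # MinIO's default API port
        aws_access_key_id="minioadmin",          # default dev credentials
        aws_secret_access_key="minioadmin",
    )

    s3.create_bucket(Bucket="backups")
    s3.put_object(Bucket="backups", Key="hello.txt", Body=b"hello")
    print(s3.get_object(Bucket="backups", Key="hello.txt")["Body"].read())

One process and you have an S3 endpoint, versus standing up mons, OSDs, and radosgw before Ceph can serve the same request.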
My point was that even 45Drives' virtualization of Ceph host roles, squeezing the entire setup into a single box, was not a "hobby"-grade project.
I don't understand yet exactly what MinIO would add on top of that to make it relevant at any scale. I'll peruse the manual on the weekend, because their main site was not helpful. Thanks for trying though ¯\_(ツ)_/¯
What I tried to say (perhaps not successfully) was that core Ceph knows nothing about S3. One gets S3 endpoint capability from radosgw, which is not a required component of a Ceph cluster.
It gets complex with ACLs for permissions, lifecycle controls, header controls, and a bunch of other features that are needed at S3's scale but not at a smaller provider's scale.
And many S3-compatible alternatives (probably most, apart from the big ones like Ceph) don't implement all of those features.
For example, for lifecycles, Backblaze has a completely different JSON syntax.
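A sketch of the difference, with the same "expire objects under logs/ after ~30 days" intent expressed both ways. The bucket name is made up; the field names are the two APIs' documented shapes, shown here for illustration:

    # Same intent, two different rule vocabularies.
    import boto3

    # S3-style lifecycle rule (boto3's dict form of the XML API):
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-logs",
                    "Filter": {"Prefix": "logs/"},
                    "Status": "Enabled",
                    "Expiration": {"Days": 30},
                }
            ]
        },
    )

    # Backblaze B2 native lifecycle rule (passed to b2_update_bucket);
    # note the completely different model: "hiding" then "deleting".
    b2_lifecycle_rules = [
        {
            "fileNamePrefix": "logs/",
            "daysFromUploadingToHiding": 30,
            "daysFromHidingToDeleting": 1,
        }
    ]

So a tool written against the S3 rule shape can't just be pointed at B2's native API; the lifecycle config has to be translated, not merely re-serialized.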