Google tech is the best. Google invented k8s, and their k8s cloud is the best in the world. There's nothing similar to GKE Autopilot. Similar things could be said about some of their other services. People want the best cloud, and they hope those issues won't affect them.
We're heavy k8s users at work. We used to use GKE, now we use a mixture of EKS and bare metal after migrating most stuff away from GCP. Honestly, GKE isn't better than k8s anywhere else in any meaningful way at all.
You can just use vanilla K8s without any added services. It works well enough. There is no obligation to use GC for K8s. There are a lot of good base K8s providers.
Google has some good tech, but most of the cloud is just years behind. Want an example? You can't configure the GCP docker registry [EDIT: Artifact Registry] to clean up images automatically (after a number of days, a number of images, or anything).
This is totally standard in AWS, and the ticket for this on GCP has been open for many years now.
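For reference, the AWS feature I mean is ECR lifecycle policies. A rough sketch of the "keep only the newest 100 images" rule, assuming the standard lifecycle-policy schema:

```python
import json

# Sketch of an AWS ECR lifecycle policy: expire everything beyond
# the newest 100 images in a repository, regardless of tag.
policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Keep only the newest 100 images",
            "selection": {
                "tagStatus": "any",
                "countType": "imageCountMoreThan",
                "countNumber": 100,
            },
            "action": {"type": "expire"},
        }
    ]
}

# This JSON would be handed to
# `aws ecr put-lifecycle-policy --lifecycle-policy-text file://policy.json`.
print(json.dumps(policy, indent=2))
```

One declarative rule, set once per repository; that's the gap I'm complaining about on the GCP side.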
GCR is deprecated and I'm using Artifact Registry. (sorry for the confusion)
That is not backed by GCS buckets - if it were, I'm sure that I would have found this solution while searching the web.
But even if: I would like to e.g. keep the latest 100 images always. I doubt this would be possible with a simple GCS policy without writing custom code or something like a cron job.
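To illustrate, the cron job I'd be forced to write boils down to logic like this (the toy data stands in for the output of something like `gcloud artifacts docker images list`; the actual listing and deleting wiring is hypothetical):

```python
from datetime import datetime, timedelta

def images_to_delete(images, keep=100):
    """Given (digest, created) pairs, return the digests of everything
    except the `keep` most recently created images."""
    ordered = sorted(images, key=lambda i: i[1], reverse=True)
    return [digest for digest, _ in ordered[keep:]]

# Toy data: 150 images, one created per day, newest first.
now = datetime(2023, 1, 1)
images = [(f"sha256:{n:03d}", now - timedelta(days=n)) for n in range(150)]

# The 50 oldest digests; a real cron job would then delete each one.
doomed = images_to_delete(images, keep=100)
print(len(doomed))
```

It's trivial, which is exactly the point: there's no reason this shouldn't be a built-in registry setting.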
> But even if: I would like to e.g. keep the latest 100 images always. I doubt this would be possible with a simple GCS policy without writing custom code or something like a cron job.
Note also how the project is hosted under the GoogleCloudPlatform GH org. To me, when I see GCP projects that are set up like this (in the GH org but disclaimed as "not official" in the GCP docs), this suggests that Google engineers built it knowing it's a pain point; and those engineers will support it to the best of their ability in the capacity of being maintainers of this open-source project; but Google as a company don't want to officially support it (yet), and so your GCP support contract won't get you any business-level support for it.
It's sort of like how, in Postgres, there is code which is maintained by the Postgres maintainers, but which lives under contrib/ as an extension. It's essentially a liability waiver for that component.
---
† I do have a guess as to why Google do things this way. At least where dev tools are concerned, Google seems to eschew the usual distributed-systems architecture for long-running jobs (of having a thin API client binary that submits jobs to a cloud-side control-plane daemon, which then drives the job forward, and which can then be polled/subscribed for job status by said client.) Rather, Google seemingly have a philosophy of designing local fat clients that reach into the cloud to drive backend processes as the "control node" for those processes. The Cloud Dataflow (⇒ Apache Beam) architecture is designed this way, for example. I believe it's the reason that the Google Cloud SDK ships with so many binaries — there are a lot of fat clients in there that actually drive logic, rather than just sending messages to daemons that drive the logic.
And, presuming developers are issued good workstations, I can see the advantages of this architecture. A local control node synchronously knows its own status, rather than having to poll for it; a local control node can use local resources (like how Cloud Dataflow can consume and produce local files on the ends of the pipeline with the same streaming efficiency as a regular CLI text-processing shell command); and an operation started by mistake, with local control, can be cancelled by just ctrl+c-ing the control process.
Depending on how you design the client, it can also "mandate manual usage" — i.e. ensure that the developer is interactively running the process for the process to proceed, and therefore that said dev is available in case anything goes wrong. (I've personally dropped [async daemon-driven] Continuous Deployment, in favor of this sort of "synchronous dev-workstation-driven deployment.")
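A toy sketch of the "local fat client as control node" pattern described above (all step functions are hypothetical stand-ins for calls into cloud backends):

```python
# The client itself sequences the backend calls, so its own state IS
# the job state: it synchronously knows how far it got, and Ctrl+C
# (KeyboardInterrupt) cancels the job by simply unwinding this process.

def provision():  return "resources"
def run_stage(r): return f"output({r})"
def finalize(o):  return f"done:{o}"

def drive_job():
    state = []  # no polling a remote control plane for status
    try:
        r = provision();      state.append("provisioned")
        o = run_stage(r);     state.append("ran")
        result = finalize(o); state.append("finalized")
        return result, state
    except KeyboardInterrupt:
        # local control: cancellation needs no cloud-side cancel RPC here
        return None, state

result, state = drive_job()
print(result, state)
```

Contrast with the thin-client design, where the same three steps would live in a cloud-side daemon and the client would only submit and poll.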
I wish someone at Google would write up a paper on this philosophy; it's pretty clearly implicit in a lot of their work, but I've never seen it mentioned explicitly anywhere. (Maybe it's just one dev-tools lead who has influenced a lot of these projects, doing what they think is "obvious"?)
That is object-specific though, so it probably won't work unless I use exactly one image.
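To spell that out: as I understand the schema, a GCS bucket lifecycle rule can only express per-object conditions (age, createdBefore, and so on), never a cross-object constraint like "keep the newest 100":

```python
import json

# A GCS bucket lifecycle rule. The condition is evaluated against each
# object individually, which is why "delete all but the newest N
# objects" cannot be expressed here.
lifecycle = {
    "rule": [
        {
            "action": {"type": "Delete"},
            "condition": {"age": 30},  # days since object creation
        }
    ]
}
print(json.dumps(lifecycle))
```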
derefr 3 hours ago | on: Tell HN: Google Cloud suspended our production pro...
> But even if: I would like to e.g. keep the latest 100 images always. I doubt this would be possible with a simple GCS policy without writing custom code or something like a cron job.
No, it does not "exist". Either it's built in, and then it exists, or it doesn't. What you linked is a way for me to build it myself. And I had already found it; this solution is even linked in the years-old Google ticket. And guess what: they still didn't build it. On AWS, no problem. As I said: years behind.
Sure, I can set up my own cron job (or here, a Cloud Run function / GitHub Action). But that's not what I expect from a leading cloud vendor. This is not a niche feature!
Please don't defend it, Google doesn't deserve it. Credit to whoever built the 3rd-party solution, but Google really failed here.
> Depending on how you design the client, it can also "mandate manual usage" — i.e. ensure that the developer is interactively running the process for the process to proceed
Just no.
Let's face it: Google is just too incompetent to do it. Not the developers there, but the company as a whole, in the way it is organized. Even IF it were as you said and it were a philosophy, they could just say so and close the ticket. But it's still open.
I could list dozens of similar things with GCP and related Google services.
Yeah I agree overall. BigQuery, Spanner, Cloud Run, Pub/Sub, Vertex AI. Like all clouds, some of the services suck, like Data Fusion, but overall the services work well and work well together. In particular, the way Google Cloud makes authentication and permissions seamless is amazing. Google does struggle with support though.
K8s runs well on all major cloud providers. You can run it on-premises without licensing costs quite well (for example k3s, Rancher, ...). Or fully managed with OpenShift.
AWS, Azure and GC are all on the same level. Any of them gives you "the best" technology.