Hacker News | creztoe's comments

That sounds awesome and also a huge headache. I've found it's much easier to work with an ORM that is specifically designed for GraphQL, like TypeORM[1]. Otherwise you're just trying to force a square peg into a round hole (like Graphene[2] for Django). As for not exposing some data, just use "private" schema directives, etc.

[1] https://github.com/typeorm/typeorm [2] https://github.com/graphql-python/graphene


That's how I understand it. Basically you have a machine hosted somewhere with a dedicated IP so you can access all remote machines from anywhere at any time, as long as they are connected to it via reverse proxy.


Do not pass sensitive data to docker build via --build-arg. When you access it with "ARG", the value is recorded in the docker history, visible to anyone with the image. Use "--secret", or use the ARG in an intermediate build stage whose history isn't preserved, then manually copy any necessary files from the intermediate image to your final image.

A perfect example of this would be passing your NPM_TOKEN to install company scope packages.
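A sketch of the --secret approach for exactly this case, assuming BuildKit is enabled (the secret id, base image, and file path are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
# The secret is mounted only for this RUN step, at /run/secrets/<id>;
# it is never written into an image layer and never appears in
# `docker history`.
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

Built with something like `docker build --secret id=npm_token,src=./npm_token.txt .`, so the token stays on the build host.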


This kind of confuses me a bit. Why would you want to turn Pub/Sub into a queue?

How badly does this affect message acknowledgment and retries? I assume just a huge hit to latency. This seems like a horrible idea for anyone expecting to use multiple subscribers or expecting to chunk multiple messages per request.

Services relying on Pub/Sub should be idempotent anyway. If you need to work around that for some reason, you are better off dumping messages from your subscriber into RabbitMQ or Redis for processing and using a Subscriber/Scheduler/Worker pattern.


Having used both types: turning the pub/sub into a queue has some advantages in debugging and processing. Kafka has the idea of each queue being partitioned, with hash keys, which means you can have a bunch of processes reading from the same queue without anyone stepping on anyone else. It's basically sharding at the data-stream level with guaranteed per-partition ordering, which is a neat concept.

Another is playback. Kafka uses a group id/offset to keep track of where you are, and messages are decently hard to lose since they stick around: you can replay by just moving the offset, and the offset update is maybe 10 bytes into a memory-backed filestore. At first I too was skeptical of the performance, but it can scale very nicely and lets you scale a topic horizontally as well as vertically. In the background you have an expire time for each message, so maybe you only keep it for one week, or you can set it to last for years. For the latter you'd be better off putting it in a DB table, though.
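The "processes reading from the same queue without stepping on each other" part comes from key-based partition assignment. Kafka's real default partitioner hashes the key with murmur2; a minimal sketch of the idea, using CRC32 as a stand-in:

```python
import zlib

NUM_PARTITIONS = 4

def partition_for(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    # Stable hash of the message key: every message with the same key
    # lands on the same partition, so per-key ordering is preserved
    # even with many consumers reading the topic in parallel.
    return zlib.crc32(key) % num_partitions

# All events for one entity always map to one partition:
assert partition_for(b"user-42") == partition_for(b"user-42")
```

Each consumer in a group then owns a disjoint subset of partitions, which is the "sharding at the data stream level" described above.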

Idempotent is a good idea even in a system like this. But it is not always possible as your upstream data sources may be something very different.


> Services relying on Pub/Sub should be idempotent anyway. If you need to work around that for some reason, you are better off dumping messages from your subscriber into RabbitMQ or Redis for processing and using a Subscriber/Scheduler/Worker pattern.

Exactly. Pub/Sub is made to send messages to any consumer. To impose ordering, you direct messages to a specific consumer. Alright, well, then Pub/Sub wasn't the right tool in the first place.


Maybe it's just me, but I didn't get that from rozab's comment at all. I figured the point was something like, "Why is this project unique? Maybe the time would have been better spent contributing to an established project that has already accomplished these basic features." Which seems like an honest question.


This seems like it's just a click-baity ad for their productivity analysis tools. I struggle to find a single thing of value in this post.


I'm lazy. I've found a GitLab wiki sufficient for my basic needs:

1. Version control
2. Easily editable from the terminal (Markdown)
3. Easily viewed via browser (either on GitLab or a self-hosted Gollum instance)
4. Supports some HTML, such as `details` and `summary` for drop-down visibility
5. Supports `[[_TOC_]]` to add a table of contents (works well inside `details` at the top of every page)
6. Supports nested directories for the sidebar navigation on GitLab (GitHub forces a single layer)
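Points 4 and 5 combine nicely; a collapsible table of contents at the top of a wiki page might look like (content illustrative):

```markdown
<details>
<summary>Table of contents</summary>

[[_TOC_]]

</details>

## Setup
```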


Could you explain why helm is garbage? I think it suits its purpose rather well without being too complex. You can essentially "plug-in" different types of resources rather easily. Especially in v3 now that you don't need to install Tiller and can avoid setting those cluster permission requirements.

Have you tried some Kubernetes api libraries? You can generate and configure resources with [python kubernetes-client](https://github.com/kubernetes-client/python) without much trouble. Personally I prefer editing them as JSON instead of python objects, but it isn't too bad.
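On the "editing them as JSON instead of Python objects" point: a minimal sketch of building a Deployment manifest as a plain dict, which kubernetes-client can also consume (function name is illustrative, not part of the library):

```python
import json

def make_deployment(name: str, image: str, replicas: int = 1) -> dict:
    """Build a Deployment manifest as a plain JSON-style dict, rather
    than composing kubernetes.client.V1Deployment model objects."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = make_deployment("web", "nginx:1.25", replicas=3)
print(json.dumps(manifest, indent=2))
```

The dict form is easy to diff, patch, and serialize, at the cost of losing whatever client-side validation the typed models give you.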


> Could you explain why helm is garbage?

Not the OP, but..

1. YAML string templating makes it very easy to get indentation and/or quotation wrong, and the error messages can easily end up pretty far from the actual errors. Structured data should be generated with structured templating.
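A concrete instance of gripe 1, as an illustrative chart fragment: `toYaml` output must be re-indented by hand-counting spaces, and getting it wrong produces a YAML parse error that points at the rendered output rather than the template line.

```yaml
# templates/deployment.yaml (fragment)
metadata:
  annotations:
    # must be `nindent 4` to match this nesting level; `nindent 8`
    # or plain `indent` renders structurally invalid YAML
    {{- toYaml .Values.podAnnotations | nindent 4 }}
```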

2. "Values" aren't typechecked or cleaned.

3. Easy to end up in a state where a failed deploy leaves you with a mess to clean up by hand.

4. No good way to preview what a deploy will change.

5. Weird interactions when resources are edited manually (especially back in Helm 2, but still a thing).

6. No good way to migrate objects into a Helm chart without deleting and recreating them.

7. Tons of repetitive boilerplate in each chart to customize basic settings (like replica counts).

It's a typical Go solution, in all the wrong ways.


> "Values" aren't typechecked or cleaned.

Helm 3 does offer a solution: a JSONSchema definition file for the values.

Which works ... in a very Helm-like fashion. Meaning: it's messy and awkward.
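For reference, the mechanism is a `values.schema.json` file at the chart root, validated on install/upgrade/lint. A minimal sketch (property names illustrative):

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["replicaCount"],
  "properties": {
    "replicaCount": { "type": "integer", "minimum": 1 },
    "image": {
      "type": "object",
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string" }
      }
    }
  }
}
```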


It's not going to solve all your problems, but Dhall can fix your first few gripes. I've been using it for several months and it's an excellent way to write configuration, imo.


Yeah, I have used Nix to generate them in the past, which worked pretty great too. But Helm does, admittedly, solve a real problem: garbage collecting old resources when they're deleted from the repo. I just wish we could have something much simpler that only did that...


`kubectl apply --prune` should nominally do this. Irritatingly (I acknowledge I'm almost as responsible as anyone else for doing something about this), it's had this disclaimer on it for quite some time now:

> Alpha Disclaimer: the --prune functionality is not yet complete. Do not use unless you are aware of what the current state is. See ⟨https://issues.k8s.io/34274⟩.

I haven't used it in anger, so I can't add any disclaimer or otherwise of my own.

kpt is the recent Google-ordained (AFAICT) solution to this problem, but is not yet at v1.

You could also resolve this yourself by either:

* versioning with labels and deleting all resources with labels indicating older versions

* using helm _only_ in the last mile, for pruning


As helm charts become used by more people and grow more complicated, exposing more of the underlying config, they just turn into a set of YAML files with as much complexity as (or more than) the thing they're trying to replace. Configuration with intelligence that allows arbitrary overrides of any of the underlying configuration is important in order to meet all use cases. Without that, helm is only useful for a strict subset of use cases, and eventually you either outgrow the chart or the chart grows in complexity until it's worthless.


We've found Kustomize, or just straight up writing the deployments ourselves the best approach.

The actual spec for a Deployment/DaemonSet/StatefulSet/CRD is usually super straightforward once you get the Kubernetes terminology, and most of the issues I've had with Helm have boiled down to "oh, they haven't parameterized the one config I need to change".


I always have some configs that helm hasn't parameterized. But that's not a problem, because I always fetch my charts from Helm Hub into my repo, so I just add any parameters I need.


I also think helm is terrible.

Helm's stated goal is to be something of a package manager for apps in k8s, but this is fundamentally unworkable as shown by... Helm. It's hard to describe just how unworkable this idea is.

Let's start with an example, you want to install an app (let's say Jira) and a DB backend of your choice, postgres or mysql.

The first step where this all falls down is, it may or may not support your preferred DB. Sure, Jira does, but does the chart?

Assuming it does support your preferred backend, maybe it depends on the chart for the db you picked. If it does, it's going to install it for you, hopefully with best practices, almost certainly not according to your corporate security policy. This is also a problem if say, you have a db you want to use already, prefer to use an operator for managing your db, or use a db outside k8s.

You got lucky, and it supports your DB just the way you want it. Next question, do you want an HA Jira? Often, this part is done so differently that HA Jira and Single host Jira are straight up different charts.

Do you want your DB to be HA? Unfortunately, the chart the Jira chart author picked to depend on is the non-HA one. Guess you're out of luck.

Maybe you want to add redis caching? Nginx frontend/ingress? Want to terminate TLS at the app host and not ingress? How do you integrate it with your cert management system?

We haven't even looked at the config, where you have to do everything in a values.yaml file which is never documented as well as the actual thing it's configuring on your behalf, and is not always done in a sensible manner.

Hopefully it's clear from this that as a user, helm isn't going to work for you, because just as there's no such thing as the average person, there's no such thing as an average deployment. Even a basic one is filled with one off variations for every user that a public chart needs to support.

As a developer, helm is unworkable because you're templating yaml inside yaml. This isn't too bad if you're just tweaking a few things on an otherwise plain chart, but a public chart, that naively hopes to support all the possible configurations? Your otherwise simple chart is now 5-10x longer from all the templating options. Have fun supporting it and adding the new features you'll inevitably need to add and support.

As a counterpoint to all this, kustomize gets a lot right. I don't mean that kustomize is perfect, or even good, but I've found like k8s itself, it understands that the problem space is complex and to try and hide that complexity leads to a lesser product that is more complex because of leaky abstractions.

Kustomize acts as a glue layer for your manifests, so instead of some giant morass of charts and dependencies none of which work for you, you're expected to find a suitable chart for each piece yourself and compose them with Kustomize.

Going through the same example again as a user:

Your vendor has provided a couple of basic manifests for you to consume, maybe even only one, because they're expecting you to supply your own DB. Since they only need to supply the Jira part, instead of having an HA chart and a single-node manifest, they just give you one manifest with a stateful set. Or maybe they give you two: one as a deployment and one as a stateful set. The stateful set might also have an upgrade policy configured for you.

Since the vendor punted on the DB, you can do whatever you like here. You'll have to supply a config map with your db config to the Jira deployment, but that's okay, it's easy to override the default one in the manifest with kustomize. You are now free to use a cloud managed DB, an operator managed one, or just pull the stock manifest for your preferred DB.

Want to terminate TLS in your app again? Easy enough, Cert Manager will provide the certs for you, and supply them as secrets, ready to consume in your app, you just need to tell it where to look.

So now you have all the parts of your Jira deployment configured just how you like, but they're all separate. Maybe you just edited the stock manifests to get your changes in the way you like. Dang, you've created a problem when the vendor updates the manifest, as now you need to merge in changes every time you update it. That seems like a huge hassle. What if you could just keep your changes in a patch and apply that to a stock manifest? Then it's easy to diff upgrades of the stock manifest and see what changed, and it's easy to see what you care about in your patch.

All of this seems like it's getting kind of unwieldy, maybe we can make it easier. We'll have a single kustomization.yaml, and it'll be really structured. In it, you can list all the paths to folders of manifests, or individual files, or git repos/branches/subpaths. We'll also specify a set of patches that look like partial manifests to apply to these base manifests to make it clear what is the base and what goes on top. Then finally, for common things like image versions and namespaces, we'll expose them directly so you don't need to patch everything. We can do that because we're using standard manifests that can be parsed and modified.
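The structure described above looks roughly like this (paths, names, and versions are all illustrative):

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: jira
resources:
  - vendor/jira-statefulset.yaml        # stock vendor manifest
  - github.com/example/db-manifests//postgres?ref=v1.2.0
patches:
  - path: patches/jira-db-config.yaml   # your overrides, as a partial manifest
images:
  - name: jira
    newTag: "9.4.1"
```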

That is kustomize, and as awkward as it is, it's just trying to make it clear what you're applying, and what your one customization from stock is in a maintainable way. It does a better job of 'package management' by not managing packages. This is pretty similar to how Linux package managers work. If you install wordpress on your server, it's going to install php, but it might not install apache or nginx, as everyone wants something different. It's definitely not going to install a DB for you. You as the admin have to decide what you want and tell each part about the other.


I understand your pain and can see where you are coming from.

But Helm is just a package manager, not the software delivery platform you're asking for.

I mean do you have the same expectations from a Deb or RPM package?

If I give you a deb package that "contains" Jira, won't you have the exact same concerns?


Thanks for the detailed post. Really helps newcomers like myself.

One of the benefits as a new user of k8s is the ability to grab a Helm chart to get me most of the way with something like ELK. I want to go the way of Kustomize, but I can't seem to find the same kind of thing for it.


If you want an ELK stack, you should look into the operators provided by Elastic [1]. All you need to do is write a very small manifest or two for each thing you want operated. I feel like this is a better solution to 'I want an ELK cluster' than a helm chart because it solves more problems without leaking.

[1] https://www.elastic.co/blog/introducing-elastic-cloud-on-kub...


I love how the kubernetes client is only compatible with python <= 3.6


I think you are half correct. The gateway has nothing to do with verifying the file during a DNS challenge. However, the IP of the machine requesting the cert IS saved with that cert information and made public. Let's Encrypt will even warn you during the verification process.


The IP of the machine requesting the certificate is recorded by Let's Encrypt, but it is not (ordinarily) made public and certainly isn't (as you can see by inspecting it for yourself) saved with the certificate information.

ISRG is required to keep enough information about the issuances they make to allow them to usefully diagnose problems after the fact. Ideally when we discover a problem it will be possible for the issuer to go back and figure out which (if any) previously issued certificates were affected so that these certificates can be revoked if appropriate.

But although they had at one point planned to publish more of this information, they do not in fact do this routinely.


Yeah, I was referring to the certbot warning of "logging" the IP publicly. But I guess that policy never actually came to fruition. Thanks for the clarification!


I agree with the confusion. There is the ability to roll out updates in a "wave"[1], but I'm not sure how this is better than a simple rollout strategy in Kubernetes, since a reboot of the node seems inevitable.

[1] https://github.com/bottlerocket-os/bottlerocket/tree/develop...

