Hacker News | choeger's comments

I am working with main/master for years now, and there's one problem you don't have with develop: Whenever you merge something into master, it kind of blocks the next release until its (non-continuous) QA is done. If your changes are somewhat independent, you can cherry-pick them from develop into master in an arbitrary order and call that a release whenever you want to.

> Whenever you merge something into master, it kind of blocks the next release until its (non-continuous) QA is done.

That's what tags are for, QA tests the tagged release, then that gets released. Master can continue changing up until the next tag, then QA has another thing to test.


Can I tag a bugfix that goes in after a feature was already merged into main? Basically out of order. Or do I need to tag the bugfix branch, in which case the main branch is no longer the release, so we need to ensure the bugfix ends up in the remote main branch as well as the release. Seems like it could cause further conflicts.

git doesn't care what order or from which branch you tag things in. If you need to hotfix a previous release you branch from that previous release's tag, make your bugfix and tag that bugfix then merge the whole thing back to main.

Presumably you are maintaining the ordering of these releases with your naming scheme for tags. For instance, using semver tags with your main release being v1.2.0 and your hotfix tag being v1.2.1, even while you've got features in flight for v1.3.0 or v1.4.0 or v2.0.0. Keeping track of the order of versions is part of semver's job.

Perhaps the distinction is that v1.2.0 and v1.2.1 are still separate releases. A bug fix is a different binary output (for compiled languages) and should have its own release tag. Even if you aren't using a compiled language but are using a lot of manual QA, different releases have different QA steps and tracking that with different version numbers is helpful there, too.
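The hotfix-from-tag flow described above can be sketched end to end. This is a minimal throwaway demo: the version numbers and branch names are illustrative, and it assumes git >= 2.28 for `git init -b`:

```shell
set -e
# Throwaway repo; v1.2.0 / hotfix/1.2.1 are illustrative names.
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email demo@example.com
git config user.name demo

# A release already tagged, after which main moves on with a new feature.
git commit -q --allow-empty -m "1.2.0 release"
git tag -a v1.2.0 -m "release 1.2.0"
git commit -q --allow-empty -m "feature for 1.3.0"

# Branch from the previous release's tag, not from main.
git checkout -q -b hotfix/1.2.1 v1.2.0
echo fix > fix.txt
git add fix.txt
git commit -q -m "fix crash on empty input"
git tag -a v1.2.1 -m "hotfix release 1.2.1"   # QA tests exactly this tag

# The hotfix's history does not contain the in-flight feature.
! git log --oneline v1.2.1 | grep -q "feature for 1.3.0"

# Merge the whole thing back so main carries the fix too.
git checkout -q main
git merge -q --no-edit hotfix/1.2.1
git merge-base --is-ancestor v1.2.1 main && echo "fix is on main"
```

Because the hotfix tag is its own ref, QA can test v1.2.1 in isolation while features for v1.3.0 keep landing on main.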


I'm not sure what you mean, what does "tag a bugfix", "tag the bugfix branch" or "ensure the bugfix ends up in the remote main branch as well as the release" even mean?

What are you trying to achieve here, or what's the crux? I'm not 100% sure, but it seems you're asking about how to apply a bug fix while QA is testing a tag, that you'd like to be a part of the eventual release, but not on top of other features? Or is about something else?

I think one misconception I can see already, is that tags don't belong to branches, they're on commits. If you have branch A and branch B, with branch B having one extra commit and that commit has tag A, once you merge branch B into branch A, the tag is still pointing to the same commit, and the tag has nothing to do with branches at all. Not that you'd use this workflow for QA/releases, but should at least get the point across.
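The tags-live-on-commits point can be demonstrated in a couple of commands. A minimal sketch (branch and tag names are made up; assumes git >= 2.28 for `git init -b`):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m base

# One extra commit on branch B, tagged there.
git checkout -q -b branch-b
git commit -q --allow-empty -m "extra commit"
git tag demo-tag
before=$(git rev-parse demo-tag)

# Merge B into main and delete the branch: the tag is untouched,
# because it points at a commit, not at a branch.
git checkout -q main
git merge -q --no-edit branch-b
git branch -q -d branch-b
[ "$(git rev-parse demo-tag)" = "$before" ] && echo "tag unchanged"
```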


It means you need a bugfix on your release and you don't want to carry in any other features that have been applied to master in the meantime.

In that case one can just branch off a stable-x.y branch from the respective X.Y release tag as needed.

It really depends on the whole development workflow, but in my experience it was always easier and less hassle to develop on the main/master branch and create a stable release or fix branch as needed. With that, one also prioritizes fixing on master first and then cherry-picks that fix directly to the stable branch, with potential adaptions relevant for the potential older code state there.

Branching off stable branches as needed keeps the git history less messy and more linear, making it easier to follow, and feels more like an "only pay for what you actually use" model.
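The branch-stable-as-needed workflow above can be sketched like this; the names (v1.4.0, stable-1.4) are illustrative, and the sketch assumes git >= 2.28 for `git init -b`:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo v1 > app.txt
git add app.txt
git commit -q -m "1.4.0 release"
git tag v1.4.0

# The stable branch is only created once a fix is actually needed.
git branch stable-1.4 v1.4.0

# Development continues on main; the bug is fixed on main first.
git commit -q --allow-empty -m "big new feature"
echo fixed >> app.txt
git commit -q -am "fix: handle empty input"
fix=$(git rev-parse HEAD)

# Cherry-pick only the fix onto stable (adapting it if the older code
# there differs), then tag the point release from the stable branch.
git checkout -q stable-1.4
git cherry-pick -x "$fix" >/dev/null
git tag v1.4.1
! git log --oneline v1.4.1 | grep -q "big new feature"
echo "stable release contains only the fix"
```

The `-x` flag records the original commit hash in the cherry-picked commit message, which helps later when auditing which fixes were backported.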


"with potential adaptions relevant for the potential older code state there"

And there it is. Not "potential adaptations", they will be a 100% necessity for some applications. There are industries outside webdev where the ideals of semver ("we do NOT break userland", "we do NOT break existing customer workflows", https://xkcd.com/1172/) are strongly applied and cherry-picking backports is not a simple process. Especially with the pace of development that TBD/develop-on-main usually implies, the "potential older code state" is a matter of fact, and eliding the backport process into "just cherry-pick it" as you did is simply not viable.


Usually what I've seen is one of two solutions, the former (usually) being slightly favored: A) hide any new feature behind feature flags, separate "what's in the code" from "how the application works" essentially or B) have two branches, one for development (master) and one for production. The production branch is what QA and releasers work with, master is what developers work with, cherry-picking stuff and backporting becomes relatively trivial.

We've been using feature flags, but mostly for controlling when things get released. Feature flags carry their own issues: they complicate the code, introduce parallel code paths, and if not maintained properly they make it difficult to introduce new features and have everything working together seamlessly. Usually you want to remove the flag soon after release, otherwise it festers. The production branch is also OK, but committing out of order can break references if commits are not in the same order as master, and patching something directly to prod can cause issues when promoting changes from master to prod; it requires some foresight not to break builds.

The second one you described is basically GitFlow, just substitute "master branch" for "production branch" and "dev branch" for "master branch". I mean, you literally said "master is what developers work with", so why not call it the "development branch"?

With (B) you've just reconstructed the part of git-flow that was questioned at the start of this thread. Just switch the two branches from master/production to develop/master.

B is basically Gitflow with different branch names - “one for development” is called develop, “one for production” is called main.

And Git Flow and similar say "that's what merges to main are for". GF and TBD really are way more similar than anyone wants to admit. It's basically "branch for release" vs "merge for release". There are benefits and downsides to both. E.g. fully continuous and non-blocking QA/testing is non-trivial, and GF can help keep development on "the next thing" moving along without the dreaded potential huge rebase looming overhead if QA comes back with lots of changes needed, or if some requirement changes come down from project management.

For smaller projects with tests, something like TBD is great: easy to reason about, branches are free, tags are great. For bigger things with many teams working on overlapping features, keeping a TBD setup "flowing" (pun intended) can require a bit more forethought and planning. Release engineering, in other words. TBD vs GF is kind of just "do you want your release engineering at the beginning or at the end"?


I worked at a place that had Gitlab review apps set up. Where the QA people could just click a button and it would create an instance of the app with just that PR on it. Then they could test, approve, and kill the instance.

Then you can merge to master and it's immediately ready to go.


Yeah same. The idea that you'd be merging code to `main` that isn't ready to deploy is crazy to me, but that doesn't mean you need a `develop` and `prod` branch. The main + 1-layer of branches has generally been totally sufficient. We either deploy to branch-preview environment or we just test it locally.

What's the difference between what you describe, and continuously merging things into main and cutting releases from a branch called stable?

They're the same strategy with different branch names.

Are you using feature flags in your workflow pattern? These can be used to gate releases into your production environment while still allowing development work to be continuously integrated to trunk without blocking.

This also means that the release to prod happens post-integration by means of turning the feature flag on. Which is arguably a higher quality code review than pre-integration.


Yes, you have to include QA in the continuous integration process for it to work. That means at any time you can just tag the top of the master branch to cut a release, or do continuous delivery if it makes sense (so no tags at all).

It sounds like you are doing a monorepo type thing. Git does work best and was designed for multiple/independent repos.


Even in a monorepo you can tag releases independently in git. git doesn't prescribe any particular version tag naming scheme, and stores tags like other refs in a folder structure that many (but not all) UIs pay attention to. You can tag `project-a/v1.2.0` and `project-b/v1.2.0` as different commits at different points in the repo, as each project is independently versioned.

It makes using `git describe` a little more complicated, but not much. You just need `--match 'project-a/*'` or `--match 'project-b/*'` when you want `git describe` for a specific project.
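The per-project tagging described above can be sketched in a throwaway repo. Project names and versions are illustrative; the sketch assumes git >= 2.28 for `git init -b`:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email demo@example.com
git config user.name demo

# Each project gets tags in its own namespace, on different commits.
git commit -q --allow-empty -m "project-a work"
git tag -a project-a/v1.2.0 -m "project-a 1.2.0"

git commit -q --allow-empty -m "project-b work"
git tag -a project-b/v1.2.0 -m "project-b 1.2.0"

git commit -q --allow-empty -m "more work"

# Scope describe to one project's tag namespace with a glob.
git describe --match 'project-a/*'   # e.g. project-a/v1.2.0-2-g<hash>
git describe --match 'project-b/*'   # e.g. project-b/v1.2.0-1-g<hash>
```

The trailing `-N-g<hash>` part of the describe output counts commits since that project's last tag, so each project gets an independent "distance since release".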


That's true, but git also doesn't have tags that apply to a subset of the repository tree. You can easily check out `project-b/v1.2.0` and build project-a from that tree. Of course, the answer to that is "don't do that", but you still have the weird situation that the source control implementation doesn't match the release workflow; your `git describe` example is but one of the issues you will face fighting the source control system -- the same applies to `git log` and `git diff`, which will also happily give you information from all other projects that you're not interested in.

For me, the scope of a tag should match the scope of the release. That means that a monorepo is only useful if the entire source tree is built and released at the same time. If you're using a monorepo but then do partial releases from a subtree, you're using the wrong solution: different repo's with a common core dependency would better match that workflow. The common core can either be built separately and imported as a library, or imported as a git submodule. But that's still miles ahead of any solution that muddles the developers' daily git operations.


I understand the low level details of why tags don't work that way and why git leaves that "partial release" or "subtree release" as a higher level concept for whoever is making the tags in how they want to name them.

I know there are monorepo tools out there that automate things like partial releases, including building the git tag names and helping you get release trees, logs, and diffs when you need them.

I think a lot of monorepo work is using more domain specific release management tools on top of just git.

Also, yeah, my personal preference is to avoid monorepos, but I know a lot of teams like them, so I try my best to at least know the tools for getting what I can out of monorepos.


Do you have any examples of tooling like that, providing the monorepo tiling on top of git's porcelain so to speak? I had assumed that most of such tooling is bespoke, internal to each company. But if there's generic tooling out there, then I agree, it's useful to know such.

That's absolutely an issue that a lot of it is bespoke and proprietary.

I found someone else's list of well known open source tools (in the middle of a big marketing page advertising monorepos as an ideal): https://monorepo.tools/#monorepo-tools

That list includes several I was aware of and several I'd not yet heard of. It's interesting how much cross-over there is between monorepo management tools and build tools. It's also interesting how many of the open source stacks are purely for, or at least heavily specialized for, TypeScript monorepos.

I don't have any recommendations on which tools work well, just vaguely trying to keep up on the big names in case I need to learn one for a job, or choose one to better organize an existing repo.


What about device attestation? Will you be able to run banking apps and Netflix et al.?

For me the biggest concern is that while you may be able to use and run your own device, you will be locked out of most proprietary services. Much like how more and more websites simply don't work with Firefox anymore.


All the Swedish banking apps I've tried work great, including BankID, Swish, Sparbanken, Nordea, LF, Revolut and more.

I've had fewer issues than with CalyxOS, for example, where more apps broke.


I only use Firefox. It has been years since I ran into a Chrome-only website. Though recently I ran into an Edge-only website on my corporate network; not even sure how that happens.

This might be one of those things where, if there is a big enough user base, companies will start to take it seriously.

Nearly all non-banking apps work, with very few exceptions, and a large majority of banking apps work. A growing number of banking apps were adding checks for Google certification, but now a growing number of those explicitly allow GrapheneOS via the Android hardware-based attestation system it supports. That system can verify the hardware, OS and app even with an alternate OS or non-Google-certified hardware, provided it adds the hardware support for it.

Here's a community maintained list of apps and whether or not they work:

https://privsec.dev/posts/android/banking-applications-compa...

This is linked to from the Banking Apps section on GrapheneOS docs: https://grapheneos.org/usage#banking-apps

Sample size of 1: my UK banking apps all work fine.



Well, I do use banking and Netflix on GrapheneOS on my Pixel 8a, and everything works perfectly.

> For me the biggest concern is that while you may be able to use and run your own device, you will be locked out of most proprietary services.

Although this is not the case, moving away from proprietary services (and self-hosting your own) is an important goal in itself. See for instance the recent controversy regarding Discord's age verification.


The panel price doesn't matter. It's the installation and the surroundings (electrical setup, inverter, battery) that determine the price nowadays.


> Dad in Victoria Australia just got 10.6kw fully installed and operational for $4000 AUD. ($2,700 USD)

How the heck are the panels even installed and connected for that price? That's about 25 panels, IIRC. What about the installation material and the DC/AC inverter?


All covered in that price.

Government incentives. Spend tax dollars putting solar on literally every roof in the country instead of more coal or nuke plants.


I think it's important to note that not all collisions are equally dangerous. Consider a sat on a polar orbit colliding with one on an equatorial orbit, or two satellites heading in different directions. That is going to be spectacular. OTOH, these kinds of collisions are unlikely and should be manageable by just assigning certain shells (say 5 km) for every possible direction and orientation.

If two Starlink satellites collide that go roughly in the same direction, it's not exactly a huge problem.

I think the biggest issue is to coordinate this and potentially disallow some eccentric orbits.


Not quite how it works, unfortunately.

Once you've got even hundreds of satellites in non-equatorial orbits, trying to provide global coverage - their ground tracks very frequently cross each other. Even if they're all at the same orbital inclination. While those mostly won't be 90 degree crossings - the great majority will involve several km/s relative velocity. And you'd run out of (say) 5km LEO shells very quickly.


But a LEO orbit is roughly 43,000 kilometers around, and the satellite is maybe a meter across. That's a very low probability of a collision per crossing.

I get that 'probably safe' or '0.001% chance of destruction per day' is not very satisfying for an investment that cost millions, but everything always comes down to odds. None of these satellites are eternal, even if they're the only thing in their orbit.


Is it still a small number when multiplied by the square of the number of satellites and the number of times they orbit each day?


Don't know. But I'm sure that people at NASA and other such places have done that calculation. I just wanted to point out that orbital space is big, so you have to do the math to see if there's an actual problem.


They have, and it is. It's a big problem.


It starts by believing that there are distinct human races (which there are not). That alone makes most US Americans racist based on language alone. No (sane) German would nowadays speak of "Rasse" to describe someone with a different skin color.

Then, of course, racism consists of the belief that some races are intrinsically less valuable (in whatever sense) than others. I didn't see Scott Adams voice that part. But I might have missed it, or it might have been implied.

But it's important to note that US identity politics of the last couple of decades looks increasingly weird to me as an outsider in any case.


Using "Rasse" as a direct dictionary translation and then saying that it doesn't have the same cultural connotation in another culture is nonsensical. The term "race" means something in the context of American culture, which is due to our troubled history. And Adams' comments are also in the context of that same culture.

But I believe some other countries have their own challenges living up to their nominal multi-ethnic ideals. Surely if I pop open a copy of Der Spiegel and start commenting about the finer points of an immigration policy proposal from an American perspective, I am going to get something wrong.


"It starts by believing that there are distinct human races (which there are not). That alone makes most US Americans racist based on language alone."

Sorry, but no.

The scientific community has moved away from 'race' in the biological sense (although there is debate) but the sociological construct of race, which is what we refer to in this context, obviously exists.

When a person 'self identifies' as Black, or Asian or White - that is 'race' - in the 'social construct' sense and it's perfectly accepted and normal - the recognition of that does not make one racist.


> but the sociological construct of race, which is what we refer to in this context, obviously exists.

I doubt that something built on self-identification yields a meaningful concept of racism.


It's clear as day, and it's hard to understand that someone could be confused by this.

It's literally on the census form.

'Race' is a cultural euphemism for broader ethnicity.

AKA 'European = White' - 'African = Black' - more or less.

These are not arbitrary groups of 'self identification' like 'emo' or 'punk'.

These groups are even self organizing - every single US city is built around small enclaves of groups - they pop right out on urban maps.

We've been fighting tribal wars since the dawn of time; it's not hard to imagine how 'Flemish vs. Dutch' is going to extend to 'European vs. African'.

Elon Musk, on twitter, 2 days ago, was interjecting on this horrible bit of 'race war' nonsense, talking about 'blacks eviscerate whites' etc..

Again - while there's feeble support for the notion of 'race' in the field of biology (although I think it's more controversial than stated), we obviously have cultural foundations around those concepts.

Honestly - this kind of argument is plausibly the 'worst thing' about HN. I don't understand how something so common and obvious could be denied in the face of some odd, hair-splitting rhetoric.


Thing is: Industrialization is about repeating manufacturing steps. You don't need to repeat anything for software. Software can be copied arbitrarily for no practical cost.

The idea of automation creating a massive amount of software sounds ridiculous. Why would we need that? More games? They can only be consumed at the pace of the player. Agents? They can be reused once they fulfill a task sufficiently.

We're probably going to see a huge amount of customization where existing software is adapted to a specific use case or user via LLMs, but why would anyone waste energy to re-create the same algorithms over and over again.


People re-create the same algorithms all the time for different languages or because of license incompatibility.

I'm personally doing just that because I want an algorithm written in C++ in an LGPL library working in another language.


In fact this is a counter argument to the point of the article. You're not making 'just more throwaway software' but instead building usable software while standing on the shoulders of existing algo's and libraries.


Well yes. To me industrial software is hardened algorithms, not throwaway slop like the author is arguing. LLMs are very good at porting existing algorithms and as you say it’s about standing on the shoulders of giants. I couldn’t write these from scratch but I can port and harden an algo with basic engineering practices.

I like the article except the premise is wrong - industrial software will be high value and low cost as it will outlive the slop.


The "industrialisation" concept is an analogy to emphasize how the costs of production are plummeting. Don't get hung up pointing out how one aspect of software doesn't match the analogy.


> The "industrialisation" concept is an analogy to emphasize how the costs of production are plummeting. Don't get hung up pointing out how one aspect of software doesn't match the analogy.

Are they, though? I am not aware of any indicators that software costs are precipitously declining. At least as far as I know, we aren't seeing complements of software developers (PMs, sales, other adjacent roles) growing rapidly, which would indicate a corresponding supply increase. We aren't seeing companies like Microsoft or Salesforce or Atlassian or any major software company reduce prices due to a supply glut.

So what are the indicators (beyond blog posts) this is having a macro effect?


It's the central point of the metaphor. Software is not constrained by the speed of implementation, it's constrained by the cost of maintenance and adaptation to changing requirements.

If that wasn't the case, every piece of software could already be developed arbitrarily quickly by hiring an arbitrary amount of freelancers.


But focusing on production cost is silly. The cost to consumers is what matters. Software is already free or dirt cheap because it can be served at zero marginal cost. There was only a market for cheap industrial clothes because tailor made clothes were expensive. This is not the case in software and that's why this whole industrialization analogy falls apart upon inspection


> You don't need to repeat anything for software. Software can be copied arbitrarily for no practical cost.

...Or so think devs.

People responsible for operating software, as well as people responsible for maintaining it, may have different opinions.

Bugs must be fixed, underlying software/hardware changes and vulnerabilities get discovered, and so versions must be bumped. The surrounding ecosystem changes, and so, even if your particular stack doesn't require new features, it must be adapted (a simple example: your React frontend breaks because the nginx proxy changed its subdirectory).


You're describing maintenance of existing software or even existing deployments that's a completely different beast.

I am certain cost can go down there, but that will only compete against SaaS where the marginal cost of adding another customer is already zero.


> You're describing maintenance of existing software or even existing deployments that's a completely different beast.

Yeah, that's a part of software that's often overlooked by software developers.


You're absolutely correct! ( ;) )

The issue is that generation of error-prone content is indeed not very valuable. It can be useful in software engineering, but I'd put it way below the infamous 10x increase in productivity.

Summarizing stuff is probably useful, too, but its usefulness depends on you sitting between many different communication channels and being constantly swamped in input. (Is that why CEOs love it?)

Generally, LLMs are great translators with a (very) lossily compressed knowledge DB attached. I think they're great user interfaces, and they can help streamline bureaucracy (instead of getting rid of it), but they will not help bring down the cost of production of tangible items. They won't solve housing.

My best bet is in medicine. Here, all the areas that LLMs excel at meet. A slightly dystopian future cuts the expensive personal doctors and replaces them with (a few) nurses and many devices and medicines controlled by a medical agent.


So maybe I am too much of a layperson here, but even without any direct therapeutic effects, it is pretty remarkable to have an easily scalable mechanism to get self-replicating agents into tumors, but nowhere else, is it not?


Yes it is amazing! Solid tumors tend to be poorly oxygenated, as they don't have a good network of blood vessels to supply them. The bacteria in these experiments can only live in low oxygen environments, so they will multiply in the tumor and die in any other part of the body they end up in. It's a clever idea, hopefully it will be successful.


Even lay-er person, but maybe the specificity is not that impressive in mice? Perhaps when you scale to more complex animals it is inevitable to see false positives (detrimental effects to healthy cells)?


The answer is type-safety. LLMs produce errors, as do humans. A language with carefully designed semantics and a well-implemented safety checker (or maybe even a proof assistant) will help express much more correct code.

I've had the most success running Claude iteratively with mypy and pytest. But it regularly wants to just delete the tests or get rid of static typing. A language like Haskell augmented with contracts over tests wouldn't allow that. (Except for diverging into a trivial ad-hoc dynamic solution, of course.)


this is why i like (and vibe code in!) nim. it's python-ish enough, strongly typed, and creates fast, compiled binaries.

