A very important thing about constraints is that they also liberate. TUIs automatically work over ssh, can be suspended with ctrl-z and such, and the keyboard focus has resulted in helpful conventions like ctrl-R that tend to not be as prominent in GUIs.
This is a very interesting question, and a great motivator for Galois theory; it's kind of like a Zen koan (e.g. "What is the sound of one hand clapping?").
But the question is inherently imprecise. As soon as you make a precise question out of it, that question can be answered trivially.
In general, the nth roots of 1 form a cyclic group under complex multiplication (each root is a rotation by some multiple of 2pi/n).
One of the roots is 1; choosing either of the two roots adjacent to it as a privileged generator amounts to choosing whether to draw the same complex plane clockwise or counterclockwise.
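To put that in symbols (my notation, not the commenter's):

```latex
\[
  \mu_n \;=\; \{\, \zeta_k = e^{2\pi i k/n} \;:\; k = 0, 1, \dots, n-1 \,\}
\]
% Both \zeta_1 = e^{2\pi i/n} and \zeta_{n-1} = e^{-2\pi i/n} generate \mu_n.
% Complex conjugation (the automorphism swapping i and -i) exchanges the two,
% which is exactly the clockwise-vs-counterclockwise choice described above.
```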
The question is meaningless because isomorphic structures should be considered identical. A=A. Unless you happen to be studying the isomorphisms themselves in some broader context, in which case how the structures are identical matters. (For example, the fact that in any expression you can freely switch i with -i is a meaningful claim about how you might work with the complex numbers.)
Homotopy type theory was developed in part to address this idea of equivalence (e.g. isomorphism) coinciding with identity, but there isn't a general consensus on the topic, and different formalisms treat equivalence versus identity in different ways.
Sure. Either that or the reverse. "They're not the same" in the sense that they can't both be clockwise. "They are the same" in the sense that we could make either one clockwise.
Migrating to a place where people can contribute more to society is both a core tenet of individual liberty and one of the most positive-sum things any human being can do.
But given that you feign concern over visa holders' working conditions [1] while at the same time advocating for policies that lead to worse working conditions [2], perhaps you just hate freedom and were never acting in good faith in the first place.
It's just the process of learning the hard way that not heeding the warnings everyone gave is painful.
Now that they're facing the consequences we warned about (and prepped for as a result), they want to ignore them, thinking the monsters under the bed will go away.
Ah yes, the inevitable future where the only way we'll know to interact with the machine is through persuading a capricious LLM. We'll spend our days reciting litanies to the machine spirits like in 40k.
Praise and glory be to the Agentic gods. Accept this markdown file and bless this wretched body of flesh and bone with the light of working code. Long live the OpenssAIah
I’m sure you’re here to educate me, but this is not about criss-cross merges between two different work branches, this is about whether it’s better to rebase a work branch onto the main branch, or to pull the changes from the main branch to the work branch.
I have an early draft of a blog post about them :) as a source control expert who built both these systems and tooling on top of them for many years, I think they're the biggest and most fundamental reason rebases/linear history are better than merges.
> whether it’s better to rebase a work branch onto the main branch, or to pull the changes from the main branch to the work branch.
The problem with this is that the latter has an infinitely higher chance of resulting in criss-cross merges than the former (for which the chance is 0).
It's definitely not 0, because rebase-heavy workflows involve the rerere cache, which is a minefield of per-repo hidden merge changes. You get the results of "criss-cross merges" as "ghosts" you can't easily debug, because there aren't good UI tools for the rerere cache. About the best you can do is declare rerere-cache bankruptcy and make sure every repo clears its rerere cache.
I know that worst case isn't all that common or everyone would be scared of rebases, but I've seen it enough that I have a healthy disrespect of rebase heavy workflows and try to avoid them when given the option/in charge of choosing the tools/workflows/processes.
To be honest, I've used rebase-heavy workflows for 15 years and never used rerere, so I can't comment on that. (I've been a happy Jujutsu user for a few years; I've always wondered who the constituency for rerere is, and I'm curious if you could tell me!) I definitely agree in general that whenever you have a cache, you have to think about cache invalidation.
rerere is used automatically by git to cache certain merge-conflict resolutions encountered during a rebase, so that you don't have to reapply them when rebasing the same branch later. In general, when it works, which is most of the time, it's part of what keeps rebases feeling easy and lightweight, even though the final commits sometimes capture only a fraction of the data a real merge commit would. In some respects the rerere cache is a hidden collection of the rest of a merge commit.
In git, the merge (and the merge commit) is the primitive; rebase is a higher-level operation built on top of them, backed by a complex but not generally well understood cache that has only a few CLI commands and just about no UI support anywhere.
Like I said, because the rerere cache is so out of sight and out of mind, problems with it become weird and hard to debug. The situations I've seen have been truly rebase-heavy workflows with multiple "git flow" long-running branches, sometimes even with cherry-picking between them (generally the same sorts of things that create "criss-cross merge scenarios"). Rebased commits start to bring in regressions from other branches. Rebased commits start to break builds randomly. If what is getting rebased is a long-running branch, you probably don't have eyes on every commit, so finding where these hidden merge regressions happen becomes a full-branch bisect: you can't just focus on merge commits because you don't have them anymore, and every commit in a rebased branch is a candidate for a bad merge.
Personally, I'd rather have real merge commits, where you can trace both parents and the code that came from neither parent (the conflict fixes), and where you don't have to worry about ghosts of bad merges showing up in any random commit. Even the worst "criss-cross merge" commits are obvious in a commit log, and the ones I've seen have had enough data to fix surgically, often nearly as soon as they happen. rerere cache problems can go unnoticed for weeks, to everyone's confusion and with potentially a lot of hidden harm. You can't easily see both parents of the merges involved. You might even have multiple repos with competing rerere caches alternating the damage.
But also, yes, rerere cache problems are generally infrequent enough that when one does happen it might take weeks of research just to figure out what the rerere cache is for, that it might be the cause of some of the "merge ghosts" haunting your codebase, and how to clean it.
Obviously, by the point where you're rebasing git flow-style long-running branches and using frequent cherry-picks, you're in a rebase-heavy workflow that is painful for other reasons, and maybe that's an even heavier step beyond "rebase heavy" to some. But because the rerere cache is involved to some degree in every rebase, once you stop trusting the rerere cache it can be hard to trust any rebase-heavy workflow again. Like I said, personally I like the integration history, logs, and investigable diffs that real merge commits provide, and I prefer tools like `--first-parent` when I need "linear history" views or bisects.
You have to turn rerere on, though, right? I've never done that. I've also never worked with long-running branches; I tend to strongly prefer integrating into main and using feature flags if necessary. Jujutsu doesn't have anything like rerere as far as I know.
Hmm, yeah, looks like it is default off. Probably some git flow automation tool or other bad corporate/consultant-disseminated default config at a past job left the impression that it was on by default. It's the solution to a lot of papercuts when working with long-running branches, as well as the source of the new problems described above: problems that are visible with merge commits but hidden in rebases.
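For anyone following along, here is a quick sketch of the relevant knobs (plain git commands and paths; nothing here is specific to any particular workflow tool):

```sh
# rerere is opt-in; this is what turns it on (per repo, or add --global)
git config rerere.enabled true

# recorded conflict resolutions live here (the hidden cache)
ls .git/rr-cache

# forget the recorded resolution for a particular path
git rerere forget path/to/file

# "declare bankruptcy": wipe every recorded resolution in this clone
rm -rf .git/rr-cache
```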
Note that while C++ templates are more powerful than Rust generics at being able to express different patterns of code, Rust generics are better at producing useful error messages. To me, personally, good error messages are the most fundamental part of a compiler frontend.
True but you lose out on much of the functionality of templates, right? Also you only get errors when instantiating concretely, rather than getting errors within the template definition.
No, concepts interoperate with templates. I guess if you consider duck typing to be a feature, then using concepts can put constraints on that, but that is literally the purpose of them and nobody makes you use them.
If you aren't instantiating a template, then it isn't used, so who cares if it has theoretical errors to be figured out later? This behavior is in fact used to decide between alternative template specializations for the same template. Concepts do it better in some ways.
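To sketch where the error shows up in each case (a toy example; the function names and the specific constraint are mine, not from the thread):

```cpp
#include <concepts>
#include <string>

// Unconstrained template: a bad instantiation fails deep inside the body.
template <typename T>
T twice(T x) { return x + x; }

// Constrained template: the requirement is stated up front, so a bad call
// fails at the call site with a "constraints not satisfied" style message.
// The two styles coexist freely in the same code base.
template <typename T>
    requires requires(T a) { { a + a } -> std::convertible_to<T>; }
T twice_checked(T x) { return x + x; }

struct NoPlus {};

int main() {
    twice(std::string("ab"));          // fine
    twice_checked(std::string("ab"));  // fine
    // twice(NoPlus{});          // error points into the template body
    // twice_checked(NoPlus{});  // error at the call: constraint not satisfied
}
```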
> If you aren't instantiating a template, then it isn't used, so who cares if it has theoretical errors to be figured out later?
Just because you aren't instantiating a template a particular way doesn't necessarily mean no one is instantiating a template a particular way.
A big concern here would be accidentally depending on something that isn't declared in the concept, which can result in a downstream consumer who otherwise satisfies the concept being unable to use the template. You also don't get nicer error messages in these cases since as far as concepts are concerned nothing is wrong.
It's a tradeoff, as usual. You get more flexibility but get fewer guarantees in return.
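A toy illustration of that failure mode (the concept and the names are mine, not from the thread):

```cpp
#include <cstddef>
#include <forward_list>

// Deliberately minimal concept: only begin()/end() are declared.
template <typename T>
concept Range = requires(T t) { t.begin(); t.end(); };

template <Range R>
std::size_t count_all(const R& r) {
    // Accidental dependency: size() was never part of the concept.
    // std::forward_list satisfies Range but has no size(), so instantiating
    // this fails with an error from inside the body rather than a
    // "constraint not satisfied" message at the call site.
    return r.size();
}

int main() {
    std::forward_list<int> fl{1, 2, 3};
    (void)fl;
    // count_all(fl); // passes the constraint check, then breaks in the body
}
```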
Of course what you are describing is possible, but those scenarios seem contrived to me. If you have reasonable designs I think they are unlikely to come up.
>Just because you aren't instantiating a template a particular way doesn't necessarily mean no one is instantiating a template a particular way.
What I meant is, if the thing is not instantiated then it is not used. Whoever does come up with a unique instantiation could find new bugs, but I don't see a way to avoid that. Likewise someone could just superficially meet the concept requirements to make it compile, and not actually implement the things they ought to. But that's not a problem with the language.
> Of course what you are describing is possible, but those scenarios seem contrived to me. If you have reasonable designs I think they are unlikely to come up.
I suppose it depends on how much faith you place in the foresight of whoever is writing the template as well as their vigilance :P
As a fun (?) bit of trivia that is only tangentially related: one benefit of definition-site checking is that it can allow templates to be separately compiled. IIRC Swift takes advantage of this (polymorphic generics by default with optional monomorphization) and the Rust devs are also looking into it (albeit the other way around).
> Whoever does come up with a unique instantiation could find new bugs, but I don't see a way to avoid that.
I believe you can't avoid it in C++ without pretty significant backwards compatibility questions/issues. It's part of the reason that feature was dropped from the original concepts design.
> Likewise someone could just superficially meet the concept requirements to make it compile, and not actually implement the things they ought to.
Not always, I think? For example, if you accidentally assume the presence of a copy constructor/assignment operator and someone else later tries to use your template with a non-copyable type, it may not be realistic for the user to change their type to make it work with your template.
>I suppose it depends on how much faith you place in the foresight of whoever is writing the template as well as their vigilance :P
The actual effects depend on a lot of things. I'm just saying, it seems contrived to me, and the most likely outcome of this type of broken template is failed compilation.
>As a fun (?) bit of trivia that is only tangentially related: one benefit of definition-site checking is that it can allow templates to be separately compiled.
This is incompatible with how C++ templates work. There are ways to separately compile much of a template. If concepts could be made into concrete classes and used without direct inheritance, it might work, but I think that would require runtime concept checking. I've never tried to dynamic_cast to a concept type, but that would essentially be required to do it well. In practice, you can still do this without concepts by making mixins and concrete classes. It kinda sucks to have to use more inheritance sometimes, but I think one can easily design a program to avoid these problems.
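A minimal sketch of that inheritance-based route (all names are mine; this is just the general shape, not a recommendation):

```cpp
#include <iostream>
#include <string>
#include <vector>

// The abstract class plays the role the template parameter would have played.
struct Printable {
    virtual ~Printable() = default;
    virtual std::string print() const = 0;
};

// Separately compilable: this definition could live in its own .cpp file,
// with no template instantiation needed by callers.
std::string join(const std::vector<const Printable*>& items) {
    std::string out;
    for (const auto* p : items) out += p->print() + " ";
    return out;
}

struct Point : Printable {
    int x, y;
    Point(int x_, int y_) : x(x_), y(y_) {}
    std::string print() const override {
        return "(" + std::to_string(x) + "," + std::to_string(y) + ")";
    }
};

int main() {
    Point a{1, 2}, b{3, 4};
    std::cout << join({&a, &b}) << "\n";  // virtual dispatch instead of instantiation
}
```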
>I believe you can't avoid it in C++ without pretty significant backwards compatibility questions/issues. It's part of the reason that feature was dropped from the original concepts design.
This sounds wrong to me. Template parameters plus template code actually turns into real code. Until you actually pass in some concrete parameters to instantiate, you can't test anything. That's what I mean by saying it's "unavoidable". No language I can dream of that has generics could do any different.
>Not always, I think? For example, if you accidentally assume the presence of a copy constructor/assignment operator and someone else later tries to use your template with a non-copyable type it may not be realistic for the user to change their type to make it work with your template.
I wasn't prescribing a fix. I was describing a new type of error that can't be detected automatically (and which it would not be reasonable for a language to try to detect). If the template requires `foo()` and you just create an empty function that does not satisfy the semantic intent of the thing, you will make something compile but may not actually make it work.
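A tiny illustration, with a made-up concept:

```cpp
// Syntactic requirement only: "has a reset() member you can call".
template <typename T>
concept Resettable = requires(T t) { t.reset(); };

template <Resettable T>
void reuse(T& t) {
    t.reset();  // the template's intent: t is back to a clean state afterwards
}

struct Buffer {
    int used = 0;
    void reset() {}  // satisfies the concept, but never actually clears `used`
};

int main() {
    Buffer b;
    b.used = 42;
    reuse(b);  // compiles fine; b.used is still 42, so the semantic intent is unmet
}
```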
Sure. Contrivance is in the eye of the beholder for this kind of thing, I think.
> and the most likely outcome of this type of broken template is failed compilation.
I don't think that was ever in question? It's "just" a matter of when/where said failure occurs.
> This is incompatible with how C++ templates work.
Right, hence "tangentially related". I didn't mean to imply that the aside is applicable to C++ templates, even if it could hypothetically be. Just thought it was a neat capability.
> This sounds wrong to me.
Wrong how? Definition checking was undeniably part of the original C++0x concepts proposal [0]. As for some reasons for its later removal, from Stroustrup [1]:
> [W]e very deliberately decided not to include [template definition checking using concepts] in the initial concept design:
> [Snip of other points weighing against adding definition checking]
> By checking definitions, we would complicate transition from older, unconstrained code to concept-based templates.
> [Snip of one more point]
> The last two points are crucial:
> A typical template calls other templates in its implementation. Unless a template using concepts can call a template from a library that does not, a library with the concepts cannot use an older library before that library has been modernized. That’s a serious problem, especially when the two libraries are developed, maintained, and used by more than one organization. Gradual adoption of concepts is essential in many code bases.
And Andrew Sutton [2]:
> The design for C++20 is the full design. Part of that design was to ensure that definition checking could be added later, which we did. There was never a guarantee that definition checking would be added later.
> To do that, you would need to bring a paper to EWG and convince that group that it's the right thing to do, despite all the ways it's going to break existing code, hurt migration to constrained templates, and make generic programming even more difficult.
I probably could have used a more precise term than "backwards compatibility", to be fair.
> Until you actually pass in some concrete parameters to instantiate, you can't test anything. That's what I mean by saying it's "unavoidable".
I'm a bit worried I'm misunderstanding you here? It's true that C++ as it is now requires you to instantiate templates to test anything, but what I was trying to say is that changing the language to avoid that requirement runs into migration/backwards compatibility concerns.
> No language I can dream of that has generics could do any different.
I've mentioned Swift and Rust already as languages with generics and definition-site checking. C# is another example, I believe. Do those not count?
> I wasn't prescribing a fix. I was describing a new type of error that can't be detected automatically (and which it would not be reasonable for a language to try to detect). If the template requires `foo()` and you just create an empty function that does not satisfy the semantic intent of the thing, you will make something compile but may not actually make it work.
My apologies for the misdirected focus.
In any case, that type of error might be "new" in the context of the conversation so far, but it's not "new" in the PL sense since that's basically Rice's theorem in a nutshell. No real way around it beyond lifting semantics into syntax, which of course comes with its own tradeoffs.
That is all very good information. I don't often get into the standards and discussions about the stuff. Maybe ChatGPT or something can help me find interesting topics like this one but it hasn't come up so much for me yet.
>I'm a bit worried I'm misunderstanding you here? It's true that C++ as it is now requires you to instantiate templates to test anything, but what I was trying to say is that changing the language to avoid that requirement runs into migration/backwards compatibility concerns.
I see now. I could imagine a world where templates are compiled separately and there is essentially duck typing built into the runtime. For example, if the template parameter type is a concept, your type could be automatically hooked up as if it was just a normal class and you inherited from it. If we had reflection, I think this could also be worked out at compile time somehow. But I'm not very up to speed with what has been tried in this space. I'm guessing that concept definitions can be very extensive and also depend on complex expressions. That sounds hairy compared to what could be done without concepts, for example with an abstract class.
> I could imagine a world where templates are compiled separately and there is essentially duck typing built into the runtime.
The bit of my comment you quoted was just talking about definition checking. Separate compilation of templates is a distinct concern and would be an entirely new can of worms. I'm not sure if separate compilation of templates as they currently are is possible at all; at least off the top of my head there would need to be some kind of tradeoff/restriction added (opting into runtime polymorphism, restricting the types that can be used for instantiation, etc.).
I think both definition checking and separate compilation would be interesting to explore, but I suspect backwards compat and/or migration difficulties would make it hard, if not impossible, to add either feature to standard C++.
> For example, if the template parameter type is a concept, your type could be automatically hooked up as if it was just a normal class and you inherited from it.
Sounds a bit like `dyn Trait` from Rust or one of the myriad type erasure polymorphism libraries in C++ (Folly.Poly [0], Proxy [1], etc.). Not saying those are precisely on point, though; just thought some of the ideas were similar.
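For flavor, here is a hand-rolled miniature of that idea (this is not the API of Folly.Poly or Proxy, just the general shape; all names are mine):

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <utility>

// Type erasure: anything with a suitable print() can be wrapped,
// without inheriting from a common base class.
class AnyPrintable {
    std::function<std::string()> print_;
public:
    template <typename T>
    AnyPrintable(T value)
        : print_([v = std::move(value)] { return v.print(); }) {}

    std::string print() const { return print_(); }
};

struct Greeting {
    std::string print() const { return "hello"; }
};

int main() {
    AnyPrintable p = Greeting{};      // no inheritance, no template at the use site
    std::cout << p.print() << "\n";
}
```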
> but you lose out on much of the functionality of templates, right?
I don't think so? From my understanding what you can do with concepts isn't much different from what you can do with SFINAE. It (primarily?) just allows for friendlier diagnostics further up in the call chain.
You're right but concepts do more than SFINAE, and with much less code. Concept matching is also interesting. There is a notion of the most specific concept that matches a given instantiation. The most specific concept wins, of course.
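A small sketch of that ordering (the concepts and function names here are mine):

```cpp
#include <iostream>

template <typename T>
concept Animal = requires(T t) { t.name(); };

// Dog subsumes Animal: its constraint includes Animal<T> as a conjunct.
template <typename T>
concept Dog = Animal<T> && requires(T t) { t.bark(); };

template <Animal T> void describe(const T&) { std::cout << "some animal\n"; }
template <Dog T>    void describe(const T&) { std::cout << "a dog\n"; }

struct Beagle {
    const char* name() const { return "beagle"; }
    void bark() const {}
};

int main() {
    describe(Beagle{});  // both overloads match; the more constrained (Dog) wins
}
```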