Hacker News | cormacrelf's comments

For GET /, sure, and some mature load balancers can do this. For POST /upload_video, no. You'd have to store all in-flight requests, either in-memory or on disk, in case you need to replay the entire thing with a different backend. Not a very good tradeoff.
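
A toy sketch of that tradeoff (backend names and the `send` callable are made up for illustration): to retry a POST on a different backend, the proxy has to buffer the entire body so it can replay the same bytes.

```python
import io

# Toy sketch: to retry a request on a different backend, the proxy must
# hold the entire body in memory (or on disk) until some backend succeeds.
def proxy_with_retry(body_stream, backends, send):
    body = body_stream.read()           # buffer the whole upload
    for backend in backends:
        try:
            return send(backend, body)  # replay the same bytes
        except ConnectionError:
            continue                    # next backend gets the full body again
    raise RuntimeError("all backends failed")

# A flaky first backend forces a replay of the buffered body.
calls = []
def send(backend, body):
    calls.append((backend, len(body)))
    if backend == "b1":
        raise ConnectionError
    return "ok"

result = proxy_with_retry(io.BytesIO(b"x" * 1024), ["b1", "b2"], send)
```

For a 1 GiB video upload, that buffer is the whole problem: every in-flight request pins its full body until a backend accepts it.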

And at the same time, Apple claimed it was the biggest ever album release by number of downloads, or something like that. They were not only messing with our libraries, they were claiming we wanted it and were in fact U2 fans.


You can absolutely do that and there is nothing for general linear algebra libraries to do.

The actual paper is very clear about what it's for: https://fiteoweb.unige.ch/~eckmannj/ps_files/ETPRL.pdf

It says:

    Consider now a general time-dependent field B(t) of duration T. The pulse B(t) may be extremely convoluted ... Can one make the field B(t) return the system to its original state at the end of the pulse...?
This pulse is modelled as a long sequence of rotations. For maths purposes if you had such a sequence, you can obviously just multiply all the rotations together and find the inverse very easily. For physics purposes, you don't really have access to each individual rotation, all you can do is tune the pulse. Creating an "inverse pulse" is quite unwieldy, you might literally need to create new hardware. The paper asks "what if we just amplified the pulse? Can we change this alone and make it not impart any rotation?"
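
The "maths purposes" point can be sketched concretely (plain Python, made-up angles): compose a sequence of rotations into a single matrix R; since rotations are orthogonal, R's transpose is its inverse, so one rotation undoes the whole pulse.

```python
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

# An arbitrary "pulse": a sequence of rotations about different axes.
pulse = [rot_z(0.3), rot_x(0.7), rot_z(-0.2), rot_x(0.1)]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
for step in pulse:
    R = matmul(step, R)

# R^T R is the identity: the inverse of the whole pulse is one rotation.
I = matmul(transpose(R), R)
```

The physics catch, as the comment says, is that you can't actually apply `transpose(R)` in the lab; all you get to turn are the knobs λ in B(t) → λB(t) or B(t) → B(λt).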

They are trying to take any pulse B(t) and zero out any rotation it imparts on some particle or whatever by

    uniformly tuning the field’s magnitude, B(t) → λB(t) or by uniformly stretching or compressing time, B(t) → B(λt)
And the answer is that you can do that, but you might have to perform the pulse twice.


So it’s similar thinking to spin echoes.

https://en.wikipedia.org/wiki/Spin_echo


How about -F -regexthatlookslikeaflag? Verbatim, that errors out as the command line parsing tries to interpret it as a flag. If you don’t have -F, then you can escape the leading hyphen with a backslash in a single quoted string: '\-regex…', but then you don’t get fixed string search. And -F '\-regex…' is a fixed string search for “backslash hyphen r e g e x”. The only way is to manually escape the regex and not use -F.

I think maybe a syntax like -F=-regex would work.


Yeah, that's a good call out. You would need `rg -F -e -pattern`.


The convention is to use -- to denote the end of options in command-line tools - anything after that is parsed as a normal argument even if it starts with a dash. If rg doesn't support that, it should.


It has since day 1. And you can use the `-e/--regexp` flag too, just like grep.
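
Both escape hatches can be demonstrated with plain grep, and rg spells them the same way:

```shell
# Fails: "-pattern" is parsed as flags, not as the pattern.
#   grep -F -pattern file

# Works: -e marks the next argument as a pattern, even with a leading dash.
printf 'foo\n-pattern\n' | grep -F -e -pattern   # prints "-pattern"

# Works: -- ends option parsing; everything after it is an operand.
printf 'foo\n-pattern\n' | grep -F -- -pattern   # prints "-pattern"
```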


From the article:

> The way Nixpacks uses Nix to pull in dependencies often results in massive image sizes with a single /nix/store layer ... all Nix and related packages and libraries needed for both the build and runtime are here.

This statement is kinda like “I’m giving up on automobiles because I can’t make them go forward”. This is one of the things Nix can do most reliably. It automates the detection of which runtime dependencies are actually referenced in the resulting binary, using string matching on /nix/store hashes. If they couldn’t make it do that, they’re doing something pretty weird or gravely wrong. I wouldn’t even know where to start to try to stop Nix from solving this automatically!
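
The string-matching idea is simple enough to sketch in a few lines (this is a toy illustration, not Nix's actual scanner, and the hash and package name are made up): scan the bytes of a build output for literal /nix/store/<hash>-<name> references, and whatever you find is a runtime dependency.

```python
import re

# Nix store paths embed a 32-character hash followed by the package name.
STORE_PATH = re.compile(rb"/nix/store/([a-z0-9]{32}-[\w.+-]+)")

# Pretend this is the raw contents of a built binary.
blob = (
    b"\x7fELF..."
    b"/nix/store/0123456789abcdfghijklmnpqrsvwxyz-glibc-2.38/lib/libc.so.6\x00"
    b"...nothing else references the store..."
)

# Every store path literally mentioned in the output is a runtime dep.
runtime_deps = {m.group(1) for m in STORE_PATH.finditer(blob)}
```

Because the hashes are long and random, false positives are essentially impossible, which is why closure detection is one of the most reliable things Nix does.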

I wouldn’t read too much into their experience with it. The stuff about versioning is a very normal problem everyone has, would have been more interesting if they attempted to solve it.


To be fair to the authors, this IS a problem, albeit one they phrased poorly, especially when building docker images via nix. The store winds up containing way more than you need (e.g. all of Postgres, not just psql), and it can be quite difficult to patch individual packages. Derivations are also not well-pruned in my experience, leading to very bloated docker images relative to using a staged Dockerfile.

Image size isn’t something we’ve focused a lot on, so I haven’t spent a ton of time on it, but searching for “nix docker image size” shows it to be a pretty commonly encountered thing.


> Meta has a sophisticated implementation of a target determinator on top of buck2, but I don’t believe it is open-source.

It is: https://github.com/facebookincubator/buck2-change-detector
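
At its core, target determination is reachability over the reverse dependency graph: given the targets whose sources changed, everything that transitively depends on them must be rebuilt or re-tested. A toy sketch (the graph and target names here are made up; real tools like the one linked above operate on the build system's own graph):

```python
from collections import deque

# Forward edges: target -> its dependencies.
deps = {
    "//app:bin": ["//lib:core", "//lib:net"],
    "//lib:net": ["//lib:core"],
    "//lib:core": [],
    "//tools:fmt": [],
}

# Invert the edges: who depends on whom.
rdeps = {t: [] for t in deps}
for target, ds in deps.items():
    for d in ds:
        rdeps[d].append(target)

def affected(changed):
    # BFS over reverse dependencies from the changed targets.
    seen, queue = set(changed), deque(changed)
    while queue:
        for r in rdeps[queue.popleft()]:
            if r not in seen:
                seen.add(r)
                queue.append(r)
    return seen

print(sorted(affected({"//lib:core"})))
# ['//app:bin', '//lib:core', '//lib:net']
```

A change to `//tools:fmt` would affect only itself, which is exactly the CI-time saving these tools are after.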

> Some tools such as bazel and buck2 discourage you from checking in generated code and instead run the code generator as part of the build. A downside of this approach is that IDE tools will be unable to resolve any code references to these generated files, since you have to perform a build for them to be generated at all in the first place

Not an issue I have experienced. It's pretty difficult to get into a situation where your IDE is looking in buck-out/v2/gen/781c3091ee3/... for something but not finding it, because the only way it knows about those paths is by the build system building them. Seeing this issue would have to involve stale caches in a still-running IDE after cleaning the output folder, which is a problem a repo of any size can have. In general, if an IDE can index generated code built with the language's own build system, then it's not a stretch to have it index generated code from another one.

The problem is more hooking up IDEs to use your build system in the first place. It's a real slog to support many IDEs.

Buck recently introduced an MSBuild project generator in which all build commands shell out to buck2. I have seen references to an Xcode one as well, and I think there's something for Android too. The rust-analyzer support works pretty well, though I run a fork of it. And that's just a few. There is a need (somewhat like LSP, but not quite) for a degree of standardization: there is a Cambrian explosion of different build systems, and each company that maintains one only uses one or two IDEs and integrates with those. If you want to use a build system with an IDE they don't support, you are going to have a tough time. Last I checked, the best effort by a Language Server implementation at being build-system agnostic is gopls with its "gopackagesdriver" protocol, but even then I don't think anyone but Bazel has integrated with it: https://github.com/bazel-contrib/rules_go/wiki/Editor-and-to...


Maybe it’s better self-hosted, but GitLab is almost unbearably slow. I booted up a Gerrit instance to compare, and simply rendering an MR page takes maybe 10 seconds vs. zero. GitHub is still 10x faster. GitLab manages to be almost that slow even for cached pages, making you wait, then realise it’s outdated, and load again, totalling maybe 20 seconds just to “go back to the MR list”. It’s awful.

Whatever it is you think you might like about GitLab in theory, it’s much worse when this is your reality. When it takes that long to render a single MR, you do not want to be creating more of them than you have to, and you certainly don’t want to make yourself and the rest of your team navigate between MRs to do code review.


At least not directly, as of the 2022 disclosure form. https://www.scotusblog.com/wp-content/uploads/2023/06/Robert...


Sadly it looks like Supreme Court financial disclosure forms aren't particularly accurate representations of their conflicts of interest: https://www.propublica.org/series/supreme-court-scotus


Sure, wasn’t sure that even needed to be said at this point, but even so, this form was probably produced by a broker. Even if it does not cover things like this https://www.businessinsider.com/jane-roberts-chief-justice-w..., I think it was likely accurate as to his holdings in publicly traded stocks and index funds.


Some people may enjoy going through an enormous learning curve to do configuration like that, but the benefits there are pretty abstract and personal, and the pressure to make the onboarding any easier is very low. It's partly because these kinds of users are willing to (a) suffer through a lot in the name of learning and feel good about having done that, and (b) read and write what appears to be a dozen book-length tomes of documentation, that it doesn't get any easier for beginners. I know because I was also one of them in 2015-16 or thereabouts.

Nix doesn't need any more home-manager tutorials, because it doesn't need any more small-time tinkerers. It would benefit more from becoming essential to a bunch of businesses who will become invested in making their own developer experience acceptable at scale, and who will have to improve Nix to that end.

Pretty soon a bunch of people are going to realise they actually do need the exact same version of every tool in every toolchain on every machine in a team, to make use of the transformative caching abilities of tools like Bazel and Buck2. And if that catches on, I would not be surprised to see an alternative Nix frontend configured in Starlark, like every other tool in that arena. There's already a buck2-nix that generates dhall under the hood.


Car and aircraft bodies generally fall under the “useful articles” exception, so they are not copyrightable. Otherwise they would be sculptures.

There is a separate regime that covers useful articles: design patents, which have a much shorter term. The design patents can AFAIK cover things like toys, I don’t know about game assets. You might have to look at the actual grant of patent rights to see what is claimed. I don’t get the impression that the big aircraft manufacturers care about games for plane nerds.


I would be a little surprised if design patents can protect against portrayals in media like games/movies.

