Hacker News | eichin's comments

Interesting detail on the algorithm, but it seems to completely miss that if you care about non-streaming performance, there are parallel versions of xz and gzip (pigz for gzip; pixz encodes compatible metadata about the block boundaries, so plain xz can still decompress the output while pixz itself can use as many cores as you let it have.) Great for disk-image OS installers (the reason I was benchmarking it in the first place - but this was about 5 years back, I don't know if those have gotten upstreamed...)

There's also a parallel version of bzip2, pbzip2.

https://man.archlinux.org/man/pbzip2.1.en

And zstd has been multi-threaded from the beginning.
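To make the "block boundaries" idea concrete, here's a minimal sketch (my own illustration, not any of these tools' actual code): split the input into chunks, compress each chunk on its own thread, and concatenate. The gzip format permits concatenated members, so a stock single-threaded decompressor still reads the whole thing back - the same trick pigz/pixz rely on. zlib releases the GIL during compression, so plain threads get real parallelism here.

```python
import gzip
from concurrent.futures import ThreadPoolExecutor

def parallel_gzip(data: bytes, chunk_size: int = 1 << 20) -> bytes:
    # Split into independent chunks and gzip each one concurrently.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        members = pool.map(gzip.compress, chunks)
    # Concatenated gzip members are themselves a valid gzip stream.
    return b"".join(members)

if __name__ == "__main__":
    payload = b"all work and no play " * 200_000   # ~4 MB
    packed = parallel_gzip(payload)
    # A standard, single-threaded decompressor reads all the members:
    assert gzip.decompress(packed) == payload
```

(The per-chunk compression ratio is slightly worse than one big stream, which is the usual trade-off with these tools.)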


I'm surprised that this shows anything running usefully on my 2021-era thinkpad (with "Iris Xe" Tiger Lake graphics), which inspires me to ask - are external GPUs useful for this sort of thing?

Particularly because, although there are smaller "study" versions as he worked up to it, the painting itself is 10 feet wide and most references don't include that context (at least not visually, they often list the numbers.)

This view in the wiki communicates the scale and setting in the museum. I have a photo further back.

https://en.wikipedia.org/wiki/A_Sunday_Afternoon_on_the_Isla...


Yes, I happen to live two blocks from the Art Institute and have seen the painting dozens of times, but it's still pretty impactful every time.

Some of their earlier videos go into a lot of detail on the safety interlocks (including that the radiation near the device can be lower than ambient because it's basically a large chunk of shielding :-)

As for pricing, https://news.ycombinator.com/item?id=45392896 had some numbers from 5 months ago. It seems like the kind of thing that you'd want as a nearby service, unless you needed to do continuous inspection (they have some automated conveyor sampling products too, it looks like.) My last company had a few 3d-printed components that would have been interesting to spot check after wear testing, but for a lot of things, the competition for the scan is "open it up with a screwdriver" :-)


I bet it's something you can lease with a traceable calibration certificate.

Presumably he got better in the intervening decades, but part of how we stopped the Morris Worm was that it was badly written (see the various versions of "With Microscope and Tweezers" for detail, particularly about the "am I already running" check that ended up being why it got noticed, "because exponential growth".) Even for "bored 1st year grad student skipping lectures" it should have been better code :-)

(Also, writing a Scheme dialect was a first-semester CS problem set - if you're in a 1980s academic CS environment it was more effort to not accidentally write a lisp interpreter into something, something in the water supply...)


Actually, while the actual nodes are a Linux thing, bash itself implements (and documents) them directly (in redirections only), along with /dev/tcp and /dev/udp (you can show with strace that bash doesn't reference the filesystem for these, even if they're present.)
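A quick sketch of the documented behavior (see the REDIRECTION section of the bash manual): redirecting to /dev/fd/N acts as a duplication of descriptor N, whether bash emulates it internally or the OS happens to provide the node.

```shell
#!/usr/bin/env bash
exec 3>&1                    # put a copy of stdout on fd 3
echo "via fd 3" > /dev/fd/3  # resolves to "duplicate fd 3", prints via stdout
exec 3>&-                    # close fd 3 again
```

(/dev/tcp/HOST/PORT works the same way in redirections, but needs network access to demonstrate.)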

So, you're not wrong, but...


BITD the one "fd sanitizer" I ever encountered was "try using the code on VxWorks", which at the time was "POSIX-inspired" at best - fds actually were pointers, so effectively random values rather than small integers. It didn't catch enough things to be worth the trouble, but it did clean up some network code (ISTR I was working on SNTP, Kerberos v4, and Kerberized FTP when I ran into this...)


The easy answer is Westinghouse (look for the youtube short about "things that spin"...)


As someone who worked for Nokia around the iPhone launch (on map search, not phones directly) - I also wanted to believe this at the time. But in retrospect, it feels like what actually mattered was that capacitive multi-touch screens were the only non-garbage interface, and only Apple bought FingerWorks...

Not clear that this is a helpful interpretation, other than "we're in the primordial ooze stage and the thing that matters will be something none of the current players have", but that's hard to take to the bank :-)


If parallelism adds indeterminacy, then you have a bug (probably in working out the dependency graph.) Not an unusual one - lots of open source in the 1990s had warnings about not building above -j1 because multi-core systems weren't that common and people weren't actually trying it themselves...


Whenever I traced them, those bugs were always in the logic of the makefile rather than in the compiler. A target in fact depends on another target (generally from much earlier in the file) but the makefile doesn't specify that.
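A made-up minimal example of that class of bug (all file and tool names hypothetical): one target consumes a file another target generates, but never declares the dependency, so serial builds only work by textual luck.

```make
all: generate prog          # with -j1, textual order saves you:
                            # "generate" happens to run before "prog"

generate:
	./generate-source > gen.c    # hypothetical code generator

prog: main.c                # BUG: prog also reads gen.c but never says so;
	cc -o prog main.c gen.c      # with -j2 this can race "generate"

# The fix is to state the real dependency, e.g.:
#   prog: main.c gen.c
#   gen.c:
#   	./generate-source > gen.c
```

With the dependency stated, make's graph forces the generation step first at any -j level.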

