Hacker News | restalis's comments

"to Thai, squid, octopus, and cuttlefish are all ปลาหมึก. For English speakers, those are similar things, but all clearly distinct. But for Thai speakers, they're all ปลาหมึก, just different types."

Then ปลาหมึก = Coleoidea? If so, squid, octopus, and cuttlefish are (in English and many other languages) all just types of coleoids: https://en.wikipedia.org/wiki/Coleoidea


Also, even with today's knowledge, the kind of satellites that aren't much more than mirrors can be designed to change their profile/surface and thus reduce their absorption of incident radiation, if they had to cross the space between the sun and the sunlight collector areas.

"I’ve seen too many «well crafted» implementations of such technically vexing features as «fetching data and returning it» that were so overengineered that it should have been considered theft of company money."

This judgement has merit. Over the years, however, I've come to see that over-engineering tendency as a manifestation of exploratory spirit in one's craft. This is how Unix came to be created at Bell Labs. As far as their managers were concerned, Ken Thompson and Dennis Ritchie worked on programs like the "ed" editor, so they did care about "value being provided to the user (or the business)". What was later officially named Unix was not pitched as an operating system, but framed mostly as just a needed way to organize the growing set of utilities, among other things (i.e. as a footnote). The over-engineered bits in a given project (and the experience gained on them) may become useful for something else. People (tend to) do this kind of stuff. But should they be blamed for it, considering the enticing promise of growth and development of new technologies that employers themselves dangle as part of the recruitment game?


Bell Labs employees were explicitly hired to do research.


"We should look at how people are using LLMs right now instead of chasing promises of superintelligence."

This. When more computing power and memory became available to software engineers, we saw how that impacted software development. Sure, there were happy stories of new classes of problems being attacked that couldn't be before due to resource limits, but a lot of software simply stopped being frugal and did pretty much what it did before, just consuming much more. Extrapolating to the staggering resources being poured into AI solutions, I'd be surprised if most of it isn't simply consumed generating higher-resolution (and longer) videos.


Pretty sure you always have to oversell. They're of course doing that already.


rsync does what it was designed to do, and the lack of scope creep is not a bad thing. There is "fpsync", another tool built on top of rsync (mentioned in one of the comments on the article's page), that covers the parallel-processing use case: https://manpages.debian.org/bullseye/fpart/fpsync.1.en.html
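For context, fpsync's approach is roughly: split the file set into similarly sized chunks (via fpart), then run one rsync per chunk in parallel. A toy Python sketch of the size-based partitioning step (illustrative only, not fpart's actual algorithm):

```python
def partition_by_size(files, max_bytes):
    """Greedily split (path, size) pairs into chunks of roughly
    max_bytes each; each chunk would then feed its own rsync worker."""
    chunks, current, current_size = [], [], 0
    for path, size in files:
        if current and current_size + size > max_bytes:
            chunks.append(current)
            current, current_size = [], 0
        current.append(path)
        current_size += size
    if current:
        chunks.append(current)
    return chunks

files = [("a.bin", 600), ("b.bin", 500), ("c.bin", 400), ("d.bin", 700)]
print(partition_by_size(files, 1000))
# → [['a.bin'], ['b.bin', 'c.bin'], ['d.bin']]
```

Each chunk could then be handed to a separate rsync process via --files-from, which is essentially what fpsync automates.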


rsync was designed to synchronize directories of files from one directory to another, optionally on another host, and to do so efficiently.

It doesn’t do that.

Even before this conversation, a few weeks ago I started working on a modern replacement.


"No stuffing about with swapping physical hardware just because I've temporarily relocated myself."

That's exactly the use case for which carriers offer roaming plans. The bonus is that you (as in your phone number) remain connected and reachable by your contacts, since no other phone number is involved at any point. One should not need to change the SIM unless one is changing phones.


Managing the light source, specifically the 13.5nm-wavelength light generated from overheated tin plasma, is in fact the most challenging part of the machine. Here "managing" includes hitting a rightly sized tin droplet with lasers at the right angles, all the complicated fluid dynamics needed to get the most out of that precious lighting moment, and of course the proper handling of that spark event's after-effects. As opposed to the rest of the machine (like directing the EUV light to the reticle through those mirrors you mention), the light-generation part is dynamic, very easy to get wrong, and very costly to iterate on.


I was thinking about this too. When you read a program, there is this information payload, the metaphorical ball you have to keep your eyes on, while more or less forgetting the rest as soon as it isn't relevant any more. In the functional paradigm it's like watching a bunch of such balls being juggled instead (plus the expectation to admire it), and that's just wasteful of the reader's attention.
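To make the metaphor concrete, here's a toy Python comparison (my own illustration, not from the thread): the imperative version threads a single running value the reader can track top to bottom, while the functional version asks the reader to hold several intermediate sequences in mind at once.

```python
# Imperative: one "ball" (the running total) to keep your eyes on.
def total_imperative(orders):
    total = 0
    for order in orders:
        if order["status"] == "paid":
            total += order["amount"]
    return total

# Functional: the same payload threaded through composed stages;
# the reader juggles the intermediate sequences as well.
def total_functional(orders):
    paid = filter(lambda o: o["status"] == "paid", orders)
    amounts = map(lambda o: o["amount"], paid)
    return sum(amounts)

orders = [{"status": "paid", "amount": 10},
          {"status": "open", "amount": 99},
          {"status": "paid", "amount": 5}]
print(total_imperative(orders), total_functional(orders))
# → 15 15
```

Both compute the same thing; the difference is purely in how much the reader has to keep in the air while following along.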


Most likely, it didn't happen (yet) due to kernel-related stuff still being actively worked on¹ and (more importantly) due to a shortage of developers willing and able to tackle that kind of challenge.

¹ At the time of this writing, there are open PRs like this one: https://github.com/reactos/reactos/pull/8422


"putting yourself and your hard work in legal risk"

Like what? I'm genuinely curious what personal risks anyone faces from contributing to ReactOS. I'm also curious what kind of legal risk may threaten the work itself. I mean, even in the unlikely scenario that something gets proven illegal and ordered to be removed from the project, what would prevent that particular expunged part from being re-implemented by some paid contractor (now under legally indisputable circumstances), thus rendering the initial legal action moot?

