Hacker News | wmwragg's comments

I generally use AWK as my scripting language, often writing the whole thing directly in AWK. It doesn't change, is always installed on all POSIX platforms, interfaces easily with the command line, and is a small, easy-to-learn language.
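
For illustration, a typical standalone script might look like this (a minimal sketch; the file name and column choice are just examples):

    #!/usr/bin/awk -f
    # Sum the second whitespace-separated column of the input,
    # e.g. ./total.awk data.txt
    { total += $2 }
    END { printf "total: %d\n", total }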


Could you please provide examples of how to do it? Especially given that the operating system calls don't return the output of the command? Thanks
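
(For context, the idiom being asked about: AWK's system() returns only the command's exit status, but a command piped into getline does yield its output. A minimal sketch, assuming a POSIX awk:)

    BEGIN {
        cmd = "date"
        # read the command's stdout line by line
        while ((cmd | getline line) > 0)
            print "captured:", line
        close(cmd)
    }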


You probably want the JPEG XL Info[1] site then. A nice site outlining what JPEG XL actually is.

[1] https://jpegxl.info/


While I get why, it bugs me that they have comparison images between jxl and other formats, yet the site doesn't actually use jxl, as evidenced by all the images displaying correctly in my Chrome browser.


It serves jxl if the browser supports it, using <picture>¹.

¹ https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
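
A minimal sketch of that pattern (file names hypothetical): browsers that understand JPEG XL fetch the .jxl source, everyone else falls back to the plain <img>.

    <picture>
      <source srcset="example.jxl" type="image/jxl">
      <img src="example.png" alt="comparison image">
    </picture>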


This is standard practice. They need to use currently supported lossless formats to display examples to people who don't have the new format yet. They are still showing accurate examples of compression artifacts. I'm not sure what else you'd expect them to do.


This is something I think a lot of people don't notice, or worry about: the shift of programming from a local task to one controlled by big corporations, essentially turning programming into a subscription model, just like everything else. If you don't pay the subscription, you will no longer be able to code, i.e. PaaS (Programming as a Service). Obviously at the moment most programmers can still code without LLMs, but when autocomplete IDEs became mainstream, it didn't take long before a large proportion of programmers couldn't program without one. I expect most new programmers coming in won't be able to "program" without a remote LLM.


That ignores the possibility that local inference gets good enough to run without a subscription on reasonably priced hardware.

I don't think that's too far away. Anthropic, OpenAI, etc. are pushing the idea that you need a subscription, but if open-source tools get good enough they could easily become an expensive irrelevance.


My concern is that inference hardware is becoming more and more specialized and datacenter-only. It won’t be possible any longer to just throw in a beefy GPU (in fact we’re already past that point).


Yep, good point. If they don't make the hardware available for personal use, then we wouldn't be able to buy it even if it could be used in a personal system.


There is that, but the way this usually works is that there is always a better closed service you have to pay for, and we see that with LLMs as well. Plus, you currently need a very powerful machine to run these models at anywhere near the speed of the PaaS systems, and I'm not convinced we'll be able to make the Moore's-law-style jumps required to get that level of performance locally, not to mention the massive energy requirements. You can only go so small, and we are getting pretty close to the limit. Perhaps I'm wrong, but we don't see the jumps in processing power we used to see in the '80s and '90s from clock-speed increases; the clock speed of most CPUs has stayed pretty much the same for a long time. As LLMs are essentially probabilistic in nature, they do open up options not available to current deterministic CPU designs, so that might be an avenue that gets exploited to bring this to local development.


> there is always a better closed service you have to pay for

Always? I think that only holds for a certain amount of time (different for each sector) after which the open stuff is better.

I thought it was only true for dev tools, but I had to rethink it when I met a guy (not especially technical) who runs open source firmware on his insulin pump because the closed source stuff doesn't give him as much control.


From some comments I read in this thread, costs could be around 100-500k USD to get anywhere near current frontier models. My concern is that the constant price reductions we saw in cost per transistor (either storage or logic) over the last ~three decades are over, and that the cost per transistor will only go up!


Local inference is already very good on open models if you have the hardware for it.


Yep I agree, I think people haven’t woken up to that yet. Moore’s Law is only going to make that easier.

I’m surprised by how good the models I can run on my old M1 Max laptop are.

In a year’s time open models on something like a Mac Studio M5 Ultra are going to be very impressive compared to the closed models available today.

They won’t be state of the art for their time but they will be good enough and you’ll have full control.


> on reasonably priced hardware.

Thank goodness this isn't a problem!


This is the most valid criticism. Theoretically in several years we may be able to run Opus quality coding models locally. If that doesn't happen then yes, it becomes a pay to play profession - which is not great.


Yep, this is my take as well. It's not that open source is being stolen as such; if you abide by an open source license you aren't stealing anything. It's that the licenses are being completely ignored for the profit of a few massive corporations.


Yeah, that's what I meant by "stolen", I should have been clearer. But indeed, this is the crux of the problem, I have no faith that licenses are being abided by.


What profit? All labs are taking massive losses and there's no clear path to profit for most of them yet.


The wealthiest people in tech aren't spending 10s of billions on this without the expectation of future profits. There's risk, but they absolutely expect the bets to be +EV overall.


Expected profit.


Yes, that was my first thought as well. As the images aren't designed to be run on a Mac specifically, like a native app might be, there is no expectation for the developers to create a native Apple Silicon version. This is going to be a pretty major issue for a lot of developers.


Case in point - Microsoft's SQL Server docker image, which is x86-only with no hint of ever being released as an aarch64 image.

I run that image (and a bunch of others) on my M3 dev machine in OrbStack, which I think provides the best docker and/or kubernetes container host experience on macOS.
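
For anyone in the same spot, a minimal sketch of running that image under emulation (the password is a placeholder):

    # --platform forces the amd64 image; on Apple Silicon it runs
    # through Rosetta 2 / QEMU translation under the hood
    docker run --platform linux/amd64 \
      -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=Chang3Me!Now" \
      -p 1433:1433 mcr.microsoft.com/mssql/server:2022-latest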


I’ve worked in DevOps, and the companies I’ve worked for put the effort in when the M1 came out, and now local images work fine. I honestly doubt it will have a huge impact. ARM instances on AWS, for example, are much cheaper, so there’s already lots of incentive to support ARM builds of images.


In our small shop, I definitely made sure all of our containers supported aarch64 when the M1 hit the scene. I'm a Linux + ThinkPad guy myself, but now that I've got an x13s, even I am running the aarch64 versions!


How do you build multi-arch in CI? Do you cross-compile or do you have arm64 runners?


It depends. Mostly it is choosing the right base image architecture. For rust and golang we can trivially cross compile and just plunk the binary in the appropriate base image. For JVM based apps it is the same because we just need to have the jars in the right place. We can do this on either architecture.

The only holdout is GraalVM, which doesn’t trivially support cross-compilation (yet).
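
A minimal sketch of the Go variant (app and registry names hypothetical), with a Dockerfile that just copies in the prebuilt binary for whichever platform buildx is assembling:

    # cross-compile once per target; CGO off keeps the binary static
    GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o dist/amd64/app .
    GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o dist/arm64/app .

    # Dockerfile (TARGETARCH is set automatically by buildx):
    #   FROM alpine
    #   ARG TARGETARCH
    #   COPY dist/${TARGETARCH}/app /usr/local/bin/app
    #   ENTRYPOINT ["app"]
    docker buildx build --platform linux/amd64,linux/arm64 \
      -t registry.example.com/app:latest --push .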


We're mostly a PHP / JS (with a little Python on the side) shop, so for our own code it's mostly a matter of the right base image. Building our own images is done on an x86-64 machine, with the aarch64 side of things running via qemu.


It has a huge impact if you need to run the exact same container as in production. This kills Macs in those shops. And there are more than you might think.


Apple Silicon is ARM64, which is supported by Linux and Docker.


But Docker images don't necessarily have ARM64 support. If you are exclusively targeting x64 servers, it rarely makes sense to support both ARM64 and AMD64 platforms for development environment/tests, especially if the product/app is non-trivial.
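
An easy way to check what a given image actually publishes (a sketch; the image is just an example, and a single-arch image will list only one platform):

    docker buildx imagetools inspect mcr.microsoft.com/mssql/server:2022-latest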


Every port I've done to a new hardware or software platform has shaken loose at least a handful of bugs or assumptions that were well worth ironing out. And in the case of a port to Apple Silicon, you get a very fast development environment at the end of it. This library also helped with 90% of the work:

https://github.com/DLTcollab/sse2neon
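
A minimal sketch of how it's used (the function is hypothetical): include sse2neon.h in place of the x86 intrinsics headers and existing SSE code compiles unchanged for ARM64.

    // on x86 this would be #include <xmmintrin.h>
    #include "sse2neon.h"

    // add two 4-float vectors and return the horizontal sum
    float sum4(const float *a, const float *b) {
        __m128 va = _mm_loadu_ps(a);    // unaligned 4-float load
        __m128 vb = _mm_loadu_ps(b);
        __m128 vs = _mm_add_ps(va, vb); // 4-wide add, NEON under the hood
        float out[4];
        _mm_storeu_ps(out, vs);
        return out[0] + out[1] + out[2] + out[3];
    }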


Or if you just want to create multi-arch images for your project on your Mac, so that your non-Mac customers can use them.


I guess now it makes sense. Got 3 years to turn on ARM builds.


No, it still doesn't make sense.

And it looks like Rosetta 2 for containers will continue to be supported past macOS 28 just fine. It's Rosetta 2 for Mac apps that's being phased out, and not even all of that (they'll keep it for games that don't need macOS frameworks to be kept around in Intel format).


Parent doesn't want to merely run ARM64 Linux/Docker images. They want to run Intel images. Lots of reasons for that, from upstream Docker images not being available for ARM64, to specific corporate setups you want to replicate as closely as possible or that aren't portable to ARM64 without huge effort.


I'm aware; I use ARM images all the time. I was trying to indicate that the usual refrain, that developers have had years to migrate their software to Apple Silicon, doesn't really apply to Docker images. It's only the increased use of ARM elsewhere (possibly driven by the great performance of Apple Silicon Macs) that has driven any migration of Docker images to ARM versions.


Yeah, but many people are using x86-64 Docker images because they deploy on x86-64. Maybe ARM clouds will be more common by that time.


Yep, this is another reason I've needed x86-64 images: although they should be technically identical when rebuilt for ARM, they aren't always, so running the same architecture image as production will sometimes catch edge-case bugs the ARM version doesn't. Admittedly it's not common, but I have had it happen. Obviously there is also the argument that the x86-64 image is being translated, so isn't the same as production anyway, but I've found that to have far fewer bugs than the different architecture.


> Obviously there is also the argument that the x86-64 image is being translated, so isn't the same as production anyway

I've never seen this make a practical difference. I'm sure you can spot differences if you look for them (particularly at the hardware interface level), but QEMU has done this for decades, and so has Apple.


Many container images are multi-arch, although probably not ones that are built in-house.


We built our in-house images multi-arch precisely for this reason!


That's not really the point though, right? It means that pulling and using containers destined for x86 will also require building arm64 versions. The good news is buildx can build arm64 on x86; the bad news is people will need to double up their build steps, or move to ARM in production.
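
A sketch of the emulation route on an x86 host (registry name hypothetical):

    # one-time setup: register QEMU handlers and create a builder
    docker run --privileged --rm tonistiigi/binfmt --install all
    docker buildx create --name multiarch --use

    # build both architectures and push a multi-arch manifest
    docker buildx build --platform linux/amd64,linux/arm64 \
      -t registry.example.com/myapp:latest --push .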


Without Rosetta I can't build x86_64 images any more. Today I can set up an OrbStack amd64 Linux machine and build native amd64 images on my Mac to put on my servers.


What they're talking about is Rosetta's macOS frameworks compiled for Intel being kept around (which Intel macOS apps use, e.g. when you run some old <xxx>.app that's not available for Apple Silicon).

The low-level Rosetta translation layer (which is what containers use) will be kept, and they'll even keep it for Intel games, as they say in the OP.


Yeah, Science shouldn't be concerned with usefulness, just like Art. It's the application of those fields that should concern itself with usefulness, i.e. applied science, engineering, design, etc. I'm not saying that scientific research shouldn't be carried out by companies with specific goals in mind, just that it shouldn't be the expected default.


I created a smaller version of chess called Mischia[1], mainly for casual games, but except for the opening phase it tries to play like normal chess. Games usually last 15 to 20 minutes.

[1] https://www.chessvariants.com/rules/mischia


My issue isn't specifically the subscription model, though it gets annoying and I prefer a one-time fee, nor the cost, which I'm happy to pay. My issue is the lock-in that accompanies the SaaS subscription model: you stop paying, you lose access to your work, especially as most SaaS products aim for proprietary data formats and no export ability, or a deliberately annoying and cumbersome one, often only allowing export in their own proprietary formats "for backup", which are useless once you stop paying.


This is part of why I'm completely ok with Jetbrains subscriptions - they have a perpetual fallback license. When they transitioned from the "buy a version, use it forever" licensing approach to a subscription model they added the perpetual fallback license (likely after some spicy feedback from customers).

You can keep using any version you've had for a year, forever.

So if you cancel your subscription, the old version still works (and is probably fairly functional).

---

The problem that Jetbrains had before the subscription model change is that major upgrade versions (that you paid for) were driven by accounting needs rather than engineering. "Need more money?" - release the version that is currently getting built, even if it doesn't offer compelling value. "Got some neat things for the next version?" - hold off on releasing it to customers until the company needs more money.

The subscription model made it so that accounting had a stable and predictable revenue stream and engineering could release things as features were developed.


This model makes the most sense for professional tools (IDEs, CAD, office suites, ...). Like an insurance or support contract, you don't lose everything the moment you want to shrink your spending budget. But for utilities (PDF readers, task managers, ...), subscriptions feel like extortion.


The problem was real, and update subscriptions are a good solution. But other update subscriptions allow perpetual use of the version current at the end of the subscription; JetBrains products require downgrading to the year-old version, and downgrading wasn't supported when I looked. This is a lock-in tactic.


Sometimes. I always point to JetBrains with their fallback license as an example of it being done right.

Buy one year continuously and you get forever access to that version, or you can keep paying for upgrades on a subscription.

It's also worked out incredibly well for them profit-wise.


The book "Understanding and Using C Pointers: Core Techniques for Memory Management" by Richard M. Reese is a great way to learn pointers.


Note that the previous Java LTS release's license reverts to the OTN license one year after the next LTS version is published, making it no longer free if you keep up with the security updates. See the license note on the Java 21 LTS download page[1].

[1] https://www.oracle.com/uk/java/technologies/downloads/#java2...

