Hacker News | bitwize's comments

I recently spun up Gemma 4 26B-A4B on my local box and pointed OpenCode at it, and it did reasonably well! My machine is 8 years old, though I had the foresight to double the RAM to 32 GiB before the RAMpocalypse, and I can get a little bit of GPU oomph but not a lot, not with a mere GTX 1070. So it's slow, and nowhere near frontier model quality, but it can generate reasonable code and is good for faffing with!

Careful with that though. The guy whose entire job is to "take requirements from the customers and bring them to the engineers" really does get awful tetchy if the engineers start presuming to fill his role. Ask me how I know.

It was like going from waiting for a cashier who's an absolute ninja with the scanning machine to fumbling with my own groceries and fighting with GLaDOS about whether an item was actually placed in the bag, or how much it weighs versus how much it's supposed to weigh. Which usually ends with me waiting for an attendant anyway. And this is supposed to be a win?

Self checkout is the face of enshittification.


And that's why Peter Gibbons is clearly management material!

This is why Apple gave Macs a command key in 1984. Control is for sending control codes; command is for issuing commands!

I think that the best UI design language is somewhere between "flat" and "skeuomorphic". I want neither a notes app with Moleskine leather and vellum paper textures, nor the Android 12-like vague shapes of current-day macOS. The Windows 9x look and feel, and even more so that of its predecessor NEXTSTEP, was perfect: widgets had depth and definition, and were still abstract but readily identifiable to the eye.

Is there any interest in 100% human-made malarkey like this still? I know the nonstop firehose of AI "art" has sharpened my interest in even the worst human-created art.

Have you seen the business models for these companies? Literal underpants gnome memes. OpenAI's goes like this:

1. Build AGI

2. Use said AGI to tell us how to become profitable

3. Profit!

Anthropic seems to be going all in on enterprise sales. Which means they don't actually have to please customers; it's what ThePrimeagen humorously calls a "yacht problem": a problem that only needs a solution after the IPO. For now all they have to do is convince corporate leadership that this is the future of work and sow enough FOMO to close those sales contracts, and their projected sales and stock valuation go through the roof.

Of course that value will collapse if they go long enough without delivering on their promises. That's why they call it a bubble. But by then, hopefully, Dario and the early investors will be long gone and even richer than they were to start. Their only competitor, OpenAI, is confronted with the same issues: the scalability problems won't go away, and addressing them doesn't drive stock valuation the way promising high rollers that AGI and total workforce automation are just around the corner does.


The HLL-to-LLM switch is fundamentally different from the assembler-to-HLL switch. With HLLs, there is a transparent homomorphism between the input program and the instructions executed by the CPU. We exploit this property to write programs in HLLs with precision and awareness of what, exactly, is going on, even if we occasionally have to drop to ASM because all abstractions are leaky. The relation between an LLM prompt and the instructions actually executed is neither transparent nor a homomorphism. It's not an abstraction in the same sense that an HLL implementation is. It requires a fundamental shift in thinking. This is why I say "stop thinking like a programmer and start thinking like a business person" when people have trouble coding with LLMs. You have to be a whole lot more people-oriented and worry less about all the technical details, because trying to prompt an LLM with anywhere near the precision of using an HLL is just an exercise in frustration. But if you focus on the big picture, the need that you want your program to fill, LLMs can be a tremendous force multiplier in terms of getting you there.

As I recall, classic Mac OS system calls were made through unimplemented "A-line" instructions, which would cause the CPU to trap (raise an exception). Hence the question Mac extension writers asked of each other: "How many traps did you patch?"

You want to talk trap patches? Your Mac's INITs have minimal trap patches and touch the Toolbox in boring places like SysBeep() and GetNextEvent(). The Developator's Mac has big, tricky INITs with trap patches that hook into half of the Device Manager.[1] The Developator is in touch with the metal, starts like a warm boot off the ROM reset vector, stops on an NMI.

[1] See https://www.macrelics.com/legacynth/


The question serious patch trappers ask is whether you patched the traps used to patch traps to make sure that, when you patched traps, no other INIT could patch those traps after you to get in line before you when the traps were handled.
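The patch-chaining arms race above can be sketched as a toy dispatch table. This is a simplified Python model, not the real mechanism: actual Toolbox traps were dispatched via 68k A-line exceptions and patched with SetTrapAddress, and the trap number below is the one I recall for _SysBeep.

```python
# Toy model of classic Mac OS trap patching: a dispatch table maps
# trap numbers to handlers, and each INIT that patches a trap saves
# the previous handler so it can chain to it.

trap_table = {}

def set_trap(num, handler):
    """Install a new handler, returning the old one (cf. SetTrapAddress)."""
    old = trap_table.get(num)
    trap_table[num] = handler
    return old

def call_trap(num, *args):
    return trap_table[num](*args)

SYS_BEEP = 0xA9C8  # _SysBeep's A-trap number

# The "ROM" default handler.
set_trap(SYS_BEEP, lambda: ["beep"])

# First INIT patches SysBeep, chaining to the ROM routine.
prev1 = set_trap(SYS_BEEP, lambda: ["init1"] + prev1())

# A second INIT patches after the first, so it runs *before* init1
# when the trap fires -- exactly the ordering fight described above.
prev2 = set_trap(SYS_BEEP, lambda: ["init2"] + prev2())

print(call_trap(SYS_BEEP))  # ['init2', 'init1', 'beep']
```

The point of the joke is that nothing here stops a later INIT from grabbing the head of the chain, which is why the truly paranoid patched set_trap itself.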

