
The first "proper" "modern" computer I had initially came with 8 megabytes of RAM.

It's not a lot, but it's enough for a dumb terminal.





That's not disproving OP's comment; OpenAI is, in my opinion, making it untenable for a regular Joe to build a PC capable of running a local LLM. It's an attack on all our wallets.

Why do you need an LLM running locally so badly that inflated RAM prices are an attack on your wallet? One can always opt not to play this losing game.

I remember when the crypto miners rented a plane to deliver their precious GPUs.


Some models are useful; whisper.cpp comes to mind, for creating subtitles for, say, family videos or a lecture you attended, without sending your data to an untrusted or unreliable company.
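A minimal sketch of that workflow, assuming a local whisper.cpp build with its stock CLI (the binary name, model path, and file names below are placeholders; the ffmpeg step is there because whisper.cpp expects 16 kHz mono WAV input):

    # Offline subtitle sketch: extract audio with ffmpeg, then have whisper.cpp
    # write an .srt next to it. Binary name, model path, and file names are assumptions.
    import subprocess

    VIDEO = "family_video.mp4"          # hypothetical input video
    AUDIO = "family_video.wav"
    MODEL = "models/ggml-base.en.bin"   # hypothetical whisper.cpp model file

    # whisper.cpp expects 16 kHz mono PCM WAV input
    subprocess.run(["ffmpeg", "-y", "-i", VIDEO, "-ar", "16000", "-ac", "1", AUDIO],
                   check=True)

    # -osrt asks whisper.cpp to emit family_video.wav.srt alongside the audio
    subprocess.run(["./main", "-m", MODEL, "-f", AUDIO, "-osrt"], check=True)

Everything stays on the local machine; the only cost is the one-time model download and the CPU time.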

My first one, a Gateway 486/66, started with 4MB of RAM (in 1993). It could run Linux, X windows, emacs, and G++ in a terminal all at the same time, but paged so badly the machine was unusable during a compile. I spent $200 to double it to 8MB, then another $200 to double it to 16MB, then another $200 to double it to 32MB (over a couple of years), at which point the machine absolutely flew (no paging during compiles). It seemed like an obscene amount of money for a college student to spend, but the lesson taught me a lot about computer performance and what to upgrade.

A dumb terminal doesn't even need a tenth of that.

Only if you don't mind the screen being limited to 640x480 and/or low bit depth.

Where do you put the framebuffer?
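Rough arithmetic for scale (the resolutions and bit depths here are just illustrative):

    # Framebuffer size = width * height * bits_per_pixel / 8
    for width, height, bpp in [(640, 480, 8), (800, 600, 16), (1024, 768, 24)]:
        kib = width * height * bpp / 8 / 1024
        print(f"{width}x{height} @ {bpp}-bit: {kib:.0f} KiB")
    # 640x480  @  8-bit:  300 KiB -- fits easily
    # 800x600  @ 16-bit:  938 KiB -- already more than a tenth of 8 MB
    # 1024x768 @ 24-bit: 2304 KiB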


