That's not disproving OP's comment; OpenAI is, in my opinion, making it untenable for a regular Joe to build a PC capable of running local LLM models. It's an attack on all our wallets.
Why do you need an LLM running locally so badly that inflated RAM prices count as an attack on your wallet? One can always opt not to play this losing game.
I remember when the crypto miners rented a plane to deliver their precious GPUs.
Some models are useful; whisper.cpp comes to mind, for creating subtitles for, say, family videos or a lecture you attended, without sending your data to an untrusted or unreliable company.
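For the curious, a rough sketch of that workflow. This assumes you've already built whisper.cpp and downloaded a model (the binary is `./main` in older builds, `whisper-cli` in newer ones); the file names here are placeholders:

```shell
# whisper.cpp expects 16 kHz mono WAV input, so convert first
ffmpeg -i lecture.mp4 -ar 16000 -ac 1 -c:a pcm_s16le lecture.wav

# Transcribe locally; -osrt writes an .srt subtitle file next to the input
./main -m models/ggml-base.en.bin -f lecture.wav -osrt
```

Everything stays on your machine; the only download is the model file itself.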
My first one, a Gateway 486/66, started with 4MB RAM (in 1993). It could run Linux, X, emacs, and g++ in a terminal all at the same time, but paged so badly the machine was unusable during a compile. I spent $200 to double it to 8, then another $200 to double it to 16, then another $200 to double it to 32MB (over a couple of years), at which point the machine absolutely flew (no paging during compiles). It seemed like an obscene amount of money for a college student to spend, but the process taught me a lot about computer performance and what to upgrade.
It's not a lot, but it's enough for a dumb terminal.