Amazing article. I was under the misapprehension that temperature and other output parameters actually affect caching; this explains beautifully why they don't.
That's because, in my mind, as someone not working directly on this kind of stuff, I figured caching worked like resource caching in a web server environment.
It's a semantics issue: the word "caching" is overloaded depending on context. For people who aren't familiar with the inner workings of LLMs, this can cause understandable confusion.
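Here's a rough sketch of the distinction, using toy stand-ins rather than any real model or provider API: the cacheable part is the per-token keys/values produced by the forward pass over the prompt, and temperature only touches the logits at sampling time, after all of that is done.

    import numpy as np

    rng = np.random.default_rng(0)

    def forward_pass(token_ids):
        # Stand-in for the transformer forward pass: per-token keys/values
        # (the cacheable part) plus logits for the next token.
        # Temperature appears nowhere in here.
        kv = [(np.sin(t + np.arange(4)), np.cos(t + np.arange(4))) for t in token_ids]
        logits = np.tanh(sum(token_ids) + np.arange(10.0))
        return kv, logits

    def sample(logits, temperature):
        # Temperature only rescales the logits here, after everything
        # cacheable has already been computed.
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)

    prompt = [3, 7, 7, 1]
    kv_cache, logits = forward_pass(prompt)   # identical for any temperature

    print(sample(logits, temperature=0.1))    # "cold" sampling
    print(sample(logits, temperature=1.5))    # "hot" sampling

Same prompt, different temperatures: the cache is byte-for-byte the same, only the sampling step changes.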
Being wrong about details like this is exactly what I would expect from a professor. They are mainly grant writers and PhD herders; often they're good at presenting as well, but they mostly have only gut feelings about the technical details of anything invented after they became a professor.
Excellent HN-esque innovation in moderation: immediate improvement in S/N ratio, unobtrusive UX, gentle feedback to humans, semantic signal to machines.
How was the term "rug" chosen, e.g. in the historical context of newspaper folds?
I'd note that when I gave the input/output screenshot to ChatGPT 5.2, it failed (with lots of colorful chain of thought), though Gemini got it right away.
Thanks for sharing; you clearly spent a lot of time making this easy to digest. I especially like the tokens-to-embedding visualisation.
I recently had some trouble converting an HF transformer I trained with PyTorch to Core ML. I just couldn't get the KV cache to work, which made it unusably slow after 50 tokens…
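For anyone wondering why a missing KV cache hurts so much: without it, every new token recomputes keys/values for the entire prefix, so generation goes quadratic. A toy NumPy sketch of the idea (nothing to do with Core ML's actual API, just illustrative):

    import numpy as np

    D = 16
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((D, D)) for _ in range(3))

    def attend(q, K, V):
        scores = (K @ q) / np.sqrt(D)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ V

    def step_with_cache(x, cache):
        # Only the new token's k/v are computed; old ones come from the cache.
        q, k, v = Wq @ x, Wk @ x, Wv @ x
        cache["K"].append(k)
        cache["V"].append(v)
        return attend(q, np.stack(cache["K"]), np.stack(cache["V"]))

    def step_without_cache(history):
        # No cache: recompute k/v for every past token at every step.
        K = np.stack([Wk @ x for x in history])
        V = np.stack([Wv @ x for x in history])
        q = Wq @ history[-1]
        return attend(q, K, V)

    tokens = [rng.standard_normal(D) for _ in range(5)]
    cache = {"K": [], "V": []}
    for i, x in enumerate(tokens):
        assert np.allclose(step_with_cache(x, cache),
                           step_without_cache(tokens[: i + 1]))
    print("same outputs; the cached path just avoids the quadratic recompute")

Both paths give identical outputs; the cache only changes how much work each step does, which is exactly what you lose if the converted model drops it.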
Hopefully you can write the next article you teased about how the feedforward and output layers work. This article was super helpful for getting a better understanding of how GPT-style LLMs work!
[see https://news.ycombinator.com/item?id=45988611 for explanation]