
At least in older CPUs the caches were SRAM (static RAM). It is more complex per bit but requires no refreshing. DRAM is basically just a capacitor per bit, and capacitors leak, so you constantly have to refresh the entire memory space. When the CPU sends a request to RAM, the memory controller might be too busy refreshing the soon-to-decay rows to respond right away. And, if I recall correctly, reading from DRAM destroys what was there, so the process is: read it, write it back, then send the answer to the CPU, which is a lot of steps. But the price and die-size difference is huge, so we use GB or TB of DRAM and only MB of SRAM.
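A toy model of the read-restore cycle described above, purely illustrative (class and timing names are made up, not any real controller's interface):

```python
# Toy model of DRAM behavior as described in the comment above:
# a read drains the bit capacitors (destructive read), so the
# controller must write the data back before returning it, and
# every row must be periodically refreshed or its contents decay.

class ToyDRAM:
    def __init__(self, rows):
        self.rows = {i: 0 for i in range(rows)}

    def read(self, row):
        # 1. Destructive read: sensing the capacitors drains them.
        value = self.rows[row]
        self.rows[row] = None        # contents momentarily lost
        # 2. Write-back (restore) so the row survives the read.
        self.rows[row] = value
        # 3. Only now is the value returned to the CPU.
        return value

    def refresh_all(self):
        # Periodic refresh: read (and thereby rewrite) every row.
        # While this runs, the controller can't service CPU requests,
        # which is one source of variable DRAM latency.
        for row in list(self.rows):
            self.read(row)

mem = ToyDRAM(rows=4)
mem.rows[2] = 42
assert mem.read(2) == 42   # survives only because of the write-back
```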


Wouldn't this bound the overall memory bandwidth, not the per-core bandwidth? I've sort of assumed that just providing more line fill buffers wouldn't be sufficient, and that the number of LFBs is chosen in tandem with a number of other things, but I'm not sure what those other things are (that is, just increasing the number of LFBs might not be meaningful without also increasing XYZ).
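For what it's worth, the usual back-of-envelope for why a fixed number of line fill buffers caps per-core bandwidth is Little's law: sustained bandwidth = outstanding misses × line size / memory latency. The numbers below are illustrative placeholders, not taken from any specific CPU:

```python
# Little's law applied to cache-miss bandwidth:
#   per_core_bw = n_lfb * line_size / memory_latency
# With these (assumed) figures, a single core can't exceed
# ~6.4 GB/s of demand-miss bandwidth no matter how fast DRAM is,
# because it can only keep n_lfb misses in flight at once.

line_size_bytes = 64        # typical cache-line size
memory_latency_s = 100e-9   # assumed ~100 ns round trip to DRAM
n_lfb = 10                  # assumed line fill buffers per core

per_core_bw = n_lfb * line_size_bytes / memory_latency_s
print(f"{per_core_bw / 1e9:.1f} GB/s")  # 6.4 GB/s
```

This is also why prefetching and more outstanding misses (or lower latency) raise per-core bandwidth even when total DRAM bandwidth is unchanged.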


I can’t speak to that. The last time I looked at this stuff was in an electrical engineering class where we were building RAM out of flip-flops.



