
So is the idea here that the buffer is actually on the monitor now and no longer on the GPU? Is that why there is 3x256MB of memory on the controller? Perhaps the GPU just pushes bits across the wire as soon as it's ready now? And the monitor is now responsible for maintaining a localized triple-buffer mechanism?


The idea is that the monitor only refreshes after it is given a frame by the graphics card, which makes the monitor sync to the frame rate of the graphics card rather than the other way around. From what I can tell, this basically means that instead of having a fixed refresh rate, the modified monitor now has a variable refresh rate, slaved to the output of the GPU.
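
To make the timing difference concrete, here's a rough sketch of the idea (the numbers and the simple model are mine, not from the article): with a fixed-rate panel, scan-outs happen on the monitor's own clock, so variable GPU frame times mean repeated or torn frames; with the GPU driving the refresh, each finished frame gets exactly one scan-out.

    import random

    random.seed(0)

    # Simulate irregular GPU frame completion times (roughly 30-65 fps).
    gpu_finish, t = [], 0.0
    for _ in range(20):
        t += random.uniform(15.0, 35.0)   # milliseconds per frame
        gpu_finish.append(t)

    # Fixed 60 Hz panel: scan-out every 16.7 ms regardless of the GPU,
    # so some refreshes repeat the last frame or tear across two frames.
    fixed_scanouts = [i * 16.7 for i in range(int(t / 16.7) + 1)]

    # Variable-refresh panel: scan-out is triggered by the GPU itself,
    # so every completed frame gets exactly one refresh and no repeats.
    variable_scanouts = gpu_finish

    print(len(fixed_scanouts), "fixed scan-outs vs",
          len(variable_scanouts), "GPU-driven scan-outs for the same frames")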


Right, but a traditional 1920x1080 monitor only needs ~8MB of memory to store a single frame to scan out to the screen. So why introduce 3x256MB of memory on the controller? Unless the idea is to move away from managing frame buffers on the GPU.


Could this be a bits versus bytes issue? A "256MB" RAM chip is typically 256 megabits (and a typical memory module would have some multiple of 8 of them). 3x256 megabits is 96 megabytes, which still seems like a lot, but might be more plausible (1920 x 1080 x 32-bit works out to a little less than 8 megabytes, unless my math is way wrong).
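
For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope version (assuming 32 bits per pixel and ignoring whatever overhead the controller itself needs):

    width, height, bits_per_pixel = 1920, 1080, 32

    frame_bytes = width * height * bits_per_pixel // 8
    print(frame_bytes / 2**20)        # ~7.9 MiB for one 1080p frame

    # If the three chips are 256 megabit parts rather than 256 megabyte:
    total_bits = 3 * 256 * 2**20
    print(total_bits / 8 / 2**20)     # 96 MiB total, i.e. roughly a dozen frames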



