
This has to do with the way the software interacts with hardware on PC.

Basically, a GPU is a very complex computer with several hierarchies of execution streams: there are vector SIMD streams that execute the same code over different data, there are threads of such streams that preempt each other, there are multiple processing units each running a set of such threads, and there are even structures of such processing units. Yet all of this is hidden from the programmer; the only API exposed is an abstract "scene" description. E.g. you can say "first render these polygons with such and such settings, then render other polygons with other settings, then show what has been rendered so far and return to the default state".
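
For illustration, here's roughly what that "scene description" level looks like from the game's side, written as an OpenGL-style fragment (the shader and mesh handles are assumed to have been created during setup, and presenting the frame is platform-specific - this is a sketch of the pattern, not a complete program):

    // "first render these polygons with such and such settings"
    glUseProgram(opaqueShader);          // settings: shader, blend state, etc.
    glDisable(GL_BLEND);
    glBindVertexArray(terrainMesh);
    glDrawArrays(GL_TRIANGLES, 0, terrainVertexCount);

    // "then render other polygons with other settings"
    glUseProgram(transparentShader);
    glEnable(GL_BLEND);
    glBindVertexArray(particleMesh);
    glDrawArrays(GL_TRIANGLES, 0, particleVertexCount);

    // "then show what is rendered so far and return to the default state"
    SwapBuffers(deviceContext);          // present; the driver decides when the GPU actually runs all of this

Nothing in that code says anything about SIMD widths, thread scheduling, or which processing units run what - the driver works all of that out behind the scenes.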

Going from such a high-level description to the thousands of execution streams that the GPU will actually execute is a very complex procedure that changes with each driver version and is not fully understood by any single person. On top of this, other processes are running on your machine while you play the game, and they can and will steal CPU time and OS scheduling slots, adding a lot of variance to your frame time.

You can render the same data set several times and sometimes it will take 10 ms, other times 100 ms, depending on what other processes decided to do at that moment, so it's impossible to guarantee a constant frame time on PC.
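
You can see the same effect with a toy experiment: time an identical workload every "frame" and watch the wall-clock cost swing around on a loaded desktop OS. (The busy loop below just stands in for submitting a real frame; the numbers will vary by machine.)

    #include <chrono>
    #include <cstdio>

    static void fakeRenderWork() {
        volatile double x = 0.0;
        for (int i = 0; i < 5000000; ++i) x += i * 0.5;   // identical work every frame
    }

    int main() {
        using clock = std::chrono::steady_clock;
        for (int frame = 0; frame < 10; ++frame) {
            auto start = clock::now();
            fakeRenderWork();                              // same "scene" each time
            auto end = clock::now();
            double ms = std::chrono::duration<double, std::milli>(end - start).count();
            std::printf("frame %d: %.2f ms\n", frame, ms); // spread comes from the OS, not the work
        }
    }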

On consoles it's not such a big deal: you can program the GPU directly and you don't compete with other processes. A great number of games do run at a constant frame rate - it's not trivial, but it's not rocket science either.
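
The basic idea behind a constant frame rate is frame pacing: give every frame a fixed budget (16.6 ms for 60 fps), make the work fit inside it, and wait out whatever is left. A minimal sketch of that loop (the actual update/render work is elided; on a console the game owns the GPU, so that work can be made to reliably fit the budget, which is exactly what a PC can't promise):

    #include <chrono>
    #include <thread>

    int main() {
        using clock = std::chrono::steady_clock;
        const auto frameBudget = std::chrono::microseconds(16667);   // 60 fps target

        auto nextDeadline = clock::now() + frameBudget;
        for (int frame = 0; frame < 600; ++frame) {                  // ~10 seconds at 60 fps
            // ... update and render the frame here; it must finish before nextDeadline ...

            std::this_thread::sleep_until(nextDeadline);             // absorb the leftover time
            nextDeadline += frameBudget;                             // fixed cadence, no drift
        }
    }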



Thank you for your detailed and thoughtful answer. This makes a lot of sense.



