I've never understood what is so difficult about making software that can generate frames at a fixed rate, and I don't understand this product- I don't understand how having a variable refresh rate would do anything other than harm smoothness, and encourage more bad software.
Maybe someone can clarify this for me, but what is so wrong with writing software that can just meet the frame deadline? Maybe the hardware innovation should be hardware and drivers that help you do vertical sync more reliably?
If your renderer can draw 40fps in the highest density scenes (lots of polygons, particles, and effects) on specific hardware, then that is the most you'll be able to guarantee to that user without sacrificing detail. That will be your "fixed frame rate".
However, in simpler scenes the same renderer is likely to output significantly more frames (even up to thousands) at the same detail level.
So in this scenario, all you're doing by setting a fixed rate is throwing away tons of frames in low density scenes. Most gamers would prefer to have the most fps possible at any given moment, even if it means variability.
The most hardcore gamers I know use 120Hz monitors, and machines that can deliver 241fps (120 * 2 + 1) in the highest detail scenes. They then set the engine to cap frames at 241fps, which will eliminate tearing, negating the need for this technology. However, their gaming machines cost a LOT, whereas this would deliver similar results on a much wider range of hardware.
If you're generating thousands of frames and throwing them away, you wrote your rendering software wrong. I ask again, what is so impossible about just rendering at 60fps (just a bit under the limit of the rate at which a human is able to perceive any difference), and then not rendering any more? Instead of rendering as fast as you can, make the different trade-off of always meeting the deadline.
sigh why am I explaining this again? is it really hard to understand? why?
Your question was answered; it has nothing to do with the case where you're "generating thousands of frames and throwing them away". What you want is a rendering engine that will perform at 60fps in the worst case. What engine devs want to write is an engine that can do better (even much much better) than 60fps in the average case, and be allowed to slip in those pathological cases. Gamers want more frames. More frames than is noticeable. They want some slack so that if something totally unrelated to the game ties up the machine, the framerate drop is not noticeable. They want to be able to double it so that they can drive a 3D display but still have the same effective framerate per eye.
Having a consistent 40fps is much worse (for a gamer) than a variable framerate that will dip down to 40fps for 1% (or 10%) of the play time. Having to limit your most complex scene to what can be guaranteed rendered at 60fps is much less appealing to a developer than making sure all the likely scenes can render at 60fps.
> Having a consistent 40fps is much worse (for a gamer) than a variable framerate that will dip down to 40fps for 1%
Stuttering animations are better than smooth animations?
Stuttering is better than smooth.
gotcha.
> Your question was answered;
for someone who doesn't appear to understand what I'm asking you have a high degree of confidence that I've been answered.
What is so terrible about having a lower complexity budget that guarantees 60fps? What if you had 60 fps no exceptions as a constraint in your hardware and software design, how far could you really go with some creativity? Think about it- is having a complexity ceiling the only possible way to ensure 60fps?
>What is so terrible about having a lower complexity budget that guarantees 60fps
>Think about it- is having a complexity ceiling the only possible way to ensure 60fps?
I'm really not following you. Are you asking what is the benefit of this technology when games come out every day at 60fps even now? This technology allows them to get the same fidelity and smoothness on less powerful hardware, with more complicated simulations, and lower latency.
>I ask again, what is so impossible about just rendering the 60fps
Are you asking why renderers can't maintain perfectly steady frame rates without going above or below 60fps, regardless of WHAT they are rendering on screen or what is happening in the simulation? You can't see how that's a 'non-trivial' problem?
The most notable example of a game using dynamic tradeoffs to maintain a solid 60FPS is id's Rage engine -- written by John Carmack, one of the people on stage at this very presentation, who was lauding this technology and saying he has been pushing GPU and monitor manufacturers to implement it for years.
Carmack notes that while they were able to stay at 60 with an incredible amount of work, if they had been able to target 90% of 60fps with this technology there would have been little visual difference but the gameplay and visual complexity ceiling would have been vastly higher.
Look into what is involved in modern 3D rendering of high-detail scenes. It is NON-TRIVIAL, and I can tell you this as someone who has done 3D programming for 17 years.
Pro Tip: If an entire industry of experienced people finds something very hard, and you don't know anything about the topic but you don't see why it would be hard, maybe the relevant factor here is the "you don't know."
It reminds me of my mom who said on multiple occasions "All these rockets are dangerous and they explode; I don't see why the scientists don't just use the majestic forces that keep the planets in their orbits to move the rocket."
Yes, I don't know. That's what I am saying.
I am not in this thread saying "You're wrong and I'm right", and I'm not asking you to say to me "no, you're wrong".
I am asking you to explain it. Not just say "well, it's hard, and this guy and this industry say it's hard" and call it a day.
Do you understand? "Because it's hard" is not an interesting answer. It's a boring and contentless answer.
This has to do with the way the software interacts with hardware on PC.
Basically, a GPU is a very complex computer with several hierarchies of execution streams: there are vector SIMD streams that execute the same code over different data, there are threads of such streams that preempt each other, there are multiple processing units each running a set of such threads, and there are even structures of such processing units. Yet all of this is hidden from the programmer, and the only API is an abstract "scene" description. E.g. you can say "first render these polygons with such and such settings, then render other polygons with other settings, then show what is rendered so far and return to the default state".
Going from such a high level description to the thousands of execution streams that GPU will execute is a very complex procedure that changes with each driver version and is not fully understood by any single person. On top of this you have other processes running on your machine while playing the game and they can and will steal CPU and the OS scheduling slots, adding a lot of variance to your frame time.
You can render the same data set several times, and sometimes it will take 10ms but other times it will take 100ms, depending on what other processes decided to do at the time, so it's impossible to guarantee constant frame time on PC.
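To make the variance point concrete, here's a minimal Python sketch (the frame times are made-up numbers, not measurements): even when the average frame time is far under the 16.7 ms budget, a handful of spikes caused by other processes still blows the deadline.

```python
# Hypothetical frame times in ms: 95 fast frames, 5 frames where
# the OS/driver stole time. Numbers are illustrative only.
frame_times = [8.0] * 95 + [30.0] * 5

avg = sum(frame_times) / len(frame_times)
missed = sum(1 for t in frame_times if t > 16.7)

print(avg)     # 9.1 -- comfortably under budget on average
print(missed)  # 5 -- yet 5% of frames still miss the vsync deadline
```

The average tells you almost nothing about whether the deadline is met; it's the tail of the distribution that causes visible hitches.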
On consoles it's not a big deal, as you can program the GPU directly and don't compete with other processes. A great number of games do run with a constant frame rate - it's not trivial but it's not rocket science either.
Well, I left the start of an explanation in another reply. But the problem is that you are asking a question where it takes years to really understand the answer, and certainly hours for a commenter to write a summary. For someone to put that much effort in, they have to be motivated to put in the effort; but you are coming across in a very unpleasant way, and not offering anything in return, so why would anyone put in the effort just for you?
I have no control over how you perceive my posts. I write them in a neutral way, produce text, and it's up to you to read in a tone of voice and mood. You are free to perceive anything about them you want. You are certainly welcome to refuse to reply to them and you always were. I was apparently pleasant enough for you to give glib responses but not enough to reply with anything much of substance more than "you're asking a stupid question and you can never be as smart as me HAHAHA!!" I mean, if that's your answer then you probably are better off not responding to me at all, leaving my question hanging, and not polluting the conversation with such negativity.
As for the start of an answer, what was wrong with just starting with that? What was the point of all the other stuff you wrote? Think about it: what is your mission here? To be informative, or just to attempt to make me feel bad for being curious?
The mission is to make you a little more self-aware of how your posts are being read and perceived. "why am I explaining this again? is it really hard to understand? why?" does not sound like a neutral tone to most people, it sounds condescending, which is confusing when (as you admit) you are the one who is asking for an explanation for something you do not understand.
Yes I don't understand something and I'm asking for an explanation. If you, or someone else does not understand my actual question, and has nothing more to contribute than "It's hard" then I am not interested in their answers or their condescension and I couldn't give two tosses if you turn around and perceive me as being condescending. It's projection. And I wasn't asking you anyway. But since I'm here, what is the point of YOUR post? I don't need more self awareness, people on hacker news need to stop answering questions they don't understand with bullshit nonsense condescension and getting pissy when the victim of their idiocy gets annoyed by it.
Seriously I will not tolerate this curiosity shaming, the ethic that one should feel embarrassed about asking questions. I do not buy the idea that it is condescending to be dissatisfied with shallow lazy nothing answers. I have no control over you perceiving it that way. It is just a flaw in your background that you should perhaps be more self aware about.
You are being rude to people that you are asking a favor from. Stop blaming everyone else; isolate the common factor. They're not shaming you because of the asking itself.
I am not being rude to people I am asking a favor from. I am being rude to people who have nothing to contribute here other than saying "oh it's too complicated for you to understand. Seriously I have been doing this for 15 years. it's just too hard to explain. You're just like my idiot mother who didn't understand how physics works. LOL". That is bullying. Not being helpful. I don't need favors from them.
No, they are being rude. There was no reason to post that verbal diarrhoea and I'm just not taking it like a doormat, and that bothers you. The common factor is them being dickheads. If you look, there are lots of other thoughtful people answering me (with real actual answers and creativity instead of condescension) that I am rewarding and conversing with appropriately.
The common factor is the culture of curiosity shaming here. And sorry, but quite frankly you and them can just get fucked. I don't care if you think I am being rude. I want to be rude about that. It is internet cancer. Other thoughtful people should be rude about it too. I want to shoo that attitude away from everywhere I can. Not walk on eggs just in case someone might take offense. You have no interest in making me "self aware". You just see a nail sticking up and you want to hammer it down.
I was really hoping you wouldn't pull this into an argument about semantics. Since you did, I'm almost certain this will be my last post.
'smug' was a description of your mannerisms, not your motivation. And 'snapping' is only pointing out that you made your pronouncements based on three sentences. I am making no claim to psychological insight.
I just think you're acting like a petty jerk, and blaming everyone else.
It's adorable the way you tell people to get fucked and use other insults of similar vitriol and then imply that my comments are invalid for using a term like 'petty'.
You have no problem with what I was asking, just the way in which I asked it, and perpetuated this whole thread to press into me that I wasn't abjectly deferential humble and thankful enough to the great overlords of HN for deigning to bother to waste their time on an obviously worthless scumbag like me.
THAT is petty and obnoxious. and it is to that I say, "get fucked." Criticizing superficial aspects about my manner, instead of the content of my question is the very definition of petty.
>I ask again, what is so impossible about just rendering the 60fps, and then not rendering any more?
There seem to be two ways to interpret your question:
>1.) Why can't games render 60 frames per second always?
Because some scenes are more complex than others. Rendering a complex scene can take longer than 16.7 ms, and there is no way around that.
>2.) If a game comes far below the frame completion deadline (e.g., completes a frame in 0.5 ms, where the deadline is 16.7 ms), why doesn't it simply stop doing anything until the deadline has passed?
I do not know the answer. But I can say that many games actually do this. It is usually referred to as a frame cap.
Thanks, I think this is an insightful interpretation of my question. I think I have been satisfactorily answered elsewhere on the thread. It totally makes sense now, but on the other hand I still don't see how this tech helps much. It would improve things marginally, in theory, but it seems like a bandaid for what you really want, which is a steady stream of animation frames. It feels like a concession to the impossibility of creating rich, smooth realtime animation in the presence of a multiprocess operating system.
60 fps means a budget of 16.6 ms per frame. If it takes you 16.7 ms per frame, you don't get 59.8 fps - you get 30 fps when vsync is enabled. So renderers in practice have to get a good way under that time budget to get a reliable 60 fps.
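That quantization is easy to sketch. Here is a toy model assuming classic double-buffered vsync on a 60 Hz display: a finished frame cannot be shown until the next vblank, so the display interval gets rounded up to a whole number of refresh periods.

```python
import math

REFRESH_HZ = 60
VBLANK_MS = 1000.0 / REFRESH_HZ  # ~16.667 ms between scanouts

def effective_fps(frame_ms):
    """With vsync, a frame that misses one vblank waits for the next,
    so the presentation interval is ceil(frame_time / vblank_period)."""
    intervals = math.ceil(frame_ms / VBLANK_MS)
    return REFRESH_HZ / intervals

print(effective_fps(16.6))  # 60.0 -- just made the deadline
print(effective_fps(16.7))  # 30.0 -- missed by 0.1 ms, rate halves
```

This is the cliff the parent is describing: there is no 59.8 fps with vsync, only 60 or 30 (or 20, 15, ...).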
Furthermore, 60 fps is not "just under the limit" for what a human can see; read e.g.:
On consoles you used to get a vsync interrupt. A reliable signal-- an event which you could control very precisely from your game software.
Nowadays when I program games, I cannot get a promise that a frame event will fire. I can get a "well, this code may run, unless the operating system needs those cycles".
So, is skipping a whole frame OR halving the frame rate really the best that is possible? Why is it either/or? What are operating systems, hardware vendors and driver writers doing about it?
The problem is that you can only start sending a frame to the display at 16.67ms intervals. If you miss that deadline, you can either swap buffers now (causing tearing), or you can swap buffers at the next vertical blank interval (which results in an effective 30Hz refresh rate). These are the only two options that are supported by current displays. There's nothing you can do on the computer side to get around these constraints, because the limitation comes from display connections like VGA, HDMI, etc. that only deal in fixed refresh rates. Trying to drive such connections with a variable refresh rate would be like changing the baud rate of a serial connection on the fly without any way to let the receiving device know about the change.
This is an interesting point. The point is made elsewhere in the thread that if you miss the deadline, but send what you have drawn so far anyway, with modern typical rendering software/hardware you'd get an incomplete drawing with holes and missing layers etc.
But that is not what tearing is. Tearing is: you've missed the deadline, so you finish the rendering to completion, then swap the buffer in the middle of a vertical scan. What you're saying is, you can do that, or wait until the next vertical scan before swapping the buffer. And furthermore, that can either result in simply a skipped frame, or the game switching down to a 30fps mode, perhaps based on some running statistics about frame render durations.
I'm reminded of a discussion by Carmack (was it him? or am I having a brain fart) about mitigating the tearing and framerate problem by doing per-scanline rendering instead of per-frame rendering.
Meeting the frame deadline is about scene complexity, not code. Keeping scene complexity under the limit is nearly impossible in large, rich game worlds. On small console games it is done by putting in walls and strictly limiting the visual budget, but that would remove much of the appeal of a game like Skyrim.
Skyrim exists on at least one console. Isn't it possible to make a game with a similar level of richness to Skyrim within a budget? Can hardware help with enforcing the budget? I intuit that there must be some kind of trade you can make to get under the deadline -- like, for instance, progressive rendering of a frame, so that if the frame doesn't finish in time you get a half-resolution frame, instead of just the top half of a frame?
See my Rocket comment above. But in reply to this specific comment I will drop you a hint (this hint is still just a small piece of the whole situation):
3D rendering is so deeply pipelined that it is difficult or even impossible for the program to know if a frame render is going to finish on time. It takes a long time to get information about completed results back from a GPU; on PCs you almost certainly can't get that info during the same frame you are rendering, unless you are rendering tremendously slowly.
In order to make an estimate about whether the frame is going to be done in time, you would have to guess. Okay, then, so now you decided to stop rendering this frame, what do you do? Leave a giant hole in the scene? Turn off the postprocess? Draw low-detail versions of some things (hint: still very slow)?
Your program does not even really know for sure which pieces of the scene are fast to render and which are slow. It does not know if specific textures are going to be paged out of VRAM by the time you get to a specific mesh, or not. etc etc
So you are saying it's completely impossible to start out just rendering the lowest-complexity scene and progressively refine it, so that if you stop at any point in time, you still have something reasonable to show for it? And that GPU manufacturers have been working on making steady frame rates more and more difficult instead of easier?
What you are suggesting could lead to horrific flicker. If you render one frame in low complexity followed by another frame at medium complexity, followed by another frame in low complexity... etc., you'd get flashing as (for example) shadows appeared and disappeared repeatedly.
Well, wouldn't that depend on what you mean by "low complexity" ? That's what we mean nowadays, but could you not design a version of "low complexity" that reduces the appearance of flicker?
The problem is that if the low quality render differs in any appreciable way from the high quality one, there will be flickering. So the low quality renders have to be extremely similar to the high quality ones, in which case why not just use the low-quality ones all the time?
Though come to think of it, there is one example that almost does what you describe. Dynamic resolution scaling has some artifacts, but is being used in some games (and is notably used in Dead Rising 3). Though one has to decide before the frame is rendered what resolution to use, so you still get frames that take longer than 16.7ms or whatever your target is.
One could do something similar with render quality, but it has the same drawback that you have to decide beforehand what quality you want to use. One would also have to ramp up and down the quality slowly, which is difficult as the complexity of a given scene can vary wildly over the space of a single second. It also wouldn't help with spikes in rendering time.
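A toy feedback controller in the spirit of what's described above: pick the next frame's resolution scale from the last frame's cost. The 0.85/1.05 step factors, the 0.5-1.0 clamp range, and the assumption that render cost scales with pixel count are all made-up tuning values, not anything a real engine is known to use.

```python
BUDGET_MS = 16.7  # frame budget for 60 fps

def next_scale(scale, last_frame_ms):
    """Adjust the render-target scale based on the previous frame's cost."""
    if last_frame_ms > BUDGET_MS:
        scale *= 0.85          # over budget: shrink the render target
    elif last_frame_ms < 0.8 * BUDGET_MS:
        scale *= 1.05          # comfortably under: creep back up
    return max(0.5, min(1.0, scale))

scale = next_scale(1.0, 20.0)    # slow frame -> drops to 0.85
scale = next_scale(scale, 10.0)  # fast frame -> creeps back toward 1.0
```

Note the drawback the parent mentions is visible in the structure: the decision is always one frame late, so a sudden spike still produces one over-budget frame before the controller reacts.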
I think you have a misapprehension that rendering is or can be done via progressive enhancement and that's never really been a direction games have pushed into.
Rendering a scene is kind of like constructing a building. If you stop at an arbitrary point in the construction process, you don't get a pretty good building, you get a non functional structure.
The real reason is that there might possibly be some way of doing a progressively enhanced, rendered game but it's never been high enough priority for anyone to have done serious work in that area.
This is an interesting point, and I can kind of see the outline of some vaguely political reasoning here. Let us clarify that we are talking about realtime rendering for games. Of course, rendering in general has been done in all sorts of different ways, and it's pretty straightforward to demonstrate a raytracer that progressively enhances its rendering over time.
The point is, this isn't a direction that cutting-edge games programming has gone in, because getting more detail into the game has always been a higher priority than having a steady, predictable framerate. And to make a further point, practically speaking, the painter's/z-buffer algorithm is easier to optimise than some other rendering algorithms. Though the others are not impossible, just not fruit that is quite as low-hanging.
Yes, it is not possible. Learn about what a Z Buffer is and how it works.
Dude seriously, I have been doing this a long time. I am going to stop replying after saying just one more thing:
If you are running on an OS like Windows (which this product is targeted at), you do realize that the OS can just preempt you at any time and not let you run? How do you predict if you are going to finish a frame if you don't even know how much you will be able to run between now and the end of the frame?
I am not talking about predicting in advance, I am talking about just doing as much as you can, and then when the deadline comes, sending the result. With the way software is written now this results in tearing, because pixels are simply rendered left to right, top to bottom. But what if you rendered them in a different order, such as with an interlaced JPEG, which shows you a low-res version of the image when partially downloaded?
The render pipeline is more or less, compute a bunch of geometry on the cpu, send it to the gpu with some textures and other things, tell it to render, and once the gpu has rendered the frame, you can tell it when to switch.
There's no point at which you have a usable partial frame to display, and it doesn't make sense to compute every other pixel, and if there's time come back and get the rest, because computing a pixel's neighbor will need a lot of the same intermediate work, and you probably don't have the resources to keep that. Parallel rendering generally divides the rendering tasks into different regions of the screen for each unit, not interleaved pixels.
To answer your question regarding why not just use 40fps, instead of going up and down; If you cap framerate at 40fps and your monitor doesn't refresh at an even multiple of 40, you're going to end up with consistent judder which is probably worse than occasional framerate dropping.
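The judder is visible in a quick simulation (idealized timestamps, assuming vsync and that frame i is ready at i/fps seconds): on a 60 Hz display, a 40fps cap makes frames alternate between staying on screen for one vblank and two vblanks, while a 30fps cap holds every frame for exactly two.

```python
import math

def vblank_pattern(fps, refresh_hz, n_frames):
    """How many vblanks each frame stays on screen when capped at `fps`
    on a fixed `refresh_hz` display. Frame i is ready at i/fps seconds
    and is shown at the first vblank after that."""
    slots = [math.ceil(i * refresh_hz / fps) for i in range(1, n_frames + 1)]
    return [b - a for a, b in zip(slots, slots[1:])]

print(vblank_pattern(40, 60, 7))  # [1, 2, 1, 2, 1, 2] -- uneven: judder
print(vblank_pattern(30, 60, 7))  # [2, 2, 2, 2, 2, 2] -- even: smooth
```

Even though 40fps delivers more frames than 30fps, the uneven 1-2-1-2 cadence is what makes motion look worse.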
If you look at it from the other side, look at all the extra buffers that are needed to support fixed-framerate monitors when frame generation really doesn't need to be fixed. At the desktop, if nothing on the screen moves, there's no need to transmit the buffer 60x per second, except for legacy reasons. In a graphics-intensive application, the time to generate a buffer may vary. Video is usually recorded at a fixed frequency, but it often doesn't match the frequency of the monitor. CRTs absolutely required that the electron beam trace every pixel several times a second, but LCDs don't.
I only used 40fps as an example. I know you'd probably want 60fps, 30fps or 20fps
(or possibly 50 or 25 if your refresh rate has more of a PAL bent)
Here's an idea I am curious about now. If you can almost but not quite reliably generate full res images at 60 FPS, can you generate quarter resolution (that is, half the pixels on each dimension for a quarter of the pixels) at 240 fps, or does the overhead for each render outstrip the efficiency from generating fewer pixels? That is, how much of a fixed overhead for a frame is there, and can it be spread over 4 frames, with slightly offset camera?
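As a back-of-the-envelope check on the question above (assuming render cost scales purely with pixel count, which is exactly the assumption the per-frame overhead breaks):

```python
# Pixels per second: full resolution at 60 fps vs. quarter resolution
# (half on each axis) at 240 fps. 1920x1080 used as an example target.
full = 1920 * 1080 * 60
quarter = (1920 // 2) * (1080 // 2) * 240

print(full == quarter)  # True: identical pixel throughput, so the
                        # answer hinges entirely on per-frame overhead
```

The two workloads push exactly the same number of pixels per second, so whether 240fps at quarter res is feasible comes down to the fixed per-frame costs (draw-call submission, state changes, CPU-side simulation) that don't shrink with resolution.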
3D graphics images aren't calculated left to right, top to bottom. They are calculated by drawing a bunch of triangles all over the image to represent 3D geometry. Triangles are often drawn on top of other triangles. Many modern games also use multipass rendering to achieve certain lighting and special effects. Only after a whole image is computed can it be transferred to the monitor if you want the image to make sense. If you stop rendering half way through, the end result would be objects full of holes with entire features missing or distorted. The time needed for the actual transfer of the image to the display is generally a drop in the bucket by comparison.
Well, while you are right about the way games and GPUs currently work, there is more than one 3D rendering algorithm -- the z-buffer is not the be-all and end-all. The question is what can be rendered efficiently with the hardware we have. At some point we could collectively decide that having a rock-solid framerate is more important than more detail, decide to use a scanline progressive renderer, and there you go. It is possible, but would we do it?
But on the other hand I was confused -- I forgot that the z-buffer is the algorithm still in use in most game rendering engines, and you cleared that up. Thanks.