This looks incredible and I want nVidia to succeed because it's actually been a long time without any game-changing (no pun intended, honest) improvements (evolutionary or revolutionary) in the gaming and graphics market.
That said, I read the article and yet remain confused as to where exactly the G-sync module integrates with the monitor. From what I understand, the G-sync hardware/firmware will run at the packet level, analyzing the incoming feed of DisplayPort packets in real time and deciding how much of what goes where, and when. Very neat.
The most important question, I believe, is what monitors can this be used with? The text makes it clear that users will be able to mod their own ASUS VG248QE monitors to add on the G-sync card, but that's a very, very specific make and model. Is this technology going to require hardware manufacturers to cooperate with nVidia, or will their cooperation simply make things nicer/easier?
Also, some of us have (in business environments) $1k+ S-IPS 30"+ monitors — the quality of these monitors is way above that of consumer models like the VG248QE and others. If there is no way to generically mod monitors without onboard DSPs, I could see that hindering adoption.
> Also, some of us have (in business environments) $1k+ S-IPS 30"+ monitors — the quality of these monitors is way above that of consumer models like the VG248QE and others. If there is no way to generically mod monitors without onboard DSPs, I could see that hindering adoption.
I think Nvidia is targeting hardcore gamers first and foremost. Most gamers are not gaming at 2560x1600/1440. Some are, but most aren't.
The most popular monitors by pro gamers right now (Twitch/eSport players and enthusiasts) are 120/144hz 1ms monitors, such as the ASUS VG248QE. Color reproduction isn't as important to pro gamers as smoothness/framerates.
Also hardcore/pro players are dumping lots of money on the most expensive computer rigs, often upgrading to the latest and greatest every generation. They are a very important marketing group for Nvidia.
It's not really about the latency (5ms vs 1ms is negligible), but it's about the pixel response to reduce/eliminate ghosting and other artifacts of the LCD's persistent pixels. The faster the pixels can update, the less ghosting you get. Interestingly enough, no response time is fast enough to eliminate it entirely: the real problem with ghosting turned out to be precisely the pixel persistence. Even more interesting is that someone discovered a hack for modern 3D monitors like the ASUS mentioned that completely eliminates ghosting: the strobing backlight functionality necessary for 3D completely eliminates ghosting when applied to 2D. I currently have this setup and it's exactly like using a CRT. A flat, light, 1920x1080 CRT. It's beautiful.
He's actually completely wrong. Persistence is about image quality, and can be mitigated by filtering that hardcore gamers always turn off, because it costs them latency.
Reducing latency isn't about how noticeable it is. Latency can be completely impossible to detect for you but still hurt you.
Input lag is the time between providing some input, such as clicking with your mouse, and seeing feedback of that event on the screen. Since the clicking is prompted by things happening on the screen, input lag acts as a command delay on everything you do. The most interesting feature of latency is that all latency is additive. It doesn't matter how fast or slow each part of the system is; none of them can hide latency for another. So even if the game simulation adds 150ms and your stupid wireless mouse adds 15ms, the 2ms added by the screen still matter just as much.
The second mental leap is that the human controlling the system can also be considered to be just another part in the pipeline adding latency. Consider a twitch shooter, where two players suddenly appear on each other's screens. Who wins depends on who first detects the other guy, successfully aims at him, and pulls the trigger. In essence, it's a contest between the total latency (simulation/cpu + gpu + screen + person + mouse + simulation) of one player against the other player. Since all top tier human players have latencies really close to one another, even minute differences, 2 ms here or there, produce real detectable effects.
This is completely wrong. When even the fastest human reaction time is on the order of 200ms, 5ms vs 1ms of monitor input lag has no effect on the outcome. Also consider that 5ms is within the snapshot time that servers run on, so +/- 5ms is effectively simultaneous to the server on average.
Pixel persistence is not about image quality and cannot be mitigated by anything, except turning off the backlight at an interval in sync with the frame rate you're updating the image. This is how CRTs work, and that's why they had no ghosting effects. The 3D graphics driver hack I mentioned does exactly that for 3D enabled LCD monitors.
People can notice input latencies that are many times smaller than their reaction time. 200ms of input latency is going to be noticeable and bothersome to basically everyone for even basic web browsing tasks. Most gamers will notice more than 2-3 frames of latency, and even smaller latencies will be noticed in direct manipulation set-ups like touchscreens and VR goggles where the display has to track 1:1 the user's physical movements.
I think you misunderstood my point. In terms of actual advantage, 1ms vs 5ms is negligible, considering the fact that human reaction time is 200ms. So in the case of shooting someone as they popped out from behind a corner, the 200ms reaction time + human variation + variation in network latency + discrete server tick time will absolutely dominate the effects.
I definitely agree that small latencies can be noticed, even latencies approaching 5ms (but not 5ms itself--I've seen monitor tests done that showed this).
> I think you misunderstood my point. In terms of actual advantage, 1ms vs 5ms is negligible, considering the fact that human reaction time is 200ms.
You did not understand the point of my post. The quality that matters is total latency. How long a human takes to react is completely irrelevant to what level of latency has an effect. Whether average human reaction time was 1ms or 1s doesn't matter. All that matters is that your loop is shorter than his, and your reaction time is very near his, so any advantage counts.
> the 200ms reaction time + human variation + variation in network latency + discreet server time, will absolutely dominate the effects.
Server tick time is the same for everyone. Top level gaming tourneys are held on LANs, where the players typically make sure that the network latency from their machine to the server is not any greater than anyone else's. However, none of that matters to the question at hand.
Assume that the total latency of the system, including the player, can be approximated by:

    person + mouse + simulation + display lag

and assume all are normally distributed around some base value, except display lag. Writing each term as rand(midpoint, standard deviation), you have:

    rand(200, 20) + rand(20, 5) + rand(16, 2) + 15

while I have:

    rand(200, 20) + rand(20, 5) + rand(16, 2) + 5
The total latency is utterly dominated by the human processing time. Yet if we model this statistically, and assume that lower latency wins, the one with the faster screen wins 63% of the time. That's enough of an edge that people pay money for it.
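The model above is easy to check with a quick Monte Carlo sketch (the 200/20/16 midpoints and standard deviations are the ones from the rand() terms; display lag is a constant):

```python
import random

def total_latency(display_ms):
    """One sample of end-to-end latency: person + mouse + simulation + display.

    Gaussian (midpoint, std-dev) terms match the model above;
    display lag is treated as a constant."""
    return (random.gauss(200, 20)   # human reaction
          + random.gauss(20, 5)     # mouse
          + random.gauss(16, 2)     # simulation/frame
          + display_ms)             # display lag (constant)

random.seed(1)
trials = 200_000
wins = sum(total_latency(5) < total_latency(15) for _ in range(trials))
print(f"faster-screen player wins {wins / trials:.1%} of duels")
```

Analytically this is Phi(10 / sqrt(2 * (20^2 + 5^2 + 2^2))) = Phi(0.34), about 63%, which the simulation reproduces.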
No, I understood your point, I just don't agree that it results in any meaningful advantage. What you didn't model is the fact that the server does not process packets immediately as they are received. They are buffered and processed in batch during a server tick. Unless the two packets from different players straddle a tick boundary, the server will effectively consider them simultaneous.
And remember, we're considering 1ms vs 5ms, so the difference would be 4ms in this case. I would like to see what percentage an advantage someone has in this setup. Even 63% isn't anything significant considering skill comes down to game knowledge rather than absolute reaction time. People will pay for smaller/bigger numbers, sure. But that doesn't mean there is anything practically significant about it.
But it doesn't average out. The 5ms player is continuously 4ms behind the other player. As the poster above explained, the times add up. So you have 200ms plus one to five ms. If the server tick is as short as 5ms, the problem is even worse: in that case player A, with exactly the same reactions, will land in an earlier tick 4 out of 5 times. I don't know how often it matters, but I'd expect top players to have pretty similar reaction times. So let's say two opponents are both between 200 and 220ms reaction time; then one player constantly having a 4ms edge definitely sounds like it will have an effect.
edit: Or in other words, it depends on how often the opponents' reactions fall within the 4ms difference. That certainly depends on the game and the players.
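The "4 out of 5 times" figure is easy to sanity-check. A sketch, assuming both packets arrive at a uniformly random phase relative to the server's tick boundaries:

```python
import random

def earlier_tick_fraction(advantage_ms, tick_ms, trials=100_000):
    """Fraction of duels in which a constant head start of `advantage_ms`
    lands the faster player's packet in an earlier server tick.
    Arrival phase relative to the tick boundary is uniform."""
    rng = random.Random(7)
    earlier = 0
    for _ in range(trials):
        t = rng.uniform(0, tick_ms)        # slower player's arrival phase
        fast, slow = t - advantage_ms, t   # faster player arrives 4ms earlier
        if fast // tick_ms < slow // tick_ms:
            earlier += 1
    return earlier / trials

print(earlier_tick_fraction(4, 5))    # 5 ms tick:  ~0.80 (4 out of 5)
print(earlier_tick_fraction(4, 30))   # 30 ms tick: ~0.13 (4 out of 30)
```

With a 5ms tick the 4ms advantage straddles a boundary 4/5 of the time; with a realistic ~30ms tick it drops to roughly 4/30, which is the crux of the disagreement above.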
Most server ticks are nowhere near 5ms. Quake 3 ran on 20-30 tick, CS:S/CS:GO runs on 33 by default and up to 66 if you run on high quality servers. 100 tick was somewhat common in CS:S. Some really expensive servers claimed 500 tick but I never bought it. Either way no one's client would update that fast.
Furthermore, if you watch pro matches, you'll quickly realize their skill has nothing to do with having the fastest reaction time. Once you get to a certain skill level, it all comes down to game knowledge. Having a consistent 4ms advantage is absolutely negligible.
If you get a chance, demo a 120hz monitor setup and spin around quickly in an FPS. It's quite noticeable. It almost feels extra surreal a la the Hobbit at 48fps until you get used to it.
What is the real size of the market of gamers who upgrade with every new generation of hardware? I've gotta say, I know many gamers (though no professional ones), and none of them upgrade that often. It's more like once every 2-3 years at the most.
Hardcore gamers are not a great source of income; however, they are a great marketing resource for Nvidia. They are very influential on others when choosing products. They also represent a large portion of the online review industry.
Having the crown for best graphics card even translates to sales on low-end laptops.
Exactly, especially in the Twitch.tv and eSport era. Sponsored players flaunt their hardware, often linking to Amazon product pages or giving hardware away in their Twitch channels. These professional players have tens of thousands of viewers, and thousands of subscribers, on Twitch. There's a lot of marketing to be had.
1. The game decides what happened since the last frame: an opponent appeared on the screen
2. Game -> GPU (3D geometry)
3. GPU -> Frame buffer (geometry is rasterized into pixels)
4. Another part of the GPU handles the DisplayPort protocol... Frame buffer -> DisplayPort -> Monitor
5. This is where G-sync works. Based on all the images supplied on the NVidia site, the $100 G-sync board is a drop-in _replacement_ for the controller board in your monitor. So obviously, it's only guaranteed to work on the ASUS VG248QE. Monitor's DisplayPort input -> Frame buffer
6. Where G-sync shines: a vanilla controller board in your monitor should not buffer very much of the pixel data. But they sometimes buffer waaaay more than necessary. To oversimplify, you could receive 1 row of pixels on the DisplayPort and send them out as analog signals to the transistors in the LCD panel while filling up the buffer again for the next line. NVidia uses something about the vertical blank packet to tell the board that a new frame is coming _now_.
7. The transistors react to the change in voltage and either block light or allow it to pass through.
8. You see the change and shoot your opponent.
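Since the steps above form a serial pipeline and latency is additive, the click-to-photon figure is just the sum of the stages. A sketch with purely hypothetical, illustrative per-stage numbers (not measurements of any real setup):

```python
# Hypothetical per-stage latencies (ms) for the click-to-photon path
# sketched in steps 1-8; the values are illustrative, not measured.
pipeline = {
    "game simulation":           10.0,  # step 1
    "CPU -> GPU submission":      1.0,  # step 2
    "rasterization":              6.0,  # step 3
    "scanout over DisplayPort":   7.0,  # steps 4-6
    "LCD pixel response":         5.0,  # step 7
}

total = sum(pipeline.values())
for stage, ms in pipeline.items():
    print(f"{stage:26s} {ms:5.1f} ms")
print(f"{'total (before the human)':26s} {total:5.1f} ms")
```

The point is that no stage can hide another's latency; shaving milliseconds anywhere (which is what G-sync does at the scanout stage) reduces the total one-for-one.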
Because of the way the liquid crystals respond to voltage, the analog signals to the panel are anything but simple [1]. The display can't just be left "on" and can't be expected to instantly react to changes. So the G-sync board has to be panel-specific for the Asus display, but is smarter than your average display controller.
The oversimplified example above breaks down because the transistors embedded in the panel need a varying signal, so the controller is actually driving the panel at rates much higher than 144Hz per full frame. This way, each transistor experiences an average voltage something like [1] caused by the rapid updates coming from the controller. Separate LCD driver chips disappeared over 10 years ago; now all LCD panels are designed to be driven by a high-frequency digital signal that averages out to the voltage the panel needs. In car audio it's called a "1-bit DAC," but inside an LCD it's called a "wire." :)
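The "1-bit DAC" idea is just duty-cycle averaging: a high-frequency stream of full-on/full-off pulses whose running average approximates the analog voltage the panel needs. A minimal first-order sigma-delta style sketch (the function name and parameters are illustrative, not from any real driver):

```python
def pwm_average(target, vmax=1.0, cycles=1000):
    """Emit a 1-bit high/low stream whose long-run average approximates
    `target` volts, using a first-order accumulate-and-fire scheme."""
    acc = 0.0
    total = 0.0
    for _ in range(cycles):
        acc += target
        if acc >= vmax:      # emit a 'high' pulse
            acc -= vmax
            total += vmax
        # else emit 'low', which contributes 0
    return total / cycles

print(pwm_average(0.3))   # averages out to ~0.3 V
```

The panel's capacitance and the liquid crystal's slow response do the low-pass filtering for free, which is why the "DAC" can be just a wire.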
Interestingly, G-Sync may actually be capable of shortening the panel's life. Since it is capped at 144Hz which is the panel's maximum rated speed, it may be perfectly safe. But any time you go and change the analog signals to the panel there could be harmful effects: slow changes in color calibration, ghosting, or even dead pixels.