It's not really about the latency (5ms vs 1ms is negligible); it's about the pixel response, to reduce or eliminate ghosting and the other artifacts of an LCD's persistent pixels. The slower the pixels update, the worse the ghosting, yet interestingly enough, no pixel response time is fast enough to eliminate it. The real problem with ghosting turned out to be precisely the pixel persistence. Even more interesting is that someone discovered a hack for modern 3D monitors like the ASUS mentioned that completely eliminates ghosting: the strobing backlight functionality needed for 3D also eliminates ghosting when applied to 2D. I currently have this setup and it's exactly like using a CRT. A flat, light, 1920x1080 CRT. It's beautiful.
He's actually completely wrong. Persistence is about image quality, and can be mitigated by filtering that hardcore gamers always turn off, because it costs them latency.
Reducing latency isn't about how noticeable it is. Latency can be completely impossible for you to detect and still hurt you.
Input lag is the time from providing some input, such as clicking with your mouse, to getting feedback of that event on the screen. Since the clicking is prompted by things happening on the screen, input lag acts as a command delay on everything you do. The most interesting feature of latency is that it is additive. It doesn't matter how fast or slow each part of the system is; none of them can hide latency from another. Even if the game simulation adds 150ms and your stupid wireless mouse adds 15ms, the 2ms added by the screen still matter just as much.
The second mental leap is that the human controlling the system can also be considered just another part of the pipeline adding latency. Consider a twitch shooter, where two players suddenly appear on each other's screens. Who wins depends on who first detects the other guy, successfully aims at him, and pulls the trigger. In essence, it's a contest between the total latency (simulation/cpu + gpu + screen + person + mouse + simulation) of one player and that of the other. Since all top-tier human players have latencies really close to one another, even minute differences, 2ms here or there, produce real, detectable effects.
This is completely wrong. When even the fastest human reaction time is on the order of 200ms, 5ms vs 1ms of monitor input lag has no effect on the outcome. Also consider that 5ms is within the snapshot time that servers run on, so +/- 5ms is effectively simultaneous to the server on average.
Pixel persistence is not about image quality and cannot be mitigated by anything, except turning off the backlight at an interval in sync with the rate at which the image is updated. This is how CRTs work, and that's why they had no ghosting effects. The 3D graphics driver hack I mentioned does exactly that for 3D-enabled LCD monitors.
People can notice input latencies that are many times smaller than their reaction time. 200ms of input latency is going to be noticeable and bothersome to basically everyone for even basic web browsing tasks. Most gamers will notice more than 2-3 frames of latency, and even smaller latencies will be noticed in direct manipulation set-ups like touchscreens and VR goggles where the display has to track 1:1 the user's physical movements.
I think you misunderstood my point. In terms of actual advantage, 1ms vs 5ms is negligible, considering the fact that human reaction time is 200ms. So in the case of shooting someone as they pop out from behind a corner, the 200ms reaction time + human variation + variation in network latency + discrete server ticks will absolutely dominate the effects.
I definitely agree that small latencies can be noticed, even latencies approaching 5ms (though not 5ms itself; I've seen monitor tests that showed this).
> I think you misunderstood my point. In terms of actual advantage, 1ms vs 5ms is negligible, considering the fact that human reaction time is 200ms.
You did not understand the point of my post. The quantity that matters is total latency. How long a human takes to react is completely irrelevant to what level of latency has an effect. Whether the average human reaction time is 1ms or 1s doesn't matter. All that matters is that your loop is shorter than his, and since your reaction time is very near his, any advantage counts.
> the 200ms reaction time + human variation + variation in network latency + discreet server time, will absolutely dominate the effects.
Server tick time is the same for everyone. Top-level gaming tournaments are held on LANs, where the players typically make sure that the network latency from their machine to the server is no greater than anyone else's. However, none of that matters to the question at hand.
Assume that the total latency of the system, including the player, can be approximated as the sum of the stages above (reaction time, input, rendering, display lag), and assume all of them are normally distributed around some base value, except the display lag. Writing each term as rand(midpoint, standard deviation), you have:
rand(200, 20) + rand(20, 5) + rand(16, 2) + 15
while I have:
rand(200, 20) + rand(20, 5) + rand(16, 2) + 5
The total latency is utterly dominated by the human processing time. Yet if we model this statistically and assume that the lower total latency wins, the one with the faster screen wins 63% of the time. That's enough of an edge that people pay money for it.
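In case anyone wants to check that figure, here's a quick Monte Carlo sketch of the model above; the distributions and numbers are exactly the ones listed, and the labels I've put on the three random terms are just guesses at what they stand for:

    # Monte Carlo sketch of the duel model above. The numbers and the
    # normal-distribution assumption come from this thread; the stage
    # labels on the random terms are guesses.
    import random

    TRIALS = 1_000_000

    def total_latency(display_lag_ms):
        return (random.gauss(200, 20)   # human reaction time
                + random.gauss(20, 5)   # e.g. input/simulation lag
                + random.gauss(16, 2)   # e.g. one ~60Hz frame of rendering
                + display_lag_ms)       # fixed display lag

    fast_wins = sum(total_latency(5) < total_latency(15) for _ in range(TRIALS))
    print(f"5ms screen wins {100 * fast_wins / TRIALS:.1f}% of duels")  # ~63%

Analytically it's the probability that a normal variable with mean 10 and standard deviation sqrt(2*(20^2 + 5^2 + 2^2)) ≈ 29 is positive, which works out to about 63%.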
No, I understood your point; I just don't agree that it results in any meaningful advantage. What you didn't model is that the server does not process packets immediately as they are received. They are buffered and processed in a batch on each server tick. If the two packets from different players don't arrive on opposite sides of a tick boundary, the server effectively considers them simultaneous.
And remember, we're comparing 1ms vs 5ms, so the difference here is 4ms. I would like to see what percentage advantage someone gets in this setup. Even 63% isn't significant when skill comes down to game knowledge rather than absolute reaction time. People will pay for smaller/bigger numbers, sure, but that doesn't mean there is anything practically significant about it.
But it doesn't average out. The 5ms player is continuously 4ms behind the other player. As the poster above explained, the times add up, so you have 200ms + one to five ms. If the server tick is as short as 5ms, the problem is even worse: with exactly the same reactions, player A will land in an earlier tick 4 out of 5 times. I don't know how often it matters, but I'd expect top players to have pretty similar reaction times. So say two opponents both have reaction times between 200 and 220ms; constantly giving one of them an extra 4ms definitely sounds like it will have an effect.
edit: Or in other words, it depends on how often the opponents' reactions fall within 4ms of each other. That certainly depends on the game and the players.
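To put rough numbers on that 4-out-of-5 claim, here's a sketch assuming inputs land uniformly at random within a tick and the server batches everything received during a tick, as described above:

    # How often does a constant 4ms head start land in an earlier server tick?
    # Assumes arrival times are uniform within a tick and the server batches
    # per tick, so only crossing a tick boundary matters.
    import random

    def earlier_tick_fraction(tick_ms, advantage_ms=4, trials=1_000_000):
        wins = 0
        for _ in range(trials):
            t_slow = random.uniform(0, tick_ms)   # slower player's arrival within the tick
            wins += (t_slow - advantage_ms) < 0   # faster player fell into the previous tick
        return wins / trials

    for tick_ms in (5, 15, 30):                   # ~200, ~66 and ~33 ticks per second
        print(tick_ms, earlier_tick_fraction(tick_ms))
    # Roughly 0.80, 0.27 and 0.13: the 4-out-of-5 figure holds for a 5ms tick,
    # but drops off quickly at realistic tick lengths.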
Most server ticks are nowhere near 5ms. Quake 3 ran at 20-30 ticks per second, and CS:S/CS:GO runs at 33 by default and up to 66 if you play on high quality servers. 100 tick was somewhat common in CS:S. Some really expensive servers claimed 500 tick, but I never bought it. Either way, no one's client would update that fast.
Furthermore, if you watch pro matches, you'll quickly realize their skill has nothing to do with having the fastest reaction time. Once you get to a certain skill level, it all comes down to game knowledge. Having a consistent 4ms advantage is absolutely negligible.
If you get a chance, demo a 120Hz monitor setup and spin around quickly in an FPS. It's quite noticeable. It almost feels extra surreal, a la the Hobbit at 48fps, until you get used to it.
I wonder if that latency is actually noticeable to them, or if this is the same market as the audiophile market that sells gold-plated cables at a 100x markup.