Reducing latency is much harder than increasing bandwidth.
At a certain point you're also limited by the speed of light: round-trip latency for halfway across the world can't physically be less than ~133ms, and over fiber it's closer to ~200ms (unless our knowledge of physics advances and SoL is no longer a limit)
I think it was Microsoft that proposed an approach. They'd modify games to continually speculatively execute and render every user input. So that might mean they render you beginning to run left/right/forward/back, as well as jumping and shooting. When you actually change the input, the local device can switch streams, and speculation starts all over again.
It probably works best if the game engine cooperates, but that's not strictly necessary. You could fork the process at the OS level and feed each possible user input to a different copy, with no cooperation from the game itself. (Though I admit this might be tricky on current hardware and with heavy games.) Given enough compute and bandwidth, you could do this continually.
In theory, with unlimited compute/bandwidth this means you can have local latency (just the cost of input/stream switching), because you could speculatively execute every possible input to the game, all the time, out to the latency duration. In practice, it'll probably prune things based on the likely inputs and only speculate a little way out. That's probably enough to provide a smooth experience for most users who aren't playing competitively.
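A toy sketch of what that speculation loop could look like, assuming a small set of discrete inputs and a one-tick lookahead (my own simplification; the real system would speculate several frames ahead and stream rendered video, not state objects):

```python
import copy

# Hypothetical discrete input set for illustration.
INPUTS = ["left", "right", "forward", "back", "jump", "shoot", "idle"]

def simulate(state, user_input):
    """Advance the game state by one tick for a given input (stand-in for a real engine step)."""
    new_state = copy.deepcopy(state)
    new_state["history"].append(user_input)
    return new_state

def speculate(state):
    """Server side: pre-compute (and, in the real system, pre-render) every input branch."""
    return {i: simulate(state, i) for i in INPUTS}

def client_tick(branches, actual_input):
    """Client side: switching to the matching branch costs only local time, not a network round trip."""
    return branches[actual_input]

state = {"history": []}
for pressed in ["forward", "forward", "jump", "shoot"]:
    branches = speculate(state)   # computed ahead of time, hiding the round-trip latency
    state = client_tick(branches, pressed)

print(state["history"])           # ['forward', 'forward', 'jump', 'shoot']
```

Pruning would just mean speculating over a likely subset of INPUTS instead of all of them.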
If you think about a game as a mapping from a limited set of user inputs to a 2D image, some optimizations start to suggest themselves, I suppose.
But that sounds almost impossibly computationally expensive for 3D games and the like. Furthermore, most game inputs aren't discrete but continuous, which makes the problem even harder.
They tested with Doom 3 and Fable 3. I don't recall the specifics, but I'm gonna guess that the actions people take are really quite limited, so with a bit of work you can probably predict what they're going to do well enough to make things playable.
Technically you can get it down to ~85ms without going beyond any known physics; you just need to figure out a way to transmit information through the Earth rather than around it.
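The figures check out with some back-of-the-envelope arithmetic (vacuum speed of light, great-circle path vs. a straight chord through the planet):

```python
# Sanity check on the figures in this thread.
C_KM_S = 299_792                  # speed of light in vacuum, km/s
EARTH_CIRCUMFERENCE_KM = 40_075
EARTH_DIAMETER_KM = 12_742

half_way_around = EARTH_CIRCUMFERENCE_KM / 2              # ~20,000 km one way along the surface
rtt_around_ms = 2 * half_way_around / C_KM_S * 1000       # ~133 ms round trip in vacuum
rtt_fiber_ms = rtt_around_ms * 1.5                        # fiber is roughly 2/3 c -> ~200 ms
rtt_through_ms = 2 * EARTH_DIAMETER_KM / C_KM_S * 1000    # ~85 ms straight through the planet

print(f"around the surface (vacuum): {rtt_around_ms:.0f} ms")
print(f"around the surface (fiber):  {rtt_fiber_ms:.0f} ms")
print(f"through the Earth (vacuum):  {rtt_through_ms:.0f} ms")
```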
So the next big breakthrough in data transmission will be neutrino rays....
>>unless our knowledge of physics advances and SoL is no longer a limit
The latency of the human mind is around the same. There are lots of tricks like predicting the future game state that can result in a better user experience.
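One common example of such a trick is dead reckoning / client-side prediction: the client extrapolates a remote player's position from the last update so motion looks smooth despite the latency. A minimal sketch (my own toy example, not any particular engine's API):

```python
def predict_position(last_pos, last_vel, seconds_since_update):
    """Extrapolate where the remote player probably is right now."""
    x, y = last_pos
    vx, vy = last_vel
    return (x + vx * seconds_since_update, y + vy * seconds_since_update)

# Last server update said the player was at (10, 5) moving at (3, 0) units/s,
# and that update is 150 ms old by the time we render this frame.
print(predict_position((10, 5), (3, 0), 0.150))  # (10.45, 5.0)
```

Real engines also have to smooth out the correction when the next authoritative update arrives and the prediction turns out to be wrong.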
UI response time should be at most 100 ms [1], as anything more than that feels very noticeably laggy.
Actual perception times are much lower than that, about 13ms [2]. You can see the difference for yourself by comparing a 30FPS (33ms/frame) video to a 60FPS (16ms/frame) one [3], and the effect is much greater when you're actually providing the inputs.
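Putting those numbers side by side (figures taken from the comment above, nothing new):

```python
PERCEPTION_MS = 13   # ~13 ms image perception threshold [2]
UI_BUDGET_MS = 100   # classic "feels responsive" UI budget [1]

for fps in (30, 60, 120):
    frame_ms = 1000 / fps
    comparison = "above" if frame_ms > PERCEPTION_MS else "at or below"
    print(f"{fps} FPS = {frame_ms:.1f} ms/frame; "
          f"{comparison} the ~{PERCEPTION_MS} ms perception threshold")
```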
With local rendering, when you shoot a bullet you see the shot after the local hardware latency (~50ms) and get a confirmed kill after local hardware latency + round-trip time (say another ~50ms, so ~100ms total).
If the rendering is remote, the time until you see your own shot becomes ~100ms instead of ~50ms.
As long as the enemy player is remote, their position (and thus confirming a kill) will inevitably be delayed by the network latency, whether rendering is local or remote. That won't change.
The only thing that will change with remote rendering is that your own moves are also going to be delayed, which is certainly annoying, I agree.
But on the positive side, this ensures consistency between the view and the model, that is: you won't try to shoot at a player who isn't actually where you see him. That happens a lot with local rendering.
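To make the trade-off concrete, here's a back-of-the-envelope comparison using the numbers above (~50 ms local pipeline, ~50 ms round trip; both are assumptions from this thread, not measurements):

```python
LOCAL_PIPELINE_MS = 50   # input -> render -> display on local hardware (assumed)
RTT_MS = 50              # round trip to the game server (assumed)

def local_rendering():
    see_own_shot = LOCAL_PIPELINE_MS                 # drawn immediately on the client
    confirmed_kill = LOCAL_PIPELINE_MS + RTT_MS      # needs the server round trip
    return see_own_shot, confirmed_kill

def remote_rendering():
    see_own_shot = LOCAL_PIPELINE_MS + RTT_MS        # even your own action goes through the server
    confirmed_kill = LOCAL_PIPELINE_MS + RTT_MS      # same path, so no extra delay vs. your own shot
    return see_own_shot, confirmed_kill

print("local rendering :", local_rendering())   # (50, 100)
print("remote rendering:", remote_rendering())  # (100, 100)
```

The kill confirmation arrives at the same time either way; only your own feedback gets slower with remote rendering.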
I've been hearing for 10 years that gigabit fiber is coming everywhere. It's taking way longer than they say, and it still doesn't resolve the latency issue if you're far from the server.
The latency of in-home streaming is far lower than out-of-home streaming, unless the server is just next door to where you live.