> You're always bound by the same latency - your input goes to the server and server returns world state for your PC to render
That's untrue of most games. Most games will accept inputs immediately on the client, and only correct from the server if things get significantly out of sync ("rubber banding"). Dead reckoning is both hard and super important.
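To make that concrete, here's a minimal sketch of client-side prediction with server reconciliation. The types and numbers (`InputCmd`, `State`, the snap threshold) are made up for illustration; real netcode also has to interpolate remote players, dead-reckon them between snapshots, and so on.

```cpp
#include <cstdint>
#include <deque>
#include <cmath>

// Hypothetical types for illustration only.
struct InputCmd { uint32_t seq; float move_x, move_y; };   // one tick of player input
struct State    { float x = 0, y = 0; };                   // player state

constexpr float kTickDt = 1.0f / 60.0f;
constexpr float kSpeed  = 5.0f;

State simulate(State s, const InputCmd& in) {              // same rules on client and server
    s.x += in.move_x * kSpeed * kTickDt;
    s.y += in.move_y * kSpeed * kTickDt;
    return s;
}

struct PredictingClient {
    State predicted;                 // what we render immediately
    std::deque<InputCmd> pending;    // inputs the server hasn't acknowledged yet

    // Apply local input right away instead of waiting a round trip.
    void onLocalInput(const InputCmd& in) {
        pending.push_back(in);
        predicted = simulate(predicted, in);
    }

    // Server says: "after processing your input `ackSeq`, your state is `server`".
    void onServerSnapshot(uint32_t ackSeq, const State& server) {
        while (!pending.empty() && pending.front().seq <= ackSeq)
            pending.pop_front();                 // drop acknowledged inputs
        State corrected = server;
        for (const InputCmd& in : pending)       // replay unacknowledged inputs on top
            corrected = simulate(corrected, in);
        // Only snap ("rubber band") if the error is large; otherwise blend it away.
        float err = std::hypot(corrected.x - predicted.x, corrected.y - predicted.y);
        predicted = (err > 1.0f) ? corrected
                                 : State{predicted.x + 0.1f * (corrected.x - predicted.x),
                                         predicted.y + 0.1f * (corrected.y - predicted.y)};
    }
};
```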
> As for economics I already said you can exploit the shared state very much in games if you rework the way rendering works. Right now games focus on camera space rendering because they only care about 1 view output and view space effects are cheapest in this scenario.
The way you say this makes me think you have no idea how rendering works. There is no rendering without a camera. The idea doesn't even make sense.
> recomputing lighting/shadows, animation, particle effects, etc
Particle effects are dependent on graphics card bandwidth and fill rate. No benefit of shared state. Animation is done in vertex shaders. No benefit of shared state. Lighting and shadows are done through GPU buffers. No benefit of shared state.
The only possible system where shared state could (maybe) be useful in the way you describe is some sort of massive ray tracing operation. That wouldn't have anything to do with GPUs, and if you're talking about real time ray tracing the economics of this just got sillier. Now you're basically talking about a super computer cluster.
> I have no doubt this is the future of VR
VR is BY FAR the most sensitive to even minor latency. There's no way you're doing VR over a network.
>That's untrue of most games. Most games will accept inputs immediately on the client, and only correct from the server if things get significantly out of sync ("rubber banding"). Dead reckoning is both hard and super important.
Most games aren't MMOs
>The way you say this makes me think you have no idea how rendering works. There is no rendering without a camera. The idea doesn't even make sense.
What a narrow-minded view. There are data structures that can store geometry and lighting information in world space. For example, you can have the world represented by a sparse voxel data structure and calculate lighting in world space; camera rendering is then just raycasting into that data structure, which is the same for all views. Animation and particles are about updating the world geometry.
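Roughly what I mean, as a toy sketch: a dense grid standing in for a real sparse voxel structure, diffuse lighting only, all names made up. Lighting is stored once per voxel in world space, and any camera just marches rays into the same grid.

```cpp
#include <cstdint>
#include <vector>
#include <cmath>

// Toy world-space structure: a dense grid standing in for a sparse voxel octree.
// Each voxel stores occupancy plus lighting computed once, in world space.
struct Voxel {
    bool  solid = false;
    float radiance[3] = {0, 0, 0};   // diffuse lighting, shared by every viewer
};

struct VoxelWorld {
    int n;                            // grid is n*n*n voxels of unit size
    std::vector<Voxel> cells;
    explicit VoxelWorld(int n) : n(n), cells(size_t(n) * n * n) {}
    Voxel&       at(int x, int y, int z)       { return cells[(size_t(z) * n + y) * n + x]; }
    const Voxel& at(int x, int y, int z) const { return cells[(size_t(z) * n + y) * n + x]; }
    bool inside(int x, int y, int z) const {
        return x >= 0 && y >= 0 && z >= 0 && x < n && y < n && z < n;
    }
};

// Camera-side work is only this: march a ray through the shared grid and
// return the lighting already stored in the voxel it hits.
// (Fixed-step marching for clarity; a real version would use a DDA traversal.)
bool raycast(const VoxelWorld& w, const float o[3], const float d[3], float outRgb[3]) {
    const float step = 0.25f;
    for (float t = 0; t < float(w.n) * 2.0f; t += step) {
        int x = int(std::floor(o[0] + d[0] * t));
        int y = int(std::floor(o[1] + d[1] * t));
        int z = int(std::floor(o[2] + d[2] * t));
        if (!w.inside(x, y, z)) continue;
        const Voxel& v = w.at(x, y, z);
        if (v.solid) {
            for (int i = 0; i < 3; ++i) outRgb[i] = v.radiance[i];
            return true;
        }
    }
    return false;
}
```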
>Particle effects are dependent on graphics card bandwidth and fill rate. No benefit of shared state. Animation is done in vertex shaders. No benefit of shared state. Lighting and shadows are done through GPU buffers. No benefit of shared state.
This is because current rendering systems are optimized for client-side rendering, which is my point. If you discard the notion that the only way to render 3D geometry is the GPU pipeline and triangle rasterization, you'll see that there are a lot of possibilities. Unfortunately, not a lot of research has been done, because rendering 3D polygons fit the constraints we had historically and is really robust; everything is optimized towards it.
>VR is BY FAR the most sensitive to even minor latency. There's no way you're doing VR over a network.
Which is why I said you could stream geometry data updates instead of video. That way your client re-renders to match camera movement, but the animation is streamed from the server.
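Something like this, with a made-up wire format: the server streams world-state deltas rather than frames, and the headset applies them and re-renders locally against the newest head pose every frame, so head rotation never waits on a round trip.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Made-up wire format for illustration: the server streams world-state deltas
// (transforms, poses, voxel edits), never video frames.
struct TransformUpdate { uint32_t objectId; float pos[3]; float rotQuat[4]; };
struct WorldDelta {
    uint64_t serverTick;
    std::vector<TransformUpdate> transforms;   // could also carry bone poses, voxel edits, ...
};

struct ClientScene {
    std::unordered_map<uint32_t, TransformUpdate> objects;
    void apply(const WorldDelta& d) {
        for (const auto& t : d.transforms) objects[t.objectId] = t;
    }
};

// Per-frame loop on the headset: animation arrives over the network at whatever
// latency it arrives at, but the view itself is re-rendered locally from the
// freshest head pose.
void vrFrame(ClientScene& scene, const WorldDelta* maybeDelta,
             const float headPose[7] /* pos + quat from the HMD tracker */) {
    if (maybeDelta) scene.apply(*maybeDelta);   // world may be a few ticks stale
    // renderLocally(scene, headPose);          // hypothetical local renderer call
    (void)headPose;
}
```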
> What a narrow-minded view. There are data structures that can store geometry and lighting information in world space. For example, you can have the world represented by a sparse voxel data structure and calculate lighting in world space; camera rendering is then just raycasting into that data structure, which is the same for all views. Animation and particles are about updating the world geometry.
That only works for diffuse lighting. There is more than diffuse lighting. I'm not "narrow-minded"; I actually work on renderers for a living, so I know the actual data structures in use. Things like specular reflections and refractions are entirely dependent on where the viewer is, and calculating lighting information for an entire scene is way less efficient than calculating it for a viewer (see: how deferred renderers work).
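To spell out the view dependence: in a standard Blinn-Phong style shade, the diffuse term only needs the surface and the light, but the specular term needs the direction to that specific camera, so the same surface point shades differently for every viewer. Toy sketch, vector helpers and names made up:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

// Blinn-Phong split: the diffuse term depends only on the surface and the light
// (so it could be cached in world space), but the specular term needs the
// direction to this particular camera.
float shade(Vec3 p, Vec3 n, Vec3 lightPos, Vec3 cameraPos, float shininess) {
    Vec3 l = normalize(sub(lightPos, p));
    Vec3 v = normalize(sub(cameraPos, p));          // view-dependent input
    Vec3 h = normalize({l.x + v.x, l.y + v.y, l.z + v.z});
    float diffuse  = std::fmax(dot(n, l), 0.0f);    // same for every viewer
    float specular = std::pow(std::fmax(dot(n, h), 0.0f), shininess); // differs per viewer
    return diffuse + specular;
}
```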
> Unfortunately, not a lot of research has been done, because rendering 3D polygons fit the constraints we had historically and is really robust; everything is optimized towards it.
Huh? There's been decades of research into ray tracing and voxels. Believe me, people have put a lot of thought into how to optimize these things.
You can add a light ID list to the structure (e.g. 1 or 2 ints packed with byte IDs or whatever), essentially solving the light occlusion problem in world space. You can then do view-space specular, and you do transparent objects as a separate pass, just like you do with deferred.
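As a sketch (fixed cap of 8 light IDs per voxel, attenuation and BRDF left out, names made up): the occlusion question "which lights reach this voxel" is answered once in world space, and each client only loops over the IDs stored there.

```cpp
#include <cstdint>

// One voxel of the shared world structure. Visibility ("which lights actually
// reach this voxel") is resolved once, in world space, and stored as IDs.
struct LitVoxel {
    bool    solid = false;
    uint8_t lightIds[8];        // up to 8 visible lights, packable into 2 x uint32
    uint8_t lightCount = 0;
};

struct Light { float pos[3]; float color[3]; float intensity; };

// Hypothetical per-client shading: no shadow rays, no shadow maps; occlusion was
// already answered by whoever built the light lists. View-dependent terms
// (specular) would still be evaluated per client on top of this.
void shadeVoxel(const LitVoxel& v, const Light* lights, float outRgb[3]) {
    outRgb[0] = outRgb[1] = outRgb[2] = 0.0f;
    for (uint8_t i = 0; i < v.lightCount; ++i) {
        const Light& L = lights[v.lightIds[i]];
        for (int c = 0; c < 3; ++c)
            outRgb[c] += L.color[c] * L.intensity;   // attenuation/BRDF omitted for brevity
    }
}
```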
>Things like specular reflections and refractions are entirely dependent on where the viewer is; and calculating lighting information for an entire scene is way less efficient than calculating it for a viewer (see: how deferred renderers work).
That's if you're rendering for a single view. My whole point is that if you're rendering for multiple clients, then solving lighting in world space makes that calculation shared, just like the G-buffer is an optimization for view space, with its own tradeoffs and workarounds.
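Back-of-the-envelope version of the argument, with placeholder numbers rather than measurements: the world-space lighting cost is paid once, the per-view cost is paid N times, so it becomes a question of where the crossover sits.

```cpp
// Back-of-the-envelope comparison of where the lighting work lives, for N clients
// watching the same scene. All numbers are placeholders, not measurements.
#include <cstdio>

int main() {
    const int    nClients          = 100;
    const double gbufferLightingMs = 4.0;  // per-view lighting resolve in a deferred renderer
    const double worldLightingMs   = 40.0; // lighting the whole scene once, in world space
    const double perViewResolveMs  = 1.0;  // per-client raycast/composite against shared data

    double perViewTotal = nClients * gbufferLightingMs;
    double sharedTotal  = worldLightingMs + nClients * perViewResolveMs;
    std::printf("per-view: %.0f ms, shared world-space: %.0f ms\n", perViewTotal, sharedTotal);
}
```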
>Huh? There's been decades of research into ray tracing and voxels. Believe me, people have put a lot of thought into how to optimize these things.
Compared to triangle rasterization it's nowhere near close. For example, I only saw a decent voxel skinning implementation a few years back, whereas real-time rendering is entirely built around rasterization and it's baked into the hardware pipeline.