That is different - they weren't using cloud rendering - they were just stuffing games into some sort of VM and streaming the output.
Cloud rendering would be something like you and me playing the same game while the server keeps only one instance of every rendering resource, to reduce memory overhead.
If you have shared-instance worlds (e.g. an MMO), you can then do shared-state effects like animation, advanced lighting, etc. and reuse those calculations for every client.
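As a rough illustration of the split being described (all names here are hypothetical, not from any real engine): per-frame work on the shared world instance, such as animation or lighting state, would run once, while each connected client's camera still pays for its own visibility pass and draw submission. A minimal sketch under those assumptions:

```python
# Hypothetical sketch: one shared world instance, per-camera culling/rendering.
from dataclasses import dataclass

@dataclass
class Camera:
    position: tuple        # camera position in world space (x, z)
    view_distance: float   # crude stand-in for a real view frustum

@dataclass
class SceneObject:
    position: tuple
    pose: int = 0          # stand-in for animated skeleton / lighting state

def update_shared_state(objects, frame):
    """Run ONCE per frame for the shared world instance:
    animation, physics, global lighting, and similar work."""
    for obj in objects:
        obj.pose = frame % 60   # pretend this is an expensive skinning update

def visible_to(camera, obj):
    """Per-camera visibility test (here just a distance cull).
    A real renderer would do frustum and occlusion culling per camera."""
    dx = obj.position[0] - camera.position[0]
    dz = obj.position[1] - camera.position[1]
    return (dx * dx + dz * dz) ** 0.5 <= camera.view_distance

def render_for_client(camera, objects):
    """Run PER CLIENT: culling and draw submission for that camera."""
    return [obj for obj in objects if visible_to(camera, obj)]

if __name__ == "__main__":
    world = [SceneObject(position=(x, 0)) for x in range(0, 100, 10)]
    clients = [Camera(position=(0, 0), view_distance=25),
               Camera(position=(90, 0), view_distance=25)]

    update_shared_state(world, frame=1)        # shared cost: paid once
    for i, cam in enumerate(clients):
        drawn = render_for_client(cam, world)  # per-camera cost: paid per client
        print(f"client {i}: {len(drawn)} objects drawn")
```

The point of the split is that resources and simulation state live once in memory, while the camera-dependent steps still scale with the number of viewers.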
I'm curious how much of a savings you could actually expect to gain from rendering a single scene for multiple cameras at once. My understanding is that, historically, a lot of the work in rendering a scene is camera-dependent, and that a lot of performance optimizations for rendering rely on being able to avoid computing things that aren't visible to the camera. Has that changed significantly over the years, or am I just wrong?