I'm curious how much of a saving you could actually expect from rendering a single scene for multiple cameras at once. My understanding is that, historically, much of the work in rendering a scene is camera-dependent, and that many rendering optimizations rely on avoiding computation for things that aren't visible to the camera. Has that changed significantly over the years, or am I just wrong?
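
To make concrete what I mean by camera-dependent work, here's a rough sketch (the `Camera`, `Plane`, and `visible` names are just made up for illustration, not anyone's actual engine code): the scene data is shared, but the visibility test, and everything downstream of it, runs once per camera frustum.

```cpp
#include <array>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// A plane where n.p + d >= 0 means "on the inside".
struct Plane { Vec3 n; float d; };

struct Sphere { Vec3 center; float radius; };

// Hypothetical camera: reduced to its six frustum planes.
struct Camera { std::array<Plane, 6> frustum; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True if the bounding sphere is at least partially inside every frustum plane.
static bool visible(const Sphere& s, const Camera& cam) {
    for (const Plane& p : cam.frustum) {
        if (dot(p.n, s.center) + p.d < -s.radius) return false;
    }
    return true;
}

int main() {
    std::vector<Sphere> sceneBounds; // one bounding sphere per object (shared by all views)
    std::vector<Camera> cameras;     // one frustum per view

    // The scene is set up once, but the cull pass (and the draw submission,
    // vertex transforms, rasterization, etc. that follow it) repeats per camera.
    for (size_t c = 0; c < cameras.size(); ++c) {
        size_t drawn = 0;
        for (const Sphere& s : sceneBounds) {
            if (visible(s, cameras[c])) ++drawn; // would record a draw for this view
        }
        std::printf("camera %zu: %zu objects survive culling\n", c, drawn);
    }
    return 0;
}
```

That per-camera loop is the part I'd expect to dominate, which is why I'm unsure how much sharing a single scene across cameras actually buys you.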