When you full screen this, it's crazy how tiny the area that spins is. For me it's like an inch or inch and a half on a 32 inch 4k display at a normal seated position.
(If I move my head closer it gets larger; if I move further away it gets smaller.)
That's crazy. I feel dumb for initially thinking it was somehow doing eye tracking to achieve this, despite having no such hardware installed.
I would be curious to see a similar thing that includes flashing. Anecdotally, my peripheral vision seems to be highly sensitive to flashing/strobing even if it is evidently poor at seeing fine details. Makes me think compression in the time domain (e.g. reducing frame rate) will be less effective. But I wonder if the flashing would "wake up" the peripheral vision to changes it can't normally detect.
It’s normal to be "more sensitive" to brightness differences in the peripheral areas compared to the fovea. The fovea has more color receptors (cones); the other areas have comparatively more monochromatic, brightness-sensitive receptors (rods). The overall receptor density in the fovea is also much higher.
Imagine if we could hook this into game rendering as well: have super high resolution models, textures, shadows, etc. near where the player is looking, and use lower LoDs elsewhere.
It could really push the boundaries of detail and efficiency if we could somehow do it in real time for something that complex. (Streaming video sounds a lot easier.)
Foveated rendering is already a thing. But since it needs to be coded for in the game, it's not really being used in PC games. Games designed for PlayStation with the PS VR2 in mind do use foveated rendering, since the developers know their games are being played on hardware that provides eye tracking.
That's foveated rendering. Foveated streaming, which is what's newly presented here, is a more general approach that can apply to any video signal, be it from a game, a movie, or a desktop environment.
They are complementary things. Foveated rendering means your GPU has to do less work, which means higher frame rates for the same resolution/quality settings. Foveated streaming is more about just being able to get the video data across from the rendering device to the headset. You need both to get great results, as either rendering or video transport could be a bottleneck.
Not quite: you can use it for game rendering, but with a Wi-Fi adapter you more importantly want to use it for the video signal, and only transfer high resolution in the area you're looking at. A 4K game (2048*2048 per eye, 2 screens) is about 25 Gbit/s uncompressed at 100 fps, which would stress even Wi-Fi 7. With foveated streaming you can probably get that down to 8 Gbit/s easily.
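For a quick sanity check on those numbers, here's a rough sketch; the 10-bit color depth, the 20% fovea share, and the 4x-per-axis peripheral downscale are my own assumptions, not figures from the parent comment:

    # Rough uncompressed-bandwidth estimate for a wireless VR video link.
    # Assumptions (mine, not from the thread): 30 bits per pixel (10-bit RGB),
    # no chroma subsampling.

    def bitrate_gbps(width, height, screens, fps, bits_per_pixel):
        """Uncompressed video bitrate in Gbit/s."""
        return width * height * screens * fps * bits_per_pixel / 1e9

    full = bitrate_gbps(2048, 2048, screens=2, fps=100, bits_per_pixel=30)
    print(f"full resolution, uncompressed: {full:.1f} Gbit/s")     # ~25.2 Gbit/s

    # Hypothetical foveated split: 20% of the pixels kept at full detail,
    # the rest carried at 1/16 the pixel count (4x downscale per axis).
    fovea_fraction = 0.20
    peripheral_scale = 1 / 16
    foveated = full * (fovea_fraction + (1 - fovea_fraction) * peripheral_scale)
    print(f"foveated, uncompressed:        {foveated:.1f} Gbit/s")  # ~6.3 Gbit/s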
Valve is applying it to the streamed view from the computer to reduce the bandwidth requirements; it's not actually doing foveated rendering in the game itself, because not all games support it.
Foveated streaming is just a bandwidth hack and doesn't reduce the graphics workload on the host computer the way foveated rendering does.
As a lover of ray/path tracing I'm obligated to point out: rasterisation gets its efficiency by amortising the cost of per-triangle setup over many pixels. This more or less forces you to do fixed-resolution rendering; it's very efficient at this, which is why even today with hardware RT, rasterisation remains the fastest and most power-efficient way to do visibility processing (under certain conditions). However, this efficiency starts to drop off as soon as you want to do things like stencil reflections, and especially shadow maps, to say nothing of global illumination.
While there are some recent-ish extensions to do variable-rate shading in rasterisation[0], this isn't variable-rate visibility determination (well, you can do stochastic rasterisation[1], but it's not implemented in hardware), and with ray tracing you can do as fine-grained a distribution of rays as you like (sketched below).
TL;DR for foveated rendering, ray tracing is the efficiency king, not rasterisation. But don't worry, ray tracing will eventually replace all rasterisation anyway :)
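To make the "as fine-grained a distribution of rays as you like" point concrete, here's a toy sketch of a gaze-dependent ray budget; the breakpoints and falloff curve are invented for illustration, not a published acuity model:

    # Toy foveated ray budget: primary-ray samples per pixel as a function of
    # eccentricity (angular distance from the gaze direction, in degrees).
    # Breakpoints and falloff are made-up illustration values.
    def samples_per_pixel(eccentricity_deg, max_spp=16, min_spp=1):
        if eccentricity_deg < 2.0:       # foveal region: full budget
            return max_spp
        if eccentricity_deg > 40.0:      # far periphery: minimum budget
            return min_spp
        # Quadratic falloff in between.
        t = (eccentricity_deg - 2.0) / (40.0 - 2.0)
        return max(min_spp, round(max_spp * (1.0 - t) ** 2))

    for ecc in (0, 5, 10, 20, 30, 45):
        print(f"{ecc:>2} deg from gaze -> {samples_per_pixel(ecc):>2} samples/pixel")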
I think you could do foveated rendering efficiently with rasterization if you "simply" render twice at two different resolutions: a low-resolution render over the entire FOV, and a higher-resolution render in the fovea region. You'd have overlap, but overall it should be fewer pixels rendered.
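Rough pixel math for that two-pass idea; the half-resolution peripheral pass and the quarter-of-each-axis fovea inset below are example numbers I picked, not something from the comment:

    # Two-pass foveated rasterization, per eye, per frame:
    #   pass 1: whole FOV at reduced resolution
    #   pass 2: fovea inset re-rendered at native resolution
    # All the numbers here are example assumptions.
    native_w = native_h = 2048           # native per-eye resolution
    periph_scale = 0.5                   # pass 1 at half resolution per axis
    fovea_frac = 0.25                    # inset covers 25% of each axis

    full_pass = native_w * native_h
    low_pass = int(native_w * periph_scale) * int(native_h * periph_scale)
    fovea_pass = int(native_w * fovea_frac) * int(native_h * fovea_frac)

    two_pass = low_pass + fovea_pass
    print(f"single native pass: {full_pass:,} px")
    print(f"two-pass foveated:  {two_pass:,} px "
          f"({two_pass / full_pass:.0%} of native, overlap included)")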
I believe the standard way is to downgrade the sampling density outside the area you're looking at, see https://docs.vulkan.org/samples/latest/samples/extensions/fr... . Optimally you could attach multiple buffers with different resolutions covering different parts of clip space, saving VRAM bandwidth. Sadly this is not supported currently to my knowledge, so you have to write to a single giant buffer with lower sample density outside the detail area, and then just downsample it for the coarse layer.
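As an illustration of that approach, here's a small CPU-side sketch (mine, not code from the linked sample) that builds the per-tile rate image you'd upload as a fragment shading rate attachment. The (log2 width << 2) | log2 height byte packing matches the VRS encoding used by Vulkan and D3D12, but the 16x16-pixel tile size and the radii are assumed values:

    # Build a per-tile shading-rate image for a fragment-shading-rate attachment.
    # Each byte encodes a fragment size as (log2_width << 2) | log2_height:
    # 0x0 = 1x1, 0x5 = 2x2, 0xA = 4x4. Tile size, resolution and radii are
    # example assumptions.
    RATE_1X1, RATE_2X2, RATE_4X4 = 0x0, 0x5, 0xA

    def build_rate_image(width, height, gaze_x, gaze_y, tile=16):
        tiles_x = (width + tile - 1) // tile
        tiles_y = (height + tile - 1) // tile
        image = bytearray(tiles_x * tiles_y)
        for ty in range(tiles_y):
            for tx in range(tiles_x):
                # Distance from the tile center to the gaze point, in pixels.
                cx, cy = (tx + 0.5) * tile, (ty + 0.5) * tile
                d = ((cx - gaze_x) ** 2 + (cy - gaze_y) ** 2) ** 0.5
                if d < 300:          # fovea: full shading rate
                    rate = RATE_1X1
                elif d < 700:        # near periphery: one shade per 2x2 block
                    rate = RATE_2X2
                else:                # far periphery: one shade per 4x4 block
                    rate = RATE_4X4
                image[ty * tiles_x + tx] = rate
        return image, tiles_x, tiles_y

    rates, tw, th = build_rate_image(2048, 2048, gaze_x=1024, gaze_y=1024)
    print(f"{tw}x{th} rate texels, {rates.count(RATE_1X1)} at full rate")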
Linus the shrill/yappy poodle and his channel are less than worthless IMO.