
> At some point it will be computationally cheaper to predict the next pixel than to classically render the scene,

This is already happening to some extent. Some games struggle to reach 60 FPS at 4K resolution with maximum graphics settings using traditional rasterization alone, so technologies like DLSS 3 frame generation are used to improve performance.
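To make the idea concrete: here's a toy sketch of frame interpolation as a naive linear blend between two rendered frames. Real frame generation (e.g. DLSS 3) uses motion vectors and a trained network rather than a pixel average; this is only meant to illustrate the "predict an intermediate frame instead of rendering it" concept.

```python
import numpy as np

def interpolate_frame(prev_frame: np.ndarray,
                      next_frame: np.ndarray,
                      t: float = 0.5) -> np.ndarray:
    """Naively blend two frames at time t in [0, 1].

    A stand-in for learned frame generation: instead of rasterizing
    the in-between frame, we synthesize it from its neighbors.
    """
    blended = (1.0 - t) * prev_frame.astype(np.float32) \
              + t * next_frame.astype(np.float32)
    return blended.astype(prev_frame.dtype)

# Two tiny 2x2 grayscale "frames": dark and bright.
a = np.zeros((2, 2), dtype=np.uint8)
b = np.full((2, 2), 200, dtype=np.uint8)
mid = interpolate_frame(a, b)  # halfway blend: every pixel is 100
```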



Instead of the binary of traditional games vs AI, it's worth thinking more about hybrids.

You could have a stripped down traditional game engine, but without any rendering, that gives a richer set of actions to the neural net. Along with some asset hints, story, a database (player/environment state) the AI can interact with, etc. The engine also provides bounds and constraints.

Basically, we need to work out the new boundary between engine and AI. Right now it's "upsample and interpolate frames", but as AI gets better, what does that boundary become?
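One way to picture that boundary: the engine keeps authority over state and rules, and the AI only chooses among actions the engine exposes. A minimal sketch, with all names (`WorldState`, `Engine`, `ai_policy`) hypothetical and the "neural net" reduced to a placeholder function:

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    # The player/environment database the AI can read.
    player_pos: tuple = (0, 0)
    health: int = 100
    flags: dict = field(default_factory=dict)

class Engine:
    """Stripped-down engine: no rendering, just rules and bounds."""
    ACTIONS = {"move_north", "move_south", "rest"}

    def __init__(self):
        self.state = WorldState()

    def legal_actions(self):
        return self.ACTIONS

    def apply(self, action: str) -> WorldState:
        # The engine, not the AI, enforces constraints.
        if action not in self.ACTIONS:
            raise ValueError(f"illegal action: {action}")
        x, y = self.state.player_pos
        if action == "move_north":
            self.state.player_pos = (x, min(y + 1, 10))  # world bound
        elif action == "move_south":
            self.state.player_pos = (x, max(y - 1, 0))
        elif action == "rest":
            self.state.health = min(self.state.health + 5, 100)
        return self.state

def ai_policy(state: WorldState, actions) -> str:
    # Placeholder for the neural net: it would pick an action
    # (and render the scene) from the state the engine hands it.
    return "move_north"

engine = Engine()
for _ in range(3):
    engine.apply(ai_policy(engine.state, engine.legal_actions()))
```

The point of the sketch is the division of labor: the AI can be as creative as it likes about presentation, but it can only act through a vocabulary the engine validates.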


I think that has more to do with poor optimization than with the actual level of graphical fidelity requiring it.



