I think the most likely path forward for commercialization/widespread use is to use AI as a post-processing filter for low-poly games. Imagine if you could take low-quality/low-poly assets, run them through a game engine to add some basic lighting, then pass the rendered frame through AI to get a photorealistic image. This would solve the most egregious cases of world inconsistency while still allowing for creative human fine-tuning. The trick will be getting the post-processor to run at a reasonable frame rate.
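To put the frame-rate constraint in concrete terms, here is a quick back-of-the-envelope budget calculation. The engine render time used below is an assumed illustrative figure, not a measurement:

```python
# Rough per-frame time budget for an AI post-processing pass.
# The 8 ms engine cost is an assumption for illustration only.

def post_process_budget_ms(target_fps: float, engine_ms: float) -> float:
    """Milliseconds left per frame for the AI filter after the engine's work."""
    frame_budget = 1000.0 / target_fps  # total ms available per frame
    return frame_budget - engine_ms

# At 60 fps the whole frame gets ~16.7 ms; if the engine spends ~8 ms on
# geometry and basic lighting, the AI pass must fit in the remainder.
remaining = post_process_budget_ms(60.0, 8.0)
print(f"{remaining:.2f} ms left for the AI pass")  # ~8.67 ms
```

For comparison, a single forward pass of a large image-generation model typically takes far longer than that on consumer hardware, which is why the post-processor would need to be heavily distilled or purpose-built for real time.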
Don’t we already have upscalers that are frequently used in games for this purpose? Maybe they could go further and get better, but I’d expect a model specifically designed to improve the quality of an existing image to be better and more efficient at doing so than an image-generation model retrofitted to the task.