
Dithering is super useful in dark scenes in games and movies.

By adding random noise to the screen, it makes the harsh transitions between bands of color imperceptible, and the dithering itself isn't perceptible either.
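The mechanism is simple: add noise of about one quantization step before rounding, and the rounding error averages out across pixels instead of snapping whole regions to the same band. A minimal sketch of the idea in plain Python (the function names and the 8-level palette are just for illustration):

```python
import random

def quantize(v, levels=8):
    """Quantize a value in [0, 1] to a coarse palette -- this is what
    causes visible banding in smooth gradients."""
    return round(v * (levels - 1)) / (levels - 1)

def quantize_dithered(v, levels=8):
    """Add noise of +/- half a quantization step before rounding, so the
    output dithers between the two nearest levels and its average over
    many pixels tracks the true value."""
    step = 1.0 / (levels - 1)
    noise = (random.random() - 0.5) * step
    return min(1.0, max(0.0, quantize(v + noise, levels)))

# A mid-gray that falls between palette levels: plain quantization snaps
# it to the nearest band, while the dithered version is correct on average.
v = 0.40
plain = quantize(v)
mean_dithered = sum(quantize_dithered(v) for _ in range(20000)) / 20000
```

With 8 levels, `quantize(0.40)` lands on 3/7 ≈ 0.429 for every pixel, while the dithered version flips between 2/7 and 3/7 in just the right proportion that the region reads as 0.40.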

I'm sure there are better approaches nowadays but in some of my game projects I've used the screen space dither approach used in Portal 2 that was detailed in this talk: https://media.steampowered.com/apps/valve/2015/Alex_Vlachos_...

It's only a 3 line function but the jump in visual quality in dark scenes was dramatic. It always makes me sad when I see streamed content or games with bad banding, because the fix is so simple and cheap!
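For reference, the widely circulated version of that function hashes the screen position with a few magic constants to get a cheap per-pixel, per-channel noise value. A rough Python transliteration (names mine; constants as commonly quoted from the slides, and the real shader also adds a time term for temporal variation):

```python
def screen_space_dither(x, y):
    """Cheap per-pixel pseudo-random offsets, one per color channel, each in
    [-0.5, 0.5). Derived from screen coordinates with magic constants, in the
    style of the Portal 2 / Vlachos GDC 2015 screen-space dither."""
    d = 171.0 * x + 231.0 * y  # dot(vec2(171, 231), screen_pos)
    return tuple((d / m) % 1.0 - 0.5 for m in (103.0, 71.0, 97.0))

def dither_pixel(rgb, x, y):
    """Add the dither at the scale of one 8-bit step, right before the
    quantization that would otherwise cause banding."""
    offsets = screen_space_dither(x, y)
    return tuple(
        max(0, min(255, round(c * 255.0 + o)))
        for c, o in zip(rgb, offsets)
    )
```

In the actual shader this runs per fragment and the offset is added to the color before the hardware quantizes it to 8 bits; the Python version folds that into one step at the 0-255 scale.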

One thing that's important to note: it's tricky to make dithering on/off comparisons, because resizing a screenshot of a dithered scene breaks the dithering unless one pixel in the image ends up corresponding exactly to one pixel on your screen.



It's not cheap for streaming. Dither noise is harder to compress and gets lost in the process anyway; a video codec is effectively a smart low-pass filter.


The AV1 codec can tell the decoder to synthesize fake film grain, so all the noise lost in compression can be added back.

Although I don't think it's very widely used; I don't know whether that's down to the encoders or the decoders.


It needs to be done by the client rather than baked into the actual video stream, otherwise it doesn't work at all. When done by the client it's cheap.


Not so cheap if your hardware decoder only supports 8-bit color, which is a common limitation of H.264 decoders in particular.



