
Does anyone know how long it takes per image? The only piece of information I could find was that the run script uses 8 GPUs, which suggests that it takes a while.


Probably quite a long time. That said, I do wonder whether you actually need to run this at the resolution of the output image. Since this is really just changing the tones in the image and not altering the details, you could probably optimize the algorithm heavily so that it works on high-resolution images quickly.

As an example, I used frequency separation to split the detail layer from the tone layer in the original, high-resolution stock photo of SF. From there I took the lower-res (25% size) output from this script and used it as my tone layer. The results are OK: http://i.imgur.com/oakLUiE.jpg

It has the same overall tones, and some of the sharpness is preserved at high resolution. My 30-second approach suffers from some edge glow, but I'm sure it could be greatly improved in an efficient, automated way.
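
For anyone who wants to try the same trick, here's a rough sketch of that frequency-separation recombination in Python with OpenCV. The file names, blur radius, and upsampling choices are illustrative assumptions, not the exact steps used above:

    # Sketch of the frequency-separation recombination described above.
    # Assumed inputs: the full-res original and the low-res stylized output.
    import cv2
    import numpy as np

    original = cv2.imread("sf_highres.jpg").astype(np.float32)
    stylized = cv2.imread("sf_stylized_lowres.jpg").astype(np.float32)

    # Split the original into a tone (low-frequency) layer and a
    # detail (high-frequency) layer with a Gaussian blur.
    tone = cv2.GaussianBlur(original, (0, 0), sigmaX=15)
    detail = original - tone

    # Upsample the stylized result to full resolution and blur it the
    # same way, so it contributes only tones, not its own soft detail.
    h, w = original.shape[:2]
    stylized_up = cv2.resize(stylized, (w, h), interpolation=cv2.INTER_CUBIC)
    stylized_tone = cv2.GaussianBlur(stylized_up, (0, 0), sigmaX=15)

    # Recombine: stylized tones + original detail.
    result = np.clip(stylized_tone + detail, 0, 255).astype(np.uint8)
    cv2.imwrite("sf_recombined.jpg", result)

The edge glow I mentioned comes from the blur bleeding tones across strong edges; an edge-aware filter in place of the Gaussian would likely reduce it.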


With the linked implementation (there is at least one fork rewriting the MATLAB bits in Python), the three steps needed per image probably amount to over an hour on a single K80. You also probably need the -backend cudnn and -cudnn_autotune flags to avoid running out of memory. Downscaling the images from the current ~700px to, say, 500px speeds up the process significantly as well. Still, it definitely takes longer than neural-style.
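
If it helps, a minimal sketch of that downscaling step with Pillow (the 500px target and file names are just assumptions for illustration):

    from PIL import Image

    img = Image.open("input.png")
    scale = 500 / max(img.size)  # shrink only if the longest side exceeds 500px
    if scale < 1:
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.LANCZOS)
    img.save("input_small.png")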



