
Except it is only worth doing if, after accounting for the time to load data into the GPU and copy the results back, it is still faster than total execution on the CPU.

It doesn't help that the GPU beats the CPU in raw compute if a plain SIMD approach on the CPU still beats the GPU's total execution time, transfers included.
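The tradeoff above can be sketched with a back-of-envelope model. This is an illustrative sketch, not a benchmark: the PCIe bandwidth, kernel time, and CPU time below are all assumed numbers, and the function names are hypothetical.

```python
# Back-of-envelope model: GPU offload only wins if transfer + kernel
# time beats the CPU's total time. All numbers here are assumptions.

def gpu_total_time(bytes_moved, pcie_gbps, kernel_s):
    """Round-trip transfer time over the bus plus kernel execution time."""
    transfer_s = bytes_moved / (pcie_gbps * 1e9)
    return transfer_s + kernel_s

def offload_wins(bytes_moved, pcie_gbps, kernel_s, cpu_s):
    """True only if the GPU's total (transfer + kernel) beats the CPU."""
    return gpu_total_time(bytes_moved, pcie_gbps, kernel_s) < cpu_s

# Example: move 1 GiB each way over an assumed ~25 GB/s effective link.
moved = 2 * (1 << 30)  # bytes in + bytes out
# Transfer alone is ~86 ms, so even a fast 10 ms kernel loses to a
# hypothetical CPU SIMD path that finishes in 50 ms.
print(offload_wins(moved, 25, kernel_s=0.010, cpu_s=0.050))  # False
```

The point the model makes: for transfer-dominated workloads, the kernel speedup is irrelevant until the data either stays resident on the GPU or the arithmetic intensity grows enough to amortize the copies.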



Especially if you're saving watts in the process, and not tying up a capital-intensive asset.


The "GPU as accelerator" vs. "GPU-native software" split. The former usually results in or from poor, generic architectures.



