
> and then compile the ONNX to the native format of the device.

I'm assuming you are talking about https://github.com/onnx/onnx-mlir?

In your experience, how much faster is a "compiled" ONNX model than the same model run through an ONNX runtime?
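For concreteness, the two paths I mean look roughly like this. A minimal sketch, assuming onnxruntime's Python API and onnx-mlir's PyRuntime bindings; "model.onnx", "model.so", and the input name "input" are placeholders, and the PyRuntime class name may vary by onnx-mlir version:

    import numpy as np

    # Path 1: interpret the model with a generic ONNX runtime.
    import onnxruntime as ort
    sess = ort.InferenceSession("model.onnx")
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    out = sess.run(None, {"input": x})[0]

    # Path 2: run the same model compiled ahead of time to a native
    # shared library (e.g. `onnx-mlir --EmitLib model.onnx`).
    from PyRuntime import OMExecutionSession
    compiled = OMExecutionSession("model.so")
    out_compiled = compiled.run([x])[0]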



For other people reading this:

Back in the day there was tfdeploy, which compiled TensorFlow graphs into NumPy matrix operations. In our synthetic tests we saw speedups of up to a factor of 50.
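Roughly, usage looked like this (a sketch from memory, assuming tfdeploy's Model API; the file and tensor names are placeholders). The exported model evaluates with plain NumPy, no TensorFlow session needed:

    import numpy as np
    import tfdeploy as td

    # Load a model that was previously exported from a TensorFlow
    # graph (via td.Model().add(tensor, session) and model.save()).
    model = td.Model("model.pkl")
    x, y = model.get("input", "output")

    # Evaluation runs as plain NumPy matrix operations.
    batch = np.random.rand(32, 784).astype(np.float32)
    result = y.eval({x: batch})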



