
Use cases like this are why Mojo isn't used in production, ever. What does Nvidia gain from switching to a proprietary frontend for a compiler backend they're already using? It's a legal headache.

Second-rate libraries like OpenCL had industry buy-in because they were open. They went through standards committees and cooperated with the rest of the industry (even Nvidia) to hear out everyone's needs. Lattner gave up on appealing to that crowd the moment he told Khronos to pound sand. Nobody should be wondering why Apple or Nvidia won't touch Mojo with a thirty-nine and a half foot pole.

Kernels now written in Mojo were all hand-written in MLIR, like in this repo. They made a full language because hand-writing MLIR isn't scalable; a sane language is totally worth it. Nvidia will probably end up buying them in a few years.

NVidia is perfectly fine with C++ and Python JIT.

CUDA Tile was designed precisely to give Python parity in writing CUDA kernels, acknowledging Python's relevance while offering researchers a path where they don't need to mess with C++.

It was announced at this year's GTC.

NVidia has no reason to use Mojo.
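
To give a feel for that Python JIT path, here's a minimal vector-add sketch using Numba's CUDA JIT (standing in for CUDA Tile, whose API I won't guess at; the kernel and names are illustrative):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_kernel(x, y, out):
        # One thread per element; cuda.grid(1) is the absolute thread index.
        i = cuda.grid(1)
        if i < out.size:
            out[i] = x[i] + y[i]

    x = np.arange(1 << 20, dtype=np.float32)
    y = np.ones_like(x)
    out = np.empty_like(x)

    # Launch as kernel[blocks_per_grid, threads_per_block](...); needs a CUDA GPU.
    threads = 256
    add_kernel[(x.size + threads - 1) // threads, threads](x, y, out)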


I don't think Nvidia would acquire Mojo when the Triton compiler is open source, optimized for Nvidia hardware, and considered an industry standard.
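
For context, a Triton kernel is just decorated Python; here's a standard vector-add sketch (names are illustrative):

    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
        # Each program instance handles one BLOCK-wide chunk of the vectors.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK + tl.arange(0, BLOCK)
        mask = offsets < n  # guard the ragged tail
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

Launching is one line on CUDA torch tensors, e.g. add_kernel[(triton.cdiv(n, 1024),)](x, y, out, n, BLOCK=1024).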

Nobody is writing MLIR by hand; what are you on about? There are so many MLIR frontends.

How does Mojo with MAX optimize the process?

What about a forty-foot pole? Would it be viable?


