Hacker News | khurdula's comments

"we hope to open-source future versions of the model."

Love to see it. Cheers!


We define determinism as the model behaving predictably while also producing useful supporting metadata, such as confidence scores from specialized DNNs/CNNs, rather than text tokens generated as "scores".

So for the same kind of task, you can expect the same kind of output every time, without structured output randomly breaking or having to constantly tweak generation hyperparameters.
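The distinction can be sketched in Python (hypothetical names, not any vendor's actual API): a deterministic pipeline returns a fixed schema with a confidence value computed from the input itself, so repeated calls never disagree, instead of an LLM sampling "score: 0.9" as free text.

```python
import hashlib

def classify(text: str) -> dict:
    """Toy deterministic classifier: same input -> same structured output.

    Stands in for a specialized DNN/CNN head that emits a real
    confidence score, instead of an LLM generating a "score" as text.
    """
    # Derive a stable pseudo-confidence from the input itself, so
    # repeated calls always agree (no sampling temperature involved).
    h = int(hashlib.sha256(text.encode()).hexdigest(), 16)
    confidence = 0.5 + (h % 50) / 100  # stable value in [0.5, 0.99]
    label = "invoice" if "total" in text.lower() else "other"
    return {"label": label, "confidence": round(confidence, 2)}

a = classify("Total: $42.00")
b = classify("Total: $42.00")
assert a == b  # deterministic: identical structured output every call
```

The point of the sketch is only the contract: the schema and the values are a pure function of the input, which is what lets downstream parsers rely on them.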


Bruh, if it were priced at like $2,499 it would make sense, but this is just too much.


Damn, just visiting this site makes me want to reinstall Minecraft haha.


What if I said, we outperform them? Check this out: https://jigsawstack.com/blog/openai-audio-stt-vs-jigsawstack...


Are we supposed to use AMD GPUs for this to work, or does it work on any GPU?


> This project provides a Docker-based inference engine for running Large Language Models (LLMs) on AMD GPUs.

First sentence of the README in the repo. Was it somehow unclear?

