
Is that impressive? I was thinking 100 tok/s on an H100 is really slow, considering LMDeploy claims 2000+ on an A100 with a large batch size.


We get 100 tokens a second with batch size 1. Those 2000+ figures are for large batches.


Ah, that's fair, and faster than any of the LMDeploy stats for batch size 1; nice work!

Using an H100 for inference, especially without batching, sounds awfully expensive. Is cost much of a concern for you right now?


I don't think they're saying they're running batch size 1, just setting expectations for user-facing performance.


Yeah, and this is basically what I was asking.

100 tokens/s on the user's end, on a host that is batching requests, is very impressive.


I think they _are_ saying batch size 1, given that rushingcreek is OP.


Yes they are saying batch size 1 for the benchmarks, but they aren't doing batch size 1 in prod (obviously).


I don't think that is obvious. If your use case demands lowest latency at any cost, you might run batch size 1. I believe replit's new code model (announced about a month ago) runs at batch 1 in prod, for example, because code completions have to feel really fast to be useful.

With TensorRT-LLM + in-flight batching you can oversubscribe that one batch slot by beginning to process request N+1 while finishing request N, which can help a lot at scale.
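The idea behind in-flight (continuous) batching can be sketched in a few lines. This is a toy simulation of the scheduling policy, not TensorRT-LLM's actual API: new requests join the batch between decode steps as soon as a slot frees up, instead of waiting for the whole batch to drain. The function name, slot count, and request format are all made up for illustration.

```python
from collections import deque

def continuous_batching(requests, max_batch=4):
    """Simulate in-flight batching.
    requests: list of (request_id, tokens_to_generate).
    Returns request ids in the order they finish."""
    waiting = deque(requests)
    active = {}            # request_id -> tokens still to generate
    finished = []
    while waiting or active:
        # Admit new requests into free batch slots before the next decode step,
        # rather than waiting for the whole batch to complete.
        while waiting and len(active) < max_batch:
            rid, n = waiting.popleft()
            active[rid] = n
        # One decode step: every active request emits one token.
        for rid in list(active):
            active[rid] -= 1
            if active[rid] == 0:
                del active[rid]
                finished.append(rid)
    return finished

print(continuous_batching([("a", 2), ("b", 1), ("c", 3)], max_batch=2))
```

Note that "c" gets admitted as soon as "b" finishes, mid-way through "a", which is exactly the slot oversubscription described above.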


I'm not sure about TensorRT, but in llama.cpp there are separate kernels optimized for batched versus single-sequence inference. It makes a substantial difference.

I suppose one could get decent utilization by prompt processing one user while generating tokens for another.
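That interleaving idea can be sketched as a simple schedule: alternate one decode step (generating a token for user A) with one prefill chunk (processing user B's prompt). This is a hypothetical illustration of the scheduling pattern only; real implementations fuse these into the same forward pass, and the step granularity here is invented.

```python
def interleave(decode_tokens_left, prefill_chunks_left):
    """Alternate decode steps for one user with prefill chunks for another.
    Returns the resulting schedule as a list of step labels."""
    schedule = []
    while decode_tokens_left or prefill_chunks_left:
        if decode_tokens_left:
            schedule.append("decode")   # emit one token for user A
            decode_tokens_left -= 1
        if prefill_chunks_left:
            schedule.append("prefill")  # process one prompt chunk for user B
            prefill_chunks_left -= 1
    return schedule

print(interleave(2, 3))
```

Neither user's work stalls completely: decode steps keep coming while the other user's prompt is being chewed through.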


Without batching, I was actually thinking that's kind of modest.

ExllamaV2 will get 48 tokens/s on a 4090, which is much slower/cheaper than an H100:

https://github.com/turboderp/exllamav2#performance

I didn't test codellama, but the 3090 TI figures for other sizes are in the ballpark of my generation speed on a 3090.

100 tokens/s batched throughput (for each individual user) is much harder.
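The back-of-envelope arithmetic shows why: per-user speed multiplied by concurrency is what the GPU must actually sustain. The batch size here is an assumed example figure, chosen only to connect to the 2000+ tok/s aggregate numbers mentioned upthread.

```python
# Illustrative numbers: per-user speed vs. aggregate GPU throughput.
per_user_toks = 100        # tokens/s each individual user sees
batch_size = 20            # concurrent requests (assumed for illustration)
aggregate = per_user_toks * batch_size
print(aggregate)           # total tokens/s the GPU must sustain
```

So delivering 100 tok/s to every user in a batch of 20 requires the same aggregate throughput as those large-batch benchmark figures.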





