genpfault's comments | Hacker News

llama.cpp (b8642) auto-fits ~200k context on this 24GB RX 7900 XTX & it shows a solid 100+ tok/s ("S_TG t/s") on the first 32k of it, nice!

    ./llama-batched-bench -hf unsloth/gemma-4-26B-A4B-it-GGUF:UD-Q4_K_XL \
    -npp 1000,2000,4000,8000,16000,32000,64000,96000,128000 -ntg 128 -npl 1 -c 0
    |    PP |     TG |    B |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |      T s |    S t/s |
    |-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
    |  1000 |    128 |    1 |   1128 |    0.416 |  2404.87 |    1.064 |   120.29 |    1.480 |   762.20 |
    |  2000 |    128 |    1 |   2128 |    0.755 |  2649.86 |    1.075 |   119.04 |    1.830 |  1162.83 |
    |  4000 |    128 |    1 |   4128 |    1.501 |  2665.72 |    1.093 |   117.08 |    2.594 |  1591.49 |
    |  8000 |    128 |    1 |   8128 |    3.142 |  2545.85 |    1.114 |   114.87 |    4.257 |  1909.47 |
    | 16000 |    128 |    1 |  16128 |    6.908 |  2316.00 |    1.189 |   107.65 |    8.097 |  1991.73 |
    | 32000 |    128 |    1 |  32128 |   16.382 |  1953.31 |    1.278 |   100.12 |   17.661 |  1819.16 |
    | 64000 |    128 |    1 |  64128 |   43.427 |  1473.74 |    1.453 |    88.12 |   44.879 |  1428.89 |
    | 96000 |    128 |    1 |  96128 |   82.227 |  1167.50 |    1.623 |    78.86 |   83.850 |  1146.42 |
    |128000 |    128 |    1 | 128128 |  133.237 |   960.69 |    1.797 |    71.25 |  135.034 |   948.86 |

~50 tok/s on an M1 Max 64GB

Oh nice that's pretty good!

Doesn't seem to serve rendered samples so you have to set "browser.display.use_document_fonts" to "1" to see anything useful.

I think it also requires internet access, so you have to enable internet.

Which is the default, so 99.9% of Firefox users (and 99.99% of all users) will not have this issue.

600 GB/s of memory bandwidth isn't anything to sneeze at.

~$1000 for the Pro B70, if Microcenter is to be believed:

https://www.microcenter.com/product/709007/intel-arc-pro-b70...

https://www.microcenter.com/product/708790/asrock-intel-arc-...


Recent kernels have SR-IOV support for these chips too. B&H has them listed for $950.

https://www.bhphotovideo.com/c/product/1959142-REG/intel_33p...

When 32GB NVIDIA cards seem to start at around $4000, that's a big enough gap to be motivating for a bunch of applications.


I'm probably going to snag one of the Intel cards just for the SR-IOV and use it with VMs.

I tried to use SR-IOV to virtualize Mellanox NICs with VLANs on Red Hat Linux. Long story short, it did not work. Per Nvidia, the OS also has to run Open vSwitch. This work was on an already complex setup in finance ... so adding Open vSwitch was considered too much additional complexity. That requirement is not something I had run across in the docs.

Anybody know better?


The situation in networking is a lot different than graphics. I don't know much other than that it depends on what specific protocol, card, firmware, and network topology you're using and there's not really generic advice. If the question is setting up Ethernet switching inside the card so VFs can talk to the network, then I think the Linux switchdev tools can configure that on their own without Open vSwitch but you probably need to find someone who understands your specific type of deployment for better advice.
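
If it helps, a rough sketch of the switchdev route (assuming a Mellanox/mlx5-class NIC; the interface name and PCI address here are placeholders):

    # create VFs on the physical function
    echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs
    # flip the NIC's embedded switch from legacy to switchdev mode
    devlink dev eswitch set pci/0000:03:00.0 mode switchdev
    # the VF representor netdevs that then appear can be wired up with
    # plain ip/bridge/tc tooling instead of Open vSwitch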

Depending on what you're doing, AMD's support for VirtIO Native Context might be a useful alternative (I think it gives less isolation, which could be good or bad depending on the use case).

I tend to agree that VRAM size and bandwidth are the core thing, but this B70 Pro allegedly has 387 INT8 TOPS vs. the 5090's 3400 INT8 TOPS, and 600 GB/s compares against 1792 GB/s. I'm delighted to see an option at a quarter of the price! But man, a tenth the performance? https://www.techpowerup.com/347721/sparkle-announces-intel-a... https://www.tomshardware.com/pc-components/gpus/nvidia-annou...

838 seems to be the real INT8 TOPS number for the 5090; getting from there to 3400 takes a 2x speedup for sparsity (so skipping ops) and another 2x speedup for FP4 over INT8.
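
(Back-of-the-envelope, assuming those multipliers simply stack: 838 × 2 for sparsity × 2 for FP4 ≈ 3350, which is in the ballpark of the 3400 headline figure.)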

So it's closer to half the speed than a tenth. Intel also seems to be positioning this card against the RTX PRO 4000 Blackwell, not the 5090, and that one gets more like 300 INT8 TOPS. It also has less memory but at a slightly higher bandwidth. The 5090 is much faster and IIRC priced similarly to the PRO 4000, but is also decidedly a consumer product which, especially for Nvidia, comes with limitations (e.g. no server-friendly form factor cards available, and there are or used to be driver license restrictions that prevented using a consumer card in a data center setup).


Thank you for the correction. That seemed way too lopsided to be believed. This assessment balances the memory-to-TOPS ratio much, much more evenly, which is to be expected! I was low-key hoping someone would help me make sense of how wildly disparate the figures were, because I wasn't seeing it.

To throw one more card into the mix: the AMD R9700 is 378/766 INT8 TOPS dense/sparse, with 644 GB/s of bandwidth on 32GB of memory, for ~$1400. Intel is undercutting that nicely here.

You're right that for companies, the pro grade matters. For us mere mortals, much less so. Features like SR-IOV, however, are just fantastic to see! Good job, Intel. AMD has been trickling out such capabilities for a decade (cards fused for "MxGPU" capability), and it makes it such an easier buy to just offer it straight up across the models.


Especially for exploratory work, 1/10th the perf is fine. Intel isn't able to compete head to head with Nvidia (yet), but VRAM is capability while speed is capacity. There will be plenty of use cases where the value prop here makes sense.

It's more like a 70 class card with extra VRAM.

I think the B65 is priced at $650. Both are supported by llama.cpp, I believe. With that power draw you could run two of them.

Intel GPU prices have stayed fine, but I do wonder, if they prove viable for inference, whether they will wind up like Nvidia GPUs: severely overpriced.

I mean, it kind of is, considering that's comparable to a 5070, which has 672 GB/s? Benefit of NVIDIA being the only one using GDDR7 for now, I guess.

7800 XT has 624 GB/s as well, and can be found for $400 used. 16 GB of course.

I've heard ROCm is still a crapshoot though. Is that true?

If you stick with your OS/package manager-distributed version, installation isn't painful anymore (provided that version approximately overlaps with your generation of GPU). It's okay for inference, and okay for training if you don't stray too far beyond plain torch. If you want to run code from a paper or other more esoteric stuff you're still going to have a bad time.

I don't have an Intel dGPU, but I suspect the situation there is even worse. I mean you go to the torch homepage: https://pytorch.org/get-started/locally/ and Intel isn't even mentioned. (It's here though: https://docs.pytorch.org/docs/stable/notes/get_start_xpu.htm...)
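
For what it's worth, both vendors do ship wheels on separate package indexes; a rough sketch of what the installs look like (the ROCm index version here is just illustrative):

    # PyTorch built against ROCm (AMD)
    pip3 install torch --index-url https://download.pytorch.org/whl/rocm6.2
    # PyTorch built against XPU (Intel GPUs)
    pip3 install torch --index-url https://download.pytorch.org/whl/xpu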


The product would have been excellent in 2024, but now it's landfill filler. You can run some small models at pedestrian speed, the novelty wears off, and that's it.

Intel is not looking to the future. If they released an Arc Pro B70 with 512GB of base RAM, now that could be interesting.

32GB? Meh.


It's true that it's severely late and missed its market window, but 512GB just isn't possible.

Not to be confused with GNU parallel[1], written in Perl.

[1]: https://en.wikipedia.org/wiki/GNU_parallel



> A social networking system simulates a user using a language model trained using training data generated from user interactions performed by that user

Google People[1]?

[1]: https://qntm.org/perso


> "A system's purpose is what it does"

POSIWID: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...



Yes, I don't even know how I didn't know about this at the time of writing the article. But a must-read for sure!


Nice! Getting ~39 tok/s @ ~60% GPU util. (~170W out of 303W per nvtop).

System info:

    $ ./llama-server --version
    ggml_vulkan: Found 1 Vulkan devices:
    ggml_vulkan: 0 = Radeon RX 7900 XTX (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
    version: 7897 (3dd95914d)
    built with GNU 11.4.0 for Linux x86_64
llama.cpp command-line:

    $ ./llama-server --host 0.0.0.0 --port 2000 --no-warmup \
    -hf unsloth/Qwen3-Coder-Next-GGUF:UD-Q4_K_XL \
    --jinja --temp 1.0 --top-p 0.95 --min-p 0.01 --top-k 40 --fit on \
    --ctx-size 32768


Super cool! Also, with `--fit on` you technically don't need `--ctx-size 32768` anymore; llama-server will auto-determine the max context size!


Nifty, thanks for the heads-up!


What am I missing here? I thought this model needs 46GB of unified memory for a 4-bit quant. The Radeon RX 7900 XTX has 24GB of memory, right? Hoping to get some insight, thanks in advance!


MoEs can be efficiently split between dense weights (attention/KV/etc) and sparse (MoE) weights. By running the dense weights on the GPU and offloading the sparse weights to slower CPU RAM, you can still get surprisingly decent performance out of a lot of MoEs.

Not as good as running the entire thing on the GPU, of course.
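
As a rough sketch of what that can look like with llama.cpp (flag names per recent builds; the model is the one from the command above, and the offload pattern is just illustrative):

    # dense/attention weights stay on the GPU; MoE expert tensors go to CPU RAM
    ./llama-server -hf unsloth/Qwen3-Coder-Next-GGUF:UD-Q4_K_XL \
    --n-gpu-layers 99 \
    --override-tensor "exps=CPU" \
    --ctx-size 32768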


Thanks to you I decided to give it a go as well (didn't think I'd be able to run it on a 7900 XTX) and I must say it's awesome for a local model. More than capable for more straightforward stuff. It uses the full VRAM and about 60GB of RAM, but runs at about 10 tok/s and is *very* usable.


