
You absolutely cannot implement stream compaction “at the speed of native”: WebGPU is missing the wave/subgroup intrinsics and globally coherent memory necessary to do it as efficiently as possible.


You might not need direct access to wave/subgroup ops to implement efficient stream compaction. There's a great old Nvidia blog post on "warp-aggregated atomics"

https://developer.nvidia.com/blog/cuda-pro-tip-optimized-fil...

where they show that their compiler is sometimes able to automatically convert global atomic operations into the warp-local versions and achieve the same performance as manually written intrinsics. I was recently curious whether, 10 years later, these same optimizations had made it into other GPUs and platforms besides CUDA, so I put together a simple atomics benchmark in WebGPU.

https://github.com/PWhiddy/webgpu-atomics-benchmark

The results seem to indicate that these optimizations are accessible through WebGPU in Chrome on both macOS and Linux (with an Nvidia GPU). Note that I'm not directly testing stream compaction, just incrementing a single global atomic counter, so stream compaction itself would need to be tested to know for sure whether the optimization still holds there. If you see any issues with the benchmark or this reasoning, please let me know! I am hoping to solidify my knowledge in this area :)
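For anyone unfamiliar with the trick: below is a rough CUDA sketch (not taken from the linked benchmark, function name is made up) of the warp-aggregated pattern the NVIDIA post describes, i.e. the thing the compiler can sometimes generate for you automatically. One lane per warp does a single atomicAdd on behalf of all active lanes instead of one atomic per thread.

    // Hedged sketch of a warp-aggregated counter increment.
    // Returns a unique output slot for each active lane while
    // issuing only one global atomic per warp.
    __device__ int warp_aggregated_inc(int *counter) {
        unsigned mask = __activemask();        // lanes currently active
        int lane   = threadIdx.x % warpSize;
        int leader = __ffs(mask) - 1;          // lowest active lane leads
        int base = 0;
        if (lane == leader)
            base = atomicAdd(counter, __popc(mask));     // one atomic per warp
        base = __shfl_sync(mask, base, leader);           // broadcast base index
        return base + __popc(mask & ((1u << lane) - 1));  // rank among active lanes
    }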


Oh look, it's subgroup support landing last week: https://github.com/gfx-rs/wgpu/pull/5301


That's a wgpu-specific extension, not part of the actual WebGPU spec, so you can't use it on the web.

https://github.com/gpuweb/gpuweb/blob/main/proposals/subgrou...

There is a proposal for supporting subgroups in WebGPU proper but it's still in the draft stage.


I'm aware. It is an implementation of the linked proposal.

The `wgpu` implementation linked will make its way into Firefox eventually. Dawn will follow up with a similar one for Chrome.

I linked it to demonstrate that there are no technical hurdles; only approval remains.


Ok, but that's not what "landing" means.


Native extensions unusable on Web browsers don't count.


Then nothing involving WebGPU counts, since it's not implemented in browsers other than Chromium, and not on Linux even in Chromium…

WebGPU is brand new, and the paint is still wet. It doesn't make sense to dismiss things that haven't landed in browsers yet as “unusable on the web”.


There’s an advanced setting in Safari to enable it, but I can’t say how well it works. In this instance it doesn’t.


It doesn't work at all. Doesn't even exist in Safari anymore because they ditched the old implementation and are rewriting everything.


Multiple engineers are working on adding it back: https://github.com/WebKit/WebKit/pulls?q=is%3Apr+is%3Aclosed...


Welcome to Web standards, and Google's transformation of the Web into ChromeOS, with the help of many Web developers out there.

Doesn't change the fact that it is a Web standard, for Web browsers.


It is a WIP web standard, and the spec is still evolving (most things are stable at this point, but new features are still being added, like this one!).

And that's how the web works; it was the same for WebRTC, which spent 2-3 years in such a state, same for MSE, etc.


I think compilers should be smart enough to substitute group-shared atomics with horizontal ops. If they're not already doing it, they should be!

But anyway, the Histogram Pyramid is a more efficient way to implement a parallel scan. It essentially builds a series of 3D buffers, each having half the dimensions of the previous level, with each value containing the sum of the counts in the underlying cells, and the top cube being just a single value: the total count.
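(As a hedged illustration, here's what one build step could look like in CUDA, reduced to 1D with two children per cell and made-up names; the real version sums 8 sub-cubes per coarse cell:)

    // One pyramid build step: one thread per coarse cell sums its children.
    __global__ void build_level(const int *fine, int *coarse, int coarse_len) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < coarse_len)
            coarse[i] = fine[2 * i] + fine[2 * i + 1];  // sum the two children
    }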

Then, instead of doing a second pass where you figure out what index each thread is supposed to write to and writing it to a buffer, you simply drill down into those cubes and figure out the index at the invocation of the meshing part: look at your thread index (let's say 1616), look at the 8 smaller cubes (okay, cube 1 has 516 entries, so 1100 to go; cube 2 has 1031 entries, so 69 to go; cube 3 has 225 entries, so we go into cube 3), and recursively repeat until you find the index. Since all threads in a group tend to go into the same cubes, all threads tend to read the same bits of memory until they get down to the bottom levels, making it very GPU cache friendly (divergent reads kill GPGPU perf).
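(The drill-down would look roughly like this, again a hedged 1D sketch with two children per cell and invented names, whereas the real traversal inspects 8 sub-cubes per level:)

    // levels[0] is the single-value top, levels[depth-1] is the per-cell
    // count buffer. Returns the input cell whose outputs contain out_idx.
    __device__ int find_input_cell(int out_idx, const int *const *levels, int depth) {
        int cell = 0;
        for (int lvl = 1; lvl < depth; ++lvl) {
            cell *= 2;                      // descend to the first child
            int left = levels[lvl][cell];   // outputs covered by that child
            if (out_idx >= left) {          // our output lives in the sibling
                out_idx -= left;
                cell += 1;
            }
        }
        return cell;
    }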

Forgive me if I got the technical terminology wrong, I haven't actually worked on GPGPU in more than a decade, but it's fun to note that something I did circa 2011 as an undergrad (in which I implemented HistoPyramids from a 2007ish paper, and Marching Cubes, a 1980s algorithm) is suddenly relevant again. Everything old is new again.


You seem knowledgeable, and I’m possibly going back into a GPGPU project after many years out of the game, so: overall do you see a good future for filling these compute-related gaps in the WebGPU API? Really I’m wondering whether wgpu is an okay choice versus raw Vulkan for native GPGPU outside the browser.


The answer to that for any given feature is "can untrusted code be trusted with that?". Wave intrinsics are probably doable. Bindless maybe, but expect a bunch of bounds checking overhead. Pointers/BDA, absolutely not.

Native libraries like wgpu can do whatever they want in extensions, safety be damned, but you're stepping outside of the WebGPU spec in that case.


What's BDA in that context, please? I can only confidently assume it's not “battle damage assessment”.


Buffer Device Address, the Vulkan name for raw pointers in shaders.


Thx


Don't know about GPGPU, but I can give you a probably correct answer: compared to "native" APIs you trade features for compatibility. It's always going to lag behind Vulkan/DX/Metal. Are you OK with excluding platforms? Then go with Vulkan/Metal/DX. If not, I'd give wgpu a chance. wgpu is also higher-level than Vulkan, which is both a pro and a con.


Middleware: you get the portability, the latest features of the native APIs, and nice GPGPU tooling.


shhh... you might cause their reality distortion field to fail!


The demo doesn't work on mobile Chrome. Worse, the blog post crashes the embedded browser in the HN app. May I suggest just linking to the demo instead?


This is the eternal browser bros' attempt to make us think native has zero value now that we have a completely captured and bloated browser.

The browser is dead, the only thing you can use it for is filling out HTML forms and maybe some light inventory management.

The final app is C+Java, where you put the right stuff where it's needed. Just like the browser used to be before Oracle did its magic on the applet.


> The browser is dead,

Yea. Nah!

That obit is a bit premature


So you're telling me you write Java professionally?


Funnily enough, in a world with WASM, we might actually end up with Java in the backend and C in the frontend, rather than vice versa as would have been likelier in the 90s.


The irony of half the world, backed by VC money, trying to reinvent Erlang, Java, and .NET application servers while pretending to be innovative.


WASM is adding GC... reinventing the wheel of the applet, but without escaping the problem of JavaScript glue.

Go is just Java without the VM.

Rust is just a native compiler that creates slow programs and complains a lot.


> Rust is just a native compiler that creates slow programs and complains a lot.

Good morning Troll

I'll give you "complains a lot."


Corrective upvote from me - the comment is too funny


You had me all the way up until the rust bit.


It's pretty much the only professional language you can write.

If you consider respect and responsibility.



