All true, except the bit that wgpu-py compiles to WASM. It's all desktop.
In the plans that we do have for running in the browser, Fastplotlib, Pygfx and wgpu-py will still be Python, running on CPython compiled to WASM (via Pyodide). But instead of wgpu-py using cffi to call into a C library, it would make JS calls to the browser's WebGPU API.
To clarify this a bit: wgpu is a Rust implementation of WebGPU, just like Dawn is a C++ implementation of WebGPU (by Google). Both projects expose a C API following webgpu.h. wgpu-py should eventually be able to work with both. (Disclaimer: I'm the author of wgpu-py.)
I don't know Datashader that well, but from what I understand, it generates an image from a set of primitives (e.g. points), and then allows you to interactively inspect that image. It does not re-render the points on every frame like Fastplotlib/Pygfx do.
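If that understanding is right, the core idea can be sketched in plain Python. This is just the aggregation concept, not the actual Datashader API: bin all points into a fixed-size grid once, then interaction inspects the cached image rather than touching the raw points again.

```python
import random

def aggregate(points, width, height):
    """Bin points into a fixed-size grid once (Datashader-style aggregation)."""
    grid = [[0] * width for _ in range(height)]
    for x, y in points:  # x, y assumed normalized to [0, 1)
        grid[int(y * height)][int(x * width)] += 1
    return grid

# One expensive pass over the data...
points = [(random.random(), random.random()) for _ in range(100_000)]
image = aggregate(points, 64, 64)

# ...after which pan/zoom/hover only reads the cached 64x64 image,
# whereas Fastplotlib/Pygfx re-draw every point each frame on the GPU.
total = sum(map(sum, image))
```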
Not sure if this is what you're asking :) but the UI framework will somehow provide access to the OS-level surface object, so that the GPU API can render directly to the screen.
Apart from being based on wgpu, Pygfx also has a better design IMO. Korijn deserves the credit for this. It's inspired by ThreeJS, based on the idea to keep things modular.
We deliberately don't try to create an API that allows you to write visualizations with as few lines as possible. We focus on a flexible generic API instead, even if it's sometimes a bit verbose.
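To illustrate that ThreeJS-inspired modularity with toy classes (these are illustrative only, not the actual Pygfx API): "what to draw" and "how it looks" are kept as separate objects that you combine, which is more verbose than a one-liner plotting call but lets each piece be swapped independently.

```python
class Geometry:
    """What to draw: the raw vertex data."""
    def __init__(self, positions):
        self.positions = positions

class Material:
    """How it looks: appearance parameters, independent of the data."""
    def __init__(self, color):
        self.color = color

class Mesh:
    """A world object combining a geometry with a material."""
    def __init__(self, geometry, material):
        self.geometry = geometry
        self.material = material

# Verbose compared to e.g. plot(x, y, color="red"), but each part
# can be reused or replaced without touching the others:
triangle = Geometry([(0, 0), (1, 0), (0, 1)])
red = Material("#ff0000")
mesh = Mesh(triangle, red)
```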
I wish. The revenue from the ads goes to readthedocs, AFAIK nothing is paid to the maintainers of the project.
That said, readthedocs is a pretty nice platform to host your docs in a simple way. Plus users are not tracked. So personally I don't mind so much, but I'm going to have a look at the paid plan to remove ads for our users :)
Cheers! FWIW, I think you could easily build Sphinx docs using GHA and publish it on GitHub Pages and not pay a dime. Depending on your distaste for Microsoft’s monopoly, I suppose ;)
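For reference, such a workflow could look roughly like this. This is a sketch, assuming the Sphinx sources live in `docs/`; action versions, the Pages environment, and the build path may need adjusting for a real project:

```yaml
# .github/workflows/docs.yml -- sketch only, adjust paths/versions as needed
name: docs
on:
  push:
    branches: [main]
permissions:
  pages: write
  id-token: write
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install sphinx
      - run: sphinx-build docs docs/_build/html
      - uses: actions/upload-pages-artifact@v3
        with:
          path: docs/_build/html
  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/deploy-pages@v4
```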
- The shader language is SPIRV-compatible GLSL 4.x, which makes it fairly trivial to import existing GL shaders (one of my requirements was support for https://editor.isf.video shaders).
Cons:
- It was developed before Vulkan Dynamic Rendering was introduced, so the whole API is centered around the messy render-pass machinery, which, while powerful, is sometimes more tedious than necessary when your focus is desktop app development. However, Qt also has a huge focus on embedded, so it makes sense to keep the API this way.
- Most likely there are some unnecessary buffer copies here and there compared to doing things raw.
- It does not abstract many texture formats. For instance, there is still no support for YUV textures, e.g. VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM and friends :'(
Sorry, I'll try to be clearer. QRhi docs[1] say "The Qt Rendering Hardware Interface is an abstraction for hardware accelerated graphics APIs, such as, OpenGL, OpenGL ES, Direct3D, Metal, and Vulkan." And PySide6 includes a (Python) wrapper for QRhi[2]. Meanwhile, Pygfx builds on wgpu-py[3], which builds on wgpu[4], which is "a cross-platform, safe, pure-rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL".
So, from the standpoint of someone using PySide6, QRhi and pygfx seem to be alternative paths to doing GPU-enabled rendering, on the exact same range of GPU APIs.
Thus my question: How do they compare? How should I make an informed comparison between them?
> How should I make an informed comparison between them?
Pygfx provides higher-level rendering primitives. The more apples-to-apples comparison would be wgpu-py versus QRhi, both of which are middleware that abstract the underlying graphics API.
The natural question is: are you already using Qt? You say you are, so IMHO the pros and cons of the specific implementations don't matter unless you have some very specific exotic requirements. Stick with the solution that "just works" in the existing ecosystem and you can jump into implementing your specific business logic right away. The other option is getting lost in the weeds writing glue code to blit a wgpu-py render surface into your Qt GUI and debugging that code across multiple different render backends.
Yeah, sounds like QRhi is about at the level of WebGPU/wgpu-py.
It sounds to me that Qt created their own abstraction over Vulkan and co, because wgpu did not exist yet.
I can't really compare them from a technical POV, because I'd have to read more into QRhi. But QRhi is obviously tied to / geared towards Qt, which has advantages as well as disadvantages.
Wgpu is more geared towards the web, so it likely has more attention to e.g. safety. WebGPU is also based on a specification: there is a spec for the JS API as well as a spec for webgpu.h. There are actually two implementations (that I know of) of webgpu.h: wgpu-native (based on the wgpu Rust crate that powers WebGPU in Firefox) and Dawn (which powers WebGPU in Chrome).
This is indeed one of the major differences. Many of the problems that are plaguing Vispy are related to OpenGL. The use of wgpu solves many of them.
Also, wgpu forces you to prepare visualizations in pipeline objects, which at drawtime require just a few calls. In OpenGL there is far more work per object at drawtime. That overhead is particularly bad in Python, so this advantage of wgpu matters all the more there.
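As a rough model of why this matters (toy Python, not real GPU calls; the per-object call counts are made up for illustration), compare how many Python-level API calls each style makes per frame:

```python
# Toy model: count per-frame API calls for N drawn objects.
calls = {"n": 0}

def api_call():
    # Stand-in for a Python -> C binding call; each one has real overhead.
    calls["n"] += 1

N = 1000

# OpenGL-style: bind shader, buffers, textures, set uniforms, blend state...
# every piece of state is re-set per object, per frame.
calls["n"] = 0
for _ in range(N):
    for _ in range(6):  # ~6 state-setting calls per object (illustrative)
        api_call()
gl_calls = calls["n"]

# wgpu-style: all that state was validated and baked into a pipeline object
# ahead of time; drawtime is roughly set_pipeline + draw per object.
calls["n"] = 0
for _ in range(N):
    api_call()  # set_pipeline
    api_call()  # draw
wgpu_calls = calls["n"]
```

The exact numbers are invented, but the shape of the difference is the point: the per-frame Python overhead shrinks to a couple of calls per object because the expensive validation happened once, at pipeline-creation time.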