It's definitely still our intention to make it run in the browser. We're not actively working on that yet, but we've recently been able to remove some hurdles on that path, in particular the issue related to WebGPU being async.
Great to hear that - I was impressed by pygfx but my immediate thought was that in this age of near universal browser access, it's a shame there's no ability to interact from there!
I used to love making physics visualizations using VPython[1]! It's awesome to see similar tools pop up. I gave up on VPython after python3, since it was a pain to migrate.
- The shader language is SPIR-V-compatible GLSL 4.x, which makes it fairly trivial to import existing GL shaders (one of my requirements was support for https://editor.isf.video shaders).
Cons:
- It was developed before Vulkan dynamic rendering was introduced, so the whole API is centered around the messy renderpass concept, which, while powerful, is sometimes more tedious than necessary when your focus is desktop app development. However, Qt also has a huge focus on embedded, so it makes sense to keep the API this way.
- Most likely there are some unnecessary buffer copies here and there compared to doing things raw.
- It does not abstract many texture formats. For instance, there is still no support for YUV textures, e.g. VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM and friends :'(
Sorry, I'll try to be clearer. QRhi docs[1] say "The Qt Rendering Hardware Interface is an abstraction for hardware accelerated graphics APIs, such as, OpenGL, OpenGL ES, Direct3D, Metal, and Vulkan." And PySide6 includes a (Python) wrapper for QRhi[2]. Meanwhile, pygfx builds on wgpu-py[3], which builds on wgpu[4], which is "a cross-platform, safe, pure-rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL".
So, from the standpoint of someone using PySide6, QRhi and pygfx seem to be alternative paths to doing GPU-enabled rendering, on the exact same range of GPU APIs.
Thus my question: How do they compare? How should I make an informed comparison between them?
> How should I make an informed comparison between them?
Pygfx provides higher-level rendering primitives. The more apples-to-apples comparison would be wgpu-py versus QRhi, both of which are middleware that abstract the underlying graphics API.
The natural question is: are you already using Qt? You say you are, so IMHO the pros and cons of the specific implementations don't matter unless you have some very specific exotic requirements. Stick with the solution that "just works" in the existing ecosystem and you can jump into implementing your specific business logic right away. The other option is getting lost in the weeds writing glue code to blit a wgpu-py render surface into your Qt GUI and debugging that code across multiple render backends.
Yeah, sounds like QRhi is about at the level of WebGPU/wgpu-py.
It sounds to me like Qt created its own abstraction over Vulkan and co because wgpu did not exist yet.
I can't really compare them from a technical pov, because I'd have to read more into QRhi. But QRhi is obviously tied to / geared towards Qt, which has advantages as well as disadvantages.
Wgpu is more geared towards the web, so it likely pays more attention to e.g. safety. WebGPU is also based on a specification: there is a spec for the JS API as well as a spec for webgpu.h. There are actually two implementations (that I know of) of webgpu.h: wgpu-native (based on the wgpu that runs WebGPU in Firefox) and Dawn (which runs WebGPU in Chrome).
> wgpu is a cross-platform, safe, pure-rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL; and on top of WebGL2 and WebGPU on wasm.
> The API is based on the WebGPU standard. It serves as the core of the WebGPU integration in Firefox and Deno
From my experience with vispy, it is more limited than pygfx. I mean, you can always use gloo to get whatever you want, but the "built-ins" are much more limited than what pygfx seems to have. I really like vispy anyway; this seems like an evolution with some lessons learned from vispy.
This is indeed one of the major differences. Many of the problems that are plaguing Vispy are related to OpenGL. The use of wgpu solves many of them.
Also, wgpu forces you to prepare visualizations as pipeline objects, which at draw time require just a few calls. In OpenGL there is far more work per object being visualized at draw time. That per-object overhead is particularly costly in Python, so this advantage of wgpu matters even more there.
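To make the contrast concrete, here's a rough pseudocode sketch of the two draw loops (not any specific API, just the shape of the work):

```
# OpenGL-style: per object, per frame — many state-setting calls,
# each one a Python -> C transition
for obj in objects:
    use_program(obj.shader)
    bind_texture(obj.texture)
    set_uniform("transform", obj.matrix)
    enable_blending(obj.blend_mode)
    draw(obj.vertex_count)

# wgpu-style: state was validated and baked into a pipeline object
# up front; the frame loop is just bind-and-draw
for obj in objects:
    render_pass.set_pipeline(obj.pipeline)   # prepared once, reused every frame
    render_pass.set_bind_group(0, obj.bindings)
    render_pass.draw(obj.vertex_count)
```

With the first shape, the Python call overhead scales with the number of state changes per object; with the second, most of that cost was paid once at pipeline-creation time.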
Apart from being based on wgpu, Pygfx also has a better design IMO. Korijn deserves the credit for this. It's inspired by ThreeJS and based on the idea of keeping things modular.
We deliberately don't try to create an API that allows you to write visualizations with as few lines as possible. We focus on a flexible generic API instead, even if it's sometimes a bit verbose.
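For a feel of that API, a minimal scene in the style of pygfx's documented hello-world examples looks roughly like this (module and class names follow the pygfx docs but may shift between versions, and it needs a GPU plus a window backend to actually run):

```python
import pygfx as gfx
from wgpu.gui.auto import WgpuCanvas, run

canvas = WgpuCanvas()
renderer = gfx.renderers.WgpuRenderer(canvas)
scene = gfx.Scene()

# Objects combine a geometry with a material, ThreeJS-style
cube = gfx.Mesh(
    gfx.box_geometry(1, 1, 1),
    gfx.MeshPhongMaterial(color="#336699"),
)
scene.add(cube)
scene.add(gfx.AmbientLight())

camera = gfx.PerspectiveCamera(70, 16 / 9)
camera.local.z = 4

canvas.request_draw(lambda: renderer.render(scene, camera))
run()
```

It's a few more lines than a one-call plotting API, but every piece (canvas, renderer, scene, camera) is a separate object you can swap or extend.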
For years, we have looked for something solid to implement 3D colour science visualisation. We used Vispy, but encountered some issues when interacting with the scenegraph, then had a quick stint with Three.js, which required doing dirty things to pass Python data to Javascript, and finally, Pygfx is the one that enabled us to do what we wanted: https://github.com/colour-science/colour-visuals
If someone is looking for a renderer that also has tools for game development in Python, Panda3D is another good choice. It has a task and event system along with multiplayer and physics support.
Question slightly related to this topic: how do native (e.g. Qt, GTK, etc.) desktop applications usually embed 3D views? Say for example, a desktop application for visualizing .obj files. Or something like AutoCAD, maybe (though I’m not sure which UI framework it uses).
Not sure if this is what you're asking :) but the UI framework will somehow provide access to the OS-level surface object, so that the GPU API can render directly to the screen.
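To sketch the pattern (hypothetical function names; in Qt the native handle would come from e.g. QWindow.winId()):

```
# 1. The UI toolkit owns the window and exposes a native handle
handle = widget.native_window_handle()   # HWND / NSView / X11 window id

# 2. The graphics API wraps that handle in a renderable surface
surface = gpu_api.create_surface_from_native_handle(handle)

# 3. The app renders into the surface each frame; the toolkit
#    composites it into the widget tree like any other child window
gpu_api.render(scene, target=surface)
```

The toolkit stays in charge of layout and input, while the GPU API draws directly into that one rectangle of the screen.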
Let’s put it this way: I would consider myself eligible to be irked by ads on Read the Docs pages if I paid whoever maintains the project.
If they bother you enough, absolutely no one's going to frown if you estimate how much they would be making on ads from yearly traffic volume, email the maintainer, and offer to pay that in return for turning off the ads for a year.
(Some people might frown if you just block the ads, since after all it is robbing a fellow open-source dev of some income.)
There are two points I am curious about:
— I would like to know if RTD forces the ads. Considering they have a business tier, it would be funny if they had to finance OSS project hosting from ads.
— EthicalAds started as an ad platform for developers, but apparently is now an “AI ad network”. I wonder if Python OSS project owners know about the 180 degree turn that’s happening there…
I wish. The revenue from the ads goes to readthedocs; AFAIK nothing is paid to the maintainers of the project.
That said, readthedocs is a pretty nice platform to host your docs in a simple way. Plus users are not tracked. So personally I don't mind so much, but I'm going to have a look at the paid plan to remove ads for our users :)
Cheers! FWIW, I think you could easily build Sphinx docs using GHA and publish them on GitHub Pages without paying a dime. Depending on your distaste for Microsoft's monopoly, I suppose ;)
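For reference, a minimal workflow along those lines might look like this (file paths and pinned action versions are illustrative; adjust to your repo layout):

```yaml
# .github/workflows/docs.yml — build Sphinx docs, publish to GitHub Pages
name: docs
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install sphinx
      - run: sphinx-build docs docs/_build/html
      - uses: actions/upload-pages-artifact@v3
        with:
          path: docs/_build/html
      - uses: actions/deploy-pages@v4
```

You'd also need to set the repo's Pages source to "GitHub Actions" in the settings.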
We used the free tier for a couple of years before a user mentioned the ads. We had no idea our docs were surrounded by ads as no one on the team had ever tried it without ublock origin... We upgraded to the paid plan after that :)
I can't say you're the only one, but I've certainly never bristled over this before. Or even spared it any thought. And now that I am thinking about it, I find it quite helpful actually, not bristle-worthy at all. I still have no idea how to pronounce GIF or several other acronyms that I commonly use.
You're angry because the author tried to make the library searchable by removing a few letters..?
And they didn't even come up with the gfx shorthand for graphics; it's admittedly an old one and barely seen nowadays... but it's always been the sister to sfx/sound effects.
Whether you are a user bristling at a library author telling you how to pronounce the name of the project, or library author bristling at people mispronouncing the name of the project, in OSS world we all have our things to bristle about but differ in our abilities to influence them.
I feel inclined (especially because they're trying to tell me otherwise) to pronounce it in a rather different way which I shall not make explicit here beyond saying that it splits as pyg/fx rather than as py/gfx.
I think that it is pretty important to both be prescriptive about the pronunciation of abbreviations you create, and to explain them if they are non-obvious.
Way back when I first read about nginx, I had absolutely no way to know that folks usually pronounced it "engine-X". Led to an embarrassing conversation where I and a coworker were completely at cross-purposes to one another.
Obviously there are a bunch of abbreviations with disputed pronunciations (gif/jif, SQL/sequel, etc), and since the creators weren't prescriptive about them, we're all free to argue about them for the rest of time...
Important because precise communication matters in engineering. I don't mean "embarrassing" in the sense of an in-group; I mean it in the sense that misunderstandings cause engineering mixups that are trivially avoided by clear communication.
I mean that seems a bit uncharitable. I don’t think the author communicating their intent behind the name is aggravating or manipulative. It’s simply an explanation. Shouldn’t be worth more than an ‘ah, I see what they’re going for’ and move on.
It’s a sign, not a cop. It doesn’t seem like any undue pressure to control people, or to prevent the reader from doing what they will with it.
One wonders if the IPA pronunciations on Wikipedia are similarly bristle-worthy.
There are some tickets about it: https://github.com/pygfx/pygfx/issues/650 https://github.com/pygfx/wgpu-py/issues/407