Rust does not seem well suited to GPU programming. Current shader languages could certainly be improved, and they are probably only C-like for ergonomic reasons.
What's the benefit here? The only one I can see is if you don't want to learn a shading language/compute platform and you are already writing in Rust and you want your codebase to be in a single language.
It's cool that this exists, but it really is oversold, and it's a bit off-putting to call it "the future" of GPU programming.
It's really a question of what kind of GPU programming we're talking about.
Roughly, there are complex GPGPU programs that no longer have anything to do with graphics, graphics programming proper (shaders), and using graphics programming to do GPGPU.
In the latter two cases you might want a shader language, but in the first case you might want something more "general purpose".
And tracking down wrong results caused by soundness issues or the like is still a gigantic pain in GPGPU.
And the absence of code-level abstraction mechanisms is a problem with many shader languages, too.
Still, I agree that I don't expect Rust to succeed there, but it would be nice, and it has the potential to find at least some niche success.
Whoa, I was checking out the Rust GPU examples this morning, but didn't end up using them.
I needed a way to do fast bicubic resizing of images on an embedded Nvidia GPU.
I was looking at both LLVM PTX and Vulkan SPIR-V as ways to reduce our dependency on Nvidia hardware. We ended up using NPP with autogenerated C++ bindings.
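(If one wanted the same approach from Rust, a bindgen build script is one route. This is a hypothetical sketch; the header and library names are placeholders, not anything from our project:)

```rust
// build.rs — sketch of autogenerating FFI bindings to a C/C++ header.
// "wrapper.h" and the linked library name are placeholders.
fn main() {
    println!("cargo:rustc-link-lib=nppig"); // assumed NPP geometry lib
    let out = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap());
    bindgen::Builder::default()
        .header("wrapper.h")
        .generate()
        .expect("failed to generate bindings")
        .write_to_file(out.join("bindings.rs"))
        .expect("failed to write bindings");
}
```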
The Rust GPU foundation is a great idea. Lots of interesting possibilities for Rust devs.
Rust's patterns, because of its advanced type checking (part of memory safety), make a lot of functional tasks faster for embedded devs. It's also a dream to cross-compile and deploy with (cargo), several times faster and easier than C++.
You need fewer tests and less (often-neglected) runtime error-handling code because of the contract with the compiler.
I also love how I can write crates for Python using pyo3 that the data science team can use.
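A minimal sketch of that workflow (the function and module names here are made up for illustration):

```rust
// Hypothetical pyo3 extension module; build with maturin, then
// `import fastmath` from Python. Uses pyo3's Bound API (0.21+).
use pyo3::prelude::*;

/// Plain Rust function exposed to Python.
#[pyfunction]
fn sum_of_squares(values: Vec<f64>) -> f64 {
    values.iter().map(|v| v * v).sum()
}

/// The function name becomes the importable Python module name.
#[pymodule]
fn fastmath(m: &Bound<'_, PyModule>) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(sum_of_squares, m)?)?;
    Ok(())
}
```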
From a PR standpoint, this page encourages the viewpoint that Rust fans are too often Rust fanatics.
As for ultimate usefulness, I am not really convinced. The big selling point of Rust is no memory leaks, no use-after-free, and so on. These issues do not exist in shaders; one cannot really allocate memory in shaders, so that selling point of Rust seems, well, pointless. I would also never put bounds checking into a shader; that harms performance a great deal. I guess if you like Rust's syntax this is useful, though patterns that are natural in typical Rust would likely be far from ideal in GPU code. This happens with current shading languages too, but I strongly suspect Rust makes it worse.
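For what it's worth, the bounds-checking trade-off maps to an explicit choice in Rust; a minimal sketch (illustrative, not from any shader API):

```rust
// `data[i]` inserts a bounds check (a compare-and-branch per access),
// which is the per-access cost the comment objects to in hot GPU code.
// The unchecked form opts out, at the price of an `unsafe` block.
fn gather(data: &[f32], idx: &[usize]) -> f32 {
    let mut acc = 0.0;
    for &i in idx {
        acc += data[i]; // checked: panics if i >= data.len()
        // acc += unsafe { *data.get_unchecked(i) }; // unchecked variant
    }
    acc
}
```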
As a point against usefulness: it is somewhat useless on Apple platforms, since one will need to translate into Metal (and yes, there are tools for that), but that drops many capabilities when targeting Apple devices. Apple GPUs, because of their architecture, can do things other GPUs cannot (and vice versa), and making this useful to me (or to any performance-minded project that targets Apple GPUs) would require shoehorning those features in somehow.
But I guess it is nice for those who like Rust's syntax (I confess I do not) and are targeting Vulkan, as it gives one another shading language to write in; which I guess means a point for SPIR-V (and the interface-language idea in general).
There are other big selling points of Rust, compared to GPU-specific languages.
One is an expressive type system with algebraic types and generics. It's a lot easier to produce versions of code optimized for a certain use case. Only CUDA comes close.
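A minimal sketch of what that enables (the trait and filter names are invented for illustration, not from any real crate):

```rust
// One generic body; the compiler monomorphizes a specialized, fully
// inlined version per filter, with no runtime branching on filter kind.
trait Filter {
    fn weight(t: f32) -> f32;
}

struct Nearest;
impl Filter for Nearest {
    fn weight(_t: f32) -> f32 { 1.0 }
}

struct Triangle;
impl Filter for Triangle {
    fn weight(t: f32) -> f32 { (1.0 - t.abs()).max(0.0) }
}

// `sample::<Nearest>` and `sample::<Triangle>` compile to separate,
// specialized functions from this single definition.
fn sample<F: Filter>(taps: &[f32], t: f32) -> f32 {
    taps.iter()
        .enumerate()
        .map(|(i, &v)| v * F::weight(t - i as f32))
        .sum()
}
```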
This brings us to the ecosystem. Outside of CUDA, how many libraries exist that can easily be used from a shader context?
Rustc has great error messages. I don't know about the others, but errors in OpenGL shaders make me want to flip tables.
And being able to run the same code on the CPU is valuable as well, leading to simpler testing and debugging.
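Continuing the hypothetical sketch from above, the kernel body is plain Rust, so it can be exercised by an ordinary host-side unit test (this assumes the illustrative `sample`/`Triangle` definitions, which are not from the thread):

```rust
// Hypothetical host-side test for the `sample` sketch above: the same
// function that would be compiled for the GPU runs unmodified on the CPU.
#[cfg(test)]
mod tests {
    use super::{sample, Triangle};

    #[test]
    fn triangle_filter_picks_center_tap() {
        let taps = [0.25, 0.5, 0.25];
        // At t = 1.0 the triangle weights are [0, 1, 0], so the result
        // is exactly the center tap.
        assert_eq!(sample::<Triangle>(&taps, 1.0), 0.5);
    }
}
```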
So unless you like CUDA and its problems, this is rather attractive.
A modern language with generics, a quality IDE experience, and the ability to write truly composable code is what interests me in the project. Modern shader codebases are truly enormous, but until recently the languages lacked any tools for composition beyond the function. HLSL got templates in 2021, and GLSL is a dead end with no further development.
No Metal support is IMO no problem. I and most of the rest of the games industry couldn’t care less about MSL. Nobody uses it. I’ve seen numbers from Apple that say upwards of 80% of developers they asked target Metal via SPIR-V Cross and HLSL from DXC.
I love what this project wants to do: use a "real" language to get a better developer experience while writing shaders. However, IMO Slang will eat Rust-GPU's lunch. You are right, borrow checking is not useful for shaders, and Slang delivers most of the same language improvements (modules, generics) while being able to target many more platforms. And Slang is production-ready, unlike rust-gpu.
I'd still love to see what rust-gpu could become though, especially with DX12 moving to SPIR-V in the future. It'd be nice to share Rust code between my engine code and shader code.
> No Metal support is IMO no problem. I and most of the rest of the games industry couldn’t care less about MSL. Nobody uses it. I’ve seen numbers from Apple that say upwards of 80% of developers they asked target Metal via SPIR-V Cross and HLSL from DXC.
That is a real, real shame with respect to performance. A number of algorithms can be done so much faster with MSL (or really, because of tile-based rendering), with nearly no main-memory bandwidth consumption. At least there is GL_EXT_shader_pixel_local_storage for GL. For Vulkan it appears there is VK_EXT_shader_tile_image, but I don't know how widespread it is on Android.
There are other nice features that are Metal-ish-only as well (tile shaders are a big deal), but memoryless textures typically top my list.
A language that is slow to compile can't be the future of anything.
Waiting for your code to compile just to see how a new color looks, or a new font, or a new title, or how a new game-player speed feels, is BAD, very BAD.
You don't have to take my word for it.
You can, however, trust facts.
Here's a real-world example anyone can verify at home.
Clone this popular open source game written in Rust:
Iterative C++ compile times can be very fast if you know what you are doing; 17 seconds is very long unless you have link-time optimization on (which makes no sense for an iterative build).
As far as I can tell they didn't change a shader at all; they linked to code that looks like it runs on the CPU as part of the main binary, not as a shader.
Yes, on Linux there is a filesystem type called tmpfs which is mapped to memory (most of the time, IIRC).
> Filesystem Size Used Avail Use% Mounted on
> tmpfs 31G 22M 31G 1% /tmp
Try running `cargo build --timings`; it creates a report about build times.
My guess would be storage speed; I would not expect memory speed to be that much of an issue.
edit:
> Windows compilation 8min took less time than Linux 45min?
I don't think I understood this correctly, but in case you are asking for clarification about my compile times: I got around 3m18s on Linux, and over 8 minutes on WSL.
> Yes. I was confused by its output, i.e. 2541.91s
That's the output of the shell command `time` (I'm using zsh, so the output differs from other shells), e.g. `time cargo build`.
`system` is the CPU time spent calling kernel functions,
and `user` is the CPU time spent outside kernel functions.
The time is measured per core, so if a program runs for 2 seconds with 16 threads, the user time would be around 32 seconds.
Notice there is a CPU metric (1341% cpu). If you compute:
(user + system) / cpu * 100 / 60 = ~3.29 minutes = 3m17s
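To see the per-core accounting concretely, here is a minimal, self-contained sketch (not from the thread) that burns ~2 seconds of wall time on 16 threads; running it under `time` should show roughly 32 seconds of `user` time:

```rust
// Spin 16 threads for ~2 s of wall-clock time each; `time ./demo` should
// report user time near 16 * 2 = 32 s even though wall time stays ~2 s.
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let handles: Vec<_> = (0..16)
        .map(|_| {
            thread::spawn(|| {
                let start = Instant::now();
                let mut x = 0u64;
                // Busy-loop so CPU time is actually consumed, not slept away.
                while start.elapsed() < Duration::from_secs(2) {
                    x = std::hint::black_box(x.wrapping_add(1));
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```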