Compute Shader 101 [video] (youtube.com)
177 points by raphlinus on June 5, 2021 | 18 comments



This is a talk I've been working on for a while. It starts off motivating why you might want to write compute shaders (tl;dr you can exploit the impressive compute power of GPUs, portably), then explains the basics of how, including some sample code to help get people started.

Slides: https://docs.google.com/presentation/d/1dVSXORW6JurLUcx5UhE1...

Sample code: https://github.com/googlefonts/compute-shader-101

Feedback is welcome (please file issues against the open source repo), and AMA in this thread.
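
For anyone who wants a feel for the shape of the thing before watching: below is a rough sketch (not taken from the repo; the kernel and buffer layout are invented for illustration) of the kind of WGSL kernel the samples drive from Rust, embedded as a string the way wgpu's shader-module API expects.

    // Hypothetical minimal WGSL compute kernel, embedded as a Rust string
    // constant so it can be handed to wgpu's shader-module creation.
    // Attribute syntax follows current WGSL; older wgpu releases used
    // [[...]] instead of @.
    const DOUBLE_KERNEL: &str = r#"
        @group(0) @binding(0)
        var<storage, read_write> data: array<u32>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
            // One invocation per element; guard the tail of the buffer.
            if (id.x < arrayLength(&data)) {
                data[id.x] = data[id.x] * 2u;
            }
        }
    "#;

    fn main() {
        // The host side (instance, device, bind group, pipeline, dispatch)
        // is what the sample repo walks through; printing the source here
        // just keeps this sketch self-contained.
        println!("{}", DOUBLE_KERNEL);
    }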


This is fantastic. I worked on the NVIDIA CUDA compiler from 2008 to 2012, so I am familiar with GPU architecture, but not the current state of the art in programming tools. You did a great job providing continuity between what I know and what I need to know now. I am really excited about WebGPU and IREE and have been following these projects closely. Thank you!


I found this talk incredibly useful as a newbie, and I’m looking forward to playing with the sample code.

I’ve been in a “don’t understand the GPU compute landscape well enough to decide what to get started on” limbo for a while, and this talk was exactly what I needed. Thanks!


Is the audience mainly folks in the scientific computing or academic world (aka folks that use CUDA mainly)? I went into it as a graphics engineer and started to get that sense haha (good job putting it together though)


I'm actually not 100% sure what audience would find this most compelling, it's something of an experiment. I'm personally motivated by 2D vector graphics rendering, but that's a pretty small world, and I wouldn't be surprised if the main applications for compute shaders ended up being scientific and machine learning.


Compute shaders are used heavily in AAA games and the film industry.


Just watched it now, quite interesting how you presented everything.


I will watch the talk later, but I think it is pretty obvious why you would want to write compute shaders. :)

https://home.otoy.com/render/octane-render/


Really cool to see this being written with Rust/WGPU! I really hope it has a strong future ahead of it - being able to target multiple native graphics APIs with one codebase is very alluring.

A friend and I adapted a wgpu-rs "boids" example to render strange attractors with compute shaders, though we basically haven't applied _any_ polish to it yet:

https://github.com/bschwind/strange-attractors


I started going through Rust's wgpu tutorial last week.

Despite feeling overwhelmed by the boilerplate, it really is incredible what you can do: Vulkan, Metal, and DirectX compatibility, and you only need to write it once.
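
For concreteness, the "write it once" part looks roughly like this. This is a sketch assuming the wgpu and pollster crates; exact signatures vary between wgpu releases (newer ones build the Instance from an InstanceDescriptor), so treat it as the shape of the API rather than copy-paste.

    // Cross-backend setup: the same few lines run on Vulkan, Metal, or DX12,
    // with wgpu choosing the backend underneath.
    fn main() {
        pollster::block_on(async {
            let instance = wgpu::Instance::new(wgpu::Backends::all());
            let adapter = instance
                .request_adapter(&wgpu::RequestAdapterOptions::default())
                .await
                .expect("no compatible GPU adapter found");
            let (device, queue) = adapter
                .request_device(&wgpu::DeviceDescriptor::default(), None)
                .await
                .expect("failed to open device");
            println!("running on: {:?}", adapter.get_info().backend);
            // device and queue are what you then use to build buffers,
            // pipelines, and submit compute dispatches.
            let _ = (device, queue);
        });
    }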


Thanks for sharing this! I have not watched the full video yet but definitely will. Recently got interested in compute shaders thanks to this video: https://youtu.be/X-iSQQgOd1A?t=616 and can recommend it to those that want to see a bit of what is possible with compute shaders.


That's a great motivating example for compute shaders, and I'm sure very fun to play with. It should be reasonably straightforward to put the ant-like agent simulation logic on top of the infrastructure in my sample code (either wgpu or the lightweight abstraction layer), and I'd love to see that.
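
To make that concrete, the per-agent update could be a kernel along these lines. This is only a sketch with an invented Agent layout and step size (nothing here is from the sample repo or the video), again written as a WGSL string for wgpu; the sensing, steering, and trail deposit from the video would slot into the marked spot.

    // Hypothetical per-agent update kernel for an ant/slime-mold style
    // simulation: one invocation advances one agent.
    const AGENT_KERNEL: &str = r#"
        struct Agent {
            pos: vec2<f32>,
            angle: f32,
            _pad: f32,
        }

        @group(0) @binding(0)
        var<storage, read_write> agents: array<Agent>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
            if (id.x >= arrayLength(&agents)) {
                return;
            }
            var a = agents[id.x];
            // Step forward along the current heading; sensing, steering,
            // and the trail-map deposit would go here in a full version.
            a.pos = a.pos + vec2<f32>(cos(a.angle), sin(a.angle)) * 0.001;
            agents[id.x] = a;
        }
    "#;

    fn main() {
        println!("{}", AGENT_KERNEL);
    }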


Such an amazing video, though I will admit a lot of it went straight over my head. I recently stumbled onto demoscene culture and have been fascinated by it, especially the timed head-to-head competitions[1]. Shader programming boggles my mind; it still feels insanely complex, verbose, and hard to grok.

[1] https://m.youtube.com/watch?v=O-1zEo7DD8w&t=1723s


Love the video! Being totally unfamiliar with shaders: can they be used to improve the performance of things like file operations on multiple files? This week I was working on hashing files in parallel in Python, and although it's faster now than the single-threaded version, the project would hugely benefit from being able to work faster, since some of the work being done takes anywhere from thirty minutes to several hours.


Potentially, but it depends on the task. You need thousands of threads to get a real performance boost, and hashing a single file is an inherently sequential task, not really suited to the GPU (at least for hash functions in the Merkle-Damgård family like SHA; a Merkle-tree construction can be parallelized). In fact, I'm not sure why anyone would want to hash on a GPU, unless, say, they wanted to hash lots of short strings as some kind of science project or something.


The “short” could be a problem depending on how short we’re talking. I’m working on thousands of files that can sometimes be only a few kilobytes or less. I wouldn’t expect to use it to speed up the processing of a single file, but if it could be used to hash a thousand or multiple thousands of files at a time, it could offer one or more orders of magnitude of improvement for some use cases.


Yes, for thousands of files of a few kilobytes each, a GPU might give a good speedup. On a discrete card, a potential bottleneck is copying the files to device memory, but this could be a case where high-end integrated graphics performs well, as there's no need for staging buffers and the copy. But it's hard to know the real performance without trying it and measuring.
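
As a measuring stick before reaching for the GPU: a CPU version in Rust using the rayon and sha2 crates (my assumption, not anything from this thread) already has the parallelism structure being described, sequential within each file, independent across files.

    // Hash every file named on the command line, files in parallel,
    // each file sequentially. Assumes the rayon and sha2 crates.
    use rayon::prelude::*;
    use sha2::{Digest, Sha256};
    use std::{env, fs};

    fn main() {
        let paths: Vec<String> = env::args().skip(1).collect();
        let digests: Vec<(String, String)> = paths
            .par_iter() // one rayon task per file
            .filter_map(|path| {
                let bytes = fs::read(path).ok()?;
                let digest = Sha256::digest(&bytes);
                let hex: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
                Some((path.clone(), hex))
            })
            .collect();
        for (path, hex) in digests {
            println!("{}  {}", hex, path);
        }
    }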


I wrote a path tracer in a compute shader. It was quite fun and much faster than the CPU version. Not trivial though; even such simple things as generating noise take at least some thought.
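
On the noise point: one widely used building block is a small integer hash seeded per pixel and per sample, for example a PCG-style hash (as in the Jarzynski and Olano survey of hash functions for GPU rendering). Sketched in Rust below; since it is only 32-bit multiplies, adds, shifts, and xors, it ports to a shader almost verbatim.

    // PCG-style integer hash, a common basis for GPU-friendly pseudo-random
    // numbers (no state shared between threads).
    fn pcg_hash(input: u32) -> u32 {
        let state = input.wrapping_mul(747_796_405).wrapping_add(2_891_336_453);
        let word = ((state >> ((state >> 28) + 4)) ^ state).wrapping_mul(277_803_737);
        (word >> 22) ^ word
    }

    // Map the hash onto [0, 1] for use as a random sample.
    fn rand01(seed: u32) -> f32 {
        pcg_hash(seed) as f32 / u32::MAX as f32
    }

    fn main() {
        // Example: a deterministic pseudo-random value per pixel index.
        for pixel_index in 0..4u32 {
            println!("pixel {} -> {:.6}", pixel_index, rand01(pixel_index));
        }
    }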



