Also, in this case, talking about the graphics stack, it's not like OpenGL or DirectX are set up to be testable. You can't really "unit test" shader code; the best you can do is render it in some test scenes and screenshot the results, which ends up flaky & noisy due to valid-per-spec differences in GPU & driver behavior.


Every language has valid-per-spec differences. That's exactly why you test.

For sure OpenGL/DX requires more infrastructure to run unit tests than a generic block of C code. But it's absolutely possible to "unit test" shader code, with buffer read-back and/or vertex stream out, among other options. It's more that the game engines themselves aren't set up for unit tests, rather than the graphics stack.
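
To make that concrete, here's a rough sketch of the buffer read-back route, using the moderngl Python bindings purely for brevity (the shader, the names and the tolerance are made-up placeholders, and it assumes a driver exposing GL 4.3 compute):

  # Sketch only: unit-testing one shader function via an SSBO read-back.
  import struct
  import moderngl

  CS_SRC = """
  #version 430
  layout(local_size_x = 4) in;
  layout(std430, binding = 0) buffer In  { float xs[]; };
  layout(std430, binding = 1) buffer Out { float ys[]; };

  // The "unit" under test; in practice shared with production shaders.
  float saturate_scale(float x) { return clamp(x * 2.0, 0.0, 1.0); }

  void main() {
      uint i = gl_GlobalInvocationID.x;
      ys[i] = saturate_scale(xs[i]);
  }
  """

  def test_saturate_scale():
      ctx = moderngl.create_standalone_context(require=430)  # GL 4.3 for compute
      xs = [-1.0, 0.0, 0.25, 10.0]
      buf_in = ctx.buffer(struct.pack("4f", *xs))
      buf_out = ctx.buffer(reserve=buf_in.size)
      buf_in.bind_to_storage_buffer(0)
      buf_out.bind_to_storage_buffer(1)
      ctx.compute_shader(CS_SRC).run(group_x=1)
      ys = struct.unpack("4f", buf_out.read())
      for got, want in zip(ys, [0.0, 0.0, 0.5, 1.0]):
          assert abs(got - want) <= 1e-6  # bound the error, don't bit-compare

The function under test can be the exact GLSL you include in production shaders; only the harness around it is test-specific.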


> For sure OpenGL/DX requires more infrastructure to run unit tests than a generic block of C code. But it's absolutely possible to "unit test" shader code, with buffer read-back and/or vertex stream out, among other options.

Which is what I said: you can screenshot & compare. But it becomes a fuzzy compare due to acceptable precision differences.

And it ends up being more of an integration test than a unit test.
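
To be clear, by "fuzzy compare" I mean something along these lines (a rough numpy/Pillow sketch; the tolerances are made up):

  # Rough sketch of a tolerance-based screenshot compare.
  import numpy as np
  from PIL import Image

  def images_match(rendered_path, golden_path, channel_tol=2, max_bad_pixels=16):
      got = np.asarray(Image.open(rendered_path).convert("RGB"), dtype=np.int16)
      ref = np.asarray(Image.open(golden_path).convert("RGB"), dtype=np.int16)
      if got.shape != ref.shape:
          return False
      # A pixel is "bad" if any channel differs by more than channel_tol.
      bad = (np.abs(got - ref) > channel_tol).any(axis=-1)
      return int(bad.sum()) <= max_bad_pixels

Those thresholds are exactly where the flakiness comes from: too tight and valid drivers fail the test, too loose and real regressions slip through.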

> Every language has valid-per-spec differences.

They really don't, but that's not entirely what I'm talking about. I'm talking about valid hardware behavior differences, which don't broadly exist elsewhere. How a float behaves in Java is well-defined and never changes. How numbers behave in most languages is well-defined and does not vary.

GPU shaders are completely different. Numbers do not have consistent behavior across differing hardware & drivers. This situation is essentially unique. Even the things that are nominally allowed to vary in other languages (like the size of int in C & C++) end up not actually varying in practice, because too much code doesn't cope with it. Shaders play no such games.


> Which is what I said: you can screenshot & compare. But it becomes a fuzzy compare due to acceptable precision differences

You make it sound less rigorous than it can be. A readback doesn't need to be a "screenshot" and doesn't need to be of a full scene. A framebuffer can be as small as a single 1x1 value.

Regarding precision differences, it's not much different from testing floating-point math anywhere else. Shaders generally allow fast-math-style optimizations, but those can be disabled on at least some platforms [1][2]; otherwise you can take care with the floating-point math, or write tests that use only integer math.
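
For example, the 1x1 read-back version can look roughly like this (another moderngl sketch; the shader, helper names, and the 1e-3 bound are placeholders; the point is asserting an expected value within a bounded error rather than diffing screenshots):

  # Sketch: render one fragment into a 1x1 float target, read back, bound the error.
  import struct
  import moderngl

  VS = """
  #version 330
  in vec2 in_pos;
  void main() { gl_Position = vec4(in_pos, 0.0, 1.0); }
  """
  FS = """
  #version 330
  out vec4 color;
  // The function under test, normally shared with real shaders.
  float linear_to_srgb_approx(float c) { return pow(c, 1.0 / 2.2); }
  void main() { color = vec4(linear_to_srgb_approx(0.5), 0.0, 0.0, 1.0); }
  """

  def test_linear_to_srgb_approx():
      ctx = moderngl.create_standalone_context()
      fbo = ctx.simple_framebuffer((1, 1), components=4, dtype="f4")
      fbo.use()
      prog = ctx.program(vertex_shader=VS, fragment_shader=FS)
      # One oversized triangle covering the single pixel.
      vbo = ctx.buffer(struct.pack("6f", -1.0, -1.0, 3.0, -1.0, -1.0, 3.0))
      ctx.simple_vertex_array(prog, vbo, "in_pos").render()
      r = struct.unpack("4f", fbo.read(components=4, dtype="f4"))[0]
      assert abs(r - 0.5 ** (1.0 / 2.2)) < 1e-3  # bounded, not bit-exact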

> And it ends up being more of an integration test than a unit test.

Sure, if you just set up scenes, render, screenshot, and do a fuzzy compare, that looks more like an integration test, and I agree it's more common to see integration tests for renderers. It is a bit more involved in that you have to deal with uploads, command queues, and readbacks, but you really can set up the infrastructure to do proper unit tests. From there you can decide how to handle unit testing of flexible-precision code: toggle precision in the compilers, build your tests to properly bound the expected precision, or both.
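
The once-per-codebase part of that infrastructure is small; here's a hedged pytest-style sketch (moderngl again; run_compute and its assumption of local_size_x = 1 are my own inventions for illustration):

  # Sketch of shared GPU test plumbing: one context per test session,
  # plus an upload -> dispatch -> read-back helper every test reuses.
  import struct
  import moderngl
  import pytest

  @pytest.fixture(scope="session")
  def gl():
      ctx = moderngl.create_standalone_context(require=430)
      yield ctx
      ctx.release()

  def run_compute(ctx, source, in_floats):
      # Assumes the compute shader declares local_size_x = 1 and mirrors
      # the two std430 buffer bindings used here.
      buf_in = ctx.buffer(struct.pack(f"{len(in_floats)}f", *in_floats))
      buf_out = ctx.buffer(reserve=buf_in.size)
      buf_in.bind_to_storage_buffer(0)
      buf_out.bind_to_storage_buffer(1)
      ctx.compute_shader(source).run(group_x=len(in_floats))
      return struct.unpack(f"{len(in_floats)}f", buf_out.read())

From there each test is a few lines: feed known inputs, read back, and assert within a bound you've chosen deliberately (or exactly, if you've pinned the compiler's precision flags).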

> GPU shaders are completely different. Numbers do not have consistent behavior across differing hardware & drivers.

This is an outdated view and simply not true: every modern (PC-class, at least) GPU has IEEE 754 compliant floats. They have to, otherwise GPGPU wouldn't have taken off in scientific computing. The compiler defaults may just not be right.

[1] https://github.com/Microsoft/DirectXShaderCompiler/blob/mast...
[2] See: #pragma optionNV(fastmath off)



