> Most 3D applications render more than a single object. In WebGL, each of those objects requires a collection of state-changing calls before that object could be rendered. [...] All the pieces of state in the WebGL example are wrapped up into a single object in WebGPU, named a “pipeline state object.” Though validating state is expensive, with WebGPU it is done once when the pipeline is created, outside of the core rendering loop. As a result, we can avoid performing expensive state analysis inside the draw call. Also, setting an entire pipeline state is a single function call, reducing the amount of “chatting” between JavaScript and WebKit’s C++ browser engine.
> Resources have a similar story. Most rendering algorithms require a set of resources in order to draw a particular material. In WebGL, each resource would be bound one-by-one. However, in WebGPU, resources are batched up into “bind groups”. [... In both APIs] multiple objects are gathered up together and baked into a hardware-dependent format, which is when the browser performs validation. Being able to separate object validation from object use means the application author has more control over when expensive operations occur in the lifecycle of their application.
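Concretely, a minimal sketch of what that setup looks like with the WebGPU JS API (the `device`, `context`, `shaderModule`, `vertexBuffer`, `uniformBuffer`, `sampler`, and `texture` are assumed to be created elsewhere):

```js
// Done once, at startup: all render state is validated and baked here.
const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: {
    module: shaderModule,
    entryPoint: 'vs_main',
    buffers: [{
      arrayStride: 12,
      attributes: [{ shaderLocation: 0, offset: 0, format: 'float32x3' }],
    }],
  },
  fragment: {
    module: shaderModule,
    entryPoint: 'fs_main',
    targets: [{ format: 'bgra8unorm' }],
  },
  primitive: { topology: 'triangle-list' },
});

// Also once: the material's resources, batched into a single bind group.
const bindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [
    { binding: 0, resource: { buffer: uniformBuffer } },
    { binding: 1, resource: sampler },
    { binding: 2, resource: texture.createView() },
  ],
});

// Every frame: a couple of cheap calls plus the draw, no per-state validation.
function frame() {
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(),
      loadOp: 'clear',
      storeOp: 'store',
    }],
  });
  pass.setPipeline(pipeline);       // one call sets the whole pipeline state object
  pass.setBindGroup(0, bindGroup);  // one call binds the whole resource set
  pass.setVertexBuffer(0, vertexBuffer);
  pass.draw(3);                     // draw the (assumed) single triangle
  pass.end();
  device.queue.submit([encoder.finish()]);
  requestAnimationFrame(frame);
}
```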
The clear point in both of these comparisons is that the same operations must be done in both APIs, but the WebGPU version allows much of the work to be pre-computed, so the draw calls (where the performance bottleneck lies) carry as little overhead as possible.
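For contrast, here is roughly the per-draw state churn the WebGL path implies; this is also just a sketch, and `gl`, `program`, the buffer/texture objects, and the attribute/uniform locations are assumed to already exist:

```js
// WebGL: each piece of state is a separate call, and the driver has to
// re-validate the combined state when the draw is issued.
gl.useProgram(program);
gl.enable(gl.DEPTH_TEST);

gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.uniform1i(samplerLoc, 0);
gl.uniformMatrix4fv(mvpLoc, false, mvpMatrix);

gl.drawArrays(gl.TRIANGLES, 0, 3);
```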
That's neither clear nor unambiguous. The text I highlighted, including their summary of it, states that there is a reduction of code. Not only that, the "both APIs" part you're mentioning doesn't even pertain to the code, but rather to the execution of the pipeline.
My point is, and has been, that they should focus on the perf gains and reduce the misleading sentiment that it has simplified the API footprint.
It may not be less code to type, but it's much less code to run :)