WebGPU and WSL in Safari (webkit.org)
270 points by chmaynard on Sept 12, 2019 | 181 comments



As a cross-platform graphics developer who is currently knee-deep in papering over platform incompatibilities, what I mostly care about at this point is compatibility. I don't care about shader size or compilation time, as long as both are reasonable. I do care about having yet another shading language to cross-compile to.

Please, everyone, just use something that already exists. It is frustrating how the big players constantly use new and incompatible graphics APIs to try to obtain some sort of competitive advantage, adding to the workload we developers have to deal with. There is almost no cost to the platform developers to add new APIs; instead, the costs are borne by the application developers. It's a classic case of negative externalities.


I miss the years when, in spite of all its flaws, OpenGL was a non-deprecated option on all platforms. I know it left performance on the table, I know the drivers from some vendors were iffy, and I know reasoning about performance across different vendors / drivers / platforms was difficult - but it worked most of the time and allowed me to have a mostly unified codebase everywhere.


You can still kind of have that experience by using ANGLE[1] as your OpenGL implementation. It lets you use the same dialect of OpenGL ES on Windows, Linux, Mac, and Android, with consistent behavior across platforms.

ANGLE is the base of the WebGL implementation in Chrome. For WebGPU, we (Google) are working on a new native library called Dawn[2] that will fill the same role that ANGLE does for WebGL. I'm personally hopeful that Dawn itself can eventually be useful as a cross platform graphics abstraction for native apps as well as web apps. There's also Mozilla's gfx-rs[3] in the same space.

[1] https://github.com/google/angle

[2] https://dawn.googlesource.com/dawn

[3] https://github.com/gfx-rs/gfx


That's nice, but it's a pretty big dependency to add.


This, really. I'm still using OpenGL, and it's worth so much to have an adequate, truly cross-platform graphics API. The gfx-hal people are doing good work in Rust on a cross-platform Vulkan-like API, but it's not there yet.


OpenGL is STILL viable today. You can use a shared subset of ES 3.x and Desktop GL 3.x, and the number of places where you need to if/#ifdef because of differences is vanishingly small (mostly some minor shader decls; see the sketch below). You certainly don't need a giant library like ANGLE to do this.
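For illustration, a minimal sketch of the kind of shader-decl switch being described, written WebGL-style for brevity (natively this would be C/C++; the isGLES flag and the fragShader/fragBody variables are hypothetical):

  // ES 3.x fragment shaders need an explicit default precision;
  // desktop GL 3.x uses a different #version line and needs none.
  const header = isGLES
    ? "#version 300 es\nprecision highp float;\n"
    : "#version 330 core\n";
  gl.shaderSource(fragShader, header + fragBody);
  gl.compileShader(fragShader);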

We still don't know when Apple will actually remove GL from their platforms, but since the backlash will be so huge, it might be a while. And even when they do, the first step for 95% of code-bases that rely on GL will be to link in some GL-on-Metal emulation library. The net effect will be a slight performance drop for Apple vs other platforms. Well done Apple.

At some point switching to Vulkan (and Vulkan-on-Metal for Apple, or the reverse if you're an Apple-centric developer) may be worth it, but for the moment the number of apps and games that will see close to identical performance in GL and Vulkan is still huge. If you are e.g. geometry/shader/memory bound, you'll see exactly 0% speedup from API changes. You really need to be pushing a significant number of draw calls to see a difference. If you're not doing this, it's worth "hanging in there" with GL to see if the post-GL landscape de-clusterfucks itself in the meantime :)


Thanks for this perspective, it is helpful. In our industry not reacting to developments of the day is often the most courageous choice one can make.


Only when "all platforms" leaves out game consoles, which never had full support for OpenGL, besides a timid attempt with GL ES 1.0 + Cg on PS2.


It's true, but for most applications, games consoles are not even a consideration. There's a mountain of programs that only target Windows, Linux, macOS, Android, and iOS or some subset thereof.


Doesn't change the fact that OpenGL was never available on all platforms, regardless of how often one spreads that urban myth.

Even on Mac OS it wasn't available; rather, QuickDraw 3D was the API to go to.

Had Apple been successful with their in-house OS, they surely wouldn't have cared about OpenGL.


Are you saying OpenGL wasn't on Mac OS before Mac OS X? That's not true. Apple released OpenGL for Mac OS 9 in 1999.

WWDC 1999: Mac OS 9 - A work in progress https://youtu.be/LkmSrCsKPLk?t=1360


WWDC 1999 was after the NeXT acquisition; the engine was already rolling, adopting NeXTSTEP features.

Also note that Copland did not have any OpenGL support planned.


WebGL is like that now.


> There is almost no cost to the platform developers to add new APIs

To anyone who has ever spent time working on a platform API this is deeply insulting.

> …the costs are borne by the application developers.

Yes, whereas with cross-platform, least common denominator APIs the cost is borne by end users.

This is literally the attitude that led to Java applets and then Flash, and later to mistakes like Web SQL and (P)NaCl.


> To anyone who has ever spent time working on a platform API this is deeply insulting.

I've worked on platform APIs too. Maybe "almost no cost" wasn't the right phrase, but there's certainly less cost for the platform vendor.

> Yes, whereas with cross-platform, least common denominator APIs the cost is borne by end users.

It's a tradeoff that has to be evaluated on a case-by-case basis. So far I haven't seen arguments for WSL that are compelling enough.


My critique of all of these remains largely the same since we tried doing WebCL:

-- Path 1: We do need a leap here, and that leap should essentially be unleashing GPGPU primitives (memory barriers, etc.). The web is ~a decade behind, when we have a chance to go 10X+ beyond what regular apps do.

-- Path 2: If not, then pure conservative standards work a la WebGL 2. Useful, and care should be taken that politicking doesn't prevent Path 1 from happening in parallel.

It's disconcerting that, ~7 years later, the same problems keep happening.


They have to implement either the existing API or their own. The cost will be about the same in both cases. For big platform vendors, almost no (extra) cost.

On the other hand, we ordinary developers must pay the cost of supporting N APIs on M platforms. Every API carelessly invented by a vendor just because it costs them almost nothing extra burdens us all.


Hence middleware engines, with productive high-level APIs, which use every feature across all platforms without having to deal with messy extensions.

AMD and Khronos are still fighting to win the love of the CAD industry, adding back OpenGL features, because after having had to create IRIS Inventor-like APIs in-house, CAD vendors aren't keen on switching to yet another low-level API.


> Yes, whereas with cross-platform, least common denominator APIs the cost is borne by end users.

I’m one of the most vehement proponents of platform-specific UI toolkits here, but even I disagree with this statement without qualification. Standards, in general, are a useful thing to have. We’ve occasionally seen (for example, in the case of application frameworks) that each platform doing their own thing has made it so that the better experience for the user is the non-general API. But standardized interfaces (POSIX, OpenGL, JavaScript), when done right, have been a boon for developers and end-users alike.


All those standards are good examples of write once, debug everywhere: implement multiple code paths tailored to bugs, OEM-specific features, and workarounds.

That makes the end result in industrial code bases hardly any different from implementing it multiple times.


This seems a somewhat ill-informed comment, since you shouldn't be thinking of something like Vulkan as a platform API in the first place. Vulkan (and most other APIs, e.g. OpenGL) is an almost entirely platform-independent API targeting modern GPUs. There are a small number of platform-dependent entry points for window system interaction and the like, but their surface area is a very small percentage of the overall API, since GPUs are such complex beasts.


And it shows, given the lack of tooling versus what platform APIs offer out of the box.

Instead of a modern API, frameworks, a 3D model format, and IDE support for debugging shaders, we get plain old C, extension creep, and "go search for yourself" for basic features like displaying text or loading textures.

Hardly an improvement on OpenGL status quo.


OK, of the criticisms of Vulkan I've heard, "it doesn't have a function for displaying text" is the weirdest one. I would be upset if Khronos did include a function for displaying text, because it would invariably be broken in numerous ways.


That attitude is why plenty of graphics developers adopt platform APIs instead: we get the full package for anything graphics-related, instead of learning to make fire with sticks.

I did my graduation thesis in OpenGL; hunting for libraries that kind of work together, and doing from scratch what every other OpenGL beginner has to go through, is no fun.

Every Vulkan book does the same: the first set of chapters is tips about which libraries to hunt for.

The dark side allows for quicker gratification in time-to-pixel, instead of rites of passage.

That is something Khronos failed to learn with OpenGL, and their Vulkan SDK proves they didn't actually learn their lesson.

If the Vulkan SDK were actually at the same level as DirectXTK, MetalKit, 3DS SDK, PhyreEngine, GX2... maybe it would get more love from those of us who would rather embrace the dark side.


WebSQL wasn't a mistake.

It made an enormous amount of existing code available on the web.

It just wasn't agreed to by Firefox because it didn't have enough JS.


It wasn't agreed on because it was not standardized and, most importantly, because there were no alternative implementations. SQLite is amazing, but accepting a single implementation as a standard is a huge dependency. I was very sad to see it dropped, but it had to be.


The only thing that already exists that is provably web-safe is the WebGL dialect of GLSL (which no one seems to like). WebSPIR-V would be based on SPIR-V but in many ways has to be a new thing as well.

Edit: I originally said "WebGPU dialect of GLSL" which is not a thing. I meant "WebGL dialect of GLSL", which is a thing and which works ok in practice.


I don't see any way in which it won't be way more work to migrate to this proposed WSL than to get my existing SPIR-V tooling working with Web SPIR-V.


The actual functionality exposed is close enough that cross-compiling is a solvable problem. But fair point.

(On the flip side, having two syntactically very similar but actually different things can be dangerous, as it may lead devs to think some tooling works when it actually doesn't.)


Are there any technical obstacles to using GLSL with WebGPU? It's popular with some people.

See: https://www.shadertoy.com/


It might be that browser vendors would be better at writing compilers than GPU vendors, but there were a lot of bugs in GLSL compilers. SPIR-V was introduced to be a simpler format for GPUs to ingest, letting you do the complex parsing up front on your machine (where it's easier to identify and fix problems).

The other benefit was that changes could be made to improve the usability of the shading language without having to wait on all vendors to update their compilers. If the change was just syntactic sugar, it could be made without changing the SPIR-V. It's kind of like how you don't need an operating system update to move from C++98 to C++11.


Hilariously, Apple actually considers the shift from C++98 to C++11 something that required an operating system upgrade (they purposefully shipped an ancient version of libstdc++ while disingenuously doing comparisons against only what they shipped, not what had come from upstream, for as long as they could until libc++ was reasonable, which debuted on macOS 10.7--where I will note it was horribly broken due to an LLVM optimizer bug in the specific clang build they used to compile it, so really it wasn't available until macOS 10.8--and as Apple does not believe in static linking, their SDK doesn't have a way to work around this; of course, you can, and I did--by getting the code for libc++, modifying it to use an underlying libstdc++ as the moral equivalent of libc++abi, and statically linking that, which worked great and is frankly what Apple should have done themselves during the transition period :/--it is notable that Apple didn't want people to: they really do mentally model that as an operating system upgrade, which to me is an amazing demonstration of how little they understand of the power and potential of decoupled toolchains).


> Apple does not believe in static linking, their SDK doesn't have a way to work around this

Apple isn’t alone in this regard; as far as I can tell, Windows does this too, because they don’t commit to a stable syscall interface like Linux does. That being said, you may find Apple’s official stance on statically linking your binaries amusing: https://developer.apple.com/library/archive/qa/qa1118/_index...

In general, Apple links a lot of things to their OS releases: while their toolchains are distributed separately they’re practically tied to one or two major OSes unless you engage in hacks to make them continue working. And the languages which they exercise significant control over (namely Swift and Objective-C) are often tied to OS versions.


Browser vendors are a lot better at writing compilers than GPU vendors. Also there are fewer different browser engines than different GPU drivers so that also helps.


It strangely hasn’t been seriously considered, but it might not be a bad option (though it may need a more rigorous reformulation of the spec).


You can always just cross compile your SPIR-V to GLSL or something. People have been cross compiling between HLSL and GLSL for ages, it's a solved problem even if there are sharp edges.


There is no WebGPU dialect of GLSL, only a SPIR-V execution environment for WebGPU. Languages in themselves aren't "safe" or "unsafe"; their implementations are.

Also please stop trying to influence a technical debate in a standardization group by putting blog posts on top of Hacker News.


Sorry, I meant WebGL, simply mistyped.

As far as posting blog posts, we think this is just as fair as the Chrome team's communication about their own WebGPU implementation, which included a public demo and sessions at Google I/O which were widely reported.

As for the post getting upvoted to the top of Hacker News, that's the community's choice, not ours. The initial poster wasn't even an Apple person (afaik). If you are insinuating some kind of vote manipulation then that is inaccurate.


You're absolutely wrong here. A language must specify its semantics. Those semantics describe the safety properties of that language.


I care more about middleware that takes advantage of the best hardware integration on each platform than about bare-bones APIs full of endless extensions, which are only portable in theory, with "portable code" full of OEM-specific code paths.


What are you working on? I have no experience in graphics development, only a cursory familiarity with some concepts, but can I help? Typical frontend work has become kind of boring now that everything is abstracted, and the harder stuff mostly resides in graphics or on the edge. I just haven't really found a way to make that change yet.


> Please, everyone, just use something that already exists.

Says one "who is currently knee-deep in papering over platform incompatibilities", instead of using one of the solutions that do it for you.


I wasn't referring to pathfinder_gpu. :)

(I didn't downvote you.)


Web Shading Language is a non-starter. I looked quite closely at the specification. Apple has argued in the WebGPU WG that SPIR-V is not defined enough, semantically, to be meaningful, but WSL is even worse. Sometimes, it's even explained in terms of SPIR-V, like in the case of discard, where it was hastily explained in terms of SPIR-V's OpKill after we identified it was completely missing.

Apple has blocked any possibility of using a SPIR-V profile, sometimes with bad-faith arguments, despite it being the suggestion of all other vendors and the graphics community participating in the WG.

The team working on the WSL compiler had to have dominator analysis patiently explained to them, and then told us it was too complex of a concept, despite being something learned about in Compilers 101.


The WSL specification is a single document that tells you how the language is executed and how it ensures security. SPIR-V simply does not have this. I don’t get your argument here.

It’s really funny to say that we had to have dominator analysis patiently explained to us. I don’t have a clue what you are talking about there. In one meeting I remember having to explain dominator analysis to SPIR-V folks. (I am both a member of the WSL team and an expert on dominator analysis.)


WSL tells you very little about how the language is actually executed. I do not see any explanation of, say, waves and quads, which will need to be spelled out explicitly when these things are added. It's a requirement for proper LOD selection for texturing, along with ddx/ddy.

Simple branching in shaders is notoriously under-defined as well because of complications related to divergence/convergence and the SIMT model.

re: dominator analysis, it was brought up by an Apple WebGPU developer (Myles C. Maxfield) as a complexity of parsing SPIR-V. David Neto had to explain it to Myles. https://docs.google.com/document/d/1wG9BRLUSw4FbpnvqieK--jTW...


> WSL tells you very little about how the language is actually executed. I do not see any explanation of, say, waves and quads, which will need to be spelled out explicitly when these things are added. It's a requirement for proper LOD selection for texturing, along with ddx/ddy.

The spec has a semantics that describes what the language does when executed, in great detail.

It seems that your complaint is that there is something that you think SPIR-V describes better than WSL - waves and quads in this case.

I was there when we brought up dominators. You are right that we brought it up as a complexity of SPIR-V, but not because we didn't know what dominators are. We do know what they are, and because we do know, we know that it's complex. We're not saying "it's so complex that we don't understand it". We are saying "we understand it and we know it's complex". In particular, an early argument against text-based languages is that you have to specify and then validate variable scoping. Dominator analysis is a variable scoping rule, just a very complicated one for a wire format to have.
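For readers following along, this is the textbook fixpoint computation being discussed; a rough sketch in JS, purely illustrative and not anyone's actual validator code:

  // dom(entry) = {entry}; dom(b) = {b} plus the intersection of dom(p)
  // over all predecessors p of b, iterated until nothing changes.
  function dominators(numBlocks, preds) {
    const all = Array.from({ length: numBlocks }, (_, i) => i);
    const doms = all.map(() => new Set(all));
    doms[0] = new Set([0]); // block 0 is the entry
    let changed = true;
    while (changed) {
      changed = false;
      for (let b = 1; b < numBlocks; b++) {
        let inter = new Set(all);
        for (const p of preds[b]) {
          inter = new Set([...inter].filter((x) => doms[p].has(x)));
        }
        inter.add(b);
        if (inter.size !== doms[b].size) {
          doms[b] = inter;
          changed = true;
        }
      }
    }
    return doms; // doms[b] = the set of blocks that dominate b
  }

  // e.g. a diamond CFG 0->{1,2}->3: dominators(4, [[], [0], [0], [1, 2]])
  // gives doms[3] = {0, 3}: neither branch block dominates the join.

A SPIR-V validator has to run something like this over every function's control-flow graph just to check that each value is used only where its definition dominates; in a text language the equivalent check is ordinary lexical scoping.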


> The spec has a semantics that describes what the language does when executed, in great detail.

Here's what the WSL specification says about loading a texture, one of the simplest possible features of a shading language, but one that introduces a lot of complexity (again, to load a texture requires calculating an LOD, which requires talking about derivatives and scheduling pixels and shader execution in terms of quads):

> Todo: fill this section Sample, Load, Gather, etc..

That's it.

Now, your Babylon.JS demo clearly samples textures. So you have some semantics implemented, but what semantics are they? Note that these things are baked pretty heavily in hardware, so if you get the semantics wrong, you can't really fix the hardware. Your semantics have to model reality as it exists. Your automated tests can say that the spec is correct, but might break on a real GPU after translation to SPIR-V/Metal/HLSL.

The WebGPU working group does not currently have any participation from AMD or NVIDIA. Khronos and the SPIR-V working group do. I trust them to get these details right.

Fun fact: The current modelling of "discard" as OpKill instead of OpDemoteToHelperInvocation is incorrect for Metal's "discard", which I presume is one of your backends right now.


> The WebGPU working group does not currently have any participation from AMD or NVIDIA. Khronos and the SPIR-V working group do. I trust them to get these details right.

On the other hand, the SPIR-V working group has no participation from Apple or Microsoft (vendors of relevant platform APIs, relevant shading languages, and in Apple's case, of relevant GPUs).

You might argue that Apple should join the SPIR-V group but you could likewise argue that AMD and Nvidia should join the WebGPU group. After all, their voices would be valuable as to the API and not just the shader language. The barrier to entry to a W3C Community Group is also much lower.


I doubt those vendors really want to retrace their steps to go through the multi-year pain and many committee discussions that brought us SPIR-V, again. Look at all those names on the SPIR-V specification. Do you want to waste all those people's time, again, for that many years?

Nobody inside the WG but Apple wants WSL.


I was there for one of the times that Phil (from Apple) explained dominator analysis and why it's an added complexity of a binary language. Phil is the architect of our JS VM and one of the main designers of WSL so I think he knows!

Your claim that the WSL spec tells you very little seems off to me. The spec includes formal semantics for everything. If some things aren't in the spec yet they can be added. There is also a pretty extensive test suite. To me the level of specification rigor is considerably higher than SPIR-V, which has an informal spec with separate documents for adding web safety.


What I don't understand about this logic is why you don't opt to augment SPIR-V with the aspects required as opposed to branching off on an entirely different standard. I'm sure there are politics I'm not privy to, but here I am watching 48 processors pegged on compiling shaders and I really don't care to deal with another instruction set.


Google has a well-defined SPIR-V execution environment they have been championing: https://github.com/gpuweb/spirv-execution-env/blob/master/ex...


Is there a test suite?


I don't see why it should be relevant at this point, although there are several large test suites (including fuzzers) for common libraries in the SPIR-V ecosystem. Shader language tests should be added to the WebGPU conformance test suite regardless of the language.


But no tests for the web safety layer afaik.



I think we should focus more on the advantages and disadvantages of WSL and SPIR-V, and their relevance to existing or future ecosystems -- not debate over which language had the first web tests.


When someone says one option is a nonstarter because it's new and immature, I think it's fair to look at different signs of maturity to evaluate that statement.

(I personally don't think any option is a "nonstarter" but it's hard to even get agreement on the relevant evaluation criteria.)


It's not "new and immature." It has a formal specification and is used in production on devices/pcs everywhere. The comment itself was sort of vacuous and nitpicking a single aspect which is something that could be remedied through collaboration.


The set of web safety changes for SPIR-V is kind of new and immature in my opinion. I agree that it's building on a core that is widely deployed in other contexts.


By that standard, everything with respect to a shader ISA for the web is "new and immature." The point is finding a suitable common ground for all vendors to agree on working on. Not everyone going every which way to work on uncoordinated efforts.


If we can compile GLSL, GLSL ES, HLSL and OpenCL C to SPIR-V, and we can use SPIR-V in OpenGL, Vulkan and OpenCL, then why on earth can't WebGPU just use SPIR-V? If it's not safe enough, then define a safe subset or safety extensions with strict validation. If it's not Appley enough, then write a Metal Shading Language to SPIR-V compiler (the reverse has already been done). Hell, if SPIR-V really is too low-level, GLSL ES already exists in WebGL and is plenty friendly and capable. But surely the graphics world doesn't need yet another GPU language?


Because Apple hates Khronos


I don’t know if they hate the Khronos group, but they sure do seem not to care about them, or their standards. Pushing Metal, letting OpenCL languish, now this; it’s frustrating for people like me who have no time to learn multiple platform-specific languages like these.

The only way I’m using anything like this is through a tool or library or framework like Unreal Engine, where I just don’t have to care.

This is a short-term win, long-term loss: yes, the platform-specific programs look good now, but in the long run this stuff is going to be a negative as people look to avoid dealing with multiple platform-specific frameworks.

I’m hoping the return of a proper Mac Pro is going to steer back against this trend. I’m reasonably confident that if third-party GPU support is OK, then all of the work Apple puts into optimisations for these Apple-specific graphics languages is going to take a back seat; if NVidia is going to sell me a GPU for the Mac Pro, I’m confident that they care much more about CUDA than anything Apple-specific.


Considering that Apple is a Promoter Member of the Khronos Group, I doubt this.


They created OpenCL and supported OpenGL (ES) as the only graphics API on their platforms for years, but now they have dumped both open standards in favour of their own proprietary ones.


Actually they created QuickDraw 3D, and only adopted OpenGL as the NeXT team came onboard.

Although they created OpenCL 1.0 and gave it to Khronos, they were not happy with the path Khronos was taking it down, and their support basically stagnated at OpenCL 1.0.

As a side note, Google has yet to support OpenCL on Android; instead, just like they did with their Android Java, they decided to create their own flavour, RenderScript, which isn't compatible with OpenCL.

As for OpenGL ES, I bet they only did it because they were trying to "play pretend" at being another kind of Apple as a survival mechanism; nowadays they don't need it any longer.

Finally, Metal is one year older than Vulkan, offers a modern API instead of plain old C, and, contrary to Khronos and just like other platform vendors, Apple understands the value of providing nice frameworks and debugging tools to go along with an API.


> Apple understands the value of providing nice frameworks and debugging tools to go along with an API.

You should have a look at pcwalton's Twitter feed if you want a quick overview of how nice the tools in question are. Getting a kernel panic when running Xcode doesn't sound like “nice debugging tools” to me…


Better than not having any to start with.


They also then immediately dropped QuickDraw 3D like the hot garbage it was.


Thanks to NeXT people.


> [...] Metal is one year older than Vulkan [...]

Is it really though? Vulkan is based on AMD Mantle which AMD gave to Khronos. AMD Mantle (2013) predates Apple Metal (2014).


AMD gave Mantle to Khronos as they were going nowhere on their own.

When Apple released Metal, Mantle was still an AMD API.


> [Apple] only adopted OpenGL as the NeXT team came onboard.

I'm not talking about pre-NeXT Apple, that's ancient history.


Well, plenty of old Apple people are still around in their offices.

Many just don't realise how they were fooled into believing in the wolf in disguise.

Corporations do whatever it takes to save themselves.

Do you think Microsoft will keep being a FOSS champion after they wipe out the board?

How naive are those that assign human behaviour to corporations.


And they consistently shipped obsolete 4-5 year old OpenGL versions on macOS.


The wire size comparison is also weird. Are they comparing serving WSL directly (depending on a WSL compiler they went ahead and built into the Safari preview) vs GLSL plus a compiler to convert to SPIR-V because that's not built into any browser (yet)? Why would that be an interesting comparison?


GLSL+glslang+SPIR-V was one of the suggested approaches for developers that want to compile GLSL at runtime on the web.

Google has been working on getting a minimal form of glslang out the door, but they're having trouble because glslang is a pretty big beast.
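A rough sketch of what that runtime flow would look like from JS, assuming a hypothetical wasm build of glslang exposing a single compile entry point (the loader and API names here are illustrative, not a shipped interface):

  // Hypothetical: compile GLSL to SPIR-V in the page, then hand the
  // resulting words to WebGPU.
  const glslang = await loadGlslangModule();              // hypothetical loader
  const spirv = glslang.compileGLSL(fragSrc, "fragment"); // Uint32Array of SPIR-V
  const module = device.createShaderModule({ code: spirv });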


But presumably, if that were the eventually standardized format, the compiler could ship with browsers, right? It just seems weird to compare against another format that hasn't been agreed to either, but whose compiler they shipped with the browser preview so people can test it.

It should either be comparing sources or comparing (source + compiler to get it working in a shipping stable browser), not mixing the two.


The proposal from SPIR-V advocates is to not ship any high level language compiler with the browser. That could change, but currently no one is excited about accepting both a binary language and a text-based language.


Why would you expect people to ship a compiler though? Like: when I want to do runtime code generation on .NET, I don't concatenate strings containing C# and run them through a C# compiler... that would be madness!!... instead I use the APIs provided to do IL generation, which have been extremely amazing essentially since day one and provide unbounded flexibility over working with any specific textual language. If I really really wanted a compiler at runtime, maybe I want a different one, and I can then use differently models like F# or a Lisp variant... things that are much harder if someone insists I translate everything into some silly specific language :/.


Shipping support for both a text format and a binary format directly in the browser is in some ways the worst of both worlds. If both are implemented natively, you get double the attack surface at the ingestion layer. If the text format is supported by compiling to the binary format (or vice versa) using WASM that happens to ship with the browser, it will probably be slow (whereas ingesting a text format directly seems pretty fast).


What you are describing is exactly shipping a compiler, the JIT compiler bundled in the .NET Framework and Core runtimes.

You cannot do that in .NET Native, for example.


Well, one of the points of going with SPIR-V is to not standardize higher-level languages that translate into it.


Others have explained why it's relevant (this is the recommended solution for online shader generation from SPIR-V advocates).

But there's also a direct comparison of WSL vs SPIR-V wire size in the same graph, for cases where SPIR-V would be generated in advance.


It seems like there would be alternate ways to do online shader generation than concatenating strings and compiling if there were a well-formed binary standard. Maybe it is the easiest conceptually, but I don't see that as being a performance target in that case.

Edit: Also seems like they are leaning pretty heavily into Babylon.js. I don't think a standard should be developed that closely with any one specific application or framework.


Our collaboration with Babylon.js happened after most of the spec proposal work. It was inspired by Google’s similar collaboration with Babylon.js prompting Google’s own SPIR-V based implementation of WebGPU. This makes the comparison more apples-to-apples.

WebGPU itself is guided by input from many framework and engine developers.


Sure. I would just ask that the working group keep a broad perspective on the issue. There are developers who use graphics frameworks bare-metal, and there are also a number of frameworks that haven't been mentioned and don't seem to be cooperating, for instance Pixi.js and Three.js. I think the WG has a responsibility to represent their interests as well, whether or not they are actively participating. I also don't think that the technology has to match the existing architecture of engines. Instead, we should be looking at finding the best technology for a long-term solution.

Many developers already know that they will have to make significant overhauls to upgrade to WebGPU. "If WebGPU turns out to be as powerful as we all hope it to be libraries like Three.js will likely have to be rewritten from scratch in order to take full advantage of it." [0]

[0] https://github.com/mrdoob/three.js/issues/15936#issuecomment...


We (Apple) have privately reached out to quite a few JS frameworks and web game engines. We’re trying to take all their input into account and to share it with the WG.


You may not like it but it's pretty clear that WSL is at least a starter. It has a spec, a test suite, and an implementation (which fully implements safety). And the implementation is pretty fast, and should at least dispel WSL vs SPIR-V performance concerns.

This is arguably ahead of the proposed web dialect of SPIR-V (which needs to subset the language, add validation steps, and insert safety checks).


The SPIR-V for WebGPU has the subsetting of the language at [1]; validation is fully implemented in spirv-val, as are the safety checks in spirv-opt.

SPIR-V is also proven to compile to all the target APIs for WebGPU, while WSL only has a MSL transpiler right now.

[1] https://github.com/gpuweb/spirv-execution-env/blob/master/ex...


The article is pretty good at explaining why WebGPU is needed for the Web. It could probably be more clear about the fact that all browser vendors are developing this API (i.e. it's not Apple's thing, and never has been, if you don't count the name borrowed from their earlier prototype) and have implementations in the works.

For shading languages, it's not clear to me why GLSL-to-SPIRV would be so slow. Would be great to hear back from Google. Side note: we are currently experimenting [1] with Rust-based SPIR-V generation, and the transformation is looking to be mostly trivial - there is no reason it should be slow.

A bigger question though that needs to be resolved is how to specialize SPIR-V shaders. Without this, there is no good alternative to text-based cut-and-paste that the blog argues to be the best approach.

[1] https://github.com/jrmuizel/glsl-to-spirv


> The article is pretty good at explaining why WebGPU is needed for the Web. It could probably be more clear about the fact that all browser vendors are developing this API (i.e. it's not Apple's thing, and never has been, if you don't count the name borrowed from their earlier prototype) and have implementations in the works.

The first thing you read is: "WebGPU is a new API being developed by Apple and others in the W3C which enables high-performance 3D graphics and data-parallel computation on the Web."


Right?! Short of listing all the browsers and renderers out there, I think they did OK.


> it's not clear to me why GLSL-to-SPIRV would be so slow. Would be great to hear back from Google.

Off the top of my head:

- glslang supports all previous GLSL versions and profiles, which adds many checks and corner cases throughout the code

- it's tricky to generate correct SPIR-V; for instance, it took me a while to properly understand the constraints around basic-block ordering

Disclaimer: I left Google in 2016 and haven't looked at glslang/shaderc since. I am only offering personal opinions here.


> A bigger question though that needs to be resolved is how to specialize SPIR-V shaders.

Is the OpSpecConstant family not enough? Those exist specifically for specialisation.
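For reference, a minimal sketch of a specialization constant on the GLSL-for-Vulkan side (the constant_id decoration becomes an OpSpecConstant in the emitted SPIR-V, overridable at pipeline-creation time without recompiling the shader text; the source is held in a JS string here only to match the rest of the thread):

  const fragSrc = `#version 450
    layout(constant_id = 0) const int LIGHT_COUNT = 4; // -> OpSpecConstant
    layout(location = 0) out vec4 color;
    void main() { color = vec4(float(LIGHT_COUNT) * 0.1); }`;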


My main problem with SPIR-V specialization today is that it doesn't affect the shader interface. For WebGPU, we don't want to validate presence of resources that the user doesn't need, we don't want to allocate registers for them in our pipeline layouts, etc. I think having this ability is critical.


Specialization constants don’t seem to cut it for all the use cases. There’s variation beyond plugging in constants. Also some frameworks and engines give their clients the ability to provide shader snippets at runtime.


The answer to shader snippets in the SPIR-V world would be linking. I don't think that's well developed in the ecosystem if at all, but it's a perfectly natural extension.


The Web Shading Language is going to have to pick a new acronym; I did a double take when I saw that Windows Subsystem for Linux was coming to Safari


It really should be WebSL like everything else in its space (WebGL, WebAssembly, etc.)


It's a bit weird that they talk about WebGPU and WebGL and then name it WSL instead of WebSL.


It's probably following after GLSL (OpenGL Shading Language), which is admittedly an even worse name.


When two bad names collide in funny ways, creating a "No, this cannot possibly be true" moment.


That’s reasonable feedback.


WebSL is definitely a lot more clear!


I trust nothing from Apple when it comes to graphics. My years of pain as a graphics engineer trying to support their hardware and get good performance have created this deep-seated mistrust, especially given their track record of abandoning open standards, including the ones they started! (I'm looking at you, OpenCL).


Please let Apple's shader language die an early death.

Maybe they're on some ego trip to leave their mark in a more visible way than through shaping the shared effort in various places and want to have a big thing to call their own. Or maybe they just want to get some vendor lock-in.

But it's yet another slightly-incompatible, slightly-better slightly-worse alternative to the existing shader languages, and nobody wants to deal with that, just like nobody really wanted to care about Metal. Apple's tight control over their devices will probably force some people to build some kind of compatibility layer into their pipelines, but it won't have to be good, and it won't be the primary target.


In the meantime, they never added support for WebGL2 and MSAA on Framebuffer Objects. Blargh.


https://twitter.com/gfxprogrammer/status/1171553288681996288

"We're collaborating with Dean and the WebKit team to use ANGLE for WebGL's backend in WebKit, which will allow an easy upgrade to WebGL 2.0 on all platforms. Follow https://bugs.webkit.org/show_bug.cgi?id=198948 for updates on the work."


That's great news! Thank you!


WebGL 2 is still a work in progress in WebKit, not abandoned.


It's been so long I assumed it was abandoned completely at this point.


Haven’t we been down this text-based vs bytecode avenue before? The same arguments were made with asm.js against a bytecode wire format, and ultimately we ended up with wasm. WSL seems to be making the same arguments.


I'm unsure how they're getting better performance. WebGL uses OpenGL in a sort of sandboxed manner; does this WebGPU approach bypass some checks, or find more performant ways to do them? AFAIK the biggest problem with GPU access from browsers is that the GPU instructions are not at all hardware-secure, so everything has to be managed.


The performance comes mainly from a different programming model, not primarily from direct access to GPU resources (although the access is more direct than before).

The potentially expensive stuff has basically been moved out of the render loop, and into the initialization phase. For instance, when you create a shader in WebGL, that shader may be patched and recompiled internally when it is used for different situations (for instance different render target pixel formats). And "last minute" changes like this happen all over the place in GL and WebGL, because the GL model has so many flexible "knobs" that can be pressed at any time.

In WebGPU everything is "baked" at creation time. You may need to create more "state objects" upfront, for all the state combinations you need, and this creation may be more expensive than in the GL model, but once those state objects are created they are very efficient to use and, more importantly, they won't be "recompiled" during use.

It's a similar effect to having an unpredictable GC that can kick in at any time and produce frame spikes (like JavaScript), versus not having a garbage collector at all (like WASM).

WebGPU also needs a lot fewer calls in the "hot path" compared to WebGL. Where in WebGL you may need dozens of calls to change state between draw calls, in WebGPU it's only a handful (see the sketch below).
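A minimal sketch of that init/render split, using the WebGPU JS API shape as proposed at the time (exact names may change as the spec evolves; shaderModule, pipelineLayout, and renderPassDescriptor are assumed to be created elsewhere):

  // Init time, once: all state is baked and validated here.
  const pipeline = device.createRenderPipeline({
    layout: pipelineLayout, // assumed created elsewhere
    vertexStage:   { module: shaderModule, entryPoint: "vertex_main" },
    fragmentStage: { module: shaderModule, entryPoint: "fragment_main" },
    primitiveTopology: "triangle-list",
    colorStates: [{ format: "bgra8unorm" }],
  });

  // Per frame, in the hot path: only a handful of cheap calls.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass(renderPassDescriptor);
  pass.setPipeline(pipeline);
  pass.draw(3, 1, 0, 0);
  pass.endPass();
  device.getQueue().submit([encoder.finish()]);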


If you are 100% careful, you can get equal performance with a good WebGL implementation, like ANGLE, and good user code. But this is very difficult, and requires very careful programming.

I built a WebGPU-like layer [0] for my 3D web application https://noclip.website , so I get some of the performance benefits today, under ANGLE, but WebGPU has these best-practices baked in, so everyone will see an increase.

[0] https://github.com/magcius/noclip.website/blob/master/src/gf...


Yes, WebGL requires some of the same tricks needed in the D3D9 era (batching and avoiding redundant state changes).

I'm quite confident that WebGPU will still provide a nice performance boost compared to carefully written WebGL, because the API is a better match to the underlying native APIs (D3D12, Vulkan and Metal), while ANGLE is basically a GLES2/3 "emulator" on top of D3D, and D3D11 is quite different from the GL programming model.


Unbelievable performance. Bravo. You don't happen to be using WASM, do you?


I use a limited version of WASM for decompression and texture decoding. Everything else is just being very careful to not create any objects in the hot path.


This is incredible. I spent over an hour going through all the rooms of Banjo-Kazooie; I always loved that game. The performance is great!

Great job you did there.


That sounds more like implementation details of how OpenGL is used, rather than something 'different' from WebGL. Wouldn't it be possible to just modify the WebGL implementation, or flavor it, to get the performance seen here?


It seems to be structured more like Metal, DirectX 12 or Vulkan, so it's probably lower-level than WebGL, and reduces CPU load by lowering the cost of draw calls. These lower level APIs also make better use of multi core CPUs, and look at the huge performance differential between the other Macs and the likely high-core-count iMac Pro.

Their performance chart only lists Mac computers, so I assume under the hood this specific implementation is either translating WebGPU calls to Metal calls or at least the implementation is more like Metal. I suppose on other platforms, replace Metal with Vulkan or DirectX 12.

I just like reading about this stuff, I am not a graphics programmer, so I may be wildly off base.


One big way to improve on performance over WebGL is that WebGL exposes a slow layer on top of an already slow imperative API (OpenGL), while new-style APIs (Vulkan, DX12, Metal) are built around command buffers and state objects that you can assemble and then dispatch to the GPU in big chunks.

Think of it like the equivalent between calling memcpy(a++, b++, 1) in a loop and calling memcpy(a, b, n) instead.


It's mentioned in the article, GPU calls (and validation) are done up front instead of in the draw loop. If the gallery is accurate though, looks like there isn't any validation right now.


I just assumed that Apple exposed some API functionality in Webkit that allowed more efficient low-level usage of the GPU hardware. This leads me to believe that adoption of the WebGPU technology will probably fail to break out of the Apple ecosystem.


No, the other browser vendors are on board too. In the case of Apple, the API backend will be Metal. It'll be Vulkan or DX12 on other systems.


Or NVN, libGNM.


Anyone from other browser vendors care to comment? Would love to know their progress and what they think of SPIR-V/WSL.


Here are Google WebGPU efforts:

https://github.com/gpuweb/gpuweb/wiki/Implementation-Status

They have already mentioned in the standards meetings that although Chrome ships a prototype implementation of web compute, they don't plan to fully productise it, but rather to take the learnings from the community into WebGPU shaders.



The helmet gif is breathtaking. The eliding of abstractions as an API simplification is a bit disingenuous, though.

  gl.Foo(...)
  gl.Bar(...)
  gl.Baz(...)
into

  encoder.setPipeline(pipeline)
Unless they're intuiting what needs to be done, those Foo, Bar, and Baz calls are still happening somewhere.

That's not to say abstraction isn't a useful way of separating out code. And OpenGL/WebGL's pipeline setup has always been a bit archaic. This is definitely progress - it's just not really being presented directly in the article.


...it's exactly the same difference as GL versus modern 3D APIs (like Metal, Vulkan or D3D12, or even D3D11). The cost for those Foo, Bar and Baz calls is paid once, during the creation of the pipeline object, which usually happens at the start of the application.

Making that pipeline object active during rendering is very cheap (about as cheap as one of those gl.Foo() calls).


My point was the post is misleadingly saying: "These 15 lines go away and are replaced by this 1 line".

That's not what's happening (unless I'm mistaken) and the code examples are comparing apples to oranges.


Misleading if the point were that the API got smaller or easier, maybe.

But the point is instead that the amount of work done in the main render loop is smaller. Those 15 lines do go away in that sense- they only happen once on startup under the new API.


Well, yeah ok, the lines move from the drawFrame() function into the init() function basically.

It may not be less code to type, but it's much less code to run :)


I re-read the article - and that's just not what the article implies. They make the implication that there's a reduction in "code to type".

I suppose this is just an exercise in pedantry though. I'm excited, regardless, for any API simplification.


This seems pretty clear and unambiguous to me

> Most 3D applications render more than a single object. In WebGL, each of those objects requires a collection of state-changing calls before that object could be rendered. [...] All the pieces of state in the WebGL example are wrapped up into a single object in WebGPU, named a “pipeline state object.” Though validating state is expensive, with WebGPU it is done once when the pipeline is created, outside of the core rendering loop. As a result, we can avoid performing expensive state analysis inside the draw call. Also, setting an entire pipeline state is a single function call, reducing the amount of “chatting” between Javascript and WebKit’s C++ browser engine.

> Resources have a similar story. Most rendering algorithms require a set of resources in order to draw a particular material. In WebGL, each resource would be bound one-by-one. However, in WebGPU, resources are batched up into “bind groups”. [... In both APIs] multiple objects are gathered up together and baked into a hardware-dependent format, which is when the browser performs validation. Being able to separate object validation from object use means the application author has more control over when expensive operations occur in the lifecycle of their application.

The clear point in both of these comparisons is that the same operations must be done in both APIs, but the WebGPU version allows much of the work to be pre-computed, allowing the draw calls (where the performance bottleneck lies) to have as little overhead as possible.


That's neither clear nor unambiguous. The text I highlighted including their summary of it states there is a reduction of code. Not only that, the "both APIs" part you're mentioning isn't even pertaining to the code but rather the execution of the pipeline.

My point is, and has been, that they should focus on the perf gains and reduce the misleading sentiment that it has simplified the API footprint.


And clear and unambiguous to me:

  gl.UseProgram(program1);
  gl.frontFace(gl.CW);
  gl.cullFace(gl.FRONT);
  gl.blendEquationSeparate(gl.FUNC_ADD, gl.FUNC_MIN);
  gl.blendFuncSeparate(gl.SRC_COLOR, gl.ZERO, gl.SRC_ALPHA, gl.ZERO);
  gl.colorMask(true, false, true, true);
  gl.depthMask(true);
  gl.stencilMask(1);
  gl.depthFunc(gl.GREATER);
  gl.drawArrays(gl.TRIANGLES, 0, count);
>>> On the other hand, rendering a single object in WebGPU might look like:

  encoder.setPipeline(renderPipeline);
  encoder.draw(count, 1, 0, 0);


For years if not decades, imperative OpenGL calls like the ones above have been internally building state objects and command buffers. APIs like Vulkan and WebGPU just expose the state objects and command buffers directly. Part of the issue with OpenGL is that the state is hidden behind the driver (due to how much things have changed since the APIs were specified) and you build the state with a massive number of API calls that often need to get RPC'd into a sandbox or what have you.

A carefully specified API like WebGPU that exposes state objects can allow those to be built outside the sandbox (in user content) and efficiently RPCd in a smaller set of calls.

It is realistic to say those calls go away, even if similar calls have to happen "once". In comparison, when I port native apps over to WebGL I'm invoking dozens of WebGL API calls every time I draw some tris and it's SUPER SLOW. Compared to that the cost of doing some basic setup is a rounding error.


The course of development of the WebGPU initiative is breathtaking. The effort poured into the API is producing content that will be consumed by people who want to study GPGPU applications.

Included in the final specification or not, a well-tailored shading language is only a benefit to the GPGPU community, similar to how the simultaneous existence of DirectX, Metal, and Vulkan has benefited the robustness of the abstractions of the engines built upon them. No one should doubt the rigor of the developers shaping the WSL specification; they are experts at their game. Moreover, the web profile of SPIR-V will also differ considerably in its final form, and all parties are trying to find a good common denominator where none of the existing works is an exact solution; the work of Apple here will benefit everybody going forward.

Also, we are getting GPGPU computing abilities in the browser for a much broader market on many form factors! Let's prepare for it!


The claim that the developers shaping the WSL specification are experts is questionable given that the actual experts in shading languages -- i.e., the implementers building GPU hardware -- are looking at SPIR-V and seem to be pretty frustrated by the NIH syndrome of the web developer community here.


Do you have some evidence for this assertion? Because Apple happens to make its own GPUs and shading language... a shading language that also runs on other people's GPUs. Seems like relevant expertise to me.


Apple is huge. It's not clear to me whether the Apple GPU folks are involved, or whether it's just Apple's web people playing at graphics.


We already have the ultimate shading language: SPIR-V.


Unrelated: the video doesn’t fit on the screen in Mobile Safari.


Update: the demo site doesn't work for me either. There's a 404 in the console for https://webkit.org/demos/webgpu/babylon/babylon.max.js.map.


I've reported the video issue to the author. The demo site requires STP 91 or newer with the WebGPU experimental feature enabled.


I'm using Safari Technology Preview Release 91 (Safari 13.1, WebKit 15609.1.3) on a MacBook Pro (Retina, 13-inch, Early 2015) running macOS Catalina 10.15 Beta (19A558d) with the WebGPU experimental feature enabled and not seeing anything. I'd be happy to file a WebKit bug or Feedback Request if you'd like.


Please do! It's working for me with STP 91 on a Catalina beta build. Could be a hardware-specific issue. A bug report would be useful. Please Cc mjs at apple dot com and I'll make sure the right people see it. For reference, do the other demos in the gallery work for you? https://webkit.org/demos/webgpu/


Might be, my computer has been having a number of OpenGL issues in Catalina. Anyways, I filed https://bugs.webkit.org/show_bug.cgi?id=201765 and CC'd you.


Random comment. I've monitored the discussions of WHLSL/WSL vs SPIR-V and I can't decide which side I'm on.

I started in the SPIR-V camp. I liked the idea that WebGPU would just get a "shader assembly language" and that would let a thousand other higher-level languages blossom. I also liked the idea that ingesting assembly seemed easier and less code than ingesting a higher-level language.

But, Apple had their original WHLSL post (https://webkit.org/blog/8482/web-high-level-shading-language...) and I liked some of their arguments for a text-based language. I like the idea that I can just make a snippet/jsfiddle/codepen and don't need an entire toolchain nor any 3rd-party libraries just to get something working. I have 100s of standalone WebGL samples that have next to zero external code. I'd like to do the same with WebGPU, but if it's SPIR-V based that will be impossible.

I also agreed that, AFAICT, SPIR-V being more assembly-language-like is not really a help. To validate it still requires a bunch of work, understanding it as a whole (building up an AST?) and checking everything, so it's not clear to me there's a win there. The SPIR-V camp does list some valid points here in relation to validation:

https://github.com/gpuweb/gpuweb/issues/44

It's also been pointed out that SPIR-V is not a small format. Suggestions have been made that if you care about download size you'd send some other, smaller format plus a library to decompress/compile that format into SPIR-V. That sounded less interesting to me, as again I'd need both an offline toolset and a library to use its results.

I don't remember why but at some point I gravitated back toward the SPIR-V camp but when I went over the arguments it turned out to be mostly a tie.

I feel like if someone wrote a WSL or GLSL to SPIR-V compiler that was very small (50k to 100k compressed JS), ideally in JavaScript not WebAssembly, that might help decide the issue? Unfortunately, AFAIK none of the devs involved with WebGPU are JS programmers, so they do things like re-compile ANGLE's GLSL compiler via Emscripten as a solution, which turns into a large bloated library with multiple parts.

I hate the idea that I won't be able to create small WebGPU demos without large external libraries or without finding places to host binary spirv files. A small library might push me in the SPIR-V camp. Without that I think I lean to the WSL camp.

I think the SPIR-V camp would argue that small samples don't matter. What matters is AAA games and AAA apps like Google Maps and those teams don't care if they have to use large offline toolsets. They'd also argue there's 2 or 3 orders of magnitude more devs using a large library like three.js than using low-level WebGL so the fact that without a large library the experience sucks is irrelevant. So again, I don't know which direction is better.


Also, with SPIR-V the browsers are simpler, and thus less likely to be buggy, incompatible, or to have widely different performance.

All in all there's waaaaay more reasons to use SPIR-V.


SPIR-V with full web safety rules and translation to native shader formats is not really simpler to implement than WSL with those same properties. People make these claims, but the code doesn't bear that out.

There’s also a good chance that in the end the shader processing will be in a common library like ANGLE.

With GLSL the issues were much more about the drivers consuming a textual language than about the browser consuming a textual language.


three.js does shader generation at runtime and I expect they don’t want to significantly increase their download size by adding a compiler, so a text-based language is likely better for frameworks like that. The calculus may be different for a game engine like Unity. In that case there is already an offline tool chain involved.


Is TensorFlow going to use this?


Tensorflow.js will probably add a WebGPU backend when WebGPU becomes widely available in browsers. But that won't by itself close the performance gap with native ML libraries. For that there is a new W3C community group specifically looking at ways of adding machine learning acceleration to the web: https://www.w3.org/community/webmachinelearning/


This backend work has already begun (and can run posenet, albeit still slower than WebGL): https://github.com/tensorflow/tfjs/tree/master/tfjs-backend-...


I would love to see a wasm API for WebGPU and then have a new TensorFlow web runtime targeting that.


It would really democratize machine learning, so that developers don't have to rent GCP/AWS to serve apps that do inference.


Totally not another side channel. Waiting for WebPCI and WebDMA.


Such a shame that webGPU is not Vulkan based... Thanks Apple.


It wasn't just Apple, even several Google engineers working on WebGPU agreed that simply mimicking Vulkan wasn't the right model. There are some notes here: https://docs.google.com/document/d/1-lAvR9GXaNJiqUIpm3N2XuGU...

The bit about network transparency is the most interesting one IME (which, in this case, assumes the "network" is essentially a collection of intercommunicating system processes, not nodes), as well as the bits about how Vulkan exposes too much of the vendor model to the programmer, which also isn't their goal. But it's a very sensible goal for Vulkan itself on the desktop.

If anything, the work on gfx-rs means that WebGPU might actually end up as a more low-ish-level "portable" graphics API than Vulkan itself if you want to target every major platform + the web. In particular, gfx-hal is a non-Vulkan abstraction layer on top of D3D12/Metal/Vulkan, and webgpu-native/wgpu-rs provide WebGPU APIs for desktop platforms (in C/Rust) based on gfx-hal. Eventually you should be able to target WebGPU+WebAssembly too. I just ran a wgpu-rs demo yesterday on my Mac with Metal and on my Windows desktop with DX12 with no changes! Pretty good stuff so far. And if you're a die-hard Vulkan fan and don't care about WebGPU, then gfx-portability will let you program against Vulkan while building on gfx-hal, all the same, so you can still target Windows/Mac. (It would be interesting to ask how gfx-rs overcomes the problems outlined in the above design doc.)

The shading language will probably end up being the most contentious point in the end, but ultimately I think the overall tool quality/available for desktop is impressive, and WebGPU will probably end up as a good offering in the design space.


by "network", what they really mean is interprocess communication, right?


This is not about being a Vulkan fan. Not even about portability.

The real issue is the one no one talks about. WebGPU is the intersection of features from Vulkan, DirectX 12, and Metal. That means WebGPU is a subset of all three: by design it is less powerful than Vulkan, less powerful than DX12, and less powerful than Metal. By being an intersection, they've created, by design, a sub-par, non-evolutive API. As a consequence, the performance reachable on desktop, and the expressivity of possible software and graphical features, will be inferior on the web, which is deeply sad.

That nobody understands and states this is another sad thing.


WebGPU is for the Web. You're expected to be able to put up a webpage and have it run everywhere (as much as is possible). Vulkan is designed so that you have to make paths for every different type of hardware. That's a non-starter for the web.


The intersection of hardware features is greater than the intersection of Vulkan, DX12, and Metal features. Moreover, hardware features should be queryable at runtime.


With the extension count growing every week.

Good luck making sense of Vulkan in 10 years' time.


That's why it's versioned. WebGPU could start from Vulkan 1.1, or the latest Vulkan version could be bumped to 1.2 to mark it as the one used for WebGPU.


Except that OpenGL and WebGL have already proven that there is what the paper says, what the driver states, and what the GPU actually does.


Turns out that the intersection is already a huge advance over WebGL, and that much of the stuff outside the intersection is of marginal value or even outright regrettable.


How can you make such a massive claim? Did you read all of Vulkan extensions?


I have followed the WebGPU CG discussions about differences between the APIs and what features one has that the other doesn't. The initial goal of the API is to have a solid core, and not rely on extensions for important functionality. But it does have an extension facility so I'm sure some things will make it in as extensions over time.


Just like WebGL falls short of OpenGL ES, with fewer features and a lower FPS count on the same hardware.


Yes, thanks Apple. Vulkan is rather awful to use, compared to D3D12 and especially compared to Metal.

Metal, in my opinion, strikes the best balance between performance and low-level control.

With Vulkan the intention is that you have multiple codepaths for every vendor for good performance. This is way too much effort for an API that is already of little relevance.

I think that with AZDO-style extensions to OpenGL/WebGL you would've gotten 95% of the performance benefit without adopting a whole new rendering API (and shading language). Unfortunately, AMD could never get a proper OpenGL driver going, so they pushed for Mantle, which ultimately turned into Vulkan.


Metal also is quite different between mobile and desktop, so it's not solving this portability issue completely. It has the benefit of standardizing more of the limits/capabilities in the feature tables as a part of the "spec". This is much better than querying everything about the physical device at run/init time and making decisions from there.

Where Metal is really different, in a good way, is that it has progressive complexity. You can use the basic commands and get pretty far, allowing the driver to manage lifetimes for you and not caring about multi-threading too hard. But then there are ways to get more control, gradually, which you can opt into later. This is very different from Vulkan's all-or-nothing approach.

For the Web, it seems to me that having the same property would work best.


Not really interesting until Chrome adopts it


Chrome is adopting it!

They've signaled "intent to implement", and have started work on it in chrome for mac.

https://github.com/gpuweb/gpuweb/wiki/Implementation-Status


Note that Chrome has not signaled adoption for WSL (formerly WHLSL), instead preferring to use a profile of SPIR-V. They also put out a demo with the Babylon.js team for this as well.

https://medium.com/@babylonjs/webgpu-is-coming-to-babylon-js...



