fulafel's comments

The WebGPU spec identifies squarely as a web standard: "WebGPU is an API that exposes the capabilities of GPU hardware for the Web." There is also no mention of non-web applications.

It's true that you can use Dawn and wgpu from native code, but that's all outside the spec.


There is mention of desktop applications in their getting-started docs; it seems well within the intention of the maintainers to me.

https://eliemichel.github.io/LearnWebGPU/introduction.html

> Yeah, why in the world would I use a web API to develop a desktop application?

> Glad you asked, the short answer is:

    Reasonable level of abstraction

    Good performance

    Cross-platform

    Standard enough

    Future-proof

This is an indie site. Nothing wrong with it but it's not canon.

And yet Electron exists…

The intent and the application are never squarely joined. Yes it’s made for the web. However, it’s an API for graphics. If you need graphics, and you want to run anywhere that a web page could run, it’s a great choice.

If you want to roll your own abstraction over Vulkan, Metal, DX12, Legacy OpenGL, Legacy DX11, Mesa - be my guest.


That might exclude a lot of your user base. For example a big chunk of Android users, or Linux workstation users in enterprise settings who are on older LTS distributions.

SDL GPU doesn't properly support Android anyways due to driver issues, and I doubt anyone's playing games on enterprise workstations.

It's also totally US centric, whereas the blog post is written to sound general. (They do cop to it in the caveats section)

Vulkan Compute is catching up with HIP (or whatever the compatibility stuff is called now), which seems like a welcome break from CUDA - it beats CUDA in some benchmarks on AMD: https://www.phoronix.com/review/rocm-71-llama-cpp-vulkan

For most devs, using GLSL instead of C++20 or a Python GPU JIT is a downgrade in developer experience.

For Python: PyTorch has Vulkan support according to https://docs.pytorch.org/executorch/stable/backends/vulkan/v... - wonder how performance is there.

CUDA is not only for AI.

For a lot of use cases a major advantage of IPv6 is to get away from ambiguous rfc1918 addressing.

You can then just put an allow rule between arbitrary v6 addresses anywhere on the internet when you need connectivity, without hacks like proxies or NAT and the associated complexity and addressing ambiguity/context dependence of rfc1918 addresses.

So f.ex. you can just curl or ssh to mycontainer.mydomain.net, or you can put an allow rule from mycontainer.mydomain.net to a VM or laptop on your home network.
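The ambiguity point can be illustrated with Python's stdlib ipaddress module (the addresses here are purely illustrative, not from any real setup):

```python
import ipaddress

# An RFC 1918 v4 address is reused in countless networks: "10.0.0.5"
# only identifies a host relative to whichever network you're in,
# so a firewall rule naming it is context-dependent.
v4 = ipaddress.ip_address("10.0.0.5")
print(v4.is_private, v4.is_global)    # True False

# An IPv6 global unicast address is globally unique, which is what makes
# a plain allow rule between two such addresses unambiguous anywhere.
# (2600::1 is just an arbitrary address from global unicast space.)
v6 = ipaddress.ip_address("2600::1")
print(v6.is_global)                   # True
```

The same check is what lets you reason about a rule like "allow mycontainer.mydomain.net -> laptop" without caring which site either end sits at.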

Internetworking, they call it.


I'm talking about an internal network, not the public connection.

The context in the GP comment was generally getting v6 connectivity for containers.

"Internal" is a context dependent term that you introduced. But to give a use case for that, for example you might want to have (maybe at a future date) two hosts on your networks on AWS and Hetzner talk to each other, still without allowing public connectivity.


> my state will need at least 5GW of power to literally keep the lights on.

I think this abstraction is missing the elasticity of demand that can be unlocked by end-to-end dynamic pricing. If production were cut in half for a day and the hourly price hiked until demand matched production, customers would probably still choose to keep most of the lighting on while postponing more energy-intensive loads.
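As a toy sketch of that mechanism (all numbers are made-up assumptions, and constant-elasticity demand is a simplification, not a claim about any real grid):

```python
# Constant-elasticity demand: D(p) = D0 * (p / p0) ** -elasticity.
# If supply is halved, find the hourly price at which demand meets it.

def clearing_price(d0, p0, supply, elasticity):
    """Price where D(p) == supply for constant-elasticity demand."""
    return p0 * (d0 / supply) ** (1.0 / elasticity)

d0, p0 = 5.0, 50.0          # normal demand 5 GW at 50 EUR/MWh (made up)
supply = d0 / 2.0           # production cut in half
for e in (0.1, 0.3, 0.5):   # assumed short-run price elasticities
    p = clearing_price(d0, p0, supply, e)
    demand = d0 * (p / p0) ** -e   # lands at supply by construction
    print(f"elasticity {e}: price {p:.0f} EUR/MWh, demand {demand:.2f} GW")
```

The inelastic cases show the price has to move a long way before demand follows; the point is that cheap, high-value loads like lighting are the last thing customers shed, while flexible loads get postponed.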


Intel seems vulnerable to Trump tariffs, would seem more likely than TSMC getting into trouble.

You might think that a dGPU is always faster, but limited memory capacity bites you there (unless you go to datacenter dGPUs that cost tens of thousands). Look at e.g. https://www.ywian.com/blog/amd-ryzen-ai-max-plus-395-native-... or the various high-end Mac results.

So I want this Thinkpad.

https://www.lenovo.com/us/en/p/laptops/thinkpad/thinkpadp/th...?

Processor: AMD Ryzen™ AI 9 HX PRO 370 (2.00 GHz up to 5.10 GHz)
Operating System: Windows 11 Pro 64
Graphics Card: Integrated AMD Radeon™ 890M
Memory: 64 GB DDR5-5600MT/s (SODIMM) (2 x 32 GB)

But I also seriously want to run LLMs. My hunch is a gaming laptop is the best way to do this on the go without spending $5000 on a Thinkpad with a high-end graphics card.


I guess it's the physical HDMI port that's needed, as the Minis and the Pro laptops have working HDMI monitor support?

Does consumer hardware (non-MI) need proprietary kernel drivers for running rocm + pytorch?

No. But you might need a specific version of rocm built for your gpu. These are built on https://github.com/ROCm/TheRock

Right now AI support on AMD is officially only on specific models. But they are working hard to turn this around to have broader support. And making progress.


Vulkan compute is also getting some good press as a local llm platform (at least on the linux side), will be interesting to see which crosses the line to "can ship production quality apps on this" first.

Nope! It works fine with a somewhat recent in-tree kernel. The AMD driver is actually open source, not just a wrapper around a big on-device blob like the NVIDIA one. tinygrad also has a driver that doesn't even need the kernel module, just mmapping the PCIe BAR into Python.
