It's fine to use this sort of tech on a web page, but it's vital to detect whether the device rendering the page supports it well, and only use it (or scale it up) where support exists. There are GPU-detection functions out there, and measuring the frame rate your app is getting is simple, so progressive enhancement is not hard. That said, browsers could make it easier.
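Something along these lines, for example, assuming a browser environment. The supportsWebGL and measureFps helpers, the two-second sampling window, and the 30 fps cut-off are all just illustrative placeholders, not from any particular library:

    // Try to create a WebGL context; a null result means the device
    // (or its driver/browser combination) can't render this at all.
    function supportsWebGL(): boolean {
      const canvas = document.createElement('canvas');
      return canvas.getContext('webgl') !== null;
    }

    // Count frames over a short window with requestAnimationFrame and
    // report the measured frame rate to a callback.
    function measureFps(sampleMs: number, onResult: (fps: number) => void): void {
      let frames = 0;
      const start = performance.now();
      function tick(now: number): void {
        frames++;
        if (now - start < sampleMs) {
          requestAnimationFrame(tick);
        } else {
          onResult((frames * 1000) / (now - start));
        }
      }
      requestAnimationFrame(tick);
    }

    // Only turn on the heavy effects where they actually run well.
    if (supportsWebGL()) {
      measureFps(2000, (fps) => {
        if (fps < 30) {
          // Placeholder hook: scale back to the cheap version of the page.
          document.body.classList.add('low-end');
        }
      });
    }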
I viewed the page on my crappy Android phone and got maybe 5fps. Anyone shipping an experience like that hasn't done a good job of supporting their users.
In my experience, it is hard: you suddenly don't have only one result to test and polish, but many different ones.
So it is possible, but it is not easy to do right, except for very small applications.
And as for the example here, well, it is a collection of many examples loading all at once, so of course it is hardware-intensive. In a real-world project you would use only some of it, or add a disclaimer that it is hardware-intensive. And if you make a game, isn't it expected that it won't run on all hardware?
> You suddenly don't have only one result to test and polish, but many different ones.
In the case of WebGL you need to account for more than the browser. If the user's GPU is on Chrome's blacklist then they won't see anything, and the only way to detect that is with the GPU-detection functions I mentioned. You have to do progressive enhancement for 3D stuff, and if you've implemented it for that, then you already have the infrastructure in the code to do it for performance as well, so it's not hard...
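Roughly like this, as a sketch: getWebGLInfo is a made-up name, and the renderer-string lookup relies on the WEBGL_debug_renderer_info extension, which not every browser exposes:

    // WebGL can look "supported" yet still fail at context creation, e.g.
    // when the GPU is on the browser's blacklist, so the only reliable
    // check is to actually try to create a context.
    function getWebGLInfo(): { ok: boolean; renderer?: string } {
      const canvas = document.createElement('canvas');
      const gl = canvas.getContext('webgl');
      if (!gl) {
        // No context: WebGL is unavailable or the GPU/driver is blocked.
        return { ok: false };
      }
      // Optionally read the unmasked renderer string where the browser
      // exposes the WEBGL_debug_renderer_info extension.
      const dbg = gl.getExtension('WEBGL_debug_renderer_info');
      const renderer = dbg
        ? (gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL) as string)
        : undefined;
      return { ok: true, renderer };
    }

    const info = getWebGLInfo();
    if (!info.ok) {
      // Fall back to the non-3D version of the page.
      console.log('WebGL unavailable, using the 2D fallback');
    } else {
      console.log('Renderer:', info.renderer ?? 'unknown');
    }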
> I viewed the page on my crappy Android phone and got maybe 5fps. Anyone shipping an experience like that hasn't done a good job of supporting their users.