Don't listen to the HN haters, this is incredible and really exciting! Thanks for contributing such a great project and making it Open Source - that takes a lot of courage. Keep it up, very impressed!
Yeah, so I don't know much about web development, but I spend quite a lot of time in Unreal Engine, and seeing something like this in a web browser was really impressive!
That said, it wasn't immediately obvious how to add scripts, create actors, and so on.
Ah, Javascript-based VR. That's easily five years away from being a pleasant experience with anything but the most modest of scenes. And by then we'll be using WebAssembly.
It's all well and good, in a sense, so long as your main loop does little more than queue some buffers, shaders, and materials before handing the work over to the GPU; but as soon as things become dynamic, JS really starts to show its limitations.
To wit, I checked out the cannon.js demos and giggled as their examples screamed along at a sweat-inducing 3 fps, showing little more than a small stack of low-resolution spheres failing to tumble to the ground. The machine I'm using isn't a beast by any stretch of the imagination, but this is ludicrously poor performance even for it.
The problem isn't JavaScript, it's single-threading. If your main loop does more than queue the draw calls, you're doing it wrong. Web workers work.
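To make that concrete, here's a minimal sketch of the split (packTransforms and applyTransformsToScene are hypothetical names of mine, not from any framework): the simulation lives in a worker, and the main loop only copies the newest snapshot and queues draw calls.

  // physics-worker.js -- the simulation never touches the main thread
  importScripts('cannon.js');
  const world = new CANNON.World();
  // ... add gravity, bodies, etc. ...
  setInterval(() => {
    world.step(1 / 60);
    // pack positions/quaternions into a Float32Array (hypothetical helper)
    postMessage(packTransforms(world));
  }, 1000 / 60);

  // main.js -- render thread: consume the latest snapshot, issue draw calls, nothing else
  const physicsWorker = new Worker('physics-worker.js');
  let latest = null;
  physicsWorker.onmessage = (e) => { latest = e.data; };  // stale snapshots just get overwritten

  function render() {
    if (latest) applyTransformsToScene(latest);  // hypothetical: copy transforms into the scene graph
    renderer.render(scene, camera);              // assumes the usual THREE.js renderer/scene/camera
    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);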
And ideally your physics engine doesn't live in the browser context, for the same reason you don't draw geometry on your CPU: it's a specialized computation suited to a specialized environment. It's easy enough to run actual native, GPU-accelerated Bullet outside the browser and bus diffs to your renderer.
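On the receiving end that can be as dumb as this (the port, the wire format, and the meshesById lookup are all assumptions for illustration, not an existing protocol): the native Bullet process ticks on its own schedule and streams per-body transforms, and the page only applies them.

  // Hypothetical wire format: one binary frame per tick, 8 float32s per body:
  // [bodyId, x, y, z, qx, qy, qz, qw]
  const socket = new WebSocket('ws://localhost:9001');  // endpoint is made up
  socket.binaryType = 'arraybuffer';

  socket.onmessage = (e) => {
    const f = new Float32Array(e.data);
    for (let i = 0; i < f.length; i += 8) {
      const mesh = meshesById[f[i]];        // your renderer-side id -> THREE.Mesh table
      if (!mesh) continue;
      mesh.position.set(f[i + 1], f[i + 2], f[i + 3]);
      mesh.quaternion.set(f[i + 4], f[i + 5], f[i + 6], f[i + 7]);
    }
  };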
90FPS in Javascript-based VR is happening today. I'm doing it.
I'm nonetheless excited for the optimizations we'll make in the next 5 years, though they have less to do with wasm and more to do with cutting out the fat on the path to the GPU, at both the scene-graph and WebGL layers.
A modern GPU has 720 GB/s of memory throughput. To achieve even a fifth of that in the environment typical of an AAA game title, you need to invoke the host OpenGL drawing commands efficiently. Nvidia has some discussion of this in their bindless-texture white papers: essentially, the bind/unbind steps in OpenGL and similar APIs have started to degrade performance.
This is not going to be easy to do in JavaScript, although VR/4K is not the limiting factor; rather, it is the complexity of the scene.
A lot of it boils down to figuring out how to coalesce and pipeline your rendering (mostly a THREE.js problem) and submit it quickly and efficiently to the GPU (a WebGL-layer problem).
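To give one hedged example of the coalescing half: instancing folds thousands of identical objects into a single draw call. With a recent THREE.js it looks roughly like this (older releases need InstancedBufferGeometry and a custom attribute instead of InstancedMesh):

  // 10,000 spheres, one draw call
  const geometry = new THREE.SphereGeometry(0.5, 8, 8);
  const material = new THREE.MeshStandardMaterial();
  const count = 10000;
  const spheres = new THREE.InstancedMesh(geometry, material, count);

  const dummy = new THREE.Object3D();
  for (let i = 0; i < count; i++) {
    dummy.position.set(Math.random() * 50, Math.random() * 50, Math.random() * 50);
    dummy.updateMatrix();
    spheres.setMatrixAt(i, dummy.matrix);
  }
  spheres.instanceMatrix.needsUpdate = true;
  scene.add(spheres);                        // assumes an existing THREE.Scene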
You'll have to trust me that there are people actively working on these things -- the people providing the VR platforms. But these are things that need work at the web-infrastructure layer, not something that will be solved by moving away from JavaScript, which is not the bottleneck.
And there are actually already workarounds for most of these. We just need to massage the workarounds into default-on features in our browsers and frameworks.
I don't dispute that JavaScript will be capable of it, or that there are ways of working around its limitations for modest scenes now. But I find it curious that you describe the problem as a matter of threading, as though throwing more cores at it were an appropriate solution.
Setting aside VR for a moment: the simple scene I described from cannon.js ought to render in real time, in software, at a smooth framerate on the machine I use. I believe this because I have decades-old software that can do exactly that on the same hardware.
Shuffling the issue from improving JavaScript's baseline performance to spreading the concern across multiple cores may improve the perceived experience for the user, but it comes at the expense of watts and the overall amount of work the computer can perform concurrently.
It's why, to conserve power on my laptops, I avoid launching the browser: the software trades battery life for overcoming its baseline performance issues.
Do you have some more in-depth information on how to build these high-performance JavaScript-based VR applications? I work a lot with canvas2D and run into performance problems quite quickly, so I'd like to know a bit more about the structure of a highly performant application.
I've been meaning to blog about all of the stuff I'm learning here (doing it full-time, and there's a lot to write). Drop me an email if you want to be added to the mailing list; email in profile.
But quickly, re: canvas2D and JS-based VR: you really want to keep your textures (canvases) small and your updates minimal. Texture uploads to the GPU kill you because you have a per-frame bandwidth budget, and with current THREE.js and WebGL that budget is quite low. You also want to avoid some gotchas like y-flipping.
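For illustration, the shape of the pattern (the sizes and names are just examples, not prescriptions): keep the canvas small, and only flag the texture for re-upload on frames where the 2D content actually changed.

  // A 512x512 RGBA upload is ~1 MB; 2048x2048 is ~16 MB -- that eats the
  // per-frame budget fast, so keep the canvas as small as you can get away with.
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = 512;
  const ctx = canvas.getContext('2d');

  const texture = new THREE.CanvasTexture(canvas);
  texture.flipY = false;                     // sidestep the y-flip on upload; fix orientation in your UVs
  const material = new THREE.MeshBasicMaterial({ map: texture });

  let dirty = false;
  function drawUI() {
    ctx.clearRect(0, 0, 512, 512);
    // ... 2D drawing ...
    dirty = true;
  }

  function onFrame() {
    if (dirty) {
      texture.needsUpdate = true;            // exactly one upload, and only when something changed
      dirty = false;
    }
  }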
This is all going to get better, though, with faster WebGL and more intelligent THREE.js optimizations: e.g. subtexture updates [0], which currently require you to know your WebGL hacks, and OffscreenCanvas [1], which is very nice for squeezing things out of the main loop but isn't yet available in most browsers.
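For the curious, the "know your WebGL hacks" part means reaching past THREE.js to the raw context and updating only the dirty rectangle yourself, roughly like this (renderer.properties is a THREE.js internal and varies by version, and the texture must already have been uploaded once):

  const gl = renderer.getContext();
  const glTexture = renderer.properties.get(texture).__webglTexture;  // internal, fragile

  gl.bindTexture(gl.TEXTURE_2D, glTexture);
  // dirtyCanvas holds just the changed region; dirtyX/dirtyY are its offset in the texture
  gl.texSubImage2D(gl.TEXTURE_2D, 0, dirtyX, dirtyY, gl.RGBA, gl.UNSIGNED_BYTE, dirtyCanvas);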
As they say the future is already here, it's just not evenly distributed yet. I'm trying to distribute it.
Also, most high-performance JavaScript programming is about avoiding the garbage collector as much as possible. I wish they'd just let us manage memory manually.
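The standard workarounds end up looking something like this (a generic sketch, not tied to any particular engine): preallocate scratch objects and pool short-lived ones so the hot loop allocates nothing and the collector has nothing to chase.

  // Allocated once, reused every frame
  const tmpVec = new THREE.Vector3();

  // Simple pool for short-lived objects (particles, projectiles, ...)
  const pool = [];
  function acquire() { return pool.pop() || { x: 0, y: 0, z: 0, alive: false }; }
  function release(obj) { obj.alive = false; pool.push(obj); }

  function update(dt) {
    // Bad: const offset = new THREE.Vector3(0, dt, 0);  // garbage every frame
    tmpVec.set(0, dt, 0);                    // good: no allocation
    player.position.add(tmpVec);             // assumes `player` is an existing THREE.Object3D
  }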