Been doing this at Faro Inc since 2023 - I helped build it. The real magic is the lookup-table rasterization on device. Since mobile GPUs are fast now, it fits inside the geometry shader.
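For anyone curious what lookup rasterization means in practice, here's my rough mental model - a sketch of the general technique, not Faro's actual shader, written in JAX for readability rather than GLSL, with all names mine. The idea is to bake the Gaussian falloff into a table so the per-pixel hot path is an array fetch instead of an exp():

```python
import jax.numpy as jnp

# Precompute exp(-0.5 * d2) for squared Mahalanobis distances in [0, 9]
# (i.e. out to 3 sigma). LUT_SIZE and MAX_D2 are illustrative choices.
LUT_SIZE = 256
MAX_D2 = 9.0
falloff_lut = jnp.exp(-0.5 * jnp.linspace(0.0, MAX_D2, LUT_SIZE))

def splat_alpha(d2, opacity):
    """Per-pixel splat alpha via table lookup instead of evaluating exp().
    d2: squared Mahalanobis distance from pixel to the Gaussian's center."""
    idx = jnp.clip(d2 / MAX_D2 * (LUT_SIZE - 1), 0, LUT_SIZE - 1).astype(jnp.int32)
    return opacity * falloff_lut[idx]
```

On mobile that trades a transcendental per pixel for a cached fetch, which is usually a win.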
Faro does scanning, not tracking. It shoots lasers in all directions while simultaneously capturing 360° imagery, resulting in high-density colored point clouds and Gaussian-splat pre-imagery. I no longer work there, as they uprooted their executive team.
Are there any existing examples of partial render offload to the cloud?
Crazy good insight here: splatting is largely a search problem, and that can be offloaded to the cloud.
> Specifically, on the cloud side, we propose asynchronous level-of-detail search to identify the necessary Gaussians for the client. On the client side, we accelerate rendering via a lookup table-based rasterization.
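I've only skimmed the paper, so this is a guess at what the LoD search boils down to: cull anything whose projected footprint at the client's current pose is below roughly a pixel, and only ship the survivors. A toy sketch of that idea (function and parameter names are mine, not the paper's):

```python
import jax.numpy as jnp

def lod_select(centers, radii, cam_pos, focal_px, min_px=1.0):
    """Server-side selection: which Gaussians are worth sending to the client?
    centers: (N, 3) Gaussian means; radii: (N,) world-space extents."""
    depth = jnp.linalg.norm(centers - cam_pos, axis=1)
    footprint_px = focal_px * radii / depth  # pinhole projection of the radius
    return footprint_px >= min_px            # boolean mask of Gaussians to stream
```

Running that against a predicted pose instead of blocking the render loop would line up with the "asynchronous" part of the quote.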
Various forms of point/blob rendering have been around for decades. What has been missing are good workflows for creating the content.
That paper kicked off a rapid stream of a thousand follow-up papers by taking a photogrammetry-style workflow and producing better-than-photogrammetry results, reframing the process as gradient descent on differentiable point samples. This allowed the research to stand on the shoulders of all the work being put into deep learning tech.
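The reframing is easier to see in a toy 1D version (entirely illustrative, nothing like the real 3DGS pipeline): rendering is a differentiable sum of Gaussian blobs, so autodiff can push the blob parameters toward a target image:

```python
import jax
import jax.numpy as jnp

xs = jnp.linspace(0.0, 1.0, 128)              # 1D "pixel" coordinates
target = jnp.exp(-((xs - 0.3) ** 2) / 0.002)  # signal we want to reconstruct

def render(params):
    mu, sigma, amp = params                   # one isotropic blob
    return amp * jnp.exp(-((xs - mu) ** 2) / (2.0 * sigma ** 2))

def loss(params):
    return jnp.mean((render(params) - target) ** 2)

params = jnp.array([0.6, 0.1, 0.5])           # bad initial guess
for _ in range(1000):                         # plain gradient descent
    params = params - 0.5 * jax.grad(loss)(params)
```

Scale that up to millions of anisotropic 3D Gaussians with sorted alpha compositing and you're roughly at the actual method, with the deep learning ecosystem's optimizers and GPU tooling for free.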
The early-2000s splatting from point-based rendering is what George Drettakis and his students realized could be applied to this new NeRF domain.
Basically, all the reasons point splats didn't work for regular surface rendering nearly 25 years ago (holes, inefficiency, no mesh-style editing) are less of an issue in a light-field capture setup.