Just remembered this interesting, provocative video from 2011 - https://youtu.be/00gAbgBu8R4
If I understood it right, to achieve something like unlimited detailed geometry we need to store objects in an adaptive vector form that is very fast to render at the current zoom level and needs only a little extra computation when the zoom changes slightly. So what encoding of models/geometry would be the fastest to render? From a 2D/UI engine perspective it's just a pixel cache, and we only need to copy pixels to the viewport. And there will be a maximum zoom level beyond which caching pixels makes no sense, because of the memory overhead and because we can render straight from the original vector representation in realtime.
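To make that 2D idea concrete, here's a minimal sketch of a per-zoom-level pixel cache (basically a mip/tile pyramid over vector content). The `VectorShape` idea and `rasterize_vector()` are hypothetical placeholders, not any real library API:

```python
# Minimal sketch: cache one rasterized copy per discrete zoom level and
# re-rasterize only when the zoom crosses into a new level. Beyond a
# maximum level we skip the cache and render from the vector data directly.
# rasterize_vector(shape, zoom) is a hypothetical placeholder.
import math

class ZoomPixelCache:
    def __init__(self, shape, max_cached_zoom=8):
        self.shape = shape                  # original vector representation
        self.max_cached_zoom = max_cached_zoom
        self.cache = {}                     # zoom level -> pixel buffer

    def get_pixels(self, zoom):
        level = math.floor(math.log2(max(zoom, 1e-6)))
        if level > self.max_cached_zoom:
            # caching here costs too much memory; render straight
            # from the vector data instead
            return rasterize_vector(self.shape, zoom)
        if level not in self.cache:
            self.cache[level] = rasterize_vector(self.shape, 2 ** level)
        return self.cache[level]            # caller scales/copies to viewport
```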
But what is the equivalent of pixel caching in the 3D world? Since objects/geometry can be viewed from different sides/angles (without changing the distance from camera to object, i.e. the zoom level), we would need something like a 3D pixel cache, which sounds like a huge memory requirement. Maybe voxels or points/splats? Or maybe just the same 2D pixel cache, but one for each of the 6 sides of the object (like the 6 faces of a cube), with some pixel interpolation for intermediate viewing angles?
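A rough sketch of that "6 cube faces" idea, under the assumption that blending the cached face images weighted by how squarely each face is seen is good enough for intermediate angles. `render_from_direction()` and `blend_images()` are hypothetical placeholders:

```python
# Pre-render the object once from each axis-aligned direction, then for an
# arbitrary view direction blend the cached images of the faces it sees,
# weighted by the dot product between the view direction and face normal.
FACE_NORMALS = {
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def build_face_cache(obj):
    # one 2D pixel cache per cube face
    return {name: render_from_direction(obj, n) for name, n in FACE_NORMALS.items()}

def approximate_view(face_cache, view_dir):
    # faces pointing toward the camera get higher weight; back faces get zero
    weights = {name: max(0.0, dot(view_dir, n)) for name, n in FACE_NORMALS.items()}
    total = sum(weights.values()) or 1.0
    return blend_images(face_cache, {n: w / total for n, w in weights.items()})
```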
Except they never really made their "unlimited detail" look anywhere near as good as the comparatively low-poly, trick-based rendering they were competing against, and definitely not as good as the high-end image-scan data being rendered by recent Unreal Engine demos. Even Euclideon's highest-end rendering demos that I've seen (also image-scan-based voxel data IIRC) look rather shoddy compared to modern AAA game engines.
Maybe it technically could push more polygons, but it looked like crap.
But yeah, I agree that their tech seems to be a clever way to index and access large amounts of point cloud data, allowing them to stream from disk just what is needed for the current view -- a clever database more or less.
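Euclideon never published their algorithm, but the general "stream only what the view needs" approach usually looks something like an octree over the point cloud, where each node stores a coarse sample of its points and traversal descends only while a node is visible and would cover more than about a pixel on screen. A sketch with hypothetical `load_node()`, `frustum_intersects()` and `projected_size()` helpers:

```python
# Sketch of view-dependent streaming over an octree-indexed point cloud:
# skip subtrees outside the frustum, stop descending once a node's coarse
# sample is already sub-pixel on screen, and load only what survives.
def collect_visible_nodes(root, camera, pixel_threshold=1.0):
    visible = []
    stack = [root]
    while stack:
        node = stack.pop()
        if not camera.frustum_intersects(node.bounds):
            continue                          # whole subtree is out of view
        if node.is_leaf or node.projected_size(camera) <= pixel_threshold:
            visible.append(load_node(node))   # coarse sample is enough here
        else:
            stack.extend(node.children)       # need finer detail: descend
    return visible
```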
But for all the claims they made about how it would revolutionise everything, their demos were pretty damn bad.