This doesn't really seem volumetric in nature to me.
Firstly, UE4 has a broad-phase culling system that I understand they are quite happy with, so I doubt they would add a ray trace: not only would it not really improve anything, it would also not integrate well with non-Nanite techniques for rendering geometry. My understanding is that, at least for now, Nanite can only handle (mostly) static geometry, and I doubt they would want to give up the ability to work with other rendering techniques.
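For context, a broad-phase cull in the generic sense just tests cheap bounds against the view before any expensive per-triangle work. A toy CPU sketch of the idea (this is not UE4's actual code, and the sphere bounds and all the names here are my own illustration):

    // Illustrative sketch only -- not UE4's actual broad-phase code.
    // A typical broad-phase cull tests a cheap bound (here a sphere)
    // against the six view-frustum planes before any per-triangle work.
    #include <cstddef>
    #include <vector>

    struct Plane  { float nx, ny, nz, d; };   // n.x*x + n.y*y + n.z*z + d = 0
    struct Sphere { float x, y, z, radius; }; // bounding sphere of an object

    // True if the sphere is at least partially on the inner side of all planes.
    bool SphereInFrustum(const Sphere& s, const Plane (&frustum)[6]) {
        for (const Plane& p : frustum) {
            float dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
            if (dist < -s.radius)
                return false;  // fully behind this plane: culled
        }
        return true;
    }

    // Broad phase: keep only the indices of potentially visible objects.
    std::vector<size_t> BroadPhaseCull(const std::vector<Sphere>& bounds,
                                       const Plane (&frustum)[6]) {
        std::vector<size_t> visible;
        for (size_t i = 0; i < bounds.size(); ++i)
            if (SphereInFrustum(bounds[i], frustum))
                visible.push_back(i);
        return visible;
    }

The point is that this stage is cheap, conservative, and completely agnostic to how the surviving geometry is later rasterized, which is exactly why it coexists fine with multiple rendering techniques.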
If I had to take a wild guess, I would guess they have a fairly standard process to generate a low-detail mesh, which is what gets dispatched to the GPU. Then, on top of that mesh, they may build triangle acceleration structures parameterized across the surface, perhaps similar to this paper [1]. From there, you could do something similar to what you suggest and just generate additional triangles, as in that paper.
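To make the "generate additional triangles" step concrete: the core of any such scheme is emitting finer triangles from a parametric (u,v) refinement of each coarse face. A toy CPU sketch, assuming uniform flat refinement only (the paper's adaptive quadtrees and actual surface evaluation are omitted, and every name here is mine):

    // Illustrative sketch, not the paper's algorithm: given one coarse
    // base triangle, uniformly subdivide in barycentric (u,v) space and
    // emit the n*n finer triangles a surface-space refinement would dispatch.
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Tri  { Vec3 a, b, c; };

    static Vec3 Lerp3(const Vec3& a, const Vec3& b, const Vec3& c,
                      float u, float v) {
        float w = 1.0f - u - v;  // barycentric weights
        return { w*a.x + u*b.x + v*c.x,
                 w*a.y + u*b.y + v*c.y,
                 w*a.z + u*b.z + v*c.z };
    }

    // A real system would displace or re-evaluate the surface at each
    // sample; here the patch stays flat so the example is self-contained.
    std::vector<Tri> Refine(const Tri& base, int n) {
        std::vector<Tri> out;
        float step = 1.0f / n;
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n - i; ++j) {
                float u = i * step, v = j * step;
                Vec3 p00 = Lerp3(base.a, base.b, base.c, u,        v);
                Vec3 p10 = Lerp3(base.a, base.b, base.c, u + step, v);
                Vec3 p01 = Lerp3(base.a, base.b, base.c, u,        v + step);
                out.push_back({ p00, p10, p01 });       // "upright" triangle
                if (j < n - i - 1) {                    // "inverted" triangle
                    Vec3 p11 = Lerp3(base.a, base.b, base.c,
                                     u + step, v + step);
                    out.push_back({ p10, p11, p01 });
                }
            }
        }
        return out;
    }

Because the refinement lives in the coarse face's parameter space, you can pick n per face based on screen coverage, which is what makes a surface-parameterized structure attractive for level of detail.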
However, given how poorly GPUs handle pixel-sized polygons, that may not be the best approach, so I wouldn't be surprised if tessellation is used down to around quad level, and from there the rest is either done with compute-style rasterization in the pixel shader, or some outputs are written out and the remaining rasterization is deferred to a compute job.
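By compute-style rasterization I mean manually scanning each tiny triangle's screen-space bounding box with edge functions instead of going through the hardware rasterizer, which wastes most of its 2x2 quad occupancy on pixel-sized triangles. A minimal single-threaded sketch of the per-triangle work (the real thing would run one triangle per GPU thread and resolve depth with atomics; everything here is my own illustration):

    // Minimal CPU sketch of what one thread of a compute-style rasterizer
    // does for one tiny triangle: scan its bounding box and test each
    // pixel against the three edge functions.
    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Pt { float x, y; };

    static float Edge(const Pt& a, const Pt& b, const Pt& p) {
        // Twice the signed area; all three non-negative means "inside"
        // for a consistently wound triangle.
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    void RasterizeTri(const Pt& v0, const Pt& v1, const Pt& v2,
                      uint32_t payload, int width, int height,
                      std::vector<uint32_t>& framebuffer) {
        int minX = std::max(0, (int)std::floor(std::min({v0.x, v1.x, v2.x})));
        int maxX = std::min(width  - 1,
                            (int)std::ceil(std::max({v0.x, v1.x, v2.x})));
        int minY = std::max(0, (int)std::floor(std::min({v0.y, v1.y, v2.y})));
        int maxY = std::min(height - 1,
                            (int)std::ceil(std::max({v0.y, v1.y, v2.y})));

        for (int y = minY; y <= maxY; ++y) {
            for (int x = minX; x <= maxX; ++x) {
                Pt p = { x + 0.5f, y + 0.5f };  // sample at pixel center
                if (Edge(v0, v1, p) >= 0 && Edge(v1, v2, p) >= 0 &&
                    Edge(v2, v0, p) >= 0) {
                    // On a GPU this write would be an atomic min over a
                    // packed depth+payload value to keep the nearest surface.
                    framebuffer[y * width + x] = payload;
                }
            }
        }
    }

For a triangle covering one or two pixels, that loop body runs a handful of times, so doing it in a shader or compute job can beat the fixed-function pipeline.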
I'm sure there are all sorts of exceptions and edge cases, but I wouldn't be surprised if it looks something like that general workflow. On the other hand, I'd be very surprised if there was anything that looked like ray tracing at the scene level, though perhaps traversing the surface-space data structures looks a bit like ray tracing.
[1] https://graphics.stanford.edu/~niessner/papers/2016/4subdiv/...