That's not true -- even with depth sensing cameras, it will still be full of artifacts, and things like curly hair or strands of hair will become disastrous because they're not easily geometrically modeled.
The Oculus Quest 2 doesn't do anything like what you're describing -- it essentially just pipes in stereoscopic video from its stereo cameras and stitches them together in a trivial way. It doesn't attempt to build geometric representations of objects in your environment at all.
(For guardian functionality it does very simple things, like using the depth cloud to figure out the height of the floor and to check whether there are points inside the guardian that shouldn't be there, but that doesn't involve inferring object geometries.)
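The kind of simple depth-cloud heuristic described above could be sketched roughly like this. This is purely illustrative, not Oculus's actual implementation: it assumes a y-up point cloud and simplifies the guardian boundary to an axis-aligned rectangle in the floor plane, and all function names are hypothetical.

```python
import numpy as np

def estimate_floor_height(points, percentile=5):
    """points: (N, 3) array, y-up. Estimate the floor height as a low
    percentile of the vertical coordinate (robust to a few stray points)."""
    return np.percentile(points[:, 1], percentile)

def points_violating_guardian(points, guardian_min_xz, guardian_max_xz,
                              floor_y, margin=0.1):
    """Return points inside the (simplified, axis-aligned) guardian footprint
    that rise more than `margin` meters above the estimated floor -- i.e.
    depth points 'that shouldn't be there'."""
    xz = points[:, [0, 2]]
    inside = np.all((xz >= guardian_min_xz) & (xz <= guardian_max_xz), axis=1)
    above_floor = points[:, 1] > floor_y + margin
    return points[inside & above_floor]

# Toy depth cloud: three floor-level points and one raised obstacle point.
pts = np.array([[0.0, 0.00, 0.0],
                [1.0, 0.01, 1.0],
                [0.5, 0.80, 0.5],   # obstacle inside the play area
                [3.0, 1.00, 3.0]])  # outside the footprint, ignored
floor = estimate_floor_height(pts)
obstacles = points_violating_guardian(pts, np.array([0.0, 0.0]),
                                      np.array([2.0, 2.0]), floor)
```

Note that nothing here builds or infers object geometry; it only classifies raw depth points against a floor plane and a boundary volume.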
The Oculus Quest 2 (and the Quest 1) infers the geometry of your environment in the same way a Magic Leap does. The Quest uses the mesh to show perspective-correct stereoscopic pass-through views. https://www.youtube.com/watch?v=3V__SEPobM4
If you look at the video, you can see there are artifacts around the hair. It is likely applying some AI-based matting to make them less obvious, but they're still there.