So far Meta has published a whole bunch of spatial APIs, including scene detection, spatial anchors, plane detection, etc. [0] They are far ahead of Apple in many of these respects, but what they haven't had is a device that made these APIs attractive to develop for in any meaningful way: the Quest Pro lacks a depth sensor and has too small a market presence to attract many developers, while the Quest 2 has ugly, low-resolution, black-and-white passthrough.
So the Quest 3 will be the first spatial computing device available at a mass-market price (<$500) with a depth sensor and high-quality passthrough. It will finally be worthwhile for developers to build mixed reality apps targeted at regular consumers.
Based on what you mentioned, all of those are also available in ARKit or the Vision API: plane detection (vertical and horizontal), anchors (both local and geolocated), 3D scene reconstruction, 3D object reconstruction, custom planar markers, QR codes & barcodes, 3D human pose skeleton, 3D hand skeleton, face landmark mesh, world tracking (SLAM), and text detection. I haven't checked Meta's APIs, but it doesn't look to me like they are far ahead of Apple.
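For reference, here's a minimal Swift sketch of a few of those on the ARKit side (the SpatialDemo wrapper class is just an illustrative name; the configuration and delegate calls are ARKit's own): plane detection, local anchors arriving through the session delegate, and LiDAR scene reconstruction.

    import ARKit

    // Minimal sketch: plane detection, local anchors, and LiDAR
    // scene reconstruction via ARKit's world-tracking session.
    final class SpatialDemo: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            let config = ARWorldTrackingConfiguration()
            // Plane detection, vertical and horizontal
            config.planeDetection = [.horizontal, .vertical]
            // 3D scene reconstruction needs a LiDAR-equipped device
            if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
                config.sceneReconstruction = .mesh
            }
            session.delegate = self
            session.run(config)
        }

        // Detected planes are delivered as anchors through the delegate
        func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
            for case let plane as ARPlaneAnchor in anchors {
                print("Detected \(plane.alignment) plane at \(plane.center)")
            }
        }
    }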
https://developer.oculus.com/documentation/unity/unity-spati...