The user is pointing out that the real fridge behind them is reflected in the surface of the virtual object in front of them. And on top of that, the fridge isn't even visible to the headset at that moment; it's captured in the 3D spatial model that was created of the room. None of this is a pre-rendered or rigged or specifically engineered scenario. It's just what the operating system does by default. So one app that is totally unknown to another app can introduce reflections into the objects it displays. This is far beyond anything that can happen on the Quest platform, and it can only happen because the 3D spatial modeling is integrated deeply into the native rendering stack - not just layered on the surface of each app.
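To make that concrete, here's roughly what the app side looks like - a minimal sketch, assuming RealityKit's defaults on visionOS, with `MirrorBallView` just being a name I made up for illustration. The point is that the app never loads or supplies any environment map; a metallic sphere still picks up reflections of the real room because the lighting comes from the OS-level reconstruction.

```swift
import SwiftUI
import RealityKit

// Minimal sketch (my reading of the default behavior, not Apple's sample code):
// a mirror-finish sphere added to a RealityView. The app provides no environment
// map at all; on visionOS the shared renderer applies its own lighting estimate,
// so reflections of the real room show up "for free".
struct MirrorBallView: View {
    var body: some View {
        RealityView { content in
            var material = PhysicallyBasedMaterial()
            material.metallic = 1.0   // fully metallic
            material.roughness = 0.0  // mirror-like surface

            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.15),
                materials: [material]
            )
            content.add(sphere)
        }
    }
}
```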
I'm with you on how incredible it is from a technical perspective.
The most fascinating thing to me is how we've gone from Apple being the pragmatic, real-world, product-focused company, moving slower but making sure what they ship has undeniable practical value, and not promising anything beyond that ("we don't talk about future products and roadmaps").
To the "wow, look at that technical prowess! Not much that's useful right now, but such potential!" that we're getting with this device. I don't see it falling completely flat, but it feels like it's on the same course as the Apple Watch or the HomePod: becoming the biggest player in the niche category it defines (whatever that ends up being), and a smaller presence in the general space ("smart eyewear?"), with better-fitted and more practical devices taking 70% of the market.
The clunkier XReal will probably keep chugging along, being to the AVP what the Xiaomi or Huawei smart bands are to the Apple Watch. And Meta will probably be the Samsung of the space, shooting at the target from 5 different angles.
I keep seeing people trying to shove this into the narrative of "Apple coming late but doing it better and solving real problems". But it doesn't fit that narrative well. Apple here is early to something else that just happens to look like the thing people are viewing as the predecessor, and its utility is highly questionable and full of all kinds of weird gaps. Tellingly, they are in part leaning on the aspects they are not trying to sell ("Here, look at these immersive experiences! But shhh, don't call it VR") to paper over the fact that the core of what they are really trying to build just isn't ready yet.
It's just a skybox with a cubemap of the room, no? The Quest 3 does a full room scan, and you can get a 3D mesh of the room as well. They could provide a cubemap too.
Meta doesn't want to pass any of the camera feed to an app for privacy reasons, so they don't make it available, but it's a legal issue, not a technical one. They can (and should) do this for the browser model viewer.
Apple enforces a single material model so they can inject the lighting data in a uniform way. It's a bit of a nuclear option to have a fixed shader pipeline. But I digress...
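For contrast with the automatic path the parent comment describes, here's roughly what it looks like when an app wires up its own environment lighting, which is the route a platform-provided cubemap would take. This is a hedged sketch: `RoomCubemap` is a hypothetical bundled asset name and `applyRoomLighting` is just an illustrative helper, not a real API.

```swift
import RealityKit

// Sketch of the "app-supplied cubemap" route: the app explicitly loads an
// environment texture and attaches it as image-based lighting, instead of
// relying on whatever the OS injects by default.
func applyRoomLighting(to model: ModelEntity) async throws {
    // Load the (hypothetical) cubemap / environment asset from the app bundle.
    let environment = try await EnvironmentResource(named: "RoomCubemap")

    // An entity that acts as the image-based light source...
    let lightSource = Entity()
    lightSource.components.set(
        ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0)
    )

    // ...and the model opts in to receiving that light, which drives both
    // diffuse lighting and specular reflections on its PBR materials.
    model.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightSource))
    model.addChild(lightSource)
}
```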
>So one app that is totally unknown to another app can introduce reflections into the objects it displays
Reflections of the real environment on virtual objects don't imply a global scene graph, or the ability of apps to reflect one another. It just means that each app gets fed an environment map that mostly had to be generated anyway for SLAM. Neat, but not some kind of radical form of IPC.
Take a look at this Reddit post for example:
https://www.reddit.com/r/VisionPro/comments/1ba5hbd/the_most...