Instead of having multiple virtual screens, I would much prefer to simply spread individual Mac windows around my virtual space. In mixed reality the whole concept of a virtual monitor makes no sense; it's an unnecessarily limiting abstraction you could easily do without. The whole room is your "desktop".
It's basically what the SimulaVR guys are aiming at, and I'm surprised Apple didn't go this way with their Mac integration. Especially because the native visionOS apps do seem to behave just that way.
I imagine it's mostly a CPU/battery/bandwidth concern at this point: having one wireless 4.5K/60 (or is it 90?) stream from your Mac is difficult enough; having a potentially unbounded number of them (one per open Mac window) is a different problem altogether.
This is it exactly. I’m surprised the HN crowd isn’t immediately picking up the bandwidth issues with streaming unlimited 4k screens from your MacBook.
That sort of seamless render switching doesn't sound technologically doable once you factor in that the rendering is happening on a different machine. I could be wrong.
Note that this would be a non-issue if you could just connect a cable between your Mac and the Vision. You already have the huge battery pack dangling off the side. This is an Apple issue, not a technical one.
Do the math on how much bandwidth something like three windows/screens (with this model each window is basically its own screen) at 4K/100Hz/10-bit color each would take.
You're at limits of TB4 _very quickly_.
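To make the back-of-the-envelope math concrete, here is the raw, uncompressed bit rate for the numbers given above (no compression, no blanking overhead, so real links carry somewhat different figures):

```python
# Raw bandwidth for streaming windows as if each were its own display:
# 4K (3840x2160), 100 Hz, 10 bits per channel, 3 channels (RGB).

def uncompressed_gbps(width: int, height: int, hz: int,
                      bits_per_channel: int, channels: int = 3) -> float:
    """Raw bit rate in Gbit/s, uncompressed, no protocol overhead."""
    return width * height * hz * bits_per_channel * channels / 1e9

per_window = uncompressed_gbps(3840, 2160, 100, 10)  # ~24.9 Gbit/s
total = 3 * per_window                               # ~74.6 Gbit/s

# Thunderbolt 4 tops out at 40 Gbit/s, so even two such raw streams
# exceed the link, and three is nearly double it.
print(f"{per_window:.1f} Gbit/s per window, {total:.1f} Gbit/s total")
```

In practice display links use compression (e.g. DSC) precisely because the raw numbers blow past the cable.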
You can compress the image, try to do something smart with foveated rendering (only stream the windows the user is looking at; but that breaks if you want to keep a window with logs in your peripheral vision), use chroma subsampling, and so on; but all of those trade off against image quality to varying degrees.
You don't need to send every pixel for every frame uncompressed.
It would be more like VNC, not sending pixels that aren't changing. And you don't really need 100Hz either: you can't read that fast anyway, so the content could refresh at a lower rate as long as the window itself is moved at 100Hz to avoid nausea.
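The "don't send pixels that aren't changing" idea is just dirty-region tracking; a minimal sketch of tile-based delta detection (the tile size and frame setup here are made up for illustration):

```python
import numpy as np

# VNC-style delta updates: compare consecutive frames and only ship
# the tiles whose pixels actually changed, not the whole framebuffer.

def changed_tiles(prev: np.ndarray, curr: np.ndarray, tile: int = 64):
    """Yield (x, y, pixels) for each tile that differs between frames."""
    h, w = curr.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = curr[y:y + tile, x:x + tile]
            if not np.array_equal(prev[y:y + tile, x:x + tile], block):
                yield x, y, block

# A static window costs nothing; a blinking cursor costs one tile.
prev = np.zeros((128, 128), dtype=np.uint8)
curr = prev.copy()
curr[70, 70] = 255  # one changed pixel, landing in the (64, 64) tile
updates = list(changed_tiles(prev, curr))
```

A mostly static window of logs or code would generate a tiny fraction of the raw frame's bandwidth under this scheme.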
I really doubt the Mac display functionality as-is is refreshed at 100Hz with full pixel fidelity without compression. WiFi can't handle those kinds of speeds reliably.
Foveated rendering doesn't stop streaming when the user isn't looking at the window; it just streams it in lower quality.
If you just trivially streamed all windows to the Vision, of course it would at some point hit the limit of current technology. But I would assume a company like Apple has the means to push the state of transmission media (rather than just using a now-four-year-old standard) and to think of something smarter than "just stream all windows in maximum quality even if the user doesn't look at them".
I may be wrong, but it honestly doesn't seem realistic to expect seamless, latency-free foveated-rendering switching at those speeds while coordinating between two devices.
Since foveated rendering would only send the resolution the user can actually perceive, even logs in peripheral vision would be fine, since they'd be sent at much lower resolution. I think the challenge with smart foveated streaming would likely be latency.
Another option would be handling rendering on the Vision Pro rather than the MacBook so pixels don't need to be streamed at all.
Exactly. Current systems that work with large channel counts of high res deep colour real-time compositing are also around 6U 19" rack mount units and pull half a kilowatt of power. Not exactly ergonomic for strapping to one's face.
It'd break if you did the dumbest thing and just didn't stream the windows the user isn't looking at. Lowering the stream resolution on the fly could work, but that involves more complexity (both sides have to communicate when to adjust the resolution), and because it's not handled entirely on-device, it breaks the illusion of being invisible.
I can also imagine it having some weird privacy implications; like Mac apps somehow monitoring this and tracking whether you're actually looking at them, etc.
If you're OK with rendering everything at high resolution (and then choosing what quality to send over) then you shouldn't have any privacy issues, assuming that this part is done by the OS.
It’s the product vision. But progress is reportedly slow because of physical limits and the current state of the technology. Realizing their product vision could take another 10-30 years.
No, it’s a processor and graphics issue as well. You have to actually render a potentially unlimited number of 4K screens (at least the ones in your direct view) on the Mac. It was never built to do that.
AVP already has foveated rendering, so if macOS had some awareness of what is being rendered, it could potentially handle unlimited windows, since it would only need to render and stream enough for what is actually being looked at.
The primary problem here, I'd guess, would be latency, so maybe it's not feasible. The other possibility is doing the actual rendering on-device instead of streaming pixels, which would dramatically decrease the bandwidth required.
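A gaze-dependent quality policy like the one described could be sketched as follows; the angular thresholds and scale factors are invented for illustration, not anything AVP actually uses:

```python
# Windows far from the gaze point keep streaming, just at reduced scale,
# so a peripheral log window still updates instead of going blank.

def stream_scale(angle_from_gaze_deg: float,
                 fovea_deg: float = 5.0, mid_deg: float = 20.0) -> float:
    """Resolution scale factor for a window at a given angular
    distance from where the user is looking (hypothetical tiers)."""
    if angle_from_gaze_deg <= fovea_deg:
        return 1.0   # full resolution in the fovea
    if angle_from_gaze_deg <= mid_deg:
        return 0.5   # half resolution in the near periphery
    return 0.25      # quarter resolution further out; never zero
```

The latency concern is that every saccade forces a round-trip to the Mac before the newly fixated window sharpens up.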
Because I don't care about the technical limitations. I care about if it's a useful gadget for me or not. It doesn't matter why a feature isn't there, that just becomes a "you're holding it wrong" thing.
I don't think Apple wants you to use Mac apps. They want developers to make Vision Pro apps. That's why the Mac integration is the minimum they could get away with.
There are enough devs that love Apple and have a burning loyalty toward them, that this is not something I would worry about. Even if it started to become a problem, Apple has enough cash to make a lot of apps, and could subsidize this for an eternity if necessary to make it work. Apps will not be a problem.
Do you think Netflix and Spotify are really needed (genuinely wondering)? Netflix maybe because of exclusives, but I would think most people with Vision Pro are also going to have Apple Music. With their TV and Music offerings, it might actually be better not to have Netflix/Spotify etc.
How do you know? Where do you get the numbers? Vision Pro isn't even out yet so I'm curious how you could know this. Do you have inside-Apple insight? Such as how many of the people who ordered Vision Pro are on Spotify vs Apple Music?
Hope they're not really betting on that. I think it's clear to everyone that OS fatigue hit when they had three OSes, and lots of companies stopped bothering after that didn't pan out as worth it. Now this would make six OSes...
But what is your actual physical work environment like then? Are you standing in the middle of the room and typing into air? I always imagined productivity minded people would still at least use a physical keyboard which limits mobility.
But that isn't 'The whole room is your "desktop".' If you are restricted to having your desk in front of you, you might as well be looking at virtual screen(s) on your desk.
Of course you could hang the displays freely in space, as you can now with the existing Vision Pro apps. You'd hang the productivity ones close to your desk, but some of them you could hang in other places and use with a virtual keyboard.
I’d be sitting on my couch with my wireless Glove80 split keyboard at my sides. Or lying down with the keyboard flanking me. No desk. And I’d move the keyboard to my counter to stand when I feel like it.
I was thinking something similar: a split ergonomic keyboard wherever you are. You could possibly do something to stick the halves to your legs wherever it's comfortable for standing.
I had not seen the Glove80 before, it does look pretty nice. Though I would prefer a bit smaller. I don't really use F keys, and use layers for things like arrow keys.
Glove80 is smaller than the only alternative that suits me, the Kinesis Advantage. Agreed, even smaller would be nice. There are many options if you don't require the key-well shape, or if DIY kits of varying production quality are up your alley.
Glove80 has hot-swappable tripod mounts, so I attach it to my chair, or I use clamps to attach it to a table, or to short tripods standing on the floor, or to very tiny tripods that can give the keyboards an extreme near-vertical angle on a desktop. Very easy to get an ergo setup going when traveling this way.
this is how microsoft's remote desktop protocol (RDP) works, but apple never invested in an implementation of something similar, relying on vnc to just send images of the entire screen.
the way RDP works is that the drawing commands to actually render a window or desktop get sent over the network and are ONLY rasterized on the remote client.
this is not only more efficient but also allows for fun things like rendering remote apps directly on your local desktop mixed in with your normal local windows
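As a toy contrast between pixel streaming and command streaming (the primitive format here is invented for illustration; real RDP uses its own binary protocol of drawing orders, not JSON):

```python
import json

# Ship a short list of drawing primitives and let the client
# rasterize them locally, instead of shipping the framebuffer.
commands = [
    {"op": "fill_rect", "x": 0, "y": 0, "w": 800, "h": 600,
     "color": "#ffffff"},
    {"op": "draw_text", "x": 20, "y": 40, "text": "hello",
     "font": "Menlo 12"},
]
payload = json.dumps(commands).encode()

# The same 800x600 frame as raw 32-bit RGBA pixels:
framebuffer_bytes = 800 * 600 * 4  # ~1.9 MB vs a few hundred bytes
```

The payload is orders of magnitude smaller, and the client can re-rasterize at whatever resolution its display needs.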
I know the implementation leaves a lot to be desired, but just in case you weren't aware: the Oculus Rift shipped with this feature, and it still exists via Link on Quest AFAIK. You pick the last icon on the UI bar that appears, and it lets you select any window to pop out separately.
(note: In my view there was passthrough and I could see my room but the screenshot function didn't include that).
It seems limited to desktop + 4 popout windows but I know it used to support more because when I first got the Rift I tried to see if I could get 6 videos running + desktop all at once. It worked.
There are issues, though:
(1) It's just showing the desktop's image, which means it can't adjust the resolution of the window. Ideally you'd pick a devicePixelRatio and a resolution and the window would re-render, but no desktop OS is designed to do that.
(2) It's Microsoft Windows, not an OS that Facebook controls, which means they're at Microsoft's mercy and limited to whatever features Windows supports. There's no easy way to add better VR controls. For example, IIRC, you can't resize a window directly in VR. Instead you have to go to the window showing the full desktop, resize it there, and that change is then reflected in the popout window. If that's not clear: there are two sizes, the size in VR and the resolution of the window. If the window is a tiny 320x240-pixel window on your desktop and you make it 5x3 meters in your virtual space, all Oculus can do is draw those 320x240 pixels. You can go back to your desktop, resize the window to 1237x841, and now Oculus can draw those 1237x841 pixels onto your 5x3-meter VR pane. But since Oculus doesn't control Windows (or maybe they're lazy), there is no way to change from 320x240 to 1237x841 by directly manipulating the pane in VR; you have to do it in the desktop pane in VR.
(3) IIRC it's got some weird issues with the active window. Since it's really just grabbing the textures from the Windows compositor and showing them in different spaces, that's separate from Windows' own concept of which window is in the foreground. So as you select windows, you can see the front window keep changing on the floating "desktop" window.
These are all really just a function of (2), that Facebook doesn't own the desktop OS.
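The re-render described in (1) amounts to something like the following; this is a hypothetical sketch of the idea, since no desktop OS exposes this to a remote VR client today:

```python
# The window keeps its logical size in points; the backing store the
# app actually renders scales with the requested devicePixelRatio.

def backing_store(logical_w: int, logical_h: int,
                  device_pixel_ratio: float) -> tuple[int, int]:
    """Pixels the app would render for a given logical size."""
    return (int(logical_w * device_pixel_ratio),
            int(logical_h * device_pixel_ratio))

# A 1280x800-point window viewed up close could request a 2x store:
near = backing_store(1280, 800, 2.0)  # (2560, 1600)
# Pushed into the periphery, the same window could drop to 0.5x:
far = backing_store(1280, 800, 0.5)   # (640, 400)
```

That's exactly the knob the Oculus popout windows lack: the VR pane can change size, but the pixel count behind it cannot.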
The same problem would exist with the Mac. Apple, since they control the Mac, could maybe make it work, but it would be a huge change to macOS. They'd effectively need each app to run on its own screen, with each screen at a different resolution and device pixel ratio. Plus they'd need to map all the other macOS features into AR to be usable.
Huh cool, thank you! I had no idea this existed. And I actually have had a Rift since the start (I was an original Oculus Kickstarter backer so I got one for free). But to be honest I never really used it for productivity as the Rift's resolution was too low anyway. Even with the Quest 2 I didn't think it was high enough to use comfortably. But now with the Quest 3 it might be worth it.
But I'll give it a try, thanks!
And yeah Apple would need to do some changes to macOS to make this work well, but as you say they control it. If they really view this as the future of computing they should really make it work.
The idea of SimulaVR is that it replaces your window manager so they avoid some of these issues by taking direct control. But that's something that only works on Linux.
For 2, does Microsoft not have accessibility APIs to control window size and layout? On Apple platforms at least if you were doing this "right" you would probably create a new software virtual display and move all windows bridged into VR to that, and move and resize them as appropriate. I believe the actual implementation turns off the Mac screen (through a "curtain" I think, so idk if it's actually moving the windows around below that or what) but like there are many things you can try doing here to make it better.
Yeah that's why I never used the whole desktop either, only if I need to change a setting during VR gaming like the audio output device that doesn't always auto-switch to the Oculus Audio output. I never noticed you could show apps separately like this, cool!