I think that kind of undersells it. Yes, Apple had all sorts of existing technology they could leverage, but they still built a completely new spatial UI paradigm for it.
And an entirely new interaction model that hasn’t been seen before. Using gaze to replace a mouse isn’t new, but combining that with a pinch gesture to “click” and some of the other things they’ve come up with is a unique combination that seems to work quite well. Though there is certainly room for improvement.
I own one of each and develop for the Vision Pro through my job, and it's the same story it's always been: Apple hasn't 'invented' much here, but the magic is in how it's assembled. Even in its current state, using apps in 3D space feels better than anything the Quest has ever done. Even simple things like 'touching' a panel feel more natural on the Vision Pro than the same experience on the Quest, mostly because the Quest forces the ghost hand to stop at the surface of the window, instead of continuing to track your hand through it and just using the intersection as the touch point. It's a small difference in the interaction that makes a world of difference in usability, which Apple is very good at.
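To make the difference concrete, here's a minimal sketch of the intersection-as-touch-point idea, not anything from either platform's actual SDK. It assumes a flat panel on the plane z == panel_z, touched from the +z side, with fingertip positions sampled each frame; all names are hypothetical:

```python
def touch_point(prev, curr, panel_z=0.0):
    """Return where the fingertip crossed the panel plane, or None.

    prev and curr are (x, y, z) fingertip positions on consecutive
    frames. Rather than clamping the rendered hand at the surface,
    we keep tracking the real hand through the panel and use the
    crossing point as the touch location.
    """
    pz, cz = prev[2] - panel_z, curr[2] - panel_z
    if pz > 0 >= cz:  # fingertip moved from in front of the panel to behind it
        t = pz / (pz - cz)  # interpolation factor where the plane was crossed
        return tuple(p + t * (c - p) for p, c in zip(prev, curr))
    return None  # no crossing this frame

# Fingertip moves from 10cm in front of the panel to 10cm behind it:
# the touch registers at the crossing point, (0.0, 0.0, 0.0).
```

The clamped-hand approach instead snaps the ghost hand to the surface the moment it reaches the panel, which breaks the one-to-one mapping between your real hand and the rendered one.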