They’re both nearly identical in the ways that matter except for their outward appearance and minor things like audio, accessory ports, and the Vive’s camera.
They both have the exact same resolution OLED displays with a butterfly mechanism to adjust the IPD and Fresnel lenses. They both use IMUs as the primary tracking system, with external components used for drift correction.
They both use optical tracking (except the Vive's is inside-out and the Rift's is outside-in).
They both have incredibly similar software, with a very minor tweak needed to fully translate all the Rift calls to work completely on Vive.
The original Rift and Vive are very, very similar headsets. The biggest difference between them is their controllers.
This sort of analogizing might fool a layman, but it will not fool someone who has looked at teardowns and done actual software development in the space. You need to be able to tell whether a similarity is more like two cars having the same engine controller firmware, or more like two cars both having carburetors, or more like two cars both having bought tires from the same third party supplier.
You picked the wrong two headsets to compare. The Steam Sight HMD is not the Vive. You read a comment about the Steam Sight HMD and made a comment about the Vive.
What secret knowledge of the Steam Sight headset do you have?
It's not really fair to call the Lighthouse system an "inside-out" tracking system. "Inside-out" generally refers to the tracking data collection and processing happening on data collected exclusively by the headset, without any specialized, external reference. That's not what Lighthouse does.
Lighthouse is nearly identical to outside-in camera tracking, with the one wrinkle being that the photons flow in the opposite direction. Instead of a fixed, designed constellation of point-like emitters on the headset, you have a constellation of point-like detectors. Instead of a stationary grid detector, you have a stationary grid emitter. But otherwise, the data is practically the same, the math is all the same, the calibrations are all the same, and the whole system doesn't work without those stationary, external reference points.
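To make that concrete: each Lighthouse sweep hit is basically a timestamp, and the time since the sync pulse maps linearly to an angle, which is the same kind of bearing measurement a camera pixel gives you. A rough sketch of that conversion (the rotor rate, the function names, and the "straight ahead is zero" convention are all made up for illustration, not the real protocol):

```python
import math

# Toy numbers: assume a 60 Hz rotor, i.e. one full laser sweep every 1/60 s.
# Real base stations differ in the details; this only shows the shape of the math.
SWEEP_PERIOD_S = 1.0 / 60.0

def sweep_angle(hit_time_s, sync_time_s):
    """Angle the laser plane had swept through when it crossed a photodiode.

    The time between the sync flash and the laser hitting the sensor is a
    fixed fraction of one rotor revolution, so it maps linearly to an angle.
    """
    return 2.0 * math.pi * (hit_time_s - sync_time_s) / SWEEP_PERIOD_S

def bearing_from_sweeps(h_angle, v_angle):
    """Unit ray from the base station toward the sensor.

    With the toy convention that angle 0 means "straight ahead", the two sweep
    angles act like a camera's tan-space pixel coordinates: one horizontal and
    one vertical sweep pin down a direction, the same way an LED's (x, y) pixel
    does for an outside-in camera. From there, pose estimation is the same
    problem either way: fit the known 3D constellation of sensors (or LEDs)
    to the observed bearings.
    """
    x = math.tan(h_angle)
    y = math.tan(v_angle)
    n = math.sqrt(x * x + y * y + 1.0)
    return (x / n, y / n, 1.0 / n)

# Example: a sensor hit 2 ms after the horizontal sync and 1 ms after the vertical sync.
h = sweep_angle(0.002, 0.0)
v = sweep_angle(0.001, 0.0)
print(bearing_from_sweeps(h, v))
```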
Similarly, ray tracing doesn't simulate photons leaving light sources, bouncing off surfaces, and arriving at a camera sensor. The simulation is of anti-photons, leaving the camera, bouncing off surfaces, and seeing what lights they hit. It's like conventional current actually being the opposite direction of electron flow. The systems can run forwards or backwards and get the same answer.
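A toy single-bounce example makes the direction reversal obvious: trace from the camera out to the surface, then ask whether that surface can see the light. The scene, names, and numbers below are invented for illustration; a forward photon simulation would evaluate the same visibility and cosine term:

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def normalize(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 5.0), 1.0
LIGHT_POS = (5.0, 5.0, 0.0)

def hit_sphere(origin, direction):
    """Smallest positive t where origin + t*direction hits the sphere, or None."""
    oc = sub(origin, SPHERE_CENTER)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def shade(camera_pos, pixel_dir):
    """Trace *backwards*: camera ray out to the surface, then check the light."""
    t = hit_sphere(camera_pos, pixel_dir)
    if t is None:
        return 0.0                                    # ray escaped to the background
    hit = tuple(camera_pos[i] + t * pixel_dir[i] for i in range(3))
    normal = normalize(sub(hit, SPHERE_CENTER))
    to_light = normalize(sub(LIGHT_POS, hit))
    # The forward simulation (photon leaves the light, bounces here, enters the
    # camera) would evaluate exactly the same visibility and cosine term.
    return max(dot(normal, to_light), 0.0)

print(shade((0.0, 0.0, 0.0), normalize((0.0, 0.1, 1.0))))
```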
Actually, inside-out tracking does a completely different thing. The acronym "SLAM" stands for "simultaneous localization and mapping". It's building up a coherent, consistent model of the world around it. It adapts to new surroundings.
Bump a Lighthouse emitter or CV1 camera out of position and everything stops working, because the data no longer makes sense. Designing a Lighthouse headset or controller requires giving the tracking code a 3D model of the positions of all the detectors.
But move the furniture around in a room and SLAM catches up in a few seconds. SLAM also doesn't care about the shape of the thing you're tracking. Hell, it really doesn't care all that much about the quality of the camera feed, other than being relatively high framerate and not very noisy.
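A toy, translation-only sketch of that adapt-as-you-go behavior (this is nowhere near real SLAM, and the landmark IDs and numbers are made up, but it shows why moved furniture just becomes new map entries):

```python
import numpy as np

class ToySlam:
    """Translation-only 2D toy: not real SLAM, just the adapt-as-you-go idea."""

    def __init__(self):
        self.pose = np.zeros(2)   # tracker position in the world frame
        self.map = {}             # landmark id -> estimated world position

    def update(self, observations):
        """observations: {landmark_id: position relative to the tracker}."""
        known = [lid for lid in observations if lid in self.map]
        if known:
            # Localize against landmarks we've already mapped.
            estimates = [self.map[lid] - observations[lid] for lid in known]
            self.pose = np.mean(estimates, axis=0)
        # Map anything new using the pose we just solved for. If the furniture
        # moved, its old landmarks stop matching and new ones take their place.
        for lid, rel in observations.items():
            if lid not in self.map:
                self.map[lid] = self.pose + rel
        return self.pose

slam = ToySlam()
slam.update({"couch": np.array([2.0, 0.0]), "lamp": np.array([0.0, 3.0])})
print(slam.update({"couch": np.array([1.0, 0.0])}))  # we moved 1 m toward the couch
```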
It's simple enough to say that in the Lighthouse system, the sensors are on the headset/controllers, and the beacons are cast from the Lighthouse boxes.
In the Oculus system, the sensors were the external cameras, and the beacons were the LEDs on the devices.
My experience with both was that the Oculus system did really well for seated play, but for room-scale games the Lighthouse system does better, especially when the controllers go behind you, like in Valve's archery game.
I haven't bought an Oculus system since the DK2, so I'm not sure how sophisticated it is now.
Windows MR (both VR headsets and the HoloLens), Magic Leap, Vive Focus, Pico Neo, and the upcoming Linx all use inside-out cameras, all with their own implementations. HTC Vive, Vive Pro, Vive Cosmos, Valve Index, Varjo XR-3 and VR-3, and PiMax headsets are the only ones using outside-in tracking anymore, and they're all using specifically Lighthouse.
First of all, you just don't really do that very often. People have rotator cuffs and elbows that make any action in those regions fairly uncomfortable.
Second, all current VR systems primarily use inertial tracking. The visual tracking is only there to correct for drift out of the reference frame. Whether it's Lighthouse or Rift CV1 outside-in cameras or inside-out cameras on every other system, you can put your hands in the sensor blind spot for several seconds before it becomes a problem.
99% of the time, you're working with your hands in front of you. Lighthouse doesn't care about your hands in relation to your body. But it does care about your body in relation to the base stations. Lighthouse's blind spots are constantly changing over the course of your play session. Quest's are always in the same spot.
So many times I've found myself in a corner across the room from my base stations, with my own body blocking my controllers' view of them. When that happens, you have to have enough awareness of what's going on to understand why your hands start slowly floating away while you're trying to work on something, because you've lost track of your orientation in the real-world room. It's literally immersion breaking.
"Inside-out cameras can't track behind your head" is really not the problem that your random Valve fanboy on Reddit makes it out to be.
Controllers behind the head are tracked by some algorithm magic that fuses the last seen position by the cameras with accelerometer and gyro data for the blind spots. Seems to work like a charm. Probably not as good as full lighthouse system, but good enough.
Every extant tracking system uses IMUs as the primary tracking system. The Lighthouse base stations and the Oculus camera tracking are used to correct for sensor drift.
You need it to be this way, because the IMUs can run at fairly high frequencies (200–1000 Hz), which (in part) keeps latency low. The data paths and processing needed for the reference-frame corrections are so complex that they can't be run anywhere near as fast. It's why the hand tracking on the Quest is so high-latency: there's no IMU on your hand.
And it's not "algorithm magic". It's mostly just Kalman Filtering.
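A minimal sketch of that fusion pattern, using a 1-D constant-velocity Kalman filter: dead-reckon from the IMU at high rate, and let a much slower optical position fix pull the drifting estimate back. All the rates and noise numbers here are made up for illustration:

```python
import numpy as np

# 1-D toy: position + velocity state, accelerometer as control input,
# occasional optical position fix as the measurement.
dt_imu = 1.0 / 1000.0          # IMU sample period (~1 kHz)
optical_every_n = 16           # optical fix arrives roughly every 16 ms

x = np.zeros(2)                # state: [position, velocity]
P = np.eye(2)                  # state covariance
F = np.array([[1.0, dt_imu],
              [0.0, 1.0]])     # constant-velocity motion model
B = np.array([0.5 * dt_imu**2, dt_imu])   # how acceleration enters the state
Q = np.diag([1e-8, 1e-6])      # process noise: IMU integration drifts
H = np.array([[1.0, 0.0]])     # the optical system measures position only
R = np.array([[1e-4]])         # optical measurement noise

def imu_step(accel):
    """High-rate predict: dead-reckon from the accelerometer."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def optical_step(measured_pos):
    """Low-rate update: pull the drifting dead-reckoned state back toward the fix."""
    global x, P
    y = measured_pos - H @ x                 # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

for i in range(1000):
    imu_step(accel=0.0)                      # pretend the headset is stationary
    if i % optical_every_n == 0:
        optical_step(np.array([0.0]))
print(x)
```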
According to Yates, the optical tracking and Fresnel lenses were the only things Oculus changed from the Steam Sight to the CV1. The rest of the actual architecture was the same.
The question will be what changed from the CV1 to the Rift, but I don't think Valve is going to sue Oculus either way. This is just to expand the understanding of how slimy a company Oculus was even before it was acquired by Facebook.
Unfortunately, Facebook heavily subsidizes the cost of the Quest 2, while Index and Reverb G2 have to be sold at a profit.
Facebook is also deep in bed with Qualcomm, making the Quest 2 work fully standalone and getting the XR2 SoC at a steep discount. Index and Reverb G2 require desktop PCs and high-end GPUs, and other companies using the same Qualcomm SoC have to pay full price.
My senior year of high school I made a similar display. I purchased a lenticular screen, coated it, and modified a projector to make a much smaller image than intended. Alignment was tough, but I ended up with about 4" of useful horizontal space with a fairly narrow viewing angle.
There really isn't any magic, it's just a lot of pixels to push. I was able to render a Utah teapot to 2 views at 400x600 on a 100 MHz 486 (it was an 800x600 projector).
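For a two-view lenticular setup like that, the rendering side is mostly just producing two half-width views and interleaving their columns so each lenticule steers one column toward each eye. A rough sketch (the resolutions match the 800x600 projector example; everything else is illustrative):

```python
import numpy as np

def interleave_two_views(left, right):
    """Column-interleave two 400x600 views into one 800x600 frame.

    Each lenticule sits over a pair of panel columns and steers one toward
    each eye, so the renderer's job is mostly just producing (and pushing)
    twice the pixels.
    """
    assert left.shape == right.shape
    h, w = left.shape[:2]
    frame = np.empty((h, w * 2) + left.shape[2:], dtype=left.dtype)
    frame[:, 0::2] = left    # even panel columns -> left-eye view
    frame[:, 1::2] = right   # odd panel columns  -> right-eye view
    return frame

left = np.zeros((600, 400, 3), dtype=np.uint8)
right = np.full((600, 400, 3), 255, dtype=np.uint8)
print(interleave_two_views(left, right).shape)   # (600, 800, 3)
```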
Is this the Movers and Shakas program (reimbursement for moving to Hawaii)? I would love to e-meet you all! I actually moved here before the program existed :) Is there maybe a chat I could join?
It is, but for those who weren’t accepted there is a small “Shakas Network” of about 7,000 people.
Of those, there are 300 active on Oahu and ~40 that I mentioned who actually like to hang out, work on stuff, get drinks and party. We’re actually going to SKY and The Hideout for drinks tonight.
Message me on Twitter or LinkedIn if you want a Slack invite.
It's aimed at watching movies, running Android likely to support streaming apps. What I want is a computer monitor on my eyes, so this either underperforms for that need or is overkill.
The OP's project may in fact solve this; I need to look deeper into the tech.
Godot and Unity have support for specific VR APIs.
This headset is designed for SteamVR.
You can build for this headset by targeting SteamVR in Godot and Unity, but you'd have to do a TON of work yourself if you wanted to build your own VR runtime to bypass SteamVR for use on this headset.
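If you just want to read poses out of SteamVR from a script rather than a full engine, something like the pyopenvr bindings can do it. A minimal sketch, assuming pyopenvr (`pip install openvr`); the exact call signatures have shifted between versions, so treat this as a starting point rather than gospel:

```python
import time
import openvr

# Connect to a running SteamVR as a background app (no rendering, just poses).
openvr.init(openvr.VRApplication_Other)
vr_system = openvr.VRSystem()

try:
    for _ in range(100):
        poses = vr_system.getDeviceToAbsoluteTrackingPose(
            openvr.TrackingUniverseStanding, 0, openvr.k_unMaxTrackedDeviceCount)
        hmd = poses[openvr.k_unTrackedDeviceIndex_Hmd]
        if hmd.bPoseIsValid:
            print(hmd.mDeviceToAbsoluteTracking)  # 3x4 headset pose matrix
        time.sleep(0.1)
finally:
    openvr.shutdown()
```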
If you want a volumetric display, maybe it will work, but the display itself will need to be the same size as the total volume of depth you want to create, i.e. digital objects cannot appear outside the screen.
Because eInk is a non-backlit technology, I can't imagine creating a glasses-free stereoscopic effect using eInk.
> They are both cool for short term use, but I've yet to see a shutter glass/etc implementation that is synced well enough to the screen and blocks 100% of the light to avoid ghosting.
I've never seen this on a monitor/display either. But I have an Epson projector from 2013/2014 that uses shutter glasses and does block 100% of the opposite eye's image. Because it's not a screen, it doesn't have to blank the image between frames; it just completely stops sending light from that frame.