One of the more interesting things about the PSVR2 is that it demonstrates the PS5 can output two independent video streams (one over HDMI and one over the front USB-C port), something no other current console does. The PSVR1 instead relied on a passthrough box.
Imagine the possibilities if Microsoft had actually added multiple HDMI ports on the Xbox Series X:
- screen mirroring for capture cards without passthrough
- native single console multiscreen cockpit view modes for Forza
- Split screen Halo/Goldeneye without actually splitting the screen - LAN party with one console
- minimaps, HUDs on separate screens like the Wii U
- Discord / chat on a separate screen while streaming
- support for the ecosystem of actually sometimes quite high quality Windows Mixed Reality VR headsets
I wonder if they could technically add more video out ports via the PCI-E based expansion card slot...
The Raspberry Pi 4 has dual HDMI and it costs $35 for the entire board. Cost is absolutely NOT the factor for an extra port. Extra ports and extended monitor support are literal pennies.
Ignoring HDCP for the secondary port, it would cost an additional $0.27 to add a secondary hardware port along with whatever cost for the software, which still wouldn't even add up to $5, let alone $50.
True - but I can't think of a single game which actually makes use of multiple monitors even on PC where plenty of people have the hardware to support that sort of thing. A second monitor with Discord on it, or for streaming? Sure. But nothing that video games themselves make use of.
Even if games supported it, who has multiple TVs in their lounge room close enough to connect both of them to their Xbox? That sounds super niche.
Supreme Commander did it all the way back in 2007, you could have a second independent viewport on the second monitor to keep an eye on a different part of the map. IIRC it had to jump through some weird hoops to pull that off with the graphics APIs of the time, it spawned a second instance of the game which acted as a spectator client for the primary instance.
Gran Turismo 4 had that too, all you needed was three PS2s, three network adapters, three TVs and three copies of the game. The feature was mainly intended for installations at E3 and the like but it was left in the retail build so anyone dedicated enough could set it up at home.
Live For Speed renders from multiple cameras - one camera per monitor - to prevent distortions.
It can render a 360 degree view if you have enough monitors. Can't do that with a single viewport.
Perspective projection only works for FOV < 180deg. At exactly 180 the projected coordinate of a point at the frustum edge goes to infinity, so games that support FOVs beyond 180 stitch together multiple viewports, either on one display or across multiple displays.
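The divergence is easy to see numerically: a pinhole projection maps the edge of the view frustum to x = tan(fov/2) on the image plane, which blows up as the FOV approaches 180°. A quick check in plain Python (not tied to any engine):

```python
import math

def edge_coordinate(fov_deg):
    """Horizontal image-plane coordinate of the frustum edge
    for a pinhole/perspective projection with the given FOV."""
    return math.tan(math.radians(fov_deg) / 2)

for fov in (90, 120, 160, 179, 179.9):
    print(f"FOV {fov:>6}: edge at x = {edge_coordinate(fov):,.1f}")
```

At 90° the edge sits at x = 1; by 179.9° it is past x = 1000, and at 180° exactly it is undefined, hence the multi-viewport stitching.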
I remember reading about a technology that used the 3D functionality of TVs, along with 3D glasses, to display two separate video feeds to two separate users - split screen, but without splitting the screen.
Instead of the video feed alternating between left and right images, it alternates between first-player and second-player feeds. The glasses, instead of alternating between left-open/right-closed and vice versa, alternate between both-open and both-closed.
I'm not sure how audio would be handled in this case, and I don't know if anyone actually really supported it in practice, but it was a pretty neat idea.
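The scheme described above amounts to a different frame/shutter schedule. A toy sketch of both modes (all names are illustrative pseudo-hardware, not any real API):

```python
def frame_schedule(mode, n_frames=4):
    """Which image the TV shows each refresh, and what each pair of
    active-shutter glasses does. Purely illustrative."""
    rows = []
    for i in range(n_frames):
        even = (i % 2 == 0)
        if mode == "stereo_3d":
            # one viewer, alternating eyes
            rows.append(("left_eye" if even else "right_eye",
                         "L-open/R-closed" if even else "L-closed/R-open"))
        elif mode == "dual_view":
            # two viewers, alternating whole frames
            rows.append(("player1" if even else "player2",
                         "both-open" if even else "both-closed",    # P1 glasses
                         "both-closed" if even else "both-open"))   # P2 glasses
    return rows
```

Each player effectively watches a half-refresh-rate feed of only their own frames, which is exactly the trade stereoscopic 3D already made per eye.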
Plus sometimes I don't want to develop a high dimensional regression model for how the setting affects the fps or deal with all this shader compilation stutter.
It can be a lot of fun to squeeze out every frame you can, but other times I just want to play a game.
Atomic Heart had the best shader compile I’ve seen yet. It happened with a loading bar on the main menu and didn’t stop you from exploring settings. IMO the compile should be part of the install step outside the game though
Setting up shader precomp is apparently a manual process that can be really developer intensive, especially in open world titles. That said, I agree, there is a lot of low hanging fruit that can be handled. Devs seem to be deciding between all or nothing and often just pick the nothing path.
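The "loading bar at install/first boot" approach basically means walking the whole material × quality-setting permutation space once up front so the driver's pipeline cache is warm before gameplay. A minimal sketch, with `compile_fn` standing in for whatever pipeline-creation call the engine actually exposes (all names here are hypothetical):

```python
import itertools

def precompile(materials, variants, compile_fn):
    """Compile every material/variant permutation once, with a
    progress readout, so nothing compiles mid-gameplay."""
    combos = list(itertools.product(materials, variants))
    for i, (mat, var) in enumerate(combos, 1):
        compile_fn(mat, var)
        print(f"\rCompiling shaders {i}/{len(combos)}", end="")
    print()

# stand-in "compiler" that just records what was requested
compiled = []
precompile(["rock", "water"], ["low", "high"],
           lambda m, v: compiled.append((m, v)))
```

The developer-intensive part is enumerating the permutations: in an open-world title the set of materials, lights, and quality toggles that can combine at runtime is huge, which is presumably why many studios land on "nothing".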
You still have to manage the PC and games. My console has never let me down when I want to relax. I do this for a day job, I prefer not to fuck with troubleshooting at night.
Nope, my PS5 and previous PS4 both update(d) themselves while asleep at night. I only have to update games I haven’t played in forever. My internet is fast enough that I don’t care about that either.
When I had a gaming PC a few years ago, it always wanted to update something on steam or windows then restart.
The defaults can be a double-edged sword. Places like DF put out optimized defaults for really mainstream configs for just that reason.
If you have a 3080 Ti or 4090, sure, you have paid to just push play and expect stuff to work (though even that doesn't solve the shader-compilation issues), but then you are leaving a lot of performance / fidelity on the table.
However, if you are even somewhat underpowered (think of all the people who bought a 6500 XT or 3050 because of the GPU shortage), you are going to have to do some manual tuning. According to the April Steam survey, most users run something weaker than a 3060, and a large fraction of those run something far weaker still.
Plus the CPU side isn't always a solved issue. I play on a 64-core Epyc I primarily use for simulation. I have plenty of power on tap, but the lower clocks plus the NUMA architecture force me to do some fiddling. And I'm spoiled on that front: over half of gamers still play on 4- and 6-core machines (again, see the Steam hardware survey for April) that choke GPUs very quickly.
A 300 dollar Series S may not show the most crisp image, but I often use mine to avoid the whole song and dance.
Depends on the game. My 5800x/3080 Ti doesn’t do much better at maintaining a stable 60fps in Jedi Survivor than the PS5 does. Recent triple-A PC releases have been super bad.
1) Only ever use it from a desktop browser with a sufficiently configured ad blocker. The mobile apps are intentionally designed to pull you out of the thread with only a single line of text delineating where it does so and no way to actually continue with the thread. It's a miserable experience and the people responsible should be ashamed of themselves. They try to pull the same bullshit on the website but it's more obvious there.
2) Twitter has two reply modes. Direct replies are put under the post they're replying to, much like normal linear forum discussions. But then there are quote tweets. They create a new thread, quoting the previous thread in the topic starter, so to speak... and unlike the standard linear model, the topic starter is done by top-posting, with the quoted bit below.
I think it's part stress of the project and part stress of the spotlight. I would imagine the developer was stressing over how to manage requesting donations and then all the feedback (good and bad) that they were receiving.
I could see how he was getting tired of annoying people, but then I managed to navigate to the author's profile, and there's a pinned tweet where he's selling this thing as a product for $20
> Use your Sony #PSVR, Oculus, Pico or Daydream Headset, or Smartphone as a #SteamVR #VR Headset for your PC. Evaluate for free, use In-App Purchase or Steam DLC for "Premium Edition"
If he doesn't want to deal with customers, maybe he shouldn't be selling this as a product
That said, there's nothing wrong with discontinuing the product if he doesn't feel like working on it any more
To clarify, he isn't selling any of the work that he's doing on the PSVR2 currently. He had a donation page for that work that he has since removed after some minor criticisms.
The advertisement is for the iVRy Driver that he developed for some other hardware options.
The work he's done on reversing the PSVR2 is unreleased and closed source for now.
Yeah, I went straight to their profile and found a thread that made sense. Even if they wanted a little money, the thankless work of being a hacker/dev/whatever on a highly anticipated project sucks and drove this guy to take a break. Good on him.
Isn't one of the major barriers that you'd have to reimplement the entire tracking code for PC? Or does that actually run on the headset independently and not on the PS5?
If you plug a headset into a VirtualLink adapter it will go into the "cinematic mode" and function as a floating 1080p monitor with 3dof tracking done by the headset.
The difference between the headset doing 3dof by default (expected for something like time warp and only need accelerometers) and getting an API that parses the camera feed for 6dof head and controller tracking and wiring that up to OpenXR seems like a huge leap.
My understanding is that this wasn't particularly the issue, the main problem for PC use is that you'd need a VirtualLink adaptor, and that will hold back people doing much work on this.
The PSVR2 turned out not to actually use VirtualLink, just more-or-less standard DisplayPort + USB 3 + USB PD over USB-C. Most of the compatibility issues seem to be from it requiring parts of the spec that aren't widely implemented at the same time, like 12V support and display compression. The main open question is how to actually switch it into VR mode so that it's possible to send a stereo image instead of 2D virtual cinema mode, and I don't think anyone is quite sure whether this helps with that at the moment.
VirtualLink isn't that exotic, just another USB-C Alt mode that has 4xDP lanes with USB3 signaling pushed over the USB2 pins. There's likely plenty of ICs on the market that could do it, just few products that implement the feature because there's zero demand.
Given someone gets the PSVR2 working on PC there would be incentive among enthusiasts to create hobbyist-grade hardware at least.
That and the fact that the PSVR2 uses camera-based SLAM tracking, similar to the Oculus Quest, and reimplementing that from scratch is non-trivial to say the least.
There's already a mostly-working open source SLAM implementation that people have been using on a few of the existing headsets. What complicates matters here is that at least feature extraction seems to run on the PSVR2 headset itself so the existing code can't just be dropped in as-is. (Also, whilst the controllers work in basically the same way as many other VR controllers using a constellation of IR LEDs, there's not really good open source support for any of those controllers either.)
Is it? I haven't seen it definitively confirmed either way, but intuitively it would make more sense to do SLAM on the PS5's main processor rather than driving up the PSVR2's BOM with an on-board processor fast enough to do it. They have to stream the camera frames back to the PS5 anyway for the passthrough feature to work.
>driving up the PSVR2s BOM with an on-board processor fast enough to do it
The Quest 2 can do SLAM on the headset plus the 2*2K-resolution 3D rendering in real time on a 4-year-old Qualcomm chip running Android on top, and it costs less than the PSVR2.
So Sony can definitely afford to put a processor in the headset just for the SLAM compute alone since the 3D rendering is done on the PS5.
The chip in any optical mouse is a low resolution camera and DSP computing SLAM algos at hundreds of times a second and costs peanuts. VR headsets have more resolution to process and account for the third dimension in space but they also don't have to process the entire picture but just the differential movement of the LEDs captured in their blanking intervals, processed as white dots in B/W images, further simplifying things.
So pretty sure the PSVR2 SLAM compute can be done on an ARM chip or FPGA worth ~ 10 USD nowadays.
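The "white dots in a B/W image" step described above is essentially thresholding plus blob-centroid extraction, which is genuinely cheap. A toy version with NumPy (illustrative only; a real tracker also has to identify which LED is which and solve for pose):

```python
import numpy as np

def led_centroids(frame, threshold=200):
    """Find centroids of bright blobs in a grayscale frame using a
    simple flood-fill labelling. Returns one (row, col) per blob."""
    mask = frame >= threshold
    labels = np.zeros(frame.shape, dtype=int)
    count = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # already part of a labelled blob
        count += 1
        stack = [seed]
        while stack:
            r, c = stack.pop()
            if (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                    and mask[r, c] and not labels[r, c]):
                labels[r, c] = count
                stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return [tuple(np.mean(np.nonzero(labels == i), axis=1))
            for i in range(1, count + 1)]

# two synthetic "LEDs" in a 32x32 frame
img = np.zeros((32, 32), dtype=np.uint8)
img[4:6, 4:6] = 255
img[20:22, 10:12] = 255
print(led_centroids(img))  # roughly (4.5, 4.5) and (20.5, 10.5)
```

Scaling that to a few cameras at headset resolution is still well within reach of a small ARM core or FPGA, which is the parent's point; the expensive part of full SLAM is the mapping and pose-graph work, not the dot detection.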
There's nothing to support, the VirtualLink consortium was dissolved and it's now a zombie standard with nobody at the wheel. The VR headset makers that originally backed it aren't supporting it anymore either, Valve announced and then cancelled a VirtualLink cable for the Index, and HTC/Oculus/Microsoft never made it as far as announcing any products.
For anyone interested in why VirtualLink didn't go anywhere, one of the limiting factors was that USB 3 and above is entirely separate from USB 2 and below.
Normal USB-C DP Alt Mode delivers either four lanes of DP and USB 2 or two lanes of DP and both varieties of USB. VirtualLink takes the 4xDP variant and upgrades the USB pairs to support 3.x modes, but that means if you plugged a USB 2 device in to the headset itself there'd be nowhere to route that traffic.
A chip was developed to bridge the gap and allow USB 2 devices to connect to the USB 3 bus, but there were quirks and it ended up not being ready in time to be useful to the VR market. Valve eventually cancelled their adapter plans and that was the end of it.
I suspect the rise of good-enough wireless PCVR over commodity WiFi hardware didn't help matters either - being able to tether a headset over a single USB-C cable is mildly convenient but nowhere near as convenient as having no cable at all. Oculus and Pico both went down that route and it looks like Valve are also going to with their next headset.
The tech has been there for years, if you look at discussions of using the Quest for PCVR the default recommendation is to use WiFi streaming rather than the tethered USB mode. The trick is that the reprojection still runs on the headset itself so the critical part for avoiding motion sickness has minimal latency regardless of the signal quality.
As an early adopter of the Vive (didn't preorder but bought within the first few weeks) who also has a Quest 2, the "good-enough" part of your previous post is definitely open to interpretation.
I will say I agree with using it wireless being the default recommendation, but there's a reason people still buy fiber optic USB-C cables.
If you have a good WiFi setup, which a lot of people do not (and a lot of people who think they should have good WiFi because they spent a lot of money on it still don't because they don't understand what they bought), it's perfectly fine for most games that don't depend on tight timing, but you can still tell when playing Beat Saber or similar. I noticed it most as input jank on the controllers, both compared to the Vive and native Quest apps.
IMO the right answer is a Quest-like standalone headset with a VirtualLink-style input to allow uncompressed signals with predictable latency to flow between the two. Best of both worlds. A wired Quest today is effectively just using the USB port as a network interface rather than a display.
I've been using Virtual Desktop (vrdesktop.net) for a while now and it works great. With some judicious use of its resolution scaling options and a physically nearby Wifi 6 router it's pretty easy to get latency down to 30-40ms. The only real trouble spots I still run into are games that have wildly unoptimized elements like VRChat.
It's cool to hear what I use every day is tech that isn't here yet :)
I use my Quest 2 with my PC (4090) all the time and it's great. The lag is minimal and I have no motion sickness.
I use a dedicated WiFi 6 AP with its own SSID though, because I noticed multiple devices on the same AP cause stuttering. Probably from switching between higher and lower modulations.
But this way it works perfectly. I hope the Quest 3 will get WiFi 6E for more free channels.