Play GTA V in your Browser, Sort of (phoboslab.org)
362 points by luastoned on July 27, 2015 | 158 comments



I wonder how long before games start being developed for server side rendering.

Stuff like MMOs, for example - you don't care about latency as much, your payment model already works with it, and it solves several major issues:

* no hardware barrier to entry for high end graphics

* no installation/play anywhere

* optimizations from shared state rendering, potentially advanced rendering techniques (world space lighting) - you can share computation and memory between multiple clients this way

* probably an order of magnitude harder to cheat


>I wonder how long before games start being developed for server side rendering.

A while ago I was working on a flappy bird clone (as a test bed for the technology) that kind of did this. The app ran locally on the device; however, it created a text-based record of all objects and their movements on the screen, and at the conclusion of each game, had the ability to ask you if you wanted to upload a video of the game you just played to YouTube. If you selected yes, the small text record of the game was uploaded to the server, where it was used to create a video of the game as you played it and upload it.

The idea was that if it were super-easy and used almost no mobile bandwidth to upload video of game sessions where people liked their results enough to share, it would go viral. The technology worked well in tests, but then Flappy Bird's popularity kind of died before I released it. Now I'm working on implementing it in another game.
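
A rough sketch of what such a record could look like (the types and names here are hypothetical, for illustration only - not the actual format used):

    // Minimal text-based replay record: per frame, just object IDs and positions.
    // At game over, the whole thing serializes to a few KB of JSON that a server
    // can replay to render a video, instead of uploading tens of MB of footage.
    interface ObjectState {
      id: string;        // e.g. "bird", "pipe-3"
      x: number;
      y: number;
      rotation?: number;
    }

    interface ReplayFrame {
      t: number;                 // ms since game start
      objects: ObjectState[];
    }

    class ReplayRecorder {
      private frames: ReplayFrame[] = [];

      capture(t: number, objects: ObjectState[]): void {
        this.frames.push({ t, objects });
      }

      serialize(): string {
        return JSON.stringify({ version: 1, frames: this.frames });
      }
    }

Call capture() once per frame from the game loop, then POST the serialize() output to the render server when the player chooses to share.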


This is very reminiscent of two things for me:

- the introduction of replays in Halo 3 (which worked the same way - by saving the data of the entire session, one could freely move the camera around the entire map and observe any part of the game at any point in time)

- Super Meat Boy's level-end combined replays (which replayed all of the user's attempts simultaneously, creating a pretty amusing sort of "heatmap" effect)

I think this will eventually become standard for games where replays would be valuable or fun to watch. But I'm not sure about actually rendering live games server-side until we're at a point where input latency is unnoticeable.


Server side rendering for live games is a really bad idea. If there's a single hiccup with a packet, the whole game will stutter on the client's side and they won't be able to do anything - not move, not look, not pause, whatever.

Server side rendering for replayed games, on the other hand, is totally fine - not everyone has capture cards. They actually had this service on Halo 3 where you could pay Bungie for the number of minutes of footage you wanted rendered out.


nod nod Replays used to be a standard feature of big-budget multiplayer PC video games. I spent many fond hours dissecting exactly how new-to-Descent2 or new-to-Starcraft me got his ass handed to him this time around.

It's a damn shame that more video game dev houses don't spend time to create - at a minimum - single-POV client-side replays for their multiplayer games.


Awesome anecdotes. Reminds me of map making and frag videos in Quake, Unreal Tournament, etc. Super Meat Boy's replay feature was a blast through and through. You have good taste!


Very creative idea. A similar technique is used by a few games for high score validation. The server runs the player inputs through a local simulation of the game to re-calculate the score, thus preventing blatant cheating. TAS is still possible though.

I think TIS-100 and other Zachtronics games do that as well for score validation.
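
A hedged sketch of that validation (the action set and scoring rules below are placeholders, not any specific game's):

    // The client submits its claimed score plus the raw input log; the server
    // re-runs the same deterministic simulation and rejects mismatches.
    interface InputEvent {
      frame: number;                // fixed-timestep frame index
      action: "flap" | "none";      // placeholder action set
    }

    // Must be deterministic and identical on client and server:
    // fixed timestep, no wall-clock time, no local randomness.
    function simulateScore(inputs: InputEvent[]): number {
      let score = 0;
      for (const ev of inputs) {
        if (ev.action === "flap") score += 1; // stand-in for the real game rules
      }
      return score;
    }

    function validateSubmission(claimedScore: number, inputs: InputEvent[]): boolean {
      return simulateScore(inputs) === claimedScore;
    }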


> it created a text-based record of all objects and their movements on the screen,

Also known as what every single ID Engine / GoldSource / Source[1] / Unreal Engine / many more engines do for records and replays, except they do it in a binary format. It's not exactly a novel idea.

https://developer.valvesoftware.com/wiki/DEM_Format


> It's not exactly a novel idea.

True. But:

1) It isn't exactly trivial.

2) Someone who didn't play games in the 1990s and doesn't like RTS games may very well never have played a game with demo recording and playback.

3) Hell, I was unaware that either Source or recent versions of Unreal Engine were capable of demo recording, and I've played the shit out of lots of games that use both engines.


In my case, it was new to me, and I have yet to see this functionality in any mobile game that I have played except for my own.


I do hope you continue to refine your demo playback techniques and bring this functionality to ever more complex games! I also hope that game devs will push hard to take some time to add demo recording and playback to most of the games they build in the future. :D


> I wonder how long before games start being developed for server side rendering.

Haven't you heard of Gaikai (now owned by Sony), which provided cloud-based gaming? https://en.wikipedia.org/wiki/Gaikai

The problem is latency. Now you have input latency from your wireless gamepad to/from your PC/console, the rendering loop CPU/GPU time, and then also the latency of your network and the server CPU/GPU time. Good night. If you live in a big hub city near the cloud datacenter you can use it, or if you like slower-paced games. But if you like to play Quake (read: very fast paced first person shooter action at 120+ fps), forget dreaming about cloud streaming your games for now!


Gaikai isn't what I'm talking about, though - that's just hosting your gaming PC in the cloud.

Server side rendering would exploit the fact that all of your clients are using the same resources to render the game - so you only need to keep 1 instance of mesh x in memory and you can render it for all clients.

As I said below, if you have shared instance worlds like MMOs you can do much more sharing; you can have shared effects just like you have a shared simulation for clients. The effects can then be optimized for techniques that might not be viable on consumer hardware - for example, some sort of world space global illumination technique, like radiance transfer. These techniques aren't usually used because you need a lot of memory and processing power, but you can get server GPUs with >10 GB of memory (and more in the future); you could have one dedicated GPU that just computes global illumination and then feeds view-specific data to each client's view rendering thread.

It would completely change the way you render games - right now most effects are done in screen space because it's faster (e.g. deferred shading), but if you could share world scene state you could get a lot more fancy with world space effects. The rendering pipeline would also need to be a lot more asynchronous to avoid latency (e.g. global illumination could be a separate process that syncs occasionally rather than every frame, trading lighting latency for lower input latency).
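
To make the idea concrete, here is a conceptual sketch of shared-state rendering (purely illustrative types, not an existing engine): the expensive, view-independent world-space work happens once per tick, and each client only contributes a camera and gets back its own frame.

    interface Mesh { id: string; vertices: Float32Array; }
    interface LightProbe { position: [number, number, number]; irradiance: Float32Array; }

    interface SharedScene {
      meshes: Map<string, Mesh>;   // one copy of each asset, shared by every view
      lighting: LightProbe[];      // world-space lighting, updated once per tick
    }

    interface ClientView {
      clientId: string;
      cameraPosition: [number, number, number];
      cameraTarget: [number, number, number];
    }

    // Per tick: update the shared, view-independent data once, then render N
    // cheap view-dependent passes (one per connected client).
    function renderTick(
      scene: SharedScene,
      views: ClientView[],
      updateLighting: (s: SharedScene) => void,
      renderView: (s: SharedScene, v: ClientView) => Uint8Array
    ): Uint8Array[] {
      updateLighting(scene);                        // amortized across all clients
      return views.map(v => renderView(scene, v));  // per-client work
    }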


I think you're overestimating the impact of sharing resources... it wouldn't do much. Let's say you have 10 concurrent users on a machine; to keep everyone at 60 fps you need to be rendering at 600+ fps. Even with great graphics cards running SLI etc., with a ~1.6 ms budget per frame you're not going to do much in terms of visual effects.
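
The arithmetic behind that budget:

    const users = 10;
    const targetFps = 60;
    const totalFramesPerSecond = users * targetFps;   // 600 frames to produce each second
    const msPerFrame = 1000 / totalFramesPerSecond;   // ≈ 1.67 ms per rendered frame
    console.log(totalFramesPerSecond, msPerFrame.toFixed(2));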


Yep, even on my gigabit internet and sitting smack dab in the middle of the USA (well, pretty far south, but centered), latency is still my Achilles' heel.


I'm pretty sure you would be much better off on either coast. The vast majority of game servers at least tend to be split between LA and the northeast.


It's ramping up:

Square Enix's solution https://www.shinra.com/us


A fun little fact: the name 'Shinra' is a reference to the Shinra Corporation from Final Fantasy VII, one of Square-Enix's most beloved games.


Yes, this seems like the thing I'm talking about - true cloud rendering, not just sticking PC games onto VMs.


Well that's awesome


Darn tootin!


Most important: Significantly less cheating! This is huge!

No more client-side hacks (to the .exe) that add draw calls for things like drawing a red rectangle around enemy units, force-drawing them blended so that they can be seen, turning wireframe on, making all significant audio maximum volume (so you can hear footsteps more easily), removing anything else, etc.


It's unlikely for most games, even MMOs. 100ms of input lag (which is a good case) is going to be noticeable to most players. You can always add more bandwidth, but latency is bound by the speed of light.

Economically it'd be a disaster also. Since many games max out resource usage on the machine they're on, you're not going to have a lot of shared hosting. So now you're talking about having an (expensive) machine for each concurrent user.


>You can always add more bandwidth, but latency is bound by the speed of light.

You're always bound by the same latency - your input goes to the server and server returns world state for your PC to render - the only extra latency in this scenario is the time required to encode/decode the data, which could be offset if the server is faster at rendering than the client PC.

As for economics I already said you can exploit the shared state very much in games if you rework the way rendering works. Right now games focus on camera space rendering because they only care about 1 view output and view space effects are cheapest in this scenario.

If you have shared state rendering suddenly you can process the instance state once per frame for 100s of users just like you simulate the game for 100s of users per instance.

You can have specialized hardware setups (i.e. multiple high-end GPUs with >10 GB of RAM) doing dedicated tasks like recomputing lighting/shadows, animation, particle effects, etc. You would need to find a way to make these systems asynchronous to reduce lag, so lighting updates might lag a frame or two behind animation, for example - but as usual with rendering there are always clever tricks to fool the eye. And once you have gigabytes of RAM at your disposal, very different rendering techniques compared to currently used ones become viable for shared state rendering.

Someone already linked a platform that's already doing this - I have no doubt this is the future of VR and gaming, at least in some part (maybe you won't be streaming video to the client but some geometric 3D world representation with deltas, so that the final rendering can be done client-side and you can have low-latency rotation for stuff like VR).


> You're always bound by the same latency - your input goes to the server and server returns world state for your PC to render

That's untrue of most games. Most games will accept inputs immediately on the client, and only correct from the server if things get significantly out of sync ("rubber banding"). Dead reckoning is both hard and super important.

> As for economics I already said you can exploit the shared state very much in games if you rework the way rendering works. Right now games focus on camera space rendering because they only care about 1 view output and view space effects are cheapest in this scenario.

The way you say this makes me think you have no idea how rendering works. There is no rendering without a camera. The idea doesn't even make sense.

> recomputing lighting/shadows, animation, particle effects, etc

Particle effects are dependent on graphics card bandwidth and fill rate. No benefit of shared state. Animation is done in vertex shaders. No benefit of shared state. Lighting and shadows are done through GPU buffers. No benefit of shared state.

The only possible system where shared state could (maybe) be useful in the way you describe is some sort of massive ray tracing operation. That wouldn't have anything to do with GPUs, and if you're talking about real time ray tracing the economics of this just got sillier. Now you're basically talking about a super computer cluster.

> I have no doubt this is the future of VR

VR is BY FAR the most sensitive to even minor latency. There's no way you're doing VR over a network.


>That's untrue of most games. Most games will accept inputs immediately on the client, and only correct from the server if things get significantly out of sync ("rubber banding"). Dead reckoning is both hard and super important.

Most games aren't MMOs

>The way you say this makes me think you have no idea how rendering works. There is no rendering without a camera. The idea doesn't even make sense.

What a narrow minded view. There are data structures that can store geometry and lighting information in world space - for example you can have the world represented by some sparse voxel data structure and calculate lighting in world space - then camera rendering is just raycasting into the data structure, which is the same for all views. Animation and particles are about updating the world geometry.

>Particle effects are dependent on graphics card bandwidth and fill rate. No benefit of shared state. Animation is done in vertex shaders. No benefit of shared state. Lighting and shadows are done through GPU buffers. No benefit of shared state.

This is because current rendering systems are optimized for client-side rendering, which is my point. If you discard the notion that the only way to render 3D geometry is using the GPU pipeline and triangle rasterization, you'll see that there are a lot of possibilities. Unfortunately not a lot of research has been done, because rendering 3D polygons fits the constraints we had historically and is really robust; everything is optimized towards it.

>VR is BY FAR the most sensitive to even minor latency. There's no way you're doing VR over a network.

Which is why I said you could stream geometry data updates instead of video - this way your client re-renders to match camera movements, but the animation is streamed from the server.


> What a narrow minded view. There are data structures that can store geometry and lighting information in world space - for example you can have the world represented by some sparse voxel data structure and calculate lighting in world space - then camera rendering is just raycasting into the data structure, which is the same for all views. Animation and particles are about updating the world geometry.

That only works for diffuse lighting. There is more than diffuse lighting. I'm not "narrow minded", I actually work on renderers for a living so I know the actual data structures in use. Things like specular reflections and refractions are entirely dependent on where the viewer is; and calculating lighting information for an entire scene is way less efficient than calculating it for a viewer (see: how deferred renderers work).

> Unfortunately not a lot of research has been done, because rendering 3D polygons fits the constraints we had historically and is really robust; everything is optimized towards it.

Huh? There's been decades of research into ray tracing and voxels. Believe me, people have put a lot of thought into how to optimize these things.


>That only works for diffuse lighting.

You can add a light ID list to the structure (e.g. 1 or 2 ints with byte IDs or whatever), essentially solving the light occlusion problem in world space. You can then do view-space specular, and you do transparent objects as a separate pass, just like you do with deferred.

>Things like specular reflections and refractions are entirely dependent on where the viewer is; and calculating lighting information for an entire scene is way less efficient than calculating it for a viewer (see: how deferred renderers work).

That's true if you're rendering for a single view - my whole point is that if you're rendering for multiple clients, then solving lighting in world space makes that calculation shared, just like the g-buffer is an optimization for view space with its own tradeoffs and workarounds.

>Huh? There's been decades of research into ray tracing and voxels. Believe me, people have put a lot of thought into how to optimize these things.

Compared to triangle rasterization it's nowhere near close - e.g. I only saw a decent voxel skinning implementation a few years back, while realtime rendering is entirely based on rasterization and it's baked into the hardware pipeline.


It (as with most things) comes down to the implementation. If everything is rendered server-side, and the client is a dumb remote framebuffer, then yes, you'll notice 100ms of input lag. If, on the other hand, environments are composited on the server while the player's avatar and particles are texture-mapped server-side but then motion-tweened and composited into the environment on the client-side (which doesn't require much more work than the dumb remote framebuffer approach), then gameplay feels nearly as snappy as if everything were local.
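
A sketch of what that hybrid compositing could look like on the client (the function and field names are hypothetical; this illustrates the idea, not any particular engine's code):

    // The server streams environment frames; the client draws the latest one
    // and composites the locally tweened avatar on top, so player movement
    // feels immediate even though the background lags by the round trip.
    function drawFrame(
      ctx: CanvasRenderingContext2D,
      latestEnvironmentFrame: ImageBitmap,   // decoded from the server stream
      avatarSprite: ImageBitmap,             // textured server-side, moved client-side
      avatar: { x: number; y: number; vx: number; vy: number },
      dtMs: number                           // time since last local update
    ): void {
      // Tween the avatar forward locally instead of waiting for the server.
      avatar.x += avatar.vx * (dtMs / 1000);
      avatar.y += avatar.vy * (dtMs / 1000);

      ctx.drawImage(latestEnvironmentFrame, 0, 0);      // remote background
      ctx.drawImage(avatarSprite, avatar.x, avatar.y);  // local foreground
    }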


Already happening.

https://games.amazon.com/games/the-unmaking

"Powered by Amazon’s AppStream -The Unmaking is the first game to unleash the power of the cloud so players can experience thousands of enemies, destructible environments, and an epic, cinematic soundtrack on their Fire tablets."

May or may not be cost effective.

I'm highly skeptical of the viability of your point #3.



That is different - they weren't using cloud rendering, they were just stuffing games into some sort of VM and streaming the output.

Cloud rendering would be something like you and me play the same game and the server rendering only has 1 instance of every resource used for rendering to reduce memory overhead.

If you have shared instance worlds (eg. MMO) you can then do shared state effects like animation, advanced lighting, etc. and reuse the calculations for each client.


I'm curious how much of a savings you could actually expect to gain from rendering a single scene for multiple cameras at once. My understanding is that, historically, a lot of the work in rendering a scene is camera-dependent, and that a lot of performance optimizations for rendering rely on being able to avoid computing things that aren't visible to the camera. Has that changed significantly over the years, or am I just wrong?


As a nerd I love numbers; 'Latency is minimal' doesn't do it for me. Display this http://tft.vanity.dk/inputlag.html or any other stopwatch, and record both screens. What is the bitrate and CPU overhead at HD/Full HD resolutions at 30/60 Hz?

Some very slow MVA monitors lag as much as a full frame behind input, so in extreme cases you could get equal delay on your laptop.


For my system it hovers around 50-70ms[1][2]. Tested with an 800x600 window at ~7% cpu utilization on a Core i5. My desktop monitor is a Dell u2711, which seems to add about 15ms latency itself[3].

I'm not sure how much latency OSX or Chrome adds (at least a frame more than Firefox[4]); Mobile Safari seems to be a bit faster, as evident in the video. I don't have a second Windows machine for comparison.

Edit: When connecting on the same machine, latency is 2 frames (33ms) exactly[5].

[1] http://phoboslab.org/files/jsmpeg/jsmpeg-vnc-latency.mp4

[2] http://phoboslab.org/files/jsmpeg/jsmpeg-vnc-latency.jpg

[3] http://www.anandtech.com/show/2922/4

[4] http://phoboslab.org/log/2012/06/measuring-input-lag-in-brow...

[5] http://phoboslab.org/up/B2u6w.png


> 'Latency is minimal'

Tell me about it. I've been trying to gather info about a solution where I can record with a camera, probably 1080p (via Ethernet, or via HDMI to a capture card), and use the frame information as inputs to a simulation with latency around 20 ms, and I'm unable to get hard data on the actual latencies. Nobody cares about input lag because practically no one uses a recording as a real-time input.

Incidentally, if anyone knows of or has experience with a similar setup, I'd be forever grateful.


Gamers care about input lag. Pro gamers on PC don't use wireless input devices; they prefer cable-based ones for exactly that reason. The input lag is also what (afaik) almost killed Gaikai. Sony recently bought it for streaming some older PS3 titles. https://en.wikipedia.org/wiki/Gaikai


I never played at the pro level but I did play competitively and I would never use wireless internet, mice, or keyboards. These days there are some great mice, keyboards, and headphones but most people I know won't leave it up to chance and always go with wires.


Yes, I'm aware they do, and manufacturers of those devices often provide options and information, but cameras are seldom used as input devices, so they're not usually measured - and that's a shame.


Here's a dumb idea I just came up with: attach a bright light that the computer can blink programmatically. Write a program which: a) blinks the light, b) blinks the light again 0.01 seconds after the camera sees the light blink, c) repeats b a thousand times, and d) measures how long the process took relative to ten seconds, divided by 1000, to get your latency (including latency for sending the signal to the light, but hey, it's at least an upper bound).
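
Roughly, in code (blinkLight and waitForCameraToSeeBlink are hypothetical stand-ins for whatever hardware/vision hookup is available; the timing logic is the point):

    async function measureLoopLatency(
      blinkLight: () => Promise<void>,
      waitForCameraToSeeBlink: () => Promise<void>,
      iterations = 1000,
      pauseMs = 10
    ): Promise<number> {
      const start = performance.now();
      for (let i = 0; i < iterations; i++) {
        await blinkLight();
        await waitForCameraToSeeBlink();
        await new Promise(res => setTimeout(res, pauseMs));
      }
      const elapsed = performance.now() - start;
      // Subtract the deliberate pauses; what's left is N round trips, which is
      // an upper bound that still includes the light-trigger latency.
      return (elapsed - iterations * pauseMs) / iterations;
    }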

Another thought that occurs to me is that people who work with music equipment (especially USB synths and such) care about latency a lot. It may be worthwhile to hook into that scene and see what kind of insights can be gleaned. Here's a link to get you started: http://www.mathworks.com/help/dsp/examples/measuring-audio-l... Looks like they're basically using a feedback model as well.


Thanks. I've done stuff like that. For "whole system" lag it's also a good idea to record input lag with a faster camera and count the number of images/frames it takes from the input to the output on screen.

My problem is that, from what I've seen so far, camera/capture card vendors sometimes don't publish latency figures, and there doesn't seem to be a particular standard among those who do.

Thanks for the link. It's quite interesting.



Would like to see latency numbers too. GStreamer with H.264 is about 150 ms-ish.


This kind of technology is amazingly promising for being able to play games anywhere on any hardware.

However, the thing that worries me the most is not latency or graphical fidelity, but the way this will affect the actual design of AAA games. If I as a developer/studio know that my AAA title is going to be remotely controlled, possibly accessible only via subscription, and the end user's progression monitorable at any point, the opportunities for monetization skyrocket. You think DLC now is bad? Try playing Skyrim and being able to buy in-game gold at any point via browser pop-up. Just died in Dark Souls? Respawn with no lost souls and kill all enemies in the vicinity for only $1!

I'm not saying that all games will end up being this way, but a certain amount of AAA titles may end up going with more mobile-oriented pricing models if this sort of remote gameplay becomes mainstream.


I watched an Extra Credits video the other day that touched on this thought:

one option is to have AA studios / publishers fund teams of "indie" devs with license to go nuts exploring new aspects of the medium.

Personally, since we're running full-tilt towards needing something to replace "jobs," I'd like to hope that at some point some places (countries, planets?) might institute a universal minimum income, along with a flat-rate fair tax. Then folks could earn currency by playing games (or creating art, music, solving interesting problems, whatevs...).

Could go a long way towards building space-ships & eliminating poverty / starvation...


So basically, revival of the arcade slot machines.


I have to say that latency is still my biggest concern


Possibly stupid question: Isn't this how the Playstation 4 plays Playstation 3 games (in a nutshell)?


Yes.


Interesting! Could you elaborate? Where do the frames come from?


I believe it's a server farm consisting of hundreds of "bladed" PS3s.


How does this work? Are the dvd images uploaded to Sony's servers? Do they have PS3 datacenters in several locations?


I have no first-hand knowledge, but it could be done like this…

1. Take a pile of PS3s, throw out the cases, and attach the motherboards to some custom-built rack arrays (these are the "blades")

2. Build one huge SAN that contains all the games, and tweak the PS3 BIOS's to read from the SAN as one huge internal HD. (BIOS tweak possibly not even necessary if the SAN was good enough at pretending to be a simple local drive)

3. Hook up the AV-out and the USB inputs to a series of Xeon servers that simply pump out live h264 streams (and pump controller input back in) from the PS3 blades. One Xeon ought to support at least a dozen PS3s.

This is how it could be accomplished prior to Sony's acquisition of Gaikai and OnLive. Since then, they'd have access to Sony's own knowledge of the PS3 platform, and possibly make PS3 VMs that would cut the hardware blades out of the equation.

OnLive had five datacenters in North America, Gaikai ran on… 300? I imagine many of them were running virtualized Windows machines, as Gaikai and OnLive didn't start out as console streamers. http://www.engadget.com/2010/03/11/gaikai-will-be-fee-free-u...


The PS3 already supports game streaming - you can play from your PSP and PS Vita - so you don't need another server for the actual stream.


It doesn't play games you already own on discs (BD, not DVD in the case of PS3). It's a subscription service where you can buy/rent games that are stored on their servers. They are played on remote PS3s and streamed to your PS4/Vita.


Can you explain how it works? Wasn't there a company trying to make streamable games but then lag was an issue? Does Playstation 4 render PS3 games on their servers and send me the images... or what?


This looks awesome, and saves someone the headache of trying to set up the VPN, but it is not using the NVIDIA GPU's ability to do hardware encoding, which is much faster than going the CPU route in most cases. If someone could adapt that, and open source it, that'd be the game changer that people are looking for to build this sort of stuff.


Afaik NVIDIA's NVCUVID API only supports H.264 encoding, which would require a JavaScript H.264 decoder. So far I haven't been able to get the one JS H.264 decoder I know of (Broadway) to work reliably and fast enough for large video sizes.

I wish the industry would just get their act together and support a common video codec in browsers along with a JavaScript API that can deal with decoding single frames. Currently native streaming support for the <video> element in browsers is extremely poor and proposed solutions like MPEG-DASH or HLS add around 10 seconds(!) of latency.


The API you are looking for is WebRTC. VP8 is the most widely used codec for it, but you could use H.264 baseline if you really wanted.

You will need to include a WebRTC stack in your server though, which is a lot more complicated. But I think you will get better overall performance with it versus TCP - it's what WebRTC was designed for.

Also, you might want to look at ogv.js - if you want to keep going the JS way, it includes a very fast Theora decoder which should still be a lot better than MPEG-1.
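
For reference, the browser side of a WebRTC receive path is fairly small; a rough sketch (the signaling URL and message shape are assumptions, since WebRTC leaves signaling up to you):

    const pc = new RTCPeerConnection();
    const videoEl = document.querySelector("video") as HTMLVideoElement;

    // Attach the incoming remote stream to the <video> element.
    pc.ontrack = (event) => {
      videoEl.srcObject = event.streams[0];
    };

    const signaling = new WebSocket("wss://example.invalid/signal"); // hypothetical endpoint
    signaling.onmessage = async (msg) => {
      const { sdp, candidate } = JSON.parse(msg.data);
      if (sdp) {
        await pc.setRemoteDescription(new RTCSessionDescription(sdp));
        const answer = await pc.createAnswer();
        await pc.setLocalDescription(answer);
        signaling.send(JSON.stringify({ sdp: pc.localDescription }));
      } else if (candidate) {
        await pc.addIceCandidate(candidate);
      }
    };
    pc.onicecandidate = (e) => {
      if (e.candidate) signaling.send(JSON.stringify({ candidate: e.candidate }));
    };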


Media Source Extensions[1] may be an option. It allows you to feed bytes to a video element, sourcing those bytes from wherever you'd like. There is a demo[2] that literally opens a video and splits it into parts arbitrarily. Chrome, IE, and Safari all have support for MP4 with MSE.

[1] https://en.wikipedia.org/wiki/Media_Source_Extensions [2] http://html5-demos.appspot.com/static/media-source.html
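
A rough sketch of the MSE path (the stream must arrive as fragmented MP4 or WebM segments; the WebSocket URL and codec string here are assumptions for illustration):

    const videoEl = document.querySelector("video") as HTMLVideoElement;
    const mediaSource = new MediaSource();
    videoEl.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener("sourceopen", () => {
      const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
      const queue: ArrayBuffer[] = [];

      // appendBuffer is asynchronous; only feed it when it's idle.
      const pump = () => {
        if (!sourceBuffer.updating && queue.length > 0) {
          sourceBuffer.appendBuffer(queue.shift()!);
        }
      };
      sourceBuffer.addEventListener("updateend", pump);

      const ws = new WebSocket("wss://example.invalid/fmp4"); // hypothetical server
      ws.binaryType = "arraybuffer";
      ws.onmessage = (e) => {
        queue.push(e.data as ArrayBuffer);
        pump();
      };
    });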


MSE is too high latency for this application.


This is really cool. After watching this demo I wonder what the future of gaming might look like, e.g. we could pay a subscription for a game and a remote machine to run the game on. This way, even with a cheap laptop, we could play AAA games remotely. No more issues with hardware, drivers, fps.


This is very much what OnLive were doing a few years back - I had a subscription for a while but never used it enough to justify it. They would have some problems from time to time, but for a casual gamer like myself that only had an ancient and Linux-only laptop it worked surprisingly well.

They're now defunct; Sony picked up their assets at the end of April this year (https://en.wikipedia.org/wiki/OnLive).


It's essentially thin client gaming, right?

With the roll out of more fiber networks, this could be a reality (assuming you still had enough host locations to reduce latency).


I have never used OnLive - did they support playing via browser? A browser solution is key to this, as almost every device supports modern browsers capable of running the OP's solution.


They had Windows and OSX apps, as well as an iOS app. OnLive started in a time before browsers had the features needed to make this possible.


The browser is the key to what? Doing stuff in the browser?

It's certainly not the key to getting a game on every device. I can't think of one browser game that people play on their devices. Even Words with Friends, the simplest 2D game you could think of, is not used from a browser - most people run the native app for that.


>It's certainly not the key to getting a game on every device. I can't think of one browser game that people play on their devices. Even Words with Friends, the simplest 2D game you could think of, is not used from a browser - most people run the native app for that.

Because of performance - but streaming games solves this problem. A full-screen browser wouldn't be any different from a native game client.


You may have seen this before, but if not it would probably interest you: http://lg.io/2015/07/05/revised-and-much-faster-run-your-own...


Yes, I have seen this one, but it wasn't browser-based. Somehow a browser-based solution looks much better.


It definitely does! The main point of interest for me would be instant cross-platform support, although it's nice to not have to install a client too.


OnLive has been offering that service for five years (they got acquired a few months ago by Sony).


Are you sure? I thought Sony bought Gaikai, which is basically similar to OnLive - but a competitor service.


Sony did buy Gaikai, and they have been using their technology in a few PS4 services.

OnLive had a pretty aggressive trove of patents, and when OL finally shut down, Sony bought their patents. So the company wasn't really acquired, it was just a fire sale.


From onlive.com: "Sony has acquired important parts of OnLive. Due to the sale, all OnLive services were discontinued as of April 30, 2015."


Got it. Wonder what happened to Gaikai then.


Acquired by Sony in 2012.


As mentioned above, I'm well aware of that. I am wondering why they needed OnLive since they already had Gaikai.


As mentioned before, it sounds like they wanted the patents.


> e.g. we could pay a subscription for a game and a remote machine to run the game on

The latency of out-of-home streaming is far worse than that of in-home streaming - unless the server is just next door to where you live.


For now, yes.

Think 5-10-15 years when everyone has gigabit+ fiber to the home.


Latency != Bandwidth.

Reducing latency is much harder than increasing bandwidth.

At a certain point you're also limited by the speed of light; round-trip latency for halfway across the world cannot be physically less than ~200ms (unless our knowledge of physics advances and SoL is no longer a limit)


I think it was Microsoft that proposed an approach. They'd modify games to continually speculatively execute and render every user input. So that might be rendering you beginning to run left/right/forward/back, as well as jumping and shooting. When you actually change the input, a local device can switch streams and start speculatively executing all over again.

It probably works best if the game engine cooperates. But that's not necessary. You can just split processes on the OS and run each different bit of user input in a different process with no cooperation from the process. (Though I admit this might be tricky on current hardware and heavy games.) Given enough compute and bandwidth, you could do this continually.

In theory, with unlimited compute/bw this means you can have local latency (just the cost of input/stream switching) because you could speculatively execute every possible input to the game, all the time, out to the latency duration. In practice, it'll probably prune things based on the likely inputs and only speculate a bit out. This is probably enough to provide a smooth experience for most users who aren't playing competitively.

If you think about a game as a mapping from a limited set of user inputs to a 2D image, some optimizations start coming out, I suppose.
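
The client end of that idea reduces to a lookup from predicted input to pre-rendered frame; a paraphrased sketch of the concept (not code from the Microsoft paper):

    type PlayerInput = "left" | "right" | "forward" | "back" | "jump" | "shoot" | "idle";

    interface SpeculativeBundle {
      tick: number;
      frames: Map<PlayerInput, Uint8Array>;  // one encoded frame per predicted input
    }

    class SpeculativeClient {
      private bundles = new Map<number, SpeculativeBundle>();

      receive(bundle: SpeculativeBundle): void {
        this.bundles.set(bundle.tick, bundle);
      }

      // Called once the local input for `tick` is known; returns the matching
      // pre-rendered frame immediately (local latency), or undefined on a
      // misprediction, in which case the client waits for a server correction.
      present(tick: number, actual: PlayerInput): Uint8Array | undefined {
        const bundle = this.bundles.get(tick);
        this.bundles.delete(tick);
        return bundle?.frames.get(actual);
      }
    }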


Interesting approach!

But that sounds almost impossibly computationally expensive for 3D games and the like. Furthermore, most game inputs aren't discrete but continuous, making the problem even harder.

Do you have a link to the paper?


http://research.microsoft.com/apps/pubs/?id=226843

"able to mask up to 250ms of network latency"

They tested with Doom 3 and Fable 3. I don't recall the specifics but I'm gonna guess that the actions people take are really quite limited, so with a bit of work you can probably guess what they're going to do enough to make things playable.


Technically you can get it down to 85ms without going beyond any known physics; you just need to figure out a way to transmit information through the Earth rather than around it.

So the next big breakthrough in data transmission will be neutrino rays....
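
The rough arithmetic behind both figures, assuming ~2/3 c in optical fiber for the around-the-surface case and vacuum c for the hypothetical straight-through path:

    const c = 300_000;                 // km/s, speed of light in vacuum
    const fiberSpeed = 200_000;        // km/s, roughly 2/3 c in fiber
    const halfCircumference = 20_000;  // km, halfway around the Earth's surface
    const diameter = 12_742;           // km, straight through the Earth

    const roundTripAroundMs = (2 * halfCircumference / fiberSpeed) * 1000;  // = 200 ms
    const roundTripThroughMs = (2 * diameter / c) * 1000;                   // ≈ 85 ms
    console.log(roundTripAroundMs, Math.round(roundTripThroughMs));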


>>unless our knowledge of physics advances and SoL is no longer a limit

The latency of the human mind is around the same. There are lots of tricks like predicting the future game state that can result in a better user experience.


UI response time should be at most 100 ms [1], as anything more than that is very noticeably laggy.

Actual perception times are much lower than that, about 13ms [2]. You can see the difference for yourself by looking at a 30FPS (33ms) and 60FPS (16ms) video [3], and the effect is much greater when you're actually providing the inputs.

[1]: https://stackoverflow.com/questions/536300/what-is-the-short... [2]: https://newsoffice.mit.edu/2014/in-the-blink-of-an-eye-0116 [3]: http://www.30vs60fps.com/


> round-trip latency for halfway across the world cannot be physically less than ~200ms

Yes, but people have been playing online games for a while now and latency was always there. I don't see why it would suddenly become a problem.

They'll just choose the closest server (i.e. the one that is not across the globe) as they always did.

Whether the service is "rendering", "web" or anything else doesn't make a difference wrt. latency.


With local rendering, when you shoot a bullet you see the bullet shoot after local hardware latency (~50ms) and get a confirmed kill after local hardware latency + round-trip time (say ~50ms).

If the rendering is distant, the time until you will see that bullet shoot becomes 100ms instead of 50ms.


As long as the enemy player is remote, the enemy player position (and thus confirming a kill) will inevitably be delayed by the network latency (whether rendering is local or distant). That won't change.

The only thing that will change with remote rendering is that your own moves are also going to be delayed, which is certainly annoying, I agree. But, on the positive side, this ensures consistency between the view and the model; that is, you won't try to shoot at a player who is not actually where you're seeing him. That happens a lot with local rendering.


I've been hearing for 10 years that gigabit fiber is coming everywhere. It's taking way longer than they say, and it still does not resolve the latency issue if you are far from the server.


Light can only travel so fast, and routing and network congestion are a far bigger obstacle than bandwidth.


The limiting factor is going to be the speed of light rather than the bandwidth.


If we can somehow eliminate input lag.


Of course for some games it is a problem, but not for every game. Also, I hope the latency problem will be solved in the future, e.g. by placing more DCs around the world.


I'm not sure it's entirely solvable for some games, given the extra constraints imposed by remote processing. Gamasutra had a nice article[1] about responsiveness and lag a few years back that wasn't addressing this problem specifically, but it does put it in perspective a bit.

1: http://www.gamasutra.com/view/feature/3725/measuring_respons...


Are there any advantages over Guacamole? Possibly higher framerate?

http://guac-dev.org/


Guacamole is a text protocol. The screen is sent as PNG images in base64. It was built before websockets were compatibly implemented across browsers.


Any idea what the actual performance impact of that is?


For fullscreen 60FPS, it's probably going to increase the bandwidth requirements by an order of magnitude, which will lock out a bunch of users on less-than-ideal connections.


I'm surprised at the reactions in the comments. Both OnLive and Gaikai were doing this around 2008 and nobody seemed to give a shit.


Well, it kind of makes a lot of sense if you've got an ultrabook or something similar. Laptop graphics cards are very underpowered on the whole...

Actually, I could see this being handy while travelling. Sure, at home I've got a decent gaming rig. But it'd be nice to have decent quality gaming on a low-end laptop...

I think it's cool


The latest round of articles encouraging streaming from the cloud to your local client, with their focus on latency sensitivity, is very interesting because it might finally make things like high-performance, low-latency remote virtual desktops for development approachable.

Chrome Remote Desktop works OK and it's a sign of things to come in the field of virtual desktops. Right now, the state of the art for on-demand cloud remote development desktops to help facilitate intensive development environments, like IntelliJ / Visual Studio / Mathematica, on underpowered clients (i.e. a 12" MacBook) is to rely on proprietary protocols that barely work if the targeted remote machine is a Linux desktop.

Yes, I know about x2go etc, but I've had so-so experiences with it. Compared to streaming games, I wonder if there's a product in here somewhere.


So I just bought a new MacBook which, although amazingly thin, is also somewhat underpowered. I also own a beefy Mac Pro (the trash can), and the thought had crossed my mind that I might use remote desktop to utilize the power of my Mac Pro from the convenience of my new MacBook. For gaming, latency is a huge issue, but I could deal with 50ms latency when doing my work if it meant not having my system lag in other respects. I only wish there was an OS X equivalent.


You should try Steam streaming - you can run your full, regular desktop in it (by using fake passthrough apps), and that's how I often use Lightroom and SolidWorks from my MacBook in bed.

Coming from VNC and Remote Desktop, it's remarkable how low the latency is. In many cases, because I can get a significantly higher framerate by remote rendering, the latency is better than if I was doing stuff locally.

For games, it's worked pretty well with everything you'd be happy playing with a gamepad (e.g. GTAV, but not CoD).


I will do just that. Thank you!


How much work would be required to adapt this to use WebRTC with native H.264 instead of websockets with a javascript-based MPEG1 decoder implementation?


Probably not much. The WebRTC reference implementation[1] by Google and Mozilla is fairly easy to build and embed, assuming your Internet connection can handle a complete checkout of the Chromium repository. Unfortunately it only supports the VP8 codec for video. There is also OpenWebRTC[2] by Ericsson, but it doesn't work on Windows.

[1]: http://www.webrtc.org/native-code/development

[2]: http://www.openwebrtc.io/



Why WebRTC? I think that's more designed for Browser-to-Browser connections. You'd just need to stream an H.264 file using a <video> tag (and use a websocket for the input).


Streaming an unbounded file to a <video> tag can have fairly significant delay compared to webRTC. It's really more designed for one to many situations (such as livestreams or VOD) than one to one low latency.


How likely is it that we can actually bring the bandwidth and latency down to reasonable levels in reasonable conditions? High-def YouTube videos sometimes skip or buffer for me, and that's on cable broadband in a medium-sized American city, pulling from Google-backed servers. How do we make this good enough that most urban Americans can enjoy it without constant frustration?


Anyone have any idea how to stream audio with something like this? Would be awesome if that same process could stream the audio.


Hey so I currently work on this technology - check out https://www.x.io/ if you'd like to learn more. You can sign up for free and get some free minutes to try it out.

We can run arbitrary windows apps and stream them to your web browser, including videogames.


The "vnc" suffix on the name could be a bit confusing to some because it looks like this project does not use any existing VNC code or libraries using RFB protocol. This could come in handy for streaming applications to my mobile devices over LAN if I need to control something easily.


Yeah, that's what I thought too.


Nice hack! WebGL decoding looks pretty effective.

I think the major challenge with remote gaming is input latency.

Even here on a local network, it looks like there is at least ~200ms between input and frame, which is a blocker for many game types.


This is awesome. I can see it being handy for demo'ing things. :)

One thing to note: It gets a bit confused about where your mouse is if you try to stream one monitor in to a browser on a second monitor.


I'm rather curious if this could be used (or changed to support) two people playing a local co-op game. Would open up a lot of games in my library for play with a friend.


It probably could be used as-is. Some people use screen sharing tools like Teamviewer to do just that, by the way.


Interesting, I hadn't heard of Teamviewer as an option for this - will check it out. Thanks!


Hey guys, what about doing this on a Chromecast? Since it's a browser app, we just need to know if the Chromecast can handle the decoding.


You would still need to send input to the app somehow, which possibly could be done with a phone/tablet set up with just the controls and no video.


I highly recommend the Xbox One streaming feature that's coming with Windows 10.

It allows you to play your Xbox games on your PC.


Sad to see everyone late to the party on this. There was a great company, OnLive, doing this for the past 5 years. Not browser based in their case, but they were the thought leaders. They recently closed shop and were 'acquired' by Sony. I believe that's what spurred this recent trend of articles.


Anyone have a sense of how well this setup would do w/ an Oculus Rift?


I imagine the network latency would be more than enough to induce motion sickness as the display would not keep up with your movements.


Yep, big time. The latest Rift SDK added depth-aware timewarp though, so if you could stream the depth buffer as well you could get something maybe non-sickening, but full of artifacts (transparent objects aren't even written to the depth buffer; when a nearby object is occluding the background and you peek around it in a warped frame, it gets filled in a way that can stand out; specular highlights are all wrong in warped frames; lossy compression of the depth buffer for streaming would cause glitching on the silhouettes of objects).



As a DK2 owner, I would guess that it's not very nice; any amount of lag or stuttering in the Rift is catastrophic to the experience, IMO.


- This supports one connection, right?

- How feasible would it be to host multiple client connections (each running their own instance of the hosted program) ?


Bravo!


This is the coolest thing ever


Clickbait much?


I'd like to see GTA III & Vice City (or a good clone) running in the browser, client side via asm.js or wasm.

Edit: Not that they couldn't do it with GTA V I just prefer those games over it. Saints Row the Third or Just Cause 2 are also acceptable.


Well, we can emulate an x86 PC: http://win95.ajf.me



win95.ajf.me is em-dosbox.


This is pointless and nothing new. Remote desktop in the browser has never been an issue, so obviously playing games isn't an issue either.

The pointless part is that he's connecting to a "server" that's next to him, not in a DC in London. I guess if you really wanted to play GTA V on the toilet, on your phone, it solves the issue - but really...


Nothing is new. It's simply a case of joining the dots.

1. AWS can run Windows and therefore Steam. Nothing new.

2. Steam home streaming works over VPN. Nothing new.

3. VNC can stream games. Nothing new.

etc.

But put all these pieces together and the first person to wrap it up and make sure all the legal parts are in place might make a fortune.


I agree with your point - as hackers, we are often too quick to dismiss something as "nothing new" because we have seen something similar before ("I don't need Dropbox, I have rsync").

That being said, in this case these three dots have been joined up multiple times already, so there really is nothing new here. The interesting challenge is in reducing latency so that remote games are actually playable when the server is not on the same LAN.


This is my point, it's been done. The problem that still exists is the lag you'll get when not playing at your house, so nothing has changed. Apparently people don't get that.

Now, if he gets this running between his house and work, I'll give him some credit.


I'm sure he's holding his breath waiting for your credit.


great riposte.


What are you talking about, these pieces have already been put together multiple times.

http://www.pcworld.com/article/2359241/how-steam-in-home-str...

http://shield.nvidia.co.uk/play-pc-games/

http://store.steampowered.com/streaming/

This guy isn't running it from AWS though, like I said. He's running it from a "server" that's right in front of him. By the way, have you ever tried RDCing a remote Windows machine and done some trivial tasks in the GUI?


I actually have an AWS machine that runs Steam + VPN and I can stream games to my home laptop; it works really well.

I also have non-Steam games that I've added, like Star Wars: The Old Republic, StarCraft II and Diablo III, and they all work well using the same system.

The only glitch I've noticed is that if steam starts before the VPN, I have to RDC in and restart steam so it picks up the VPN network. (If anyone has any idea how to fix this I would be very grateful)


what AWS instance do you use?


I'm using one of the OTOY/ORBX (ami-8ff2b9e6) instances (I tried setting up my own, but never got the graphics to work right).


I used the old version of this guide, the new one may work better but I already have a working instance:

http://lg.io/2015/07/05/revised-and-much-faster-run-your-own...


Community AMI

ec2gaming or ec2 gaming


>By the way, have you ever tried RDCing a remote Windows machine and done some trivial tasks in the GUI?

I have, and the only real issues with latency/bandwidth were over satellite connections.


Yeah, I regularly log in to machines in France and Canada and it works fine (on a 50mbps connection, at least).


These are not running in the browser, though. And previous Remote Desktop solutions that are viewable in a browser can't handle 60fps (let alone anything >10fps for full motion).


> Remote desktop in the browser has never been an issue, so obviously playing games isn't an issue either.

at 60 fps?

I few years ago I was lucky to get 3 fps from a local server.


Facebook wasn't new, Twitter wasn't new... Sometimes reusing existing components, and making them better is enough to do something great. It doesn't need to be always original work.


> This is pointless and nothing new. Remote desktop in the browser has never been an issue, so obviously playing games isn't an issue either.

That is completely wrong. Remote desktop doesn't really care about latency or framerate. The vast majority of updates will change only a few pixels (e.g. moving the cursor or typing a letter), and when fullscreen updates do occur (switching apps), it's okay if it takes a visible fraction of a second for the update to sweep across the screen.

Gaming needs to be high-speed, it needs to be low-latency, and it needs to be seamless. 30FPS is an absolute rock-bottom minimum, and gamers are increasingly demanding 60FPS (or more). Some specific genres, like rhythm games, are so sensitive to timing that you can tweak their input-detection settings based on the latency of your television.


There is definitely a use-case for being able to play wherever without being tied to a location even when at home. Plus the fact that you don't need to install anything on the client machines.



