We might just have seen the future of PC gaming DRM: paying per hour instead of making a one-off payment.
There's one problem though, and it's latency: even 50 ms will feel very laggy. We need more decentralized data centers! With a data center in each city you could get latency down to less than a millisecond.
I think the next digital revolution will be low latency, and a host of new services coming with it.
> We might just have seen the future of PC gaming DRM: paying per hour instead of making a one-off payment.
While I would hate to be locked into an ecosystem that was pay-per-usage, this article is just amortizing the cost of hardware over time.
The PC gaming community is (I hope) fairly intolerant of any sort of lock-in. If Steam went the way of pay-per-hour there's enough competition that we'd see a transition to other services.
Unless Valve goes through some other route to give themselves an advantage (e.g., buy Xfinity Valve package and get unlimited access to Valve Streaming services, the rest of the internet is low-bandwidth/high-latency), there's not much to worry about.
I don't think lock-in would be an issue. There are, however, other issues that I think will be very hard to solve.
This is a quote from a salesman trying to sell me 3G Internet:
"Why on earth do you need more data than this!? You know downloading movies is illegal, right!? Oh, and you will also get a free iPad."
The problem is that people reject new technology like fiber networks because they don't see any use cases beyond e-mail and Facebook.
My depressing thought about ISPs is that the only reason they use fiber instead of copper is that fiber is much cheaper over long distances.
On the other hand, immersive VR headsets à la Oculus are just around the corner, and their extremely low latency requirements pretty much make streaming a dead end.
IIRC Carmack has written several posts on how they are chasing and killing latency in everything from USB input to LCD front buffering.
I think they started with around 100 ms of response time and have managed to get it down to around 20 ms. Hopefully they will be able to get it even lower, so that one millisecond of network latency won't matter.
Think about getting a VR headset like the Oculus where the only thing you need to do is plug it into the network, and then having access to virtually all games available for an hourly fee!?
Even at 250 FPS, waiting for the next frame takes 4 milliseconds. If we can get data center latency down to 1 ms, it's no longer an issue in the context of VR, or gaming in general.
You realize that 99.9% of the world cannot reach an AWS data center in 1 ms based on the speed of light? And that's assuming routing takes no time, which is also false.
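For a sense of scale, here's a rough back-of-the-envelope sketch of that speed-of-light limit (assuming signals in fiber travel at about two thirds of c, and ignoring routing and queuing entirely):

```python
# Rough back-of-the-envelope check of the speed-of-light argument.
# Assumes signals in fiber travel at about 2/3 the speed of light in vacuum.

SPEED_OF_LIGHT_KM_S = 299_792        # km/s in vacuum
FIBER_FACTOR = 2 / 3                 # typical refractive-index penalty for fiber
fiber_speed = SPEED_OF_LIGHT_KM_S * FIBER_FACTOR  # ~200,000 km/s

def max_distance_km(round_trip_ms: float) -> float:
    """Farthest a data center can be for a given round-trip time,
    ignoring routing, queuing, and serialization delays entirely."""
    one_way_s = (round_trip_ms / 1000) / 2
    return fiber_speed * one_way_s

print(max_distance_km(1))    # ~100 km: the data center has to be in (or very near) your city
print(max_distance_km(50))   # ~5,000 km: roughly a cross-continent round trip
```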
They tried it in the early 2000s, when the network throughput and latency just wasn't what it needed to be. Now we have much better networks, more available cloud computing, and a populace that's more used to streaming things. I'm not sure people would accept it just yet, but I don't think OnLive would fail quite so spectacularly now.
Yes, they didn't get going until then, and ended up running... 4 years total?
For strategy games and such, I'm sure streaming will always be viable. As we move into the VR realm, even a tiny amount of latency won't be acceptable. So we're always going to need local rendering for action games.
Games are going to have 100+ gigabyte install footprints later this decade, and terabytes when 8K gets here. Phones, tablets, VR headsets and cheap computers - especially cheap Steam boxes - will have to rely on streaming.
The GPU needed to play games on a VR headset doesn't fit on it; that's why they plug into a computer. And for something that fits in a phone/tablet, a 64 GB microSD card is $30. 64 GB on something the size of a fingernail for $30. Ten years ago a 60 GB 3.5" HDD cost more than that. Storage expands and gets cheaper at a crazy fast rate.
A solution like this is a clever hack because it takes advantage of an immense economy of scale, provided by a company with really deep pockets operating a multinational pool of servers that, on balance, still see a wide variety of use cases, allowing for a lot of distributed load.
Imagine if every user was demanding that same level. As I write this, Steam is counting over 8 million players logged on. Now imagine trying to guarantee them all gaming-level real-time performance (fun note here: that GRID card used for the AMI costs $2,000 alone), and doing it all at a reasonable price.
Even our Amazon hack isn't doing that. Fifty cents an hour doesn't sound like much, but the average gamer is putting in 22 hours a week, and some five million enthusiasts are regularly pulling 40. That's anywhere from $40-80 a month. Not to mention the cost of the games themselves.
OnLive was trying to offer this for $15 a month. And originally, they were even footing the cost of the games, aiming for a Netflix/Gametap approach (the latter of which also failed, I might add).
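A quick sketch of the arithmetic behind that $40-80 a month figure, assuming the 50-cents-an-hour rate and roughly 4.3 weeks in a month:

```python
# Rough monthly bill at $0.50/hour, assuming ~4.33 weeks in a month.
PRICE_PER_HOUR = 0.50
WEEKS_PER_MONTH = 52 / 12

def monthly_cost(hours_per_week):
    return hours_per_week * WEEKS_PER_MONTH * PRICE_PER_HOUR

print(monthly_cost(22))  # ~$48/month for the "average" 22-hour-a-week gamer
print(monthly_cost(40))  # ~$87/month for a 40-hour-a-week enthusiast
```

Either way it lands well above OnLive's $15 a month, before you've bought a single game.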
Something like this is only economical if you plan on using it just for the occasional single-player game a few times a year.
If you use it to play games regularly at even modest frequencies (20+ hours per week), the 50c per hour will quickly accumulate into a recurring monthly bill of $40+.
At that point you might as well invest in a decent gaming desktop and stream from it instead. An i3 + 750 Ti based system can match the performance of this setup and would cost less than $500 all-in, which is about the cost of one year of streaming from this setup at 20 hrs per week. You'll get a much better experience due to the much lower latency and not having to worry about penny-pinching on every session.
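A rough break-even sketch, assuming ~$500 for the desktop and the 50c/hour streaming rate (ignoring electricity and game prices):

```python
# Break-even between a ~$500 local gaming desktop and $0.50/hour streaming.
DESKTOP_COST = 500
PRICE_PER_HOUR = 0.50
HOURS_PER_WEEK = 20

breakeven_hours = DESKTOP_COST / PRICE_PER_HOUR      # 1000 hours of streaming
breakeven_weeks = breakeven_hours / HOURS_PER_WEEK   # ~50 weeks, i.e. about a year
print(breakeven_hours, breakeven_weeks)
```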
Probably not in terms of pure compute power, my wording is definitely a bit off there.
However, the overall gaming experience you get with the local streaming setup will probably be similar to, if not better than, the remote streaming setup regardless of absolute hardware power, because an i3 + GTX 750 Ti can definitely handle most games at 60 fps in 720p (and a lot of them at 1080p, in my experience). So the comparison ends up being between streaming 720p with 50 ms latency due to bandwidth constraints over a WAN vs streaming 720p (and some 1080p) with <5 ms latency within a LAN.
According to this, the K520 supports up to 16 concurrent users while sporting 2x GK104-based GPUs. When fully utilized, I figure each user won't be getting much more power than a single midrange GPU, and I can't imagine why Amazon would choose not to keep them fully utilized either. If you have any sources saying otherwise, I'd love to see them.
Regarding the CPU, when creating an instance you can see this: "G2 instances are backed by 1 x NVIDIA GRID GPU (Kepler GK104) and 8 x hardware hyperthreads from an Intel Xeon E5-2670". According to [Intel's product page](http://ark.intel.com/products/64595), the processor only has 16 hyperthreads, so that's 2 users per CPU. My reasoning may be wrong though; I'm not a virtualization expert at all.
120ms latency in network communication of game state is different from 120ms in input latency. We have client-side prediction in engines to ensure that the game world responds to input and view changes in soft real-time even though the server trip is still unfinished.
If your mouse cursor or terminal was continually an eighth of a second behind your input, you'd get pissed fast.
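For anyone unfamiliar with the technique, here's a minimal, hypothetical sketch of the idea (names and structure made up, nothing engine-specific): apply input locally right away, keep the inputs the server hasn't acknowledged, and replay them on top of whatever authoritative state arrives a round trip later.

```python
# Minimal client-side prediction sketch (hypothetical, heavily simplified):
# apply input locally immediately, keep unacknowledged inputs, and when the
# authoritative server state arrives, rewind to it and replay what the server
# hasn't seen yet.

class PredictingClient:
    def __init__(self):
        self.position = 0.0          # locally displayed state
        self.pending = []            # (sequence, move) inputs not yet acknowledged
        self.sequence = 0

    def on_local_input(self, move: float):
        self.sequence += 1
        self.pending.append((self.sequence, move))
        self.position += move        # predict immediately, no waiting for the server

    def on_server_update(self, server_position: float, last_acked_seq: int):
        # Server state is authoritative but a round trip old; drop acknowledged
        # inputs and replay the rest so the view stays responsive and consistent.
        self.pending = [(s, m) for (s, m) in self.pending if s > last_acked_seq]
        self.position = server_position
        for _, move in self.pending:
            self.position += move
```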
There was a post here sometime in the last year about a Microsoft Research tech demo which abused bandwidth to send all possible futures as well as the current screen, enabling client-side prediction to again hide one leg of the trip.
These are the relevant technical details from the PCMag article:
> Microsoft's DeLorean system takes a look at what a player is doing at any given point and extrapolates all the possible movements. It streams a rendering of these from a server to a player's console. Thus, when a player decides what he or she plans to do, that scene—for a lack of a better way to phrase it—is already ready to go.
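In other words, roughly this (a hypothetical sketch of the idea, not DeLorean's actual implementation):

```python
# Hypothetical sketch of the speculative-rendering idea: the server renders one
# frame per plausible next input and sends them all; the client picks whichever
# matches what the player actually did, hiding the server->client leg of the
# round trip at the cost of extra bandwidth.

POSSIBLE_INPUTS = ["forward", "back", "left", "right", "idle"]

def server_tick(game_state, render):
    # Render a candidate frame for every input the player might press next.
    return {inp: render(game_state, inp) for inp in POSSIBLE_INPUTS}

def client_tick(candidate_frames, actual_input):
    # Display the pre-rendered frame matching the real input immediately,
    # instead of waiting a full round trip for the server to react.
    return candidate_frames.get(actual_input, candidate_frames["idle"])
```

The obvious cost is bandwidth: one candidate frame per plausible input every tick, instead of a single frame.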
I don't think competitive is the right word (I'm leaning towards "engaging"). Even if I'm casually gaming, it can be a turn-off to notice timing issues. More so if I have to adjust my behavior to accommodate them.
Well, when I was younger I was very particular about these things (CRT vs early TFT, etc.), so I would not say it's not engaging at all. It's just that you will probably get frustrated very quickly when trying to outperform players with direct access in something like an FPS.
I tried to play GTA V when the first article came out, and while it wasn't unplayable, the latency was frustrating at best. Driving was futile, and I have a much, much lower latency to AWS than 120 ms:
```
$ ping -c 4 sdb.amazonaws.com
PING sdb.amazonaws.com (176.32.102.211) 56(84) bytes of data.
64 bytes from 176.32.102.211: icmp_seq=1 ttl=239 time=7.35 ms
64 bytes from 176.32.102.211: icmp_seq=2 ttl=239 time=7.51 ms
64 bytes from 176.32.102.211: icmp_seq=3 ttl=239 time=7.11 ms
64 bytes from 176.32.102.211: icmp_seq=4 ttl=239 time=7.15 ms
--- sdb.amazonaws.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 7.119/7.285/7.512/0.180 ms
```
7.5 ms of latency will not degrade your gaming experience, I'm pretty sure. If you're gaming at 60 fps, and move your mouse right after a new frame is displayed, the mouse movement won't be visible until the next frame, 16.66 ms later. And 60 fps feels smooth to me.
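The arithmetic, for what it's worth (a quick sketch):

```python
# Frame period vs. network round trip.
def frame_period_ms(fps):
    return 1000 / fps

print(frame_period_ms(60))   # ~16.7 ms between frames at 60 fps
print(frame_period_ms(250))  # 4 ms even at 250 fps
```

A 7.5 ms round trip is well under one 60 fps frame period, so in the worst case the reaction shows up roughly one frame later than it would locally.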
On my home internet connection, pinging, for example, google.dk gets me a response time of a few milliseconds, and an HTTP GET request for the root URL has the same latency. But if I do an HTTP GET request for a search, the latency is much higher.
I think the google.dk/com main page (and probably those of other heavily visited sites) is cached by ISPs, so you don't necessarily reach Google when you ping or HTTP GET the root domain, but rather some network cache device at your ISP, between you and Google.
So be careful trusting that ping latency necessarily equals HTTP GET latency, or latency for some other request, to a server.
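If you want to compare these yourself, here's a rough sketch using only Python's standard library (google.dk is just the example host from above; a real ICMP ping needs raw sockets, so a plain TCP connect stands in for it here):

```python
# Rough comparison of "ping-like" latency vs. an actual HTTP request to the
# same host, using only the standard library.

import socket, time, urllib.request

HOST = "www.google.dk"

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print("%s %.1f ms" % (label, (time.perf_counter() - start) * 1000))

def tcp_connect():
    socket.create_connection((HOST, 80), timeout=5).close()   # handshake only

def http_get_root():
    urllib.request.urlopen("http://%s/" % HOST, timeout=5).read()  # may come from a nearby cache

timed("TCP connect:", tcp_connect)
timed("HTTP GET /: ", http_get_root)
```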
That is weird. It could be worth investigating whether you have any other bottlenecks that are increasing the latency. Monitors, for example, usually have a response time of around 10-20 ms from input to when you see the image.
Input > PC > Amazon > PC > monitor
Maybe there's something in your PC that is adding to the lag, like a slow software renderer.
It could also be that the machine answering the ping is closer, due to anycast routing.
Can anyone come up with a practical way to measure the actual lag from input to screen render!? For example, using a high-speed camera.
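One approach, as a sketch (pygame here is just an example dependency, and the numbers are arbitrary): make the whole screen flip from black to white the instant a key event arrives, film the keyboard and monitor together with a high-speed camera, and count the frames between the physical key press and the screen changing color.

```python
# Flip the screen color on every key press so a high-speed camera can count
# frames between the physical press and the visible change (pygame is just an
# example dependency here).

import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
color, clock = (0, 0, 0), pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            # Toggle between black and white on every key press.
            color = (255, 255, 255) if color == (0, 0, 0) else (0, 0, 0)
    screen.fill(color)
    pygame.display.flip()
    clock.tick(250)  # render as fast as reasonable so this loop isn't the bottleneck
```

Multiply the counted frames by the camera's frame duration and you get the full input-to-photon lag, including the OS, the render path, and the display itself.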
The decoding was in hardware, and I have dual AMD 7850s, so I don't think that was the issue (and I wasn't trying to run at the full 4K either).
There was a very noticeable input -> display lag; according to telemetry on Steam it was ~40 ms total, which is fine for a lot of games but really noticeable and annoying for something like GTA. I mean, I've played Civ 5 over VNC before, and 40 ms would be a godsend compared to that, but it was still more than playable.
Latency will only be an issue if it's non-constant - most gamers can adjust for lag if it's expected.
I think the underlying idea here is consistency. If I point and click on something that was merely a "mirage" due to network effects, then this is somewhat of a bad experience.
> If you have a general latency of 120ms, then the maximum number of frames per second which react to distinct instances of input is 8.
If you have a round-trip latency of 120 ms, then the maximum number of action-result-reaction cycles per second is 8, but it's possible to have distinct instances of input responding to information received each frame, whether or not that frame accounts for input from the previous frame. You can have distinct instances of input -- and frames that react to them -- as fast as you can show frames and humans can process them. The frames showing the reaction will be delayed by the latency + human response time from the information they react to, but the frequency of those frames is pretty much unconstrained by latency.
Which is why, again, slow frame-rate and high latency are orthogonal (both create the perception of "slowness", but they are different and independent effects.)
The only real relation is that a low frame-rate can mask high-latency, as if the latency (including human response time) is less than time between frames, the latency becomes imperceptible. So, yeah, 8 FPS becomes the frame rate necessary to completely mask 120ms latency.
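A tiny illustration of that independence, with the numbers from this thread:

```python
# Frame rate and latency are independent: with a 120 ms round trip on a 60 fps
# display you still get 60 distinct frames per second; each one just shows the
# consequences of input from several frames earlier.

FPS = 60
FRAME_MS = 1000 / FPS        # ~16.7 ms per frame
RTT_MS = 120

frames_behind = RTT_MS / FRAME_MS        # ~7 frames behind at 60 fps
reaction_cycles_per_s = 1000 / RTT_MS    # ~8 full action->result->reaction loops per second

print(frames_behind, reaction_cycles_per_s)
```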
> If you have a round-trip latency of 120 ms, then the maximum number of action-result-reaction cycles per second is 8, but it's possible to have distinct instances of input responding to information received each frame, whether or not that frame accounts for input from the previous frame. You can have distinct instances of input -- and frames that react to them -- as fast as you can show frames and humans can process them. The frames showing the reaction will be delayed by the latency + human response time from the information they react to, but the frequency of those frames is pretty much unconstrained by latency.
Got it. That makes sense. I was putting the user more in the mindset of a web-app user, where almost all interaction is "action-result-reaction," but when I think about gaming, I am giving a pretty much constant stream of input, coming from a joint flow of muscle memory, my own desires for the outcome, and the visual and audio input.
So, sure, I will take many more than 8 actions, and all of them will just be delayed.
120 ms doesn't limit the game to 8 FPS though - it means that the image is at ~60 FPS and ~8 frames in the past.
120 ms is perfectly fine for RTS and most RPGs, or even MMOs... though MMOs would probably get another ~30 ms increase in latency because of the connection from AWS to the game servers.
If it weren't a direct-feedback game with mouse-look, I'm sure it wouldn't matter as much - something like point-and-click, a click-to-move RPG, an RTS, etc. If it's mouse-look or aiming, or even a key press to move, it'd be noticeable. It could still be great for plenty of games, though not all.