It's an idea that has been tried and has failed many times over (GPU-hosted games in the cloud: OnLive, Gaikai, etc.). The round-trip lag from the time you press a button until the time you see a change on the screen is just unacceptable for most twitch-oriented games. The JS/Web implementation doesn't change anything about the fundamentals. I don't know why Mozilla is bothering with this approach vs. Emscripten/asm.js, which would fare much better (but is still not likely to succeed for AAA games).
You have people complaining about touch lag on Android devices, which is on the order of 100ms, and you're telling me you're going to send a packet with a controller movement, render a frame, compress the frame, ship it back, and display it, all in under 100ms? The demo shown is a non-interactive cut-scene, so no one's going to notice the input lag. When OnLive's latency was actually measured, it came out around 150ms, which is totally unacceptable.
The true test would be running something like a CoD, BF4, or SF2 tournament and getting pro gamers to evaluate the system.
AAA titles are never going to be run like this economically. I'm playing Battlefield 4 right now, a huge game that brings my current PC rig to its knees. Why would anyone want to suffer through that with 150ms of lag and compression artifacts?
The failures that happened were economic in nature (OnLive), and if you call selling a ~50-person company for ~$500M a failure (Gaikai), then I'd love to hear about your successes.
Apart from finding a way to profit from this there are no big problems left to solve.
My idea of success is a service that works for gamers. The fact that someone flipped a company is irrelevant; there are plenty of companies that turned out to be economic failures and were dumped on investors. The fact that it was an economic failure points to (a) gamers not flocking to it and (b) it not being profitable to run or scale.
There are more practical explanations for OnLive's failure, like the prices they had to pay for content, which led to a very modest library of titles.
Gaikai is going to roll out with the PS4, and only then can we judge how well it is doing. It has definitely solved the content issue, so we can see whether your technical angle holds.
Most multiplayer games are quite playable over the network. The difference is that the graphics are rendered locally and the world state is streamed over the network.
A fast enough pipe (100Mbps fiber is quite common and cheap here in Europe) could deliver good results, even if the whole world is rendered and then streamed as video.
Multiplayer games run on a synchronization/prediction model; most are not driven by a purely server-side world state. The entire game simulation and logic runs locally, and the player's inputs are sent to the server; the server then collects all the inputs, calculates differences to the world state, and sends back the diffs, with predictions based on latency. These diffs may differ from the local state, so corrections are applied. In the majority of cases the differences are minor enough that the player doesn't notice. When they are severe, the player notices hitbox inaccuracy or, worse, rubberbanding as his actions are "snapped back".
So when you're playing an FPS and you press a button to shoot, the local game logic and physics compute and display the result immediately (the firing animation plays, the sound starts). A short delay later, the server confirms a kill.
Even with this fairly sophisticated model, you still want <50ms pings. Video streaming and running the entire game in the cloud just won't work for games that require rapid hand-eye coordination.
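Roughly, the client side of that loop looks something like this (a minimal sketch; applyInput, applyDiff, initialState, and the message format are made up for illustration, not any particular engine's API):

    // Minimal client-side prediction / reconciliation sketch (illustrative names).
    let world = initialState();        // made-up game-specific state constructor
    let pending = [];                  // inputs sent but not yet acknowledged
    let seq = 0;
    const socket = new WebSocket('wss://game.example/sync');  // hypothetical server

    function onPlayerInput(input) {
      input.seq = seq++;
      world = applyInput(world, input);        // predict locally, render immediately
      pending.push(input);
      socket.send(JSON.stringify(input));      // tell the server what we did
    }

    socket.onmessage = (msg) => {
      const update = JSON.parse(msg.data);     // authoritative diff from the server
      world = applyDiff(world, update.diff);
      // drop inputs the server has already processed...
      pending = pending.filter(i => i.seq > update.lastProcessedSeq);
      // ...and replay the rest on top of the corrected state; a large mismatch
      // here is what the player experiences as rubberbanding
      for (const input of pending) world = applyInput(world, input);
    };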
I don't know. Working with a shell is very uncomfortable at 170 ms ping. (I use mosh, which prints typed characters right away without waiting for a response from the server, but other feedback, such as command output or cursor movement, is still very unpleasant at that lag.)
I agree with this. That's why I spent so much time optimizing Gate One's Terminal->HTML encoder. On my i7 laptop the round-trip time for keystroke->screen update hovers around 10-20ms on localhost (remember though: My laptop is processing both ends at the same time).
On actual server hardware that "baseline latency" is about 5-10ms. So for any given client connecting to Gate One the response time for keystrokes should be at most 10ms + whatever the connection latency is.
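If anyone wants to eyeball that keystroke->screen round trip themselves, a crude probe over a WebSocket echo is enough. This is just an illustration, not Gate One's actual code, and the echo endpoint is hypothetical:

    // Crude keystroke round-trip probe: timestamp each keypress, bounce it off
    // an echo endpoint, and log the elapsed time when the reply arrives.
    const ws = new WebSocket('wss://example.com/echo');   // hypothetical echo server
    const inflight = new Map();
    let nextId = 0;

    document.addEventListener('keydown', (e) => {
      const id = nextId++;
      inflight.set(id, performance.now());
      ws.send(JSON.stringify({ id, key: e.key }));
    });

    ws.onmessage = (msg) => {
      const { id } = JSON.parse(msg.data);
      const start = inflight.get(id);
      if (start !== undefined) {
        console.log('round trip: ' + (performance.now() - start).toFixed(1) + ' ms');
        inflight.delete(id);
      }
    };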
Yes, the same technology is in Autodesk Remote, which has been out since July and allows you to do exactly that. The JavaScript component is not out yet; it is just PC to PC or PC to iPad right now.
Has anyone had any luck? I've set everything up and followed their instructions, but the URL they give you to access the web interface keeps timing out for me.
Don't get me wrong: this is a cool technological achievement if it can deliver as promised.
However, when they originally announced this in May, my impression was that this was going to be open sourced and a part of Firefox. I found that to be really exciting.
It turns out it's just another business trying to show us the future. Nothing wrong with that, but it doesn't excite me as much.
Also, Brendan Eich is an advisor to Otoy, the company behind ORBX.js. Is it just me, or does anyone else think it's a conflict of interest for him to market a for-profit service/product via Mozilla? It's not clear what exactly Mozilla's involvement in this project is, or what they get out of it besides a broader web ecosystem.
If it weren't for Mozilla creating broadway.js, ORBX.js would never have happened. Andreas' work on this library was the key inspiration for ORBX.js. Since May, Mozilla has helped us optimize the JS code (which was key post-FF22, when the JS VM changed), and is helping us move the decoder entirely to the GPU in WebGL 2. I think at some point we would like to open source the older ORBX.js codecs as we iterate on this first version, but even that doesn't make much sense until we get a stable file format for video. Right now ORBX.js is tuned for live streaming. That will change with v2, which we're targeting for early next year, with compression close to HEVC - see http://aws.otoy.com/docs/ORBX2_Whitepaper.pdf
This sort of stuff is just not very exciting to me. Latency is very important in gaming. Unless the servers are in the next room, the latency will probably be crappy. Companies have enough issues launching AAA titles these days without streaming them. It'll be like the SimCity launch every time.
The only really exciting thing going on in gaming (for me) is the Oculus Rift.
Outside of gaming maybe there are other use cases. Given how powerful and cheap hardware is, I just have a hard time believing it'll take off.
OTOY’s CEO Jules Urbach demo’ed an entire Mac OS X desktop running in a cloud VM sandbox, rendering via ORBX.js to Firefox, but also showed a Windows homescreen running on his Mac — and the system tray, start menu, and app icons were all local HTML5/JS (apps were a mix ranging from mostly local to fully remoted, each in its own cloud sandbox).
Personally I find that much more interesting than anything to do with gaming.
The demo videos seem to suggest it works at 60Hz, including 3D and video. If true, it's a bit of a game-changer: I could move even more stuff onto my home server (assuming we get the bits, and not just this cloud BS).
But for the billions of people who don't have 100Mbit to their house, does it still work? I would like it, but at my winter house in Spain there is no way in hell to get decent bandwidth, and that will remain so unless something big happens with EU regulations. Ten years, maybe?
Has anyone actually had any luck getting this working? Seems like such a revolutionary idea that I'm dying to try it, but no luck so far.
It took me a few tries (and a couple of hours) to get one of their preconfigured AMIs provisioned and to excruciatingly extract the GUID from Win2k8 (people actually work this way?), but now their weird HTTP bouncer endpoint doesn't seem to be connecting at all.
http://render.otoy.com/forum/viewforum.php?f=70
I started a thread when I could not find the AMI (honestly confusing that you must launch it through the AWS marketplace outside of the console), but now I've updated it with my current problem. Perhaps we should start a new thread for this issue.
You have two choices: supply your own GUID in the metadata, which overrides the randomly assigned one, or look in RDP/SSH for it (which is more work than needed; I highly recommend adding your own GUID so it doesn't need to be copied or grabbed over RDP/SSH).
The nice thing about Firefox is that you can stop animated GIFs by hitting ESC. There is also an about:config setting that will turn them off entirely. I personally find animated/video graphics annoying when trying to read.
So VNC...? But in your browser? I don't see how making VNC work in the browser is 'revolutionary'. What does it bring to the table over a standalone VNC client?
Yep, and the web is exactly the same as green screen terminals.
Instead, imagine VNC, but programmable with Javascript across applications in a standard way.
Imagine using a remote app, but when a particular trigger occurs (think a special button on the client), it opens up a new connection to another server and splices that connection into the view, so you can use apps on the two servers at the same time.
But there is plenty more: JavaScript can inspect individual frames and use them as triggers for other actions.
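Something along these lines, say (the code below is entirely hypothetical, not ORBX.js's actual interface; it assumes the decoded frames are drawn into a 2D canvas and just shows the shape of the idea):

    // Hypothetical sketch: poll the canvas the remote stream is drawn into and
    // fire an action when a watched region of the frame changes.
    const canvas = document.getElementById('remote-view');   // assumed canvas element
    const ctx = canvas.getContext('2d');
    let lastSample = null;

    function checkFrame() {
      // sample a small corner of the most recently drawn frame
      const pixels = ctx.getImageData(0, 0, 16, 16).data;
      const sample = pixels.join(',');
      if (lastSample !== null && sample !== lastSample) {
        // app-specific reaction; spliceInSecondSession is a made-up helper
        spliceInSecondSession('wss://other-server.example/stream');
      }
      lastSample = sample;
      requestAnimationFrame(checkFrame);
    }
    requestAnimationFrame(checkFrame);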
Yeah, except video acceleration has more of a chance of working reliably in this than in X Windows (it's a JOKE people! I'm not really trying to have a discussion about how video acceleration in Linux really does work now, finally, this time for good. Really.)
Anyway, I'm not sure what point you are trying to make.
Yes, all remote display protocols are similar at some level.
The fact that this is programmable via Javascript and runs in the most widely deployed client app ever made (ie, a browser) is a fairly significant difference though.
I don't think anybody said that this is revolutionary, but it is really evolutionary. This technology has a lot of areas where it can and will be applied.
From the TechCrunch link (which was in the article):
>A single GPU, Amazon argues, can support up to eight real-time 720p video streams at 30fps (or four 1080p streams).
Seriously? I have been working on X11 support in Gate One for a while now, and my laptop, with absolutely ZERO GPU acceleration, can deliver/encode 720p, 30fps video to a browser at around 5% CPU utilization.
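For scale, those two Amazon figures work out to roughly the same pixel throughput, so the quoted limit is presumably just encode throughput (back-of-the-envelope):

    8 x 1280 x 720 x 30 fps  ~= 221 Mpixels/s
    4 x 1920 x 1080 x 30 fps ~= 249 Mpixels/s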
It's not clear to me from the announcement which parts are executed server-side and which client-side.
Companies like iSwifter have tried to do server-side-rendered, streamed Flash games for years with limited success. The local machine in that case simply transmits input and displays video.
I think the difference here is that there will be some client side computation as well, but I'm not sure how much. If the GPU is in the cloud, that seems to indicate that they are bypassing WebGL, which would provide access to the local GPU. So my guess is that the JS does the typical setup work of a CPU in a rendering pipeline (setting up the scene, constructing draw batches), transmits it to the cloud GPU, and then transfers the rendered frame to the local GPU for display.
For interactive applications, it seems like the CPU-cloud GPU latency could be a deal breaker, though John Carmack famously said that it was faster to ping Europe from the US than to draw a pixel to the screen.
It's rendered server-side. The decoding is done entirely in JS, so no plug-ins, not even a <video> tag (try doing streaming with that; the latency is +100 ms). So, when you boot an AMI up on Amazon, the web page streams back the host framebuffer as a 60Hz HD stream. Here in Los Angeles, my ping time to EC2 West Coast (NorCal) is 13-16 ms. Encode is 4-6 ms, decode in JS is 4-8 ms; I don't notice any latency. I am very curious to see the experiences others have. BTW, I work at OTOY, and my username on the OTOY forums is "Goldorak". I can help anyone out there if they need assistance.
Thanks for the reply. What about the additional latency of sending the rendered framebuffer to the local graphics card after it's received from the cloud? It seems like you're right on the edge of being able to do one-frame latency between input and render at 30 fps (33 ms per frame), but would have at least two-frame latency at 60 fps?
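Back-of-the-envelope with the numbers you quoted (taking the worst case of each):

    16 ms network round trip + 6 ms encode + 8 ms decode ~= 30 ms

That squeaks in under one 33 ms frame at 30 fps, but it's nearly two 16.7 ms frames at 60 fps, before counting the final canvas/present step.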
Well, if you run Aero in Windows 7, that adds three frames of latency, for example. In my testing I don't think the decode->canvas->present step is much of an issue, but others might be more sensitive. Of course, that last step is also browser-specific. Firefox 19+ seems the smoothest to me, but Chrome 26+ and Opera 16+ give very good results too.
So the encoding is done by Nvidia in hardware; I guess ORBX is doing the decoding. Is there some reason why asm.js + WebGL is better than <video> + DASH/HLS?
When it comes to watermarking, I'd like to see a cost comparison between watermarking and a static CDN (e.g. Netflix can serve 15+ Gbps per server).
Quite amazing. I predict this will have a big impact on gaming. It basically means that games can be delivered from the cloud, and it seems quite plausible that this is how games will be delivered in the near future.
For the consumer, it means you don't have to buy games; you can subscribe to a service and rent them. For game developers, it means they don't have to worry about illegal copies.
This idea has been floating around for years. A company went bankrupt trying to do it.
I predict it won't happen unless/until Valve makes it happen. No one else would be able to convince enough gamers to switch. And Valve has no incentive to do it, because there's really no incentive for anyone to do it. At least, not for the gaming audience at large. This tech has the potential to be a godsend for 3D artists wishing for realtime previews of their work. But gamers? Not so much.
Remember the other day when an article was floating around saying "Desktop PCs aren't dead, we just don't need new ones"? That's finally starting to become true for gaming PCs as well. Gamers are quite content with their current-gen boxes.
You may argue that this tech will enable higher graphics fidelity and will blow people away with how real it looks. But considering nobody knows how to make games look any more real than they do now, I wouldn't hold my breath. Gaming graphics have plateaued.
In summary, "games rendered via the cloud" is the pets.com of the gaming industry.
EDIT: Oy. If you're going to try to refute me, then put some effort into it.
Pretty poor form to down vote this comment without a response. Seriously, what the hell?
I pretty much agree with this. There may be cases outside of gaming where this will be useful, though. It's funny how in the desktop market everyone wants to run everything in a browser, while on mobile everything is moving into an "app".
In the future? Hasn't http://www.onlive.com/ been trying this for quite some time now? This is so not the future. Why would we want to move all GPU processing to the cloud when you can get an incredibly powerful GPU for so cheap? Even on mobile devices?
It is. Sony bought Gaikai, a company doing similar work to OnLive. They plan to allow users to stream games (I believe PS3 games) on the PS4. I am sure that if it works out for them, they will expand it to do even more.
The lag is going to be the killer. I think this will mostly be used by large companies that want to manage (and enable) their employees' access to software. Of course, that's exactly the sort of thing companies would like to do with Autodesk.
I'm interested in something like this for high-end CGI and graphics work. Being able to use a macbook air with a big workstation back end in the cloud would be a dream. I know there are render farm solutions that big studios use but realtime access to lots of computing power from anywhere would be amazing for freelancers and smaller studios.
Apparently the video decoding and other client-side stuff is done purely in JavaScript (and in particular using WebGL), no <video> tag or plugins or anything like that. I presume that all the server-side stuff is still native code.
I'd still like to see a demo page with their JavaScript decoder.
No, there is a full DCT decoder in ORBX.js. The encoder is built into the Amazon AMI and can use the CPU or GPU for encoding (the GPU encoder is pure OpenCL, so one day ORBX.js could support encoding in the browser through WebCL). If you sent raw data down to the browser, you would need a gigabit-class connection. We are targeting 4G/LTE speeds of 8-12 Mbps for HD @ 60Hz, with support going down to 1-3 Mbps for 1024x768 @ 30-60Hz.
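To put rough numbers on that: uncompressed 24-bit 720p at 60Hz is about

    1280 x 720 x 3 bytes x 60 fps ~= 166 MB/s ~= 1.3 Gbps

so the 8-12 Mbps target amounts to a compression ratio on the order of 100x or more.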
> The two parts of the announcement, JS+WebGL decoder and the GPU cloud driving the encoder, combine to make a whole greater than the sum of the parts.
So it's some client-side JS/WebGL code, plus something else driving the encoder in the cloud.