I understand those are the web technologies, but how is this being run on the back end? Tons of Mac servers, with multiple iOS simulators on each? One running per user on the Mac? Custom software to control them and pass everything through to the socket?
Care to shed any light as to an overview on how the backend works? I've always been curious since I first saw App.io
Hey B-n-c,
You are pretty much spot on with everything you mentioned.
Apparently app.io had a setup with VMs running on lightweight hypervisors (https://macstadium.com/casestudies), but we found we didn't need all that overhead to run multiple simulators per machine.
And you're exactly right about the small piece of custom code to pass controls and send frames. We're big fans of socket.io - pretty powerful stuff!
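To make that piece concrete: this is not our production code, and the real service speaks socket.io over WebSockets, but the relay has roughly this shape. A minimal stand-in using Python's asyncio streams, with a made-up wire format where each user connection maps to one simulator, input events come in as text lines, and a frame payload goes back per event:

```python
import asyncio

# Hypothetical wire format: the client sends one input event per line
# (e.g. "tap 120 340") and the server streams back a frame payload.
# In the real stack this would be socket.io events, not raw TCP lines.

def handle_event(line: str) -> bytes:
    """Pretend to forward an input event to a simulator and return
    the next captured frame (stubbed here as a text label)."""
    kind, *args = line.split()
    return f"frame-after-{kind}({','.join(args)})".encode()

async def relay(reader: asyncio.StreamReader,
                writer: asyncio.StreamWriter) -> None:
    # One connection per user session: each socket maps to exactly one
    # simulator, so users can't step on each other's input.
    while True:
        line = await reader.readline()
        if not line:
            break
        writer.write(handle_event(line.decode().strip()) + b"\n")
        await writer.drain()
    writer.close()

async def main() -> None:
    server = await asyncio.start_server(relay, "127.0.0.1", 8765)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

The per-connection-per-simulator mapping is the important bit: the socket itself is the session, so routing input and frames to the right user falls out for free.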
Happy to answer follow-ups if we missed something.
The one piece that puzzles me is how the frames are being captured from the iOS simulator. Are you launching multiple simulators per user account, each as a separate session in a different portion of the screen, and essentially screenshotting each one? Or did you modify the simulator somehow and hook in to grab the screen, so that their position on screen is irrelevant?
We wrote some custom OS X code to capture frames from individual processes. That way the simulators don't need to be positioned in any way or even visible. Cheers.
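Edit: for the curious, we obviously can't share the exact code, but OS X does expose per-window capture out of the box. The built-in `screencapture` tool can grab a single window by its CGWindowID with `-l`, regardless of where it sits on screen (our actual code goes lower-level so windows don't even have to be visible, but this shows the idea). A hedged sketch, assuming you already know the window ID:

```python
import subprocess
import sys

def capture_cmd(window_id: int, out_path: str) -> list[str]:
    # Build the argv for macOS's built-in screencapture tool:
    # -l <id> targets one window by CGWindowID,
    # -x suppresses the shutter sound,
    # -o omits the window's drop shadow.
    return ["screencapture", "-x", "-o", "-l", str(window_id), out_path]

def capture_window(window_id: int, out_path: str) -> None:
    """Capture one window to a PNG, wherever it is on screen."""
    if sys.platform != "darwin":
        raise RuntimeError("screencapture is macOS-only")
    subprocess.run(capture_cmd(window_id, out_path), check=True)
```

Run one of these per simulator on a timer and you have a crude frame source; the real thing captures offscreen and at a much higher rate.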
The other piece of magic you are doing is sending in the keystrokes / mouse movements. If you're running multiple iOS simulators on a Mac, how do you keep one user's key presses and mouse movements from interfering with another's?
Maybe, but then launching multiple simulators and capturing the screen of each to send off through the sockets seems like there must be something in between. I wonder if it's able to see where the session windows are on the screen and screenshot them to send through, or if they somehow hooked directly in to pull the screen?