In the mass stalker battles, the AI's APM exceeded 1000 a few times, and no doubt most of that was precisely targeted, whereas a human doing 500 APM of micro is obviously going to be far more imprecise.
I think a far more interesting limitation would be to cap APM at 150 or so, or to artificially limit action precision with some sort of virtual mouse that reduced accuracy as APM increased.
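One way such a virtual mouse could be sketched, as a toy model (the class name, the constants, and the choice of Gaussian noise are all made up for illustration):

```python
import random

class VirtualMouse:
    """Hypothetical input filter: click accuracy degrades as recent APM rises."""

    def __init__(self, base_sigma=2.0, sigma_per_apm=0.05, window_s=5.0):
        self.base_sigma = base_sigma        # pixels of jitter at 0 APM
        self.sigma_per_apm = sigma_per_apm  # extra pixels of jitter per APM
        self.window_s = window_s            # trailing window for measuring APM
        self.action_times = []

    def current_apm(self, now):
        # count actions in the trailing window, scaled to per-minute
        self.action_times = [t for t in self.action_times
                             if now - t <= self.window_s]
        return len(self.action_times) * (60.0 / self.window_s)

    def click(self, x, y, now):
        # the faster the agent has been clicking, the noisier this click gets
        self.action_times.append(now)
        sigma = self.base_sigma + self.sigma_per_apm * self.current_apm(now)
        return x + random.gauss(0, sigma), y + random.gauss(0, sigma)
```

A hard APM cap would then just be a special case: reject the click outright once `current_apm` passes the limit.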
>I think a far more interesting limitation would be to cap APM at 150 or so, or to artificially limit action precision with some sort of virtual mouse that reduced accuracy as APM increased.
IIRC OpenAI limits the reaction time to ~200ms when playing DoTA2. AI employing better strategies than humans will always be more interesting than AI that can out click humans.
Even the 200ms reaction time seemed overly slanted towards the AI. I don't think that is the actual reaction time of top pros. In the matches the AI played, the human player would teleport in from complete invisibility and try to use an instant-cast spell, and the AI would have already teleported out. Yes, it may theoretically have been constrained to a 200ms reaction time, but in practice the AI was playing at a superhuman level. Even with that advantage in fights, the human team still demolished the AI. Oh well, lots of things to learn still.
Another advantage was that the AI is just reading the game state through an API, it doesn't have to look on the screen. The game can be difficult to watch from a pro's perspective since they have to constantly click around the map to see what's happening, but the AI has perfect knowledge of everything it is capable of seeing, all without having to physically move a mouse to click on the screen.
If you watch the 11th game, where the pro player wins (the prior games were a 10-0 shutout by AlphaStar), the AI actually lost because they rebuilt the agent to use the same forced camera perspective as the human, so there is absolute truth to this being a compelling advantage. It had been able to micro multiple units in disparate areas by having far better spatial awareness. When they took that advantage away, things seemed more even.
I don't know if we can absolutely claim that the limited viewport was the deciding factor in the 11th game, but it did seem to me that the Alphastar agent's blink stalker micro was somewhat compromised in that game compared to the seemingly superhuman blink micro in previous games.
It struggles with camera placement like real players :) And uses popular divert-attention tactics, which shows it understands that part of the game - for example, it sends oracles to the mineral line at the same time as it attacks in front. Previous versions didn't do that, because they were trained playing vs a cheating AI - so there's no point diverting the attention of something that has instant access to any unit on the map :)
It also struggles to defend against adept harass because it has "tunnel vision" - it controls its oracle instead of defending probes at home. MaNa actually managed his attention budget a lot better (this is a crucial pro-player skill in StarCraft - harass is effective because it trades a little of your attention for a lot of the enemy's attention; it's a skill that becomes irrelevant when the opponent doesn't really have "attention" and can perceive and interact with all units on the map at once, like the previous version of AlphaStar).
This one is much more human, and plays at a much lower level. In my opinion it lost its unfair advantage, so its mistakes are revealed. Previously it was never behind and never had to react to the human player's strategy - it rarely even scouted, because what was the point - it wanted to build mass stalkers anyway.
Yeah, that's actually a huge point that I didn't even consider. Regardless of whether the AI itself is playing with a limited viewport, the fact that its opponent has a limited viewport opens up the opportunity to learn attention diversion tactics during the training process, which would otherwise be impossible.
What happens if a human tries to use the API with a custom UI of the human's own choosing? Such a UI might not exist yet, but are there ideas for more efficient UIs that could be built?
Yes I am curious of this too. What happens if the human has a giant TV screen that can see the whole map at once
Or, what if we slow down the game, so that the human can actually pause the game each second and consider what to do next. That's basically what the computer is allowed to do
Macro-wise, it would be like an unwieldy minimap, which already exists so people can get a sense of where the enemy is moving. With a giant screen, information is not focused on a small area, so you are limited by your FOV. A minimap that showed unit strength in terms of armor, HP, or shields, as well as placement, would be the ideal information.
Micro-wise, it would be like sitting in front of a giant text display looking at a whole book. You still have to focus on a small section to read it.
> Or, what if we slow down the game, so that the human can actually pause the game each second and consider what to do next. That's basically what the computer is allowed to do
While this would make it more fair, it would just make the micro game more similar to chess or go. I don't think humans would necessarily win in the end.
That's a good insight and yes, humans would probably be overpowered eventually. However, this is just the consequence of the fact that all games are similar if you remove external limitations such as reaction time (or, alternatively, produce a more efficient "being" which is not as subject to these limitations as some other).
Starcraft is like chess in some sense. The largest fundamental difference is that it isn't a perfect information game.
Tbh starcraft and dota shouldn't really be the test games atm; turn-based strategies (or rather, grand strategies) would be the far more appropriate evolution after chess and Go, since we're clearly more interested in AI macro than micro, and too much of the learning process goes into pushing the AI beyond micro-oriented thinking (probably many rounds of the AI tournament are lost simply because one AI found a new micro strategy to abuse).
But ofc, there's no TBS or grand strategy currently out there with a real tournament scene, so you can't really count on the devs implementing an AI API, or even on the game being properly balanced / bug-free (far more user testing goes into sc2/dota2 than, say, Civ, simply by virtue of their playerbases).
Yes but a turn based game drastically reduces the action space compared to a real time game, something the DeepMind folks pointed out as a particularly interesting problem they wanted to tackle.
>a turn based game drastically reduces the action space compared to a real time game,
That's the primary benefit imo. The bigger action space is largely composed of non-strategic elements, at least in the sense of long-term strategy, eg micro and mini-skirmish tactics, which I don't think are as interesting. Ofc it's clearly a conflict of interest, but my feeling was that the most interesting aspect of Go/chess AI is it making unintuitive discoveries that pay off in the long term. The human collective is pretty good on its own at finding the shorter-term strategies; I don't think AI will make much of an impact in that space.
As a medium to study upcoming real-world applications (eg cars), RTS makes sense; but as a medium to study AI beating humans, TBS is more appropriate (the AI's ability to explore large search spaces is far more interesting and potentially impactful there). Studying both would be ideal ofc, but in a pick-one situation, TBS is better imo. But only RTS is even really viable atm, which is disappointing.
Even allowing players to zoom out would give huge advantages; that's why, no matter the screen size, you have to play at the same zoom. There was a bug at one point that allowed players to play multiplayer zoomed out, and it was forbidden to use it in competitive games.
How about having multiple humans control the same faction, so one can focus on building, two on a couple of battle groups, another on scouting, etc.? Then they don't have to context switch nearly so much.
Aha, nice, thanks. Let's see, two players per side... not a huge number, but probably a big step up from one. Looks like people aren't playing it much; some suggest it's because it requires a partner.
I would like to see a setup akin to that of Ender Wiggin, with one commander overseeing and recommending overall strategy, and, say, five others managing different areas or groups. That seems like the way to get the best human performance, and might be enough to beat the AIs—at least to nullify chunks of their advantage.
Yeah, put an eye tracker on a pro and you'll see that the eyes are constantly changing their focus point. If you can watch the entire scene with the same precision, without needing to focus on it, you're already at a nice advantage.
As an aside, a few pro gamers prefer to play on windowed mode for exactly this reason.
Is the bit about reading the game through an API true? Earlier iterations of this same RL-based agent that played Atari games would read just raw pixels, not an API.
Yes, it's true. A special interface, PySC2, was created for the AI. Also, it's not only that the AI doesn't need to parse information from much more limited screen real estate, but also that the AI doesn't have to use a controller that has physical constraints. So the AI has access to this superhuman controller and can decide to click on one screen extreme and then another within 200ms.
Any game that is specifically going out of its way to support these AIs will naturally do it through an API, though I'm only aware of dota2 and sc2 (sc:bw also does, through a community-modified client that serves the API, iirc). For ad-hoc games, eg Atari, pixel parsing is the natural result, but no one would intentionally set it up like that.
The game is difficult to watch, but does anyone honestly believe that an AI is going to have a difficult time parsing the scene if it is trained to do so? That to me just seems like a question of resources. We're pretty good at image recognition and segmentation now, and that's without the unlimited amounts of training data one could generate when using a controlled game environment with a limited range of possible animations and effects. This is why I find the prospect of the AI agent having to parse the screen entirely uninteresting.
For real-life applications, parsing the "scene" would matter, because a scene can only convey imperfect information. In StarCraft the information is perfect once fog of war has been removed; this, together with unlimited attention (no camera viewport), boosts both action potential and macro planning. No player is ever going to be able to hold a precise picture of the whole map in their mind. If DeepMind wanted to mimic human limitations faithfully, they would have to feed AlphaStar imperfect information, e.g. when providing the location of an object, sample it from a probability distribution that represents the location imperfectly, and widen that distribution the longer the AI's attention has wandered from the object, both spatially and temporally. Of course, the usefulness of these limitations is purely to model maximum theoretical human mental capacity, and their use case could be to help explore strategies that work for actual humans.
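As a toy sketch of that proposal (the function, the constants, and the choice of a Gaussian are all illustrative, not anything DeepMind actually did):

```python
import random

def observed_position(true_xy, camera_xy, seconds_since_seen,
                      spatial_k=0.01, temporal_k=0.5):
    """Return a noisy observation of a unit's position.

    Uncertainty (sigma) grows with the unit's distance from the current
    camera focus and with the time since it was last attended to.
    Toy model only: the constants and the Gaussian shape are arbitrary.
    """
    tx, ty = true_xy
    cx, cy = camera_xy
    dist = ((tx - cx) ** 2 + (ty - cy) ** 2) ** 0.5
    sigma = spatial_k * dist + temporal_k * seconds_since_seen
    return (tx + random.gauss(0, sigma), ty + random.gauss(0, sigma))
```

A unit under the camera that was just seen gets sigma 0 (perfect information); a unit far away that hasn't been looked at for ten seconds gets a wide, blurry distribution, roughly mimicking a human's fading mental map.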
There is another potential use: given these limitations, an AI might be able to learn to be better strategically, which could translate to an even greater advantage once the limitations were removed later on.
You talk about a static image, but navigating the camera requires strategy and attention, and adds to the focus load. If you take that away, it's just a turbocharged pen-and-paper RPG with a time limit on rounds.
They could train against the API while reinforcing the AI's attempts to predict the state from vision. But with limited APM it would be pretty difficult for the AI to keep track of everything. And, potentially, it would still not be the same as a human looking at it. I'm not sure whether human attention is a particularly bad example of efficient resource allocation; I'm very biased to think it is still the gold standard. But the fact that DeepMind didn't focus on this implies they didn't find it interesting enough, and/or found it too difficult.
Anyhow, (visual) exploration is a step up from mere image recognition
"Brute force" in an AI context is usually reserved for traversal of the entire search space. I think "superhuman micromanagement" is a better term. And before AlphaStar, superhuman micro wasn't an insurmountable obstacle for human players.
Yes, since DeepMind chose SC2 for having the right characteristics for mapping to the real world, ie imperfect information and real-time response, they should have had at least one run without any speed governors. And maybe another with the CPU limited to some level we might find in an embedded system of the near future.
I recently watched a TED talk explaining how human perception has a lag of about a third of a second. Pro players might be better, but after noticing something they still need to take an action.
My experience is that to beat 300ms requires there to be no conscious thought in the loop. It has to be muscle memory guided by higher level intent. It's like how the gunslinger waiting to shoot hits first, it's reflex instead of decision.
Getting sub-200ms on something like this benchmark is fairly easy [1]. While waiting for a color to change is different from processing a game like dota2 or sc2, a 200ms limit isn't too unreasonable to me.
I would love to see these AIs get handicapped even more like a full second and really force them to out think humans.
I think OpenAI's bots would have been beaten by lots of humans, but they decided to train them with 5 unlimited, invulnerable couriers (until the TI showmatches, in which they were beaten easily).
The only way to truly have a fair fight would be to accurately model the limits of human capacities. How fast can humans move the mouse and at what accuracy? How fast can they type keyboard commands? How fast can they move their eyes? You could study those limits in a sports lab with high speed cameras, etc.
A simpler model would be to limit the bot to, say, one action per 250ms, introduce a slight delay in its reaction time, require it to move the camera to gain detailed information and take further actions, and have camera movements count as actions.
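A sketch of what such a gate in front of the agent's action stream could look like (the 250ms rate is from the comment above; the class and everything else is hypothetical):

```python
import collections

class HumanlikeGate:
    """Queue actions through a reaction delay and a maximum rate of one
    action per `min_interval` seconds. In the full proposal, camera moves
    would be submitted through this gate too. Illustrative sketch only."""

    def __init__(self, min_interval=0.25, reaction_delay=0.2):
        self.min_interval = min_interval
        self.reaction_delay = reaction_delay
        self.queue = collections.deque()
        self.last_emit = float("-inf")

    def submit(self, action, now):
        # the agent decides at `now`, but the action only becomes
        # eligible for execution after the reaction delay
        self.queue.append((now + self.reaction_delay, action))

    def poll(self, now):
        # emit at most one action, respecting both constraints
        if (self.queue and now >= self.queue[0][0]
                and now - self.last_emit >= self.min_interval):
            self.last_emit = now
            return self.queue.popleft()[1]
        return None
```

Actions submitted faster than the gate allows simply pile up in the queue, so a 1000-APM burst from the model degrades into a 240-APM trickle at the game interface.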
Here's a graph of AlphaStar's APM versus a professional player's: https://i.imgur.com/TXeLkQK.png Evidently AlphaStar also has an Economy of Attention (where the player focuses) similar to a professional player's, at around 30 screens per minute. Additionally, AlphaStar's reaction time is around 350ms, a significant disadvantage compared to a pro.
The skepticism in this thread is absolutely justified but I think it's important to note the lengths to which DeepMind has gone to address and assuage the fears of superhuman mechanical skills being employed in these games.
I watched all of the event live and I feel that that graph is deceptive. If a game is 15 minutes and has 3 main battles lasting 15 seconds each, and you use 100 average APM on non-battle time and 1000 APM during battles, your average APM will be 145 but you obviously have a superhuman advantage.
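The arithmetic in that example does check out; a quick sanity check:

```python
def average_apm(game_s, battle_s_each, n_battles, battle_apm, base_apm):
    """Average APM over a game with short high-APM battle bursts."""
    battle_s = battle_s_each * n_battles
    calm_s = game_s - battle_s
    total_actions = calm_s * base_apm / 60 + battle_s * battle_apm / 60
    return total_actions / (game_s / 60)

# 15-minute game, three 15-second battles at 1000 APM, 100 APM otherwise
print(average_apm(15 * 60, 15, 3, 1000, 100))  # → 145.0
```

Forty-five seconds of superhuman bursting barely moves the game-wide average, which is exactly why an average-APM graph can look "human" while hiding the moments that decide the game.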
This is compounded by the fact that almost all of AlphaStar’s actions are “useful” whereas a significant amount of the human actions are spammy.
You will typically see a human select a group of units, and fast-click a location in the general direction they want the units to move (to get them started moving that way), and then keep clicking to continuously update the destination to a more precise location. Every click counts as an action. An AI can be perfectly precise and “clicks” the right place the first time.
TLO seems to have a longer tail than AlphaStar in that graph though, so doesn't that imply that TLO peaked at an even higher APM, presumably during battles?
TLO is a Zerg player, so he probably makes a lot more errors when playing Protoss. Also, every top player estimates when to do a sequence of actions and spams it a few times to maximize the chance of execution. Meanwhile, AlphaStar only has to do it once.
Hm, it would be interesting to force the AI to send its commands through a "filter", where it can only execute orders with human-level precision. And something similar on the input side.
This graph is incredibly deceptive and I'm kind of upset they posted it. There are about 10-15 seconds of gametime where APM is incredibly important, and the AI boosted to 1000+ APM during those periods. During lulls it cruised at ~30 APM.
Meanwhile humans are literally spamming keys to keep their physical fingers loose and ready - they're not performing anything close to 400 useful APM on a regular basis (or in TLO's case - 1500 ... He kept walking his units straight into death while spamming keys).
I believe you are conflating latency and throughput. It might take AlphaStar 350ms to perceive a threat, but once it has perceived it, it can issue many commands at high speed in response.
How many of those 500 actions are actually useful? I haven't watched competitive StarCraft games for years but back when I did, rates were more like 300APM and even then the players basically spam clicked the background or selected random units non-stop and were probably only doing 50-100 actual effective actions.
> How many of those 500 actions are actually useful?
Exactly, a human doing 500 APM during intense moments is going to be way different than an AI bursting 1000 APM with pixel-precision during the most crucial moment in a game.
TLO spent a ton of time at >1000 APM and walked his army directly into enemy shots all the time. MaNa had much better control at ~400 APM. So APM is really irrelevant to control - for humans.
I suspect the AI, on the other hand, makes each action precise & count for something.
This graph, which I think was supposed to show that the AI was being "human", IMO is pretty damning. We saw the APM spike to >1000 during a critical moment and we saw the APM at <30 during lulls, so we know it uses its APM at important moments, presumably with important pixel-precise actions.
I suspect that once the AI becomes good enough it will be able to beat human players using a much lower total APM than human players. We're not quite there yet, but it just needs a little bit of time.
As a hopefully illustrative comparison, you could give any top player a day of play time per move against the top Chess AI being given a minute of play time per move and the AI will still win. That's how much better the AIs are than humans now. There's no reason in principle this won't be possible with StarCraft AI too.
The biggest issue with allowing the ai to have high APM is that it will inevitably learn optimal strategies that depend on that high APM, eg stalkers can take on far more immortals than we normally expect, and the AI will learn it this way, because the high APM allows a new stalker strategy (or rather, empowers an old one greatly) while not affecting immortals significantly. This also naturally means the AI leagues see a different game balance than the human leagues, leading to strategy divergence.
And then when you drop the APM limit, suddenly all the learned optimal ai strategies start falling apart, and the whole thing has to be relearned.
More annoyingly, there’s not much for human players to learn from innovative ai strategies that are based on inhuman accuracy of play (because we couldn’t possibly execute it).
What they're improving at right now isn't any specific AI model, it's how to train the AI models. It's meta-machine learning. I don't doubt that they can quickly train up a new model under different constraints now that they know how best to train up said models. It's not like they throw away all progress once they change some constraints; far from it.
I'm sure we'll get there too, I just think it's a little deceptive how they've measured the APM at the moment.
StarCraft is more random than chess, so I do think it's possible humans will always be able to take occasional games off of fairly constrained AIs just based off blind luck in picking counter builds, it will be interesting to see what % that is.
The 1000 APM thing is because of a bug in how APM is calculated in StarCraft 2. There is a hotkey that assigns all your selected units to a new control group while also removing them from all other control groups, which TLO uses extensively; while it is just one key combination to press, it is recorded as one action per selected unit. The real APM of pro players averages 250-400 and peaks at 600-700.
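A toy illustration of how that counting rule inflates the measured number (the one-action-per-selected-unit behaviour is as described above; the function and the numbers are made up):

```python
def measured_apm(events, window_min):
    """events: list of (keypresses, units_selected) tuples.

    The buggy counter logs keypresses * units_selected actions for the
    control-group-steal hotkey; real APM counts the keypresses only.
    Returns (real_apm, logged_apm)."""
    real = sum(k for k, _ in events)
    logged = sum(k * u for k, u in events)
    return real / window_min, logged / window_min

# one minute: 250 normal actions (1 unit each), plus 5 uses of the
# hotkey with 40 units selected each time
real, logged = measured_apm([(250, 1), (5, 40)], 1)
print(real, logged)  # → 255.0 450.0
```

Five physical keypresses turn into 200 logged actions, which is enough to explain the freakish APM spikes in TLO's graph.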
I stopped playing SC competitively because it's too stressful, both physically and mentally. Hitting 300 APM continuously in games lasting up to 60 minutes makes your hands go numb. And the adrenaline rush makes you want to go running afterwards. With games like LoL/DotA you at least have a chance to take a break after a gank / farming / team wipe. With StarCraft, every decision has a significantly higher compounding effect.
From what I understand, the most common string instrument problems are with shoulders/neck/back, due to sitting for long periods of time with poor posture.
Most music should be playable without excessive risk of serious injury to arms / wrists / hands, but from what I understand very high notes on e.g. the violin are hard to play without using an over-flexed wrist, which is definitely a problem if playing music requiring such a position for long stretches of time, or many rapid switches between high and low notes.
Some of the string players with most risk are novices who have not been taught proper technique.
For professional PC game players, the design of the standard computer keyboard and furniture is absolutely terrible from an RSI perspective (worse than any common musical instrument, and without any of the design requirements of acoustic instruments as an excuse), and it is shocking to me that there has not been more effort to get more ergonomic equipment into players’ hands. The way game players typically use a computer keyboard is generally more dangerous than the way typists or e.g. programmers do. As someone who spent a few years thinking about computer keyboard design, I can think of at least a dozen straight-forward and fairly obvious changes that could be made to a standard computer keyboard to make it more efficient and less risky for game players. There is a lot of low-hanging fruit here.
Whether or not the equipment is changed, the most important single thing when using a computer keyboard (or any hand tool for that matter) is to avoid more than slight wrist flexion or extension, especially while doing work with the fingers. Excessive pronation and ulnar deviation of the wrist are also quite bad. Watching pro players, many of them have their wrists in an extremely awkward position while doing fast repetitive finger motions for hours per day without breaks, which is a guaranteed recipe for RSI.
Well, I have heard of them. I also looked up TLO, mentioned above; he actually did get RSI and had to take months off.
"Liquid regretfully announces that Dario “TLO” Wünsch will be unable to play for the next few months due to the Carpal Tunnel Syndrome he experiences in both hands. He will however continue to be involved with E-Sports even as he takes a break from gaming to give his wrists time to heal. Sadly, this means that he will not be attending Dreamhack Summer or the Homestory Cup III as a player."
There would be an entire new dimension of decision making, in addition to good macro, where you have to prioritize actions. Will be interesting to see.
I said so before, but is it really that different from controlling a unit that can also only do one thing at a time? The agent controls itself just like another unit, with a constraint on the APM available for controlling other units. On the one hand, the APM cap adds a new parameter, if the constraint is implemented naively. On the other hand, if there are viable strategies against ultra-high-APM opponents, then the constraint really just prunes the decision space, to good effect: it favors viable strategies that take less effort. Hence such things are called "hyperparameters" (I know that's technically something different, but you get the idea). Likewise, the game isn't so fast as to need 100 screen switches per second, if good planning allows batching and bursting actions.
I understand the spirit of the proposal, but that would be like limiting a computer to adding at most two numbers per second. It's OK if we want an interesting contest against humans, but it wouldn't be a fair estimate of a computer's math capability. It's also not the point of using computers to do math instead of a room full of accountants. I'm OK with the AI going as fast as it can and playing superhuman strategies because it can be that fast. After all, we won't limit AIs' output rate when we let them manage a country's power grid.
The purpose of limiting speed isn't to make an interesting contest, it is to accurately compare the "math" instead of the speed the math is done at.
It isn't surprising that it's fast; the surprising part is that it can make human-like decisions. The only way to tell whether its thinking is human-like is to restrain it from "brute forcing" the contest through speed.
The model has likely learned that the faster it does things the better the outcome. What it needs to be measured on is strategy.
But isn't the competency of a StarCraft player also measured by his/her speed?
In that context, you can't really measure strategy without accounting for timing/speed because a lot of tactics and strategies only become viable once the player has the required speed to actually realize them aka "micro".
Exactly, and due to its superhuman micro, the AI has cornered itself into learning a small subset of the strategy space. It's not good at strategy because it has optimized itself for just getting into micro-handled situations.
It's not good at strategizing with all the options its micro ability makes available; it has "one" strategy that leveraged the micro as much as it could, and when given a strategic challenge by MaNa, it didn't know what to do.
Yes, but the ultimate goal is to make an AI as "smart" as, or "smarter" than, a human. That's why they keep making AIs play against human players in chess, Go, etc. It's not to prove computers are faster than humans; it's to prove computers can be smart like humans.
They want to make an AI that can teach new ideas to humans. New strategies that human bodies are physically capable of executing, but no human was "smart enough" to think of yet. An example is when the AI built a high number of probes at the start. That's "smart".
The only way to train an AI to come up with new ideas is to force it to be "slow". Otherwise, it will always take the easiest way to win, which is to out-micro. There is nothing interesting about a game like that: it only shows the AI is fast, not that it's "smart".
That's exactly why it's so important to try and constrain the system to as close to human parameters as possible. You can't compare strategic prowess if the two players are playing at a completely different level. It'd be the same as saying MaNa is better than say, Maru (who has just won 3 GSL Code S's in a row), because he has stronger strategies against ~30th percentile players. It makes no sense.
Speed is only interesting as part of a fair human competition. It's trivial for the AI to win with speed, and it doesn't have to be remotely smart about it. Serral (the dominant world #1) was easily beaten by 3 far weaker humans controlling one opponent - it wasn't even close. It's just stupid to even claim victory in those situations.
Making an AI that wins by outsmarting humans, on the other hand, is what we are all interested in.
That would be right if AI and human player had the same opportunities for micro.
They don't, because the AI doesn't use physical objects to move stuff in the game. The AI just "thinks" that this stalker should blink, and it blinks. A human player has to deal with the inertia of his hand and mouse.
If you want a fair competition of micro, make a robot that watches the screen through its camera and moves a mouse and presses keys to play StarCraft.
Then the bandwidth of the interface is the same for both players, and we can compare their micro.
You don't really need a real robot; instead, assign a "time cost" to each action that depends on spatial distance, the type of action, and whether it differs from the previous action. Humans are really fast when, for example, splitting a group of units, but performing multiple different actions in different areas of the screen, or across multiple screens, takes a lot longer. You don't need to fully emulate human behaviour, but getting somewhat close would really show how strong the AI is tactically and strategically without superhuman micromanagement.
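A natural starting point for such a time-cost model is Fitts's law, which models human pointing time as growing with the log of target distance over target size. The constants and the action-switch penalty below are illustrative guesses, not measured values:

```python
import math

def fitts_time_ms(distance_px, target_width_px, a=50.0, b=150.0):
    """Fitts's law: movement time grows with the index of difficulty
    log2(D/W + 1). `a` and `b` are device- and person-specific constants;
    the values here are just plausible placeholders."""
    return a + b * math.log2(distance_px / target_width_px + 1)

def action_cost_ms(distance_px, target_width_px, same_as_previous):
    # switching to a different kind of action adds a fixed overhead
    # (hypothetical constant, standing in for mental context switching)
    switch_penalty = 0.0 if same_as_previous else 80.0
    return fitts_time_ms(distance_px, target_width_px) + switch_penalty

# clicking a 20px-wide unit 500px away, after a different kind of action
print(round(action_cost_ms(500, 20, same_as_previous=False)))
```

Charging the AI this cost per action would make a precise long-distance click across the screen expensive, just as it is for a human, while leaving cheap the rapid local clicks humans are genuinely fast at.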
If we want to measure strategy, I agree with you, and out of curiosity we might do it. But the goal is winning, so is strategy important as long as it wins? The AI can take every shortcut it finds IMHO. People do take shortcuts.
Cars and planes bring us across the world exactly because they don't walk like people and don't fly like birds. Wheels, fixed wings and turbofans are shortcuts and we're happy with them. We can build walking and wing flapping robots but they have different goals than what we need in our daily transportation activities.
The problem with StarCraft is that interface overhead is a significant part of the game. The AI doesn't have to cope with that - every click is perfect, and moving the mouse from one edge of the screen to the other takes no time.
If you want to make it fair, place an AI-steered robot in front of the screen, and make it record the screen with a camera and actually move the mouse and press the keys.
Then I can agree it's fair :)
But then of course AI would be incredibly bad.
Right now the advantage doesn't come from faster thinking, but from the much higher bandwidth and precision the AI has when controlling the game. It's anything but fair.
With chess it's not a problem, because interface overhead is negligible.
Those are different engineering problems. I'm pretty sure they could eventually build a pixel-perfect camera and a fast, pixel-perfect robot mouse, at least as good as human eyes and hands, probably better. Once that's done, they'll keep winning.
It's surely interesting technology with positive impacts in a lot of areas but is it that the important part of the experiment? Humans need keyboards and mice to interface with computers, computers don't (lucky them.)
Sorry to insist on that analogy, but it looks to me as if my car should be able to fit my shoes and walk before I admit that it goes to another city quicker than me walking.
When you're trying to individually blink 30 stalkers at the perfect time they have almost 1 hp - latency is everything.
A camera has latency. Depending on various factors, it takes milliseconds of exposure for a camera to gather enough light to register a clear image frame. The human eye works on a different basis, but also isn't instant. You cannot cut that in software, and a human player cannot train it away. But the AI doesn't need any of this - it gets the image handed to it as a memory buffer.
Image recognition has latency (both in the brain and in computer). Even as simple stuff as recognizing where the computer screen is as opposed to the background. It takes time. AI doesn't need to do it.
Muscles (engines in robot hands) have latency.
Mice and hands have inertia and can't be moved instantly; they have to be accelerated and stopped, and even with an optimal algorithm for 100% accuracy, that takes time.
It's not only hard to implement, it's also physically IMPOSSIBLE to do without introducing significant delays.
An AI that controls the UI directly doesn't have to deal with most of these tasks, so it has a huge advantage in a game like StarCraft. It's not that the AI is so much better; it's that the AI is doing high-frequency trading while the human player is sending buy/sell requests by telefax. By the time your request is processed, the other guy has had the opportunity to do 10 different things.
If you want to focus on the part of the job that is doable now - sure, go ahead. But then don't abuse the unfair advantages you have and announce you "won". It's a very low bar to win at StarCraft when your opponent has effectively 100 times your lag.
I'm sure someday we will have an AI that can beat a human player at StarCraft without abusing this advantage. And I'm pretty sure the fastest way there isn't to put a real robot in front of a screen, but to limit the interface bandwidth of the AI to a level similar to that of human players.
> Sorry to insist on that analogy, but it looks to me as if my car should be able to fit my shoes and walk before I admit that it goes to another city quicker than me walking.
Let's remove the roads that we made specifically for cars and speak about this again :) Will your car move you through an untamed wilderness quicker than your legs? Possibly. Or not at all.
If I walk onto a bullet train, slowly walk through it, and walk off at the end of the route, I will be even faster than the fastest car. Is it fair to say I'm faster than a car? After all, it's not my fault the car doesn't fit inside the bullet train :)
We need to compare apples to apples, and comparing AI that doesn't need to deal with half the sources of latency with a human player that does, in a game where latency is very important - just isn't fair.
If you don't put any limits on the AI, it's not Starcraft any more.
You could make an AI which tries to hack the human computer to force a leave. That would also constitute a "win". Or one which hacks its own computer and displays "You win" immediately. Or one which tries to kill the human player, if we want to be really dramatic about it.
Chess and Go both limit computers to one move per human move, and they’re still very interesting games for AI. You’ll always have limitations. When you’re playing a game, the limitations are largely arbitrary, and you choose them to make the game better achieve whatever goal you’re after.
You are right, but the point here is to force it to win by pure decision making. Having an AI play a game was always about challenging ourselves to improve our understanding of intelligence. Limiting APM is just another way to force us to come up with new ideas.
So, in some sense, this is a limitation of StarCraft. The goal of this project is presumably to have the AI play a game of high strategic depth. However, with sufficiently good micro, certain strategies with low "macro depth" become unbeatable. So it's true the AI would win, but it plays in ways that do not expand our understanding of SC strategy; it simply uses a strategy that is easy to understand and impossible for a human to execute. Think of an aimbot in a shooting game: a human can try to play smart and attack from unusual angles, lay traps, set up crossfires, but if the AI can simply land instant headshots, it can run straight at the objective and win. It would be a winning play, and humans would understand why it wins (boringly so), but it is outside human execution.
But it's important to be clear about what's being measured. If the AI can take and successfully win engagements that no human could because of their superior micro, it's not necessarily winning via superior strategy (as is claimed).
>I think a far more interesting limitation would be to cap APM at 150 or so, or to artificially limit action precision with some sort of virtual mouse that reduced accuracy as APM increased.