Also worth noting that it starts with imitation learning from pros. I'd be curious to see whether the macro could be learned without imitation; that's a much harder challenge. Also, playing with full map visibility, as was mostly the case in the demonstration, is quite lame...
That's still a large advantage that humans don't have access to. Not just in the "pitiful humans can't take advantage of such a large viewing area" sense, but literally the game will not let human players zoom out that far.
Also, I wonder how it handles invisible units. As a human player you can see the shimmer if you look closely. Can it see that, or are they just totally invisible to it?
I wonder if that would let you win with something like mass dark templar, with phoenixes to snipe observers. You could run right past it, and it could never anticipate you.
Or better yet, imagine zerg where you can burrow every unit.
It would be the same as with a human player: as soon as you do something with those invisible units, or imply that you have them (e.g. a DT shrine), that's sufficient to conclude that invisibility is in play and that the appropriate tools should be used. It's not like you can do anything about dark templars even if you see the shimmer, if you have no detection, beyond body blocking.
Regardless, the article describes cheesing as the common tactic in early iterations, with economic play learned later. One of the described cheeses is DT rushes, which the AI apparently learned to deal with, so it should have some understanding of invisible units (alternatively, it learned to ignore the DTs and base trade or something).
I don't think the shimmer is useful enough for its absence to be a significant loss in these prospective AIs' quests for world (SC2) domination.
If you're going to learn, why not learn from the best, the pros? These people have already spent years figuring out what works and what doesn't. Why not draw from that pool of knowledge instead of spending extra time going through the same motions?
Because then you don't know whether the AI learned by experimentation or by mimicking. To draw an analogy, imagine the difference between somebody reading and following an algorithm to solve a Rubik's cube, as opposed to somebody being handed a Rubik's cube and experimenting. If expert-level strategies can be reproduced without being explicitly shown to the person/AI, then it means something is going right in your methodology.
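The distinction the analogy points at can be put in code: imitation optimizes agreement with a teacher's choices, while learning from scratch optimizes reward with no teacher at all. A minimal numpy sketch of the two update rules (purely illustrative; these are toy objectives, not anything from the actual system):

```python
# Toy contrast between learning by imitation (behavioral cloning) and
# learning from scratch (REINFORCE-style reward following). Illustrative
# only -- not AlphaStar's networks or training code.
import numpy as np

N_ACTIONS = 4

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def bc_update(logits, expert_action, lr=0.5):
    """Imitation: gradient ascent on log p(expert_action)."""
    p = softmax(logits)
    grad = -p
    grad[expert_action] += 1.0
    return logits + lr * grad

def reinforce_update(logits, action, reward, lr=0.5):
    """From scratch: reinforce a sampled action in proportion to reward."""
    p = softmax(logits)
    grad = -p
    grad[action] += 1.0
    return logits + lr * reward * grad

# The imitator copies an expert who always plays action 2; it converges
# on that action without ever needing to discover *why* it is good.
logits = np.zeros(N_ACTIONS)
for _ in range(20):
    logits = bc_update(logits, expert_action=2)
print(softmax(logits).round(3))
```

The from-scratch learner would instead have to sample actions, observe rewards, and call `reinforce_update`, only ever crediting actions it stumbled onto itself, which is exactly why reproducing expert-level play that way is the stronger evidence.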
An AI trained from human strategy might end up more limited than one that could learn from scratch. It could be stuck in a local maximum of play and be unable to escape.
And an AI technique that requires a large dataset of pro play to learn will be much harder to apply to other games.