What AlphaStar AI Can Teach Us About AI Development

February 5, 2020 • Shannon Flynn


The methods used to build and improve artificial intelligence (AI) systems are changing rapidly. One recent example involves a Google DeepMind project called AlphaStar. That AI learned to play the esports game “StarCraft II” better than the vast majority of human players, reaching the game’s Grandmaster level.

Why Is This Achievement Significant?

Winning “StarCraft II” requires building a larger, better-managed army than your opponent. Players also have to send workers to gather the resources that fund buildings and units, an activity that directly ties into strengthening their military. Perhaps the most impressive thing is that there are up to 10^26 possible actions to choose from at any point.

Also, the player in the game, known in AI terms as an agent, must make thousands of decisions before learning whether it won or lost the game. The AI can reportedly beat 99.8% of human players, including professional-level participants. Let’s look at some of the crucial takeaways associated with this accomplishment.
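To see why that delayed feedback matters, consider a toy calculation: if the only reward is a +1 for a win at the very end of a game, each of the thousands of earlier decisions gets its learning signal from that single number. Below is a minimal Python sketch of discounted returns over a long episode; the episode length and discount factor are illustrative assumptions, not figures from DeepMind.

```python
def discounted_returns(rewards, gamma=0.999):
    """Compute the discounted return for every step of an episode.
    With a sparse reward (only the final step is non-zero), early actions
    receive only a heavily discounted trace of the eventual win or loss."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns


# Toy episode: 5,000 decisions, and the reward (+1 for a win) arrives only at the end.
episode_rewards = [0.0] * 4999 + [1.0]
returns = discounted_returns(episode_rewards)
print(f"Signal reaching the first decision: {returns[0]:.4f}")   # ~0.0067
print(f"Signal reaching the final decision: {returns[-1]:.4f}")  # 1.0000
```

With thousands of steps between an early action and the final outcome, the first decisions receive only a faint trace of the eventual result, which is part of what makes a game like this so difficult for AI.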

The Early Development of DeepMind

Oriol Vinyals, the co-leader on the project, had the idea for a more advanced AI back in 2006. It was in college that he began the work on gaming AI that would eventually help develop AlphaStar.

At the University of California, Berkeley, one of the country’s most advanced schools for game design, Vinyals and his team developed the winning AI for an annual competition. It just so happened that this competition was built around the “StarCraft” games.

This experience eventually helped Vinyals develop a more advanced form of AI after he joined the DeepMind team in 2016.

AI Projects Require Ongoing Tweaks

The first version of AlphaStar centered on self-play, which allowed it to improve by competing against copies of itself. The system also let AlphaStar sharpen specific strategies by playing special versions of the game built around a single plan.

However, several things prevented the AI from reaching professional-level prowess at the start. One was that the tech kept “forgetting” how to win against previous versions of itself. Additionally, the DeepMind team realized that professional gamers usually have training partners who help them expose flaws in their playing techniques.

The researchers overcame these challenges with a combination of supervised learning and reinforcement learning. Their experience shows that AI projects will inevitably encounter stumbling blocks, but those setbacks should not ultimately discourage research from proceeding.
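As a rough illustration of how those two techniques can fit together, the hypothetical sketch below first imitates human examples (supervised learning) and then trains with reinforcement learning against a pool of frozen earlier versions of itself. The ToyAgent class and every method in it are invented stand-ins for illustration, not DeepMind’s actual code.

```python
import random

class ToyAgent:
    """Stand-in for a game-playing agent; a single 'skill' number replaces a real policy."""
    def __init__(self, skill=0.0):
        self.skill = skill

    def clone(self):
        # Frozen snapshot of the current agent.
        return ToyAgent(self.skill)

    def imitate(self, replay_skill):
        # Supervised phase: move toward the level of play shown in a human replay.
        self.skill += 0.5 * (replay_skill - self.skill)

    def update(self, won):
        # Reinforcement phase: adjust based on a win or a loss.
        self.skill += 0.1 if won else -0.05


def train(agent, human_replays, num_games=1000, snapshot_every=100):
    """Supervised pretraining, then self-play against a pool of past snapshots."""
    for replay_skill in human_replays:              # phase 1: imitation
        agent.imitate(replay_skill)

    opponent_pool = [agent.clone()]                 # phase 2: league-style self-play
    for game in range(num_games):
        opponent = random.choice(opponent_pool)     # may be an older version of itself
        p_win = 0.5 + 0.1 * (agent.skill - opponent.skill)
        agent.update(won=random.random() < min(1.0, max(0.0, p_win)))
        if game % snapshot_every == 0:
            opponent_pool.append(agent.clone())     # keep the old version around
    return agent


if __name__ == "__main__":
    trained = train(ToyAgent(), human_replays=[1.0, 1.2, 0.9])
    print(f"Final skill estimate: {trained.skill:.2f}")
```

Keeping old snapshots in the opponent pool is what counters the “forgetting” problem described above: the latest agent keeps getting tested against strategies it has already beaten once.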

AI learns much the way people do when they work on video game development: it’s a “try, try again, fail and eventually succeed” process of playing, beta testing and tweaking until the game is complete. In other words, winning AI is designed around the way human beings learn to program games and play them.

Based on the outcome of this DeepMind project, others may soon take deep dives into designing game-playing AI, using AlphaStar as a case study.

Comparing Human and AI Game Performance Is Not Always Straightforward

One of the controversies surrounding using the AlphaStar AI to play “StarCraft II” is that some people assert it does superhuman things that aren’t necessarily skill-related. For starters, evidence suggests the AI acts faster than humans can. It was exceptionally efficient in combat, and it regularly surpassed the speeds human players managed.

Based on speed alone, it’s easy to see how some people might point out it’s too simplistic to say that the AlphaStar AI is “better” than humans because it can win against them. Is the AI truly more skilled due to what it learned over time, or do its speed-related advantages give it an edge humans could never hope to match?

It’s also possible DeepMind was unable to fully restrict AlphaStar from performing other actions that gave it advantages over humans. Or perhaps the system exploited the habits people show during gameplay. For example, many gamers engage in “spam clicks,” which do nothing to advance activity in a game. These repetitive clicks do help players warm up their fingers, and some assert that they make people more reactive to unexpected events.

One blogger pointed out that AlphaStar’s click accuracy was substantially higher than that of one of the best human players. Combined with the aforementioned speed, such precise clicking could let AlphaStar maintain a level of performance that’s out of reach for any human. That’s the equivalent of an athlete on performance-enhancing drugs competing against someone who isn’t.
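One way researchers try to keep such comparisons fair is to cap how quickly an agent is allowed to act. The sketch below is a hypothetical rate limiter that discards actions once an agent exceeds a per-minute budget; it illustrates the general idea rather than the specific constraints DeepMind applied to AlphaStar.

```python
from collections import deque

class ActionRateLimiter:
    """Allow at most `max_apm` actions per rolling 60-second window."""

    def __init__(self, max_apm=300):
        self.max_apm = max_apm
        self.timestamps = deque()  # times (in seconds) of recently allowed actions

    def allow(self, now_seconds):
        # Drop timestamps that have fallen outside the 60-second window.
        while self.timestamps and now_seconds - self.timestamps[0] >= 60.0:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_apm:
            self.timestamps.append(now_seconds)
            return True
        return False  # over budget: the action is discarded this frame


# Example: an agent that tries to act on every game step (roughly 22.4 steps per
# second at "StarCraft II" faster speed) gets throttled toward human-like rates.
limiter = ActionRateLimiter(max_apm=300)
allowed = sum(limiter.allow(step / 22.4) for step in range(22 * 60))
print(f"Actions allowed in one minute: {allowed}")  # capped at 300
```

Whether a cap like this makes the comparison truly fair is exactly the debate above: even a rate-limited agent can spend its action budget far more precisely than a person can.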

Game-Based AI May Help Revamp How the Technology Gets Developed

Using video games to help AI learn about the world is not a new concept, but it’s rapidly gaining ground. Problem-solving is the skill that lets AlphaStar respond to changes in its environment, and scientists believe the lessons from teaching AI to play games could apply to other use cases. Training AI in a game environment is not merely for fun; it could reshape how AI deals with circumstances no one could accurately predict.
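One practical reason game training can carry over is that games expose the same observe-act-learn loop as many other simulators. The sketch below uses an invented DummyGameEnv to show that loop; any environment with the same reset/step interface could be swapped in without changing the surrounding code.

```python
import random

class DummyGameEnv:
    """Stand-in environment with the usual reset/step interface.
    A StarCraft II wrapper, a robotics simulator, or a logistics model could
    expose the same methods, which is what makes the training loop reusable."""

    def reset(self):
        self.steps_left = 100
        return {"position": 0}                      # initial observation

    def step(self, action):
        self.steps_left -= 1
        observation = {"position": action}
        reward = 1.0 if action == 1 else 0.0        # toy reward signal
        done = self.steps_left == 0
        return observation, reward, done


def run_episode(env, policy):
    """Generic observe-act-learn loop; nothing here is game-specific."""
    observation = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = policy(observation)
        observation, reward, done = env.step(action)
        total_reward += reward
    return total_reward


if __name__ == "__main__":
    random_policy = lambda obs: random.choice([0, 1])
    print(f"Episode reward: {run_episode(DummyGameEnv(), random_policy):.1f}")
```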

For example, it could help AI systems learn to collaborate. Chinese researchers tested that theory when they made an AI agent coach teams in “StarCraft II.” The coached teams won 60% to 82% of the time, demonstrating the value of teamwork. People have long taken inspiration from other places when shaping their AI development projects. You can expect more of the same for the foreseeable future.

Takeaways to Apply to Future Projects

AlphaStar is undeniably an impressive achievement in the world of AI, but that’s not to say it’s flawless. Developers must remain mindful that game-playing AI could be instrumental in helping intelligent technologies figure out how to deal with changes in their environments. Getting desirable results during training requires watching for unintended consequences and determining how to overcome them.

Also, if developers create something that can win mentally demanding games against humans, that’s notable indeed. But developers must continually do everything in their power to ensure the AI isn’t beating people simply because it has an unfair advantage. Perhaps that issue demonstrates how we’re getting closer to the day when AI’s abilities will surpass our own.

Fascinating Things on the Horizon

The news about AlphaStar gives people plenty of reasons to stay abreast of what’s happening in AI. DeepMind’s work is meaningful, but other organizations are doing worthy work, too.
