Previously, I’ve looked at a variety of video games that have proven useful test-beds for AI research, with the likes of Ms. Pac-Man, Super Mario Bros. and more recently StarCraft. But in this video I want to look at a genre that is still relatively new whilst presenting exciting opportunities for AI research: the Multiplayer Online Battle Arena (MOBA). The MOBA genre is undoubtedly one of the most popular in gaming today, but what impact could this have upon AI research? I’m going to provide an overview of MOBAs as a genre, discuss what aspects of their design can prove interesting to AI research and look at some projects that are now bearing fruit both in academia and in corporate research labs.
Multiplayer Online Battle Arenas are an offshoot of Real-Time Strategy (RTS) games, originating with the Aeon of Strife map for Blizzard’s StarCraft, followed by the ‘Defence of the Ancients’ mod for WarCraft III: Reign of Chaos and its expansion The Frozen Throne. In MOBAs, players take charge of a single character or hero as part of a team. Teams of players are tasked with the destruction of the opposing team’s structures, with the critical structure found in the centre of each team’s base on one side of the map. Maps in MOBAs are typically broken up into lanes, with structures such as towers or turrets that each team must defend while assaulting those of their opponents. Players must work to destroy these structures alongside not only fellow players, but simple AI-controlled non-player characters (often referred to as creeps) that continually spawn from each team’s base and march forward, often unwittingly, to their eventual demise.
What makes this interesting and challenging for human and AI players alike is the need to manage resources and control our chosen hero unit, alongside consideration of both the macro and micro layers of strategy. This is in contrast to RTS games, which typically remove absolute control at the micro level, given that AI often handles the behaviour of individual units for a given task; this allows players to focus on the macro or higher-level strategies in play. In MOBAs, however, players need to manage the control of their own hero unit while also considering how their actions will influence the strategies in play at multiple levels of abstraction. This requires teamwork and communication between players, whilst also ensuring we effectively manage our hero’s skill tree and resources through use of items.
MOBAs are rather strict in their map design and general formula, with maps often adhering to a three-lane structure. The variety of heroes on offer is what governs the larger meta strategy of the game. Despite this, there is still sufficient breadth in this design for numerous games in the genre to emerge. While MOBAs are typified by two spiritual successors of Defence of the Ancients, namely Valve’s DOTA2 and Riot’s League of Legends, there is a host of other interpretations of this space, such as SMITE by Hi-Rez Studios and Blizzard’s Heroes of the Storm.
Why MOBA Research
MOBAs are a very popular form of competitive gaming and have effectively pushed RTS games and other popular genres aside to become the largest eSport in the world in terms of prize money awarded.
So where there’s a lot of money, there’s a need to provide a great experience for players, spectators, commentators and more. There are a variety of ways in which AI could potentially impact areas of MOBA development, consumption and engagement. Whilst much of this is still more fantasy than reality, there are some very real problems that AI could be tackling, including:
- Hero Drafting: where AI could recommend the selection or blocking of specific characters in a tournament context.
- Commentary: This is an incredibly demanding problem to explore and comes in a variety of forms, such as analysis of the hero draft phase as well as building up the key plays of the match. This last part is especially taxing, given it requires recognising what’s happening at that point in the game, isolating the key play that occurs, managing the camera for maximum dramatic effect and potentially even hypecasting the whole event as it occurs.
- Pre- and post-match analysis: Examining players’ movement patterns, their overall strategy with hero selection during play, as well as item usage such as wards, which temporarily expose parts of the map hidden by the fog of war.
- Content Patches and Balance: League of Legends and DOTA2 go through an ongoing process of balancing characters and items within the meta of the game. There is potential for AI tools to establish the major changes a given patch makes at the strategic level, to isolate exploits that need to be patched out and potentially to predict changes that will be made to the game based on current in-game performances.
- Community and Moderation: Sadly, MOBAs have a poor reputation for their communities and toxic behaviours, but what if AI could catch this as it is happening? This is an issue already being tackled by the likes of SpiritAI and their text-message monitoring platform Ally.
Many of these issues are relevant to other games and even physical sports such as football and basketball, but MOBAs are a perfect platform for this given they provide one thing that many machine-learning-driven AI systems need: data. DOTA2 and League of Legends provide complete replay data of matches, and there is a significant number of tools and communities out there writing parsers that enable hobbyists and researchers to extract that data for their own interests. And there’s plenty of data to go around, with over 1000 years of DOTA2 gameplay occurring daily.
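To give a feel for what working with this data looks like, here is a minimal sketch of pulling statistics out of a parsed match summary. The JSON schema below is invented for illustration; real replay parsers expose far richer (and differently named) fields.

```python
import json

# Hypothetical replay-summary schema: real DOTA2/LoL replay parsers expose far
# richer data; the field names here are invented for illustration.
sample = """
{
  "match_id": 123,
  "duration_s": 2400,
  "players": [
    {"hero": "Invoker", "team": "radiant", "kills": 7, "deaths": 2},
    {"hero": "Axe", "team": "dire", "kills": 3, "deaths": 9}
  ]
}
"""

def team_kill_totals(raw_json):
    """Aggregate kills per team from a parsed match summary."""
    match = json.loads(raw_json)
    totals = {}
    for player in match["players"]:
        totals[player["team"]] = totals.get(player["team"], 0) + player["kills"]
    return totals

print(team_kill_totals(sample))  # {'radiant': 7, 'dire': 3}
```

Researchers typically run aggregations like this over thousands of matches before feeding the results into their models.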
MOBA Research Projects
So while this research field is still relatively new, I’m going to walk through some work from the last couple of years and give you a flavour of just how far it’s come, but also how much more work is to be done.
First up, hero drafting: the process of selecting heroes for your team and potentially blocking them for the opposing team. Can we predict the hero a team will select? That’d be useful not just for match commentary but also in-game strategy. Early research in this area (Summerville et al, 2016) took a stack of DOTA2 matches from the DatDOTA site around the time of the Frankfurt Major in 2015 and trained both a Bayes net and an LSTM neural network against the draft phase. This isolated choice number, team, choice type and preceding choices as part of the decision-making process. The resulting Bayes net wasn’t terribly accurate, but the LSTM wasn’t far off the actual comments made by commentators during those matches.
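To make the prediction task concrete, here is a deliberately simple stand-in for the idea: rather than a Bayes net or LSTM, it just counts which hero most often follows a given hero in historical drafts and predicts that successor. The draft sequences are invented, not from the paper’s dataset.

```python
from collections import Counter, defaultdict

# Toy stand-in for draft prediction: count hero-to-hero pick successions and
# predict the most frequent follower. Drafts below are invented examples.
drafts = [
    ["Axe", "Invoker", "Lion", "Sniper"],
    ["Axe", "Invoker", "Sniper", "Lion"],
    ["Axe", "Lion", "Invoker", "Sniper"],
]

def build_successor_counts(drafts):
    """Tally how often each hero is picked immediately after another."""
    counts = defaultdict(Counter)
    for draft in drafts:
        for prev, nxt in zip(draft, draft[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev_pick):
    """Return the hero most often picked after prev_pick, or None if unseen."""
    if prev_pick not in counts:
        return None
    return counts[prev_pick].most_common(1)[0][0]

counts = build_successor_counts(drafts)
print(predict_next(counts, "Axe"))  # Invoker follows Axe in 2 of 3 drafts
```

The appeal of the LSTM in the actual paper is that it conditions on the whole pick sequence plus features like choice number and team, rather than just the previous pick as this sketch does.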
What can we learn about players from their gameplay and how can this be put to good use? There’s a variety of research emerging in this field that utilises the replay data to full effect.
One notable example is (Bhattacharya and Sabik, 2015), who developed a prototype player-personalised recommendation system for hero and playstyle recommendations. This work analysed 500 matches in top-tier competition and clustered data from winning teams using Principal Component Analysis, Locally Linear Embedding and Affinity Propagation. The resulting system can look at a snapshot of the current state of the game and recommend the most successful heroes for that period of the game, as well as give advice on goals and objectives players should focus on, all of which is based on expert gameplay data.
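A rough sketch of the core idea: project per-match feature vectors (gold, XP, kill counts and so on at a given point) down to a few principal components, then recommend the hero from the most similar winning snapshot. The data is random, and a nearest-neighbour lookup stands in for the Affinity Propagation clustering used in the actual work.

```python
import numpy as np

# Invented data: 50 winning-team snapshots, 6 features each (e.g. gold, XP,
# kills at a given minute), plus the hero picked in each snapshot.
rng = np.random.default_rng(0)
features = rng.normal(size=(50, 6))
heroes = [f"hero_{i % 5}" for i in range(50)]

def pca_project(X, n_components=2):
    """Centre the data and project onto the top principal components via SVD."""
    centred = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T, X.mean(axis=0), vt[:n_components]

projected, mean, components = pca_project(features)

def recommend(snapshot):
    """Project the current game state and return the nearest snapshot's hero."""
    point = (snapshot - mean) @ components.T
    nearest = np.argmin(np.linalg.norm(projected - point, axis=1))
    return heroes[nearest]

# A snapshot from the data is its own nearest neighbour, so we recover its hero.
print(recommend(features[3]))
```

Clustering the projected points (rather than doing a raw nearest-neighbour lookup) is what lets the real system characterise recurring winning playstyles instead of memorising individual matches.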
Meanwhile, (Drachen et al, 2016) shows formative research in analysing zone changes and the distribution of team members in matches of DOTA2 using a process known as spatio-temporal analysis. The resulting system, once again using a form of time-series clustering over 196 matches across four skill tiers, enabled the researchers to highlight significant differences in strategic play between expert and novice players. This highlighted how expert players’ movement patterns and behaviour are not only more refined than novices’ but carry unique identifiers during the early-mid and mid-late game.
Moving into analytics, (Schubert et al, 2016) looks at building the foundations for future eSports analytics through encounter analysis: reading the in-game data to isolate periods of play where multiple heroes from opposing teams are in range to affect one another, with a decisive encounter established as one in which heroes get killed.

Using the spatio-temporal analysis process, this system assessed over 400 DOTA2 replays to establish a variety of different encounter types, break down where encounters largely occur throughout a match and even begin to predict success in encounters based on other in-game data and previous performances.
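The in-range test at the heart of encounter detection can be sketched very simply: flag any pair of opposing heroes whose distance falls within attack range. The positions, ranges and `Hero` structure below are invented; the real system also tracks encounter boundaries over time and their outcomes.

```python
from dataclasses import dataclass

@dataclass
class Hero:
    name: str
    team: str
    x: float
    y: float
    attack_range: float

def in_encounter(a, b):
    """Opposing heroes whose distance is within either hero's attack range."""
    if a.team == b.team:
        return False
    dist = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
    return dist <= max(a.attack_range, b.attack_range)

def detect_encounters(heroes):
    """Return all cross-team pairs currently able to affect one another."""
    pairs = []
    for i, a in enumerate(heroes):
        for b in heroes[i + 1:]:
            if in_encounter(a, b):
                pairs.append((a.name, b.name))
    return pairs

frame = [
    Hero("Sniper", "radiant", 0, 0, 650),
    Hero("Axe", "dire", 300, 400, 150),     # distance 500, inside Sniper's range
    Hero("Lion", "dire", 5000, 5000, 600),  # far away, no encounter
]
print(detect_encounters(frame))  # [('Sniper', 'Axe')]
```

Running a check like this per frame, then merging consecutive flagged frames into intervals, is what turns raw positional data into the discrete "encounters" the paper analyses.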
Alongside this is (Yang, Harrison & Roberts, 2014), which focuses on identifying combat patterns in DOTA2. These patterns are built from graph models of in-game behaviour during specific windows of play, the interactions that take place and their outcomes. Through use of rule-based classifiers, a series of rules and strategies is formulated by the system, with a confidence rating of whether they should be employed at a given point. Whilst still far from complete, analysis against new matches outside of the training set proved valid and accurate in over 80% of test cases.
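The confidence-rating idea can be illustrated with a toy rule: from labelled combat windows, estimate how often a candidate rule ("the team that scores the first kill in a fight wins the fight") actually holds. The windows below are invented, not drawn from the paper's graph models.

```python
# Invented combat windows: who drew first blood and who won the fight.
windows = [
    {"first_kill": "radiant", "winner": "radiant"},
    {"first_kill": "radiant", "winner": "radiant"},
    {"first_kill": "dire", "winner": "radiant"},
    {"first_kill": "dire", "winner": "dire"},
]

def rule_confidence(windows):
    """Fraction of fights won by the team that drew first blood."""
    holds = sum(1 for w in windows if w["first_kill"] == w["winner"])
    return holds / len(windows)

print(rule_confidence(windows))  # rule holds in 3 of 4 windows -> 0.75
```

The actual system mines many such rules from graph-encoded fight structure and keeps only those whose confidence justifies acting on them mid-match.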
This is far from an exhaustive overview, but it highlights that there’s a lot of work happening in this field and that it’s been going for a few years now. Outside of these topics, there has also been work in areas such as item analysis and in-game economies, dynamic difficulty adjustment for solo players, online ranking algorithms for ensuring players don’t matchmake with undesirables and, of course, writing bots that can actually play MOBAs.
Arguably the most well-known research happening in MOBAs right now is in creating bots that can play the game themselves. While there is work in this area happening in academia, the biggest and most notable effort to date is from the corporate world. Headlines were made back in August 2017 as a new AI bot was paraded in front of fans and professionals at The International: the biggest DOTA2 event in the world, hosted by game developer Valve as the closing event of the annual pro circuit.
The bot itself was developed by OpenAI, who are still tight-lipped on how it works. But it is reliant on a custom interface based on the public DOTA2 bot API that reflects human observations and in-game actions, and a configuration that allows it to use machine learning techniques to gradually learn the game through self-play. This is hosted on a cloud system to enable massive processing power, with training sped up even further by running the game almost entirely on GPUs. The learning bot is also capable of being coached by being fed good strategies, either for specific circumstances or to bootstrap base behaviours.
The bot was gradually trained and improved over a period of five months in preparation for The International, with its performance gradually improving over time. In early June it still struggled to beat players with a DOTA2 matchmaking rating, or MMR, of 1.5K (equivalent to less than 15% of all players), before ultimately defeating professional players such as Arteezy (MMR of 10K), Sumail (8.3K) and former world champion Dendi (7.3K), with this battle being streamed across the internet for the world to see.
Is this the future of AI research? Is this a significant technical achievement?
Yes… and no.
Firstly, we need to remember that DOTA2, like any MOBA, is a team game: 5 vs 5, in fact. Being capable of reacting to, anticipating and outperforming a human in a 1 vs 1 capacity, even against professional players, was an inevitability. Training machine learning algorithms to work in reactive and dynamic contexts such as this is a relatively attainable goal. Don’t get me wrong, it’s not easy by any stretch and full credit to the team for constructing it, but there is still a lot of work to be done before AI can successfully compete at the 5 vs 5 level.
OpenAI hosted a LAN event at The International for other pro players to take their shot at the bot. They have since conceded it was defeated, but failed to give actual numbers. Despite this, over 50 players have come forward detailing their experiences. Like any machine learning bot, you can’t ensure it is infallible. As such, players found a handful of strategies the bot couldn’t anticipate and exploited them to the best of their ability.
League of Legends and DOTA2 will continue to not only hold the attention of game players and eSports communities around the world in the coming years, but also become increasingly prominent in AI research. Hopefully after reading this you can understand why I would anticipate a continued growth of research in this field, given it’s such an enticing test bed for artificial intelligence.
If you’re keen to learn more about some of the research highlighted in this article, please consult the references listed below.
- Michael Cook, Adam Summerville, Simon Colton, 2017. Off The Beaten Lane: AI Challenges In MOBAs Beyond Player Control
- OpenAI, 2017. OpenAI at The International
- Adam Summerville, Michael Cook & Ben Steenhuisen, 2016. Draft-Analysis of the Ancients: Predicting Draft Picks in DotA 2 Using Machine Learning
- Rohit Bhattacharya & Azwad Sabik, 2015. Data-driven Recommendation Systems for Multiplayer Online Battle Arenas
- Anders Drachen, Matthew Yancey, John Maguire, Derrek Chu, Iris Yuhui Wang, Tobias Mahlmann, Matthias Schubert & Diego Klabjan, 2016. Skill-Based Differences in Spatio-Temporal Team Behavior in Defence of The Ancients 2 (DotA 2)
- Matthias Schubert, Anders Drachen & Tobias Mahlmann, 2016. Esports Analytics Through Encounter Detection
- Pu Yang, Brent Harrison & David L. Roberts, 2014. Identifying Patterns in Combat that are Predictive of Success in MOBA Games