GVG-AI: The Challenge of General Intelligence in Games (Part 2)

In Part 1 of this series, we discussed the recent release of a new artificial intelligence competition: the General Video Game AI Competition (GVG-AI).  As is to be expected of an AI competition, it draws researchers towards problems to which no satisfactory solutions have yet been found. In the case of the GVG-AI competition, the problem is implied in the name: the pursuit of general intelligence that is capable of playing multiple games using the same AI controller. It’s an interesting problem, given that while AI as a science has addressed a large number of unique problems across a range of sub-disciplines, gluing it all together remains by-and-large an unanswered problem. In part 2, we look at the software framework that has been released by the competition organisers, explore what is in store for potential developers, and consider how we might even begin writing our own submission.

Playing Boulder Dash in the GVG-AI framework.

The GVG-AI Framework

The software framework is made publicly available via GitHub, with a collection of documentation included both with the software download and via the competition website. To the credit of the team responsible for maintaining the framework, notably Diego Perez and Spyridon Samothrakis, the software is up-to-date and continually being improved: not only addressing issues but also adding functionality based upon user requests.

The framework is designed to address a number of key elements required by the competition:

  1. It’s a game engine: while perhaps not as lavish as Unity or Unreal Engine, the framework can build small and complete 2D games. Video games such as Space Invaders, Frogger and Missile Command can be built in the engine.
  2. Users can create their own games with ease: The framework adopts the recently minted Video Game Description Language (VGDL), first discussed in (Ebner et al., 2013) and implemented by Tom Schaul. The engine allows games to be written in the language and then parsed at runtime to be played. As we will see shortly, the range of games it can define is fairly broad.
  3. Users can create their own bots with ease: developers can have a simple working bot within 5 minutes. The framework then allows for you to test your bot either on one or multiple games to assess its effectiveness.

We will first look at the types of games that developers must consider when writing their own bot. Afterwards we will then look at how the framework allows for developers to write their own bots and what information they are privy to.

The GVG-AI implementation of Space Invaders.

The Games

The competition framework provides a number of games that are already defined for the user to test against. Many of these games are clones of titles that date back to the era of the Atari 2600. While some of our readers – myself included – may not have been born when these games were originally made, the challenge is still fairly evident.

  • Aliens: A clone of ‘Space Invaders’, where the player uses the flak avatar to shoot missiles towards scrolling enemies while avoiding enemy fire.
  • Boulderdash: Players dig through caverns looking for gems that will boost their score. However they must be wary of dislodging boulders that will kill them upon contact.
  • Butterflies: The player must capture all of the butterfly NPCs that are roaming the environment. The environment is littered with cocoons that will spawn a butterfly should an active butterfly come into contact with them. The player must capture all butterflies before the final cocoon hatches.
  • Chase: The player must chase down a collection of ‘goat’ NPCs that continue to flee from the player. Should the player come into contact with a goat it will be killed. However, the player must be wary of angry goats that can retaliate and kill the player.
  • Frogs: A clone of ‘Frogger’, where the player must reach the goal location having avoided a collection of moving obstacles and navigated difficult territory.
  • Missile Command:  Perhaps unsurprisingly, this is a clone of ‘Missile Command’, where the player must defend their bases from incoming enemy fire.
  • Portals: A quirky game where the player must reach the exit by navigating through a series of portals.
  • Sokoban: A clone of the original ‘Sokoban’, where the player must push objects to specific locations and reach the exit.
  • Survive Zombies: The player attempts to navigate the environment to collect honey that has been left by bees. However, the player needs to avoid zombies that are moving around the map.
  • Zelda: The player must find the key in order to reach the exit and avoid enemy NPCs. In addition, they have access to a sword that they can use to attack enemies nearby.

If we consider this collection of games, it’s evident that there are a range of different behaviours that any player would be expected to exhibit. This can range from shooting enemies to collecting coins, from avoiding moving obstacles to solving small puzzles. When we consider that one sole agent would be responsible for addressing all of these behaviours, the challenge of the competition becomes even more apparent.

The Frogger clone ‘Frogs’ in the starter kit.

What makes this challenge even greater is that these are but a sample of the games an entrant may face. Final assessment of a bot is conducted on a server run by the competition organisers. On this server, players will be subject not only to the games listed here, but also to completely new games that we have not yet seen. As a result, we have to consider whether our agent will be capable of playing games it has never been tested on until the day of the competition evaluation.

Creating Your Own Agent

So, you’ve decided that you want to create your own GVG-AI bot? Fantastic! There are a number of limitations imposed upon you that you need to consider when writing it. Firstly, the bot can only utilise a small amount of time to make a decision: typically we are looking at around 40ms per move, which is not a lot of time to do any serious computational work. In addition, the bot must be single-threaded, so we cannot run AI processes on a separate thread from the main game.
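To get a feel for what a 40ms budget means in practice, here is a minimal, self-contained sketch of an ‘anytime’ decision loop. Everything in it is a toy stand-in: the wall-clock deadline stands in for the framework’s timer, the four candidate actions and the random evaluation stand in for real actions and rollout evaluations.

```java
import java.util.Random;

public class BudgetedAgent {

    // Toy stand-in for the framework's timer: a simple wall-clock deadline.
    static long deadline;

    static long remainingMillis() {
        return deadline - System.currentTimeMillis();
    }

    // Keep refining a candidate action until the budget is nearly spent,
    // leaving a small safety margin so we never overrun the time window.
    static int anytimeDecision(long budgetMillis) {
        deadline = System.currentTimeMillis() + budgetMillis;
        Random rng = new Random();
        int bestAction = 0;
        double bestValue = Double.NEGATIVE_INFINITY;
        while (remainingMillis() > 5) {        // 5ms safety margin
            int candidate = rng.nextInt(4);    // pretend the game offers 4 actions
            double value = rng.nextDouble();   // stand-in for a rollout evaluation
            if (value > bestValue) {
                bestValue = value;
                bestAction = candidate;
            }
        }
        return bestAction;                     // best found before time ran out
    }

    public static void main(String[] args) {
        System.out.println("Chose action " + anytimeDecision(40));
    }
}
```

The important property is that the loop can be interrupted at any point and still return its best answer so far, which is exactly the shape a search algorithm needs under a hard per-frame deadline.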

The bot must extend a given abstract class, which expects one method to be implemented: act().  The act() method carries two parameters that supply the information the agent needs to make its move. Firstly, there is an instance of the StateObservation class: this instance tells us everything that the bot can ‘see’ on the screen in that given frame. It’s important to appreciate that a bot can only see what is happening in that frame since, well… that’s all that a human can see too! The StateObservation class does help a little by providing methods that allow us to find specific types of objects in a given frame, such as the locations of all enemy NPCs. Of course, while we know they are there, we do not understand how they work in the context of the game.

In addition, the StateObservation class can provide us with instances of the Event class. These tell us the events that have occurred in the game: such as missiles colliding with NPCs. This event information is rather limited, but does tell us a thing or two about what has happened previously. It is through our own experimentation that we can begin to infer the rules of play from these events, if we wish to do so.
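As a hedged illustration of that idea, an agent could tally the average score change observed for each pair of colliding sprite types. The Collision class and the pairing of events with score deltas below are inventions for this sketch, not the framework’s Event API; the real Event class records sprite type ids and positions, and correlating those with score changes is something the agent must do itself.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class EventStats {

    // Toy event record: which sprite types collided, and how the score
    // changed on the frame it happened.
    static class Collision {
        final String activeType, passiveType;
        final double scoreDelta;
        Collision(String a, String p, double d) {
            activeType = a; passiveType = p; scoreDelta = d;
        }
    }

    // Average score change per collision type: a crude way of learning
    // "shooting aliens is good, touching boulders is bad" from experience.
    static Map<String, Double> averageDelta(List<Collision> events) {
        Map<String, double[]> acc = new HashMap<>(); // key -> {sum, count}
        for (Collision c : events) {
            String key = c.activeType + "->" + c.passiveType;
            double[] a = acc.computeIfAbsent(key, k -> new double[2]);
            a[0] += c.scoreDelta;
            a[1] += 1;
        }
        Map<String, Double> out = new HashMap<>();
        acc.forEach((k, a) -> out.put(k, a[0] / a[1]));
        return out;
    }

    public static void main(String[] args) {
        List<Collision> log = new ArrayList<>();
        log.add(new Collision("missile", "alien", 1.0));
        log.add(new Collision("missile", "alien", 1.0));
        log.add(new Collision("avatar", "boulder", -1.0));
        System.out.println(averageDelta(log));
    }
}
```

Even statistics this crude can steer an agent towards repeating collisions that historically raised the score and avoiding those that lowered it.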

Another important thing to consider is that the StateObservation class has a forward model built into it. This means we can simulate what the next state of the game will look like by querying the advance() method. This is useful for search algorithms, since they need to simulate several states ahead of the current position to judge the best path to take. It is critical for algorithms such as Monte Carlo Tree Search (Browne et al., 2012), which rely upon simulating ahead.
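The copy-and-advance pattern can be sketched with a toy state. ToyState here is an invention for illustration, not the framework’s StateObservation, but the shape of the code (copy the state, advance the copy, score the result) is the same one a real controller would use.

```java
public class Lookahead {

    // Toy stand-in for StateObservation: an avatar on a line trying to
    // reach a goal cell. copy() + advance() mimic the forward model.
    static class ToyState {
        int avatar;
        final int goal;
        ToyState(int avatar, int goal) { this.avatar = avatar; this.goal = goal; }
        ToyState copy() { return new ToyState(avatar, goal); }
        void advance(int action) { avatar += action; }      // action: -1, 0 or +1
        double score() { return -Math.abs(goal - avatar); } // closer is better
    }

    // One-ply lookahead: simulate each action once on a copy of the
    // current state and keep whichever leads to the best score.
    static int bestAction(ToyState s, int[] actions) {
        int best = actions[0];
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int a : actions) {
            ToyState next = s.copy();
            next.advance(a);
            if (next.score() > bestScore) {
                bestScore = next.score();
                best = a;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        ToyState s = new ToyState(0, 5);
        // One-ply search steps towards the goal.
        System.out.println(bestAction(s, new int[]{-1, 0, 1}));
    }
}
```

Deeper search is just this loop applied recursively, which is where the time budget starts to bite.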

As a reminder of our time constraints, the method also receives an instance of the ElapsedCpuTimer class, which gives an indication of how much time we have left to complete our ‘thinking’ for this frame.

A (Really) Simple GVG-AI Bot

Despite all the challenges ahead, it is fortunate that writing a simple bot is not a major challenge. The simplest possible bot, which relies upon making a random move each frame, can be seen below:

import java.util.ArrayList;
import java.util.Random;

import core.game.StateObservation;
import core.player.AbstractPlayer;
import ontology.Types;
import tools.ElapsedCpuTimer;

public class Agent extends AbstractPlayer
{
	protected Random randomGenerator;

	public Agent(StateObservation so, ElapsedCpuTimer elapsedTimer){
		randomGenerator = new Random();
	}

	public Types.ACTIONS act(StateObservation stateObs, ElapsedCpuTimer elapsedTimer) {
		// Ask the framework which actions the avatar can take in this game...
		ArrayList<Types.ACTIONS> actions = stateObs.getAvailableActions();
		// ...and simply pick one of them at random.
		int index = randomGenerator.nextInt(actions.size());
		return actions.get(index);
	}
}

 

This agent simply looks at the actions available in the current frame and selects one at random. It gives no consideration to whether this is a smart move to make or what consequences the action will have. While it takes little effort to create something functional, creating a bot that is intelligent is a significant feat. In the framework download, we also find examples of simple evolutionary algorithms as well as an implementation of the Monte Carlo Tree Search algorithm. However, when we let these loose on the sample games, they do not perform much better than the ‘smart’ random player that is also provided. This opens the question of just what is needed to perform well in this competition.
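As a stepping stone between purely random play and full MCTS, here is a hedged sketch of flat Monte Carlo: for each action, run several random rollouts through a forward model and keep the action with the best average outcome. The ToyState forward model is again an invention for illustration rather than the framework’s API.

```java
import java.util.Random;

public class FlatMonteCarlo {

    // Toy forward model: an avatar on a line, scored by distance to a goal.
    static class ToyState {
        int avatar;
        final int goal;
        ToyState(int avatar, int goal) { this.avatar = avatar; this.goal = goal; }
        ToyState copy() { return new ToyState(avatar, goal); }
        void advance(int action) { avatar += action; }      // action: -1, 0 or +1
        double score() { return -Math.abs(goal - avatar); }
    }

    // Flat Monte Carlo: simulate each action followed by (depth - 1) random
    // actions, several times over, and average the final scores. The action
    // with the best average wins.
    static int flatMC(ToyState s, int[] actions, int rollouts, int depth, Random rng) {
        int best = actions[0];
        double bestAvg = Double.NEGATIVE_INFINITY;
        for (int a : actions) {
            double total = 0;
            for (int r = 0; r < rollouts; r++) {
                ToyState sim = s.copy();
                sim.advance(a);
                for (int d = 1; d < depth; d++) {
                    sim.advance(actions[rng.nextInt(actions.length)]);
                }
                total += sim.score();
            }
            double avg = total / rollouts;
            if (avg > bestAvg) {
                bestAvg = avg;
                best = a;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        ToyState s = new ToyState(0, 100); // goal far to the right
        System.out.println(flatMC(s, new int[]{-1, 0, 1}, 50, 3, new Random(42)));
    }
}
```

MCTS improves on this by growing a tree that reuses what earlier rollouts learned, rather than treating every action independently; but as the sample controllers show, even that is no guarantee of strong general play.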

Closing

As we round up part 2, we’ve given an overview of the research problems the GVG-AI competition hopes to address and of what the competition has in store for developers. We will return to discuss GVG-AI after the first round of the competition takes place at the IEEE Computational Intelligence and Games conference in Dortmund, Germany in August 2014. We will look at how well entrants performed not only against one another, but against the larger task at hand. In addition, provided it is finished, we will discuss the design of our own submission!

References

  • Browne, C., Powley, E., Whitehouse, D., Lucas, S.M., Cowling, P.I., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S. and Colton, S., 2012. “A Survey of Monte Carlo Tree Search Methods”. IEEE Transactions on Computational Intelligence and AI in Games, 4(1).
  • Ebner, M., Levine, J., Lucas, S.M., Schaul, T., Thompson, T. and Togelius, J., 2013. “Towards a Video Game Description Language”. Artificial and Computational Intelligence in Games, Dagstuhl Follow-Ups, 6(1). Dagstuhl Publishing, pp. 85-100.
Written by: Tommy Thompson

Tommy is the writer and producer of AI and Games. He's a senior lecturer in computer science and researcher in artificial intelligence with applications in video games. He's also an indie video game developer with Table Flip Games. Because y'know... fella's gotta keep himself busy.
