Now we get to the gravy! In Parts 1 and 2, we introduced the idea of rewards and how they help us to make good decisions. This is then reinforced in how we play and create video games, given that these reward structures must be conducive to an enjoyable experience of gameplay.
In Part 3, we begin to look at how all of this matters in the context of AI. We start by looking at the fundamental principle of AI development: agents.
Artificial Intelligence is Born
Historically, AI researchers had a bit of bother agreeing on how to pursue this scientific field. This is rather understandable: AI was still being discussed only in theoretical terms until the mid-1940s, with disparate strains of research emerging in the early 1950s. It was only in 1955 that researcher John McCarthy began to rally his peers to define not only what artificial intelligence is, but why it is in many respects a completely separate area of science from computer science (McCarthy et al., 2006).
While research in AI has continued ever since, the prevailing theories and ideas about what AI is and how it should be tackled have varied over time. Since around the early 1990s, however, the consensus has largely formed around the notion of intelligent agents.
An intelligent agent is one that takes in information about the world and decides which of the actions it is capable of performing to adopt. This typically relies on a number of elements:
- An agent must be able to perceive its environment. It has sensors that allow it to establish an idea of what is happening in the world.
- Based upon this sensory input and/or all input it has received to-date, the agent will make some kind of decision.
- This decision will result in action, courtesy of actuators assigned to the agent.
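The three elements above can be sketched as a simple sense-decide-act loop. Everything here, from the `ReflexAgent` class to the percept strings and the rule table, is a hypothetical illustration rather than code from any particular game:

```python
# A minimal sketch of the sense-decide-act cycle described above.
# The percepts, actions, and rule table are made-up stand-ins.

class ReflexAgent:
    """Chooses an action each frame based on its current percept."""

    def __init__(self, rules):
        # rules maps a percept (e.g. "enemy_visible") to an action.
        self.rules = rules

    def perceive(self, environment):
        # Sensors: read whatever the environment exposes this frame.
        return environment["percept"]

    def decide(self, percept):
        # Decision: look up an action, falling back to a default.
        return self.rules.get(percept, "idle")

    def act(self, environment):
        # One tick of the cycle: perceive, then decide on an action
        # for the actuators to carry out.
        percept = self.perceive(environment)
        return self.decide(percept)

agent = ReflexAgent({"enemy_visible": "attack", "low_health": "retreat"})
print(agent.act({"percept": "enemy_visible"}))  # attack
print(agent.act({"percept": "nothing"}))        # idle
```

A real game NPC would run `act` once per frame, with the environment dictionary replaced by whatever state the engine exposes.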
In video games we can see how easily this matches up: percepts of the environment can be accrued from information in the game. This applies just as much to humans as it does to non-player characters (NPCs). Humans rely upon what they can see on the screen, as well as audio cues or controller feedback. Meanwhile, a non-player character can query the information available on a given frame and use it to choose an action: one of the handful of behaviours it can execute in that frame of gameplay.
What is interesting is that the information a computer can use may not be the same as that available to a human. This is either because the developers give the NPC less information, or because they give it extra information that a human could not know at that point. This is a point we will return to later.
However, this definition alone is not sufficient for our purposes: it only tells us that the agent does something upon receiving input. We need something more specific.

The critical word in this chapter is rational, and it is the key component of the agent idea we have discussed so far. A rational agent is one which, based upon the information it has right now, makes the best decision it possibly can.

How does that differ from the intelligent agents discussed already? An intelligent agent will make a decision based upon the information it is given, but a rational agent will always make the best decision available. It is what separates artificial intelligence from its human creators.
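One way to make this concrete is to treat rationality as choosing the action with the highest estimated value, given only what the agent knows right now. The actions and value estimates below are invented for illustration:

```python
# A sketch of rationality as "pick the best action given what you know now".
# The candidate actions and their estimated values are assumptions.

def rational_choice(action_values):
    """Return the action whose estimated outcome value is highest.

    action_values maps each available action to the agent's current
    estimate of how good its outcome will be.
    """
    return max(action_values, key=action_values.get)

estimates = {"attack": 0.2, "take_cover": 0.7, "reload": 0.5}
print(rational_choice(estimates))  # take_cover
```

The key point is that the choice is only as good as the estimates: a rational agent is best relative to the information it has, not omniscient.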
See, humans are really fickle things. In fact, you could argue, we’re actually a rather strange species: our actions are often influenced by things such as our emotional state, our political leanings or religious beliefs. While these things are very important to us, they seldom have any real place to be involved in our decision making process. It is rare that humans make decisions that are entirely rational because our emotions, allegiances and beliefs drive our behaviour. When you consider how infrequently humans make the right choice at a critical moment, you wonder how we have survived for so long.
It also makes you look at how AI is portrayed in science-fiction and wonder whether the characters we know and love from TV, games and movies reflect how AI operates. Do these characters always make the most rational of decisions?
The Quality of Information and Action
A game character that consistently makes rational decisions can be considered a proper AI bot. However, when it comes to games we need to consider the quality of the information that we give the bots, as well as how effectively they can act in the world.
If we were to, rather brazenly, apply the best AI code we could to an NPC in a first-person shooter (FPS), a ‘bot’, with no constraints on its behaviour, it could become a god-like killing machine that always wipes out its human opponents. This can occur for a number of reasons, such as:
- You decided to let the bot always know where other players are, rather than let it ‘find’ other players like humans do.
- It may be able to make function calls in the game’s code to determine what the player is doing on a given frame, allowing it to know exactly what the opponent is doing.
- There is no ‘noise’ in the agent’s sensors or its behaviour. In other words, it has a 0% margin for error and will always shoot perfectly.
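One common remedy for the last of these points is to deliberately inject noise into the bot’s sensing and aiming. The sketch below assumes a simple 2D game with made-up noise levels; `noisy_sense` and `noisy_shot` are hypothetical helpers, not functions from any real engine:

```python
import random

# A sketch of deliberately imperfect perception and aim, assuming a 2D
# game where the bot estimates an opponent's (x, y) position. The noise
# magnitudes are invented tuning values.

def noisy_sense(true_pos, sensor_noise=5.0):
    """Perceive a position with Gaussian error, so the bot must 'find' players."""
    x, y = true_pos
    return (x + random.gauss(0, sensor_noise), y + random.gauss(0, sensor_noise))

def noisy_shot(aim_pos, aim_error=3.0):
    """Fire at a position with some spread, removing the 0% margin for error."""
    x, y = aim_pos
    return (x + random.gauss(0, aim_error), y + random.gauss(0, aim_error))

# The bot senses an opponent imperfectly, then shoots imperfectly at
# where it *thinks* the opponent is.
perceived = noisy_sense((100.0, 50.0))
impact = noisy_shot(perceived)
```

Tuning `sensor_noise` and `aim_error` becomes a difficulty dial: zero gives back the god-like killing machine, while larger values make the bot fallible in a more human way.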
Simply put, this can remove all the fun of a game. If the bots are too hard to beat, then players become frustrated and may even call into question whether the AI is ‘cheating’ by being able to respond to events in the game far faster than a human can.
At the end of the day, video games are all about being fun. Now, that’s a bit of a problem, and we will address it in Part 4.
Russell, S., & Norvig, P. (2009). Artificial Intelligence: A Modern Approach, Chapter 2.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), 12.