Facing Your F.E.A.R.

An overview of the Goal Oriented Action Planning technique adopted in 2005’s F.E.A.R. by Monolith Productions.

When being introduced to the foundations of Artificial Intelligence, either in a university class or via textbooks, we often focus on some of these core principles:

  • The notion of an agent: one that is rational and can make intelligent decisions in order to achieve a goal as efficiently as possible.
  • The idea of a state space: where a unique combination of values pertinent to the world defines a state, and these states are connected to one another via actions.
  • The ability to search a state space: uninformed algorithms such as breadth-first and depth-first search, as well as informed algorithms like A* that utilise heuristics to optimise the final result.
  • Finite State Machines (FSMs) and Behaviour Trees: managing the goals and stimuli for an agent that subsequently dictate what it searches for.  In effect, sculpting prescribed yet intelligent behaviour.

Now it’s obvious that we can create something intelligent using these tools alone.  Of course the real question is just how intelligent can it be?  And could it be put to use in games?  To be honest, FSMs and A* have been the norm in the games industry for many years.  However, as we see in this case study, utilising them in a smart manner can give a massive return on the investment.

An intensive first-person shooter with a horror twist.

F.E.A.R.: First Encounter Assault Recon

F.E.A.R.: First Encounter Assault Recon is a first-person shooter by Monolith Productions that was released on PC in 2005, with a console port to PS3 and Xbox 360 released in 2006.  The game is focussed on a F.E.A.R. squad, a U.S. special forces unit, who are sent on a mission to neutralise a target that has recently taken control of a breed of telepathically controlled soldiers from private military contractor Armacham Technology Corporation.  What seems like a routine operation takes a dark and violent turn, as strange paranormal phenomena manifesting in the form of a young girl begin to haunt both friend and foe, leading to a bloody confrontation with the player.

As can be seen by checking the long list of available reviews, the game was praised at the time for the intelligent and challenging AI of its soldiers.  These characters are fast, ruthless and coordinated.  They do not suffer from the same dull and uninspired behaviour as many NPCs that precede (and even succeed) them in first-person shooters.

Having played through the game myself, it strikes an interesting balance between the emergent gameplay of the intense firefights and the tightly scripted horror sequences.  I do feel that the game suffers from pacing issues, with a real dip in momentum in the second act.  However, the final act is well told and altogether a lot of fun.

Building the AI of F.E.A.R.

When dealing with an agent, be it for games or any other domain, typically we are focussed on three key things:

  • What information do we have about the world?  Knowledge that we can encapsulate within a state.
  • What actions does the agent have available to them?  How can they transform that state into different circumstances that may ultimately prove more desirable?
  • Lastly, what are the agent’s goals?  By understanding its intent, we can use a search algorithm to find the optimal state transitions (via actions) to reach that goal.

These principles continue to hold, though in this case they are changed ever so slightly to suit the needs of gameplay.  Typically when we think of AI goals, they are pointed towards what we want to do: we want to navigate an environment and reach a destination, we want to pick up and move an object, we want to figure out how best to rescue people from a natural disaster.  Each of these goals in some sense actually expresses the actions we want to take; they are not vague, they are explicit.

The overall goal of NPCs in F.E.A.R. is not to kill the player. Their goal is to eliminate ‘threat’.

This is where the AI of F.E.A.R. deviates from what has been discussed up to now in my class.  The goals of the NPCs do not focus on killing the player; they focus on eliminating ‘threat’.  That threat is typically you, the player.  But the best thing about this implementation is that the threat you present and the range of actions the bots can take give plenty of scope for engaging behaviour.

Eliminating a threat does not necessarily mean killing the player.

The Need to Think Long-Term

One of the key issues that we need to consider when developing intelligent and engaging NPCs for games is their ability to build some kind of strategy.  Traditionally, an NPC implementation will rely on finite state machines or a similar formalism.  These are really useful for a number of practical reasons, notably:

  • We can model explicitly (and often visually) the sort of behaviours we expect our NPC to exhibit.
  • We can often express explicit rules or conditions for when certain changes happen to the NPC state.
  • We can be confident that any behaviour the NPC exhibits can be ‘debugged’, since their procedural nature often exhibits the Markov property: meaning we can always infer how a given state of the bot has arisen, as well as what subsequent states it can find itself in.

However, the problem that emerges here is that behaviours will be neither deliberative nor emergent.  Any bot that uses these approaches is tied to the specific behaviours that the designers had intended.  In order for the more interesting NPCs in F.E.A.R. to work, they need to think long-term.  Rather, they need to plan.

Planning (and scheduling) is a substantial area of Artificial Intelligence research, with work dating back as far as the 1960s.  As my game AI students will soon learn in more detail, it is an abstract approach to problem solving, reliant upon symbolic representations of state conditions, actions and their influence.  Often planning problems (and the systems that solve them) distance themselves from the logistics of how an action is performed.  Instead they focus on what needs to be done, and when.  For those interested, I would encourage reading (Ghallab et al., 2004), which provides a strong introduction to planning and scheduling systems.

The NPCs in F.E.A.R. are not merely reactive in nature, they plan to resolve immediate threats by utilising the environment to full effect.

One benefit of a planning system is that we can build a number of actions that show how to achieve certain effects in the world state.  These actions can dictate who can make these effects come to pass, as well as what facts must be true before they can execute: these are known as preconditions.  In essence, we decouple the goals of an AI from one specific solution.  We can provide a variety of means by which goals can be achieved and allow the planner to search for the best solution.
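
To make this concrete, here is a minimal sketch in Python of the idea: actions carry preconditions and effects, and several actions can achieve the same effect, leaving the choice to the planner.  The action names and world facts are invented for illustration; they are not drawn from F.E.A.R.’s actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # facts that must hold before the action can run
    effects: frozenset        # facts made true once the action completes

# Two different actions that achieve the same effect. The planner, not the
# designer, picks between them based on the current world state.
ATTACK_RANGED = Action("attack_ranged",
                       frozenset({"weapon_loaded", "target_visible"}),
                       frozenset({"threat_eliminated"}))
ATTACK_MELEE = Action("attack_melee",
                      frozenset({"target_in_reach"}),
                      frozenset({"threat_eliminated"}))

def applicable(action, state):
    """An action may only execute when all of its preconditions hold."""
    return action.preconditions <= state

state = frozenset({"weapon_loaded", "target_visible"})
print([a.name for a in (ATTACK_RANGED, ATTACK_MELEE) if applicable(a, state)])
# -> ['attack_ranged']
```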

However, at the time F.E.A.R. was developed, planning was not common in commercial games (Orkin, 2004).  Planning systems are typically applied to real-world problems that require intelligent sequences of actions over longer periods of time.  These problems are often very rich in detail (examples include power station management and the control of autonomous vehicles underwater or in space) and rely both on the optimisations that planning systems make (which I will discuss another time) and on generous time budgets.  Planning systems are often computationally expensive.  While a plan of 20-30 actions in a game might only take a couple of seconds to compute, that still dedicates a fair chunk of overall resource to the planner (Thompson and Levine, 2009), and that is before considering the constraints of a game running in-engine at 30-60 frames per second.  Put simply, you must find a way to optimise this approach should you wish to implement a planning system, lest you suffer a dip in the game’s performance at runtime.

G.O.A.P.: Goal Oriented Action Planning

The approach taken by Monolith, as detailed in (Orkin, 2005), was known as Goal Oriented Action Planning.  The implementation attempted to reduce the potential number of unique states that the planning system would need to manage by generalising the state space using a Finite State Machine.

The three states of the GOAP system in F.E.A.R.

As shown in the figure above, the behaviour of the agent is distilled into three core states within the FSM:

  • Goto – the bot is moving towards a particular physical location.  These locations are often nodes that have been annotated to permit certain actions to take place when near them.
  • Animate – each character in the game has some basic animations that need to be executed in order for it to maintain some level of believability for the player.  This state enforces that bots will run animations that have context within the game world, be that peeking out from cover, opening fire or throwing a grenade.
  • Use Smart Object – as conceded in (Orkin, 2005), this is essentially an animation state.  The only difference is that the animation happens in the context of that object node.  Examples include jumping over a railing or flipping a table on its side to provide cover.

That’s it!  The entire FSM for the NPCs in the game is three states.  Note that these states are very abstract; we do not know which locations are being visited, which animations are played or which smart objects are used.  This is because a search is being conducted that determines how the FSM is navigated.
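
As an illustration, here is a minimal sketch in Python of such a three-state machine.  The state names follow the figure above, but the class layout and the parameter values are my own assumptions, not Monolith’s code.

```python
from enum import Enum, auto

class NPCState(Enum):
    GOTO = auto()              # move towards a planner-chosen location node
    ANIMATE = auto()           # play a planner-chosen, context-relevant animation
    USE_SMART_OBJECT = auto()  # animate within the context of a world object

class NPC:
    def __init__(self):
        self.state = NPCState.GOTO
        self.parameter = None  # grounded value supplied by the current plan step

    def apply_plan_step(self, state, parameter):
        """The plan, rather than a hand-authored event table, drives transitions."""
        self.state = state
        self.parameter = parameter

npc = NPC()
npc.apply_plan_step(NPCState.GOTO, "cover_node_07")       # hypothetical node id
npc.apply_plan_step(NPCState.ANIMATE, "peek_from_cover")  # hypothetical animation
```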

Which state are these NPCs in right now?

Bear in mind that we traditionally utilise events in order to move from one state to another in an FSM.  These events are typically the result of a sensor determining that some boolean flag is now true, or an ‘oracle’ system forcing the bot to change state, which then results in a change of behaviour.  In F.E.A.R., however, each NPC uses sensor data to determine a relevant goal and then builds a plan that navigates the FSM (using grounded values) in order to achieve that goal.

Planning in G.O.A.P.

As mentioned earlier, in planning we require a symbolic representation that models the world.  In the case of GOAP, it is reliant upon the STRIPS planning language.

STRIPS takes its name from the planning system in which it was originally used: the STanford Research Institute Problem Solver, developed in 1971, used a simple representation of actions and goals to create a planning formalism (Fikes and Nilsson, 1971).

An example of a STRIPS domain and problem. This is a very simple problem, where we are trying to move between two rooms.

In the problem shown above, we express the world using two key predicates: at(x), which denotes where we are in the world, and adjacent(x,y), which denotes the relation between one location and another.

We then have an action called move(X,Y), which states that we must be in the original location (X) and that it must be directly connected to another (Y).  If we apply it, we assert that we are now in the second location Y while removing the original fact that we were in location X.

The top of this STRIPS instance states that we are in location A and that our goal is location B.  This will result in a plan with a single action: move(A,B).
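
Here is a toy Python encoding of that same two-room problem, a sketch of the STRIPS idea rather than the actual formalism: facts are tuples, and move(X,Y) checks its precondition, adds at(Y) and deletes at(X).  A simple breadth-first search recovers the one-step plan.

```python
from itertools import permutations

ROOMS = ("A", "B")
ADJACENT = {("A", "B"), ("B", "A")}

def move(state, x, y):
    """Apply move(x, y) if its preconditions hold; otherwise return None."""
    if ("at", x) in state and (x, y) in ADJACENT:
        return (state - {("at", x)}) | {("at", y)}  # delete at(x), add at(y)
    return None

def plan(state, goal):
    """Breadth-first forward search over states; fine for a toy problem."""
    frontier = [(state, [])]
    visited = {state}
    while frontier:
        current, steps = frontier.pop(0)
        if goal <= current:  # all goal facts satisfied
            return steps
        for x, y in permutations(ROOMS, 2):
            successor = move(current, x, y)
            if successor is not None and successor not in visited:
                visited.add(successor)
                frontier.append((successor, steps + [f"move({x},{y})"]))
    return None

print(plan(frozenset({("at", "A")}), frozenset({("at", "B")})))
# -> ['move(A,B)']
```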

In F.E.A.R., actions and information about the world are modelled in a manner akin to STRIPS, with goals determined at runtime and searches conducted to find the list of actions that dictates what to do.

As mentioned before, planning allows us to decouple actions from goals, but it also means we can enforce who can execute actions using preconditions.  A range of actions can be created that are unique to different types of characters in-game, meaning that more emergent behaviour can arise.

Some Key Design Decisions

Of course, it isn’t quite as straightforward as that; design decisions were made under the hood in order for all of this to work.  Some of these raise interesting questions from an AI R&D perspective.  I break down some of them below:

The Use of A* Search

It’s well documented that the search algorithm deployed was A*, and to a certain extent this seems absolutely fine, given that A* has proven itself to be a fairly competent search algorithm.  As documented in (Orkin, 2006), it was implemented because the team wanted to measure action costs, which requires an informed search algorithm.  However, that wasn’t really possible straight away, bringing me to my next point…
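
To illustrate what informed plan search looks like, here is a sketch of A* running over symbolic world states, with per-action costs and a simple heuristic that counts unsatisfied goal facts.  All action names, costs and facts are illustrative assumptions, not values from the game.

```python
import heapq
from itertools import count

# (name, cost, preconditions, add effects, delete effects)
ACTIONS = [
    ("draw_weapon", 1.0, frozenset(), frozenset({"weapon_armed"}), frozenset()),
    ("reload", 2.0, frozenset({"weapon_armed"}), frozenset({"weapon_loaded"}), frozenset()),
    ("attack", 3.0, frozenset({"weapon_loaded", "target_visible"}),
     frozenset({"threat_eliminated"}), frozenset({"weapon_loaded"})),
]

def heuristic(state, goal):
    """Estimate remaining cost as the number of unsatisfied goal facts."""
    return len(goal - state)

def astar_plan(start, goal):
    tie = count()  # tie-breaker so the heap never compares states directly
    frontier = [(heuristic(start, goal), 0.0, next(tie), start, [])]
    best_cost = {start: 0.0}
    while frontier:
        _, g, _, state, steps = heapq.heappop(frontier)
        if goal <= state:
            return steps
        for name, cost, pre, add, delete in ACTIONS:
            if pre <= state:
                successor = (state - delete) | add
                new_cost = g + cost
                if new_cost < best_cost.get(successor, float("inf")):
                    best_cost[successor] = new_cost
                    heapq.heappush(frontier, (new_cost + heuristic(successor, goal),
                                              new_cost, next(tie), successor,
                                              steps + [name]))
    return None

print(astar_plan(frozenset({"target_visible"}), frozenset({"threat_eliminated"})))
# -> ['draw_weapon', 'reload', 'attack']
```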

Is STRIPS the Correct Choice?

STRIPS is historically one of the first real breakthroughs in set-theoretic representations for planning.  It’s also remarkably outdated, with a number of languages succeeding it.  Nowadays the Planning Domain Definition Language (PDDL) is effectively the standard representation used in planning research.  PDDL 2.1, released in 2002, provides the option of modelling action costs.  In addition, it can manage resources through its numeric fluents, meaning we can model the likes of energy consumption as a stored value that changes over time.  For more information, I’d check out (Fox and Long, 2003).

Now this strikes me as an odd decision, given that planning systems such as Metric-FF can plan in PDDL 2.1 with ease.  In addition, dedicated planners employ more intelligent heuristics within their search, such as the Relaxed Planning Graph (RPG) heuristic, which would also address a point raised in (Orkin, 2006) about removing the delete effects of actions.

Though reading the published research papers, notably the discussion of how the game state and available planning actions are managed, it doesn’t sound like STRIPS was actually implemented.  Rather, certain of its ideas were adopted and then employed in a fashion that suited the development team.

Memory Management

Arguably the key issue that would perhaps prevent some of the implementations above is the need to keep memory consumption to a minimum.  The team at Monolith sought to keep memory costs down through a combination of storing hashtables of actions indexed by their effects and a blackboard that collected all data globally relevant to the NPCs.  Even then, garbage collection had to be managed thoroughly to ensure that it did not become excessive (Orkin, 2005).
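
The effect-indexed hashtable is easy to picture: the sketch below (with invented action names) builds the index once so that a regressive, goal-to-start search can look up which actions achieve an outstanding goal fact in constant time, rather than scanning the full action set.

```python
from collections import defaultdict

ACTIONS = {
    "draw_weapon": {"preconditions": set(),             "effects": {"weapon_armed"}},
    "reload":      {"preconditions": {"weapon_armed"},  "effects": {"weapon_loaded"}},
    "attack":      {"preconditions": {"weapon_loaded"}, "effects": {"threat_eliminated"}},
}

# Build the effect -> actions hashtable once, up front.
actions_by_effect = defaultdict(list)
for name, spec in ACTIONS.items():
    for effect in spec["effects"]:
        actions_by_effect[effect].append(name)

# When regressing from an unsatisfied goal fact, candidates are a single lookup.
print(actions_by_effect["threat_eliminated"])  # -> ['attack']
```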

Another issue mentioned in (Orkin, 2005) was the need to prune the search tree once established, to prevent bots from inadvertently searching for ‘invalid’ solutions, by providing “context preconditions”: these would prevent certain actions from being explored, thus optimising the search.  Once again the question is raised of whether using an actual planning system and formalism would have proven more cost-effective, given the design considerations needed to generate what are, in reality, very short plans.

I imagine that the need to manage the AI knowledge base of F.E.A.R. helped drive the minimum memory requirement to 512MB, which puts it in the same category as Quake 4 and Battlefield 2, despite being a game that focussed on small skirmishes and seldom moved into big combat scenes.

Dynamic Goal Assignment

Goals are needed for a planning agent to plan.  Goals are typically assigned directly, as seen earlier in the STRIPS example.  In this instance, each character type was assigned a number of different goals it could satisfy and actions it could commit to.

A collection of goals and actions assigned to a soldier. Images taken from the GDC 06 presentation by Jeff Orkin.

Goals would be assigned to the characters by continually re-evaluating their priority based on information received by sensors at runtime.  This could result in a quick change in priority mid-gameplay should the player walk into a room or throw a grenade.
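
A sketch of how such continual re-evaluation might look: each goal exposes a scoring function over the NPC’s working memory, sensors update that memory, and the highest-scoring goal is handed to the planner.  The goal names and scores here are illustrative assumptions.

```python
def score_kill_enemy(memory):
    return 0.8 if memory.get("enemy_visible") else 0.0

def score_dodge_grenade(memory):
    return 1.0 if memory.get("grenade_nearby") else 0.0  # survival comes first

def score_patrol(memory):
    return 0.1  # low-priority default behaviour

GOALS = {
    "kill_enemy": score_kill_enemy,
    "dodge_grenade": score_dodge_grenade,
    "patrol": score_patrol,
}

def select_goal(memory):
    """Re-evaluated continually: a new sensor reading can flip the result."""
    return max(GOALS, key=lambda name: GOALS[name](memory))

print(select_goal({"enemy_visible": True}))                          # -> kill_enemy
print(select_goal({"enemy_visible": True, "grenade_nearby": True}))  # -> dodge_grenade
```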

Basic and Squad Behaviour

To the credit of the developers, the behaviour (as shown in the video footage accompanying this article) is fast, fluid and intelligent.  It is rare for an NPC to make a stupid mistake that opens them up to a quick kill.  Naturally there are moments when the NPCs will expose themselves, but this is the reality of creating enemies in a first-person shooter: they’re designed to be killed!

The real benefit is that they are quick to react, making interesting use of the environment and keen to counter-attack if the opportunity presents itself.

Also you will note that the agents appear coordinated: they are not prone to falling over one another or getting in each other’s way.  This squad behaviour is largely built upon what already works in the GOAP approach.  However, it still required some further work to make this happen.

As detailed in (Orkin, 2006), the game relies on a coordinator that ensures NPCs do not cluster too close together or spread too far apart on the game map.  These simple behaviours assign goals to the bots that look like coordinated squad behaviour, but rely on the established AI framework already discussed.  Having found available NPCs to participate, a number of behaviours can be executed (a minimal sketch of this assignment process follows the list):

  • Get-To-Cover: ensures all squad members not currently in cover move to it.
  • Advance-Cover: moves squad members to cover that is closer to the threat.
  • Orderly-Advance: moves an NPC along firing lines while covered by a teammate.
  • Search: pairs of NPCs systematically search the local area.
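
Below is a minimal sketch of such a coordinator, assuming a simple squad-member record with invented fields and thresholds.  It does not micromanage: it merely assigns a goal (get-to-cover, advance-cover) to each available squad member, and the member’s own planner and goal prioritisation take it from there.

```python
COVER_ADVANCE_RANGE = 10.0  # illustrative distance threshold

def coordinate(squad):
    for npc in squad:
        if not npc["available"]:
            continue  # e.g. currently fleeing a grenade: leave them alone
        if not npc["in_cover"]:
            npc["assigned_goal"] = "get_to_cover"
        elif npc["cover_distance_to_threat"] > COVER_ADVANCE_RANGE:
            npc["assigned_goal"] = "advance_cover"

squad = [
    {"available": True,  "in_cover": False, "cover_distance_to_threat": 4.0,  "assigned_goal": None},
    {"available": True,  "in_cover": True,  "cover_distance_to_threat": 25.0, "assigned_goal": None},
    {"available": False, "in_cover": True,  "cover_distance_to_threat": 3.0,  "assigned_goal": None},
]
coordinate(squad)
print([npc["assigned_goal"] for npc in squad])
# -> ['get_to_cover', 'advance_cover', None]
```
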
An example of failed squad behaviour from the GDC 06 talk. The bot is attempting to move to closer cover to push up against the player. However, the player threw a grenade and the bot flees instead, since failing the squad behaviour is less important than surviving.

Typically the squad coordinator does not need to provide any further information beyond the latest goal or task assigned to each NPC, given that each agent has its own sensory information to help it make decisions.

However, it’s important to note that squad behaviour is not always the character’s top priority, bearing in mind that goal prioritisation is still active.  As such, should executing a squad behaviour run the risk of death, the NPC has the option to ignore it and instead prioritise another goal that eliminates immediate threats (such as being in the player’s firing line or a grenade landing nearby).

Beyond this, there are no complex squad behaviours.  Players will recognise that at times the NPCs will appear to coordinate a flank or retreat.  However, this is actually the result of two separate NPCs each committing to a simple squad behaviour, which in combination looks far more complicated.

An example of emergent complex behaviour, where two separate NPCs attempt to use a simple squad behaviour (advance-to-cover) and appear to use a pincer movement. Image from Orkin’s GDC 06 presentation.

The other surprising squad element is the implied coordination due to communication.  It can be heard in gameplay that the soldiers will shout exchanges to one another, suggesting that their comrades should take cover or take an opportunity to flank the player.

In reality, the squad coordinator is overseeing the actions being committed as part of the squad behaviour.  It will find the appropriate dialogue and assign it to the correct actors (Orkin, 2006).  This means that an action that is being executed will then result in dialogue being assigned to local actors to help simulate the idea of coordination.

The Impact of G.O.A.P.

The G.O.A.P. approach created a small legacy within a relatively short period of time.  As summarised in (Champandard, 2013), this technique was then employed in a number of combat-driven games where it proved highly valuable until around late 2012.  Naturally the sequels in the series, of which there were two, utilised the same technology.  Whether it was expanded upon further has – to my knowledge – not been documented.  However, it has since been used in games such as Just Cause 2, S.T.A.L.K.E.R. and Deus Ex: Human Revolution.

One game that also used G.O.A.P. was High Moon Studios’ Transformers: War for Cybertron.  However, it has since been documented that there were issues with maintaining steady performance using G.O.A.P., leading to the adoption of a different planning algorithm in the sequel, Fall of Cybertron.  This is the focus of a subsequent article on AI in games here on the site.

Transformers WFC: One of the numerous titles that subsequently adopted GOAP. However, it wasn’t as straightforward as it sounds.

Conclusion

The work by Orkin and the team at Monolith had a big impact on game AI.  While the resulting implementation proved popular with game developers, permeating through the community to a range of different titles, the actual AI science behind it was rather straightforward.  GOAP took existing concepts prominent in the industry and added influences from planning research circles to create something fresh and exciting.  In time the practicality of the approach has dissipated as the complexity of games and of NPC agency has expanded, with recent games looking to adopt technologies from the planning research community.  However, we must not overlook the achievements of GOAP and F.E.A.R., given that they provided a strong example of the value that AI can bring to gameplay.

References

  • (Fikes, R. E. and Nilsson, N. J., 1971) STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving. Artificial Intelligence 2(3-4): 189-208.
  • (Fox, M. and Long, D., 2003) PDDL2.1: An Extension to PDDL for Expressing Temporal Planning Domains. Journal of Artificial Intelligence Research (JAIR) 20: 61-124.
  • (Ghallab, M., Nau, D. S. and Traverso, P., 2004) Automated Planning: Theory and Practice. Morgan Kaufmann. ISBN 1-55860-856-7.
  • (Orkin, J., 2004) Symbolic Representation of Game World State: Toward Real-Time Planning in Games. AAAI Challenges in Game AI Workshop 2004.
  • (Orkin, J., 2005) Agent Architecture Considerations for Real-Time Planning in Games. Proceedings of the 2005 Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE ’05).
  • (Orkin, J., 2006) Three States and a Plan: The AI of F.E.A.R. Proceedings of the 2006 Game Developers Conference (GDC ’06).
  • (Thompson, T. and Levine, J., 2009) Realtime Execution of Automated Plans Using Evolutionary Robotics. IEEE Symposium on Computational Intelligence and Games (CIG), September 2009.
