While this article is focussed largely on the adoption of planning technologies within High Moon Studios’ Transformers series, there is a particular emphasis on the Goal Oriented Action Planning (GOAP) system found within the Monolith title F.E.A.R. This article makes numerous references to GOAP and its applications within F.E.A.R. As such, if you are not familiar with this work, we would advise consulting our existing written article on the GOAP system, alongside the video summary and recorded lecture we already provide on AI and Games.
The challenge of creating interesting, adaptive and cunning non-player characters (NPCs) for combat games is ongoing. While technology continues to improve in its ambition and scale, so too do the aspirations of the designers who seek to adopt it. In this article, we consider the continued advancement of planning technologies adopted within games. Previously, we have looked at the ground-breaking developments used within both the Halo and F.E.A.R. series for NPCs in combat games. In this instance we are following up on the approach used in F.E.A.R. – the Goal Oriented Action Planning (GOAP) framework – and how it not only has been adopted in subsequent games, but led to further exposure of planning technologies in combat games. We begin by discussing the impact that GOAP had on planning in games, before focussing on the adoption and subsequent transition of technology in the Transformers series by High Moon Studios.
Recap: Goal Oriented Action Planning
As mentioned in our previous article discussing the AI of F.E.A.R., back in 2005 automated planning technologies had not yet become established in games. The system devised by the team at Monolith, Goal Oriented Action Planning (GOAP), was a major step towards embracing tried-and-tested methods of academic AI research for industry application. One of the key elements of the design of GOAP was that the goals an NPC could accomplish – which typically focussed on resolving a threat the player had introduced – were decoupled from the action set. This means that, depending upon the context in which a goal is defined, different plans could be devised to solve the same problem. The plans crafted would be executed through a three-state finite state machine (FSM) which dictated the movement and animations triggered within the NPC.
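To make that decoupling concrete, here is a minimal, illustrative GOAP-style planner. The action names and world-state flags are invented for the example, and a breadth-first search stands in for the A*-with-costs search Monolith actually used, purely to keep the sketch short:

```python
from collections import deque

# Actions declare symbolic preconditions and effects; the planner searches
# for any sequence that transforms the current world state into one that
# satisfies the goal. Names here are illustrative, not from F.E.A.R.
ACTIONS = {
    "DrawWeapon":   {"pre": {"weapon_armed": False}, "eff": {"weapon_armed": True}},
    "MoveToTarget": {"pre": {"at_target": False},    "eff": {"at_target": True}},
    "Attack":       {"pre": {"weapon_armed": True, "at_target": True},
                     "eff": {"target_dead": True}},
}

def satisfies(state, conditions):
    return all(state.get(k, False) == v for k, v in conditions.items())

def plan(state, goal):
    """Breadth-first search over world states; returns a list of action names."""
    frontier = deque([(dict(state), [])])
    seen = {frozenset(state.items())}
    while frontier:
        current, steps = frontier.popleft()
        if satisfies(current, goal):
            return steps
        for name, action in ACTIONS.items():
            if satisfies(current, action["pre"]):
                nxt = {**current, **action["eff"]}
                key = frozenset(nxt.items())
                if key not in seen:
                    seen.add(key)
                    frontier.append((nxt, steps + [name]))
    return None  # goal unreachable with the available actions

world = {"weapon_armed": False, "at_target": False, "target_dead": False}
print(plan(world, {"target_dead": True}))  # → ['DrawWeapon', 'MoveToTarget', 'Attack']
```

Because goals and actions are independent, changing the goal or removing an action yields a different plan without touching any other part of the definition.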
Furthermore, in the Monolith implementation, the goals and actions could be added to each character using in-house tools, allowing the same AI framework to be adopted for all characters in the game: ranging from brutes and stealth assassins to the rats that scurried around the environment. This ultimately led to emergent gameplay when bots were given goals coordinated by a squad management system: complex squad behaviours emerged in-game that were never hard-coded by the designers, arising instead as a consequence of the design.
The critical success, and indeed the viability, of the approach led to GOAP being applied in a number of games afterwards. High-profile titles such as S.T.A.L.K.E.R. and Just Cause 2 emerged not long after F.E.A.R.’s success, and the method has continued to prove popular, with titles such as Tomb Raider (2013) and Middle-earth: Shadow of Mordor adopting it nearly 10 years later (Conway et al., 2015). However, while it has proven successful in many instances, there are also interesting stories of where adoption did not go as smoothly. With that in mind, we cast our eyes towards a particular GOAP implementation and the challenges it raised.
Transformers: War for Cybertron
Transformers: War for Cybertron (WfC) is a third-person shooter developed by High Moon Studios and released in 2010 on PC, PlayStation 3 and Xbox 360. Unlike many games of that period based on the Transformers license, it is influenced neither by the Hollywood film series by Michael Bay nor by the cartoon series. Instead, it carves its own narrative largely established from the lore of the franchise: set millions of years prior to their eventual migration to Earth, the Autobots and Decepticons fight for supremacy on the war-torn planet of Cybertron.
Adopting GOAP for War for Cybertron
Having recently completed The Bourne Conspiracy for PS3 and Xbox 360, High Moon were looking at alternatives to their existing AI implementation. As discussed in (Champandard and Humphreys, 2012), their NPCs relied upon Hierarchical Finite State Machines (HFSMs) to implement their behaviours. It seems that the team at High Moon faced a number of issues with their HFSM implementation, namely:
- The FSMs became too large, which had several knock-on effects:
  - They became increasingly brittle and would break easily.
  - They became increasingly difficult to debug.
These issues led to the migration towards the GOAP system, given the rising popularity of the approach after its unveiling at the Game Developers Conference (Orkin, 2006). Having adopted the GOAP method, the High Moon AI team found it far more practical than the HFSM method. This was driven by the key benefit mentioned earlier: they did not need to worry about how actions interact. Instead, they could define each action, complete with preconditions and effects, and depending on the goals, the action selection would always be a reflection of the agent’s needs.
While largely inspired by the original implementation by Monolith for F.E.A.R., the team built their own version from scratch in Unreal Engine 3. The use of Kismet within UE3 allowed goals to be dynamically assigned to agents based upon events taking place in the world state. The planning system would then generate the actions required to satisfy those goals and pass them to a ‘Plan Runner’ module, which ensured that the agent executed them correctly. The resulting implementation was an improvement over previous work, and Troy Humphreys gives no suggestion that the resulting behaviours failed to satisfy their needs. However, a number of implementation issues arose while adopting GOAP.
The challenges faced can be broken down largely into technical issues and design problems, with the latter proving the bigger issue to overcome. One problem with the implementation built for War for Cybertron was an inability to craft action costs on a per-character basis: while the various Autobots and Decepticons could be given different sets of permitted actions, the perceived value of those actions could not vary between characters. This would have been a valuable feature, given that it would allow for NPCs whose behaviour is radically different from one another in how they approach problems. Furthermore, the calculation of action preconditions during planning was found to be rather expensive to compute, resulting in dips in performance as multiple NPCs tried to plan their way out of scenarios. While the action space could be trimmed on a per-class basis, the number of unique variables grew, with over 50 for each NPC in some of the earlier builds. The final number of states is not given in (Champandard and Humphreys, 2012), but earlier implementations of the AI system pushed 500 unique states to search.
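To illustrate the missing feature, here is a hypothetical sketch of per-character action costs layered onto a GOAP-style cheapest-first search. None of the action names, classes or cost values come from High Moon’s codebase; they simply show how the same action set could yield radically different plans per character class:

```python
import heapq

# Every character shares one action set, but a per-class cost table biases
# which valid plan the search prefers. All names here are illustrative.
ACTIONS = {
    "ChargeMelee":    {"pre": {"enemy_seen": True}, "eff": {"enemy_down": True}},
    "SnipeFromCover": {"pre": {"enemy_seen": True, "in_cover": True},
                       "eff": {"enemy_down": True}},
    "TakeCover":      {"pre": {}, "eff": {"in_cover": True}},
}

COSTS = {  # same actions, different perceived value per character class
    "brawler": {"ChargeMelee": 1, "SnipeFromCover": 8, "TakeCover": 2},
    "sniper":  {"ChargeMelee": 9, "SnipeFromCover": 1, "TakeCover": 1},
}

def satisfies(state, conds):
    return all(state.get(k, False) == v for k, v in conds.items())

def plan(state, goal, char_class):
    """Uniform-cost search: cheapest valid plan for this character class."""
    costs = COSTS[char_class]
    queue, tie, seen = [(0, 0, dict(state), [])], 0, set()
    while queue:
        cost, _, cur, steps = heapq.heappop(queue)
        key = frozenset(cur.items())
        if key in seen:
            continue
        seen.add(key)
        if satisfies(cur, goal):
            return steps
        for name, act in ACTIONS.items():
            if satisfies(cur, act["pre"]):
                tie += 1  # tie-breaker so the heap never compares dicts
                heapq.heappush(queue, (cost + costs[name], tie,
                                       {**cur, **act["eff"]}, steps + [name]))
    return None

start = {"enemy_seen": True, "in_cover": False, "enemy_down": False}
print(plan(start, {"enemy_down": True}, "brawler"))  # charges straight in
print(plan(start, {"enemy_down": True}, "sniper"))   # takes cover, then snipes
```

The same goal and action set produce distinct behaviours purely through the cost table, which is the kind of per-character variation the War for Cybertron implementation could not express.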
Finally, there is also the issue of how many planning levels are involved. War for Cybertron had three, including a top-level planner for deciding what in general the agent was trying to do (moving to some location, moving to cover, attacking, etc.), and a second-tier planner focussed on how best to traverse any paths the character would have to navigate. This became increasingly complex when dealing with aerial levels and characters, given that a character may run through a building, jump, transform into a jet and fly through debris before transforming once again upon reaching its destination.
All of these issues built upon one another to the point that performance became a real concern, with the time taken to identify and search through the state space for each character proving too demanding. To rectify this, the search space for all characters was pre-generated for a given level or segment. While this did result in more consistent performance, there were still significant technical problems in managing the time-slicing and priority queues for each agent.
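The time-slicing described above might be sketched as a priority queue of planning requests serviced under a per-frame budget. This is an assumption-laden illustration rather than High Moon’s implementation; the budget, NPC names and priority values are all invented:

```python
import heapq
from itertools import count

class PlanScheduler:
    """Services at most `budget` planning requests per frame, by priority."""
    def __init__(self, budget_per_frame):
        self.budget = budget_per_frame
        self.queue = []
        self.order = count()  # tie-breaker so equal priorities stay FIFO

    def request(self, npc, priority):
        # lower number = more urgent
        heapq.heappush(self.queue, (priority, next(self.order), npc))

    def run_frame(self):
        """Plan for the most urgent NPCs this frame; the rest carry over."""
        served = []
        for _ in range(min(self.budget, len(self.queue))):
            _, _, npc = heapq.heappop(self.queue)
            served.append(npc)  # a real system would invoke the planner here
        return served

sched = PlanScheduler(budget_per_frame=2)
sched.request("grunt_a", priority=5)
sched.request("sniper", priority=1)
sched.request("grunt_b", priority=5)
print(sched.run_frame())  # → ['sniper', 'grunt_a']
print(sched.run_frame())  # → ['grunt_b']
```

The difficulty High Moon reports lies less in the queue itself than in tuning the budget and priorities so that no agent visibly stalls while waiting its turn.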
Beyond the more technical issues were those presented to the design team. The design issues were driven not by the implementation of the GOAP method, but rather by how that method of planning did not align with the designers’ ambitions. The designers were keen to build gameplay sequences where the AI would yield interesting behaviours for players to come up against. However, the nature of the GOAP planning process is to find any valid sequence of actions that satisfies a particular goal. This can be at odds with a designer’s intentions, given that you may still wish to exert some control over the final behaviours the AI crafts. As a result, the level designers were prone to ignoring the AI system entirely and relied on hand-crafting certain sequences to their liking.
With these issues weighing down on the development of the final product, we see the impact of this process reflected in the subsequent changes that followed.
Transformers: Fall of Cybertron
Fall of Cybertron was released in 2012, acting in many respects as a logical progression of its predecessor, with improved graphics, narrative and multiplayer gameplay. However, what is most important here is an improvement in NPC AI brought on by the factors previously discussed. Given the difficulties faced towards the end of development, the team opted to move away from the use of GOAP. As mentioned previously, the designers wished to avoid the issues of a ‘bottom-up’ approach to crafting behaviour, where the individual actions dictate the behaviour. Instead, the emphasis was placed on a more traditional ‘top-down’ method, where you typically design behaviours at a higher level, then break down the individual elements that comprise them (Champandard and Humphreys, 2012).
There were a number of design changes that alleviated the issues the planner was forced to deal with. Firstly, the planner was focussed solely on the main action selection, with many of the more reactive behaviours covered by different systems. The sensor system was given a major overhaul and was designed to allow features of the environment that are of interest to be found at runtime. This also allowed for faster computation of preconditions, which was previously an issue for the planning system. In addition, while game elements such as the navigation mesh were fixed, the ‘useful’ or tactical features of the world – such as cover – were identified at runtime by conducting ray-trace checks within proximity of players. One further change that reduced the planner’s workload was the removal of the buddy AI characters that followed the player throughout War for Cybertron, opting instead for more scripted sequences with friendly NPCs. Finally, a blackboard was introduced to allow shared knowledge of the environment to come into play.
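A blackboard in this context can be as simple as a shared key-value store that squad members write observations to and read from, rather than each re-sensing the world. The sketch below is a generic illustration with invented keys and NPC names, not the system used in Fall of Cybertron:

```python
class Blackboard:
    """Shared store of facts posted by squad members."""
    def __init__(self):
        self._facts = {}

    def post(self, key, value, author):
        # later posts overwrite earlier ones; we also remember who posted
        self._facts[key] = (value, author)

    def read(self, key, default=None):
        entry = self._facts.get(key)
        return entry[0] if entry else default

squad_board = Blackboard()
squad_board.post("player_last_seen", (12.0, 4.5), author="scout")

# A second NPC that never saw the player can still act on the sighting.
print(squad_board.read("player_last_seen"))  # → (12.0, 4.5)
```

Keeping knowledge in one place like this also means precondition checks can read cached facts instead of repeating expensive world queries, which matches the performance motivation described above.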
However, the real innovation lies in the heart of the planning system, as GOAP was replaced with HTN planning.
Hierarchical Task Network Planning (HTN)
Hierarchical Task Network (HTN) planning is a different approach from what we have seen with basic STRIPS: the emphasis is on forward decomposition of complex ‘tasks’ into a series of smaller, more definable actions. The work itself dates back to the 1970s, as research that deviated from the linear planning approach that STRIPS adopted: planning systems that dictate a total order over the actions in the final plan. This idea emerged due to the issues that can occur when there are multiple sub-problems within the main problem itself. The order in which these sub-goals need to be completed might not be readily apparent in the initial problem, but must be considered when satisfying the problem dictated to the system. A simple example of this can be found in the ‘Sussman Anomaly’ (Sussman, 1975), shown in the image below: a planning problem for the stacking of cubes in which a planning system must move each cube individually and place them atop one another: A upon B upon C. The anomaly arises given that early planning approaches would first attempt to satisfy the ‘A upon B’ condition by removing C from A and then placing A upon B. In doing this, the planner puts itself in an awkward position: it has satisfied the first goal (A upon B), but in order to resolve the second goal (B upon C) it must undo the work it has achieved. Conversely, should it pursue ‘B upon C’ first, A remains trapped under C (now with B on top), and a similar conundrum arises. In this instance, there is an implicit requirement to interleave the execution of the two goals in order to stack the cubes correctly.
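The anomaly can be traced by hand in a few lines. The moves below are hard-coded rather than planned, purely to show why tackling the two goals one at a time forces the planner to undo its own work:

```python
# A tiny blocks-world trace of the Sussman Anomaly. A state maps each
# block to what it sits on; a block may only move if nothing is on top.
def move(state, block, dest):
    assert block not in state.values(), f"{block} has something on top"
    return {**state, block: dest}

start = {"A": "table", "B": "table", "C": "A"}   # C sits on A

# Naive strategy: satisfy 'A on B' first...
s = move(start, "C", "table")   # clear A
s = move(s, "A", "B")           # goal 1 done: A on B
# ...but 'B on C' now requires unstacking A again, undoing goal 1:
s = move(s, "A", "table")
s = move(s, "B", "C")
s = move(s, "A", "B")

assert s == {"A": "B", "B": "C", "C": "table"}   # A on B on C, as required
print("solved in 5 moves; an interleaved plan needs only 3")
```

An interleaved plan (move C to the table, stack B on C, stack A on B) solves the problem in three moves, which is exactly the ordering flexibility partial-order planning set out to provide.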
The issues raised by the Sussman Anomaly helped pioneer research in the field of partial-order planning, where the system leaves the decision of how to sequence actions as open as possible, typically with the exception of those subgoals where a strict ordering is provided. The suite of HTN planners is fairly broad, with Nonlin (Tate, 1976) and O-Plan (Currie and Tate, 1991) at the University of Edinburgh being highly influential. HTN planning has since been popularised in the games industry courtesy of work inspired by the SHOP2 planner (Simple Hierarchical Ordered Planner), headed by Dana Nau (Nau et al., 2003). SHOP2 was first adopted for NPC AI in Guerrilla Games’ Killzone 2, leading to interest from High Moon and its subsequent adoption in Fall of Cybertron. The core principle of HTN planning is that, in addition to the traditional types of actions we saw in GOAP – referred to as ‘primitive tasks’ in HTN – we can create more complex ‘macro-actions’, called ‘compound tasks’, which dictate a sequence of individual tasks to achieve some goal. We can then search over tasks that are more abstract than individual planning actions. HTNs are also rather flexible, since a task can define itself using existing tasks, allowing richer and more complex behaviours to be built in the planning domain model. The images below show not only a variety of primitive and compound tasks, but also a full HTN plan comprised of tasks given specific parameters for execution.
The final planning process relies upon forward decomposition of these tasks into concrete actions that can satisfy the goal state. One of the benefits of this for designers is that, as shown in the diagram above, the result looks remarkably like a behaviour tree! These trees are special in that they can be generated for particular contexts at runtime.
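A compact sketch of forward decomposition might look as follows. Compound tasks list alternative methods, each with a precondition and subtasks; primitive tasks append directly to the plan. All task names and the minimal effect model are invented for illustration, and a real HTN planner would also roll back state changes when a method fails, which this sketch omits:

```python
PRIMITIVE = {"MoveToCover", "Reload", "FireAtEnemy"}

COMPOUND = {
    "AttackEnemy": [  # methods tried in order; first whose precondition holds wins
        {"pre": lambda s: s["has_ammo"] and s["cover_nearby"],
         "subtasks": ["MoveToCover", "FireAtEnemy"]},
        {"pre": lambda s: s["has_ammo"],
         "subtasks": ["FireAtEnemy"]},
        {"pre": lambda s: True,
         "subtasks": ["Reload", "AttackEnemy"]},  # tasks may recurse
    ],
}

EFFECTS = {"Reload": ("has_ammo", True)}  # minimal effect model for the sketch

def decompose(task, state, plan):
    """Forward-decompose `task`, appending primitive tasks to `plan`."""
    if task in PRIMITIVE:
        plan.append(task)
        if task in EFFECTS:
            key, value = EFFECTS[task]
            state[key] = value
        return True
    for method in COMPOUND[task]:
        if method["pre"](state):
            return all(decompose(sub, state, plan)
                       for sub in method["subtasks"])
    return False  # no applicable method

state = {"has_ammo": False, "cover_nearby": True}
plan = []
decompose("AttackEnemy", state, plan)
print(plan)  # → ['Reload', 'MoveToCover', 'FireAtEnemy']
```

The recursion is what gives the result its behaviour-tree shape: the root compound task expands through selected methods down to a linear sequence of primitives for this particular context.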
Planning in Transformers
The images below highlight some of the tasks established within Fall of Cybertron’s own implementation of HTN planning. The first example highlights a compound task designed to kill an enemy. However, it provides some variety to gameplay, with six unique ways to achieve the same task. Each method has a series of preconditions that dictate what is required for it to be selected, followed by the subtasks that are then executed. Meanwhile, the second image highlights the primitive tasks used in the game: showing not only the preconditions but the actual behaviour the character should execute and the effects it has on variables pertinent to the world state.
As Humphreys explains in (Champandard and Humphreys, 2012), this was relatively easy to transfer across from the GOAP implementation, given that GOAP actions can be adapted easily into primitive tasks. Interestingly, it is mentioned that while squad-like behaviours exist, there are no specific implementations or HTN domains focussed upon them. Instead, NPCs would be grouped together and allowed to share information (such as the location of the player), and each would select a task it wanted to execute. This selection would be shared among the others so that they would not attempt to do the same thing.
One of the key benefits of the transition from GOAP to the HTN implementation is performance. (Champandard and Jacopin, 2014) and (Jacopin, 2014) assess the performance of the HTN planner in Transformers: Fall of Cybertron against that of the HTN planner in Killzone 3 and the GOAP system in F.E.A.R. This analysis focussed on key metrics such as plan lengths, the number of plans generated by the system per second and the absolute ‘need’ for planning, given that plans would need to be reformulated due to changes in the game world.
The performance improvements were substantial, with the HTN implementation in Fall of Cybertron able to generate up to four plans a second versus the typical 0.5 plans per second provided by GOAP. What makes this even more startling is that plans in F.E.A.R. were often relatively short, with only 1.2% of plans carrying three actions or more and the largest plan length being 4 actions. By contrast, Transformers had characters generating plans of up to 12 actions (though Jacopin argues that the limit is in fact 11, since a redundant action is occasionally added). These longer plans were often utilised by sniper characters as they moved into position to line up a shot, and were the exception rather than the rule, with less than 1% of all plans exceeding 5 actions.
The shorter plan lengths in F.E.A.R. are also to blame for the repeated ‘need’ for planning in the game, with the planner needing to multi-task between agents more frequently than in Transformers. It also highlights the benefits of the changes to the domain models and perhaps a more elegant design, since Transformers uses over 130 unique actions in its plans, compared to F.E.A.R., which used only 26 of the 60 actions it had available.
It is important to focus on the quality of plans and their lengths rather than the time taken to generate them, given that the requirements of these games, and indeed the systems that could run them, changed significantly in the five years after F.E.A.R.’s release. What this tells us is that the GOAP implementation, while fast and practical, was not planning to any rich level of detail; instead it would continually re-plan in a reactive manner. Perhaps this is due to the nature of the gameplay and the need for fast movement. However, it also suggests that more coordinated and detailed strategies were not being generated. Whether this was because the designers did not intend for it, or because the GOAP system was incapable of doing so, is open to debate.
While GOAP continues to be a success for game AI applications, the rise of HTN planning as a valid alternative highlights the desire for continued innovation in game development. Both solve the same problems, albeit in different ways, and much of the emphasis on using one technique over another can easily lie in the hands of designers and their interests. HTN planning has continued to prove popular within game development, with titles such as Max Payne 3, Killzone 3 and Empire: Total War adopting the approach. Interestingly, the Dark Souls franchise is reported to have used HTN, which seems at odds with the rather predictable, albeit demanding, AI you face in the game. It is expected that this will continue into the future, with more technologies lifted directly from the AI community being considered as the limitations of hardware are slowly relaxed for the game development process.
Special thanks to Rory Driscoll for his comments on the original version of this article from 2014.
Champandard, A. and Jacopin, E., (2014) The Evolution of Planning Applications and Algorithms in AAA Games, AIGameDev.com
Champandard, A. and Humphreys, T., (2012) Planning for the Fall of Cybertron: AI in Transformers AIGameDev.com
Conway, C., Higley, P. and Jacopin E. (2015) Goal-Oriented Action Planning: Ten Years Old and No Fear! GDC Vault
Currie, K., & Tate, A. (1991). O-Plan: the open planning architecture. Artificial Intelligence, 52(1), 49-86.
Jacopin, E. (2014). Game AI Planning Analytics: The Case of Three First-Person Shooters. In Tenth Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE).
Nau, D.S., Au, T.-C., Ilghami, O., Kuter, U., Murdock, J.W., Wu, D. and Yaman, F. (2003) SHOP2: An HTN Planning System. Journal of Artificial Intelligence Research (JAIR) 20:379–404, Dec. 2003.
Orkin, J., (2006) Three States and a Plan: The AI of F.E.A.R. Proceedings of the 2006 Game Developers Conference (GDC ’06).
Sussman G.J. (1975) A Computer Model of Skill Acquisition Elsevier Science Inc. New York, NY, USA.
Tate, A. (1976) Project Planning Using a Hierarchic Non-linear Planner, D.A.I. Research Report No. 25, August 1976, Department of Artificial Intelligence, University of Edinburgh