What I Learned After 10 Years of AI and Games
Looking back on a decade spent digging into the subject.
While AI and Games has only been on Substack for just over a year now, the YouTube series started out in 2014. And it wouldn’t have been possible without 10 years of support from my audience who have crowdfunded the show via Patreon and now on Substack.
Support the show to have your name in video credits, contribute to future episode topics, watch content in early access and receive exclusive supporters-only content.
March 2nd 2024 marks 10 years since the release of the very first video on AI and Games. A video that attempted to communicate the inner workings of Monolith's 2005 shooter F.E.A.R. to a wider audience. And so began what has been a decade of videos exploring different ideas, different games, different topics, all in a quest to try and create something that is accessible and informative to people both inside and outside the world of research and development of AI and video games.
10 Years.... you really gotta chew that over a couple of times before it sinks in.
So yes, I've been making content on YouTube for 10 years. It's not a sentence I ever expected to utter, and it's one that carries a lot of thoughts and emotions for a variety of reasons. But the question that's the focus of this piece is: what have I learned?
I mean clearly I learned how to make better videos: it's safe to say my more recent releases look a little less shonky than they did back in 2014. But critically: what have I learned about a subject I've dedicated more or less a decade of my life to? My career, whatever you want to call it, has been spent learning as much as I can about how artificial intelligence intersects with games - from the interesting ideas in research that could one day impact the field, to the inner workings of many a popular title.
The conversation around AI intersecting with video games has changed drastically in that timeframe: in 2014 AI research in games was still something of a niche, and super-powered deep learning systems training to beat popular online multiplayer games were nothing more than fantasy. In fact the term 'deep learning' had only become normalised in AI two years prior. Meanwhile, right now in 2024, we see generative AI making its mark on the industry, both for good and ill.
So for this first in a series of articles looking back on the past decade of the YouTube channel, I wanted to reflect on what I've learned about this space - be it in terms of game development, research and much more besides.
A Change in Perspective
While I'll list the things I've learned in a moment, I'd argue the biggest thing I've gained from all this is a shift in perspective.
It's important to give context to what my life looked like when that original video launched: I was just over a year into my first job in academia. I was hired into my first university position in 2012, originally with the intent of teaching computer science while conducting my AI research using games. Within 12 months of being on the job, I was not only teaching on the department's games programming degree, but leading it and responsible for ensuring it was successful. That situation arose because I was increasingly critical of how mismanaged the course was, and I think management wanted to see whether I would sink or swim. So with less than 9 months' experience as a lecturer, I was now in the thick of it.
I was 28 years old at the time, and genuinely wanted to do well by the roughly 140 students in the degree programme that I was now in charge of. It was a responsibility I took very seriously while also being terrified by it.
Critically, I was highly aware of my own limited knowledge of the subjects I taught. Before becoming a lecturer I had worked as a software engineer in the banking sector - my first 'real' job after completing my PhD. A big reason for that was I felt it was important to spend time working as a professional developer and prove to myself that I knew what I was doing. To assume I could teach programming to a group of students without having any industry experience myself was, in my opinion, tremendously arrogant.
And so I was back in this position all over again: I had made silly little games in Java and other tools, but nothing that used industry-standard technologies. So I wound up with a lot of extra-curricular pursuits on top of my day-to-day job, including learning the development tools that were popular at the time and making my own little games in the likes of GameMaker, Microsoft's XNA, Unity and the Unreal Development Kit - but also just trying to gain a broader understanding of the field. Given I was teaching a class on AI for games, I wanted to give more industry context to what was being discussed and ensure what I was teaching was relevant for use in actual studios. And so I started with a topic that I knew very well from my PhD days.
My original video on F.E.A.R. was based on a lecture I used to give on how Goal Oriented Action Planning works in that game. It was a fun lecture, I enjoyed doing it, and given how... dramatically I delivered it, it led to noise complaints from other lecturers - a point of pride to this day. But nonetheless, I felt limited. Having delivered this case study in class two years in a row, I wanted to find other topics to explore, while also finding some way of keeping previous ones accessible. Hence I made the very first video, and while the channel didn't achieve any real popularity until that video about a certain horror game, it was the beginning of a journey to where I am today.
For context: Alien: Isolation is my most famous video, having gained over 1.6 million views since its release in 2016.
The change in perspective has really come from spending more time in the space, an unending desire to learn from my peers, to apply that knowledge, and to find ways to give back to audiences, be they regular gamers or those within the industry. I started out on the outside looking in: an academic trying to teach game development with little knowledge or experience of how it's actually done.
Sitting here now, my academic career is - for now at least - in the rear-view mirror. I've worked on over a dozen game development projects, be it my own, as a contract programmer, or as a consultant helping studios achieve the best they can. I help bring new talks to the AI Summit at GDC. I run professional training programmes for game developers and studios. I've made videos in collaboration with studios who understand and respect my work. I've consulted for some of the biggest firms in the industry and get invited to present at events around the world. While I still sometimes feel like I'm on the outside looking in - given my YouTube work is a sort of weird type of investigative journalism, I guess? - my work life is very much the opposite, and I'm immensely grateful for it.
What started as a passion project - both to support my students and to give me an excuse to dig into how AI is utilised in games - has transformed into something much larger. I mean, there are over 200K people subscribed to this YouTube channel. That's insane! You're listening to me waffle on here, and some of you might even be enjoying it. But as I say, it's had a massive influence on me, my work, and how I now approach the field. And truth be told, it's done immeasurable damage to my Scottish accent, as I go out of my way to try and make sure you understand me.
So anyway, what did I actually learn? Let's dig into it some more.
Making Games is Difficult
I think it's safe to say that making games is difficult. I always knew it must be hard to get a game over the finish line. But now, having shipped games myself and dug into the complexities of large-scale productions, I'm honestly surprised that games ever get released. The thing to appreciate is that every little thing players pick up on is often a much bigger issue in the backend. Decisions get made that can make or break a game, and sometimes the ramifications are not immediately apparent.
I always get annoyed at the 'lazy developer' argument that bounces around social media. I can guarantee you the reality is much more nuanced: a nightmare publisher or game director, restrictive timelines and budgets, tech debt that forces specific decisions or impacts the final product, issues that emerge as the scope of a game either expands without consideration for production realities, or shrinks drastically because time is running out. One simple addition or concession made three years earlier can, come release, somehow actually work and delight the audience, or condemn the game forever.
Game developers, like all humans, differ drastically from one another: be it their background, nationality, race, creed, gender identity, sexual orientation, you name it. They come in all sorts of flavours. Some are great, some not so much. But I've yet to meet any that are lazy. At the risk of using a word I detest when discussing game development - they have passion. People don't work in this space for the money - a point all too relevant now in 2024. Hell, I could've made far more money if I'd stuck it out in the banking industry when I joined it some 15 years ago. People work in games because they want to make great games. But not all games are destined to be great: they are large, unyielding beasts. Even the smallest of projects can become increasingly layered and complex. It takes a herculean effort to get a game out the door without it collapsing on launch - and sometimes that still happens, be it due to unforeseen technical issues or because those who write the pay cheques dictate when a game goes out the door.
And this is a difficult talking point to raise now in 2024, having seen over 15,000 people lose their jobs in the industry in the past year or so. It's been harrowing to watch many a person I know, people whose work I respect, and even my closest friends, lose their jobs at a time of corporate unease. Some of the most talented people I've ever met, out on their arse courtesy of bad decisions made by executives over the last couple of years, as games saw a boost courtesy of the COVID lockdowns. It's painful, it's angering, and my love goes out to all of you out there. I hope you find your feet once more, however you do it.
But it's not just about game development itself. The thing I really learned was how differently AI operates in games versus what we took for granted in academic research.
The Best AI is Seldom Complex
I distinctly recall that when I first read about the AI of FEAR during my PhD, my first thought was how... simple it was. I felt it naive that games companies used such simple approaches when 'smarter' alternatives were available, both in symbolic AI and in machine learning. The planning tech in FEAR is derived from STRIPS, which is pretty old, dating back to the 1970s. In fact, when I learned about FEAR's AI, I was part of a research team that specialised in planning technologies - and STRIPS was considered archaic compared to more contemporary methods. So yeah, I found that baffling. My PhD was in part spent trying to explore alternatives, and in the end my thesis developed something that 'worked', but it had many limitations.
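For readers less familiar with the technique, here's a minimal sketch of the STRIPS-style planning idea that GOAP builds on - written in Python, with action names I've made up purely for illustration, not anything taken from Monolith's actual code. Actions declare preconditions and effects over a set of symbolic facts, and the planner simply searches for a sequence of actions that satisfies the goal.

```python
from collections import deque

# A toy STRIPS-style planner in the spirit of GOAP. World state is a set of
# symbolic facts; each action lists preconditions, facts it adds, and facts it
# removes. The planner breadth-first searches for any action sequence that
# reaches the goal. Action names are illustrative, not taken from F.E.A.R.

ACTIONS = {
    "draw_weapon":   {"pre": set(),                 "add": {"armed"},       "del": set()},
    "move_to_cover": {"pre": set(),                 "add": {"in_cover"},    "del": {"exposed"}},
    "attack_target": {"pre": {"armed", "in_cover"}, "add": {"target_dead"}, "del": set()},
}

def plan(start, goal):
    """Return a list of action names that transforms `start` into a state satisfying `goal`."""
    frontier = deque([(frozenset(start), [])])
    visited = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # every goal fact is satisfied
            return steps
        for name, a in ACTIONS.items():
            if a["pre"] <= state:              # preconditions hold in this state
                nxt = frozenset((state - a["del"]) | a["add"])
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                # no plan satisfies the goal

if __name__ == "__main__":
    print(plan({"exposed"}, {"target_dead"}))
    # -> e.g. ['draw_weapon', 'move_to_cover', 'attack_target']
```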
That realisation was the first time I understood how limited my perspective was of how AI works in the industry, and how my lack of knowledge had - in part - been driven by how difficult it is to find out how all this stuff is actually done. Outside of paying for the GDC Vault - which I couldn't afford - it was very difficult to keep on top of the big things happening in and around the industry. In hindsight, it's no surprise that, as someone who was interested in how games worked when I was younger and lacked any real insight into the industry, I've since dedicated a lot of my time to sharing whatever I can find out about this space with all of you out there, as freely as possible.
As an aside, it's also one of the problems I've faced for years in monetising my work, given I don't want to create another paywalled platform that contributes to this lack of information access.
But I want to stress that while I say FEAR's AI is simple, I find the very best AI in games often shares in this ideal. This doesn't mean it's bad, nor does it mean it isn't technically complex - let me stress that, please: some of the work done in these games is a technical marvel. Rather, what I mean is that conceptually, the core of the decision-making is often rather simple: a handful of very smart decisions achieve an effect, which is then layered with additional flourishes that really sell it. Which I'll come back to in a second.
Looking back on many of the games that are often thrown around as having great AI, they're seldom complex in theory:
Left 4 Dead's Director AI spawns zombies, and targets players based on whether they're playing the game as envisaged.
Half-Life's AI is a reactive state machine, but it also just gives characters goals to follow if they're not going to be interrupted, essentially queuing actions one after the other.
The big breakthrough of Halo 2's AI? It can immediately change tactics if the player does something significant.
FEAR's AI tries to minimise threat, and it will either plan to kill you, or hide to resolve it.
Heck, one of my weird claims to fame is that I normalised the discussion of Alien: Isolation's AI having 'two brains'. In a tweet I posted 6 years ago - on a profile I have since deleted because y'know, Twitter - I highlighted that one brain always knows where the player is and tells the second brain, which controls the body, where to go in order to find you.
But while these ideas are conceptually simple, in practice they're significantly more complicated. All of the games I've mentioned have a myriad of technical complexities in making them work, but the crux of the idea in each is very simple - and it then rolls out into increasingly complicated systems that have to handle the myriad of situations that can arise.
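To show just how small that crux can be, here's a toy sketch of the 'two brains' idea I described above: a director that always knows where the player is and leaks only a fuzzy hint to the agent that actually moves around the level. The class names and numbers are mine, purely for illustration - this is not Creative Assembly's implementation.

```python
import random
from dataclasses import dataclass

# A simplified sketch of the 'two brains' pattern described above: a director
# that always knows the player's true position, and a body agent that only ever
# receives a rough area to investigate. Names and numbers are illustrative.

@dataclass
class Director:
    """Omniscient brain: knows the player's position, leaks only a fuzzy hint."""
    fuzziness: float = 10.0   # how vague the hint is; could shrink as tension rises

    def hint(self, player_pos):
        jitter = lambda: random.uniform(-self.fuzziness, self.fuzziness)
        return (player_pos[0] + jitter(), player_pos[1] + jitter())

@dataclass
class BodyAgent:
    """The brain that drives the creature: searches around whatever hint it was given."""
    position: tuple = (0.0, 0.0)

    def search_towards(self, hint):
        # Move only a fraction of the way towards the hinted area rather than
        # beelining to the player, so the search still feels like a search.
        x, y = self.position
        self.position = (x + 0.25 * (hint[0] - x), y + 0.25 * (hint[1] - y))
        return self.position

if __name__ == "__main__":
    director, body = Director(), BodyAgent()
    player = (40.0, -25.0)
    for _ in range(5):
        print(body.search_towards(director.hint(player)))
```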
Why this 'conceptual simplicity'? I'd argue from experience that it's because developers and designers themselves need to understand it. If a system is so complex that even those working on the game find it difficult to anticipate what it might do, then it becomes a real headache to work with. Debug tools and much more besides help keep that manageable, as you can force the AI to do specific things while developing the game. But players, as much as designers, often want to be able to anticipate the things it can do, while also still being surprised in the moment.
But a big part of that surprise, as well as the quality of a given AI character or opponent, rarely comes from the underpinning AI technology alone.
The Emphasis on 'Theatre'
When I work with developers nowadays as a consultant, the philosophy I try to communicate is the need to treat your game as a theatrical performance. It's all about putting characters and important elements in the correct place, at the correct time, and bringing the story or the experience to life. The trick of course, is trying to ensure that the player actually sees it.
Players are very fickle. They move around a lot, and they don't always look where the game needs them to look. Heck, it's also kind of established that a lot of players don't even read. No, I don't mean the hundreds of text logs you might find in a game to establish its lore. I'm talking fundamental stuff. Tutorials, key information prompts, and much more besides are blissfully ignored. Back when I used to have stalls set up at events like EGX as I toured my little indie games, we spent the bulk of our time giving advice on how to play our game because all of the massive tutorial messages we put in it were simply ignored. It used to drive me absolutely mental. We must've spent weeks of development reworking the UI and adding what we thought were clearer and clearer prompts, only for people to completely miss them.
Look, I'm not looking down on anyone: I played Monster Hunter World for 50 hours before I realised I'd completely missed a tutorial that tells you how to steer a monster by using the clutch claw. My point being, trying to catch the attention of a player, even when they're really focussed on your game, is very difficult.
A lot of games wrest control from you with cutscenes and vignettes. But you can't do that with non-player characters in moment-to-moment gameplay. And so this brings me to the big thing I raise with every developer I work with: how can the player interpret what your AI is thinking?
This is where the theatre kicks in. Because a lifeless NPC is impossible to interpret. By comparison, humans give away a lot of tells: their body language, what they say, how they move, their facial expressions, their tone of voice. You learn a lot about what they're thinking by how they act. And this is often the thing that really makes great AI in games excel, because you can understand - to some extent - what it is trying to do.
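One way I tend to frame this with teams - and the code below is purely my own illustrative sketch, not any particular studio's system - is that every change of intent inside the AI should also publish a 'tell' to the presentation layer: a bark, an animation, a musical sting. Something like:

```python
from enum import Enum, auto

# A sketch of one way to expose what the AI is 'thinking' to the player:
# every decision the brain makes also publishes an intent, and the
# presentation layer (animation, voice barks, music) reacts to it.
# The intents and tells here are illustrative, not from any specific game.

class Intent(Enum):
    PATROLLING = auto()
    SEARCHING = auto()
    FLANKING = auto()
    FLEEING = auto()

TELLS = {
    Intent.PATROLLING: {"bark": "All quiet...",        "anim": "relaxed_walk"},
    Intent.SEARCHING:  {"bark": "I heard something.",  "anim": "scan_area"},
    Intent.FLANKING:   {"bark": "Moving around back!", "anim": "crouch_run"},
    Intent.FLEEING:    {"bark": "Fall back!",          "anim": "sprint_away"},
}

def play_bark(line):
    print(f"[VO]   {line}")     # stand-in for the audio system

def set_animation(tag):
    print(f"[ANIM] {tag}")      # stand-in for the animation system

def on_intent_changed(intent: Intent):
    """Called whenever the decision-making layer switches intent."""
    tell = TELLS[intent]
    play_bark(tell["bark"])     # voice line that telegraphs the decision
    set_animation(tell["anim"]) # body language that matches it

if __name__ == "__main__":
    on_intent_changed(Intent.SEARCHING)
    on_intent_changed(Intent.FLANKING)
```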
I don't think this particular philosophy really began to take shape for me until after my episode on BioShock Infinite, which I released back in 2017, in which the development team - the 'Liz Squad' - took a holistic approach to the character that put theatrical performance at the very centre. But I've since expanded that thesis into something much broader.
For me, the games that really excel in their AI, are those that sell me the experience beyond any intelligent decision making:
Hearing soldiers cooperate with one another in FEAR
The body language of thugs in Arkham Asylum as the Batman looms over them
How the music in Left 4 Dead tells me when things are about to go horribly wrong
The tension that builds in the Last of Us Part II as a patrol closes in on my hiding spot
The brief look of terror in the eyes of a demon as the DOOM Slayer smashes their skull into paste
How I can tell how much trouble I'm in based not just on how Alien: Isolation's Xenomorph moves, but on the hisses, the snarls, and the turns of its head.
These are the things that make AI in a game really shine, and almost all of it isn't coming from the AI programmers. It's from designers, it's character artists, it's animators, it's voice actors and sound designers.
Great AI is Nothing Without Tools
Each episode of AI and Games takes a lot of effort to produce, and in order to build the knowledge I need to start putting a script together, I find myself digging into a variety of sources: I watch presentations; I read research papers, book chapters, technical reports, and interviews. I even dig through the source code of games - some of which is publicly available, and some... not? An episode can take me a couple of hours to research, or in some cases literal weeks. But there is one consistent element that crops up again and again, and that's the value of good tools to support the team.
What do I mean by tools? Well, quite often making changes to AI can be difficult if it's all in the code, and on any game project the bulk of the development team won't be able to program. So making it easier to understand what an AI system is doing, how it comes to its decisions, and even to edit these elements can be a real boost.
This is again something I advocate for when working with studios: understanding the technical intricacies of AI for games is already a huge undertaking for programmers, so it's going to be even harder for designers - and they need your help. A designer shouldn't need to know the technical intricacies of using Goal Oriented Action Planning or a Behaviour Tree, but they should know enough about how it works at a high level that they can learn how to tweak it and create new content. Naturally this shifts from project to project and studio to studio, but more often than not you have designers who want the AI to satisfy specific design goals, and it's a collective effort to make that as smooth a process as possible. This is especially relevant to smaller teams where the number of programmers or designers is in the single digits: build tools to make each other's lives easier, rather than have your work slowed down by overlapping responsibilities.
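As a concrete (and entirely hypothetical) example of what that can look like in practice, here's a tiny behaviour tree interpreter where the tree itself is plain data a designer could read and tweak, while the programmer owns the handful of node types the code understands. The node names and parameters are made up for illustration; a real project would pair this with a visual editor and debugging views.

```python
import json

# A sketch of the tooling argument made above: behaviour is authored as data
# that designers can read and tweak, while programmers own the code that
# interprets it. Node names and parameters are invented for illustration.

BEHAVIOUR = json.loads("""
{
  "type": "selector",
  "children": [
    { "type": "sequence", "children": [
        { "type": "condition", "check": "can_see_player" },
        { "type": "action", "name": "attack", "params": { "aggression": 0.7 } } ] },
    { "type": "action", "name": "patrol", "params": { "speed": 1.0 } }
  ]
}
""")

def tick(node, blackboard):
    """Interpret one tick of the designer-authored tree. Returns True on success."""
    t = node["type"]
    if t == "selector":                  # succeed on the first child that succeeds
        return any(tick(c, blackboard) for c in node["children"])
    if t == "sequence":                  # succeed only if every child succeeds
        return all(tick(c, blackboard) for c in node["children"])
    if t == "condition":
        return bool(blackboard.get(node["check"], False))
    if t == "action":
        print(f"run {node['name']} with {node.get('params', {})}")
        return True
    raise ValueError(f"unknown node type: {t}")

if __name__ == "__main__":
    tick(BEHAVIOUR, {"can_see_player": True})   # -> run attack ...
    tick(BEHAVIOUR, {"can_see_player": False})  # -> run patrol ...
```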
A big challenge, of course, is that given how unique every AI system winds up being, the tools are also custom and bespoke for that game. It's hard to standardise this sort of thing, but it strikes me as the thing that really helps steer AI into something practical and manageable.
Of course, I say this at a time when we're hearing a lot about AI tools for games, offering to speed up and bootstrap development. It's an interesting time for sure, and I think there's a lot of really cool stuff out there that's worth checking out, and a lot of stuff that... yeah...
The Value of AI Research for Games
The last thing I wanted to talk about is spinning this around and looking at AI for games from another direction. One thing I continue to see value in is the thing that I walked away from: research! If anything, the past ten years have highlighted the value of AI research in games in a variety of different ways.
Now, when I talk about AI research, people are often quick to think of the big news stories: of DeepMind training AI to play a variety of Atari games, and then of course StarCraft; of OpenAI training bots to learn to play Dota 2. These are impressive, for sure, but I don't see them as being of much practical value to those who work in or are interested in making games. As I've discussed in previous videos and elsewhere, these are great examples of the potential of deep learning, and the scale and complexity it can reach, but they aren't practical for game development. In fact, despite these breakthroughs happening as early as 2013, it took the best part of a decade before deep learning became normalised in the games industry, and quite often in ways that I didn't anticipate.
It's funny for me, as someone who used to be a researcher, because particularly in my grad school days the conversations happening in academia were often split into two main trains of thought. The first was that games are a great way to learn how to improve machine learning technologies. The second was that machine learning would enable smarter non-player characters and more intelligent opponents in commercial video games: training neural networks could make for smarter opponents, and the adoption of reinforcement learning algorithms would transform strategy AI. While some of this did come to pass, it was in limited instances. By and large, a lot of the arguments for machine learning in games haven't really proven out.
What is interesting, however, is to see how machine learning has emerged in a myriad of problem areas across game development. Academic research has had an impact on a lot of that, such as the first real adoption of player analytics, conducted by my friends over at ITU Copenhagen on Tomb Raider - a topic I made a video about years ago. But there is also the myriad of problem spaces that academics - like me - were not thinking about, and a big part of that was a lack of knowledge of the many challenges faced in development. Now sure, a lot of research in game development nowadays, at the likes of Ubisoft's La Forge or Sony AI, is done in collaboration with universities, but for many years - from the perspective of a researcher keen to work with games studios - that seemed like a fantasy. In fact, at one point I spent weeks trying to establish a collaboration between my department at one university and a AAA publisher - both of whom shall remain nameless - and it never materialised despite the best efforts of myself and my friend at said publisher. Now, I'd argue there are three really exciting research routes.
First, access to studios and their willingness to collaborate is greater than ever before. Many PhD graduates I meet have worked in collaboration with studios, either on brief internships or as members of the development team. This helps foster knowledge and communication that I would've killed for 20 years ago as I entered grad school.
Secondly, as AI research in academia was increasingly communicated and shared with the wider world, game developers have been able to try out these techniques themselves, and that's led either to new applications emerging, or to ideas that were floating around academic spaces being proven to actually work in real-world contexts. From texture upscaling and real-time neural rendering, to the use of motion matching in animation, to cheat and toxicity detection in online gaming and much more, game developers found solutions to things they knew were problems. It's exciting to see what new developments continue to emerge in the space. There is still so much potential for machine learning to be applied in games, be it with deep learning or generative models, and hopefully it will be achieved in ways that are practical for the business and for developers.
But I think it's worth saying here that academic research is not beholden to industry relevance. Research shouldn't be pursued only because we think it addresses an industry need, but rather to expand our collective knowledge of a subject. After all, research is about finding not just the right answers, but also identifying the wrong ones. Experimentation and, frankly, failure are an important part of academic research, as is the opportunity to do weird and wonderful things. And this leads to the third and final aspect.
For me what is perhaps the most exciting thing about the field nowadays is that we have normalised the idea that conducting research in games is a good thing. My first games research project started in 2004, and at that time the field of AI researchers interested in games was very small. In fact, it was only a few years prior, in 2001, that a paper published by John Laird and Michael van Lent in AI Magazine - the magazine of AAAI, the American Association for Artificial Intelligence - had advocated for the use of video games as a research platform. At the time, my peers and I had to be careful when writing papers not to use the term 'game' too often. The games I built for my projects were referred to as 'simulations' or 'serious games', and I had to be mindful of where I tried to publish my results, for fear of being dismissed for conducting pointless, frivolous research.
Fast forward to today, and I now attend events and regularly meet new students who are supervised by professors who were starting their journey at the same time as I was. I'm no longer a researcher (and to be honest I wasn't a particularly good one IMO), but I feel a kinship with these people because I've been through that journey. And the new generation is so exciting: they're tackling new ideas and problems that I wouldn't have thought possible as a PhD thesis 20 years ago, and they're also far more fluent in games technologies and practices, such that their ideas may have more relevance to the industry - and if they don't, they can just go and make their own prototypes in the likes of Unity or Unreal to prove their point. At the same time, while corporations are making huge breakthroughs that are impossible on an academic budget, these students are doing weird shit... fun and crazy stuff that might not have value now, but could prove influential years from now. There's huge potential in that space, and I'm more excited to see what comes out of it than most of the stuff coming out of corporate research labs. If you're in a position where you could go and study games at grad school, I wish you the best of luck!
Some Closing Thoughts
So there we are, some words of 'wisdom' after ten years of this nonsense. Perhaps you learned something from this? I'm not sure. But what's next? Is this the end of AI and Games? Am I going to pack up now and declare my job is done?
AI and Games will continue, and frankly I feel like there's still so much to talk about. As I said earlier, AI in the games industry has changed so much since my first episode, and there are still so many topics to explore, games to dive into and - well, there's a lot of folk out there peddling AI snake oil.
Y'know, I kinda miss when NFTs were the big grift...
But hey, thank you for reading this 10 year anniversary special. I hope you've enjoyed it. This is the first of a series of episodes I'm releasing in the coming weeks and months to celebrate 10 years of AI and Games. We've got retrospectives, some big games being explored, and also I'll be back in a more informal capacity to talk about my predictions of how I think AI will change and evolve over the next 10 years in the games industry.