The Origins of Half-Life's Finite State Machines
How the AI of Half-Life is Connected Through History with the Manhattan Project
The History of AI and Games is a new YouTube series made possible thanks to the AI and Games crowdfunding community on Patreon as well as right here with paid subscriptions on Substack.
Support the show to have your name in video credits, contribute to future episode topics, watch content in early access and receive exclusive supporters-only content.
Do you know what links Valve's Half-Life with the atomic bomb?
No, it's not Gabe Newell, nor is this some bad attempt at humour (y'know, half-life, nuclear physics, etc.) - this is all quite true. You can draw a link between the beloved first-person shooter released in 1998 and the detonation of the two atomic bombs in 1945 that led to an estimated death toll of a quarter of a million people in Hiroshima and Nagasaki, Japan. The answer centres on the underlying concepts that drive the AI characters in the game itself.
I'm Tommy Thompson, and in this episode of History of AI and Games, we explore how this research field was driven by many an individual whose work not only influenced some of the most beloved video games of all time, but was also more directly involved in possibly the most destructive force created by humankind.
After exploring the humble beginnings of computing as we know them, and how they may well have arisen inadvertently thanks to the Mechanical Turk, we jump forward in time by over 200 years: from 1770 in Vienna, Austria, to 1998, and specifically to the offices of Valve LLC, a company founded in 1996 by former Microsoft employees Gabe Newell and Mike Harrington, based in Kirkland, Washington state, in the United States.
On November 19th, Valve releases their first game as part of this new venture, Half-Life: a title that proves highly influential not just on the first-person shooter market, but on the games industry as a whole. Half-Life set a new standard for action, storytelling and visual fidelity at a time when games were grappling with the move into 3D rendering and design. This was of course later embellished upon by its sequel, Half-Life 2, which to this day still appears in many a list of the greatest video games ever made.
Now the link that bridges Valve's seminal release and one of the worst events in human history is the AI methodology employed for the non-player characters. So before we dig deeper and start working our way through the history books, let's have a quick refresher on how the AI of Half-Life works and its importance in the history of AI for video games.
The AI of Half-Life
The AI of Half-Life is powered by a simple yet effective technique known as the Finite State Machine: a concept whereby specific behaviours of a character are encoded as individual states, with rules established throughout the game's logic that dictate when a character may transition from one of those behavioural states to another. So for example, an enemy guard may patrol the map until they see the player, after which they will transition into one of several states that handle combat. I won't dig into the minutiae of how the AI works in Half-Life here, but if you're interested, please check out the episode dedicated to the topic in my AI 101 series.
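To give a feel for the structure, here's a minimal sketch of that guard example in Python. To be clear, this is not Valve's code - the state names and transition rules are purely illustrative - but it shows the core idea of states plus the rules that move a character between them.

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

class Guard:
    """An illustrative guard NPC driven by a finite state machine."""

    def __init__(self):
        self.state = State.PATROL

    def update(self, sees_player: bool, in_weapon_range: bool):
        # Transition rules: each state checks the conditions that allow
        # it to move to another behavioural state.
        if self.state == State.PATROL and sees_player:
            self.state = State.CHASE
        elif self.state == State.CHASE:
            if not sees_player:
                self.state = State.PATROL
            elif in_weapon_range:
                self.state = State.ATTACK
        elif self.state == State.ATTACK and not in_weapon_range:
            self.state = State.CHASE

        # Each state then runs its own behaviour for this frame.
        {State.PATROL: self.patrol,
         State.CHASE: self.chase,
         State.ATTACK: self.attack}[self.state]()

    def patrol(self):
        print("Walking the patrol route...")

    def chase(self):
        print("Chasing the player!")

    def attack(self):
        print("Opening fire!")

guard = Guard()
guard.update(sees_player=False, in_weapon_range=False)  # Walking the patrol route...
guard.update(sees_player=True, in_weapon_range=False)   # Chasing the player!
guard.update(sees_player=True, in_weapon_range=True)    # Opening fire!
```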
Now it's important to stress that Half-Life's AI wasn't new. No, not at all. Finite State Machines had been used in some form or another in games for many years, even as far back as the 1980s in the likes of Pac-Man. And in fact, in episodes of my main AI and Games series, we have dug into how this particular approach was employed in other seminal shooters of the 1990s, notably id Software's DOOM in 1993, and then in 1997 in Rare's beloved James Bond adaptation GoldenEye 007. What Half-Life highlighted - and why it's the focus of this video - was the ability to use finite state machines to complete specific goals. When a goal is given to these characters, the system devises a path through the state machine that will achieve that goal. This, combined with the accessibility of Half-Life's codebase, helped establish and cement its place in game AI history and led to many an expansion of this idea, be it in the likes of Bungie's Halo franchise or Monolith's F.E.A.R., which each innovated in their own ways.
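As a rough illustration of that goal-driven flavour - and again, this is a heavily simplified sketch for this article, not how Half-Life's actual schedule and task system is written - you can think of each goal mapping to an ordered sequence of behavioural states that the character works through:

```python
# Hypothetical goals mapped to ordered sequences of behavioural states.
# The goal and state names here are made up purely for illustration.
SCHEDULES = {
    "eliminate_player": ["take_cover", "reload", "flank", "attack"],
    "flee_danger":      ["break_combat", "find_exit", "retreat"],
}

def run_goal(goal: str):
    # Step through the states in order; in a real game each state would
    # run its behaviour until it succeeds or fails before moving on.
    for state in SCHEDULES[goal]:
        print(f"Entering state: {state}")

run_goal("eliminate_player")
```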
But pulling the thread even further, it's worth highlighting how often the AI in video games is derived from other sources. Game AI, as it is often known, is a suite of tools and techniques used to facilitate specific game design concepts. If you consider Director AI, where you use an AI to govern the experience of a game, that term was originally coined to distinguish between the use of a finite state machine for non-player characters or NPCs, such as in Half-Life, and the use of the same approach for experience and gameplay management, as in Left 4 Dead - another popular Valve game. Plus let's face it, it's a much more marketable term than 'FSMs for Experience Management'.
But even then, finite state machines are not unique to games and date back much earlier than that. The notion of a finite state machine, or rather the finite state automaton, is one that has existed within mathematics for far longer than it has in video games. And it's here that we find how Half-Life is connected to important events in human history.
Automata Theory
Finite State Machines are one of a number of similar yet distinct approaches to modelling computational processes. This field of theoretical computer science is referred to as 'automata theory': in essence, the study of self-acting or self-moving machines.
Like a lot of computer science and AI research in the early 20th century, there were many independent bodies of research occurring without the benefit of regular communication and the sharing of ideas. But by and large, they all adhered to the same core concepts (sketched in a short code example after this list):
A set of inputs (often referred to as a word) drawn from a fixed alphabet of symbols.
A number of states that the automaton could operate within, each exhibiting some form of behaviour.
A transition function that dictates, based on the current state and the input symbol, which state the system transitions to next, or what output it exhibits.
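Putting those three ingredients together, here's what a tiny deterministic finite automaton looks like in code. It's a toy example - recognising whether a binary word contains an even number of 1s - and isn't tied to any specific paper or game:

```python
# A toy deterministic finite automaton over the alphabet {'0', '1'}.
# Two states, and a transition function mapping (state, symbol) -> next state.
# It accepts input words containing an even number of 1s.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd",  "0"): "odd",
    ("odd",  "1"): "even",
}

def accepts(word: str) -> bool:
    state = "even"                          # the starting state
    for symbol in word:                     # consume the word symbol by symbol
        state = TRANSITIONS[(state, symbol)]
    return state == "even"                  # 'even' is the accepting state

print(accepts("1101"))  # False - three 1s
print(accepts("1001"))  # True  - two 1s
```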
And so in 1956, much of this research was finally consolidated. Automata Studies, published by Princeton University Press and edited by two of the most influential AI researchers of the period - Claude Shannon and John McCarthy - brought together 13 chapters of ideas surrounding the notion of automata theory, and the cutting-edge work in the field.
The Finite State Machines we see in video games largely resemble the work of one of these authors: the American professor Edward F. Moore. They are arguably one of the simplest forms of this theoretical concept, given they often operate as sequential, deterministic, discrete finite automata (meaning the number of states is fixed and the transitions are known in advance).
However, our story is really interested in another scientist whose work overlapped in the same field: one of the other authors of this book, John Von Neumann.
Von Neumann's work in automata theory helped derive many a breakthrough, though arguably the most well known is that of cellular automata, a concept derived originally in the 1940s in collaboration with Polish mathematician Stanislaw Ulam - where you have a grid of cells that are either on or off, and a set of rules that dictate the activation of each cell based on the condition of those in proximity. It is, in essence, a finite state machine operating as a grid, where transitions occur based on positional relationships. It is one of the earliest known forms of automata theory, sitting alongside the Turing Machine, first conceived in 1936.
While cellular automata have been used throughout game development - notably in procedural generation as far back as the early 1980s, for maps in the likes of Dandy on the Atari 8-bit and later Gauntlet in arcades in 1985 - they're largely known outside of computing and academia courtesy of the Game of Life: a mathematical game published by British mathematician John Conway in 1970, a topic which Alan Zucconi has explored in his own video.
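To give a concrete sense of how simple those neighbourhood rules are, here's a minimal sketch of a single Game of Life step in Python. It's purely illustrative, and has nothing to do with how Dandy or Gauntlet generated their maps:

```python
def step(grid):
    """Apply one generation of Conway's Game of Life to a 2D grid of 0s and 1s."""
    rows, cols = len(grid), len(grid[0])
    new_grid = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the live cells among the (up to) eight neighbours.
            neighbours = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            if grid[r][c] == 1:
                # A live cell survives with 2 or 3 live neighbours.
                new_grid[r][c] = 1 if neighbours in (2, 3) else 0
            else:
                # A dead cell becomes live with exactly 3 live neighbours.
                new_grid[r][c] = 1 if neighbours == 3 else 0
    return new_grid

# A 'blinker': three live cells in a row that oscillate between
# horizontal and vertical each generation.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(step(blinker))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```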
And this is the critical piece of the puzzle, given Von Neumann - like many of the seminal minds in these early, often theoretical, days of computer science research - saw his talents being utilised in more applied problem areas.
John Von Neumann
Dr John Von Neumann was a mathematician, physicist, engineer, polymath and one of the earliest computer scientists in history. Born December 28th 1903 in Budapest, Hungary, Von Neumann was a child prodigy who, by the time he was 8 years old, could understand differential and integral calculus and speak six languages. Now if you really want to feel bad about yourself, Von Neumann had published two important scientific papers in maths and won a national award for mathematics by the time he was 19, before he'd even gone to university.
Von Neumann completed his university studies by 1927, and taught mathematics at the University of Berlin a year later - becoming the youngest person ever elected to the role of Privatdozent (a title historically used in German-speaking countries to denote a doctoral graduate qualified to conduct research and teach students). After a visiting lectureship at Princeton, Von Neumann accepted a tenured position at the Institute for Advanced Study in 1933, and became a US citizen in 1937. His background in mathematics led to his involvement in a variety of areas: not just automata theory, but helping establish the research field of game theory by proving the minimax theorem in 1928 (a result later adopted in chess AI programs), inventing the merge sort algorithm, contributing towards the development of the Monte Carlo method - used in everything from the AI of Total War to Google DeepMind's AlphaGo - and acting as a leading contributor to the programming of the ENIAC: the Electronic Numerical Integrator and Computer, the first ever fully programmable, general-purpose electronic computer, built in 1945 at the University of Pennsylvania. And sadly, it is this contribution that had a much more powerful impact than could have been anticipated.
The reason for this is of course that by 1945, the world had been caught once again in the grip of world war for six years, with the US joining the fight against the Axis powers in 1941. While the ENIAC was a research project that sought to explore the capacity of electronic information processing, it was originally built to speed up the calculation of trajectories and ballistics models for artillery fire. And then Von Neumann put it to use for something much more terrifying.
Von Neumann had shown expertise in the calculation of explosions in research dating back to the 1930s. In fact, he'd become so prolific in his expertise that he'd consulted for many branches of the US armed forces, and was subsequently invited to join the Manhattan Project: the secret R&D project conducted by the US, UK and Canada to produce nuclear weapons. Von Neumann made several significant contributions to the development of the atomic bomb, including the implosion mechanism used to compress the plutonium core, but critically he was involved in two key decisions.
The first was that he was part of the target selection committee - calculating the expected explosion sizes and in turn the estimated death toll for each bomb - where he in fact nominated Kyoto over Hiroshima and Nagasaki to be one of the targets, though this was later dismissed. The second, and perhaps even more horrendous, is that Von Neumann was solely responsible for the calculation of the height at which the bomb detonates. If you're not familiar, atomic bombs don't detonate on the ground: they actually explode hundreds of metres above it, given this yields a more powerful blast as the shock wave smashes through all it comes across. Von Neumann employed the ENIAC to calculate the reaction that would occur based on specific heights of the bomb's detonation, which in turn would help derive an estimated death toll. In fact, the use of the term 'kiloton' as a measure of the devastating force a nuclear weapon can create was coined by Von Neumann in a paper written in 1944, one year prior to the bombs' deployment and subsequent detonation over Japan.
Closing
What was the point of all of this? Why tell this specific tale? To highlight three critical aspects about the history of AI and computer science as a whole.
First, that while we utilise AI in a variety of fields - and notably here on AI and Games to discuss forms of entertainment - much of early computer science and later AI not only overlaps with one another, but quite often the roots of ideas we see today all started from the same points in time. AI and computing as a whole is still a research field that is, in an applied sense, less than 100 years old. And so often we find that ideas percolate individually and then collectively in ways that bear fruit later on - much like the current AI market being driven by innovations in machine learning that occurred over 40 years ago.
Second, and perhaps even more critically, that a lot of the ideas employed nowadays in commercial products were built to support, or were funded by, the military-industrial complex. While nowadays we see many a big AI development occurring in huge corporations, much of the early days of this field was driven by military need. Two of the earliest computing devices, the ENIAC and the Bombe - the electromechanical device built by Alan Turing and others at Bletchley Park in the UK - were both built for military applications, the latter to decrypt the German Enigma encryption of military communications.
And the third aspect? Well, to be mindful that it's not necessarily about the technology itself, but how it is employed. We're at a critical juncture in society as AI becomes more pervasive, and we must stay vigilant not just about how it works, but about where it is adopted.
But what about Von Neumann? His contributions to the field of nuclear physics proved significant, though his work may well have been the death of him, given he died of cancer on February 8th, 1957, with the cause suggested to be radiation exposure during his time at the Los Alamos laboratory on the Manhattan Project. Curiously, while Von Neumann's contributions to the Manhattan Project were significant, he didn't even get a cameo in Christopher Nolan's 2023 biopic about Robert Oppenheimer, despite being one of the people on-site at the Trinity test: the first ever detonation of a nuclear weapon, in New Mexico.