The Video Games Industry Needs to Start Self-Regulating for Safety with AI
Getting ahead of the issue, rather than letting it become an epidemic
AI and Games Plus is our second YouTube channel that is dedicated to expanding on topics from AI and Games in more detail. It’s made possible thanks to crowdfunding on Patreon as well as right here with paid subscriptions on Substack.
Support the show to have your name in video credits, contribute to future episode topics, watch content in early access and receive exclusive supporters-only content.
Earlier this month, here in the United Kingdom, was the Global Summit on Artificial Intelligence Safety: an event hosted by the UK government in an effort to, and I quote, "agree safety measures to evaluate and monitor the significant risks from AI".
It was... a bit of a mess quite frankly, and at a time when we really need a greater understanding of how and where to employ safety and protection measures around artificial intelligence, for individual industries as well as broader society.
At the same time, I've been chatting more publicly in the press about the need to build protection measures across the games industry, so in this episode of AI and Games Plus, I want to dig a little deeper into that. What do I mean by safety measures, what do the likes of the AI Safety Summit fail to grasp, and where can the industry start making steps forward?
The AI Safety Summit
So if you're not familiar, let's bring you up to speed on what the AI Safety Summit was all about. As I said already, it was intended as a means to raise discussion on how to evaluate and monitor the risks of AI, all the while not really delivering any meaningful conversation on the topic.
The UK government, and more specifically Prime Minister Rishi Sunak, made quite a lot of noise in hyping up this event, hosting it at Bletchley Park, where AI forefather Alan Turing worked as part of the team that cracked the Enigma code during World War II. It hoped to bring together figureheads from governments and AI companies across the likes of central Europe, the United States and even China to discuss the real-world applications of AI and the potential safety risks it presents.
While this all sounds eminently reasonable, much of this is really performative politics on the part of the UK's Conservative government. At a time when the UK's status in many regards has been diminished courtesy of Brexit, the same is true in the realm of computer science. The UK has, over the past 10 to 20 years, lost what little foothold it had in the AI market. When it comes to AI, it is at this point a two-horse race between the US and China, and depending on who you ask, the UK is at best in a distant third place.
The AI Safety Summit sought to help re-establish the UK's relevance in this space, but even then it all felt rather lacklustre. Meaningful efforts on regulating AI and relevant technologies are well underway in the European Union, which the UK will typically have to play along with in order to do business in European markets. Meanwhile, US President Joe Biden announced an executive order on AI regulation the same week, then opted not to attend the summit and sent Vice President Kamala Harris in his stead.
This naïve posturing by the UK cabinet is highly evident in virtually all of the press surrounding the event, as Sunak shows his distinct lack of knowledge on the subject matter, often speaking of AI as an omniscient presence, and of the realities of systems becoming sentient and threatening our way of life. A UK prime minister having a conversation on the future of AI with Elon Musk, of all people, is the blind leading the blind: two people in positions of wealth, power and authority waxing poetic about how everyone beneath them will suffer at the hands of AI as their livelihoods are ruined. And in press conferences since, Sunak has kept to the traditional line of the UK Conservative Party of not legislating unless they have to (which, spoiler alert, never goes well anyway). It was quite a staggering display, particularly with an election year in 2024 no less, as these two supposed men of industry showed a fundamental lack of understanding of how AI technology works and how it will be employed in society. But the big takeaway was that the little people would suffer the most.
A Need for AI Safety
Now, what does all of this have to do with games? Well, it's important that as a society we start moving towards building safeguards to protect developers, users and consumers of artificial intelligence, whatever form it may present itself in. The big conversation happening around AI right now is largely about copyright, particularly due to generative AI techniques. But it's also a situation in which AI as a whole is generally misunderstood. When the Prime Minister of your country speaks about AI as an omniscient threat, that's really disappointing. When elected leaders rely on The Terminator as their basis for understanding AI, we're in trouble. The real problem with most AI is not that it will be in control of military hardware, but that people in positions of power and authority, be they in political, military or capitalist frameworks, use those systems to expedite decision making, as they attempt to take existing frameworks, ideas and data and use them to create a system for automation. The real threats of AI come not from the systems themselves, but from the people that design and use them. Sometimes this is done deliberately, to do harm or to make monetary gain from others, while a lot of the time it's developers or consumers using AI without any real understanding of what it does.
And yes, all of this applies to the games industry. In fact, I was recently interviewed by our Branching Factor podcast co-host George for his new publication, VGIM, about why we need to do a better job of capturing the nuances of how AI is going to be used in games, and of regulating the industry to better protect both players and developers.
Does this mean I am all for government intervention and the introduction of AI regulation? Yes, and no. Historically, governments, be it the US, the UK, the EU or elsewhere, are not great at regulating technology. This is a common bugbear, given governments are often led by people who have a very poor grasp of technology in general, and writing regulation so that it captures the nuances of these technologies well enough to be enforceable is a real challenge.
In fact, earlier this year, George and I discussed this at length on Branching Factor (see above): how the UK government has failed to implement any meaningful legislation against loot boxes, because the practice and the surrounding ecosystems are incredibly hard to define in such a way that you don't inadvertently legislate against all in-game microtransactions and related currencies. Sure, some folk might really want that, but it's a prime example of how regulation often harms and stifles innovation within technology-based and creative industries, given it can deem many a common day-to-day practice illegal without bureaucratic oversight.
At minimum, I would argue that what we should be moving towards is self-regulation: where the industry bands together and puts together safe practices that are clearly communicated, and can be somewhat enforceable. It wouldn't be the first time this has happened either, given many of the ratings boards you see across western gaming markets, such as the ESRB and PEGI, emerged from a need for the games industry to better acknowledge the risks of players being exposed to certain types of content. It's a topic that came to the forefront in the 90s as games like Mortal Kombat and DOOM began to shape public discourse on violence in video games.
Self-regulation that is then enforceable throughout the industry once governments endorse it is useful in that, in the case of the ESRB and PEGI at least, it prevents broader government regulation. It shows a maturity on behalf of the industry in recognising that this is a problem, and in deciding how we should approach it.
So what does all of this mean for AI in games? How can we self-regulate? I don't have all of the answers to this, but I want to spend the remainder of this piece unpacking the comments I made in the VGIM article, and digging into which areas we collectively as an industry need to be focussed on.
Video Game AI Safety
For me, breaking down the safety of AI systems in games really comes down to five key elements, all of which provide an aspect of protection, be it for a player or for a developer.
Protecting Players
First of all, let's talk about players. As a player, you're happy just playing your games, right? But what if the game is using AI that is also reliant on your data? A game where you're chatting with non-player characters powered by conversational AI means that the system is either reading your keyboard inputs and sending them off for processing, or even capturing the audio of your voice. What is happening with that information? And what rights do you have to it? For me, the two most obvious elements here are that you're able to control how and why your information is being used. So it boils down to two core concepts:
A player should be aware of what AI technologies are being employed within a game, and how their data is being utilised and processed for the purposes of said AI.
Expanding on my first point: a player should also be made aware of what rights they have to opt out of their data being utilised for future AI training and development, as well as being able to see how and where their data is being processed as part of the AI training process. And critically, all of this should be in plain, easy-to-understand language.
Now, a lot of my second point here could be captured to some extent courtesy of data protection regulations such as GDPR, but GDPR doesn't explicitly provide protections around AI. This is actually an issue that has been seen with the likes of OpenAI, who originally took everything you wrote to GPT as data it could later use for training; as of April this year, this became opt-out. At present, most of this second element would have to come about courtesy of some form of regulation, but the first part can be more clearly communicated in much the same way that the likes of PEGI and the ESRB now dictate that loot boxes and similar microtransactions for random items be displayed as part of a game's rating.
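To make that first point a little more concrete, here is a rough TypeScript sketch of what a player-facing AI disclosure could look like if surfaced alongside a game's rating information or settings menu. The field names and structure are entirely my own illustrative assumptions, not an existing standard or platform API.

```typescript
// A purely illustrative sketch of a player-facing AI disclosure record.
// None of these names reflect an existing standard or platform API.

type AIDataUse = "gameplay-only" | "model-training" | "third-party-processing";

interface AIFeatureDisclosure {
  feature: string;              // e.g. "Conversational NPC dialogue"
  dataCollected: string[];      // e.g. ["typed chat input", "voice audio"]
  processedBy: string;          // who processes the data: the studio, or a named vendor
  uses: AIDataUse[];            // what the data is used for
  optOutAvailable: boolean;     // can the player opt out of training use?
}

// A hypothetical entry for the conversational NPC example above.
const exampleDisclosure: AIFeatureDisclosure = {
  feature: "Conversational NPC dialogue",
  dataCollected: ["typed chat input", "voice audio"],
  processedBy: "Third-party language model provider",
  uses: ["gameplay-only", "model-training"],
  optOutAvailable: true,
};

// Plain-language summary a player might actually read in a settings screen.
function summarise(d: AIFeatureDisclosure): string {
  const training = d.uses.includes("model-training")
    ? `Your data may be used to train future AI models${d.optOutAvailable ? " (you can opt out)" : ""}.`
    : "Your data is used only to run this feature.";
  return `${d.feature}: collects ${d.dataCollected.join(" and ")}, processed by ${d.processedBy}. ${training}`;
}

console.log(summarise(exampleDisclosure));
```

The point of the exercise is not the exact fields, but that the disclosure is structured enough to be enforced by a ratings body and simple enough to render in plain language for the player.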
Protecting Developers
Meanwhile, jumping over to developers, my concerns stem from a need to better share information on the risks of using certain AI techniques and tools, and whether they result in a legitimate problem for a developer. As we've discussed in recent episodes of AI and Games Plus, the likes of Steam imposing strict limits on generative AI tools is a big issue for developers, who may not be aware until that point that they're getting themselves into a pickle. Sure, big AAA studios have legal teams and much more besides that will help, but many small to mid-size studios arguably don't have the legal teams behind them to help make these judgement calls. Plus the technology is moving quickly, so it's going to be difficult for studios to know what the requirements are to use this tech. So my proposals for protecting developers are actually pretty straightforward.
First of all, a developer should be able to easily and accurately determine the legal implications of using a specific AI technique, notably generative AI, and should have clear guidelines on the particular issues with a given technique or approach that could have implications down the line.
Secondly, extending on my last point, it should be made clear how a specific generative AI tool, such as GPT, DALL-E or Stable Diffusion, uses data, how its output is generated, and where the sources of that data are procured. This should also extend to third-party products that are built on these same models or license their platforms.
Lastly, there should be an up-to-date and clear guide on what AI is deemed acceptable by different platforms, storefronts and professional bodies.
None of these ideas are inherently complex or difficult, but they require effort to put into practice. One way I've previously argued for this is to have what is known as a traffic-light system, which dictates, based on colour coding, whether a given tool or approach is considered safe or otherwise in a number of different scenarios. So rather than having to conduct a lot of your own research just to figure out whether a given language model is going to be expensive or safe to use in your project, or whether the likes of Steam, Xbox or PlayStation will accept or reject your game, it is all made available in an easy-to-use format.
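As a rough illustration of how such a traffic-light system might be queried in practice, here is a minimal TypeScript sketch. The tool names, scenarios and ratings below are invented placeholders for the sake of the example; they are not actual guidance from any platform or professional body.

```typescript
// A minimal sketch of a traffic-light lookup for AI tools in game development.
// All entries below are invented placeholders, not real guidance.

type Light = "green" | "amber" | "red";

interface ToolGuidance {
  tool: string;                      // e.g. a named generative AI tool
  scenarios: Record<string, Light>;  // scenario -> traffic-light rating
  notes?: string;                    // any caveats attached to the entry
}

const guidance: ToolGuidance[] = [
  {
    tool: "ExampleImageGen",         // hypothetical generative image tool
    scenarios: {
      "internal-concept-art": "green",
      "shipping-assets-on-steam": "amber",
      "marketing-materials": "red",
    },
    notes: "Disclose training data provenance before shipping assets.",
  },
];

// Look up the rating for a tool in a given scenario, defaulting to "amber"
// (i.e. "do your own research") when no entry exists.
function lookup(tool: string, scenario: string): Light {
  const entry = guidance.find((g) => g.tool === tool);
  return entry?.scenarios[scenario] ?? "amber";
}

console.log(lookup("ExampleImageGen", "internal-concept-art")); // "green"
```

In practice the value would come less from the lookup itself than from having an industry body maintain and publish the underlying table, so that studios without legal teams aren't each re-deriving the same answers.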
Closing Comments
You may wonder why I'm making such a big fuss of this, but we're at a pivotal moment for the games industry as a whole with AI technology. For the past ten years, we've seen AI become increasingly pervasive beyond just use in enemies or director systems; it's being used in a myriad of aspects of game development. But only now are the risks becoming more prevalent, with players' actions and voices being captured for use in AI systems, and with creators whose materials may be used with impunity. Over the next few years, in every facet of society beyond just games, we're going to see how AI can start making change, both positive and negative. And it won't be long before AI makes some very ugly headlines in game development. Of that I am positive.
Regulation is often one of the first things to roll down the pipeline once the outcries become loud enough, so it's important we start to get ahead of it. Having the industry self-regulate to some extent is a smart way forward to mitigate the more oppressive forms of regulation that can ultimately impact safe and practical uses of AI in the industry.