AI and Games Plus is my second YouTube channel that is dedicated to expanding on topics from AI and Games in more detail. It’s made possible thanks to crowdfunding on Patreon as well as right here with paid subscriptions on Substack.
Support the show to have your name in video credits, contribute to future episode topics, watch content in early access and receive exclusive supporters-only content.
Last week Steam's owner, Valve, posted an update on its stance with respect to generative AI submissions being allowed on the store. This latest iteration of their policy regarding generative AI in games looks to remove much of the obfuscation in their original stance, taken back in June of 2023. The new approach will allow developers to submit games to the platform for review, provided they give appropriate disclosures on what generative AI techniques have been adopted.
In this issue of AI and Games, we're going to dig into the details of this update. How does it change the situation compared to Valve's previous position, and what real impact does it have on developers? Is Steam going to be flooded with nonsense generative AI asset flips? Or is it going to be business as usual?
From my reading of the situation, this is a double-edged sword: while it creates opportunities for game developers to start having their generative AI-infused titles accepted for Steam distribution, it in turn adds to the risks a developer needs to consider when moving forward with submission.
Let's get into the details...
Steam's Original Ban
The landscape surrounding generative AI in games is, as discussed in many of my other videos, a mixture of opportunity and peril, and for games being released using the technology it is currently fraught with challenges. Many companies are pushing hard to normalise the practices adopted by generative AI at a time when the industry, and legal frameworks, have not yet had a chance to adjust. In the wake of this, last summer Valve opted to take action.
Last year I discussed Steam's original position on the matter: they were making moves to block the vast majority of submissions that utilise the technology. Their post, published back in June of 2023, did not impose a hard ban on generative AI in games on the platform, but rather was an effort to curb many of the possible issues that could arise from its adoption. Games could only be submitted if developers could prove that their AI systems used training sets for which they held a license, and this extended to third-party systems.
For many smaller independent developers, it was this second aspect that proved to be their downfall. While the narrative surrounding generative AI is the democratisation of tools and knowledge in creative industries, in practice it often means creators signing up to licenses and paid plans to get access to AI tools such as GPT and Stable Diffusion. 99% of game developers don't have the skills or resources to train their own AI models, so they use third-party ones, many of which are still in legal battles over their data acquisition practices and cannot prove ownership of all the data used to train their systems. While contentious in the eyes of many, including their competitors at Epic Games, Valve's move to block these types of systems was - in my opinion - a smart one.
The legal landscape surrounding generative AI is so precarious right now that allowing these games onto the platform would leave Valve caught up in issues it doesn't want to have to deal with. Game developers being sued over assets created using a license-infringing generative AI system is a very real prospect.
But there's also a second aspect to this: Steam has a history of being a platform that is overstuffed with low-quality content - games that take existing assets such as code, models and textures and repackage them into simple, derivative titles with minimum effort. These 'asset flips' came to dominate submissions to the platform during the Steam Greenlight era of the early-to-mid 2010s. There is a big concern that AI-generated asset flips will simply add to that ongoing churn of low-quality shovelware being published on the platform, once again diluting the space and making it more challenging for developers working on more carefully and thoughtfully crafted titles.
So given this has been the stance since June of 2023, what does their most recent announcement mean in the grand scheme of things?
Digging Into The New Policy
Reading through Valve's post and reconciling it with their previous position, what I take from this is that Valve's stance on generative AI has not changed. Content that infringes on the rights of others, or content that cannot be guaranteed to have been crafted without the use of illegal or infringing datasets, is not welcome on the platform. But now, they've changed the rules so that you can at least try to get your games on the platform. Let's unpack it in more detail.
Steam's new policy breaks down AI content into two distinct categories: pre-generated and live-generated. The first is content built as part of the game prior to submission, while the latter is content crafted while the game is running. While they use the term 'AI', what they're really talking about is machine learning and generative AI models, given that in each of those instances the systems rely on existing data to build new content. This isn't made clear, but given the emphasis on machine learning and generative models, it means they're not considering the likes of procedural generation in roguelikes, or old-school director AI creating encounters based on game state, unless it runs on a machine learning model - which 99.9% of them do not.
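For developers unsure which bucket their game falls into, the distinction is really about when the model runs: at build time or at play time. Here's a rough sketch of the two cases - all the function names are hypothetical stand-ins, not any real engine or vendor API:

```python
# Hypothetical stand-ins for real AI tooling, purely for illustration.

def image_model(prompt: str) -> bytes:
    """Stand-in for an image-generation model used during development."""
    return b"fake-png-data"

def dialogue_model(prompt: str) -> str:
    """Stand-in for a text model called while the game is running."""
    return "Another round, is it?"

# Pre-generated: the model runs during development, and only its output
# ships with the game. Valve reviews this like any other asset.
def build_step() -> None:
    texture = image_model("weathered tavern wall, oil painting style")
    with open("tavern_wall.png", "wb") as f:
        f.write(texture)

# Live-generated: the model runs while the game is playing, so its output
# can't be reviewed up front - hence the extra guardrail disclosures.
def barkeep_reply(player_line: str) -> str:
    return dialogue_model(f"NPC barkeep responds to: {player_line}")

print(barkeep_reply("Tell me about this village."))
```

With that distinction in mind, let's look at each category in turn.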
Pre-Generated Content Restrictions
To read directly from their post regarding pre-generated content:
"Any kind of content (art/code/sound/etc) created with the help of AI tools during development. Under the Steam Distribution Agreement, you promise Valve that your game will not include illegal or infringing content, and that your game will be consistent with your marketing materials. In our pre-release review, we will evaluate the output of AI generated content in your game the same way we evaluate all non-AI content - including a check that your game meets those promises."
So there's a couple of interesting things to unpack here:
The first is that this is exactly the same stance they took in June, but by having you submit the game and agree to the Steam Distribution Agreement, it's easier for Valve to give guidance on how to pass their review process, to later drop and ban your game if it is found to be infringing, or to simply not allow it at all. They will still review it prior to release, and make their own decision prior to approval. This addresses a problem with the 2023 stance, which was incredibly vague: developers would often struggle to find out what grounds they had to address any issues Valve had with their title, given that prior to submission the rules were not clearly defined.
Secondly, this actually puts the developer in a precarious position, given Valve state later in the article that they will publish a developer's disclosure of how they satisfy the Steam Distribution Agreement on the game's Steam page. Meaning if a developer lies about the generative AI tools used or tries to obfuscate the situation, and a rights holder thinks they have been infringed, this provides ample material for a legal case to be raised against the developer. Interestingly, Valve also announced they will provide means for customers to flag infringing content, but only for live-generated content, which strikes me as the wrong move, given pre-generated assets could still slip through the platform holder's review process.
The third and final interesting aspect of this is the assessment of marketing materials: an increasingly common practice for low-quality titles on the likes of Steam and mobile platforms is to submit marketing materials - be they title graphics, screenshots or otherwise - that are of higher quality than the final product. This is of course misrepresentative of the actual product, and leaves the developer (and Steam) in a precarious position with respect to product liability. But it also implies that generated output needs to be consistent across the board: you can't have the screenshots show a handful of high-quality human-made assets while generative AI fills in the rest.
Live-Generated Content Restrictions
Meanwhile, let's take a look at the new policy for live-generated content, which is defined as:
"Any kind of content created with the help of AI tools while the game is running. In addition to following the same rules as Pre-Generated AI content, this comes with an additional requirement: in the Content Survey, you'll need to tell us what kind of guardrails you're putting on your AI to ensure it's not generating illegal content."
So this is again interesting for multiple reasons. It has to follow all of the same rules as pre-generated content, so everything I've already mentioned - be it Valve blocking at the review stage, developer liability for lying about data sources, or marketing that obfuscates quality - still applies. But there are two additional concerns.
The first is that you now need to show proof of how your system will guardrail future content. In the previous category, all the code, images, sounds and text are pre-generated. Now we're talking about content generated on the fly. While it will apply to other problem areas, this is really aimed at language generation models like GPT. If you have a language model generating text on the fly, how can you guarantee to Valve it will play ball, in a way that respects the terms of the Steam Distribution Agreement? This is even more pertinent if you submit the game with an age rating from the likes of PEGI in Europe or the ESRB in North America.
This is a question I've asked of developers of GPT-like platforms such as Convai and Inworld when interviewing them for videos over on my main channel. In each instance, they're working on guardrails to ensure content is appropriate, so in those cases you can refer to their tools and APIs. But if you're using other systems, you need to find out what support, if any, you have to ensure content is appropriate. Note that Valve's definition of guardrailing is deliberately vague: it could relate not just to the use of curse words and other inappropriate language, but also to copyright infringement. Asking a character to recite or summarise portions of a Stephen King novel, for example, in a game that has not received the author's permission, could run the risk of infringement.
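To make that concrete, here's a minimal sketch of what a runtime guardrail might look like around LLM-driven dialogue. Everything here is illustrative: generate_dialogue stands in for whatever text-generation service a game actually calls, and the blocklist check is a crude placeholder for the far more robust filtering that platforms like Convai and Inworld build into their tools.

```python
# Minimal, illustrative guardrail around live-generated dialogue.
# All names are hypothetical stand-ins, not a real vendor API.

BLOCKLIST = {"damn", "hell"}  # placeholder for a real profanity/IP filter

def generate_dialogue(prompt: str) -> str:
    """Stand-in for a call out to a text-generation model or service."""
    return "Well met, traveller! The old mill lies to the east."

def violates_guardrails(text: str) -> bool:
    """Naive check standing in for proper moderation: profanity,
    verbatim copyrighted passages, age-rating violations, and so on."""
    lowered = text.lower()
    if any(word in lowered for word in BLOCKLIST):
        return True
    # A real system would also run the text through a trained
    # moderation model before letting it anywhere near the player.
    return False

def npc_say(prompt: str, fallback: str = "Hmm. Let me think on that.") -> str:
    """Only surface model output to the player if it passes the checks."""
    line = generate_dialogue(prompt)
    return line if not violates_guardrails(line) else fallback

print(npc_say("Greet the player and point them towards the quest."))
```

For a Steam submission, the specific checks matter less than being able to describe this layer in the Content Survey: model output never reaches the player without passing through filtering you control, with a safe fallback when it fails.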
The second aspect, stated further down in Valve's post, is that there are limits on what live-generated content is permissible. Right now there is still a ban on anything considered adult-only or sexual in nature. This is understandable, given it adds a whole new realm of legal liability for Valve and the developers themselves. While adult-only content is permitted on Steam, it has its own very long list of requirements it needs to satisfy - most notably ensuring games do not use sexually explicit images of real people or of minors, and are not designed to be offensive or to violate laws in specific countries.
My Take
Looking at what Valve has presented, and unpacking the implications of it, I'm rather frustrated by this approach, given it plays on tried and true tactics that Valve has adopted in the past. This new approach is still very much in line with their existing policies, but it introduces risks and burdens both to developers and to the wider gaming community.
Rules Are Not Explicit Enough
I'll start with what is to me the most glaring issue: it's neither strict nor explicit enough. While it's important there is flexibility, such that games that adopt generative AI in legal and ethical ways can be considered for publication on the platform, I don't think this does enough to stop bad actors from trying their luck at submitting their game and hoping it gets through, and people will still fall afoul of the rules without even realising it. If you're not familiar with the state of the art in the field, would you know that using a GPT plugin currently runs foul of these rules? As they're currently described, I would interpret the use of GPT in your game at runtime as breaking them, given there is no guarantee of licensed training data, nor can you guarantee the guardrails are going to be appropriate.

So why isn't Valve more explicit? A big reason is no doubt the blowback they'd get if they listed and itemised how many generative AI tools are effectively banned by the policy - and in truth the answer is a lot more than you'd think. But more practically, it means they'd need a team of people monitoring the situation and continually updating the guidelines. I still think that needs to be done, because the situation - both in legal circles and in Valve's policy - is still sufficiently unclear.
The Steam Review Process
The second issue is one I just alluded to: monitoring and review. The policy as it stands requires all games to be submitted for review by the team at Valve. Now this has always been the case for games released on the platform, but as I've discussed previously, Steam's review processes tend to be pretty lacking. A lot of questionable low-quality content makes its way onto Steam, with quite often hundreds of games released on the platform each week. Naturally mistakes will be made and some stuff will sneak through, but there's also a very real risk that they simply don't enforce their standards effectively enough.
Labour of Policy Enforcement
While the ability for users to flag content they feel violates the terms is a welcome one, I also worry that the labour of enforcing the policy will increasingly be pushed onto the Steam userbase, with end users doing the bulk of the work in catching issues within a specific game - much like how Valve previously relied on Greenlight to filter potential new games for the platform by getting their userbase to do it for them.
Relying on regular users to enforce policy is a dangerous route to take and one that is easily prone to exploitation or stunt tactics - as we saw previously with how Greenlight operated. I'm not saying that's how it will go down, but given their track record I'm not feeling too enthusiastic about it.
Let Users Flag Pre-Generated Content
Finally, I think that only allowing users to report live-generated content and not pre-generated content is a mistake, given it actively prevents one of the most common issues with generative AI from being addressed: a creator may recognise their work has been plagiarised in a game as part of the textures and assets shipped with the title. It's naive to assume that Valve's review is going to catch every incident like this, so not allowing people to submit a complaint about exactly this type of issue is baffling to me. Yes, there is a risk it can be exploited, and yes, some people will abuse it, but it strikes me as such a weird step to not permit this. Valve themselves admit that right now this is a - and I quote - "legally murky space of AI technology", and considering how much of this is built to allow them to exert their influence on how generative AI makes its way into games, it's baffling that they then leave open this opportunity for people to potentially sneak past their rules without repercussion.
No doubt this isn't the last of their efforts in bringing in tools and systems to address this issue. You can bet as the story develops, I'll be keeping an eye on it over here on AI and Games Plus.
Closing
Is this the change that allows AI-generated asset flips to pour into Valve's ecosystem? Yes and no. The door has cracked open a little bit more, as I expected it would over time. But that gap still has its challenges to navigate, and while I don't think we're going to see a massive influx of low-quality titles making it onto Steam as a result, it will most certainly incentivise the most ardent of generative AI evangelists to push harder for their games to get in. I don't feel Steam's efforts here are as stringent as they ought to be, and I don't think they absolve Valve of the potential legal issues that could arise in the event one of these games breaks through onto the platform. But it shows an effort to moderate what technologies appear on the platform, and to limit their own liability if and when problems happen.