Researchers Digest: Microsoft's Muse | AI and Games Newsletter 26/03/25
Let's dig into the research paper to find out more!
Welcome to the AI and Games Digest, the monthly edition of the newsletter where I discuss topics outside the scope of our regular issues. In addition to our regular announcements and news stories, each issue digs into books, articles, games, and videos that intersect in part, or in full, with the AI and Games remit, plus games I'm playing and answers to questions from the community.
You can read part of each digest issue for free, and then catch the rest with a paid subscription to AI and Games.
Greetings one and all, and welcome back to the newsletter. I'm sitting here nursing a cup of coffee as I try to get back into the swing of things post-GDC. It's slow going if I'm honest! But that's fine, we have another 50-ish weeks to prepare to do it all over again. While we will have a post-mortem next week, I wanted to use this month's digest issue to circle back to the conversation regarding Microsoft Muse. As mentioned in the newsletter a few weeks back, this project is about how to build generative models that can support ideation in game development. So we're going to dig into the research paper itself, and I'll break it down and discuss the work in as accessible a form as I can.
But first, some announcements and the news!
Follow AI and Games on: BlueSky | YouTube | LinkedIn | TikTok
Announcements
We don't have any big announcements this week. A lot of content work was paused for GDC, and while I have two big things to announce, it's a little too soon! But in the meantime, here's a delightful screenshot of one of several memes our new editor tortured me with while I was at GDC.
For context, in the upcoming YouTube episode of AI and Games we decided to roll out the old green screen and try something a little different.
Let me be clear that your sponsorship of this newsletter helps pay for these shenanigans. The more of you who support it, the sooner we can start turning this into a regular feature.
AI (and Games) in the News
A bunch of news stories broke during GDC. Some big, some small; let's quickly summarise them!
Another GDC, Another Roblox AI Update
Roblox continues to be the biggest gaming platform in the world that almost everybody over the age of 25 dismisses outright. However, it’s not just one of the most pervasive gaming platforms among ‘the youth’, it’s also one that continues to heavily invest in AI tools for its creators.
Last week saw the announcement of Roblox Cube, their AI workflow for 3D assets. They provided the teaser above, and then a deep dive into their plans on their blog post.
Nvidia Remembers GDC Exists
I always find it amusing how Nvidia largely ignore GDC and instead run their very own GTC (GPU Technology Conference) in the same week, just an hour down the road in San Jose. However, they did turn up at GDC with both what was arguably the smallest booth I'd ever seen (wish I'd got a photo; it was just large enough to fit a monitor playing a sizzle reel) as well as some announcements. Nothing we hadn't already heard earlier in the year, but in summary: their neural shaders now run in UE5 and DirectX, and more games now support DLSS 4. The coolest thing, though, was the announcement of a Half-Life 2 demo running on Nvidia's RTX pipeline. You can get it for free now if you own the game on Steam.
US Appeals Court Rejects AI-Generated Copyright Claim
Last week came news (via Reuters) that the US Court of Appeals for the District of Columbia has reasserted that the outputs of AI systems cannot be granted copyright protection.

While this is very much part of the conversation right now given the state of generative AI, this particular thread of legal discourse actually kicked off many years prior, courtesy of a gentleman by the name of Stephen Thaler. Thaler's case dates back as far as 2018, when he sought to register the image shown above with the copyright office. He calls this image "A Recent Entrance to Paradise", and it was generated by an AI system of his own invention, which he calls the 'Creativity Machine'. The battle has been ongoing: this latest update follows a final decision by the U.S. Copyright Office in March of 2023 that denied copyright, and then an appeal rejected by a federal judge in August of 2024.
So while it’s old news in many respects, it is very much having an impact on the discourse of today.
Adobe’s Firefly Getting Squished?
While I was focussed on GDC, last week also played host to the Adobe Summit in Las Vegas (I think I made the right choice). A small but rather interesting headline caught my eye, and certainly prompted some discussion in my circles, related to Firefly. For those unfamiliar, Firefly is Adobe's suite of tools and APIs for multimedia production, and it has seen a bunch of AI features emerge. Notably it carries image generators to let you bootstrap your work, and now they're adding more AI-related tools. But the thing that caught my eye wasn't the introduction of a video model, as shown below, but that they're allowing for custom models which they argue are 'safe for commercial use'.
The reason this caught my eye is that despite Adobe's insistence that their AI art generator is trained on their own images from Adobe Stock, as well as public domain assets, they've been dogged by allegations since its full launch last year, including the claim that it was trained using images generated by Midjourney. So it's stealing art, but with a degree of separation? I had multiple conversations with people in and around generative AI suggesting that by offering a means to train your own models that are safe for commercial use, Adobe are not only de-emphasising their own arguably copyright-infringing model, but kinda admitting to it in the process?
Generative AI Powered Sims-Like ‘InZOI’ Now the Most Wishlisted Game on Steam?
We've not talked about it much in the newsletter, but Krafton-published InZOI is arguably the first major title to bake generative AI into the product as a means for user-generated content. And it clearly has the attention of the gaming populace: research suggests it is currently the most wishlisted game on Steam ahead of its launch on March 28th.
InZOI takes much inspiration from The Sims: players manage a neighbourhood of little NPCs known as 'Zoi'. But critically it includes a variety of generative AI features for creating in-game objects, including the ability to generate textures from text prompts, and even to turn real-world objects into 3D models. There's also the ability to extract animations from video to give to your characters. Plus the game uses small language models (SLMs) for some of the in-game behaviour. All of this while sporting a more realistic graphics style courtesy of Unreal Engine 5 (big MetaHuman energy right here). You can see it all in the trailer below.
I can’t speak to this at this time as I haven’t played it yet, but it’s clearly caught the attention of a lot of players. Almost as if the trick to making generative AI work in games is to make games people want to play with it? How much the hype aligns with reality will become clear in the coming weeks.
New Ark DLC Gets an AI-Generated Trailer, Players Hate It, and Devs Try to Deny All Knowledge
I mean, that's the story in a nutshell really. We had to have at least one GDC-related AI moment that really pissed people off, but this… wasn't on my bingo card. Rather than some half-baked game getting the generative AI marketing push, instead it's the 'Aquatica' DLC for beloved survival game Ark: Survival Ascended (the 2023 remaster of the 2015 game Ark: Survival Evolved). The DLC isn't being made by main developer Studio Wildcard, but instead by Snail Games USA Colorado.
As reported by PC Gamer, not only do people really hate this trailer (and with good reason, it's awful), but now both Studio Wildcard (developers of the main game) and the developers of the DLC are distancing themselves from it.
Like, I don't even get why you would do this. You have an existing game that's been around for years, and you've built the DLC. No doubt the argument is that it's cheaper to make an AI-slop trailer than one showcasing the actual content. But this is now damage control, as the studio has to deny the use of AI tools in the actual game's development, not to mention that the brand itself is now tainted by this horrendous misfire.
Operative Games Comes Out of Stealth with ‘The Operative’
Reported on by GamesBeat, Operative Games came out of stealth at GDC with an announcement of a new AI-driven storytelling engine in the pursuit of new forms of interactive experiences.
Co-founded by Jon Snoddy, former head of R&D at Walt Disney Imagineering (the company's highly regarded research and development arm), Operative Games promises to "combine cutting-edge AI with unparalleled narrative design to bring stories and character experiences to life in an entirely new way." Their initial project is The Operative, a spy thriller in which you 'play' by interacting with characters from the narrative via your phone. You call or text these characters (via actual phone numbers) and they engage with you as a person would through this medium.
I completely missed this announcement (clearly I'm in the wrong circles, and I even attended a GamesBeat event at GDC on the future of AI), but right now it's largely fluff and nothing of substance. Like every one of these start-ups, its website is 90% hyperbole, a combination of bullshots and generative art, but it's worth keeping an eye on nonetheless.
Researchers Digest: Microsoft’s Muse
So for this month's digest, I figured let's do something I've been meaning to do for a little while now. While I work predominantly in the games industry, I have a long background in academia, and I spent much of that time reading research papers as part of my scholarly pursuits.
But reading an academic paper isn’t always easy, particularly if you’re not familiar with the subject area, or how academics write research papers! So I decided I should get back into a habit I’ve since abandoned and write up a high-level summary for you to enjoy either on its own, or to follow along with.
For our first issue of the Digest, let’s revisit Microsoft’s Muse project I spoke about a couple of weeks ago, and find out what the research was really all about.
A High-Level Summary
So what is this project about? Well let’s quickly summarise the key points:
It's a collaboration between Microsoft Research and Xbox studio Ninja Theory, exploring whether generative AI can be useful for ideation when building games.
The researchers conducted a study among game developers to learn about their processes, and critically what aspects of generative AI mean it is often a poor fit for creative ideation.
They found three aspects of generative models that make them a difficult fit for creative ideation processes:
They struggle to be consistent in their outputs.
They lack diversity in outputs from a single prompt.
Users have no easy way to make modifications to an output that then persist through future generations.
The researchers design a type of generative model called a WHAM (World and Human Action Model) that can simulate the behaviour of a video game. Critically, it can create diverse outputs while also maintaining consistent game logic for periods of up to two minutes.
It is built using a large amount of training data recorded from the Ninja Theory game Bleeding Edge, and thus generates outputs that look like Bleeding Edge.
They build multiple versions of the model at various sizes and resolutions.
Testing suggests that while far from ready in the grand scheme of things, the model is more stable than others seen in the space, and critically could support creative ideation by supporting changes to the game and simulating their outcome.
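To make the 'world model' idea in the summary above a little more concrete, here's a heavily simplified toy sketch of autoregressive rollout: a model repeatedly predicts the next controller action and the next game frame from the history so far, and a user edit to that history simply changes what everything afterwards is conditioned on (which is the 'persistency' property the paper cares about). Everything here — the class, the method names, the integer 'frames' — is hypothetical illustration on my part; the real WHAM is a transformer trained on tokenised image frames and controller inputs, not anything like this stub.

```python
# Toy "world model": the game state is just an integer frame id.
# A real WHAM predicts image tokens and controller inputs with a
# transformer; all names here are illustrative stand-ins.
class ToyWorldModel:
    def predict_action(self, context):
        # Pretend policy: alternate between two button presses.
        return "A" if len(context) % 2 == 0 else "B"

    def predict_frame(self, context, action):
        last_frame, _ = context[-1]
        # Pretend dynamics: pressing "A" advances the counter by 2.
        return last_frame + (2 if action == "A" else 1)


def rollout(model, context, steps):
    """Autoregressively extend a (frame, action) history."""
    context = list(context)  # don't mutate the caller's prompt
    for _ in range(steps):
        action = model.predict_action(context)
        frame = model.predict_frame(context, action)
        context.append((frame, action))
    return context


# The "prompt" is a short history; editing it (e.g. inserting a power-up
# into a frame) would change the conditioning for every later prediction.
start = [(0, "A")]
trace = rollout(ToyWorldModel(), start, steps=3)
```

The point of the sketch is the loop shape, not the model: generation is conditioned on the whole running history, so any change a designer makes to that history naturally persists into subsequent frames.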
So without further ado, let’s jump in!