LLMs Try Their Hand at Phoenix Wright | AI and Games Newsletter 23/04/25
Plus AI bots steal your pals names in Marvel Rivals, AI is toxic to your brand, and the Sponsor Update too!
Our monthly sponsor issue of the newsletter provides what you've come to expect every week from AI and Games: a summary of the big talking points in AI in the games industry, plus announcements of relevant events, books, courses, and more. However, the main segment of this issue gives our paying supporters a deep-dive into what's coming up across the board: from future newsletter topics to YouTube episodes, new projects, our conference planning, and much more.
You can read part of each sponsor issue for free, and then catch the rest with a paid subscription to AI and Games.
Happy Wednesday and welcome to this month's sponsor update on the newsletter. I hope you've all successfully polished off those Easter eggs and are ready to get back on the summer bod treadmill - both literally and figuratively! If you're not familiar, while we do the regular updates on all things regarding AI in the games industry, we also have a special segment for our paying subscribers: an update on future newsletter topics and upcoming YouTube content, as well as news on things like the AI and Games Conference and Goal State. But hey, let's not get ahead of ourselves - what else can we expect this week?
Research has shown that aligning your brand with AI is as toxic as Elon Musk.
Your offline Marvel Rivals friends are being controlled by bots?
Phoenix Wright: Ace Attorney is being used as an AI testbed.
But first, some announcements!
Follow AI and Games on: BlueSky | YouTube | LinkedIn | TikTok
Announcements
Some quick announcements of all things AI and Games related!
Heading Over to Dev/Games in June
As mentioned recently, I'll be flying over to Dev/Games in Rome for the event running June 5th - 6th. It boasts a fantastic line-up of speakers, and it's available to watch online via streaming (ticketed). I'll be talking about the current state of game AI technologies, and where the industry is currently focussed and lacking. I'm so committed to this, I am willing to let my Nintendo Switch 2 pre-order sit at home so I can participate. Very much looking forward to it.
AI (and Games) in the News
Alrighty let’s round up some of the more interesting headlines of the past week or so.
Embracing AI as Damaging to Brands as Elon Musk?
Earlier this week came the story, courtesy of The Guardian, that a brand reputation survey conducted by the Global Risk Advisory Council - a group of 100 ‘reputation leaders’ who discuss issues in various areas - has identified the top 10 subjects most likely to damage brand reputation for businesses, with artificial intelligence in the top spot, and the world's most overhyped Path of Exile 2 player not far behind it.
The full report goes into much more detail on these and other issues, including backtracking on diversity, equity, and inclusion (DEI) commitments (#3), IP/copyright infringement (#7), and even being involved in the US opioid crisis (#10). But I wanted to pull out a choice quote on the AI issue:
Stories that feature AI misuse, the harmful or deceptive use of artificial intelligence, including creating deepfakes, misinformation, biased decision-making or unethical applications that cause harm or manipulate public perception, were viewed by the Global Risk Advisory Council as the most severe risk – e.g. the risk most likely to gain negative online news attention. 20% of respondents rated AI misuse a 10 (on a scale of 1-10, 10 being the most severe risk) and the impact as profound. According to one Council member, “AI, if not understood or managed in companies, can have an incredible trickle-down effect that may not be reversible.”
Companies face a variety of reputational risks related to AI misuse, including addressing misinformation about brands, “regulatory whiplash” as governments take different paths to safety, and possible allegations of bias when the technology is utilized to inform decision-making. What’s more, this risk is expected to stick around and potentially increase in the years to come. From one Council expert: “Organizations need to invest in AI policies in the way they do all other operational policies.”
The issues of brand reputation, safe use (and mitigating bias), and establishing safe and practical policy are topics I've raised at length both in speaking engagements and in my own consultancy work with companies. But the thing they don't discuss here is that, in my opinion, simply aligning yourself with AI without anything meaningful or substantial to show for it is the real long-term risk. As mentioned above, the risk is expected to stick around or increase in the years to come, and that will be the case given so many companies want to show they're willing to embrace AI (i.e. appeal to investors) and often do little with it, or show really poor implementation.
I've talked about this at length here in the newsletter this past year, both with how public perception is the true litmus test for companies trying to advocate for AI use, and with how their repeated proclamations with nothing of substance to back them up are doing them more harm than good.
Meta Given Legal Permission to Train AI on Facebook/Instagram Posts in EU
Meta, the owners of Facebook, Instagram, Threads, and the Quest VR headset, have been given approval by the EU to train AI systems using posts by their European users. As discussed in this post by GuruFocus, the previous request had been blocked by a number of EU nations over privacy concerns - something the EU takes very seriously in the context of online activity. However, Meta is now permitted to use data published publicly by adults. So in essence, any Facebook/Threads/Instagram post by an adult is now by default being scraped for AI training (you can opt out in your settings).
While Meta AI was launched in the US last year, it’s only now coming into force in the EU given these previous objections. Unlike Meta’s previous efforts to train AI by stealing the work of others (see the LibGen controversy that is now going to court), at least this time they can claim some semblance of ownership of the data.
Cyberpunk 2077 is the First Confirmed DLSS Enabled Switch 2 Game
After discussing it a couple of weeks back in the newsletter, CD Projekt's Cyberpunk 2077 is the first Switch 2 game confirmed not only to use the AI upscaling technique DLSS, but to require it at all times. As reported by Eurogamer via Digital Foundry, the game uses it for both the performance and quality modes, with resolutions varying between 1080p and 720p in docked versus handheld.
None of this is surprising, given that Cyberpunk 2077 is something of a resource-hungry beast, and I'm pretty sure playing it on my Steam Deck in the winter could cut down on my heating bills. But as I discussed a couple of weeks back, this is significant because now the only way to play that game on Nintendo's device is with DLSS enabled, and I'm curious to see the reactions to that after launch.
Marvel Rivals Bots Now Using Your Friends' Names?
A funny story caught my eye over at VideoGamer: NetEase's popular online shooter based in the Marvel Universe has been using names from your friends list to add some spice to the in-game bots.
The long and short of it is that when you suffer a losing streak, the game starts putting more and more AI-controlled bots into the match to help you git gud again. Previously they would use either stock names or some random identifier, but recently players have observed that the bots now use the names of people on your friends list - provided they are offline.
I mean, I think it's a pretty silly idea, and really the filter should run in the inverse - use the names of people not on your friends list - but it did remind me (and VideoGamer) that Forza's Drivatar bots do the same thing. Though Forza does that when you're playing the game in offline mode - a fairly critical distinction!
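For the curious, here's a minimal Python sketch of the two filtering approaches - entirely hypothetical, and nothing to do with NetEase's actual implementation; the function and parameter names are my own invention for illustration:

```python
import random

def pick_bot_names(friends, online, pool, count, use_friend_names=True):
    """Pick display names for AI bots.

    friends: the player's friends list
    online: set of friends currently online
    pool: stock bot names to fall back on
    """
    if use_friend_names:
        # The reported Marvel Rivals behaviour: offline friends' names are fair game.
        candidates = [f for f in friends if f not in online]
    else:
        # The inverse filter suggested above: only names NOT on your friends list.
        candidates = [name for name in pool if name not in friends]
    if len(candidates) < count:
        candidates = list(pool)  # not enough candidates, fall back to the stock pool
    return random.sample(candidates, count)
```

With friends `["Alice", "Bob"]` and only Alice online, the default mode can only hand out "Bob" - which is exactly the behaviour players spotted.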
Phoenix Wright is a New AI Benchmark - and the Devs Seem Rather Bemused
Last but not least, a new story broke last week: Hao AI Lab - a research lab associated with the University of California San Diego (UCSD) - has published work analysing the performance of Large Language Models (LLMs) on complex reasoning skills. Their approach? Having them play Capcom's popular 2001 detective game Phoenix Wright: Ace Attorney.
The results, as detailed on Twitter - a social media platform whose owner has helped damage its brand - show that OpenAI's o1 model performs better than the likes of Google's Gemini, Anthropic's Claude, and even GPT-4. While o1 appears to perform best, none of them are ultimately that good at playing the game.
I mean, the idea of using games to gauge the effectiveness of AI (see my story re: Pokémon from a few months back) is nothing new. But the funny part of this story was - as reported by Automaton Media - that the developers themselves were rather bemused that the AI seems to struggle with what they consider to be the easiest part of the game.
To quote the automated translations of social media posts from Masakazu Sugimori, who worked on the game as sound designer and voice artist:
How should I put this, I never thought the game I worked on so desperately 25 years ago would come to be used in this way, and overseas at that (laughs).
That said, I find it interesting how the AI models get stumped in the first episode. [Shu] Takumi and [Shinji] Mikami were very particular about the difficulty level of Episode 1 – it's supposed to be simple for a human. Maybe this kind of deductive power is the strength of humans?…
“The reason why Takumi and Mikami were so particular about balancing the difficulty level of Ace Attorney’s first episode was because ‘there was no other game like it in the world at the time.’ It had to be a difficulty that would be acceptable to a wide playerbase, but it had to avoid being insultingly simple too. They were going for the kind of difficulty that gives you a sense of satisfaction when the solution hits you.”
I think we ought to summarise all of these ‘LLM plays game’ topics into some sort of bumper case study issue somewhere down the line.
The Sponsor Update
With the newsletter portion out of the way, it's time for me to get into the weeds on what to expect in the coming months across the various axes of AI and Games. But before we cross the paywall, I wanted to take a moment to highlight that our work across Substack and YouTube is audience funded. We're slowly seeing more and more people sign up to support the newsletter and our broader content, and that's a great signifier to us that we're doing something of value and that our audience is enjoying it.
One way of giving back to our backers is a bit of transparency. And so beyond the paywall is an update on everything happening behind the scenes, including:
A little update on how the broader AI and Games business is going.
Progress on this year's AI and Games Conference organisation.
A timeline on video content over the next 3-5 months.
Plans for future newsletter topics.
Plus some Goal State progress to boot!