Switch 2 and Consumer-Ready AI Products | AI and Games Newsletter 09/04/25
A small but significant development is happening with the Switch 2
The AI and Games Newsletter brings concise and informative discussion on artificial intelligence for video games each and every week, plus a summary of all our content released across various channels, like our YouTube videos and in-person events such as the AI and Games Conference.
You can subscribe to and support AI and Games on Substack, with weekly editions appearing in your inbox. The newsletter is also shared with our audience on LinkedIn. If you’d like to sponsor this newsletter, and get your name in front of our audience of over 6000 readers, please visit our sponsorship enquiries page.
Greetings one and all, and welcome to the newsletter. This week I figured let's jump on the internet bandwagon and have a chat about the recent deep dive into the Nintendo Switch 2. Sure, there's a lot to talk about right now, be it the games, the new Joy-Cons, that new chat app, or the economic implosion that might mean the device gets a price increase in the US before it even launches, but I wanted to talk about a smaller technical element: how Nintendo is normalising AI rendering through the Switch 2 tech stack. Stick with us, and I'll explain all!
Follow AI and Games on: BlueSky | YouTube | LinkedIn | TikTok
Announcements
Before we get into the weeds of news and all this Switch 2 chat, let’s cover some announcements!
Thanks to Nordic Game Jam!
I wanted to kick things off this week with a thank you to the team at Nordic Game Jam for having me along last week. I had a fantastic time, and it was great to spend time with people while I was there! I sadly had to head back to the UK before the jam ended, but I hope everyone's projects made it over the finishing line in one piece!

I’m a Speaker at the /dev/games Conference in Rome this June!
With the first speaking engagement of the year now behind me, it’s time to announce the next one we have lined up! I’m really excited to be heading out to /dev/games - the Italian conference on game development - which is running in Rome on June 5th and 6th. Tickets are on sale now with the first set of talks now live on the site. Meanwhile the event will also be streaming live on the internet for those who cannot make it down to this beautiful city. Much like my recent Nordic Game Jam talk, we’re focussing more on traditional game AI and I’ll be discussing the current trends and issues facing classic NPC design today.
AI (and Games) in the News
A lot of things have happened in the news recently, and the recent Nintendo Switch 2 announcement is the big story this week. But first, let’s cover what else is going on.
The UK Copyright Debate is Ongoing, and Nobody is Happy
After discussing the topic back in January, the situation with the UK’s attempts to rework its copyright laws in an age of generative AI is proving to be an ongoing headache.
As discussed back in January, the UK government opened a consultation on how to approach copyright law in an age of modern AI. However, they showed their hand somewhat prematurely, signalling their intent to provide an 'opt-out' approach for businesses, while ensuring the bulk of AI companies get to exploit the wholesale scraping of UK copyrightable assets for commercial purposes.
This of course went down like a lead balloon, with the UK Intellectual Property Office (IPO) receiving over 13,000 responses to the consultation (this link says 13k, others have reported 11k, hard to know for sure). I'm willing to guess that a fair chunk of those were not in favour of this regulation. At the time, the big AI companies the UK was courting were then asked to respond to the suggestions made in the government's action plan.
As reported by Politico, both Google and OpenAI have responded and simply said it was not enough. You can read OpenAI's response in full here, which is absolutely delightful given they don't just reject the idea, but give their own suggestions which - let's face it - sound utterly abysmal.
Oh, and in amongst all this, the Tony Blair Institute for Global Change piped up on the subject as well.
Ugh… this is going to be next week's newsletter topic, isn't it?
Northern Ireland’s Adoption of the EU AI Act May Influence the UK Approach
This isn’t necessarily news - or it’s news at least in the sense that it only just occurred to me this past week - but as reported in the Financial Times earlier this year, one aspect of the UK’s evolving approach to AI is that Northern Ireland would have to comply with the EU AI Act.
For context, the reason for this is down to NI's relationship with the EU; in short, the Northern Ireland Protocol and its extension the Windsor Framework are legal agreements that dictate the relationship between the country and the European Union, and how it differs from the rest of the UK. This all stems from the existing Good Friday Agreement's guarantee that no border exists between Northern Ireland and the Republic. It was one of the most contentious aspects of the UK's exit from the European Union.
This has since resurfaced in more recent news filings, but the crux of it is that Northern Ireland - being technically a borderless annex of the EU, and a country within the UK - will have to respect and implement both EU and UK regulations on AI. How that turns out in practice is another matter entirely, but it does suggest at minimum that whatever UK regulation that is crafted will have to be compatible and aligned with what the EU has already done.
Microsoft Announces a New WHAM Model of Quake II - Same Issues, Different Game
In the past few weeks you may have got fed up with me talking about the recent trend of AI models that simulate the behaviour of games. This came to a head with the announcement of the Microsoft 'Muse' research project, which was behind the creation of the World and Human Action Model (WHAM): a generative AI model that was demonstrated by reproducing the now defunct online shooter Bleeding Edge. I gave a high-level overview in the newsletter at the end of February, plus last month's digest was a deep-dive into the actual research paper published by Microsoft Research.
Well, Microsoft have since released a new WHAM model of Quake II, which you can play in a browser. This demo is very similar to the Oasis project - which simulates Minecraft - shown in my recent YouTube episode on the subject of 'neural game engines'. It's interesting for sure, but outside of being a curio it's not that fun to play. As discussed in the video, these models easily lose context once you do something that makes it difficult for the model to work out where you are in the game world from the visual frames alone. Easiest way to do this? Just look at the floor for a couple of seconds, then move and look up.
Catch my video if you haven’t already for more on this subject.
Palestine BDS Committee Launches the #BoycottXbox Movement
But it's not all sunshine and rainbows for the Xbox brand. Last week the Palestinian BDS National Committee (BNC), the body that coordinates boycott, divestment and sanctions (BDS) campaigns against businesses, launched a new boycott movement targeting Microsoft, and specifically the Xbox brand.
As noted in the above social media post, the campaign is targeting Xbox because it is a significant consumer-grade money maker for the company. But the real reason for targeting the corporation is reporting earlier this year that highlights how Microsoft services, including the Azure and OpenAI suite of AI tools, have been offered to and adopted by the Israeli Defence Forces (IDF) in their ongoing occupation and military action against Palestine. This highlights the uncomfortable reality that many AI technologies applied in production and entertainment capacities can equally be deployed in warfare - a point further reinforced by a pro-Palestine protest by a Microsoft employee at the company's recent 50 year anniversary celebration.
“[He] can’t even write a fucking email without using Chat GPT...”
With this ongoing push for AI tooling, I’ve certainly heard my fair share of groan-inducing stories of companies struggling as management seek to embrace the AI hype without really knowing what they’re getting into and making everyone’s lives miserable (btw, I literally consult with studios to stop this from happening).
In a new article posted on Aftermath, Luke Plunkett does a great job summarising some pretty rough experiences of developers on the ground, caught in the storm of management and other senior individuals hoping to capitalise on AI tools and the hype. Sadly all of these stories sound painfully familiar to me: management investing heavily in AI while failing to understand the issues with the technology as it stands, a lack of collective agreement in the studio, and short-term gains from faster early-phase work with AI tools leading to bigger issues down the line. Well worth a read.
Nintendo and the Path Towards ‘Consumer-Ready’ AI
A few weeks back I was having dinner with George, and we were talking - as we often do - about the intricacies of the games industry. Specifically, we found ourselves talking about artificial intelligence in video games, and how it's going to be a subject in his upcoming book! As you can imagine, this is a good fit for me and my interests! Naturally, there's a lot to talk about in the current climate, but I posited that the soon-to-be-announced Nintendo Switch 2 is arguably one of the most important moments in the history of AI within the industry, if - as was rumoured - the Switch was adopting a custom NVIDIA chipset that would enable Deep Learning Super Sampling (DLSS) rendering on the device - a form of AI-driven upscaling of in-game graphics.
Wait, what…? Why is the one thing that Nintendo themselves didn't announce such an important moment? No, this isn't me buying into the hype, nor is it me trying to turn every games industry story into a lesson on the state of AI in the industry. Rather, it speaks to the maturation of the technology, and the big N themselves seeing this as an important step forward. Let's dig in!
Recap: Switch 2 is Coming in June!
Of course last week the big story was the long awaited deep dive into the Nintendo Switch 2. While the Direct on April 2nd gave everyone an insight into the device itself, first party titles such as Mario Kart World and Donkey Kong Bananza, a variety of 1st and 3rd party titles, and the launch date of June 5th, the thing I was really curious about was the technical specifications of the device.
Sure, they mentioned some key things for the regular user in the Direct, like the ability to render at up to 4K resolution at 60fps when docked, and even 120fps at 1080p and 1440p when in handheld mode, but all the really sweaty technical details were instead published in a separate technical document on the Nintendo website. But even that let me down! While it tells us that the Switch 2 weighs 1.18lbs (540g), has a battery life of 2-6.5 hours, and runs a CTIA standard audio jack (thank god), the document omitted the one critical element: the makeup of the processor (CPU) and graphics processor (GPU) in the chipset, stating only that it is a "custom processor made by NVIDIA."
But NVIDIA clearly couldn't keep it to themselves, and on April 3rd decided to make some noise on their own website. They didn't get into the nitty gritty, but they confirmed the thing I'd been waiting to hear: the Switch 2 does in fact rely on a version of NVIDIA's GPUs that carry Tensor Cores, and is therefore capable of AI-powered neural rendering. To quote the article:
The Nintendo Switch 2, unveiled April 2, takes performance to the next level, powered by a custom NVIDIA processor featuring an NVIDIA GPU with dedicated RT Cores and Tensor Cores for stunning visuals and AI-driven enhancements.
With 1,000 engineer-years of effort across every element — from system and chip design to a custom GPU, APIs and world-class development tools — the Nintendo Switch 2 brings major upgrades.
The new console enables up to 4K gaming in TV mode and up to 120 FPS at 1080p in handheld mode. Nintendo Switch 2 also supports HDR, and AI upscaling to sharpen visuals and smooth gameplay.
Hang On, What is Neural Rendering?
Neural rendering is the term given to the use of AI - specifically trained generative AI models - in areas of graphical rendering. Historically in games we have relied on rasterisation as the means to convert a collection of objects into a single image on screen. This has gradually become more complex in 3D games as we attempt to model the effects of lighting in a scene and other visual elements. Recent years have seen the rise of ray tracing as a more expressive albeit expensive alternative. Ray tracing is an idea in graphics that seeks to mimic some of the real-world properties of lighting by measuring the intensity and colour of light as it hits objects in a scene. The choice between the two is a trade-off between computational effort and graphical fidelity: ray tracing in games looks fantastic when done correctly, but it requires a beefy GPU behind it to run at high framerates. Hence we've seen ray tracing become increasingly popular on PC and in some console games.
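To make the expense concrete, here's a toy sketch (my own illustration, not any production renderer) of the core primitive ray tracing is built on: testing whether a ray fired from the camera hits an object, in this case a sphere. A real renderer fires millions of these tests per frame, against far more complex geometry, which is why the technique demands so much of the GPU.

```python
import math

def ray_hits_sphere(origin, direction, centre, radius):
    """Return the distance along the ray to the first sphere hit, or None on a miss."""
    # Solve |origin + t*direction - centre|^2 = radius^2 for t (a quadratic in t).
    ox, oy, oz = (origin[i] - centre[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearest of the two intersection points
    return t if t >= 0 else None  # ignore hits behind the camera

# A ray fired straight down the z-axis hits a unit sphere centred 5 units away.
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

A full renderer would then bounce secondary rays from that hit point towards lights and other surfaces, which is where the cost really explodes.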

Regardless of the rendering model, the important thing is whether the device in question is capable of rendering the game while ensuring it looks as smooth and readable as possible, and that's becoming increasingly hard to achieve. Ray tracing has existed as a technology since the 1960s, but it's only in the past few years that hardware has become powerful enough to make it happen in real time. Hence back in 2019 NVIDIA announced Deep Learning Super Sampling, or DLSS: an approach that takes existing AI research in image upscaling and applies it to graphical rendering - something only attainable courtesy of the AI-processing Tensor Cores on NVIDIA's graphics cards. The principle is that rather than running a game natively at a 4K resolution like 2160p, you instead render it at 1080p and then use DLSS to upscale it quickly. This is all much 'easier' for the GPU to do, and the technology has matured to a point where the quality gap between this and rendering natively at 4K is closing.
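To put the 'easier for the GPU' claim in rough numbers, here's a quick back-of-the-envelope calculation (an illustration of the pixel arithmetic only, not NVIDIA's actual pipeline) comparing the shading workload of a native 4K frame against the 1080p internal render that DLSS then upscales:

```python
# Rough illustration: how much per-frame shading work the GPU saves by
# rendering internally at 1080p and letting DLSS upscale to 4K.

def pixel_count(width: int, height: int) -> int:
    """Total pixels the GPU must shade per frame at a given resolution."""
    return width * height

native_4k = pixel_count(3840, 2160)  # 8,294,400 pixels per frame
internal = pixel_count(1920, 1080)   # 2,073,600 pixels per frame

print(f"Native 4K render:      {native_4k:,} pixels/frame")
print(f"1080p internal render: {internal:,} pixels/frame")
print(f"Shading work reduced by a factor of {native_4k / internal:.0f}x")  # → 4x
```

Of course the upscaling pass itself isn't free - it occupies the Tensor Cores - but shading a quarter of the pixels is where the performance headroom comes from.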
DLSS has since become a core component of many modern games seeking to achieve high-resolution, high-frame-rate performance while also rendering with ray tracing. It's not perfect - it's prone to visual artifacts and can fail to render clean edges in some instances - but it has improved drastically since 2019. At the time of writing, NVIDIA has updated the DLSS pipeline to version 4, which is significantly more sophisticated than the original model. You can find out more about the evolution of the technology in my YouTube video from 2022 explaining DLSS versions 1 through 3.
Why is This Significant?
So you might be thinking: why is any of this relevant? Neural rendering technologies like DLSS have existed for several years now, and anyone with a modern NVIDIA GPU has been able to utilise them for some time.
The reason I bring this up, and equally the crux of my conversation with George, is that video game consoles are, in my opinion, one of the best indicators of which technologies are considered the most stable, affordable and effective, given they're aimed at mass-market consumption. While PC and other areas such as VR/AR headsets play host to a variety of more experimental hardware, games consoles are sold as convenient devices that are ready to be deployed in bedrooms and family lounges, with the expectation that you hook them up to the TV and you're off to the races.
As such, the need for these technologies to be reliable, consistent, and stable is paramount. The demographics who rely on games consoles are typically less technology-savvy than the folks over in the PC space: parents purchasing consoles for their children, adults who simply don't have the time or interest to learn how to configure their gaming devices for the highest graphical fidelity, grandparents who will play Wii Sports in perpetuity (remember that?). The box should be plugged in, turned on, and just work - except nowadays it'll spend 30 minutes updating, no doubt!
This isn't to say that every new technology drop from games companies sticks the landing. The Sega Dreamcast was an internet-enabled console that, while great, was simply too early for its time - both audiences and developers lacked the infrastructure and knowledge to make the most of this 'online gaming' concept. The Xbox Kinect wasn't ready either: it was too unreliable - it seldom worked in the households of customers given the limitations of the technology - and a poor fit for the install base that Microsoft had established. Meanwhile Sony's ventures, ranging from the PlayStation Portable (PSP) and the PlayStation Vita to more recently the PSVR, highlight that even when technology is 'consumer ready', there needs to be an audience willing to invest, and equally you need to invest in cultivating that audience (the PSVR2 was practically abandoned about 3 months before it even launched). Particularly when what you're selling is something new and unknown, you need to be aware of the risks involved and plan to overcome them.
No company embodies this philosophy more than Nintendo. I've spoken of this in a previous newsletter, but the Japanese giant is at its heart - both philosophically and historically - a toy company, and so they don't just want to ensure that their products are consumer friendly, but equally that they're a good fit for their target demographics of children (and their parents). Nintendo are often criticised for reinventing the wheel when it comes to how certain features work in their products. A common criticism has been the bizarre ways in which they have adopted online communication and social systems on their devices. The announcement of the Switch 2's chat functions is a significant evolution on their part in allowing users to play together, building their own voice, video, and streaming chat that is integrated into the console UI itself.
While the reception to this has been mixed across the board, with PC Gamer in particular referring to it as “Discord but worse”, it highlights that for Nintendo the best way to deliver these features is to create their own dedicated version that is accessible to families - even if the application is often rather esoteric.
But these applications and products are derived from existing and established technologies, be it hardware or software. PC Gamer's dig, while accurate, is emblematic of how Nintendo often reinvent the wheel such that it fits their vision and philosophy. Meanwhile the consoles themselves are typically built from existing components and seldom move into the territory of new and expensive hardware research. After all, Nintendo want them to be easy to build at scale, and with manufacturing costs that mean they're not making a loss on every unit sold. Console families such as the PlayStation and Xbox are often sold at launch as loss leaders: Sony and Microsoft hope to turn a profit by having you buy more games (i.e. increasing the 'attach rate') or signing up for subscription services like PlayStation Plus and Xbox Game Pass. But while Nintendo seeks the same thing, they often try to ensure they're making a profit on their hardware. Both the Nintendo Wii - a device once disparagingly referred to as "nothing more than two GameCubes stuck together with duct tape" - and the Nintendo Switch were sold at a profit during their launch windows.
The company doesn't always get it right (Wii U, anyone?), but it is why Nintendo continues to be one of the driving forces of the industry: a company that is willing to experiment and try something new, but will always do so using established technologies, such that the user experience is safe, convenient, and aligned with their goals.

DLSS Is Now ‘Consumer-Ready’ Technology?
This philosophy of bringing technology to customers once it’s stable, or fits their needs, is what makes the adoption of DLSS on the console a really interesting one.
In recent months stories have cropped up about how the original hardware profile of the 2017 Nintendo Switch actually originates from a proof of concept drummed up by NVIDIA in collaboration with Razer around 2015-2016. The original Switch hosts a modified version of NVIDIA's Tegra X1 chip - with a slightly downclocked CPU, no doubt to minimise energy consumption and by extension reduce heat and extend battery life. As reported in that story linked above, there was a possibility they could have deployed with the updated Tegra X2, but doing so would have delayed the console for another year (and no doubt cranked up manufacturing costs).
This is the beginning of the DLSS story, given not only was the X2 quite capable of running the AI upscaling technology, but even the (now discontinued) 2019 models of the NVIDIA Shield TV Pro (an Android-based game streaming platform) were capable of running DLSS while only running on a Tegra X1+ chip.
Critically, it's worth taking a brief tangent to highlight that this intersection of Nintendo and NVIDIA is also why it's the Switch that has become the focal point of DLSS adoption on console. Both the PlayStation and Xbox consoles run similar yet distinct AMD-based GPU chips (codenamed 'Oberon' and 'Scarlett' respectively) and have relied largely on AMD's FSR-based upscaling (though the PS5 Pro now has its own PSSR resolution scaling). The Switch is the only major console platform on the market right now that is running NVIDIA's GPUs (even VR hardware like the Quest 3 is running on Snapdragon chipsets).
On one hand this felt like an inevitability, given the complexity of many modern games means running them on a portable device can be a real challenge - I mean, try running Cyberpunk 2077 on a Steam Deck; after 10 minutes you can cook your dinner on it. Meanwhile some of the Switch ports of popular AAA titles either required significant optimisation to run (DOOM 2016), with some being delayed in the process (Hogwarts Legacy), or ran as cloud versions (Hitman 3), no doubt because it was just not possible to run them natively without a significant degradation in quality. Using DLSS allows you to offset this, but it gets into the weeds of giving users the option of graphics settings on Nintendo platforms, which is not terribly common. The mention of performance vs quality modes for games like Metroid Prime 4 during the Switch 2 announcement shows they're willing to start normalising the idea of games looking better in different modes and configurations, and letting the end-user make those judgements themselves.
This is interesting to me given that, outside of running your 2017 Switch docked or handheld, Nintendo players have never really been given the option to think about these sorts of things. Are games going to allow you to disable DLSS? Will you have that option, and if so, are we going to live in a world where Nintendo takes a page from their parental control support and creates an informational video starring Bowser and his son discussing DLSS and how it works on individual titles?
I stand by my opinion that this is still one of the best videos Nintendo has ever made.
An Interesting But Subtle Inflection Point
By going this route Nintendo enters a space where most 3rd party titles will start to embrace DLSS to achieve the performance they're aiming for - otherwise how else will they get their games running on the device without aggressive optimisation or corner cutting? While the jury is still out on the technology, with many PC players not fans of using it given the rogue visual artifacts and occasional frame skips it still causes, the big N are willing to put their backing behind it. So for me personally - as someone who opts not to use DLSS if I can avoid it - it's an interesting step in the technology's gradual adoption in the sector, and a sign that Nintendo perceives the tech as consumer ready: they're willing to invest in it, and make it available for their development partners to adopt. Though perhaps the real question isn't how much 3rd party developers will embrace it, but whether Nintendo themselves will, given how protective they are of their image and IP?
In previous generations we've talked within the industry about how certain 3rd party ports on the likes of the Wii/Wii U/Switch are the 'uglier' versions of their PlayStation and Xbox counterparts, but it's always been on the basis of older and less performant hardware. I wonder whether going all-in on DLSS leads to a normalisation of the idea that 'games look bad on Switch 2 because of AI slop'. There is already a 'fake frames' movement in PC gaming communities, as players consider the DLSS output 'fake' given many of the frames in DLSS 3 and 4 are interpolated between two frames produced by the game engine itself. The normalisation of the term 'AI slop' in the past year has led to me seeing posts online that equate DLSS with the junk invading the internet. All of this reminds me of my point last year in my conversation on AI winters: consumers themselves will dictate how aggressively these technologies are embraced in the coming years.
Wrapping Up
Right, that’s enough babbling, but hopefully this has kept you entertained while you’re watching the timer go down on whichever Switch 2 pre-order queue you’re sitting in right now - unless you’re in the US and now Canada, sorry folks!
Thanks for reading, and I’ll catch you again next week.
Just One More Thing…
Just before we go for this week, I wanted to give a shout-out to the trailer for the upcoming M3GAN 2.0 film, which is just bananas. For those not familiar, the original M3GAN from 2022 is a Blumhouse-produced horror movie in which a young girl develops an emotional attachment to an AI-controlled doll, the titular Megan. She subsequently becomes a little too controlling, clingy, and begins killing people. Considering how often I groan at AI in fiction because of how poorly it's understood or adopted as a plot device - Mission Impossible: Dead Reckoning was an exercise in patience, believe me - I really liked M3GAN; it earned my respect as a piece of entertainment that almost makes sense from an AI perspective.
The sequel appears to have gone for the Terminator 2 approach, where now the original robot is being used to combat an even worse variant. It looks dumb, the Britney Spears soundtrack was a nice touch, and yeah I’ll probably go see this over the summer.