Why is AI the Root of SAG-AFTRA's Video Game Strike? | AI and Games Newsletter 25/09/24
Plus EA's AI Investor pitch, Frank West is back, Goal State, conference updates and more.
The AI and Games Newsletter brings concise and informative discussion on artificial intelligence for video games each and every week, plus a summary of all of our content released across various channels, from YouTube videos to episodes of our podcast ‘Branching Factor’ and in-person events like the AI and Games Conference.
You can subscribe to and support AI and Games on Substack, with weekly editions appearing in your inbox. The newsletter is also shared with our audience on LinkedIn. If you’d like to sponsor this newsletter, and get your name in front of our audience of over 6000 readers, please visit our sponsorship enquiries page.
Hello all,
here and welcome back to the newsletter. I was this close, *this* close, to pivoting topic this week to Electronic Arts’ investor day, given they seem to have decided they’re now an AI company and not a games developer. But yeah, that can wait - don’t worry, I have a lot to say on that one! For this issue, it’s time to check in on what’s happening with the SAG-AFTRA strike, and how AI is at the centre of this debate. Critically, I’m going to try and help paint a picture of where we are with a lot of the relevant technologies, whether the strike action is justified, and whether performers have a legitimate case for concern when it comes to AI affecting their livelihoods.
Spoiler Alert: The answer to both of these questions is yes.
Announcements
Before we dig into the main stories, a quick round-up of important announcements AI and Games related and otherwise!
AI and Games Conference 2024
We’ve got a whole bunch of stuff to announce in the coming weeks for the AI and Games Conference, but here are the highlights:
Our next batch of tickets went live on the conference website on Monday.
But uh, since they went live on the site, they sold out already, sorry!
We’re looking to see if we can issue any more, but that may well be the last batch we can offer. I’ll share more info on this when I can.
Decisions on speaker submissions will be sent out this week!
We appreciate your patience on this one. We’ve been blown away by the number of submissions we received, and it’s made the job of deciding which ones to accept very difficult.
It is my pleasure to welcome Sally Kevan on board as our event coordinator for the conference. Sally joins us with years of experience in event planning and coordination in the games event space, having worked with UKIE and Pocket Gamer Connects, as well as putting on client booths at everything from Gamescom to TwitchCon. We’re lucky to have her on board!
Other Announcements
As announced earlier in the year, I am running my first ever Kickstarter to crowdfund my online course series ‘Goal State’, which will provide accessible materials on how to get started in AI for games. You can read all about it below, and the Kickstarter will go live in November.
Upcoming Events
I’m pleased to announce I’ll be a speaker at Konsoll 2024, which is running October 31st to November 1st in Bergen, Norway. I’ve been a big fan of the talks shown at Konsoll in previous years, and am excited to make my own contribution to their community. This is going to be just a week after I return to King’s College London for their Next Level event on October 25th.
But in the meantime, this newsletter comes to you while I am in Dublin attending the NEXUS Games Conference which starts later today. While we’re talking events, I wanted to give a quick shout out to two events I attended since the last newsletter. The IGGI Conference in York was a lot of fun, and it was great to engage with a variety of folks across industry and academia. Plus a special thank you to the wonderful folks at the GamesIndustry.biz HR Summit last week. My talk there was well received, and the overall presentation quality was top notch. I learned a lot! Plus, it was a really nice community full of fun and interesting people. I had a great time, and I need to find an excuse to go back next year.
NEXUS Games Conference, Dublin, Ireland, September 25th and 26th.
‘Large Language Models (LLMs) for Game Designers’ with Gamaste, Lyon, France, October 9th.
Next Level 2024, King’s College London, UK, October 25th.
Konsoll 2024, Grieghallen, Bergen, Norway, October 31st and November 1st
AI and Games Conference 2024, Goldsmiths, University of London, UK, November 8th.
AI (and Games) in the News
EA CEO says generative AI is at the "very core of our business", almost three years after saying NFTs are "an important part of the future of our industry" [VG247]:
So EA had an investor summit last week, where they proclaimed AI is at the very core of their business. It’s funny how all these tech trends are at the ‘core’ of what they do, meanwhile ‘making games’ seems largely forgotten. We’ll talk about this some more in the next newsletter, promise!
Gamebeast raises $3.7 million in pre-seed funding [GamesIndustry.biz]:
The latest in a long line of companies receiving a big chunk of pre-seed or seed funding. Gamebeast is a no-code tooling platform for Roblox that seeks to provide tools for user-generated content, as well as better analytics systems for creators. A reminder that yeah, Roblox is big, and people are making money over there.
Campfire raises $3.95m for generative AI engine, Sprites [GamesIndustry.biz]:
Raising $3.95 million in a seed round, Campfire is seeking to build generative AI tools for creating meaningful and engaging NPCs. They plan to showcase this with their own AI-native game titled Cozy Friends. So Animal Crossing, except now the Nook family can use AI to find even more ways to bleed you dry!
UK Government Call for Evidence on AI for Creative Industries [UK Parliament]:
Thanks to over at for catching this one. This is an important opportunity for many companies and universities to present to the UK government the value of funding and collaboration initiatives surrounding AI in the creative sector.
Upcoming Games
Some games that I’ve got my eye on for many a reason.
Dead Rising Deluxe Remaster (PC, PS5, Xbox Series X) - 19th September
Frank West is back in this remake/remaster of the 2006 original. A solid and interesting game back in the day, it was technically impressive given its ability to showcase hundreds of zombies in a post-apocalyptic shopping mall in Willamette, Colorado. While this is advertised as a remaster, I’d argue it leans more heavily towards a remake: it has a complete graphical overhaul, is fully voiced, and introduces many quality-of-life features such as auto-saves, a new control scheme, and an improved user interface. Plus the friendly NPCs are less dumb too!
Ara: History Untold (PC) - 24th September
Ara is a new turn-based grand strategy (4X) game developed by Oxide Games and published by Xbox Game Studios, vying to be a competitor to Sid Meier’s Civilization. It’s of note for me given we had the team from Oxide present the underlying AI technology at GDC 2024, and they’ll be back with an updated talk at the 2024 AI and Games Conference. Mwa ha ha.
The Legend of Zelda: Echoes of Wisdom (Switch) - 26th September
The ‘Legend of Link’ begins, as this is the first entry in Nintendo’s beloved action-adventure series in which Princess Zelda is the protagonist. Built using the same engine as the Legend of Zelda: Link’s Awakening remake for the Nintendo Switch, it expands on ideas from 2023’s Legend of Zelda: Tears of the Kingdom to allow you to approach a variety of puzzles however you like. Very excited for this one.
The Big Story: The Origins of SAG-AFTRA’s Video Game Strike
For this week’s big story, I wanted to tackle a talking point that I’ve highlighted in brief across several newsletter issues of late: the industrial action called by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) in the context of video game performers.
If you’re not familiar, hundreds of game performers are striking and will not participate in the development of upcoming games until the strike is resolved. Now I’m grateful that George, a friend of AI and Games and frequent co-conspirator, provided a fairly thorough breakdown of the situation over on VGIM when it all kicked off. Frankly, George does a better job of summarising the current situation than I can, so I won’t even bother trying! If you want a grasp of what led to the strike kicking off, then check this out.
But for the purposes of this article, I provide some key points in summary:
As of July 26th, SAG-AFTRA members will not perform for 10 named video game companies after attempts to refresh the Interactive Media Agreement (IMA) collapsed.
The 10 named companies are: Activision Productions Inc, Blindlight LLC, Disney Character Voices International Inc, Electronic Arts Inc, Formosa Interactive LLC, Insomniac Games Inc, Llama Productions LLC, Take 2 Productions Inc, VoiceWorks Productions Inc and WB Games Inc.
The strike is only applicable to projects that were commissioned after September 2023.
This is the end-result of 18 months of deliberation in which 24 out of 25 provisions of the agreement have been resolved. The final provision, the adoption of AI in the context of video game performance, is what has led to this strike.
Given the strike only covers projects commissioned after September 2023, most (if not all) publicly announced games will not be affected - a point that EA CEO Andrew Wilson echoed in a recent investor call.
For this issue of the AI and Games Newsletter, I wanted to focus less on the business aspects of this - George did that already on VGIM - and instead focus on the technical side of things. Critically…
What technology has led to these concerns?
What is the state of this technology at this time?
Has this tech already been used in games? (spoiler: yes)
Why the concerns that performers have are very real, and warrant protection.
And what’s my perspective on all of this?
What is Performance?
Before we get into the nitty gritty, I wanted to take a moment to qualify what we mean when we talk about ‘game performers’ or ‘performance artists’ in the context of games. While video games are inherently a medium that relies on technically proficient individuals to build the experiences you know and love, they are equally reliant on performance artists of various kinds. This can include:
Sound Designers, who create everything from the simple sound effects played when you click a button in a menu, to the sounds of moving a character or using an item in your game. For a great example of how detailed, nuanced and just wild this type of work can be, check out the Vice video on the bone-crunching, flesh-tearing sound effects in the Mortal Kombat series - which of course may be NSFW due to scenes of graphic violence!
Musicians, who compose entire soundtracks for a given game that complement the experience throughout, and typically build them in pieces such that they can fade in and out of the experience when necessary. While it is similar to working in other mediums, you will often find the requirements can become rather specific for games projects. This is of course a perfect excuse to share Mick Gordon’s wonderful GDC talk from 2017 on the design of the DOOM (2016) soundtrack.
Voice Actors, who come in and give life to everything from the most incidental of background fodder to your favourite characters. There are some very popular voice actors across the industry whose involvement in a game can not only make or break the experience, but also have an impact on fans’ enthusiasm for the project. Whether it’s the frustration that David Hayter was replaced as the voice of Solid Snake in Metal Gear Solid V by Kiefer Sutherland, or more recently in 2023 with Charles Martinet stepping down as the voice of Mario after more than 25 years, stepping aside for Kevin Afghani to assume the role in Super Mario Bros. Wonder.
Finally we have Performance Capture Artists, who are responsible for acting out entire scenes of your favourite games wearing ridiculous pyjamas outfitted with sensors that allow for the capture of body movement and facial expression. These performances are used in a variety of capacities in entertainment, and are just as common in visual effects for film and television as they are in video games. If you’d like to know more about this area of development, I highly recommend catching the video below published by BAFTA earlier this year, which took an inside look at the development of the Hellblade series by Ninja Theory. It highlights the setup and process of how they capture the performances of Melina Juergens - who plays series protagonist Senua - and the other artists hired to work on the project.
Now for the most part, the SAG-AFTRA strike is really focussed on voice actors and performance capture artists. Though I suspect musicians are going to have their own issues in this space in the coming years if not already.
Striking Against AI for Performance
So one of the big issues that the SAG-AFTRA strike is focussed upon is the adoption of AI technologies in ways that intersect with jobs traditionally handled by performance artists. To quote SAG-AFTRA’s statement when announcing the action to members:
When it comes to artificial intelligence and digital replicas, we demand that performers retain the right to give or revoke consent and receive compensation and transparency for all such uses of their performances. This ask is entirely reasonable and feasible; we know this because independent and major studios have already signed on to our Tiered-Budget and Interim Interactive Media Agreements, which include those protections. [SAGAFTRA.org, link]
As you can see, the big concern is around the idea of ‘digital replicas’: the idea that AI could effectively take the core of a human’s performance capabilities, and then recreate it in such a way that it negates the need for the original performer.
Is This a Real Issue?
Now, this header might read a little sensational, but from the outside in, it’s worth addressing: is the AI technology out there good enough that you could effectively begin to replace a human with it? I’ll briefly get into this on the technology side, and then well… it’s best if I just show you.
On the technology side, we have seen significant gains in the past few years across a variety of areas in which generative AI models can be trained to handle the creation of specific types of media assets - be it text, images, sound, and also things like animation and video (which is of course, a series of images that create the illusion of movement*). A big reason for this is that while these models all fundamentally differ in their overall function, and even in the particulars of their underlying tech stack, they are nonetheless all derived from the same core technology: training artificial neural networks using machine learning algorithms.
Often we have periods where researchers achieve big developments in the field, where the technology passes a new threshold, and this has a cumulative effect not just on their work but on adjacent areas, as we begin to explore how the idea can be applied to other problems and applications. In the past decade or so we’ve seen two major leaps of this kind: first with the rise of ‘Deep Learning’ that kicked off in the early 2010s courtesy of the proliferation of Deep Neural Networks, and now more recently with ‘Generative AI’, which emerged first for images in 2014 with Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), and then for text with the Transformer model that powers popular text generators like GPT. And so you’ll find that when these things gather steam, it can lead to a boom of innovation after years with little to show for it. For example, natural language processing has seen significant gains since the work on Transformer network architectures was published. A decade ago the idea of something like GPT even existing seemed incredibly far-fetched.
As a side note, we talked about how GANs work and their potential to help with image and texture upscaling in a video back in 2021.
But still, this doesn’t quite express the tangible gains made by this technology. So I figured the best way to do it is with an example. In less than 10 minutes, I was able to use an existing third-party AI tool to create a clone of my voice - for free. To give you some context, particularly if you’ve only ever read AI and Games on Substack and not caught any of our audio/video offerings, here’s a short recording of my voice.
And next up, here is a clone of my voice that I created using the AI tool Speechify. To give some context here: Speechify uses an autoregressive model - a system that predicts the next component in a sequence based on the previous inputs in the sequence. This is commonly used in Text-to-Speech (TTS) applications, but here, the machine has also analysed the acoustic features of my voice patterns, such that it can then figure out how to reproduce them.
In the example below, I have provided it with a script and the system then attempts to speak it aloud, using my voice patterns. Check it out.
So yeah, there are a number of issues with it: the cadence is a little off, the actual quality of the sound is a little weird, and critically it can’t get around my accent. You’ll notice that the longer the recording goes on, the more English I sound, as it tries to reconcile my cleaned-up Scottish accent with what it considers to be a British English accent. Fun fact: despite all the advancements in both speech-to-text and text-to-speech, they are still useless when used in Scotland. I actively avoid them.
But the real scary part of all of this is just how quickly this was built, and how little data it required for even a middling, not-great digital clone of my voice. It took all of around 20 seconds to generate, and the model is based on a roughly 90-second clip of me reading a script. So now imagine this in the context of games studios, where performers spend dozens if not hundreds of hours recording performances, and all of that information is kept and maintained by the studio.
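To make that ‘autoregressive’ idea a little more concrete, here’s a deliberately tiny Python sketch. To be clear, this is a toy illustration under my own assumptions, not how Speechify (or any commercial voice-cloning system) is actually built - real tools predict acoustic tokens with large neural networks - but it shows the same core loop: learn from a short reference sample what tends to follow what, then generate new output one step at a time, with each step conditioned on everything produced so far.

```python
import random
from collections import defaultdict, Counter

# Toy autoregressive generator. Characters stand in for the acoustic tokens a
# real text-to-speech model would predict; the point is the loop, not the audio.

def train(reference: str, context_len: int = 3) -> dict:
    """Count how often each symbol follows each short context window."""
    counts = defaultdict(Counter)
    for i in range(len(reference) - context_len):
        context = reference[i:i + context_len]
        counts[context][reference[i + context_len]] += 1
    return counts

def generate(model: dict, seed: str, length: int, context_len: int = 3) -> str:
    """Autoregressive loop: sample the next symbol from the distribution
    conditioned on the most recent symbols, append it, and repeat."""
    output = seed
    for _ in range(length):
        options = model.get(output[-context_len:])
        if not options:  # unseen context - a real neural model wouldn't stall like this
            break
        symbols, weights = zip(*options.items())
        output += random.choices(symbols, weights=weights)[0]
    return output

# A transcript of roughly 90 seconds of speech isn't much more text than this.
reference_clip = (
    "hello all and welcome back to the newsletter this week we are talking "
    "about artificial intelligence for video games and what it means for "
    "the performers whose voices bring those games to life "
) * 3

model = train(reference_clip)
print(generate(model, seed="hel", length=120))
```

Swap the characters for learned audio tokens and the frequency counts for a neural network trained on a performer’s recorded lines, and you have the skeleton of the systems the union is worried about - which is exactly why the amount of archived session audio a studio holds matters so much.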
The Looming Threat
One of the false promises of generative AI in the games industry to date is that it will level the playing field: that developers and studios of all shapes, sizes, and budgets can begin to capitalise on this technology to achieve things that were arguably impossible but a year or two ago. This whole situation is but another example of how this isn’t really the case, given it seems the creatives who are already doing a lot of this work, without AI, are going to suffer as a result.
This argument often falls flat in practice due to a number of issues that I’ve enumerated in previous issues. But let’s revisit them quickly:
Significant concerns over copyright and intellectual property rights of generated assets. Critically, external generative AI tools leave you vulnerable to another creator generating the same or similar outputs as yours, and you can do nothing about it.
Risk of losing access to generators and associated assets as companies go bust, or they change their terms and conditions for AI model licenses.
A platform landscape that - for now - still does not provide clear-cut guidance for games using generative AI to pass ‘lotcheck’ or ‘cert’: the validation processes utilised for digital storefronts such as Valve’s Steam, but also on consoles like the Nintendo Switch, PlayStation 5 and Xbox Series X/S.
Many tools are still not fit for purpose, and don’t provide sufficient resources for adopting into a traditional game development pipeline.
Of course this hasn’t stopped many an indie from exploring this technology, and I would argue the most successful ones have been those that have trained their own models in ways that mitigate many of these issues - a future story perhaps? There are ways to do this that are ethically and legally sound, but quite often the narrative is being dictated by those who wish to circumvent that.
But regardless of the issues, it’s still been a tantalising prospect for many an indie, given it offers them the potential to explore a new feature or quality improvement in their game that previously would have been impossible for them. Sure, it means that a performance artist isn’t getting paid, but then they could never have afforded one anyway.
But therein lies the problem: if an indie studio who didn’t have the resources can use this kind of technology to achieve a higher performance fidelity, what’s stopping a larger (i.e. AAA) games studio from doing the same?
This is the thrust behind the strike. Consider Jennifer Hale, one of the most prolific voice actors across film, television, and games, who has lent her voice to hundreds of characters across a career spanning 30 years. Many of those studios will still have those voice assets archived, and they could be cleaned up for use in a generative model. Hypothetically, BioWare could clone Hale’s voice should they wish to bring back Commander Shepard in the next Mass Effect game, and with AI could record all the dialogue they need without Hale ever entering a recording booth - without her knowledge, and without compensation. Let me stress, this is a wild and highly unlikely scenario, but it could happen, and that is why SAG-AFTRA are striking.
But this isn’t just an issue for voice actors. Motion capture performers could see their work dissected and reproduced without their permission. Of significant concern are the artifacts evocative of the humans themselves: their voice and their movement could be cloned and reproduced with little regard not just for whether they’re compensated, but for what is done with that digitised performance without their consent. This is already a big issue with deepfakes across the internet, where people are made to appear to say things that do not align with their religious and political views.
The Studio Perspective
On a fundamental level, what is being taken here is a piece of the performer themselves - who they are, what they are. It’s why performance actors are successful in their craft: they lean on their experiences and knowledge to craft performances that either marry them to that character forever more, or are so compelling you wouldn’t believe that was the same voice actor. But that’s not how the studios SAG-AFTRA are striking against see it.
For example, let’s take Batman: The Animated Series from the 1990s, in which the late Kevin Conroy - who was at that time known solely for his work in theatre - delivers what is considered by many as the definitive rendition of the Batman. Meanwhile many were surprised to discover that Mark Hamill of Star Wars fame was Batman’s cackling nemesis, the Joker. These actors subsequently assumed those roles in the Batman: Arkham video game trilogy from Rocksteady Studios. Hence they are now video game performers, given they adopted those roles in three game productions.
However, the problem is that in the eyes of the studios working against the SAG-AFTRA union, a lot of the performances used in this context are often just data. What is arguably worse is that the stance taken against SAG-AFTRA - per the union’s documentation - is that a performer’s ‘digital replica’ can only be considered as such if the performance is “readily identifiable and attributable to that performer”, and if “there is a one-to-one relationship between a single performance and a single character”. Lastly, that motion capture performers are not considered performers in the traditional sense, given their movement is simply captured data.
Now take that in context of Conroy and Hamill as Batman and the Joker. Sadly Conroy passed away in 2022, and Hamill has stated he won’t return to the role out of respect for his former colleague. But it raises a question: could you then train an AI to clone their voices for a future game? Let’s run this hypothetically against these rules:
Are their performances readily identifiable as them?
Both actors are well known for playing these roles. Though I’d argue it’s still not readily obvious that Hamill is the Joker unless someone told you.
But many other voice actors in those games give fairly unrecognisable performances, given their voices are heavily modulated or in a style not evocative of their normal voice. This includes Steve Blum as Killer Croc, Fred Tatasciore as Bane, Nolan North as the Penguin, and even Harley Quinn’s voice, given that’s a very heightened performance.
Do they have a one-to-one relationship?
I’d argue no, given that in all of the Batman: Arkham games, Conroy and Hamill are solely the voice actors for each character. There are performance artists used to record their movements in fight sequences or cutscenes (see the video below). This of course makes sense given both Conroy and Hamill were in their 50s and 60s during the making of that trilogy. There is also a slim chance that small incidental noises (grunts, groans) and the like could be recorded by another actor.
Of course, this means recognising the motion capture artist as a performer! But technically, it isn’t a one-to-one relationship.
But also, Harley Quinn fails this test, given she was played by two different voice actors in the same series of games. Arleen Sorkin played the role in Batman: Arkham Asylum, while Tara Strong played the character in both Arkham City and Arkham Knight.
My point being that even in this simple, contrived example, it’s not obvious what the answer is when taking the studio perspective on how this is all supposed to work.
There Is Still Value Here
Now while we have spent the bulk of this issue discussing the risks presented by this technology, it’s also worth highlighting the opportunities it provides. In fact this is stuff that SAG-AFTRA themselves see value in; they just want to make sure that their members are compensated if it is employed in such a capacity.
While you hear synthesised AI voices in a variety of tech demos for chatbot companies like Character.ai, Convai, Inworld and more, there are a handful of studios that have rolled out this technology in their shipped games. Two fairly high-profile titles that have adopted AI-synthesised voices in recent years are 2021’s The Ascent by Neon Giant and The Finals from Embark Studios in 2023. In the case of The Ascent, the argument was that it helped them achieve a large number of voices on their limited budget, bridging the gap between indie and AAA. Meanwhile the announcer voices in The Finals sound rather wooden, with a slightly hollow delivery that actually works in the context in which it has been employed, given they’re meant to be reality game show hosts. From the perspective of the union, the big question is whether the original voice actors receive any compensation (even indirectly through an AI provider) for these projects.
But these instances are still cases where the team has adopted it wholesale, rather than using it as part of a broader range of recorded material. One area where this can prove of value is for small touch-ups, corrections, or even incidental dialogue that needs to be put together. Quite often recording voice work for games is a long process, with actors hired for a period of time that can range from a day to a month or longer, recording screeds of dialogue. But what if a small change is needed to a handful of lines? Could you use an AI system, with the actor’s permission, to record that dialogue? If it proves sufficient, great. If not, the team works with the actor to get them back in the studio. I would expect that the latter will prove more expensive than the former, but with AI you could get a candidate solution faster. But of course, the actor should then be compensated to some degree for the fact you used your digital library of their voice to create those assets.
The same could be said for motion capture performances, to build a quick animation that is evocative of how that actor typically moves. Again, it should have their permission, and they should be compensated. But it can again lead to a faster turnaround, and the actor is being paid for work that they would have had to do anyway, albeit no doubt at a reduced rate.
My Perspective
So of course, being someone steeped in the space of AI for games, naturally people ask me what my take is on this situation. I very much side with SAG-AFTRA on the issue. This is work that is created courtesy of very real, quantifiable, human traits. We recognise people by a variety of traits: their face, their voice - both written and spoken - their movement. We have protections for the use of a person’s likeness, their face, in a variety of instances across entertainment and media, which ensure it cannot legally be used without proper authorisation and often compensation. Why can we not have the same protections for your voice? Or for how you move your body?
This is all coming at a time when AI technology has reached a point where we’re entering a new age of data acquisition, processing and generation that capitalist endeavours can latch onto. In the past two decades it’s been about the data you create in virtual spaces: your web surfing habits, your online purchases, your likes, your shares, your comments. This has been exploited to great effect for the past 20 years by everything from social media platforms to online stores and search engines. Now, as regulations continue to increase and improve the digital rights people have, there is a desire to find something new - something else that can be monetised from your activity. And this is where we are in 2024, and it’s been happening for years already: the next wave of data is who you are, how you are, and what you create as an artist.
We see this issue with Large Language Models that we are told can only be effective if they scrape the internet wholesale without permission. We see it in creators suing AI companies when it’s clear that their writing or their art has been used to train these systems without their knowledge. And the strike that SAG-AFTRA has actioned highlights even more of these very personal creations: how you sound, how you speak, and how you move.
Everyone is unique in this regard. Your voice is undeniably yours, and it is a reflection of your life experiences, your upbringing, your heritage, your education, and your culture. You may have a physical or mental disability that affects your speech. All of these things construct our spoken voice. We have a right to protect it. We have a right to dictate how it is used, and if we wish to monetise it as part of our career, then we should retain control over how it is used.
I joked earlier about my own voice, and it is often a topic of amusement among my friends, but it is one that is reflective of the past 20 years of my career. My voice arises from a working-class community in the west of Scotland, and over the years it has evolved and changed as a result of the experiences I’ve had, the work I do, and how I work to communicate with my audience. It is a voice that I have used to build an identity online that people recognise - such that to this day I can attend a game development conference and more people will recognise me not for what I look like, but for what I sound like.
My voice is an asset. It is my asset, and it is mine to protect and utilise as I see fit.
And this is the crux of the issue with the companies that continue to fight against workers’ rights over their assets. They seek to dehumanise and devalue what makes you, you. Sure, once it is processed into a computer, it is data. But it is data generated by you, from you, and that distinction should remain. To reduce the importance and validity of the performance artist is a ploy so that companies can monetise it at scale without the worry of compensation and residuals. They don’t want to grant performers consent over how their data is stored, processed and utilised. That adds complications, and processes, that get in the way of monetisation. Hence they will find loopholes in the system, or build them, and exploit them to full effect.
This isn’t even the first time unions have fought against this in entertainment spheres in recent memory. We’ve just come off the back of strikes by both the Writers’ Guild of America (which ended this week last year) and SAG-AFTRA’s first actors strike. In both instances, writers and actors in film and television had the same concerns with regards to their assets - their data - being utilised without their consent and without appropriate remuneration. So to see some of the largest players in the video games industry continue to defend their stance is disappointing, but not entirely unexpected.
It’s interesting how many AAA games continue to try and build off the back of standards and motifs established by film and television, but then resist having to pay for it. Building experiences that feel ‘cinematic’ in their presentation and scale, that seek to showcase serious dramatic performance in game narratives, but then resist treating the performers behind it like the professionals they insist they are.
So in closing, yes, I support the members of SAG-AFTRA in their strike, and look forward to sensible and pragmatic solutions emerging at the discussion table. As we’ve seen already, this isn’t the first time this has happened, and I can guarantee it will most certainly not be the last.
To learn more about the SAG-AFTRA strike visit the main strike webpage for a full set of resources and information.
Wrapping Up
And that’s it for this issue. I’m glad we got this out there, and hopefully you’ve learned a little more about the SAG-AFTRA situation along the way. I’ve gotta go finish practicing my talk for NEXUS, so wish me luck and I’ll see you all next week.
*Hello to Jason Isaacs.