Does Ubisoft NEO Stand Out from Other Gaming Chatbots?
Putting LLMs into an NPC is a big trend, but Ubisoft's approach might actually work
Hello all and welcome to the weekly newsletter on AI and Games. Tommy here, as I continue to report on AI for the video games industry. This week we’re taking a look at NEO, Ubisoft’s conversational AI prototype for non-player characters, and I’ll be discussing my time spent playtesting the new demo showcased at GDC 2024.
Quick Announcements
Before we get to the main topic, here are some quick announcements of all things AI and Games related:
The London Developer Conference is happening this Thursday, and I’m excited to not only be giving a talk on (you guessed it) AI for games, but hosting a panel including Yuqian Sun, Simon Barratt, Florence Smith Nicholls and Matthew Jack.
The monthly Sponsor Newsletter went live earlier this week which is only available to our premium supporters on Patreon and Substack. We detail some of the exciting projects coming up from AI and Games in the coming months.
With that out of the way, let’s focus on this week’s big story: Ubisoft’s Project NEO!
Digging into Ubisoft’s Project NEO
The Game Developers Conference is often home to many a big announcement or tech innovation, and given how much prominence artificial intelligence has across the tech sector these days, you can bet many a company is looking to showcase its latest tech.
During GDC 2024 I was invited down to a small event hosted by Ubisoft to showcase what I would later come to learn is the NEO NPC: a prototype framework for non-player characters (NPCs) that adopts some of the latest work in large language models (LLMs). This isn’t a new idea, given that in the past year we’ve seen the likes of Inworld and Convai, among others, showcase tech that aims to support conversational AI for games. Inworld released their standalone product demo ‘Origins’ on Steam (which was originally shown at GDC 2023), while Convai have collaborated with NVIDIA on numerous product demos.
Now, I’ve written about and discussed both of these demos in detail before. So when put into context, what is Ubisoft’s NEO doing that’s any different? Is it worth paying attention to, and what have they learned that perhaps these other vendors had not?
While I am not yet privy to the inner workings of NEO, my playtime with the demo, combined with conversations I had with some of the development team, left a positive impression. It’s still not ready for use in commercial games - a point that NEO’s own team readily admit - but it is moving the needle in the right direction.
What is NEO?
NEO is a small R&D project based out of Ubisoft Paris and led by Xavier Manzanares, who was the lead producer on the critically acclaimed Mario + Rabbids games. The goal of the project is to try and expand the state of the art in conversational AI systems for games. This is rather notable given that, thus far, much of what we’ve seen in conversational AI has come from outside of games. The aforementioned Inworld and Convai are developed by people with experience in conversational AI and generative models who are now trying their hand at game development. But to date, NEO is the first instance of an established games studio/publisher throwing their hat into the ring.
What makes it even more interesting is that it’s drawing on different experience from across Ubisoft. Typically when we think of AI R&D at Ubisoft (or rather, when I think of AI R&D at Ubisoft) I think of La Forge: their dedicated teams that focus on applying new innovations in deep learning to games. In fact, at this year’s AI Summit at GDC we had a great talk by Gabriel Robert from La Forge showcasing how they’re using ML bots on titles such as For Honor and Rainbow Six: Siege. But they aren’t the only R&D provision across the company.
NEO is different because it starts from a gameplay perspective first, and then pulls on the required expertise within the company to facilitate the technology side. Virginie Mosser is the project’s narrative director, having previously worked at Ubisoft 1492, the studio dedicated to narrative-driven mobile games. This is where it gets interesting, as Mosser has been designing a collection of non-player characters for the player to interact with, and then working with a tech team - notably Mélanie Lopez Malet, who I had the pleasure of chatting with during the demo - to make them interesting to interact with.
The project itself has been in stealth mode for around 18 months at the time of writing, and while the team themselves admit it is very early days, they were excited about what they had built thus far, and were confident that what they had done was worth showcasing to the press and a few other select individuals.
At this time I can’t talk about how it works; rather, I will detail my experiences throughout, and what I did manage to glean from conversations with the team.
Playing the Objective
I think what is most important about Ubisoft’s NEO is that it addresses an issue I have had with all of the chatbot-based NPC tech I have played with thus far. Last year, when I analysed both Inworld’s ‘Origins’ and Convai’s tech demos, it was readily apparent how isolated the NPCs felt from the environment and the overall objective of the game. While they could conduct conversations with the player, it seldom felt like they were connected to the environment they existed within.
To unpack one specific example: Origins is a detective game that plays out at a crime scene, the site of a recent explosion. But the characters are rooted to the spot, and have little awareness of what is around them beyond what the narrative back-end of Inworld’s tools has permitted. It makes these interactions - regardless of how human-like they appear - feel hollow, given I’m not convinced these characters are aware of the circumstances they appear in. And so much of what makes good NPC AI work in games is that the player can be convinced (or rather, fooled) into believing the character can recognise and respond to its environment.
A big part of this is that the conversation tech offered by Inworld and Convai only solves what it advertises: the conversation part. But conversation is only one aspect of the intelligence required for a non-player character, and it takes additional work to elevate that NPC so it feels like part of its environment. It’s worth stating that this is something both of the aforementioned tool providers are aware of, and I had a great conversation on this exact topic with Convai’s CEO Purnendu Mukherjee in my interview with him in 2023, during which we discussed how making conversation more realistic only exposes the limitations of every other facet of the NPC’s intelligence. It’s also worth mentioning that both Inworld and Convai are working on addressing these issues, and each had updates to their technology available to check out at GDC 2024.
NEO tries to address the limitations of conversation by building atop these systems to provide a more nuanced experience. It uses the Inworld tools I’ve described previously, but now there are additional layers of control and information that ‘scaffold’ the conversation system so it doesn’t take on all of the leg work. NEO helps control how responsive and emotionally open an NPC is until you get to know them better. It controls the flow of information, so an NPC learns new pieces of information as the game world changes around you. It takes information offered by the player and validates it against in-game objectives to see whether your ideas are feasible. Ultimately it’s trying to build the bot’s conversational elements into a framework that better aligns with how a lot of game design is built - or perhaps more specifically, how Ubisoft games are built.
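To make that idea a little more concrete, here’s a rough sketch of what such a scaffold might look like in practice. To be clear, this is my own illustrative Python, not Ubisoft’s code: every class, field and function name here is invented, and the actual NEO architecture (and whatever Inworld exposes underneath it) will look different. The point is simply that the game, not the model, decides what the NPC is currently allowed to know and how open it’s willing to be.

```python
# Illustrative sketch only: a game-side 'scaffold' that gates what an LLM-driven
# NPC can reference. All names are hypothetical and do not reflect NEO's internals.
from dataclasses import dataclass, field


@dataclass
class NPCScaffold:
    rapport: float = 0.0                                   # trust built with the player
    world_facts: set[str] = field(default_factory=set)     # info drip-fed by the game
    personal_facts: dict[str, float] = field(default_factory=dict)  # fact -> rapport required

    def learn(self, fact: str) -> None:
        """The game feeds the NPC new information as the world state changes."""
        self.world_facts.add(fact)

    def disclosable(self) -> set[str]:
        """Personal details only unlock once the player has earned enough rapport."""
        unlocked = {f for f, needed in self.personal_facts.items() if self.rapport >= needed}
        return self.world_facts | unlocked

    def build_prompt(self, persona: str, player_line: str) -> str:
        """Constrain the conversation model to whatever the scaffold currently allows."""
        facts = "\n".join(sorted(self.disclosable()))
        return (f"{persona}\n"
                f"You may only reference the following facts:\n{facts}\n"
                f"Current openness towards the player: {self.rapport:.1f}\n"
                f"Player says: {player_line}")
```

In this framing the language model only ever sees a prompt that designer-controlled state has assembled, which is roughly the relationship NEO appears to be aiming for between its narrative data and the underlying conversation tech.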
Now, it’s worth stating that NEO has not solved the problem I describe, nor does it have a definitive solution for it either. But what it proposes is how this could be addressed with further research. Perhaps more interestingly, the solution it proposes puts control of the experience not in the (virtual) hands of the AI systems, but in those of the narrative and gameplay designers who would use these systems in an actual game. And while this is exciting, it also suggests that, if anything, the work of a narrative designer is going to become ever more complicated as time goes on.
The NEO Demo Scenes
As I was guided through the NEO demo, I was introduced to three separate yet linked scenarios, each of which explores the system in different ways.
The first scenario introduces me to the character Bloom, with whom I need to build a relationship. This relies on the player engaging with the character and asking questions relevant to the task at hand, while also exploring optional ‘quest’ activities within the conversation. In this scenario, the character is aware of the objects that exist in the room and can make conversation about them. That said, it currently could not navigate to objects in the space, nor did it have spatial awareness of their relative distance, but Bloom recognised that we could play music on the jukebox in the background, while also questioning me when I tried to convince them of objects that didn’t exist in the room.
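That object awareness is easy to picture as a simple grounding check layered on top of the conversation. Again, this is a hypothetical sketch of mine rather than anything Ubisoft showed me; the scene contents and function are invented. The idea is just that a player claim gets checked against the objects the scene actually contains before the NPC plays along.

```python
# Hypothetical grounding check: accept or challenge a player's claim based on
# whether the object they mention actually exists in the current scene.

ROOM_OBJECTS = {"jukebox", "bar stool", "neon sign", "pool table"}  # invented scene contents


def ground_player_claim(claim: str, scene_objects: set[str]) -> str:
    """Return a directive for the NPC: play along, or push back on a made-up object."""
    mentioned = {obj for obj in scene_objects if obj in claim.lower()}
    if mentioned:
        return f"The player is referring to {', '.join(sorted(mentioned))}; respond in character."
    return "The player mentions something that isn't in this room; question them about it."


# Example: the NPC accepts the jukebox, but challenges an object that isn't there.
print(ground_player_claim("Let's put a song on the jukebox", ROOM_OBJECTS))
print(ground_player_claim("Grab the rocket launcher behind you", ROOM_OBJECTS))
```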
But the actual crux of the demo was to have me build a relationship with Bloom. At first the NPC is not forthcoming with personal information, or willing to discuss things other than the task at hand. Once I started playing along with the in-game objective, the character’s attitude towards me improved, and the overall focus of the conversation continued to evolve.
The second scenario was also with Bloom, during which we observe and react to a drone mission playing out on a screen. In this instance, an ally NPC is piloting a drone to steal information from an enemy base. The idea behind this was to showcase how the character reacts to new information as it is presented. Bloom is not only able to share information about the current state of the mission, but also react to changes as they occur. Here, the ‘mission’ is actually played as a video file, for which the narrative designers had to sit and annotate the events of the video so that they could be fed into the NEO system at runtime. The goal, as described to me, is that in future the game itself could generate this information at runtime and feed it to the AI. Hence if a player was actually controlling the drone, the things they’re doing in the game could then be fed to Bloom to learn about and react to in real time.
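The annotation workflow the team described maps onto something like the sketch below - again entirely my own hypothetical Python, not NEO’s pipeline, with invented annotations. Timestamped event descriptions are pushed to the NPC as the mission plays out; in the demo those descriptions were hand-authored against a video, but the same interface could just as easily be driven by live gameplay systems.

```python
# Hypothetical sketch of drip-feeding annotated mission events to an NPC.
# The annotations and timings are invented; in NEO's demo they were authored
# by the narrative team against a pre-recorded video.
import time

MISSION_ANNOTATIONS = [
    (1.0, "The drone has slipped past the perimeter fence."),
    (2.5, "A guard patrol is closing in on the east gate."),
    (4.0, "The drone has reached the server room window."),
]


def stream_mission_events(on_event, annotations, start_time: float) -> None:
    """Deliver each annotated event once its timestamp in the 'mission' has passed."""
    for timestamp, description in annotations:
        while time.monotonic() - start_time < timestamp:
            time.sleep(0.1)            # wait for the playback/mission clock to catch up
        on_event(description)          # e.g. add the fact to the NPC's current knowledge


# Usage: feed events into whatever holds the NPC's context (here, just print them).
stream_mission_events(print, MISSION_ANNOTATIONS, time.monotonic())
```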
The third and final scenario has me sitting with another character, codename Iron, as we discuss how to break into an enemy location and steal sensitive material. The idea is to plan a break-in and identify sound strategies based on the available tools and opportunities. Players could look around the environment to learn about the available tools and resources, and then discuss with Iron the best approach to take. In each case, the player could persuade Iron to change their tactic if the argument was convincing enough.
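That back-and-forth planning is another place where a thin layer of game logic presumably sits between the player’s suggestion and the NPC’s reply. Here’s a hedged sketch of my own of what validating a plan against available tools might look like; the approaches, tools and scoring here are all invented and none of it reflects how NEO actually weighs a suggestion.

```python
# Hypothetical plan validation: an NPC only agrees to a strategy if the tools
# it depends on are actually available in the scenario. All names are invented.

AVAILABLE_TOOLS = {"grappling hook", "keycard", "smoke grenade"}   # found by exploring
KNOWN_APPROACHES = {
    "rooftop entry": {"grappling hook"},
    "front door bluff": {"keycard"},
    "vent crawl": {"smoke grenade", "keycard"},
}


def evaluate_plan(approach: str, tools: set[str]) -> str:
    """Tell the NPC whether to adopt the player's plan or push back on it."""
    required = KNOWN_APPROACHES.get(approach)
    if required is None:
        return "The plan isn't one the NPC recognises; ask the player to elaborate."
    missing = required - tools
    if missing:
        return f"The plan needs {', '.join(sorted(missing))}, which we don't have; object to it."
    return "The plan is feasible with what we have; agree to change tactics."


print(evaluate_plan("rooftop entry", AVAILABLE_TOOLS))
```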
The Need for Narrative Design
In each instance, the conversation had a purpose for gameplay, and the characters were familiar with my goals and priorities. As mentioned already, characters felt more focussed towards their objective and more in tune with their local environment. One thing worth noting is that information is expected to be retained across the three scenarios and passed on between them. This can include decisions made on specific strategic elements, or even something as simple as your name (I asked for my codename to be changed in our first interaction). Sadly I didn’t get to see this applied as intended, given the demo crashed a couple of times during my playthrough, losing my shared knowledge in the process. But hey, it’s a demo, these things happen.
Perhaps the most critical aspect of this is how much narrative data (i.e. writing) is required to establish this framework. The narrative team on the project had to think of a myriad of elements, including the world’s backstory, the characters of Bloom and Iron, the fiction of the enemies, the environments and the important elements the player would attempt to utilise, the events transpiring in the drone mission, the full breakdown of the villa mission and much more. The sheer amount of narrative framing required to keep the characters on point is significant. In fact, I had a fairly lengthy conversation with the team afterwards about how much more work it takes to keep a bot like this in tune with the game state than regular narrative design. What would previously have been the job of one or two narrative designers could now require significantly more resources to feed something like this. It’s a real challenge that doesn’t have a solution yet, because nobody has really tried it before.
But while I am impressed with what I have seen thus far, it’s important to take a lot of it with a pinch of salt. This is, after all, a very early demo, and the NEO team were the first to admit that it is nowhere near ready for adoption in production. They too haven’t got all the answers on how to make this work. Plus the demo itself was built such that you couldn’t really push the limits of the system; you really had to play the objective. And I have to say that the roughest element was still the voice work. I said when I made my Inworld video last year that the voice work felt pretty rough, and while it has improved in the past year, it’s still nowhere near convincing enough.
Wrapping Up
In the video linked above - after my narration - you can catch my largely unedited footage from each sequence of the demo. I’ve cleaned it up a little, largely to account for gaps in conversation. I often had extended periods of silence between my exchanges with the NEO NPCs, but that’s largely because I was actively chatting with the folks at Ubisoft during the demo about what was happening under the hood - and they were great to chat with. I admittedly was not thinking about how it would translate into YouTube content! Nonetheless, please check it out and share your thoughts on how it performs. If you have any further questions, I’ll try and answer them too.
Thanks to Ubisoft
Thanks once again to everyone at Ubisoft for inviting me down to the event at GDC. It was really great to chat with the NEO team, and they were clearly very excited to be able to share their work. I have suggested we try and return to this and perhaps do a deep dive on the inner workings of NEO sometime in the future. But until then, I hope you’ve enjoyed this breakdown, and I look forward to sharing more when and if the opportunity presents itself!