About Me

I am a Game Designer based in France, currently working as a Technical Game Designer for X&Immersion. I bridge the gap between creative vision and technical implementation, specializing in prototyping, AI tools, and systems design. I also write opinion pieces about game design in my free time; feel free to check them out!

Technical Arsenal

  • Unreal Engine 5: Blueprints & Widgets
  • Unity: C# & Prototyping
  • Documentation: GDDs & Tech Docs
  • AI Integration: LLM & Behavior Trees
  • Version Control: Perforce / Git
  • Project Management: ClickUp / Trello / Slack

Selected Work

X&Immersion - Technical Game Designer
June 2022 - Present | Intern → Permanent
Professional AI Tools for Unreal & Unity

Role Overview

I develop tools that integrate Generative AI (Voice, Text, Facial Animation) directly into game engine pipelines. My role bridges the gap between R&D and Production, ensuring these tools are not just "tech demos" but usable solutions that solve specific production bottlenecks for developers.

The Tool Suite

  • Diagen: An LLM-powered narrative system for emergent NPC dialogue.
  • Ariel: A Text-to-Speech (TTS) synthesis tool.
  • Geppetto: A facial animation and lip-sync automation tool.

Shipped Collaborations (Client Work)

I worked as part of the integration team to help studios implement our tech into live productions.

Murky Divers (Embers)

Collaborated with the dev team to integrate voice cloning technology, allowing in-game monsters to mimic players via VOIP.

Amerzone: The Explorer's Legacy (Microids)

Helped integrate Geppetto to automate their lip-sync pipeline, halving production time for lip-synced clips.

Hot Lap Racing (Zero Games)

R&D on AI pathfinding: explored machine-learning approaches to train AI cars for competitive racing behavior.

System Overview

Diagen is a modular narrative plugin for Unreal Engine 5 designed to bypass native limitations regarding runtime DataTable modifications. It enables:

  • Unique, Controlled Dialogue: Non-repetitive responses that adhere strictly to design constraints.
  • Dynamic Memory: NPCs that can "learn" and "forget" information dynamically.
  • Full Interaction Suite: Supports Player-to-NPC, NPC-to-NPC, and World-to-NPC loops.
  • Event Triggering: Natural language detection triggers gameplay events (Animations, Quest Updates).

My Role: Lead System Designer

I was the primary designer behind Diagen, responsible for translating the high-level goal "Dynamic and fully controlled AI NPC" into a functional, logic-based architecture.

  • System Architecture: Conceptualized the 4-Table Structure to bypass Unreal's runtime limitations. I defined how static data (Lore) interacts with dynamic states (Tags) to emulate memory evolution.
  • Logic & Algorithms: Designed the "Tag Weighting" system that prioritizes information to simulate "learning" and "forgetting". This ensures that an NPC mentions "The Dragon Attack" (High Weight) before "The Weather" (Low Weight) when prompting the LLM.
  • Technical Documentation & Workshops: Authored the full technical documentation and integration guides used by client studios to implement the plugin into their own pipelines, as well as the guided integration workshops.
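The weighting idea can be sketched as a simple sort over the facts an NPC currently knows. This is an illustrative Python snippet with invented names, not the actual Diagen implementation (which runs inside Unreal Engine):

```python
# Illustrative sketch of the Tag Weighting idea. Names and data shapes
# are hypothetical; Diagen itself is an Unreal Engine plugin.

def select_facts(known_facts, max_facts=5):
    """Pick the highest-weight facts the NPC currently knows, so
    critical lore is fed to the LLM before trivia."""
    ranked = sorted(known_facts, key=lambda f: f["weight"], reverse=True)
    return [f["text"] for f in ranked[:max_facts]]

facts = [
    {"text": "The weather is mild today.", "weight": 1},
    {"text": "A dragon attacked the village last night.", "weight": 10},
    {"text": "Bread costs 5 gold.", "weight": 4},
]
print(select_facts(facts, max_facts=2))
```

With a budget of two facts, the dragon attack and the bread price make the cut while the weather is dropped, which is exactly the prioritization described above.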


Core Architecture: The 4-Table System

The system relies on four distinct DataTables to manage knowledge access and game logic.

1. Character Information Table (CI_Table)

The central knowledge base containing every piece of narrative info (Lore, Biography, Facts).
Mechanism: Access is restricted by Tags. An NPC can only "know" a row if they possess the corresponding Tag (Key).

2. Tag Table

Defines the access keys (Roles, States, Knowledge). Each Tag has an assigned Weight to prioritize critical information in the final prompt context.

3. Diagen Event Table

Maps narrative triggers to engine instructions. An event can force a specific dialogue line, modify tags (Learning/Forgetting), or execute external game logic (Cinematics, UI updates).

4. Topic Detection Table

A semantic layer that detects specific subjects or intents within natural language (User Input or NPC Output) to trigger behaviors.
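To make the structure concrete, here is a minimal Python sketch of the four tables. Field names are my own invention; the real system uses Unreal DataTables, not Python classes:

```python
from dataclasses import dataclass, field

# Minimal sketch of the 4-Table data model. Field names are assumptions;
# in the actual plugin these are Unreal DataTables.

@dataclass
class CharacterInfoRow:   # 1. CI_Table: lore gated by a required Tag
    required_tag: str
    text: str

@dataclass
class TagRow:             # 2. Tag Table: access keys with priority weights
    name: str
    weight: int

@dataclass
class EventRow:           # 3. Diagen Event Table: trigger -> engine action
    trigger: str
    add_tags: list = field(default_factory=list)     # "learning"
    remove_tags: list = field(default_factory=list)  # "forgetting"

@dataclass
class TopicRow:           # 4. Topic Detection Table: phrases -> event trigger
    keywords: list
    event_trigger: str

def known_rows(ci_table, npc_tags):
    """An NPC 'knows' a CI row only if it holds the matching Tag (Key)."""
    return [row.text for row in ci_table if row.required_tag in npc_tags]
```

Because knowledge access is just a tag lookup, granting or revoking a single tag instantly changes what the NPC can talk about, which is how memory evolution is emulated without modifying the tables at runtime.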

Logic Flow: Topic Detection

Example A: Player Interaction
Input: "What is the price of a baguette here?"
→ Topic Detected: [PlayerAsksForBreadPrice]
→ Execute Event: [TellBreadPrice] (NPC answers "5 Gold" & UI updates)

Example B: NPC Reaction
NPC Output: "Who are you? You have no right to be here! Help!"
→ Topic Detected: [CallForHelp]
→ Execute Event: [TriggerNearbyGuards] (Spawns guard actors)
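The flow above can be approximated with a toy keyword matcher. The real topic layer is semantic rather than keyword-based, and all table contents here are hypothetical:

```python
# Toy stand-in for the semantic topic layer: plain keyword matching.
# The production system is more sophisticated; names are hypothetical.

TOPIC_TABLE = {
    "PlayerAsksForBreadPrice": ["price", "baguette"],
    "CallForHelp": ["help", "no right to be here"],
}

EVENT_TABLE = {
    "PlayerAsksForBreadPrice": "TellBreadPrice",
    "CallForHelp": "TriggerNearbyGuards",
}

def detect_topic(text):
    lowered = text.lower()
    for topic, keywords in TOPIC_TABLE.items():
        if any(k in lowered for k in keywords):
            return topic
    return None

def handle_line(text):
    """Map detected topic to the event the engine would execute."""
    return EVENT_TABLE.get(detect_topic(text))

print(handle_line("What is the price of a baguette here?"))  # TellBreadPrice
```

Note that the same pipeline runs on both player input and NPC output, which is what lets an NPC's own generated line ("Help!") trigger gameplay such as spawning guards.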

Runtime: Prompt Construction

At runtime, the system dynamically assembles the final prompt sent to the LLM by layering three data sources:

LAYER 1: SYSTEM PROMPT

"You answer as an assistant. Speak in first person. Do not discuss politics. Stay in character."

LAYER 2: CORE DESCRIPTION (From CI_Table)

"You are Sir Archibald Smith. You live in The Oak Domain. Your wife is Lady Andrea."
(Only info unlocked by current Tags is included)

LAYER 3: CONTEXT & HISTORY

Player: "Hello Sir Archibald. How are you today?"
Archibald: "I am fine good sir, how may I assist?"
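The layering can be sketched as straightforward string assembly. This is a simplified illustration with assumed structure, not the plugin's actual prompt format:

```python
# Sketch of the three-layer prompt assembly. The exact format used by
# Diagen is an internal detail; this only illustrates the layering.

SYSTEM_PROMPT = ("You answer as an assistant. Speak in first person. "
                 "Do not discuss politics. Stay in character.")

def build_prompt(unlocked_facts, history):
    core = " ".join(unlocked_facts)   # Layer 2: only tag-unlocked CI rows
    dialogue = "\n".join(history)     # Layer 3: running conversation
    return f"{SYSTEM_PROMPT}\n\n{core}\n\n{dialogue}"

prompt = build_prompt(
    ["You are Sir Archibald Smith.", "You live in The Oak Domain."],
    ['Player: "Hello Sir Archibald. How are you today?"'],
)
```

Because Layer 2 is rebuilt from the current tag set on every request, the NPC's "knowledge" in the prompt always reflects whatever it has learned or forgotten so far.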

My Role: Technical Designer & Product Owner

I was responsible for translating the raw capabilities of our TTS engine into a production-ready tool for game developers in Unity.

  • Interface Design (UI/UX): Designed and implemented the plugin interfaces for Unity, abstracting complex API parameters into dev-friendly controls.
  • Pipeline Engineering: Designed early versions of the CSV Batch Processing feature within Unity, enabling users to generate thousands of lines at once and turning the tool from a gimmick into a production-ready add-on.
  • Creative Curation: Expanded the internal voice library by training and fine-tuning specialized archetypes (Vampires, Elderly, Children) often missing from standard AI models and necessary for the video game ecosystem.
  • Documentation & Integration: Wrote the complete user manual and handled direct integration support for clients and internal tech demos.

Tool Overview: Ariel TTS

Ariel is a hybrid Text-to-Speech solution supporting both Pre-Generation and Runtime Generation. It operates via Cloud API or Local Model execution.

Feature: Voice Cloning (or what I call the "fingerprint" logic)

Used in Murky Divers. Instead of simple playback, the system analyzes a user sample of more than 10 seconds to generate a "fingerprint" of the voice. This fingerprint is matched against an open-source base model to synthesize a clone that is fully controllable via text input and virtually identical to the input sample.

Feature: The CSV Batch Pipeline

To solve the scalability issue of generating 10,000+ lines, I built a system where:

1. Import CSV: [CharacterName], [Text to Generate], [Additional Settings]
2. Validate and Queue: The tool validates data and queues API requests.
3. Auto-Sort: Generated .wav files are automatically named and saved.
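A stripped-down version of that flow, using Python's standard csv module. Column names are simplified and the API call is stubbed out, so treat this as a sketch of the validate-queue-name pattern rather than the shipped tool:

```python
import csv
import io

# Toy version of the CSV batch flow. Column names are simplified
# assumptions; the real tool calls the Ariel TTS API and writes .wav files.

def queue_batch(csv_text):
    """Validate rows and build a request queue with deterministic
    output filenames, so generated audio auto-sorts per character."""
    queue = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if not row.get("CharacterName") or not row.get("Text"):
            continue  # skip invalid rows instead of failing the whole batch
        filename = f'{row["CharacterName"]}_{len(queue):04d}.wav'
        queue.append({"file": filename, "text": row["Text"]})
    return queue

sample = "CharacterName,Text\nArchibald,Hello there\nGuard,Halt!"
print(queue_batch(sample))
```

Skipping bad rows rather than aborting matters at scale: with 10,000+ lines, one malformed entry should not force the user to restart the entire batch.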

Facial Animation Automation

Geppetto is a tool designed to automate facial animation by analyzing audio files and driving character blendshapes (morph targets) for both lip-sync and emotional expression.

My Contribution

My main goal on Geppetto was to ensure the tool was actually usable for animators in a real production environment. I acted as the bridge between the machine learning engineers and the end-users.

  • Designing the User Experience: I helped build the tool's interfaces for Unity. I focused on making the tool easy-to-understand by non-technical users.
  • Stress-Testing the Tech: To prove the tool's versatility, I created comprehensive tech demos using a wide variety of industry-standard rigs—from high-fidelity MetaHumans to stylized Ready Player Me avatars.
  • Quality Assurance: I spent a lot of time testing the tool within production environments with my team to build a solid library of phonemes and emotions for our characters.

Totem's Path (Survival RPG Prototype)


My first project at X&Immersion was a first-person Survival RPG designed to showcase our in-house technology (Voice Generation, Lip-Sync, Text Generation) in a gameplay context.

System Design & Implementation

My main contributions as a Game Designer were:

  • Procedural Narrative: Designed a Quest & Dialogue system where NPCs generated dynamic quests and evolving dialogue based on world events. This was a key R&D challenge to validate our early AI tech.
  • Combat Design: Designed a ruthless, hack-and-slash style first-person melee combat system.
  • Crafting & Economy: Built a database of recipes and items to balance resource scarcity and inventory-merging mechanics.
  • Implementation: Worked directly in Unity (C#) to implement these systems, bridging the gap between game logic and our AI tools.
Teaching & Conferences
Visiting Lecturer | ISART & ICAN

ISART DIGITAL

Masterclass: AI in Production

I delivered a comprehensive masterclass to Master 2 Game Programming students regarding the integration of AI tools into video game production pipelines.

ICAN DESIGN

  • Conference: "AI Tools for Narrative Games". Explored how LLMs can enhance storytelling mechanics and empower narrative designers.
  • Themed Week: Supervised 3rd-year students during a week-long intensive game jam focused on creating AI-based narrative games.
  • Pre-Production Workshop: Worked with Master 2 Game Design students to modernize their pre-production workflows using generative AI tools.

Resume

Download my full resume for a detailed work history.

Download PDF Resume

Articles

Beyblade X Game Design
How Beyblade X nails great game design.
Arc Raiders Voice Chat
Arc Raiders' voice chat is good, but it can be better!
Battlefield 6 Map Design
Can User Generated Content fix Battlefield 6’s map design problems?