Key takeaways
- Video games have used AI since the 1970s — enemy pathfinding, behaviour trees, and state machines are the classical toolkit.
- Modern AI in games spans procedural content generation, smarter NPC behaviour, generative assets (art, music, dialogue), and player-skill modelling.
- Landmark AI achievements — AlphaStar in StarCraft II, OpenAI Five in Dota 2, AlphaGo in Go — pushed reinforcement learning forward.
- Generative AI is starting to reshape game development: asset creation, voice acting, QA testing, and potentially entire content pipelines.
- Player concerns include asset authenticity, voice-actor displacement, AI-generated low-quality games flooding storefronts, and fair use of training data.
Game AI has always existed
“AI” in games is older than the modern machine-learning era. Pac-Man’s ghosts used simple rule-based AI in 1980: each ghost had a distinct personality and targeting behaviour. IBM’s Deep Blue beat world chess champion Garry Kasparov in 1997, and StarCraft AI research was active through the 2000s. The foundational techniques — behaviour trees, finite state machines, A* pathfinding, utility-based action selection — are still used in shipping AAA games today.
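As an illustration of how compact the classical toolkit is, here is a minimal A* pathfinding sketch on a 4-connected grid (a teaching example with a Manhattan-distance heuristic, not engine code):

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = wall), Manhattan heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()               # tie-breaker so the heap never compares nodes
    frontier = [(h(start), next(tie), start)]
    came_from = {start: None}
    g = {start: 0}                        # best known cost from start
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if node == goal:                  # walk parent links back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if not (0 <= nx < len(grid) and 0 <= ny < len(grid[0])):
                continue
            if grid[nx][ny] == 1:
                continue
            new_g = g[node] + 1
            if new_g < g.get(nxt, float("inf")):
                g[nxt] = new_g
                came_from[nxt] = node
                heapq.heappush(frontier, (new_g + h(nxt), next(tie), nxt))
    return None                           # goal unreachable
```

Production pathfinders add navmeshes, hierarchical search, and movement costs, but this loop is the same algorithm shipping games have used for decades.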

What changed around 2015 was that machine learning became capable of producing genuinely strong game-playing behaviour. AlphaGo demonstrated superhuman play in Go in 2016; AlphaStar reached Grandmaster level in StarCraft II in 2019; OpenAI Five defeated the reigning Dota 2 world champions the same year. All three were built on reinforcement learning at scale. For the underlying RL techniques, see our reinforcement learning primer.
NPC behaviour
Non-player character (NPC) AI in production games is overwhelmingly rule-based rather than learned. Reasons: deterministic behaviour is easier to debug, ship, and test; computation budgets are tight; and players often want the predictability of classical AI. Behaviour trees, GOAP (Goal-Oriented Action Planning), and utility systems remain the dominant toolkit.
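To make the utility-system idea concrete, here is a minimal sketch: score every candidate action against the current world state and pick the highest. The action names and scoring curves are illustrative, not taken from any particular engine:

```python
def choose_action(state):
    """Utility-based action selection: score each action, pick the max.
    Scores are hand-tuned illustrative curves in [0, 1]."""
    scores = {
        # low health dominates everything else
        "retreat": 1.0 - state["health"],
        # attacking is attractive when healthy and an enemy is visible
        "attack": state["health"] * (1.0 if state["enemy_visible"] else 0.0),
        # patrol is a weak default so the NPC always has something to do
        "patrol": 0.2,
    }
    return max(scores, key=scores.get)

# A healthy NPC that sees an enemy attacks; a wounded one retreats.
choose_action({"health": 0.9, "enemy_visible": True})   # "attack"
choose_action({"health": 0.2, "enemy_visible": False})  # "retreat"
```

The appeal for production games is exactly what the paragraph above describes: behaviour is deterministic given the state, trivially debuggable, and tunable by adjusting a curve rather than retraining a model.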
Machine learning is increasingly used for specific NPC capabilities — perception (enemy “vision” learned from human-like attention), movement style (animations generated or blended dynamically), and dialogue (LLM-driven conversational NPCs in experimental games like Ubisoft’s NEO NPC prototype or Inworld-based indie titles).
Procedural generation
Procedural content generation (PCG) uses algorithms — sometimes learned, often rule-based — to create game content at scale. Minecraft’s terrain, No Man’s Sky’s 18 quintillion planets, Spelunky’s levels, Diablo’s item drops — all procedurally generated.
Machine-learning-based PCG has grown. Training models on existing game levels or worlds to generate new similar content produces variations that feel natural. Research projects have generated playable levels, meaningful dungeon layouts, and coherent quest chains. Commercial deployment has been cautious — designers often prefer hand-crafted content where quality matters.
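The rule-based side of PCG is easy to demonstrate. Below is a classic cellular-automata cave generator of the kind popularized by roguelikes: randomly fill a grid, then repeatedly smooth it with a neighbour-count rule. A fixed seed makes the output reproducible, which is how games like Minecraft can regenerate identical worlds from a seed string (the parameters here are illustrative defaults):

```python
import random

def generate_cave(width, height, fill=0.45, steps=4, seed=42):
    """Cellular-automata cave generation: random fill, then each cell
    becomes a wall iff >= 5 of its 9-cell neighbourhood are walls."""
    rng = random.Random(seed)             # seeded => reproducible worlds
    grid = [[1 if rng.random() < fill else 0 for _ in range(width)]
            for _ in range(height)]
    for _ in range(steps):
        nxt = [[0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                walls = 0
                for ny in range(y - 1, y + 2):
                    for nx in range(x - 1, x + 2):
                        if 0 <= ny < height and 0 <= nx < width:
                            walls += grid[ny][nx]
                        else:
                            walls += 1    # out-of-bounds counts as wall
                nxt[y][x] = 1 if walls >= 5 else 0
        grid = nxt
    return grid
```

Each smoothing pass erodes isolated walls and fills isolated gaps, so random noise settles into connected cave-like regions with a solid border.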
Generative AI changing game development
Art assets
Concept art, textures, character portraits, and 2D assets can be generated with tools like Midjourney, Stable Diffusion, and DALL-E. Studios use these for ideation, pre-visualization, and sometimes for final assets. The craft is being re-shaped — artists spend more time directing, curating, and refining, less on blank-canvas work. For the generative-AI landscape see our generative AI primer.
3D models and environments
Text-to-3D tools (Meshy, Luma Genie, Tripo) convert prompts to 3D models. Quality is improving but not yet at AAA standard for heroes and key props. Background assets, prototyping, and indie games are adopting these tools.
Music and sound
Tools like AIVA, Suno, and Udio generate music at passable quality. Sound-effect libraries are being augmented with AI-generated variations. Composer roles are evolving rather than disappearing — composers use AI tools for exploration and iteration.
Voice acting
Tools like ElevenLabs and Altered produce convincing voice performances. This has triggered active disputes between game studios and voice-actor unions (SAG-AFTRA video game strike in 2024). Consent, compensation, and training-data use are all contested. Many productions continue to hire human voice actors while piloting AI voices for secondary characters, language localizations, or iteration.
Dialogue and narrative
LLM-driven NPC dialogue allows emergent conversations rather than fixed dialog trees. Experiments at Ubisoft, Convai, and Inworld show potential. Consistency challenges — keeping LLM output in character, lore-accurate, and safe — are real. Hybrid approaches that use LLMs for conversational texture over scripted plot-critical content seem most viable short term.
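The hybrid approach described above can be sketched as a simple routing layer: plot-critical topics always return the hand-written line, and everything else goes to a model prompted with a character sheet. This is a toy illustration of the pattern, not any studio's implementation; `llm_reply` is a placeholder for whatever model call a real game would make:

```python
def npc_reply(npc, player_line, llm_reply):
    """Hybrid dialogue routing: scripted lines for plot-critical topics,
    LLM free-form replies (kept in persona via the prompt) for small talk."""
    for topic, scripted_line in npc["script"].items():
        if topic in player_line.lower():
            return scripted_line           # canon stays author-controlled
    prompt = (
        f"You are {npc['name']}, {npc['persona']}. "
        f"Stay in character and never reveal plot secrets.\n"
        f"Player: {player_line}\n{npc['name']}:"
    )
    return llm_reply(prompt)               # free-form conversational texture

# Hypothetical character sheet for illustration:
mira = {
    "name": "Mira",
    "persona": "a wary blacksmith",
    "script": {"amulet": "The amulet is not for sale. Speak to the captain."},
}
```

Keyword routing is crude — shipping systems would use intent classification — but the division of labour is the point: the LLM never owns plot-critical content.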
Testing and QA
AI agents explore levels to find bugs, playtest balance, and exercise edge cases. Traditional scripted testing plus AI agents gives more coverage than either alone. Studios like Ubisoft, EA, and Microsoft have public research programs on ML-driven QA.
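The core loop of agent-based QA can be shown without any ML at all: drive the game with random inputs and check invariants after every step. Studio systems replace the random policy with trained agents that seek novel states, but the bug-surfacing harness looks much like this (the `TinyGame` class is a made-up example used only to exercise the harness):

```python
import random

class TinyGame:
    """Minimal stand-in game used only to demonstrate the harness."""
    def __init__(self):
        self.max_health = 10
        self.health = 10
        self.position = 0                  # 1-D level, tiles 0..9

    def step(self, action):
        if action == "left":
            self.position = max(0, self.position - 1)
        elif action == "right":
            self.position = min(9, self.position + 1)
        elif action == "spike":
            self.health = max(0, self.health - 3)
        elif action == "heal":
            self.health = min(self.max_health, self.health + 5)

    def in_bounds(self):
        return 0 <= self.position <= 9

def fuzz_game(game_factory, actions, steps=1000, seed=0):
    """Drive the game with random inputs, checking invariants every step.
    Any assertion failure is a reproducible bug report: seed + step index."""
    rng = random.Random(seed)
    game = game_factory()
    for i in range(steps):
        action = rng.choice(actions)
        game.step(action)
        assert 0 <= game.health <= game.max_health, (seed, i, action)
        assert game.in_bounds(), (seed, i, action)
    return True
```

Because the input stream is seeded, any failure replays deterministically — which is what makes this style of testing attractive alongside scripted test suites.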
Superhuman game AI
Beyond shipped games, AI research has used games as challenging benchmarks. AlphaZero mastered Go, chess, and shogi from self-play. AlphaStar reached Grandmaster in StarCraft II. OpenAI Five defeated professional Dota 2 teams. DeepMind’s Player of Games handled both perfect- and imperfect-information games, and Carnegie Mellon and Facebook’s Pluribus beat top professionals at six-player no-limit Texas hold’em. These are not shipping game features but research milestones demonstrating RL capability at scale. See our deep learning coverage for the underlying neural-network machinery.
Player-skill modelling and matchmaking
Matchmaking systems use ML to predict player skill, match players for balanced games, and detect cheaters. TrueSkill 2 (Microsoft) and Glicko-based systems are common. Anti-cheat increasingly uses behavioural anomaly detection, catching aimbots and wall-hacks from play patterns rather than code signatures alone.
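TrueSkill 2 and Glicko are Bayesian systems that also track rating uncertainty, but the core idea is easiest to see in the simpler Elo update they descend from: predict the result from the rating gap, then move each rating toward the observed outcome.

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One Elo update. score_a is 1 for an A win, 0 for a loss, 0.5 for
    a draw; k controls how far a single game moves the ratings."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))   # logistic win prediction
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Evenly matched players: a win moves each rating by k/2 = 16 points.
elo_update(1200, 1200, 1)   # (1216.0, 1184.0)
```

An upset moves ratings much further than an expected result, which is why such systems converge quickly on a player's true skill; Glicko and TrueSkill 2 refine this by shrinking the step size as confidence in a rating grows.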
The asset-flood concern
Storefronts like Steam have seen waves of low-quality AI-generated games. Shovelware was always a problem; generative AI lowers the floor further. Valve now requires developers to disclose AI-generated content, and Steam’s discoverability algorithm demotes low-quality releases. Similar dynamics are playing out on itch.io, the Epic Games Store, and mobile stores. Players dislike cheap AI output; commercially, AI-assisted development of quality titles remains viable while low-effort AI-generated content struggles.
Legal and ethical questions
- Training data. Models trained on copyrighted game assets or voice lines face legal challenges. Settlements and lawsuits are active.
- Consent. Voice actors and motion-capture performers negotiate how their likeness can be used in AI models, sometimes years after the original performance.
- Disclosure. Players increasingly expect to know when content is AI-generated. Transparency is becoming a storefront requirement.
- Jobs. Roles are shifting. The net employment effect on game development is contested and varies by studio size and genre.
Frequently asked questions
Do modern AAA games use deep learning?
Selectively. Most runtime NPC AI remains rule-based for performance and predictability reasons. Deep learning shows up in asset creation pipelines, animation blending, specific behavioural subsystems, anti-cheat, and matchmaking. Fully neural real-time NPCs are an ongoing research direction but not shipping at scale yet.
Will AI-generated games replace human-made ones?
Not in the foreseeable future. Games that connect with players require intentional design, narrative craft, and taste that current AI does not provide autonomously. AI tools augment human developers — making art iteration faster, generating placeholder assets, handling localization, powering expressive NPCs — but the design and direction remain human. The “game made by AI alone” remains a curiosity rather than a competitive product.
Are LLM-powered NPCs going to be in mainstream games?
They are being tested in demos and some indie titles, and a few AAA studios have experimental prototypes. Wide deployment depends on solving cost and latency (per-query LLM inference is expensive relative to game compute budgets), consistency (keeping dialogue in character and on-story), and safety (preventing offensive or lore-breaking output). Expect to see incremental adoption — LLM-powered secondary characters, background barks, optional chat features — before full protagonist-quality LLM dialogue.