We deeply understand the pain points of game developers because we've been there. As the creators of Zenith: The Last City, the most popular VR MMO, we’ve faced firsthand the challenges of designing complex game systems, balancing mechanics, and managing content pipelines. Our experience building and maintaining a large-scale multiplayer game has given us unique insights into how AI can empower small teams to compete with AAA studios.

Game development is an incredibly complex process, filled with technical barriers that can slow down even the most experienced teams. We believe AI should eliminate those barriers, giving developers instant access to powerful design tools and allowing them to focus purely on creativity.

AI That Understands Gameplay

Making a great game isn’t just about writing code—it’s about designing mechanics, balancing interactions, and creating engaging systems. While LLMs have shown great success at developing simple game mechanics, they still can’t one-shot a game in a game engine without serious bugs.

To address this, we designed a modular system of game-building blocks. Instead of generating raw code, our AI assembles functional components, similar to Unity’s visual scripting and Unreal’s Blueprints. This approach makes AI-assisted game development more accessible, reliable, and—most importantly—fun.

Breaking Through AI Limitations

We’ve been pushing LLMs to their limits, experimenting with new ways to make AI-generated game mechanics actually usable.

Here’s what we’ve learned:

Raw code generation

AI can write C# and Blueprint scripts, but debugging logic errors often killed productivity and flow state. AI-generated code works well for simple mechanics, but once complexity increases, subtle logic flaws become harder to detect and fix. We recommend using AI-generated code sparingly, or within a framework, to reduce compounding bugs.

Modular building blocks

By assembling pre-tested components instead of writing raw code, AI produces more consistent, functional mechanics. This approach significantly reduces errors while allowing designers to experiment freely. There is a trade-off: modularity means less absolute freedom than raw code, but the upside is dramatically improved reliability.
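To make the trade-off concrete, here is a minimal Python sketch of assembling a mechanic from a vetted block registry instead of emitting raw code. The block names, ports, and registry are purely illustrative, not our actual component set.

```python
from dataclasses import dataclass, field

# Hypothetical building blocks -- names and ports are illustrative only.
@dataclass
class Block:
    name: str
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)

REGISTRY = {
    "OnTriggerEnter": Block("OnTriggerEnter", outputs=["other"]),
    "ApplyDamage": Block("ApplyDamage", inputs=["target", "amount"]),
    "PlaySound": Block("PlaySound", inputs=["clip"]),
}

def assemble(block_names: list[str]) -> list[Block]:
    """Resolve LLM-chosen block names against the pre-tested registry.

    Unknown names are rejected outright instead of being turned into raw
    code, which is where compounding bugs tended to come from.
    """
    missing = [n for n in block_names if n not in REGISTRY]
    if missing:
        raise ValueError(f"LLM referenced unknown blocks: {missing}")
    return [REGISTRY[n] for n in block_names]

# Example: a "damage on touch" mechanic assembled entirely from vetted parts.
mechanic = assemble(["OnTriggerEnter", "ApplyDamage", "PlaySound"])
```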

Goldi-blocks balance

If blocks are too fine-grained, current-day LLMs connect them incorrectly or illogically. If blocks are too large, they limit creative flexibility. We found the sweet spot by testing different levels of granularity, ultimately arriving at intermediate-sized building blocks that balance structure with freedom.
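Purely as an illustration of that granularity spectrum (every block name here is hypothetical), the same dash ability could decompose in three very different ways:

```python
# Too fine: many low-level nodes to wire up, and current-day LLMs tend
# to connect them incorrectly or illogically.
fine_grained = [
    "ReadInput", "CompareButton", "Branch", "GetVelocity",
    "MultiplyVector", "SetVelocity", "StartTimer", "ResetVelocity",
]

# Too large: one opaque block leaves the designer no room to tweak anything.
too_large = ["DashAbility"]

# Intermediate: few enough connections to get right, enough seams to customize.
intermediate = ["OnButtonPress", "ApplyImpulse", "CooldownGate"]
```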

Structured output formats

AI adheres to a strict, predictable, verifiable schema that prevents it from generating broken logic chains. The provided schema is used to constrain the LLM's decoding step, enforcing the format strictly, reducing the number of invalid connections, and dramatically improving execution success rates. Careful schema design is necessary to work within the 5-level schema-nesting constraint of flagship LLM APIs.
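Here is a minimal sketch of what such a schema could look like, assuming Pydantic on the tooling side and a provider whose structured-output mode accepts a JSON schema for constrained decoding. The field names are illustrative, not our production format.

```python
from pydantic import BaseModel, Field

# A deliberately shallow schema: root -> blocks/connections -> fields,
# which stays comfortably inside the 5-level nesting limit.
class Connection(BaseModel):
    from_block: str
    from_port: str
    to_block: str
    to_port: str

class BlockInstance(BaseModel):
    id: str
    type: str
    params: dict[str, float] = Field(default_factory=dict)

class MechanicGraph(BaseModel):
    name: str
    blocks: list[BlockInstance]
    connections: list[Connection]

# Hand this JSON schema to the provider's structured-output option so
# invalid shapes can't be sampled at all.
schema = MechanicGraph.model_json_schema()

# Validate the response anyway; anything malformed fails loudly here
# instead of becoming a broken logic chain in the engine.
raw = '{"name": "door_switch", "blocks": [], "connections": []}'
graph = MechanicGraph.model_validate_json(raw)
```

Keeping the graph flat, with blocks and connections as parallel lists rather than a deeply nested tree, is what keeps the schema inside that depth limit.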

Step-by-step behavior planning

AI generates mechanics in stages, refining each step before assembling the final product. Breaking complex mechanics down into logical steps eliminates the problem of AI making large, incomprehensible mistakes.
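A simplified sketch of the staged flow, with call_llm standing in for whatever model client is actually used:

```python
def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your LLM client of choice.
    raise NotImplementedError

def build_mechanic(request: str) -> str:
    # Stage 1: a high-level plan in plain language, easy to sanity-check.
    plan = call_llm(f"List the steps needed to implement: {request}")

    # Stage 2: refine each step into concrete block choices before anything
    # is assembled, so mistakes stay small and local.
    refined_steps = [
        call_llm(f"Pick building blocks for this step only:\n{step}")
        for step in plan.splitlines() if step.strip()
    ]

    # Stage 3: assemble the final graph from the already-refined steps.
    return call_llm(
        "Combine these per-step block choices into one mechanic graph:\n"
        + "\n".join(refined_steps)
    )
```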

Prefab-based AI augmentation

To provide AI with a flexible yet structured way to modify game scenes, we built a dedicated folder of prefabs that the AI indexes and uses for augmentation. This allows the AI to dynamically introduce new functionality, swap out skyboxes, adjust lighting, and integrate music—all while staying within a predefined creative framework. This approach makes it easy for existing game teams to modify what the AI can do by simply updating a folder, rather than requiring complex AI retraining or a custom asset pipeline. Our recommendation? Treat the AI prefab folder as a living design system that evolves alongside your game.
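A minimal sketch of how that prefab index might be built, assuming a Unity-style project with a dedicated folder of .prefab files; the path and grouping are illustrative.

```python
from pathlib import Path

# Hypothetical dedicated folder the AI is allowed to draw from. Changing
# what the AI can do is just adding or removing files here.
PREFAB_DIR = Path("Assets/AIPrefabs")

def index_prefabs() -> dict[str, list[str]]:
    """Group prefab names by category folder (e.g. skyboxes/, lighting/, music/)."""
    index: dict[str, list[str]] = {}
    for prefab in PREFAB_DIR.rglob("*.prefab"):
        index.setdefault(prefab.parent.name, []).append(prefab.stem)
    return index

# The catalog is serialized into the prompt so the model can only reference
# prefabs that actually exist in the project.
catalog = index_prefabs()
```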

Retrieval-Augmented Generation (RAG)

AI pulls from a curated knowledge base of successful mechanics, boosting reliability and creativity. The downside? Bad or irrelevant data can pollute results. We mitigate this by manually curating and ranking AI-generated behaviors, ensuring only high-quality examples are used.
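A bare-bones sketch of that retrieval step, with embed() as a placeholder for whatever embedding model is plugged in; the curation fields (approved, rank) are where the manual quality control lives.

```python
from math import sqrt

def embed(text: str) -> list[float]:
    # Placeholder for your embedding model of choice.
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Curated knowledge base: only human-approved mechanics, each with a manual
# rank so weak examples never make it into the prompt. Entries are illustrative.
knowledge_base = [
    {"desc": "double jump with coyote time", "graph": "...", "rank": 5, "approved": True},
    {"desc": "key opens the matching door", "graph": "...", "rank": 4, "approved": True},
]

def retrieve(query: str, k: int = 3) -> list[dict]:
    q = embed(query)
    candidates = [e for e in knowledge_base if e["approved"]]
    candidates.sort(key=lambda e: (cosine(q, embed(e["desc"])), e["rank"]), reverse=True)
    return candidates[:k]
```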

Fine-tuning vs. prompt engineering

Fine-tuned models improve AI's precision, but prompt engineering is a faster, more scalable way to increase accuracy. While fine-tuning gives better long-term performance, it requires large, well-labeled datasets for formats LLMs are unfamiliar with—something smaller teams may struggle to maintain. When in doubt, start with prompt engineering.
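As a rough sketch of the prompt-engineering path, a handful of curated examples placed directly in the prompt can stand in for a fine-tuned model until a dataset exists (the example mechanics and block names are hypothetical):

```python
# A few curated request -> graph pairs shown to the model in-context.
FEW_SHOT_EXAMPLES = [
    ("pressure plate opens a gate",
     '{"blocks": ["OnTriggerEnter", "OpenGate"], "connections": []}'),
    ("enemy flees at low health",
     '{"blocks": ["OnHealthBelow", "FleeFrom"], "connections": []}'),
]

def build_prompt(request: str) -> str:
    shots = "\n\n".join(f"Design: {d}\nGraph: {g}" for d, g in FEW_SHOT_EXAMPLES)
    return f"{shots}\n\nDesign: {request}\nGraph:"
```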

User-generated behaviors

Designers can upload working mechanics with one click, allowing AI to learn from real-world use cases and evolve over time. This creates a feedback loop that continuously improves AI outputs.
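A minimal sketch of what that one-click upload might append to a review queue; the file path and fields are illustrative, and nothing is used until a human approves and ranks it.

```python
import json
from pathlib import Path

SUBMISSIONS = Path("ai_knowledge/submissions.jsonl")  # illustrative path

def upload_behavior(description: str, graph: dict) -> None:
    """One-click upload: a working mechanic goes into the review queue.

    Entries stay unapproved until someone curates and ranks them, so the
    feedback loop improves the knowledge base without polluting it.
    """
    SUBMISSIONS.parent.mkdir(parents=True, exist_ok=True)
    record = {"desc": description, "graph": graph, "approved": False, "rank": None}
    with SUBMISSIONS.open("a") as f:
        f.write(json.dumps(record) + "\n")
```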

AI-Powered UI That Just Feels Right

Speed is everything in game development, so we built UX paradigms that let designers iterate at the speed of thought.

Doc -> Game. Users loved writing game ideas in a doc and watching the AI bring them to life. Our AI tool takes in linked Google Docs, context from the scene, and objects that are dragged and dropped onto it (a sketch of how that context comes together follows this list).

AI surfaces the right controls. No more clicking through endless inspector panels. Users loved it when the AI surfaced the correct sliders so they didn't have to hunt for them.

Version switching. Instantly compare AI-generated results and revert with ease.

Conversational editing. Tweak game mechanics by simply describing the changes in natural language.
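Here is a rough sketch of how the doc -> game context could be assembled; the three helper functions are placeholders for the real integrations (Google Docs export, scene inspection, drag-and-drop payloads), not existing APIs.

```python
def fetch_doc_text(doc_url: str) -> str:
    raise NotImplementedError("export the linked Google Doc as plain text")

def summarize_scene() -> str:
    raise NotImplementedError("describe the currently open scene")

def describe_object(obj_id: str) -> str:
    raise NotImplementedError("describe an object dropped onto the tool")

def build_generation_context(doc_urls: list[str], dropped_object_ids: list[str]) -> str:
    """Fold the design doc, the scene, and any dropped objects into one prompt context."""
    parts = ["# Design doc"]
    parts += [fetch_doc_text(url) for url in doc_urls]
    parts += ["# Current scene", summarize_scene()]
    parts += ["# Dropped objects"]
    parts += [describe_object(oid) for oid in dropped_object_ids]
    return "\n\n".join(parts)
```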

The Future of AI-Assisted Game Dev

What you’ve seen here are only our initial prototypes of AI-assisted development — they’re not representative of the final product, but hopefully they’ve shown you just how much potential Gen AI tools have.

We stand for a human-centric future where creatives can build games at ludicrous speeds, and where game devs are at the center of game development.
