News
Roblox Studio can now generate 3D meshes from text prompts
March 22, 2026
Roblox shipped a major update to Studio's built-in Assistant on March 19 that lets developers generate textured 3D meshes from a text prompt in seconds. The update also adds four new MCP Server tools, making the same capabilities available to external AI clients like BloxBot, Claude Code, and Cursor.
Mesh generation, built into Studio
The new mesh generation feature uses Roblox's GenerateModelAsync API under the hood. You can trigger it in two ways: type /generate followed by a description, or just ask conversationally - “make a futuristic crate” works fine.
What makes this useful in practice:
- Batch creation. Generate multiple meshes at once instead of one at a time.
- Bounding boxes. Select a Part in your workspace as a spatial constraint. Assistant uses its size and position to guide generation, so the output fits where you need it.
- Triangle count limits. Set a maximum triangle count (default is 10,000). Use a few hundred for low-poly props, or go higher for detailed assets.
- Multitasking. Generation runs in the background. You can keep building while it works.
This is a significant step toward what Roblox has been calling “4D creation” - generating objects that include geometry, textures, and behavior from a single prompt.
Four new MCP Server tools
The same mesh generation capability is now exposed through Studio's MCP Server, along with three other new tools. Any MCP-compatible client can call them:
- generate_mesh - Create a textured 3D mesh from a text prompt. Same API that powers the in-Studio feature.
- generate_material - Generate material variants from a text description.
- insert_from_creator_store - Search the Creator Store and insert models directly into your experience.
- screen_capture - Capture a screenshot of the Studio viewport during play mode for visual verification.
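Because MCP is a JSON-RPC 2.0 protocol, calling any of these tools comes down to sending a standard `tools/call` request. Here is a minimal Python sketch of what a client would send for `generate_mesh`; the argument names (`prompt`, `max_triangles`) are illustrative assumptions, not the tool's documented schema:

```python
import json

def tools_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 'tools/call' request, per the MCP spec."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical arguments -- check the tool's actual input schema
# (advertised via tools/list) before relying on these names.
request = tools_call(1, "generate_mesh", {
    "prompt": "a futuristic crate",
    "max_triangles": 500,  # assumed name for the triangle-count limit
})
print(json.dumps(request, indent=2))
```

In practice an MCP SDK builds this envelope for you; the point is that any client that speaks the protocol can reach these tools, not just the ones Roblox lists.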
The screen capture tool is particularly interesting for AI-assisted workflows. It lets an external AI agent see what your game actually looks like at runtime, not just read the code. If you ask an AI to change lighting or adjust UI layout, it can now verify its own work visually.
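The MCP spec returns images as base64-encoded content items in a tool result, so on the client side, pulling the screenshot bytes out of a `screen_capture` response is a few lines. A sketch, assuming the response carries a standard MCP `content` array and nothing Roblox-specific:

```python
import base64

def extract_screenshot(result):
    """Return raw image bytes from the first image content item
    in an MCP tool result, or None if there is no image."""
    for item in result.get("content", []):
        if item.get("type") == "image":
            return base64.b64decode(item["data"])
    return None

# Tiny stand-in result for illustration; a real response carries a PNG.
fake_result = {"content": [{
    "type": "image",
    "mimeType": "image/png",
    "data": base64.b64encode(b"PNG...").decode(),
}]}
print(extract_screenshot(fake_result))
```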
What this means for external AI tools
Before this update, external AI clients connected to Studio's MCP Server could read your project, edit scripts, and manipulate instances, but they couldn't generate assets. Now they can.
If you're using BloxBot, Claude Code, or Cursor with the Roblox Studio MCP Server, you can ask your AI to generate a mesh, drop it into the scene, and check the result with a screenshot, all without touching Studio's UI yourself.
A typical workflow might look like:
- Ask the AI to generate a set of props (“create five sci-fi crates with different sizes”)
- Have it position them in your scene using existing instance tools
- Run a playtest and use screen_capture to verify placement
- Iterate based on what the screenshot shows (“the large crate clips through the wall, move it back 3 studs”)
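Assuming your MCP client exposes a generic call-tool primitive, the steps above reduce to a scripted sequence. A stdlib-only sketch of the plan; the tool argument names are illustrative, not the server's documented schema:

```python
def build_workflow(prop_count=5):
    """Return the (tool, arguments) steps for a generate -> verify loop."""
    steps = []
    for i in range(prop_count):
        steps.append(("generate_mesh",
                      {"prompt": f"sci-fi crate, variant {i + 1}, unique size"}))
    # Positioning happens through the existing instance-editing tools, then
    # a playtest screenshot closes the loop so the AI can check its own work.
    steps.append(("screen_capture", {}))
    return steps

for tool, args in build_workflow():
    print(tool, args)
```

The iteration step is where this pays off: the AI reads the screenshot, spots the clipping crate, and issues another round of instance edits without a human relaying what the viewport shows.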
Other things worth noting
- OpenGameEval expanded. Roblox's open benchmark for evaluating AI model performance on game development tasks now includes 30 debug-focused evaluations across 15 scenarios with injected bugs. If you're choosing between models, this is the dataset to watch.
- Mesh streaming (early access). A separate update this week introduced engine-level mesh streaming with cloud-generated levels of detail. It works independently of StreamingEnabled and, for now, handles only unskinned meshes.
- AvatarAbilities library. The beta got a 25% efficiency boost from refactoring, plus more than 25 bug fixes, including one for intermittent client crashes during character movement.
The full announcement is on the Roblox DevForum. If you want to try these tools with an external AI client, the getting started guide covers how to connect BloxBot or any MCP client to Roblox Studio.