February 26, 2026, San Francisco - World Labs announced that it has raised $1 billion in new funding. The round accelerates its mission to build spatial intelligence — AI that goes beyond 2D images and text to truly perceive, generate, reason about, and interact with rich, persistent 3D worlds.
Founded by AI pioneer Dr. Fei-Fei Li alongside Justin Johnson, Christoph Lassner, and Ben Mildenhall, World Labs emerged from stealth in 2024 with $230M in funding. Now, with backing from AMD, Autodesk, Emerson Collective, Fidelity, NVIDIA, Sea, FoundersX Ventures, and others, the company is pushing the frontier of physical AI.
As Dr. Fei-Fei Li stated: “If AI is to be truly useful, it must understand worlds, not just words. Worlds are governed by geometry, physics, and dynamics, and reconciling the semantic, spatial, and physical is the next great frontier of AI.”

Why Spatial Intelligence Is the Next Leap
Today's AI masters flat data, but the real world is 3D, dynamic, and interactive. World models create consistent internal representations of environments — no hallucinations, no viewpoint inconsistencies — enabling persistent spaces you can explore, edit, and simulate.
World Labs' flagship product, Marble, launched in November 2025, delivers this today.
Marble: Generate & Edit Infinite 3D Worlds
Marble is a frontier multimodal world model that lets anyone — creators, designers, engineers, filmmakers — build high-fidelity, spatially coherent 3D environments from text prompts, images, videos or 360° panoramas, and coarse 3D layouts.
Key features include:
AI-native editing: local tweaks, global restyling, object removal/addition
Expansion & composition: grow scenes or merge worlds seamlessly
Exports: Gaussian Splats for real-time rendering, triangle meshes with physics colliders, controlled videos
In January 2026, World Labs launched the World API to embed these capabilities into apps and workflows.

Groundbreaking Real-Time Frame Models (RTFM)
World Labs unveiled RTFM (Real-Time Frame Model) in October 2025. RTFM generates video frames interactively in real time as users navigate and explore, using posed frames as spatial memory to maintain unbounded persistence and consistency without explicit 3D reconstruction.
It renders complex effects like lighting, reflections, and shadows learned end-to-end from data, runs efficiently on a single H100 GPU, and integrates seamlessly with Marble to turn single images or prompts into explorable, persistent 3D worlds.
This breakthrough enables fluid, on-the-fly viewpoint generation, overcoming the traditional trade-off between coherence and speed and setting a new standard for interactive spatial AI.

Transforming Industries, Augmenting Human Creativity
This new funding will supercharge World Labs' ability to scale Marble, advance research frontiers such as real-time interaction, agent simulation, and tighter integration with physics engines, and expand into new verticals.
Imagine:
Filmmaking: Instant virtual production stages from concept art
Gaming: Rapid world prototyping that feels alive
Architecture & Design: AI-assisted spatial reasoning that respects real-world constraints
Robotics & Simulation: Training embodied AI in photorealistic, persistent 3D worlds before real-world deployment
Storytelling & Education: Anyone creating immersive experiences without traditional 3D expertise
The company is not replacing creators, but giving them superpowers. As Dr. Fei-Fei Li often emphasizes, this is human-centered AI: tools that expand imagination rather than automate it away.
Helen H. Liang, Ph.D., Founder & Managing Partner of FoundersX Ventures, stated, "We are proud to back World Labs in their landmark $1 billion round. Dr. Fei-Fei Li and her team are redefining AI with true spatial intelligence—persistent, physically grounded world models that unlock creativity in design, robotics, simulation, and more. They embody our focus on boundary-pushing AI founders. Honored to support this mission!"
