The Genesis of the Neural Oasis:
Building the Bare-Metal Reality Engine


The boundary between simulation and reality is dissolving. As we transition from rasterized graphics to neural rendering, the dream of a persistent, infinite virtual world is finally becoming an engineering problem we can solve.

"People come to the OASIS for all the things they can do, but they stay for all the things they can be." — James Halliday

James Halliday's avatar, Anorak the All-Knowing.

The Dream of Anorak

My obsession didn't start with a research paper or a frantic GitHub commit. It started in the dark of a movie theater, watching a scene that would imprint itself on my mind forever.

I watched James Halliday (the eccentric genius played by Mark Rylance) leave behind a message for the world. He wasn't just a game designer; he was an architect of a new reality. He had built the OASIS—a place where the laws of physics were optional, where identity was fluid, and where the world was not a finite rock in space but an unbounded, persistent manifold of digital possibility.

As I sat there, watching his avatar Anorak hand over the keys to his kingdom, I felt a specific, burning curiosity. Not the passive enjoyment of a moviegoer, but the frantic, reverse-engineering mindset of a builder. I didn't just want to play in the Oasis. I wanted to build the engine that ran it.

For years, this dream felt painfully out of reach. The industry was stuck in the "Rasterization Age"—building rigid meshes, baking static lightmaps, and scripting limited interactions. We were simulating the appearance of a world, not the world itself. Then came the "Generative Age" with models like Sora, where we could hallucinate beautiful dreams, but like dreams, they dissolved the moment you tried to touch them. A chair might turn into a table; a door might lead nowhere. It was visually stunning, but physically hollow.

But today, something has shifted.

The "March of Nines" has brought us to the threshold of a new era. We are finally graduating from hallucinating pixels to simulating states. We are swapping out the black-box neural networks of the past for a differentiable, action-conditioned Gaussian substrate.

We are building the Bare-Metal Reality Engine. And for the first time, I believe we have the blueprints for Halliday's masterpiece.

The Substrate of Reality

To understand why this moment is different, we have to look at the "atoms" of our digital world. In traditional games, the world was made of triangles—rigid, sharp, and hard to update. Then we tried Neural Radiance Fields (NeRFs), which were like holograms: beautiful to look at, but you couldn't touch them. They were ghosts.

3D Gaussian Splatting (3DGS) changed everything.

Think of 3D Gaussians as "fuzzy atoms." Each one has a position, a shape, an opacity, and a color. Unlike triangles, they are differentiable—meaning we can train them with gradient descent. Unlike NeRFs, they are explicit—meaning we can reach into the cloud and move them.

In the new Gaussian World Model (GWM) framework, we aren't just predicting the next frame of a video. We are predicting the flow of these atoms. When you push a door in the neural Oasis, the model predicts the trajectory of thousands of Gaussians, updating their state in real-time to simulate collision, friction, and mass. This isn't a video recording; it's a physics simulation learned entirely from data. It is the "Matter" of the Oasis.
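To make the "fuzzy atom" idea concrete, here is a minimal sketch in Python. The `Gaussian` class and `step_gaussians` function are illustrative names, not any published GWM API; a real world model would predict the per-atom flow with a learned network, where this sketch just takes the velocities as input.

```python
import numpy as np

class Gaussian:
    """One 'fuzzy atom': a position, a shape (covariance), an opacity, a color."""
    def __init__(self, position, covariance, opacity, color):
        self.position = np.asarray(position, dtype=float)      # 3-vector
        self.covariance = np.asarray(covariance, dtype=float)  # 3x3 shape matrix
        self.opacity = float(opacity)                          # in [0, 1]
        self.color = np.asarray(color, dtype=float)            # RGB

def step_gaussians(gaussians, velocities, dt=0.1):
    """Toy action-conditioned update: advect each atom along a predicted flow.
    In a real Gaussian World Model, `velocities` would come from a network
    conditioned on the agent's action (e.g. 'push the door')."""
    for g, v in zip(gaussians, velocities):
        g.position = g.position + dt * np.asarray(v, dtype=float)
    return gaussians

# A single atom nudged along +x by a "push" action.
atom = Gaussian([0, 0, 0], np.eye(3) * 0.01, opacity=0.9, color=[1.0, 0.2, 0.2])
step_gaussians([atom], velocities=[[1.0, 0.0, 0.0]], dt=0.5)
print(atom.position.tolist())  # [0.5, 0.0, 0.0]
```

The key property is that every field here is a continuous quantity, so gradients can flow through it during training—unlike a triangle mesh's discrete connectivity.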

The Architects of the Simulation

Just as the Oasis had Gunter clans and IOI corporations vying for control, the race to build this Reality Engine is being driven by three distinct research frontiers. My work involves synthesizing these streams into a coherent architecture—the "Anorak Engine."

First, we have The Spatialists at World Labs. They are solving the problem of Place. Their models, like RTFM, introduce the concept of "Spatial Memory." In a standard video generator, the world is ephemeral; if you spin around, the room behind you changes. RTFM assigns a 3D Pose to every frame, ensuring that the world persists in memory even when unobserved. This is the foundation of digital geography.
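The core of the spatial-memory idea can be sketched in a few lines: store every observation keyed by its 3D pose, so that revisiting a location retrieves what was there before instead of re-hallucinating it. This `SpatialMemory` class is my own simplification for illustration, not RTFM's actual mechanism.

```python
import numpy as np

class SpatialMemory:
    """Minimal sketch of pose-indexed memory: each observation is stored with
    the 3D position it was seen from, so the world persists when unobserved."""
    def __init__(self):
        self.entries = []  # list of (position, latent) pairs

    def write(self, position, latent):
        self.entries.append((np.asarray(position, dtype=float), latent))

    def read(self, position, radius=1.0):
        """Return everything observed within `radius` of the query pose."""
        q = np.asarray(position, dtype=float)
        return [z for p, z in self.entries if np.linalg.norm(p - q) <= radius]

mem = SpatialMemory()
mem.write([0, 0, 0], "red_room")
mem.write([10, 0, 0], "blue_hall")
# Spin around, wander off, come back: the red room is still the red room.
print(mem.read([0.2, 0, 0]))  # ['red_room']
```

A frame-only video generator has no such index; its "memory" is whatever survives in the context window, which is why rooms mutate behind your back.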

Second, The Reasoners at Meta FAIR are solving the problem of Laws. Yann LeCun's V-JEPA architecture functions as the "physics engine" of the simulation. It teaches the AI common sense—understanding the latent causal link between "fragile object" and "hard surface." It ensures the simulation obeys a consistent set of rules, preventing the "dream logic" that plagues current generative models.

Finally, The Scalers at Google DeepMind are solving the problem of Interface. With Genie 3, they demonstrated that internet-scale video can be converted into a playable, interactive environment. This is the Latent Action Interface—allowing us to control the simulation not with text prompts, but with pure intent. It is the precursor to the haptic interface of the future.

Engineering the Unbounded Manifold

So, how do we put this together? How do we build the 'Oasis' as Halliday envisioned it: an unbounded manifold where infinite procedural detail and multi-agent causality emerge natively? This is the engineering challenge I have dedicated my career to.

The Infinite Detail Problem: Neural Proceduralism

We cannot design every planet in the Oasis by hand. We need Neural Proceduralism. We need models like WorldGrow, which use "Structured Latents" to grow the world on demand.

Imagine walking through a forest. The trees directly in front of you are rendered as high-fidelity 3D Gaussians (Near-Field). The forest in the distance exists only as compressed latent vectors (Far-Field)—a "dream" of a forest. As you walk forward, the Reality Engine "inflates" the dream into atoms, resolving the branches and leaves just before your photon receptors can register them. This is the only way to scale to the size of a universe.
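The near-field/far-field hand-off can be sketched as a distance test over world chunks. Everything here is hypothetical scaffolding: `inflate` stands in for a learned latent-to-Gaussians decoder, and the chunk layout is invented for the example.

```python
import math

def inflate(latent):
    """Stand-in for a learned decoder that turns a compressed latent code
    into explicit 3D Gaussians. Here it just fabricates placeholder atoms."""
    return [f"gaussian_{latent}_{i}" for i in range(3)]

def resolve_detail(chunks, camera_pos, near_radius=50.0):
    """Promote near-field chunks from latent 'dreams' to explicit atoms;
    deflate far-field chunks back to their compressed form."""
    for center, chunk in chunks.items():
        if math.dist(center, camera_pos) <= near_radius:
            if chunk["gaussians"] is None:
                chunk["gaussians"] = inflate(chunk["latent"])  # dream -> atoms
        else:
            chunk["gaussians"] = None  # atoms -> dream
    return chunks

chunks = {
    (0.0, 0.0):   {"latent": "oak_grove", "gaussians": None},
    (500.0, 0.0): {"latent": "far_ridge", "gaussians": None},
}
resolve_detail(chunks, camera_pos=(10.0, 0.0))
print(chunks[(0.0, 0.0)]["gaussians"] is not None)  # True  (near-field, inflated)
print(chunks[(500.0, 0.0)]["gaussians"] is None)    # True  (far-field, still latent)
```

The design point is that memory scales with what you can see, not with the size of the universe: the distant forest costs only its latent code until you walk toward it.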

The Causality Problem: The Distributed Ledger of Physics

The Oasis is shared. If I carve my name into a tree, you must see it 100 years later. We solve this with Decentralized Gaussian Fusion.

We don't send video frames across the network. We send Delta-Updates to the Gaussian Cloud. If I push a box, I am broadcasting a force vector. The server calculates the deformation, updates the Gaussian Covariance Matrices, and syncs the new state to you. It is a distributed ledger, but instead of financial transactions, it records the history of physical interactions.
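A delta-update over the wire might look like the sketch below. The record layout (`id`, `d_position`, `d_covariance`) is invented for illustration; the point is that a physical interaction syncs as a few dozen bytes of state change, not as rendered frames.

```python
import numpy as np

def apply_delta(cloud, delta):
    """Apply one broadcast delta-update to a shared Gaussian cloud.
    `cloud` maps gaussian id -> state; a delta carries only what changed."""
    g = cloud[delta["id"]]
    g["position"] = g["position"] + np.asarray(delta["d_position"], dtype=float)
    g["covariance"] = g["covariance"] + np.asarray(delta["d_covariance"], dtype=float)
    return cloud

# A shared box, represented (absurdly minimally) by one Gaussian.
cloud = {7: {"position": np.zeros(3), "covariance": np.eye(3) * 0.01}}

# I push the box; the server resolves the physics and broadcasts the result.
delta = {"id": 7, "d_position": [0.3, 0.0, 0.0], "d_covariance": np.zeros((3, 3))}
apply_delta(cloud, delta)
print(cloud[7]["position"].tolist())  # [0.3, 0.0, 0.0]
```

Because each delta is an append-only record of who changed what, replaying the log reconstructs the world's entire physical history—the "ledger" property.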

Halliday's Regret
"I created the OASIS because I never felt at home in the real world."

The Invisible Key

In the end, Halliday didn't build the Oasis just to escape reality. He built it to find a way to connect.

We are building the most powerful connection engine in human history. A place where distance is irrelevant, where potential is unlimited, and where the digital world feels as warm, heavy, and consequential as the physical one.

The "Oasis" isn't going to be a piece of software you install. It’s going to be a persistent neural state—a massive, high-dimensional manifold that we inhabit together.

Every line of code I write, every paper I reimplement, brings us one step closer to that moment when the pixels stop flickering and the world becomes solid. The moment when we find the first Key.

The simulation is loading. See you on the leaderboard.


References & Inspiration

  • Cline, E. (2011). Ready Player One. Random House.
  • DeepMind. (2026). Genie 3: Large-scale world models for interactive environments. Google DeepMind Research Blog.
  • Kerbl, B., et al. (2023). 3D Gaussian Splatting for Real-Time Radiance Field Rendering. ACM Transactions on Graphics.
  • Li, F.-F., & World Labs Team. (2025). Marble: A Multimodal 3D World Model. World Labs Whitepaper.
  • Meta FAIR. (2025). V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning. Meta AI Research.