Physical Spaces in Virtual Realities
Should VR Care About Physical Space?
Respecting the user’s physical space is an integral part of keeping experiences authentic and immersive in augmented reality.
If a user throws a virtual ball, the program should bounce the ball off their actual floors and walls. If they want to play a robot-shooting game in the living room, the game needs to “know” the room’s boundaries to generate well-placed robot-spawning portals.

In contrast, virtual reality (VR) experiences typically do not rely on “knowing” the physical space — and understandably so. Since the essence of VR is to completely immerse the user in an experience, should the developer consider anything about the user’s real-world physical space?
I say the answer is yes — and my explanation has to do with the limitations of movement in VR.
Movement in VR — What’s Missing?
Well, let’s imagine we wanted to create a VR version of the robot game. In the AR version, the robots “know” about the living room, bedroom, backyard, or wherever the user is playing. Thus, the program is able to spawn robots in appropriate locations, making them reachable and guaranteeing that the user can interact with them.
The VR program would know nothing about the user’s space, so how could it ensure users can easily interact with the robots? The VR world is practically endless — so where should the robots spawn? And if they spawn far away, how would the user reach them?
VR developers would generally tackle these design challenges in one of three ways:
Full Virtual Movement
With full virtual movement, users can walk/run/hop/etc. virtually while remaining stationary in the real world. This is usually done with the controller’s joystick (Journey of the Gods) or with point-and-teleport abilities (Half-Life: Alyx).

In some contexts, these methods would make perfect sense (say, teleportation in a Harry Potter game, or the joystick in a flight simulator).
However, making all movement virtual treats the user more like a camera than an actual agent. In the wrong contexts, this can feel contrived and weaken the immersion.
Static Setting
Building the virtual space around the user statically is another option. Here, the user doesn’t move in the world; rather, the virtual scene changes in front of the user. In the puzzle-platformer Moss, for example, the setting and perspective change with a fade in/out effect when the user finishes a level.
As with full virtual movement, using static settings in the right context can also be a great design choice, especially if the experience wants the user to take on an observer role. This is exactly the case with Moss, which even refers to the player as the “Reader”.
Limited Movement
Finally, developers can use a mechanism that relies on physical movement. The resulting experience assumes the user can move a few steps in every direction, and all interactions with virtual objects occur within this small radius of allowed movement. This can be seen in the arcade sports games Beat Saber and Racket NX.
In my opinion, this type of movement best leverages the VR medium. While many aspects of games like Moss and Journey of the Gods could be ported to flat-screen gaming consoles, it’s practically impossible to do the same for games like Racket NX, which make powerful use of the headset’s ability to track the user’s movements.
A Fourth Way?
So — which of these methods would allow us to create the robot game Dr. Grordbort’s Invaders in VR? The closest fit would be limited movement. But, as with the beats in Beat Saber or the ball in Racket NX, the robots would always have to approach the user.
This is vastly different from the AR original, in which users are encouraged to physically explore the space, approaching robots themselves and dodging projectiles with their bodies. Would it be possible to immerse the user entirely in a virtual world while also letting them move freely?
Perhaps — but we would need information about the user’s play area. Until recently, such input wasn’t readily available in VR. However, the new generation of Oculus headsets changed that: not only do they let users easily define the playable area, they also expose this data to developers in the form of a play area polygon.
Upon discovering the availability of this data, I had an interesting thought: what if we dynamically designed the environment given the polygon as an input? That is, instead of creating a predefined environment with fixed asset positions, we could use the polygon to generate custom environments that fit the user’s space.
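To make this concrete, here is a minimal, engine-agnostic sketch of how a play area polygon could drive placement decisions: a ray-casting inside test, plus rejection sampling to pick positions guaranteed to lie within the user’s space. It’s written in Python rather than an actual headset SDK’s language, and the function names and polygon format are my own assumptions, not any vendor’s API.

```python
import random


def point_in_polygon(x, z, polygon):
    """Ray-casting test: is the floor point (x, z) inside the polygon?

    `polygon` is a list of (x, z) vertices on the floor plane — the kind
    of boundary data a headset SDK might report for the play area.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, z1 = polygon[i]
        x2, z2 = polygon[(i + 1) % n]
        if (z1 > z) != (z2 > z):  # this edge straddles a ray cast from (x, z)
            x_cross = x1 + (z - z1) * (x2 - x1) / (z2 - z1)
            if x < x_cross:       # the crossing lies to the right of the point
                inside = not inside
    return inside


def sample_spawn_points(polygon, count, rng=None):
    """Rejection-sample `count` positions that lie inside the play area."""
    rng = rng or random.Random()
    xs = [px for px, _ in polygon]
    zs = [pz for _, pz in polygon]
    points = []
    while len(points) < count:
        x = rng.uniform(min(xs), max(xs))
        z = rng.uniform(min(zs), max(zs))
        if point_in_polygon(x, z, polygon):  # keep only walkable positions
            points.append((x, z))
    return points
```

In a real project, the polygon’s vertices would come from the headset’s boundary-geometry query, and the sampled points would become spawn positions or anchors for generated scenery.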
If we did, this input could be used for more than just protecting pinky toes from hitting table corners. We could program the robots to spawn in locations that the user can actually walk to. The actual boundaries of an experience could be presented using scenery, such as a fence or the sand on an island.
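The idea of presenting boundaries as scenery can also be sketched in code: walk the polygon’s edges and drop a fence post (or rock, or palm tree) at a fixed interval along each one. This is a hedged, engine-agnostic illustration in Python; `fence_posts` and its spacing parameter are my own inventions, not part of any SDK.

```python
import math


def fence_posts(polygon, spacing=0.5):
    """Place fence-post positions along the play area's perimeter.

    `polygon` is a list of (x, z) floor vertices; a post is emitted
    roughly every `spacing` metres along each edge, including one at
    each corner.
    """
    posts = []
    n = len(polygon)
    for i in range(n):
        x1, z1 = polygon[i]
        x2, z2 = polygon[(i + 1) % n]
        length = math.hypot(x2 - x1, z2 - z1)
        steps = max(1, int(length / spacing))
        for s in range(steps):  # stop before s == steps: that's the next corner
            t = s / steps
            posts.append((x1 + t * (x2 - x1), z1 + t * (z2 - z1)))
    return posts
```

Because the posts trace the actual play area, the visible edge of the virtual world coincides with the real-world boundary — the user never has a reason to walk somewhere they physically can’t.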
More broadly, we could have a way to ensure the user can interact with our assets without relying on external movement mechanisms. Most people use VR in their homes, so why not have a tool to design experiences that scale to their living rooms?
The indie game Tea For God does something similar in a very refreshing way — it actually lets the player walk around (it even counts how many meters you’ve hiked) — but it focuses on a rectangular play area. I wanted to push this a bit further and build the entire environment around the play area polygon.
Let’s Get Physical

In my article Locomotion in VR: Procedural Generation of a Scene, I explore this idea with a technical walkthrough.
This demo illustrates how using this input uncovers an alternative locomotion method: by reading the play area and building a custom environment around it, users can physically approach points of interest regardless of their space constraints.
For my demo’s design goals, the other methods would have worked, but wouldn’t have felt quite right. Teleportation or joystick-ing would have been overkill given the island’s intended size (that of an average room). A static setting would have felt artificial, given that I want the user to feel they are on an island, not just looking at one. Limited movement would have been unnecessarily limiting.
The solution I presented allowed me to create exactly the experience I aimed for: a person on an explorable island with objects they can approach and interact with.

This tool may be a powerful addition to the VR toolkit. As I discussed in my article Virtual Reality — A Medium For Reimagining Human-Machine Communication, VR lets us explore more natural and intuitive ways of interacting with our devices, and new tools are needed.
Whether used exclusively or in combination with other methods, this tool provides a new technique for customizing and scaling VR experiences. We can even adopt a new perspective on design: instead of having the user step into the environment, the environment can step into the world of the user.