Image by Brecht Corbeel

Metaverse design guide: 3D environment (Part 3)

Creating a virtual 3D environment

Nick Babich · Published in UX Planet · May 2, 2022


The metaverse will be primarily driven by virtual worlds and the virtual objects within them. It will be a three-dimensional environment in which users and businesses can participate in a wide variety of experiences.

Previously, we discussed how to design a digital avatar for virtual space and defined a few foundational rules for creating engaging interactions. In this article, we will review the fundamental principles that allow us to create a large, real-time, and persistent virtual 3D environment.

Create a live environment

By its nature, the metaverse is a persistent and ubiquitous virtual simulation, and the objects in this simulation are not static. For example, if you demolish a house in the city you live in, that house will irrevocably disappear for everyone in that city. That's why the majority of creations in the metaverse will be intended to persist.

Use real-world cues to educate users

The learning curve (the time required to master a tool) directly affects the adoption of a new product or technology. First-time users who join a virtual space will rely on cues learned from the physical world to navigate it and interact with the objects in it. Ideally, no special training should be required to learn to interact with the space: users should be able to look around and understand what they can do there.

An object that looks like an entrance in the real world will likely work the same way in virtual 3D space. Image by Autodesk

Introduce laws of physics

When it comes to metaverse design, it's not enough to create a realistic rendering of houses and streets. It's vital to introduce laws of physics (such as gravity) that work in that virtual space, and these laws should be clear to users.
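
For example, even a very simple gravity rule makes the space behave predictably. The sketch below (in TypeScript, with illustrative names such as `SceneObject` that aren't tied to any particular engine) shows how an environment might apply gravity to free-floating objects every frame:

```typescript
// A minimal sketch of a gravity rule; names are illustrative, not from a specific engine.
interface SceneObject {
  position: { x: number; y: number; z: number };
  velocity: { x: number; y: number; z: number };
  grounded: boolean;
}

const GRAVITY = -9.81; // metres per second squared, matching real-world expectations

// Called once per frame with the elapsed time in seconds.
function applyGravity(objects: SceneObject[], dt: number): void {
  for (const obj of objects) {
    if (obj.grounded) continue;            // resting objects stay put
    obj.velocity.y += GRAVITY * dt;        // accelerate downwards
    obj.position.y += obj.velocity.y * dt;
    if (obj.position.y <= 0) {             // simple ground plane at y = 0
      obj.position.y = 0;
      obj.velocity.y = 0;
      obj.grounded = true;
    }
  }
}
```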

Use cloud-based rendering

As you probably understand, the metaverse will require a significant increase in the complexity of 3D simulation. You will need to render a massive number of 3D objects that change depending on the context. Rendering a 3D space full of details requires a lot of hardware resources, and no device available on the mass market right now is capable of doing that.

There are two approaches to overcoming this problem: either release a super-powerful device and wait for users to upgrade their old ones, or render the entire 3D space in the cloud and push it to the user's device as a video stream. The second approach is preferable since it makes the metaverse accessible to anyone and maximizes the number of metaverse users.

Cloud-based rendering also helps with the problem of high latency: rendering servers can be located close to users to minimize network latency.
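
As a rough illustration of the streaming approach, the client can receive cloud-rendered frames over WebRTC and send input events back to the render server. This is only a sketch: the signalling step is omitted, and the `metaverse-view` element and `input` channel are assumptions rather than part of any specific platform.

```typescript
// A hedged sketch of receiving cloud-rendered frames as a WebRTC video stream.
const peer = new RTCPeerConnection();

// When the render server starts streaming, attach its video track to a <video> element.
peer.ontrack = (event: RTCTrackEvent) => {
  const video = document.getElementById('metaverse-view') as HTMLVideoElement;
  video.srcObject = event.streams[0];
  void video.play();
};

// User input goes back to the server over a low-latency data channel,
// so the remote renderer can update the scene.
const input = peer.createDataChannel('input');
document.addEventListener('pointermove', (e) => {
  if (input.readyState === 'open') {
    input.send(JSON.stringify({ x: e.clientX, y: e.clientY }));
  }
});
```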

Introduce an easy way to transfer objects from the real world to the virtual one

The metaverse needs to be populated with virtual content, but content production is costly. How do we overcome this problem? Give users the ability to create the virtual environment themselves. The metaverse should give anyone the power to step into the role of designer and craft spaces and objects that best express who they are.

Just as our home is a reflection of the self, the virtual spaces we create will be an expression of who we are.

One of the critical problems that product creators face when designing a 3D environment is the need to create high-resolution models and textures of 3D objects. For a long time, this was a time-consuming exercise, since every object contains many details. But the situation has changed drastically in recent years. Many new technologies have come to the market, and Apple's Object Capture is one of them. This technology enables users to create high-fidelity virtual objects using an iPhone camera: it literally takes a few minutes to produce a high-resolution 3D object. As a result, it becomes much easier to build a digital version of our house or workplace, and the virtual space will look a lot like its physical counterpart.

Apple’s Object Capture in action

Here you might ask, “Okay, a user can re-create their home in virtual space, but what about large areas such as city districts?” Technologies that allow us to transfer large areas are also available on the market. One of them is Cesium, an open platform dedicated to analyzing and visualizing 3D geospatial data. Users can either use the geospatial database provided by Cesium or upload their own data (e.g., bird's-eye pictures of an environment) and instantly create a real-time visualization of that environment.

3D city created using Cesium
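
A minimal sketch of this workflow with recent versions of CesiumJS might look like the following: it streams global terrain and OpenStreetMap buildings into the browser and flies the camera to central London. The container id and camera target are illustrative, and a Cesium ion access token is assumed to be configured.

```typescript
// A minimal CesiumJS sketch: stream global terrain and 3D buildings into the browser.
import * as Cesium from 'cesium';

async function createCityView(): Promise<Cesium.Viewer> {
  const viewer = new Cesium.Viewer('cesiumContainer', {
    terrainProvider: await Cesium.createWorldTerrainAsync(), // global terrain from Cesium ion
  });

  // Add worldwide OpenStreetMap-derived 3D buildings.
  const buildings = await Cesium.createOsmBuildingsAsync();
  viewer.scene.primitives.add(buildings);

  // Fly the camera to central London as an example target area.
  viewer.camera.flyTo({
    destination: Cesium.Cartesian3.fromDegrees(-0.0756, 51.5055, 800),
  });

  return viewer;
}
```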

In the coming years, we will see a growing number of tools focused on producing ultra-realistic renders of specific real-world environments, but all of these tools will be built around the same idea:

Anyone can design a virtual space

Define global architecture rules

Despite the simplicity of creating a 3D environment, the objects in the virtual world still need to be designed and built according to some general laws. Every major city in the real world has its own architectural code that dictates what architects can and cannot do. For example, you have to follow this code if you want to put up a new building in central London; you cannot simply start building a skyscraper next to Tower Bridge.

The same principle will apply to virtual space. Metaverse platforms will govern their worlds through a series of rules that dictate regulations, zoning, and accreditations. Basically, everything from building height to how close neighboring structures can sit to each other should be defined before giving users the power to create new spaces.
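
As a sketch of what such governance could look like in code, a platform might encode zoning rules as data and validate every user submission against them before committing it to the shared world. All rule values and type names below are hypothetical:

```typescript
// Hypothetical zoning rules a metaverse platform might enforce before a build is accepted.
interface ZoningRule {
  district: string;
  maxHeightMeters: number;
  minSpacingMeters: number;
}

interface Structure {
  district: string;
  heightMeters: number;
  distanceToNearestNeighborMeters: number;
}

const rules: ZoningRule[] = [
  { district: 'historic-center', maxHeightMeters: 30, minSpacingMeters: 5 },
  { district: 'business', maxHeightMeters: 300, minSpacingMeters: 10 },
];

function validatePlacement(structure: Structure): string[] {
  const rule = rules.find((r) => r.district === structure.district);
  if (!rule) return ['Unknown district: placement rejected'];

  const violations: string[] = [];
  if (structure.heightMeters > rule.maxHeightMeters) {
    violations.push(`Height ${structure.heightMeters}m exceeds the limit of ${rule.maxHeightMeters}m`);
  }
  if (structure.distanceToNearestNeighborMeters < rule.minSpacingMeters) {
    violations.push(`Structures must sit at least ${rule.minSpacingMeters}m apart`);
  }
  return violations; // an empty array means the build is allowed
}
```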

Use GAN technology to generate virtual environments

Humans aren't the only ones who can create virtual environments; artificial intelligence can help with this task as well. As soon as you define the architecture rules that a virtual space should comply with, you can start using Generative Adversarial Networks (GANs) to generate new spaces for your world.

The Palace of Westminster as the target scene for 3D reconstruction: (a) the real 3D scene; (b) the reconstructed 3D scene; (c, e, g, i) observed 2D images of the real scene. Image by Semantic Scholar

GANs aren't a perfect technology, and their output requires moderation. Once the computer generates a space, you need to review it carefully and introduce the changes necessary to make the area feel alive.

Add dynamic lighting to make 3D space feel more real

Lighting can make a virtual space feel much more alive. By introducing dynamic lighting, you will also mask some limitations that the first versions of the metaverse will likely have (such as low texture resolution).

3D lighting in virtual space. Image by Adobe
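
A small three.js sketch of dynamic lighting is shown below: a directional "sun" orbits the scene to simulate a day/night cycle. The API calls are standard three.js; the scene setup itself is illustrative.

```typescript
// Dynamic lighting sketch: a directional "sun" that moves over time.
import * as THREE from 'three';

const scene = new THREE.Scene();
scene.add(new THREE.AmbientLight(0x404040, 0.5)); // soft fill so shadows never go pitch black

const sun = new THREE.DirectionalLight(0xffffff, 1.0);
sun.castShadow = true;
scene.add(sun);

// Move the sun along an arc; call this from the render loop with elapsed time in seconds.
function updateSun(timeSeconds: number): void {
  const angle = timeSeconds * 0.1;                 // slow day/night cycle
  sun.position.set(Math.cos(angle) * 50, Math.sin(angle) * 50, 20);
  sun.intensity = Math.max(0.05, Math.sin(angle)); // dim the light at "night"
}
```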

Use a 120 Hz refresh rate

Motion sickness is one of the most severe problems that VR users face. Many users who join virtual spaces suffer from nausea, and nausea can force them to abandon the interaction. One possible solution is increasing the refresh rate; it's quite possible that 120 Hz is the minimum threshold for avoiding nausea for a large segment of users.
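
With WebXR, for instance, an app can ask the headset for a higher refresh rate where the hardware supports it. The sketch below assumes WebXR type definitions are available and that the browser exposes `supportedFrameRates` and `updateTargetFrameRate`, which are not implemented on every device:

```typescript
// Request a 120 Hz refresh rate for an immersive VR session where supported.
async function startHighRefreshSession(): Promise<XRSession> {
  const session = await navigator.xr!.requestSession('immersive-vr');

  const rates = session.supportedFrameRates; // e.g. [72, 90, 120] on high-end headsets
  if (rates && Array.from(rates).includes(120)) {
    await session.updateTargetFrameRate(120); // prefer 120 Hz to reduce motion sickness
  }

  return session;
}
```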

Add spatial audio

Sound plays a tremendous role in how we feel about a virtual space. No matter what the experience is, whether it's a movie or a video game, sound provides a lot of cues about the environment and directly impacts our mood. Like lighting, sound is an easy way to bring a virtual space to life.
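
As an illustration, three.js ships positional audio built on the Web Audio API: a sound attached to an object gets quieter and pans as the listener moves around it. The asset path and object names below are placeholders.

```typescript
// Spatial audio sketch: a looping sound source attached to an object in the scene.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, 16 / 9, 0.1, 1000);

// The listener represents the user's ears and follows the camera.
const listener = new THREE.AudioListener();
camera.add(listener);

// A positional sound attached to an object in the scene, e.g. a virtual fountain.
const fountain = new THREE.Object3D();
fountain.position.set(10, 0, -5);
scene.add(fountain);

const sound = new THREE.PositionalAudio(listener);
new THREE.AudioLoader().load('sounds/fountain.ogg', (buffer) => { // placeholder asset path
  sound.setBuffer(buffer);
  sound.setRefDistance(2); // full volume within 2 metres, attenuating beyond that
  sound.setLoop(true);
  sound.play();
});
fountain.add(sound);
```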

Provide an easy way to import/export 3D models

Today, a huge number of tools are used to create 3D objects; Maya, Houdini, RenderMan, and Blender are just a few of the tools 3D designers rely on. Each of these tools has its own file types and proprietary formats, and right now there is no simple way to move assets created in one tool to another. For the metaverse to thrive, we need open standards and protocols that allow easy import/export between any of these popular 3D tools.

Plus, we need better interconnection between tools. Ideally, it should be easy for developers to export their work from one platform, rendering solution, or engine and import it into another. Companies like Nvidia are already working on solutions to this problem. For example, the Nvidia Omniverse platform uses Universal Scene Description (USD) to bring together assets created in Maya, Houdini, or Unreal Engine.
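
As one example of what an open interchange path already looks like today, glTF can be used to round-trip assets between tools. The sketch below uses three.js's GLTFExporter and GLTFLoader (recent versions of three.js are assumed); the export target here is an in-memory binary buffer.

```typescript
// Round-tripping a scene through glTF, an open interchange format.
import * as THREE from 'three';
import { GLTFExporter } from 'three/examples/jsm/exporters/GLTFExporter.js';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

// Export a scene as a single self-contained binary glTF (.glb) buffer.
function exportScene(scene: THREE.Scene): Promise<ArrayBuffer> {
  return new Promise((resolve, reject) => {
    new GLTFExporter().parse(
      scene,
      (result) => resolve(result as ArrayBuffer),
      (error) => reject(error),
      { binary: true },
    );
  });
}

// Import a glTF file produced by any compliant tool and return its scene graph.
function importScene(url: string): Promise<THREE.Group> {
  return new Promise((resolve, reject) => {
    new GLTFLoader().load(url, (gltf) => resolve(gltf.scene), undefined, reject);
  });
}
```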

It's likely that we will see the rise of new open standards and interchange formats in the coming years.

Metaverse design guide:

Part 1: Creating user avatars for the virtual space

Part 2: Designing interactions in virtual space

Part 3: Creating a virtual 3D environment

Follow me on Twitter

This article was originally published at whitelight.co
