Everywhere and nowhere: Metaverse leaders plan for data centers on a whole new scale

The metaverse was once pure science fiction, an idea of a sprawling online universe born 30 years ago in Neal Stephenson's novel Snow Crash. But now it has been reborn as a realistic destination for many industries. And so I asked some people how the metaverse will change data centers in the future.

First, it helps to reach an understanding of what the metaverse will be. Some see the metaverse as the next version of the internet, or the spatial web, or the 3D web, with a 3D animated foundation that resembles sci-fi movies like Steven Spielberg’s Ready Player One.

Matthew Ball, author of The Metaverse: And How It Will Revolutionize Everything, refers to it as a persistent and interconnected network of 3D virtual worlds that will eventually serve as the gateway to most online experiences, and also underpin much of the physical world.

“When they’re having discussions about the metaverse, people always focus on upper levels of the stack,” said Rev Lebaredian, VP of Omniverse and Simulation at Nvidia, in an interview with VentureBeat. “There has been almost no discussion about what the infrastructure underneath is, and we care a lot about that. We’re building that stuff.”

More precisely, he said it is a “massively scaled and interoperable network of real-time rendered 3D virtual worlds that can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications and payments.”

I think of it as a real-time internet where we can have experiences like the Star Trek holodeck: you can immerse yourself in a world that can be indistinguishable from reality, and you should be able to switch from one such world to another instantaneously.

When you get to that kind of definition of the metaverse, it’s clear that it isn’t here yet. And the question becomes: How long will it take to get there, and will technologists ever be able to build it? Raja Koduri, the chief architect at Intel, predicted in 2021 that the metaverse will need 1,000 times more computing power than was available back then. That’s obviously going to take a while.

Lower down the stack

Lebaredian said that there has been a lot of discussion about the applications that everyone wants to see for the metaverse. But he noted there has been very little discussion of the technology lower down in the stack for the data centers. He wants to see more attention to what it will take to really build the metaverse and the data centers that support it.

He said, “I believe that the infrastructure that we need to build out is going to be different for the metaverse as a whole compared to the data centers we have today. There are going to be differences even within these two classes of the metaverse, the consumer one and the industrial metaverse.”

When I asked Jon Peddie, president of Jon Peddie Research and author of a three-book series dubbed The History of the GPU, he answered with a question of his own: What is the metaverse? He believes it will be a single entity that ties together all of the networks, something like today's internet, but evolved. And where it actually resides, he said, is a good question.

“Should a metaverse ever be created, where will it be? It will be everywhere and nowhere,” Peddie said in an email to VentureBeat. “It won’t have a central location at NSA’s headquarters or Google. One of the basic tenets of a metaverse is that all tokens from any subverse will be frictionlessly exchangeable with any other subverse via the metaverse.”

He added, “If I buy a dress in Ubisoft’s subverse and sell it in Nvidia’s Omniverse, the tokens exchanged (Bitcoin or Euros) will flow to my digital wallet without me having to do anything more than let the computer look at my beautiful blue eyes. I may or may not be wearing a suffocating VR headset, and I may or may not be almost fainting at the enthralling aspects of blockchain transactions on Web3 or Web4; it will just happen — that is a metaverse, and for that to work, it has to be on every machine, much like a browser is on your laptop, tablet, TV, smartphone and car. So don’t look for a zip code for the metaverse.”

Omid Rahmat, an analyst who works with Peddie, will be a speaker at our GamesBeat Summit 2023 event. He noted that digital twins — copies of real-world things built in order to simulate them, like a BMW factory — could be a big part of the metaverse.

Under that notion, he said, “the metaverse is the total sum of digital data that mirrors the real world as well as extending into new ones that connect to this one.” He also believes that the revolution in generative AI will lead to a vast expansion in conversational man-machine interfaces.

Rahmat thinks that “younger generations are going to be happy to move to conversational man-machine interfaces because they’re probably going to be fed up [with] getting neck-aches constantly looking down at their phones.”

He said these younger generations are going to then be more amenable to the use of heads-up displays because they are mobile, can manage their digital environments with conversational AI, and will probably demand mixed-reality experiences.

“All of these assumptions will extend the amount of data and computational demands placed on data centers because no matter how powerful mobiles become, they will never be powerful enough to handle the vast amounts of computing power needed to support this sea change in user behavior,” Rahmat said.

“Into this mix, you add the vast amount of data that is going to be used when we move to the internet of sensors, a natural extension of our need to model, simulate, measure and interact with the metaverse to mitigate costly behaviors in the real world,” he added. “Just the sheer volume of data and servers that will be needed to enable a semi-autonomous automotive experience, not even [fully] autonomous, is beyond existing datacenter capacities.”

This isn’t just about fun and avatars, either.

“The companies that want to control the metaverse, the big tech giants that are investing in all of the above, will need to own vast amounts of data and will have to devote ever more resources to adapt that information into viable products for businesses and consumers. All of this is happening, today, but we’ve only scratched the surface of demand,” Rahmat said.

He believes the metaverse is ultimately going to be a reinvention of our shared reality, a way to create a digital transformation of real-world interactions. That means we are going to need to create infrastructure for a 10 billion-user client-server model by 2050. We are nowhere near having the resources in place to support that kind of expansion, and are only at the beginning of the road to finding more energy-efficient, recyclable approaches to building out the infrastructure, he said.

These are different strategic views of the metaverse that I’ve come across, and there are counter-arguments being made by folks who want the metaverse to be open and decentralized. We’ll see what others foresee as well over the course of this story.

Roblox’s view of data centers of the future

Roblox, which has 67 million daily active users, prefers to run its own data centers, with only a small amount [of data] handled by outsiders, said Dan Sturman, the company's CTO. With its focus on user-generated content, Roblox is one of the leading metaverse companies today.

Running its own data centers allows the company to save money and return more of it to creators. For the metaverse, Sturman sees changes ahead for data centers.

To enable that, the company has to create custom solutions that deliver the functions that Roblox and its developers need. It also has to deploy data centers around the world while respecting local networking requirements and national restrictions on storing private user data. Roblox keeps its data in its core data centers and is also pushing a lot of processing out to the edge of the network.

Gamers with capable computers can handle more of the processing required to run Roblox on their own machines at the edge of the network, while those with older computers offload more of that work to Roblox's servers. In that sense, Roblox takes advantage of whatever infrastructure is available at the edge.

“Pushing compute to the edge is even more important. The frames per second for interactivity is just a whole other level compared to what we’ve had on traditional web apps,” Sturman said. “If you start looking at it, we want to do at least 30 frames a second. Interactivity is really important. We share the load with the client devices.”
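Sturman's load-sharing idea can be sketched in a few lines. The function, capability score, and thresholds below are hypothetical illustrations, not Roblox's actual heuristics:

```python
def split_workload(client_benchmark: float, target_fps: int = 30) -> dict:
    """Decide how much simulation work to assign to a client device.

    client_benchmark is a hypothetical 0.0-1.0 capability score;
    the cutoffs below are illustrative only.
    """
    if client_benchmark >= 0.75:      # modern gaming PC: carry most of the load
        client_share = 0.8
    elif client_benchmark >= 0.4:     # mid-range laptop or phone: split evenly
        client_share = 0.5
    else:                             # older hardware: lean on the data center
        client_share = 0.2
    return {
        "client_share": client_share,
        "server_share": round(1.0 - client_share, 2),
        "target_fps": target_fps,
    }

print(split_workload(0.9))   # capable edge device does most of the work
print(split_workload(0.2))   # older device pushes work back to the server
```

The point of the sketch is the shape of the decision, not the numbers: the server inspects the client it is "talking to" and shifts work accordingly, so a weak device still hits the interactivity target.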

By contrast, with virtual reality, much of the computing load is handled by a standalone VR device.

“One thing we’re learning is we need to be ready to kind of shift [computing] work based on the client device we’re talking to,” Sturman said. “That’s the direction we’re heading. And I think it’s important. So that takes me into GPUs because with most devices out there today, most graphics can be done on the end device.”

For voice processing, Roblox does a lot of that in the cloud using GPUs. Roblox is also doing a lot more machine learning inference processing with more algorithms running. Some of that happens on the CPU, but GPUs are also likely to be used in the data center for that purpose.

“Interpreting voice into our facial expressions is something we want to do at the edge, not at the core data center,” Sturman said. “We want to take what you’re saying and put that on your avatar. So your lips move accordingly. I think all good data center design comes down to total cost of ownership. What is my workload? And how do I assemble the tech available to execute that workload efficiently as possible?”

Roblox is also exploring large experiences, like rock concerts with tens of thousands of people. Those could benefit from advances in networking technology like Nvidia’s Mellanox interconnects. Sturman thinks it would be “incredible” to do a 50,000-person concert, but that’s likely to require changes in both software and hardware architecture, he said. It’s hard to imagine that networking between servers will ever be faster than the memory bus within a server, he said. But it’s worth looking at.

Sturman said his company uses the Lua programming language because it makes it easy to run an app on any device. And it has to run anywhere in the world. To make that happen and build the data centers for all of that, it takes a lot of focus on the game engine, data centers and infrastructure support.

“It doesn’t just happen by itself,” Sturman said.

Generative AI will be a revolution for many industries, and in gaming it will lead to better user-generated content. Creators will be able to craft things much faster and with less help. Roblox has already launched a generative AI coding-assist feature. Over time, that could lead to a lot more user-generated content and, as a result, the need for more data center infrastructure.

Pushing the problem to the cloud

Lisa Orlandi, CEO of 8agora, said in a message to VentureBeat that we’ll see an early push of metaverse processing and applications into the cloud.

“If you look at metaverse companies today, the heavy compute requirements and rendering are downloaded onto the user device, but this model does not scale well to billions of people across the globe,” Orlandi said. “This will need to be pushed to a multicloud infrastructure (similar to what Amazon or Netflix are doing to stream in the cloud). This also means that data centers will need to ramp up their compute capacity and continue to build out their infrastructure to support the high-speed, high-compute requirements.”

But she noted it will be a challenge to do this kind of processing in the cloud in a sustainable way, as power consumption for these compute-intensive environments and bandwidth requirements to the user will both increase significantly.

“Even when you look at Nvidia’s streaming, where they render in the cloud, the user still needs to download an app (it’s not web-based),” Orlandi said. “They don’t support bidirectional audio and the bandwidth requirements are high. For instance, GeForce Now requires at least 15Mbps for 720p at 60FPS.”

That goes up to 25Mbps for 1080p at 60FPS and 35Mbps for streaming up to 2560×1440/2560×1600/3480×1800 at 120FPS. These higher bandwidths equate to higher power requirements and higher costs to the consumer, and will require data centers to increase their capacity exponentially while maintaining sustainability, Orlandi said.
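Those bandwidth tiers add up quickly per viewer. A back-of-the-envelope calculation, using only the Mbps figures cited above:

```python
# GeForce Now bandwidth tiers cited above, in megabits per second.
tiers = {"720p @ 60FPS": 15, "1080p @ 60FPS": 25, "120FPS tier": 35}

for name, mbps in tiers.items():
    # bits per second * seconds per hour, converted to gigabytes (8 bits/byte).
    gb_per_hour = mbps * 1e6 * 3600 / 8 / 1e9
    print(f"{name}: {gb_per_hour:.2f} GB per hour per viewer")
```

At 35Mbps a single viewer pulls 15.75 GB per hour, which is why sustained cloud streaming at scale pushes both network and data center capacity the way Orlandi describes.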

“In Europe, they’ve adopted green energy requirements in their data centers and we believe that this will soon be the case in the U.S. as well,” she added. “This means that new technologies will need to be implemented that can scale to billions of people across the globe. It won’t be just a matter of lowering the component power, but a new strategy to enable an end-to-end multicloud solution that can handle the increase in users, lower their cost and power footprint, and also lower the cost and power footprint for the data centers.”

This is the problem 8agora has focused on: moving the client app to the cloud and integrating it with the streaming app, thereby allowing the use of green energy data centers and high-quality rendering that can scale across any use case, Orlandi said.

“Multiple sessions can be rendered (20) with a single GPU card rated at 70 watts while encapsulating the audio/video data stream back to the user down to 1Mbps,” Orlandi said. “This allows the optimizations needed by the data centers to build out and scale high-performance environments at low power. Because bandwidth requirements are lowered, this means they can support much higher capacity across a multicloud infrastructure in a sustainable way across billions of people.”
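Taking the numbers in that quote at face value, the per-session budget is striking. A quick check of the arithmetic:

```python
# Figures from Orlandi's quote above.
gpu_watts = 70         # one GPU card's power rating
sessions_per_gpu = 20  # concurrent rendered sessions on that card
stream_mbps = 1        # encapsulated A/V stream back to each user

watts_per_session = gpu_watts / sessions_per_gpu
egress_mbps_per_gpu = sessions_per_gpu * stream_mbps

print(f"{watts_per_session} W of GPU power per session")   # 3.5 W
print(f"{egress_mbps_per_gpu} Mbps of egress per GPU card")  # 20 Mbps
```

That works out to 3.5 watts of GPU power per session and 1Mbps per stream, versus the 15Mbps-and-up tiers quoted for client-device streaming above.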

The industrial metaverse will drive data centers

The thing about the metaverse, as Peddie noted, is that processing will take place in the cloud for some applications, like real-time games with massive numbers of players. But much of the processing will also take place at the edge, Nvidia’s Lebaredian said. You may need to access the metaverse with your smartphone if you’re at a location where you can capture data on the scene but don’t have access to a supercomputer.

Some people are legitimately wondering whether the metaverse craze, which rose out of the pandemic when we were forced to communicate digitally, has waned as the hype cycle has moved on to AI and people have returned to going out in public.

It’s natural for some of the interest on the consumer side to subside, as mixed reality technology is still a long way from fruition as a consumer product. But Nvidia sees a huge amount of metaverse activity on the industrial and enterprise side, said Lebaredian.

“On the industrial side, the parts we’ve been focused on, the metaverse is alive and kicking. And everybody wants it from all the customers that we’re working with,” Lebaredian said. “The enterprise is more like the lead horse of the metaverse.”

The industrial metaverse is a business-to-business ecosystem. The parts of the metaverse, or virtual environments, that connect back to the real world are where Nvidia is focused. That means things like digital twins, where a company designs a factory and makes it perfect in the digital world before it builds the real thing in the physical world. And it will outfit that physical factory with sensors that can feed data back to the digital twin so the company can have a data loop that improves the design over time.

It follows that enterprise data centers are going to be the ones that will evolve to serve the customers of the metaverse. New technologies like the metaverse will start out expensive — note the $3,300 cost of the Magic Leap 2 mixed reality headset — and only enterprises will be able to afford them.

“When you get to the market, you have to have the perfect confluence of conditions” like low costs and seamless user experiences, Lebaredian said. “In industry, you have less of those constraints. If you simulate things that help you design products in the metaverse for things that cost billions of dollars and then can save you millions of dollars, then your price sensitivity is different. We are building systems that let you scale at high fidelity with extreme scale.”

The result is likely to be that data centers will adapt to meet the needs of enterprise metaverses first, Lebaredian said. We will likely see an explosion of technologies to serve the metaverse and its infrastructure, just like search engines and accompanying businesses like Akamai served the needs of the fledgling internet.

“Once people figured out a business model around the internet, then that’s what it took to make the internet really grow,” he said. “Then look what happened. Google made search work. They built a business model around it. We’ve seen this before.”

Lebaredian isn’t sure the metaverse term itself will stick. He remembered how Al Gore referred to the internet as the information superhighway, but that buzzword didn’t last. But he thinks the technology itself will absolutely be necessary and useful in the long run.

“Somebody has to keep the ball moving forward, and it makes sense that Nvidia would be one of those companies investing in this particular set of technologies,” Lebaredian said. “We’ve done computer graphics. We’ve done gaming. We continue to do supercomputing, AI — all of this stuff. It all comes together right here in the metaverse.”

The hardware underneath

Right now, Nvidia’s lead system for data centers running metaverse applications is the Nvidia OVX system, a SuperPod architecture that offers scalable performance for running real-time simulations and AI-enabled digital twins at factory, city or planetary scale.

“OVX systems are designed for the industrial metaverse for digital twins and designed to scale into data centers, networking, low latency and high bandwidth,” Lebaredian said. “That is the foundation we are building Omniverse Cloud on. And Microsoft Azure is about to stand up a whole bunch of OVX systems.”

BMW demonstrated how factory planners from around the world can come together in a digital twin — a factory that is ready in the virtual sense now and will be built physically in 2025 — and walk through it together virtually to figure out what is right or wrong about the design. In Japan, this is known as a “gemba walk.” Those people have to see what the others are modifying in real time as they interact with a factory that has something like 20,000 robots. During the walk, they can make agile decisions.

“Getting that factory to simulate in real time, that’s a major challenge,” said Lebaredian. “Gaming systems like a PlayStation can’t do that. But the OVX has GPUs, CPUs and enough memory to handle something like that. The physical factory for BMW will be in Hungary and be miles long. It’s so big the curvature of the earth matters in the design. But it will exist as a simulation in the Omniverse.”

BMW and Nvidia have to make thousands of GPUs available to run that digital-twin simulation. That’s essentially going to be running in a Microsoft Azure datacenter. With this infrastructure in place, an engineer can make a change in one part of the factory and it can immediately be visible to everyone.

The problems that enterprise designers run into — and the need for access to massive amounts of real-time data — have some parallels in the game world. You could make a game so realistic that a building can collapse and produce a pile of rubble. That rubble has to be calculated with care, since so many pieces of data must be accessed in real time by different players in the game. If each player’s PC at the edge calculates the rubble on its own, the rubble will look different to different players depending on how fast their machines are.

That doesn’t work. But if you put the game in the cloud and do all of the calculations in hardware inside a data center, then the calculations can be done quickly and shared among all of the users whose game data is in the data center itself.

“If we change it up, instead of doing computation at the edge, so that it all happens in the same data center, like with GeForce Now, we can ensure almost zero latency between the players,” Lebaredian said.
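The server-authoritative pattern Lebaredian describes can be sketched simply: the data center computes the shared physics once, and every client receives the same result, so no two players can diverge. The class and names below are a hypothetical illustration, not Roblox or Nvidia code:

```python
import random

class AuthoritativeServer:
    """Computes shared state once in the data center and broadcasts
    identical results to every connected client (a minimal sketch)."""

    def __init__(self, seed: int = 42):
        self.rng = random.Random(seed)  # one seeded RNG, server-side only
        self.clients = []

    def connect(self, client: dict):
        self.clients.append(client)

    def collapse_building(self, pieces: int = 5):
        # The rubble layout is calculated exactly once, on the server.
        rubble = [(self.rng.uniform(0, 100), self.rng.uniform(0, 100))
                  for _ in range(pieces)]
        # Every client receives the same authoritative state.
        for client in self.clients:
            client["rubble"] = rubble
        return rubble

server = AuthoritativeServer()
alice, bob = {}, {}
server.connect(alice)
server.connect(bob)
server.collapse_building()
print(alice["rubble"] == bob["rubble"])  # True: no per-player divergence
```

Because the computation happens in one place, players see one consistent world; what remains is the latency of delivering that state, which is why co-locating players in the same data center, as with GeForce Now, matters.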

Read the entire article on https://venturebeat-com.cdn.ampproject.org/

Author: Dean Takahashi