Nvidia anoints itself a creator of the metaverse

That's so Meta

Nvidia sees itself as a hardware overlord of the "metaverse" and has dropped some hints about the operation of a parallel 3D universe in which our cartoon selves can work, play and interact.

The chip biz has added new plumbing in Omniverse – an underlying hardware and software engine that acts as a planet core fusing together virtual communities in an alternate 3D universe. Omniverse is also being used to create avatars to enhance real-world experiences in cars, hospitals and robots.

"We're not telling people to replace what they do, we are enhancing what they do," said Richard Kerris, vice president of the Omniverse platform, during a press briefing.

The Omniverse announcements came during the company's GPU Technology Conference this week. Nvidia CEO Jensen Huang will talk about many of these announcements on Tuesday.

Jensen as you've never seen him before. Source: Nvidia

One such announcement is Omniverse Avatar, which can generate interactive, intelligent AI avatars for tasks like helping diners order food, or helping a driver self-park or navigate the roads.

Nvidia gave an example of a conversational avatar replacing servers in restaurants. When a customer orders food, an AI system – represented by an on-screen avatar – could converse in real time using speech recognition and natural language techniques, use computer vision to gauge the person's mood, and recommend dishes from its knowledge base.

For that, the avatar needs to run several AI models – for example, speech recognition, image recognition and context – simultaneously, which can be a challenge. The company has created the Unified Compute Framework, which models AI as microservices so applications can run on a single system or across hybrid systems.
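Nvidia didn't detail UCF's programming interface during the briefing, but the microservices idea can be sketched in a few lines of Python – the model stubs below are hypothetical stand-ins for illustration, not Nvidia's framework:

```python
import asyncio

# Minimal sketch of the microservices idea: speech, vision and context
# inference run as independent services, and the avatar combines their
# outputs. Each "model" here is a stub, not a real inference engine.

async def speech_to_text(audio: bytes) -> str:
    await asyncio.sleep(0.05)          # stand-in for ASR inference
    return "I'd like something spicy"

async def detect_mood(frame: bytes) -> str:
    await asyncio.sleep(0.05)          # stand-in for vision inference
    return "cheerful"

async def recommend(utterance: str, mood: str) -> str:
    await asyncio.sleep(0.05)          # stand-in for a recommender model
    return f"Given '{utterance}' and a {mood} mood: try the vindaloo."

async def avatar_turn(audio: bytes, frame: bytes) -> str:
    # Run the speech and vision services concurrently, then feed both
    # results to the recommendation service.
    utterance, mood = await asyncio.gather(
        speech_to_text(audio), detect_mood(frame)
    )
    return await recommend(utterance, mood)

print(asyncio.run(avatar_turn(b"...", b"...")))
```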

Nvidia already has underlying AI systems like the Megatron-Turing Natural Language Generation model – a monolithic transformer-based language model jointly developed with Microsoft. The model will now be offered on Nvidia's DGX AI hardware.

Omniverse Avatar is also the underlying technology in Drive Concierge – an in-car AI assistant that is a "personal concierge in the car that will be on call for you," said Deepu Talla, vice president and general manager of Embedded and Edge Computing.

AI systems in cars, represented by interactive characters, can understand a driver and the car's occupants through their habits, voice and interactions. Accordingly, the AI system can make phone calls or recommend nearby places to eat.

Using cameras and other sensors, the system can also see if a driver is asleep, or alert a rider if they forget something in the car. The AI system's messages are represented through interactive characters or interfaces on screens.

Old dog, new tricks

The metaverse concept isn't new – it has existed through Linden Lab's Second Life and games like The Sims. Nvidia hopes to break down proprietary walls and create a unified metaverse so users can, in theory, jump between universes created by different companies.

During the briefing, Nvidia did not make reference to helping Facebook meet its vision of a future around the metaverse, which is at the center of its rebranding to Meta.

But Nvidia is roping other companies into bringing their 3D work to the Omniverse platform through its software connectors. That list includes Esri's ArcGIS CityEngine, which helps create urban environments in 3D, and Replica Studios' AI voice engine, which can simulate realistic voices for animated characters.

"What makes this all possible is the foundation of USD, or Universal Scene Description. USD is the HTML of 3D – an important element because it allows for all these software products to take advantage of the virtual worlds we are talking about," Kerris said. USD was created by Pixar to share 3D assets in a collaborative way.

Nvidia also announced Omniverse Enterprise – a subscription offering with a software stack to help companies create 3D workflows that can be connected to the Omniverse platform. Priced at $9,000 per year, the offering is targeted at industry verticals like engineering and entertainment, and will be available through resellers that include Dell, Lenovo, PNY and Supermicro.

The company is also using the Omniverse platform to generate synthetic data on which to train "digital twins" – virtual simulations of real-world objects. Isaac Sim can train robots on synthetic data that combines real-world and virtual information, and it allows the introduction of new objects, camera views and lighting to create custom training datasets.
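Isaac Sim's actual APIs weren't shown during the briefing; stripped to its essence, though, that kind of domain randomization amounts to sampling scene parameters for each rendered frame, as in this hypothetical sketch (the object list and parameter ranges are invented for illustration):

```python
import random

# Hypothetical sketch of domain randomization for synthetic training
# data: each rendered frame gets randomly sampled scene parameters, so
# the robot's perception model sees wide variation in objects, camera
# placement and lighting.
OBJECTS = ["pallet", "box", "forklift", "person"]

def randomize_scene() -> dict:
    return {
        "objects": random.sample(OBJECTS, k=random.randint(1, 3)),
        "camera_height_m": random.uniform(0.5, 3.0),
        "camera_yaw_deg": random.uniform(0.0, 360.0),
        "light_intensity_lux": random.uniform(100.0, 2000.0),
        "light_color_temp_k": random.uniform(2700.0, 6500.0),
    }

# Each config would drive one rendered, automatically labelled frame.
dataset_configs = [randomize_scene() for _ in range(10_000)]
```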

An automotive equivalent is Drive SIM, which creates realistic scenes through simulated cameras for autonomous driving, factoring in real-world data to train self-driving AI models. The simulated camera lens models reproduce real-world phenomena like motion blur, rolling shutter and LED flicker.
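As a crude illustration of one of those effects – not Drive SIM's actual method – motion blur can be approximated by averaging several sub-frame renders taken within a single exposure window:

```python
import numpy as np

def motion_blur(subframes: np.ndarray) -> np.ndarray:
    """Approximate motion blur by averaging sub-frame renders.

    subframes: array of shape (n, height, width, 3) -- the scene
    rendered at n instants within one camera exposure window.
    """
    return subframes.mean(axis=0).astype(subframes.dtype)

# Toy usage: 8 renders within one exposure collapse into a blurred frame.
frames = np.random.rand(8, 720, 1280, 3).astype(np.float32)
blurred = motion_blur(frames)
```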

Nvidia is working closely with sensor makers to replicate Drive SIM data accurately. The camera, radar, lidar and ultrasonic sensor models are all path-traced using RTX graphics technology, according to Danny Shapiro, Nvidia's vice president for automotive.

The company wove some hardware announcements into the overall Omniverse narrative.

Join the new generation

The next-generation Jetson AGX Orin developer board will be available to makers in the first quarter of next year. It has 12 CPU cores based on Arm's Cortex-A78AE design, 32GB of LPDDR5 RAM, and delivers 200 TOPS (tera operations per second) of AI performance.

The Drive Hyperion 8 is a computing platform for cars which has dual Drive Orin SoCs and delivers performance of up to 500 TOPS. The platform has 12 cameras, nine radars, one lidar, and 12 ultrasonic sensors. It will go into vehicles produced in 2024, and has a modular design so auto makers can use only the features they need. Cars with older Nvidia computers can be upgraded to Drive Hyperion 8.

Nvidia also announced the Quantum-2 InfiniBand switch, which has 57 billion transistors and is being made using Taiwan Semiconductor Manufacturing Co's 7nm process. It can process 66.5 billion packets per second, and has 64 ports for 400Gbit/sec data transfers, or 128 ports for 200Gbit/sec transfers, we're told.
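Those two configurations are the same aggregate switch bandwidth carved into different port counts – a quick check:

```python
# Both port configurations deliver the same total switch bandwidth,
# just split into different numbers of ports:
print(64 * 400)    # 25600 Gbit/sec across 64 x 400G ports
print(128 * 200)   # 25600 Gbit/sec across 128 x 200G ports
```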

The company also talked up Morpheus – an AI framework it revealed earlier this year that lets cybersecurity vendors identify irregular behavior in a network or data center and alert companies to it. The framework spots subtle changes in applications, users or network traffic to flag anomalies and suspicious behavior.
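Nvidia didn't spell out Morpheus's internals during the briefing; the general shape of that kind of anomaly detection can be sketched with scikit-learn's IsolationForest, using made-up telemetry features purely for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical stand-in for Morpheus-style anomaly detection: learn what
# "normal" network telemetry looks like, then flag outliers. The feature
# columns (bytes out, packets/sec, distinct destination ports) are
# invented for this example.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[500.0, 50.0, 5.0], scale=[50.0, 5.0, 1.0],
                    size=(10_000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A host suddenly pushing lots of traffic to many ports looks anomalous.
suspect = np.array([[5000.0, 400.0, 60.0]])
print(model.predict(suspect))   # -1 means flagged as an anomaly
```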

Morpheus draws the data it needs from Nvidia's BlueField SmartNICs/Data Processing Units, which have been imbued with new powers thanks to an upgrade to the DOCA SDK – which is to BlueField DPUs what CUDA is to Nvidia's GPUs.

The DOCA upgrade – version 1.2 – can also "build metered cloud services that control resource access, validate each application and user [and] isolate potentially compromised machines." DOCA 1.2 also lets BlueField devices authenticate software and hardware, apply line-rate data cryptography, and support distributed firewalls running on the SmartNIC. Nvidia told The Register that Palo Alto Networks has seen a 5x improvement in firewall performance when running the tools in distributed mode on SmartNICs.

Talking of AI, the GPU giant also expanded its Launchpad program, under which it offers short-term access to AI hardware and software through Equinix data centers in the US, Europe, Japan and Singapore. The latter three locations are Launchpad's first outside the US, giving Nvidia hope that its role as an AI on-ramp will be more widely adopted.

Another offering is a new cut of the RIVA conversational AI tool, said to be capable of creating a custom human-like voice in a day from just 30 minutes of sample speech. Nvidia thinks that's just the ticket for orgs that want to offer custom speech interfaces. ®
