NVIDIA Expands Omniverse With Generative Physical AI
New models, including Cosmos World Foundation Models, and Omniverse Mega Factory and Robotic Digital Twin Blueprint lay the foundation for industrial AI.
January 7, 2025
NVIDIA announces generative artificial intelligence (AI) models and blueprints that expand NVIDIA Omniverse integration further into physical AI applications such as robotics, autonomous vehicles and vision AI. Global leaders in software development and professional services are using Omniverse to develop new products and services that will accelerate the next era of industrial AI, NVIDIA reports.
Accenture, Altair, Ansys, Cadence, Foretellix, Microsoft and Neural Concept are among the first to integrate Omniverse into their next-generation software products and professional services. Siemens announced at the CES trade show the availability of Teamcenter Digital Reality Viewer, the first Siemens Xcelerator application powered by NVIDIA Omniverse libraries.
“Physical AI will revolutionize the $50 trillion manufacturing and logistics industries. Everything that moves—from cars and trucks to factories and warehouses—will be robotic and embodied by AI,” says Jensen Huang, founder and CEO of NVIDIA. “NVIDIA’s Omniverse digital twin operating system and Cosmos physical AI serve as the foundational libraries for digitalizing the world’s physical industries.”
Accelerating World Building for Physical AI
Creating 3D worlds for physical AI simulation requires three steps: world building, labeling the world with physical attributes and making it photoreal.
NVIDIA offers generative AI models that accelerate each step. The USD Code and USD Search NVIDIA NIM microservices are now generally available, letting developers use text prompts to generate or search for OpenUSD assets. A newly unveiled NVIDIA Edify SimReady generative AI model can automatically label existing 3D assets with attributes such as physics or materials, enabling developers to process 1,000 3D objects in minutes rather than the more than 40 hours the task takes manually.
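As a rough illustration only (not NVIDIA sample code), the sketch below shows how a developer might send a text prompt to a self-hosted USD Search NIM over HTTP to retrieve matching OpenUSD assets. The endpoint path, port and request/response fields are assumptions made for the example; the actual interface is defined in NVIDIA's NIM documentation.

```python
# Hypothetical sketch: querying a self-hosted USD Search NIM with a text prompt.
# The URL, port and payload schema below are assumptions for illustration only.
import requests

NIM_URL = "http://localhost:8000/v1/search"  # assumed endpoint path and port


def search_usd_assets(prompt: str, limit: int = 10) -> list[dict]:
    """Send a natural-language prompt and return matching OpenUSD asset records."""
    response = requests.post(
        NIM_URL,
        json={"query": prompt, "limit": limit},  # assumed request schema
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])  # assumed response schema


if __name__ == "__main__":
    for asset in search_usd_assets("metal shelving rack for a warehouse scene"):
        print(asset)
```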
NVIDIA Omniverse, paired with new NVIDIA Cosmos world foundation models, creates a synthetic data multiplication engine—letting developers generate massive amounts of controllable, photoreal synthetic data. Developers can compose 3D scenarios in Omniverse and render images or videos as outputs. These can then be used with text prompts to condition Cosmos models to generate countless synthetic virtual environments for physical AI training.
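To make the "synthetic data multiplication" loop concrete, the following sketch pairs a clip rendered from an Omniverse scenario with several text prompts and submits each pair to an assumed Cosmos world foundation model inference endpoint to produce scene variations. The endpoint, field names and the assumption that the service returns video bytes are all placeholders for illustration, not a documented API.

```python
# Hypothetical sketch of the synthetic data multiplication workflow described above.
# Endpoint URL, form-field names and response format are assumptions for illustration.
import pathlib

import requests

COSMOS_URL = "http://localhost:8001/v1/generate"  # assumed endpoint


def generate_variations(clip_path: str, prompts: list[str], out_dir: str) -> None:
    """Condition an assumed Cosmos endpoint on a rendered clip plus text prompts."""
    clip_bytes = pathlib.Path(clip_path).read_bytes()
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, prompt in enumerate(prompts):
        response = requests.post(
            COSMOS_URL,
            files={"conditioning_video": clip_bytes},  # assumed field name
            data={"prompt": prompt},                   # assumed field name
            timeout=600,
        )
        response.raise_for_status()
        # Assumes the service streams back an encoded video clip.
        (out / f"variation_{i}.mp4").write_bytes(response.content)


if __name__ == "__main__":
    generate_variations(
        "warehouse_scenario.mp4",
        [
            "same scene at night under sodium lighting",
            "same scene with wet floors and fog",
            "same scene during a crowded shift change",
        ],
        "synthetic_out",
    )
```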
Speeding Up Industrial, Robotic Workflows
During the CES keynote, NVIDIA also announced four new blueprints that make it easier for developers to build Universal Scene Description (OpenUSD)-based Omniverse digital twins for physical AI. The blueprints include:
- Mega, powered by Omniverse Sensor RTX APIs, for developing and testing robot fleets at scale in an industrial factory or warehouse digital twin before deployment in real-world facilities.
- Autonomous Vehicle (AV) Simulation, also powered by Omniverse Sensor RTX APIs, that lets AV developers replay driving data, generate new ground-truth data and perform closed-loop testing to accelerate their development pipelines.
- Omniverse Spatial Streaming to Apple Vision Pro, which helps developers create applications for immersive streaming of large-scale industrial digital twins to the headset.
- Real-Time Digital Twins for Computer Aided Engineering (CAE), a reference workflow built on NVIDIA CUDA-X acceleration, physics AI and Omniverse libraries that enables real-time physics visualization.
Sources: Press materials received from the company and additional information gleaned from the company’s website.
About the Author
DE Editors
DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via DE-Editors@digitaleng.news.