A set of soon-to-be-available microservices from Nvidia, known as Omniverse Cloud Sensor RTX, will reportedly provide physically accurate sensor simulation to accelerate the development of fully autonomous machines of any kind. According to Nvidia, developers using Omniverse Cloud Sensor RTX can test sensor perception and the associated artificial intelligence (AI) software at scale in physically accurate, realistic virtual environments before real-world deployment.
Omniverse Cloud Sensor RTX will also enable sensor manufacturers to validate and integrate digital twins of their sensors in virtual environments, reducing the time needed for physical prototyping.
Built on the OpenUSD framework and powered by Nvidia RTX ray-tracing and neural-rendering technologies, Omniverse Cloud Sensor RTX accelerates the creation of simulated environments by combining real-world data from videos, cameras, radar and lidar with synthetic data.
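For readers unfamiliar with OpenUSD, the sketch below shows what a minimal scene describing a sensor rig might look like, written with the open-source pxr Python API. Nvidia has not published the Sensor RTX interface itself, so the file name, prim paths and camera parameters here are illustrative assumptions, not the product's actual API.

    from pxr import Usd, UsdGeom, Gf

    # Create a new USD stage describing a simple sensor rig.
    # (File name and prim paths are hypothetical examples.)
    stage = Usd.Stage.CreateNew("sensor_rig.usda")
    UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

    # Root transform for the vehicle or robot that carries the sensor.
    UsdGeom.Xform.Define(stage, "/Rig")

    # A camera prim; an RTX-class renderer would ray-trace frames from it.
    camera = UsdGeom.Camera.Define(stage, "/Rig/FrontCamera")
    camera.CreateFocalLengthAttr(24.0)         # focal length, USD camera units
    camera.CreateHorizontalApertureAttr(36.0)  # aperture width, USD camera units

    # Mount the camera 0.5 units forward and 1.5 units up on the rig.
    UsdGeom.Xformable(camera.GetPrim()).AddTranslateOp().Set(Gf.Vec3d(0.0, 0.5, 1.5))

    stage.GetRootLayer().Save()

In a pipeline like the one Nvidia describes, a scene of this kind would be combined with recorded sensor logs and rendered at scale to produce synthetic data for training and validation.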
Even for scenarios with limited real-world data, the microservices can be used to simulate a broad range of conditions, such as whether a robotic arm is operating correctly, whether a factory conveyor belt is in motion, or whether a robot or person is nearby.
“Developing safe and reliable autonomous machines powered by generative physical AI requires training and testing in physically based virtual worlds,” said Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia. “Omniverse Cloud Sensor RTX microservices will enable developers to easily build large-scale digital twins of factories…helping accelerate the next wave of AI.”
Foretellix and MathWorks are among the first software developers to receive access to Omniverse Cloud Sensor RTX from Nvidia for autonomous vehicle development.