The top five product announcements from Nvidia’s GPU Technology Conference


Nvidia Corp.’s GPU Technology Conference, held in digital form this week, has grown over the years from a hardcore tech event for pure propeller heads into an industry event showcasing the latest and greatest innovations in accelerated computing.

This spans a wide variety of use cases, including ray tracing, autonomous vehicles, artificial intelligence, machine learning and more. In addition to offering some industry vision, Nvidia typically uses GTC to take the covers off new products, and this year was no different. Here are GTC’s top five product announcements:

Omniverse Replicator

Virtual worlds are all the rage. In the past few weeks we’ve seen Facebook Inc. change its company name to Meta Platforms Inc. to get into the Metaverse, and Microsoft Corp. announced its own metaverse vision with 3D avatars for Microsoft Teams.

At GTC, Nvidia made a number of announcements related to its version of the metaverse, the Omniverse, that will make it easier for customers to train AI models and make those virtual worlds more realistic.

Nvidia’s Omniverse Replicator is a synthetic data generation engine that creates simulated data for training neural networks that would power a virtual world. It’s easy to be skeptical of the use of virtual worlds, but Nvidia came up with what I would call low-hanging fruit for the Omniverse.

Nvidia DRIVE Sim is a virtual world for creating digital twins of autonomous vehicles. Training cars to drive can take tens of thousands of hours and millions of kilometers of travel to replicate all sorts of scenarios, many of which are difficult to reproduce in the physical world. For example, a self-driving car’s sensors can be unreliable when the setting sun sits exactly at the horizon.

Car manufacturers can only test this scenario in real life for a few minutes a day. With Omniverse, the sun can be held at that exact point and the virtual car can be driven for hundreds of hours, which dramatically speeds up training time.

Nvidia Isaac Sim is similar but designed for robots. Training robots can be costly and time-consuming, because a robot has to learn how to navigate up and down sloping streets, how to avoid objects, which objects are moving, which are fragile, and many other scenarios. With Isaac Sim this can all be done in the virtual world, and when training is finished the model is loaded onto the physical robot, ready to work.

Synthetic data is important because it complements real-world data, which can be labor-intensive, error-prone, skewed and expensive to collect. Omniverse Replicator can also create data that is difficult for humans to capture, such as the sunset example, or objects moving at high speed, at high altitude or shallow depth, or in inclement weather.
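Conceptually, a synthetic data pipeline sweeps scene parameters that are rare or dangerous to capture in the real world and emits labeled samples for training. The sketch below is a minimal illustration of that idea in plain Python; the scene parameters and the render_scene and generate_dataset functions are hypothetical stand-ins, not the actual Omniverse Replicator API.

```python
import random

# Hypothetical stand-ins: a real pipeline would call a renderer such as
# Omniverse Replicator; here render_scene() just returns a dict so the
# sketch stays self-contained and runnable.
def render_scene(sun_elevation_deg, weather, object_speed_mps):
    """Pretend to render a frame and return the image plus ground-truth labels."""
    image = f"<frame sun={sun_elevation_deg:.1f} deg, {weather}, v={object_speed_mps:.0f} m/s>"
    labels = {
        "sun_elevation_deg": sun_elevation_deg,
        "weather": weather,
        "object_speed_mps": object_speed_mps,
    }
    return image, labels

def generate_dataset(num_samples):
    """Sweep hard-to-capture conditions: sun pinned near the horizon,
    bad weather, fast-moving objects."""
    dataset = []
    for _ in range(num_samples):
        sample = render_scene(
            sun_elevation_deg=random.uniform(-1.0, 2.0),
            weather=random.choice(["clear", "rain", "fog", "snow"]),
            object_speed_mps=random.uniform(0, 60),
        )
        dataset.append(sample)
    return dataset

if __name__ == "__main__":
    for image, labels in generate_dataset(3):
        print(image, labels)
```

The key point the sketch tries to capture is that the ground-truth labels come for free from the simulator, avoiding the manual labeling that makes real-world data so expensive and error-prone.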

Omniverse Avatar

This product combines speech AI, computer vision, natural language processing, simulation and recommendation engines to create interactive, intelligent 3D avatars. Training enables the avatars to understand language and hold real conversations. Nvidia positions customer service as the first use case, where an avatar could take a restaurant order, schedule an appointment or book a hotel room.
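As a rough illustration of how those pieces fit together, the sketch below chains placeholder speech-recognition, language-understanding, response and speech-synthesis steps into a single request/response loop. Every function here is a hypothetical stub I wrote for illustration, not part of Nvidia’s Omniverse Avatar software.

```python
# A toy conversational turn for an interactive avatar: hear, understand,
# answer, speak. All functions are hypothetical placeholders, not Nvidia APIs.

def speech_to_text(audio: bytes) -> str:
    return "I'd like to book a table for two at 7 pm"   # stubbed speech recognition

def understand(text: str) -> dict:
    # A real system would run NLP here; this stub just tags a booking intent.
    return {"intent": "book_table", "party_size": 2, "time": "19:00"}

def respond(intent: dict) -> str:
    if intent["intent"] == "book_table":
        return f"Sure, a table for {intent['party_size']} at {intent['time']} is booked."
    return "Sorry, could you repeat that?"

def text_to_speech(text: str) -> bytes:
    return text.encode()   # stubbed speech synthesis

def avatar_turn(audio_in: bytes) -> bytes:
    """One full conversational turn, from customer audio in to avatar audio out."""
    text = speech_to_text(audio_in)
    intent = understand(text)
    reply = respond(intent)
    return text_to_speech(reply)

if __name__ == "__main__":
    print(avatar_turn(b"<customer audio>").decode())
```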

Virtual agents are already in use today, but they are mostly text-based, and Omniverse Avatar shifts that to a 3D interactive virtual person. Microsoft showed an example of people working through Teams with avatar-based colleagues, but I’m not sure how appealing that would be. For face-to-face interactions, the Webex Hologram product that Cisco debuted at its recent WebexOne event made more sense because I could see the actual person.

In customer service, however, the use cases I highlighted are fine for an avatar because they are basic tasks. Anything more critical, such as dealing with money or healthcare, is still better left to a human being. But for quick transactions, avatars could be an inexpensive way to provide faster, better service.

Zero trust cybersecurity platform

There’s no hotter topic in cybersecurity than Zero Trust. An easy way to think about Zero Trust is to flip the traditional network model 180 degrees. Internet Protocol networks were built on the concept that anything can talk to anything, which is why the internet works so well.

Unfortunately, that also gives hackers access to everything once they breach a single point on the network. Zero Trust prohibits access to anything unless it is specifically allowed.
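To make the inversion concrete, here is a minimal sketch of a default-deny policy check in Python: instead of letting any source reach any destination, every request is refused unless it matches an explicit allow rule. The rule names are made up for illustration and have nothing to do with Nvidia’s platform.

```python
# Default-deny ("zero trust") policy check: a request is rejected unless an
# explicit rule allows that identity to reach that service. Illustrative only.

ALLOW_RULES = {
    ("analytics-svc", "warehouse-db:5432"),
    ("web-frontend", "orders-api:443"),
}

def is_allowed(identity: str, destination: str) -> bool:
    """Classic IP networks default to allow; Zero Trust defaults to deny."""
    return (identity, destination) in ALLOW_RULES

print(is_allowed("web-frontend", "orders-api:443"))     # True: explicitly allowed
print(is_allowed("web-frontend", "warehouse-db:5432"))  # False: denied by default
```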

While the concept of Zero Trust is simple, its implementation is not. The rise of 5G, WiFi 6, the “Internet of Things”, work from home and the cloud has greatly increased the attack surface for companies, making the process of implementing Zero Trust complex and computationally intensive.

At GTC, Nvidia announced a zero trust platform that combines its BlueField data processing units, the DOCA software development kit for BlueField and the Morpheus security AI framework. The DPUs play a key role: they offload the processor-heavy security tasks that would otherwise run on the central processing units of firewalls and servers and drive up the cost of those devices. A DPU can handle processes such as validating users and isolating data, freeing firewalls and other devices to do what they were designed to do.

DOCA 1.2 and Morpheus provide the developer tools and AI frameworks used to analyze network and application traffic, review logs and customize Zero Trust deployments. During the launch, Juniper Networks Inc. and Palo Alto Networks Inc. were announced as partners for the Zero Trust platform.

Clara Holoscan

Nvidia Clara is a healthcare application framework for AI-powered imaging, genomics, and smart hospitals. Clara Holoscan enables developers to create applications that process sensor data, render high quality graphics, and perform AI inferences to improve medical device technology.

Although medical devices are very diverse, they usually process data in the same way. Data is collected, analyzed and then visualized for human decision-making, and Clara Holoscan addresses each phase.

In fact, Holoscan draws on a wide range of Nvidia technologies to address the various aspects of medical AI. For example, Omniverse can be used to render visual data that can then be manipulated to run what-if scenarios, while the Nvidia Triton Inference Server classifies, segments and tracks objects.

During his keynote, Chief Executive Jensen Huang (pictured) presented a number of examples of medical devices infused with AI, such as the Medtronic Hugo surgical robot, Johnson & Johnson’s robotic endoscopy system and the Stryker AIRO intraoperative CT scanner.

Earth Two

The keynote ended with Huang announcing that Nvidia will build a simulated Earth, or Earth Two as he put it. The purpose is not to create a multiverse or a science-fiction fantasy, but to study and predict climate change. Every business of any size has net-zero plans and has committed to making the world a better place, but how do they know their efforts will result in meaningful change?

Earth Two can be used to run global simulations that show whether, if world and business leaders come together and agree on certain actions, we will get positive results by certain dates. This can help organization leaders change plans if necessary.

I can imagine such a tool being used heavily at events like the World Economic Forum, where climate change has become a hot topic. Earth Two would allow delegates to make informed decisions rather than just making decisions based on hope.

Photo: Robert Hof / SiliconANGLE
