
NVIDIA is at the centre of Generative AI boom

Note: This feature was first published on 29 May 2023.

Jensen Huang on stage during Computex 2023. Image source: NVIDIA.

All the fuss around Generative AI

Much has been made of how Generative AI can help in our daily lives, and tech companies from Microsoft and Google to Meta and Intel have all been jumping on the Generative AI bandwagon.

And NVIDIA is perfectly poised to ride this wave of Generative AI to success as more companies turn to its chips to power their AI hardware. Where the company was once known mainly for GPUs in gaming PCs and cryptomining rigs, NVIDIA hardware now underpins most AI applications. So we’re following NVIDIA’s AI-related announcements coming out of Computex 2023 in Taipei today.

Using Generative AI to deliver creative content

Some of the creative content from WPP. Image source: NVIDIA.

But while Generative AI solutions are growing quickly, where and how they will be used is still an open question. Some, like ChatGPT, have already found their niche, but most experts agree that the winners will be those that can use the technology to improve operational efficiency or generate content at scale.

For example, DALL-E 2 is an AI image generator that creates images from natural-language descriptions, Observe.AI helps automate workflows to accelerate business outcomes, and Jasper.AI helps users create written content faster.
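To make the text-to-image idea concrete, here is a minimal sketch of driving DALL-E 2 programmatically, assuming the OpenAI Python SDK is installed and an API key is set in the environment; the prompt and image size below are placeholders, not anything from NVIDIA’s announcements.

```python
# Minimal sketch: text-to-image with DALL-E 2 via the OpenAI Python SDK.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",                                              # DALL-E 2 image model
    prompt="a photoreal product shot of a sports car at sunset",   # hypothetical prompt
    n=1,                                                           # number of images
    size="1024x1024",                                              # output resolution
)

print(response.data[0].url)  # URL of the generated image
```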

In fact, we’re seeing actual deployments of Generative AI in the real world as companies move to embrace it in ways that meet their specific needs. As we’ve covered previously, NVIDIA has partnered with creative agencies to improve productivity for creative professionals.

For example, creative company WPP and NVIDIA announced that they are developing a content engine that harnesses NVIDIA AI and Omniverse to enable creative teams to produce high-quality commercial content faster, more efficiently and at scale, while staying aligned to a client’s branding.

Based on Omniverse Cloud, WPP uses generative AI tools and content from partners such as Adobe and Getty Images so its designers can create varied, high-fidelity images from text prompts and bring them into scenes. This includes Adobe Firefly, a family of creative generative AI models, and exclusive visual content from Getty Images created using NVIDIA Picasso, a foundry for custom generative AI models for visual design.

WPP is able to connect its product-design data from software such as Adobe’s Substance 3D tools for 3D and immersive content creation, plus computer-aided design tools to create brand-accurate, photoreal digital twins of client products.

Lastly, to deliver the final scenes, creative teams can render large volumes of brand-accurate 2D images and videos for classic advertising, or publish interactive 3D product configurators to NVIDIA Graphics Delivery Network, a worldwide graphics streaming network, for consumers to experience on any web device.

Getting the infrastructure right

The Grace Hopper superchip. Image source: NVIDIA.

To ensure that Generative AI is able to run smoothly, NVIDIA has announced a new class of large-memory AI supercomputer: the NVIDIA DGX GH200.

Created to enable the development of giant, next-generation models for generative AI language applications, recommender systems and data analytics workloads, the DGX GH200 supercomputer is powered by NVIDIA GH200 Grace Hopper Superchips and the NVIDIA NVLink Switch System.

The NVIDIA DGX GH200’s massive, shared memory space uses NVLink interconnect technology with the NVLink Switch System to combine 256 GH200 superchips into a single giant GPU. This allows it to provide 48 times more NVLink bandwidth than the previous generation, delivering the power of a massive AI supercomputer with the simplicity of programming a single GPU.

In terms of performance, the DGX GH200 delivers 1 exaflop of AI compute and 144 terabytes of shared memory, nearly 500x more memory than in a single NVIDIA DGX A100 system. The previous generation only allowed eight GPUs to be combined with NVLink as one GPU without compromising performance.
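As a rough sanity check on that 144TB figure, here is a quick back-of-envelope calculation; the per-superchip split of 96GB of HBM3 plus 480GB of LPDDR5X is our assumption based on published GH200 specifications, not a figure from this announcement.

```python
# Back-of-envelope check of the DGX GH200's 144 TB shared memory figure.
# Assumption: each GH200 Grace Hopper Superchip contributes roughly
# 96 GB of HBM3 (GPU) + 480 GB of LPDDR5X (CPU) = 576 GB of addressable memory.
superchips = 256                 # GH200 Superchips combined via the NVLink Switch System
gb_per_superchip = 96 + 480      # assumed HBM3 + LPDDR5X per Superchip, in GB

total_gb = superchips * gb_per_superchip
print(f"{total_gb} GB ≈ {total_gb / 1024:.0f} TiB")  # 147456 GB ≈ 144 TiB
```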

Say hello to Helios

The Helios will be based on four NVIDIA DGX GH200 servers. Image source: NVIDIA.

Building on this performance, NVIDIA has announced plans to build its own DGX GH200-based AI supercomputer.

NVIDIA Helios, as the supercomputer will be called, will feature four DGX GH200 systems interconnected with NVIDIA Quantum-2 InfiniBand networking to supercharge data throughput for training large AI models. Helios will include 1,024 Grace Hopper Superchips and is expected to come online by the end of the year.

Launching AI-focused servers based on NVIDIA’s MGX specs

The NVIDIA MGX provides reference architecture to quickly and cost-effectively build more than 100 server variations. Image source: NVIDIA.

NVIDIA announced the launch of the NVIDIA MGX server specification. This forms the blueprint for a modular reference architecture so hardware manufacturers can quickly and cost-effectively build more than 100 server variations to suit a wide range of AI, high-performance computing and Omniverse applications.

To begin an MGX deployment, manufacturers start with a basic system architecture optimised for accelerated computing for their server chassis, and then select their GPU, DPU and CPU. Design variations can address unique workloads, such as HPC, data science, large language models, edge computing, graphics and video, enterprise AI, and design and simulation. MGX-based servers can also be integrated into cloud and enterprise data centres.
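To make the pick-and-mix idea concrete, here is a purely hypothetical sketch of how a manufacturer might model an MGX-style build; the class and part names are ours for illustration only and do not correspond to any actual NVIDIA tool or API.

```python
# Hypothetical illustration of MGX-style modular selection: start from a chassis
# reference design, then pick the CPU, GPU and DPU for the target workload.
# None of these names come from an actual NVIDIA tool or API.
from dataclasses import dataclass

@dataclass
class ServerBuild:
    chassis: str    # reference chassis form factor (e.g. 1U, 2U, 4U)
    cpu: str
    gpu: str
    dpu: str
    workload: str

# Two variations on the same reference architecture for different workloads.
llm_training = ServerBuild("4U", cpu="Grace", gpu="H100", dpu="BlueField-3",
                           workload="large language models")
edge_video = ServerBuild("1U", cpu="x86", gpu="L4", dpu="BlueField-3",
                         workload="graphics and video at the edge")

for build in (llm_training, edge_video):
    print(build)
```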

ASRock Rack, ASUS, GIGABYTE, Pegatron, QCT, and Supermicro will adopt MGX, which can slash development costs by up to three-quarters and reduce development time by two-thirds to just six months. SoftBank Corp. plans to roll out multiple hyperscale data centres across Japan and use MGX to dynamically allocate GPU resources between generative AI and 5G applications.

Move into an AI Cloud with NVIDIA Spectrum-X

NVIDIA Spectrum-X is an accelerated Ethernet platform for hyperscale generative AI. Image source: NVIDIA.

Finally, NVIDIA Spectrum-X combines the Spectrum-4 Ethernet switch with the BlueField-3 DPU and acceleration software to create an accelerated Ethernet platform designed to improve the performance and efficiency of Ethernet-based AI clouds.

According to NVIDIA, this allows NVIDIA Spectrum-X to reach 1.7x better overall AI performance and power efficiency, along with consistent, predictable performance in multi-tenant environments.

The platform starts with Spectrum-4, the world’s first 51Tb/sec Ethernet switch built specifically for AI networks. Advanced RoCE extensions work together with the Spectrum-4 switches, BlueField-3 DPUs and LinkX optics to create an end-to-end 400GbE network that is optimised for AI clouds.
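As a quick back-of-envelope check on how those numbers fit together, assuming the switch’s full 51.2Tb/sec capacity (the unrounded figure behind the “51Tb/sec” quoted above):

```python
# Rough check: how many 400GbE ports a 51.2 Tb/s switch can drive at line rate.
# 51.2 Tb/s is our assumed unrounded capacity behind the "51Tb/sec" figure above.
switch_capacity_gbps = 51_200     # Spectrum-4 switching capacity in Gb/s
port_speed_gbps = 400             # 400GbE links used in the Spectrum-X design

print(switch_capacity_gbps // port_speed_gbps, "x 400GbE ports at line rate")  # 128
```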

As a blueprint and testbed for NVIDIA Spectrum-X reference designs, NVIDIA is building Israel-1, a hyperscale generative AI supercomputer to be deployed in its Israeli data centre on Dell PowerEdge XE9680 servers based on the NVIDIA HGX H100 eight-GPU platform, BlueField-3 DPUs and Spectrum-4 switches.
