Friday, May 17, 2024

NVIDIA is enabling super streaming videos and speeding up generative AI on your RTX system
(Image source: NVIDIA)

Earlier this year, NVIDIA showcased a variety of AI-powered content creator tools, such as an updated NVIDIA Broadcast, Canvas 1.4, RTX Video Super Resolution (VSR), and NVIDIA's Omniverse AI ToyBox for generating 3D assets via generative AI.

Now, in the latest Game Ready Driver, NVIDIA is bringing VSR version 1.5, which uses an updated and retrained VSR AI model to better differentiate between subtle details and compression artefacts. This helps it preserve finer detail and produce sharper, crisper videos.

One additional improvement: the original VSR feature could only enhance videos that were being upscaled, such as a lower-quality video stretched to better suit your display resolution. The updated VSR 1.5 works on videos streamed at their native resolution too. So a 1080p video streamed on a 1080p display should now look smoother, with fewer artefacts, thanks to VSR working intelligently in the background. Whether you're watching YouTube, Netflix, Amazon Prime or a random video embed on the web, you'll get even more out of your favourite shows.

This feature will be available on any system equipped with an RTX-capable GPU, be it a professional RTX graphics card or a GeForce RTX 20 series GPU.

(Image source: NVIDIA)

To better utilise the Tensor Cores (available on all RTX-class systems) responsible for the fabulous AI processing, TensorRT-LLM for Windows will now accelerate inference performance by up to 4x for large language models (LLMs) like Llama 2 and Code Llama. TensorRT-LLM is an open-source library that accelerates and optimises inference performance for the latest LLMs on NVIDIA Tensor Core-equipped GPUs. As the importance of generative AI grows stronger day by day, so do the sizes of LLMs, which in turn drives up the cost and complexity of deploying them. This is why NVIDIA worked closely with leading LLM companies to accelerate and optimise LLM inferencing performance.

TensorRT-LLM was initially released last month for the NVIDIA H100 data centre GPU, where it doubled inference performance, and it's great to see that the TensorRT-LLM library is now available on Windows to supercharge RTX-powered Windows systems.

Last but not least, TensorRT acceleration is now available for Stable Diffusion in Automatic1111's popular Web UI distribution. It speeds up the generative AI diffusion model by up to 2x over the previous fastest implementation. That means you can create novel works of art quicker than ever before and realise your creations sooner, where they would otherwise have taken hours on an ill-equipped system. NVIDIA says that on a GeForce RTX 4090, it can run 7x faster than the best implementation on a Mac running Apple's M2 Ultra silicon.

Source: NVIDIA

