“The world’s most powerful chip” — Nvidia says its new Blackwell is set to power the next generation of AI

Celebrity Gig

The next generation of AI will be powered by Nvidia hardware, the company has declared as it revealed its latest generation of GPUs.

Company CEO Jensen Huang took the wraps off the new Blackwell chips at Nvidia GTC 2024 today, promising a major step forward in terms of AI power and efficiency.

The first Blackwell “superchip”, the GB200, is set to ship later this year, with the ability to scale up from a single rack all the way to an entire data center, as Nvidia looks to extend its leadership in the AI race.

Nvidia Blackwell

Representing a significant step forward from its predecessor, Hopper, Blackwell contains 208 billion transistors (up from 80 billion in Hopper) across its two GPU dies, which are connected by a 10 TB/second chip-to-chip link into a single, unified GPU, Huang noted.


This makes Blackwell up to 30x faster than Hopper on AI inference tasks, offering up to 20 petaflops of FP4 compute, which Nvidia claims puts it far ahead of anything else on the market today.


During his keynote, Huang highlighted not only the huge jump in power between Blackwell and Hopper – but also the major difference in size.


“Blackwell’s not a chip, it’s the name of a platform,” Huang said. “Hopper is fantastic, but we need bigger GPUs.”

Despite this, Nvidia says Blackwell can reduce cost and energy consumption by up to 25x. The company gave the example of training a 1.8 trillion parameter model, which it says would previously have required 8,000 Hopper GPUs and 15 megawatts of power, but can now be done with just 2,000 Blackwell GPUs consuming four megawatts.
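As a rough sanity check on those keynote figures (a sketch of the raw ratios only, not Nvidia's methodology; the headline 25x covers cost and energy together, which these numbers alone don't capture), the stated GPU and power reductions work out as follows:

```python
# Figures quoted in the GTC keynote for training a 1.8T-parameter model
hopper_gpus, hopper_megawatts = 8000, 15
blackwell_gpus, blackwell_megawatts = 2000, 4

gpu_reduction = hopper_gpus / blackwell_gpus            # 4.0x fewer GPUs
power_reduction = hopper_megawatts / blackwell_megawatts  # 3.75x less power

print(f"{gpu_reduction:.2f}x fewer GPUs, {power_reduction:.2f}x less power")
```

So the quoted hardware and power savings are roughly 4x each; the larger 25x claim presumably also reflects per-chip cost and efficiency factors Nvidia has not broken out.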

The new GB200 brings together two Nvidia B200 Tensor Core GPUs and a Grace CPU to create what the company simply calls “a massive superchip” able to drive forward AI development, providing seven times the performance and four times the training speed of an H100-powered system.


The company also revealed a next-gen NVLink network switch chip with 50 billion transistors, allowing up to 576 GPUs to communicate with each other with 1.8 terabytes per second of bidirectional bandwidth.

Nvidia has already signed up a host of major partners to build Blackwell-powered systems, with AWS, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure on board alongside other big industry names.
