Computer makers unveil Nvidia Blackwell systems for AI rollouts



Nvidia CEO Jensen Huang announced at Computex that the world’s top computer manufacturers today are unveiling Nvidia Blackwell architecture-powered systems featuring Grace CPUs, Nvidia networking and infrastructure for enterprises to build AI factories and data centers.

Nvidia Blackwell graphics processing units (GPUs) promise up to 25 times lower cost and energy consumption for AI processing tasks. And the Nvidia GB200 Grace Blackwell Superchip (so called because it combines multiple chips in a single package) promises exceptional performance gains, delivering up to a 30-times performance increase for LLM inference workloads compared with previous generations.

Huang said that ASRock Rack, Asus, Gigabyte, Ingrasys, Inventec, Pegatron, QCT, Supermicro, Wistron and Wiwynn will deliver cloud, on-premises, embedded and edge AI systems using Nvidia GPUs and networking, aimed at advancing the next wave of generative AI.

“The next industrial revolution has begun. Companies and countries are partnering with Nvidia to shift the trillion-dollar traditional data centers to accelerated computing and build a new type of data center — AI factories — to produce a new commodity: artificial intelligence,” said Huang, in a statement. “From server, networking and infrastructure manufacturers to software developers, the whole industry is gearing up for Blackwell to accelerate AI-powered innovation for every field.”



To address applications of all types, the offerings will range from single to multi-GPUs, x86- to Grace-based processors, and air- to liquid-cooling technology.

Additionally, to speed up the development of systems of different sizes and configurations, the Nvidia MGX modular reference design platform now supports Blackwell products. This includes the new Nvidia GB200 NVL2 platform, built to deliver unparalleled performance for mainstream large language model inference, retrieval-augmented generation and data processing.

Jonney Shih, chairman at Asus, said in a statement, “ASUS is working with NVIDIA to take enterprise AI to new heights with our powerful server lineup, which we’ll be showcasing at COMPUTEX. Using NVIDIA’s MGX and Blackwell platforms, we’re able to craft tailored data center solutions built to handle customer workloads across training, inference, data analytics and HPC.”

GB200 NVL2 is ideally suited for emerging market opportunities such as data analytics, on which companies spend tens of billions of dollars annually. Taking advantage of the high-bandwidth memory performance provided by NVLink-C2C interconnects and the dedicated decompression engines in the Blackwell architecture, the GB200 NVL2 speeds up data processing by up to 18 times, with 8 times better energy efficiency compared to using x86 CPUs.

Modular reference architecture for accelerated computing

Nvidia’s Blackwell platform.

To meet the diverse accelerated computing needs of the world’s data centers, Nvidia MGX provides computer manufacturers with a reference architecture to quickly and cost-effectively build more than 100 system design configurations.

Manufacturers start with a basic system architecture for their server chassis and then select their GPU, DPU and CPU to address different workloads. To date, more than 90 systems from over 25 partners that leverage the MGX reference architecture have been released or are in development, up from 14 systems from six partners last year. Using MGX can help slash development costs by up to three-quarters and reduce development time by two-thirds, to just six months.

AMD and Intel are supporting the MGX architecture with plans to deliver, for the first time, their own CPU host processor module designs. This includes the next-generation AMD Turin platform and the Intel Xeon 6 processor with P-cores (formerly codenamed Granite Rapids). Any server system builder can use these reference designs to save development time while ensuring consistency in design and performance.

Nvidia’s latest platform, the GB200 NVL2, also leverages MGX and Blackwell. Its scale-out, single-node design enables a wide variety of system configurations and networking options to seamlessly integrate accelerated computing into existing data center infrastructure.

The GB200 NVL2 joins the Blackwell product lineup that includes Nvidia Blackwell Tensor Core GPUs, GB200 Grace Blackwell Superchips and the GB200 NVL72.

An ecosystem

Nvidia Blackwell has 208 billion transistors.

Nvidia’s comprehensive partner ecosystem includes TSMC, the world’s leading semiconductor manufacturer and an Nvidia foundry partner, as well as global electronics makers, which provide key components to create AI factories. These include manufacturing innovations such as server racks, power delivery, cooling solutions and more from companies such as Amphenol, Asia Vital Components (AVC), Cooler Master, Colder Products Company (CPC), Danfoss, Delta Electronics and LITEON.

As a result, new data center infrastructure can quickly be developed and deployed to meet the needs of the world’s enterprises — and further accelerated by Blackwell technology, Nvidia Quantum-2 or Quantum-X800 InfiniBand networking, Nvidia Spectrum-X Ethernet networking and Nvidia BlueField-3 DPUs — in servers from leading systems makers Dell Technologies, Hewlett Packard Enterprise and Lenovo.

Enterprises can also access the Nvidia AI Enterprise software platform, which includes Nvidia NIM inference microservices, to create and run production-grade generative AI applications.
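
NIM microservices are typically reached over an OpenAI-compatible HTTP API, so existing client code can be pointed at a deployed endpoint. The sketch below assumes a NIM container is already running locally; the base URL, API-key handling and model identifier are illustrative assumptions, not details taken from this announcement.

```python
# Minimal sketch: querying a locally deployed NIM inference microservice
# through its OpenAI-compatible endpoint. The URL and model name below are
# placeholders and depend on which NIM container you deploy.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used",                   # local deployments typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",      # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize what an AI factory is."}],
    max_tokens=128,
)

print(response.choices[0].message.content)
```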

Taiwan embraces Blackwell

Generative AI is driving Nvidia forward to Blackwell.

Huang also announced during his keynote that Taiwan’s leading companies are rapidly adopting Blackwell to bring the power of AI to their own businesses.

Taiwan’s leading medical center, Chang Gung Memorial Hospital, plans to use the Blackwell computing platform to advance biomedical research and accelerate imaging and language applications to improve clinical workflows, ultimately enhancing patient care.

Young Liu, CEO at Hon Hai Technology Group, said in a statement, “As generative AI transforms industries, Foxconn stands ready with cutting-edge solutions to meet the most diverse and demanding computing needs. Not only do we use the latest Blackwell platform in our own servers, but we also help provide the key components to Nvidia, giving our customers faster time-to-market.”

Foxconn, one of the world’s largest makers of electronics, is planning to use Nvidia Grace Blackwell to develop smart solution platforms for AI-powered electric vehicles and robotics, as well as a growing number of language-based generative AI services, to provide more personalized experiences to its customers.

Barry Lam, chairman of Quanta Computer, said in a statement, “We stand at the center of an AI-driven world, where innovation is accelerating like never before. Nvidia Blackwell is not just an engine; it is the spark igniting this industrial revolution. When defining the next era of generative AI, Quanta proudly joins NVIDIA on this amazing journey. Together, we will shape and define a new chapter of AI.”

Charles Liang, president and CEO at Supermicro, said in a statement, “Our building-block architecture and rack-scale, liquid-cooling solutions, combined with our in-house engineering and global production capacity of 5,000 racks per month, enable us to quickly deliver a wide range of game-changing Nvidia AI platform-based products to AI factories worldwide. Our liquid-cooled or air-cooled high-performance systems with rack-scale design, optimized for all products based on the Blackwell architecture, will give customers an incredible choice of platforms to meet their needs for next-level computing, as well as a major leap into the future of AI.”

C.C. Wei, CEO at TSMC, said in a statement, “TSMC works closely with Nvidia to push the limits of semiconductor innovation that enables them to realize their visions for AI. Our industry-leading semiconductor manufacturing technologies helped shape Nvidia’s groundbreaking GPUs, including those based on the Blackwell architecture.”


