New NVIDIA H200 GPU Sets the Standard for AI Technology

Nvidia is taking AI technology to the next level with its just-unveiled H200 GPU. The new class-leading chip builds on the success of its predecessor, the highly sought-after H100, offering greater memory capacity and bandwidth for improved performance in generative AI and large language models (LLMs).

The H200 offers roughly 1.4 times the memory bandwidth and nearly 1.8 times the memory capacity of the H100, thanks to its use of the new HBM3e memory specification. That translates to 4.8 terabytes per second of memory bandwidth and 141GB of total memory, up from the H100's 3.35 terabytes per second and 80GB.
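For readers who want to see where the "1.4 times" and "1.8 times" figures come from, the short Python sketch below simply recomputes the ratios from the spec numbers quoted in this article (it assumes those quoted figures rather than an official datasheet).

```python
# Quick arithmetic check of the ratios quoted above.
# Spec values are the ones stated in this article, not an official datasheet.
h100_bandwidth_tb_s, h100_memory_gb = 3.35, 80   # H100 (HBM3)
h200_bandwidth_tb_s, h200_memory_gb = 4.8, 141   # H200 (HBM3e)

print(f"Bandwidth: {h200_bandwidth_tb_s / h100_bandwidth_tb_s:.2f}x the H100")  # ~1.43x
print(f"Capacity:  {h200_memory_gb / h100_memory_gb:.2f}x the H100")            # ~1.76x
```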

In a video presentation, Ian Buck, Nvidia's VP of high-performance computing products, said that the H200's faster, larger HBM memory accelerates demanding workloads such as generative AI models and high-performance computing applications while making more efficient use of the GPU.

The H200 is designed to be easily integrated into systems already compatible with H100 GPUs, ensuring a seamless transition for users. Cloud service providers like Amazon, Google, Microsoft, and Oracle are among the first to adopt the new GPUs, with availability expected in the second quarter of 2024.

While Nvidia has not disclosed the pricing for the H200, previous generation H100 GPUs were estimated to range from $25,000 to $40,000 each, making them a significant investment for companies utilizing AI technology. Despite the introduction of the H200, Nvidia reassures customers that production of the H100 will continue uninterrupted to meet ongoing demand.

As the demand for AI technology continues to soar, Nvidia’s announcement of the H200 GPU comes at a crucial time for companies in need of cutting-edge computational power. With plans to triple H100 production in 2024 and the introduction of the H200, Nvidia is poised to meet the growing demand for GPUs tailored for generative AI and large language models in the coming year.
