Nvidia on Monday unveiled the H200, a graphics processing unit designed for training and deploying the kinds of AI models that are powering the generative AI boom.
The new GPU is an upgrade from the H100, the chip OpenAI used to train its most advanced large language model, GPT-4. Big companies, startups, and government agencies are all vying for a limited supply of the chips.
H100 chips cost between $25,000 and $40,000, according to an estimate from Raymond James, and thousands of them working together are needed to create the biggest models in a process known as “training.”
Excitement over Nvidia’s AI GPUs has driven the company’s shares up more than 230% in 2023. Nvidia expects about $16 billion in revenue for its fiscal third quarter, up 170% from a year ago.
The key improvement in the H200 is its 141GB of next-generation “HBM3” memory, which will help the chip perform “inference,” that is, using a trained model to generate text, images, or predictions.
Nvidia says the H200 will generate output nearly twice as fast as the H100, based on a test using Meta’s Llama 2 LLM.
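To make the idea of “inference” concrete, here is a minimal sketch of loading a trained Llama 2 model and generating text with the Hugging Face transformers library. The checkpoint name, prompt, and generation settings are illustrative assumptions, not details from the article or from Nvidia’s benchmark.

```python
# Minimal sketch of LLM "inference": a trained model producing new text.
# Assumes the Hugging Face `transformers` and `accelerate` packages and
# access to Meta's Llama 2 weights (gated on the Hub); the checkpoint
# name and settings below are illustrative, not from the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # place model layers on available GPU(s)
    torch_dtype="auto",  # use the checkpoint's native precision
)

prompt = "Explain what GPU inference means in one sentence:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation is the inference step: the trained model emits new tokens,
# and memory bandwidth/capacity (like the H200's HBM3) governs its speed.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```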
The H200, expected to ship in the second quarter of 2024, will compete with AMD’s MI300X GPU. Like the H200, AMD’s chip has more memory than its predecessors, which helps large models fit on the hardware for inference.
Source (CNBC)