NVIDIA has unveiled the first performance teaser of its next-gen Blackwell B100 GPUs, which will more than double the performance of the Hopper H200 in 2024.
NVIDIA Blackwell B100 AI GPUs To Offer More Than 2x Performance Versus Hopper H200 GPUs In 2024
During its SC23 special address, NVIDIA teased the performance of its next-gen GPUs, codenamed Blackwell, which will offer more than 2x the AI performance of Hopper GPUs when they make their debut in 2024. The GPU used was the next-generation B100, which will succeed the Hopper H200 and can be seen crushing the GPT-3 175B inference benchmark, showcasing its massive AI performance potential.
For the past two years, NVIDIA has relied on its Hopper and Ampere GPUs to serve the needs of AI and HPC customers worldwide in collaboration with various partners, but all of that is about to change in 2024 with the arrival of Blackwell. NVIDIA saw a big boost to its data center and overall company revenue thanks to the AI craze, and that train is going full steam ahead as the green team aims to launch two brand-new GPU families by 2025.
The first of these new AI/HPC GPU families from NVIDIA will be Blackwell, named after the statistician David Harold Blackwell (1919-2010). The lineup will succeed the GH200 Hopper series and will use the B100 chip. The company plans to offer various products, including the GB200NVL (NVLINK), the standard GB200, and the B40 for visual compute acceleration. The next-gen lineup is expected to be unveiled at GTC 2024, followed by a launch later that year.
Current rumors suggest that NVIDIA will use TSMC's 3nm process node to produce its Blackwell GPUs, with the first chips delivered to customers by the end of 2024 (Q4), though the latest reports indicate that NVIDIA is fast-tracking production to Q2 2024, the same window in which its recently announced Hopper H200 GPUs will become available. Samsung is also said to be a major memory supplier for NVIDIA's next-gen Blackwell GPUs.
The GPU is also expected to be NVIDIA's first HPC/AI accelerator to utilize a chiplet design, and it will compete with AMD's Instinct MI300 accelerator, which the red team has touted as a major contender in the AI space.
The other chip that has been disclosed is the GX200, the follow-up to Blackwell, with a launch scheduled for 2025. NVIDIA has been following a two-year cadence for its AI and HPC products, so we may only see an announcement of the chip in 2025, with actual units commencing shipments by 2026.
The lineup will be based on the X100 GPU and will include a GX200 lineup of products plus a separate X40 lineup for enterprise customers. NVIDIA is known to name its GPUs after well-known scientists, and since it already uses the Xavier codename for its Jetson series, we can expect a different scientist's name for the X100 series. Beyond that, little is known about the X100 GPUs, but the codename is at least an improvement over the generic "Hopper-Next" placeholder NVIDIA used in prior roadmaps.
NVIDIA also plans to deliver major "doubling" upgrades to its Quantum and Spectrum-X platforms with new BlueField and Spectrum products, offering up to 800 Gb/s transfer speeds by 2024 and up to 1600 Gb/s by 2025. These new networking and interconnect interfaces will also go a long way toward helping the HPC/AI segment achieve the required performance.
NVIDIA Data Center / AI GPU Roadmap