As was widely expected, Nvidia (NVDA) has unveiled next-generation artificial intelligence (A.I.) microchips at its developer conference in San Jose, California.
Nvidia chief executive officer (CEO) Jensen Huang introduced the new A.I. graphics processors during his keynote address, calling them the “Blackwell” series.
The first Blackwell chip is called the “GB200” and will ship later this year.
The new chips are being introduced as companies ranging from Microsoft (MSFT) to Meta Platforms (META) struggle to get their hands on Nvidia’s current generation of H100 microchips that power A.I. applications and models.
During his presentation, Huang also introduced revenue-generating software called Nvidia Inference Microservices (NIM), which is being added to the company’s enterprise software subscription. NIM is designed to make it easier for companies to deploy A.I. models.
Huang said that Nvidia is positioning itself less as a microchip provider and more as a platform provider, like Microsoft and Apple (AAPL), on which other companies can build software.
“Blackwell’s not a chip, it’s the name of a platform,” said Huang in his address.
Every two years, Nvidia updates its graphics processing unit (GPU) architecture, delivering a big jump in performance.
The company says that its Blackwell-based processors, such as the GB200, offer a huge performance upgrade for A.I. companies.
The additional processing power will enable companies to train bigger and more sophisticated A.I. models going forward.
The new GB200 chip includes what Nvidia calls a “transformer engine,” which is designed to run transformer-based A.I., one of the core technologies underpinning chatbots such as ChatGPT.
Nvidia didn’t disclose pricing for the new GB200 microchip. The company’s current H100 chips cost $25,000 U.S. to $40,000 U.S. each.
Nvidia’s stock has risen 242% over the past 12 months and now trades at $884.55 U.S. per share.