Nvidia to make a new AI chip every year to avoid slowdown

mohitsiddhi


FILE PHOTO: Nvidia CEO Jensen Huang stated that the company will design new AI chips every year instead of once every two years. | Photo Credit: AP

After announcing higher-than-estimated revenue during its quarterly earnings call, Nvidia CEO Jensen Huang said the company will design new AI chips every year instead of once every two years. “I can announce that after Blackwell, there’s another chip. We’re on a one-year rhythm,” Huang said. The move means the AI chipmaker intends to press the advantage it has gained from the boom in artificial intelligence.

The company posted $14 billion in profit in its first quarter, with shares trading above $1,000.

Nvidia released the Ampere chip in 2020, followed by a new architecture, Hopper, in 2022 and then Blackwell earlier this year. A report published earlier this month by analyst Ming-Chi Kuo stated that the next chip, called Rubin, should be released in 2025; it is expected to power R100 GPUs. Until now, tech firms training AI models have been clamouring for the industry favourite, the H100 GPU.

Huang added that the company plans to accelerate production of other chips as well. “We’re going to take them all forward at a very fast clip. New CPUs, new GPUs, new networking NICs, new switches… a mountain of chips are coming,” he said. Huang explained that companies will be able to transition seamlessly from, say, Hopper to Blackwell, because the new AI chips are electrically and mechanically backward-compatible and run on the same software.


With the AI race hotly contested, Big Tech companies have rushed to buy Nvidia’s GPUs, which remain in high demand. In January this year, Meta CEO Mark Zuckerberg revealed plans to snap up about 350,000 H100 GPUs for compute. Self-driving companies, too, are buying more compute: Huang pointed out that Tesla had purchased 35,000 GPUs to train its full self-driving system.

Over the past year, Nvidia sold 2.5 million chips, each costing a steep $15,000 or so.

Huang described why demand is so high. “The next company who reaches the next major plateau gets to announce a groundbreaking AI, and the second one after that gets to announce something that’s 0.3 percent better. Do you want to be the company delivering groundbreaking AI, or the company, you know, delivering 0.3 percent better?” he asked.

However, the cost of compute and surging demand have pushed tech companies to make their own AI chips, presenting a challenge to Nvidia’s dominance. Huang’s decision to move fast signals the need to maintain Nvidia’s lead in AI computing.
