Throughout last year in Silicon Valley, a company's status in the AI industry was dictated by the number of Nvidia-made graphics processing units, or GPUs, it owned. Microsoft, Amazon, Meta and Tesla are "GPU-rich".
Nvidia, which pivoted from making hardware for games and graphics to manufacturing AI chips in 2010, is reaping the rewards of its prescient strategy. The company currently holds about 70% of the GPU market. (Its third-quarter results, reported at the end of November, showed year-on-year revenue growth of 206%.)
But Big Tech firms are not relying on Nvidia alone. In fact, Amazon, Microsoft, Alphabet and Meta have all either unveiled their own custom AI chipsets or plan to launch one soon. Google's latest and most powerful AI model, Gemini, announced in December, was trained using the company's own tensor processing unit, or TPU, chips. (Google pitches TPUs as the next generation of GPUs.)
Given this backdrop, what role can upstart chip makers play in a market dominated by tech giants?
“Upstart companies have their task cut out for them,” said Sangeet Paul Choudhary, management consultant and author of Platform Scale. “As companies look to target an enterprise audience and build highly fine-tuned models, they will look to build purpose-fit AI chips that best deliver performance within that context. This will drive greater vertical integration overall.”
Nvidia’s demand isn’t bolstered by chips alone – it is a one-stop shop for developers. “Getting vertical play right involves engaging the whole ecosystem. In Nvidia’s case, it has much higher uptake in research [based on citations], has higher developer engagement, even more so now with the Hugging Face partnerships. Cuda [Nvidia’s programming model for general-purpose computing on its GPUs] is the toolkit developers prefer most. [So,] just building better chips won’t cut it,” he added.
Investment is low, but open to all
Investors, already wary, were reportedly staying away from a tough industry that Nvidia has made even harder. PitchBook data showed that U.S. chipmaking startups raised $881.4 million through the end of August 2023, a considerable decline from $1.79 billion during the first three quarters of 2022. The number of deals also dropped from 23 to just four over the same period.
But the sluggishness wasn’t confined to chipmaking. According to PitchBook’s First Look data packs, global VC funding through the fourth quarter fell to roughly $345 billion, down from $531 billion in 2022.
“The funding requirements for AI chipmaking companies are generally very high and can be as high as 8–10 times those of a typical startup at the initial stage, due to the large investment required in R&D. Moreover, it takes at least two years to develop a medium-complexity chip. These factors mean a longer wait for investors to reap the rewards, and at higher risk,” said Madhukar Bhardwaj, principal investor at Physis Capital.
While Nvidia’s dominance is evident and is making it difficult for other companies to attract funds, it is worth noting “that the market is always receptive to innovators with revolutionary products,” he said.
In September, Santa Clara-based AI chip startup d-Matrix raised $110 million in a Series B funding round led by Singapore’s Temasek, Microsoft and Playground Global. The startup is not focused on training massive AI models; instead, it chose something specialised – inference, i.e., using an already-trained model to make predictions from data.
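d-Matrix's hardware is proprietary, but the distinction it is betting on can be sketched in a few lines of plain Python (a hypothetical toy model, not anything from d-Matrix): training repeatedly adjusts a model's parameters, while inference simply runs the finished model on new input.

```python
# Toy sketch (not d-Matrix's stack) contrasting training with inference:
# training is an expensive iterative loop; inference is one cheap forward pass.

def train(xs, ys, lr=0.05, epochs=1000):
    """Fit y = w*x + b by gradient descent: the compute-heavy phase."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def infer(w, b, x):
    """Inference: apply the already-fitted model. No gradients, no updates."""
    return w * x + b

# Toy data drawn from y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = train(xs, ys)
print(infer(w, b, 10.0))  # close to 2*10 + 1 = 21
```

The asymmetry is the point: training loops over the data thousands of times, while inference is a single multiply-and-add, which is why specialised inference silicon can trade generality for efficiency.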
In August, another AI hardware startup, Tenstorrent, founded by pioneering chip architect Jim Keller, raised $100 million in a convertible note funding round co-led by Hyundai Motor Group and Samsung Catalyst Fund. The startup recently launched a service that allows clients to use AI models without buying them. A spokesperson for the firm said, “While Nvidia is currently a dominating force in the AI chip industry, we do believe there’s still room for viable competitors to rise up. We think the way to compete with Nvidia is to deliver a full solution (hardware and software) without requiring the user to change their workflow. We think we can challenge Nvidia with an open-source platform.”
OpenAI CEO Sam Altman signed a deal worth $51 million with chip startup Rain Neuromorphics. The company is developing a neuromorphic processing unit, or NPU, a chip whose design is modelled on the human brain. The company claims NPUs promise 100 times more computing power and 10,000 times greater energy efficiency than GPUs.
Challengers
There are other young companies that have risen to the challenge. Tiny Corp. designs ARM-based training and inference chips for edge computing; Modular develops parallel accelerator chips for training and inference (both offer speed at low cost and are viewed as alternatives to Nvidia’s Cuda ecosystem). Another startup, MatX, designs neural network inference chips focused on edge applications.
“This is just the beginning of what we are seeing in this industry. I am sure there is much more to come. Despite the perception that Nvidia has a monopoly, I believe the next five years will be a ramp-up. The AI chipmaking pie is huge, and Nvidia’s portion, even if big, is only part of a market forecast to reach $200–300 billion over the next decade,” said Ashok Chandak, president of the India Electronics and Semiconductor Association (IESA).
Chandak believes there are several factors behind this: “Firstly, competencies will build up. We will see AI applications in healthcare, automotive and robotics; this isn’t just restricted to large language models. Secondly, GPUs aren’t the only source of compute. Depending on the use case, there are CPUs from Intel or AMD, so there’s plenty of opportunity. Thirdly, edge computing will grow scope-wise and percolate to smaller applications like smart cameras or medical instruments or safety tools,” he explained.
Nvidia has woken the industry up and will be an enabler in the long run, he noted.