Qualcomm announced on Monday its plans to launch new artificial intelligence accelerator chips, intensifying competition with Nvidia, which currently dominates the AI semiconductor market. Following the announcement, Qualcomm’s stock surged by 11%.
These AI chips represent a strategic shift for Qualcomm, which has primarily focused on semiconductors for wireless connectivity and mobile devices, rather than large-scale data centers. The new AI200 chip, set to debut in 2026, and the AI250, expected in 2027, will be integrated into a system that occupies an entire liquid-cooled server rack.
Qualcomm aims to compete with Nvidia and AMD, both of which provide full-rack systems capable of housing up to 72 graphics processing units (GPUs) that operate as a single computer—essential for AI labs needing substantial computing power for advanced models. Qualcomm’s data center chips will utilize the Hexagon neural processing units (NPUs) from its smartphone chips.
Durga Malladi, Qualcomm’s general manager for data center and edge, stated that the company first wanted to establish its credibility in other sectors before expanding into the data center market. This entry signals new competition in the rapidly growing technology sector focused on AI server farms, with an estimated $6.7 trillion in capital expenditures projected for data centers through 2030, primarily for AI-based systems, according to McKinsey.
Nvidia currently holds over 90% of the GPU market, with its chips being essential for training models like OpenAI’s GPT. However, companies such as OpenAI are exploring alternatives, having recently announced intentions to procure chips from AMD, the second-largest GPU maker. Other tech giants, including Google, Amazon, and Microsoft, are also developing their own AI accelerators for cloud services.
Qualcomm’s chips will emphasize inference—running already-trained AI models—rather than training, the more compute-intensive process of building models from vast amounts of data. The company claims its rack-scale systems will cost less for cloud service providers to operate, with each rack consuming 160 kilowatts of power, comparable to some Nvidia GPU setups.
Malladi noted that Qualcomm plans to offer its AI chips and components separately, catering to clients like hyperscalers who prefer custom rack designs. He mentioned that other AI chip manufacturers, including Nvidia and AMD, might even become customers for some of Qualcomm’s data center components, such as CPUs.
While Qualcomm did not disclose pricing details for its chips, cards, or racks, it recently partnered with Saudi Arabia’s Humain to supply AI inference chips for regional data centers, committing to deploy systems that could utilize up to 200 megawatts of power.
Qualcomm claims its AI chips offer advantages over competitors in power consumption, cost-effectiveness, and a novel memory management approach, supporting up to 768 gigabytes of memory—surpassing Nvidia’s and AMD’s offerings.
Source: CNBC. Edited by Bernie.