Qualcomm has announced two new AI accelerator chips for the data centre market, positioning itself as a direct challenger to Nvidia and AMD. The company’s shares surged 11% after it revealed the AI200 and AI250, designed to power large-scale inference systems.

The chips, set for release in 2026 and 2027 respectively, signal Qualcomm’s move beyond mobile and wireless semiconductors into enterprise AI infrastructure. Both models will ship in full liquid-cooled server racks, allowing up to 72 chips to operate as a single computing system—an approach that mirrors Nvidia’s rack-scale GPU systems.

Durga Malladi, Qualcomm’s general manager for data centre and edge, said the company’s experience in smartphone neural processing units laid the foundation for scaling up to data centre-level performance. “We built our strength in smaller domains first, which made it easier to step up into large-scale systems,” he noted.

Qualcomm’s entry into the AI data centre market adds competition to a rapidly growing sector projected to attract $6.7 trillion in capital spending by 2030. Nvidia currently dominates the space with more than 90% market share, but cloud providers and AI labs—including OpenAI, Google, and Microsoft—are actively seeking alternative hardware options.

Qualcomm said its AI systems would offer lower operational costs, improved energy efficiency, and up to 768 gigabytes of memory per card—more than some existing GPU solutions. The firm is also collaborating with Saudi Arabia’s Humain to deploy AI inferencing systems consuming up to 200 megawatts of power.

As demand for scalable and efficient AI hardware accelerates, Qualcomm’s move could redefine how enterprises modernise infrastructure for the next wave of digital transformation.
