
Photo: Tom's Hardware
Arm announced on Monday that future CPUs built on its architecture will be able to integrate seamlessly with Nvidia’s NVLink Fusion interconnect, a major step that enables custom Arm-based Neoverse chips to work directly with Nvidia’s market-leading GPUs. This collaboration marks one of the clearest signs yet that Nvidia is expanding NVLink support beyond its own CPU lineup, allowing cloud giants and chipmakers to design far more customized AI computing systems.
The move is particularly significant for hyperscalers—including Microsoft, Amazon, and Google—which have increasingly turned to in-house Arm-based processors to cut costs, increase performance per watt, and gain greater control over their data-center architectures. By opening NVLink Fusion to custom chips, Nvidia is positioning itself at the center of nearly every AI infrastructure roadmap, regardless of which CPU provider a company chooses.
Nvidia’s decision reflects the shifting balance of power in AI infrastructure. While Nvidia has traditionally marketed full-stack solutions such as the Grace Blackwell platform—pairing Nvidia GPUs with Nvidia’s own Arm-based CPU—the company is now embracing a more open ecosystem. That flexibility lets chips from multiple suppliers, including Intel and AMD, connect directly into GPU-driven AI servers.
Arm, for its part, does not manufacture chips but licenses its instruction set and sells reference designs that allow partners to quickly produce optimized CPU architectures. As part of the partnership, Arm confirmed that new custom Neoverse designs will include a protocol enabling ultra-high-bandwidth data transfer with Nvidia GPUs. This is critical because modern AI servers increasingly rely on large clusters of accelerators, often pairing as many as eight GPUs per CPU to support generative AI workloads.
Nvidia’s NVLink expansion follows its September announcement that it would invest $5 billion in Intel, with the goal of ensuring Intel CPUs can integrate smoothly into AI systems using NVLink. This collaboration is another indication that the AI hardware market is moving toward hybrid, multi-vendor configurations in which customers assemble their own “best-of-breed” systems rather than relying on prepackaged solutions.
The Arm–Nvidia relationship has been complex in recent years. Nvidia attempted to acquire Arm for $40 billion in 2020, but the deal collapsed in 2022 due to regulatory pushback in the U.S. and U.K. Nvidia once held a minority stake in Arm; meanwhile, Arm’s majority owner, SoftBank, sold its entire Nvidia position earlier this month. SoftBank is now backing the OpenAI Stargate supercomputing project, which intends to use a combination of Arm-based CPUs, Nvidia GPUs, and AMD processors.
Generative AI workloads are radically reshaping the role of CPUs. Historically the central component in servers, CPUs now function largely as support systems for GPU clusters that perform most of the heavy lifting. This dynamic has left hyperscalers pushing for CPUs tailored to their own requirements—and Arm’s flexible licensing model is well-positioned to meet that demand.
By enabling NVLink Fusion across a broader range of custom chips, Nvidia is future-proofing its dominance in AI accelerators. The partnership ensures that as more companies design their own CPUs, those chips can still plug directly into Nvidia’s ecosystem without sacrificing performance, bandwidth, or efficiency.
In a market where speed, scale, and customization define competitive advantage, the Arm–Nvidia alignment underscores a powerful trend: the next generation of AI infrastructure will be collaborative, modular, and driven by deep integration across multiple chipmakers.