
Broadcom is strengthening its position at the center of the artificial intelligence infrastructure boom, unveiling expanded partnerships with Google and Anthropic that highlight the accelerating demand for high-performance computing at scale. The agreements not only reinforce Broadcom’s role in custom AI chip manufacturing but also underscore the massive capital being deployed to support the next wave of generative AI innovation.
At the core of the announcement is Broadcom’s continued collaboration with Google to develop future generations of artificial intelligence chips. The company will play a key role in producing updated versions of Google’s proprietary tensor processing units, or TPUs, which are specifically designed to handle large-scale machine learning workloads more efficiently than traditional chips.
In parallel, Broadcom revealed an expanded agreement involving Anthropic, one of the fastest-growing AI startups globally. The deal will provide Anthropic access to approximately 3.5 gigawatts of computing power built on Google’s AI infrastructure. To put this into perspective, that level of compute capacity draws as much power as several mid-sized cities combined, illustrating the enormous scale required to train and operate advanced AI models.
Investor response was immediate, with Broadcom shares rising around 3 percent in extended trading following the announcement. The market reaction reflects growing confidence that the company is becoming a critical supplier in the AI value chain, particularly as demand shifts toward custom silicon solutions optimized for specific workloads.
Anthropic’s rapid growth is a major driver behind this surge in infrastructure demand. The company has seen its annualized revenue climb past 30 billion dollars, a sharp increase from roughly 9 billion dollars just months earlier. Its enterprise footprint is also expanding quickly: more than 1,000 corporate clients now spend over 1 million dollars annually on its AI services, roughly double the number from only a short time before. That pace of growth among high-value customers highlights how quickly adoption is accelerating across industries.
To support this growth, Anthropic is aggressively scaling its infrastructure footprint, with most of the new capacity expected to be deployed in the United States. The partnership with Broadcom and Google is central to this strategy, enabling the company to secure the compute resources necessary to handle increasingly complex AI workloads while maintaining performance and reliability.
Broadcom’s leadership has already signaled that this is just the beginning. The company expects to deliver around 1 gigawatt of compute capacity to Anthropic through Google’s TPU systems in 2026. Demand, however, is projected to exceed 3 gigawatts by 2027, a threefold increase in a single year. This trajectory points to a broader industry trend in which compute requirements are scaling rapidly alongside model complexity.
Financially, the implications are significant. Industry analysts estimate that Broadcom could generate as much as 21 billion dollars in AI-related revenue from Anthropic partnerships in 2026, with that figure potentially doubling to over 40 billion dollars in 2027. While official contract values have not been disclosed, these projections highlight the scale of opportunity in the AI semiconductor market.
Beyond Anthropic, Broadcom is also diversifying its partnerships across the AI ecosystem. The company is working with other major players, including collaborations on custom silicon solutions for OpenAI. This reflects a broader shift in the industry, where leading AI developers are increasingly seeking tailored hardware to reduce dependence on traditional graphics processing units and improve efficiency.
Currently, much of the AI sector still relies heavily on GPUs supplied by dominant chipmakers and accessed through cloud platforms operated by companies like Amazon, Google, and Microsoft. However, the move toward custom-designed chips, such as TPUs and other specialized accelerators, signals a new phase of competition focused on performance optimization and cost control.
Adding to this competitive landscape, alternative chip providers are also gaining traction. OpenAI, for example, has committed to utilizing up to 6 gigawatts of GPU capacity from AMD, with initial deployments expected to begin later this year. This diversification of supply chains indicates that the race to power AI is expanding beyond a single vendor ecosystem.
Overall, Broadcom’s latest deals highlight a critical shift in the technology industry. As artificial intelligence becomes more deeply embedded across sectors, the demand for scalable, energy-intensive computing infrastructure is rising at an unprecedented pace. Companies that can deliver efficient, high-performance chips are poised to capture a substantial share of this rapidly growing market, and Broadcom is positioning itself as one of the key players driving that transformation.