
Cerebras CEO Andrew Feldman, front row, second from left, participates in a ribbon-cutting ceremony for the company’s data center in Oklahoma City on Sept. 22, 2025.
Artificial intelligence chipmaker Cerebras Systems is gaining new attention in the fast-growing AI infrastructure market after being highlighted by Oracle leadership alongside industry giants Nvidia and AMD.
During Oracle’s latest quarterly earnings call with investors, Clay Magouyrk, one of the company’s two chief executives and the leader of its cloud infrastructure business, pointed to Cerebras as a key provider of advanced computing hardware used within Oracle’s expanding data center network.
The acknowledgement places Cerebras in the same conversation as the dominant players powering the modern AI boom.
Magouyrk explained that Oracle’s cloud infrastructure is designed to support everything from small experimental workloads to massive AI training clusters, requiring a diverse mix of hardware accelerators.
According to him, Oracle’s platform includes the newest processors from Nvidia and AMD as well as emerging accelerator technologies from companies like Cerebras and other startups developing specialized AI chips.
For a younger company competing in a market dominated by trillion-dollar technology firms, being included in Oracle’s infrastructure ecosystem represents a significant milestone.
Landing a major cloud provider as a customer could give Cerebras a significant boost as it prepares for a future public offering.
Large cloud platforms such as Oracle, Microsoft Azure, Google Cloud and Amazon Web Services collectively operate hundreds of data centers and purchase enormous volumes of AI computing hardware each year.
Even a small share of that demand can translate into billions of dollars in potential revenue.
Oracle has been aggressively expanding its AI infrastructure footprint over the past two years, building large-scale data centers designed to support generative AI training and inference workloads.
During the earnings call, the company reported that its remaining performance obligations had surged to approximately $553 billion, more than four times the year-earlier figure, reflecting massive demand for cloud services and AI computing capacity.
Executives emphasized that continued investment in data centers, computing hardware and customer relationships will likely become even more valuable as global demand for AI systems accelerates.
Cerebras has developed one of the most unusual and powerful AI processors in the industry.
The company’s flagship technology is the Wafer-Scale Engine (WSE) chip architecture, which dramatically differs from traditional graphics processing units used by competitors.
Its newest processor, the WSE-3, is built using an entire semiconductor wafer instead of slicing the wafer into smaller chips. This approach allows the processor to include over four trillion transistors and roughly 900,000 AI cores, making it one of the largest chips ever produced.
The design is optimized for training massive AI models and handling extremely large datasets while minimizing communication delays between computing units.
Cerebras also operates cloud-based AI services that allow companies to run machine learning workloads directly on its specialized hardware.
These systems are increasingly used for applications ranging from natural language processing to scientific research and advanced robotics.
Cerebras has been working toward becoming a publicly traded company, although its initial timeline was delayed.
The firm filed for an initial public offering in 2024 but later withdrew the application amid uncertain market conditions.
Shortly afterward, the company secured a major $1.1 billion funding round, valuing the business at about $8.1 billion. Leadership has stated that going public remains part of the company’s long-term plan once market conditions improve.
For potential investors, one of the key concerns previously highlighted in Cerebras’ financial disclosures was its heavy dependence on a single major customer.
During the first half of 2024, G42, an artificial intelligence and cloud computing company based in Abu Dhabi, accounted for approximately 87% of Cerebras’ total revenue.
While that partnership has been lucrative, relying so heavily on one client raised questions among analysts about revenue concentration risk.
Securing additional high-profile customers like Oracle could help diversify the company’s income sources and strengthen its case for a successful IPO.
Cerebras has also been expanding its partnerships within the generative AI ecosystem.
Earlier this year the company announced a $10 billion commitment from OpenAI and related partners to support the deployment of its AI computing systems.
OpenAI, one of the most prominent developers of generative AI technology, relies on large-scale cloud infrastructure for training and deploying its models.
The collaboration between OpenAI and Cerebras became more visible when the organizations launched a research preview of Codex-Spark, an AI model designed specifically for software development tasks.
The system is capable of rapidly generating and analyzing programming code and is being tested by ChatGPT Pro users as part of an experimental rollout.
The partnership reflects a broader trend in which AI developers are exploring alternative hardware architectures to complement or compete with traditional GPUs.
The global AI chip market is expanding rapidly as demand for computing power skyrockets.
Industry analysts estimate that spending on AI-focused semiconductor hardware could surpass $300 billion annually by the end of the decade, driven by generative AI, data analytics and autonomous systems.
Nvidia remains the dominant force in this market, with its GPUs powering the majority of AI training clusters worldwide. The company’s market value has surged as demand for its chips continues to outpace supply.
However, competition is increasing.
Advanced Micro Devices has been rolling out new AI accelerator chips designed to challenge Nvidia’s dominance in data centers.
Meanwhile, startups like Cerebras, Groq and Positron are experimenting with entirely new architectures aimed at delivering faster inference speeds, lower power consumption and reduced costs for large-scale AI deployments.
The race to improve AI computing efficiency has become particularly important as models grow larger and more expensive to run.
Even as new challengers emerge, Nvidia continues expanding its influence across the AI ecosystem.
The company has been investing heavily in new technologies and acquisitions aimed at strengthening its position.
In late 2025, Nvidia reportedly acquired key assets from AI chip startup Groq in a deal valued at roughly $20 billion, signaling its intention to incorporate new design concepts into future hardware.
Industry insiders expect Nvidia to unveil additional breakthroughs during its annual GTC developer conference, one of the most important events in the global AI industry.
These developments highlight how rapidly the hardware landscape is evolving as companies compete to provide the computing infrastructure powering artificial intelligence.
One of the central challenges facing AI developers today is improving the speed and efficiency of running AI models once they are trained.
This process, known as inference, occurs whenever a user asks an AI system to generate text, analyze data or produce an image.
Reducing the cost and latency of inference is becoming a major focus for both chip designers and cloud providers.
According to Oracle executives, emerging accelerator technologies from companies like Cerebras are attracting attention because they attempt to address these performance bottlenecks in new ways.
The goal is not only to reduce the cost of running large AI systems but also to dramatically shorten the time required for responses.
Faster inference means AI systems can support real-time applications such as autonomous vehicles, medical diagnostics, cybersecurity monitoring and complex scientific simulations.
Despite the intense competition, the AI infrastructure market is expanding so quickly that multiple companies are likely to succeed.
The scale of computing required to train and operate modern AI models continues to grow dramatically each year.
Leading AI models now require tens of thousands of high-performance processors running simultaneously for weeks or months during training cycles.
As organizations across industries integrate artificial intelligence into their operations, demand for specialized hardware is expected to remain extremely strong.
For Cerebras, gaining recognition from a major cloud provider like Oracle marks another step toward becoming a serious contender in the global race to build the next generation of AI computing platforms.