Keith Heyde, 36, a former Meta AI compute executive, has spent the last 10 months spearheading OpenAI’s secretive search for its Stargate data center network—massive facilities designed to power advanced artificial intelligence models. Since joining OpenAI in early 2025, Heyde has shifted his focus from optimizing cloud usage to building proprietary infrastructure, turning CEO Sam Altman’s vision for next-generation AI into a tangible reality.
Rather than a traditional holiday break, Heyde spent late December touring potential sites across the U.S., evaluating land for its scalability, power capacity, and community support. “My family loved that, trust me,” he joked, underscoring the all-consuming nature of the role.
Since January, OpenAI has reviewed around 800 proposals from prospective sites across the Southwest, Midwest, and Southeast. The list has now narrowed to approximately 20 sites in advanced diligence, each under assessment for multi-gigawatt power availability, rapid construction feasibility, and local governmental cooperation.
“Tax incentives are a relatively small part of the decision,” Heyde noted. “The main focus is whether we can build quickly, ramp up power effectively, and have community support.”
The stakes are immense. A single one-gigawatt data center can consume as much electricity as an entire mid-sized city. OpenAI’s plans call for a 17-gigawatt expansion, supported by strategic partners including Oracle, Nvidia, and SoftBank. These sites will integrate solar, battery storage, gas turbine retrofits, and even small modular nuclear reactors to meet the unprecedented energy demands of AI computation.
Owning first-party infrastructure represents a major shift for OpenAI. The company has historically relied on cloud providers; controlling its own data centers lets it cut out vendor markups, protect intellectual property, and scale AI operations without depending on others’ capacity.
Heyde and his industrial compute team are tackling complex challenges, from ensuring sufficient GPU capacity to navigating labor availability and zoning requirements. With approximately 100 site visits completed, the team is balancing the need for speed with flexibility, including converting existing facilities when necessary.
“The perfect parcels are largely taken,” Heyde said. “But perfect wasn’t the goal—the goal was a compelling power ramp and scalable infrastructure.”
OpenAI is entering a fiercely competitive landscape. Meta is building a $10 billion data center in Louisiana, Amazon and Anthropic are developing a 1,200-acre AI campus in Indiana, and states across the U.S. are offering tax incentives and expedited approvals to attract AI infrastructure projects.
Despite being a relative newcomer—founded just a decade ago and mainstream-famous for ChatGPT for less than three years—OpenAI has raised substantial capital from Microsoft, Nvidia, and SoftBank, pushing its valuation toward $500 billion. The firm has even activated a self-built solar campus in Abilene, Texas, demonstrating its ability to execute large-scale infrastructure projects independently.
Unlike traditional cloud or enterprise computing, AI at OpenAI’s scale has no precedent. Each Stargate facility represents a leap in computational power and energy requirements, and Heyde emphasizes that the logistics are “very challenging, but possible.”
Some applicants, including former bitcoin mining operators, offered pre-existing power infrastructure, but Heyde stressed that site suitability often depends on community integration and long-term growth potential, not just immediate capacity.
The 20 finalist sites are just phase one of OpenAI’s ambitious buildout, which aims to eventually scale from single-gigawatt installations to massive multi-gigawatt campuses nationwide.
As OpenAI moves from cloud customer to full-fledged infrastructure owner, the company is redefining what it means to scale AI. With Heyde at the helm of site development, the Stargate network is poised to become the backbone of its next-generation AI, helping OpenAI maintain its competitive edge in an increasingly high-stakes industry.
“Controlling our own infrastructure is central to the future of AI,” Heyde said. “It’s a daunting challenge, but the scale we’re aiming for is achievable with the right sites, the right partners, and the right strategy.”