Google is preparing to provide construction financing for a multibillion-dollar data center campus in Texas leased to AI startup Anthropic, according to people familiar with the plans. The project, operated by Nexus Data Centers, has an initial phase that could exceed $5 billion, and a consortium of banks is also competing to arrange longer-term financing by midyear.
Anthropic has signed a lease for a roughly 2,800-acre campus that is part of a broader infrastructure agreement with Google. Construction is already under way, initially financed by debt from Eagle Point, a publicly traded closed-end investment company. The site is expected to deliver about 500 megawatts of capacity by late 2026, roughly the electricity demand of 500,000 homes, with room to expand to as much as 7.7 gigawatts.
The Texas location sits near major gas pipeline networks run by operators such as Enterprise Products Partners, Energy Transfer and Atmos Energy, allowing the campus to rely on on‑site gas turbines for some of its power needs. The scale and strategic siting underscore growing competition among hyperscalers and AI firms to secure dedicated, high‑capacity infrastructure for large models.
Separately, Anthropic is facing regulatory and legal challenges in the United States. A federal judge in San Francisco granted a preliminary injunction that temporarily prevents the Department of Defense from formally designating Anthropic a national security risk and from halting federal use of the company’s Claude AI tools. The injunction pauses a directive backed by President Donald Trump that aimed to cut off federal agencies from using Anthropic’s chatbot.
The injunction followed a lawsuit by Anthropic arguing that Defense Secretary Pete Hegseth exceeded his authority in labeling the company a supply chain risk. In her ruling, Judge Rita Lin described the government's actions as arbitrary and cautioned against branding a domestic company a threat without a clear legal basis. She also raised concerns that the measures may have been retaliatory and could implicate First Amendment protections.
The dispute grew out of failed negotiations between Anthropic and the Pentagon over how the company’s models might be used. Anthropic resisted terms that would permit its technology to be applied to lethal autonomous weapons or broad mass surveillance, contributing to a standoff with parts of the administration.
Reports have also indicated that U.S. military units used Anthropic's Claude model in planning for a major airstrike on Iran, even after the administration announced restrictions. Military commands, including U.S. Central Command (CENTCOM), reportedly relied on the model for operational support, highlighting the tension between battlefield adoption of AI tools and regulatory efforts to limit their use.