Energy availability is the primary bottleneck for the widespread adoption of AI, surpassing the challenge of GPU supply.
A vertically integrated model controlling land, power, and data center construction is necessary to deliver AI infrastructure at the speed the market demands.
Traditional utility timelines are incompatible with the pace of AI development, with gigawatt-scale power requests potentially facing interconnection delays of over a decade.
Specialized 'NeoCloud' providers like Crusoe can outcompete legacy clouds on AI workloads by offering purpose-built infrastructure designed for the unique complexity of large GPU clusters.
In deregulated energy markets like ERCOT, large data centers can function as dynamic grid assets, acting as both a load and a generator by selling excess power back to the grid.
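The load-plus-generator idea above can be sketched as a toy dispatch rule. This is purely illustrative (the function, prices, and thresholds are assumptions, not anything described in the source): when the spot price of electricity exceeds the value the facility gets from running compute, it pays to curtail the AI workload and sell power to the grid instead.

```python
# Illustrative sketch only -- not Crusoe's actual control logic.
# A facility with on-site generation in a deregulated market (e.g. ERCOT)
# can compare the grid spot price against the marginal value of compute,
# curtailing AI workloads and selling power when electricity is worth more.

def dispatch(spot_price_mwh: float, compute_value_mwh: float,
             onsite_generation_mw: float, it_load_mw: float) -> dict:
    """Decide how to allocate on-site generation for one interval.

    All figures are hypothetical; a real dispatcher would also weigh
    SLAs, ramp constraints, and ancillary-service commitments.
    """
    if spot_price_mwh > compute_value_mwh:
        # Power is worth more on the grid: curtail the load, sell it all.
        return {"run_load_mw": 0.0, "sell_to_grid_mw": onsite_generation_mw}
    # Otherwise run the AI workload and sell only the surplus.
    surplus = max(0.0, onsite_generation_mw - it_load_mw)
    return {"run_load_mw": min(it_load_mw, onsite_generation_mw),
            "sell_to_grid_mw": surplus}

# During a price spike the facility flips from consumer to seller:
print(dispatch(3000.0, 200.0, onsite_generation_mw=300.0, it_load_mw=250.0))
# In normal conditions it runs the cluster and sells the surplus:
print(dispatch(40.0, 200.0, onsite_generation_mw=300.0, it_load_mw=250.0))
```

The point of the sketch is only that the decision is dynamic: the same megawatts serve as load or as merchant generation depending on real-time prices.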
Vertical Integration as a Competitive Differentiator (Apr 2026)
Lochmiller positions Crusoe as a vertically integrated AI infrastructure business that manages the entire value chain, from land and power development to GPU cluster deployment. This model is presented as the key to delivering complex 'AI factories' on rapid timelines, such as completing an initial facility in 11 months versus a competitor's bid of two and a half years.
This strategy directly targets the primary pain point for hyperscalers, long lead times for power and data center capacity, and positions Crusoe as a critical enabler for AI companies that prioritize speed-to-market over reliance on traditional infrastructure providers.
The Abilene Project: A Case Study in Hyperscale AI Infrastructure (Apr 2026)
The data center campus in Abilene, Texas, is Crusoe's flagship project, built to support major clients like OpenAI and Oracle. With a total planned capacity of 1.2 gigawatts and an initial project value of $15 billion, it exemplifies the massive scale and capital investment required for modern AI workloads.
The project's successful financing by major institutions like J.P. Morgan and Blue Owl serves as a powerful validation of Crusoe's business model and its ability to execute on capital-intensive infrastructure projects at a scale that few specialized companies can attempt.
Energy Availability as the Primary Bottleneck for AI (Apr 2026)
A core tenet of Lochmiller's perspective is that energy availability, not just GPU supply, is the fundamental constraint on AI's expansion. He argues that the power demands of AI will require massive new investment in generation, as traditional utility interconnection queues can stretch for more than a decade.
By framing the problem around energy, Lochmiller positions Crusoe's core competency—developing power and infrastructure in tandem—as the essential solution to the next major challenge in the AI arms race.
Positioning as a Specialized 'NeoCloud' Provider (Apr 2026)
Lochmiller places Crusoe within a new category of 'NeoCloud' providers that differentiate themselves from legacy cloud giants through specialization in AI workloads. This means designing data centers specifically for the density, cooling, and networking demands of large GPU clusters, with support for multiple future chip architectures.
This niche positioning suggests a strategy of not competing with AWS or Google on breadth, but rather on depth and performance for the most demanding and lucrative segment of the cloud market, thereby capturing high-value AI customers.