Public Compute Redeployment Program
Turning AI Overcapacity into a National Scientific Asset
Summary
The U.S. government should establish a public compute redeployment program that allows the Department of Energy (DOE) to purchase idle GPU hours from AI data centers. This would have two key positive effects. First, an established program would provide a stable source of demand for the growing AI industry, allowing companies to use their computing resources efficiently by keeping them at maximum utilization. This would be especially important in the case of an AI bubble collapse, mitigating some of the macroeconomic damage that a financial crisis could cause. Second, it would dramatically increase the computing resources available to scientists at the DOE and across the country. An abundance of computing power would accelerate the development of advanced technologies and unlock a deeper scientific understanding of our world.
Challenge and Opportunity
Rapid infrastructure build-outs in American history, such as the railroad and telecom booms, have often followed a familiar pattern: an initial burst of large capital expenditures on technology with unproven revenue streams, followed by a financial crash that leaves much of the newly built infrastructure unused. Many observers have noted that similarly large capital expenditures are now being poured into training AI models with unproven revenue streams. This technology could ultimately justify the enormous valuations driving the AI infrastructure buildout, but significant risks could derail its deployment. The future of the large language models (LLMs) behind today’s leading frontier systems is predicated on scaling laws, which hypothesize that these models will continue to grow in capability as model size, dataset size, and compute budgets increase. It is unknown whether these scaling laws will continue to hold as models grow. Additionally, while the number of AI applications is increasing, it has not been proven that companies can generate the revenues required to match their immense valuations. These risks are compounded by increasingly circular investment patterns and the relatively opaque financing methods that companies are using to fund construction.
These risks may be resolved in the coming years, but if history is any indicator, there is a non-negligible chance that a market correction will seriously damage these companies and the broader economy. In response to past financial collapses driven by speculation in unstable assets, the U.S. government has stepped in to bail out critical industries and stabilize the economy. In these instances, the taxpayer has rarely received compensation for underwriting the risk-taking of large corporations.
The prevailing assumption is that this infrastructure will be used solely for AI workloads, but that view neglects an important use case for computing resources: scientific computing. Scientific computing is a powerful and proven tool that scientists and engineers use to understand the natural world and develop increasingly advanced technologies. In the 1980s, the U.S. Department of Energy’s Office of Energy Research established what is now the Advanced Scientific Computing Research (ASCR) program. For the past four decades, this program has been at the forefront of building and managing advanced high-performance supercomputers that, in DOE’s words, “address some of the biggest challenges facing our world, from modeling climate change to protecting national security to designing new kinds of materials.”
There is a clear opportunity for synergy between the DOE and AI companies. At the moment, AI companies are struggling to put all of their infrastructure to use: only 7% of companies surveyed in 2024 reported that their GPU infrastructure achieved 85% utilization. This leaves room for scientific computing to slot into the excess capacity at these data centers. By purchasing GPU hours from the private sector, the U.S. government becomes an important stopgap customer for a nascent industry. In the event of a market crash, the DOE could be funded to purchase additional GPU hours from the private sector, eliminating the need for a blank-check bailout. Under this model, the government fulfills its important market-stabilizing role while acquiring computing resources to supercharge the Department of Energy’s national security and public innovation missions.
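To make the scale of that headroom concrete, the back-of-the-envelope sketch below estimates idle GPU-hours for a hypothetical fleet. The fleet size and utilization rate are illustrative assumptions, not measured figures from any provider.

    # Back-of-the-envelope estimate of idle capacity. All figures are
    # hypothetical placeholders, not measured data from any provider.
    GPUS = 100_000            # assumed accelerators across a large fleet
    HOURS_PER_YEAR = 8_760    # 24 hours x 365 days
    UTILIZATION = 0.60        # assumed average utilization, well below 85%

    idle_gpu_hours = GPUS * HOURS_PER_YEAR * (1 - UTILIZATION)
    print(f"Idle capacity: {idle_gpu_hours:,.0f} GPU-hours per year")
    # -> Idle capacity: 350,400,000 GPU-hours per year

Even under these conservative assumptions, the unused capacity of a single large fleet amounts to hundreds of millions of GPU-hours per year, far more than most academic research groups can access today.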
Plan of Action
To effectively capitalize on the unique opportunities presented by the rapid infrastructure development surrounding artificial intelligence, the Department of Energy requires authorization and funding to establish a Public Compute Redeployment program. The DOE should:
Develop Software Frameworks and Libraries. DOE national laboratories create many of the open-source libraries and frameworks that underpin a large portion of the scientific computing codes in use today. Funding will be needed to create a new generation of frameworks that allow scientists and engineers to adapt their existing codes to run optimally on AI infrastructure (see the illustrative sketch following this list).
Build Partnerships with Frontier AI Labs. The program should establish partnerships with frontier AI companies that are constructing large-scale data centers, with the explicit goal of purchasing idle GPU hours for public research use. These partnerships should be structured in coordination with the National Science Foundation’s National AI Research Resource (NAIRR), which already provides a governance and access model for connecting private-sector compute to researchers. New, flexible legal and financial mechanisms may be needed to ensure that access to these resources is fair, secure, and aligned with DOE mission priorities.
Initiate a Pilot Grant Program to Distribute Excess Compute. The DOE currently runs the INCITE and ALCC programs, which competitively allocate time on DOE leadership-class supercomputers for large-scale, computationally intensive research. Building on this existing model and leveraging NAIRR’s partnerships and infrastructure, the DOE should issue new requests for ambitious proposals that use idle commercial data center GPU time for scientific modeling, simulation, and data analytics workloads. By combining DOE’s experience in merit-based compute allocation with NAIRR’s established infrastructure and partnerships, the Department can rapidly distribute excess private-sector compute to high-impact research efforts.
Establish Continuous Programs to Harness Excess Compute. Once the pilot program has validated the potential to harness excess AI GPU hours for scientific computing, a more permanent program can be implemented to regularly purchase GPU hours at or just above cost from these data centers. This would provide an additional revenue stream for AI companies, supporting their research and development efforts through turbulent business cycles. A program like this may also invite other private actors to purchase GPU time from AI companies, effectively commoditizing processing power to the benefit of the American economy.
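To illustrate the kind of portability a new generation of frameworks would need to provide, the sketch below expresses a classic scientific kernel, an explicit finite-difference step of the 2-D heat equation, in JAX, one of the Python array-programming frameworks that runs unchanged on the GPU and TPU hardware deployed in AI data centers. This is a minimal illustration under assumed grid sizes and parameters, not a proposed DOE framework.

    import jax
    import jax.numpy as jnp

    @jax.jit
    def heat_step(u, alpha=0.1):
        # One explicit step of du/dt = alpha * laplacian(u) on a periodic grid.
        lap = (jnp.roll(u, 1, axis=0) + jnp.roll(u, -1, axis=0)
               + jnp.roll(u, 1, axis=1) + jnp.roll(u, -1, axis=1)
               - 4.0 * u)
        return u + alpha * lap

    # Hypothetical driver: a 4096 x 4096 grid stepped 1,000 times. On a
    # GPU-backed JAX installation the same code executes on the accelerator
    # with no changes.
    u = jnp.zeros((4096, 4096)).at[2048, 2048].set(1.0)
    for _ in range(1000):
        u = heat_step(u)
    print(float(u.sum()))  # total heat is conserved by this periodic stencil

The same pattern, writing kernels against a hardware-portable array library rather than vendor-specific code, is what DOE-supported frameworks could offer to existing simulation codes so they can run on commercial AI infrastructure without wholesale rewrites.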
The United States is once again experiencing a familiar pattern: a transformative infrastructure boom is outpacing the institutions needed to harness it. A Public Compute Redeployment program would enable the Department of Energy to provide a stabilizing source of demand for the AI industry while dramatically expanding the computing resources available for cutting-edge research. By giving scientists and engineers access to powerful computing resources, we can foster advances in crucial fields such as materials discovery, molecular and protein modeling, advanced reactor design, grid-scale energy systems, and climate and earth-system simulation. Done right, this approach would transform potential computing overcapacity into a national competitive advantage, accelerating scientific discovery while reinforcing American technological leadership at a moment of economic and geopolitical uncertainty.
About the Author
Pranav Nathan holds a bachelor’s degree in Mechanical Engineering from the Georgia Institute of Technology and is currently a Launch Mechanisms Engineer on the Starship program at SpaceX in Texas. Previously, he developed scientific computing methods for modeling multiphase fluid dynamics at Tesla, Sandia National Laboratories, and the Flow Physics and Computational Science Laboratory at Georgia Tech.




