Veterans Affairs has lost track of software licenses amid $985M bill
The next constraint is thermal design, not just appetite for more compute.
- The Register Data Centre reported a development that could affect colocation and wholesale planning.
- The practical issue is whether demand can be converted into reliable capacity on schedule.
- Watch execution details, customer commitments, and any bottlenecks around power, cooling, silicon, or permitting.
The Register Data Centre reported: A federal spending watchdog has found the Department of Veterans Affairs (VA) faced "challenges" in understanding the correct number of licenses it should hold for the top five vendors in its $985 million annual software expenditure.

In a report [PDF] published last month, the Government Accountability Office (GAO) found that "while the VA identified its five most widely used software vendors with the highest quantity of licenses installed, it faced challenges in determining whether it was purchasing too many or too few of these software licenses." The GAO said that for fiscal 2025, the VA planned to spend about $985 million on software, including commercial software licenses.

The GAO identified "the management of software licenses as a focus area" for the VA in a 2015 high-risk report. Another GAO report, from January 2024, said the VA should track the licenses in use within its inventories and compare them with purchase records. The VA agreed with the recommendations and, as the latest report states, had taken "preliminary actions" to track software license usage; it was due to implement initial functionality for a centralized software license inventory in late March 2026. "If successful, this could be a critical first step in improving the department's ability to track and analyze licenses across the department. Implementation of these recommendations would allow VA to identi…"
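The mechanics behind that recommendation are simple to sketch. Below is a minimal reconciliation pass in the spirit of the GAO's advice, comparing licenses observed in use against purchase records per vendor; the vendor names and counts are hypothetical, not drawn from the report.

```python
from collections import Counter

# Hypothetical purchase records: vendor -> licenses bought.
purchased = {"VendorA": 12_000, "VendorB": 8_500, "VendorC": 4_000}

# Hypothetical asset-scan output: one (vendor, host) pair per
# installed license discovered in the environment.
installed = (
    [("VendorA", f"host-{i}") for i in range(11_200)]
    + [("VendorB", f"host-{i}") for i in range(9_100)]
    + [("VendorC", f"host-{i}") for i in range(2_600)]
)

# Count licenses actually in use, per vendor.
in_use = Counter(vendor for vendor, _ in installed)

for vendor, bought in sorted(purchased.items()):
    used = in_use.get(vendor, 0)
    delta = bought - used
    if delta > 0:
        status = "surplus: likely over-purchased"
    elif delta < 0:
        status = "shortfall: likely under-licensed"
    else:
        status = "balanced"
    print(f"{vendor}: purchased={bought} in_use={used} delta={delta:+d} ({status})")
```

The hard part in practice is not the comparison but the inputs: a trustworthy installed-license inventory and purchase records normalized to the same vendor and product identifiers, which is what the VA's planned centralized inventory is meant to supply.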
The story lands in a market where demand is already assumed. The more useful question is whether the supporting layer around data center leasing is flexible enough to turn that demand into available capacity. The constraint is thermal design. Higher rack density changes the shape of the facility, the maintenance model, and the supplier base behind each deployment.
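To make the density point concrete, here is a back-of-envelope sketch; every figure is an illustrative assumption, not a number from the story.

```python
# Illustrative arithmetic only: the same hall power at different
# rack densities implies very different facility shapes.
hall_it_power_kw = 10_000            # a 10 MW hall of critical IT load
rack_densities_kw = [10, 40, 80]     # air-cooled -> rear-door -> direct liquid

for density in rack_densities_kw:
    racks = hall_it_power_kw // density
    floor_m2 = racks * 2.5           # rough footprint per rack incl. aisles
    print(f"{density:>3} kW/rack: {racks:>5} racks over ~{floor_m2:,.0f} m^2, "
          f"all {hall_it_power_kw / 1000:.0f} MW rejected as heat")
```

Same IT load, an eighth the racks and floor space at the top end, but eight times the heat flux per rack, which is why the binding constraint shifts from white space to heat rejection.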
The pressure point is timing. Cooling design standardization may determine who can actually monetize higher-density deployments on schedule.
Operators that treat cooling as a late-stage engineering detail risk turning demand into stranded capacity. Buyers will care less about headline megawatts and more about which sites can support the next generation of accelerator clusters without long retrofit cycles.
The financial question is whether this development improves pricing power, locks in scarce capacity, or exposes execution risk the market may still be discounting. The operating question is procurement timing, facility readiness, network design, and the likelihood that adjacent constraints slow realized deployment. The customer question is whether it changes build sequencing, partner dependence, or the economics of scaling regions and clusters over the next few quarters.
This is where AI infrastructure differs from ordinary software growth. Capacity has to be financed, permitted, powered, cooled, connected, staffed, and then sold into real workloads before the economics are visible.
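As a toy illustration of that sequencing, the sketch below sums hypothetical lead times for each stage; the numbers are invented and the stages rarely run purely in series, but the shape of the delay is the point.

```python
# Hypothetical lead times (months) for each stage before a megawatt
# earns revenue; assumptions for illustration, not sourced figures.
stages = {
    "finance": 3, "permit": 9, "power": 12,
    "cool": 6, "connect": 3, "staff": 2, "sell": 4,
}

elapsed = 0
for stage, months in stages.items():
    elapsed += months
    print(f"after {stage:>7}: {elapsed:>2} months in")
print(f"first revenue visible roughly {elapsed} months after commitment")
```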
The practical read is that infrastructure advantage is becoming more local and more operational. Two companies can chase the same AI demand and end up with very different outcomes if one has better access to power, more credible delivery dates, or a cleaner path through procurement and permitting.
The next signals to watch are disclosures on customer commitments, infrastructure readiness, and any evidence that power, cooling, silicon supply, or permitting becomes the real gating factor. The next test is whether cooling standards, vendor capacity, and operations teams can scale as quickly as the compute roadmap requires.