Something doesn't get nearly enough attention in AI conversations: infrastructure.

Not GPUs. Not "the next model." Not hype. Just the fundamental reality that AI runs on physical systems — electricity, compute, data pipelines, and the governance frameworks that hold it all together. And as AI scales across every sector of the economy, those physical systems have to be built, funded, and managed responsibly.

A recent development from Washington made this point in a way worth examining — not from a political angle, but from a practical one. Several of the largest technology companies in the world — including Google, Microsoft, Meta, Oracle, xAI, OpenAI, and Amazon — signed what's being called the Ratepayer Protection Pledge. The commitment focuses on making sure that as these companies build massive AI data centers across the United States, the cost burden of the electricity infrastructure required to support them doesn't fall on ordinary American households and communities.

That's a serious statement about what the AI boom actually demands. And it raises questions that matter well beyond the energy sector.

Why Energy Is the Right Metaphor for the Larger Infrastructure Problem

AI data centers are power-hungry at a scale that most people don't appreciate. A single large AI training facility can consume as much electricity as a small city — and the number of these facilities is growing rapidly. If that growth isn't managed responsibly, you get a pattern that should sound familiar to anyone who has watched complex infrastructure projects unfold: the benefits accrue at scale while the costs get distributed to people who had no say in the decision.
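To make the "small city" comparison concrete, here is a back-of-envelope calculation. All figures are illustrative assumptions (a hypothetical 150 MW facility and a rough average for annual U.S. household electricity use), not measurements of any specific data center.

```python
# Back-of-envelope comparison of a large AI data center's electricity
# demand against residential consumption. All figures are illustrative
# assumptions, not measurements of any real facility.

FACILITY_MW = 150                  # assumed continuous draw of a large AI campus
HOURS_PER_YEAR = 8760
HOUSEHOLD_KWH_PER_YEAR = 10_500    # rough average annual U.S. household use

facility_mwh = FACILITY_MW * HOURS_PER_YEAR        # MWh per year
household_mwh = HOUSEHOLD_KWH_PER_YEAR / 1000      # MWh per year

households_equivalent = facility_mwh / household_mwh
print(f"{facility_mwh:,.0f} MWh/yr, roughly {households_equivalent:,.0f} households")
```

Under these assumptions, one facility draws on the order of 1.3 million MWh a year, comparable to the annual consumption of well over a hundred thousand homes. The exact numbers vary widely by facility; the point is the order of magnitude.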

The Ratepayer Protection Pledge pushes back on that dynamic. Its core logic is simple: if you are the company building and profiting from AI infrastructure, you should help fund the power generation and grid upgrades that make it possible. Not your neighbors. Not the communities who happen to be near your data center. The companies that benefit from the scale should carry their share of the weight.

What I'll be watching is whether this pledge translates into real action: companies funding new generation capacity rather than buying credits, meaningful investment in transmission and distribution infrastructure, and local job creation in communities hosting these facilities. A pledge is a start. Follow-through is what matters.

The Broader Lesson: Responsible AI Growth Requires an Infrastructure Plan

The reason this matters beyond the energy sector is that the same dynamic — benefits at scale, costs distributed to those least equipped to bear them — plays out in other AI infrastructure contexts, including healthcare.

Healthcare organizations are under enormous pressure to adopt AI. The technology promises faster diagnosis, better clinical decision support, streamlined operations, and the ability to do research at a scale that was previously impossible. These are real and meaningful possibilities. But they require something most healthcare AI conversations tend to skip past: the underlying data infrastructure has to actually be ready.

You cannot run effective clinical AI on a PACS that hasn't been modernized. You cannot train a research model on data that hasn't been de-identified and governed. You cannot connect imaging AI tools to a hospital ecosystem without interoperability infrastructure that can normalize data movement across disparate systems. The AI capabilities that hospital and health system leaders are being asked to evaluate are only as good as the data foundation underneath them.

This is the part of the AI conversation that most vendors aren't having — because it's harder to package and sell than a demo. But it's the part that determines whether an AI initiative actually delivers value or becomes an expensive proof of concept that stalls out in production.

What This Means for Healthcare Organizations Right Now

The organizations that will be positioned to move fastest when mature clinical AI tools arrive are the ones building their data infrastructure now — not waiting until the AI use case is fully defined before asking whether the data is ready to support it. That means doing the work that isn't glamorous but is genuinely strategic.

Modernize legacy imaging archives and PACS environments

Legacy data that can't be migrated cleanly cannot be used for AI training or retrieval, and migrations that aren't planned carefully create disruption, data loss risk, and compliance exposure. Archive consolidation before AI adoption isn't optional — it's foundational.

Build interoperability into the architecture, not as an afterthought

Healthcare AI tools depend on data from multiple systems — radiology, pathology, EHR, lab, and more. An organization with fragmented systems and no interoperability layer cannot get those tools to function consistently across its environment. Workflow orchestration and normalization need to happen before AI deployment, not during it.
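What a normalization layer does can be sketched in a few lines: records arriving from different source systems are mapped into one common schema before any downstream tool sees them. The record shapes and field names below are hypothetical, chosen only to illustrate the pattern, not drawn from any specific system.

```python
# Minimal sketch of a normalization layer: two hypothetical source
# systems emit the same study in different shapes, and both are mapped
# into one common schema. Field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class NormalizedStudy:
    patient_id: str
    modality: str
    study_date: str   # ISO 8601

def from_radiology(rec: dict) -> NormalizedStudy:
    # e.g. {"mrn": "123", "mod": "CT", "date": "20240105"}
    d = rec["date"]
    return NormalizedStudy(rec["mrn"], rec["mod"], f"{d[:4]}-{d[4:6]}-{d[6:]}")

def from_ehr(rec: dict) -> NormalizedStudy:
    # e.g. {"patientId": "123", "modality": "CT", "performed": "2024-01-05"}
    return NormalizedStudy(rec["patientId"], rec["modality"], rec["performed"])

studies = [
    from_radiology({"mrn": "123", "mod": "CT", "date": "20240105"}),
    from_ehr({"patientId": "123", "modality": "CT", "performed": "2024-01-05"}),
]
assert studies[0] == studies[1]   # same study, two source formats, one schema
```

A real interoperability layer deals with DICOM, HL7, and FHIR rather than dictionaries, but the design principle is the same: one canonical schema, with per-source adapters feeding it.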

Establish governance and de-identification workflows now

Research and AI use cases in healthcare require data that has been properly de-identified and handled under a governance framework that can withstand scrutiny. Organizations that don't have this infrastructure in place will find themselves unable to participate in AI research partnerships or data-sharing programs, regardless of how advanced the AI model they're evaluating actually is.
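The shape of such a workflow can be sketched simply: a rule-driven step that strips identifying fields, issues a stable pseudonym so records still link together, and logs what it removed. The PHI field list and record shape here are hypothetical; a real pipeline would follow a vetted standard such as HIPAA Safe Harbor or the DICOM de-identification profiles.

```python
# Minimal sketch of a rule-driven de-identification step with an audit
# trail. Field names are hypothetical; a production pipeline would
# implement a vetted standard, not this illustrative rule set.

import hashlib

PHI_FIELDS = {"name", "mrn", "birth_date", "address"}  # assumed field names

def deidentify(record: dict, salt: str) -> tuple[dict, list[str]]:
    """Return a de-identified copy plus a log of what was removed."""
    clean, removed = {}, []
    for key, value in record.items():
        if key in PHI_FIELDS:
            removed.append(key)
        else:
            clean[key] = value
    # Salted hash gives a stable pseudonym: the same patient links
    # across records without exposing the real identifier.
    clean["pseudo_id"] = hashlib.sha256((salt + record["mrn"]).encode()).hexdigest()[:12]
    return clean, removed

record = {"mrn": "123", "name": "Jane Doe", "birth_date": "1980-01-01",
          "modality": "CT", "address": "1 Main St"}
clean, removed = deidentify(record, salt="site-secret")
assert "name" not in clean and "mrn" not in clean
assert sorted(removed) == ["address", "birth_date", "mrn", "name"]
```

The audit trail is the governance piece: being able to show what was removed, when, and under which rule set is what lets the workflow withstand scrutiny.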

Create operational visibility across data movement

AI readiness is partly a technical problem and partly an operational one. Organizations that don't have clear insight into how their data moves, where it lives, and what state it's in will struggle to build the data pipelines that clinical and research AI depends on. Visibility and monitoring infrastructure is not overhead — it's a prerequisite.
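At its simplest, that visibility is a structured audit trail: every transfer between systems is recorded as an event, so "where does this data live and how did it get there?" has an answer. The system names and event fields below are illustrative assumptions.

```python
# Minimal sketch of an audit trail for data movement: every transfer
# between systems is recorded as a structured event, queryable later
# for lineage. System names and fields are illustrative assumptions.

from datetime import datetime, timezone

events: list[dict] = []

def record_transfer(dataset: str, source: str, dest: str, rows: int) -> None:
    events.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset,
        "source": source,
        "dest": dest,
        "rows": rows,
    })

record_transfer("imaging-2024", "legacy-pacs", "research-archive", rows=50_000)
record_transfer("imaging-2024", "research-archive", "training-pipeline", rows=48_500)

# Lineage query: trace every hop a dataset has taken.
lineage = [(e["source"], e["dest"]) for e in events if e["dataset"] == "imaging-2024"]
assert lineage == [("legacy-pacs", "research-archive"),
                   ("research-archive", "training-pipeline")]
```

In practice this lives in a monitoring platform rather than an in-memory list, but the principle holds: if data movement isn't recorded as it happens, lineage can't be reconstructed when an AI pipeline needs it.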

U.S. Leadership in AI Starts with Honest Conversations About Infrastructure

I believe the United States should remain the global leader in AI research and development. Maintaining that position isn't just about talent, capital, or the regulatory environment — it's about whether we can build the infrastructure to support what AI is becoming at scale. And that conversation has to be honest about costs, responsibilities, and what it actually takes to do this right.

The Ratepayer Protection Pledge is one piece of that conversation. The equivalent conversation in healthcare is about whether we are building the data infrastructure layer that AI in medicine will require — or whether we're continuing to deploy AI initiatives on top of data environments that aren't ready for them.

Both conversations deserve more attention than they're getting.

Jim Cook

Jim Cook is a Senior PACS Administrator and author focused on AI and data innovation in healthcare. He is a contributor to Radiant AI Health Data, a healthcare data infrastructure company developing solutions for migration, interoperability, governance, de-identification, and AI readiness. Questions or thoughts? info@radiantaihealthdata.com