As we approach the threshold of 1,000 FDA-cleared AI tools across medical imaging and diagnostics, the question is no longer "Does AI work?" but "Who is responsible when it doesn't?"
I've spent over two decades working with DICOM systems, watching the evolution from film digitizers to enterprise PACS to cloud-native archives. Nothing in my career has felt quite like the inflection point we're experiencing right now.
At RSNA 2025 in Chicago, AI wasn't just a buzzword — it was inescapable. Over 100 companies filled an AI showcase spanning what felt like two football fields. GE Healthcare, Philips, and Siemens Healthineers weren't just showing scanners; they were demonstrating increasingly autonomous imaging workflows with expanding AI-driven decision support under clinician oversight. The conversations weren't about algorithms anymore. They were about accountability.
The Numbers Tell a Story
The FDA's 2025 updates show a rapid acceleration in clearances of AI-enabled medical devices, with radiology accounting for the largest share. New clearances are arriving at a pace of roughly ten per month across imaging domains.
In cardiology, companies such as HeartFlow, Cleerly, and Elucid have transformed coronary CT angiography from a diagnostic checkpoint into a precision-medicine platform. In pathology, the FDA's Breakthrough Device Designation for the VENTANA TROP2 RxDx computational pathology companion diagnostic, developed by Roche with AstraZeneca, marked a historic milestone.
Yet according to a 2024 European Society of Radiology survey, nearly 80% of radiologists reported limited understanding of regulatory approval pathways and post-market surveillance obligations for AI tools already in clinical use. We are deploying faster than we are governing — and that gap is widening.
The Governance Gap
Our imaging infrastructure was designed for storage and display — not for continuous algorithmic learning, version control, or explainability. Traditional DICOM and PACS environments were never intended to support audit trails for AI decisions or model performance monitoring over time.
This is not a technology failure. It is a governance failure.
The tools exist. What's missing is the operational framework to apply them consistently — clear accountability structures, embedded validation protocols, and the institutional willingness to treat AI models as the regulated medical devices they are, not simply as software add-ons.
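To make that concrete: below is a minimal sketch, in Python with pydicom, of the kind of append-only audit record a PACS-adjacent service could write for every AI inference. The study and instance identifiers are standard DICOM attributes; the model name, version, file paths, and JSON-lines log format are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

import pydicom  # standard Python library for reading DICOM objects


@dataclass
class AIAuditRecord:
    """One traceable AI inference event, keyed to DICOM identifiers."""
    study_instance_uid: str
    sop_instance_uid: str
    model_name: str      # e.g. "nodule-detector" (illustrative)
    model_version: str   # the exact version that produced the output
    output_summary: str
    recorded_at_utc: str


def audit_inference(dicom_path, model_name, model_version, output_summary):
    """Build an audit record from a DICOM file's standard identifiers."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    return AIAuditRecord(
        study_instance_uid=str(ds.StudyInstanceUID),
        sop_instance_uid=str(ds.SOPInstanceUID),
        model_name=model_name,
        model_version=model_version,
        output_summary=output_summary,
        recorded_at_utc=datetime.now(timezone.utc).isoformat(),
    )


# Append-only JSON-lines log: one line per inference, nothing overwritten.
record = audit_inference("chest_ct.dcm", "nodule-detector", "2.4.1",
                         "3 candidate nodules, max confidence 0.91")
with open("ai_audit_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```

Even something this small closes the loop today's infrastructure leaves open: which model, which version, which study, when.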
What RSNA 2025 Made Clear
Three themes stood out above the noise at RSNA 2025 — each one pointing toward the same conclusion: governance is the work that will determine whether AI in imaging succeeds or stalls.
Governance must be operational. Health systems need embedded validation protocols, continuous performance monitoring, and clearly defined accountability structures. Policy documents are not enough. The governance has to live inside the workflow (a minimal monitoring sketch follows these themes).
A new job description. Imaging IT leaders are now expected to understand AI validation, population bias, and regulatory compliance alongside traditional infrastructure management. This is a fundamentally different role than it was five years ago.
Traceability by design. AI systems must speak DICOM fluently, with transparent data provenance and auditable decision pathways. You cannot govern what you cannot trace, and most environments cannot yet trace AI decisions at the level the clinical and regulatory stakes demand.
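As a small illustration of that third theme, here is a minimal sketch, again assuming pydicom, that checks whether an AI-generated DICOM object actually says what produced it. The attribute keywords are standard DICOM; the choice of required fields and the file name are assumptions for illustration, and which attributes count as sufficient provenance is local policy.

```python
import pydicom

# Standard DICOM attributes identifying what produced an object.
# These three are a plausible floor, not a regulatory requirement.
REQUIRED_PROVENANCE = ("Manufacturer", "ManufacturerModelName", "SoftwareVersions")


def missing_provenance(dicom_path):
    """Return the provenance attributes absent or empty in a DICOM object."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    return [kw for kw in REQUIRED_PROVENANCE
            if not str(getattr(ds, kw, "") or "").strip()]


gaps = missing_provenance("ai_secondary_capture.dcm")
if gaps:
    print(f"Cannot trace this output; missing: {gaps}")
```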
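The first theme, continuous performance monitoring, can also start simply. The sketch below tracks rolling agreement between the AI call and the final signed radiologist read. The window size and alert threshold are illustrative assumptions, not clinical policy; a real program would also stratify by site, scanner, and patient population.

```python
from collections import deque


class AgreementMonitor:
    """Rolling agreement between AI output and the final radiologist read."""

    def __init__(self, window=500, alert_floor=0.85):
        # window and alert_floor are illustrative, not clinical policy
        self.results = deque(maxlen=window)
        self.alert_floor = alert_floor

    def record(self, ai_positive, reader_positive):
        """Log whether the AI call matched the signed-off interpretation."""
        self.results.append(ai_positive == reader_positive)

    def needs_review(self):
        """Flag the model for review once agreement dips below the floor."""
        if len(self.results) < self.results.maxlen:
            return False  # not enough cases for a stable estimate
        return sum(self.results) / len(self.results) < self.alert_floor


monitor = AgreementMonitor()
monitor.record(ai_positive=True, reader_positive=False)
print(monitor.needs_review())  # False until the window fills
```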
The Cardiology Model
Cardiology provides a preview of what mature AI governance looks like when the field commits to building it seriously. In 2025, the American College of Cardiology and American Heart Association issued detailed scientific statements outlining the implementation and oversight of AI-powered coronary imaging.
The FISH&CHIPS study of over 90,000 NHS patients demonstrated that a pathway combining coronary CT angiography with CT-derived fractional flow reserve (CCTA + FFRCT) reduced unnecessary invasive procedures while establishing scalable governance models applicable across large healthcare systems. The lesson is worth underlining: rigorous governance and better patient outcomes are not in tension. They are the same goal.
The Pathology Frontier
Digital pathology has graduated from experimentation to industrialization. The vision of H&E slides serving as a source of genomic-grade biomarkers is rapidly becoming reality, with computational pathology companion diagnostics moving from research into regulated clinical deployment.
But industrial-scale adoption demands industrial-grade governance. The pathology environment has less established regulatory infrastructure than radiology — which means the governance frameworks being built now will define the field's trajectory for the next decade. Getting it right early matters enormously.
A Personal Reflection
The pace of change can feel overwhelming. Radiologist attrition has increased sharply since 2020, and workforce supply is projected to remain constrained through 2037. In this environment, AI is no longer optional; it is becoming structurally necessary for the specialty to function at the scale the healthcare system requires.
But what gives me genuine optimism is this: the conversations have changed. Clinicians and technologists are asking harder questions, demanding transparency, and building governance frameworks that did not exist five years ago. The field is taking accountability seriously in a way it wasn't when the first wave of AI tools arrived.
What Comes Next
In 2026, healthcare imaging organizations that want to lead — rather than react — need to prioritize three things:
Governance literacy across the imaging team. Not just the CIO or compliance officer — radiologists, PACS administrators, and clinical informaticists all need to understand what AI validation, post-market surveillance, and model monitoring actually require.
Vendor accountability frameworks. Procurement and deployment processes need to demand explainability, version control, performance benchmarks, and clear contractual obligations around ongoing model monitoring.
Deep clinical partnership. The governance frameworks that work are the ones built with clinical stakeholders, not handed down to them. Radiologists and pathologists who understand the stakes are the most important architects of trustworthy AI deployment.
The real measure of success will not be how many AI tools are deployed. It will be whether patients can trust that these systems are validated, monitored, and governed with the rigor expected of any medical device.
We built DICOM to enable interoperability. Now we must build governance frameworks worthy of the AI that runs on top of it.
Where is the governance gap most acute in your imaging environment right now? Validation, performance monitoring, vendor accountability, or something else entirely? info@radiantaihealthdata.com
Sources & References
- U.S. Food & Drug Administration. Artificial Intelligence and Machine Learning Enabled Medical Devices. FDA.gov.
- European Society of Radiology. ESR Survey on AI Adoption and Regulation, 2024.
- American College of Cardiology & American Heart Association. Scientific Statements on AI in Cardiovascular Imaging, 2025.
- National Health Service (UK). FISH&CHIPS Study on CCTA and FFRCT Pathways.
- Roche & AstraZeneca. FDA Breakthrough Device Designation for the VENTANA TROP2 RxDx Device.