Why Physicians Must Define Their Role in the AI Era — Before Someone Else Does
Healthcare AI is advancing faster than the medical profession’s ability to define what it means to be a physician in this era. If physicians define their role reactively — as “whatever AI cannot yet do” — that definition will erode every eighteen months as capabilities advance. The durable path is proactive definition: identifying what only a physician should be accountable for, regardless of what AI can do. This piece sets out three non-delegable physician responsibilities, applies Michael Kremer’s O-Ring theory to healthcare AI to explain why human-in-the-loop is load-bearing rather than redundant, and argues that AI data governance is now a clinical scope-of-practice question.
By Susan Sly, Founder & CEO of The Pause Technologies Inc. and Amsara Health
On May 21, 2026, I will speak at a closed-door convening hosted by the American Medical Association (AMA) and the Digital Medicine Society (DiMe) at the National Academy of Sciences in Washington, D.C. The session, Defining the Role of the Physician in the Digital and AI Era of Medicine, has one of the most consequential goals I have encountered in healthcare policy: producing a shared, authoritative definition of the physician's role, responsibilities, and boundaries in an AI-enabled healthcare system. I have been asked to lend my voice to that effort.
This piece distills the framework I intend to bring to that conversation.
The Numbers That Should Concentrate the Profession’s Attention
Five statistics frame why this conversation is relevant now:
- Healthcare AI market growth. The global healthcare AI market is projected to grow from roughly $20–26 billion in 2024 to over $180 billion by 2030, according to Grand View Research — a compound annual growth rate above 35%.
- FDA-cleared AI/ML medical devices. The U.S. Food and Drug Administration’s public list of AI/ML-enabled medical devices passed 950 cleared products in 2024, up from fewer than 30 in 2018.
- Physician AI adoption. AMA’s Augmented Intelligence research found physician AI usage nearly doubled in twelve months — from 38% in 2023 to 66% in 2024.
- The administrative load physicians already carry. Physicians spend nearly two hours on EHR and clerical work for every one hour of direct patient care, according to the widely cited Sinsky et al. time-and-motion study published in Annals of Internal Medicine.
- Workforce burnout. AMA tracking shows that physician burnout, while declining from its 2021 pandemic peak near 63%, remained at approximately 48% in 2023.
These five numbers tell a coherent story. AI capability is accelerating. Physicians are adopting AI faster than any prior workplace technology. And the workforce is still paying the administrative debt of the last technology wave. We are entering this transformation with low slack and high stakes.
What I Learned Deploying Computer Vision in Clinical Settings
In 2020, the computer vision platform I co-founded was asked to run an urgent pilot screening patients as they entered two leading hospital systems during the COVID-19 pandemic. People were dying. Front-line staff were exhausted. Screening was a task the teams were glad to off-load to AI. It was, however, a reactive deployment rather than a proactive one, a distinction this article returns to below.
The computer vision deployment included:
- Febrile detection at point of entry
- Behavioral pre-screening via head movement and hand-gesture analysis
- Contact tracing without capturing any personally identifiable information
Every system was human-in-the-loop by design. The AI did not make clinical decisions. It extended the clinical team’s attention to a scale that human staffing alone could not reach during a public health crisis.
Three lessons from that deployment translate directly to the broader conversation about AI in medicine:
- AI’s most underrated capability is not superhuman diagnosis. It is the temporal expansion of clinical attention — letting physician judgment reach earlier, more continuously, and across more patients than human bandwidth allows.
- Privacy-preserving design is not a feature added at the end. It is a precondition for adoption, durability, and patient trust.
- Every healthcare AI deployment failure I have observed came from treating AI as a bolt-on to a reactive workflow — rather than as a reason to redesign toward a proactive one.
Proactive vs Reactive: The Defining Choice for the Profession
Modern medicine is structurally reactive. We wait for the symptom, the lab value, the readmission. Reimbursement reinforces it. Training reinforces it. EHRs are built for it.
AI is changing the unit economics of clinical attention. Continuous signals, ambient capture, and multimodal inference make a different model of care possible — one in which the physician’s role can reach upstream of the acute event, at scale.
This sets up a choice the profession must make now. If physicians define their role reactively — as “whatever AI cannot yet do” — that definition will erode every eighteen months as capabilities advance. That is a treadmill the profession cannot win. If physicians define their role proactively — as “what only a physician should be accountable for, regardless of what AI can do” — they set a durable boundary.
The Three Non-Delegable Physician Responsibilities in the AI Era
I propose three responsibilities that should remain non-delegable regardless of how capable AI becomes:
1. Judgment Under Uncertainty
AI systems return probabilities. Physicians render decisions. These are different acts, and the second cannot be delegated to the first. A probability distribution is not a treatment plan.
2. Accountability for the Patient
Someone must own the outcome — and it must be a licensed clinician in a fiduciary relationship with the person in front of them. Accountability diffused across a vendor stack is not accountability.
3. The Therapeutic Relationship
Trust is not a feature that can be shipped. It is earned in a human encounter, and it is the foundation on which the entire practice of medicine rests.
The O-Ring Principle: Why Human-in-the-Loop Is Load-Bearing
Michael Kremer, co-recipient of the 2019 Nobel Memorial Prize in Economic Sciences, developed the O-Ring theory of economic development to describe complex production systems in which the value of the whole is determined by the weakest link. The reason: in systems with sequential, interdependent components, failures multiply rather than add. The theory takes its name from the Challenger disaster, in which a single failed O-ring seal destroyed the value of every other high-quality component.
Healthcare AI is an O-Ring system.
You can have a world-class foundation model, immaculate data, and an elegantly designed workflow. But if the human-in-the-loop is weakened, miscalibrated, or removed, the value of the whole system collapses. The physician is not a redundancy to be optimized out of the system. The physician is the load-bearing component, and human-in-the-loop design is the precondition for the system to create value rather than destroy it.
This is the rigorous, theoretically grounded reason for keeping physicians at the center of AI-enabled care. It is not nostalgia. It is not regulatory caution dressed up as principle. It is the economics of complex production.
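The multiplicative logic above can be made concrete with a few lines of arithmetic. The sketch below is illustrative only: the component quality scores and the scale factor are hypothetical numbers, not measurements of any real system, and the function is a simplified form of Kremer's production function (output proportional to the product of task qualities).

```python
# Minimal numeric sketch of the O-Ring argument.
# Quality scores and the scale factor are hypothetical.
from math import prod

def o_ring_value(qualities, scale=100.0):
    """System value is proportional to the PRODUCT of component
    qualities (each in [0, 1]) -- failures multiply, not add."""
    return scale * prod(qualities)

def additive_value(qualities, scale=100.0):
    """Contrast case: an averaging model, where one weak link
    only dents the total instead of collapsing it."""
    return scale * sum(qualities) / len(qualities)

# Three links in an AI-enabled workflow: model, data, clinician review.
strong     = [0.99, 0.98, 0.97]  # all links high quality
weak_human = [0.99, 0.98, 0.50]  # human-in-the-loop weakened

print(round(o_ring_value(strong), 1))       # ~94.1
print(round(o_ring_value(weak_human), 1))   # ~48.5 -- value roughly halves
print(round(additive_value(weak_human), 1)) # ~82.3 -- averaging hides the failure
```

The point of the contrast is the last two lines: in an additive world, a weakened reviewer costs a few points; in an O-Ring world, it destroys roughly half the system's value, no matter how good the model and the data are.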
The Expanded Scope: AI Data Governance Is a Clinical Responsibility
At The Pause Technologies and Amsara, the AI infrastructure we are building is foundational-model-first: a base architecture tuned with proprietary, domain-specific data rather than relying on general-purpose models trained on the open internet.
This architectural choice has a direct implication for physician scope of practice.
A general-purpose model dropped into a clinical workflow is, functionally, a stranger giving advice. A foundational model tuned on a specific patient population, care setting, and clinical vocabulary is closer to a colleague who has read the charts. The accuracy, safety, and behavioral profiles of these two systems are meaningfully different, and the second is what clinical-grade AI requires.
But “tuned on your data” raises a question the profession has not yet formally addressed: who is responsible for curating the data the system learns from, and validating where its competence ends?
I argue this is a clinical-governance responsibility. It is not a task for vendors, IT departments, or administrators. The judgments about what data appropriately represents a clinical context, where edge cases require human escalation, and how to validate the boundaries of a model’s competence — these are clinical decisions, and they belong to physicians.
If physicians do not claim this ground, vendors, administrators, and regulators will define it for them. None of those parties carries accountability for the patient at the bedside.
A Call to Define Now
Every era of medicine has had a defining proactive question. In public health, it was: how do we prevent this? In genomics, it became: how do we predict this?
In the AI era, the question is: how do we intervene before this becomes acute — with a physician in the loop, accountable, every time?
The physician’s role does not shrink in that question. It expands. But only if the profession defines it proactively, now — rather than litigating it later, reactively.
Frequently Asked Questions
What is the physician’s role in the AI era?
The physician’s role in AI-enabled medicine is to provide judgment under uncertainty, hold accountability for patient outcomes, sustain the therapeutic relationship, and govern the clinical AI systems that extend the reach of clinical attention. AI extends physician judgment; it does not replace it.
What does “human-in-the-loop” mean in healthcare AI?
Human-in-the-loop is a design principle in which a human clinician reviews, validates, or has authority to override an AI system’s outputs at decision points that affect patient care. It is not a constraint on AI capability — it is the design condition that allows AI to create clinical value safely.
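In architectural terms, the definition above means every AI output terminates at a clinician decision point rather than at an automated action. A minimal sketch of such a decision gate follows; the names, threshold, and routing categories are all hypothetical assumptions for illustration, not a real clinical system.

```python
# Hypothetical human-in-the-loop decision gate.
# Names, threshold, and routing categories are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    finding: str
    probability: float  # model confidence, 0..1

def route(output: ModelOutput, threshold: float = 0.85) -> str:
    """The AI never finalizes a decision. High-confidence findings are
    queued for clinician sign-off; everything else escalates for full
    clinician review. Both paths end with a human decision."""
    if output.probability >= threshold:
        return "queue_for_clinician_signoff"
    return "escalate_for_clinician_review"

print(route(ModelOutput("possible fever", 0.92)))    # queue_for_clinician_signoff
print(route(ModelOutput("ambiguous reading", 0.40))) # escalate_for_clinician_review
```

Note that neither branch acts autonomously: the threshold only changes how much clinician attention a finding receives, never whether a clinician is involved.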
What are the non-delegable responsibilities of physicians in AI-enabled medicine?
Three responsibilities should be considered non-delegable: (1) judgment under uncertainty, because rendering decisions differs from returning probabilities; (2) accountability for patient outcomes, which must rest with a licensed clinician in a fiduciary relationship with the patient; and (3) the therapeutic relationship itself, which is earned in human encounter and cannot be replicated by AI.
How does Michael Kremer’s O-Ring theory apply to healthcare AI?
In O-Ring production systems, the value of the whole is determined by the weakest link, because failures multiply rather than add. Healthcare AI is such a system: a world-class model, dataset, and workflow lose their value if the clinician in the loop is weakened or removed. The physician is the load-bearing component of clinical AI value.
What is the difference between a foundational model and a general-purpose AI model in healthcare?
A general-purpose model is trained primarily on broad internet data and lacks clinical context. A foundational model tuned on proprietary clinical data — a specific patient population, care setting, and clinical vocabulary — produces materially different accuracy, safety, and relevance in clinical use. Clinical-grade AI generally requires the latter.
Why is AI data governance becoming a physician responsibility?
Because the data a clinical AI learns from determines its competence, and the boundaries of that competence must be validated against the clinical context in which the system will be used. These are clinical judgments, not technical ones, and they belong to physicians rather than vendors or administrators.
About the Author
Susan Sly is the founder and CEO of The Pause Technologies Inc. and Amsara Health. Her clinical AI deployment experience includes computer vision–based patient screening at two leading hospital systems during the COVID-19 pandemic, where artificial intelligence was used for febrile detection, behavioral pre-screening, and contact tracing without personally identifiable information. She is a recognized voice on responsible AI deployment and an advisor on healthcare AI strategy. On May 21, 2026, she will speak at the AMA and Digital Medicine Society’s closed-door convening Defining the Role of the Physician in the Digital and AI Era of Medicine at the National Academy of Sciences in Washington, D.C.