Under the Knife by a Robot: When Sci-Fi Becomes Reality
By Brian Simpson
The terrifying prospect of surgery in which artificial intelligence, not a human surgeon, makes life-or-death decisions while you're under the knife is no longer distant sci-fi. It's emerging in real operating rooms today, and recent reports show why handing critical judgment to AI could prove catastrophic.
Daniel Horowitz's opinion piece in The Blaze (linked below) captures this dread perfectly: "Never before have we seen a technology that offers such an impressive veneer of competence, yet demonstrates such dangerous incompetence when it actually matters." He argues that AI's biggest threat in surgery isn't outright malice — it's the false confidence it projects, combined with a lack of human hesitation, ethical reflection, or ability to course-correct when uncertain. In high-stakes moments under anesthesia, where split-second calls can mean survival or permanent harm, outsourcing those calls to algorithms rushed to market is a recipe for disaster.
Real-World Nightmares Already Happening
A Reuters investigation (February 2026) spotlighted the TruDi Navigation System from Acclarent (a Johnson & Johnson unit), an AI-enhanced tool for ENT surgeries such as sinus procedures. Before AI integration in 2021, the FDA logged just seven malfunctions and one injury over three years. After machine-learning algorithms were added for real-time imaging and guidance, complaints exploded: at least 100 unconfirmed malfunction reports and more than 10 serious injuries between late 2021 and late 2025. Cases included:
Surgeons misinformed about instrument locations, leading to punctured skull bases.
Cerebrospinal fluid leaks from the nose.
Accidental cuts to major arteries (e.g., the carotid), causing blood clots and strokes; one patient required skull removal and still faces severe daily impairments a year later.
Lawsuits allege the software "hallucinated" anatomical details, feeding surgeons dangerously wrong information. Horowitz notes the grim irony: the device was arguably safer before AI was added. Yet the FDA has approved 1,357 AI medical devices, double the count from 2022, and many are recalled quickly: 43% within a year, per a Yale/Johns Hopkins study in JAMA, often devices from publicly traded firms under investor pressure to launch fast.
Broader risks compound the fear:
Large language models (LLMs) excel on standardised medical tests but mix accurate and incorrect information when applied to real patients, per an Oxford study in Nature Medicine:
Bean, A.M., Payne, R.E., Parsons, G., Kirk, H.R., Ciro, J., Mosquera, R., Monsalve, S.H., Ekanayaka, A.S., Tarassenko, L., Rocher, L. and Mahdi, A. (2026). "Reliability of large language models as medical assistants for the general public: a randomized preregistered study." Nature Medicine. https://doi.org/10.1038/s41591-025-04074-y
In dynamic OR environments, AI lacks the "internal resistance" humans feel when unsure — hesitation that prompts double-checks, second opinions, or aborting risky moves.
The Human Element AI Can't Replicate
Surgery isn't chess or data crunching — it's a profoundly human act amid uncertainty, bleeding, anatomy variations, and unforeseen complications. Surgeons draw on years of tactile experience, intuition honed by failure and success, empathy for the unconscious patient, and moral accountability. AI processes patterns from training data at lightning speed, but:
It hallucinates (fabricates plausible but wrong details).
It has no "gut feel" for when something feels off.
It can't weigh unquantifiable factors like a patient's overall resilience or family context.
Errors cascade without remorse — there's no pause to say, "This doesn't feel right; let's stop."
Recent developments push toward greater autonomy: Johns Hopkins experiments (2025) saw AI perform gallbladder removals on pig models without human input, adapting to variations. Shanghai teams reportedly achieved in-vivo autonomous abdominal surgery on pigs by late 2025. Systems like da Vinci now incorporate AI for feedback and partial autonomy. Proponents tout precision, reduced fatigue, and fewer errors in controlled settings — but the leap to full decision-making authority terrifies because the stakes are existential for the patient.
You're sedated, helpless, trusting strangers with your life. Imagine the monitor showing "optimal path" based on an algorithm that's wrong, the robot arm moving decisively toward disaster, and no human override in time. Liability? Blame diffuses — surgeon? Manufacturer? The code itself? No one feels the full weight the way a human doctor does.
Why This Future Feels Inevitable — and Terrifying!
Tech hype, investor billions, and policy pushes (FDA approvals outpacing scrutiny — only 25 scientists review thousands of devices) create momentum. "Just good enough" AI gets deployed because it's cheaper, faster, scalable. But in the OR, "good enough" isn't enough — it's lethal when wrong.
Horowitz concludes: Safety must trump speed. AI can augment (e.g., real-time alerts, predictive risk scores), but never replace the last line of defence — human judgment powered by ethics and a conscience. Until regulators demand rigorous, long-term proof of safety in live humans, not just pig models or datasets, the operating table remains one of the last places where we should fear ceding control to machines.
Lying on that table, anesthetized and vulnerable, the scariest thought isn't the scalpel — it's wondering if the decision to cut deeper came from a mind that can feel fear... or from code that simply optimises. In that moment, the terror is absolute: your life in the hands of something that doesn't truly understand what life means.
https://www.theblaze.com/columns/opinion/would-you-want-ai-making-decisions-for-your-doctor-while-you-are-under-the-knife-in-the-operating-room
