
AI in Healthcare: Beyond Accuracy, Beyond Hype

  • Aikyaveda Communications
  • Feb 15
  • 4 min read

Updated: Feb 15

At Aikyaveda AI, we believe that the future of healthcare demands more than algorithms and accuracy: it requires trust, accountability, and a deep respect for human life. In this thought-provoking episode, AI in Healthcare: Beyond Accuracy, Beyond Hype, Co-Founders Dr. Komal Prasad, a senior neurosurgeon with two decades of clinical leadership, and Padmavathy D Kothandaraman, a transformation leader with 30 years in global banking and technology, explore how artificial intelligence can responsibly reshape medicine. Together, they bridge the worlds of clinical precision and enterprise innovation, challenging assumptions, exposing the risks of hype-driven adoption, and charting a pragmatic roadmap in which AI serves as a partner to clinicians rather than a replacement. This conversation is a call to move beyond numbers and into outcomes, where safety, trust, and patient benefit remain at the heart of transformation.



Dr Komal Prasad
Padmavathy D K


Ms. Padma: We hear a lot about AI transforming every sector—banking, finance, agriculture, logistics, and now healthcare. But can we really apply AI to medicine in the same way we apply it elsewhere?

Dr. Komal: That’s the key question. In most industries, an error costs money or time. In healthcare, an error can cost health, function, or life. That single fact makes healthcare fundamentally different from other AI use cases.



Ms. Padma: But the underlying algorithms are similar, right?

Dr. Komal: Mathematically yes, but contextually no. In finance, a wrong prediction might deny a loan. In medicine, a wrong prediction might miss a stroke or trigger an unnecessary surgery.

Also, healthcare decisions are part of a doctor–patient relationship built on trust and accountability. You cannot simply say, “The model decided.” A clinician must understand, explain, and own the decision.

Ms. Padma: Many AI papers boast very high accuracy. Why is that not convincing enough?

Dr. Komal: Because accuracy alone can be misleading.

If a disease affects only 1% of patients, a model that always predicts “no disease” is 99% accurate but clinically useless.

In medicine we must look beyond accuracy to:

  • false negatives (missed disease)

  • false positives (unnecessary alarms or treatments)

A model can look statistically impressive yet fail the patient in front of you.
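Dr. Komal's 1% example is easy to verify with a few lines of code. The sketch below uses made-up numbers (1,000 patients, 1% disease prevalence) purely for illustration: a "model" that always predicts "no disease" scores 99% accuracy while its recall, the fraction of sick patients it actually catches, is zero.

```python
# Illustrative only: 1,000 patients, 1% disease prevalence.
# A "model" that always predicts "no disease" looks 99% accurate
# yet misses every sick patient (recall = 0).

patients = [1] * 10 + [0] * 990      # 1 = has disease, 0 = healthy
predictions = [0] * len(patients)    # the model always says "no disease"

correct = sum(p == y for p, y in zip(predictions, patients))
accuracy = correct / len(patients)

true_positives = sum(p == 1 and y == 1 for p, y in zip(predictions, patients))
false_negatives = sum(p == 0 and y == 1 for p, y in zip(predictions, patients))
recall = true_positives / (true_positives + false_negatives)

print(f"accuracy: {accuracy:.0%}")   # 99% — looks impressive
print(f"recall:   {recall:.0%}")     # 0% — every sick patient is missed
```

The headline number rewards the model for the 990 healthy patients while hiding the 10 missed diagnoses, which is exactly why accuracy alone cannot be the deciding metric.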

Ms. Padma: That’s very similar to identifying suspicious or fraudulent transactions in the financial crime domain, although AI there is being fine-tuned for higher accuracy. The trade-off looks much the same:

  • false negatives (missed alerts, criminal funds entering the system, regulatory fines)

  • false positives (frozen legitimate accounts, poor customer experience, investigator burnout)


"In healthcare, accuracy alone is not enough—trust, safety, and patient outcomes must come first."

Ms. Padma: So what metric should clinicians prioritize?

Dr. Komal: Often recall, also known as sensitivity.

For things like sepsis detection or predicting hospital readmission, missing a true case is dangerous. It’s usually safer to have some extra false alarms than to miss the one patient who deteriorates. In high-stakes medicine, false negatives hurt more than false positives.


Ms. Padma: If a model predicts well on past data, isn’t that enough?

Dr. Komal: No. Prediction does not automatically translate into better outcomes.

You need pragmatic randomized controlled trials to show that using the AI in real clinical workflow actually improves patient results. Medicine adopted drugs and surgeries only after such trials; AI that changes decisions deserves the same standard.

Otherwise we risk deploying elegant mathematics that doesn’t truly help patients.


Ms. Padma: Why do models often fail when moved to another hospital?

Dr. Komal: Because healthcare is deeply contextual. Different patients, machines, and practices change the data.

A model that works perfectly in one center may quietly degrade in another. In medicine, silent degradation means silent risk, so continuous monitoring and recalibration are essential.

Ms. Padma: There’s massive investment in healthcare AI. Why the hesitation among doctors?

Dr. Komal: Not resistance to technology, but caution about hype.

Clinicians worry about black-box tools and premature deployment based on marketing claims of “high accuracy” instead of real patient benefit.

Since doctors remain legally and ethically responsible, they must lead how AI is introduced, ensuring it follows scientific and clinical standards.


Ms. Padma: So both clinical and statistical literacy are needed?

Dr. Komal: Absolutely. You need people who understand disease biology and also understand bias, validation, and calibration.

Without that dual understanding, you either get impressive math that ignores patients or good clinical intuition supported by weak statistics. Neither is safe for decision-making.


Ms. Padma: Where can AI be safely and usefully deployed right now?

Dr. Komal: Many operational areas that don’t directly make life-and-death calls:

  • smarter appointment scheduling and predicting no-shows

  • operating theatre scheduling and time optimisation

  • inventory and implant stock management

  • bed and discharge prediction to reduce wait times

  • automated billing and insurance authorisation

  • medication reconciliation and interaction checks

  • clinical documentation and summarisation

  • patient triage and routing to the right service

These reduce waste, delays, and clinician burnout without replacing medical judgment.


Ms. Padma: What about clinical use?

Dr. Komal: Use AI as decision support.

Let it highlight abnormal trends or flag high-risk patients, but the clinician examines the patient and makes the final call.

Think of AI as a tireless assistant—not an autonomous doctor.


Ms. Padma: Is there a danger that investment pressure will rush unsafe tools into clinics?

Dr. Komal: Yes. Healthcare cannot follow “move fast and break things.”

We need transparency, ongoing monitoring, and honest discussion of limitations. If we oversell and harm patients, trust will collapse and genuinely useful AI will suffer.


Ms. Padma: Can you illustrate false positives versus false negatives?

Dr. Komal: In sepsis detection, a model that rarely alerts may miss real septic patients—those misses can be fatal.

A model that alerts more often may create extra work, but clinicians can filter alerts. In this setting, high recall is more valuable than perfect precision.

Missing disease is worse than over-suspecting it.
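One practical way to act on that preference is to lower the alert threshold on a model's risk scores: recall rises because fewer true cases slip under the bar, at the cost of more false alarms for clinicians to filter. The sketch below uses invented risk scores and labels (not a real sepsis model) just to show the trade-off.

```python
# Illustrative: lowering the alert threshold trades precision for recall.
# Scores and labels are made up, not from a real sepsis model.

scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.20, 0.10, 0.05]
labels = [1,    1,    0,    1,    0,    0,    0,    0]   # 1 = truly septic

def recall_precision(threshold):
    """Alert whenever score >= threshold; return (recall, precision)."""
    preds = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    fp = sum(p and (not y) for p, y in zip(preds, labels))
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision

for t in (0.7, 0.3):
    r, p = recall_precision(t)
    print(f"threshold {t}: recall {r:.0%}, precision {p:.0%}")
```

With the high threshold the model is precise but misses a septic patient; with the low threshold it catches every septic patient and accepts two extra false alarms, which is the direction high-stakes medicine usually prefers.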


Ms. Padma: So what is the sensible path forward?

Dr. Komal:

1. Start with operational AI to improve efficiency.

2. Use AI to assist, not replace, clinical decisions.

3. Demand real-world trials for tools that change management.

4. Continuously validate performance after deployment.

5. Keep clinicians in leadership roles guiding adoption.


Ms. Padma: So healthcare AI is unique because it is high-stakes, trust-based, and evidence-driven.

Dr. Komal: Exactly. AI should make good clinicians better and systems smoother, not bypass judgment or shortcut science.


Ms. Padma: High accuracy numbers are attractive, but safety, outcomes, and trust matter more.

Dr. Komal: When we respect both the complexity of medicine and the rigor of statistics, AI becomes a powerful partner rather than a risky replacement.



