Healthcare leaders on busting AI myths

Press Release

Healthcare leaders are moving quickly on AI — but many believe the biggest risk isn’t that the technology falls short. It’s that organizations overestimate what AI can do on its own and underestimate what it takes to make it work safely, at scale and inside real workflows. 

According to many healthcare leaders who spoke with Becker's, better models do not automatically produce better outcomes. The path to impact runs through governance, data discipline, workflow redesign, cybersecurity rigor and human accountability — with AI delivering the most value when it's embedded, trusted and measured, not flashy.

The leaders featured below are speaking at Becker’s 11th Annual Health IT + Digital Health + RCM Conference, Sept. 14-17, 2026, at the Hilton Chicago.

If you would like to join the event as a speaker, please contact Scott King at sking@beckershealthcare.com.

As part of an ongoing series, Becker’s is connecting with healthcare leaders who will speak at the event to get their perspectives on key issues in the industry.

Editor’s note: Responses have been lightly edited for clarity and length.

Question: What’s the biggest myth in healthcare AI?

Vitaly Herasevich, MD, PhD. Professor, Anesthesiology and Medicine and Co-Director for Center for Clinical and Translational Science Office of Digital Innovation, Mayo Clinic (Rochester, Minn.): Headlines have narratives [such as] "AI as doctor." AI can do an excellent job at pattern recognition, data synthesis and documentation drafting, but it doesn't have accountability or legal responsibility, and it cannot exercise clinical judgment under uncertainty, conflicting goals and missing data. No data — no AI.

Andrew Kim, MD. Director of Physician Informatics for Baylor Scott & White Health (Dallas): I think the biggest myth in healthcare AI is that it needs to be visible and flashy. The best AI products feel invisible and embedded in existing workflows but can produce reliable results and provide efficiency.

Luis Taveras, PhD. Senior Vice President and CIO for Jefferson Health (Philadelphia): The biggest myth in healthcare AI is the belief that it’s a magic pill that will fix everything. AI often amplifies the challenges we already face. One of the first pressure points is governance: AI requests pour in from every direction, and without a strong, disciplined governance model, organizations can quickly lose control. Instead of creating a separate governance structure just for AI, we should integrate AI into the frameworks we already trust.

Cybersecurity is another critical concern. Healthcare organizations are prime targets for highly sophisticated, well-funded cybercriminals. If we rush into AI adoption without rigorous evaluation, we risk introducing vulnerabilities that could compromise both patient safety and organizational integrity.

AI holds tremendous promise, and it will undoubtedly transform every corner of healthcare. But we need to approach it with intention — not by reacting to every vendor pitch, but by prioritizing solutions that address clearly defined needs and deliver measurable value. To truly optimize the benefits of AI, we must be methodical, thoughtful, and willing to slow down long enough to make smart, sustainable decisions.

Christopher Horvat, MD. Senior Director of Clinical Informatics for UPMC (Pittsburgh): The biggest myth in healthcare AI is that better models alone improve care. Outcomes change when people, systems, and incentives change. AI only matters when it is embedded into real workflows with accountability and trust.

Hetal Rupani. Senior Director of Business Intelligence and Analytics for Johns Hopkins Medicine (Baltimore): One of the biggest myths in healthcare AI is that a highly accurate, “perfect” model guarantees impact. It doesn’t.

Real value depends far more on how effectively AI is integrated into clinical workflows, EHR systems, and aligned across strategy, data, and people. Efficiency gains only materialize when AI meaningfully improves how work gets done — not when it simply performs well in isolation.

Competitive advantage will come from using AI distinctively and reimagining how human expertise delegates the right tasks to intelligent systems. In healthcare, integration and trust matter more than precision alone.

Are we prioritizing model development — or truly redesigning processes for impact?

Aimmon Lago. Executive Director of Revenue Cycle Systems, Technology and Digital Solutions for Stanford Health Care and School of Medicine (Stanford, Calif.): One prevailing myth related to AI technology in healthcare is that it should be handled completely differently from other technology solutions. Many of the best practices of IT governance for creating and sustaining value still apply. These basics include a thorough understanding of the target problem, calculation of the present value of risk-adjusted future benefits, and sizing of the cost to implement and sustain. LLMs are powerful tools with the potential to create value; however, they should be evaluated objectively alongside all other technologies to avoid cognitive bias and maintain a systematic focus on organizational value.

Rajiv Pramanik, MD. CIO and Chief Health Informatics Officer for Contra Costa Health (Martinez, Calif.): I think the myth is that conclusions can be drawn, as everything is constantly changing in the healthcare space at a rate we have never seen due to tech, payor policies, labor, regulations, etc. Things that work this month might not be useful in three months.

Greg Thompson. Chief Information Security Officer for VHC Health (Deerfield Beach, Fla.): The biggest myth in healthcare AI is that it is primarily a technology problem. Healthcare does not struggle because the models are insufficient; it struggles because of fragmented data, unclear ownership, governance gaps, workflow resistance and misaligned incentives. Another misconception is that AI will replace clinicians or magically eliminate inefficiency, when it works best as a force multiplier that augments decision-making, reduces administrative burden and strengthens operational resilience. Organizations that see real value treat AI as a business transformation initiative grounded in workflow redesign, governance, and measurable ROI, not just a model deployment. The myth is that AI is about algorithms; the reality is that AI is a spotlight on organizational strengths, weaknesses, and untapped opportunities.

Chuck Christian. Vice President of Technology and Chief Technology Officer for Franciscan Health (Mishawaka, Ind.): I believe the biggest myth related to AI in healthcare is that AI will or can replace clinicians. We need to remember that AI is not reasoning or thinking, at least not at the moment; AI is only as good as its training datasets. Having been in healthcare for several decades and observed the exponential growth in clinical knowledge, I do believe that AI can be a very useful knowledge/information augmentation tool. AI can handle high-volume cognitive tasks, triage prioritization and summarization. We need to ensure that our workflows have a human in the loop and not fall into the assumption that AI has the perfect answers for all situations.

Diane Constantine. Senior Director of Enterprise Health Informatics for Children’s Hospital of Philadelphia: The biggest myth in healthcare AI is that it’s here to replace clinicians. It’s not. AI can absolutely support decision-making, but it can’t replace human judgment, context, or compassion — and we’re still the ones accountable for the outcomes. Another myth is that AI is somehow neutral or unbiased, when in reality, it reflects the data and systems it was built on. The real power of AI isn’t automation for automation’s sake — it’s how thoughtfully we redesign care around it.

Melissa Hall. Director and CIO of Information Technology for Silver Hill Hospital (New Canaan, Conn.): Although AI in healthcare is a game changer for some things, it can’t fix everything. Patients will still have their own unique issues that AI will not have ever had to consider. This means that caregivers and providers will still need to “think” about what is occurring. Artificial Intelligence can assist with putting the pieces together, but it is important that it is not seen as the final answer.

Tomi Kolade, MD. Assistant Chief Medical Information Officer for UTHealth (Houston): The biggest myth in healthcare AI is that it is about replacing clinicians. It is not. The real revolution is about restoring them.  AI will not outperform the best physicians in judgment, context, or accountability. What it can do is eliminate the cognitive and administrative drag that has slowly pulled clinicians away from the bedside. When deployed well, AI becomes force multiplication. It reduces friction, clarifies signal, and gives physicians back the mental bandwidth required for complex decision making and human connection. The future of healthcare is not autonomous medicine. It is intelligent infrastructure. The organizations that will lead are not those chasing the flashiest tools, but those building disciplined governance around AI, measuring outcomes rigorously, and embedding these systems directly into clinical workflows. AI is not here to replace clinicians. It is here to help us practice medicine the way it was meant to be practiced.


Babatope Fatuyi, MD. CMIO for UTHealth (Houston): The biggest myth in healthcare AI is that it can replace human judgment. AI is most effective when it augments clinicians and operational leaders by reducing administrative burden, improving visibility, and supporting decision-making — not replacing it. Without strong data governance, workflow integration, and clinical oversight, AI adds risk rather than value. Healthcare organizations that treat AI as a tool for resilience and efficiency, rather than a shortcut, see the most meaningful results.

Conrad Gleber, MD. Associate CMIO for University of Rochester Medical Center (Rochester, N.Y.): The biggest myth in healthcare AI is that more of it automatically leads to better outcomes. AI is genuinely powerful, particularly for pattern recognition across imaging and clinical data, as that’s what it was designed to do. However, its value is realized only when it’s thoughtfully implemented into specific workflows, with the right human expertise still in the loop constantly giving feedback and improving the tool. I see a large risk that health systems will deploy it broadly and assume the work is done. A care team using a sepsis risk prediction model still needs to understand the patient in front of them. The technology doesn’t replace that expertise, though it can synergize with it.

Yasir Tarabichi, MD. Chief Health AI Officer for MetroHealth System (Cleveland): The biggest myth in healthcare AI is that the technology just isn’t good enough yet. That framing misses the point. Healthcare disruption is not primarily a technical problem — it’s a socio-technical one. In many cases, the models are already “good enough.” What’s not ready is the environment we’re trying to deploy them into. We are layering advanced analytics on top of fragmented data, brittle workflows, unclear ownership and operating models built for episodic care.

Brian Lancaster. Senior Vice President and CIO for Children’s Mercy (Kansas City, Mo.): The biggest myth in healthcare AI is that it is a solution in and of itself. Like every major technology before it, it is an enabler rather than the answer. Value only emerges when organizations first define the problem by identifying the clinical or operational pain point, the stakeholders affected, and the outcomes that matter. A disciplined process should then validate data readiness, assess workflow fit, set measurable success criteria and test through a small pilot before scaling. Without this problem-first approach, the technology becomes a tool in search of relevance and initiatives almost inevitably fail to deliver meaningful results.

Uday Madasu. CIO for Covenant Health (Knoxville, Tenn.): Clinical AI will change how physicians and nurses work. It will not eliminate why they are required. The healthcare system is built on licensed accountability, not algorithmic autonomy. AI will not eliminate clinicians, but it will alter skill composition, productivity models and governance requirements.

Raymond Lowe. Senior Vice President and CIO for AltaMed (Commerce, Calif.): One of the biggest myths in healthcare AI is that AI can fully replace clinical judgment or operate independently without oversight from healthcare professionals. AI in healthcare is a powerful support tool, but it is not a substitute for the expertise, empathy and decision-making of trained clinicians.

Bob Berbeco. CIO for Mahaska Health (Oskaloosa, Iowa): The biggest myth in healthcare AI is that an AI solution can be plug-and-play and safely “dropped in” to replace human judgment or automatically improve outcomes.

AI is a sociotechnical system: if it is black-box opaque, not trusted or not embedded into real clinical workflows with physician/clinician oversight, it creates mistrust, noise, risk and disappointment instead of value. This is why we moved away from one-off, bolt-on pilots and toward operationalizing AI inside workflows with governance, ethics, measurable outcomes and clear accountabilities. The most durable wins show up when AI reduces administrative friction and protects physician/clinician time, with humans remaining in the loop who are accountable for the decisions made, and with the system monitored continuously and improved over time rather than set and forgotten.

Divya Devli. Principal Product Manager for Blue Shield of California (Oakland): The biggest myth in healthcare AI is that the hardest problem is building the model. The real challenge is integrating AI into daily workflows without adding cognitive or operational burden, because even highly accurate tools can fail when they interrupt people or require extra steps. Successful AI feels invisible and supports decisions without demanding attention.

Thomas Kingsley. Director of Applied AI for UCLA Health (Los Angeles): The biggest myth in healthcare AI is that a model’s technical performance is the primary barrier to clinical adoption. Most promising AI tools fail not because they don’t work, but because they weren’t designed with the clinical workflow, user experience and implementation context in mind. A perfectly accurate model that adds 30 seconds to a physician’s workflow per patient will quietly die on the vine. The hard problems in healthcare AI are sociotechnical — governance, trust, integration and sustained change management — not algorithmic.

This challenge is only accelerating. In the age of LLM/GenAI, it can take weeks to go from concept to functional prototype or even early product. It has never been easier to have an idea turn into something that can be a nice-looking demo. This has led to an explosion of AI healthcare tech startups. However, most healthcare systems are sophisticated and won’t buy a good idea without evidence. We need to start evaluating AI tools rigorously, the way we do novel therapeutics. This, more than the AI technology itself, will unlock innovation in healthcare. Once there’s evidence, adoption will soon follow.

Eric Snyder. Executive Director of Technology and Innovation for University of Rochester Medical Center—Wilmot Cancer Institute (Rochester, N.Y.): The biggest myth in healthcare AI is that AI is the hard part. Models are easy to get; what’s hard is clean, trustworthy data, governance and integration into real clinical and operational workflows. Without curation, validation and context, AI just scales bad assumptions faster. AI doesn’t fix broken systems; it amplifies them.

George Bailey. Director of CyberTAP, Technical Assistance Program for Purdue University (West Lafayette, Ind.): The biggest myth in healthcare AI is that it will replace clinicians. Healthcare AI is best understood as clinical decision support, not clinical decision replacement. AI excels at narrow tasks like pattern recognition and risk flagging, but it lacks the contextual judgment, ethical reasoning and accountability required for patient care. The greatest value of AI comes when it augments clinicians — reducing cognitive load, improving consistency, and helping them make better-informed decisions — rather than attempting to take decisions away from them.

Parag Jain, MD. Director of Clinical Research for Children’s Health and University of Texas Southwestern Medical School (Dallas): The biggest myth in healthcare AI is that it’s merely a technology revolution. It’s a care-delivery revolution. AI’s real promise isn’t smarter algorithms, but freeing clinicians from cognitive overload, restoring time with patients and reshaping how decisions are made. AI won’t transform healthcare by being added to the system — it will transform healthcare when the system is redesigned around AI.

Matt Morton. Assistant Vice President and Chief Information Security Officer for University of Chicago: The biggest myth in healthcare AI is that it’s a technology problem. Everyone’s racing to pick the right model or the right vendor, but the organizations that treat this as a procurement decision are going to be deeply disappointed — and in some cases, they’re going to create serious risk in the process. AI will be transformative in healthcare, but it’s only as good as what you feed it, and in an environment already burdened by fragmented data, legacy infrastructure and inconsistent data practices, deploying AI without solid data governance isn’t innovation — it’s amplification of the problem. Without knowing what data you have, where it lives, how it’s classified, and who has access to it, you’re not just risking poor model performance — you’re creating HIPAA exposure and potentially introducing bias into clinical workflows. The organizations that will benefit from healthcare AI are the ones investing right now in data quality, data stewardship and the security infrastructure that makes responsible AI use possible. The technology will keep getting better on its own — the question is whether your organization will be ready to use it responsibly when it matters.


Sunil Dadlani. Executive Vice President, Chief Information and Digital Transformation Officer, Chief Cyber Security Officer for Atlantic Health (Morristown, N.J.): There is a common assumption that the most advanced or largest model will automatically drive the strongest clinical or operational results.

In healthcare, that is rarely true. Context and workflow integration matter far more than model size. A focused, well-tuned model embedded directly into Epic workflows often outperforms a highly sophisticated standalone platform that sits outside the clinician’s daily environment.

Scott McEachern. CIO for Southern Coos Hospital & Health Center (Bandon, Ore.): The biggest myth in healthcare AI is that AI tools will solve all the problems. AI is largely a personal technology with demonstrable upside for organizing daily tasks in both personal and professional settings. However, currently, especially for rural healthcare facilities, AI tools are single-subject applications, just like software tools. I would encourage product designers and organizations to pivot to development of low-cost, high-impact, consolidated AI tools that can impact [key performance indicators] across the organization, not just in a handful of areas.

Jennifer Ngure. Director of Clinical Informatics for Sturdy Health (Attleboro, Mass.): The biggest myth in healthcare AI is that if the technology works, adoption will follow. It won’t. I’ve seen near-perfect algorithms collect dust because no one asked the clinicians first. The hospitals winning on AI aren’t those with the best tools — they’re the ones who treated implementation as a human problem before a technical one.

Zafar Chaudry, MD. Senior Vice President, Chief Digital Officer and Chief AI and Information Officer for Seattle Children’s: The biggest myth is the persistent belief that these algorithms are ready to autonomously replace the clinical intuition and nuanced decision-making of a seasoned clinician. While the technology excels at processing massive datasets with incredible speed, it lacks the essential human empathy and contextual understanding required to manage complex patient care safely. We must view AI as a sophisticated co-pilot that enhances our capabilities rather than a total substitute for the expert hands and minds of our frontline clinicians.

Omer Awan. Vice President and CIO for Fred Hutch (Seattle): I believe the biggest myth in healthcare AI is that it will completely replace doctors and clinicians. Although AI has the potential to take over several of the tasks performed by clinicians today, it can never replace the empathy and nuanced judgment brought forward by clinicians. The correct way would be to look at AI in healthcare as an augmentation and not a substitution.

Gary Janchenko, DSc. Chief Information Officer for Central Ohio Primary Care (Columbus, Ohio): A myth worth confronting is that AI can be rolled out across the provider, staff and patient experience and — almost automatically — change everything for the better. Impact follows intention. Scattered or overly broad ambition produces skepticism; focused implementation produces results. Find the friction points, deploy there first, talk openly about what AI can and can’t do, and treat user feedback as your most valuable tuning tool. Build trust before you build scope.

Andrew Rosenberg, MD. CIO for Michigan Medicine (Ann Arbor): There are two primary myths that we need to collectively engage with: No. 1, that AIs will not replace people; and No. 2, that healthcare organizations have the basic processes, controls and governance to safely deploy various AIs for actual clinical care across the health system and at scale. Regarding the first assertion, the razor-tight margins and ongoing expansion of costs within healthcare will remain a strong driver to reduce costs, and after pharmacy costs, payroll and benefits tend to still be the highest single cost center across organizations. Roles that are amenable to automation have been the target of previous efforts to use less expensive alternatives (such as offshore Tier 1 services, [robotic process automation], early chatbots and robotics). These trends will likely now increase as various AIs become more capable and available. As we all become more familiar and more comfortable engaging with AIs for relatively low-risk or low-complexity services (call centers, service desk, basic revenue cycle, etc.), we will increase our confidence to use AIs for work people previously did. Healthcare will not be as aggressive in replacing people as other industries have been and will be, but it will not be able to sit on the sidelines and maintain both people and increased technology expenses.

Unlike the century-long maturation of several key regulatory and industry practices for the safe and effective use of medicinal products and devices, there is currently essentially nothing new, or even tied to existing regulations, that AI providers to healthcare and life science organizations need to follow in a similar manner, especially when they avoid specifically labeling themselves as clinical decision support tools. Compounding this, most healthcare organizations are still working out how to incorporate the basic elements of the SAFER guides for the safe use of the EHR. More nuanced issues, such as the AI observability categories (data quality, model performance and drift, infrastructure monitoring and user feedback), are nascent at best.

We have a long way to go before various AIs can be used with the same assurance we have become accustomed to with medical devices, pharmaceuticals, laboratory practices and the behavior of licensed clinicians.

Tomas Gregorio. Senior Vice President and Chief Digital Information Officer for Care New England (Providence): At Care New England, we’ve learned that sustainable workflow impact, whether it uses AI or not, comes from disciplined use cases, strong governance, and close partnership with clinicians.

AI works best when it’s boring — invisible, reliable and embedded into everyday care delivery.

Rick Leesmann. CIO and Chief Information Security Officer for Sky Lakes Medical Center (Klamath Falls, Ore.): The biggest myth in healthcare AI currently is that it’s a silver bullet for what’s broken about modern healthcare in the United States. AI can’t fix broken workflows, poor data hygiene, staffing shortages or misaligned incentives; all it does is better expose them. Organizations that treat AI as a transformation catalyst, and not a plug-and-play solution that will make all their problems disappear, will win. Those chasing point solutions without a strategy surrounding the technology and change management are only adding to their digital clutter.

Himadeep Movva. Intelligent Automation Specialist for MedStar Health (Columbia, Md.): The fear dominates headlines — that AI is coming for doctors and nurses. But health systems that deployed AI tell a different story: it gives clinicians their time back, not their pink slips. The deeper myth, though, is that simply adopting AI transforms your organization. The question has shifted from “Should we use AI?” to “How do we actually get ROI from it and how soon?”

Ed Lee, MD. Director of Informatics and Chair of Clinical Education for California Northstate University College of Medicine (Elk Grove, Calif.): AI accuracy isn’t the whole game in healthcare. That’s the minimum. If a tool is 98% accurate but slows me down, adds clicks, or makes clinicians feel exposed, it’s not going to fly. What matters is whether it fits into the reality of clinical care and makes the right thing easier to do. If it doesn’t, it doesn’t matter how good the model is.

T.Y. Alvin Liu, MD. Endowed Professor, AI Oversight Team and Inaugural Director for James P. Gills Jr. MD and Heather Gills Artificial Intelligence Innovation Center, Johns Hopkins Medicine (Baltimore): “AI won’t replace doctors, but doctors who use AI will replace those who don’t.” This reassuring line may no longer be safe to assume. Over the past 24 months, frontier AI models have exceeded expectations across multiple technical domains — reasoning, coding, multimodal understanding, and long-context integration — with performance gains that outpaced prior forecasts. Given this accelerating trajectory, it is reasonable to anticipate the emergence of artificial general medical intelligence: an AI system that outperforms at least 75% of physicians across all specialties in both medical knowledge and clinical decision-making. If that prediction proves correct, then in certain contexts of healthcare, such as protocol-driven, diagnostic, or information-synthesis tasks, AI will not merely augment physicians but will meaningfully replace aspects of what doctors currently do.


Salim Saiyed, MD. CMIO for Dell Medical School, The University of Texas at Austin: AI in healthcare represents a once-in-a-lifetime opportunity to meaningfully improve how we deliver care. The biggest myth in 2026 is that AI will replace physicians or nurses, or that it will single-handedly solve all of healthcare’s challenges. AI will augment and reshape the healthcare workforce, enabling us to care for more patients with fewer resources. Success will depend on clinical integration, governance and trust — not on model size or data volume.

Ashley Antipolo, MD. Vice President and Chief Medical Information Officer for Cook Children’s Health Care System (Fort Worth, Texas): One of the biggest myths in healthcare AI is that it is a plug-and-play solution. While AI will unlock meaningful gains in efficiency and insight, the fundamentals of informatics will still determine its degree of success. It may bring information closer to our fingertips, but complex healthcare systems still require thoughtful workflow design, governance, clinician trust, and education. If we are not intentional, we risk layering technology onto broken processes and complicating clinicians’ work — when our real opportunity is to simplify care, restore focus, and keep both patients and clinicians at the center of every innovation.

Brian Dilcher, MD. Associate Professor, Director of Clinical Informatics in Emergency Medicine and Associate CMIO for WVU Medicine (Morgantown, W.Va.): To me, the biggest myth in healthcare AI is that it serves as a panacea for all the challenges clinicians face. While AI can meaningfully improve many administrative and operational processes, it cannot resolve the underlying systemic issues that shape our daily reality in clinical practice — emergency department boarding and overcrowding, well intentioned but burdensome regulations, limited social services, insufficient support for our aging population and concerns about workplace safety, among others. These problems require collective societal commitment and policy-level change. AI can support us, but it cannot replace the need to prioritize the conditions that allow physicians to care for our communities effectively.

Racheal Hernandez. Lead Director of Operations for Rush University Medical Center (Chicago): The biggest myth in healthcare AI is that better algorithms alone will fix broken systems.

There is an underlying assumption in many AI deployments that all tasks in a workflow are already being completed, just inefficiently. Many healthcare processes operate on partial visibility. Work gets triaged informally, deprioritized, or simply goes uncounted.

Refill automation is a clear example. Organizations implemented AI-driven refill processing with the expectation of improved efficiency and, in some cases, reduced staffing. Benchmarks were established, productivity assumptions were made, and teams were resized accordingly. But once refill requests were fully digitized and aggregated, including paper fax requests that had previously been inconsistently processed, the true volume of work became visible. In some practices, daily refill counts effectively doubled. Even duplicate requests require review and reconciliation. What looked like automation-driven efficiency was volume revelation.

AI didn’t create more work. It exposed work that had always existed but wasn’t fully measured. When we deploy AI without operational readiness and governance, we risk optimizing around inaccurate baselines. Automation layered onto incomplete workflows doesn’t eliminate burden but often redistributes and formalizes it. AI scales systems as they are. If the underlying process is fragmented, under-resourced or misaligned with incentives, the algorithm will faithfully scale those weaknesses.

Before asking whether a model is accurate, healthcare leaders should ask a harder question: Are we sure we understand the full scope of the work today?

Russell Horton, DO. Division Medical Director of BMG Primary Care for Banner Health (Phoenix): The biggest myth in healthcare AI is probably the perceived strengths and abilities of AI. There is a fear that AI will outpace us clinically or start to replace clinicians. At this point, AI still very much relies on our training. The human aspect of medicine, the need for a worldview, context, compassion, common sense, and experience cannot be replicated in the AI mind. It is a wonderful and exciting tool but still has growing up to do.

Ryan Kenney. Vice President of Strategy Enablement for Nebraska Medicine (Omaha, Neb.): The biggest myth in healthcare AI: It’s a technology problem. The most significant barriers to success are operational: workflows, incentives and change management. Success comes from integrating people, processes and technology to tackle our most critical problems, not simply deploying a model.

Keith Kaiser. Senior Director of Nursing-Cardiology for Our Lady of the Lake Health (Baton Rouge, La.): The biggest myth behind healthcare AI is the fear and skepticism that follows those letters. I share this perspective as a clinician who has seen the apprehension toward AI integration at the bedside. Commonly, this resistance starts with the fear of replacement or devaluation of bedside clinicians. Yet I believe AI in healthcare is not a replacement but rather an enhancement and augmentation of the tenured clinician’s instincts. We are beginning to see the subtle adoption of this revolutionary change in EMR integrations that offer AI functions.

Mohammed Abdelaziz, MD. Hospital Medicine Medical Director for Mercy Health, Kings Mills Hospital (Mason, Ohio): The biggest myth in healthcare AI is that it can fix systemic healthcare problems. There is a persistent belief that AI is a silver bullet that will eliminate inefficiencies, reduce costs, and resolve fragmentation across the industry. AI is a broad set of tools, and the conversation must shift from discussing AI in general to focusing on specific, high-value use cases with measurable outcomes. AI adoption is not the goal. Using AI to solve clearly defined operational and clinical problems is what creates sustainable value. While AI can meaningfully improve workflows, documentation, and decision support, many of healthcare’s toughest challenges are rooted in economics, policy, regulation, and social determinants that technology alone cannot solve.

Chantal Fremont, DNP, MSN, RN. Corporate Director of Nursing Performance Standards and Innovation for Emory Healthcare (Atlanta): People assume better prediction automatically equals better outcomes, when prediction without execution is just a dashboard. Real impact happens when the alert has a clearly assigned owner, the accountability structure is explicit, the workflow is frictionless and the frontline trusts the tool. AI doesn’t transform healthcare; systems that know how to operate intelligence do. Operationalizing intelligence requires governance, clinical ownership and continuous recalibration, not just model performance metrics. The future of healthcare AI won’t be won by the smartest model. It will be won by the organizations that can turn signals into reliable action.

Krishnaj Gourab, MD. Chief Medical Officer and Vice President, University of Maryland Rehabilitation and Orthopaedic Institute; System Medical Director of Post-Acute Care for University of Maryland Medical System (Baltimore): That AI agents operating in isolation, confined to closed systems like the electronic medical record or workforce management platform, will deliver meaningful clinical or operational improvements for a healthcare organization.

Effective agentic workflows in healthcare must be able to operate across multiple enterprise systems. Organizations need to intentionally design for, or explicitly require, this interoperability when implementing agentic AI solutions.

Nariman Heshmati, MD. Chief Physician and Operations Executive for Lee Physician Group, Lee Health (Fort Myers, Fla.): The biggest myth in healthcare AI is that AI is going to rapidly replace the workforce and solve all the gaps in care. Perhaps there is a future state where AI allows us to be significantly more efficient and leverage greater productivity per individual, but the more near-term reality is one where AI augments our workforce and allows us to provide more necessary services that we simply lack the manpower for. AI is also not an end-all solution for every problem in healthcare. AI is another tool that, when thoughtfully applied, can be a piece of the solution or, when inappropriately overlaid on every application, can be a costly distraction. That is the challenge healthcare systems currently face — identifying the practical and effective use cases for AI deployment in healthcare to improve day-to-day operations or patient care delivery.

The post Healthcare leaders on busting AI myths appeared first on Becker's Hospital Review | Healthcare News & Analysis.

