Shadow AI in healthcare: How 5 system leaders are balancing risk and innovation

Press Release

As generative AI rapidly reshapes healthcare, a new challenge is emerging across hospitals and health systems: “shadow AI,” or the unsanctioned use of AI tools by clinicians and staff. While these tools promise efficiency and innovation, they also introduce significant risks, from patient safety concerns to data security vulnerabilities.

During a Becker’s Healthcare webinar sponsored by Wolters Kluwer, leaders from Cleveland Clinic, Ann & Robert H. Lurie Children’s Hospital of Chicago, Nuvance Health (Danbury, Conn.), Seattle Children’s and University of Chicago Medicine discussed how shadow AI is taking hold, where governance gaps exist and how organizations can respond without slowing progress.

Here are three key takeaways from the discussion:

1. Shadow AI is widespread and often driven by unmet needs.

Panelists emphasized that shadow AI is not an edge case but a growing reality across healthcare organizations. Clinicians and staff are turning to tools like chatbots, writing assistants and coding platforms to work more efficiently — often without formal approval. But rather than viewing this behavior as purely risky, several leaders said it should be interpreted as a signal of unmet operational needs.

Clara Lin, MD, vice president and chief medical information officer at Seattle Children’s, said her organization is analyzing shadow AI usage to better understand where existing tools fall short.

“IT is now looking at that and really trying to decide where the gaps are — what are people trying to use that we are not providing them?” she said. “We’re using that list as a roadmap for us to think about our AI deployment and our AI implementation in the organization so that we give people a safer alternative to use.”

Kelly M. Greening, vice president and deputy general counsel at Ann & Robert H. Lurie Children’s Hospital of Chicago, echoed that sentiment, framing shadow AI as feedback rather than misconduct. “If people are turning to unsanctioned tools, this generally means the organization hasn’t provided a safer or easier alternative,” she said.


2. Patient safety and data risks remain underappreciated.

While administrative use cases may appear lower risk, panelists stressed that shadow AI can directly impact clinical decision-making — and not always reliably.

Peter Bonis, MD, chief medical officer at Wolters Kluwer Health, said risk varies by use case, but clinical applications carry particularly high stakes. Tools used at the point of care can directly influence medical decisions.

“About 30% of the time, healthcare professionals will change their decision if they’re presented with information at or near the point of care,” Dr. Bonis said. “So if that information in some way is faulty, it can lead to at least suboptimal care, if not impair patient safety.”

He added that evaluating generative AI tools remains difficult as they evolve in real-world workflows, with issues such as inconsistent outputs and AI hallucinations still unresolved. Patient trust is also a concern: He noted 93% of consumers report at least one concern about AI in healthcare, and more than half say it reduces trust — highlighting the need for stronger oversight and accountability.

Deborah Gordon, executive vice president, chief legal officer and chief governance officer at Cleveland Clinic, said risks extend beyond clinical care, particularly in business and legal workflows, where AI training may be less established.

“If a paralegal puts a contract in an open-source format, that could subject [us] to putting in confidential information that could open us up to social engineering attacks,” Ms. Gordon said. “Those are things that I think we need to do more education around, because it might not have as firm of a foundation as perhaps we do already in some of the clinical realms.”

3. Governance is shifting from control to enablement and culture.


Across organizations, leaders are shifting from reactive compliance to more proactive governance strategies that enable safe AI use rather than restrict it.

A key part of that shift is improving visibility into what tools already exist across the enterprise. Cheng-Kai Kao, MD, CMIO, medical director of international programs and associate professor of medicine at UChicago Medicine, said his organization has taken a structured approach to inventorying AI tools across departments — reviewing contracts through supply chain and IT to identify what solutions are already in use and where duplication may exist.

“Many times, the reason why a team uses shadow AI might be because they don’t necessarily know the other team is using something that actually could be scalable to their own team as well,” Dr. Kao said.

By centralizing that visibility, organizations can scale existing, approved tools more effectively, reducing the need for unsanctioned alternatives while also improving efficiency and cost management.

At the same time, leaders are investing in AI literacy — training staff on both the capabilities and limitations of AI, particularly around open-source tools, to reduce risk while empowering responsible use.

Albert Villarin, MD, vice president and CMIO at Nuvance Health, said organizations must bring shadow AI into the open through cross-functional engagement, not treat it as a siloed IT issue.

“The reason why they call it shadow is because it’s an unknown brought into a known environment,” he said. “Our job is to take it out of the shadows and make it transparent — putting it into the discussion, whether it be at the faculty level, operational level, executive leadership level. Everyone is part of this.”

Panelists emphasized that governance alone is not enough, pointing repeatedly to human factors like communication, trust and shared accountability as equally critical to managing risk at scale.

Dr. Lin from Seattle Children’s said governance bodies can act as partners rather than barriers, helping teams deploy AI safely while maintaining speed. At Cleveland Clinic, Ms. Gordon underscored that success depends on collaboration across the organization.


“If we don’t actually work together as real people, then we’re really not being able to effectively use and deploy [AI], but also manage those risks,” she said.

Shadow AI can be a catalyst for innovation

While shadow AI introduces risk, it also highlights strong demand for better tools and alignment, and can accelerate innovation when approached thoughtfully. By studying shadow AI usage patterns, organizations can identify gaps, prioritize investments and deploy enterprise-grade solutions that meet real user needs.

Dr. Villarin from Nuvance Health described a shift from resistance to momentum: “We’re in an environment where we’re being pushed to adopt change,” he said. “Take advantage of that energy and facilitate it into innovation.”

Dr. Bonis at Wolters Kluwer said the emergence of AI — shadow or otherwise — signals a broader transformation already underway across healthcare, with organizations still early in understanding its full potential.

“I think AI is still a work in progress,” he said. “We have to put some guardrails on this to keep patients and staff safe, but it’s an exciting journey ahead. We’re seeing the canvas emerge, and people are beginning to draw and paint on it.”

The post Shadow AI in healthcare: How 5 system leaders are balancing risk and innovation appeared first on Becker's Hospital Review | Healthcare News & Analysis.

