Artificial intelligence can support, but should not replace, clinical decision-making, ECRI and the Institute for Safe Medication Practices said in their latest annual ranking of patient safety concerns.
In the report, “navigating the AI diagnostic dilemma” was named the No. 1 patient safety concern for 2026. The biggest cited risk is overreliance on the technology, which can lead to diagnostic errors and erode clinicians’ critical thinking skills, ECRI said.
ECRI outlined 14 “action recommendations” for hospitals to more safely implement AI in diagnostics:
- Establish AI governance policies, roles and oversight structures.
- Train staff on appropriate AI use and its limitations.
- Require documentation of AI use in diagnostic decisions.
- Evaluate AI tools using human factors engineering principles.
- Assess AI tools for value, cost and potential risk of harm.
- Disclose AI use to patients and obtain their informed consent.
- Allow patients to opt out of AI-supported care.
- Address patient concerns and reinforce AI as a support tool.
- Evaluate AI technologies’ impact on clinical workflows.
- Monitor staff satisfaction and user experience with AI systems.
- Encourage reporting and investigation of AI-related concerns.
- Reinforce clinician judgment and second opinions alongside AI.
- Track disparities across patient populations during AI use.
- Train staff to identify, document and report AI-related errors.
The report warned that, despite AI’s potential to improve diagnostic accuracy and efficiency, unchecked reliance on the technology could increase the risk of misdiagnosis.
The post 14 steps for hospitals to take on AI in diagnostics: ECRI appeared first on Becker's Hospital Review | Healthcare News & Analysis.