6 Questions To Ask Before Integrating AI Into A Clinical Workflow 


The emergence of large language models (LLMs) prompted one research team to compare how well this technology performs against traditional clinical decision support systems in identifying potential drug-drug interactions. The retrospective analysis found that traditional clinical decision support tools identified 280 clinically relevant interactions, while the AI tool found only 80.

Studies like these are good examples of why healthcare providers are cautious about adopting AI into clinical practice. Respondents to a 2024 Healthcare IT Spending study from Bain and KLAS Research cite regulatory, legal, cost, and accuracy concerns, which are all valid considerations when patient safety is at stake.

However, the study also found that AI continues to gain traction with healthcare providers. Respondents appear optimistic about implementing generative AI and are more inclined to experiment with the technology to improve outcomes. 

AI raises the long-standing central dilemma of integrating any technology into clinical workflows: How do we use technology to improve care while minimizing risk?

Let’s look at this question through the lens of clinical decision support, specifically medication information for prescribers. For decades, technology has supported clinicians with insights into drug safety, as it would be impossible for clinicians to keep pace with continuously growing and evolving evidence. For example, more than 30 million citations currently exist in PubMed, and it expands by about one million new citations every year. 

Technology can help. Content databases surveil the world’s literature, regulatory updates, and clinical guidelines. They critically evaluate quality and synthesize results into content and recommendations clinicians can use at the point of care. 

Sound decision support systems provide trusted, evidence-based information. They engage clinicians to carefully and accurately curate it from the universe of medical literature available today. This provides clinician users with the latest relevant evidence to inform specific patient care decisions at the point of care. AI can enhance the experience by surfacing the information within these systems even faster, and with fewer clicks, especially if it has been built for this purpose.

General AI vs. purpose-built AI

LLMs, such as ChatGPT, have taken center stage in conversations about AI in recent years. These tools offer strong general-purpose language understanding and reasoning capabilities.

However, just adding general AI tools to these decision support systems and pointing them to a body of clinical documents will not deliver the benefits many are looking for. Studies provide a cautionary tale for those who believe they can use general-purpose LLMs instead of an established decision support system to assess drug-drug interactions. 

For example, one study found that ChatGPT missed clinically important potential drug-drug interactions. In another study, ChatGPT could identify potential drug-drug interactions, but scored poorly in predicting severity and onset or providing high-quality documentation. These findings demonstrate the shortcomings of systems that are not purpose-built for clinicians making patient care decisions.

Simple questions can help healthcare organizations determine if the decision support AI they’re considering is purpose-built for clinicians:

  1. Who is this AI designed for? Purpose-built AI is focused. It targets a specific audience and concentrates on the questions that matter most to that audience. When done correctly, such a system should outperform a general-purpose system in its area of expertise. 
  2. What data is training this AI? Direct citations of evidence must be a core part of any answer in a decision support tool. General AI systems may comb the internet for related content, but they can pull in flawed evidence that has not been peer-reviewed or vetted by experts. Many publications are not available in free full text online, so an LLM may miss the details of a critical paper, creating a gap in the evidence. The system should also be updated frequently to include the most recent findings and regulatory materials. Finally, it should be clear to the user what sources the AI draws on for its answers.
  3. How is this AI interpreting my question? In healthcare, users may ask questions with ambiguous acronyms or incomplete follow-up questions. For instance, "what about vancomycin" looks like a random fragment in isolation. But if the previous question was "monitoring parameters for cefepime," it becomes clear that the correct interpretation is "monitoring parameters for vancomycin." The AI system should tell the user how it is interpreting a question, so the user knows from the start whether the AI is even answering the right question. Clarification mechanisms let users refine their query before the AI provides an answer (a simple sketch of this interpretation step follows this list).
  4. Does this AI provide more than one best-fit answer? A common situation for nurses and pharmacists is determining whether multiple drugs can be combined in different solutions for intravenous (IV) administration. A simple chat response may offer only one best-fit answer, yet the clinician may need several options, especially if the patient has limited IV access. Clinicians need systems that support their judgment in administering medications safely.
  5. Will this AI recognize its limitations? AI technologies are improving every day, but they have limitations. Finding an answer quickly is important, but expectations should be realistic. For example, a user could ask a question that amounts to requesting a meta-analysis, which would be difficult to perform accurately and quickly enough to support a decision at the point of care. AI systems must recognize and be transparent about their limitations rather than risk providing a fabricated answer that endangers patient safety.
  6. Have clinicians been involved in developing this AI? Clinicians must always remain in the driver's seat for any tool, technology, or process that affects patient safety. Period. Clinicians bring an essential point of view to developing these technologies and to the feedback loop that continually improves them. Critical components of all clinical decision support tools should be validated by clinicians and through user testing.
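To make the interpretation point (question 3) concrete, here is a minimal sketch in Python of how a decision support assistant might resolve a terse follow-up question against prior context and echo its interpretation back to the user before answering. All names (ClinicalQuery, resolve_followup) and the simple string matching are illustrative assumptions for this article, not a description of any particular product's implementation.

```python
# Hypothetical sketch: resolve a terse follow-up question using the prior
# question's intent, then surface the interpretation for confirmation.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ClinicalQuery:
    intent: str   # e.g., "monitoring_parameters"
    drug: str     # e.g., "cefepime"


def resolve_followup(previous: Optional[ClinicalQuery], raw_text: str) -> ClinicalQuery:
    """Interpret a follow-up like 'what about vancomycin' using prior context."""
    text = raw_text.lower().strip()
    if previous and text.startswith("what about"):
        # Carry the previous intent forward; swap in the newly mentioned drug.
        new_drug = text.replace("what about", "").strip(" ?")
        return ClinicalQuery(intent=previous.intent, drug=new_drug)
    # Without usable context, ask for clarification rather than guess.
    raise ValueError("Ambiguous question -- please specify what you want to know about this drug.")


# Example exchange
first = ClinicalQuery(intent="monitoring_parameters", drug="cefepime")
follow_up = resolve_followup(first, "what about Vancomycin")

# Echo the interpretation so the clinician can confirm or correct it
# before any answer is generated.
print(f"Interpreting your question as: {follow_up.intent.replace('_', ' ')} for {follow_up.drug}")
```

In practice, interpretation involves far richer intent and entity resolution than this, but the design principle is the same: state the interpretation and let the clinician confirm or refine it before an answer is given.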

A collaborative approach offers better outcomes

Ultimately, purpose-built AI is focused on the outcome: helping clinicians access trusted information at the point of care. The combination of human expertise and AI can achieve better outcomes than either can alone.



Sonika Mathur is the Executive Vice President and General Manager of Micromedex, a drug information clinical decision support technology. Sonika has more than 20 years’ experience in clinical decision support, tech-driven care delivery, and patient engagement. Before joining Merative, she led initiatives at Cityblock Health and Elsevier Clinical Solutions.

