Why infrastructure is key to combining AI and virtual care

Dr. Kedar Mate is chief medical officer and cofounder of Qualified Health, a vendor of generative artificial intelligence infrastructure. He’s also former president and CEO of the Institute for Healthcare Improvement, which aims to advance equitable health outcomes worldwide through improvement science.

Mate believes that in telehealth and remote patient monitoring today, much attention is being given to technologies like artificial intelligence – and not enough to the robust infrastructure needed to support them.

Mate says his company aims to help hospitals and health systems move beyond point systems toward platforms that embed safety, equity and real-world impact into virtual care delivery. We spoke with him recently about why RPM and telehealth tools need a foundational AI stack to be safe, scalable and effective.

He also discussed what it takes to move from fragmented experiments to operational AI that supports real-time clinical decision making in virtual care, how governance, monitoring and evaluation support sustainable virtual care models and how one designs an AI system for telehealth that genuinely supports equity.

Q. RPM and telehealth tools need a foundational AI stack to be safe, scalable and effective, not just novel, you say. Please elaborate.

A. We’ve seen too many healthcare AI pilots that dazzle in demos but crumble under real-world clinical and operational complexity. What we need is infrastructure that handles the messiness of actual patient care and of the data systems that support it.

AI tools can allow clinicians to set threshold parameters for remote monitoring and deliver alerts analogous to critical lab value notifications, integrating seamlessly into existing clinical workflows rather than creating additional burden. This is even more important in remote or virtual care settings, where we need better and earlier signals of when care isn’t going according to plan.

The AI tools to do this will, in turn, require robust data governance, interoperability standards and fail-safe mechanisms that recognize healthcare delivery is fundamentally about human relationships, not just algorithmic outputs. Safety means building systems that can flag edge cases for additional human intervention – because in healthcare, edge cases are often the patients who need us most.
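
To make that concrete, here is a minimal sketch in Python of clinician-set thresholds with a fail-safe tier that routes edge cases to a human. The metric names, threshold values and alert tiers are illustrative assumptions, not any particular product's design.

    from dataclasses import dataclass

    @dataclass
    class Threshold:
        low: float            # clinician-set acceptable range
        high: float
        critical_low: float   # beyond these, escalate like a critical lab value
        critical_high: float

    def triage_reading(metric: str, value: float, thresholds: dict[str, Threshold]) -> str:
        """Return an alert tier for one remote-monitoring reading."""
        t = thresholds.get(metric)
        if t is None:
            # Fail-safe: an unrecognized metric is an edge case, so route it
            # to a human rather than silently dropping it.
            return "human_review"
        if value <= t.critical_low or value >= t.critical_high:
            return "critical_alert"
        if value < t.low or value > t.high:
            return "routine_alert"
        return "ok"

    # Illustrative heart-rate thresholds, set by the clinician per patient.
    thresholds = {"heart_rate": Threshold(low=50, high=110, critical_low=40, critical_high=130)}
    print(triage_reading("heart_rate", 120, thresholds))  # routine_alert
    print(triage_reading("heart_rate", 135, thresholds))  # critical_alert
    print(triage_reading("spo2", 90, thresholds))         # human_review (edge case)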

Q. What does it take to move from fragmented experiments to operational AI that supports real-time clinical decision making in virtual care?

A. You need to embed improvement science principles from Day One: rapid cycle testing, measurement for learning, and systematic spread strategies that account for local variation in how care teams actually work.

AI must integrate multimodal data from EHRs, wearables, medical imaging, genetics and social determinants of health to create holistic patient profiles, moving beyond single-point systems to comprehensive adjuncts and supports to care.
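
As one illustration of what "multimodal" can mean in practice, here is a minimal Python sketch that folds several data sources into a single patient profile. The source names and fields are hypothetical placeholders, not a real EHR or device schema.

    from dataclasses import dataclass, field

    @dataclass
    class PatientProfile:
        patient_id: str
        ehr: dict = field(default_factory=dict)        # diagnoses, meds, labs
        wearables: dict = field(default_factory=dict)  # heart rate, sleep, steps
        genetics: dict = field(default_factory=dict)   # relevant variants
        sdoh: dict = field(default_factory=dict)       # housing, transport, language

    def build_profile(patient_id: str, *sources: tuple[str, dict]) -> PatientProfile:
        """Fold per-source records into one holistic profile."""
        profile = PatientProfile(patient_id)
        for name, record in sources:
            getattr(profile, name).update(record)  # assumes name matches a dict field
        return profile

    profile = build_profile(
        "pt-001",
        ("ehr", {"a1c": 8.2}),
        ("wearables", {"avg_resting_hr": 78}),
        ("sdoh", {"preferred_language": "es", "reliable_internet": False}),
    )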

Operational readiness requires change management that addresses the human factors – training care teams not just on the technology but on how to use AI tools to augment clinical judgment rather than replace it.

Real-time decision support demands infrastructure that can handle the volume and velocity of clinical data while maintaining the trust and reliability that clinicians need to act on AI recommendations.

Q. How do governance, monitoring and evaluation support sustainable virtual care models?

A. Continuous monitoring requires both clinical outcome measures and process measures that track how AI is actually being used by care teams and received by patients in their daily workflows. Such monitoring should be built with tight parameters so teams can verify that AI tools are providing the needed outputs within clear guardrails.
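
Here is a minimal sketch of what such process-measure monitoring might look like in Python, assuming a numeric AI output checked against clinician-defined guardrails. The metric names and bounds are illustrative assumptions, not a standard.

    class GuardrailMonitor:
        def __init__(self, lower: float, upper: float):
            self.lower, self.upper = lower, upper   # acceptable output range
            self.total = self.out_of_bounds = self.overrides = 0

        def record(self, output: float, clinician_overrode: bool) -> None:
            self.total += 1
            if not (self.lower <= output <= self.upper):
                self.out_of_bounds += 1
            if clinician_overrode:
                self.overrides += 1

        def report(self) -> dict:
            """Process measures, reviewed alongside clinical outcome measures."""
            return {
                "out_of_bounds_rate": self.out_of_bounds / max(self.total, 1),
                "override_rate": self.overrides / max(self.total, 1),
            }

    monitor = GuardrailMonitor(lower=0.0, upper=1.0)  # e.g., a risk score in [0, 1]
    monitor.record(0.42, clinician_overrode=False)
    monitor.record(1.30, clinician_overrode=True)     # outside the guardrails
    print(monitor.report())  # {'out_of_bounds_rate': 0.5, 'override_rate': 0.5}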

Governance structures must center equity and patient outcomes, not just efficiency metrics – we have choices about how we train algorithms and how we apply them. We should choose to build AI models that don’t perpetuate or amplify existing disparities in access to quality care.

Evaluation frameworks need to capture unintended consequences and system effects: How does AI-enabled virtual care change the nature of therapeutic relationships and care continuity? Sustaining and improving the AI models depends on building feedback loops that allow rapid learning and adaptation, treating each deployment as both an intervention and an experiment in improving care delivery.

Q. How does one design an AI system for telehealth that genuinely supports equity?

A. Start with the populations most marginalized by current healthcare systems – design for those with limited digital literacy, unreliable internet or complex social needs, and you’ll build more robust systems for everyone.

AI can promote healthcare equity by expanding access to quality care – for example, by enabling simultaneous translation into hundreds of languages – and by improving both the care experience and the clinical relationship. But these effects only happen if we intentionally design our AI tools to address disparities from the outset.

Equity requires interacting with patients through multilingual interfaces, culturally responsive care protocols and flexibility in how they can engage with AI-supported services based on their preferences and capabilities.

The ultimate proof will be in critically reflecting on the outcome: Have disparities in clinical outcomes across racial, ethnic and socioeconomic lines been reduced after AI implementation, or have they not? If not, you have an important choice ahead: retool your AI deployment to maximize overall outcome impact and reduce unnecessary disparities.
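
That check can be made explicit. Below is a minimal Python sketch comparing a single outcome rate across groups before and after deployment; the group labels and rates are hypothetical placeholders, not real data.

    def disparity_gap(rates_by_group: dict[str, float]) -> float:
        """Gap between best- and worst-performing groups for one outcome."""
        return max(rates_by_group.values()) - min(rates_by_group.values())

    # Hypothetical readmission-free rates by group, pre- and post-deployment.
    pre = {"group_a": 0.82, "group_b": 0.71}
    post = {"group_a": 0.88, "group_b": 0.80}

    print(f"gap before: {disparity_gap(pre):.2f}")   # 0.11
    print(f"gap after:  {disparity_gap(post):.2f}")  # 0.08
    if disparity_gap(post) >= disparity_gap(pre):
        print("Retool the deployment: overall gains without narrowing the gap.")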
