Empathy Meets Efficiency: How AI is Transforming Medicare Call Centers

My parents are in their 80s, and every time I visit them, I’m reminded of something profound: health is their top priority, and nothing else comes close. As we age, our values often shift, and for many senior Americans, that shift means a focus on wellness.

Access to health insurance is crucial. For 68 million Americans, this means relying on Medicare, with over 34 million enrolled in Medicare Advantage. While selecting a Medicare plan might seem routine, the process can be quite complex. In many cities, the average beneficiary has access to more than 40 Medicare Advantage plans.

Additionally, beneficiaries may face long call-wait times, whether they are contacting the government to enroll in original Medicare or speaking with insurance agents to compare Medicare Advantage, Medicare Supplement, or Part D plans. These waits can be especially long during the Medicare Annual Enrollment Period (Oct. 15 to Dec. 7).

AI can address this issue effectively. The following are observations from implementing AI in the Medicare shopping and enrollment process, along with considerations for other companies exploring AI for customer service.

AI voice agents shouldn’t be taskmasters

Calls from Medicare beneficiaries are not routine interactions; they’re about one of the most important decisions people can make. And yet not every call goes as intended. Some beneficiaries need more patience and empathy during calls, and some need extra time to express their needs or explain their use of healthcare services, doctors, and medications. Fully understanding those needs, and determining the optimal health insurance option, simply takes longer.

That’s why AI voice agents must be designed with empathy. The underlying focus should be on creating an AI agent that listens and treats every call with the seriousness and respect it deserves.

Patience goes together with empathy, a fact that was missed in early iterations. Initially, the AI voice agents focused solely on efficiency: booking appointments and transferring calls to licensed insurance agents who serve as advisors. But still, performance lagged. The mistake: the AI agents were built as taskmasters and lacked emotional intelligence.

It is key for AI voice agents to build rapport and understand the customer’s intent, not just check a box. Efficiency matters, but empathy drives impact.

Voice quality matters

One major barrier to broader adoption of, and satisfaction with, voice automation has always been the robotic voice. Nobody likes talking to a machine that sounds … like a machine.

That’s why interactive voice response (IVR) systems often used human voice recordings in the past. Current generative AI voice models have made synthesized speech far more authentic and emotionally expressive.

For companies building an AI voice agent, choose the agent’s voice carefully. It’s not a detail; it’s an experience.

AI voice agents are probabilistic, not deterministic

Many software engineers are used to software behaving in predictable ways, like a Java program that always returns the same output for the same input. But conversational AI doesn’t work that way: it predicts the next word based on probability and context, much as humans do.

Development teams can design guardrails (prompts, knowledge bases, objection-handling scripts) and set boundaries to ensure compliance. But it is important to embrace variability: AI agents can be trained and guided, but they are never fully predictable.
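A toy sketch can make the probabilistic point concrete: the model samples its next word from a probability distribution, so the same input can legitimately produce different outputs on different calls. The vocabulary and probabilities below are invented for illustration, not drawn from any real model.

```python
import random

# Hypothetical next-word distribution a language model might produce
# after the prompt "I need help choosing a ..." (illustrative values only).
next_word_probs = {
    "plan": 0.55,
    "doctor": 0.25,
    "pharmacy": 0.15,
    "flight": 0.05,
}

def sample_next_word(probs: dict, temperature: float = 1.0) -> str:
    """Sample one word from the distribution. Higher temperature flattens
    the weights, increasing the variability of the agent's replies."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# Two calls with identical input can return different words:
print(sample_next_word(next_word_probs))
print(sample_next_word(next_word_probs))
```

Guardrails constrain which outputs are acceptable, but as long as sampling is involved, the exact wording will vary from call to call.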

Generative AI voice agents have their own set of quirks, often frustrating and sometimes humorous. For example, they’ll misread things like ZIP codes, saying “ninety-five thousand, one hundred twenty-nine” instead of just “nine five one two nine.” Or they’ll speed through a toll-free number so quickly the caller can’t write it down.

These issues add up and can impact the customer experience. It is possible to mitigate many of these quirks through prompt engineering, custom validation functions, and input formatting logic.
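As one example of input formatting logic, digit strings can be rewritten before they reach the text-to-speech layer so the agent reads them digit by digit, with pauses the caller can follow. This is a minimal sketch under the assumption that such a preprocessing hook exists; it is not a specific vendor API.

```python
# Map each digit to its spoken word so "95129" is never read as a number.
DIGIT_WORDS = {
    "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
    "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
}

def spell_digits(number: str) -> str:
    """Render '95129' as 'nine five one two nine'."""
    return " ".join(DIGIT_WORDS[d] for d in number if d.isdigit())

def format_tollfree(number: str) -> str:
    """Group a 10-digit toll-free number so the caller can write it down,
    e.g. '800-555-1234' -> 'eight zero zero, five five five, one two three four'."""
    digits = [d for d in number if d.isdigit()]
    groups = ["".join(digits[:3]), "".join(digits[3:6]), "".join(digits[6:])]
    return ", ".join(spell_digits(g) for g in groups)

print(spell_digits("95129"))        # -> nine five one two nine
print(format_tollfree("800-555-1234"))
```

The commas double as pause cues for most speech synthesizers, which slows the readout enough for a caller to transcribe the number.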

There’s no finish line

We’re all discovering, debugging, and improving together in this new era of conversational AI, and not everything is going to work. In fact, a recent report suggests that up to 80% of AI contact center projects could fail. 

Here’s a playbook to help succeed with AI voice agents:

  • Start small: Begin with a limited call volume (e.g., after-hours) and expand after an initial stabilization period.
  • Analyze calls: Combine manual review with AI-driven intent analysis to understand real conversations. Refine before expanding volume.
  • Run controlled A/B tests: Experiment with different prompts, voices, call scripts, and workflows. Avoid comparing against top-performing human advisors until the AI voice agent is stable.
  • Measure customer satisfaction: Collect post-call feedback and dig into the reasons behind low ratings to drive improvement.
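For the controlled A/B tests above, one common approach is to assign each caller to an experiment arm by hashing the caller ID, so repeat callers always hear the same voice and script. The arm names below are illustrative assumptions, not part of any described system.

```python
import hashlib

def assign_variant(caller_id: str, variants: list) -> str:
    """Deterministically map a caller to one experiment arm.
    Hashing keeps the split stable across repeat calls from the same number."""
    digest = hashlib.sha256(caller_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical experiment arms: two voice/script combinations under test.
arms = ["voice_a_script_1", "voice_b_script_2"]
print(assign_variant("+1-408-555-0100", arms))
```

Because the assignment is a pure function of the caller ID, the experiment stays consistent across repeat contacts without storing any per-caller state.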

Final thoughts

The success of AI voice agents for call centers depends on taking the right approach. Start with realistic expectations, focus on customer needs, and work with technology providers that are invested in your success, not just in selling software.



Atul Kumar is vice president of product & AI at eHealth, a leading private online health insurance marketplace that helps consumers confidently navigate their health benefit decisions. He is an entrepreneurial AI product leader with over 25 years of experience building high-impact products and large language model (LLM) workflows across big tech, growth-stage companies and startups. He is based near San Francisco.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
