
Where The AI Action Plan Falls Short On Healthcare Trust

In a recent opinion piece published by The Hill, Drs. John Whyte and Margaret Lozovatsky laud the current U.S. administration’s AI Action Plan as an exciting first step toward building trust in healthcare AI. 

They claim the plan “evinces close attention to building public and professional trust for AI technology through transparent and ethical oversite [sic] and to accelerate national standards for safety, performance and interoperability.”

To be clear, AI does hold great promise for healthcare. And there are aspects of the plan worth praising, like the acceleration of AI innovation in diagnostics and treatment options, expansion of public-private partnerships, and emphasis on interoperability. But these benefits are overshadowed by three key concerns that will disproportionately impact vulnerable populations if the plan is implemented as written.

Privacy risks of unified health records

A major selling point of the AI Action Plan is a data tracking system that will let patients more easily share personal health information (PHI) with providers. The trade-off is that large tech companies will gain access to details that were previously shared only with patients, providers, and insurance companies.

This shift creates risks by centralizing vast amounts of sensitive medical data, like diagnoses, prescriptions, and lab results, in systems that become attractive targets for cybercriminals. Unlike isolated breaches at individual practices, a compromise of unified records could expose millions of patients’ most sensitive data simultaneously.

Affected most by these risks are patients who rely on providers with fewer cybersecurity resources, like community health centers. These patients also tend to be less digitally literate and face greater consequences from health-based discrimination, such as employment or insurance denial following breaches of mental health or genetic data.

As written, the plan offers few safeguards beyond existing regulations that weren’t designed for AI-driven health data systems at this scale. Without stronger encryption standards, mandatory breach notification timelines, and explicit protections for PHI, the convenience of data sharing comes at an unacceptable risk to patient privacy.

Vague standards and punitive approach

Effective AI governance requires clear and robust regulatory standards. In my opinion, a unified federal framework would be better for healthcare AI than the state-by-state patchwork the U.S. currently operates with. But given that the AI Action Plan pushes deregulation at the expense of patient safety — going so far as to punish states with “burdensome AI regulations” — now clearly isn’t the time for a federal framework.

It was encouraging, then, to see the Senate vote overwhelmingly last month to strip from HR 1 the moratorium that would have blocked states from regulating AI independently. Yet the AI Action Plan takes the opposite approach by calling for the removal of “onerous” rules without defining what it actually considers burdensome or onerous.

This vague approach becomes more concerning given the plan’s stated philosophy: a “Build, Baby, Build” mentality referenced on Page 1 that prioritizes speed over safety. Such an approach creates particular risks in healthcare, where the stakes are higher than in other industries. Under this framework, states like Illinois, which just passed legislation prohibiting the use of AI for mental health decisions, could face penalties for treating patient protections as essential rather than as “red tape” to remove.

The plan additionally fails to address how AI systems will be monitored after deployment, leaving any monitoring to voluntary industry practice. Because AI algorithms continue learning and changing over time, they are liable to develop new biases or errors that can impact patient care quality. Without robust oversight requirements, patients — particularly in communities with fewer resources — become unwitting test subjects for evolving AI systems.

Instead of relying on voluntary industry monitoring, healthcare would benefit from stricter enforcement of clearly defined regulations that monitor AI performance, make algorithmic decision-making more transparent, and validate performance across diverse patient populations. These protections are especially critical for vulnerable communities who often lack the resources to seek alternative care when AI systems fail them.

Amplification of healthcare disparities

Lastly, the plan dismisses concerns about AI bias by removing diversity, equity, and inclusion (DEI) requirements from oversight frameworks. But in healthcare, algorithmic bias isn’t political — it’s a patient safety issue that already costs lives in underserved communities.

The best-known example of this tragedy is how AI models trained predominantly on data from white patients have underestimated breast cancer risk in Black women who were actually at high risk of developing the disease. This likely led to fewer follow-up scans and more undiagnosed or untreated breast cancer cases, worsening health outcomes and contributing to higher mortality rates in Black women.

This isn’t an isolated case. Similar biases have been documented across multiple healthcare applications, from pain assessment tools that underassess discomfort in Black patients to diagnostic algorithms that miss heart disease in women. Yet the plan’s removal of all things DEI means there will be no built-in checks and balances to prevent these biases from being built into new healthcare AI systems. 

Without mandates to test algorithms across diverse populations, such disparities will become widespread as AI adoption accelerates.

Key takeaways

As written, the AI Action Plan actively discourages the kind of rigorous, equity-focused AI governance that patient safety demands. Without correcting course, healthcare AI risks widening rather than closing existing gaps in care quality and access.

This is made abundantly clear by a troubling dynamic: states that attempt to protect vulnerable patients from AI risks could face federal financial penalties for maintaining “burdensome” regulations. This effectively pressures states to lower their standards precisely when stronger protections are needed most. 

Inadequate privacy safeguards will only make systemic vulnerabilities worse. To address rather than amplify existing health disparities in the U.S., oversight and bias prevention mechanisms should be strengthened, not eliminated.

Photo: narvo vexar, Getty Images


Lauren Spiller is an enterprise analyst at ManageEngine, where she explores how emerging technologies like AI are transforming digital workplaces. Her research and writing focus on governance, security, and the human side of tech adoption. Prior to joining ManageEngine, she worked at Gartner, developing data-driven content to help business leaders and software buyers make smarter decisions in fast-moving markets.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
