How the generative AI boom changes healthcare cybersecurity

How the generative AI boom changes healthcare cybersecurity

This audio is auto-generated. Please let us know if you have feedback.

Generative artificial intelligence has exploded in the healthcare sector in recent years, driven by hopes the technology could take on a variety of tasks — from clinical documentation to data analysis — and lessen the industry’s long-standing workforce challenges.

At the same time, healthcare organizations often struggle to manage cybersecurity, burdened by frequent cyberattack attempts as the sector adopts more internet-connected tools.

AI products could be another target for cybercriminals. Meanwhile, hackers can use their own AI to launch cyberattacks. That should create new work and security disciplines for healthcare cyber teams, said Taylor Lehmann, a director of the office of the chief information security officer at Google Cloud.

Lehmann sat down with Healthcare Dive to discuss how the advent of generative AI tools will impact healthcare cybersecurity and what organizations need to do to prepare for an increasingly AI-augmented security workforce. 

This interview has been edited for clarity and length. 

HEALTHCARE DIVE: Would you say these generative AI tools are more ripe for attack than any other piece of tech hospitals use? Or is that just another vendor they have to consider when they’re thinking about cybersecurity?

Taylor Lehmann, director of the office of the chief information security officer at Google Cloud

Taylor Lehmann, a director of the office of the chief information security officer at Google Cloud

Permission granted by Google Cloud

 

TAYLOR LEHMANN: This is where I am concerned about the future. Today we’re talking with AI systems, tomorrow we’re watching AI systems do stuff — like the agentic push we’re still on the cusp of — so this answer might change a bit. But number one, it’s going to be very hard to detect when AI is wrong and whether that being wrong is due to it being manipulated by a nefarious actor. 

Detecting wrongness is already a challenge. We’ve been working on this for years. No one’s been able to completely eliminate hallucinations or inaccuracy. But I do think detecting more than just hallucinations, but functionality that has been introduced by a bad guy sitting in the middle to get a certain outcome — it’s going to be really hard to detect and increasingly difficult to detect. 

So organizations need to make sure that they have provenance for basically everything from the model being served to you, the technology that has been done with, all the way down to the provider of the model and the data they use to train it. That visibility is going to be critical. 

At Google we use model cards. We use practices like cryptographic binary signing, where we can tell exactly where the code came from that is running that model. We can tell you exactly where the data came from that trained that model. We can trace every record that went into that model’s training from birth to death. And organizations are going to need to be able to see that in order to manage those risks.

You mentioned the importance of being able to tell what models are trained on and what data is flowing in, and how that could help you figure out if the tool has been infiltrated. How should health systems be thinking about implementing workforces now to deal with those concerns?

There’s definitely new security disciplines coming. This is actually some of the work that we’re doing right now in the office of the CISO. Google is looking at some of these new roles and new capabilities, and then putting guidance together for folks on what to do.

The first thing I would say is, there are ways and methods that are needed to secure AI systems out of the gate. We feel very strongly that any reasonable approach to securing AI and AI systems needs to involve having strong identity controls in place and making sure that we know who’s using the model. We know what the model’s identity is, we know what the user’s identity is and we can differentiate between those two things. We also have radical transparency in everything the model does. Right or wrong, we can see it.

One of the new big areas is this concept of AI “red teaming.” Organizations deploy a team of people that do nothing but try to make these things break — basically try to get them to produce harmful content, take actions that were not intended — to test the limits of the safeguards, as well as evaluate how well the models are trained, whether they’re over- or under-fit for purpose.

I think you’re starting to see AI governance becoming hugely important. It’s always been important, but understanding the risks associated with AI — especially in regulated industries or in safety-critical use cases — requires a combination of technical skill as well as understanding of how AI works and is built, as well as the regulatory or the business context to determine what is an important risk.

Source link