Using AI to defend against cyberattacks is now a SOC imperative, experts say

A threat landscape emboldened by artificial intelligence means more cyber adversaries can push malicious code past an organization's endpoint detection and response systems, whether they rely on jailbroken mainstream models or purpose-built uncensored ones.

Frontline security leaders say security operations teams need AI to fight back.

Subverting detection with AI

Since OpenAI launched ChatGPT in late 2022, researchers have found a 4,151% increase in malicious emails, said Steve Akers, chief technology officer and corporate chief information security officer at Clearwater, a cybersecurity consulting firm.

At the company’s virtual healthcare security summit, Akers briefed attendees on Wednesday on how threat actors are using AI to subvert detection by the industry’s security operations centers (SOCs).

Generative AI has made reconnaissance simple for cybercriminals, who can turn to commercial-style tools such as WormGPT, a large language model (LLM) with murky or missing guardrails, for blackhat activities like writing malware snippets and developing phishing campaigns. While WormGPT emerged in 2023, later variants are reportedly built on xAI's Grok and Mistral AI's Mixtral LLMs, according to Cato Networks, a security platform vendor.

Likewise, healthcare SOCs need AI to detect and respond to such threats quickly enough to defend against them. Deepfakes and AI-generated content are also making malicious activity harder to spot, which means security teams must continually adapt their detection methods.

“When you’re using or looking at utilizing AI in security, you’re not doing it in a vacuum,” said Justin Sun, Clearwater’s SOC director. 

“Your threat actors are utilizing AI as well. So it’s an arms race,” he said.

Using AI in the SOC

SOC analysts often wear many hats, and with manual workflows, disconnected incident response and pre-AI threat hunting practices, they may be more time-constrained now than ever.

Further, legacy security systems are often ineffective against modern fileless attacks and polymorphic malware, which changes its form with each infection.

Sun, who was joined at the summit by Albert Caballero, a field CISO with cybersecurity vendor SentinelOne, discussed strategies to decode malware obfuscation techniques, which have grown considerably more sophisticated over the last year.

AI can help deobfuscate complex malware code for faster detection and response. By processing vast amounts of telemetry and surfacing subtle behavioral anomalies that humans might miss, AI-driven detection enables adaptive threat hunting.
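As a rough illustration of that kind of behavioral anomaly detection, the sketch below scores synthetic endpoint telemetry with an isolation forest. The feature set, thresholds and data are hypothetical; neither speaker described a specific model or pipeline.

```python
# A minimal sketch of behavioral anomaly detection over per-process
# endpoint telemetry. Features (child process count, MB written, DNS
# lookups, command-line entropy) are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in for EDR telemetry: rows are processes, columns are
# [child_procs, mb_written, dns_lookups, cmdline_entropy].
normal = rng.normal(loc=[2, 5, 3, 3.5], scale=[1, 2, 1, 0.5], size=(1000, 4))
suspicious = rng.normal(loc=[12, 80, 40, 7.5], scale=[2, 10, 5, 0.3], size=(5, 4))
telemetry = np.vstack([normal, suspicious])

# Isolation forests score outliers cheaply at high volume, which suits
# the "subtle anomalies humans might miss" use case.
model = IsolationForest(contamination=0.01, random_state=42).fit(telemetry)
flags = model.predict(telemetry)  # -1 marks suspected outliers

print(f"{(flags == -1).sum()} processes flagged for analyst review")
```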

Sun and Caballero made it clear that the primary goal of integrating AI into SOCs is not to replace human analysts, but to amplify their capabilities. In addition to automating mundane tasks, AI can speed up the discovery of complex threats, which is critical to keeping healthcare organizations operating and delivering patient care.

“Can we do this without AI? Yeah, but we’re losing. And we want to win, right?” Caballero said.

In the future, agentic AI, in which specialized and general-purpose AI models collaborate to automate tasks and reduce manual response actions, will simplify initial security alert triage and investigation.

“We need to automate this, we need the intelligence that AI brings us, and the continuous learning, the reinforcement,” Sun said.

However, full autonomy, which Caballero called “a holy grail objective,” should only be implemented gradually, and only where trusted.

The security experts noted that defending against deepfakes is more challenging. Those exploits require user awareness training to recognize AI-generated voice and video anomalies, they said. In the SOC, they recommended focusing on abnormal activity, such as account access from unfamiliar geolocations and unusual communication patterns, to aid deepfake detection.
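A toy sketch of the first of those signals, assuming login events carry user and country fields (both hypothetical, standing in for real identity-provider logs), might look like this:

```python
# Flag account access from geolocations a user has never logged in from.
# Event fields and the baseline approach are hypothetical illustrations.
from collections import defaultdict

login_events = [
    {"user": "alice", "country": "US"},
    {"user": "alice", "country": "US"},
    {"user": "bob",   "country": "DE"},
    {"user": "alice", "country": "KP"},  # never seen for alice -> alert
]

seen = defaultdict(set)  # per-user baseline of observed countries
for event in login_events:
    user, country = event["user"], event["country"]
    if seen[user] and country not in seen[user]:
        print(f"ALERT: {user} accessed from unfamiliar geolocation {country}")
    seen[user].add(country)
```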

The bottom line, according to the security experts, is that the key platforms SOC teams use must be fed with data. Sun and Caballero explained that the data must also be filtered by well-known machine learning algorithms and then positioned so that security specialists can query it with generative AI prompts.
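One way to picture that two-stage shape, with a placeholder ml_risk_score classifier and a placeholder ask_llm client standing in for whatever a real platform provides (both hypothetical), is:

```python
# Sketch of the pipeline shape described above: conventional ML filters
# raw alerts first, then a generative AI prompt reasons over what's left.
def ml_risk_score(alert: dict) -> float:
    # Placeholder: a real deployment would call a trained classifier here.
    return 0.9 if alert["signature"] == "powershell_encoded" else 0.1

def ask_llm(prompt: str) -> str:
    # Placeholder for whatever generative AI endpoint the platform exposes.
    return "LLM triage summary would appear here."

alerts = [
    {"host": "ehr-app-01", "signature": "powershell_encoded"},
    {"host": "imaging-03", "signature": "benign_update"},
]

# Step 1: well-known ML filtering narrows the haystack.
high_risk = [a for a in alerts if ml_risk_score(a) > 0.5]

# Step 2: a generative AI prompt queries what survived the filter.
prompt = "Summarize and prioritize these alerts for a healthcare SOC:\n" + "\n".join(
    f"- {a['host']}: {a['signature']}" for a in high_risk
)
print(ask_llm(prompt))
```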

“It’s not optional,” said Caballero. “It is an arms race, and we are saying that AI needs to be implemented in every aspect of your security program.”

Then, “you can use agents for reasoning based on the data that they’re seeing to make better decisions, and hopefully anticipate and get ahead of the attack,” he added.

Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]

Healthcare IT News is a HIMSS Media publication.


