Misconfigured AI will shut down national critical infrastructure in a G20 country: Gartner forecast

Image for representative purposes only. File | Photo Credit: Reuters

Gartner, a business and technology insights company, predicted that by 2028, misconfigured AI in cyber-physical systems (CPS) would shut down national critical infrastructure in a G20 country.

Gartner defines CPS as engineered systems that orchestrate sensing, computation, control, networking and analytics to interact with the physical world (including humans). CPS is the umbrella term encompassing operational technology (OT), industrial control systems (ICS), industrial automation and control systems (IACS), the industrial Internet of Things (IIoT), robots, drones and Industrie 4.0.

“The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal,” cautioned Wam Voster, VP Analyst at Gartner. “A secure ‘kill-switch’ or override mode accessible only to authorised operators is essential for safeguarding national infrastructure from unintended shutdowns caused by an AI misconfiguration.”
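To illustrate what such an override might look like in software, consider the minimal sketch below. It is a hypothetical illustration only, not a description of any vendor's product; names such as `OverrideGate` and `engage_kill_switch` are invented for the example. The gate sits between the AI controller and the physical actuators and drops AI-issued commands once an authorised operator switches it to manual:

```python
# Illustrative sketch only: a manual override ("kill-switch") gate
# between an AI controller and physical actuators. All names here
# are hypothetical, not drawn from Gartner or any real system.

from enum import Enum


class Mode(Enum):
    AI_CONTROL = "ai_control"       # AI recommendations are executed
    MANUAL_OVERRIDE = "manual"      # only human operators may act


class OverrideGate:
    """Route every AI-issued command through a kill-switch check."""

    def __init__(self, authorised_operators: set[str]):
        self.mode = Mode.AI_CONTROL
        self.authorised = authorised_operators

    def engage_kill_switch(self, operator_id: str) -> None:
        # Only authorised operators may flip the system to manual.
        if operator_id not in self.authorised:
            raise PermissionError(f"{operator_id} is not authorised")
        self.mode = Mode.MANUAL_OVERRIDE

    def execute(self, command: str, actuator) -> object:
        # In override mode, AI commands are dropped, not executed,
        # so a misconfigured model cannot keep driving the plant.
        if self.mode is Mode.MANUAL_OVERRIDE:
            return None
        return actuator(command)


gate = OverrideGate(authorised_operators={"operator-7"})
gate.execute("open_breaker_12", actuator=print)  # AI command runs
gate.engage_kill_switch("operator-7")
gate.execute("open_breaker_12", actuator=print)  # silently dropped
```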

According to Gartner, misconfigured AI can autonomously shut down vital services, misinterpret sensor data or trigger unsafe actions. This can result in physical damage or large-scale service disruption, posing direct threats to public safety and economic stability by compromising control of key systems like power grids or manufacturing plants.

For example, modern power networks rely on AI for real-time balancing of generation and consumption. A misconfigured predictive model could misinterpret demand as instability, triggering unnecessary grid isolation or load shedding across entire regions or even countries.
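A hedged sketch of this failure mode (hypothetical values and function names, not real grid code) shows how a single misplaced decimal in a configuration threshold, the kind of error Mr. Voster warns about, can reclassify a routine demand swing as instability:

```python
# Hypothetical illustration of a misplaced decimal in a grid-control
# config. The threshold was intended to be 0.15 (a 15% deviation),
# but a typo makes the controller ten times more sensitive.

INSTABILITY_THRESHOLD = 0.015   # misconfigured; should be 0.15


def relative_deviation(demand_mw: float, supply_mw: float) -> float:
    """Relative mismatch between demand and supply."""
    return abs(demand_mw - supply_mw) / supply_mw


def control_action(demand_mw: float, supply_mw: float) -> str:
    # A routine 3% demand swing now reads as "instability" and
    # triggers load shedding that the correct threshold would ignore.
    if relative_deviation(demand_mw, supply_mw) > INSTABILITY_THRESHOLD:
        return "SHED_LOAD"   # isolate feeders / drop regions
    return "BALANCE"         # normal dispatch adjustment


print(control_action(demand_mw=10_300, supply_mw=10_000))  # SHED_LOAD
```

With the intended threshold of 0.15, the same 3% swing would produce an ordinary balancing adjustment; the misplaced decimal alone converts it into a shutdown trigger.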

“Modern AI models are so complex they often resemble ‘black boxes,’” said Mr. Voster. “Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed.”
