Saturday, January 3, 2026

Businesses can’t audit and insure their way to responsible AI

Recently, the Big Four accountancy firms have started offering audits to verify that organisations’ AI products are compliant and effective. We have also seen insurance companies provide AI liability cover to protect companies from risk. These are clear indicators that AI is maturing and that customer-facing use cases are becoming widespread. There is also evident appetite among organisations to protect themselves amid regulatory change and reputational concerns.

But audits and insurance alone will not fix the underlying issue. They are an effective safety net and an added line of protection against AI going wrong, but by the time an error has been discovered by auditors, or an organisation makes an insurance claim, the damage may already have occurred. In most cases, it is data and infrastructure that continue to hold organisations back from using AI safely and effectively, and that is the challenge that needs to be addressed.

Large organisations handle huge volumes of highly sensitive data—whether it’s payroll records, customer information, or intellectual property. Keeping oversight of this data is already a major challenge.

As AI adoption spreads across teams and departments, the associated risks become more distributed. It gets significantly harder to monitor and govern where AI is being used, who is using it, what it is being used for, what it is producing, and how accurate its outputs are. Losing visibility over just one of these areas can have serious consequences.

For example, data could be leaked via public AI models—as we saw in the early days of GenAI deployment. AI models can also end up accessing data they shouldn’t, generating outputs that are biased or influenced by information that was never meant to be used.

The risks for organisations are twofold. First, customers are unlikely to trust companies that can’t demonstrate their AI is safe and reliable. Second, regulatory pressure is growing. Laws like the EU AI Act are already in force, and other regions are expected to introduce similar rules in the coming months and years. Falling short of compliance won’t just damage reputation; it could also trigger financial penalties severe enough to affect the entire business. Under the AI Act, for instance, the EU can impose fines of up to €35m or 7% of an organisation’s global annual turnover, whichever is higher.
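
To put that penalty ceiling in concrete terms, it is simply the larger of a fixed floor and a share of turnover. The short Python sketch below illustrates the calculation; the turnover figure used is a hypothetical example, not data from any real company.

    def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
        """Maximum fine under the EU AI Act's top penalty tier:
        EUR 35m or 7% of global annual turnover, whichever is higher."""
        return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

    # Hypothetical example: a company with EUR 2bn global annual turnover.
    # 7% of turnover (EUR 140m) exceeds the EUR 35m floor, so it applies.
    print(f"EUR {max_ai_act_fine(2_000_000_000):,.0f}")  # EUR 140,000,000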

While AI liability insurance might help recover some of the financial fallout from AI errors, it can’t win back lost customers. Audits may spot potential governance issues, but they can’t undo past mistakes. Without proper guardrails, organisations are essentially gambling with AI risk, introducing fragility and unnecessary complexity that distort outcomes and erode trust in AI-driven decisions.
