By Phoebe Seers
LONDON, April 28 (Reuters) – The ability of central banks and financial regulators to monitor and combat the risks posed by powerful artificial intelligence models such as Anthropic's Mythos has been called into question after a survey found authorities significantly lag financial firms in AI adoption and lack data on emerging harms.
Financial institutions are adopting AI at more than twice the rate of their supervisors, with just two in 10 regulators reporting “advanced AI adoption”, research published on Tuesday by the Cambridge Centre for Alternative Finance showed. Only 24% of authorities surveyed collect data on industry AI adoption, while 43% have no plans to start within the next two years, the report found.
“This empirical blind spot may undermine the prevailing optimism [on AI]. Authorities cannot successfully harness or oversee AI if they are navigating its adoption and risks without hard data,” the report said.
The research, prepared alongside the Bank for International Settlements, the International Monetary Fund and other multilateral institutions, involved surveying 350 traditional financial institutions and fintechs, more than 140 AI vendors, and 130 central banks and financial authorities spanning 151 countries.
Regulators and global standard-setting bodies have stepped up warnings about the risks posed by the rollout of AI across the financial sector. Earlier in April, Anthropic released Mythos, viewed by cybersecurity experts as posing significant challenges to the banking industry and its legacy technology systems.
Regulators across the globe have engaged with banks over how prepared their legacy systems are for emerging frontier AI models.
The report highlights Mythos as an example of next-generation systems that could soon be capable of exploiting software vulnerabilities at scale, potentially limiting the effectiveness of existing human governance and oversight mechanisms.
“Regulators generally maintain the principle that financial firms should remain accountable for harms, including cyberattacks, whether AI is built in-house or supplied by third parties, but that position becomes harder to apply in the context of more autonomous systems that are provided and managed by third-party vendors,” the authors wrote.
Moreover, traditional approaches to regulatory oversight may no longer be sufficient. The report says regulators must themselves adopt agentic AI, systems capable of taking actions without human oversight, to match the systems they supervise.
(Reporting by Phoebe Seers; Editing by Tommy Reggiori Wilkes/Keith Weir)