A core, and sometimes invisible, tenet across global regulatory frameworks is the obligation of supervision. Irrespective of jurisdiction, regulators expect firms to actively supervise their employees, systems, and processes. This requirement is woven into the DNA of financial regulation.
As an example, FINRA requires firms to “establish and
maintain a system to supervise” the activities conducted on their behalf. Similarly,
the NFA mandates that members “diligently supervise” their employees, agents,
and the systems they rely on. The FCA obligates firms to maintain “systems and
controls” appropriate to their business.
Across all three regimes, the message is harmonized and unmistakable: supervision is not optional, not discretionary, and not limited in scope. It applies to everything the firm does.
AI is already making a mark at retail brokerages, with firms like $HOOD and $ETOR highlighting that roughly half their code is being written with AI, which I think has been one contributing factor to the acceleration we are seeing in product launch cycles without corresponding…
— Devin Ryan (@devinpryan) November 23, 2025
An Integral Part of Daily Business
Because supervision runs so deep in day-to-day activities, it can feel like second nature. For instance, firms supervise
employees for misconduct, competence, and adherence to procedures.
Supervision stretches
beyond human conduct; it encompasses the technology firms rely on every day.
Payment systems must route client funds correctly, algorithmic trading systems must execute within set parameters, and risk engines must
accurately measure exposures and enforce limits.
What happens if these systems fail? Client
funds can be misrouted, an algo can blow up a customer’s account, and a failed
risk engine can expose the firm to catastrophic losses. Whether the root cause is a failing system or human misconduct, the result is the same: legal and regulatory exposure.
Consequently, firms supervise these activities as a natural course of business: they embed controls, monitor outcomes, and establish escalation procedures so that results stay within intended bounds, as the sketch below illustrates.
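To make the pattern concrete, here is a minimal, hypothetical sketch in Python of an embedded pre-trade control with an escalation path. The `Order` and `RiskEngine` names, the single position limit, and the escalation queue are assumptions for illustration, not any particular firm's implementation.

```python
from dataclasses import dataclass


@dataclass
class Order:
    account: str
    symbol: str
    quantity: int  # signed: positive = buy, negative = sell


class RiskEngine:
    """Embedded control: enforce a per-account position limit."""

    def __init__(self, position_limit: int):
        self.position_limit = position_limit
        self.positions: dict[tuple[str, str], int] = {}
        self.escalations: list[str] = []  # queue for human review

    def check_and_book(self, order: Order) -> bool:
        """Block any order that would push a position past its limit."""
        key = (order.account, order.symbol)
        projected = self.positions.get(key, 0) + order.quantity
        if abs(projected) > self.position_limit:
            # Control: reject the order. Escalation: record it for supervisors.
            self.escalations.append(
                f"LIMIT BREACH blocked: {order.account}/{order.symbol} "
                f"projected {projected} vs limit {self.position_limit}"
            )
            return False
        self.positions[key] = projected
        return True


engine = RiskEngine(position_limit=1_000)
assert engine.check_and_book(Order("ACCT-1", "XYZ", 800))      # within limit
assert not engine.check_and_book(Order("ACCT-1", "XYZ", 500))  # blocked
print(engine.escalations[0])
```

The point of the design is that the control blocks the unintended outcome automatically, while the escalation queue preserves a record supervisors can review, the same two ingredients regulators expect around any critical system.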
AI Changes What Must Be Supervised and How
Naturally, this DNA-level expectation of
supervision extends to emerging technology. The advent of AI does not change supervisory obligations; it simply changes what must be supervised, and how.
Regulators have already made clear that new technology
does not replace oversight; it becomes the subject of oversight. AI is no exception.
Unlike traditional systems with fixed logic, AI generates outputs based on patterns, context, and linguistic probability. Studies show AI models
can hallucinate at rates approaching 60%, and their reasoning often remains
opaque. This leads to the central supervisory question every firm must confront.
Utah’s tech community deserves a voice in the debate over AI regulation.
— Clint Betts (@clintbetts) November 20, 2025
How does a firm supervise a system that is probabilistic, may hallucinate, and cannot always explain its reasoning?
The answer is to apply the same supervisory
discipline that already governs every critical system. Document how the AI is intended to behave,
test it rigorously, monitor its outputs, challenge its decisions, and build
escalation pathways for when it inevitably gets something wrong.
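What does this look like when the critical system is a model rather than a rules engine? The sketch below, again hypothetical, wraps a model call so that every output is validated against documented expectations, logged as evidence of oversight, and escalated to a human reviewer when a check fails. The `model` callable, the `unsourced_numbers` validator, and the log format are placeholders for illustration, not a real vendor API.

```python
import time
from typing import Callable, Optional


def supervised_generate(
    model: Callable[[str], str],
    prompt: str,
    validators: list[Callable[[str], Optional[str]]],
    audit_log: list[dict],
) -> tuple[str, bool]:
    """Run the model, validate its output, and log everything.

    Returns (output, needs_human_review).
    """
    output = model(prompt)
    issues = [msg for check in validators if (msg := check(output)) is not None]
    audit_log.append({  # evidence of ongoing oversight
        "ts": time.time(),
        "prompt": prompt,
        "output": output,
        "issues": issues,
        "escalated": bool(issues),
    })
    return output, bool(issues)


def unsourced_numbers(text: str) -> Optional[str]:
    """Crude stand-in for a hallucination check: numeric claims need a source."""
    if any(ch.isdigit() for ch in text) and "source:" not in text.lower():
        return "numeric claim without a cited source"
    return None


audit_log: list[dict] = []
fake_model = lambda prompt: "Projected growth is 12% next quarter."
output, escalate = supervised_generate(
    fake_model, "Summarize the outlook", [unsourced_numbers], audit_log
)
if escalate:
    print("Routed to human reviewer:", audit_log[-1]["issues"])
```

The essential choice is that a failed check never resolves itself silently: the output is flagged to a human, and every interaction leaves a record the firm can later produce as proof of supervision.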
Firms must demand transparency from their engineering teams and vendors, maintain evidence of ongoing oversight, and ensure humans remain firmly in control. In short, the invisible, second-nature supervisory tenet must be applied to the adoption of AI: understand it, control it, and be able to prove you are supervising it.
AI may be new—but supervision is not.
This article was written by Aydin Bonabi at www.financemagnates.com.