climb, while customers bristle at seemingly intrusive questions
from employees, questions that often are redundant when a customer
interacts across multiple lines of business. AI can reduce
these pain points.
Due diligence for KYC purposes involves several steps that
remain manual. RPA could automate most or all of these steps,
such as data entry of customer information and recording
of screening results and risk ratings. NLP also might play a
role, extracting vital information from documents as part of
onboarding. And machine learning can perform identity and
background pre-checks, compute risk ratings based on organizational and outside information (such as public records and
social media), and render approval decisions or raise red flags,
all in a matter of minutes.
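To make that concrete, the following is a minimal sketch of an automated pre-check. It uses simple rule-based scoring rather than a trained machine-learning model so it stays self-contained; the field names, weights, and thresholds are illustrative assumptions, not a production methodology.

# Minimal sketch of an automated KYC pre-check. It uses rule-based scoring
# rather than a trained model so it stays self-contained; field names, weights,
# and thresholds are illustrative assumptions only.

def compute_risk_rating(customer: dict) -> dict:
    """Combine screening signals into a risk score and a decision hint."""
    score = 0
    if customer.get("sanctions_hit"):      # hit on a sanctions screening list
        score += 100                       # effectively an automatic red flag
    if customer.get("pep"):                # politically exposed person
        score += 40
    score += 10 * customer.get("adverse_media_mentions", 0)
    score += {"low": 0, "medium": 15, "high": 35}.get(
        customer.get("jurisdiction_risk", "low"), 0)

    if score >= 100:
        decision = "red_flag"              # route to a human investigator
    elif score >= 50:
        decision = "enhanced_due_diligence"
    else:
        decision = "auto_approve"
    return {"risk_score": score, "decision": decision}

applicant = {"pep": True, "adverse_media_mentions": 2, "jurisdiction_risk": "medium"}
print(compute_risk_rating(applicant))
# {'risk_score': 75, 'decision': 'enhanced_due_diligence'}

Even a richer model would typically hand anything above the auto-approve threshold to a human reviewer rather than decide on its own.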
Regulatory Change Management
It can sometimes seem like regulatory demands and expectations are constantly in flux. Until now, most financial services
organizations have relied on staff or third-party consultants to
stay on top of developments, but AI can make it less burdensome to keep abreast of ever-evolving regulatory requirements.
An organization might use NLP to review pages and pages of
documents to identify rules; determine the customers, lines of
business, and processes and procedures affected; and distribute the information to the relevant parties. From there, RPA can often be deployed to help implement the changes necessary
to satisfy regulators.
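As a rough illustration of the NLP step, the sketch below scans a snippet of regulatory text for obligation language and routes each obligation to the lines of business it mentions. The keyword map, pattern, and sample text are assumptions for illustration; a real deployment would rely on trained language models rather than keyword matching.

# Toy stand-in for the NLP step: scan regulatory text for obligation language
# and route each obligation to the lines of business it mentions. The keyword
# map, pattern, and sample text are assumptions; a real deployment would use
# trained language models rather than keyword matching.
import re

LINE_OF_BUSINESS_TERMS = {
    "mortgage lending":  ["mortgage", "escrow", "flood determination"],
    "retail deposits":   ["deposit account", "overdraft"],
    "wealth management": ["investment advice", "suitability"],
}
OBLIGATION = re.compile(r"\b(must|shall|is required to)\b", re.IGNORECASE)

def route_requirements(text: str) -> dict:
    """Group obligation sentences by the lines of business they mention."""
    routed = {lob: [] for lob in LINE_OF_BUSINESS_TERMS}
    for sentence in re.split(r"(?<=[.;])\s+", text):
        if not OBLIGATION.search(sentence):
            continue                      # keep only likely obligations
        for lob, terms in LINE_OF_BUSINESS_TERMS.items():
            if any(term in sentence.lower() for term in terms):
                routed[lob].append(sentence.strip())
    return routed

sample = ("Institutions must obtain a flood determination before closing. "
          "Overdraft programs shall disclose fees annually.")
print(route_requirements(sample))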
Data is the foundation for AI success
Most organizations recognize the wisdom of integrating automation into their compliance programs. The hurdle is not
reluctance or skepticism, but paralysis. They try to take the
first step toward AI-driven compliance and walk smack into
a seemingly insurmountable brick wall—inadequate or inaccurate data. Data, after all, is the foundation of every type of
AI, and good data is the foundation for successful AI.
AI implementation always starts with data discovery. Before
analysis and AI model development can begin, an organization
must understand which data is available, its quality, how it is
linked, how it is segmented, and its degree of trustworthiness. It
is not enough for data to be big; it also must be properly curated
for these emerging technologies. Two common challenges are
data availability and data governance.
The data necessary for compliance rarely is in the format, location, or level of detail required, and that challenge extends to
compliance-oriented AI. The data often is derived from systems—
such as imaging, customer relationship management, or even
third-party provider systems (for example, flood determination
service providers)—that never were intended to generate data
for compliance monitoring or testing purposes.
It comes in different structures and organizational schemes
(for example, customer-centric versus account-centric). It might
be in a database, an Excel spreadsheet, or paper documents. All
of this leads to substantial data integration headaches. On the
positive side, optical character recognition (OCR, a form of AI)
and other technologies have made it much easier to work with
paper-based information.
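For example, a few lines of Python can turn a scanned page into machine-readable text, assuming the open-source Tesseract engine and the pytesseract and Pillow packages are available; the file name here is a placeholder.

# Sketch of pulling text out of a scanned document so it can feed downstream
# compliance checks. Assumes the Tesseract OCR engine plus the pytesseract and
# Pillow packages are installed; the file name is a placeholder.
from PIL import Image
import pytesseract

def extract_text(scan_path: str) -> str:
    """Return raw OCR text from a scanned page (e.g., a signed disclosure)."""
    return pytesseract.image_to_string(Image.open(scan_path))

text = extract_text("loan_disclosure_scan.png")   # placeholder file name
print(text[:500])   # NLP extraction or RPA data entry would consume this text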
A financial services organization’s data availability can be assessed along several dimensions, based on questions like the following (see the sketch after this list):
■ Is the information accessible?
■ Is it at the requisite level of granularity?
■ Is it sufficiently discrete?
■ Does it go as far back in time as necessary?
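A lightweight profiling pass can answer several of these questions programmatically. The sketch below runs against a hypothetical customer extract with made-up column names and reports each field's type, missing rate, and distinct values, plus duplicate keys that would undermine linkage.

# Data-discovery sketch: profile a customer extract before any model work.
# The column names and sample values are hypothetical.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """One row per column: type, missing rate, distinct count, and an example value."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "missing_pct": (df.isna().mean() * 100).round(1),
        "distinct": df.nunique(),
        "example": df.apply(lambda col: col.dropna().iloc[0] if col.notna().any() else None),
    })

customers = pd.DataFrame({
    "customer_id":  [101, 102, 103, None],
    "citizenship":  ["US", "US", None, "CA"],
    "risk_rating":  ["low", "low", "medium", "medium"],
})
print(profile(customers))
# Duplicate or missing keys undermine linkage across systems, so check them early.
print("duplicate customer_id rows:", customers["customer_id"].duplicated().sum())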
Data Governance
The success of an AI initiative rests largely on data governance—
that is, how an organization manages its data over time to establish and maintain the data’s trustworthiness. Despite impressive
technological advances, the old maxim remains true: garbage in,
garbage out. Every compliance effort, AI-driven or otherwise,
must have data that is (a simple check sketch follows this list):
■ Consistent: Think of this as one version of the truth. Pulling from multiple sources should not provide different answers (for example, one system cannot identify a customer as a citizen while another says the customer has a green card).
■ Accurate: The data is factually correct.
■ Complete: All elements that are necessary to perform the requisite analysis are available.
■ Timely: The data is there when needed to answer the questions that need to be asked.
■ Secure: Only the people who need to see it should see it.
■ Reconciled: The data agrees with the “trusted” sources of information.
■ Governed: The data has clear ownership and stewardship in terms of who is responsible for it and who controls it, and it has traceability through its lineage.
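Several of these dimensions lend themselves to automated checks. The sketch below compares a core-system extract with a CRM extract for consistency, completeness, and reconciliation; the table and column names are hypothetical.

# Sketch of automated checks for a few of the dimensions above: consistency
# across systems, completeness, and reconciliation. Table and column names
# are hypothetical.
import pandas as pd

def check_quality(core: pd.DataFrame, crm: pd.DataFrame, required: list) -> dict:
    """Return simple counts of quality problems in a customer extract."""
    merged = core.merge(crm, on="customer_id", suffixes=("_core", "_crm"))
    return {
        # Consistent: the same customer should not carry conflicting citizenship values.
        "citizenship_conflicts": int((merged["citizenship_core"] != merged["citizenship_crm"]).sum()),
        # Complete: every element needed for the analysis is populated.
        "missing_required_values": int(core[required].isna().sum().sum()),
        # Reconciled: the population should agree with the trusted source system.
        "unreconciled_customers": len(set(core["customer_id"]) ^ set(crm["customer_id"])),
    }

core = pd.DataFrame({"customer_id": [1, 2, 3],
                     "citizenship": ["US", "US", "CA"],
                     "risk_rating": ["low", None, "high"]})
crm = pd.DataFrame({"customer_id": [1, 2, 4],
                    "citizenship": ["US", "GB", "US"]})
print(check_quality(core, crm, required=["citizenship", "risk_rating"]))
# {'citizenship_conflicts': 1, 'missing_required_values': 1, 'unreconciled_customers': 2}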
Proper data governance therefore requires that an organization have the right people, processes, and tools in place to ensure data is sufficiently trustworthy to withstand regulatory scrutiny. That means an organization must have a strong and effective structure that focuses on the collection, management, protection, and delivery of data. For the purposes of compliance monitoring, it also suggests that side-by-side testing of the manual (analogue) and AI-driven (digital) approaches is a valuable way to prove the data is accurate and complete.
Good governance also requires that data requirements, assumptions, and architecture be well-documented and auditable. Data
ownership must be established, too, with the appropriate owners
and stewards assigned to achieve and maintain data quality.