Qayyum Ali
Associate Editor
Loyola University of Chicago School of Law, JD 2025
Artificial intelligence (AI) is the latest tool in a financial institution’s arsenal for restricting the flow of money channeled to fund illegal activities worldwide. As criminals grow more innovative and sophisticated in using the latest technology to evade detection of their financial crimes, financial institutions must follow suit and deploy similar technology to root out these crimes or risk regulatory sanctions. Money laundering generally refers to financial transactions in which criminals, including terrorist organizations, attempt to disguise the proceeds of their illicit activities by making the funds appear to have come from a legitimate source. The phenomenon is not new: Congress passed the Bank Secrecy Act (BSA) in 1970 to ensure that financial institutions follow a set of guidelines known as KYC (Know Your Customer/Client) to detect and prevent money laundering through their systems.
Criminals use AI to exacerbate financial crime
Financial crime has remained prevalent despite financial institutions implementing measures such as intelligent document processing, natural language processing, robotic process automation (RPA), and machine learning to keep pace with the rise of digital transactions, including online payments, withdrawals, and deposits. With advanced technologies cheaper and easier to access than ever before, criminals have become more innovative and dangerous in their use of AI: scammers can quickly replicate an entire bank’s website code (a task that previously required hours or days from a skilled programmer), craft more convincing emails, and mimic local accents to build trust. These abilities let fraudsters send thousands of emails and place numerous phone calls efficiently, rapidly, and at scale.
In 2023, several major financial institutions, such as Wells Fargo, HSBC, and Scotia Capital, were subject to regulatory enforcement actions for violating record-keeping requirements of the federal securities laws. The recent sanctions imposed on Binance, which agreed to pay over $4 billion for allowing sanctioned customers unfettered access to American capital and financial services and for permitting money laundering on its platform, further underscored the significant repercussions of non-compliance. Banking compliance executives face evolving regulatory frameworks, increased scrutiny, economic instability, escalating geopolitical risks, and unprecedented sophistication in financial criminal activities. In its recently released annual Examination Priorities, the Securities and Exchange Commission (SEC) identified money laundering as a primary risk area for 2024.
Nasdaq’s most recent Global Financial Crime Report estimates that $3.1 trillion in illicit funds traversed the global financial system in the previous year, facilitating international criminal activities, including $346.7 billion in human trafficking, $782.9 billion in drug trafficking, and $11.5 billion in terrorist financing.
Advantages of using generative AI to combat financial crime
To address the escalating issue of criminal enterprise, it is now imperative for financial institutions and RegTech companies to implement various technological solutions, including AI and generative AI. RegTech firms can leverage the massive amounts of data accumulated by financial institutions to spot patterns relating to transactions and customer behavior to combat economic crime with enhanced efficiency and precision.
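As a simplified illustration of the kind of pattern-spotting described above (not any institution’s actual system, and far simpler than production models), the sketch below flags transactions whose amounts deviate sharply from an account’s normal activity using a basic statistical z-score; the account history and threshold are invented for the example.

```python
from statistics import mean, stdev

def flag_anomalous_amounts(amounts, threshold=3.0):
    """Flag transaction amounts that deviate sharply from the account's norm.

    An amount is flagged when its z-score (distance from the mean,
    measured in standard deviations) exceeds `threshold`.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [amt for amt in amounts if abs(amt - mu) / sigma > threshold]

# Mostly routine activity with one outsized transfer (hypothetical data):
history = [120, 95, 110, 130, 105, 98, 115, 50_000]
print(flag_anomalous_amounts(history, threshold=2.0))  # → [50000]
```

Real AML systems combine many such signals (counterparties, geography, velocity) and learn thresholds from data rather than fixing them by hand, but the underlying idea is the same: establish a behavioral baseline and surface departures from it.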
In financial crime investigations, a generative AI co-pilot built on a large language model (ChatGPT is the most widely known example) can aid in crafting case narratives when background information and investigation history are provided to the model. These co-pilot applications allow investigators to delve deeper into specific risks by gathering contextual data from the internet, social media, map searches, and other business departments. They can generate questions, perform analyses, and compile results, thereby streamlining and enhancing investigative efforts. However, human oversight of these narratives and summaries is essential.
As for risk assessment, generative AI models can determine credit risk more accurately and much faster by analyzing an individual’s or business’s financial history and other relevant data, leading to better lending decisions. By better understanding the nuances of transaction data, generative AI reduces the occurrence of false positives – the flagging of regular banking activity as suspicious – which saves resources and minimizes customer inconvenience.
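To make the false-positive point concrete, here is a minimal hypothetical sketch (the rules, dollar figures, and customer profile are invented for illustration) contrasting an amount-only rule with one that also considers customer context, which is the kind of nuance AI-driven systems aim to capture:

```python
def naive_flag(txn):
    # Amount-only rule: anything over $10,000 is treated as suspicious.
    return txn["amount"] > 10_000

def contextual_flag(txn, profile):
    # Context-aware rule: large amounts are routine for some customers
    # (e.g., a business running monthly payroll); flag only genuine
    # departures from that customer's established behavior.
    if txn["amount"] <= 10_000:
        return False
    return txn["amount"] > 3 * profile["typical_monthly_max"]

payroll = {"amount": 45_000, "memo": "monthly payroll"}
business = {"typical_monthly_max": 50_000}

print(naive_flag(payroll))                 # True — a false positive
print(contextual_flag(payroll, business))  # False — routine for this customer
```

The context-aware rule clears a transaction the naive rule would flag, sparing the customer an unnecessary hold and the compliance team an unnecessary review. Generative AI extends this idea by learning such context from data instead of relying on hand-written rules.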
Pitfalls of Generative AI
The opacity of generative AI systems presents significant challenges in ascertaining whether decisions are made equitably, without bias, and in compliance with regulatory standards. This lack of transparency can pose substantial difficulties, particularly in regulated industries such as the financial services sector, where regulatory bodies such as the SEC, FinCEN, and DOJ require that decision-making processes be documented, explainable, and subject to audit.
Performance risk is another area of concern. Generative AI models, though sophisticated, are not immune to generating inaccurate or misleading information – a phenomenon often referred to as ‘hallucinations.’ Financial institutions face potential legal consequences and regulatory sanctions if they rely on AI-generated content or decisions that prove to be inaccurate, whether that content takes the form of transaction analyses, money laundering alerts, or identifications of potential terrorist financing activity.
Financial institutions frequently grapple with fragmented data and incomplete information, which can render AI-driven results unreliable or inaccurate. Before implementing AI-powered tools or processes, a financial organization must evaluate whether its data is of sufficient quality and accessible to the AI system. Conducting a comprehensive data cleanup across the entire institution may be necessary before proceeding with AI implementation.
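A first pass at the data-quality evaluation described above might look like the following sketch, which simply measures how often required fields are missing across customer records (the records and field names here are hypothetical):

```python
def completeness_report(records, required_fields):
    """Return the share of records missing each required field.

    A crude but useful first-pass data-quality check before
    feeding records into an AI system.
    """
    n = len(records)
    report = {}
    for field in required_fields:
        # Treat None and empty strings as missing.
        missing = sum(1 for r in records if not r.get(field))
        report[field] = missing / n
    return report

# Hypothetical customer records with gaps:
customers = [
    {"name": "Acme LLC", "tax_id": "12-3456789", "address": "1 Main St"},
    {"name": "Beta Corp", "tax_id": "", "address": "2 Oak Ave"},
    {"name": "Gamma Inc", "tax_id": None, "address": ""},
]
print(completeness_report(customers, ["name", "tax_id", "address"]))
```

An institution could set thresholds on such a report (for example, refusing to train or run AI models on fields missing in more than a small fraction of records) as one concrete form of the pre-implementation evaluation the paragraph above recommends.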
Generative AI can enhance financial crime detection, but significant work needs to be done
Financial institutions are constantly playing catch-up. As criminals become ever more sophisticated, financial institutions need to follow suit and use the same technology to ensure the detection of financial crimes. Fighting financial crime with AI is not a mere trend but a necessity.
With the ability to process large volumes of data, generative AI can give financial institutions a fighting chance to detect financial crimes quickly by spotting patterns in transactions and customer behavior with enhanced efficiency and precision. However, because AI tools require accurate and reliable data, it is paramount that human oversight be part of integrating AI into the processes financial institutions already use. While this may be difficult given the limited number of professionals trained in this emerging technology, the risk that false positives will flag innocent customers as criminals is too significant not to keep at least some level of human involvement in crime detection systems that use generative AI.