Artificial Intelligence and Compliance
Jason Taken
Associate Editor
Loyola University Chicago School of Law, J.D. 2019
With the rise of the machines at our doorstep, companies (those with foresight, anyhow) will be finding more innovative ways to gain an edge by using those machines. One of the ways companies will seek this edge is through the use of Artificial Intelligence (“AI”). AI is one of the hottest, and arguably most controversial, topics confronting mainstream business today. Many are skeptical of it, but also hopeful. While both sides of the debate have their reasons, many on each side have little sense of how AI is manifesting itself today, or how it will in the future. How will it be applied? What is it useful for? What follows is a primer on current applications of AI and how they may be applied to the compliance world.
What is AI?
John McCarthy coined the term “Artificial Intelligence” in 1956, at a conference that launched the effort to make machines intelligent. AI is the field of computer science, and the science and engineering behind it, that aims to create intelligent machines. Over the past few years, AI has experienced an uptick in attention and use because of the increased processing speed and power of modern computers. Moore’s Law, the observation that the number of transistors on a chip (and with it computing power) roughly doubles every two years, helps explain this trajectory, and we are now beginning to experience the effects of an exponential rise not previously seen.
A world filled with AI is not a world filled with robots that will replace humans at every turn, despite what Hollywood would lead you to believe. An understanding of this fact is important. In fact, AI already influences much of our daily lives. Consider, first, your Amazon account’s “recommended items.” Amazon recommends those items based on data it has gathered about you. When you purchase an item, Amazon takes that purchase data and combines it with the data of other people who have purchased the same item. Then, using your data and those other purchasers’ data, it recommends what it thinks you might want to buy next. That, in essence, is what AI is today. AI is, in very large part, data driven. The more data it has, the smarter it gets. Human intelligence works much the same way, and that is where AI gets its flair.
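To make the idea concrete, here is a minimal sketch, in Python, of the “people who bought this also bought that” logic described above. It is a toy illustration, not Amazon’s actual system; the purchase data and function name are invented for this example.

```python
from collections import Counter

# Hypothetical purchase histories: customer -> set of items bought.
purchases = {
    "alice": {"coffee maker", "coffee filters", "travel mug"},
    "bob":   {"coffee maker", "coffee grinder"},
    "carol": {"coffee maker", "coffee filters", "coffee grinder"},
    "dave":  {"tea kettle", "travel mug"},
}

def recommend(customer, purchases, top_n=3):
    """Suggest items that other shoppers with overlapping purchases also bought."""
    owned = purchases[customer]
    counts = Counter()
    for other, items in purchases.items():
        if other == customer:
            continue
        if owned & items:               # learn only from shoppers who share a purchase
            for item in items - owned:  # count items this customer does not yet own
                counts[item] += 1
    return [item for item, _ in counts.most_common(top_n)]

print(recommend("bob", purchases))  # ['coffee filters', 'travel mug']
```

The more purchase histories a system like this is fed, the better its suggestions become, which is the “more data, smarter AI” point in miniature.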
Next, consider your social media timeline. Your behavior on social networks determines what content appears in your timeline. Computers discern behavior from data, and that data is supplied by every click of the mouse. Accordingly, AI learns from your past behavior and uses it as a predictor of your future behavior. That is, in simple terms, what AI is and what AI will be. Predictive learning is what drives the current momentum in the world of AI. As more data becomes available, whether from increased purchasing on the internet or more time spent browsing your friends’ social feeds, the computer learns more about you and can better predict what you will likely do in the future. With that understanding, it is easy to see one of AI’s benefits: it takes rule-based decision making away from humans.
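As a rough sketch of what predictive learning from behavioral data can look like, the toy Python example below ranks content categories by a user’s past click-through rate. The categories and numbers are invented, and a real platform’s models are far more sophisticated; this only illustrates the “past behavior predicts future behavior” idea.

```python
# Hypothetical click history: category -> how often it was shown and clicked.
past_clicks = {
    "sports":  {"shown": 40, "clicked": 30},
    "cooking": {"shown": 25, "clicked": 5},
    "finance": {"shown": 10, "clicked": 8},
}

def engagement_rate(stats):
    """Fraction of impressions this user clicked in the past."""
    return stats["clicked"] / stats["shown"] if stats["shown"] else 0.0

def rank_timeline(history):
    """Order categories by predicted interest, using only past behavior."""
    return sorted(history, key=lambda c: engagement_rate(history[c]), reverse=True)

print(rank_timeline(past_clicks))  # ['finance', 'sports', 'cooking']
```

Every new click updates the history, so the prediction keeps improving as the data grows.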
AI and Compliance
Compliance work requires making rule-based decisions. Whether a decision is driven by a policy or by a set of guidelines that dictate the outcome, these decisions and programs are prime opportunities to implement AI. Even a system that is not AI in the sense described above becomes a very powerful decision maker when the rules are combined with data in the form of the outcomes of past decisions. The machine becomes “insightful.” This is, broadly, how humans craft their own decisions. The advantage of AI, or machine-based decision making, is one of speed and accuracy.
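A minimal sketch, assuming hypothetical rules, of what rule-based compliance screening might look like in Python. The threshold, watch list, and transaction below are invented for illustration; a real program would encode its own policies and regulatory requirements, and the outcomes of past reviews could be fed back in as data to refine the rules.

```python
# Hypothetical policy rules; real thresholds and lists would come from the
# institution's own compliance program and applicable regulations.
REPORTING_THRESHOLD = 10_000
RESTRICTED_COUNTRIES = {"CountryX"}

def screen_transaction(tx):
    """Apply the policy rules to one transaction and return any flags raised."""
    flags = []
    if tx["amount"] >= REPORTING_THRESHOLD:
        flags.append("amount at or above reporting threshold")
    if tx["country"] in RESTRICTED_COUNTRIES:
        flags.append("counterparty in a restricted jurisdiction")
    return flags

tx = {"id": "T-1001", "amount": 12_500, "country": "CountryX"}
for flag in screen_transaction(tx):
    print(f"{tx['id']}: {flag}")
```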
First, decision-making speed increases. Computers can evaluate logic and make decisions far faster than humans can. AI can analyze the circumstances given as input and determine the correct result, with high accuracy, thousands of times faster than we ever will be able to. The prefrontal cortex just isn’t as well equipped as today’s CPUs (or those of the past, for that matter).
Second, collecting data on the results of those decisions allows the computer to begin making decisions based on what we currently know as instinct. Instinct in humans is based in large part on experience, and better instinct yields better accuracy. When a computer gains experience, in the form of data, it becomes a very powerful decision maker, more powerful than we are. This is what makes AI so powerful.
Industries using AI
Retail and social media are just two of the many industries already investing heavily in building AI infrastructure. There are opportunities across the spectrum to implement these powerful decision makers.
Within the last few years, the financial industry has started to experiment with computer-based decision making. Decisions that follow a set of rules, and are improved by data, are great opportunities for AI. Financial institutions are realizing that, with these tools, they have the power to decrease overhead and increase output.
While the upsides are many, commentators have raised some key issues. The question of liability comes up repeatedly as the use of AI grows. Who is responsible if the machine makes a wrong decision? Many regulatory issues revolve around bad faith. Can machines make decisions in bad faith? What would bad faith look like in terms of AI? After all, people program the machines’ algorithms, and people are capable of bad faith. How would we ever know whether the machine made a decision in bad faith?
The multi-industry implementation of these technologies will bring to light many answers, and probably many more questions. Failing to progress in the direction of the machine, though, will leave many behind. Regulators and participants alike need to be involved in this growing field now, more than ever. Those who fail to engage, for the reasons discussed above, will find themselves sidelined.