Navigating the AI Frontier: Securing the Future of Financial Services

Arti Sahajpal

Associate Editor

Loyola University Chicago School of Law, JD 2025

 

As artificial intelligence becomes more widely available, apprehension about its impact on security and data protection grows, especially within the financial services sector. AI undoubtedly benefits the financial sector by automating services that would otherwise be unwieldy, inefficient, time-consuming, and costly for humans to perform. The financial services sector is no stranger to security risks, and with the increased prevalence of AI, the threat landscape only grows larger, particularly given the sector’s increasing dependence on web applications and APIs.

Risks and Consequences

Many have raised concerns that the ubiquity of AI technology will result in more data breaches, discriminatory selection algorithms, and threats to economic and political stability.

The financial industry’s reliance on AI technology has also stirred concerns about discriminatory algorithms, unfair treatment, and bias. AI has regularly been criticized for its unreliability and its tendency to generate misinformation and unsupported claims. As wealth inequality continues to grow, AI may create additional barriers to economic mobility for lower- and middle-class Americans through discriminatory selection algorithms that prevent certain demographics from accessing financial services. Current credit reporting systems have already been shown to promote racial bias and discrimination, and researchers have likewise uncovered racial and gender bias in AI technology. Consequently, low-income citizens may disproportionately lose access to necessary financial services. AI data analytics may exclude these citizens from lending due to poorly calibrated algorithms and risk-based approaches that fail to account for qualitative factors that might otherwise support access to credit. AI removes the human dimension from processes that many depend on, adding further complexity to an already formidable financial sector.

Vulnerability to Misinformation

Generative AI, as a machine learning technology, relies on user interactions and data inputs for its training and output generation. However, it lacks any inherent ability to fact-check or verify the accuracy of the information it processes. This limitation makes generative AI susceptible to manipulation and increases its propensity to disseminate misinformation, a concern that becomes particularly pressing given that people are often more easily persuaded by AI-generated misinformation.

AI-generated videos, or “deepfakes,” have already provoked uncertainty, blurring the line between reality and deception. AI voice generation technology has grown sophisticated enough to convincingly mimic human voices, leading to unsettling incidents such as a mother being fooled into believing her daughter had been kidnapped. AI technology has even been used to bypass banks’ biometric security systems, casting doubt on the reliability of existing security and authentication measures. Scams already cost Americans a staggering $8.8 billion per year, and as access to such advanced technology increases, that figure will likely only grow.

Addressing Concerns

Despite the profound risks associated with the increasing popularity of AI, little has been done to regulate it. The European Parliament recently passed a bill seeking to place restrictions on various AI programs and tools. However, even the bill’s drafters note that the proposal is far from perfect. Still, it is a step in the right direction and a notable benchmark in the European Union’s strategic plan to regulate AI. Unfortunately, most countries seeking to regulate the novel technology remain stuck in the research phase. The rapid progression of AI technology complicates the legislative process: it is increasingly difficult to foresee the full spectrum of future challenges, leaving lawmakers uncertain about the optimal scope and nature of effective regulation.

The lack of legislative action has prompted many small-scale efforts to manage the risky technology. Earlier this year, several financial firms began restricting the use of generative AI programs such as ChatGPT. New York City has also been recognized for its efforts to regulate AI under a law passed in 2021, which requires employers that use AI software in the hiring process to disclose that use and mandates annual independent audits to test the programs for racial, ethnic, and gender bias.

The New York law came two years after Illinois passed the Artificial Intelligence Video Interview Act, which places several limits on employers’ use of AI in hiring. Unfortunately, subsequent efforts to pass legislation in Illinois have not been as successful. In February 2023, Illinois House members introduced the Illinois Data Protection and Privacy Act, noted as one of the strongest efforts to curb access to private information and data collection, but the bill failed in March. Nevertheless, Illinois has made significant progress in technology regulation and will undoubtedly continue to do so to address the risks of artificial intelligence.

Still, much remains to be done, especially as the financial services sector faces growing cyber-related risks. While regulating AI is a complex task, the potential consequences of inaction are even more daunting. Addressing these risks requires a collaborative effort among legislators, industry stakeholders, and technology experts. Moreover, financial services professionals must adapt to this evolving landscape: acquiring digital skills and staying current on AI developments can empower professionals to contribute positively to the industry’s transformation.

In the end, the convergence of AI and finance holds great promise, but it also presents great peril. The path forward requires striking a delicate balance between innovation and protection, and it will demand sustained effort, vigilance, and adaptability. Legislators should focus on regulating key areas: limiting AI-driven trading algorithms, mandating regular audits of AI systems to prevent bias and ensure equitable access to financial services, and establishing strict data protection and cybersecurity standards. Concurrently, private actors within the financial services sector should offer responsible-AI training that addresses the ethical implications of the burgeoning technology, conduct risk assessments to identify operational, legal, and reputational risks, and collaborate with industry peers to share best practices, standards, and approaches to responsible AI. By addressing these concerns head-on and working together, we can secure a future in which AI enhances financial services while preserving security, fairness, and accessibility for all.