Chris Gasche
Associate Editor
Loyola University of Chicago School of Law
Facial recognition technology has become widespread in consumer and commercial environments, and particularly in law enforcement. Despite its benefits, these systems raise serious concerns about privacy and data protection, and current legal frameworks are too weak to manage the risks effectively. No federal law currently regulates the use of facial recognition technology; enforcement is left to the states. Without aggressive state initiatives, law enforcement's use of facial recognition technology will continue unabated, producing data collection mired in algorithmic bias and a wholesale disregard of civil liberties.
Concerns with the implementation of biometric analysis in law enforcement
Police departments and other law enforcement agencies use facial recognition technology to compare footage from closed-circuit television (CCTV), body-worn cameras, and social media databases, allowing them to identify individuals quickly in order to prevent crime. The technology offers real advantages: it lets agencies locate and pursue active suspects far faster than traditional methods. Concerns arise, however, when these systems are used in real time to alert law enforcement to potential offenders. Cameras equipped with this technology can immediately relay facial recognition information to local law enforcement, which then matches the captured biometric data against data stored in law enforcement databases.
Despite some state regulation of real-time facial recognition, law enforcement is finding ways around the restrictions. In New Orleans, a private network of AI-powered cameras delivers real-time alerts to law enforcement. Although the city council restricted police use of this technology, privately owned cameras upload facial recognition data to a nonprofit called Project NOLA, and police can download its app and receive alerts indirectly. This technique is not uncommon: law enforcement in cities like Austin and San Francisco circumvents local restrictions by having agencies outside the regulations' reach run facial recognition searches on their behalf.
One of the main concerns with the use of facial recognition technology in law enforcement is accuracy and bias. Studies from the National Library of Medicine and the ACLU of Minnesota show that facial recognition technology is least reliable for people of color and women. Because these systems learn from the data they are fed, they tend to amplify existing inequalities. One MIT study found a 34.7% error rate when the technology attempted to identify the gender of dark-skinned women.
In at least a dozen cases, criminal charges have been dismissed after a false identification produced by facial recognition software, and in many of these cases the defendants were dark-skinned. Further, arrests based on these false matches often come with no record of how the identification was made. Despite these concerns, funding for facial recognition in law enforcement is only growing. In September 2025, ICE signed $10 million in contracts with vendors including Clearview AI, the same company whose software led to the false identifications.
Proposed legislation
Absent federal regulation, state laws must fill the gap. As of August 2025, 23 states, including Illinois, have passed laws restricting the use of facial recognition technology. Illinois's Biometric Information Privacy Act (BIPA) is unique because it allows individuals to sue companies that abuse these technologies. The law imposes significant requirements on private entities, including destroying biometric data when it is no longer needed and obtaining consent from the subject of data collection before collection occurs. Even so, BIPA does little to directly regulate law enforcement agencies' use of facial recognition technology.
A proposed bill in Illinois would prohibit law enforcement agencies from using facial recognition software, with limited exceptions. It would also close the loophole described above by barring law enforcement from contracting with third parties to use facial recognition technology indirectly. The ACLU of Illinois supports the bill, arguing that it is a good step toward protecting civil liberties from unknowing intrusion by law enforcement.
At the federal level, legislators introduced the Facial Recognition and Biometric Technology Moratorium Act of 2023, which failed to make its way out of Congress. The bill would have directly regulated federal law enforcement agencies by prohibiting the use of any biometric surveillance system absent an act of Congress authorizing it. It would also have extended similar restrictions to state and local government units whose use falls within the bill's scope, and it would have required Congress to prescribe standards for the collection of biometric data such as facial recognition information. Despite stalling in Congress, the bill offers hope that comprehensive regulation might be forthcoming.
The use of biometric analysis in immigration enforcement
Another concern is the use of facial recognition technology in immigration enforcement. One tool, Mobile Fortify, can pull up a person's criminal history from a scan of their face. Immigration and Customs Enforcement (ICE) officers can use this technology to determine someone's immigration status and make swift arrests. Although the Department of Homeland Security (DHS) claims the technology is safe, no guardrails prevent its indiscriminate use by officers who have demonstrated apathy in the face of violence.
Concerned by ICE's mass deployment of facial recognition technology, legislators proposed a bill in February that would prohibit any immigration officer from using or possessing a biometric surveillance system. The bill would also require that any data already collected be deleted immediately upon enactment, and it would create a private right of action for those affected by the use of their biometric data. So far, the bill has not advanced past the Senate, where it was introduced.
Comparative perspectives
In the European Union (EU), the AI Act and the General Data Protection Regulation impose strict limits on how and when law enforcement may use biometric surveillance. Under the AI Act in particular, real-time facial recognition in public spaces is heavily restricted and subject to narrow exceptions. These frameworks also require government entities to justify biometric surveillance, obtain prior authorization for such uses, and observe strict limits on the scope of collection.
The contrast between the EU's extensive regulation and the United States' lackluster approach highlights how far current regulatory schemes have to go. Comprehensive federal regulation would make it easier for states to regulate the use of biometric data. Tighter rules are also needed to prevent law enforcement agencies from exploiting loopholes to avoid accountability, which could mean expanding existing law to ban agreements with third-party AI data collection companies. Additionally, the use of AI in immigration proceedings should be significantly restricted.