Jay Fort
Associate Editor
Loyola University Chicago School of Law, JD 2026
What is AI?
Artificial intelligence automates tasks that would otherwise require human intelligence, enabling computer systems to simulate human capabilities, including (but not limited to) comprehension, problem-solving, and decision-making. More specifically, AI systems infer from the inputs they receive how to generate outputs such as predictions, recommendations, or decisions. From consumer protection to data privacy, education, employment, and healthcare law, the AI era has raised countless red flags and regulatory questions. As such, questions about the development of proper AI guardrails are more pressing than ever, as automation and AI technologies are rapidly integrated into, and come to dominate, large portions of the U.S. (and global) commercial market, from products and intangible goods to financial and human services.
AI and the Employment Law Context
One of the legal areas most implicated by AI adoption is employment and labor law. Today, a large and growing number of companies, including many in the Fortune 500, have begun integrating AI technology into their internal systems for recruitment and hiring. According to Gallup reporting, 93% of Fortune 500 chief human resources officers have already adopted AI within their departments. AI models are trained to streamline human resources tasks, such as processing a high volume of resumes and identifying and selecting candidates by analyzing patterns that might indicate a desirable applicant. However, the risk of algorithmic bias resulting in discriminatory hiring practices must also be considered and appropriately weighed. Claims of discrimination have already resulted in high-profile lawsuits across U.S. jurisdictions.
For example, in Mobley v. Workday, Inc., a California job applicant alleged that Workday’s AI-based recruitment screening tools disproportionately (and unfairly) rejected older, African American, and disabled applicants, including himself, in violation of anti-discrimination laws, including federal statutes and agency (i.e., EEOC) guidance covering protected classes. The court allowed the suit to proceed, reasoning that the plaintiff stated a plausible disparate impact claim and that Workday could potentially be held liable as an “agent” of its client employers, and it later certified the plaintiff to expand the suit to a class action. Ultimately, the ruling affirmed that AI vendors (as third parties) may be held directly liable for discrimination if their algorithms, performing a delegated hiring function, unlawfully screen out protected groups (further invoking agency law principles and third-party liability doctrine). Mobley is regarded as a high-profile, seminal case on algorithmic bias, addressing employment, liability, and automated decision-making claims. However, AI’s rapid development as an ever-emerging technology, coupled with the absence of clear understanding and of established binding (or persuasive) legal precedent, suggests that many of these claims present novel (or at least somewhat novel) cases of first impression.
Federal Legislation and Guidance
Unfortunately, there remains a lack of clear consensus and of cohesive national (or industry-specific) guidelines to address AI implementation and application. The Biden-Harris Administration moved in this direction with an Executive Order providing general principles for responsible AI. The Executive Order’s guidelines focused on a few key (yet broad) priorities, including: safety and security; innovation and competition; worker support; consideration of AI bias and civil rights; consumer protection; privacy; federal use of AI; and international leadership. However, executive orders (executive actions, when implemented) apply only to the federal workforce; they provide merely a framework and lack substantive legal enforcement mechanisms. Further, the Trump administration has shifted course, embracing an “arms race” mentality and a policy of acceptance rather than a more measured, skeptical approach.
Illinois’ AI Legislative Efforts
Today, however, an ever-increasing number of states have introduced legislation regulating AI in employment. As one recent example, Illinois has passed several important pieces of AI legislation, including HB 3773, which amends the Illinois Human Rights Act to make it a civil rights violation to (1) use AI models that subject employees to discrimination or that use ZIP codes as a proxy for protected classes, and (2) fail to notify employees of the employer’s use of AI. The act applies to employers, defined as any person (including organizations and corporations) employing one or more employees within the state, but it does not expressly impose liability on employment agencies.
Ultimately, rather than a one-size-fits-all approach, this emerging AI era requires an all-hands-on-deck mindset. In terms of generally advisable principles, leaders in government, business, and the public and private sectors can take proactive steps to protect their organizations from AI-related employment liability. First, regularly audit the organization’s AI tools, searching for gaps or problems closer to inception. Second, implement clear and effective training for HR and other stakeholders, ensuring they understand applicable federal and state regulations and potential compliance risks. Third, maintain human oversight as a guardrail and backstop against issues such as algorithmic bias and hallucinations. Fourth, stay informed: ensure that leadership understands its AI tools, policies, implementation, and methods of assessment well enough to manage their use, reduce risks, and avoid unnecessary costs. Although far from exhaustive, these are steps on the path to a dynamic, strategic approach to AI governance and regulatory sustainability, no doubt a necessity for the days ahead.