The Use of Artificial Intelligence Tools in Employment Decisions Can Lead to Violations of the ADA and New York City Law
This article originally appeared on www.wsgr.com.
Employers’ use of artificial intelligence (AI) tools in making hiring and other employment decisions is drawing increased scrutiny. The Equal Employment Opportunity Commission (EEOC) published guidance on how employers’ use of AI tools may violate the Americans with Disabilities Act (ADA), and the New York City Council enacted a new law mandating a yearly bias audit of the tools, along with certain notice requirements. In light of this heightened focus, companies should carefully evaluate their use of AI tools to ensure they are compliant with current requirements.
EEOC Guidance on How the Use of AI Tools May Violate the ADA
The EEOC issued guidance warning employers that using algorithms and artificial intelligence in making hiring decisions can result in discrimination based on disability. These tools can include any software and applications that use algorithms to process data to evaluate, rate, or make other decisions about candidates and employees, such as resume scanners, employee monitoring software, chatbots, video interviewing software, and testing programs. While the EEOC’s guidance does not have the force of law, it serves as notice to employers so that they can avoid engaging in discriminatory action.
In the guidance, the EEOC focused on three situations in which an employer’s use of these algorithmic decision-making tools could violate the ADA:
- “The employer does not provide a ‘reasonable accommodation’ that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm.” The EEOC provides as an example a situation where a job applicant has limited manual dexterity due to a disability and reports having trouble with a knowledge-based test that requires the use of a manual input device such as a keyboard or trackpad. If the company does not provide an accessible version of the test (e.g., an oral test) as a reasonable accommodation, assuming such accommodation would not constitute an undue hardship on the employer, then the employer may be in violation of the ADA.
- “The employer relies on an algorithmic decision-making tool that ‘screens out’ an individual with a disability, even though the individual is able to do the job with a reasonable accommodation.” For instance, if a chatbot conversation automatically screens out an applicant because of significant gaps in their employment history, but those gaps were caused by a disability (e.g., the applicant had to take time off from work to undergo treatment), then screening out the applicant would violate the ADA. Similarly, a video interview program may impermissibly screen out an applicant whose speech impediment causes the program to assign a low rating.
- “The employer adopts an algorithmic decision-making tool for use that violates the ADA’s restrictions on disability-related inquiries and medical examinations.” Such a violation can happen even if the applicant does not have a disability, as this type of violation can occur simply if the program asks questions that are likely to elicit information about a disability or seeks information about an individual’s physical or mental impairments or health.
New York City’s Law on the Use of Automated Employment Decision Tools
Companies with candidates or employees in New York City also should be aware that the city enacted a law, which will go into effect on January 1, 2023, governing the use of automated decision tools in making hiring and other employment decisions. The New York City Department of Consumer and Worker Protection has proposed rules to clarify certain provisions in the law. The proposed rules are open for public comment in advance of a public hearing on October 24, 2022.
Overview of New York City’s Law
The law requires any company that uses an “automated employment decision tool” for candidates or employees in New York City to conduct a yearly “bias audit” of the tool and publish the results on the company’s website. A “bias audit” for these purposes is an impartial evaluation by an independent auditor that must include an assessment of whether the tool has a disparate impact on individuals based on race/ethnicity or sex. The proposed rules, if adopted, would add details regarding the bias audit, including requirements for calculating the selection rate and the impact ratio for each category.
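To make the audit mechanics concrete, the sketch below computes a selection rate and impact ratio for each category using the standard disparate-impact methodology. This is an illustration only, not legal guidance: the exact formulas, categories, and thresholds are defined by the proposed rules, and the category names and data here are hypothetical.

```python
# Illustrative sketch (not legal guidance): selection rates and impact
# ratios as commonly computed in disparate-impact analyses. The exact
# methodology for a NYC bias audit is set by the proposed rules; the
# categories and figures below are hypothetical.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of individuals in a category selected by the tool."""
    return selected / total

def impact_ratios(results: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each category's selection rate divided by the highest category's rate."""
    rates = {cat: selection_rate(sel, tot) for cat, (sel, tot) in results.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical audit data: category -> (number selected, number assessed)
results = {"Category A": (48, 120), "Category B": (24, 80)}
ratios = impact_ratios(results)
# Category A: rate 0.40, impact ratio 1.00; Category B: rate 0.30, ratio 0.75
```

Under this conventional methodology, an impact ratio well below 1.0 for a category (e.g., below the four-fifths benchmark used in federal selection-procedure guidance) would flag a potential disparate impact warranting closer review.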
An “automated employment decision tool” means “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” It does not include a tool that “does not automate, support, substantially assist or replace discretionary decision-making processes and that does not materially impact natural persons, including, but not limited to, a junk email filter, firewall, antivirus software, calculator, spreadsheet, database, data set, or other compilation of data.” Based on the text, the law likely covers any computerized tool or algorithm-based software used to evaluate, select, or assess candidates or employees, such as programs or software that review resumes, conduct testing, rank employees or candidates, evaluate performance, or conduct personality assessments.
Notice Requirements
The law also contains notice requirements. Employers who use an automated employment decision tool must, for each employee or candidate residing in New York City: (i) notify the employee or candidate of the fact that such tool will be used in connection with the assessment or evaluation of such employee or candidate; (ii) notify the employee or candidate of the job qualifications and characteristics that such tool will use in the assessment; and (iii) allow the employee or candidate to request an alternative selection process or accommodation. The notice must be provided at least 10 business days prior to the employer’s use of the tool. The rules propose details on how the company may furnish the notice to candidates and employees, such as by including the notice in the job posting or mailing or emailing the notice to the individual. Further, upon written request by an employee or candidate, the employer must, within 30 days of the request, make available the type of data collected, the source of the data, and the company’s data retention policy.
Under the text of the law, it is unclear who is considered a “candidate” entitled to notice. For instance, if an employer uses a program that screens LinkedIn profiles for particular qualifications, it is not clear whether every individual in New York City whose profile was reviewed would be considered a candidate and must receive the notice. The proposed rules would clarify this issue. Under the proposed rules, a candidate for employment is someone who has actually applied for a specific employment position by submitting the necessary information and/or items in the format required by the company. Therefore, a tool that screens LinkedIn profiles of individuals who have not applied for employment would not be included.
The law specifically states that the notices must be provided to an employee or candidate who “resides in the city.” As a result, companies may need to determine where candidates reside in order to understand their notice obligations, rather than providing notice only to candidates for job postings located in New York City. It is not clear how companies can practically determine a candidate’s residence if they do not have a resume containing the applicant’s home address. Further, the focus on where the candidate resides means that employers outside of New York City must still comply with this law for any candidates or employees located in the city.
Penalties for Violations
Employers who violate this New York City law can be liable for up to $500 for a first violation and each additional violation that occurs on the same day as the first violation. Subsequent violations can result in penalties of between $500 and $1,500 each. Each day an automated employment decision tool is used in violation of the law is considered a separate violation, and the failure to provide any required notice to an employee or candidate also is considered a separate violation.
Next Steps for Employers
Given the varied ways use of AI tools may violate employment laws, employers are advised to discuss the particulars of their situation with employment counsel. In the meantime, employers should consider the following steps in light of the guidance from the EEOC and to prepare for the effective date of the New York City law, if covered:
- Review the company’s current practices and planned use of AI tools to evaluate whether using such tools in making employment decisions could violate the ADA or fall under the purview of NYC’s new law;
- Review any user interface that candidates are required to use to ensure that it is accessible to persons with disabilities;
- Ensure that the AI tools make clear that reasonable accommodations are available for persons with disabilities and do not impermissibly screen out individuals as a result of their disability;
- If covered by the NYC law, conduct an audit of the use of any AI tools to determine whether such use may create a disparate impact on individuals based on race/ethnicity or sex, and make modifications to address any such disparate impact;
- If covered by the NYC law, prepare the requisite notice about the company’s use of AI tools in employment decisions, and determine how the notice will be provided to applicable candidates and employees;
- Update data retention policies with a specific focus on the input and output of AI tools used in employment decisions; and
- Monitor developments in the law for additional guidance and restrictions on the use of AI tools, including the final rules expected from the New York City Department of Consumer and Worker Protection that may add clarity to NYC’s law on automated employment decision tools.
Wilson Sonsini Goodrich & Rosati’s employment and trade secret litigation group is actively following developments related to the use of artificial intelligence tools in making employment decisions. For more information, please contact Rico Rosales, Marina Tsatalis, Jason Storck, Rebecca Stuart, or another member of the employment and trade secret litigation group.