New EEOC Guidance: The Use of Artificial Intelligence Can Discriminate Against Employees or Job Applicants with Disabilities | Foley & Lardner LLP.
As the use of artificial intelligence wedges its way into every corner of business and culture, government regulation is (perhaps too slowly) moving to build legal boundaries around its use.
On May 12, 2022, the Equal Employment Opportunity Commission (EEOC) issued new comprehensive “technical assistance” guidance entitled The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. The guidance covers a number of areas. Among other things, it:
- defines algorithms and artificial intelligence (AI);
- gives examples of how employers use AI;
- answers the question of employer liability for the use of vendor AI tools;
- requires reasonable accommodation in deploying AI in this context;
- addresses the “screen out” problem of AI rejecting candidates who would otherwise qualify for the job with reasonable accommodation;
- requires limitations to avoid asking disability-related and medical questions;
- promotes “promising practices” for employers, job applicants, and employees alike; and
- gives many specific examples of disability discrimination pitfalls in using AI tools.
Here are some primary takeaways from the new guidance:
Employers Can Be Exposed to ADA Liability for AI Vendor Software:
- Risk Exposure for Vendor Software. Employers who deploy AI-driven decision-making tools to evaluate employees or job applicants may be liable under the Americans with Disabilities Act (ADA) for the shortcomings of that technology. Even if the AI tool was developed or administered by a third-party vendor, the employer can still be on the hook, especially if the employer has “given [the vendor] authority to act on the employer’s behalf.”
- On the Test Side and on the Accommodation Side. This means employers need to manage risk arising from the AI vendor’s action or inaction both in administering the evaluation and in granting reasonable accommodations. If an individual requests a reasonable accommodation due to a disability and the vendor denies the request, the employer can be exposed for the vendor’s inaction even if the employer was unaware of the request.
- Check the Vendor Agreement. Employers should closely consider the indemnity and other liability-limiting and liability-allocating provisions of their AI vendor agreements.
AI Tools Can Unlawfully “Screen Out” Qualified Individuals with Disabilities:
- Screen Outs. “Screen outs” in the AI context can occur when a disability lowers an individual’s performance on an AI-driven employment test, or prevents a candidate from being considered at all for failure to meet AI-driven threshold criteria. Under the ADA, a screen out is unlawful if the tool screens out an individual who is able to perform the essential functions of the job with a reasonable accommodation.
- Examples. AI tools can screen out individuals with limited manual dexterity (needed to use a keyboard); with vision, hearing, or speech impairments; with employment gaps due to past disability issues; or with PTSD (which can skew the results of, for example, personality tests or gamified memory tests).
Per the guidance: “A disability could have this [screen out] effect by, for example, reducing the accuracy of the assessment, creating special circumstances that have not been taken into account, or preventing the individual from participating in the assessment altogether.”
- Bias Free? Some AI-based decision-making tools are marketed as “validated” to be “bias-free.” That sounds good, but the label may speak to gender, age, or race rather than to disability. Disabilities, whether physical, mental, or emotional, cover a broad swath of life and can be highly individualized (including as to the necessary accommodation), and as such are less susceptible to bias-free software adjustment. For example, learning disabilities often go undetected by human observers because their severity and characteristics vary so widely. Employers will need assurances that AI can do better.
AI Screens Can Generate Unlawful Disability and Medical-Related Inquiries:
- Unlawful Inquiries. AI-driven tools can generate unlawful “disability-related inquiries” or seek information amounting to a “medical examination” before a conditional offer of employment has been made.
Per the guidance: “An assessment includes ‘disability-related inquiries’ if it asks job applicants or employees questions that are likely to elicit information about a disability or directly asks whether an applicant or employee is an individual with a disability. It qualifies as a ‘medical examination’ if it seeks information about an individual’s physical or mental impairments or health. An algorithmic decision-making tool that could be used to identify an applicant’s medical conditions would violate these restrictions if it were administered prior to a conditional offer of employment.”
- Indirect Failure. Not all health-related inquiries by AI tools are considered “disability-related inquiries” or “medical examinations,” but even those that are not can still violate the ADA in other ways.
Per the guidance: “[E]ven if a request for health-related information does not violate the ADA’s restrictions on disability-related inquiries and medical examinations, it still might violate other parts of the ADA. For example, if a personality test asks questions about optimism, and if someone with Major Depressive Disorder (MDD) answers those questions negatively and loses an employment opportunity as a result, the test may ‘screen out’ the applicant because of MDD.”
Best Practices: Robust Notice of What is Being Measured — and that Reasonable Accommodation is Available:
There are a number of best practices employers can follow to manage the risk of using AI tools. The guidance calls them “Promising Practices.” Primary points:
- Disclose the Topics and Methodology. As a best practice, regardless of whether a third-party vendor developed the AI software, tool, or application, employers (or their vendors) should inform employees or job applicants, in plain, understandable terms, what the evaluation entails. In other words, disclose up front the knowledge, skill, ability, education, experience, quality, or trait that the AI tool will measure or screen for. In the same vein, disclose up front how testing will be conducted and what will be required: using a keyboard, verbally answering questions, interacting with a chatbot, and so on.
- Invite Requests for Accommodation. Empowered with that information, an applicant or employee has more of an opportunity to speak up ahead of time if they feel a disability accommodation will be needed. As such, employers should consider asking employees and job applicants whether they require a reasonable accommodation to use the tool.
- Obvious or Known Disability. If an employee or applicant with an obvious or known disability asks for an accommodation, the employer should promptly and appropriately respond to that request.
- Otherwise-Unknown Disability. If the disability is not otherwise known, the employer may ask for supporting medical documentation.
- Provide Reasonable Accommodation. Once the claimed disability is confirmed, the employer must provide a reasonable accommodation, even if that means providing an alternative testing format. This is where the guidance can truly come into conflict with the use of AI: as such tools become ubiquitous, alternative testing may seem inadequate by comparison, and disparities between candidates evaluated by AI and those evaluated by traditional means may themselves raise discrimination concerns.
Per the guidance: “Examples of reasonable accommodations may include specialized equipment, alternative tests or testing formats, permission to work in a quiet setting, and exceptions to workplace policies.”
- Protect PHI. As always, any medical information obtained in relation to accommodation requests must be kept confidential and stored separately from the employee’s or applicant’s personnel file.
With the increasing reliance on AI in private-sector employment, employers will have to expand their proactive risk management to control for the unintended consequences of this technology. The legal standards remain the same, but AI technology may push the envelope of compliance. In addition to making a best effort in that direction, employers should closely review other means of risk management, such as vendor contract terms and insurance coverage.
This article was prepared with the assistance of 2022 summer associate Ayah Housini.
This article originally appeared on www.foley.com.