DOL offers a framework for AI hiring decisions
An interesting thing happened once the pandemic ushered in remote work on a large scale: employers hired many more employees with disabilities. This upended a lot of discriminatory thinking about hiring these workers.
In conjunction with the Partnership on Employment & Accessible Technology (PEAT), the Department of Labor’s Office of Disability Employment Policy has released a new publication and associated tools, the AI & Inclusive Hiring Framework, geared toward hiring employees with disabilities.
How the framework works
AI isn’t neutral. It reflects the unconscious biases of the people who write the algorithms, which in turn reflect the unconscious biases of the clients commissioning the algorithms. The EEOC has already provided guidance on how AI can negatively affect the hiring process by compounding decisions rooted in discrimination.
PEAT’s framework builds on the idea that the algorithms powering AI-based hiring decisions can be inclusive of employees with disabilities. The key is understanding how AI works and maintaining human oversight so it can work for you.
The framework has 10 focus areas, each offering practices, goals and sample activities you can adapt to your AI governance and disability-inclusive hiring initiatives. Each area also includes information on maximizing benefits and managing risks for workers and job seekers as a company assesses, acquires or deploys AI hiring technology.
The framework also lists five roles for employers, but we’re going to concentrate on two: leaders overseeing AI, and HR and hiring managers.
Leaders lead, of course, and leadership is usually top-down. Leaders who work responsibly with AI hiring tools set an example for everyone else in the company who uses those tools. Key practices for leaders overseeing AI include:
- Outlining organizational roles and responsibilities
- Developing policies and practices for inventorying AI hiring technology
- Developing ways to assess AI’s impact
- Exploring AI’s impact on external users
- Monitoring performance and legal risk
HR and hiring managers’ responsibilities are geared toward the day-to-day use of AI hiring tools. They are assigned six tasks:
1. Training staff and contractors. The goal is to establish training policies, content and delivery systems to educate internal users on AI hiring technology’s role, risk management and change management.
2. Outlining the context for using AI. The goal is to set boundaries for when it’s acceptable to use AI and when it’s not.
3. Fostering an inclusive culture that values critical thinking. This is document-based and may include internal and external impact assessments, independent audits, and a whistleblower policy with reporting mechanisms.
4. Outlining reasonable accommodation procedures. This encompasses creating a process for job candidates to request an accommodation and ensuring you can offer, respond to and promptly manage accommodation requests.
5. Developing procedures to handle incidents and appeals. This covers testing policies and procedures to manage and report AI’s failures or negative impacts.
6. Monitoring performance and legal risk. This is also document-based and includes impact assessments (including fairness), independent audit reports, risk treatments, legal and regulatory compliance records, evaluations of stakeholder feedback mechanisms, and participatory approaches for assessing impacts.