
NSF Org: IIS Division of Information & Intelligent Systems
Initial Amendment Date: August 3, 2018
Latest Amendment Date: August 3, 2018
Award Number: 1841368
Award Instrument: Standard Grant
Program Manager: Ephraim Glinert, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: January 1, 2019
End Date: June 30, 2021 (Estimated)
Total Intended Award Amount: $225,054.00
Total Awarded Amount to Date: $225,054.00
Recipient Sponsored Research Office: 201 OLD MAIN, UNIVERSITY PARK, PA, US 16802-1503, (814) 865-1372
Primary Place of Performance: PA, US 16802-1000
NSF Program(s): HCC-Human-Centered Computing
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Perceived fairness and justice in job recruiting and hiring are influenced by several factors, including the consistency of the decision-making process across people and time, timely and informative feedback, the propriety of interview questions, and the extent to which pre-employment tests appear to relate to the job requirements. These factors shape recruiting and hiring decisions, which are increasingly being made with the help of artificial intelligence (AI). In this project, a sociotechnical frame is applied to explore perceptions of the fairness and justice of AI-supported talent acquisition algorithms. The investigator will elicit and analyze the perceptions of human resources personnel, African American job seekers, and AI software designers. The outcomes will be used to inform the design of bias recognition and mitigation procedures and technologies for both the humans and the algorithms being used.
The intellectual merit of this exploratory study is the development of qualitative instruments and metrics for measuring perceptions of algorithmic fairness and justice. The research approach extends a theory of procedural rules for the perceived fairness of selection systems through a three-pronged approach comprising job seekers who are under-represented in the IT industry, human resource professionals who manage the talent acquisition process, and IT professionals who design AI software with fairness as a core value in product design and development. The study examines both perceptions elicited through scenarios and the actual experiences of job seekers affected by these decisions. This research contributes to the assessment of algorithmic fairness at a time when there is little insight into how historically marginalized populations might perceive or be adversely affected by AI systems.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Our research examines the perceptions and lived experiences of women, Black, and Latinx undergraduate students completing degrees in computing who encounter algorithmic systems during the hiring process for entry-level technology positions. Specifically, we use Gilliland's procedural justice rules as a theoretical model for understanding applicants' reactions to the Applicant Tracking System (ATS) software that employers use to automate recruiting and hiring tasks. Procedural justice rules include formal characteristics such as the consistency with which interview questions are administered, the opportunity to perform a pre-employment task, and the job-relatedness of that task. They also include honest and informative explanations of hiring decisions and respectful treatment by the hiring organization.
Using surveys and workshops, we found that job seekers' perceptions of the fairness of the testing, interviewing, and other procedures applied during the candidate selection process are inextricably linked with their perceptions of equity in the outcomes of that process. Consistent with prior research on selection systems, women and racially minoritized job seekers perceived hiring procedures as fairer when they were able to demonstrate their technical skills through a test that consistently and accurately evaluated their performance. Procedures were also perceived as fairer when job seekers were offered two-way communication with an interviewer who did not exhibit personal bias based on the applicant's race, gender, or ethnicity.
When considering the fairness of the hiring decision itself, situational factors become important: the characteristics of the decision-maker (human or algorithm), the type of task (programming vs. interviewing), the explanation of the decision-making rationale (feedback on why a candidate was or was not hired), and the possibility of human oversight (can a hiring decision be modified if a fault in the algorithm is identified?). The salience of discrimination and of performance expectations based on racial and gender stereotypes is critical in determining perceived fairness in the hiring process. Thus, ATS software may reproduce human bias when algorithmic methods for detecting, classifying, and selecting human beings are built without considering the broader historical and social context of racial and gender inequality.
The broader impacts of our work begin with the demographic diversity of our research team, which mirrors the under-represented groups among our study participants. In addition, using a solid theoretical framework to empirically assess the reactions of job seekers from groups historically excluded from computing yields nuanced viewpoints that highlight both the positive and negative sides of ATS software, providing novel insights for diversifying the technology workforce. From an ethical perspective, organizations should be concerned with the effects of automated recruiting and hiring procedures on the psychological well-being of job candidates. From a legal perspective, they should be concerned that perceived unfairness and algorithmic bias may lead applicants to pursue discrimination cases.
Last Modified: 09/28/2021
Modified by: Lynette M Yarger