Award Abstract # 2147256
FAI: Using Explainable AI to Increase Equity and Transparency in the Juvenile Justice System's Use of Risk Scores

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: BOWLING GREEN STATE UNIVERSITY
Initial Amendment Date: February 11, 2022
Latest Amendment Date: February 11, 2022
Award Number: 2147256
Award Instrument: Standard Grant
Program Manager: Todd Leen
tleen@nsf.gov
 (703)292-7215
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: May 1, 2022
End Date: June 30, 2025 (Estimated)
Total Intended Award Amount: $392,993.00
Total Awarded Amount to Date: $392,993.00
Funds Obligated to Date: FY 2022 = $120,962.00
History of Investigator:
  • Trent Buskirk (Principal Investigator)
    tbuskirk@odu.edu
  • Kelly Murphy (Co-Principal Investigator)
Recipient Sponsored Research Office: Bowling Green State University
1851 N RESEARCH DR
BOWLING GREEN
OH  US  43403-4401
(419)372-2481
Sponsor Congressional District: 05
Primary Place of Performance: Bowling Green State University
302 Hayes Hall
Bowling Green
OH  US  43403-0230
Primary Place of Performance Congressional District: 05
Unique Entity Identifier (UEI): SLT3EB6G3FA9
Parent UEI:
NSF Program(s): Fairness in Artificial Intelligence
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 075Z
Program Element Code(s): 114Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070, 47.075

ABSTRACT

Throughout the United States, juvenile justice systems use juvenile risk and need-assessment (JRNA) scores to estimate the likelihood that a youth will commit another offense in the future. Juvenile justice practitioners then use these scores to decide how to intervene with a youth to prevent reoffending (e.g., referring the youth to a community-based program vs. placing the youth in a juvenile correctional center). Unfortunately, most risk assessment systems lack transparency, and the reasons why a youth received a particular score are often unclear. Moreover, families and youth affected by these decisions do not always understand how the scores are used in the decision-making process. This lack of transparency is problematic because it can hinder individuals' buy-in to the intervention recommended by the risk assessment and can mask potential bias in the scores (e.g., if youth of a particular race or gender receive risk scores driven by a particular item on the assessment). To address this issue, project researchers will develop automated, computer-generated explanations of how these risk scores were produced. Investigators will then test whether these better-explained risk scores help youth and juvenile justice decision makers understand the score a youth is given. In addition, the research team will investigate whether the risk scores work equally well for different groups of youth (for example, equally well for boys and for girls) and will identify potential biases in how they are being used, in an effort to understand how equitable the decision-making process is across demographic groups defined by race and gender. The project is embedded within the juvenile justice system and, using actual juvenile justice system data, aims to evaluate how real stakeholders understand how the risk scores are generated and used within that system.

More specifically, this project aims to understand how risk assessment scores are currently used in the juvenile justice system and how interpretable machine learning methods can make black-box risk assessment algorithms more transparent (without reverse engineering them, given that most assessments are proprietary). The research team will study how juvenile justice risk scores are used through analysis of quantitative data from the juvenile justice system (detailing the risk scores and justice system decisions) and through qualitative data collected via key informant interviews. In the second phase of the work, the team will train various interpretable machine learning algorithms to predict youths' risk scores (which are currently generated by a proprietary, black-box algorithm). The team will also predict sentencing dispositions for youth based on these risk scores and other pertinent data collected by the juvenile justice system. The project team will then test and measure how understandable the automated explanations derived from these machine learning methods are to youth, families, judges, and probation officers. The goal of this step is to identify algorithms that are highly predictive of the risk scores and dispositions, respectively, and then to identify methods that provide clear, human-interpretable explanations of those risks and dispositions to key stakeholders throughout the process. This step will also allow researchers to tailor explanation methods to different audiences, for example by identifying one method that works better for explaining risk scores to youth and another that works better for their families or probation officers. Finally, the project team will explore the potential for bias throughout the process (from risk scoring to the use of the scores) and ways in which these interpretable algorithms can help identify, quantify, and mitigate biases.
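
To make the surrogate-modeling idea concrete, the following is a minimal, illustrative Python sketch (not project code from the award): it trains a shallow decision tree as an interpretable surrogate for a black-box risk score and prints the resulting if/then rules. The data, feature names, and the stand-in "black-box" score below are hypothetical placeholders, not actual JRNA items or juvenile justice records.

    # Illustrative sketch only: fit an interpretable surrogate to a
    # black-box risk score and print human-readable rules.
    # All data and feature names here are synthetic placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)

    # Hypothetical case features (stand-ins for assessment items / case data).
    feature_names = ["prior_referrals", "school_attendance", "peer_risk", "family_support"]
    X = rng.normal(size=(500, len(feature_names)))

    # Placeholder for the proprietary black-box risk score (here: an arbitrary rule).
    black_box_risk = (X[:, 0] + 0.5 * X[:, 2] - 0.7 * X[:, 3] > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, black_box_risk, random_state=0)

    # A shallow tree is one example of an interpretable surrogate: its
    # predictions can be read as a handful of if/then rules without
    # exposing or reverse engineering the original algorithm.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, y_train)

    print("Fidelity to black-box scores on held-out cases:", surrogate.score(X_test, y_test))
    print(export_text(surrogate, feature_names=feature_names))

The same pattern would extend to predicting dispositions, and the surrogate's fidelity and explanations could be compared across demographic groups as one way of probing for bias.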

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
