Award Abstract # 2349804
Collaborative Research: Conference: Workshop on Advanced Automated Systems, Contestability, and the Law

NSF Org: CNS
Division Of Computer and Network Systems
Recipient: THE TRUSTEES OF COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK
Initial Amendment Date: October 27, 2023
Latest Amendment Date: October 27, 2023
Award Number: 2349804
Award Instrument: Standard Grant
Program Manager: Sara Kiesler
skiesler@nsf.gov
 (703)292-8643
CNS
 Division Of Computer and Network Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: November 1, 2023
End Date: October 31, 2024 (Estimated)
Total Intended Award Amount: $29,601.00
Total Awarded Amount to Date: $29,601.00
Funds Obligated to Date: FY 2024 = $29,601.00
History of Investigator:
  • Steven Bellovin (Principal Investigator)
    smb@cs.columbia.edu
Recipient Sponsored Research Office: Columbia University
615 W 131ST ST
NEW YORK
NY  US  10027-7922
(212)854-6851
Sponsor Congressional District: 13
Primary Place of Performance: Columbia University
202 LOW LIBRARY 535 W 116 ST MC 4309,
NEW YORK
NY  US  10027
Primary Place of Performance Congressional District: 13
Unique Entity Identifier (UEI): F4N1QNPB95M4
Parent UEI:
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source: 01002425DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 025Z, 7556
Program Element Code(s): 806000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This award supports an interdisciplinary workshop to be held in the winter of 2023-2024 (the current plan is January 2024). The workshop is organized collaboratively by PIs at Tufts University and Columbia University. Its primary objective is to inform the executive branch and U.S. government agencies about new technological advances, and the associated concerns and risks, related to the legal contestability of artificial intelligence used in government software-enabled processes, regulations, and legal proceedings.

Over two days, the workshop will feature speakers and panelists, all of them leading researchers in the field of artificial intelligence and the law.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

So-called explainable AI---artificial intelligence systems that can give a rationale for their answers---is still very much a research area. That said, the increasing use of AI by government agencies, including the Social Security Administration, means that some form of accountability is necessary.

We focused instead on "contestable" AI: AI systems that emit enough information to let someone affected by a decision challenge the outcome. We held a workshop on this topic; participants included AI researchers, government officials, and (quite crucially) people whose lives have been or are likely to be affected by AI systems. Their input was essential: what do they need in order to dispute a decision? Following the workshop, the organizers produced a report.

Our report recommended the following:

  • Adequate notice be given when a government system requiring contestability is being developed and used.
  • Notice to the public must be adequate to allow challenges to the system before large numbers of people are affected.
  • Notices to individuals must be comprehensible: how was the decision made, and what is necessary to contest it?
  • Contestability must be part of the system design from the beginning.
  • Designers should always consider not deploying the system if they cannot incorporate contestability.
  • Design consultations should include operators, end users, decision makers, and (very important and often overlooked) decision subjects.
  • Stakeholders who will be affected by the system must be involved from the beginning.
  • Contestability features must be stress-tested before the system goes live.
  • Contestability should be accessible to and usable by people with different backgrounds.
  • Reproducibility of outcomes is crucial.
  • The automated system's decisions must follow the law—programmers may not ignore difficult-to-implement provisions.
  • Additional research on design of such systems would be helpful.

Following this, we held an instructional workshop for students from Tufts University and Spelman College, a historically Black women's college. The goal of this effort was to educate the students about the problem, the surrounding legal background, and how our recommendations address the issue.


Last Modified: 02/18/2025
Modified by: Steven M Bellovin

