
NSF Org: | IIS Division of Information & Intelligent Systems |
Recipient: | |
Initial Amendment Date: | September 9, 2020 |
Latest Amendment Date: | October 19, 2020 |
Award Number: | 2007955 |
Award Instrument: | Standard Grant |
Program Manager: | Andy Duan, yduan@nsf.gov, (703) 292-4286, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering |
Start Date: | January 1, 2021 |
End Date: | December 31, 2024 (Estimated) |
Total Intended Award Amount: | $167,374.00 |
Total Awarded Amount to Date: | $167,374.00 |
Funds Obligated to Date: | |
History of Investigator: | |
Recipient Sponsored Research Office: | 6823 SAINT CHARLES AVE NEW ORLEANS LA US 70118-5665 (504)865-4000 |
Sponsor Congressional District: | |
Primary Place of Performance: | 6823 St Charles Ave New Orleans LA US 70118-5698 |
Primary Place of Performance Congressional District: | |
Unique Entity Identifier (UEI): | |
Parent UEI: | |
NSF Program(s): | Robust Intelligence |
Primary Program Source: | |
Program Reference Code(s): | |
Program Element Code(s): | |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.070 |
ABSTRACT
Many settings in everyday life require making decisions by combining the subjective preferences of individuals in a group, such as where to go to eat, where to go on vacation, whom to hire, which ideas to fund, or what route to take. In many domains, these subjective preferences are combined with moral values, ethical principles, or business constraints that are applicable to the decision scenario and are often prioritized over the preferences. The potential conflict of moral values with subjective preferences is keenly felt both when AI systems recommend products to us and when we use AI-enabled systems to make group decisions. This research seeks to make AI more accountable by providing mechanisms to bound the decisions that AI systems can make, ensuring that the outcomes of the group decision-making process align with human values. To achieve the goal of building ethically-bounded, AI-enabled group decision-making systems, this project takes inspiration from humans, who often constrain their decisions and actions according to a number of exogenous priorities coming from moral, ethical, or business values. This research project will address the current lack of principled, formal approaches for embedding ethics into AI agents and AI-enabled group decision support systems by advancing the state of the art in the safety and robustness of AI agents, which, given how broadly AI touches our daily lives, will have broad impact and benefit to society.
Specifically, the long-term goal of this project is to establish mathematical and machine learning foundations for embedding ethical guidelines into AI for group decision-making systems. Within the machine ethics field there are two main approaches: a bottom-up approach focused on data-driven machine learning techniques and a top-down approach following symbolic and logic-based formalisms. This project brings these two methodologies closer together through three specific aims. (1) Modeling and Evaluating Ethical Principles: this project will extend principles in social choice theory and fair division using preference models from the literature on knowledge representation and preference reasoning. (2) Learning Ethical Principles From Data: this project will develop novel machine-learning frameworks to learn individual ethical principles and then aggregate them for use in group decision-making systems. Finally, (3) Embedding Ethical Principles into Group Decision Support Systems: this project will develop novel frameworks for designing AI-based mechanisms for ethical group decision-making. This research will establish novel methods for the formal and experimental unification of aspects of the top-down, rule-based approach and the bottom-up, data-based approach for embedding ethics into group decision-making systems. The project will also formalize a framework for ethical and constrained reasoning across teams of computational agents.
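As a concrete, simplified illustration of the preference-aggregation foundation that aim (1) builds on, the sketch below applies the Borda rule to a small profile of rankings. It is a minimal example using hypothetical candidates and voters, not code or data from the project.

```python
# Minimal Borda-count sketch: each voter ranks all candidates, a candidate
# earns (m - 1 - position) points per ranking, and the highest total wins.
from collections import defaultdict

def borda_winner(profile, candidates):
    """Return the Borda winner and the full score table for a profile of rankings."""
    scores = defaultdict(int)
    m = len(candidates)
    for ranking in profile:
        for position, candidate in enumerate(ranking):
            scores[candidate] += m - 1 - position   # top choice gets m - 1 points
    winner = max(candidates, key=lambda c: scores[c])
    return winner, dict(scores)

# Hypothetical group deciding where to eat.
candidates = ["cafe", "diner", "food_truck"]
profile = [
    ["cafe", "diner", "food_truck"],
    ["diner", "cafe", "food_truck"],
    ["food_truck", "diner", "cafe"],
]
print(borda_winner(profile, candidates))   # diner wins with 4 points, cafe 3, food_truck 2
```

Aims (2) and (3) then ask how the constraints that override such preferences can be learned from data and enforced when the group's choice is actually made.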
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Many settings in everyday life require making decisions by combining the subjective preferences of individuals in a group, such as where to go to eat, where to go on vacation, whom to hire, which ideas to fund, or what route to take. In many domains, these subjective preferences are combined with moral values, ethical principles, or business constraints that are applicable to the decision scenario and are often prioritized over the individual preferences of the agents making the decision. The potential conflict of moral values with subjective preferences is keenly felt both when AI systems recommend products to us and when we use AI-enabled systems to make group decisions. This research seeks to make AI more accountable by providing mechanisms to bound the decisions that AI systems can make, ensuring that the outcomes of the group decision-making process align with human values. To achieve the goal of building ethically-bounded, AI-enabled group decision-making systems, this project takes inspiration from humans, who often constrain their decisions and actions according to a number of exogenous priorities coming from moral, ethical, or business values. This research project has taken concrete steps to address the current lack of principled, formal approaches for embedding ethics into AI agents and AI-enabled group decision support systems by advancing the state of the art in the safety and robustness of AI agents.
This project developed a number of innovative algorithmic techniques to address these issues, leading to scientific publications, code, and datasets that accomplish the primary goals of the project and lay the groundwork for future development of AI systems that are responsive to the diverse needs of decision makers and of software systems that enable those decisions. Specifically, this project accomplished the following concrete results.
Modeling and Evaluating Ethical Principles
We have extended principles from social choice theory and fair division to include more fine-grained models of individual preferences in decision making. We proposed and evaluated new models of consensus decision making, taking ideas from liquid democracy and integrating them into a formal model of group consensus. In addition, we developed reinforcement-learning-based frameworks to disentangle various objectives in environments modeled as Markov decision processes. These algorithms and tools are available in open-source libraries, including a number of new datasets hosted at www.preflib.org.
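To make the liquid-democracy idea concrete, the following minimal sketch resolves delegation chains into direct votes before tallying; the agents, delegations, and options are hypothetical, and this illustrates the general mechanism rather than the project's formal model.

```python
# Liquid-democracy-style tallying: each agent either votes directly or delegates
# to another agent; delegations are followed until a direct vote (or a cycle).
from collections import Counter

def resolve_vote(agent, direct_votes, delegations, seen=None):
    """Follow delegations from `agent` to a direct vote; return None on a cycle or dead end."""
    seen = seen or set()
    if agent in direct_votes:
        return direct_votes[agent]
    if agent in seen or agent not in delegations:
        return None   # delegation cycle or dangling delegation: this vote is lost
    seen.add(agent)
    return resolve_vote(delegations[agent], direct_votes, delegations, seen)

# Hypothetical agents: two vote directly, three delegate (forming a chain to alice).
direct_votes = {"alice": "fund_project_X", "bob": "fund_project_Y"}
delegations = {"carol": "alice", "dave": "carol", "erin": "dave"}

tally = Counter()
for agent in set(direct_votes) | set(delegations):
    choice = resolve_vote(agent, direct_votes, delegations)
    if choice is not None:
        tally[choice] += 1

print(tally.most_common())   # [('fund_project_X', 4), ('fund_project_Y', 1)]
```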
Learning Ethical Principles From Data
We developed novel machine-learning frameworks to learn individual ethical principles, aggregate them into an overall group preference, and put them to work in Markov decision processes, a general model of decision making under uncertainty. This resulted in novel algorithms to learn from trajectories in these environments and to model the underlying constraints formally. For instance, when navigating a simulated driving / grid environment, we are able to separate travel preferences from rules like “don’t travel through certain color squares.” We used this framework, along with human experiments on Amazon Mechanical Turk, to collect real-world data and test these algorithms on it. All of this code and these algorithms have been released in open-source libraries.
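Under simplified assumptions, the sketch below shows one way a rule like “don’t travel through certain squares” can surface from demonstrations: cells that lie on an unconstrained shortest path between the demonstrated start and goal, yet are never visited in any demonstration, become candidate constraints. The grid, endpoints, and trajectories are hypothetical, and this illustrates the idea rather than the released algorithms.

```python
# Inferring candidate forbidden cells from gridworld demonstrations:
# flag cells that a constraint-free shortest path uses but no demonstration visits.
from collections import deque

def shortest_path(grid_size, start, goal):
    """Breadth-first search on an open grid, ignoring any constraints."""
    queue, parents = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size and nxt not in parents:
                parents[nxt] = cell
                queue.append(nxt)
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = parents[cell]
    return list(reversed(path))

# Hypothetical demonstrations from (0, 1) to (2, 1) that detour around the center cell.
demos = [
    [(0, 1), (0, 0), (1, 0), (2, 0), (2, 1)],
    [(0, 1), (0, 2), (1, 2), (2, 2), (2, 1)],
]
visited = {cell for trajectory in demos for cell in trajectory}
unconstrained = shortest_path(grid_size=3, start=(0, 1), goal=(2, 1))

candidate_constraints = [cell for cell in unconstrained if cell not in visited]
print(candidate_constraints)   # [(1, 1)]: the avoided center cell is flagged
```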
Embedding Ethical Principles into Group Decision Support Systems
Combining results from the first two areas, our algorithms are able to take data from human decision makers, disentangle the (often) conflicting preferences and constraints, and then deploy them in real environments. In addition, we proposed and evaluated a new framework, called Social Mechanism Design, that more flexibly defines both goals and constraints in group decision-making tasks.
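As a minimal illustration of the “constraints before preferences” pattern these results rely on (not the Social Mechanism Design framework itself), the sketch below removes alternatives that violate a hard constraint and then aggregates the group's rankings over the remaining feasible options; the venues, budgets, and rankings are hypothetical.

```python
# Constrained plurality: discard infeasible alternatives first, then give each
# voter's vote to their highest-ranked alternative that remains feasible.

budgets = {"venue_A": 900, "venue_B": 400, "venue_C": 300}   # hypothetical attribute

def within_budget(venue):
    return budgets[venue] <= 500   # hard (business) constraint on the group's choice

def constrained_plurality(profile, alternatives, constraints):
    allowed = [a for a in alternatives if all(c(a) for c in constraints)]
    counts = {a: 0 for a in allowed}
    for ranking in profile:
        for choice in ranking:         # first feasible option in this voter's ranking
            if choice in counts:
                counts[choice] += 1
                break
    winner = max(counts, key=counts.get)
    return winner, counts

profile = [
    ["venue_A", "venue_B", "venue_C"],
    ["venue_A", "venue_C", "venue_B"],
    ["venue_B", "venue_C", "venue_A"],
]
print(constrained_plurality(profile, list(budgets), [within_budget]))
# ('venue_B', {'venue_B': 2, 'venue_C': 1}): venue_A is over budget, so the
# constraint overrides the majority's first choice and votes fall to feasible options.
```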
In terms of intellectual merit, this project led to 18 academic publications, more than 10 research talks, and more than 10 public engagement opportunities where the work was shared with experts and non-experts. In terms of broader impacts, this project funded two PhD students at Tulane University (both advanced to candidacy in 2025, one female), one master's student in computer science (graduating Spring 2025), and one undergraduate student (female, graduated Spring 2025).
Last Modified: 06/16/2025
Modified by: Nicholas Mattei
Please report errors in award information by writing to: awardsearch@nsf.gov.