Award Abstract # 2007955
Collaborative Research: RI: Small: Modeling and Learning Ethical Principles for Embedding into Group Decision Support Systems

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: THE ADMINISTRATORS OF TULANE EDUCATIONAL FUND
Initial Amendment Date: September 9, 2020
Latest Amendment Date: October 19, 2020
Award Number: 2007955
Award Instrument: Standard Grant
Program Manager: Andy Duan
yduan@nsf.gov
(703)292-4286
IIS, Division of Information & Intelligent Systems
CSE, Directorate for Computer and Information Science and Engineering
Start Date: January 1, 2021
End Date: December 31, 2024 (Estimated)
Total Intended Award Amount: $167,374.00
Total Awarded Amount to Date: $167,374.00
Funds Obligated to Date: FY 2020 = $167,374.00
History of Investigator:
  • Nicholas Mattei (Principal Investigator)
    nsmattei@tulane.edu
Recipient Sponsored Research Office: Tulane University
6823 SAINT CHARLES AVE
NEW ORLEANS
LA  US  70118-5665
(504)865-4000
Sponsor Congressional District: 01
Primary Place of Performance: Tulane University
6823 St Charles Ave
New Orleans
LA  US  70118-5698
Primary Place of Performance Congressional District: 01
Unique Entity Identifier (UEI): XNY5ULPU8EN6
Parent UEI: XNY5ULPU8EN6
NSF Program(s): Robust Intelligence
Primary Program Source: 01002021DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7495, 7923, 9150
Program Element Code(s): 749500
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Many settings in everyday life require making decisions by combining the subjective preferences of individuals in a group, such as where to go to eat, where to go on vacation, whom to hire, which ideas to fund, or what route to take. In many domains, these subjective preferences are combined with moral values, ethical principles, or business constraints that apply to the decision scenario and are often prioritized over the preferences. The potential conflict between moral values and subjective preferences is keenly felt both when AI systems recommend products to us and when we use AI-enabled systems to make group decisions. This research seeks to make AI more accountable by providing mechanisms to bound the decisions that AI systems can make, ensuring that the outcomes of the group decision-making process align with human values. To achieve the goal of building ethically-bounded, AI-enabled group decision-making systems, this project takes inspiration from humans, who often constrain their decisions and actions according to a number of exogenous priorities coming from moral, ethical, or business values. This research project will address the current lack of principled, formal approaches for embedding ethics into AI agents and AI-enabled group decision support systems by advancing the state of the art in the safety and robustness of AI agents; given how broadly AI touches our daily lives, this advance will have broad impact and benefit to society.

Specifically, the long-term goal of this project is to establish mathematical and machine learning foundations for embedding ethical guidelines into AI for group decision-making systems. Within the machine ethics field there are two main approaches: the bottom-up approach, focused on data-driven machine learning techniques, and the top-down approach, following symbolic and logic-based formalisms. This project brings these two methodologies closer together through three specific aims. (1) Modeling and Evaluating Ethical Principles: the project will extend principles from social choice theory and fair division using preference models from the literature on knowledge representation and preference reasoning. (2) Learning Ethical Principles from Data: the project will develop novel machine-learning frameworks to learn individual ethical principles and then aggregate them for use in group decision-making systems. (3) Embedding Ethical Principles into Group Decision Support Systems: the project will develop novel frameworks for designing AI-based mechanisms for ethical group decision-making. This research will establish novel methods for the formal and experimental unification of the top-down, rule-based approach with the bottom-up, data-driven approach for embedding ethics into group decision-making systems. The project will also formalize a framework for ethical and constrained reasoning across teams of computational agents.
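As a toy illustration of the ethically-bounded group choice that aim (3) targets, the sketch below restricts a standard voting rule (Borda) to the alternatives that satisfy an exogenous constraint, so the constraint is prioritized over the preferences, as described in the abstract above. The function names and the constraint are hypothetical; this is an assumed reading, not the project's formalism.

```python
# Minimal sketch (assumed, not the project's formalism): an
# ethically-bounded group choice where an exogenous constraint
# is prioritized over subjective preferences.
from itertools import chain

def borda_scores(profile):
    """Borda score of each alternative from a list of rankings
    (each ranking lists all alternatives, best first)."""
    alternatives = set(chain.from_iterable(profile))
    scores = {a: 0 for a in alternatives}
    m = len(alternatives)
    for ranking in profile:
        for pos, a in enumerate(ranking):
            scores[a] += m - 1 - pos
    return scores

def ethically_bounded_winner(profile, is_permissible):
    """Highest-Borda alternative among those satisfying the
    constraint; the constraint trumps the preferences."""
    scores = borda_scores(profile)
    feasible = [a for a in scores if is_permissible(a)]
    if not feasible:
        raise ValueError("no permissible alternative")
    return max(feasible, key=scores.get)

# Three agents rank restaurants; a business rule forbids "c".
profile = [["a", "b", "c"], ["c", "a", "b"], ["c", "b", "a"]]
print(ethically_bounded_winner(profile, lambda a: a != "c"))  # "a"
```

Note that "c" has the highest Borda score in this example, so the constraint genuinely overrides the aggregated preferences rather than merely breaking a tie.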

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 18)
Abramowitz, Ben and Lev, Omer and Mattei, Nicholas "Who Reviews The Reviewers? A Multi-Level Jury Problem", 2025
Abramowitz, Ben and Mattei, Nicholas "Social Mechanism Design: A Low-Level Introduction" Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems, 2023
Abramowitz, Ben and Mattei, Nicholas "Social Mechanism Design: Making Maximally Acceptable Decisions" 9th International Workshop on Computational Social Choice, 2023
Abramowitz, Ben and Mattei, Nicholas "Towards Group Learning: Distributed Weighting of Experts" The 13th Workshop on Optimization and Learning in Multiagent Systems at AAMAS 2022, 2022
Awad, Edmond and Levine, Sydney and Loreggia, Andrea and Mattei, Nicholas and Rahwan, Iyad and Rossi, Francesca and Talamadupula, Kartik and Tenenbaum, Joshua and Kleiman-Weiner, Max "When is it acceptable to break the rules? Knowledge representation of moral judgements based on empirical data" Autonomous Agents and Multi-Agent Systems, v.38, 2024, https://doi.org/10.1007/s10458-024-09667-4
Aziz, Haris and Huang, Xin and Mattei, Nicholas and Segal-Halevi, Erel "Computing welfare-maximizing fair allocations of indivisible goods" European Journal of Operational Research, 2022, https://doi.org/10.1016/j.ejor.2022.10.013
Bergamaschi Ganapini, Marianna and Campbell, Murray and Fabiano, Francesco and Horesh, Lior and Lenchner, Jonathan and Loreggia, Andrea and Mattei, Nicholas and Rossi, Francesca and Srivastava, Biplav and Venable, Kristen Brent "Combining Fast and Slow Thinking for Human-like and Efficient Decisions in Constrained Environments" Proceedings of the 16th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy) 2022, 2022
Culotta, Aron and Mattei, Nicholas "Use Open Source for Safer Generative AI Experiments" MIT Sloan Management Review, 2024
Ganapini, Marianna Bergamaschi and Fabiano, Francesco and Horesh, Lior and Loreggia, Andrea and Mattei, Nicholas and Murugesan, Keerthiram and Pallagani, Vishal and Rossi, Francesca and Srivastava, Biplav and Venable, Kristen Brent "Value-based Fast and Slow AI Nudging" Proceedings of the Workshop on Ethics and Trust in Human-AI Collaboration: Socio-Technical Approaches (ETHAICS 2023), 2023
Glazier, Arie and Loreggia, Andrea and Mattei, Nicholas and Rahgooy, Taher and Rossi, Francesca and Venable, Brent "Learning Behavioral Soft Constraints from Demonstrations" Workshop on Safe and Robust Control of Uncertain Systems at the 35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021
Hassan, Saad and Asad, Syeda Mah Noor and Eslami, Motahhare and Mattei, Nicholas and Culotta, Aron and Zimmerman, John "PACE: Participatory AI for Community Engagement" Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, v.12, 2024, https://doi.org/10.1609/hcomp.v12i1.31610

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Many settings in everyday life require making decisions by combining the subjective preferences of individuals in a group, such as where to go to eat, where to go on vacation, whom to hire, which ideas to fund, or what route to take. In many domains, these subjective preferences are combined with moral values, ethical principles, or business constraints that apply to the decision scenario and are often prioritized over the individual preferences of the agents making the decision. The potential conflict between moral values and subjective preferences is keenly felt both when AI systems recommend products to us and when we use AI-enabled systems to make group decisions. This research seeks to make AI more accountable by providing mechanisms to bound the decisions that AI systems can make, ensuring that the outcomes of the group decision-making process align with human values. To achieve the goal of building ethically-bounded, AI-enabled group decision-making systems, this project takes inspiration from humans, who often constrain their decisions and actions according to a number of exogenous priorities coming from moral, ethical, or business values. This research project has taken concrete steps to address the current lack of principled, formal approaches for embedding ethics into AI agents and AI-enabled group decision support systems by advancing the state of the art in the safety and robustness of AI agents.


This project developed a number of innovative algorithmic techniques to address these issues, leading to scientific publications, code, and datasets that accomplish the primary goals of the project and lay the groundwork for future development of AI systems that are responsive to the diverse needs of decision makers and the software systems that enable those decisions. Specifically, this project accomplished the following concrete results.


Modeling and Evaluating Ethical Principles We have extended principles from social choice theory and fair division to include more fine-grained models of individuals' preferences in decision making. We proposed and evaluated new models of consensus decision making, taking ideas from liquid democracy and integrating them into a formal model of group consensus. In addition, we developed reinforcement learning based frameworks to disentangle various objectives in environments modeled as Markov decision processes. These algorithms and tools are available in open-source libraries, including a number of new datasets hosted at www.preflib.org.
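For a concrete feel of the liquid democracy ingredient, here is a small, assumed sketch (hypothetical names; the project's formal consensus model is richer than this): delegation chains are resolved into vote weights, and ballots caught in a delegation cycle are dropped.

```python
# Illustrative sketch only: resolving liquid-democracy delegations
# into vote weights. Assumes each voter either votes directly or
# delegates, not both.
def resolve_weights(direct_votes, delegations):
    """direct_votes: voter -> choice; delegations: voter -> voter.
    Returns choice -> total weight; cyclic delegations are lost."""
    weights = {}
    for voter in list(direct_votes) + list(delegations):
        v, seen = voter, set()
        while v in delegations:      # follow the delegation chain
            if v in seen:            # cycle: this ballot is lost
                v = None
                break
            seen.add(v)
            v = delegations[v]
        if v is not None and v in direct_votes:
            choice = direct_votes[v]
            weights[choice] = weights.get(choice, 0) + 1
    return weights

votes = {"ann": "yes", "bob": "no"}
delegs = {"cal": "ann", "dee": "cal", "eve": "eve"}  # eve cycles
print(resolve_weights(votes, delegs))  # {'yes': 3, 'no': 1}
```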


Learning Ethical Principles From Data We developed novel machine-learning frameworks to learn individual ethical principles, aggregate them into an overall group preference, and put them to work in Markov decision processes, a general model of decision making under uncertainty. This resulted in novel algorithms that learn from trajectories in these environments and formally model the underlying constraints. For instance, when navigating a simulated driving/grid environment, we are able to separate travel preferences from rules like “don’t travel through certain color squares.” We used this framework, along with human experiments on Amazon Mechanical Turk, to collect real-world data and test these algorithms on it. The code and algorithms have been released in open-source libraries.
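As a simplified, assumed illustration of separating preferences from hidden rules in a grid environment (the BFS planner, open grid, and helper names below are invented for the example and are not the project's learning algorithm): cells that an unconstrained shortest-path planner would use, but that demonstrators never enter, are flagged as candidate constraints.

```python
# Toy sketch (assumed setup): flag grid cells as candidate
# constraints when an unconstrained shortest path uses them
# but demonstrations avoid them.
from collections import deque

def shortest_path_cells(width, height, start, goal):
    """Cells on one BFS shortest path over an open grid
    (assumes the goal is reachable)."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == goal:
            break
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in prev:
                prev[nxt] = (x, y)
                frontier.append(nxt)
    cells, cur = set(), goal
    while cur is not None:           # walk back along the path
        cells.add(cur)
        cur = prev[cur]
    return cells

def candidate_constraints(demos, width, height, start, goal):
    planned = shortest_path_cells(width, height, start, goal)
    visited = set().union(*demos)
    return planned - visited         # planner uses, humans avoid

# Demonstrators detour around cell (1, 0) on a 3x2 grid.
demos = [[(0, 0), (0, 1), (1, 1), (2, 1), (2, 0)]]
print(candidate_constraints(demos, 3, 2, (0, 0), (2, 0)))  # {(1, 0)}
```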


Embedding Ethical Principles into Group Decision Support Systems Combining results from the first two areas, our algorithms are able to take data from human decision makers, disentangle the (often) conflicting preferences and constraints, and then deploy them in real environments. In addition, we proposed and evaluated a new framework, called Social Mechanism Design, that more flexibly defines both goals and constraints in group decision-making tasks.
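As an assumed, minimal reading of a "maximally acceptable" decision (illustrative names and thresholds; not the definitions from the Social Mechanism Design papers): each agent accepts an alternative when its utility clears that agent's threshold, and the mechanism picks the alternative accepted by the most agents, breaking ties by total utility.

```python
# Hedged sketch of a "maximally acceptable" group choice.
def maximally_acceptable(utilities, thresholds):
    """utilities: agent -> {alternative: utility};
    thresholds: agent -> minimum acceptable utility.
    Returns the alternative accepted by the most agents,
    with total utility as the tie-breaker."""
    alternatives = next(iter(utilities.values())).keys()
    def key(alt):
        accepts = sum(utilities[a][alt] >= thresholds[a] for a in utilities)
        total = sum(utilities[a][alt] for a in utilities)
        return (accepts, total)
    return max(alternatives, key=key)

utilities = {
    "p1": {"x": 0.9, "y": 0.6},
    "p2": {"x": 0.2, "y": 0.5},
    "p3": {"x": 0.8, "y": 0.5},
}
thresholds = {"p1": 0.5, "p2": 0.4, "p3": 0.5}
print(maximally_acceptable(utilities, thresholds))  # 'y'
```

Here "x" has the higher total utility, but "y" is acceptable to all three agents, so the acceptability criterion, not raw utility, drives the choice.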


In terms of intellectual merit, this project led to 18 academic publications, more than 10 research talks, and more than 10 public engagement opportunities where the work was shared with experts and non-experts. In terms of broader impacts, this project funded two PhD students at Tulane University (both advanced to candidacy in 2025; one female), one master's student in computer science (graduated Spring 2025), and one undergraduate student (female, graduated Spring 2025).


Last Modified: 06/16/2025
Modified by: Nicholas Mattei
