
NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient:
Initial Amendment Date: August 25, 2022
Latest Amendment Date: May 3, 2023
Award Number: 2225823
Award Instrument: Standard Grant
Program Manager: Andy Duan, yduan@nsf.gov, (703) 292-4286, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: October 1, 2022
End Date: September 30, 2026 (Estimated)
Total Intended Award Amount: $299,184.00
Total Awarded Amount to Date: $315,184.00
Funds Obligated to Date: FY 2023 = $16,000.00
History of Investigator:
Recipient Sponsored Research Office: 1350 BEARDSHEAR HALL, AMES, IA, US 50011-2103, (515) 294-5225
Sponsor Congressional District:
Primary Place of Performance: 515 MORRILL RD, 1350 BEARDSHEAR HALL, AMES, IA, US 50011-2105
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Robust Intelligence
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
The ability to express and reason about preferences over a set of alternatives is central to rational decision-making in a broad range of applications, such as product design, public policy, health care, information security, and privacy. Because quantitative preferences are often unavailable in practical settings, there is increasing interest in methods for representing and reasoning with qualitative preferences. Furthermore, practical decision-making scenarios typically involve multiple stakeholders with possibly conflicting preferences, and the preferences of some stakeholders may override those of others, e.g., because of the stakeholders' relative positions within an organization. However, existing preference languages and methods are limited to the single-stakeholder setting. Against this background, this project brings together a team of researchers with complementary expertise in formal methods, artificial intelligence, and preference reasoning to develop methods and tools for representing and reasoning with multi-stakeholder preferences. The practical open-source multi-stakeholder decision-support tools resulting from the project will significantly lower the barrier to applying AI and formal methods to multi-stakeholder decision making in a number of domains. The project enhances research-based training of graduate and undergraduate students, including women and members of other under-represented groups, at ISU and PSU in artificial intelligence, formal methods, and related areas of national importance.
Broad dissemination of research results (including publications, open-source software, data, tutorials, and course materials), incorporation of research results into undergraduate and graduate curricula in Computer Science, Information Sciences and Technology, Data Sciences, and related disciplines, and outreach to targeted application domains (e.g., health, public policy, security, and privacy) that would benefit from advanced tools for multi-stakeholder decision-making further enhance the broader impacts of the project.
The primary intellectual merit of the project centers on substantial advances over the current state of the art in languages, algorithms, and software for representing and reasoning with multi-stakeholder preferences. The researchers will develop Generalized Conditional Relative Importance and Preference Theory (GCRIPT), an expressive language for multi-stakeholder preference representation that subsumes existing preference languages. The resulting preference reasoners will be able to (a) analyze preferences expressed in GCRIPT, (b) reason with the preferences of multiple stakeholders, taking into account not only their individual preferences but also hierarchies that give precedence to the preferences of some stakeholders over those of others, and (c) offer easy-to-understand explanations of the preferred choices as well as their impacts on the stakeholders. The project will also enhance the underlying model checking techniques that form the core technology for the preference reasoning framework, e.g., in the areas of incremental model checking, counter-example analysis, and justification. The resulting advances in knowledge representation and formal methods contribute to AI systems that substantially augment and extend human capabilities in multi-stakeholder decision making.
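To make the idea of a stakeholder hierarchy concrete, the sketch below shows one simple (hypothetical) way to combine multiple stakeholders' preferences lexicographically by precedence: the highest-precedence stakeholder who distinguishes two alternatives decides between them. This is only an illustrative assumption about how precedence could work, not the project's GCRIPT language or its actual reasoning algorithm; the names `hierarchical_prefers`, `manager`, and `employee` are invented for the example.

```python
def prefers(ranking, a, b):
    """True if alternative a is ranked strictly above b (lower index = better)."""
    return ranking.index(a) < ranking.index(b)

def hierarchical_prefers(stakeholders, a, b):
    """Combine stakeholder rankings lexicographically by precedence.

    stakeholders: list of rankings, highest precedence first.
    Returns True if a is preferred to b under the hierarchy.
    """
    for ranking in stakeholders:
        if prefers(ranking, a, b):
            return True
        if prefers(ranking, b, a):
            return False
    return False  # no stakeholder distinguishes a from b

# Hypothetical example: a manager's preferences take precedence over an
# employee's, so the manager's ordering decides wherever they disagree.
manager  = ["policy_A", "policy_B", "policy_C"]
employee = ["policy_C", "policy_B", "policy_A"]

hierarchical_prefers([manager, employee], "policy_A", "policy_C")  # True
```

Real multi-stakeholder reasoning, as described above, handles conditional and partially ordered preferences rather than simple total rankings, which is what motivates the model-checking machinery the project builds on.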
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH