
NSF Org: | SES Division of Social and Economic Sciences |
Initial Amendment Date: | August 14, 2019 |
Latest Amendment Date: | September 14, 2021 |
Award Number: | 1919453 |
Award Instrument: | Continuing Grant |
Program Manager: | Claudia Gonzalez-Vallejo, clagonza@nsf.gov, (703) 292-4710, SES Division of Social and Economic Sciences, SBE Directorate for Social, Behavioral and Economic Sciences |
Start Date: | August 15, 2019 |
End Date: | July 31, 2024 (Estimated) |
Total Intended Award Amount: | $500,000.00 |
Total Awarded Amount to Date: | $500,000.00 |
Funds Obligated to Date: | FY 2021 = $86,066.00 |
Recipient Sponsored Research Office: | 5000 Forbes Ave, Pittsburgh, PA, US 15213-3815, (412) 268-8746 |
Primary Place of Performance: | 5000 Forbes Ave, Pittsburgh, PA, US 15213-3890 |
NSF Program(s): | Decision, Risk & Mgmt Sci |
Primary Program Source: | 01002122DB NSF RESEARCH & RELATED ACTIVIT |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.075 |
ABSTRACT
New technologies in sensing, data communication, and processing allow for extensive instrumentation of the built environment, and the massive flow of information collectable by sensors can transform the operation and functionality of urban systems. However, this development also depends on the attitude of citizens and stakeholders toward information. This project investigates how interacting agents make decisions about collecting information, with a focus on users and managers of urban systems interacting with public policies. For rational, isolated agents acting without external constraints, "information never hurts," and data with low impact on an agent's beliefs have small value. This implies, for example, that such agents are always willing to install free (or cheap) sensors, and to install expensive ones only if they provide high-impact information. However, these intuitive properties hold neither in multi-agent settings, where agents compete against one another, nor for agents acting under external constraints such as those imposed by regulations. Integrating analyses from social science, engineering, and computer science, the project will develop a framework for modeling the attitude toward information in these contexts, depending on the agents' preferences and the external regulations.
The goals of the project are: 1) to develop a framework for assessing the Value of Information in multi-agent settings, modeling the interaction between policy makers and decision makers subject to external regulations; 2) to gather and analyze empirical data about the attitude toward information, using surveys and interviews among users, and to calibrate the models developed in (1); and 3) to design mechanisms alleviating Information Avoidance and Over Evaluation, and to assess their effectiveness. The project integrates probabilistic models of the quantities to be measured and of sensor performance, agents' utility functions and external constraints, optimization methods, and behavior modeling to assess the Value of Information via Bayesian pre-posterior analysis. Such an approach will allow an understanding of how Information Avoidance and Over Evaluation arise, and of how appropriate mechanisms of incentives and regulations can mitigate them. The project's outcomes will be key to a better empirical understanding of the attitude toward information, to developing effective large-scale monitoring of the built environment, and to public policies promoting effective information collection, integrating societal and agents' utilities.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
A rational decision maker acting in isolation should always collect free information, following the principle that "information never hurts." However, when multiple agents interact, in some settings they can find it advantageous to avoid free information: this is the phenomenon of "Information Avoidance" (IA). Better understanding IA is crucial for promoting a sustainable and efficient integration of sensors in urban systems, since examples of IA (i.e., cases in which system managers or users "prefer not to know") have been observed in many circumstances.
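The "information never hurts" principle for a single agent can be illustrated with a minimal Bayesian pre-posterior computation. The numbers below (prior failure probability, repair cost, failure loss, and a perfect free test) are hypothetical and chosen only for illustration, not taken from the project:

```python
# Minimal sketch of pre-posterior Value of Information (VoI) for a single,
# unconstrained rational agent. Hypothetical setup: a component fails with
# prior probability p; the agent can repair it (cost c_rep) or do nothing
# (expected loss p * L_fail); a free, perfect test reveals the true state.

def expected_cost_prior(p, c_rep, L_fail):
    """Optimal expected cost when acting on the prior alone."""
    return min(c_rep, p * L_fail)

def expected_cost_preposterior(p, c_rep, L_fail):
    """Expected cost when the perfect test is run first: with probability p
    the posterior jumps to 1 (failed), otherwise to 0 (intact)."""
    return p * min(c_rep, L_fail) + (1 - p) * min(c_rep, 0.0)

def voi(p, c_rep, L_fail):
    """VoI = prior cost minus pre-posterior cost; for an isolated rational
    agent this difference is never negative ("information never hurts")."""
    return expected_cost_prior(p, c_rep, L_fail) - expected_cost_preposterior(p, c_rep, L_fail)

if __name__ == "__main__":
    print(voi(0.3, 1.0, 4.0))  # prior min(1, 1.2) = 1.0, pre-posterior 0.3, VoI = 0.7
```

Sweeping `p` over [0, 1] confirms the VoI stays non-negative for any such single-agent setup, which is exactly the baseline that the multi-agent and constrained settings below violate.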
This project has investigated how IA emerges from the interaction among agents, focusing on two related problems. In the first, an agent makes decisions under "epistemic constraints," such as those imposed by societal regulations. For example, a policy may require the owner of an asset to repair it when its probability of failure (an epistemic quantity) is high. The typical decision problem is illustrated in Figure 1, where the "agent" and "society" take different actions and incur different losses. In the second problem, multiple agents compete or cooperate while managing different parts of a system.
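A toy version of the first problem shows how an epistemic constraint can make the VoI negative. All numbers here are hypothetical (a mandated repair threshold on the failure probability, an agent whose private failure loss is smaller than the repair cost), and the model is a deliberate simplification of the project's formulation:

```python
# Hypothetical sketch: an agent under an epistemic constraint. A regulation
# mandates repair whenever the agent's failure probability exceeds a
# threshold; the agent's own loss from failure (L_agent) is smaller than
# the repair cost, so forced repairs are costly to the agent.

def agent_cost_constrained(p, c_rep=1.0, L_agent=0.5, threshold=0.2):
    """Agent's expected cost at failure probability p under the mandate."""
    if p > threshold:
        return c_rep                      # repair forced by the policy
    return min(c_rep, p * L_agent)        # agent's own best action

def voi_constrained(p):
    """VoI of a free, perfect inspection for the constrained agent."""
    prior = agent_cost_constrained(p)
    # perfect test: posterior is 1 (failed) with probability p, else 0
    prepost = p * agent_cost_constrained(1.0) + (1 - p) * agent_cost_constrained(0.0)
    return prior - prepost

if __name__ == "__main__":
    print(voi_constrained(0.1))  # -0.05: the free inspection hurts the agent
```

With the prior at 0.1 (below the threshold), the uninformed agent does nothing and expects to lose 0.05; a perfect inspection triggers a mandated repair with probability 0.1, costing 0.1 in expectation, so the agent strictly prefers not to know. This is IA arising purely from the constraint, not from irrationality.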
Overcoming the impact of the selfish behavior of rational players in multi-agent systems, such as that related to IA, is a fundamental problem in game theory. In the context of these problems, the main achievements of this project have been:
An analysis, via Partially Observable Markov Decision Processes, of IA in sequential decision making under epistemic constraints. We have shown how to assess the VoI in this context, both for collecting information at the current time and for collecting sequential information over time. We have illustrated how these values are interrelated and how IA can occur, depending on the specific constraints.
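The belief dynamics behind such a sequential analysis can be sketched in a few lines. The deterioration rate, costs, and the restriction to an unconstrained myopic agent are all hypothetical simplifications; the project's POMDP treatment is far more general:

```python
# Minimal sketch of sequential belief propagation for a deteriorating
# component (hypothetical rates). The agent tracks the failure probability
# p_t over time; the myopic VoI of a perfect inspection at each step
# depends on the current belief.

def predict(p, q_det=0.05):
    """One-step belief prediction: an intact component fails w.p. q_det."""
    return p + (1 - p) * q_det

def myopic_voi(p, c_rep=1.0, L_fail=4.0):
    """VoI of a perfect inspection now, for an unconstrained agent:
    prior cost min(c_rep, p*L_fail) minus pre-posterior cost p*c_rep."""
    prior = min(c_rep, p * L_fail)
    prepost = p * min(c_rep, L_fail)
    return prior - prepost

if __name__ == "__main__":
    p = 0.0
    for t in range(10):
        p = predict(p)
        print(t, round(p, 3), round(myopic_voi(p), 3))
```

In this toy model the myopic VoI grows as the belief drifts toward the repair-or-not tipping point and shrinks once repair becomes clearly optimal; coupling such per-step values with the epistemic constraints above is what makes the sequential case interesting.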
An analysis of the VoI of inspecting components in systems managed by multiple agents, using game theory and the Nash equilibrium. We have focused on binary systems made up of binary components, where agents taking maintenance actions are responsible for the repair costs of their own components, while the penalty for system failure is shared among all agents. We show, for example, that when the information is perfect, for simple systems such as series and parallel systems, under the assumption that the global Nash equilibrium is selected, the VoI of revealing one component's status is always non-negative. When a local equilibrium is selected, we illustrate that the VoI can be negative, so that IA can occur. For general systems, even when the "best" equilibrium is selected, we have found, quite surprisingly, that the VoI can be negative for all the agents involved in the game. This happens when the information can trigger a Prisoner's Dilemma configuration: realizing this, all agents prefer to avoid information (i.e., they prefer not to inspect).
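The series-system case with perfect information and the best (minimum-total-cost) equilibrium can be reproduced in a small two-agent game. The parameters (repair cost 1, shared failure penalty 10, both components failing with prior probability 0.5) are hypothetical; the code only illustrates the non-negative-VoI claim for this simple case, not the general-system counterexamples:

```python
# Hypothetical two-agent inspection game on a two-component series system.
# Each agent repairs (r=1) or not (r=0) its own component; the system-
# failure penalty P is split equally. We compute each agent's equilibrium
# cost before and after a perfect inspection of component 1.
from itertools import product

def costs(r1, r2, p1, p2, c=1.0, P=10.0):
    """Per-agent expected cost: own repair cost plus shared failure penalty."""
    f1 = 0.0 if r1 else p1            # component failure probabilities
    f2 = 0.0 if r2 else p2
    p_fail = 1 - (1 - f1) * (1 - f2)  # series system fails if either fails
    shared = p_fail * P / 2
    return (r1 * c + shared, r2 * c + shared)

def best_equilibrium(p1, p2):
    """Enumerate pure-strategy Nash equilibria; pick the min-total-cost one."""
    eqs = []
    for r1, r2 in product((0, 1), repeat=2):
        c1, c2 = costs(r1, r2, p1, p2)
        if costs(1 - r1, r2, p1, p2)[0] >= c1 - 1e-12 and \
           costs(r1, 1 - r2, p1, p2)[1] >= c2 - 1e-12:
            eqs.append((c1, c2))
    return min(eqs, key=sum)

p1 = p2 = 0.5
prior = best_equilibrium(p1, p2)
# perfect inspection of component 1: posterior p1 is 1 w.p. p1, else 0
post_fail, post_ok = best_equilibrium(1.0, p2), best_equilibrium(0.0, p2)
voi = tuple(prior[i] - (p1 * post_fail[i] + (1 - p1) * post_ok[i]) for i in range(2))
print(voi)  # (0.5, 0.0): non-negative for both agents, as claimed
```

Here the inspection helps agent 1 (it can skip an unnecessary repair when its component turns out intact) and leaves agent 2 indifferent; exhibiting the negative-VoI Prisoner's Dilemma configurations described above requires more general system structures than this sketch covers.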
We have shown how subsidies can alleviate IA. Subsidies can reduce the so-called "price of anarchy" of the system, ensuring that the harmful effect on the social cost of the agents' selfish behavior and lack of coordination is minimized. But subsidies can also remove the occurrence of IA, forcing the VoI to be positive. We have shown that designing appropriate subsidies is computationally hard but that, by processing data about the behavior of the agents, a good subsidy mechanism can be designed with relatively small computational effort.
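How a subsidy can remove IA is easy to see in the constrained-agent toy model: subsidizing mandated repairs lowers the agent's cost of acting on bad news until the inspection stops hurting. The model and all numbers (repair cost 1, agent failure loss 0.5, threshold 0.2, prior 0.1) are hypothetical simplifications:

```python
# Hypothetical sketch: a subsidy s toward mandated repairs restores a
# non-negative VoI for an agent under an epistemic constraint (a policy
# forcing repair when the failure probability exceeds a threshold).

def agent_cost(p, s, c_rep=1.0, L_agent=0.5, threshold=0.2):
    """Agent's expected cost; the subsidy s offsets the repair cost."""
    c_eff = c_rep - s
    if p > threshold:
        return c_eff                      # repair forced by the policy
    return min(c_eff, p * L_agent)        # agent's own best action

def voi(p, s):
    """VoI of a free, perfect inspection under subsidy s."""
    prior = agent_cost(p, s)
    prepost = p * agent_cost(1.0, s) + (1 - p) * agent_cost(0.0, s)
    return prior - prepost

if __name__ == "__main__":
    # smallest subsidy (on a coarse grid) that makes the VoI non-negative
    min_s = next(s / 100 for s in range(101) if voi(0.1, s / 100) >= -1e-12)
    print(min_s)  # 0.5: the subsidy must cover the gap c_rep - L_agent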
We have also explored behavioral factors that drive IA. Specifically, we examined how salience, perceived importance, and the valence of anticipated beliefs shape individuals' willingness to seek or avoid information. These psychological mechanisms provide further insight into when agents may prefer to remain uninformed, even when information is freely available.
Overall, the outcomes of this project will be relevant for promoting a better integration between public policies and the instrumentation of urban systems. Our framework allows the identification of effective schemes for information collection that account for the preferences and utilities of agents, such as ordinary citizens and public managers, including their tendency to IA. It also provides guidelines to policy makers, identifying appropriate mechanisms, such as subsidy schemes, to alleviate IA.
Last Modified: 01/29/2025
Modified by: Matteo Pozzi