Award Abstract # 1827826
Collaborative Research: Participatory Technology Assessment and Cultures of Expertise

NSF Org: SES
Division of Social and Economic Sciences
Recipient: UNIVERSITY OF MARYLAND, COLLEGE PARK
Initial Amendment Date: August 15, 2018
Latest Amendment Date: July 17, 2020
Award Number: 1827826
Award Instrument: Continuing Grant
Program Manager: Frederick Kronz
SES - Division of Social and Economic Sciences
SBE - Directorate for Social, Behavioral and Economic Sciences
Start Date: September 1, 2018
End Date: February 28, 2023 (Estimated)
Total Intended Award Amount: $96,096.00
Total Awarded Amount to Date: $96,096.00
Funds Obligated to Date: FY 2018 = $26,816.00
FY 2019 = $26,025.00
FY 2020 = $43,255.00
History of Investigator:
  • David Tomblin (Principal Investigator)
    dtomblin@umd.edu
Recipient Sponsored Research Office: University of Maryland, College Park
3112 LEE BUILDING
COLLEGE PARK
MD  US  20742-5100
(301)405-6269
Sponsor Congressional District: 04
Primary Place of Performance: University of Maryland College Park
3112 Lee Bldg 7809 Regents Drive
College Park
MD  US  20742-5103
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): NPU8ULVAAS23
Parent UEI: NPU8ULVAAS23
NSF Program(s): STS-Sci, Tech & Society,
SciSIP-Sci of Sci Innov Policy
Primary Program Source: 01001819DB NSF RESEARCH & RELATED ACTIVIT
01001920DB NSF RESEARCH & RELATED ACTIVIT
01002021DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7567, 9178
Program Element Code(s): 760300, 762600
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.075

ABSTRACT

This award supports a research project that studies efforts to democratize expertise in the context of federal agency decision-making. It investigates whether and how participatory technology assessments have changed expert cultures at NASA, NOAA, and DOE. Very little is known about how participatory technology assessments, which are public engagement exercises in which different stakeholder groups (including citizen organizations, state systems, and non-governmental organizations) interact with scientific and technical expert groups, affect agency decision-making processes and the ways that technical experts think about lay citizens. The case studies of federal government agencies and their participatory technology assessment practices that are to be developed in this project have the potential to inform how numerous types of technical experts plan and implement such assessments in practice; the project will result in practical lessons learned for improving future collaborations. Beyond academic publications and conference presentations, the results of this study will be used to develop a best-practices handbook for effective collaboration between boundary organizations that specialize in public engagement and government agencies. Results will be presented at the seminar series of Arizona State University's Consortium for Science, Policy & Outcomes in Washington, DC, which has a long history of engaging local and federal agencies, NGOs, and academics interested in the practical application of research.

This research project will use a combination of in-depth interviews and document analysis to assess whether and how participatory technology assessments lead to reflexive changes in expert views on public input and knowledge, including how experts perceive and implement public engagement practices in decision making. Because the research team has access to technical expert decision-makers in the U.S. federal government context, the study will be able to address fundamental knowledge gaps in the public engagement and expertise literature. It will provide a comparative, applied account of federal agency experts' reflections on their participation in the adoption, framing, and implementation of participatory technology assessment and the integration of assessment results into decision-making processes. It will bring to light how particular federal agency experts and expert groups are influenced by and challenge the assessment process, and it will explore the extent to which such assessments serve as a reflexive learning device for technical expert decision-makers. It may also serve to substantiate existing theories in the public engagement and expertise literature that merely postulate improvements in decision-making processes and techno-scientific cultural change through public engagement exercises.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Leah Kaplan, Mahmud Farooque, Daniel Sarewitz, and David Tomblin, "Designing Participatory Technology Assessments: A Reflexive Method for Advancing the Public Role in Science Policy Decision-making," Technological Forecasting and Social Change, v.171, 2021. https://doi.org/10.1016/j.techfore.2021.120974

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Our study explored two important, previously understudied facets of the process of adopting, designing, implementing, and evaluating participatory technology assessment (pTA) forums (a method for collecting diverse, informed public knowledge about science and technology issues) in the U.S. government agency context: 1) the bureaucratic complexity and challenges of doing pTA, and 2) the contextual implementation and adaptation of pTA methods to different U.S. agency contexts. Our findings are derived from a review of over 100 documents and 32 interviews with public engagement organizers, federal government agency personnel, and government contractors involved with pTA projects at the National Aeronautics and Space Administration (NASA), the Department of Energy (DOE), and the National Oceanic and Atmospheric Administration (NOAA).

The first part of the study shed light on the internal workings of government agencies adopting pTA as a public engagement strategy. Our findings show that the process of developing pTA can be facilitated or hindered depending on three conditions: 1) organizational culture (e.g., historical experience with the public; how controversial a topic is; whether experimentation with public engagement is encouraged); 2) the influence of broader political controls that constrain an agency's freedom to independently make decisions (e.g., changing presidential administrations; existing federal laws governing how government agencies can interact with the public); and 3) the presence or absence of agency personnel who have working knowledge of the value and methods of pTA and how to navigate existing federal laws that govern agency interactions with the public (or public engagement methods in general). These findings will help government agencies facilitate the adoption of pTA as a public engagement practice.

The second part of the study took a deep look at how pTA methods developed by a single pTA practitioner network change from context to context. This is important because little is known about how the procedures and norms of this method are influenced by institutional context. One concern often expressed in the literature about using pTA in the government is that it is implemented as a one-size-fits-all process that serves as a check-the-box exercise for rubber-stamping existing policy commitments. This study shows the contrary. The development of pTA forums varied from context to context, involving intense negotiations between the pTA practitioners and government agency personnel on the purpose of the forums, the framing of the public inquiry, and the meaning of the public outputs.

Furthermore, for agency personnel who participated in the design of the forums, pTA came to mean many things and helped structure important conversations about science and technology issues that otherwise wouldn't have happened. Depending on the context, agency personnel saw pTA as a process and a platform for collaborative experiments; participatory planning; understanding public values; reframing an agency issue or goal; building public trust and ownership of a policy or issue; informing decision making; reaching and hearing underrepresented communities; reflecting on organizational commitments; and expanding the collective literacy or capacity of the public. These findings demonstrate that pTA is a highly flexible and adaptable public engagement tool that not only offers opportunities for empowering the public in science and technology decision-making matters, but also promotes co-learning among the public, agency personnel, and pTA practitioners. pTA also serves the diverse needs of different government agencies and the issues they are contending with, and it offers important reflection points for agency personnel on science and technology policy and the role of the public in decision-making.


Last Modified: 06/21/2023
Modified by: David Tomblin
