Award Abstract # 1652537
CAREER: Continual Automated Refinement of Human Computation Systems

NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: NORTHEASTERN UNIVERSITY
Initial Amendment Date: January 31, 2017
Latest Amendment Date: January 15, 2021
Award Number: 1652537
Award Instrument: Continuing Grant
Program Manager: William Bainbridge
IIS, Division of Information & Intelligent Systems
CSE, Directorate for Computer and Information Science and Engineering
Start Date: February 1, 2017
End Date: September 30, 2022 (Estimated)
Total Intended Award Amount: $546,784.00
Total Awarded Amount to Date: $546,784.00
Funds Obligated to Date: FY 2017 = $105,779.00
FY 2018 = $106,445.00
FY 2019 = $108,932.00
FY 2020 = $111,494.00
FY 2021 = $114,134.00
History of Investigator:
  • Seth Cooper (Principal Investigator)
    scooper@ccs.neu.edu
Recipient Sponsored Research Office: Northeastern University
360 HUNTINGTON AVE
BOSTON
MA  US  02115-5005
(617)373-5600
Sponsor Congressional District: 07
Primary Place of Performance: Northeastern University
360 Huntington Ave
Boston
MA  US  02115-5005
Primary Place of Performance Congressional District: 07
Unique Entity Identifier (UEI): HLTMVS2JZBS6
Parent UEI:
NSF Program(s): HCC-Human-Centered Computing
Primary Program Source: 01001718DB NSF RESEARCH & RELATED ACTIVITIES
01001819DB NSF RESEARCH & RELATED ACTIVITIES
01001920DB NSF RESEARCH & RELATED ACTIVITIES
01002021DB NSF RESEARCH & RELATED ACTIVITIES
01002122DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 1045, 7367
Program Element Code(s): 736700
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This research aims to improve automated tools for building systems that combine human and computational problem solving. It will lead to generalized techniques for data-driven modeling and optimization of the process of designing such systems, reducing the effort needed to create successful ones and broadening the range of problem domains to which massive amounts of human brainpower can be applied. Despite the vast computational power currently available, a broad range of important problems still relies on human reasoning or intuition. Where algorithms are either unknown or computationally intractable, human computation has arisen as a means of applying human skills to problems that neither humans nor computers could solve alone. By bringing human creativity, problem solving, and perspective to bear, humans and computers together can solve previously unsolvable problems. These systems also create a new pathway for involvement in science: a new way for people to contribute to problems that are important to them. By democratizing science in this way, they offer participation to people who might not otherwise have had such a means. Finally, this research can contribute to our understanding of how best to train people to solve challenging problems.

This work seeks to automate one aspect of the iterative refinement of human computation systems: improving the assignment of tasks to contributors. The basic approach is to construct a model of contributors and tasks, based on skill ratings and skill chains, that can be used to assign each contributor an appropriate task to complete. The model will automatically refine its skill estimates and assignments over time based on data, improving both the user experience and problem-solving outcomes. The approach is broken down into three challenge areas: 1) developing a unified skill model that combines skill atoms and skill ratings, then using that skill model for 2) crafting a difficulty curve tailored to each participant, and 3) evaluating design decisions. The approach will build on existing multi-person matchmaking systems and will be validated in multiple human computation systems.
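To make the rating-based matchmaking concrete, the following is a minimal Python sketch. The award text does not specify a rating system, so the Elo-style update rule, the K constant, the 0.7 target solve probability, and all function names here are illustrative assumptions rather than the project's actual implementation.

# Sketch of player-vs-level matchmaking driven by skill ratings.
# ASSUMPTIONS: Elo-style logistic update; K and the 0.7 target solve
# probability are illustrative constants, not values from the project.

K = 32  # rating update step; real systems tune this or track rating variance

def expected_solve(player_rating: float, level_rating: float) -> float:
    """Predicted probability that the player completes the level."""
    return 1.0 / (1.0 + 10.0 ** ((level_rating - player_rating) / 400.0))

def update_ratings(player_rating: float, level_rating: float, solved: bool):
    """Treat each attempt as a match: a solve is a 'win' for the player,
    nudging the player's rating up and the level's rating down."""
    delta = K * ((1.0 if solved else 0.0) - expected_solve(player_rating, level_rating))
    return player_rating + delta, level_rating - delta

def assign_task(player_rating: float, level_ratings: dict, target: float = 0.7):
    """Pick the level whose predicted solve chance is closest to the target,
    giving each participant a task near their current skill."""
    return min(level_ratings,
               key=lambda lvl: abs(expected_solve(player_rating, level_ratings[lvl]) - target))

# Example: a 1500-rated participant is matched against a pool of rated levels.
levels = {"intro": 1300.0, "medium": 1500.0, "hard": 1800.0}
chosen = assign_task(1500.0, levels)  # "intro", with a ~0.76 predicted solve chance
new_player, levels[chosen] = update_ratings(1500.0, levels[chosen], solved=True)

Updating the level's rating alongside the player's means task difficulty estimates refine themselves from data over time, which is the continual-refinement idea described above.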

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing 10 of 16 publications)
Horn, Britton and Cooper, Seth and Deterding, Sebastian. "Adapting Cognitive Task Analysis to Elicit the Skill Chain of a Game." Proceedings of the Annual Symposium on Computer-Human Interaction in Play, 2017. doi:10.1145/3116595.3116640
Horn, Britton and Miller, Josh Aaron and Smith, Gillian and Cooper, Seth. "A Monte Carlo approach to skill-based automated playtesting." Proceedings of the 14th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 2018.
Paranthaman, Pratheep Kumar and Sarkar, Anurag and Cooper, Seth. "Applying Rapid Crowdsourced Playtesting to a Human Computation Game." The 16th International Conference on the Foundations of Digital Games, 2021. doi:10.1145/3472538.3472626
Sarkar, Anurag and Cooper, Seth. "An Online System for Player-vs-Level Matchmaking in Human Computation Games." 2021 IEEE Conference on Games, 2021. doi:10.1109/CoG52621.2021.9619085
Sarkar, Anurag and Cooper, Seth. "Comparing paid and volunteer recruitment in human computation games." Proceedings of the 13th International Conference on the Foundations of Digital Games, 2018. doi:10.1145/3235765.3235796
Sarkar, Anurag and Cooper, Seth. "Evaluating and Comparing Skill Chains and Rating Systems for Dynamic Difficulty Adjustment." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, v.16, 2020.
Sarkar, Anurag and Cooper, Seth. "Inferring and Comparing Game Difficulty Curves using Player-vs-Level Match Data." Proceedings of the 2019 IEEE Conference on Games, 2019.
Sarkar, Anurag and Cooper, Seth. "Level Difficulty and Player Skill Prediction in Human Computation Games." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 2017.
Sarkar, Anurag and Cooper, Seth. "Meet your match rating: providing skill information and choice in player-versus-level matchmaking." Proceedings of the 13th International Conference on the Foundations of Digital Games, 2018. doi:10.1145/3235765.3235795
Sarkar, Anurag and Cooper, Seth. "Ordering Levels in Human Computation Games using Playtraces and Level Structure." Proceedings of the 2022 IEEE Conference on Games (CoG), 2022. doi:10.1109/CoG51982.2022.9893702
Sarkar, Anurag and Cooper, Seth. "Transforming Game Difficulty Curves using Function Composition." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019. doi:10.1145/3290605.3300781

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This project aimed to develop approaches for improving human computation systems, in which humans and computers work together to solve problems. Using information about how participants interact with such systems, we aimed to make the systems both better problem-solving tools and more engaging for the participants using them. Research enabled by this project included:

- An approach was developed for matchmaking participants with tasks. The techniques were primarily based on ratings of participant skill and task difficulty (skill ratings), and also incorporated the discrete skills attained by participants and needed to complete tasks, along with the relationships between those skills (skill chains). These were used to dynamically select and order tasks for participants.

- The matchmaking approach was integrated into, and evaluated in, human computation systems, including the Foldit citizen science biochemistry project. We found evidence that matchmaking could improve participant performance, such as the number and difficulty of tasks completed, compared to baseline approaches.

- Several applications and variations of the approach were developed, including comparing different task difficulty curves, predicting the difficulty of tasks, and inferring the difficulty curves of existing human computation systems (a short sketch of the difficulty-curve idea follows this list).
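The difficulty-curve work can be pictured with a small Python sketch, loosely following the function-composition idea named in the publication list above ("Transforming Game Difficulty Curves using Function Composition"). The curve shapes and names below are illustrative assumptions, not the project's implementation.

# Sketch: a difficulty curve maps a participant's progress in [0, 1] to a
# target difficulty, and composing it with a reparameterization of progress
# transforms the curve. The shapes below are illustrative examples only.

def linear(progress: float) -> float:
    """Baseline curve: target difficulty rises linearly with progress."""
    return progress

def ease_in(progress: float) -> float:
    """Monotone reparameterization that stretches out the easy early tasks."""
    return progress ** 2

def compose(curve, transform):
    """Build a new curve by applying the transform to progress first."""
    return lambda p: curve(transform(p))

gentler = compose(linear, ease_in)  # same endpoints, but easier early on
print(gentler(0.5))  # 0.25: halfway through, target difficulty is still low

Because curves built this way share endpoints but differ in shape, they can be compared against one another or fitted to match data observed in an existing system.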

This work resulted in over a dozen scientific publications, primarily on technical aspects of the approach and its evaluation.

Last Modified: 04/03/2023
Modified by: Seth Cooper
