Award Abstract # 1144520
EAGER: Programming the Crowd

NSF Org: CCF / Division of Computing and Communication Foundations
Recipient: UNIVERSITY OF MASSACHUSETTS
Initial Amendment Date: August 17, 2011
Latest Amendment Date: July 11, 2014
Award Number: 1144520
Award Instrument: Standard Grant
Program Manager: Sol Greenspan
sgreensp@nsf.gov
(703)292-7841
CCF / Division of Computing and Communication Foundations
CSE / Directorate for Computer and Information Science and Engineering
Start Date: January 1, 2012
End Date: May 31, 2015 (Estimated)
Total Intended Award Amount: $150,000.00
Total Awarded Amount to Date: $180,000.00
Funds Obligated to Date: FY 2011 = $150,000.00
FY 2014 = $30,000.00
History of Investigator:
  • Emery Berger (Principal Investigator)
    emery@cs.umass.edu
Recipient Sponsored Research Office: University of Massachusetts Amherst
101 COMMONWEALTH AVE
AMHERST
MA  US  01003-9252
(413)545-0698
Sponsor Congressional District: 02
Primary Place of Performance: University of Massachusetts Amherst
70 Butterfield Terrace
Amherst
MA  US  01003-9242
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): VGJHK59NMPK9
Parent UEI: VGJHK59NMPK9
NSF Program(s): Software & Hardware Foundations
Primary Program Source: 01001112DB NSF RESEARCH & RELATED ACTIVITIES
01001415DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7916, 7943, 7944
Program Element Code(s): 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

People can easily perform many tasks that remain difficult or impossible for computers, including vision, motion planning, and natural language understanding. New "crowdsourcing" platforms like Amazon's Mechanical Turk make it easier than ever to harness human computational power by streamlining job posting, tracking, and payment for workers. However, the lack of automation means that crowdsourcing currently does not scale. Low-quality results must be filtered out, but checking human computations can be difficult. Economic incentives and anonymity expose crowdsourcing to fraud. Deciding how much to pay workers for particular tasks, and how many workers to hire, remains an art.

This project introduces crowdprogramming, an approach that fully integrates human and digital computation. In crowdprogramming, humans are modeled as function calls in a standard programming language. This approach lets programmers focus on program logic, while the crowdprogramming runtime system manages the critical tradeoffs among cost, time, and data quality. Crowdprogramming will dramatically lower the barriers to harnessing human computational power. It will enable a rich new class of applications that divide labor between digital and human computation, where computers and humans each do the work they do best; orchestrate complex human computations; automatically control quality to maintain high accuracy and avoid fraud; and schedule tasks and adjust payments to maximize speed while staying within budget. By streamlining the incorporation of human labor into computation, crowdprogramming has the potential to add an entirely new job sector to the economy.

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Humans can easily perform many tasks that remain difficult or impossible for computers, such as vision, motion planning, and natural language understanding. Most researchers expect these tasks to remain beyond the reach of computers for the foreseeable future.

This research explores an approach called crowdprogramming that makes it possible to write computer programs that integrate digital and human-based computation. In effect, crowdprogramming makes it possible to “program with people.” Crowdprogramming opens up the possibility of an entirely new class of computer applications in which computers and humans perform the work each does best. It can dramatically increase the scope and scale of computations available to ordinary programmers. By streamlining the incorporation of human labor into computation, crowdprogramming even has the potential to add a whole new class of jobs to the economy.

We designed and implemented the first completely automatic crowdprogramming system, which we call AutoMan. AutoMan makes it easy to incorporate human-based computation directly into a standard programming language, eliminating the barriers that otherwise make such integration challenging.

AutoMan leverages crowdsourcing systems that streamline the process of hiring humans to perform computational tasks. AutoMan uses the most popular of these systems, Amazon’s Mechanical Turk, a web-based platform that matches employers with employees who perform “human intelligence tasks.”

A programmer using AutoMan simply describes the task to be performed, which can be a multiple-choice question or a fill-in-the-blank form. Behind the scenes, AutoMan handles all of the details of performing human computation automatically.
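
To make this concrete, here is a minimal, self-contained sketch of the idea in Scala, the language AutoMan is embedded in. It is not AutoMan's actual API: the console stands in for Mechanical Turk, and the hypothetical helper askHuman stands in for a posted multiple-choice task. The point is only that, to the surrounding program, a human task looks like an ordinary function call.

    // A minimal sketch of "humans as function calls" (illustrative only;
    // this is not AutoMan's API). The console plays the role of the crowd.
    object CrowdSketch {
      // Stand-in for posting a multiple-choice task to a crowd platform.
      def askHuman(prompt: String, options: Seq[String]): String = {
        println(prompt)
        options.zipWithIndex.foreach { case (o, i) => println(s"  ${i + 1}) $o") }
        options(scala.io.StdIn.readInt() - 1)
      }

      def main(args: Array[String]): Unit = {
        // To the rest of the program, this human computation is
        // indistinguishable from any other function call.
        val answer = askHuman(
          "Which of these does not belong?",
          Seq("apple", "banana", "carburetor", "pear"))
        println(s"Answer: $answer")
      }
    }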

First, AutoMan posts tasks and manages workers by communicating with Mechanical Turk. It computes on the fly how much to pay each worker: if no workers accept a task, AutoMan reasons that the pay must be too low and raises it. It is possible to prove mathematically that workers cannot game this system; it is always in their best interest to take a job whenever they are satisfied with the current rate of pay. Finally, AutoMan verifies the quality of each result by checking workers against each other. For each question, AutoMan recruits enough people to guarantee that when they agree on an answer, the agreement is highly unlikely to have arisen by random chance.
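
As a simplified illustration of that reasoning (an assumption-laden sketch, not AutoMan's actual algorithms): for a question with c options, the chance that n random guessers all pick the same answer is c × (1/c)^n, so a system in this style can keep recruiting until unanimous agreement would be improbable. The 0.05 threshold and the pay-doubling rule below are illustrative choices.

    // Simplified model of the two mechanisms above (illustrative only).
    object SchedulerSketch {
      // Probability that `workers` uniformly random guessers all pick the
      // same one of `options` choices: options * (1/options)^workers.
      def chanceOfUnanimity(options: Int, workers: Int): Double =
        options * math.pow(1.0 / options, workers)

      // Smallest unanimous group for which random agreement falls below
      // the significance threshold `alpha`.
      def workersNeeded(options: Int, alpha: Double = 0.05): Int =
        Iterator.from(2).find(n => chanceOfUnanimity(options, n) < alpha).get

      // Miniature pay-escalation rule: if a round ends with no takers,
      // raise (here: double) the reward and repost.
      def nextRewardCents(current: Int, hadTakers: Boolean): Int =
        if (hadTakers) current else current * 2

      def main(args: Array[String]): Unit = {
        println(workersNeeded(options = 4))            // 4: 4*(1/4)^4 ≈ 0.016 < 0.05
        println(nextRewardCents(6, hadTakers = false)) // 12
      }
    }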

We have used AutoMan to quickly build a number of new, sophisticated applications. One example is an automatic license-plate reading program. Because AutoMan makes it easy to use human labor in an application, a graduate student was able to write this program in less than a day. We fed it a range of difficult-to-read license plates. The program achieved over 90% accuracy. Each plate took under two minutes to process, and the average cost for each plate was just over ten cents.
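
For a sense of how such an application is put together, here is a hedged structural sketch. The helper transcribePlate is hypothetical: in the real program, a fill-in-the-blank AutoMan task on Mechanical Turk would sit behind it, backed by the pricing and quality machinery described above.

    // Hypothetical outline of the license-plate reader (illustrative only).
    object PlateReaderSketch {
      // Stand-in for a fill-in-the-blank human task; stubbed with the console.
      def transcribePlate(imageUrl: String): String =
        scala.io.StdIn.readLine(s"Transcribe the plate at $imageUrl: ")

      def main(args: Array[String]): Unit = {
        // Digital part: iterate over images; human part: read each plate.
        val results = args.toList.map(url => url -> transcribePlate(url))
        results.foreach { case (url, text) => println(s"$url -> $text") }
      }
    }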

We have made the AutoMan system publicly available as an open source project at http://automan-lang.org. It runs on all standard computer platforms.

Although AutoMan simplifies many tasks for programmers, it also presents new challenges. Because it depends on a marketplace of human workers, success with AutoMan depends on a programmer’s ability to make tasks sufficiently desirable to workers. It also depends on the programmer writing clear instructions in English, which can be surprisingly tricky.

To assist software developers using AutoMan, we have developed two tools: a debugger and a simulator. The AutoMan Interactive Debugger lets programmers visually monitor the performance of running AutoMan programs.
