Award Abstract # 1409739
III: Medium: Collaborative Research: Closing the User-Model Loop for Understanding Topics in Large Document Collections

NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: BRIGHAM YOUNG UNIVERSITY
Initial Amendment Date: July 30, 2014
Latest Amendment Date: July 12, 2018
Award Number: 1409739
Award Instrument: Continuing Grant
Program Manager: Hector Munoz-Avila
IIS (Division of Information & Intelligent Systems)
CSE (Directorate for Computer and Information Science and Engineering)
Start Date: August 1, 2014
End Date: July 31, 2020 (Estimated)
Total Intended Award Amount: $550,000.00
Total Awarded Amount to Date: $558,000.00
Funds Obligated to Date: FY 2014 = $267,176.00
FY 2015 = $282,824.00
FY 2018 = $8,000.00
History of Investigator:
  • Kevin Seppi (Principal Investigator)
    kseppi@byu.edu
  • Eric Ringger (Former Principal Investigator)
  • Kevin Seppi (Former Co-Principal Investigator)
Recipient Sponsored Research Office: Brigham Young University
A-153 ASB
Provo, UT 84602-1128, US
(801)422-3360
Sponsor Congressional District: 03
Primary Place of Performance: Brigham Young University
3368 TMCB
Provo, UT 84602-1231, US
Primary Place of Performance Congressional District: 03
Unique Entity Identifier (UEI): JWSYC7RUMJD1
Parent UEI:
NSF Program(s): Info Integration & Informatics
Primary Program Source: 01001415DB NSF RESEARCH & RELATED ACTIVITIES
01001516DB NSF RESEARCH & RELATED ACTIVITIES
01001819DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7364, 7924, 9150, 9251
Program Element Code(s): 736400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Individuals and organizations must cope with massive amounts of unstructured text information: individuals sifting through a lifetime of e-mail and documents, journalists understanding the activities of government organizations, companies reacting to what people say about them online, or scholars making sense of digitized documents from the ancient world. This project's research goal is to bring together two previously disconnected components of how users understand this deluge of data: algorithms to sift through the data and interfaces to communicate the results of the algorithms. This project will allow users to provide feedback to algorithms that were typically employed on a "take it or leave it" basis: if the algorithm makes a mistake or misunderstands the data, users can correct the problem using an intuitive user interface and improve the underlying analysis. This project will jointly improve both the algorithms and the interfaces, leading to deeper understanding of, faster introduction to, and greater trust in the algorithms we rely on to understand massive textual datasets. The resulting source code and functional demos will be broadly disseminated, and tutorials will be shared online and in person in educational efforts and to aid the adoption of the methodologies.

This project enables computer algorithms and humans to apply their respective strengths and collaborate in managing and making sense of large volumes of textual data. It "closes the loop" in novel ways to connect users with a class of big data analysis algorithms called topic models. This connection is made through interfaces that empower the user to change the underlying models by refining the number and granularity of topics, adding or removing words considered by the model, and adding constraints on what words appear together in topics. The underlying model also enables new visualizations in the form of a Metadata Map that uses active learning to focus users' limited attention on the most important documents in a collection. Users annotate documents with useful metadata and thereby further improve the quality of the discovered topics. The project includes evaluations of these methods through careful user studies and in-depth case studies to demonstrate that topics are more coherent, users can more quickly provide annotations, users trust the underlying algorithms more, and users can more effectively build an understanding of their textual data. The project web site (http://nlp.cs.byu.edu/closing-the-loop) will include pointers to the project Git repositories for source code, project demos, tutorials, and publications communicating experimental results.
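
As a concrete illustration of one refinement named above, here is a minimal hypothetical sketch of "removing a word considered by the model." It uses scikit-learn's batch LDA as a stand-in for the project's own interactive models; the corpus, the fit_topics helper, and the removed word are all illustrative assumptions rather than project code.

```python
# Hypothetical sketch: "remove a word from the model" as a user refinement.
# scikit-learn's batch LDA stands in for the project's interactive system;
# the corpus, fit_topics, and the removed word are illustrative assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the senate passed the budget bill",
    "the team won the championship game",
    "the stadium bill funds new construction",
]

def fit_topics(removed_words=None, n_topics=2):
    """Fit a small LDA model, excluding any words the user has removed."""
    vec = CountVectorizer(stop_words=removed_words)
    counts = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    return vec, lda

vec, lda = fit_topics()
# User feedback: "the" dominates every topic, so remove it and refit.
vec, lda = fit_topics(removed_words=["the"])
for k, weights in enumerate(lda.components_):
    top = vec.get_feature_names_out()[weights.argsort()[-3:]]
    print(f"topic {k}: {list(top)}")
```

A real interactive system would warm-start from the previous model rather than refit from scratch; keeping that refit latency low is one of the algorithmic problems this project addresses.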

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 17)
Alison Smith, Tak Yeon Lee, Forough Poursabzi-Sangdeh, Jordan Boyd-Graber, Kevin Seppi, Niklas Elmqvist, and Leah Findlater "Human-Centered and Interactive: Expanding the Impact of Topic Models" CHI Human Centred Machine Learning Workshop, 2016
Alison Smith, Varun Kumar, Jordan Boyd-Graber, Kevin Seppi, and Leah Findlater "Digging into User Control: Perceptions of Adherence and Instability in Transparent Models" Intelligent User Interfaces, 2020 https://doi.org/10.1145/3377325.3377491
Paul Felt, Eric Ringger, and Kevin Seppi "Semantic Annotation Aggregation with Conditional Crowdsourcing Models and Word Embeddings" COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan. ACL 2016, 2016, p.1787 978-4-87974-702-0
Forough Poursabzi-Sangdeh, Jordan Boyd-Graber, Leah Findlater, and Kevin Seppi "ALTO: Active Learning with Topic Overviews for Speeding Label Induction and Document Labeling" Association for Computational Linguistics, 2016 10.18653/v1/P16-1110
Jeffrey Lund, Piper Armstrong, Wilson Fearn, Stephen Cowley, Emily Hales, and Kevin Seppi "Cross-referencing Using Fine-grained Topic Modeling" 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2019, p.3978
Jeffrey Lund, Chace Ashcraft, Andrew W. McNabb, and Kevin D. Seppi "Mrs: High Performance MapReduce for Iterative and Asynchronous Algorithms in Python" 6th Workshop on Python for High-Performance and Scientific Computing, PyHPC@SC 2016, Salt Lake City, UT, USA, November 14, 2016, 2016, p.76 10.1109/PyHPC.2016.014
Jeffrey Lund, Paul Felt, Kevin D. Seppi, and Eric K. Ringger "Fast Inference for Interactive Models of Text" COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan. ACL 2016, 2016, p.2997 978-4-87974-702-0
Jeffrey Lund, Piper Armstrong, Wilson Fearn, Stephen Cowley, Courtni Byun, Jordan Boyd-Graber, and Kevin Seppi "Automatic and Human Evaluation of Local Topic Quality" Annual Meeting of the Association for Computational Linguistics (ACL), 2019 http://dx.doi.org/10.18653/v1/P19-1076
Jeffrey Lund, Stephen Cowley, Wilson Fearn, Emily Hales, and Kevin Seppi "Labeled Anchors and a Scalable, Transparent, and Interactive Classifier" 2018 Conference on Empirical Methods in Natural Language Processing, 2018
Thang Nguyen, Jordan Boyd-Graber, Jeffrey Lund, Kevin Seppi, and Eric Ringger "Is your anchor going up or down? Fast and accurate supervised topic models" North American Chapter of the Association for Computational Linguistics, 2015

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Machine learning is revolutionizing relationships, businesses, and academia. But the advanced techniques pushed by researchers are useless if people cannot use them. This project investigated how to “close the loop”: creating algorithms that meet users’ needs and building systems that bring users and algorithms together to understand and productively analyze large text datasets.

This project formalized ways for users to correct automatic clusterings of documents called “topic models”: given a large collection of text, these algorithms create an automatic summary of the primary themes in the collection. Through the project, we developed a new understanding of interactive topic models, used spectral methods to make them faster and decrease latency, and applied these insights to other forms of user information such as crowdsourced labels.
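
The spectral methods mentioned above are the anchor-word family of algorithms (see, e.g., "Is your anchor going up or down?" and "Labeled Anchors" in the publication list). The sketch below shows the general shape of such a method in the style of Arora et al. (2013), assuming a precomputed word co-occurrence matrix; the function names are invented, and nonnegative least squares stands in for the exponentiated-gradient step used in the literature. It is not the project's implementation.

```python
# Minimal sketch of anchor-word (spectral) topic recovery, in the style of
# Arora et al. (2013). Not the project's code: names are invented, and NNLS
# replaces the exponentiated-gradient step used in the literature.
import numpy as np
from scipy.optimize import nnls

def find_anchors(Q_bar, k):
    """Greedy anchor selection: repeatedly pick the word whose row is
    farthest from the span of the rows already picked (Gram-Schmidt)."""
    M = Q_bar.copy()
    anchors = []
    for _ in range(k):
        i = int(np.argmax(np.linalg.norm(M, axis=1)))
        anchors.append(i)
        u = M[i] / np.linalg.norm(M[i])
        M -= np.outer(M @ u, u)  # project the chosen direction out of every row
    return anchors

def recover_topics(Q, k):
    """Q: (V, V) word co-occurrence counts (assumed to have no empty rows).
    Returns a column-stochastic (V, k) topic-word matrix plus anchor indices."""
    p_w = Q.sum(axis=1) / Q.sum()             # marginal word probabilities
    Q_bar = Q / Q.sum(axis=1, keepdims=True)  # rows approximate p(w2 | w1)
    anchors = find_anchors(Q_bar, k)
    B = Q_bar[anchors].T                      # anchor co-occurrence profiles
    A = np.zeros((Q.shape[0], k))
    for i in range(Q.shape[0]):
        c, _ = nnls(B, Q_bar[i])              # word i as a mix of anchor profiles
        if c.sum() > 0:
            c /= c.sum()                      # renormalize onto the simplex
        A[i] = p_w[i] * c                     # rescale by how common word i is
    return A / A.sum(axis=0, keepdims=True), anchors
```

The interactive payoff is that the expensive co-occurrence statistics are computed once; a user refinement only reruns the cheap recovery step, which is the kind of latency saving described above.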

But these algorithms aren’t the end of the story: how do people actually use them? To address that question, the project ran user studies that examined which automatically created clusters of documents were most useful to users, how to evaluate that utility, and what users want from machine learning tools. Users want explanations from imperfect machine learning algorithms, and they want algorithms to surprise them by surfacing unexpected information, but not too often.

Research papers from this grant received best paper awards or nominations at CoNLL 2015 and IUI 2018.

Last Modified: 02/10/2021
Modified by: Kevin Seppi
