Award Abstract # 1715095
III: Small: Searching for Answers through Iterative Feedback

NSF Org: IIS (Division of Information & Intelligent Systems)
Recipient: UNIVERSITY OF MASSACHUSETTS
Initial Amendment Date: August 4, 2017
Latest Amendment Date: September 12, 2017
Award Number: 1715095
Award Instrument: Continuing Grant
Program Manager: Wei-Shinn Ku
IIS, Division of Information & Intelligent Systems
CSE, Directorate for Computer and Information Science and Engineering
Start Date: August 15, 2017
End Date: July 31, 2021 (Estimated)
Total Intended Award Amount: $492,023.00
Total Awarded Amount to Date: $492,023.00
Funds Obligated to Date: FY 2017 = $492,023.00
History of Investigator:
  • W. Bruce Croft (Principal Investigator)
    croft@cs.umass.edu
Recipient Sponsored Research Office: University of Massachusetts Amherst
101 COMMONWEALTH AVE
AMHERST
MA  US  01003-9252
(413)545-0698
Sponsor Congressional District: 02
Primary Place of Performance: University of Massachusetts Amherst
OGCA, 70 Butterfield Terrace
Amherst
MA  US  01003-9242
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): VGJHK59NMPK9
Parent UEI: VGJHK59NMPK9
NSF Program(s): Info Integration & Informatics
Primary Program Source: 01001718DB NSF RESEARCH & RELATED ACTIVITIES
01001920DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7364, 7923, 7924
Program Element Code(s): 736400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

In current web search engines, the response to a query is typically a series of pages that contain ranked results (search engine result pages, or SERPs). The increasing use of mobile search places a premium on making the most of the limited display space available. Similarly, voice-based search, where questions are posed through voice recognition and answers are delivered through speech generation, is becoming more common and likewise limits the interaction bandwidth between the system and the user. In these situations, the ability to deliver more precise answers to a broad range of questions, rather than a ranked display of results, becomes critical. Given a search system that can return a ranked list of possible answers instead of documents, and a search environment that limits the bandwidth between user and system, the important research question at the focus of this proposal is: what is the most effective way to present and interact with a ranked list of answers, when the goal is to identify one or more satisfactory answers as quickly as possible? Understanding this problem and discovering solutions to it will have a large impact on the future development of search engines.

This project will work on four research tasks: (a) develop and evaluate iterative relevance feedback models for answers; (b) develop and evaluate interactive summarization techniques for answers; (c) develop and evaluate finer-grained feedback approaches for answers; and (d) develop and evaluate a conversation-based model for answer retrieval. This project will be the first to study methods and models for interacting with ranked lists of answers. Many researchers are developing neural models for the factoid question-answering task, but this effort is one of just a few looking at the problem of finding non-factoid answers in passages of documents. The experience gained from developing neural models for this complex task provides the background for the unique tasks and approaches described in this proposal, which address the key, but previously ignored, issue of how to make effective use of ranked lists of answers to interact with users and improve the results from neural answer retrieval models. The latter part of the project will address the use of conversational models in search, which is also becoming increasingly important but has not yet been studied.
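To make task (a) concrete, the following minimal sketch illustrates iterative relevance feedback over answer passages in the classic Rocchio style. It is an illustrative reconstruction, not the project's model (the project's models are neural); the TF-IDF representation, the alpha/beta/gamma weights, and the toy passages are all assumptions.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy answer passages; a real system would retrieve these per query.
passages = [
    "Relevance feedback refines a query using judged results.",
    "Neural ranking models score query-passage pairs.",
    "Rocchio moves the query vector toward relevant passages.",
]
vectorizer = TfidfVectorizer()
P = vectorizer.fit_transform(passages).toarray()

def rocchio(q, rel_ids, nonrel_ids, alpha=1.0, beta=0.75, gamma=0.15):
    """One feedback iteration: shift the query vector toward passages
    judged relevant and away from those judged non-relevant."""
    rel = P[rel_ids].mean(axis=0) if rel_ids else 0.0
    nonrel = P[nonrel_ids].mean(axis=0) if nonrel_ids else 0.0
    return alpha * q + beta * rel - gamma * nonrel

q = vectorizer.transform(["how does relevance feedback work"]).toarray()[0]
for _ in range(2):  # each pass: rank answers, collect judgments, update query
    ranking = cosine_similarity([q], P)[0].argsort()[::-1]
    print([passages[i] for i in ranking])
    # The top result is treated as relevant and the bottom one as
    # non-relevant; a real system would use the user's actual judgments.
    q = rocchio(q, rel_ids=[ranking[0]], nonrel_ids=[ranking[-1]])

In an answer-retrieval setting, each iteration shows the user a short ranked list, collects judgments on it, and re-ranks the remaining candidates, so that a satisfactory answer surfaces in as few rounds as possible.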

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 22)
Ai, Q. and Bi, K. and Croft, W. B. "Asking Clarifying Questions Based on Negative Feedback in Conversational Search" Proceedings of the 7th ACM International Conference on the Theory of Information Retrieval (ICTIR 2021), 2021. https://doi.org/10.1145/3471158.3472232
Aliannejadi, Mohammad and Zamani, Hamed and Crestani, Fabio and Croft, W. Bruce "Asking Clarifying Questions in Open-Domain Information-Seeking Conversations" Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '19), 2019. https://doi.org/10.1145/3331184.3331265
Bi, K. "Iterative Relevance Feedback for Answer Passage Retrieval with Passage-Level Semantic Match" Proceedings of the European Conference on Information Retrieval (ECIR '19), 2019. https://doi.org/10.1007/978-3-030-15712-8_36
Bi, K. and Ai, Q. and Croft, W. B. "Revisiting Iterative Relevance Feedback for Document and Passage Retrieval" SIGIR Workshop on Conversational Interaction Systems (WCIS '19), 2019.
Bi, K. and Teo, C. and Mohan, V. and Croft, W. B. "Leverage Implicit Feedback for Context-aware Product Search" SIGIR 2019 Workshop on eCommerce (ECOM '19), 2019.
Bi, Keping and Ai, Qingyao and Zhang, Yongfeng and Croft, W. Bruce "Conversational Product Search Based on Negative Feedback" Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM '19), 2019. https://doi.org/10.1145/3357384.3357939
Guo, Jiafeng and Fan, Yixing and Pang, Liang and Yang, Liu and Ai, Qingyao and Zamani, Hamed and Wu, Chen and Croft, W. Bruce and Cheng, Xueqi "A Deep Look into Neural Ranking Models for Information Retrieval" Information Processing & Management, v.57, 2020. https://doi.org/10.1016/j.ipm.2019.102067
Hashemi, H. "ANTIQUE: A Non-factoid Question Answering Benchmark" Proceedings, Part II, of the 42nd European Conference on Information Retrieval (ECIR 2020), 2020. https://doi.org/10.1007/978-3-030-45442-5_21
Hashemi, Helia and Zamani, Hamed and Croft, W. Bruce "Guided Transformer: Leveraging Multiple External Sources for Representation Learning in Conversational Search" Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2020), 2020. https://doi.org/10.1145/3397271.3401061
Hashemi, Helia and Zamani, Hamed and Croft, W. Bruce "Performance Prediction for Non-Factoid Question Answering" Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval (ICTIR '19), 2019. https://doi.org/10.1145/3341981.3344249
Qu, C. and Yang, L. and Chen, C. and Croft, W. B. and Krishna, K. and Iyyer, M. "Weakly-Supervised Open-Retrieval Conversational Question Answering" Proceedings of the 43rd European Conference on Information Retrieval (ECIR 2021), 2021. https://doi.org/10.1007/978-3-030-72113-8_35

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

In current web search engines, the response to a query is typically a series of pages that contain ranked results. The increasing use of mobile search places a premium on making the most of the limited display space available. Similarly, voice-based search, where both questions and answers are handled through voice recognition and speech generation, is becoming more common and likewise limits the interaction bandwidth between the system and the user. In these situations, the ability to deliver more precise answers to a broad range of questions, rather than a ranked display of results, becomes critical. This change in the nature of search leads to the important research question at the focus of this research: what is the most effective way to present and interact with a ranked list of answers, when the goal is to identify one or more satisfactory answers as quickly as possible?

To address this issue, we have made contributions to four research tasks: (a) developing and evaluating iterative relevance feedback models for answers; (b) developing and evaluating interactive summarization techniques for answers; (c) developing and evaluating finer-grained feedback approaches for answers; and (d) developing and evaluating conversation-based models for answer retrieval. We have published 24 papers on these topics and produced three theses as outcomes of this grant. In addition, we developed new testbeds that are being used throughout the research community.
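As a rough illustration of the conversation-based retrieval in task (d), the toy sketch below runs an answer loop with negative feedback: when the user rejects the shown answer, candidates similar to it are demoted before the next turn. The embeddings, the penalty weight, and the simulated rejections are assumptions made for illustration and do not reproduce the published models.

import numpy as np

rng = np.random.default_rng(0)
answers = ["answer A", "answer B", "answer C", "answer D"]
emb = rng.normal(size=(len(answers), 8))    # stand-in answer embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
scores = rng.random(len(answers))           # stand-in retrieval scores

def reject(shown, scores, penalty=0.5):
    """Negative feedback: demote every candidate in proportion to its
    similarity to the answer the user just rejected."""
    sim = emb @ emb[shown]
    scores = scores - penalty * np.clip(sim, 0.0, None)
    scores[shown] = -np.inf                 # never show the rejected answer again
    return scores

for turn in range(3):
    best = int(np.argmax(scores))
    print(f"turn {turn}: showing {answers[best]}")
    scores = reject(best, scores)           # simulate the user saying "no"

Each conversational turn thus narrows the candidate pool using what the user has already rejected, rather than re-running the same ranking.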


Last Modified: 08/31/2021
Modified by: W. Bruce Croft
