
NSF Org: IIS, Division of Information & Intelligent Systems
Initial Amendment Date: August 19, 2015
Latest Amendment Date: August 19, 2015
Award Number: 1526012
Award Instrument: Standard Grant
Program Manager: Sylvia Spengler, sspengle@nsf.gov, (703) 292-7347, IIS Division of Information & Intelligent Systems, CSE Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2015
End Date: August 31, 2019 (Estimated)
Total Intended Award Amount: $248,790.00
Total Awarded Amount to Date: $248,790.00
Recipient Sponsored Research Office: 1 Brookings Dr, Saint Louis, MO 63130-4862, US, (314) 747-4134
Primary Place of Performance: One Brookings Drive, Saint Louis, MO 63130-4899, US
NSF Program(s): Info Integration & Informatics
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
This research project investigates the design and development of machine learning algorithms that make decisions that are interpretable by humans. As predictions of machine learning models are increasingly used in making decisions with critical consequences (e.g., in medicine or economics), it is important that decision makers understand the rationale behind these predictions. The project defines interpretable algorithms through three key properties: simplicity (intuitively comprehensible to users who are not experts in machine learning), verifiability (a clear relationship between input features and model output), and actionability (for a given input and desired output, the user can identify changes to the input features that transform the model prediction into the desired output). The project investigates how to design distance metrics supporting simplicity and verifiability, as well as algorithms to identify input changes that alter outputs. The project will be evaluated in a medical context, addressing the problem of early detection of hospital patients at risk of sudden deterioration.
This work builds on the well-understood k-Nearest-Neighbor classifier, which would inherently seem to provide simplicity and verifiability. The challenge arises in high dimensions, e.g., in document classification, where differences are spread across more dimensions than are humanly comprehensible. The project uses novel dimensionality reduction approaches to create dissimilarity metrics that are both interpretable and accurate. Visualization techniques to present this data will be explored, including techniques supporting more complex classification approaches such as ensembles. The project investigates novel methods for delivering actionability in machine learning algorithms by identifying changes that can truly transform an entity's class membership, a problem that has recently been identified as surprisingly difficult. A secondary outcome will be improvements in classifier robustness, as small changes that flip class membership are a good indication of non-robustness.
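To illustrate the actionability idea in the abstract, the following is a minimal toy sketch, not the project's actual algorithm: it greedily perturbs one feature of an input at a time until a k-Nearest-Neighbor classifier's prediction flips to a desired class. The dataset, step size, and greedy search strategy are all assumptions made for the example.

```python
# Hypothetical sketch of "actionability": find a small feature change that
# flips a k-NN prediction to a target class (greedy coordinate search).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

def actionable_change(x, target, step=0.25, max_iters=50):
    """Greedily perturb one feature at a time until the k-NN prediction
    becomes `target`; return the modified input."""
    x = x.copy()
    for _ in range(max_iters):
        if knn.predict(x.reshape(1, -1))[0] == target:
            break
        best_j, best_dx, best_p = None, 0.0, -np.inf
        for j in range(x.size):
            for dx in (step, -step):
                trial = x.copy()
                trial[j] += dx
                p = knn.predict_proba(trial.reshape(1, -1))[0, target]
                if p > best_p:
                    best_j, best_dx, best_p = j, dx, p
        x[best_j] += best_dx
    return x

x0 = X[0]
x_new = actionable_change(x0, target=1 - knn.predict(x0.reshape(1, -1))[0])
changed = np.nonzero(np.abs(x_new - x0) > 1e-9)[0]
print("features changed:", changed, "new label:", knn.predict(x_new.reshape(1, -1))[0])
```

The per-feature deltas returned by such a search are exactly the kind of explanation the abstract calls actionable: they tell the user which inputs to change, and by how much, to obtain the desired outcome.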
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Machine learning has achieved great success in many applications. However, one of the key problems with machine learning algorithms is that their output is not always comprehensible to humans. This is especially true of neural networks, which are now extremely popular: they output probability estimates that supposedly communicate the certainty of a prediction. These probability estimates, however, can be highly misleading and may well be a key reason why neural networks are considered so uninterpretable. If, for example, a neural network reports that an image contains a specific object with 90% probability, this does not mean that it is right 90% of the time.
In this project, we have investigated how neural networks can be calibrated so that their outputs are "truthful" and really do match reality and human expectations. We developed and compared metrics to measure how miscalibrated neural networks are, and then investigated various approaches to recalibrate them. Ultimately, we found that this can be done quite accurately most of the time.
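The report does not name the specific metrics or recalibration methods, so the sketch below only illustrates one standard pairing under assumed choices: the expected calibration error (ECE) as a miscalibration metric and post-hoc temperature scaling of a classifier's logits, demonstrated on synthetic, deliberately overconfident logits.

```python
# Minimal sketch (assumed methods): expected calibration error (ECE) and
# temperature scaling as a simple post-hoc recalibration of logits.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ece(probs, labels, n_bins=15):
    """Average |accuracy - confidence| over confidence bins, weighted by bin size."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            err += mask.mean() * abs((pred[mask] == labels[mask]).mean() - conf[mask].mean())
    return err

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature that minimizes negative log-likelihood on held-out data."""
    nll = lambda T: -np.log(softmax(logits, T)[np.arange(len(labels)), labels] + 1e-12).mean()
    return min(grid, key=nll)

# Toy example: synthetic logits that are sharpened (overconfident) by a factor of 3.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=2000)
hit = rng.random(2000) < 0.7                      # ~70% of samples are "learned"
logits = rng.normal(size=(2000, 10))
logits[np.arange(2000), labels] += 2.0 * hit      # boost the true class for learned samples
logits *= 3.0                                     # overconfidence

T = fit_temperature(logits, labels)
print("ECE before:", ece(softmax(logits), labels), "after:", ece(softmax(logits, T), labels))
```

Temperature scaling leaves the predicted classes (and hence accuracy) unchanged; it only rescales confidence so that a stated 90% probability is right about 90% of the time, which is the notion of "truthful" output described above.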
Another key question that arises in the context of interpretable neural networks is what kind of feature spaces deep neural networks learn. There is a general misconception that neural networks are very accurate but that their intermediate representations are a "black box". We decided to investigate this thoroughly and conjectured that the intermediate feature representations of convolutional neural networks do indeed learn a linearized version of the input image manifold. Proving this conjecture is very hard and has so far been elusive. However, we managed to come up with an experimental setup that provides insight into whether this claim is true. If neural networks really learn a linearized version of the manifold, then traversals along the manifold must become linear in feature space. For example, if you have a picture of a man without facial hair, adding facial hair would move that image to a very different location on the image manifold. Similarly, aging moves images from areas of the manifold populated by images of young people towards areas populated by images of older people. These transformations are highly nonlinear if performed in pixel space; however, we showed that if you perform them in deep feature space they reduce to a simple linear translation. By reconstructing the manipulated image we can show that the final outcome is indeed on the image manifold.
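A schematic sketch of such a linear traversal in deep feature space is shown below. The feature extractor, layer choice, step size, and the placeholder image batches are all assumptions for illustration; the project's actual pipeline (and its image reconstruction step) may differ, and reconstruction is omitted here.

```python
# Sketch of an attribute edit as a linear translation in deep feature space
# (placeholder data; reconstruction back to pixels is omitted).
import torch
import torchvision.models as models

# Pretrained convolutional features (assumed layer cut-off; weights download on first use).
extractor = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:21].eval()

def deep_features(images):
    """Map a batch of images (N, 3, 224, 224) to flattened convolutional features."""
    with torch.no_grad():
        return extractor(images).flatten(start_dim=1)

# Placeholder batches standing in for real face images with/without an attribute
# (e.g., "has facial hair" vs. "no facial hair"), plus a source image to edit.
with_attr = torch.rand(16, 3, 224, 224)
without_attr = torch.rand(16, 3, 224, 224)
source = torch.rand(1, 3, 224, 224)

# Attribute direction = difference of mean deep features between the two sets;
# the traversal is a simple linear translation along that direction.
direction = deep_features(with_attr).mean(0) - deep_features(without_attr).mean(0)
edited_features = deep_features(source) + 1.5 * direction   # 1.5 = assumed step size

# Mapping `edited_features` back to an image would require inverting the feature
# map (e.g., an optimization or a trained decoder), which is not shown here.
print(edited_features.shape)
```

The point of the experiment described above is that a highly nonlinear pixel-space edit (adding facial hair, aging a face) reduces to this single vector addition once the images are mapped into deep feature space.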
As a key area of application, we have applied our interpretable learning techniques to the medical domain. We collaborated with the School of Medicine at Washington University in St. Louis on predicting hospital readmissions for hospitalized patients. For this task we need not only accurate predictions, but also explanations of potential risks. We have developed such a model that can be used by physicians.
For broader impacts, we have disseminated our results at conferences, seminars, and workshops. Our research effort to understand the internal workings of neural networks has led us to designs of much more compact neural network architectures, which are especially suitable for mobile applications. It is therefore highly likely that our research will have a significant impact through technology transfer. We have trained graduate and undergraduate students, and integrated the research into the teaching of our graduate courses.
Last Modified: 04/25/2020
Modified by: Yixin Chen