Award Abstract # 1618690
III: Small: Transfer Learning Within and Across Networks for Collective Classification

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: PURDUE UNIVERSITY
Initial Amendment Date: June 30, 2016
Latest Amendment Date: May 18, 2020
Award Number: 1618690
Award Instrument: Standard Grant
Program Manager: Sorin Draghici (sdraghic@nsf.gov, (703) 292-2232)
IIS, Division of Information & Intelligent Systems
CSE, Directorate for Computer and Information Science and Engineering
Start Date: July 1, 2016
End Date: June 30, 2021 (Estimated)
Total Intended Award Amount: $495,308.00
Total Awarded Amount to Date: $495,308.00
Funds Obligated to Date: FY 2016 = $495,308.00
History of Investigator:
  • Jennifer Neville (Principal Investigator)
Recipient Sponsored Research Office: Purdue University
2550 NORTHWESTERN AVE # 1100
WEST LAFAYETTE, IN 47906-1332, US
(765) 494-1055
Sponsor Congressional District: 04
Primary Place of Performance: Purdue University
IN 47907-2107, US
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): YRXVL4JYCEF5
Parent UEI: YRXVL4JYCEF5
NSF Program(s): Info Integration & Informatics
Primary Program Source: 01001617DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7364, 7923
Program Element Code(s): 736400
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Relational machine learning methods can significantly improve the predictive accuracy of models for a range of network domains, from social networks to physical and biological networks. The methods automatically learn network correlation patterns from observed data (e.g., in biological networks, a pair of interacting proteins is more likely to have the same function than two randomly selected proteins) and then use them in a collective inference process to propagate predictions throughout the network. The primary assumption in these relational methods is that model parameters estimated from one network are applicable to other networks drawn from the same distribution. However, there has been little work studying the impact of this assumption, and in particular how variability in network structure affects the performance of relational models and collective inference. This project investigates this issue in order to move beyond the implicit assumption that the network data are drawn from the same underlying distribution. The research will establish a formal framework for learning across heterogeneous network structures and characterize the impact of network structure on models of attribute correlation. The findings will deepen our understanding of how and when relational model performance generalizes across network datasets, and the work will develop new methods to improve that generalization.
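As an illustration only (this is not the project's method; the node names and labels are invented), the following minimal Python sketch shows collective inference on a toy network: a few nodes have known labels, and predictions for the remaining nodes are propagated by repeatedly taking the majority label among each node's neighbors, exploiting the kind of network correlation (homophily) described above.

```python
from collections import Counter

# Toy undirected network: node -> set of neighbors.
adjacency = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e", "f"}, "e": {"d", "f"}, "f": {"d", "e"},
}
observed = {"a": 1, "b": 1, "e": 0, "f": 0}           # known class labels
unlabeled = [n for n in adjacency if n not in observed]

# Collective inference in miniature: start from a guess for the unlabeled
# nodes, then repeatedly let each one adopt the majority label among its
# neighbors until the predictions stop changing.
labels = dict(observed)
for n in unlabeled:
    labels[n] = 1                                      # arbitrary initial guess

for _ in range(10):                                    # a few sweeps suffice here
    updated = dict(labels)
    for n in unlabeled:
        counts = Counter(labels[v] for v in adjacency[n])
        updated[n] = counts.most_common(1)[0][0]
    if updated == labels:
        break
    labels = updated

print(labels)   # "c" follows its mostly class-1 neighborhood; "d" follows e and f
```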

More specifically, in this project the PI makes the key observation that templated graphical models are often used in network classification methods. These models are composed of small (i.e., local) model templates that are "rolled out" over a heterogeneous network to dynamically construct a larger model, with variable structure, for estimation and inference. Because of this roll-out process, the generalizability of a learned model depends on the similarity between the networks used for learning and prediction. In this project, the PI will study this issue in greater depth by formalizing relational learning and collective inference as a "transfer learning" problem, with the goal of learning a model in one domain and successfully applying it to a different domain. The research will investigate how best to transfer learned knowledge within networks (i.e., from one labeled part of a network to another) and across networks (i.e., from one network in a population to another). The project will develop rigorous statistical methods and advanced computational algorithms to answer this question via four specific aims: (Aim 1) a formal foundation for assessing transferability within and across networks; (Aim 2) generative models of attributed networks for empirical investigation; (Aim 3) within-network transfer methods for non-stationary data; and (Aim 4) across-network transfer methods using template matching and global smoothing.
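To make the roll-out and transfer ideas concrete, here is a hypothetical sketch under strong simplifying assumptions (a single pairwise template, toy graphs, brute-force inference; the networks, labels, and function names are invented for illustration, not taken from the project). A homophily parameter is estimated on a fully labeled source network and then replicated over every edge of a structurally different target network to score candidate labelings; whether such a transferred parameter remains accurate is precisely the question the four aims address.

```python
import itertools
import math

def estimate_homophily(edges, labels):
    """Template parameter: P(same label | edge), fit on a fully labeled source network."""
    same = sum(labels[u] == labels[v] for u, v in edges)
    return same / len(edges)

def score_labeling(edges, labeling, p_same):
    """Log-score of a complete labeling under the template rolled out over every edge."""
    return sum(
        math.log(p_same if labeling[u] == labeling[v] else 1.0 - p_same)
        for u, v in edges
    )

# Source network: fully labeled, used only to fit the template parameter.
src_edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"),
             ("d", "e"), ("e", "f"), ("f", "d")]
src_labels = {"a": 1, "b": 1, "c": 1, "d": 0, "e": 0, "f": 0}
p_same = estimate_homophily(src_edges, src_labels)    # 6/7 on this toy graph

# Target network with different structure: only two labels are observed.
tgt_edges = [("u", "v"), ("v", "w"), ("w", "x"), ("x", "u"), ("u", "w")]
observed = {"u": 1, "x": 0}
hidden = ["v", "w"]

# Brute-force MAP over the hidden nodes (fine for a toy graph). How well the
# transferred parameter performs depends on how similar the two networks are.
best = max(
    (dict(observed, **dict(zip(hidden, assignment)))
     for assignment in itertools.product([0, 1], repeat=len(hidden))),
    key=lambda labeling: score_labeling(tgt_edges, labeling, p_same),
)
print(p_same, best)
```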

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 21)
C. Meng and C. Mouli and B. Ribeiro and J. Neville "Subgraph Pattern Neural Networks for High-Order Graph Evolution Prediction" Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018
C. Meng and J. Yang and B. Ribeiro and J. Neville "HATS: A Hierarchical Sequence-Attention Framework for Inductive Set-of-Sets Embeddings" Proceedings of the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2019
G. Gomes and V. Rao and J. Neville "Multi-level hypothesis testing for populations of heterogeneous networks" Proceedings of the 18th IEEE International Conference on Data Mining, 2018
H. Park and J. Neville "Exploiting Interaction Links for Node Classification with Deep Graph Neural Networks" Proceedings of the 29th International Joint Conference on Artificial Intelligence, 2019
H. Park and J. Neville "Role Equivalence Attention for Label Propagation in Graph Neural Networks" Proceedings of the 24th Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2020
J. Yang and B. Ribeiro and J. Neville "Should We Be Confident in Peer Effects Estimated From Partial Crawls of Social Networks?" Proceedings of the 11th International AAAI Conference on Weblogs and Social Media, 2017
J. Yang and Q. Liu and V. Rao and J. Neville "Goodness-of-fit Testing for Discrete Distributions via Stein Discrepancy" Proceedings of the 35th International Conference on Machine Learning, 2018
J. Yang and V. Rao and J. Neville "A Stein–Papangelou Goodness-of-Fit Test for Point Processes" Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
J. Yang and V. Rao and J. Neville "Decoupling Homophily and Reciprocity with Latent Space Network Models" Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence, 2017
M. Goindani and J. Neville "Cluster-Based Social Reinforcement Learning" Proceedings of the 19th International Conference on Autonomous Agents and Multi-Agent Systems, 2020
