Award Abstract # 1912958
Collaborative Research: Sparse Optimization in Large Scale Data Processing: A Multiscale Proximity Approach

NSF Org: DMS
Division Of Mathematical Sciences
Recipient: OLD DOMINION UNIVERSITY RESEARCH FOUNDATION
Initial Amendment Date: June 11, 2019
Latest Amendment Date: June 11, 2019
Award Number: 1912958
Award Instrument: Standard Grant
Program Manager: Yuliya Gorb
ygorb@nsf.gov
 (703)292-2113
DMS
 Division Of Mathematical Sciences
MPS
 Directorate for Mathematical and Physical Sciences
Start Date: July 1, 2019
End Date: June 30, 2023 (Estimated)
Total Intended Award Amount: $125,000.00
Total Awarded Amount to Date: $125,000.00
Funds Obligated to Date: FY 2019 = $125,000.00
History of Investigator:
  • Yuesheng Xu (Principal Investigator)
    y1xu@odu.edu
Recipient Sponsored Research Office: Old Dominion University Research Foundation
4111 MONARCH WAY STE 204
NORFOLK
VA  US  23508-2561
(757)683-4293
Sponsor Congressional District: 03
Primary Place of Performance: Old Dominion University
5115 Hampton Blvd.
Norfolk
VA  US  23529-0001
Primary Place of Performance Congressional District: 03
Unique Entity Identifier (UEI): DSLXBD7UWRV6
Parent UEI: DSLXBD7UWRV6
NSF Program(s): COMPUTATIONAL MATHEMATICS
Primary Program Source: 01001920DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 9263
Program Element Code(s): 127100
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.049

ABSTRACT

There is an emerging demand in areas of national strategic interest, such as information technology, nanotechnology, biotechnology, civil infrastructure, and the environment, for extracting useful knowledge for decision making, or for uncovering underlying truths, from large-scale data acquired through various means such as sensors and the internet. A core issue in these areas is to develop accurate mathematical models that govern the extraction process and to design efficient algorithms that solve the underlying optimization problems for those models. A challenge in these tasks comes from the large-scale nature of the given data: a large number of model parameters must be determined, which is computationally expensive. To address this challenge, this project will take advantage of intrinsic multiscale structures of the given data in modeling, so that the resulting models have significantly fewer parameters to be determined. It is also crucial to introduce efficient algorithms for solving the resulting optimization problems, which have intrinsic multiscale structures. A second goal of this research is to provide, through the proposed research and its associated educational components, rigorous training of young mathematicians and computational scientists so that they have the skill sets needed to face the challenges of the big data era. Outcomes of the proposed research and its educational component will contribute to these Federal strategic interest areas.

This research project addresses several critical issues in processing large-scale data, such as high dimensionality and high noise, by properly choosing structured sparsity-promoting non-convex functions in modeling, and by combining the multiscale representation of data with fixed-point equations/inclusions involving the proximity operator to solve the resulting optimization problems. Structured non-convex sparsity-promoting functions are proposed to overcome drawbacks of existing models of large-scale data, leading to the design of efficient single-scale proximity algorithms. Multiscale analysis has been developed to represent data efficiently, but how a multiscale representation of data can be used to improve convergence of fixed-point proximity algorithms remains unsolved. The proposed multiscale proximity method avoids iterating on the full, large-scale fixed-point equation/inclusion. Instead, when data are represented in a multiscale analysis, iterations of the multiscale proximity algorithm are conducted only on a (small-scale) lower-frequency component of the equation/inclusion (based on a single-scale algorithm), and only one functional evaluation on the (large-scale) higher-frequency component is required. The multiscale algorithm will preserve the accuracy of the single-scale algorithm while accelerating its convergence significantly, yielding a fast algorithm for solving fixed-point equations/inclusions involving the proximity operator.
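
As a point of reference for the fixed-point proximity framework described above, the following Python sketch implements a standard single-scale proximal-gradient (ISTA-type) iteration for an l1-regularized least-squares problem. The model problem, the step-size rule, and the function names are illustrative assumptions; this is not the project's specific model or its multiscale algorithm.

import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t*||.||_1, applied componentwise (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def single_scale_proximity(A, b, lam, n_iter=500):
    # Solves min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iterating the fixed-point
    # relation x = prox_{step*lam*||.||_1}(x - step*A^T(Ax - b)).
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # step size small enough for convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                # gradient of the smooth data-fit term
        x = soft_threshold(x - step * grad, step * lam)
    return x

In the multiscale variant outlined above, the unknown would be expressed in a multiscale basis and the update would be carried out only on the small, low-frequency block of coefficients, with a single functional evaluation on the high-frequency block; that splitting depends on the project's specific multiscale analysis and is not reproduced here.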

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 19)
Cheng, Raymond and Xu, Yuesheng "Minimum norm interpolation in the ℓ1(ℕ) space" Analysis and Applications, v.19, 2021 https://doi.org/10.1142/S0219530520400059
Chen, Yun and Huang, Jiasheng and Li, Si and Lu, Yao and Xu, Yuesheng "A content-adaptive unstructured grid based integral equation method with the TV regularization for SPECT reconstruction" Inverse Problems & Imaging, v.14, 2020 https://doi.org/10.3934/ipi.2019062
Chen, Yun and Lu, Yao and Ma, Xiangyuan and Xu, Yuesheng "A content-adaptive unstructured grid based regularized CT reconstruction method with a SART-type preconditioned fixed-point proximity algorithm" Inverse Problems, v.38, 2022 https://doi.org/10.1088/1361-6420/ac490f
Diefenthaler, Markus and Farhat, Abdullah and Verbytskyi, Andrii and Xu, Yuesheng "Deeply learning deep inelastic scattering kinematics" The European Physical Journal C, v.82, 2022 https://doi.org/10.1140/epjc/s10052-022-10964-z
Guo, Jianfeng and Schmidtlein, C. Ross and Krol, Andrzej and Li, Si and Lin, Yizun and Ahn, Sangtae and Stearns, Charles and Xu, Yuesheng "A Fast Convergent Ordered-Subsets Algorithm With Subiteration-Dependent Preconditioners for PET Image Reconstruction" IEEE Transactions on Medical Imaging, v.41, 2022 https://doi.org/10.1109/TMI.2022.3181813
Gifford, Howard C. and Schmidtlein, C. Ross "An Assessment of PET Dose Reduction with Penalized Likelihood Image Reconstruction using a Computationally Efficient Model Observer" Medical Imaging 2020: Physics of Medical Imaging, 2020
Jiang, Gongfa and He, Zilong and Zhou, Yuanpin and Wei, Jun and Xu, Yuesheng and Zeng, Hui and Wu, Jiefang and Qin, Genggeng and Chen, Weiguo and Lu, Yao "Multiscale cascaded networks for synthesis of mammogram to decrease intensity distortion and increase model-based perceptual similarity" Medical Physics, v.50, 2022 https://doi.org/10.1002/mp.16007
Jiang, Gongfa and Wei, Jun and Xu, Yuesheng and He, Zilong and Zeng, Hui and Wu, Jiefang and Qin, Genggeng and Chen, Weiguo and Lu, Yao "Synthesis of Mammogram From Digital Breast Tomosynthesis Using Deep Convolutional Neural Network With Gradient Guided cGANs" IEEE Transactions on Medical Imaging, v.40, 2021 https://doi.org/10.1109/TMI.2021.3071544
Liu, Weifeng and Jiang, Ying and Xu, Yuesheng "A Super Fast Algorithm for Estimating Sample Entropy" Entropy, v.24, 2022 https://doi.org/10.3390/e24040524
Liu, Xiaoxia and Lu, Jian and Shen, Lixin and Xu, Chen and Xu, Yuesheng "Multiplicative Noise Removal: Nonlocal Low-Rank Model and Its Proximal Alternating Reweighted Minimization Algorithm" SIAM Journal on Imaging Sciences, v.13, 2020 https://doi.org/10.1137/20M1313167
Xu, Yuesheng and Zeng, Taishan "Sparse Deep Neural Network for Nonlinear Partial Differential Equations" Numerical Mathematics: Theory, Methods and Applications, v.16, 2022 https://doi.org/10.4208/nmtma.OA-2022-0104

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The research findings of this project include:

(1)   We developed a fast multiscale functional estimation method for optimal EMG placement in robotic prosthesis controllers, using multiscale piecewise polynomial bases.

(2)   We proposed a parameter choice strategy for ℓ1-norm regularization problems that yields solutions with a prescribed order of sparsity and accuracy (an illustrative sketch follows this list).

(3)   We established representer theorems for machine learning solutions in Banach spaces. Such theorems provide a foundation for understanding sparse representations of machine learning solutions, which in turn leads to potentially efficient numerical implementations of learning methods.

(4)   We designed a multi-parameter regularization model for deep neural network solutions of partial differential equations, which leads to sparse deep neural networks for representing the solutions.

(5)   We applied sparse deep learning techniques to reconstruct the kinematics of the neutral-current deep inelastic scattering process in electron-proton collisions. This work was done in collaboration with Drs. M. Diefenthaler (DOE Jefferson Lab) and A. Verbytskyi (Max-Planck-Institut für Physik, Germany).

(6)   We applied a sparse optimization model developed in this project to planar scintigraphy image reconstruction and obtained encouraging reconstruction results. This application was conducted in collaboration with Drs. C. Ross Schmidtlein (Memorial Sloan Kettering Cancer Center) and Andrzej Krol (SUNY Upstate Medical University).
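
To illustrate what a parameter choice for prescribed sparsity can look like (item (2) above), the sketch below performs a simple bisection on the ℓ1 regularization parameter until the computed solution has roughly a target number of nonzero entries. It reuses single_scale_proximity from the earlier sketch, and both the bisection rule and the function names are hypothetical illustrations, not the strategy developed in this project.

import numpy as np

def choose_lambda_for_sparsity(A, b, target_nnz, n_bisect=30, lam_min=1e-8):
    # Bisection on lam: a larger lam gives a sparser solution, so shrink the
    # bracket [lam_lo, lam_hi] until the solution has about target_nnz nonzeros.
    lam_lo = lam_min
    lam_hi = np.max(np.abs(A.T @ b))     # for lam >= this value the solution is all zeros
    for _ in range(n_bisect):
        lam = 0.5 * (lam_lo + lam_hi)
        x = single_scale_proximity(A, b, lam)
        if np.count_nonzero(x) > target_nnz:
            lam_lo = lam                 # solution too dense: strengthen the penalty
        else:
            lam_hi = lam                 # sparse enough: try a weaker penalty
    return lam_hi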

This project provided opportunities for training graduate and undergraduate students. The PI integrated the research findings of this project into the teaching of a course entitled "Introduction to Optimization in Data Science", whose content is partially built upon the research results of this project. Students were trained in the theory and applications of solving sparse non-smooth optimization problems via fixed-point proximity algorithms.

The collaboration with Memorial Sloan Kettering Cancer Center on medical image reconstruction provided training opportunities for a Ph.D. student. The student wrote his Ph.D. dissertation on topics related to the research of this grant and has been employed as a postdoc at MSKCC to work on reconstruction of PET images.

The PI presented the research outcomes of this project at multiple international conferences. Moreover, he published an invited expository paper entitled "Sparse Machine Learning in Banach Spaces" in the journal Applied Numerical Mathematics, which explains the fundamental concepts of sparse machine learning in Banach spaces to graduate students and beginning researchers in the fields of mathematics, statistics, and engineering.


Last Modified: 08/18/2023
Modified by: Yuesheng Xu
