Award Abstract # 2131943
CAREER: Maximal and Scalable Unified Debugging for the JVM Ecosystem

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: UNIVERSITY OF ILLINOIS
Initial Amendment Date: July 7, 2021
Latest Amendment Date: September 15, 2023
Award Number: 2131943
Award Instrument: Continuing Grant
Program Manager: Sol Greenspan (sgreensp@nsf.gov, (703) 292-7841)
CCF, Division of Computing and Communication Foundations
CSE, Directorate for Computer and Information Science and Engineering
Start Date: July 15, 2021
End Date: April 30, 2026 (Estimated)
Total Intended Award Amount: $519,819.00
Total Awarded Amount to Date: $519,819.00
Funds Obligated to Date: FY 2020 = $94,106.00
FY 2021 = $96,561.00
FY 2022 = $99,091.00
FY 2023 = $230,061.00
History of Investigator:
  • Lingming Zhang (Principal Investigator)
    lingming@illinois.edu
Recipient Sponsored Research Office: University of Illinois at Urbana-Champaign
506 S WRIGHT ST
URBANA
IL  US  61801-3620
(217)333-2187
Sponsor Congressional District: 13
Primary Place of Performance: University of Illinois at Urbana-Champaign
IL  US  61820-7406
Primary Place of Performance Congressional District: 13
Unique Entity Identifier (UEI): Y8CWNJRCNN91
Parent UEI: V2PHZ2CSCH63
NSF Program(s): Software & Hardware Foundations
Primary Program Source: 01002021DB NSF RESEARCH & RELATED ACTIVIT
01002122DB NSF RESEARCH & RELATED ACTIVIT
01002223DB NSF RESEARCH & RELATED ACTIVIT
01002324DB NSF RESEARCH & RELATED ACTIVIT
01002425DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7944, 1045
Program Element Code(s): 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Java is one of the most popular programming languages, and the software industry worldwide has built a massive culture of support around it. The Java runtime, the Java Virtual Machine (JVM), has become a software ecosystem in its own right. Nowadays, hundreds of JVM languages, including popular ones such as Kotlin, Scala, and Groovy, have been developed and adopted across different platforms (including Oracle JDK and the Android SDK), build systems (including Gradle and Maven), and JVM implementations (including HotSpot and OpenJ9). For example, Google promoted Kotlin to the preferred language for Android development at Google I/O 2019. This huge and heterogeneous JVM ecosystem raises unique challenges for automated debugging, including both fault localization and program repair.

This project proposes to rethink the role of program mutation, i.e., systematic program transformation, as a foundational concept in automated debugging. Program mutation has been widely adopted in traditional mutation testing and program repair, and the investigator conjectures, based on preliminary work, that it can be used to transform and advance the state of the art in automated debugging for software built with technologies from the entire JVM ecosystem and beyond. Specifically, the project focuses on the following research thrusts: (1) unifying fault localization and program repair via program mutation so that each boosts the other; (2) automatically inferring up-to-date, advanced mutators from big code corpora for maximal unified debugging, since existing program mutators are often limited and can easily become obsolete; (3) developing novel techniques to optimize patch executions for scalable unified debugging, since patch execution can be extremely time-consuming; and (4) supporting unified debugging across the entire heterogeneous JVM ecosystem. The project will, for the first time, unify program mutations across various dimensions, e.g., across JVM languages and platforms, across code types (including source, test, and build code), and even across JVM boundaries. Ultimately, the project aims to deliver a practical debugging system that benefits JVM developers worldwide. The overarching idea of unified debugging can also substantially change how both researchers and practitioners view, design, and apply automated debugging: fault localization always requires manual repair, while program repair only works for some bugs; in contrast, unified debugging can provide the most automation possible for each bug and broaden the effective range of the entire program repair area to all possible bugs. The project will integrate the research results into the software engineering curriculum, K-12 camps, software testing contests, and industrial collaborations.
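To make the unified-debugging intuition concrete, below is a minimal, self-contained Java sketch (a hypothetical illustration, not the project's actual tooling): each mutant patches one location of a buggy method, the test suite is re-run against every mutant, and mutants that turn failing tests into passing ones both point to the faulty location (fault localization) and double as candidate patches (repair). Real systems generate mutants automatically from mutation operators or learned mutators and use richer suspiciousness formulas; all names, tests, and scoring below are illustrative assumptions.

import java.util.List;

public class UnifiedDebuggingSketch {

    // Buggy program under debugging: should return the maximum of three ints,
    // but location 2 wrongly compares with '<' instead of '>'.
    static int buggyMax3(int a, int b, int c) {
        int m = a;                // location 1
        if (b < m) m = b;         // location 2 (the bug: should be b > m)
        if (c > m) m = c;         // location 3
        return m;
    }

    // Signature for three-int functions (hypothetical helper type).
    interface IntTernaryOp { int apply(int a, int b, int c); }

    // A mutant: the program with exactly one location mutated.
    record Mutant(int location, String description, IntTernaryOp patched) {}

    public static void main(String[] args) {
        // Tiny illustrative test suite: inputs and expected outputs.
        int[][] tests = { {1, 2, 3}, {3, 2, 1}, {2, 9, 4}, {5, 5, 5} };
        int[] expected = { 3, 3, 9, 5 };

        // Hand-written mutants standing in for automatically generated ones
        // (real systems derive them from mutation operators or learned mutators).
        List<Mutant> mutants = List.of(
            new Mutant(1, "init m with b", (a, b, c) -> { int m = b; if (b < m) m = b; if (c > m) m = c; return m; }),
            new Mutant(2, "flip '<' to '>'", (a, b, c) -> { int m = a; if (b > m) m = b; if (c > m) m = c; return m; }),
            new Mutant(3, "flip '>' to '<'", (a, b, c) -> { int m = a; if (b < m) m = b; if (c < m) m = c; return m; })
        );

        // Baseline: which tests fail on the original buggy program?
        boolean[] origFails = new boolean[tests.length];
        for (int i = 0; i < tests.length; i++) {
            origFails[i] = buggyMax3(tests[i][0], tests[i][1], tests[i][2]) != expected[i];
        }

        // Score each mutant's location by how many originally failing tests it fixes
        // without breaking originally passing ones (a simplified suspiciousness signal).
        for (Mutant m : mutants) {
            int fixed = 0, broken = 0;
            for (int i = 0; i < tests.length; i++) {
                boolean fails = m.patched().apply(tests[i][0], tests[i][1], tests[i][2]) != expected[i];
                if (origFails[i] && !fails) fixed++;
                if (!origFails[i] && fails) broken++;
            }
            System.out.printf("location %d (%s): fixed=%d, broken=%d%n",
                    m.location(), m.description(), fixed, broken);
        }
    }
}

On this toy suite, only the mutant at location 2 fixes both originally failing tests without breaking the passing ones, so location 2 is ranked most suspicious and its mutant doubles as a plausible patch; this is the sense in which mutation unifies fault localization and repair.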

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 35)
Benton, Samuel and Xie, Yuntong and Lu, Lan and Zhang, Mengshi and Li, Xia and Zhang, Lingming. "Towards boosting patch execution on-the-fly." International Conference on Software Engineering, 2022. https://doi.org/10.1145/3510003.3510117
Cheng, Runxiang and Zhang, Lingming and Marinov, Darko and Xu, Tianyin. "Test-Case Prioritization for Configuration Testing." ACM SIGSOFT International Symposium on Software Testing and Analysis, 2021. https://doi.org/10.1145/3460319.3464810
Deng, Yinlin and Xia, Chunqiu Steven and Cao, Zhezhen and Li, Meiziniu and Zhang, Lingming. "Can LLMs Implicitly Learn Numeric Parameter Constraints in Data Science APIs?" 2024.
Deng, Yinlin and Xia, Chunqiu Steven and Peng, Haoran and Yang, Chenyuan and Zhang, Lingming. "Large Language Models Are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models." 2023. https://doi.org/10.1145/3597926.3598067
Deng, Yinlin and Xia, Chunqiu Steven and Yang, Chenyuan and Zhang, Shizhuo Dylan and Yang, Shujing and Zhang, Lingming. "Large Language Models are Edge-Case Generators: Crafting Unusual Programs for Fuzzing Deep Learning Libraries." 2024. https://doi.org/10.1145/3597503.3623343
Ding, Yifeng and Liu, Jiawei and Wei, Yuxiang and Zhang, Lingming. "XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts." 2024. https://doi.org/10.18653/v1/2024.acl-long.699
Jiang, Ling and Yuan, Hengchen and Wu, Mingyuan and Zhang, Lingming and Zhang, Yuqun. "Evaluating and Improving Hybrid Fuzzing." Proceedings of the IEEE/ACM International Conference on Software Engineering, 2023.
Liu, Jiawei and Peng, Jinjun and Wang, Yuyao and Zhang, Lingming. "NeuRI: Diversifying DNN Generation via Inductive Rule Inference." 2023. https://doi.org/10.1145/3611643.3616337
Liu, Jiawei and Wang, Yuyao and Zhang, Lingming. "Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation." 2023.
Liu, Jiawei and Wei, Yuxiang and Yang, Sen and Deng, Yinlin and Zhang, Lingming. "Coverage-guided tensor compiler fuzzing with joint IR-pass mutation." Proceedings of the ACM on Programming Languages, v.6, 2022. https://doi.org/10.1145/3527317
Liu, Jiawei and Xie, Songrun and Wang, Junhao and Wei, Yuxiang and Ding, Yifeng and Zhang, Lingming. "Evaluating Language Models for Efficient Code Generation." 2024.