Award Abstract # 2302686
Excellence in Research: Exploring Effectiveness of Automatic Assessment of Cognitive and Metacognitive Processes in Engineering Learning through Natural Language Processing Models

NSF Org: CNS (Division of Computer and Network Systems)
Recipient: JACKSON STATE UNIVERSITY
Initial Amendment Date: May 30, 2023
Latest Amendment Date: May 30, 2023
Award Number: 2302686
Award Instrument: Standard Grant
Program Manager: Subrata Acharya
acharyas@nsf.gov
(703) 292-2451
CNS, Division of Computer and Network Systems
CSE, Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2023
End Date: August 31, 2026 (Estimated)
Total Intended Award Amount: $600,004.00
Total Awarded Amount to Date: $600,004.00
Funds Obligated to Date: FY 2023 = $600,004.00
History of Investigator:
  • Wei Zheng (Principal Investigator)
    wei.zheng@jsums.edu
  • Frances Dancer (Co-Principal Investigator)
  • Jie Ke (Co-Principal Investigator)
Recipient Sponsored Research Office: Jackson State University
1400 J R LYNCH ST
JACKSON
MS  US  39217-0002
(601)979-2008
Sponsor Congressional District: 02
Primary Place of Performance: Jackson State University
1400 J R LYNCH ST STE 206
JACKSON
MS  US  39217-0002
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): WFVHMSF6BU45
Parent UEI:
NSF Program(s): HBCU-EiR - HBCU-Excellence in Research
Primary Program Source: 01002223DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 1594, 041Z, 9150
Program Element Code(s): 070Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070, 47.083

ABSTRACT

Timely assessment of students' learning is crucial for addressing their needs. However, current assessment methods such as multiple-choice or calculation tests may not truly reveal students' internal thinking processes, because students may guess answers or follow the step-by-step procedures of examples without conceptual understanding. Existing research shows that asking students to write down justifications for their answers, and to plan and reflect on their learning, can prompt them to develop deeper conceptual understanding and to apply cognitive and metacognitive strategies in their learning. Nonetheless, this approach is not widely adopted because assessing free-text responses is time-consuming. This project aims to develop a rubric-based automatic assessment tool that identifies individual students' misconceptions and learning deficiencies from their free-text responses, allowing instructors and students to adjust their teaching and learning immediately, and eventually enabling personalized instruction tailored to individual learners' needs, which will particularly benefit students at HBCUs. The project will also provide a base for delivering research experiences to both undergraduates and teachers and for outreach to various audiences, including public school students, increasing public literacy in artificial intelligence.

This research proposes innovative strategies to fine-tune and calibrate a pre-trained language model, through self-supervised learning and few-shot learning, for automatic classification of texts against user-specified rubrics, with a particular focus on learners' cognitive and metacognitive processes. The proposed model integrates three novel attributes to improve similarity comparison between texts and rubric keywords: (1) cross-attention between the compared texts, increasing the sensitivity of the comparison; (2) joint embedding of words, phrases, and sentences, improving the accuracy of the comparison; and (3) incorporation of thematic relevance, enhancing the breadth of the comparison. Given the assessment rubrics, the model can classify texts to reveal students' thinking or other traits from multiple finer-grained perspectives. It accommodates new assessment perspectives flexibly, is more transparent than a single overall assessment, and allows human judgment to be incorporated, leading to more reliable assessment acceptable for practice and advancing knowledge on adapting pre-trained language models for personalized instruction. Datasets will be collected from diverse students, and specific strategies will be adopted to mitigate potential biases of the proposed model.
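For readers unfamiliar with rubric-based semantic classification, the sketch below illustrates the simplest version of the underlying idea: embed a student's free-text response and each rubric description with a pre-trained encoder, then flag the rubric categories whose descriptions are semantically similar to the response. This is a baseline illustration only, not the project's proposed model (which adds cross-attention, joint word/phrase/sentence embedding, and thematic relevance on top of such a baseline); the sentence-transformers library, the model name, the rubric categories, and the 0.4 threshold are all illustrative assumptions.

```python
# Minimal sketch: rubric-based classification of free-text responses via
# embedding similarity. The rubric labels, descriptions, and threshold are
# hypothetical examples, not the project's actual assessment rubrics.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # off-the-shelf pre-trained encoder

# Hypothetical rubric: each category is described by a keyword phrase.
rubric = {
    "conceptual justification": "explains why the method works using underlying concepts",
    "planning": "describes a plan or strategy before solving the problem",
    "reflection": "evaluates what was learned or what went wrong afterwards",
}

def classify(response: str, threshold: float = 0.4) -> list[tuple[str, float]]:
    """Return rubric categories whose descriptions are semantically
    similar to the student's free-text response, with similarity scores."""
    resp_emb = model.encode(response, convert_to_tensor=True)
    hits = []
    for label, description in rubric.items():
        desc_emb = model.encode(description, convert_to_tensor=True)
        score = util.cos_sim(resp_emb, desc_emb).item()  # cosine similarity
        if score >= threshold:
            hits.append((label, round(score, 3)))
    return sorted(hits, key=lambda x: -x[1])

print(classify("First I will draw a free-body diagram to identify the forces, "
               "then apply Newton's second law to each mass."))
```

In this baseline, a response can match several categories at once, which mirrors the abstract's point that rubric-based classification assesses multiple finer-grained perspectives rather than producing a single overall score; the project's contributions aim to make exactly this similarity comparison more sensitive, accurate, and broad.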

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
