Award Abstract # 1903304
IUSE: EHR: Improving Undergraduate Algorithms Instructions with Online Feedback

NSF Org: DUE (Division Of Undergraduate Education)
Recipient: WORCESTER POLYTECHNIC INSTITUTE
Initial Amendment Date: June 17, 2019
Latest Amendment Date: August 23, 2021
Award Number: 1903304
Award Instrument: Standard Grant
Program Manager: Paul Tymann (ptymann@nsf.gov, (703) 292-2832)
DUE (Division Of Undergraduate Education)
EDU (Directorate for STEM Education)
Start Date: July 1, 2019
End Date: June 30, 2023 (Estimated)
Total Intended Award Amount: $299,059.00
Total Awarded Amount to Date: $299,059.00
Funds Obligated to Date: FY 2019 = $299,059.00
History of Investigator:
  • George Heineman (Principal Investigator)
    heineman@cs.wpi.edu
  • Neil Heffernan (Co-Principal Investigator)
  • Anthony Botelho (Co-Principal Investigator)
Recipient Sponsored Research Office: Worcester Polytechnic Institute
100 INSTITUTE RD
WORCESTER
MA  US  01609-2280
(508)831-5000
Sponsor Congressional District: 02
Primary Place of Performance: Worcester Polytechnic Institute
100 Institute Rd
Worcester
MA  US  01609-2280
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): HJNQME41NBU4
Parent UEI:
NSF Program(s): IUSE
Primary Program Source: 04001920DB NSF Education & Human Resources
Program Reference Code(s): 083Z, 8209, 8244, 9178
Program Element Code(s): 199800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.076

ABSTRACT

With support from the NSF Improving Undergraduate STEM Education Program: Education and Human Resources (IUSE: EHR), this project aims to serve the national interest by leveraging artificial intelligence (AI) techniques to improve student learning and instructor productivity in computing courses. An important part of an instructor's job is providing feedback on student work. Research in cognitive science demonstrates that students learn concepts more effectively when they explain their answers to cognitively demanding questions, and learning improves further when instructor feedback is provided. In large classes, however, it is difficult for instructors to provide all students with the feedback and help they need. This project will develop a tool that uses machine learning to automatically generate feedback on student solutions. The tool is designed to reduce the instructor's grading burden while ensuring that students receive frequent and worthwhile feedback. The project aims to generate knowledge about how AI can help engage students without requiring significant instructor time. The tool will impact thousands of students as it is incorporated into ASSISTments, a free service of Worcester Polytechnic Institute (WPI) used by 50,000 students.

The goal of this project is to implement and study a tool that uses state-of-the-art machine learning and natural language processing methods to automate instructor feedback. In the first phase of the project, students in the Algorithms course at WPI will be assigned open-ended questions to complete before each lecture, and the teaching staff will manually grade the responses. The data set generated in this phase will be used to develop and train the algorithms behind the automated tool. The tool will apply methods of varying complexity, including regression and tree-based models as well as deep learning methods such as long short-term memory (LSTM) networks. It will analyze student responses and suggest both feedback and a grade; instructors may accept the tool's suggestions or override them to provide different feedback. A randomized controlled trial will assign students to receive either manual feedback or feedback generated with the help of the tool. The usability of the tool will be evaluated through interviews with the instructors, along with data on how often the tool's suggestions are overridden and how long the instructor takes to grade each solution. Student experience with the tool will be evaluated through online surveys, and student learning through weekly posttests. The project is expected to improve student learning and instructor productivity in the WPI Algorithms course. It also has the potential to contribute to the broader fields of machine learning, natural language processing, and education through the study, generation, and deployment of effective feedback. The NSF IUSE: EHR Program supports research and development projects to improve the effectiveness of STEM education for all students. Through the Engaged Student Learning track, the program supports the creation, exploration, and implementation of promising practices and tools.
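The suggest-then-override workflow described above can be sketched in miniature. The snippet below is an illustrative, hypothetical stand-in for the project's actual models (which use regression, tree ensembles, and LSTMs, not this method): it represents previously graded responses as bag-of-words vectors and suggests a grade for a new response from its most similar graded neighbors. All function names and data are invented for the sketch.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words vector as a word -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest_grade(graded, response, k=3):
    """Suggest a grade for `response` by averaging the grades of the
    k most similar previously graded responses; the instructor may
    accept this suggestion or override it."""
    vec = vectorize(response)
    sims = sorted(((cosine(vec, vectorize(text)), grade)
                   for text, grade in graded), reverse=True)
    top = sims[:k]
    return sum(grade for _, grade in top) / len(top)

# Illustrative training data: (student response, instructor grade out of 5)
graded = [
    ("merge sort divides the array in half and merges sorted halves", 5),
    ("quicksort picks a pivot and partitions the array", 4),
    ("the algorithm sorts by repeatedly swapping adjacent elements", 3),
    ("i do not know", 0),
]
print(suggest_grade(graded, "merge sort splits the array and merges the sorted parts", k=1))
# prints 5.0
```

In the project's actual design, a learned model replaces this similarity heuristic, but the interface is the same: the tool proposes a grade and feedback, and the instructor retains the final decision.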

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

(Showing: 1 - 10 of 21)
Baral, Sami and Botelho, Anthony and Santhanam, Abhishek and Gurung, Ashish and Cheng, Li and Heffernan, Neil "Auto-scoring Student Responses with Images in Mathematics", 2023
Baral, Sami and Botelho, Anthony and Santhanam, Abhishek and Gurung, Ashish and Erickson, John and Heffernan, Neil "Investigating Patterns of Tone and Sentiment in Teacher Written Feedback Messages", 2023. https://doi.org/10.1007/978-3-031-36336-8_53
Baral, Sami and Botelho, Anthony F and Erickson, John A and Benachamardi, Priyanka and Heffernan, Neil T. "Improving Automated Scoring of Student Open Responses in Mathematics" Proceedings of the 14th International Conference on Educational Data Mining, 2021
Baral, Sami and Seetharaman, Karthik and Botelho, Anthony F and Wang, Anzhuo and Heineman, George and Heffernan, Neil T "Enhancing Auto-scoring of Student Open Responses in the Presence of Mathematical Terms and Expressions" Proceedings of the 23rd International Conference on Artificial Intelligence in Education, 2022
Baral, S and Botelho, A and Shin, J and Li, H and Anderson, N and Sowad, M and Heffernan, N "Do These Students Have Similar Strategies? Clustering Math Work in Uploaded Images on an Online Learning Platform.", 2024
Botelho, Anthony and Baral, Sami and Erickson, John A. and Benachamardi, Priyanka and Heffernan, Neil T. "Leveraging natural language processing to support automated assessment and feedback for student open responses in mathematics" Journal of Computer Assisted Learning, v.39, 2023. https://doi.org/10.1111/jcal.12793
Botelho, Anthony F and Prihar, Ethan and Heffernan, Neil T "Deep Learning or Deep Ignorance? Comparing Untrained Recurrent Models in Educational Contexts" Proceedings of the 23rd International Conference on Artificial Intelligence in Education, 2022
Closser, Avery H. and Erickson, John A. and Smith, Hannah and Varatharaj, Ashvini and Botelho, Anthony F. "Blending learning analytics and embodied design to model students' comprehension of measurement using their actions, speech, and gestures" International Journal of Child-Computer Interaction, v.32, 2022. https://doi.org/10.1016/j.ijcci.2021.100391
Erickson, John A. and Botelho, Anthony F. and McAteer, Steven and Varatharaj, Ashvini and Heffernan, Neil T. "The automated grading of student open responses in mathematics" Tenth International Conference on Learning Analytics & Knowledge, 2020. https://doi.org/10.1145/3375462.3375523
Erickson, John A and Botelho, Anthony F and Peng, Zonglin and Huang, Rui and Kasal, Meghana V and Heffernan, Neil T. "Is It Fair? Automated Open Response Grading" Proceedings of the 14th International Conference on Educational Data Mining, 2021

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Project Objective:

The goal of our project was to create a tool called ADVISER, aimed at helping teachers give better feedback on students’ work in an undergraduate Algorithms course. Inspired by technologies like Google’s SmartReply, we wanted to build a tool that uses advanced computational techniques to improve the way teachers evaluate open-ended student answers, making their feedback more meaningful and helpful.

Intellectual Merit:

We successfully developed ADVISER using state-of-the-art methods of natural language processing (NLP) and machine learning (ML). These technologies allowed our tool to understand and analyze students' text and image answers more deeply, making the feedback process easier and more effective for teachers. While a larger scale evaluation of ADVISER has yet to be conducted, pilot testing of the tool has supported the feasibility of its usage in practice.

We made significant advancements by using powerful language models to help the tool generate feedback that is relevant and useful. Through research and development efforts, we improved not only the tool’s ability to understand textual answers but also its ability to understand answers that include mathematical terms or are given in image form, both of which are common in computer science courses.
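The SmartReply-inspired idea mentioned above can be illustrated with a minimal retrieval sketch: given a history of instructor feedback on past responses, suggest the feedback attached to the most similar prior response. This is an invented simplification, not ADVISER's actual implementation; the similarity function and all data here are illustrative placeholders for the learned models the project uses.

```python
from collections import Counter

def tokens(text):
    """Word-count representation of a response."""
    return Counter(text.lower().split())

def overlap(a, b):
    """Simple token-overlap score (a stand-in for a learned similarity model)."""
    return sum(min(a[w], b[w]) for w in a)

def suggest_feedback(history, response):
    """Suggest the instructor feedback attached to the most similar
    previously seen response; the instructor can accept or override it."""
    vec = tokens(response)
    best = max(history, key=lambda pair: overlap(vec, tokens(pair[0])))
    return best[1]

# Illustrative history of (student response, instructor feedback) pairs
history = [
    ("binary search needs a sorted array",
     "Correct. Also mention the O(log n) bound."),
    ("binary search scans every element",
     "Not quite: it halves the search range each step."),
]
print(suggest_feedback(history, "binary search looks at every element one by one"))
# prints: Not quite: it halves the search range each step.
```

The design point this sketch captures is that suggested feedback is retrieved or generated from prior instructor messages rather than written from scratch, which is what keeps the teacher's voice in the loop.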

Comparing across domains, we find that the developed models underperform in the intended application area of undergraduate algorithms relative to other contexts such as mathematics, suggesting cross-context challenges for these types of tools. Despite this, performance in pilot testing still shows promise for application in this space.

Broader Impacts:

Our project has made important contributions that go beyond the classroom. We shared our findings and progress at various educational and technological conferences, reaching other researchers and experts in fields such as education and machine learning.

The ADVISER tool has the potential to change the way teachers provide feedback, making education in technical subjects like Algorithms more engaging and effective. By doing this, we hope that more students will succeed in these courses, paving the way for a stronger foundation in important technical skills.

In summary, the ADVISER tool signifies a step forward in educational tools, promoting enhanced teaching and learning experiences, and paving the way for improved educational outcomes in technical disciplines.

 


Last Modified: 10/24/2023
Modified by: Neil T Heffernan
