
NSF Org: DUE Division Of Undergraduate Education
Initial Amendment Date: May 14, 2019
Latest Amendment Date: May 14, 2019
Award Number: 1915404
Award Instrument: Standard Grant
Program Manager: Mike Ferrara, mferrara@nsf.gov, (703) 292-2635, DUE Division Of Undergraduate Education, EDU Directorate for STEM Education
Start Date: October 1, 2019
End Date: September 30, 2023 (Estimated)
Total Intended Award Amount: $298,728.00
Total Awarded Amount to Date: $298,728.00
Recipient Sponsored Research Office: 1 LOMB MEMORIAL DR, ROCHESTER, NY 14623-5603, US, (585) 475-7987
Primary Place of Performance: NY 14623-5603, US
NSF Program(s): IUSE
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.076
ABSTRACT
With support from the NSF Improving Undergraduate STEM Education Program: Education and Human Resources (IUSE: EHR), this project aims to serve the national interest by enabling faculty to better assess student understanding in computer science courses and give feedback on student assignments. Student interest in computer science is rapidly increasing nationwide, straining departments and instructors who must offer a quality education. Effective personalized feedback is a critical part of the learning process, but limited numbers of qualified instructors and large enrollments make such feedback difficult to provide when student-to-faculty ratios are high. This Engaged Student Learning track Exploration and Design tier project will develop a new teaching platform that assists instructors in computer science courses by automatically propagating feedback to a large body of students. The platform also aims to help instructors understand the collective strengths and weaknesses of students in their courses based on assignment submissions. The project aims to reach over 1,000 undergraduate students each year at the Rochester Institute of Technology.
The teaching platform developed by this project will analyze student program submissions to create program dependence graphs, which combine control and data flows, for Java and Python programs. The graphs will be used to cluster similar student submissions using graph alignment and to detect expected semantic code patterns using subgraph mining. The platform's goal is to promote teaching effectiveness by presenting analytics that help instructors understand the performance of individual students and of classes as a whole; it is also designed to suggest avenues for further discussion with the class. The technical evaluation will study how well the platform identifies clusters and patterns in both synthetic and real assignments. Instructors from the Rochester Institute of Technology, several neighboring universities, and local high schools will participate in training workshops, and the platform will be used in introductory courses at the Rochester Institute of Technology. Additional goals are to develop knowledge bases that enable the platform's use with new assignments and to evaluate its impact on instructors' grading and teaching styles. The platform targets instructors of computer science courses and has the potential to influence any student studying computer science. The NSF IUSE: EHR Program supports research and development projects to improve the effectiveness of STEM education for all students. Through the Engaged Student Learning track, the program supports the creation, exploration, and implementation of promising practices and tools.
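To illustrate the pattern-detection idea (this is a toy sketch, not the project's actual subgraph-mining algorithm), the following Python snippet mines parent-to-child AST node-type pairs — a one-edge stand-in for the subgraphs a real miner would enumerate — and keeps those shared by a minimum number of submissions. The function names and threshold are hypothetical.

```python
import ast
from collections import Counter

def ast_bigrams(src):
    """Parent->child node-type pairs: a crude, one-edge stand-in
    for the subgraphs a real pattern miner would enumerate."""
    pairs = set()
    for parent in ast.walk(ast.parse(src)):
        for child in ast.iter_child_nodes(parent):
            pairs.add((type(parent).__name__, type(child).__name__))
    return pairs

def frequent_patterns(subs, min_support=2):
    """Return pairs appearing in at least `min_support` submissions."""
    counts = Counter()
    for src in subs:
        counts.update(ast_bigrams(src))
    return {pair for pair, n in counts.items() if n >= min_support}
```

A pattern such as `("Module", "For")` appearing in most submissions would suggest a loop the class converged on, which an instructor could then discuss.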
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
Our goal was to provide personalized feedback on introductory programming assignments. Current tools mostly focus on automating the delivery of feedback; unfortunately, the instructor has almost no control over what feedback is provided or how it is delivered.
We focused on two components. The first was grouping similar programs: if two or more programs are very similar, we can group them, and the instructor can deliver the same feedback to all the students who submitted them. We developed several methods to group programs using program dependence graphs, graph-based representations that combine information about the control and data flows within a program. We also explored feedback at a finer-grained level. Instead of full programs, we focused on grouping program statements, giving instructors control to deliver feedback to similar statements found across many programs within the same assignment. We also explored the use of machine learning to recognize these statements and provide feedback.
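The grouping idea can be sketched in a few lines. The snippet below is a deliberately simplified stand-in for the project's dependence-graph methods: it extracts only def-to-use data-dependence edges from assignment statements (no control dependences, and variable names are matched literally), then buckets submissions whose edge sets are identical. All names here are illustrative.

```python
import ast
from collections import defaultdict

def dependence_edges(src):
    """Crude data-dependence edges: (used_var -> defined_var) pairs,
    one per assignment statement. A toy stand-in for a full PDG."""
    edges = set()
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Assign):
            targets = {t.id for t in node.targets if isinstance(t, ast.Name)}
            uses = {n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)}
            edges.update((u, t) for t in targets for u in uses)
    return frozenset(edges)

def group_submissions(subs):
    """Bucket submissions whose dependence edge sets coincide, so one
    piece of feedback can be sent to every student in a bucket."""
    groups = defaultdict(list)
    for student, src in subs.items():
        groups[dependence_edges(src)].append(student)
    return groups

subs = {
    "alice": "a = 1\nb = a + 1\n",
    "bob":   "a = 2\nb = a * 3\n",   # same dependence structure as alice
    "carol": "x = 1\n",              # different structure
}
groups = group_submissions(subs)
```

Here "alice" and "bob" land in the same bucket despite different operators, since both define `b` from `a`; a real system would use graph alignment rather than exact set equality.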
The second component was using a correct program to repair an incorrect program; the repairs can then be delivered to students as feedback. We developed several methods to increase the flexibility of program comparison based on control flow graphs, graph-based representations that capture only the control flow within a program. We expanded control flow graphs by adding semantic labels to the program statements. Such flexible comparison relaxes the requirement that the correct program be very similar to the incorrect program, which matters in practice because we often have few correct programs available. We also explored the use of machine learning to attach feedback to unseen repairs.
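The semantic-labeling idea can be illustrated with a toy sketch (again, not the project's actual graph algorithm): each statement is reduced to a coarse semantic label, and diffing the two label sequences locates where the incorrect program's control flow departs from the correct one. A real system would compare labeled graphs, not flat sequences; the names and label set below are assumptions for illustration.

```python
import ast
import difflib

def semantic_labels(src):
    """Coarse semantic label per statement; a toy stand-in for a
    semantically labeled control flow graph."""
    LABEL = {ast.Assign: "assign", ast.AugAssign: "assign",
             ast.For: "loop", ast.While: "loop",
             ast.If: "branch", ast.Return: "return",
             ast.Expr: "call"}
    return [LABEL.get(type(node), "other")
            for node in ast.walk(ast.parse(src))
            if isinstance(node, ast.stmt)]

def repair_hints(correct, incorrect):
    """Diff the label sequences; non-equal opcodes mark where the
    incorrect program diverges, suggesting what to insert or remove."""
    sm = difflib.SequenceMatcher(a=semantic_labels(correct),
                                 b=semantic_labels(incorrect))
    return [(op, sm.a[i1:i2], sm.b[j1:j2])
            for op, i1, i2, j1, j2 in sm.get_opcodes() if op != "equal"]

correct   = "total = 0\nfor x in data:\n    total = total + x\nprint(total)\n"
incorrect = "total = 0\nprint(total)\n"   # student forgot the loop
hints = repair_hints(correct, incorrect)
```

For this pair, the hints include a deleted `loop` label, pointing at the missing accumulation loop as the repair to suggest.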
Our project has contributed to improving STEM education by helping instructors deliver meaningful and timely feedback. Our outcomes have the potential to impact virtually all computer science students.
Last Modified: 10/16/2023
Modified by: Carlos Rivero