Award Abstract # 1623605
EXP: Modeling Perceptual Fluency with Visual Representations in an Intelligent Tutoring System for Undergraduate Chemistry

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: UNIVERSITY OF WISCONSIN SYSTEM
Initial Amendment Date: August 30, 2016
Latest Amendment Date: August 30, 2016
Award Number: 1623605
Award Instrument: Standard Grant
Program Manager: Hector Munoz-Avila
IIS
 Division of Information & Intelligent Systems
CSE
 Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2016
End Date: August 31, 2020 (Estimated)
Total Intended Award Amount: $540,396.00
Total Awarded Amount to Date: $540,396.00
Funds Obligated to Date: FY 2016 = $540,396.00
History of Investigator:
  • Martina Rau (Principal Investigator)
    martina.rau@gess.ethz.ch
  • Robert Nowak (Co-Principal Investigator)
  • Xiaojin Zhu (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Wisconsin-Madison
21 N PARK ST STE 6301
MADISON
WI  US  53715-1218
(608)262-3822
Sponsor Congressional District: 02
Primary Place of Performance: University of Wisconsin-Madison
21 N Park St
Madison
WI  US  53715-1218
Primary Place of Performance Congressional District: 02
Unique Entity Identifier (UEI): LCLSJAGTNZQ7
Parent UEI:
NSF Program(s): S-STEM-Schlr Sci Tech Eng&Math, Cyberlearn & Future Learn Tech
Primary Program Source: 1300XXXXDB H-1B FUND, EDU, NSF
01001617DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 8244, 8841, 8045
Program Element Code(s): 153600, 802000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

The Cyberlearning and Future Learning Technologies Program funds efforts that support envisioning the future of learning technologies and advance what we know about how people learn in technology-rich environments. Cyberlearning Exploration (EXP) Projects design and build new kinds of learning technologies in order to explore their viability, to understand the challenges to using them effectively, and to study their potential for fostering learning. This EXP project aims to help students become fluent with visual representations (similar to becoming fluent in a second language). Instructors often use visuals to help students learn (e.g., pie charts of fractions, or ball-and-stick models of chemical molecules) and assume that students can quickly discern relevant information (e.g., whether or not two visuals show the same chemical) once that visual representation has been introduced. But comprehension is not the same as fluency: students still expend significant mental effort and time interpreting even visuals that they understand conceptually, and the resulting cognitive load can cause them to miss other important information that instructors are imparting. To help improve student fluency with visuals, a series of experiments with undergraduate students and chemistry professors will investigate which visual features they pay attention to and use sophisticated statistical methods to devise example sequences that will most efficiently help students learn to pay attention to relevant visual features. Based on this research, the project team will develop a visual fluency training that will be incorporated into an existing, successful online learning technology for chemistry. The potential educational impact will not be limited to chemistry instruction: given the pervasiveness of visual representations in STEM fields and the number of students who struggle with rapid processing of those visuals, the products of this research could be integrated into other educational technologies.

The PIs will develop a methodology for cognitive modeling of perceptual learning processes that can create adaptive support for perceptual learning tasks. The research will combine machine learning with educational psychology experiments using an Intelligent Tutoring System (ITS) for undergraduate chemistry. In Phase 1, metric learning will assess which visual features of representations novice students and chemistry experts focus on. Applying metric learning to a novice-expert experiment will establish a skill model of student perceptions and perceptual learning goals for the ITS. In Phase 2, the team will use machine learning to develop a cognitive model of perceptual learning. The team will conduct a chemistry learning experiment and apply machine learning to test cognitive models. In Phase 3, the team will use the cognitive model to reverse-engineer optimal sequences of perceptual learning tasks. An experiment will evaluate the effectiveness of these sequences, and the team will build on this analysis to create an adaptive version of perceptual learning tasks. A final experiment will evaluate whether incorporating adaptive perceptual learning tasks with conceptually focused instruction enhances learning. Because educational technologies have traditionally focused on explicit learning processes that lead to conceptual competencies, they cannot currently assess the implicit learning processes that lead to perceptual fluency. Combining educational psychology, cognitive science, and machine learning will yield new cognitive models that could transform the adaptive capabilities of educational technologies to support such perceptual fluency as well as other implicit forms of learning. The project will also yield next-generation computational algorithms to model human similarity judgments and to use adaptive surveying to collect data on perceptual judgments more efficiently.
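For readers who want a concrete picture of the Phase 1 idea, the sketch below shows one standard way such metric learning can be set up. It is a minimal illustration under assumed details (a diagonal metric and a hinge loss on triplets), not the project's actual method: it learns a per-feature weighting from judgments of the form "item i looks more similar to item j than to item k," so that large weights mark the visual features driving perceived similarity.

import numpy as np

def learn_diagonal_metric(X, triplets, lr=0.05, epochs=200, margin=1.0):
    """X: (n, d) matrix of visual features; triplets: (i, j, k) tuples
    meaning item i was judged more similar to j than to k. Returns
    nonnegative per-feature weights; large weights mark features that
    drive perceived similarity."""
    w = np.ones(X.shape[1])
    for _ in range(epochs):
        for i, j, k in triplets:
            near = (X[i] - X[j]) ** 2  # squared feature gaps, "similar" pair
            far = (X[i] - X[k]) ** 2   # squared feature gaps, "dissimilar" pair
            # Hinge loss: the triplet is violated when the weighted
            # distance to j is not smaller than the distance to k by the margin.
            if w @ near + margin > w @ far:
                w -= lr * (near - far)    # gradient step on the hinge term
                w = np.maximum(w, 0.0)    # keep the metric valid (w >= 0)
    return w

Fitting such weights separately to expert and novice judgments, as in the planned novice-expert experiment, yields two weight vectors whose differences point to perceptual learning goals.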

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Ayon Sen, Purav Patel, Martina A. Rau, Blake Mason, Robert Nowak, Timothy T. Rogers, Xiaojin Zhu "For Teaching Perceptual Fluency, Machines Beat Human Experts" Annual Meeting of the Cognitive Science Society, 2018
Ayon Sen, Purav Patel, Martina A. Rau, Blake Mason, Robert Nowak, Timothy T. Rogers, Xiaojin Zhu "Machine Beats Human at Sequencing Visuals for Perceptual-Fluency Practice" International Conference on Educational Data Mining, 2018
Ayon Sen, Scott Alfeld, Xuezhou Zhang, Ara Vartanian, Yuzhe Ma, and Xiaojin Zhu "Training set camouflage" Conference on Decision and Game Theory for Security (GameSec), 2018
Blake Mason, "Learning Nearest Neighbor Graphs from Noisy Distance Samples" Conference on Neural Information Processing Systems, 2019
Kwang-Sung Jun, Lihong Li, Yuzhe Ma, and Xiaojin Zhu "Adversarial attacks on stochastic bandits" Advances in Neural Information Processing Systems (NeurIPS), 2018
Lalit Jain, Blake Mason, Robert Nowak "Learning Low-Dimensional Metrics" 31st Conference on Neural Information Processing Systems (NIPS 2017), 2018

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Instructors often use visuals to help students learn (e.g., pie charts of fractions, ball-and-stick models of chemical molecules). Often, after explaining how to interpret a visual representation, the instructor assumes that students can quickly see relevant information in it (e.g., whether or not two visuals show the same chemical). But for students, interpreting the visuals takes significant mental effort and time, and this effortful process may cause them to miss relevant information in the instructor's explanation of other important topics. The goal of this project was to help students become fluent with visual representations (similar to becoming fluent in a second language). To achieve this goal, the project took four steps:

  1. Develop a measure of implicit visual fluency by studying which visual features undergraduate students and chemistry experts pay attention to.
  2. Develop a model that captures students' perception of visual representations (i.e., visual fluency) while they solve chemistry problems, similar to how Netflix captures what movies someone likes to watch.
  3. Develop an algorithmic approach to automatically optimize a sequence of instructional problems that maximizes students? visual fluency.
  4. Evaluate whether the sequence of instructional problems enhances students' learning of chemistry knowledge.

For the first step of the project, the team used a machine-learning method that identifies which features of a visual representation a student attends to, without requiring expensive methods such as eye tracking. The method was compared against other measures, such as students' self-reports, to establish that it measures what it is intended to measure. The team then used it to determine which visual features students do not attend to even though those features carry important information; the goal of the instructional activities is for students to learn to pay attention to those features.
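To make this concrete, here is a toy, entirely synthetic usage of the learn_diagonal_metric sketch above. The four features, and the assumption that the simulated expert attends to feature 2 while the simulated novice ignores it, are invented purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))  # 30 renderings, 4 visual features (synthetic)

def make_triplets(w_true, n=400):
    """Simulate judgments from a rater who attends with weights w_true."""
    trips = []
    while len(trips) < n:
        i, j, k = rng.choice(30, size=3, replace=False)
        if w_true @ (X[i] - X[j]) ** 2 < w_true @ (X[i] - X[k]) ** 2:
            trips.append((i, j, k))
    return trips

w_expert = learn_diagonal_metric(X, make_triplets(np.array([1.0, 0.1, 1.0, 0.1])))
w_novice = learn_diagonal_metric(X, make_triplets(np.array([1.0, 0.1, 0.0, 0.1])))
gap = w_expert / w_expert.sum() - w_novice / w_novice.sum()
print("most under-attended feature:", int(np.argmax(gap)))  # typically feature 2 in this synthetic setup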

For the second step, the team created a machine-learning algorithm that models how human students learn visual fluency. The algorithm is based on what is known about how humans learn visual fluency and process visual images, and it mimics this human behavior: it takes two images as input, processes them internally, and outputs a yes-or-no answer to whether the two images show the same chemical molecule. This algorithm allowed the team to test whether particular sequences of instructional problems are effective without having to test them on human students. In essence, the algorithm is a testbed that emulates human learning of visual fluency.
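The report does not describe the model's internals, so the following is only a schematic guess at its shape, consistent with the description above: a simulated learner that answers "same or different?" for a pair of renderings and shifts its feature attention whenever feedback says it was wrong.

import numpy as np

class SimulatedLearner:
    def __init__(self, d, lr=0.1, threshold=1.0):
        self.w = np.full(d, 1.0 / d)  # initially diffuse attention over features
        self.lr, self.threshold = lr, threshold

    def judge(self, x1, x2):
        """True if the learner thinks x1 and x2 show the same molecule."""
        return self.w @ (x1 - x2) ** 2 < self.threshold

    def practice(self, x1, x2, same):
        """One instructional problem with feedback; same is the ground truth."""
        pred = self.judge(x1, x2)
        if pred != same:
            diff = (x1 - x2) ** 2
            if same:
                # Said "different" for a same pair: the differing features
                # misled the learner, so attend to them less.
                self.w -= self.lr * diff
            else:
                # Said "same" for a different pair: attend more to the
                # features that actually differ.
                self.w += self.lr * diff
            self.w = np.clip(self.w, 0.0, None)
        return pred == same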

For the third step, we used the algorithm to find effective sequences of instructional problems. To this end, we gave the algorithm a pretest of its visual fluency, asked it to produce a tentative instructional sequence, and then gave it a posttest of visual fluency. This procedure was repeated many times while the algorithm adjusted the instructional sequence to increase the gain from pretest to posttest. Eventually, the algorithm selected the instructional sequence that led to the highest gains in visual fluency. Because the algorithm was developed to emulate human learning, the sequence that led to the highest learning gains for the algorithm was hypothesized to also lead to the highest learning gains for human students. To test this hypothesis, three experiments with human students compared the instructional sequence generated by the algorithm to other instructional sequences. The hypothesis was confirmed: the instructional sequence generated by the algorithm was the most effective, especially for low-performing students.
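The search procedure described above can be sketched as a loop over candidate sequences, reusing the SimulatedLearner sketch from the previous step; how candidate sequences are generated, and the exact scoring, are assumptions here.

import numpy as np

def fluency_score(learner, test_pairs):
    """Fraction of (x1, x2, same) test items the learner judges correctly."""
    return np.mean([learner.judge(a, b) == same for a, b, same in test_pairs])

def best_sequence(candidates, test_pairs, d):
    """Return the candidate sequence with the largest pretest-to-posttest gain."""
    best, best_gain = None, -np.inf
    for seq in candidates:  # each seq: a list of (x1, x2, same) problems
        learner = SimulatedLearner(d)
        pre = fluency_score(learner, test_pairs)   # pretest
        for x1, x2, same in seq:                   # tentative instruction
            learner.practice(x1, x2, same)
        gain = fluency_score(learner, test_pairs) - pre  # posttest gain
        if gain > best_gain:
            best, best_gain = seq, gain
    return best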

For the fourth step, we developed, based on the results from the previous steps, adaptive instruction for visual fluency. The instruction is adaptive in a way similar to how Netflix selects the next movie to watch based on a model of what movies a given person likes. The adaptive instruction highlights visual features that students should learn to pay attention to when they make a mistake. It also adaptively provides particular instructional sequences based on an assessment of what the student still needs to learn. Both parts of the adaptive instruction were evaluated separately. One study showed that highlighting was not necessary to improve students' learning of visual fluency; we suspect that highlighting may keep students from noticing on their own which visual features are important. Another study showed that adaptive selection of instructional sequences improved students' learning over non-adaptive selection.
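As a purely hypothetical sketch of what such adaptive selection could look like (the actual system's logic is not described here): estimate which feature the student currently under-attends from recently misjudged pairs, then pick the practice problem that exercises that feature most.

import numpy as np

def next_problem(problems, error_pairs):
    """problems: list of (x1, x2, same) candidate practice items;
    error_pairs: (x1, x2) pairs the student recently misjudged."""
    # Features with large gaps on misjudged pairs are likely under-attended.
    need = np.mean([(a - b) ** 2 for a, b in error_pairs], axis=0)
    target = int(np.argmax(need))  # the most error-implicated feature
    # Choose the candidate whose item pair differs most on that feature.
    return max(problems, key=lambda p: (p[0][target] - p[1][target]) ** 2)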

The final, effective instructional sequence was integrated into an existing, successful online learning technology for chemistry, which is available for free at https://chem.tutorshop.web.cmu.edu/.


Last Modified: 10/14/2020
Modified by: Martina A Rau
