Award Abstract # 1621344
Developing Preservice Elementary Teachers' Ability to Facilitate Goal-Oriented Discussions in Science and Mathematics via the Use of Simulated Classroom Interactions

NSF Org: DRL
Division of Research on Learning in Formal and Informal Settings (DRL)
Recipient: EDUCATIONAL TESTING SERVICE
Initial Amendment Date: July 18, 2016
Latest Amendment Date: July 18, 2019
Award Number: 1621344
Award Instrument: Continuing Grant
Program Manager: Michael Steele
Division of Research on Learning in Formal and Informal Settings (DRL)
Directorate for STEM Education (EDU)
Start Date: August 1, 2016
End Date: July 31, 2021 (Estimated)
Total Intended Award Amount: $2,777,545.00
Total Awarded Amount to Date: $3,077,154.00
Funds Obligated to Date:
FY 2016 = $1,370,649.00
FY 2017 = $744,802.00
FY 2018 = $662,094.00
FY 2019 = $299,609.00
History of Investigator:
  • Jamie Mikeska (Principal Investigator)
    jmikeska@ets.org
  • Heather Howell (Co-Principal Investigator)
Recipient Sponsored Research Office: Educational Testing Service
660 ROSEDALE RD
PRINCETON
NJ  US  08540-2218
(609)683-2734
Sponsor Congressional District: 03
Primary Place of Performance: Educational Testing Service
NJ  US  08540-2218
Primary Place of Performance Congressional District: 03
Unique Entity Identifier (UEI): SZN1MPHQN853
Parent UEI: SZN1MPHQN853
NSF Program(s): Discovery Research K-12
Primary Program Source:
04001617DB NSF Education & Human Resource
04001718DB NSF Education & Human Resource
04001819DB NSF Education & Human Resource
04001920DB NSF Education & Human Resource
Program Reference Code(s):
Program Element Code(s): 764500
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.076

ABSTRACT

There is widespread recognition in the educational literature that academic discourse is important for supporting students' developing understanding in the disciplines of science and mathematics. College- and career-ready standards also call for attention to supporting students as they learn how to think and communicate like disciplinary experts. The teaching practice of orchestrating classroom discussion is intended not only to support students in reaching higher levels of academic achievement but also to support their participation in a democratic society. However, research has found that teachers--particularly novice teachers--struggle to orchestrate discussion effectively in science and mathematics. The investigators of this project hypothesize that opportunities to 1) practice orchestrating discussions in simulated classroom environments; 2) receive constructive feedback on that practice; and 3) reflect on the feedback and their experiences with peers and teacher educators develop preservice teachers' abilities to lead productive classroom discussion. These opportunities may, in turn, make them more effective at orchestrating discussion when they begin teaching real students in science and mathematics classrooms. The project team, which includes investigators from Educational Testing Service (ETS) and software engineers at Mursion, will develop, pilot, and validate eight discussion-oriented performance tasks that will be embedded in an online simulated classroom environment. The resulting research and development products could be used nationwide in teacher preparation and professional development settings to assess and develop teachers' ability to support classroom discussion in science and mathematics.

The Discovery Research K-12 (DRK-12) program seeks to significantly enhance the learning and teaching of science, technology, engineering and mathematics (STEM) by preK-12 students and teachers, through research and development of innovative resources, models, and tools. Projects in the DRK-12 program build on fundamental research in STEM education and prior research and development efforts that provide theoretical and empirical justification for proposed projects. This Early Stage Design and Development project will 1) iteratively develop, pilot, and refine eight science and mathematics discussion-oriented performance tasks (six formative, two summative), scoring rubrics, and rater training materials; 2) deploy the intervention in four university sites, collecting data from 240 prospective teachers in both treatment and business-as-usual courses; and 3) use data analyses and expert review to build a five-part argument for the validity of the assessment and scoring rubrics. Data sources include prospective teachers' background and demographic information, cognitive interviews, surveys, scores on content knowledge for teaching (CKT) instruments, performance and scores on the developed performance tasks, discussion scores on Danielson's Framework for Teaching observation protocol, and case study interviews with prospective teachers. The project team will also conduct interviews with teacher educators and observe classroom debrief sessions with prospective teachers and their teacher educators. The research will examine each teacher's scores on two summative performance tasks administered pre- and post-intervention and will look for evidence of growth across three formative tasks. Linear regression models will be used to understand relationships among teachers' CKT scores, pre-intervention performance task scores, group assignment, and post-intervention performance task scores. 
A grounded theory approach to coding qualitative data from 24 case study teachers, observations of debrief sessions, and interviews with teacher educators will generate descriptive use cases illustrating how the tools can support prospective teachers in learning how to facilitate discussions focused on science and mathematics argumentation. Mursion will develop a dedicated project page on its website where the team can post the new performance-based tasks, scoring rubrics, and examples of performance in the simulated environment for teacher educators, educational researchers, and policy makers, and collect feedback from these audiences as another information source for refining the tools and their use. Research findings will also be disseminated through more traditional means, such as papers in peer-reviewed research and practitioner journals and conference presentations.
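The quantitative analysis described above is, in essence, an ANCOVA-style linear regression: post-intervention performance task scores are modeled as a function of pre-intervention scores, CKT scores, and group assignment. As a minimal illustrative sketch (all variable names and data below are invented for demonstration and are not drawn from the project), fitting such a model by ordinary least squares might look like:

```python
import numpy as np

# Hypothetical ANCOVA-style regression: predict post-intervention
# discussion scores from pre-intervention scores, CKT scores, and
# treatment group. All data here are simulated, not project data.
rng = np.random.default_rng(0)
n = 100
pre = rng.normal(3.0, 0.5, n)      # pre-intervention performance task score
ckt = rng.normal(0.0, 1.0, n)      # content knowledge for teaching (CKT) score
group = rng.integers(0, 2, n)      # 1 = simulation condition, 0 = comparison

# Simulate post scores with a built-in treatment effect of +0.4
post = 0.8 * pre + 0.2 * ckt + 0.4 * group + rng.normal(0.0, 0.3, n)

# Design matrix with an intercept column; solve for coefficients via OLS
X = np.column_stack([np.ones(n), pre, ckt, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
intercept, b_pre, b_ckt, b_group = beta
```

Here `b_group` estimates the effect of the simulation intervention on post-intervention scores after adjusting for initial performance and content knowledge, which mirrors the "controlling for differences in initial scores" comparison described in the outcomes report below.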

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Mikeska, J.N., & Howell, H. "Authenticity perceptions in virtual environments" Information and Learning Sciences, v.122, 2021, p.480
Howell, H., & Mikeska, J.N. "Approximations of Practice as a Framework for Understanding Authenticity in Simulations of Teaching" Journal of Research on Technology in Education, v.53, 2021, p.8
Mikeska, J.N., Howell, H., & Straub, C. "Using Performance Tasks within Simulated Environments to Assess Teachers' Ability to Engage in Coordinated, Accumulated, and Dynamic (CAD) Competencies" International Journal of Testing, v.19, 2019, p.128, DOI: 10.1080/15305058.2018.1551223
Mikeska, J.N., & Howell, H. "Simulations as Practice-Based Spaces to Support Elementary Teachers in Learning How to Facilitate Argumentation-Focused Science Discussions" Journal of Research in Science Teaching, v.57, 2020, p.1356

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This project focused on the development and use of a new, innovative, technology-based approach to help elementary teachers learn how to teach. The simulated classroom environment, provided by Mursion™, allows teachers to interact with and lead a discussion among five student avatars. The avatars are controlled by a trained actor, called an interactor or simulation specialist, whom the teachers do not see; instead, they view the student avatars on a laptop or television screen. Our focus was on helping preservice teachers currently enrolled in university-based teacher preparation programs practice facilitating argumentation-focused discussions. Facilitating such discussions is traditionally difficult for new teachers to learn to do well, and new teachers are unlikely to become good at leading them without support. At the same time, engaging students in argumentation-focused discussion is a teaching practice known to be critical for student learning in both mathematics and science.

Across four implementation sites (two implementations in mathematics methods courses, two in science methods courses), the project team worked with teacher educators to implement three cycles of enactment in each course. Each cycle of enactment included: (1) the teacher educator preparing the class to lead a simulated discussion; (2) each preservice teacher individually leading a simulated discussion of up to 20 minutes and then receiving written feedback and a video of the simulated discussion provided by our team; and (3) a reflection/debrief session led by the teacher educator. While all of the preservice teachers were able to complete the simulation, only the subset who consented to participate in additional survey activities (n=65) are represented in our data. Additionally, each preservice teacher completed a simulated discussion at the start and end of the semester without any support from the teacher educator, so that we could measure improvement. To control for the amount of learning that might have occurred in the course even without the simulations, we asked a separate group of preservice teachers, enrolled in the same methods course with the same teacher educator in a prior year, to complete the pre/post measures (n=43).

One project goal was to determine whether it was possible to design and implement high-quality, content-intensive simulated tasks around the challenging teaching practice of facilitating argumentation-focused discussions. We were able to do so, and one contribution that the work makes is the public release of the simulated discussion tasks themselves, including the written task the preservice teacher sees, a comprehensive set of training materials for interactors, and information about how to score discussion performances. This information is archived in the Qualitative Data Repository (https://data.qdr.syr.edu/) accessible by creating a free account and searching for “Go Discuss”.

A second set of questions addressed how the simulations were used and how successful that use was. We examined this by looking at survey and interview data in which the teacher educators and preservice teachers reflected on how useful the simulations were, and by looking at direct evidence of preservice teacher improvement in leading argumentation-focused discussions between the pre and post timepoints. Our most critical finding is that the preservice teachers improved in their ability to lead argumentation-focused discussions over the course of the semester, and that this improvement was statistically significant in comparison to the no-simulation group, even when controlling for differences in initial scores. This finding suggests a strong potential for simulation-based approaches to impact preservice teachers' learning across different university contexts. Self-report data complement this finding: teacher educators and preservice teachers alike overwhelmingly found the experience useful and would recommend the approach to others.

We also found that the cycle of enactment around the simulations was a driver of preservice teacher learning. Preservice teachers identified both the feedback they received and the class activities their teacher educators led as critical to their learning. For a subset of participants, comparing preservice teachers' interviews with their performances provided evidence that they paid close attention to the feedback they received, were able to make sense of it, and often applied it directly in the next cycle of enactment. Teacher educators also reported learning from the video records, feedback, and scored simulated discussions, citing these as helping them monitor individual preservice teachers' progress, notice patterns within the class, and adjust instruction accordingly.

These findings suggest that feedback, particularly actionable and specific feedback, is an important support for learning from simulated teaching. They also point to the usefulness of teacher educators’ work to prepare preservice teachers for simulated tasks by helping them understand what argumentation-focused discussion is, unpacking the content of the task, and supporting productive reflections between discussions. Finally, findings suggest that teacher educators can be more responsive to their preservice teachers’ needs when they have the type of rich formative information that is provided by simulations.


Last Modified: 08/25/2021
Modified by: Jamie Mikeska
