
NSF Org: | DUE Division Of Undergraduate Education |
Initial Amendment Date: | September 3, 2013 |
Latest Amendment Date: | August 11, 2017 |
Award Number: | 1245436 |
Award Instrument: | Standard Grant |
Program Manager: | Andrea Nixon, anixon@nsf.gov, (703) 292-2321, DUE Division Of Undergraduate Education, EDU Directorate for STEM Education |
Start Date: | October 1, 2013 |
End Date: | September 30, 2018 (Estimated) |
Total Intended Award Amount: | $199,979.00 |
Total Awarded Amount to Date: | $239,974.00 |
Funds Obligated to Date: | FY 2016 = $39,995.00 |
Recipient Sponsored Research Office: | 3100 MARINE ST, Boulder, CO, US 80309-0001, (303) 492-6221 |
Primary Place of Performance: | CO, US 80309-0580 |
NSF Program(s): | IUSE, TUES-Type 1 Project |
Primary Program Source: | 04001617DB NSF Education & Human Resources |
Award Agency Code: | 4900 |
Fund Agency Code: | 4900 |
Assistance Listing Number(s): | 47.076 |
ABSTRACT
Efforts to broaden the reach of college instructor professional development will be more successful if the design of professional development services is informed by good evaluation evidence about whether and how it is working. While improving student learning is the ultimate goal of college-level instructor professional development, directly measuring student outcomes is a difficult and expensive means of evaluation. Moreover, there is often a long lag between instructors' participation and any discernible impact on student outcomes.
This project is developing a more efficient and rapid way to evaluate the effectiveness of professional development and in particular to determine whether and how the instructors are changing their instruction as a result of participation. It is engaged in constructing and validating a self-report survey instrument that can be used to probe the initial classroom impact of professional development of college mathematics instructors, especially shifts in their use of class time and choice of instructional activities. The PI team is aware of sources of potential bias in self-reported survey responses and is focused on developing an instrument that is as free as possible from incentives to provide biased responses. Classroom observations are being compared with survey data from multiple sites to determine the conditions under which self-report accurately probes shifts in instructional practice that result from professional development. The goal is a validated survey instrument that will offer a general and inexpensive tool for measuring the impact of professional development in mathematics and, eventually, in science as well.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This project has sought to construct and validate a survey tool for measuring classroom instructional practices used in college mathematics. Such a tool would be useful to measure the outcomes of professional development (PD) of college mathematics instructors, detecting change in practices before and after a PD experience, without requiring expensive and time-consuming observations.
To validate the survey, the project compares classroom observations with instructors' self-reports of their own practice on survey items. We adapted the survey items from measures we had developed and used previously to evaluate a specific PD workshop. To compare instructors' responses with what a neutral observer sees, we adapted observation measures from protocols that other researchers developed for science classes, because the teaching practices common in mathematics differ from those in science. Importantly, our survey and observation instruments are well aligned with each other, something that has not always been the case in prior studies. Both focus on classroom behaviors rather than effectiveness, because a change in classroom behaviors (e.g., how class time is spent) may be the first change detectable after PD, before instructors become skilled at new practices. We collected survey and observation data from 34 courses, coding 297 class sessions (an average of 9 sessions per course) so that our description of each course as a whole was based on an adequate number of class days. Most statistical tests assume large samples, but observations pose a challenge: the number possible is limited by the high cost and effort of making them. To compensate for small samples, we researched and applied a variety of sophisticated statistical techniques.
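The report does not name the specific small-sample techniques used; as one illustration of the kind of approach that fits this setting, the sketch below (hypothetical data, Python) builds a bootstrap confidence interval for a course-level estimate from a handful of observed class sessions.

```python
# Illustrative sketch only: the report does not name its small-sample methods.
# One common approach is a bootstrap confidence interval for a course-level
# estimate built from a small number of observed class sessions (~9 per course here).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-session fractions of class time spent on group work
# for one course (9 observed sessions).
session_groupwork = np.array([0.10, 0.25, 0.00, 0.30, 0.15, 0.20, 0.05, 0.35, 0.10])

# Bootstrap the course-level mean by resampling sessions with replacement.
boot_means = np.array([
    rng.choice(session_groupwork, size=session_groupwork.size, replace=True).mean()
    for _ in range(10_000)
])

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"course mean = {session_groupwork.mean():.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```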
We then assessed the criterion validity of our survey in two ways. First, by comparing survey responses to observation data, we showed that instructors can make moderately accurate estimates of the type and amount of teaching and learning activities in their classrooms. This was especially true for readily identifiable activities such as group work or student presentations. Instructors were less able to report on the interactivity of their lecture style, although this comparison was possibly confounded by difficulties in creating reliable comparison criteria. Second, we compared interactivity scales formed from responses to multiple survey items with an interactivity scale formed from the observational data. The survey and observation scales correlated robustly, with correlation coefficients between 0.45 and 0.64. The creation of validated scales allows us to make a 'scorable' version of the survey for research and evaluation purposes. In exploring ways to make fair and useful comparisons of the data, we have also developed rigorous metrics for analyzing these types of data that go beyond simple means.
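As a hedged illustration of the scale-level comparison described above (not the project's data or scoring code), the following sketch correlates a hypothetical survey-based interactivity scale with an observation-based scale across 34 courses; Pearson's r is assumed, since the report does not state which coefficient was used.

```python
# Illustrative sketch with simulated data, not the project's actual analysis.
# It shows the kind of scale-level comparison described above: a survey-based
# interactivity scale correlated against an observation-based scale across courses.
import numpy as np

rng = np.random.default_rng(1)

n_courses = 34  # matches the number of courses in the study
obs_scale = rng.uniform(1, 5, n_courses)                          # hypothetical observation-based scores
survey_scale = 0.6 * obs_scale + rng.normal(0, 0.9, n_courses)    # hypothetical survey-based scores

# Pearson correlation between the two scales (the report gives r between 0.45 and 0.64,
# but the coefficient type is not stated; Pearson is assumed here).
r = np.corrcoef(survey_scale, obs_scale)[0, 1]
print(f"survey-vs-observation scale correlation: r = {r:.2f}")
```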
The project has intellectual merit in producing an aligned set of measurement instruments that we call TAMI, the Toolkit for Assessing Mathematics Instruction. So far TAMI includes an instructor survey, TAMI-IS, and an observation protocol, TAMI-OP, which is implemented as an automated, tablet-based observation worksheet that observers can use to time observations and to record and store data. We can match general categories of instruction (e.g., lecture, group work) across the two instruments and characterize instructors' use of certain active learning approaches. While differences in lecture 'style' are readily apparent in the observations, it is more difficult to capture the level of interactivity through surveys.
Using generalizability theory, we have analyzed the conditions for drawing conclusions about teaching from time samples of courses, and this is a distinctive contribution to the field. Most discussions of uncertainty in observation data focus on interrater reliability, or different raters' adherence to the protocol, but our analysis shows that the inherent day-to-day variability of teaching practices is a significant source of uncertainty that is neither well characterized in the literature nor well addressed by typical observation practices. Failure to account for day-to-day variation in time sampling may mischaracterize teaching practices in studies that seek to characterize courses as a whole, rather than single days.
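To make the time-sampling argument concrete, here is a small generalizability-style sketch on hypothetical data: it estimates between-course and day-to-day variance components from a balanced design and shows how the dependability of a course-level estimate grows with the number of observed days. It is illustrative only, not the project's analysis.

```python
# Illustrative generalizability-style calculation on hypothetical data:
# how much does day-to-day variability limit our ability to characterize
# a course from a sample of observed days?
import numpy as np

rng = np.random.default_rng(2)

n_courses, n_days = 10, 9  # balanced design: each course observed on 9 days
true_course_means = rng.normal(0.5, 0.15, n_courses)                            # hypothetical course-level practice
daily = true_course_means[:, None] + rng.normal(0, 0.20, (n_courses, n_days))   # plus day-to-day noise

# One-way random-effects variance components (courses as the object of measurement,
# observed days as the facet of generalization).
course_means = daily.mean(axis=1)
grand_mean = daily.mean()
ms_between = n_days * np.sum((course_means - grand_mean) ** 2) / (n_courses - 1)
ms_within = np.sum((daily - course_means[:, None]) ** 2) / (n_courses * (n_days - 1))

var_day = ms_within                                      # day-to-day (within-course) variance
var_course = max((ms_between - ms_within) / n_days, 0)   # between-course variance

# Generalizability coefficient for a course mean based on n observed days:
# more observed days shrink the contribution of day-to-day variability.
for n in (1, 3, 9):
    g = var_course / (var_course + var_day / n)
    print(f"{n} observed day(s): G = {g:.2f}")
```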
The project has broader impact by making progress toward a useful, simple evaluation instrument to measure change in instructional practice, for example the kinds of changes that may result from a PD experience. Other mathematics PD projects have expressed interest in the tool, and ongoing work is comparing data from TAMI surveys and observations to other measures of instructor practice.
Last Modified: 01/30/2019
Modified by: Sandra L Laursen
Please report errors in award information by writing to: awardsearch@nsf.gov.