Development of a Validated Self-Report Instrument for Measuring the Classroom Impact of Student-Centered Professional Development for College Instructors
To broaden uptake of student-centered teaching and learning approaches, professional development of college instructors (CIPD) is crucial, but efforts to broaden the reach of CIPD must rest on sound evaluation evidence about whether it is having the desired effect on teaching. While improved student learning is the ultimate goal of CIPD, measuring student outcomes directly is not always feasible: collecting and analyzing student data adds cost and complexity to an evaluation, and direct classroom observation demands substantial time and resources. A validated self-report assessment benefits undergraduate instructors who participate in workshops by giving them feedback on changes they have made to their instruction, and it benefits program administrators by providing information about the efficacy of their programs.
The goals of the project include 1) creating a usable survey to assess CIPD efforts, 2) validating the survey by comparing instructor survey responses with observations of classroom practice, and 3) presenting the findings in journal articles and conference presentations.
The following activities have either been completed or are in progress:
1) Piloting the survey with eight instructors in 'think-aloud' interviews, and producing a usable survey for the validation study.
2) Piloting the observation protocol for one semester with six instructors, and analyzing the pilot data for interrater reliability and generalizability.
3) Collecting two semesters of observations and surveys from 17 instructors at three institutions, with 177 classes observed. Analysis of these data is ongoing.
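As an illustration of the interrater reliability analysis mentioned above (not our actual analysis code), agreement between two observers coding the same two-minute segments can be summarized with Cohen's kappa, which corrects raw agreement for chance. The activity codes and rater data below are hypothetical:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same sequence of segments."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of segments where the two raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal code frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes for 10 two-minute segments
# (L = lecturing, G = group work, Q = question/answer)
a = ["L", "L", "G", "G", "L", "Q", "L", "G", "L", "L"]
b = ["L", "L", "G", "L", "L", "Q", "L", "G", "L", "Q"]
print(round(cohens_kappa(a, b), 3))  # → 0.643
```

Values near 1 indicate agreement well beyond chance; values near 0 indicate agreement no better than chance.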
Our study examines whether instructors can make accurate self-assessments of their own teaching when they are asked simply to report their behaviors rather than to judge the quality of their teaching. The assessment literature on self-report in this area is mixed; in several studies, instructors made claims about the interactivity of their teaching that were not supported by other sources of evidence, such as classroom observations. However, the broader literature on self-report suggests that it can be valid when respondents are not influenced by social desirability, are not rating the quality of their own performance, can access memories of the behaviors in question, and face no consequential outcomes tied to their responses. To test this hypothesis, we are comparing end-of-semester survey responses, in which instructors report on their classroom activities, with observations of their behavior in the classroom. The observation protocol is based on the Teaching Dimensions Observation Protocol (TDOP) and records instructor and student behavior for each two-minute segment of class. Participants in the study are math instructors at seven institutions. The goals of the project are being achieved as we gather these two sources of data and analyze the results.
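The survey-versus-observation comparison can be made concrete with a minimal sketch. The activity codes and self-reported percentages below are hypothetical, not our instrument's actual items: observed two-minute segment codes are converted to fractions of class time, then compared against the instructor's self-reported allocation:

```python
from collections import Counter

def observed_fractions(segment_codes):
    """Fraction of two-minute segments spent in each coded activity."""
    counts = Counter(segment_codes)
    total = len(segment_codes)
    return {code: counts[code] / total for code in counts}

def report_discrepancy(self_report, observed):
    """Absolute gap between self-reported and observed fractions per activity."""
    codes = set(self_report) | set(observed)
    return {c: abs(self_report.get(c, 0.0) - observed.get(c, 0.0)) for c in codes}

# Hypothetical 25 coded segments (a 50-minute class) vs. one instructor's survey estimate
segments = ["L"] * 15 + ["G"] * 6 + ["Q"] * 4
survey = {"L": 0.50, "G": 0.30, "Q": 0.20}   # self-reported shares of class time
obs = observed_fractions(segments)            # {'L': 0.6, 'G': 0.24, 'Q': 0.16}
disc = report_discrepancy(survey, obs)
print({c: round(v, 2) for c, v in sorted(disc.items())})  # → {'G': 0.06, 'L': 0.1, 'Q': 0.04}
```

In this toy case the instructor's description of the overall mix is roughly right, but the lecturing estimate is off by ten percentage points, mirroring the pattern our preliminary results suggest.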
Data collection and analysis are ongoing, but early results suggest that instructors can accurately describe their general approach to teaching yet are sometimes inaccurate in estimating the exact amount of time devoted to each activity. We believe the survey will be a usable tool for evaluating professional development projects and workshops.
Our goal is to complete data collection next spring and write up the results as a journal article. We are also completing an article describing our method for using observation protocols and testing how many classroom observations are needed to confidently characterize a teacher's instructional style over the course of a semester.
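The question of how many observations suffice can be illustrated with a simple standard-error calculation. This is a back-of-the-envelope sketch using hypothetical pilot values, not the generalizability analysis used in the study: given the between-class spread of a measured fraction, it finds the smallest number of observed classes for which the standard error of the semester mean falls below a target:

```python
import statistics

def observations_needed(per_class_values, target_se):
    """Smallest n whose estimated standard error of the mean is <= target_se,
    using the sample SD across already-observed classes."""
    sd = statistics.stdev(per_class_values)
    n = 1
    while sd / n ** 0.5 > target_se:
        n += 1
    return n

# Hypothetical fractions of class time spent lecturing in six piloted classes
lecture_fracs = [0.55, 0.70, 0.40, 0.65, 0.50, 0.60]
print(observations_needed(lecture_fracs, target_se=0.05))  # → 5
```

With this made-up spread, roughly five observations per instructor would pin down the mean lecturing fraction to within about five percentage points; a proper generalizability analysis additionally partitions variance among raters, occasions, and instructors.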
The development of our assessment buttresses efforts to
enhance instructor uptake of student-centered instructional methods. For example, Developing
Faculty Expertise is a central component of NSF's Transforming Undergraduate Education in
Science, Technology, Engineering, and Mathematics (TUES) program. At a recent meeting of
TUES PIs, 43 of 392 projects (11%) aimed to develop faculty expertise, representing at least $26
million of funding (George, Behar, Calinger, & Davies, 2013; NSF, 2013). And this effort is likely to expand, as TUES program officers consider immersive, multi-day workshops the most effective tool for propagating NSF-funded instructional initiatives at the college level
(Henderson, et al., 2013). Our instrument will directly aid the evaluation of such workshops and will help NSF and other funders to identify priorities for support of CIPD efforts.
Our project involves the creation of a survey used to evaluate professional development projects for undergraduate STEM instructors. We are attempting to match what instructors say about their teaching on the survey with what they actually do in the classroom, as captured by our observations. During the piloting phase of the study we realized that the same teacher's methods vary substantially from class to class. This necessitated changing our research plan to include more observations and a sampling scheme that confidently captures this variability.