Podcast #75

Outside observers can give instructors valuable formative feedback, and with the right observers and the right instruments, classroom observation can also be a useful (if incomplete) measure of teaching quality. Our guest, Marilyne Stains, teaches in the Department of Chemistry at the University of Nebraska-Lincoln, where she specializes in chemical and science education. She has used a range of measures of instructor and student behavior in her research and recently co-authored the largest-ever study of STEM teaching practices, which analyzed classroom observation data from more than 2,000 classes. In this episode, we discuss the pros and cons of a variety of classroom observation techniques, from reliable objective measures like COPUS to completely unstructured note-taking.

You can subscribe to the Teach Better Podcast through your favorite podcast app or simply subscribe through iTunes if you don’t have one yet.

Show Notes

0:00 Intro

0:34 Welcoming Marilyne Stains. What are people doing in the classroom? The culture of privacy around teaching. Faculty don’t automatically observe each other. But there is a lot to gain–for both the observed and the observer.

4:45 Videotaped observations can provide fodder for rich faculty development conversations.

5:25 COPUS (Classroom Observation Protocol for Undergraduate STEM) is an objective measure of instructor and student behaviors. It’s NOT a measure of teaching quality. RTOP (Reformed Teaching Observation Protocol) measures the degree to which the instructor is doing inquiry-type teaching. The data these protocols collect makes it possible, for example, to measure the impact of faculty professional development, and both may be used for formative teacher assessment. COPUS has high inter-rater reliability, and it’s easy to train undergraduates to use it.

8:28 Marilyne uses COPUS for research on teaching, but she has colleagues who use COPUS for faculty development and formative feedback. How might you observe a peer or be observed for faculty development? Have a conversation first. Doug has been observed with COPUS ten times, with intervals between observations, and he got valuable data. You may think you’re active, but you’re not–or vice versa. COPUS is objective.

11:58 There are lots of things going on in a classroom that COPUS doesn’t capture. The OTOP (Oregon Teacher Observation Protocol) is also useful as a more qualitative measure. It’s a problem in general when the observer doesn’t know much about teaching. Summative evaluation via teaching observation is a tough nut to crack. We often assume that teaching experience translates into knowledge about teaching, but newer faculty might actually have more exposure to how learning works. And senior faculty can ‘freak out’ when they visit an active learning classroom. Some faculty assume that active learning is less challenging: they may see it as entertainment.

17:28 Humanities teachers have done active learning for a long time. It’s not just seminars: it’s group work, projects, peer instruction, etc. Faculty who want to do active learning may not know what to do. POGIL (Process Oriented Guided Inquiry Learning) for chemistry, biology, and math is one set of resources. But seminars can still be run in a teacher-centered way. It’s the bad Socratic Method: “Everyone Socrates talks to is stupid.” Classroom observation doesn’t measure any of the teaching that happens outside the classroom.

22:43 We may spend more time teaching outside the classroom than in–but we don’t know. Many organizations set standards for and evaluate the quality of online courses, but there is no equivalent for face-to-face courses.

25:23 Who should be doing the classroom observation? Having only senior faculty observe pre-tenure junior faculty is just not enough. Options include faculty outside the department from neighboring disciplines, though there are issues with observers who have no disciplinary knowledge. So ultimately we may need different people who bring different lenses. There are dangers in faculty feeling judged by colleagues who do research on teaching. Observation is ideally one part of a conversation, driven by faculty questions and goals.

32:34 Doug’s experience observing at another institution as part of a tenure review. Marilyne gave Doug the NSF report on teaching quality evaluation, which referred to the OTOP (Oregon Teacher Observation Protocol), and Doug found that useful. Some protocols focus on behaviors, others on how students participate in producing knowledge.

36:23 COPUS is built on the TDOP (Teaching Dimensions Observation Protocol). Marilyne had trouble establishing inter-rater reliability with TDOP and RTOP and found that easier with COPUS. Doug and Edward go meta on observation protocols.

38:55 Marilyne has been using COPUS to monitor change before and after participation in faculty development programs–and longitudinally. They found that faculty behaviors do change, although the new behaviors decline over the long term. Doug and Marilyne discuss experimental design, and Marilyne shares the actual process of recruiting research subjects. Those who DON’T sign up have higher self-efficacy about teaching–which is probably why they didn’t sign up. But the workshop raised participants’ self-efficacy about teaching to the level of non-participants in just two days. They tried to create communities to maintain the teaching behaviors, but so far without luck.

45:05 COPUS adoption across whole departments seems to be rare. Doug suggests COPUS could be used to compare departments, not to evaluate instructors. There is little research linking COPUS behaviors to learning outcomes, though UBC has seen trends. The data may be noisy, like the correlation between a certain vitamin and certain health outcomes. Doug worries about teachers trying to ‘game’ COPUS with pseudo-active learning. Marilyne emphasizes that COPUS does NOT measure teaching quality. People ask Marilyne what good teaching is, and she says there are too many factors. Qualitative evaluation is hard–as you recognize when grading papers.

52:13 A mistake in the faculty development classroom. Faculty don’t credit education research: “That’s not true in my classroom.” Using principles and ‘case studies’ instead of data. Faculty believe in ‘personal empiricism’: “I tried it once, and it didn’t work.” Using personal anecdotes to think about learning. Building a culture of collecting and analyzing data in order to talk about learning. The case for student portfolios.

1:03:20 Thanks and signing off.