Last fall I spent a fair bit of time analyzing the determinants of midterm performance (e.g., attendance and video lecture watching) in my big econometrics class. It was difficult to interpret many of the results because of the classic "correlation does not equal causation" problem. For example, I really wanted to know how time spent studying affected scores, and found that reported hours spent studying was negatively correlated with scores. I think it is unlikely that the causal effect of an additional hour of studying is negative; it is much more likely that the students having the most trouble with the material were the ones who studied the most. And then there’s the fact that quality of studying matters at least as much as quantity.
At the end of that semester my friend (and cohost on the Teach Better Podcast) Edward O’Neill and I came up with a new question to try to capture how students study:
What did you do to study for the midterm exam? (Choose all that apply)

a. Read the book
b. Re-watch portions of the video lectures
c. Go over your notes
d. Study the problem sets and solutions
e. Re-work problems from the problem sets
f. Work new sample problems
This year I added this question to my midterm survey along with the same questions I asked last year (e.g., How much time did you spend studying for the midterm exam?). I also put the survey right into the exam and gave students two points for filling it out instead of emailing a link to a Google Form survey the next day. This brought the response rate from 74% to 100%. If only all surveys could use this method to combat non-response.
Here’s what my 121 students reported:
Type of study            | Percent who did it
-------------------------+-------------------
Read the book            | 31%
Re-watch video lectures  | 55%
Go over notes            | 77%
Study problems+solutions | 90%
Re-work problem sets     | 26%
Work new problems        | 57%
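For anyone tabulating a check-all-that-apply question like this one, the percentages are just the means of one 0/1 indicator column per strategy. Here is a minimal sketch in Python; the variable names and the four toy responses are my own invention, not the actual survey data:

```python
# Each survey response is a dict of strategy -> True/False indicators.
# (Hypothetical data for illustration, not the real class responses.)
responses = [
    {"read_book": True,  "rework_ps": False},
    {"read_book": False, "rework_ps": True},
    {"read_book": False, "rework_ps": False},
    {"read_book": True,  "rework_ps": False},
]

def selection_rates(responses):
    """Share of respondents who selected each strategy."""
    strategies = responses[0].keys()
    n = len(responses)
    return {s: sum(r[s] for r in responses) / n for s in strategies}

rates = selection_rates(responses)  # e.g. {"read_book": 0.5, "rework_ps": 0.25}
```

Because the question is "choose all that apply," the percentages are computed per strategy and need not sum to 100%.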
I think it’s fascinating how few students read the book or go back and re-work problems. Far more seem to be reading problems and solutions.
Here’s how midterm exam scores looked for those who did and did not engage in each type of studying:
                         | Avg midterm of   | and those
Type of study            | those who did it | who did not
-------------------------+------------------+------------
Read the book            | 64.5             | 64.6
Re-watch video lectures  | 62.5             | 67.1
Go over notes            | 65.4             | 61.7
Study problems+solutions | 64.6             | 64.4
Re-work problem sets     | 64.5             | 64.6
Work new problems        | 65.0             | 64.0
The only difference that is even marginally significant (p=0.09) is between students who did and did not re-watch the video lectures. At least part of the reason for these somewhat unintuitive results is that students who differ in how they study likely differ on a variety of other relevant characteristics. For example, those students who reported re-watching video as a study strategy also attended far fewer lectures.
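That group comparison amounts to a two-sample t-test on midterm scores, split by whether a student reported the strategy. Here is a minimal sketch of Welch's version of the test (which does not assume equal variances across groups) using only the standard library; the scores below are synthetic, not the actual class data:

```python
from math import sqrt
from statistics import mean, variance

# Synthetic midterm scores for illustration (not the real class data):
rewatched = [58, 62, 60, 65, 59, 63, 61, 64]   # reported re-watching video
others    = [66, 70, 64, 68, 72, 65, 69, 67]   # did not

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t_stat = welch_t(rewatched, others)  # negative: re-watchers scored lower here
```

In practice one would get the p-value from the t distribution (e.g., `scipy.stats.ttest_ind` with `equal_var=False`); the point is simply that the comparison is a difference in group means scaled by its standard error.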
It is slightly more illuminating to look at the results with a regression model that tries to control for background knowledge of statistics, average weekly study hours, number of hours spent studying specifically for the midterm, and number of lectures attended. Next week I’ll analyze data on how many of the lectures they watched on video during the semester, but in the meantime, here’s what I have:
         midterm | Coef.       Std. Err.   t       P>|t|
-----------------+---------------------------------------
       somestats |  2.984823   3.310259    0.90    0.369
        lotstats |  .0202767   4.214091    0.00    0.996
  reg_study_3to4 | -5.449729   5.051488   -1.08    0.283
  reg_study_5to6 | -2.721032   5.322659   -0.51    0.610
 reg_study_7plus |  .6499088   6.988505    0.09    0.926
   mt_study_4to8 | -2.496174   6.24482    -0.40    0.690
   mt_study_9plus| -12.36427   6.362831   -1.94    0.055
                 |
     ms_nlectures|
             6-8 | -.1417638   4.641844   -0.03    0.976
            9-10 |  3.160453   4.600167    0.69    0.494
           11-13 |  5.171283   4.372493    1.18    0.240
                 |
     ms_how_book |  1.15838    3.09346     0.37    0.709
    ms_how_video | -2.205832   2.959473   -0.75    0.458
    ms_how_notes |  2.535813   3.355605    0.76    0.452
ms_how_review_ps |  1.288894   4.962381    0.26    0.796
ms_how_rework_ps |  5.096169   4.452382    1.14    0.255
ms_how_new_probs |  5.07274    4.033747    1.26    0.211
           _cons | 64.55459    7.971838    8.10    0.000
First the bad news: not a single coefficient is statistically significant at the 5% level. This is in part due to a relatively small sample size (121 students) and in part due to missing important confounding variables. The only coefficient that is significant at the 10% level is on studying at least 9 hours for the midterm. That means, holding everything else constant, students who studied 9 or more hours for the midterm scored about 12 points lower than students who studied less than four hours.
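For readers less familiar with regressions on indicator variables: the coefficient on each dummy is the estimated score difference relative to the omitted category, holding the other regressors fixed. Here is a minimal sketch of that kind of OLS with numpy, using made-up data and only two of the dummies; the variable names and numbers are my own, not the actual class data or the regression above:

```python
import numpy as np

# Synthetic data: midterm scores regressed on two 0/1 indicators,
# e.g. "studied 9+ hours" and "worked new problems".
rng = np.random.default_rng(0)
n = 121
studied_9plus = rng.integers(0, 2, n)
new_problems = rng.integers(0, 2, n)
midterm = 64 - 12 * studied_9plus + 5 * new_problems + rng.normal(0, 10, n)

# Design matrix with an intercept column; fit by least squares.
X = np.column_stack([np.ones(n), studied_9plus, new_problems])
beta, *_ = np.linalg.lstsq(X, midterm, rcond=None)

# Standard errors and t statistics from the usual OLS formulas.
resid = midterm - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = beta / se
```

With this setup, `beta[1]` recovers something close to the true -12-point effect of the "9+ hours" dummy, interpreted relative to students for whom the indicator is zero, just as the -12.36 coefficient in the table is interpreted relative to the omitted "less than four hours" group.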
The slightly good news is that the coefficients on the variables representing how students studied are consistent with current research in cognitive science on effective studying, even if they aren’t statistically significant. In particular, the estimated effects of actually working problems similar to those found on the exam (about 5 points) are larger than those of any other study strategy.
In his fantastic video series on how students can get the most out of studying, Stephen Chew emphasizes study strategies that involve deep processing of information. He gives many examples, but one is to practice working problems. He also points out there are more and less effective ways to read a textbook and review notes. Next time I’m going to create a more detailed set of questions to better distinguish studying that involves shallow vs. deep processing.
In addition to Dr. Chew’s videos, the following podcast, video, and article also discuss how we can apply insights from cognitive science in a college context:
How to use cognitive psychology to enhance learning (Dr. Robert Bjork on the Teaching in Higher Ed podcast)
Science of Student Ratings (a 40m presentation by Sam Moulton, Director of Educational Research and Assessment at Harvard’s Derek Bok Center)
Applying Psychological Science to Higher Education: Key Findings and Open Questions (an article by Sam Moulton)