One of the tasks in my hamster wheel at the moment is following up on a recent local strategic review of teaching. In talking to teaching teams about this, one of the possible avenues to emerge is a call for a teaching methods evidence base.
Apart from the obvious danger of reinventing the wheel, there's a catch: being scientists, these people mean numbers - as in proof that something is "better" than something else. But what the heck is "better"? A number of papers in the most recent edition of CBE Life Sciences Education address this point.
The standard of much published educational research is poor compared to, for example, most medical research. What does "poor" mean in this context? Small sample sizes, highly local circumstances (often reflecting unrecognized interventions), and studies that are descriptive rather than analytical. CBE Life Sciences Education strives to avoid these problems and is, for the most part, statistically rigorous. But does this help? It's still easy to find conflicting evidence in the literature for just about anything. There are no global models - people need to consider the individual circumstances of their own courses, modules and students. For this reason (and others), I'm not attracted to the idea of building yet another evidence base. Exemplars rather than evidence must be the way forward for those who care enough.
How many biology teachers does it take to change a light bulb? None - they must want to change their own practice. Any illumination is going to come from high-quality exemplars they can choose to adopt, not from light bulbs.