Wednesday, December 18, 2013

Mastery learning - my struggle

With our shiny new biological sciences curriculum approved for 2014, one of the main things occupying me at the moment is planning the new module I convene - BS2000: Research Skills.

Our curriculum redesign had several guiding principles. Reducing the overall number of modules and student assessment load were two of them. Partially as a consequence of that, previous key skills modules have been absorbed into subject modules. The Research Skills module is the exception to that principle as it is designed to equip students for the challenge of their final year research project. As I know from long experience, getting students to engage with anything that smacks of "skills" rather than biology is difficult, and I know exactly what approach I would like to take to overcome that problem: mastery learning. Consequently, I was very interested in a new paper, which describes almost exactly the approach I would like to take with BS2000:

Lesley J. Morrell (2013) Use of Feed-forward Mechanisms in a Novel Research-led Module. Bioscience Education. DOI: 10.11120/beej.2013.00020
Abstract: I describe a novel research-led module that combines reduced academic marking loads with increased feedback to students, and allows students to reflect on and improve attainment prior to summative assessment. The module is based around eight seminar-style presentations (one per week), on which the students write 500-word ‘news & views’ style articles (short pieces highlighting new results to a scientific audience). Students receive individual written feedback (annotated electronically on the work), plus an indicative mark, on their first submitted report. For subsequent reports, only a subset is marked each week, such that each student receives feedback on two further submissions. Simultaneously, they have access to written feedback on their peers’ reports (a total of two reports per student enrolled on the module). Students are encouraged to read and apply the general and specific messages from all the feedback to their own subsequent work (using it as feed-forward). At the end of the module, students self-assess their eight submissions and select the two they believe are their best pieces to put forward for summative assessment. Combining data from three cohorts, student attainment increased throughout the module, with higher marks for the two chosen reports than for the two marked reports or their first report. Students selecting previously unmarked reports also showed a greater increase in their mark for the module than students selecting reports that had previously received a mark. Module evaluation forms revealed that the students found access to feedback on others’ work helpful in shaping their own assignments.

But there's a problem. BS2000 will have over 350 students on it, not 32 as described in this paper. Much as we would like to, the team of five co-convenors would not be able to cope with the workload of the formative assessments in this model. There are plenty of American models for mastery learning with large student cohorts (such as Benjamin Bloom's The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring. (1984) Educational Researcher, 13(6), 4-16)¹, but they mostly fall back on online testing and MCQs, which are not suitable for the skills we need to develop in this module (critical appraisal, ethics, information literacy). So I'm stuck, and how to square this circle is the problem the module team and our advisers have got to figure out over the next few months. Right now our proposed solution is group working, effectively dividing 350 by five, but it's not clear to me whether that's the best way to get students who are still a year away from a final year project to engage with skills development.

¹ If you're looking for someone to blame for Udacity, there's a direct lineage from this paper to the present day.


  1. Have you considered getting the students involved in the marking?
    For each submission, create a rubric and mark one student's work as an example (if you have 5 staff this will be 5 examples).
    Then each student (referring to the rubric and the 5 examples) has to mark 3 other students' work (and in return gets their submission marked thrice).

    Students learn when they write theirs, learn again when they mark others, and learn for the 3rd time when they read their feedback.

    Sadly you will still be stuck with the 700 summative assessments - but at least the students will have some appreciation of how much work marking them all will be (and I guess as it's summative at this stage you don't need to provide detailed feedback, only a mark?)
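    For what it's worth, the rotation described above (each student marks 3 others and is marked by 3 others, nobody marks themselves) is easy to automate. A minimal sketch in Python - the function name and the use of a shuffled cyclic rotation are my own illustration, not part of any VLE's tooling:

    ```python
    import random

    def assign_reviewers(students, k=3):
        """Assign each student k peers to mark, so each is marked exactly k times.

        After shuffling, student at position i marks the students at positions
        i+1, ..., i+k (mod n). Nobody is assigned their own work.
        """
        n = len(students)
        assert n > k, "need more students than reviews per student"
        order = students[:]
        random.shuffle(order)  # shuffle so pairings differ each submission round
        return {order[i]: [order[(i + j) % n] for j in range(1, k + 1)]
                for i in range(n)}

    # Example: 6 students, each marks 3 others and is marked by 3 others.
    peers = assign_reviewers(["A", "B", "C", "D", "E", "F"])
    ```

    Re-running it for each weekly submission gives students a different set of markers every time.
    
    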

    1. Thanks for the comment. I'm not sure how we handle the volume involved in this. MOOC platforms such as Coursera can do this but we don't have access to any such tools, and my experience of peer grading on Coursera at least has been very variable.

  2. A few thoughts:
    1: Following on from Joseph's suggestion - what's the VLE you have? I know that Moodle has peer marking options, as does Turnitin (if you have access to the full capabilities - unfortunately we don't via our Moodle/TII link). That might help you randomly assign a few items to each student.

    2: How to then deal with those that don't take it seriously/don't submit? Do you only get the submitters to do the reviewing, or do you get them all to do fewer - so that at least the reviewers would get *something* out of the exercise?

    3: Related to the US idea, if you had a few sample answers that you'd developed, you could use MCQs (or, better, ranking-type questions) asking students to order them from best to worst.

    4: I wonder if you could use something like Wordle somehow - perhaps put some sample bits up in the VLE, get students to write some free text on them, then combine all the answers for a single sample & wordle it. See if lots of them use similar words in the comments they write.

    5: But, yep! Research Skills is not an easy one to teach. Am just marking some group research projects now. This year's cohort seem to have gone overboard on the literature reviews - despite being told on many occasions that they have to do some primary research (you'd have thought they'd have worked out I was getting them to learn about questionnaires / surveys/ how to use Google forms for some reason other than filling up the time, wouldn't you?) So their primary data is rather limited in most cases. (Mind you, compared to last year, the lit reviews are generally better & most have found 'academic' sources rather than just general stuff, so we do have progress :)

    1. We're required to use Blackboard for this module. There is a peer marking option but it is so formidably complex that it looks like far more trouble than it's worth. I really don't want to use MCQs in anything other than a peripheral context on this module; they are alien to the skills we are trying to develop in students to support final year research projects.

  3. Maybe a bit off topic, but do you believe that research skills should be taught in a separate module? I think a more holistic approach is more suitable (see also the 4C/ID model by Van Merrienboer).

    Regarding your puzzle, I would go for a mix of group reports, intergroup peer review (groups assessing other groups), intragroup peer review (group members assessing each other) and teacher assessment of the groups. These assessments should not occur only at the end, but also during the course.