
Monday, November 30, 2015

Assessment and learning without grades

Can you eliminate grades and stress from higher education and make university about learning rather than grades? Yes, you can. Can you persuade a government that wants to reduce higher education to a meaningless league table of misleading metrics that it's a good idea?



Assessment and learning without grades? Motivations and concerns with implementing gradeless learning in higher education. (2015) Assessment & Evaluation in Higher Education, doi: 10.1080/02602938.2015.1114584
The relationship between assessment and learning in higher education often comes down to a single thing: a grade. Despite widespread criticism of grades as inexact tools, whose overemphasis undermines student learning and negatively affects student well-being, they continue to be the norm in the assessment of student learning. This paper analyses an alternate form of assessment: so-called ‘gradeless learning’. This study theoretically and geographically contextualises the recent implementation of a gradeless learning policy at a large public university in Asia, and presents findings from a student opinion survey about the policy. The paper shows that respondents overwhelmingly understand and often agree with the central claims of gradeless learning, including its potential to ease students into college life, allow them to make more daring choices in their studies and even develop as lifelong learners. However, the aim of relieving stress among one group of students has increased stress for others. The study explains the circumstances that create this divergence in student stress levels, which are both locally specific and common to all gradeless systems. The paper concludes by discussing the effectiveness of the gradeless system in achieving its aims and suggesting future research avenues.



Thursday, November 26, 2015

When The Money Runs Out

This paper asks a difficult question, and comes up with ... an answer. I don't think it is the answer, but it is an answer to a question which many of us are going to have to address soon.

Pull quote:
"Our students pay high fees, have high workloads and want face-to-face feedback from experienced tutors. There is a subtle problem here. Student-centred approaches involving self-regulation and active learning may promote deep learning and high generalisation of outcomes, but skills acquisition seems to require at least some degree of authoritative feedback. The question is how to provide the latter as efficiently as possible without diverting learners from the former."


What if best practice is too expensive? Feedback on oral presentations and efficient use of resources. (2015) Assessment & Evaluation in Higher Education, doi: 10.1080/02602938.2015.1109054





Thursday, November 12, 2015

Keynes: the return of the master

I've been a Keynesian since before I had labels to put on these things, but I've been slow to catch up with Robert Skidelsky's commentary on the current financial crisis.

Robert Skidelsky's Keynes: the return of the master (2010) is divided into three sections. The first is scene-setting history and biography. The second is likely to be rather heavy going for non-economists, but it was the third section that really grabbed me. Here, Skidelsky goes beyond analysis of the current crisis to propose solutions for our woes (and also has a pop at a few neo-Keynesians, such as Stiglitz). But he also goes way beyond economics with a discussion of religion, duty, ethics and post-Utilitarianism, particularly G.E. Moore's influence on Keynes.

Skidelsky also has a shot at asking 'How much is enough?', suggesting that Keynes didn't quite hit that nail on the head. How Much is Enough? Money and the Good Life (2012) is on my C-word reading list now.





Monday, November 09, 2015

Chalk and Cheese

In a week when all was doom and gloom over #HEgreenpaper, something good happened. Paul Orsmond and Stephen Merry published a paper.

One day last week I was ranting at one of my project students about contrasts - as a piece of statistical jargon and as a vehicle to construct hypotheses. So let's have some contrasts. Orsmond and Merry have let the cat out of the bag - it's not all about teachers, it's about the students too. I misread one sentence of their paper and temporarily thought they were calling for "non-constructive alignment" - then I was disappointed that they hadn't. Anyhow, the contrast between the reductive approach of #HEgreenpaper and the constructive approach in this paper could not be greater. You've read one, now read the other:


Tutors’ assessment practices and students’ situated learning in higher education: chalk and cheese. Assessment & Evaluation in Higher Education, 28 Oct 2015. doi: 10.1080/02602938.2015.1103366
This article uses situated learning theory to consider current tutor assessment and feedback practices in relation to learning practices employed by students outside the overt curriculum. The case is made that an emphasis on constructive alignment and explicitly articulating assessment requirements within curricula may be misplaced. Outside of the overt curriculum students appear to be interdependent learners, participating in communities of practice and learning networks, where sense-making occurs through negotiation and there is identity development. Such negotiation may translate curriculum requirements articulated by tutors into unexpected meanings. Hence, tutors’ efforts might be better placed on developing students’ ability to self-assess and to effectively evaluate and negotiate information, rather than primarily on their own delivery of the curriculum content and feedback. Tutors cannot be fully effective if they fail to consider students’ learning outside the overt curriculum, and ways to facilitate such learning processes are suggested together with future research directions.




Friday, November 06, 2015

How to fix feedback

Yet another paper telling us how to (start to) fix the feedback problem. This one contains some very sensible recommendations, which I have highlighted below. Those of you who have been playing along for many years may feel that this manuscript is rather similar to this and this (not referenced, but perhaps that's understandable given that the HEA have binned all their publicly-funded open-access journals). And so we reinvent the wheel. Again. Anyway, here's how to fix feedback:

Clear Transferability - Programme-level assessment. Yes please. Can't see it happening.

Feedback On Draft Work - Yes please, it's feedback, not assessment. Lift and separate.

Directly Linked To Criteria - Rubrics. Hmm... maybe...

Wasted Opportunities - Separate feedback and assessment. It's simple, isn't it? I'll say it again. Separate feedback and assessment. Want me to say it again? OK, separate feedback and assessment.



Making connections: technological interventions to support students in using, and tutors in creating, assessment feedback. (2015) Research in Learning Technology, 23: 27078 - http://dx.doi.org/10.3402/rlt.v23.27078
This paper explores the potential of technology to enhance the assessment and feedback process for both staff and students. The ‘Making Connections’ project aimed to better understand the connections that students make between the feedback that they receive and future assignments, and explored whether technology can help them in this activity. The project interviewed 10 tutors and 20 students, using a semi-structured approach. Data were analysed using a thematic approach, and the findings have identified a number of areas in which improvements could be made to the assessment and feedback process through the use of technology. The findings of the study cover each stage of the assessment process from the perspective of both staff and students. The findings are discussed in the context of current literature, and special attention is given to projects from the UK higher education sector intended to address the same issues.






Thursday, November 05, 2015

Let's be honest

Bloxham, S., den-Outer, B., Hudson, J., & Price, M. (2015) Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education, 23 Mar 2015, 1-16
Unreliability in marking is well documented, yet we lack studies that have investigated assessors’ detailed use of assessment criteria. This project used a form of Kelly’s repertory grid method to examine the characteristics that 24 experienced UK assessors notice in distinguishing between students’ performance in four contrasting subject disciplines: that is their implicit assessment criteria. Variation in the choice, ranking and scoring of criteria was evident. Inspection of the individual construct scores in a sub-sample of academic historians revealed five factors in the use of criteria that contribute to marking inconsistency. The results imply that, whilst more effective and social marking processes that encourage sharing of standards in institutions and disciplinary communities may help align standards, assessment decisions at this level are so complex, intuitive and tacit that variability is inevitable. We conclude that universities should be more honest with themselves and with students, and actively help students to understand that application of assessment criteria is a complex judgement and there is rarely an incontestable interpretation of their meaning.


"Accepting the inevitability of grading variation means that we should review whether current efforts to moderate are addressing the sources of variation. This study does add some support to the comparison of grade distributions across markers to tackle differences in the range of marks awarded. However, the real issue is not about artificial manipulation of marks without reference to evidence. It is more that we should recognise the impossibility of a ‘right’ mark in the case of complex assignments, and avoid overextensive, detailed, internal or external moderation. Perhaps, a better approach is to recognise that a profile made up of multiple assessors’ judgements is a more accurate, and therefore fairer, way to determine the final degree outcome for an individual. Such a profile can identify the consistent patterns in students’ work and provide a fair representation of their performance, without disingenuously claiming that every single mark is ‘right’. It would significantly reduce the staff resource devoted to internal and external moderation, reserving detailed, dialogic moderation for the borderline cases where it has the power to make a difference. This is not to gainsay the importance of moderation which is aimed at developing shared disciplinary norms, as opposed to superficial procedures or the mechanical resolution of marks."



It's quite easy to criticise this paper - a small-scale study (n=24), no attempt at statistical analysis or validation. But there's still an inescapable feeling that as the stakes have escalated, HE is kidding itself about assessment practices.