Tuesday, September 29, 2015

How to write an essay

Problem 1: Students write descriptive essays which do not demonstrate critical thinking.
Solution: Title as question.

Problem 2: Massive over-reliance on essays as an assessment format in higher education.
Solution: ???

Henri, D., Morrell, L., and Scott, G. (2015) Ask a clearer question, get a better answer. F1000Research 4: 901. doi: 10.12688/f1000research.7066.1
Many undergraduate students struggle to engage with higher order skills such as evaluation and synthesis in written assignments, either because they do not understand that these are the aim of written assessment or because these critical thinking skills require more effort than writing a descriptive essay. Here, we report that students who attended a freely available workshop, in which they were coached to pose a question in the title of their assignment and then use their essay to answer that question, obtained higher marks for their essay than those who did not attend. We demonstrate that this is not a result of latent academic ability amongst students who chose to attend our workshops and suggest this increase in marks was a result of greater engagement with ‘critical thinking’ skills, which are essential for upper 2:1 and 1st class grades. The tutoring method we used holds two particular advantages: First, we allow students to pick their own topics of interest, which increases ownership of learning, which is associated with motivation and engagement in ‘difficult’ tasks. Second, this method integrates the development of ‘inquisitiveness’ and critical thinking into subject specific learning, which is thought to be more productive than trying to develop these skills in isolation.

I'm not quite sure how peer review works on the F1000 education channel. I have been asked to peer review articles in the past and have done so, but I'm not sure how open the process is. So here's my open peer review of this paper.

This is an interesting and potentially valuable study of a method to improve the quality of student writing. The sample size is relatively small and the major weaknesses are pointed out by the authors in the Discussion:
"For the purpose of this study we assumed that students who posed a question in the title of their essay had attended the workshop and understood the underlying concepts of the workshop, and this has been used as the independent factor in our analysis. We acknowledge that this lack of certainty in the allocation of students to the did/did not attend category does need to be borne in mind when interpreting our results. Another possible confounding factor is that voluntary workshop attendance may be skewed towards individuals who are more engaged or motivated with the module; and these individuals are more likely to obtain higher grades because of this higher engagement with the module content"
To help address these concerns, the authors should report an effect size alongside the p-values quoted in the results, so readers can judge how large the improvement in marks actually is.
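For example, a standardised mean difference such as Cohen's d could be reported alongside each p-value. A minimal sketch of the calculation (with made-up marks for illustration, not the study's data):

    import numpy as np

    def cohens_d(a, b):
        """Cohen's d: standardised difference between two group means,
        using the pooled standard deviation."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                             (len(b) - 1) * b.var(ddof=1)) /
                            (len(a) + len(b) - 2))
        return (a.mean() - b.mean()) / pooled_sd

    # Hypothetical essay marks: workshop attendees vs non-attendees.
    attended = [68, 72, 65, 70, 74, 66, 71]
    not_attended = [60, 64, 58, 63, 66, 61, 59]
    print(f"Cohen's d = {cohens_d(attended, not_attended):.2f}")

By the usual conventions a d around 0.8 or above counts as a large effect, which would make the significance tests far easier to interpret given the small sample.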

Thursday, September 17, 2015

The 3 P's of Feedback - again

Since I wrote about the 3 P's of good feedback a couple of weeks ago, I've had several conversations with colleagues which suggest the points need some amplification.

Not much to add to what I said previously - I suspect that for students prompt means 21 seconds or 21 minutes after submission, not 21 days - except that electronic submission of work has changed the game and is probably responsible for some of the dissatisfaction we face. When students had to trek across campus to a printer, print their work, then trek back to a departmental office to stand in a queue to hand it in, 21 days seemed like a fair interval to wait for a response. But that's not how it is any more. Clicking a Submit button online generates an expectation of a much quicker interaction. For anything other than simple automatically marked MCQs, this just isn't going to happen in the interval (seconds or minutes) that students want. I don't know what we do about this, other than to get better at explicitly telling students what to expect: "After submission you will receive your marks and feedback within 24 hours / one week / whenever."

Group feedback does not satisfy the feedback beast. However, it could buy us some time, if it's sufficiently prompt. Group feedback on essays (good points and common errors) within 24 hours, with detailed personal feedback to follow in 7 days? Feedback on exams is another huge problem that group feedback won't solve. The examination process used to take a few days, or maybe even hours for a small class. Now it takes weeks. This is partly because of student numbers, but mostly because of the huge backend bureaucracy we have built. Students don't understand this, and it further distances us from them. And yet I can't see the bureaucracy going away, so the only (inadequate) solution I have is to communicate better with students about why it takes us so long.

Positive feedback is the idea that has generated most heat in discussions. "If we don't tell students what's wrong with their work, how can they improve?" I was raised in an academic era when my work was bluntly, sometimes brutally, criticized (when needed). Crap was frequently scrawled in the margins, and it's hard for me to break out of that mould. It was tough at the time, but you could argue that it made me resilient (or maybe I was already resilient, which enabled me to survive). Either way, it doesn't work any more. If feedback is not positive, non-resilient students switch off and disengage. But that doesn't mean you can't tell students what's wrong with their work. Rather than writing "You didn't include any diagrams", say "If you include some diagrams in your next essay you will get better marks". One consequence of this is that the classic sh*t sandwich feedback formula is now beyond the pale. We're in the unbearably upbeat Have A Nice Day era. We need to adapt. Feedback is not peer review. Feedback is not performance monitoring; it is mentoring. Once again we have fallen down the crack between feedback and assessment.

Friday, September 11, 2015

Student use of Wikipedia as an academic resource

I am encouraged rather than discouraged by:

Selwyn, N., and Gorard, S. (2015) Students' use of Wikipedia as an academic resource - patterns of use and perceptions of usefulness. The Internet and Higher Education. 5 September 2015 doi: 10.1016/j.iheduc.2015.08.004
Wikipedia is now an established information source in contemporary society. With initial fears over its detrimental influence on scholarship and study habits now subsiding, this paper investigates what part Wikipedia plays in the academic lives of undergraduate students. The paper draws upon survey data gathered from students across two universities in Australia (n=1658), alongside follow-up group interview data from a subsample of 35 students. Analysis of this data suggests that Wikipedia is now an embedded feature of most students’ study, although to a lesser extent than other online information sources such as YouTube and Facebook. For the most part, Wikipedia was described as an introductory and/or supplementary source of information – providing initial orientation and occasional clarification on study topics. While 87.5 per cent of students reported using Wikipedia, it was seen to be of limited usefulness when compared with university-provided library resources, e-books, learning management systems, lecture recordings and academic literature databases. These findings were notably patterned in terms of students’ gender, year of study, first language spoken and subject of study.
  • Draws on survey data examining 1658 undergraduate students’ uses of digital technologies for academic purposes.
  • 87.5 per cent of students report having used Wikipedia to find information for their academic work, with 24.0 per cent of these considering Wikipedia to have been ‘very useful’.
  • Use and perceived usefulness of Wikipedia is most prevalent amongst students who are male, in advanced years of study, from non-English speaking households, and those studying engineering, science and medicine subjects.
  • Rather than constituting a primary source of information, students report Wikipedia mainly playing introductory or clarificatory roles in their information gathering and research.

See: Citing Wikipedia in Academic Work

Friday, September 04, 2015

Formative and shared assessment in higher education

"The aim of this article is to review the use of formative and shared assessment within higher education."

  • Setting clear learning goals that students can achieve
  • Providing students with feedback to guide them in their learning
  • Involving students in the learning process, self-evaluation and assessment
  • Promoting feedback as a process of dialogue
  • Finding a balance between the ideal time spent in formative assessment processes and conditions in which the course is developed

Formative and shared assessment in higher education. Lessons learned and challenges for the future. Assessment & Evaluation in Higher Education 03 Sep 2015 doi: 10.1080/02602938.2015.1083535
Formative assessment is the process by which teachers provide information to students during the learning process to modify their understanding and self-regulation. An important process within this is shared assessment, which refers to student involvement in the assessment and learning practice, a process of dialogue and collaboration between teacher and students aimed at improving the learning process, both individually and collectively. The purpose of this paper is to review the current state of affairs in depth. This paper therefore highlights, on the one hand, the lessons learned through research and development in higher education (i.e. providing clear learning goals and feedback, guiding learning, involving students in learning and assessment, promoting feedback as a process of dialogue, and making processes viable). On the other hand, these lessons also suggest some challenges and difficulties that must be addressed in the future in order to further improve formative and shared assessment in higher education. These include the need for more research on its effects, further conceptual clarification, the intersubjectivity of the process, recognition of the divergent processes and ethical principles, students’ involvement not only in assessment but also in determining academic grades, and broadening learning goals and objectives in FA & SA.

Wednesday, August 26, 2015

Facebook Addiction

Bergen Facebook Addiction Scale

Andreassen, C.S., Torsheim, T., Brunborg, G.S., & Pallesen, S. (2012) Development of a Facebook addiction scale. Psychological Reports, 110(2), 501-517
The Bergen Facebook Addiction Scale (BFAS), initially a pool of 18 items, three reflecting each of the six core elements of addiction (salience, mood modification, tolerance, withdrawal, conflict, and relapse), was constructed and administered to 423 students together with several other standardized self-report scales (Addictive Tendencies Scale, Online Sociability Scale, Facebook Attitude Scale, NEO–FFI, BIS/BAS scales, and Sleep questions). That item within each of the six addiction elements with the highest corrected item-total correlation was retained in the final scale. The factor structure of the scale was good (RMSEA = .046, CFI = .99) and coefficient alpha was .83. The 3-week test-retest reliability coefficient was .82. The scores converged with scores for other scales of Facebook activity. Also, they were positively related to Neuroticism and Extraversion, and negatively related to Conscientiousness. High scores on the new scale were associated with delayed bedtimes and rising times.
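As an aside, the two statistics that carry most of the weight here - the corrected item-total correlations used to retain one item per addiction element, and coefficient alpha - are straightforward to compute. A rough sketch with invented Likert responses (not the BFAS data):

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
        items = np.asarray(items, float)
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        k = items.shape[1]
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    def corrected_item_total(items):
        """Correlation of each item with the sum of the remaining items."""
        items = np.asarray(items, float)
        return np.array([
            np.corrcoef(items[:, i], np.delete(items, i, axis=1).sum(axis=1))[0, 1]
            for i in range(items.shape[1])
        ])

    # Made-up Likert responses (1-5) from 10 respondents to 3 candidate items.
    responses = np.array([
        [1, 2, 1], [3, 3, 2], [5, 4, 4], [2, 2, 3], [4, 5, 4],
        [1, 1, 2], [3, 4, 3], [5, 5, 5], [2, 3, 2], [4, 4, 5],
    ])
    print("alpha:", round(cronbach_alpha(responses), 2))
    print("corrected item-total r:", corrected_item_total(responses).round(2))

Within each of the six elements, the item with the highest corrected item-total correlation is the one retained for the final scale.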


Tuesday, August 25, 2015

The 3 P's of Good Feedback

Yet another paper about feedback in higher education - because it's still one of the major problems.

This fairly low-powered study uses a budgeting methodology to ask what students value: in other words, it gives them a notional budget and asks them how they would spend it.
For "Lecturer qualities", Good feedback comes top and Interactive lecturing style bottom. So all those years of being told to be interactive in lectures don't mean much - students want your boring PowerPoints (and to know what's in the exam).
For "Feedback information", Highlights the skills I need to improve for future assignments is top and Corrects grammatical errors is bottom.

For me, however, the most striking message from this paper is an almost throwaway comment in the Introduction on what students want from feedback:
  • Prompt - fair enough, although I suspect that for students prompt means 21 seconds or 21 minutes after submission, not 21 days.
  • Personal - group feedback for that class of over 300 is a stopgap which really isn't going to satisfy demand.
  • Positive - it doesn't matter if they can't write (in spite of what employers say); you can only engage them if you give them good news quickly.

Winstone, N.E., Nash, R.A., Rowntree, J., & Menezes, R. (2015) What do students want most from written feedback information? Distinguishing necessities from luxuries using a budgeting methodology. Assessment & Evaluation in Higher Education, 20 Aug 2015 doi: 10.1080/02602938.2015.1075956
Feedback is a key concern for higher education practitioners, yet there is little evidence concerning the aspects of assessment feedback information that higher education students prioritise when their lecturers' time and resources are stretched. One recent study found that, in such circumstances, students actually perceive feedback information itself as a luxury rather than a necessity. We first re-examined that finding by asking undergraduates to "purchase" characteristics to create the ideal lecturer, using budgets of differing sizes to distinguish necessities from luxuries. Contrary to the earlier research, students in fact considered good feedback information the single biggest necessity for lecturers to demonstrate. In a second study, we used the same method to examine the characteristics of feedback information that students value most. Here, the most important perceived necessity was guidance on improvement of skills. In both studies, students' priorities were influenced by their individual approaches to learning. These findings permit a more pragmatic approach to building student satisfaction in spite of growing expectations and demands.

Friday, August 14, 2015

Assessment and feedback - what do students want?

We have moved substantially to online marking (mostly via Turnitin) over the last 18 months. In reality, it's probably the only way we could cope with the numbers of students we have. But what do students think of this change? We don't know (because we haven't asked them, although they have not complained). Has this change helped with the NSS feedback question? (No.) This new paper addresses some of these questions and comes up with some interesting findings.

Individual students either like or dislike online marking: attitudes towards the two modalities are strongly negatively correlated, so most students favour one or the other. As a population, though, students come out broadly neutral, which has been our experience.
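A toy simulation of how that works - individually polarised attitudes can still produce a neutral-looking population average (purely illustrative numbers, not the paper's data):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical attitude scores (1-7 Likert-style) for 140 students.
    # Each student leans towards one modality, so online and offline
    # attitudes are strongly negatively correlated at the individual level.
    preference = rng.normal(0, 1.5, 140)            # latent lean: + = pro-online
    online  = np.clip(4 + preference + rng.normal(0, 0.5, 140), 1, 7)
    offline = np.clip(4 - preference + rng.normal(0, 0.5, 140), 1, 7)

    print("mean online :", online.mean().round(2))   # ~4: population looks neutral
    print("mean offline:", offline.mean().round(2))  # ~4: population looks neutral
    print("correlation :", np.corrcoef(online, offline)[0, 1].round(2))  # strongly negative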

On the question of feedback, the findings are more interesting. Students who like online marking tend to view it as a gateway to staff contact - the start of a conversation. This is problematic because staff mostly treat online assessment as a file-and-forget exercise. So even with the students who are pro-online marking, we are not meeting their expectations. But most importantly of all - STUDENTS HATE NEGATIVE FEEDBACK ... which explains the NSS results.
"We suggest that markers should consider developing a small bank of brief but positive comments (for example “nicely written” “good argument” ) that can be readily added to the assignment in the place of the ticks that might have been given on a traditionally submitted assignment. Appropriate positive comments specific to particular sections of the assignment could then easily be added to a pdf (through annotation), word document (in a comment box), or included in the suite of QuickMarks used in submission services such as Turnitin. These recommendations notwithstanding, we also advocate that university budget centres acknowledge that although online marking has many benefits, relative to offline marking, more time will be needed by markers if students are to receive appropriate positive feedback on their work, and for the benefits of online assignments to be fully realised."

Assignments 2.0: The Role of Social Presence and Computer Attitudes in Student Preferences for Online versus Offline Marking. The Internet and Higher Education, 8 August 2015. doi: 10.1016/j.iheduc.2015.08.002
This study provided the first empirical and direct comparison of preferences for online versus offline assignment marking in higher education. University students (N= 140) reported their attitudes towards assignment marking and feedback both online and offline, perceptions of social presence in each modality, and attitudes towards computers. The students also ranked their preferences for receiving feedback in terms of three binary characteristics: modality (online or offline), valence (positive or negative), and scope of feedback (general or specific). Although attitudes towards online and offline marking did not significantly differ, positive attitudes toward one modality were strongly correlated with negative attitudes toward the other modality. Greater perceptions of social presence within a modality were associated with more positive attitudes towards that modality. Binary characteristics were roughly equally weighted. Findings suggest that the online feedback modality will most effectively maximise student engagement if online assignment marking and feedback tools facilitate perceptions of social presence.