Wednesday, October 05, 2016

Microsoft Madness #OneNoteFail

OneNote TIL that the desktop version of Microsoft OneNote 2016 (but, as far as I can tell, none of the many, many other versions of OneNote) allows students to alter the creation dates of documents.

I can't imagine what the possible justification for this is, but as we were relying on OneNote to datestamp student diaries, it looks as if Microsoft may just have consigned OneNote to oblivion as far as we are concerned.

Why Microsoft, why?

Thursday, September 22, 2016

Statistics in science as a whole is a mess. Ecology is no different from the rest of science, although maybe slightly better than some parts. Statistical analysis is becoming more sophisticated, but one thing is clear - R is winning the race.

The mismatch between current statistical practice and doctoral training in ecology. Ecosphere 17th August 2016. doi: 10.1002/ecs2.1394
Ecologists are studying increasingly complex and important issues such as climate change and ecosystem services. These topics often involve large data sets and the application of complicated quantitative models. We evaluated changes in statistics used by ecologists by searching nearly 20,000 published articles in ecology from 1990 to 2013. We found that there has been a rise in sophisticated and computationally intensive statistical techniques such as mixed effects models and Bayesian statistics and a decline in reliance on approaches such as ANOVA or t tests. Similarly, ecologists have shifted away from software such as SAS and SPSS to the open source program R. We also searched the published curricula and syllabi of 154 doctoral programs in the United States and found that despite obvious changes in the statistical practices of ecologists, more than one-third of doctoral programs showed no record of required or optional statistics classes. Approximately one-quarter of programs did require a statistics course, but most of those did not cover contemporary statistical philosophy or advanced techniques. Only one-third of doctoral programs surveyed even listed an optional course that teaches some aspect of contemporary statistics. We call for graduate programs to lead the charge in improving training of future ecologists with skills needed to address and understand the ecological challenges facing humanity.

Friday, September 16, 2016

Motivation to learn

Five theories:
  1. Expectancy-value
  2. Attribution
  3. Social-cognitive
  4. Goal orientation
  5. Self-determination

Motivation to learn: an overview of contemporary theories. Medical Education 15 September 2016 doi: 10.1111/medu.13074
Objective: To succinctly summarise five contemporary theories about motivation to learn, articulate key intersections and distinctions among these theories, and identify important considerations for future research.
Results: Motivation has been defined as the process whereby goal-directed activities are initiated and sustained. In expectancy-value theory, motivation is a function of the expectation of success and perceived value. Attribution theory focuses on the causal attributions learners create to explain the results of an activity, and classifies these in terms of their locus, stability and controllability. Social-cognitive theory emphasises self-efficacy as the primary driver of motivated action, and also identifies cues that influence future self-efficacy and support self-regulated learning. Goal orientation theory suggests that learners tend to engage in tasks with concerns about mastering the content (mastery goal, arising from a ‘growth’ mindset regarding intelligence and learning) or about doing better than others or avoiding failure (performance goals, arising from a ‘fixed’ mindset). Finally, self-determination theory proposes that optimal performance results from actions motivated by intrinsic interests or by extrinsic values that have become integrated and internalised. Satisfying basic psychosocial needs of autonomy, competence and relatedness promotes such motivation. Looking across all five theories, we note recurrent themes of competence, value, attributions, and interactions between individuals and the learning context.
Conclusions: To avoid conceptual confusion, and perhaps more importantly to maximise the theory-building potential of their work, researchers must be careful (and precise) in how they define, operationalise and measure different motivational constructs. We suggest that motivation research continue to build theory and extend it to health professions domains, identify key outcomes and outcome measures, and test practical educational applications of the principles thus derived.

Nicely contextualizes Carol Dweck's work.

Thursday, September 15, 2016

Assessing Student Learning

For those of us involved in curriculum redesign, the University of Leicester Learning Institute has put together a useful page on assessment and feedback in the form of a "traffic lights" system for a wide range of assessment strategies (not just essays and exams!). You may or may not agree with all their assessments of how long each form of assessment takes, but overall it's a very useful "thinking aid" when you are trying to redesign assessments - worth bookmarking.

Monday, August 08, 2016

A quiet life

"Students collude with academics in a ‘disengagement compact’, which is code for ‘I’ll leave you alone if you’ll leave me alone’... This promotes a cautious culture of business-as-usual. In most institutions, internal quality assurance departments prefer a quiet life and reinforce the status quo. In the end, the challenge of motivating students to undertake formative tasks surmounts the potential value of those tasks. The idea that well-executed formative assessment could revolutionise student learning has not yet taken hold."

The implications of programme assessment patterns for student learning. Assessment & Evaluation in Higher Education 02 Aug 2016 doi: 10.1080/02602938.2016.1217501
Evidence from 73 programmes in 14 U.K universities sheds light on the typical student experience of assessment over a three-year undergraduate degree. A previous small-scale study in three universities characterised programme assessment environments using a similar method. The current study analyses data about assessment patterns using descriptive statistical methods, drawing on a large sample in a wider range of universities than the original study. Findings demonstrate a wide range of practice across programmes: from 12 summative assessments on one programme to 227 on another; from 87% by examination to none on others. While variations cast doubt on the comparability of U.K degrees, programme assessment patterns are complex. Further analysis distinguishes common assessment patterns across the sample. Typically, students encounter eight times as much summative as formative assessment, a dozen different types of assessment, more than three quarters by coursework. The presence of high summative and low formative assessment diets is likely to compound students’ grade-orientation, reinforcing narrow and instrumental approaches to learning. High varieties of assessment are probable contributors to student confusion about goals and standards. Making systematic headway to improve student learning from assessment requires a programmatic and evidence-led approach to design, characterised by dialogue and social practice.

Friday, July 29, 2016

You are who you learn: authentic assessment and team analytics?

Forget facts - that's the learning of the past. The learning of the future is soft skills. But how do you assess them? Peter Williams argues that we can approach the problem through learning analytics. Along the way he also provides the best theoretical introduction to authentic assessment that I've read yet. He also flags Lombardi's definition of authentic learning:

  1. Real-world relevance: the need for authentic activities within a realistic context.
  2. Ill-defined problem: confronting challenges that may be open to multiple interpretations.
  3. Sustained investigation: undertaking complex tasks over a realistic period of time.
  4. Multiple sources and perspectives: employing a variety of perspectives to locate relevant and useful resources.
  5. Collaboration: achieving success through division of labour and teamworking.
  6. Reflection (metacognition): reflection upon individual and team decisions.
  7. Interdisciplinary perspective: encouraging the adoption of diverse roles and thinking.
  8. Integrated assessment: coinciding the learning process with feedback that reflects real-world evaluation.
  9. Polished products: achieving real and complete outcomes rather than completing partial exercises.
  10. Multiple interpretations and outcomes: appreciating diverse interpretations and competing solutions.

So this paper is worth reading for the above reasons. But am I convinced by his pitch for learning analytics as the way forward? No - it's completely fanciful and unsupported by any evidence. Which makes me feel better - we agree on the problem, and it's not just me being thick because I can't quite figure out the solution.

Assessing collaborative learning: big data, analytics and university futures. Assessment & Evaluation in Higher Education 28 Jul 2016 doi: 10.1080/02602938.2016.1216084
Assessment in higher education has focused on the performance of individual students. This focus has been a practical as well as an epistemic one: methods of assessment are constrained by the technology of the day, and in the past they required the completion by individuals under controlled conditions of set-piece academic exercises. Recent advances in learning analytics, drawing upon vast sets of digitally stored student activity data, open new practical and epistemic possibilities for assessment, and carry the potential to transform higher education. It is becoming practicable to assess the individual and collective performance of team members working on complex projects that closely simulate the professional contexts that graduates will encounter. In addition to academic knowledge, this authentic assessment can include a diverse range of personal qualities and dispositions that are key to the computer-supported cooperative working of professionals in the knowledge economy. This paper explores the implications of such opportunities for the purpose and practices of assessment in higher education, as universities adapt their institutional missions to address twenty-first century needs. The paper concludes with a strong recommendation for university leaders to deploy analytics to support and evaluate the collaborative learning of students working in realistic contexts.

Tuesday, July 26, 2016

Teacher feedback or peer feedback - which is better?

Teacher feedback, obviously.

Fostering oral presentation performance: does the quality of feedback differ when provided by the teacher, peers or peers guided by tutor? Assessment & Evaluation in Higher Education 21 Jul 2016 doi: 10.1080/02602938.2016.1212984
Previous research revealed significant differences in the effectiveness of various feedback sources for encouraging students’ oral presentation performance. While former studies emphasised the superiority of teacher feedback, it remains unclear whether the quality of feedback actually differs between commonly used sources in higher education. Therefore, this study examines feedback processes conducted directly after 95 undergraduate students’ presentations in the following conditions: teacher feedback, peer feedback and peer feedback guided by tutor. All processes were videotaped and analysed using a coding scheme that included seven feedback quality criteria deduced from the literature. Results demonstrate that teacher feedback corresponds to the highest extent with the majority of the seven identified feedback quality criteria. For four criteria, peer feedback guided by tutor scores higher than peer feedback. Skills courses should incorporate strategies focused on discussing perceptions of feedback and practising providing feedback to increase the effectiveness of peer feedback.

Wednesday, July 13, 2016

Thanks but no-thanks for the feedback

A timely article which chimes with discussions yesterday in our departmental ELF (Enhancing Learning Forum) meeting. Forsythe and Johnson's paper leans on Carol Dweck's Growth Mindset theory, so it's immediately attractive to me. It confirms my biases by having bad things to say about anonymous marking:
"The push towards anonymous, online marking can mean that personal feedback sessions are an incompatible part of the assessment and feedback loop. Anonymous marking is disruptive to the process because it prevents the tutor from giving connected guidance to students on their progress..."

Worth a read then.

Thanks, but no-thanks for the feedback. Assessment & Evaluation in Higher Education, 05 Jul 2016 doi: 10.1080/02602938.2016.1202190
Feedback is an emotional business in which personal disposition influences what is attended to, encoded, consolidated and eventually retrieved. Here, we examine the extent to which students’ perceptions of feedback and their personal dispositions can be used to predict whether they appreciate, engage with and act on the feedback that they receive. The study is framed in psychological theories of mindset, defensive behaviours and new psychometric measures of the psychological integration of assessment feedback. Results suggest that, in this university population, growth mindset students were in the minority. Generally, students are fostering self-defensive behaviours that fail to nurture remediation following feedback. Recommendations explore the implications for students who engage in self-deception, and the ways in which psychologists and academics may intercede to help students progress academically by increasing their self-awareness.

Tuesday, July 05, 2016

Dissecting the assessment treadmill

We over-assess students because it is difficult to motivate them to engage without frequent deadlines. But what are the true effects of frequent assessment? This new paper describes a well-conducted study of frequent assessment of Dutch engineering students (n=219). Using principal component analysis, the authors identified and analyzed four elements of assessment:

  • Value - how much value students attribute to frequent assessments: assessment is popular with students (= "value for money"?)
  • Formative function - no evidence that frequent testing had any formative value!
  • Positive effects and Negative effects - no strong cohort wide evidence for either of these (although they may affect individuals).

Summary: Assessment is popular with students but has no demonstrable educational value!
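The paper's questionnaire data aren't public, but the general technique is easy to illustrate. The sketch below (all numbers and factor names are invented for illustration) runs PCA by eigendecomposition of the correlation matrix on synthetic Likert-style responses with two planted latent factors, and retains components by the conventional Kaiser criterion (eigenvalue > 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 219 students answering 8 Likert items.
# Items 0-3 are driven by one latent factor (say, "value"), items 4-7
# by another (say, "negative effects"), plus noise - a toy stand-in
# for the questionnaire structure the authors uncovered.
n = 219
f1 = rng.normal(size=(n, 1))
f2 = rng.normal(size=(n, 1))
X = np.hstack([f1 + 0.3 * rng.normal(size=(n, 4)),
               f2 + 0.3 * rng.normal(size=(n, 4))])

# PCA via eigendecomposition of the correlation matrix; components
# with eigenvalues > 1 are conventionally retained (Kaiser criterion).
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

retained = int(np.sum(eigvals > 1.0))
print("components retained:", retained)  # recovers the two planted factors
```

With real questionnaire data the interesting part is then interpreting the item loadings (columns of `eigvecs`) to label the factors, which is where names like "value" and "formative function" come from.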

Students’ perception of frequent assessments and its relation to motivation and grades in a statistics course: a pilot study. Assessment & Evaluation in Higher Education 03 Jul 2016 doi: 10.1080/02602938.2016.1204532
This pilot study measures university students’ perceptions of graded frequent assessments in an obligatory statistics course using a novel questionnaire. Relations between perceptions of frequent assessments, intrinsic motivation and grades were also investigated. A factor analysis of the questionnaire revealed four factors, which were labelled value, formative function, positive effects and negative effects. The results showed that most students valued graded frequent assessments as a study motivator. A modest number of students experienced positive or negative effects from assessments and grades received. Less than half of the students used the results of frequent assessments in their learning process. The perception of negative effects (lower self-confidence and more stress) negatively mediated the relation between grades and intrinsic motivation. It is argued that communication with students regarding the purpose and benefits of frequent assessments could mitigate these negative effects.

Monday, July 04, 2016

Xenotransplantation and the future of human organ transplants

It's always a pleasure when a student working on a final year project produces a piece of work which is worthy of a wider audience beyond the examiners. I was in this fortunate position this year. Unfortunately, I was not in the position of being able to spend months of my time on the tortuous process of negotiating a paper into a traditional journal, so we decided to go down the Open Access route. My first thought was to try the relatively new bioRxiv, but the paper was rejected by them because they do not publish theses (so only partly open access then). After that it was back to the trusty figshare:

Morgan, Owen; Cann, Alan J. (2016): Xenotransplantation and the future of human organ transplants. figshare.

Tuesday, June 21, 2016

Microsoft Forms

I'm still trying to unpack the multitude of components of Office 365 and map them onto a vision of a loosely-joined VLE. Last week we figured out that OneNote Class Notebook (an administrative tool for OneNote) is an LMS add-in rather than a freestanding product. It's not clear whether the new Microsoft Forms is the same or not, but I'd certainly like to get my hands on it and kick the tyres.

Microsoft Forms - a new formative assessment and survey tool in Office 365 Education

Wednesday, June 15, 2016

Student peer review

I'd love to add a peer-review layer to a module I teach in which students write a research proposal - but with the salami slicing of module credits, who's got the time?

Student peer review: enhancing formative feedback with a rebuttal. Assessment & Evaluation in Higher Education 13 Jun 2016 doi: 10.1080/02602938.2016.1194368
This study examines the use of peer review in an undergraduate ecology programme, in which students write a research proposal as a grant application, prior to carrying out the research project. Using a theoretical feedback model, we compared teacher and student peer reviews in a double blind exercise, and show how students responded to feedback given by each group. In addition, students wrote a rebuttal for every feedback point before re-drafting and submission. Despite students claiming they could tell if the reviewer was a teacher or student, this was not always the case, and both student and teacher feedback was accepted on merit. Analysis of feedback types and rebuttal actions showed similar patterns between students and teachers. Where teachers differed slightly was in the use of questions and giving direction. Interviews with students showed the rebuttal was a novel experience, because it required a consideration of each comment and justification as to why it was accepted, partially accepted or rejected. Being a reviewer helped students to learn about their own work, and it changed the way they understood the scientific literature. In addition, some students transferred their new peer review skills to help others outside of the ecology programme.

Tuesday, June 14, 2016

Unpacking a little more Microsoft Office 365 confusion

Yesterday I wrote that I found Office 365 confusing. Digging around, I found this about some of the different options available:

  • Office 365 is a subscription service that includes the most recent version of Office, which currently is Office 2016. It comes with the applications like Word, PowerPoint, and Excel, plus extra online storage and ongoing tech support.
  • Office 2016 is also sold as a one-time purchase, which means you pay a single, up-front cost to get Office applications for one computer.
  • Office Online is the free version of Office that you can use in your web browser.

It seems to me that the key to any movement towards Office 365 as a VLE replacement is group work, and Office 365 has groups.

On a related theme, I clearly need to get to grips with the OneNote Class Notebook, all the more so because MS has just announced that this is now available for Macs.

Monday, June 13, 2016

Microsoft Office Mix

Microsoft Office Mix looks interesting, but after 5 minutes of scratching the surface, it seems to have holes in it.

It doesn't currently work on Macs (it's not even fully cross-browser compatible), and it's not clear if it works with Office 2016 on OS X or not. That may have just killed it.

It's not clear what MS means by "polls". They seem to mean multiple-user input quizzes for online presentations (as in "Facebook poll"), rather than live event polls a la TurningPoint - but I may have got that wrong?

Welcome to the Future? Microsoft OneNote

The past 20 years of my interaction with educational technology is a story starting with radical experimentation with the Internet (not quite an Outlaw phase because nobody cared), then gradual codification into an institutional tool, another brief period of radical experimentation (Edupunk phase), and finally settling quietly into the bigbox institutionally-owned VLE because a) Facebook was just too difficult to scale and b) it worked, even if imperfectly. Now, for various reasons, that bigbox phase might be coming to an end and I'm starting to think about what I would like to replace it. (Not that what I would like means much.)

Do I want another bigbox (everything in one place, just "works", inevitable compromises and frustrations) or do I want a more agile collection of individual tools (inevitable compromises and frustrations of moving from one space to another)? The answer is I don't know. Just like the referendum, neither option is very palatable so which is the least worst?

There are institutions using Microsoft Office 365 as their VLE. We had an immediate need to get students to write project diaries, so rather than use older solutions we decided to experiment with Microsoft OneNote as a gentle toe in the water. What follows is very much first impressions after one week.

As an old skool Microsoft Office user, I currently find Office 365 confusing and threatening. Because the namespaces (Word, Excel, OneNote) overlap across desktop, mobile and cloud domains, I don't know where I am all the time, and more to the point, I don't know where my/student data is. Although the user interface is fine (if a little simplistic), other aspects of the system are really ugly. An example of this is system-generated URLs, which frequently contain the word Sharepoint (be afraid, be very afraid), whereas other URLs to the same classes of student-generated resources contain the word SkyDrive. Are these equivalent or not? Sharing is clunky - the price you pay for moving out of the bigbox - and notifications non-existent (so far, unless I've missed something), so resources start to look like silo destinations you have to track down and visit individually. Dashboard? What dashboard?

I hope these are just the ramblings of a tired academic at the end of a long hard year, and that by the time Office 365 "clicks" with me I'll be happy to wander off into the sunlit uplands of the future. This contemporary reflection is intended to capture my current feelings so that I can assist others in future, and so that I can look back on this and laugh at my naivety. Alternatively, it is possible that the future is as horrible as both sides in the referendum debate say it will be.

Thursday, May 26, 2016

The extent of students’ feedback use has a large impact on subsequent academic performance

... which you'd kinda hope it would! However, it's important to get empirical evidence that it does, and this well-conducted study provides it (marred only by the absence of effect sizes!). But since correlation does not equal causation, does feedback use improve academic performance, or is it just a proxy for engagement?

Are they using my feedback? The extent of students’ feedback use has a large impact on subsequent academic performance. Assessment & Evaluation in Higher Education 19 May 2016 doi: 10.1080/02602938.2016.1174187
Feedback is known to have a large influence on student learning gains, and the emergence of online tools has greatly enhanced the opportunity for delivering timely, expressive, digital feedback and for investigating its learning impacts. However, to date there have been no large quantitative investigations of the feedback provided by large teams of markers, feedback use by large cohorts of students, nor its impact on students’ academic performance across successive assessment tasks. We have developed an innovative online system to collect large-scale data on digital feedback provision and use. Our markers (n = 38) used both audio and typed feedback modalities extensively, providing 388 ± 4 and 1126 ± 37 words per report for first- and second-year students, respectively. Furthermore, 92% of first year and 85% of second-year students accessed their feedback, with 58% accessing their feedback for over an hour. Lastly, the amount of time students spent interacting with feedback is significantly related to the rate of improvement in subsequent assessment tasks. This study challenges assertions that many students do not collect, or use, their feedback. More importantly, we offer novel insights into the relationships between feedback provision, feedback use and successful academic outcomes.

Friday, May 06, 2016

Heads of University Biosciences Annual Meeting

Heads of University Biosciences Annual Meeting
4-5 May 2016 College Court, University of Leicester
Special Interest Group of the Royal Society of Biology

Organised by Professor Jon Scott (University of Leicester) and Professor Judith Smith (University of Salford)


HE Bioscience Teacher of the Year: Finalist Case Studies
a) Dr Kevin Coward (University of Oxford) Problem-based teaching: the development of laboratory skills.
In a non-assessed activity, postgraduate students devise an experimental protocol based on a scenario, which is then applied in the laboratory. Students take turns in acting as students and teachers. Links the syllabus to the "real world", both the science and the delivery of teaching.
b) Dr Lesley Morrell (University of Hull) Increasing feedback, reducing marking
In an undergraduate research skills module, staff explain their published papers to students in a seminar programme. Students write eight weekly 500-word "News and Views" summary articles on one of the published papers. Feedback is given weekly, leading to feed-forward within a single module, together with a rubric-generated mark. To make the module sustainable, feedback is tapered as the module continues, and anonymized feedback is made available to all students. Summative assessment is performed on two student-selected articles from the course. There is statistical evidence of mark improvement during the course. Weaker students improve more than stronger students.
c) Dr Katharine Hubbard (University of Hull) Building partnerships with students WINNER
Students in practical classes suffer information overload. Levels of confidence vary considerably. Because the lab environment is stressful, student-produced pre-lab video tutorials and post-lab online revision quizzes were added. Students are involved at all stages - design, execution, evaluation and dissemination.

Dr Anna Zecharia (British Pharmacological Society) The Pharmacological Core Curriculum
The Delphi process was used to build a consensus curriculum covering subject knowledge, research and practical skills, and transferable skills. The process is ongoing.

Session One Academic Integrity
Professor Jon Scott (University of Leicester) Introduction & Institutional Strategies
Spectrum from poor academic practice to cheating. Strategies range from deterrence through detection, education and assessment design.
Dr Phil Newton (Swansea University) Ghostwriting - Essay Mills
Essay mills now specialize in custom writing driven by an auction process. The average price for a standard essay starts from 100, with a turnaround time of 1-5 days. It's a buyer's market (Mechanical Turk). The mills claim to be providing model answers; if a student submits the work provided, they are committing the offence. It is a well-established business run by many umbrella companies under many different names. The most important response is assessment design - increasing student numbers are a challenge.
Dr Irene Glendinning (Coventry University) European Perspectives on Academic Integrity
Findings of IPPHEAE Erasmus project, 27 EU countries, 5000 survey and interview responses. Wide variation in attitudes and responses across Europe, but very difficult to compare statistics. Inconsistent views on acceptable academic practice across Europe. "Academic Maturity Model" implies UK is doing better than most of EU due to emphasis on training.

Session Two - Designing out Plagiarism
Dr Erica Morris (Anglia Ruskin University) Designing out Plagiarism
Gill Rowell (Turnitin)
Dr Heather McQueen (University of Edinburgh) Plagiarism: The Student View
Session Three Teaching Excellence Framework (TEF)
Professor Sean Ryan (Higher Education Academy-STEM) Achieving and demonstrating teaching excellence
Discussion Workshop - What does the TEF mean for us?
Unfortunately I was called away on departmental duties and was not able to attend this session.

Session Four Wider Outreach
Professor Andy Miah (University of Salford) The Pathway to Impact
Professor Miah talked about science communication.
Professor Adam Hart (University of Gloucestershire) Citizen Science
Awareness raising may be more important than the scientific output. Data generation is a bonus.

Friday, April 29, 2016

Who needs anonymity? The role of anonymity in peer assessment

Can students reliably and fairly assess the work of peers who are known to them? A solution to this problem (if it exists) is anonymity, but what about open learning situations where anonymity is not possible? And is anonymity desirable anyway? This well-conducted study shows that anonymity improves the reliability of peer marking - but it's not essential - training of markers improves outcomes to the same extent.

The role of anonymity in peer assessment. Assessment & Evaluation in Higher Education 22 Apr 2016 doi: 10.1080/02602938.2016.1174766
This quasi-experimental study aimed to examine the impact of anonymity and training (an alternative strategy when anonymity was unattainable) on students’ performance and perceptions in formative peer assessment. The training in this study focused on educating students to understand and appreciate formative peer assessment. A sample of 77 students participated in a peer assessment activity in three conditions: a group with participants’ identities revealed (Identity Group), a group with anonymity provided (Anonymity Group) and a group with identities revealed but training provided (Training Group). Data analysis indicated that both the Anonymity Group and Training Group outperformed the Identity Group on projects. In terms of perceptions, however, the Training Group appreciated the value of peer assessment more and experienced less pressure in the process than the other two groups.

Wednesday, April 20, 2016

Student motivation in low-stakes assessment

Although densely written in stats-speak, this is an interesting paper for all those who, like me, have failed to get many student cohorts to engage with formative assessment. The major finding of interest here is that cohort effects trump other factors, including prior mathematical knowledge. It works in some groups, not in others. What this paper is not able to sort out is whether these differences are due to the quality of teaching that groups experience, or to unknown (and possibly unmeasurable) stochastic factors.

Student motivation in low-stakes assessment contexts: an exploratory analysis in engineering mechanics. Assessment & Evaluation in Higher Education 19 Apr 2016 doi: 10.1080/02602938.2016.1167164
The goal of this paper is to examine the relationship of student motivation and achievement in low-stakes assessment contexts. Using Pearson product-moment correlations and hierarchical linear regression modelling to analyse data on 794 tertiary students who undertook a low-stakes engineering mechanics assessment (along with the questionnaire of current motivation and the ‘Effort Thermometer’), we find that different measures of student motivation (effort, interest, sense of challenge, expectancy of success and anxiety to fail) showed atypical correlation patterns. The nature of the correlations further varies depending on the type of test booklet used by students. The difficulty of the early items in the assessment were positively correlated with ‘anxiety’ and ‘success’, but negatively correlated with ‘interest’. In the light of our findings, we suggest that future research should systematically explore (taking into account testing conditions like test booklet design and test-item format) the implications of student motivation for achievement in low-stakes assessment contexts. Until the consequences of these processes are better understood, the validity of assessment data generated in low-stake conditions in the higher education sector will continue to be questioned. With a greater understanding of these processes, steps could be taken to correct for student motivation in such settings, thus increasing the utility of such assessments.

Friday, January 08, 2016

Reconsidering the role of recorded audio in learning

Audio
"the educational use of the recorded voice needs to be reconsidered and reconceptualised so that audio is valued as a manageable, immediate, flexible, potent and engaging medium."

Yes, it does. Audio remains the greatest under-utilised technical resource in education, its potential amplified by the fact that it is so well suited to mobile devices. But on reading this paper, the question I ask myself is "Why?".

"Technically speaking, podcasting is the serial distribution of locally generated downloadable digital media episodes, usually audio, via RSS (Really Simple Syndication) feeds to niche audiences of subscribers. RSS incorporates structured information about the podcast channel and the appended items (‘episodes’). In this way the RSS feed file can be automatically and regularly checked by the end-user’s aggregation software (e.g. iTunes), which triggers the downloading of new episodes whenever they become available."

And there's the rub. The death of RSS and the iTunes walled garden almost killed true (subscription-channel) podcasting. But there are other reasons. We still have a ridiculous over-reliance on keyboard input, which is laughable when mobile phone keyboards are considered. Captain Kirk is laughing himself silly, and I suspect the USS Enterprise computer is having a chuckle too.
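The mechanism the quoted passage describes — an aggregator periodically checking a feed and downloading whatever episodes it hasn't seen — is simple enough to sketch. Here is a minimal illustration in Python using only the standard library; the feed content and URLs are invented for the example, and a real aggregator would of course fetch the XML over HTTP rather than from a string:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical podcast RSS feed: a channel containing 'item'
# elements (episodes), each with an <enclosure> pointing at the audio file.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Hypothetical Lecture Podcast</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.org/ep1.mp3" type="audio/mpeg" length="12345"/>
    </item>
    <item>
      <title>Episode 2</title>
      <enclosure url="https://example.org/ep2.mp3" type="audio/mpeg" length="23456"/>
    </item>
  </channel>
</rss>"""

def new_episodes(feed_xml, already_downloaded):
    """Return (title, url) pairs for episodes not yet fetched --
    the check an aggregator performs on each poll of the feed."""
    root = ET.fromstring(feed_xml)
    episodes = []
    for item in root.iter("item"):
        title = item.findtext("title")
        url = item.find("enclosure").get("url")
        if url not in already_downloaded:
            episodes.append((title, url))
    return episodes

# A subscriber who has already fetched episode 1 is offered only episode 2.
print(new_episodes(FEED, {"https://example.org/ep1.mp3"}))
```

The structured channel metadata the quote mentions is exactly what makes this automation possible: the aggregator never needs to scrape a web page, only to diff the feed against its local record.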

So why am I typing this rather than speaking it? Permanence is one reason. Editability is another. I can edit my writing as I go, but not my spoken words. I can speak from a script, but that level of production takes longer than I have for this communication, and it robs the medium of immediacy and engagement unless I am a professional actor.

So yes, let's reconsider the role of recorded audio in learning. But let's not kid ourselves that it's easy.

Reconsidering the role of recorded audio as a rich, flexible and engaging learning space. (2016) Research in Learning Technology 24: 28035
Audio needs to be recognised as an integral medium capable of extending education’s formal and informal, virtual and physical learning spaces. This paper reconsiders the value of educational podcasting through a review of literature and a module case study. It argues that a pedagogical understanding is needed and challenges technology-centred or teacher-centred understandings of podcasting. It considers the diverse methods being used that enhance and redefine podcasting as a medium for student-centred active learning. The case study shows how audio created a rich learning space by meaningfully connecting tutors, students and those beyond the existing formal study space. The approaches used can be categorised as new types of learning activity, extended connected activity, relocated activity, and recorded ‘captured’ activity which promote learner replay and re-engagement. The paper concludes that the educational use of the recorded voice needs to be reconsidered and reconceptualised so that audio is valued as a manageable, immediate, flexible, potent and engaging medium.