When MOOCs began to take off in 2012 and 2013, Denise Comer was intrigued. A writing instructor and director of Duke University’s first-year writing program, Comer was inspired to design and teach a first-year writing MOOC, seeing it as an opportunity both to reach a diverse group of global learners and to conduct research. In particular, Comer wanted to know: Could writing be taught online? And if it can be taught, how can it be graded and assessed?
Comer partnered with Edward M. White of the University of Arizona to tackle these questions by studying student learning in English Composition I: Achieving Expertise, a MOOC Comer taught on Coursera for 12 weeks in 2013. Their results were published in the February 2016 issue of College Composition and Communication.
Their conclusions: students can learn writing in a MOOC, and writing assessment can be effectively adapted for the MOOC environment.
For their research, Comer and White looked at student survey data and writing portfolios of students who completed Comer’s course. They trained a team of nine professional writing instructors and tutors to code student essays based on how well the essays reflected mastery of the course learning objectives.
The researchers also looked at peer assessment, a process in which students, rather than instructors, read and grade each other’s writing. Peer assessment was used in the course largely because it was impossible for the instructor and course team to grade the thousands of writing assignments students produced each week. It turned out that peer assessment worked on average as well as expert assessment at grading writing. (The actual peer feedback on that writing, however, was determined to be less useful than expert feedback.)
More broadly, Comer and White conclude that “evaluation of student portfolios seems the best way to measure the success of a MOOC in helping students improve their writing.” They call for further research into teaching and assessing writing in MOOCs, and also caution against over-reliance on the huge amount of data generated by MOOC platforms for evaluating writing. They write, “…our experience of the MOOC suggests that Big Data must be fused deliberately with a more individualized, learner-driven, and learner autonomous approach toward assessment.”