Robo-Grading Essays with EdX Discern: A Middle Path with Our New Robot Overlords

The installation ‘bios [bible]’ consists of an industrial robot that writes out the Bible on rolls of paper, drawing the calligraphic lines with high precision. Like a monk in the scriptorium, it creates the text step by step. (Photo credit: Wikipedia)
Recently, MOOC provider EdX unveiled Discern, a tool for automatically grading written student work such as essays and short-answer questions. The NY Times article aptly captures the resulting hullabaloo from all sides as people envision a world where robots grade students’ papers.

Proponents of robo-grading point to the prompt feedback and scalability it brings to grading written work. The current paradigm is slow and labor-intensive for instructors, meaning students wait days or weeks for feedback on their work. Robo-grading could potentially provide students (even thousands of students) with instant feedback, the quality of which is improving all the time.

Detractors cite the lack of evidence that programs like Discern can possibly “read” a student’s ideas the way a human mind can and provide high quality feedback. Mixed in, no doubt, are genuine fears about human teachers’ tenuous place in a future education system where all lecturing, grading, and course facilitation would be done for free by an algorithm.

In fact, the documentation for Discern describes a grading process that’s pretty different from what we might imagine as true robo-grading. It’s more like training a machine to recognize and interpret patterns it identifies over thousands of graded essays. Nonetheless, it’s a first step towards a system in which robots bear some of the burden of grading student work.
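To make that "training on thousands of graded essays" idea concrete, here's a deliberately tiny sketch. This is not Discern's actual algorithm; it's a toy illustration using bag-of-words features and a nearest-neighbor scorer, with made-up sample essays and scores, just to show the shape of the approach: learn from human-graded examples, then score new work by similarity to them.

```python
from collections import Counter
import math

def features(text):
    # Toy feature extraction: lowercase bag-of-words counts.
    # (Real systems use far richer features than this.)
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def predict_score(essay, graded_examples, k=2):
    # Score a new essay as the average score of the k most
    # similar human-graded training essays.
    vec = features(essay)
    ranked = sorted(graded_examples,
                    key=lambda ex: cosine(vec, features(ex[0])),
                    reverse=True)
    top = ranked[:k]
    return sum(score for _, score in top) / len(top)

# Hypothetical training set: (essay text, human-assigned score).
graded = [
    ("the industrial revolution transformed labor and cities", 5),
    ("the industrial revolution changed how people worked", 4),
    ("i like trains", 1),
]

print(predict_score("the industrial revolution reshaped labor", graded))
# The new essay resembles the two high-scoring examples, so it
# inherits a score near theirs: 4.5
```

The point of the sketch is that the machine never "reads" the essay; it only measures how much a new essay resembles essays that humans have already judged.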

The Middle Path: Humans and Robots Living in Harmony

Not to naively welcome our new robot overlords, but I love the idea of offloading “administrivia” tasks to computers to both improve instruction and increase the quality and quantity of interaction students get from their teachers. If we can all agree that the goal here is not “replacing teachers” but “improving instruction”, then how could human grading and robo-grading work together to give students a better education than they’re getting now?

Even if the quality of grading is not yet up to the standard of a good teacher, robo-grading can serve as a “first pass” that catches the low-hanging fruit: the surface-level problems students need to work on. Think of it as having a team of grad-student TAs look over every paper and flag trouble areas for the instructor to review and comment on in full. The instructor can use all this newly freed-up time to schedule one-on-one feedback sessions with students to address serious writing problems.
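A minimal sketch of that “first pass” idea might look like the following. The rules and thresholds here are my own invention (not anything Discern actually does): the program assigns no grade at all; it only flags spots for the human instructor to look at.

```python
import re

def first_pass_flags(essay, max_sentence_words=30):
    # Flag surface-level trouble spots for human review; never grade.
    # Two toy rules: overlong sentences and heavily repeated words.
    flags = []
    sentences = [s.strip() for s in re.split(r"[.!?]+", essay) if s.strip()]
    for i, sentence in enumerate(sentences, start=1):
        words = [w.lower().strip(",;:") for w in sentence.split()]
        if len(words) > max_sentence_words:
            flags.append(f"sentence {i}: very long ({len(words)} words); "
                         f"consider splitting")
        for w in sorted(set(words)):
            if len(w) > 3 and words.count(w) > 2:
                flags.append(f"sentence {i}: word '{w}' repeated "
                             f"{words.count(w)} times")
    return flags

sample = "The data data shows data matters. It is clear and concise."
for flag in first_pass_flags(sample):
    print(flag)
# prints: sentence 1: word 'data' repeated 3 times
```

The instructor would then see only the flagged sentences, not a verdict, which keeps the human squarely in the loop.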

What I’m describing reminds me of my early experiments using Google Translate to convert my emails from English into Spanish. I had only a high-school-level mastery of the language at the time, and Google’s wasn’t much better. I would drop my English-language email into the translator, let it produce a mangled Spanish translation, and then do a final pass by hand to correct all the mistakes I could find.

The result was a better email in Spanish than I could produce alone, completed faster than translating the whole thing word for word.

Google Translate did the heavy lifting; I did the proofreading. If robo-grading works like that, I’m in.


I think it’s clear that “Teacherminators” are not coming to take our jobs anytime soon, but this technology is advancing quickly. We need a clear idea of how it can be used to improve student outcomes: not by replacing human teachers, but by helping them give students better feedback, faster than they could alone.


