Until recently, I worked on Propero.org, a collection of accredited, self-paced online courses where students earn college credits at their own pace, with maximum convenience. I joined the team after the curriculum had been developed, so it was my job to review the courses and determine which changes we should make to further develop them.
I instantly noticed that the learning objectives were all clustered at the low end of Bloom’s Taxonomy — focusing mostly on Recall and Understanding objectives and less on higher-order thinking skills.
This is not uncommon — research shows that most instruction from K through college (!!) is heavily weighted towards accurate recall of information and fails to assess or instruct students in Higher Order Thinking Skills like Analysis, Evaluation, and Synthesis.
This focus on objective, multiple choice questions that can be scored by a machine is especially common in online courseware that is designed to scale — that is, not to require human instructors — though I seem to recall a lot of Scantron tests in my own schooling in the ’80s and ’90s.
The assessment strategist who designed these courses decided that, in order for them to be fully self-paced, they would use only objective assessments that could be scored by a machine. The problem with machines is that they cannot read a dense text or examine an authentic student work artifact and infer what kind of thinking is evident therein; they cannot identify evidence that the student is Analyzing, Evaluating, or Synthesizing information. Only highly-trained human instructors can do that (and, it should be said, with a great expenditure of time and effort!). Consequently, the critical step of guiding students to synthesize their newly-acquired learning into the kinds of work products they will create in their future careers is often missing from MOOCs, self-paced courseware, and most on-ground classrooms as well.
The challenge is — how do we take a course that’s already geared towards testing students’ low-level comprehension of facts and turn it into a more rigorous and relevant learning experience?
More broadly, though, how can we stimulate students to think critically in an online learning experience that is delivered mostly by machines?
From Comprehension and Recall to Mastery
I’m interested in how we can use online learning technologies to preserve the best things about classroom instruction while overcoming its limitations. Much of the off-the-shelf courseware I’ve seen looks to me like recycled textbook activities — y’know, the same questions at the end of the textbook chapter we all had to answer. The only innovation is that they’ve been repackaged for delivery in the LMS’s quiz tool. In the end, though, it’s just a textbook. You read, you answer questions, you get a score. Not terribly compelling.
The most memorable and powerful learning experiences I ever had as a student were the ones where I had to actively create a rich work artifact like an essay or multimedia project, get personalized feedback from the teacher, and revise to meet their expectations. I can pinpoint key turning points in my development as a thinking, caring person to assignments where a teacher challenged me to question my own assumptions, consider other possibilities, and push my work to a higher standard of quality than I would otherwise produce. It was the teacher’s personalized engagement with me and my work that motivated me to greater heights.
Critically, though, it was also the chance to try again that was so powerful. Too often in education, students complete an assessment, get a grade (let’s say a “C+”) and have to move on to the next unit, the next grade. Their C+ indicates that they failed to master much of the previous lesson, but since it’s technically “passing”, the student moves on, and their gaps in comprehension and skills go unfilled. Cumulatively over years, students move through the education system, grade by grade, without mastering the core skills they need to be successful. They show up in college with debilitating gaps that prevent them from being able to access real college work.
What the student needs is timely, personalized, constructive feedback about what they got wrong, and the chance to try again, to make it right! My experience working at City Arts and Technology High School in San Francisco showed me how transformative this revision process is. We took a heterogeneous group of students from extremely diverse skill and socioeconomic levels and mentored them all to mastery in a college-preparatory curriculum. A significant group of students who would normally be placed in a remedial track in a regular high school were coached to create college-ready work — it just took a lot of feedback, revision, and support to make it happen. Some might even say an “unreasonable” amount of revision and support, but the results we produced at CAT made me a believer that teachers should spend less time on the performance of teaching and much more time in individual consultation with students.
This deep engagement with a caring, highly trained adult expert is something modern online learning offers far too little of, and it is the most powerful catalyst for transformational change in education. Thankfully, online learning offers us the tools to be able to automate the “performance of teaching” so that teachers’ time is freed up to engage in this deep process of mastery revision with students.
The Course-Level Assessment
It was this desire to transcend the basic read/quiz/repeat pedagogy evident in Propero that led me to add the Course-Level Assessment into these courses.
The Course-Level Assessment is a project-based assignment where students create an authentic workplace artifact, like a research report or product presentation, that shows Analysis, Evaluation, and Synthesis of the content they’ve just demonstrated Comprehension of. (See the Sequence of Online Instruction below.) This way, after students have taken the traditional multiple-choice, self-graded LMS quiz, they can use that same learning in novel, personally relevant ways that require a highly-trained instructor to evaluate.
I implemented 3-4 CLA assignments per course across 10 courses, adding that rich human engagement to a curriculum that had previously been 100% instructor-free. I contracted a team of highly-experienced educators with Masters and PhDs in Education to evaluate each CLA assignment against an explicit rubric. Students received a scored rubric and personalized written feedback designed to give them enough information to successfully revise and re-submit these assignments for a better grade. Students were matched with our team of Smarthinking tutors to provide personalized writing help on their revisions. Both our scoring team and the Smarthinking tutors were existing teams at Pearson, designed for delivery at scale, and easily adapted to the rich personalized feedback process I had designed for Propero. In this way, we were able to offer a level of personalized attention I’ve never seen in any other fully-online learning program.
The CLA program was in production for roughly six months before budget shortfalls at Pearson led to my position being downsized. At the time I left, my supervisor and I were already drafting plans to expand the CLA program to Propero’s whole line of 48 online courses. He wisely saw the CLA as a key differentiator, setting Propero up as a more rigorous and engaging alternative to fully automated online courseware. I was also planning a revision of the existing CLAs to stimulate students to develop more workplace-appropriate technology skills such as presentation design, video development, blogging, and collaborative document editing (via Google Docs).
The CLA program continues on in a limited form to this day, but its ultimate fate is now in someone else’s hands.
My addition of the CLA to Propero’s fully self-paced course experience was an attempt to build increased academic rigor and rich teacher-student contact hours into a scalable online learning course. The addition of highly-trained human scorers and tutors was a scalable way of offering a rich, personalized revision cycle to students, challenging them to greater heights of achievement. It was a good step towards using our modern toolkit of online learning to replicate and even extend that which is good about in-person, on-ground instruction — the rich interactions that can happen, MUST happen, between teachers and learners.
In future iterations, I was planning on extending the Propero experience to more closely match my Sequence of Online Instruction (see Appendix below), including more interactive review exercises, reflection experiences, and more diverse and compelling project artifacts. I would like to continue developing this sequence of instruction in my next position, as I see it as the best way to create transformational learning experiences, especially for under-prepared learners who would benefit from the additional support and interaction.
Sequence of Online Instruction
This is my model for sequencing instruction, guiding students from their first access of the information through increasing stages of critical thinking on that information. It’s a rough amalgam of the project-based curriculum I learned at Envision Schools and the online learning research I did at Samuel Merritt University.
| Stage | Student activity | Instructional strategies |
| --- | --- | --- |
| Discovery | Students are accessing new information for the first time. | Use concrete examples, visual aids, multiple presentations of the same information, attractive multimedia, and accessible language to promote convenience, comprehensibility, and appeal for students. |
| Rehearsal/Review | Students actively rehearse what they know and identify gaps to review. | Non-graded comprehension-check quizzes, memory games to enhance recall, interactive study guides. At this stage students need feedback about any gaps in their understanding in a safe environment (no penalty for failure). The modern LMS has several tools that let students try, fail, get feedback, and try again using automated tools that minimize embarrassment at failure and free the instructor to engage in rich feedback later in the sequence. |
| Comprehension | Students take a higher-stakes assessment to demonstrate accurate recall of course concepts and establish readiness for the higher-order thinking activities to come. | An auto-graded objective exam, scored by the LMS, gives students feedback on their progress and gives instructors a snapshot of basic comprehension. The score at this stage provides a motivating factor for students. Arguably, these scores should be a smaller percentage of the grade than is common now. |
| Application | Students apply learned knowledge to novel, personally or professionally relevant workplace situations to show working knowledge of abstract course concepts. | WebQuests, Course-Level Assessments, group projects. These rich assessments cannot be graded by a machine. Rather, a highly trained expert scores an authentic assessment against a rubric that specifically assesses for higher-order thinking skills such as Analysis, Evaluation, and Synthesis. Here “soft skills” like collaboration and project management may also be taught and assessed. |
| Remediation/Revision | Students receive personalized feedback and support to revise until mastery is reached. | Higher-order thinking skills are notoriously hard to teach, and students frequently fail to demonstrate mastery on the first try. This is where the instructor provides targeted coaching and mentoring, allowing students to revise and resubmit work until there is evidence of mastery. |
| Reflection | Students analyze their own thinking, synthesize their new learning into their existing worldview, and discuss their process for working through challenges. | Through the revision process, students will often run up against their cognitive limitations, and often must confront those limitations in order to master a skill. Analyzing their own experience of learning is challenging but essential to making that mastery stick. |