“At this time in our history, the most valuable institutions will be those that generate real learning from average or disadvantaged students, not those that generate learning from those students for whom learning is, frankly, easy.”
My goal for this article is to develop a blueprint for a “high-touch” Competency-Based Education model that is specifically tailored to meet the unique needs of the nontraditional learners who comprise as much as 75% of the higher-ed population. These students benefit from a performance-based assessment model where they are given multiple opportunities to attempt and revise work up to mastery in a supported, responsive, and validating environment. Though CBE is often marketed for its low cost and speed, it can also support students who require extra time and attention to master skills and knowledge.
The Student of the Future Present
The demographics of higher education are changing rapidly. The “nontraditional” student is becoming the norm, and that student is over age 25, non-white, first in their family to go to college, working while learning, requiring some academic remediation, and of low socioeconomic status (Azziz, 2008). This type of student struggles with abstract, context-reduced content and thrives in an environment that is personalized, relevant, and supported. We need to design learning experiences for these students with low cognitive overhead, a focus on active learning, and high rates of interaction with classmates, instructors, and coaches, so they feel connected and empowered to succeed.
Credit for Prior Learning
Prior Learning Assessment (PLA) is a well-known component of Competency-Based Education, allowing adults to “test out” of skills they have already acquired through previous work or life experience. Several options for supporting PLA are widely practiced in adult education, including:
- Advanced Standing and Advanced Placement
- College Level Examination Program (CLEP)
- Portfolio Review
- American Council on Education (ACE) Guidelines for Corporate/Certificate Training.
Instruction ≠ Delivery of Content
The curriculum this student is likely to find in traditional face-to-face and online courses is still predicated on the assumption that the objective of a course is to expose students to content and check whether they’ve memorized it. This Banking Model of education (Freire, 1993), though widely practiced in higher ed, does not represent the best we know about instruction. Worse, it poses significant access problems for the nontraditional learner and reinforces existing societal inequalities.
The first thing we need to do is accept that content delivery is not the core function of education; mastery is.
Instruction = Mastery of skills and knowledge
CBE calls us to base grading decisions on students’ successful demonstration of their mastery of intended outcomes. Students’ grades should be based on performing rigorous, authentic, real-world tasks (including higher-order thinking skills) at a criterion-referenced level of proficiency. Grading rubrics should explicitly measure learning outcomes, including critical thinking skills from the upper levels of Bloom’s Revised Taxonomy.
In short, we should be asking “what will students be able to do after this class is over as a result of instruction?” and then verifying that they can do it before they move on.
Course Organization
The flow of the class should look like students working on authentic projects, getting frequent personalized feedback from instructors, going back for revisions, and filling in gaps by reading, watching videos, discussing with classmates, and taking self-quizzes to check their understanding. Notice that the active tasks come first; the reading and self-quizzing are done to support the production of real-world work.
Automated Content Delivery
The learning technologies should automate content-delivery tasks using video, e-texts, and interactive simulations and assessments so the instructor can devote maximum time to interacting with students around revising work to mastery. Content should be offered in multiple formats, so the same material can be consumed as text, video, and/or audio. This supports students’ ability to access information while meeting their work and family responsibilities.
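To make “multiple formats” concrete, here is a minimal sketch of how a learning module might be modeled so one competency maps to the same content in several formats. The class, field names, and URLs are hypothetical illustrations, not any particular platform’s data model.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one learning module, with the same content mapped to
# several delivery formats so a student can choose text, video, or audio
# depending on where and when they are able to study.
@dataclass
class ContentModule:
    competency: str                               # the learning outcome this module supports
    formats: dict = field(default_factory=dict)   # format name -> resource URL

    def available_formats(self):
        return sorted(self.formats)

module = ContentModule(
    competency="Interpret a LEED scorecard",
    formats={
        "text": "https://example.edu/modules/leed/reading",
        "video": "https://example.edu/modules/leed/lecture.mp4",
        "audio": "https://example.edu/modules/leed/lecture.mp3",
    },
)
print(module.available_formats())   # ['audio', 'text', 'video']
```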
Automated, ungraded self-tests
Students should be given multiple opportunities to check their understanding within the LMS, and these data points should be monitored by analytics to alert learning specialists if students are in need of targeted remediation. These checks for understanding should not count towards students’ overall grades, as they should be designed to support the completion of the larger project.
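As an illustration of how those data points might drive an alert, here is a minimal sketch in Python. The threshold, the minimum number of attempts, and the data shapes are assumptions for illustration, not a specific analytics product’s API.

```python
# Hypothetical sketch: flag students for targeted remediation based on
# ungraded self-check scores. The threshold and attempt count are assumptions.
SELF_CHECK_THRESHOLD = 0.7      # assumed cutoff for "on track"
MIN_ATTEMPTS_BEFORE_ALERT = 2   # give students room to retry before alerting

def needs_remediation(attempts):
    """attempts: list of self-check scores between 0.0 and 1.0, oldest first."""
    if len(attempts) < MIN_ATTEMPTS_BEFORE_ALERT:
        return False
    recent = attempts[-MIN_ATTEMPTS_BEFORE_ALERT:]
    # Alert only if the most recent attempts are still below the threshold.
    return all(score < SELF_CHECK_THRESHOLD for score in recent)

def remediation_alerts(course_attempts):
    """Return the student IDs a learning specialist should reach out to."""
    return [sid for sid, scores in course_attempts.items() if needs_remediation(scores)]

print(remediation_alerts({"s001": [0.4, 0.5], "s002": [0.5, 0.9], "s003": [0.8]}))
# ['s001'] -> s002 improved past the threshold; s003 has only one attempt so far
```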
Authentic, Real World Assessment Projects
The assessment should be designed to measure students’ ability to produce an authentic, real-world work artifact that matches what they would be expected to do in the working world. These artifacts can be stored and displayed in an ePortfolio that follows the student professionally (ideally with the data under the student’s control), becoming part of an ongoing personal record of their learning.
After successful completion and assessment, students can choose to display their work products in a public-facing e-portfolio alongside competency badges they have earned. Students can download and repost their portfolio in an interoperable format (say, on their personal blog) and display it alongside professional degrees and certifications (open badges, a LinkedIn profile, a personal resume site). This enables potential employers to see, at a granular level, the competencies students have mastered and to dig into those certifications so they can align their needs with students’ unique qualifications.
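For a sense of what an interoperable, student-controlled record could look like, the sketch below is loosely modeled on an Open Badges assertion. The URLs and identifiers are hypothetical, and a real assertion has additional required fields and verification details defined by the spec.

```python
import json
from datetime import datetime, timezone

# Simplified, hypothetical record loosely modeled on an Open Badges assertion.
# The URLs are placeholders; a real assertion must follow the full spec.
badge_assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "recipient": {"type": "email", "identity": "student@example.edu", "hashed": False},
    "badge": "https://example.edu/badges/leed-reporting",           # badge class (the competency)
    "evidence": "https://example.edu/portfolios/s001/leed-report",  # the work artifact
    "issuedOn": datetime.now(timezone.utc).isoformat(),
    "verification": {"type": "HostedBadge"},
}

print(json.dumps(badge_assertion, indent=2))
```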
Grading Based on Competencies, not Percentages
In the course gradebook, we need to do away with the traditional 100-point, number-based system. It is an abstraction that imperfectly reflects student learning, and it takes the focus away from learning and puts it (unnaturally) on abstract “points.”
“Teachers make up all kinds of complex weighting systems, dropping the lowest, assigning a percentage weight to different classes of assignments, grading on curves, and so on. Faculty often spend a lot of energy first creating and refining these schemes and then using them to assign grades. And they are all made up, artificial, and often flawed. (For example, many faculty who are not in mathematically heavy disciplines make the mistake at one time or another of mixing points with percentage grades, and then spend many hours experimenting with complex fudge factors because they don’t have an intuition of how those two grading schemes interact with each other.)” (Feldstein, 2015)
Mastery Grading
I assess with four levels:
- Emerging
- Approaching Competency
- Competency
- Exceeds Competency
Let’s look at how to assess using this scale with an example. The competency states “students must be able to do ten push-ups.” If students can do ten, they have demonstrated competency and get a 3. If they can do more than ten, they exceed the competency and get a 4. If they can only do seven push-ups, they get a 2, which triggers a targeted intervention with a teacher or mentor who gives them feedback, strategies, practice, and whatever support they need so they can retake the assessment and do ten push-ups. Similarly, 1s are reserved for work that is far below competency, where intensive academic remediation (and maybe even life counseling services) can be activated to help troubleshoot what’s keeping students from demonstrating mastery.
These grades are entirely independent of letter grades, but could be mapped to A-D, depending on institutional standards.
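Here is a minimal sketch of that scale applied to the push-up example. The cutoff between a 1 and a 2, and the letter-grade mapping, are illustrative assumptions rather than fixed standards.

```python
# Hypothetical sketch of the four-level mastery scale using the push-up example.
# The "far below competency" cutoff and the letter mapping are assumptions.
def mastery_level(pushups_done, required=10):
    """Return 1-4: Emerging, Approaching Competency, Competency, Exceeds Competency."""
    if pushups_done > required:
        return 4   # Exceeds Competency
    if pushups_done == required:
        return 3   # Competency: the outcome is demonstrated
    if pushups_done >= required // 2:
        return 2   # Approaching: triggers targeted feedback, practice, and a retake
    return 1       # Emerging: triggers intensive remediation and support services

LETTER_MAP = {4: "A", 3: "B", 2: "C", 1: "D"}   # one possible institutional mapping

for attempt in (12, 10, 7, 3):
    level = mastery_level(attempt)
    print(f"{attempt} push-ups -> level {level} ({LETTER_MAP[level]})")
```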
The beauty of CBE is that none of those grades has to be permanent. If you attempt an assignment and get a 1, you go back for more practice, remediation, and tutoring. Then you can retry the same project and improve it.
One thing people don’t recognize about CBE and mastery learning is that it’s fundamentally different from our traditional practice of getting “right answers” to regurgitation questions. We all have a gut feeling that seeing the test questions before the test is an unfair advantage, that you shouldn’t see them in advance (so you could STUDY FOR THEM, god forbid!). But when you’re assessing students’ higher-level thinking skills, the quality of their work goes from being “binary” (do you have it right or not?) to being “quantum”: a whole extra dimension of complexity is added when students are applying their knowledge to solving a problem. You can look for evidence that they have the “right answers” to core understandings, but you can also look at whether they’ve successfully used what they know to solve the problem. Whether that LEED certification student produced the kind of report that would make them a successful addition to a workplace that does that kind of work. And if they don’t get it right the first time, you can give them concrete ways to improve it. It’s not like they’re doing the same thing over; they’re finding their own new pathway from understanding to creation. This is the kind of learning task that can only be assessed by a human being, that can only be mentored by a human being.
Where Vocational Ed. Meets The Ivory Tower
I’ve heard academics reflexively dismiss competency-based education as a “dumbing down” of the college curriculum – as if it would remove the high-level thinking demanded of university students and reduce it to a vocational education program. It seems to me that vocational ed and four-year institutions can actually learn a lot from one another.
In higher ed, most assessment does not rise above the lowest levels of Bloom’s Revised Taxonomy, namely Remembering and Understanding. Moving up the taxonomy, we get to Applying, Analyzing, Evaluating, and Creating. These higher-order thinking skills are highly prized by employers and virtually ignored by college faculty.
While university instructors see evidence of critical thinking all around them, they don’t explicitly teach it or assess for it; it just magically happens (in some students) as a by-product of a college education. The reason they see it so frequently is that universities attract elite students who are well prepared to think critically about content without being explicitly taught how. However, if we are committed to attracting less-prepared students and helping them develop critical thinking skills, it won’t happen unless we teach and assess those skills explicitly, as competencies.
Even though parts of such a learning model focus on the low rungs of Bloom’s Taxonomy, those low rungs are a critical first step toward the student’s next tasks, which require applying that understanding to a novel project or problem. Students have to make sure they’ve fully understood the content they’re responsible for, but the real assessment, where their grade comes from, is their ability to use that mastered information to create an authentic work artifact: something that would be part of their job if they were in the workplace.
Think of the “star worker” on your team at your job: the person who, if you had ten of them, could take your work to the next level. What is it that that person does that adds so much value to your team? What are the tasks, skills, and habits that make this person different from other workers on your team? The projects students create should look very much like the unique work products of your star employee. Students who can perform at that level pass the competency, and students who don’t get it on the first try get more opportunities to revise, practice, and work with instructors until they demonstrate mastery.
Faculty Revision, Mentor Coaching
We have automated the delivery of content so faculty subject-matter experts can spend a much higher percentage of their time reading, responding, and helping students revise those authentic work artifacts. They work asynchronously, so students (wherever they are in the course sequence) can submit work and receive personalized feedback from an expert in the field. Faculty can grade student work on a rubric from within the LMS and engage in a dialogue around how to achieve mastery on every rubric criterion.
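As a rough sketch of what criterion-level grading and dialogue could look like as data, assuming a simple in-house structure rather than any particular LMS’s rubric API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a rubric scored criterion by criterion on the 1-4
# mastery scale, with the feedback dialogue attached to each criterion.
@dataclass
class CriterionScore:
    criterion: str                                 # e.g. "Recommends feasible improvements"
    level: int                                     # 1-4 mastery level for this criterion
    feedback: list = field(default_factory=list)   # instructor/student dialogue

@dataclass
class RubricEvaluation:
    student_id: str
    artifact: str
    scores: list

    def ready_to_pass(self):
        # Mastery means every criterion is at Competency (3) or above.
        return all(s.level >= 3 for s in self.scores)

evaluation = RubricEvaluation(
    student_id="s001",
    artifact="LEED readiness report, draft 2",
    scores=[
        CriterionScore("Interprets the scorecard accurately", 3),
        CriterionScore("Recommends feasible improvements", 2,
                       ["Tie each recommendation to a specific credit category."]),
    ],
)
print(evaluation.ready_to_pass())   # False -> another revision cycle
```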
Student contact that does not require a subject-matter expert is offloaded to para-professional staff mentors who focus on motivating students, checking progress, offering encouragement, and helping them navigate the course environment. These tasks, which teachers refer to as administrivia, are important to students but not necessarily an appropriate use of highly paid, highly trained faculty experts’ time.
In my professional experience, the combination of project-based learning and performance-based assessment is a transformational learning modality when working with struggling students. Students who come into the learning task underprepared do very well when they have a safe place to try and fail, with supportive, responsive adults whose focus is on mentoring them up to success on the assessment.
The objective of automating many of the tasks that consume most of teachers’ time is not to do away with teachers, but to use teachers in a much more focused and efficient way. The first place they should spend their time is responding directly to events in the LMS when a student is identified as being in danger of failing. If you design courses so that students have multiple opportunities to check their understanding, you generate data, you get analytics, and you can build in alerts when it’s clear a student needs targeted remediation. That is when it makes a huge difference to the student that someone knows right away that they’re in trouble and can reach out to them proactively to help them get back on track.
The second place teachers should spend their time is in deep, one-on-one, collaborative conversations with students around real-world artifacts, working through multiple revisions until the work product meets the desired outcomes.
The Need for Rubrics
A lot of times, teachers are actually responding to other, extraneous characteristics of student work when they assign grades.
We all knew the kid who would get extra points for putting one of those cute little plastic wrappers around their report when they turned it in, making sure there were no jelly stains on the paper, and making sure it was typed. These are all “teacher-pleasing behaviors” that have nothing to do with showing learning. They are a shortcut to getting ahead in a system where the teacher has the freedom to grade you on whether they like you, whether you seem like you’re trying hard, and whether your views irritated them too much, but not on whether you’re actually showing mastery of the kinds of skills you need to pass.
This model is the best way I know of to reach students who are alienated from our traditional education system and who feel like they don’t have the tools they need to be successful within that paradigm. It’s actually a transformational experience to take a student who will fail miserably on the first chance, and maybe on the second chance, and then throw a tantrum, and then disappear for a week, and then come back, do it right, and pass your class. It’s kind of a messy process, but it’s very rewarding, and it’s the difference between throwing away a generation of students and helping them master the skills they need to be successful in the 21st Century.
References