I’ve been through a few LMS evaluation processes, and I’ve helped others through theirs on Quora and in the comments on my blog. One thing that always amazes me is how very smart people fail to focus on the right things when evaluating an LMS.
The best way I know to avoid the common pitfalls in an LMS evaluation and keep your focus on what matters is to treat this process like a formal research project. This means developing focused research questions, and testing them explicitly.
Create a list of 10-15 “top tasks” that your users must be able to perform for your institution to fulfill its mission. Think about the most important actions that people need to take through your LMS, and list them using the formula below:
How can [stakeholder x] [perform action y] in each LMS?
Some good examples might look like these:
How can [students] [collaborate on group projects] in each LMS?
How can [faculty] [grade assignments with rubrics] in each LMS?
How can [student coaches] [get alerts when a student is in danger of failing] in each LMS?
These are just examples, but you should create a short, manageable list of these types of questions that reflect your institution’s top priorities for how the LMS is used. Then… you test them!
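If it helps to keep your list consistent, the question formula above can even be stamped out programmatically. Here's a minimal sketch (the stakeholder/task pairs are purely illustrative, not a recommended list):

```python
# Minimal sketch: generate "top task" research questions from the formula
# "How can [stakeholder x] [perform action y] in each LMS?"
# The stakeholder/task pairs below are illustrative only.
top_tasks = [
    ("students", "collaborate on group projects"),
    ("faculty", "grade assignments with rubrics"),
    ("student coaches", "get alerts when a student is in danger of failing"),
]

def research_questions(tasks):
    """Turn (stakeholder, action) pairs into research questions."""
    return [f"How can {who} {what} in each LMS?" for who, what in tasks]

for question in research_questions(top_tasks):
    print(question)
```

Keeping the questions in one place like this also makes it easy to hand the same list to every tester.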
The key is then to take your short list of “top tasks” and actually test them in a demo instance of each LMS you’re interested in. Give each research question its own wiki page or Google Doc, shared with your evaluation team, where you clearly state the question you want to test; then use that page to gather all the data you collect as you test that question.
Ask real faculty, students, and support staff to try each “top task” and record how effective, enjoyable, or even possible(!) it is for them to complete that task.
Document your step-by-step testing process, take screenshots, link to help materials you find online, and note user reactions, quotes, and new questions: everything that presents a clear picture of whether LMS #1 does this task better than LMS #2.
What you should have at the end of each page is a conclusion (“LMS #2 is better for this task!”) as well as a lot of tangible evidence you can share with your administrators and campus community to justify your conclusion.
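If you want those pages to stay consistent, the structure described above (one question, evidence per LMS, a conclusion) can be sketched as a simple record. This is a hypothetical sketch with field names of my own invention, not a prescribed schema:

```python
# Sketch of one research page's record: a question, per-LMS evidence,
# and a conclusion. Field names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class TaskFinding:
    lms: str                 # which candidate LMS was tested
    steps: list              # step-by-step testing notes
    screenshots: list = field(default_factory=list)
    help_links: list = field(default_factory=list)
    user_quotes: list = field(default_factory=list)

@dataclass
class ResearchQuestion:
    question: str
    findings: list = field(default_factory=list)  # one TaskFinding per LMS
    conclusion: str = ""

# Example page for one "top task":
page = ResearchQuestion(
    question="How can faculty grade assignments with rubrics in each LMS?",
)
page.findings.append(
    TaskFinding(lms="LMS #1", steps=["Open the gradebook", "Attach a rubric"])
)
page.conclusion = "LMS #2 is better for this task!"
```

Whether you use a wiki, a doc, or a spreadsheet, the point is that every page carries the same three parts: the question, the evidence, and the verdict.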
Though it seems like a lot of work, it will help keep you focused on making sure your users can do their jobs most effectively in the system you ultimately adopt. The evidence you generate will also help you communicate to your campus community how this LMS was selected, and can even form the basis for your in-house help and marketing materials!
Hopefully, the added rigor will help you overcome the most common pitfalls when evaluating an LMS.
Common Pitfalls when Evaluating an LMS
RFP-style “death by bullet points”
Traditionally, institutions create a lengthy RFP, or “request for proposal,” document to send to their candidate vendors, outlining their needs and requesting a demo of how product X addresses those needs. I don’t think this is a great idea, because it lets vendors throw a barrage of feature blah-blah at you and try to convince you that whatever’s in their product will make all your dreams come true. Usually it’s a pale substitute for the feature you want, but it’s in there(!!!), so they can check off the box.
- “Does it have realtime cloud collaboration like Google Docs?”
- “Oh yeah. It’s got our Collabo-docu-matic 9000!”
- “Great! So students can work at the same time on the same document over the cloud?”
- “Well no, but it’s got our ‘wait-until-your-groupmate-is-done’ co-editing feature that…”
- “So it’s not really cloud collaboration, is it?”
- “Well, it’s hosted on our blah blah blah…”
This is why, instead of asking “is feature X included?”, you should think in terms of the top tasks your users need to get done, and insist on seeing how that workflow actually plays out.
Judging a book by its cover
Most campus stakeholders, when opening up a new LMS, will cruise around for a while, click a few things, and call it quits. They always have feedback to share after such a short inspection, and usually that feedback is based entirely on the UI. They say things like “it seems old and clunky” or “it looks like Facebook”; in other words, they’re mostly commenting on the surface styling of the system, not its core features.
Steve Jobs built Apple knowing that people don’t understand the complicated inner workings of a tech product, and that they judge its quality by its surface design:
“People DO judge a book by its cover. We may have the best product, the highest quality, the most useful software etc.; if we present them in a slipshod manner, they will be perceived as slipshod; if we present them in a creative, professional manner, we will impute the desired qualities.”
In other words, people see the cover and they assume they know what the book contains. This is a persistent failing of people’s perception, and one that often comes up when evaluating LMSes. Just know that this happens regularly, and that you can’t depend on your users to test the technical features of an LMS with the rigor necessary.
Let them focus on their own personal workflows, and on whether it’s easier to complete their most common tasks in one system or another. Their feedback in this area is valuable and should definitely be taken into account when choosing an LMS.
Strike the word “intuitive” from your vocabulary
The word “intuitive” became a bad word to us for describing edtech software, because people’s intuition is deeply personal and idiosyncratic; it says more about the user than about the software itself. LMSes are very complex software bundles with hundreds of little features crammed onto every screen, each one reflecting a programmer’s best guesses and assumptions about how users are going to use it.
No LMS is going to be as simple and straightforward as Facebook — even that comparison is unfair — Facebook has tons of features that most people don’t know about and never use!
We found that often, when a user said something wasn’t “intuitive,” what they really meant was that it didn’t work the same way in Canvas as the same feature did in Blackboard. Their previous training and experience in a different LMS colored their expectations for how this new one was supposed to work.
Investigate Users’ Negative Evaluations
Quite often during our LMS evaluation process, we’d hear about someone badmouthing one product or another to their colleagues, saying it didn’t have feature X. We would follow up with those complaints, research them, and actually test the feature they were complaining about. Almost every time, the feature was there and did do what the user wanted, but it wasn’t obvious to them when they tried it. When you treat your LMS evaluation process like a research study, each allegation that “it doesn’t do what I want!” should become one of your research questions to test.
When you quit, don’t stop
The average computer user goes, goes, goes, happily along… until they hit a bump that makes them stop. At this point, they get angry, they start complaining, and they’ll tell anyone who’ll listen about that stupid program that “doesn’t work”. This can easily happen when you let your campus stakeholders test features in the LMS. Inaccurate complaints about your candidate LMS being shared around the faculty lounge can create rampant fear, uncertainty, and doubt about the process, your judgement, etc. It’s no good.
It happens to the best of us, but, at this critical moment, please share the Internet Mantra:
Take a moment, try to put the problem you’re facing into words, and search the web for all the other poor souls like you who’ve gotten stuck here. You may want to, y’know, RTFM, or at least skim some relevant help articles to see if there’s a different way you can approach what you’re doing. If your LMS vendor offers customer support, take the time to file a ticket and get an answer from a professional who knows the product better than you do.
There almost always IS a solution to get you going again, but it lies just beyond that moment of crisis where you’re tempted to stop.