The Big Question: How to assess learning initiatives…

How do you assess whether your informal learning, social learning, continuous learning, and performance support initiatives have the desired impact or achieve the desired results?

Why single out informal, social, and continuous learning? We need better approaches for assessing all learning initiatives. People who think the old Kirkpatrick 1-2-3 is adequate for assessing formal learning are kidding themselves.

The purpose of learning is behavior change. Up front, we need to secure our sponsor’s agreement on what behavior we’re trying to change, why it matters, and what evidence will credibly demonstrate that the new behavior is taking place. Knowing something is not enough; we’re after people doing things. That behavior change is best expressed in business terms.

You need to wait a while before conducting the assessment. Smile sheets and test scores prove nothing because they are administered before the forgetting curve sets in. The reason only 10%-15% of what is learned shows up on the job is that most of what you learn disappears rapidly unless it’s reinforced by reflection and practice. That’s why it’s a good idea to wait three to six months, to see what sticks.
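To see why a delay matters, consider the classic Ebbinghaus model of the forgetting curve as exponential decay, R = e^(-t/S), where t is time since learning and S is memory strength (which reflection and practice increase). Here is a minimal sketch; the 20-day decay constant is an illustrative assumption, not a measurement:

```python
import math

def retention(days: float, strength_days: float = 20.0) -> float:
    """Ebbinghaus-style forgetting curve: R = e^(-t/S).

    strength_days (S) is memory strength; reflection and practice
    raise it. The 20-day default is illustrative, not empirical.
    """
    return math.exp(-days / strength_days)

for day in (1, 7, 30, 90):
    print(f"day {day:>2}: {retention(day):.0%} retained")
```

Whatever the exact constants, the shape is the point: a quiz on day one mostly measures short-term recall, while an assessment at three to six months measures what actually stuck.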

When the time is ripe, there are several approaches to assessment. The first is to use the yardstick the sponsor agreed to up front. Did the needle move or not? This is often insufficient, because learning initiatives are never isolated acts. Sure, we had sales training on the new product, but we also had a publicity campaign, the product was better than the competition’s, and everyone was enthusiastic. How can we isolate the impact of the learning? Sometimes we can’t, because learning was indeed only one component of a multi-pronged solution.

However, you can find out a lot by interviewing a sample of people. Ask them what they had to know to succeed and how they learned it.

Some would suggest that this is not scientific, that you would have to interview everybody, and nobody’s got time for that. It’s a bogus argument. I used to work in public opinion polling. You can generalize results to the whole group by interviewing a small sample of people, and a standard formula tells you how large that sample must be for the results to be statistically meaningful.
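For the curious, here is a minimal sketch of the standard sample-size calculation pollsters use when estimating a proportion; the population of 500 and the ±10% margin in the example are hypothetical numbers, not drawn from any real initiative:

```python
import math
from typing import Optional

def sample_size(margin_of_error: float, population: Optional[int] = None,
                z: float = 1.96, p: float = 0.5) -> int:
    """People to interview to estimate a proportion within a margin of error.

    Standard formula n = z^2 * p(1-p) / e^2 at 95% confidence (z = 1.96),
    with a finite-population correction when the group size is known.
    p = 0.5 is the most conservative assumption.
    """
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return math.ceil(n)

# Hypothetical: 500 people took the training; interviews needed for
# roughly a +/-10% margin of error at 95% confidence.
print(sample_size(0.10, population=500))  # -> 81, not all 500
```

The takeaway: the sample you need grows with the precision you want, not with the size of the whole group, which is why a modest number of interviews can legitimately speak for hundreds of people.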

Furthermore, open-ended questions yield far more meaningful information than check-boxes and rating scales: stories and anecdotes that are more persuasive than percentages.

It would be sweet if you could punch a button on an LMS and get an instant evaluation. That’s a pipe dream. An LMS measures activity, not outcomes.

Besides, as we noted earlier, results are in the eye of the sponsor. This is why no training department can ever claim to have reached Level 4 (business results); they don’t own the yardstick by which Level 4 is measured.
