Is there a question that you hear so often in your professional life that you feel you could usefully have a response card, or a recorded message, prepared with your answer? I have. My oft-repeated question is “How do you measure the effectiveness of the training / programmes / learning you design?”
Now if I were designing learning that could be measured with standard response forms or tests, i.e. cognitive content, I could easily craft an answer suggesting the type of questionnaire that best reflected the content I’d designed, and how it should be administered to best measure the acquisition of this learning. However, my specialism is designing learning whose content is variously described as Interpersonal skills, Soft skills, Personal Development skills, or Employability skills, best captured by the term ‘affective learning’. My work is all about behavioural change, and I’m afraid that isn’t something you can easily measure using a standardised test.
So what would be on the response card or recorded message that I’d offer in answer to the question about how I DO measure the effectiveness of my design?
Firstly, you can’t divorce learning objectives from learning assessment.
If you compose a set of learning objectives that involve people doing something different as a result of your content, you can’t measure success by whether they know something different.
Take Resilience as an example. If I’m asked to design content or materials that build resilience, I need to acknowledge what resilience is, i.e.
“a range of skills, behaviours and attitudes that are beneficial in the development of personal mental toughness and the ability to deal with (and bounce forward from) challenges, pressures and stress in both personal and professional environments.”
There’s nothing there about building knowledge, i.e. cognitive learning. The content I design will have some knowledge building in it, but that can’t be the focus of my measurement, because ‘skills, behaviours and attitudes’ are things we do rather than things we know. What is appropriate is a measurement that asks the question “Is there a change in what this person does that can be attributed to this learning intervention?”
Secondly, behavioural change rarely happens overnight.
Whilst a learner may be seen and heard to do things differently on return to the workplace or classroom, it’s unwise to take this as evidence that the learning intervention has been effective. A better measure is whether they are still doing things differently after an extended period of time. So building this into the next appraisal, and looking for evidence of change to inform the conclusions, gives a more accurate measure of the effectiveness of the intervention they’ve experienced.
Thirdly, people are rarely the best observers or assessors of their own behaviour.
Some kind of external observation is a more accurate way to determine whether people are doing things differently. In some situations this might mean involving line managers or team leaders, giving them questions such as “How often do you hear person X saying/asking things like…?” or “How does person X cope with situations like…?” This is particularly effective if the observation is made twice (or more), i.e. before and after the intervention, and the results are brought into the appraisal meeting. Alternatively, setting up peer monitoring as part of the intervention can be effective, as it creates a workplace climate where people are more conscious of the target behaviours.
Finally, people are much more likely to change behaviour if they have been part of the process that defines what the desired changes are.
What this means is that if a significant part of the learning intervention involves people in an exploration of current behaviour, the benefits of change, and what these changes would look like, then the intervention is more likely to be effective. The ease of integrating these stages into a learning design is why we are such strong advocates of experiential learning pedagogies, and why our learning tools are designed the way they are. Admittedly, working in this way does place extra responsibilities on the facilitator (4 Basic Facilitation Skills for Trainers), but if the design we’re working with accommodates this element, it shouldn’t be too onerous.
An example of this would be ensuring that every experiential tool designed into the process has a debrief that first defines the learning from the shared experience and is then extended to explore the implications of transferring this learning back into the workplace. This is facilitated through questions such as “What would be the positive impact of doing that back at work?” and “What changes would we notice back at work if people really undertook to do more of that?” This raises the level of consciousness around behavioural change and makes it easier to set up the line-manager or peer observation that I’m advocating.
So, that’s a lot to fit on a printed response card, but basically it comes down to answering a question with a question. If I’m asked “How do you measure the effectiveness of your learning designs?” I answer “Six months after the intervention, are people actually doing differently the things we wanted them to do differently?”
Dr Geoff Cox
RSVP Design Director
Click below to download our learning design manual and read 20 characteristics of effective and successful learning designs, based on more than 30 years of Geoff’s experience in designing learning environments:
Free Learning Design Manual