
Are you stopping at the “Smile Sheet”? Discover how to use Kirkpatrick’s Four Levels of Evaluation to measure real impact, prove ROI, and transform your L&D team into a strategic business partner.

You’ve analysed the needs. You’ve designed the storyboard. You’ve developed the assets. You’ve launched the course.
Now, you celebrate. Right?
Wrong.
In the rush to meet deadlines, the final step of the ADDIE process, Evaluation, often gets reduced to a generic survey asking, “Did you enjoy this course?”
This is a missed opportunity of massive proportions. Without rigorous evaluation, you are flying blind. You don’t know if your training changed behaviour, you don’t know if it improved the business, and you certainly can’t prove your Return on Investment (ROI) to leadership.
Evaluation isn’t just about grading the learner; it’s about grading the design. It is the only way to know if your solution actually worked.
To do this right, we look to the gold standard: The Kirkpatrick Model.
Created by Donald Kirkpatrick in the 1950s, this model breaks evaluation down into four distinct levels. Most organisations stop at Level 1. To be an award-winning designer, you need to climb the pyramid.
Level 1: Reaction. The Question: Did they like it? This is the immediate feedback survey at the end of a course.
Level 2: Learning. The Question: Did they learn it? This is where we measure the increase in knowledge or capability.
Level 3: Behaviour. The Question: Did they use it? This is the chasm where most training fails. This level measures whether the learner applied the training back on the job.
Level 4: Results. The Question: Did it impact the business? This is the Holy Grail. It connects the training directly to organisational goals.
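At Level 4, the business case often comes down to simple arithmetic: compare what the programme cost against the monetary benefit it produced. A minimal sketch of that calculation, with entirely made-up figures for illustration:

```python
# Hypothetical example: a course whose benefit is measured as
# reduced rework cost over the year after launch. All numbers
# are placeholders, not benchmarks.
def training_roi(benefits: float, costs: float) -> float:
    """Return ROI as a percentage: (net benefits / costs) * 100."""
    return (benefits - costs) / costs * 100

programme_cost = 50_000    # design, development, and delivery
annual_benefit = 120_000   # e.g. measured reduction in rework

print(f"ROI: {training_roi(annual_benefit, programme_cost):.0f}%")  # ROI: 140%
```

The hard part is never the formula; it is isolating a benefit figure that leadership agrees was caused by the training.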
If you don’t evaluate, you can’t iterate.
Imagine a marketing team launching a million-dollar ad campaign and never checking to see if it sold any products. They would be fired. Learning & Development should be no different.
Evaluation provides the data you need to prove your ROI to leadership, demonstrate that behaviour actually changed on the job, and iterate on the design itself.
A common objection: “I’m a freelancer/consultant. Once I hand over the SCORM file, I never see the learners again. How can I possibly measure Level 3 or 4?”
This is the “Black Box” problem. You build it, ship it, and lose all visibility. But lack of access is not an excuse for lack of data. You just have to be smarter about how you gather it.
If you know you won’t be there in three months, use these “Proxy Metrics” instead:
1. Measure Confidence (The Predictor): If you can’t be there to watch them do the job, measure how ready they feel to do the job.
2. The “Trojan Horse” Resource: Embed a digital “hook” inside the course that requires the learner to reach out to a server you control (or at least can monitor).
3. The Automated “Boomerang”: If you are handing off to a client’s LMS, negotiate one simple automation before you leave.
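The confidence proxy can live inside the course itself: ask the same “How confident are you doing X?” question at the start and at the end, then report the shift. A minimal sketch, with an invented 5-point scale and sample ratings:

```python
# Placeholder data: per-learner confidence ratings (1 = not at all
# confident, 5 = fully confident) collected at course start and end.
from statistics import mean

pre = [2, 3, 2, 1, 3]    # ratings before the course
post = [4, 4, 5, 3, 4]   # the same learners afterwards

def confidence_shift(pre_scores, post_scores):
    """Average per-learner gain on the confidence scale."""
    return mean(b - a for a, b in zip(pre_scores, post_scores))

print(f"Average confidence gain: {confidence_shift(pre, post):+.1f} points")
```

A positive shift is not proof of Level 3 behaviour change, but it is a defensible predictor you can collect without any post-course access.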
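One way to implement the “Trojan Horse” hook, sketched with Python’s standard-library HTTP server: the course links to a short URL you host for a job aid, your server logs the hit (a Level 3 signal that someone reached for the resource on the job), then redirects to the real file. The URL, port, and log filename here are all placeholders.

```python
# Minimal tracked-redirect sketch using only the standard library.
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

JOB_AID_URL = "https://example.com/real-job-aid.pdf"  # placeholder target

class TrackedResource(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each hit is one data point: the resource is being used post-course.
        with open("resource_hits.log", "a") as log:
            log.write(f"{datetime.now(timezone.utc).isoformat()} {self.path}\n")
        self.send_response(302)  # forward the learner to the actual job aid
        self.send_header("Location", JOB_AID_URL)
        self.end_headers()

# To run: HTTPServer(("", 8080), TrackedResource).serve_forever()
```

Even a third-party link shortener with click analytics gives you the same signal if you cannot host anything yourself.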
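The “Boomerang” can be as light as one scheduled script. A minimal sketch, assuming the client can export completion records as CSV and run the script on a schedule: it finds everyone whose completion is at least 90 days old and queues them a Level 3 follow-up survey. The field names (`email`, `completed_on`), the survey URL, and `send_survey` are placeholders for whatever the client’s stack provides.

```python
# Follow-up-survey scheduling sketch; all names are illustrative.
import csv
from datetime import date, timedelta

SURVEY_URL = "https://example.com/90-day-survey"  # placeholder
FOLLOW_UP_AFTER = timedelta(days=90)

def due_for_follow_up(records, today=None):
    """Return emails of learners whose completion is >= 90 days old."""
    today = today or date.today()
    return [
        r["email"]
        for r in records
        if today - date.fromisoformat(r["completed_on"]) >= FOLLOW_UP_AFTER
    ]

def load_completions(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Each scheduled run, send the survey to everyone who is due:
# for email in due_for_follow_up(load_completions("completions.csv")):
#     send_survey(email, SURVEY_URL)  # send_survey: the client's mailer
```

Negotiating this before handover costs the client almost nothing and gives you the delayed data the “Black Box” would otherwise swallow.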
Stop treating evaluation as an administrative afterthought. It is the most critical part of the design cycle.
If you aren’t measuring the result, you aren’t managing the learning.
Don’t guess if your training worked—know for sure. Book a coaching session with me and start evaluating for impact today.