My current role is both Tech Writer and Trainer. As such, I write Training Effectiveness Surveys for key topics and courses. While drafting one for a recently rolled-out product improvement, it occurred to me that I had never documented a process for this.
As with any other piece of technical writing, to start I need to determine the goal, scope, and audience for the survey. The goal can be to measure the effectiveness of the training course (duh) as a whole, the trainer, the training materials, or the attendees’ attention. Trying to cover more than one of these goals tends to lead to redundant questions: for example, having to ask essentially the same question about the trainer, the training session, and the course materials.
The scope of the survey usually breaks down into two parts: the survey given immediately after the course, and the delayed follow-up. The immediate survey tells me whether the goal of the course was communicated to the attendees. Did they learn the material? The delayed follow-up, usually two weeks later, tells me whether the training has been integrated into the attendees’ workflow. Did they retain the material?
The audience can include more than just those who attended the course. If that’s the case, then I need to include response options for those who haven’t had the chance to use the product or process that was the focus of the training. The audience also determines whether the language needs to be more formal, or whether jargon can be used. (Do I need to define my terms?) That can be a test in and of itself: if the survey needs a glossary, then something wasn’t effective.
The Questions
- How would you rate the product/ process/ enhancement overall?
- Did the product work as expected? Was the training/ training material accurate?
- Specifically for enhancements: Is the change beneficial? Does the workflow change represent an improvement? Does it take more steps or time to accomplish the task?
- And the generic, “Do you have any questions or comments?”
The Answers
A recent survey needed to measure the effectiveness of the user guide, the training itself, and the two features rolled out. The survey had seven questions, and the response options followed this pattern (sketched in code after the list):
- a. Yes/ Total success (no comment needed)
- b. Yes/ Mostly successful (no comment needed)
- c. No/ Fail (comment required)
- d. N/A
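To make the pattern explicit, here is a minimal sketch in Python. The class, field names, and labels are my own shorthand for the options above, not part of any survey tool.

```python
# A sketch of the response pattern as data, so each option's comment rule
# is explicit. Names are illustrative, not from a real survey platform.
from dataclasses import dataclass

@dataclass(frozen=True)
class ResponseOption:
    code: str               # "a" through "d"
    label: str              # wording shown to the respondent
    comment_required: bool  # only "c"/Fail forces a comment

PATTERN = [
    ResponseOption("a", "Yes / Total success", comment_required=False),
    ResponseOption("b", "Yes / Mostly successful", comment_required=False),
    ResponseOption("c", "No / Fail", comment_required=True),
    ResponseOption("d", "N/A", comment_required=False),
]
```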
The four choices are predicated on human behavior. We don’t always want to give something our total endorsement, for fear that we’ll look foolish if the group doesn’t judge it “perfect.” Option “b” allows respondents the “yeah, whatever” or “fine” response that means much or little, depending on 1) the respondent’s role and 2) their response to the “questions or comments” question.
For non-anonymous surveys, multiple “b” responses require follow-up. That means talking to the respondent, and also checking the training course and the training materials. Was something unclear? Was the trainer (me?) not likable? Was too little time allotted for the course?
For anonymous surveys, checking for a pattern of “b” responses AND a message in the generic “do you have any questions or comments?” gives the surveyor the most information. A string of “b” responses and no comment probably says more about the respondent than about the course.
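To make those checks concrete, here is a minimal sketch of a triage pass over one respondent’s answers, assuming they arrive as a list of option codes plus the free-text reply to the generic question; the function name and the two-response threshold are my assumptions, not rules from the survey itself.

```python
# Hypothetical triage pass over one respondent's answers. "answers" is a
# list of option codes ("a" through "d"); "comment" is the free-text reply
# to the generic "questions or comments" question. The threshold of two
# "b" responses is an assumption, not a rule from the survey.
def flag_for_follow_up(answers, comment, threshold=2):
    if answers.count("b") < threshold:
        return None  # no pattern of middling answers; nothing to chase
    if comment.strip():
        # A pattern of "b"s plus a comment is the most informative case:
        # read the comment, then recheck the course and the materials.
        return "review comment; check course and materials"
    # A string of "b"s with no comment probably says more about the
    # respondent than the course; follow up in person if they're known.
    return "talk to respondent if known; spot-check the course"

# Example: three "b" answers plus a comment get flagged for review.
print(flag_for_follow_up(["a", "b", "b", "d", "b", "a"], "Pacing felt rushed."))
```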
More reading
- https://www.genroe.com/blog/training-survey-questions/11413
- https://www.questionpro.com/a/showSurveyLibrary.do?surveyID=370147
- https://www.mindtickle.com/blog/measure-impact-training-program/
- https://www.questionpro.com/a/showSurveyLibrary.do?surveyID=367090
- http://article.sapub.org/10.5923.j.hrmr.20110101.01.html
- http://fluidsurveys.com/survey-templates/training-survey/
- https://www.marsdd.com/mars-library/training-evaluation-sample-feedback-questionnaire-for-trainees/
- https://www.efrontlearning.com/blog/2017/12/element-post-evaluation-training-questionnaire.html