A customizable evaluation instrument to facilitate comparisons of existing online training programs

Cheryl A. Murphy, Elizabeth A. Keiffer, Jack A. Neal, Philip G. Crandall

Abstract

A proliferation of retail online training materials exists, but the person in charge of choosing the most appropriate online training materials is often not versed in best practices associated with online training. Additionally, that person must consider the context of the training situation when choosing a training solution. To assist this decision-making process, an evaluation instrument was developed. The instrument was designed to help decision-makers 1) assess multiple online training programs against known best practices, and 2) consider context-specific training needs via a weighting process. Instrument testing across multiple online training programs was performed, and weighted and unweighted results were examined to determine the impact of contextualized weighting. Additionally, evaluation data from the new instrument were compared to data from an existing online training evaluation instrument. Results indicated that the new instrument allowed for consistent rankings by raters across multiple programs, and that applying the new weighting process magnified small differences, making them more noticeable in overall rating scores. Thus, the new weighted instrument was effective in 1) assessing multiple online training programs, and 2) providing reviewers clearer context-specific rating data on which they could base purchasing decisions.

https://doi.org/10.34105/j.kmel.2013.05.018



This work is licensed under a Creative Commons Attribution 4.0 License.

Laboratory for Knowledge Management & E-Learning, The University of Hong Kong