Knowledge Checks: Design and Methodology
July 14, 2008 12:00 AM
I totally agree with Justin's suggestion. If that's not feasible, then I'd say my answer to your original question is "both": a quick question or two after each task, plus a summary activity after several tasks with questions about all of them (preferably in mixed-up order). The problem with only presenting questions directly after the related content is that it doesn't let the learner (or you) see whether they can actually *retain* what they've learned, especially once their minds are occupied with other, new information.

In my e-courses, I usually have knowledge/skill checks after each concept (or small group of concepts, if they're simple enough and/or highly related to one another) and then an end-of-module summary activity, usually in the form of some kind of "game" or other more "fun" activity, so it feels like a bit of a reward for getting through the material as well as a good review. (The "game" could be an interactive scenario/simulation, where the decisions/choices they make lead to different outcomes -- "game" doesn't have to mean Jeopardy or something like that, although it could.)

I should add that all of these in-Lectora questions/activities I create are used primarily for learners to assess their own knowledge, not as testing/assessment tools, since the ultimate objectives for our courses usually require that the learner be able to "explain" or "describe" things, which calls for open-ended questions, phone simulations, or other testing approaches that can't be evaluated and/or administered by a computer.

Laura