Directed Evaluation Questions Based on a Variable

So this is a fairly large question for a tip or trick, but I think I've thought it out fairly well. I just want to see if anyone else has done this or something similar.

We have a geographically diverse population with different skills and different requirements at over 90 sites. We'd like to administer one test so that it rolls up properly within our LMS, and so that there is always one enrollment for everyone as opposed to different enrollments based on the site (LMS administration is easier that way).

Is there a way to have a pool of 50 questions and then only display certain ones based on information obtained from the LMS (e.g. user ID or site) or from the user (e.g. self-selection of the site)? I was thinking of using an IF/THEN check that runs when the page is loaded and either automatically skips to the next applicable question using a next-page action or displays the page for the user to answer.
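Roughly, I'm imagining logic like the TypeScript sketch below (a Lectora course publishes to HTML/JavaScript, so something similar could sit behind a page action). The questionSites map, the site codes, and shouldShowQuestion are all made-up names; the learner's site would have to come from an LMS variable or a self-selection page.

```typescript
// Hypothetical mapping of question IDs to the sites they apply to.
// In a real course this would be driven by data from the LMS
// (e.g. user ID or site) or by a self-selection page.
const questionSites: Record<string, string[]> = {
  q01: ["ALL"],              // core question: everyone sees it
  q02: ["SITE_A", "SITE_B"], // site-specific questions
  q03: ["SITE_C"],
};

// Returns true if the question page should be displayed for this learner,
// false if an "on show, go to next page" action should skip past it.
function shouldShowQuestion(questionId: string, learnerSite: string): boolean {
  const sites = questionSites[questionId] ?? [];
  return sites.includes("ALL") || sites.includes(learnerSite);
}

// Example: a learner from SITE_B sees q01 and q02 but skips q03.
for (const q of Object.keys(questionSites)) {
  console.log(q, shouldShowQuestion(q, "SITE_B") ? "show" : "skip");
}
```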

Discussion (3)

I'd say it depends on how exactly you want to set up the test. Do you plan to present each learner with an individual selection of questions, or something like three levels, e.g. "Beginner", "Advanced" and "Expert"? The latter should be easier to do. You write that you want ONE test. Is there a reason for that? A Lectora course can easily have several tests, one for each level, but also one test with a section for each level. Either way you can calculate an LMS score from the separate test or section scores. Skipping questions is more difficult to handle, because skipped questions are still part of the test and will be evaluated as "incorrect" when left empty.

I'd agree with Tim. Don't try to achieve this using one single test in Lectora. If you have a core set of questions that everyone should be asked, group them in one test; this would be shown to everyone. Then create separate tests for all the other variations you want to target - skill level, location, etc. Show these tests to whomever you wish using your IF/THEN reasoning (I'd think it will probably require the user to self-select).

You will then need to convert the score at the end of the module, as the default % score will not be correct (for example, the user will only have completed 2 tests out of a possible 10). In this example you would want to multiply the total score by 5 and then pass this value to the LMS (or, alternatively, you could work with the individual test scores to calculate the proper score that way). NB: to keep the calculated total scores consistent and fair, all of the optional tests should really have the same number of questions, weightings, etc.
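To illustrate the conversion, here's a rough sketch; scaleScore is just an illustrative helper name, the 17% figure is a made-up raw total, and the maths only holds if all tests carry equal weight, as noted above.

```typescript
// Scale the raw total back up to a full-course percentage.
// Assumes every test is worth the same number of points (see the
// NB above about equal question counts and weightings).
function scaleScore(rawTotal: number, testsTaken: number, totalTests: number): number {
  // A learner who took 2 of 10 tests covers only 2/10 of the possible
  // points, so scale the raw total up by 10/2 = 5.
  return rawTotal * (totalTests / testsTaken);
}

// Example: 2 tests taken out of 10, scoring 85% on each, gives a
// default raw total of 17% of the course maximum; 17 * 5 = 85.
console.log(scaleScore(17, 2, 10)); // 85
```

The scaled value is what you would then report to the LMS as the learner's score.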

And I agree with both Tim and Mallow. The system mallow76 describes is, in fact, exactly what I've done in the past and it works well.
