test question processing

I have developed a test with 20 multiple choice questions.


I have set the test content to pull 10 randomized questions from that pool.


I have also programmed the test (using actions/variables) so that when users fail, they only have to retake the questions they answered incorrectly. A rough sketch of that logic is below.
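For reference, the retake logic follows roughly this pattern. This is only a sketch of the variable checks; in the actual title it is built with Lectora actions and variables rather than script, and the variable names (questionCorrect_1, etc.) are placeholders, not the real ones in my file.

```typescript
// Hypothetical sketch of the retake-only-incorrect logic.
// One flag variable per pooled question: "1" = answered correctly.
const questionFlags: Record<string, string> = {
  questionCorrect_1: "0",
  questionCorrect_2: "1",
  // ...one entry for each of the 10 randomized questions
};

// On the results page: collect the questions that still need a retake.
function questionsToRetake(flags: Record<string, string>): string[] {
  return Object.keys(flags).filter((name) => flags[name] !== "1");
}

// If any question is still flagged incorrect, show the fail screen
// and route the learner back through only those question pages.
const retakeList = questionsToRetake(questionFlags);
if (retakeList.length > 0) {
  console.log("Show fail screen; retake:", retakeList);
} else {
  console.log("All correct; show pass screen.");
}
```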


This process works perfectly inside Lectora. However, in ReviewLink and my LMS, the fail screen pops up sporadically. There is no consistency in which question, or where it falls in the randomized order, triggers the incorrect behavior.


Example: I published the test to ReviewLink and ran it a total of 10 times; it seems to work as designed about 3 out of 4 times. (1) I answered all questions incorrectly and the program worked as designed. (2) I relaunched the program, answered all questions correctly, and it worked as designed. (3) I answered a mix of questions correctly and incorrectly and it worked as designed. However, occasionally one of these same runs does not work as designed (with no changes made to the programming). There is no consistency in how the malfunction occurs.
