Scoring a drag and drop quiz
December 23, 2017 12:00 AM
Hello
I am stumped by this. I have created a "test". The test has some drag and drop questions in it. I would like to do the following:
(a) give partial credit for getting the question partially right. For example, there are four correct placements for the boxes and the student gets 3 right; I would like to give them 3 points. If they got the question 100% right they would get 4 points. I have no idea how to access this partial information.
(b) keep a cumulative score. I notice that Lectora has created a new variable called Test_1_Score, which presumably is meant to hold the cumulative score for the test (the test is called "Test 1"). However, I seem to be unable to get a value out of any likely variable.
One thing I have not yet tried is to actually process the quiz/test. Maybe that is what is lacking. I have been presuming that processing the question would trigger the setting of the variables. Perhaps I am wrong.
Help!!!
Andrew
Discussion (7)
Hi Andrew - Hope you had a wonderful holiday!
I did some work with @timk on this very question early in the development of my courses. It comes down to knowing the variables and how they match, since, in essence, drag and drop is an "animated" matching question.
Drag and drop questions use the format "ItemNumber-ZoneNumber" to track answers, and the whole value sits on one line in Lectora (i.e., after dragging item 1 to drop zone 1 only, the question variable value would be "i01-d01,i02-(na)", etc.). From there, you can set up an action group that checks the question variable, then adds to a custom variable to count the number correct when the question is processed. For example:
onProcessQuestion -> Add to variable "correct" IF Question_0001 Contains "i01-d01".
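To make that counting logic concrete, here's a minimal sketch in TypeScript of what the check amounts to, assuming the pair format described above. The answer key, names, and function are hypothetical illustrations, not Lectora built-ins:

```typescript
// Count correct placements from a drag-and-drop question variable
// whose value looks like "i01-d01,i02-(na),..." (item-zone pairs).
// Hypothetical answer key for a four-item question.
const answerKey: Record<string, string> = {
  i01: "d01",
  i02: "d02",
  i03: "d03",
  i04: "d04",
};

function countCorrect(questionValue: string): number {
  let correct = 0;
  for (const pair of questionValue.split(",")) {
    const [item, zone] = pair.split("-");
    // "(na)" means the item was never dropped on a zone.
    if (zone !== "(na)" && answerKey[item] === zone) {
      correct++;
    }
  }
  return correct;
}

// Three of four items placed correctly earns 3 points.
console.log(countCorrect("i01-d01,i02-d02,i03-d03,i04-d02")); // 3
```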
I've attached the example Tim helped me with so you can see it in action. Instead of processing the question, I used a submit button that checked the answers without processing it. If it's in a quiz, I'm not 100% sure what Lectora does in terms of correct or incorrect when processing, since the question variable is populated with the answer given by the user. For "tests" that aren't an actual "final" test, I prefer to use a custom variable to calculate the score and the "pass/fail" percentage.
Side note: keep in mind that a drag and drop question is NOT accessible to keyboard users. Here's a link to an example alternative that can help keyboard-only users, depending on the design of your question. You can only tab through the other options, which are checkboxes that trigger an animation moving the item to the plate:
Accessible drag and drop using Storyline
Lectora creates variables for each test and each test section, e.g. "Test_1_Score" or "Test_1_Test_Section_1_Score". These variables contain the score for the respective scope as a percentage, but there's no separate score for a single question.
One way to get a score for a single drag & drop question is to put it into a separate section and check "Grade each choice". For the evaluation, Lectora will treat each dropzone as a separate question: for a dnd question with 4 dropzones where the user drops 3 of 4 correctly, the score for the section will be 75, because 3 of the 4 separate questions have been answered correctly. This score also counts toward the total score of the test. A minor drawback is that Lectora also treats the parts separately when displaying test results.
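For illustration, here's a small sketch of that arithmetic, assuming "Grade each choice" effectively turns an n-zone question into n single-point questions (the names here are illustrative, not Lectora variables):

```typescript
// Each graded part contributes its earned points toward the total.
interface GradedQuestion {
  earned: number;
  possible: number;
}

function scorePercent(questions: GradedQuestion[]): number {
  const earned = questions.reduce((sum, q) => sum + q.earned, 0);
  const possible = questions.reduce((sum, q) => sum + q.possible, 0);
  return Math.round((earned / possible) * 100);
}

// A four-zone drag and drop with 3 correct placements scores 75.
console.log(scorePercent([{ earned: 3, possible: 4 }])); // 75
```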
If you wanted the question to contribute 0 to the test score and still react to the partially correct answer, Jason's approach would be the way to go.
By the way:
"Process test" and "Process question" are completely unrelated actions. You must process a test to get a score or run actions for "On: passed" and "On: failed". Without processing a test you won't have any score neither for the test nor the sections, thus the test couldn't be passed.
A question doesn't have to be processed. The moment a user changes the answer, the question is either correct or not. "Process question" means running the actions defined on the "Feedback" and "Attempts" tabs of a question. If you don't use these settings, there's no need to process it. Processing a question has no effect on the score of any test it may be in, although having the action does no harm when no Feedback or Attempts settings are used.
Hello Jason and Tim and Happy Holidays to you too!
Thank you both for the information in your answers. I am aware of most of the scoring issues, including the way Lectora pairs things for dnd exercises. However, I had hoped someone could offer a solution using ready-made Lectora structures such as "grade each question separately" without using sections (the option at least hinted at the possibility, and I was hoping I had simply misunderstood how it worked). The reason for this is to spare novice lesson writers from having to delve into the innards of Lectora - but that now seems inevitable.
Thank you for the great help from both of you.
Andrew
Ah, I apologize, as I think I misinterpreted the original question. It would be nice to have more control over drag and drop questions the same way you can for multiple answer questions. From my perspective, there are quite a few things about questions that could be improved, given the ways I've used them outside of a test: for example, allowing an "or" option on multiple answer questions, or, as in my most recent work, the ability to compare the selected choices across several multiple choice drop-downs and prevent duplicate answers.
Oh no, please don't apologize, Jason. I think I did not give enough information. I tend to get a bit long-winded when describing problems, and that often discourages people from answering, so my questions end up less informative than they could be. I am very grateful for the feedback from both you and Tim. I don't use quizzes much in my work - most of the answer evaluation I do is on strings - and, so far, I am not a regular lesson writer with Lectora (or anything else). Hopefully that will change soon, as I am less fluent with Lectora than I should be. Every little bit helps greatly. Thank you!
Andrew
Hi Andrew,
You only need a separate section if you want to know the score for the individual question. Checking "Grade each choice" is enough to factor the "partial" score into the test score.
Tim