Manually set question status to correct

I'm working on a course where instead of having a standard "test", I have tasks set up at various points throughout the course. Each task is considered a "question" that is right or wrong. What I'm trying to do is figure out if I can either:

Manually increase the AICC_Score each time they get a correct answer

Use dummy questions that are marked as correct as the learner goes through (e.g., when the user clicks on the correct item, Question 1 is marked as correct).

I may be getting in over my head and trying to overreach with this. In essence, I'd like to avoid using a standard "Test" and instead score the user based on the number of times they take the correct action.

Thanks!

Discussion (8)

No, you're not in over your head; this is a great way to provide reinforcement and retrain learners to bring themselves back to the correct actions. If done correctly, it can be very effective.

Is this a question or are you just offering up wisdom?

If I am understanding correctly, you want to increase their score by approximation or by redirection. An incorrect answer makes them answer more questions to bring them back around to the correct "path" of action. Eventually, the original question (or the same one asked in a different way) is asked and answered correctly.

I think this is a good way of gauging improvement instead of testing, and that is real learning. It is how we should prove our ability to change someone's behavior. That IS Learning. If you need ROI, you have it in your reports, provided you use an LMS that can give them back to you. Most do.

The way I usually do it is with a hidden question of some type on the page. Once they get through the learning activity, just set that question's answer to whatever the correct value is (if they got it right). So you can have a long, multi-step, scenario-based task where the last step just sets the question (fill-in-the-blank works great) to a value you determine to be correct. If they go a different route and end up in a fail case, you can set the question answer to something else.

It definitely can be done as you describe it. The benefit of doing it this way rather than with custom variables is that you can take advantage of the results section and scoring. Since you can write anything into a fill-in-the-blank, you can see exactly what led them to the end result.
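
Just to illustrate the idea, here's a rough sketch in plain JavaScript, assuming the hidden fill-in-the-blank ends up rendered as a text input; the element ID "fitb123" and the answer strings are made up, and your authoring tool will generate its own:

    // Write an outcome string into a hidden fill-in-the-blank so the
    // results page records how the learner finished the scenario.
    // "fitb123" is a placeholder ID; use the one your tool generates.
    function recordOutcome(outcomeText) {
        var answerField = document.getElementById('fitb123');
        if (answerField) {
            answerField.value = outcomeText;
        }
    }

    // Last step of the success path:
    recordOutcome('correct');

    // Somewhere on the fail path:
    recordOutcome('wrong turn at step 3');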

It is more of a question at this point. The "getting in over my head" comment had more to do with the time constraints I'm under while trying to puzzle out functionality I'm not experienced with. I have to believe there's a way to do this, but the path to success escapes me.

I thought I could create a question in a test and then, when the task was done successfully, mark the "question" as correct. However, I'm starting to think I may be better off using a completely independent variable that is incremented as each task is completed correctly. Where I struggle is how to have a correct "answer" add a point to the variable, and how to translate that into pass/fail.

In the end, I'll be mulling this over during the weekend and may have to write the path out by hand unless there's some feedback here.

In a flash of insight inspired by your post, I realized I could use a hot spot question, since the answers depend on clicking a specific menu option. The only question I have is how it works with a test... does any question in any part of the course apply to the test as long as the question is graded? My inclination is yes, but my memory isn't sure.

Hmmm... then there's the rub. It sounds like I need to set up a variable that is updated incrementally when a question is answered correctly, which is something I'm not 100% certain how to do. This is a case where the user needs to click on a "button" in the menu, and the menu needs to disappear after the user has clicked on it.

I've attached my attempt to do this with a hotspot, but was not able to get the button and hotspot to cooperate.

At this point, I think any input on how to set up the incremental variable would be really helpful, as I have several items like this for testing purposes. Once I have that set up, I know I can use the variable to determine lesson status and even figure out an "AICC_Score" to send back to the LMS.

G'day Jason,

Why not add a hidden True/False question for each correct action, and then, when that action has been performed, trigger the correct answer of the True/False question? With the question hidden, the learner cannot manually answer it, and if the question is not answered, it is considered incorrect.

You can trigger the correct answer with one line of JavaScript: radio396id.click(). This is added as an Action on the object, and you can select the event, e.g., Click or Mouse Enter.
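
In other words, the Action's JavaScript is just that one line; the ID here is a placeholder, so find the real name as described below and remember to append "id":

    // Check the hidden "True" radio when the correct action is performed.
    // "radio396id" is whatever name your tool shows, with "id" appended.
    radio396id.click();

    // An equivalent, more explicit form of the same call:
    document.getElementById('radio396id').click();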

To find the HTML name of the correct answer, select the "True" radio button, then select Properties, and move your mouse over the "Add description" button; you can find this at the bottom right of where the "Name" and "Label" are at the top left. Notice how you need to add "id" to that name; this is very important.

Using this method, you can use the question's feedback to display information or trigger another event. The downside is that, without some other object being clicked, there is no incorrect feedback or incorrect trigger. One option is a "Show Me" button that shows all the appropriate menu objects, including an arrow, and also triggers the "False" radio button. The "Show Me" button should be hidden once the correct actions have been performed so the learner does not inadvertently click it. :-)
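
The fail path would then be a second one-line Action on the "Show Me" button, targeting the "False" radio instead (again, the ID below is hypothetical):

    // Action on the "Show Me" button: check the hidden "False" radio
    // so the question is recorded as answered incorrectly.
    // "radio397id" is a made-up ID for the False option.
    radio397id.click();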

This JavaScript works in all browsers and preview mode.

HTH

Regards, Peter

Only questions that are in the test chapter are relevant for the test score. All other questions in your course aren't part of the test.

Using custom variables to track the tasks a user has completed and calculating a score from them is (I believe) the best way to go. Invisible questions can be very helpful, but only for settings/tasks more complex than a single click on a button.

I've attached a small sample showing how to track button clicks in a variable, how to calculate a score based on the tasks a user has completed, and how to set the AICC_Score to the score achieved through those tasks.
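
For anyone who can't open the attachment, the pattern is roughly this; all names here are made up, the attached sample may differ, and how AICC_Score actually gets reported depends on your publishing setup:

    // Count correctly completed tasks in a custom variable.
    var tasksCompleted = 0;
    var TOTAL_TASKS = 5; // assumed number of scored tasks

    // Call this from each task's "correct action" trigger.
    function taskDone() {
        tasksCompleted++;
        var score = Math.round((tasksCompleted / TOTAL_TASKS) * 100);
        reportScore(score);
    }

    // Placeholder: hand the score to whatever writes AICC_Score
    // back to the LMS in your published course.
    function reportScore(score) {
        window.AICC_Score = score;
    }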

Tim
