Many model-tracing intelligent tutoring systems give students, on demand, a series of hints until they reach the bottom-out hint that tells them exactly what to do (the exact answer to the question). Since students do not know how many hints are available for a given question, some may be surprised to suddenly be told the final answer; letting them know when the bottom-out hint is getting close should, we reasoned, reduce the incidence of bottom-out hinting. We were therefore interested in creating an intervention that would reduce the chance that a student would ask for the bottom-out hint. Our intervention was straightforward: we simply told students the number of hints they had not yet seen, so that they could see they were approaching the bottom-out hint. We conducted a randomized controlled experiment in which we randomly assigned classrooms to conditions. Contrary to our expectations, the intervention led to more, not less, bottom-out hinting. We conclude that the many intelligent tutoring systems that give hints in this manner should not adopt this intuitively appealing idea.