The proposed workshop will focus on the design and application of randomized experimental comparisons that investigate how components of digital problems impact students' learning and motivation. The workshop will demonstrate how randomized experiments powered by artificial intelligence can enhance personalized components of widely used online problems, such as prompts for students to reflect, hints, explanations, motivational messages, and feedback. Participants will be introduced to dynamic experiments that reweight randomization in proportion to the evidence that a condition benefits future students, and will consider the pros and cons of using such advanced statistical methods to ensure that research studies lead to practical improvement. The focus will be on real-world online problems that afford the application of randomized experiments; examples include middle school math problems (www.assistments.org), quizzes in on-campus university courses, and activities in Massive Open Online Courses (MOOCs). Attendees will have the opportunity to collaboratively develop hypotheses and design experiments that could then be deployed, such as investigating the effects of different self-explanation prompts on students with varying levels of knowledge, verbal fluency, and motivation. This workshop aims to identify concrete, actionable ways for researchers to collect data and design evidence-based educational resources in more ecologically valid contexts.
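One common way to implement the "dynamic experiment" idea described above, reweighting randomization in proportion to the evidence that each condition helps, is Thompson sampling. The sketch below assumes a binary outcome (e.g., whether a student answers the next problem correctly) and uses Beta posteriors over each condition's success rate; the condition names and simulated success rates are hypothetical illustrations, not results from the workshop's platforms.

```python
import random

def thompson_assign(successes, failures, rng=random):
    """Assign a student to a condition via Thompson sampling: draw one
    sample from each condition's Beta posterior (with a Beta(1, 1)
    prior) and pick the condition with the highest draw. Over many
    students, assignment probabilities track the posterior probability
    that each condition is best."""
    draws = {
        arm: rng.betavariate(successes[arm] + 1, failures[arm] + 1)
        for arm in successes
    }
    return max(draws, key=draws.get)

# Hypothetical example: two kinds of self-explanation prompt.
successes = {"prompt_a": 0, "prompt_b": 0}
failures = {"prompt_a": 0, "prompt_b": 0}

rng = random.Random(0)
for _ in range(200):
    arm = thompson_assign(successes, failures, rng)
    # Simulated student outcome; the underlying rates (0.4 vs. 0.6)
    # are assumptions made only for this demonstration.
    outcome = rng.random() < (0.6 if arm == "prompt_b" else 0.4)
    if outcome:
        successes[arm] += 1
    else:
        failures[arm] += 1

# As evidence accrues, the better-performing condition tends to
# receive an increasing share of the assignments.
print(successes, failures)
```

In a fixed (non-adaptive) experiment, each prompt would keep receiving about half of the students regardless of the accumulating evidence; the trade-off noted in the abstract is that adaptive allocation benefits future students sooner but can complicate standard statistical analyses of the resulting data.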