This paper explores the application of expert tracking to online user adaptation, using a set of basic predictors to classify input in multimodal interaction settings. We compare the performance of this approach to that of other common approaches that aggregate multiple predictors, such as stacking and voting. To realistically assess algorithms that require feedback, we added noise to the feedback signal to simulate an imperfect system. Across two datasets, we obtained inconsistent results: with one dataset, expert tracking was the best option for short interactions, but with the other it was outperformed by other algorithms. In contrast, voting worked surprisingly well. On the basis of these results, we discuss implications and future directions.