We show that the methodology currently used to compare symbolic supervised learning methods on human language technology tasks is unreliable. In particular, the interaction between algorithm parameter settings and feature selection within a single algorithm often accounts for more variation in results than differences between algorithms or between information sources. We illustrate this with experiments on a number of linguistic datasets. The consequences of this phenomenon are far-reaching, and we discuss possible solutions to this methodological problem.
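As a minimal sketch of the phenomenon (not from the paper), the following Python snippet jointly varies feature selection and hyperparameters within one algorithm and compares the resulting spread of cross-validated scores to the gap between two algorithms at their default settings. The dataset, classifiers, and parameter grid are illustrative stand-ins chosen for reproducibility, not the paper's actual experimental setup.

```python
# Illustrative sketch: within-algorithm variation (over parameters and
# feature-selection settings) vs. between-algorithm variation at defaults.
from sklearn.datasets import load_digits  # stand-in dataset, not the paper's data
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)

# Joint grid over the number of selected features and algorithm parameters.
pipe = Pipeline([("select", SelectKBest(chi2)), ("clf", KNeighborsClassifier())])
grid = GridSearchCV(
    pipe,
    {
        "select__k": [16, 32, 64],
        "clf__n_neighbors": [1, 3, 5, 11],
        "clf__weights": ["uniform", "distance"],
    },
    cv=5,
)
grid.fit(X, y)

# Spread of scores across the grid: variation within a single algorithm.
scores = grid.cv_results_["mean_test_score"]
print(f"within-algorithm spread: {scores.max() - scores.min():.3f}")

# Gap between two algorithms, both at default settings with all features.
knn = cross_val_score(KNeighborsClassifier(), X, y, cv=5).mean()
tree = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
print(f"between-algorithm gap at defaults: {abs(knn - tree):.3f}")
```

When the first printed number exceeds the second, a comparison that fixes parameters and features to defaults can rank algorithms differently than a comparison that tunes each algorithm jointly, which is the methodological hazard the abstract describes.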