The task of information filtering is to classify documents from a stream as either relevant or non-relevant to a particular user interest, with the objective of reducing information load. When an information filter operates in an environment that changes over time, methods for adapting the filter should be considered in order to retain classification performance. We favor a methodology that attempts to detect changes and adapts the information filter only when necessary, so that the amount of user feedback required to provide new training data can be minimized. Nevertheless, detecting changes may itself require expensive hand-labeling of documents. This paper explores two methods for assessing performance indicators without user feedback: the first is based on performance estimation, and the second counts uncertain classification decisions. Empirical results for a simulated change scenario with real-world text data show that our adaptive information filter can perform well in changing domains.
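The second indicator described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the uncertainty margin, and the window/threshold values are illustrative assumptions. The idea is that a classification decision is "uncertain" when the classifier's relevance probability falls near the 0.5 decision boundary, and a rising fraction of such decisions in a recent window suggests a change in the domain, without requiring any hand-labeled feedback:

```python
# Hypothetical sketch: count uncertain classification decisions as a
# feedback-free change indicator for an adaptive information filter.

def is_uncertain(p_relevant, margin=0.15):
    """A decision counts as uncertain when the classifier's estimated
    probability of relevance lies close to the 0.5 decision boundary."""
    return abs(p_relevant - 0.5) < margin

def change_suspected(probs, window=50, threshold=0.4):
    """Flag a possible domain change when the fraction of uncertain
    decisions among the most recent `window` documents exceeds
    `threshold`. All parameter values here are illustrative."""
    recent = probs[-window:]
    uncertain = sum(is_uncertain(p) for p in recent)
    return uncertain / len(recent) > threshold
```

In this scheme, the filter would only be retrained (and user feedback requested) when `change_suspected` fires, rather than continuously.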