We generalize the recent worst-case loss bounds for on-line algorithms in which the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is bounded. The generalization allows the sequence to be partitioned into segments, and the goal is to bound the additional loss of the algorithm over the sum of the losses of the best experts for each segment. This models situations in which the examples change over time and different experts are best for different segments of the sequence. In the single-segment case the additional loss is proportional to log n, where n is the number of experts and the constant of proportionality depends on the loss function.
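To make the single-segment baseline concrete, the following is a minimal sketch (not the paper's algorithm) of an exponentially weighted average forecaster with absolute loss; the names `hedge`, `eta`, and the data layout are illustrative choices. The tracking generalization described above would additionally mix a small share of each expert's weight back to the others every round, so the algorithm can switch to a new best expert at segment boundaries.

```python
import math

def hedge(expert_preds, outcomes, eta):
    """Exponentially weighted average forecaster (illustrative sketch).

    expert_preds: list of T rounds, each a list of n expert predictions in [0, 1].
    outcomes: list of T outcomes in [0, 1].
    eta: learning rate controlling how fast weights concentrate.
    Returns (algorithm_loss, best_expert_loss) under absolute loss.
    """
    n = len(expert_preds[0])
    weights = [1.0] * n          # uniform prior over experts
    alg_loss = 0.0
    expert_losses = [0.0] * n
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        # Predict with the weight-averaged expert prediction.
        pred = sum(w * p for w, p in zip(weights, preds)) / total
        alg_loss += abs(pred - y)
        # Multiplicative update: experts pay exp(-eta * loss).
        for i, p in enumerate(preds):
            loss = abs(p - y)
            expert_losses[i] += loss
            weights[i] *= math.exp(-eta * loss)
    return alg_loss, min(expert_losses)
```

For convex losses bounded in [0, 1], the standard guarantee for this forecaster is an additional loss of at most (ln n)/eta + eta*T/8 over the best single expert, which with a tuned eta gives the log n dependence on the number of experts mentioned above.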