Classifying M query examples with a support vector machine containing L support vectors traditionally requires exactly M · L kernel computations. We introduce a computational geometry method for which classification cost becomes roughly proportional to each query's difficulty (e.g., its distance from the discriminant hyperplane). It produces exactly the same classifications while typically requiring vastly fewer kernel computations. Related "reduced set" methods (e.g., Burges, 1996) similarly lower the effective L, but provide neither proportionality with difficulty nor guaranteed preservation of classifications. Experiments on UCI and NASA data illustrate 2- to 64-fold speedups, across both SVMs and kernel Fisher discriminants.
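The core idea above — that "easy" queries far from the hyperplane need fewer kernel evaluations, with the final label guaranteed unchanged — can be illustrated with a much simpler early-stopping scheme than the paper's computational geometry method. The sketch below is not the authors' algorithm; it only demonstrates how a bound on the unevaluated terms lets classification terminate early while preserving the exact sign of the decision function. It assumes a kernel bounded by 1 (e.g., an RBF kernel), so each unseen term's magnitude is at most |alpha_i · y_i|. All names (`rbf`, `classify_early_stop`) are hypothetical.

```python
import numpy as np

def rbf(x, z, gamma=0.5):
    """RBF kernel; its value lies in (0, 1]."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def classify_early_stop(x, support_vecs, alpha_y, b, kernel=rbf):
    """Return (predicted sign, number of kernel evaluations used).

    alpha_y[i] holds alpha_i * y_i for support vector i. Because the
    kernel is bounded by 1, the unevaluated part of the decision sum
    is bounded by the sum of the remaining |alpha_y| values; once the
    partial sum's magnitude exceeds that bound, the sign is fixed and
    we can stop without computing the remaining kernels.
    """
    # Visit support vectors in order of decreasing |alpha_y| so the
    # partial sum's sign is decided as quickly as possible.
    order = np.argsort(-np.abs(alpha_y))
    remaining = np.sum(np.abs(alpha_y))  # bound on unseen contribution
    partial = b
    for n, i in enumerate(order, start=1):
        partial += alpha_y[i] * kernel(x, support_vecs[i])
        remaining -= abs(alpha_y[i])
        if abs(partial) > remaining:  # the sign can no longer flip
            return np.sign(partial), n
    return np.sign(partial), len(alpha_y)
```

A query far from the boundary (large |partial| early on) stops after a few evaluations, while a hard query near the boundary may need all L, mirroring the "cost proportional to difficulty" behavior described in the abstract, and, like the paper's method (unlike reduced-set approximations), the returned label always matches the full evaluation.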