The 1-norm support vector machine (SVM) has attracted substantial attention for its good sparsity. However, the computational complexity of training a 1-norm SVM is roughly cubic in the number of samples, which is prohibitively high for large datasets. This paper replaces the hinge loss or the ε-insensitive loss in the 1-norm SVM with the squared loss, and applies orthogonal matching pursuit (OMP) to approximate the solution of the resulting squared-loss 1-norm SVM. Experimental results on toy and real-world datasets show that OMP trains the 1-norm SVM faster than several existing methods while achieving comparable learning performance.
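The core idea can be sketched with greedy atom selection over a kernel dictionary: each training sample contributes one kernel column, and OMP picks a small subset of columns to fit the labels under the squared loss, yielding a sparse expansion. The sketch below uses scikit-learn's `OrthogonalMatchingPursuit` as a stand-in solver; the RBF kernel, the `gamma` value, and the sparsity level `n_nonzero_coefs=10` are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.metrics.pairwise import rbf_kernel

# Toy binary classification problem (stand-in for the paper's datasets)
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
y_signed = 2 * y - 1  # encode labels as {-1, +1}

# Kernel dictionary: column j is the RBF kernel function centered at sample j.
# gamma=0.5 is an assumed hyperparameter for illustration.
K = rbf_kernel(X, X, gamma=0.5)

# OMP greedily selects at most 10 kernel atoms and fits their coefficients
# by least squares, i.e. the squared loss that replaces the hinge loss.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10)
omp.fit(K, y_signed)

# Classify by the sign of the sparse kernel expansion.
pred = np.sign(omp.predict(K))
acc = (pred == y_signed).mean()
print(f"selected atoms: {np.count_nonzero(omp.coef_)}, train accuracy: {acc:.2f}")
```

The resulting model uses only the selected kernel centers at prediction time, which is the sparsity benefit the abstract attributes to the 1-norm SVM, obtained here at the cost of a greedy approximation instead of an exact linear-programming solve.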