Although multi-label classification has become an increasingly important problem in machine learning, current approaches remain restricted to learning in the original label space (or in a simple linear projection of the original label space). Instead, we propose to use kernels on output label vectors to significantly expand the forms of label dependence that can be captured. The main challenge is to reformulate standard multi-label losses to handle kernels between output vectors. We first demonstrate how a state-of-the-art large margin loss for multi-label classification can be reformulated, exactly, to handle output kernels as well as input kernels. Importantly, the pre-image problem for multi-label classification can be easily solved at test time, while the training procedure can still be simply expressed as a quadratic program in a dual parameter space. We then develop a projected gradient descent training procedure for this new formulation. Our empirical results demonstrate the efficacy of the proposed approach on complex image labeling tasks.
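The abstract describes training as a quadratic program in a dual parameter space, optimized by projected gradient descent. The following is a minimal illustrative sketch of that generic optimization pattern, not the paper's exact formulation: it minimizes a box-constrained QP objective (0.5 aᵀQa − 1ᵀa, with 0 ≤ a ≤ C, as in standard large-margin duals) by alternating gradient steps with projection onto the feasible box. The matrix Q, the bound C, and the step size are placeholder assumptions.

```python
# Hedged sketch: projected gradient descent on a box-constrained QP dual,
#   min_a 0.5 a'Qa - 1'a   s.t.  0 <= a <= C.
# This illustrates the optimization pattern only; the paper's actual dual
# involves input and output kernels and is not reproduced here.
import numpy as np

def projected_gradient_qp(Q, C=1.0, lr=0.01, iters=500):
    """Minimize 0.5 a'Qa - sum(a) over the box [0, C]^n by projected GD."""
    n = Q.shape[0]
    a = np.zeros(n)
    for _ in range(iters):
        grad = Q @ a - 1.0                   # gradient of the QP objective
        a = np.clip(a - lr * grad, 0.0, C)   # gradient step, then box projection
    return a

# Tiny positive-definite Q from random features so the QP is convex.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
Q = X @ X.T + 0.1 * np.eye(5)
a = projected_gradient_qp(Q, C=1.0)
objective = 0.5 * a @ Q @ a - a.sum()
```

The projection step is what keeps the iterate dual-feasible; for a simple box constraint it reduces to an elementwise clip, which is why this scheme stays cheap per iteration.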

Additional Metadata
Persistent URL: dx.doi.org/10.1007/978-3-642-40991-2_27
Series: Lecture Notes in Computer Science
Citation: Guo, Y., & Schuurmans, D. (2013). Multi-label classification with output kernels. In Lecture Notes in Computer Science. doi:10.1007/978-3-642-40991-2_27