Visual attribute learning is a fundamental and challenging problem for image understanding. Given the huge semantic space of attributes, it is economically infeasible to annotate the presence or absence of every attribute in a natural image via crowdsourcing. In this paper, we tackle the incomplete nature of visual attribute annotations by introducing auxiliary labels into a novel transductive learning framework. By jointly predicting the attributes from the input images and modeling the relationship between attributes and auxiliary labels, the missing attributes can be recovered effectively. In addition, the proposed model can be solved efficiently in an alternating fashion, by optimizing quadratic programming subproblems and updating parameters via closed-form solutions. Moreover, we propose and investigate different methods for acquiring auxiliary labels. We conduct experiments on three widely used attribute prediction datasets. The experimental results show that our proposed method achieves state-of-the-art performance with access to only partially observed attribute annotations.

Additional Metadata
Persistent URL dx.doi.org/10.24963/ijcai.2017/313
Conference 26th International Joint Conference on Artificial Intelligence, IJCAI 2017
Citation
Liang, K. (Kongming), Guo, Y. (Yuchen), Chang, H. (Hong), & Chen, X. (Xilin). (2017). Incomplete attribute learning with auxiliary labels. In IJCAI International Joint Conference on Artificial Intelligence (pp. 2252–2258). doi:10.24963/ijcai.2017/313