Using high-level representations of images, e.g., objects and discriminative patches, for scene classification has recently drawn increasing attention. Compared with low-level image features, high-level features carry rich semantic information that is useful for improving semantic scene classification. Nevertheless, acquiring scene-level annotations remains a bottleneck for automatic scene classification, even though plenty of related auxiliary resources, such as images with object tags, are freely available on the Internet. In this paper we propose a simple and novel methodology that exploits these rich auxiliary image and text resources to perform labelless automatic scene classification without acquiring training images annotated with scene labels. The key to our methodology is to use existing object detectors to represent images in terms of high-level objects and then automatically categorize the images based on the semantic relatedness between the object names and the scene labels. We further incorporate a label propagation step to refine the automatic scene categorization results. Experiments are conducted on three standard scene classification datasets. The results show that our labelless semantic method achieves reasonable performance and alleviates a considerable amount of scene annotation effort compared with supervised scene categorization baselines.
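As a rough illustration of the semantic matching idea described in the abstract, the sketch below (our own simplification, not the authors' released code) scores each candidate scene label by embedding similarity between detected object names and the label. The detector output format and the `embed` lookup are assumed interfaces; real experiments would use an off-the-shelf object detector and pretrained word vectors.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def classify_scene(detected_objects, scene_labels, embed):
    """Assign a scene label by semantic matching (illustrative sketch).

    detected_objects : list of (object_name, detector_confidence) pairs
    scene_labels     : list of candidate scene label strings
    embed            : callable mapping a word/phrase to a vector,
                       e.g., a word-embedding lookup (assumed interface)

    Returns the scene label whose embedding is most related to the
    confidence-weighted detected object names.
    """
    scores = {}
    for label in scene_labels:
        label_vec = embed(label)
        # Aggregate relatedness over detected objects, weighted by detector confidence.
        scores[label] = sum(
            conf * cosine(embed(name), label_vec)
            for name, conf in detected_objects
        )
    return max(scores, key=scores.get)

# Toy usage with made-up 3-d "embeddings" (purely hypothetical values).
toy_vectors = {
    "stove":   np.array([0.9, 0.1, 0.0]),
    "kettle":  np.array([0.8, 0.2, 0.1]),
    "kitchen": np.array([1.0, 0.0, 0.1]),
    "bedroom": np.array([0.0, 1.0, 0.2]),
}
detections = [("stove", 0.95), ("kettle", 0.80)]
print(classify_scene(detections, ["kitchen", "bedroom"], toy_vectors.__getitem__))
```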

Additional Metadata
Conference 28th British Machine Vision Conference, BMVC 2017
Citation
Ye, M. (Meng), & Guo, Y. (2017). Labelless scene classification with semantic matching. In British Machine Vision Conference 2017, BMVC 2017.