A Novel Multi-Modal One-Shot Learning Method for Texture Recognition
Most machine learning algorithms require a large set of training samples to achieve satisfactory performance, a requirement that can be difficult to satisfy in practice. In the one-shot learning (OSL) setting for texture recognition, for example, conventional machine learning algorithms struggle to achieve satisfactory results. To address this problem, a novel multi-modal one-shot learning method for texture recognition is presented. First, to improve recognition robustness and resistance to noise, we tackle the challenge of learning object categories from only one training sample by fusing data from multiple modalities, namely image, sound, and acceleration, which together provide rich information about textures. Second, a novel dictionary learning model is designed that incorporates the information from the various modalities and simultaneously learns a latent sparse code common to all of them. Third, an original regularization term is developed to enhance the separability of different classes. Furthermore, the common features of the three modalities are evaluated in the one-shot setting and used as the basis for feature selection. Finally, experiments on an openly published data set validate the effectiveness of the presented method.
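The core idea of the model, a sparse code shared across modality-specific dictionaries, can be illustrated with a toy alternating-minimization sketch. This is only a minimal illustration under assumed shapes and a simple ridge-plus-soft-threshold update; it is not the authors' exact formulation, which additionally includes the discriminative regularization term and the one-sample-per-class constraint described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_sparse_coding(X_list, n_atoms=8, lam=0.1, n_iter=20):
    """Toy multi-modal dictionary learning with one shared sparse code.

    Each modality m provides data X_m of shape (d_m, n_samples); all
    modalities share a single code matrix A of shape (n_atoms, n_samples).
    Illustrative sketch only, not the paper's exact model.
    """
    n = X_list[0].shape[1]
    D_list = [rng.standard_normal((X.shape[0], n_atoms)) for X in X_list]
    A = rng.standard_normal((n_atoms, n))
    for _ in range(n_iter):
        # Update the shared code A against all modalities stacked together,
        # then soft-threshold to encourage sparsity.
        D_stack = np.vstack(D_list)
        X_stack = np.vstack(X_list)
        A = np.linalg.solve(D_stack.T @ D_stack + lam * np.eye(n_atoms),
                            D_stack.T @ X_stack)
        A = np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)
        # Update each modality's dictionary by regularized least squares
        # and normalize its atoms (columns).
        for m, X in enumerate(X_list):
            D = X @ A.T @ np.linalg.pinv(A @ A.T + 1e-8 * np.eye(n_atoms))
            D_list[m] = D / np.maximum(np.linalg.norm(D, axis=0), 1e-8)
    return D_list, A

# Synthetic stand-ins for image, sound, and acceleration features
# (feature dimensions 32/16/8 are arbitrary) over 10 texture samples.
X_img = rng.standard_normal((32, 10))
X_snd = rng.standard_normal((16, 10))
X_acc = rng.standard_normal((8, 10))
Ds, A = shared_sparse_coding([X_img, X_snd, X_acc])
print(A.shape)  # one code matrix shared by all three modalities
```

The key design point is that the code update stacks all modalities into one least-squares problem, so a single representation must explain the image, sound, and acceleration observations jointly.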
Keywords: dictionary learning, multi-modal fusion, one-shot learning, texture recognition
Xiong, P. (Pengwen), He, K. (Kongfei), Song, A. (Aiguo), & Liu, P. (2019). A Novel Multi-Modal One-Shot Learning Method for Texture Recognition. IEEE Access, 7, 182538–182547. doi:10.1109/ACCESS.2019.2959011