Acquisition of tactile data requires direct contact with the object, and the process of moving and positioning the sensor to probe the object surface for recognition purposes is often time consuming. This paper explores the use of visual information, in the form of features extracted by a visual attention system, to guide the tactile data acquisition process. To reduce the effort and time required by real data collection, the data acquisition procedure is first simulated. This simulation enables the identification of the most promising selective data acquisition algorithm for recognizing the probed objects from the acquired tactile data. Several features and classifiers are tested for this purpose. Among them, an improved version of a computational visual attention model combined with the k-nearest neighbors algorithm achieves the best performance (94.51%) in simulation, while a performance of 68.75% is obtained with the same visual attention model combined with the Naïve Bayes algorithm on real measurements collected with a piezo-resistive tactile sensor array.
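
The comparison of classifiers described above can be illustrated with a minimal sketch (not the authors' implementation): it contrasts k-nearest neighbors and Naïve Bayes on feature vectors such as those derived from tactile images acquired at salient points. The scikit-learn classes, the feature dimensionality, and the synthetic placeholder data are assumptions for illustration only; in the paper the features come from the visual attention model and real or simulated tactile measurements.

```python
# Sketch: comparing k-NN and Naive Bayes on tactile-image feature vectors.
# Synthetic placeholder data stands in for features produced by the
# visual attention model described in the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical data: 200 tactile samples, 32-dimensional feature vectors,
# 5 object classes (dimensions chosen arbitrarily for the example).
X = rng.normal(size=(200, 32))
y = rng.integers(0, 5, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

for name, clf in [
    ("k-NN", KNeighborsClassifier(n_neighbors=3)),
    ("Naive Bayes", GaussianNB()),
]:
    clf.fit(X_train, y_train)
    acc = clf.score(X_test, y_test)
    print(f"{name}: {acc:.2%} recognition accuracy")
```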

Additional Metadata
Keywords 3D objects, machine learning, tactile images, tactile sensing, visual attention
Persistent URL dx.doi.org/10.1109/IRIS.2017.8250113
Conference 5th IEEE International Symposium on Robotics and Intelligent Sensors, IRIS 2017
Citation
Pedneault, N. (Nicolas), & Cretu, A.M. (2018). 3D object recognition from tactile data acquired at salient points. In Proceedings - 2017 IEEE 5th International Symposium on Robotics and Intelligent Sensors, IRIS 2017 (pp. 150–155). doi:10.1109/IRIS.2017.8250113