This paper presents a framework for intelligent tactile object recognition, with a focus on the influence of the size and precision of tactile sensors on the recognition rate. Probing the entire surface of an object is time-consuming, and psychological studies suggest that visually salient points are also salient to touch; we therefore determine the probing locations using an enhanced model of visual attention. A virtual tactile sensor based on the working principle of piezo-resistive sensors is simulated to capture tactile information. Four classifiers are then trained to learn the tactile properties of virtual objects belonging to four classes and are tested on new objects from the same categories. The K-nearest neighbors algorithm outperforms all the other tested classifiers when the imprints are captured using large, low-precision sensors of size 32 × 32, achieving an accuracy of 95.18% in this case.
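For illustration only, the sketch below shows how K-nearest neighbors classification of flattened 32 × 32 tactile imprints into four classes could look; it is a minimal example assuming scikit-learn and synthetic stand-in data, since the paper does not specify an implementation, and the number of neighbors and the quantization step are hypothetical choices.

```python
# Minimal sketch (not the paper's implementation): KNN over flattened
# 32 x 32 tactile imprints from four object classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_classes, side = 400, 4, 32

# Placeholder imprints: each sample is a 32 x 32 pressure map flattened to a
# 1024-dimensional feature vector; rounding mimics low-precision readings.
labels = rng.integers(0, n_classes, n_samples)
imprints = rng.random((n_samples, side * side)) + labels[:, None] * 0.1
imprints = np.round(imprints, 1)

X_train, X_test, y_train, y_test = train_test_split(
    imprints, labels, test_size=0.25, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)  # k = 5 is a hypothetical choice
knn.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```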

3D modeling, Mesh, Mesh simplification, Virtual reality applications, Visual attention
dx.doi.org/10.1109/CIVEMSA.2018.8439966
23rd Annual IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications, CIVEMSA 2018
Department of Systems and Computer Engineering

Rouhafzay, G. (Ghazal), & Cretu, A.M. (2018). A virtual tactile sensor with adjustable precision and size for object recognition. In CIVEMSA 2018 - 2018 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications, Proceedings. doi:10.1109/CIVEMSA.2018.8439966