Empirical study of a vision-based depth-sensitive human-computer interaction system
This paper presents the results of a user study of a vision-based, depth-sensitive input system for performing typical desktop tasks through arm gestures. We developed a vision-based HCI prototype for use in a comprehensive usability study. Using the Kinect 3D camera and the OpenNI software library, we implemented the system with high stability and efficiency, reducing sensitivity to ambient factors such as noise and lighting conditions. In our prototype, we designed an arm-gesture recognition algorithm based on the NITE toolkit. Finally, through a comprehensive user experiment, we compared our natural arm gestures to conventional input devices (mouse/keyboard) on simple and complex tasks, in two display conditions (small-screen and large-screen), measuring precision, efficiency, ease of use, pleasantness, fatigue, naturalness, and overall satisfaction, to test the following hypothesis: on a WIMP user interface, gesture-based input is superior to mouse/keyboard when using a large screen. Our empirical investigation also shows that gestures are more natural and pleasant to use than mouse/keyboard; however, arm gestures cause more fatigue than the mouse. Copyright 2012 ACM.
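The abstract does not detail the recognition algorithm itself. As an illustrative sketch only (not the authors' actual NITE-based recognizer), a depth-sensitive swipe gesture could be classified from a sequence of tracked 3D hand positions; here the tracker output format, thresholds, and function names are all assumptions:

```python
# Illustrative sketch: the paper's prototype uses OpenNI/NITE for hand
# tracking. Here we simply assume some tracker already yields a list of
# (x, y, z) hand positions in metres, sampled over the gesture's duration.

def classify_swipe(track, min_dx=0.25, max_dz=0.10):
    """Classify a hand trajectory as a horizontal swipe.

    track  -- list of (x, y, z) positions (hypothetical tracker output)
    min_dx -- minimum net horizontal travel (m) to count as a swipe
    max_dz -- maximum depth drift (m); large z motion suggests a push,
              not a swipe, so we reject it
    """
    if len(track) < 2:
        return None
    dx = track[-1][0] - track[0][0]
    dz = max(p[2] for p in track) - min(p[2] for p in track)
    if dz > max_dz:
        return None  # too much depth movement: not a lateral swipe
    if dx >= min_dx:
        return "swipe_right"
    if dx <= -min_dx:
        return "swipe_left"
    return None

# Example: hand travels 0.3 m to the right at roughly constant depth.
path = [(0.0, 0.5, 1.20), (0.1, 0.5, 1.21), (0.3, 0.5, 1.22)]
print(classify_swipe(path))  # swipe_right
```

The depth threshold is what makes the classifier "depth-sensitive": it uses the z axis to disambiguate lateral swipes from forward pushes, one plausible way a 3D camera improves on purely 2D vision input.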
| Keywords | 3D, Gesture interaction, HCI, Usability, Vision |
|---|---|
| Conference | 10th Asia-Pacific Conference on Computer-Human Interaction, APCHI 2012 |
Farhadi-Niaki, F. (Farzin), GhasemAghaei, R. (Reza), & Arya, A. (2012). Empirical study of a vision-based depth-sensitive human-computer interaction system. Presented at the 10th Asia-Pacific Conference on Computer-Human Interaction, APCHI 2012. doi:10.1145/2350046.2350070