3-D pose presentation for training applications
Purpose – In the authors' experience, the biggest issue with pose-based exergames is effectively communicating a three-dimensional pose to the user so that it can be fully understood and accurately replicated. This paper examines options for pose presentation.

Design/methodology/approach – The authors compare three methods of presentation and feedback to determine which provides the user with the greatest improvement in performance. An on-body sensor network system was used to measure success rates and to address the challenges and issues that arose throughout the process.

Findings – A three-dimensional interface allows full control of the camera, and across the experiments the importance of this feature became clear: while other feedback elements could highlight specific problem areas, camera rotation more than doubled some success rates.

Research limitations/implications – Refinements of visual feedback methods during training could include determining the ideal camera position for viewing the avatar after rotation, to maximize pose comprehension. Future research could also work towards providing the participant with more specific instructions, verbally or symbolically.

Originality/value – In a traditional setting, such as a yoga class, a physically present moderator coaches participants who struggle to reproduce a pose. Such a moderator is not available in a computer-based training setting. This research begins to examine what user interface is necessary for activities that are traditionally very closely monitored.
Keywords: Computer games, Exercise, Exergaming, Sensor networks, Training, User interfaces

Journal: Interactive Technology and Smart Education
Ketterl, M., Fox, K., & Whitehead, A. (2011). 3-D pose presentation for training applications. Interactive Technology and Smart Education, 8(4), 249–262. doi:10.1108/17415651111189487