Large-display environments such as the Reality Center or the Powerwall are recent additions to the Virtual Reality (VR) field. In contrast to HMDs and similar displays, they allow several unencumbered users to visualize a virtual environment simultaneously. Adding interaction capabilities to these displays must not restrict the users' freedom of movement. Thus, tracker-based devices such as the DataGlove or a wand should be avoided, as they force users to don such gear. Video cameras, on the contrary, seem very promising in these environments: their uses could range from detecting a laser dot on the display to recovering each user's full-body posture. Our goal is to film a user's hand in front of a large display in order to recover its posture, which is then interpreted according to a predefined interaction technique. While most such systems rely on appearance-based approaches, we have chosen to investigate how effective a model-based approach can be. This paper presents the first steps of this work, namely the real-time results obtained using the hand-silhouette feature, along with some further conclusions related to working in a large-display VR environment.