Hand-eye coordination is an important skill that people perform effortlessly. Because human vision is not quantitative, it is a typical scenario in which perception is linked to action in an iterative manner. Developing a computational principle that imitates humanlike hand-eye coordination is therefore an interesting research topic. This paper presents a developmental framework that addresses the first issue: mapping visual sensory data onto the hand's motion parameters in Cartesian space. Our previous research has shown that a linear model can explicitly describe the mapping from visual sensory data to an ordered motion sequence for position-based hand-eye coordination. Here, we extend that work to a quadratic model in an attempt to achieve better performance. Hand-eye coordination also involves modelling the arm's kinematics, and computing the inverse kinematics in particular is not a simple process. We therefore propose a new scheme, called discrete kinematic mapping, that directly maps coordinates in task space onto coordinates in joint space at different levels of granularity, which can be gradually refined through development. This approach may provide evidence to explain how the human brain can effortlessly drive complex motions of its two arm/hand systems regardless of the complexity of the inverse kinematics.
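The two ingredients above can be illustrated concretely. The following Python sketch is not the paper's implementation; the function and class names, feature layout, and data shapes are hypothetical. It assumes visual features U and motor parameters X have been collected during exploratory movement, fits a quadratic visuo-motor map by ordinary least squares (cross-terms omitted for brevity), and implements a discrete kinematic map as a quantized task-space lookup table whose cells can be refined over time.

```python
import numpy as np

# Quadratic visuo-motor model: fit motor parameters X (n x m) from visual
# features U (n x d) by ordinary least squares over [u^2, u, 1] features.
def fit_quadratic_map(U, X):
    Phi = np.hstack([U ** 2, U, np.ones((len(U), 1))])
    W, *_ = np.linalg.lstsq(Phi, X, rcond=None)
    return W

def apply_quadratic_map(W, u):
    u = np.asarray(u, dtype=float)
    phi = np.concatenate([u ** 2, u, [1.0]])
    return phi @ W

# Discrete kinematic mapping: task-space (Cartesian) points are quantized
# into cells, and each cell stores a joint configuration observed there.
# Lookup replaces analytic inverse kinematics; refinement shrinks the cells.
class DiscreteKinematicMap:
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.table = {}  # cell index (tuple of ints) -> joint vector

    def _cell(self, x):
        return tuple(np.floor(np.asarray(x) / self.cell_size).astype(int))

    def record(self, x, q):
        # Store the joint configuration q that reached Cartesian point x.
        self.table[self._cell(x)] = np.asarray(q, dtype=float)

    def lookup(self, x):
        # Coarse "inverse kinematics" by direct table lookup.
        return self.table.get(self._cell(x))

    def refine(self, factor=2.0):
        # Developmental refinement: finer cells, seeded from old entries.
        old, old_size = self.table, self.cell_size
        self.cell_size = old_size / factor
        self.table = {}
        for cell, q in old.items():
            center = (np.asarray(cell) + 0.5) * old_size  # old cell center
            self.table[self._cell(center)] = q
```

A short usage example under the same assumptions: points recorded during motor babbling are later recalled for nearby targets, and the map is refined to a finer granularity as development proceeds.

```python
m = DiscreteKinematicMap(cell_size=0.05)             # 5 cm cells to start
m.record([0.31, 0.12, 0.40], [0.1, -0.7, 1.2, 0.3])  # from motor babbling
print(m.lookup([0.33, 0.10, 0.42]))                  # same cell -> same q
m.refine()                                           # halve the cell size
```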