Conference on Bioelectronics, biomedical, and bioinspired systems

Embedding visual routines in AnaFocus' Eye-RIS Vision Systems for closing the perception to action loop in roving robots



Abstract

The purpose of the current paper is to describe how different visual routines can be developed and embedded in AnaFocus' Eye-RIS Vision System on Chip (VSoC) to close the perception-to-action loop within the roving robots developed under the framework of the SPARK II European project. The Eye-RIS Vision System on Chip employs a bio-inspired architecture in which image acquisition and processing are truly intermingled and the processing itself is carried out in two steps. In the first step, processing is fully parallel, owing to dedicated circuit structures integrated close to the sensors. In the second step, processing is performed on digitally coded data by means of digital processors. All these capabilities make the Eye-RIS VSoC very suitable for integration within small robots in general, and within the robots developed by the SPARK II project in particular. These systems provide image-processing capabilities and speed comparable to high-end conventional vision systems without the need for high-density image memory and intensive digital processing. As far as perception is concerned, current perceptual schemes are often based on information derived from visual routines. Since real-world images are too complex to be processed for perceptual needs with traditional approaches, computationally feasible algorithms are required to extract the desired features from the scene in real time and to proceed efficiently with the consequent action. In this paper, the development of such algorithms and their implementation, taking full advantage of the sensing-processing capabilities of the Eye-RIS VSoC, are described.
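The abstract gives no code, but the two-step sensing-processing idea it describes can be illustrated with a minimal sketch. The Python/NumPy example below is purely hypothetical (function names, parameters, and the steering convention are not from the paper): an early, pixel-parallel thresholding step stands in for the analog processing plane located close to the sensors, and a subsequent digital step reduces the result to a single compact feature that is mapped to a steering command, closing the perception-to-action loop.

```python
import numpy as np


def early_parallel_step(frame, threshold=128):
    """Pixel-parallel stage: binarize the frame.

    On the Eye-RIS VSoC this kind of operation would run in the
    per-pixel circuitry close to the sensors; the vectorized NumPy
    comparison below merely stands in for it.
    """
    return frame > threshold


def digital_step(mask):
    """Digital stage: reduce the binary mask to one compact feature.

    Returns the horizontal offset of the target blob's centroid from
    the image centre, normalized to [-1, 1], or None if no target
    pixels were found.
    """
    _, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    centre = (mask.shape[1] - 1) / 2.0
    return (xs.mean() - centre) / centre


def perception_to_action(frame, gain=0.5):
    """Close the loop: map the extracted feature to a steering command.

    Positive command = turn toward the target (an arbitrary convention
    chosen here for illustration only).
    """
    offset = digital_step(early_parallel_step(frame))
    if offset is None:
        return 0.0          # no target in view: keep the current heading
    return gain * offset


# Synthetic 64x64 frame with a bright blob on the right-hand side.
frame = np.zeros((64, 64), dtype=np.uint8)
frame[20:30, 45:55] = 255
print(perception_to_action(frame))   # ~0.29: steer toward the blob
```

On the actual Eye-RIS VSoC the first step would execute directly in the mixed-signal sensor-processor array, so only the compact feature, rather than the full image, would need to reach the digital processor; this is what allows the system to avoid high-density image memory and intensive digital processing.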
