Structure-based color learning on a mobile robot under changing illumination


Abstract

A central goal of robotics and AI is to be able to deploy an agent to act autonomously in the real world over an extended period of time. To operate in the real world, autonomous robots rely on sensory information. Despite the potential richness of visual information from on-board cameras, many mobile robots continue to rely on non-visual sensors such as tactile sensors, sonar, and laser. This preference for relatively low-fidelity sensors can be attributed to, among other things, the characteristic requirement of real-time operation under limited computational resources. Illumination changes pose another big challenge. For true extended autonomy, an agent must be able to recognize for itself when to abandon its current model in favor of learning a new one; and how to learn in its current situation. We describe a self-contained vision system that works on-board a vision-based autonomous robot under varying illumination conditions. First, we present a baseline system capable of color segmentation and object recognition within the computational and memory constraints of the robot. This relies on manually labeled data and operates under constant and reasonably uniform illumination conditions. We then relax these limitations by introducing algorithms for (i) Autonomous planned color learning, where the robot uses the knowledge of its environment (position, size and shape of objects) to automatically generate a suitable motion sequence and learn the desired colors, and (ii) Illumination change detection and adaptation, where the robot recognizes for itself when the illumination conditions have changed sufficiently to warrant revising its knowledge of colors. Our algorithms are fully implemented and tested on the Sony ERS-7 Aibo robots.
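The illumination change detection step described above can be illustrated with a simple sketch: compare the color distribution of the current camera frame against a stored reference distribution, and flag a change when the divergence exceeds a threshold. The sketch below is illustrative only, assuming quantized RGB histograms and a KL-divergence test; the function names and threshold value are hypothetical and not taken from the paper.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize an RGB image (H x W x 3, values in [0, 255]) into a
    normalized 3-D color histogram, flattened to a 1-D distribution."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3).astype(float),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    return hist.flatten() / hist.sum()

def kl_divergence(p, q, eps=1e-9):
    """KL divergence between two discrete distributions, smoothed to
    avoid division by zero in empty bins."""
    p = p + eps
    q = q + eps
    return float(np.sum(p * np.log(p / q)))

def illumination_changed(current, reference, threshold=0.5):
    """Flag an illumination change when the current frame's color
    distribution diverges from the reference beyond the threshold.
    The threshold here is an arbitrary placeholder."""
    d = kl_divergence(color_histogram(current), color_histogram(reference))
    return d > threshold
```

When the test fires, the robot would discard its current color model and trigger the planned color-learning routine to relearn colors under the new illumination.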

Bibliographic Details

  • Source
    Autonomous Robots | 2007, No. 3 | pp. 161-182 | 22 pages
  • Authors

    Mohan Sridharan; Peter Stone;

  • Affiliations

    Electrical and Computer Engineering, The University of Texas at Austin, 1 University Station C0803, Austin, TX 78712, USA;

    Electrical and Computer Engineering, The University of Texas at Austin, 1 University Station C0803, Austin, TX 78712, USA;

  • Indexing
  • Format: PDF
  • Language: English (eng)
  • CLC Classification
  • Keywords

    Color learning; Illumination invariance; Real-time vision;

