
Robust structure-based autonomous color learning on a mobile robot.



Abstract

Mobile robots are increasingly finding application in fields as diverse as medicine, surveillance and navigation. In order to operate in the real world, robots are primarily dependent on sensory information, but the ability to accurately sense the real world is still missing. Though visual input in the form of color images from a camera is a rich source of information for mobile robots, until recently most research focused on other sensors such as laser, sonar and tactile sensors. There are several reasons for this reliance on relatively low-bandwidth sensors. Most sophisticated vision algorithms require substantial computational (and memory) resources and assume a stationary or slow-moving camera, while many mobile robot and embedded systems are characterized by rapid camera motion and real-time operation within constrained computational resources. In addition, color cameras require time-consuming manual color calibration, which is sensitive to illumination changes, while mobile robots typically need to be deployed in a short period of time and often operate in places with changing illumination.

It is commonly asserted that in order to achieve autonomous behavior, an agent must learn to deal with unexpected environmental conditions. However, for true extended autonomy, an agent must be able to recognize when to abandon its current model in favor of learning a new one, how to learn in its current situation, and what features or representation to learn. This thesis is a fully implemented example of such autonomy in the context of color learning and segmentation. It primarily leverages the fact that many mobile robot applications involve a structured environment consisting of objects of unique shapes and colors, information that can be exploited to overcome the challenges mentioned above. The main contributions of this thesis are as follows.

First, the thesis presents a hybrid color representation that enables color learning both within constrained lab settings and in un-engineered indoor corridors, i.e. it enables the robot to decide what to learn. Second, the thesis enables a mobile robot to exploit the known structure of its environment to significantly reduce human involvement in the color calibration process: the known positions, shapes and color labels of the objects of interest are used by the robot to autonomously plan an action sequence that facilitates learning, i.e. it decides how to learn. Third, the thesis introduces a novel representation for illumination, which enables the robot to detect and adapt smoothly to a range of illumination changes without any prior knowledge of the different illuminations, i.e. the robot figures out when to learn. Fourth, as a means of testing the proposed algorithms, the thesis provides a real-time mobile robot vision system that performs color segmentation, object recognition and line detection in the presence of rapid camera motion. In addition, a practical comparison of color spaces for robot vision is performed, considering YCbCr, RGB and LAB. The baseline system initially requires manual color calibration and constant illumination; with the proposed innovations, it becomes a self-contained mobile robot vision system that enables a robot to exploit the inherent structure of its environment, plan a motion sequence for learning the desired colors, and detect and adapt to illumination changes, all with minimal human supervision.

Keywords: Autonomous Color Learning, Illumination Invariance, Realtime Vision, Legged robots.
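The vision system summarized above segments images by mapping raw pixel values to a small set of symbolic color labels learned from examples. As a minimal sketch of that general idea, and not the implementation in the thesis, the Python snippet below builds a quantized lookup table (a color map) from labeled pixel samples and applies it to an image; the label set, quantization depth and NumPy representation are assumptions made for illustration.

```python
import numpy as np

# Illustrative set of symbolic color labels for a structured robot environment.
LABELS = ["unknown", "green", "white", "orange", "blue", "yellow"]
BITS = 5                 # quantize each 8-bit channel into 2**5 = 32 bins
SIZE = 1 << BITS

def build_color_map(samples):
    """samples: iterable of ((c1, c2, c3), label_index) labeled pixels.
    Returns a SIZE x SIZE x SIZE lookup table of label indices."""
    votes = np.zeros((SIZE, SIZE, SIZE, len(LABELS)), dtype=np.uint32)
    for (c1, c2, c3), label in samples:
        q = (c1 >> (8 - BITS), c2 >> (8 - BITS), c3 >> (8 - BITS))
        votes[q][label] += 1
    color_map = votes.argmax(axis=-1).astype(np.uint8)
    color_map[votes.sum(axis=-1) == 0] = 0   # unseen cells stay "unknown"
    return color_map

def segment(image, color_map):
    """image: H x W x 3 uint8 array; returns an H x W array of label indices."""
    q = image >> (8 - BITS)
    return color_map[q[..., 0], q[..., 1], q[..., 2]]
```

In the autonomous setting described in the abstract, the labeled samples would come from pixels the robot collects itself by moving to look at objects whose positions and colors are known, rather than from a manually calibrated training set.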
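The abstract also states that the robot detects illumination changes so that it knows when to re-learn its colors. One common way such a detector could be operationalized, sketched here purely as an assumption rather than the illumination representation used in the thesis, is to summarize frames as normalized color histograms and flag a change when the divergence from a reference histogram exceeds a threshold; the bin count and threshold below are arbitrary.

```python
import numpy as np

BINS = 16          # histogram bins per color channel (illustrative)
THRESHOLD = 0.5    # divergence that triggers re-learning (illustrative)

def color_histogram(image):
    """Normalized joint histogram over the three channels of an H x W x 3 image."""
    hist, _ = np.histogramdd(image.reshape(-1, 3).astype(np.float64),
                             bins=(BINS, BINS, BINS), range=((0, 256),) * 3)
    hist += 1e-9                      # avoid empty bins before normalizing
    return hist / hist.sum()

def kl_divergence(p, q):
    """KL divergence D(p || q) between two normalized histograms."""
    return float(np.sum(p * np.log(p / q)))

def illumination_changed(reference_hist, current_image):
    """True if the current frame's color statistics drift far from the reference."""
    return kl_divergence(color_histogram(current_image), reference_hist) > THRESHOLD
```

Re-estimating the reference histogram after each successful color re-learning step would let such a detector track gradual as well as abrupt illumination changes.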

Bibliographic Details

  • Author

    Sridharan, Mohan

  • Author affiliation

    The University of Texas at Austin, Electrical and Computer Engineering

  • Degree grantor: The University of Texas at Austin, Electrical and Computer Engineering
  • Subject: Engineering Robotics; Artificial Intelligence; Computer Science
  • Degree: Ph.D.
  • Year: 2007
  • Pages: 144 p.
  • Total pages: 144
  • Original format: PDF
  • Language: English
  • CLC classification: Artificial intelligence theory; Automation technology, computer technology
  • Keywords: Autonomous Color Learning, Illumination Invariance, Realtime Vision, Legged robots
