Conference: Intelligent Robots and Computer Vision XXIII: Algorithms, Techniques, and Active Vision

Perception System with Scene Understanding Capabilities upon Network-Symbolic Models for Intelligent Tactical Behavior of Mobile Robots in Real-World Environment



Abstract

The tactical behavior of UGVs needed for successful autonomous off-road driving can in many cases be achieved by covering most possible driving situations with a set of rules and switching into a "drive-me-away" semi-autonomous mode when no such rule exists. However, the unpredictable and rapidly changing nature of combat situations requires more intelligent tactical behavior, based on predictive situation awareness with ongoing scene understanding and fast autonomous decision making. Image understanding and active vision can be implemented in the form of biologically inspired Network-Symbolic models, which combine the power of Computational Intelligence with graph and diagrammatic representations of knowledge. A Network-Symbolic system converts image information into an "understandable" Network-Symbolic format, similar to relational knowledge models. The traditional linear bottom-up "segmentation-grouping-learning-recognition" approach cannot reliably separate an object from its background/clutter, whereas human vision solves this problem unambiguously. Image/Video Analysis based on the Network-Symbolic approach is a combination of recursive hierarchical bottom-up and top-down processes. The logic of visual scenes can be captured in Network-Symbolic models and used for reliable disambiguation of visual information, including object detection and identification. Such a system can better interpret images/video for situation awareness, target recognition, navigation and actions, and integrates seamlessly into the 4D/RCS architecture.
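The abstract describes combining bottom-up image analysis with top-down relational scene knowledge to disambiguate objects. Below is a minimal illustrative sketch of that idea, not code from the paper: the region names, the `KNOWLEDGE` table, and the scoring rule are all hypothetical, intended only to show how a relational constraint from a graph-like scene model can flip an ambiguous bottom-up label.

```python
# Toy "network-symbolic" scene graph (illustrative sketch, not the paper's system).
# Bottom-up: image regions become nodes carrying candidate labels with confidences.
# Top-down: a small relational knowledge table re-weights those candidates,
# showing how scene logic can disambiguate object hypotheses.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class RegionNode:
    """A segmented image region with bottom-up label hypotheses."""
    name: str
    candidates: Dict[str, float]                       # label -> bottom-up confidence
    relations: List[Tuple[str, str]] = field(default_factory=list)  # (relation, other node)


# Hypothetical relational knowledge: (label, relation, label) -> compatibility adjustment.
KNOWLEDGE = {
    ("vehicle", "on", "road"): 0.3,
    ("rock", "on", "road"): -0.2,
    ("tree", "beside", "road"): 0.2,
}


def top_down_pass(nodes: Dict[str, RegionNode]) -> Dict[str, str]:
    """Re-score each node's candidates using relations to its neighbours' best labels."""
    decisions = {}
    for node in nodes.values():
        scores = dict(node.candidates)
        for relation, other_name in node.relations:
            other = nodes[other_name]
            other_label = max(other.candidates, key=other.candidates.get)
            for label in scores:
                scores[label] += KNOWLEDGE.get((label, relation, other_label), 0.0)
        decisions[node.name] = max(scores, key=scores.get)
    return decisions


if __name__ == "__main__":
    scene = {
        "r1": RegionNode("r1", {"vehicle": 0.45, "rock": 0.50}, [("on", "r2")]),
        "r2": RegionNode("r2", {"road": 0.9}),
    }
    print(top_down_pass(scene))  # -> {'r1': 'vehicle', 'r2': 'road'}
```

In this toy example, the bottom-up scores alone would label region r1 as a rock; the "vehicle on road" compatibility from the relational model overrides that, which is the kind of top-down disambiguation the abstract attributes to Network-Symbolic models.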
