IEEE International Conference on Robotics and Automation

Deep Forward and Inverse Perceptual Models for Tracking and Prediction

Abstract

We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.
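
The abstract pairs a forward perceptual model that decodes a low-dimensional robot state into an image with an inverse model that regresses state back from an image. The sketch below is a minimal PyTorch illustration of that pairing, not the authors' implementation; the state dimension, layer widths, and 64x64 image resolution are assumptions chosen only to make the example runnable.

    # Minimal sketch of the two model classes described in the abstract.
    # All architectural details (7-dim state, 64x64 RGB frames, layer sizes)
    # are illustrative assumptions, not taken from the paper.
    import torch
    import torch.nn as nn


    class ForwardPerceptualModel(nn.Module):
        """Forward model: maps a robot state vector to a rendered frame."""

        def __init__(self, state_dim: int = 7, img_channels: int = 3):
            super().__init__()
            self.fc = nn.Linear(state_dim, 256 * 4 * 4)
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 4x4 -> 8x8
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 8x8 -> 16x16
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),    # 16x16 -> 32x32
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, img_channels, 4, stride=2, padding=1),  # 32x32 -> 64x64
                nn.Sigmoid(),  # pixel intensities in [0, 1]
            )

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            h = self.fc(state).view(-1, 256, 4, 4)
            return self.decoder(h)


    class InverseStateEstimator(nn.Module):
        """Inverse model: maps an image to an estimate of the robot state."""

        def __init__(self, state_dim: int = 7, img_channels: int = 3):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(img_channels, 32, 4, stride=2, padding=1),   # 64 -> 32
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 4, stride=2, padding=1),             # 32 -> 16
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 4, stride=2, padding=1),            # 16 -> 8
                nn.ReLU(inplace=True),
                nn.Flatten(),
                nn.Linear(128 * 8 * 8, state_dim),
            )

        def forward(self, image: torch.Tensor) -> torch.Tensor:
            return self.encoder(image)


    if __name__ == "__main__":
        state = torch.randn(1, 7)                  # dummy 7-DoF joint state
        frame = ForwardPerceptualModel()(state)    # (1, 3, 64, 64) predicted frame
        estimate = InverseStateEstimator()(frame)  # (1, 7) recovered state
        print(frame.shape, estimate.shape)

In this sketch both networks would be trained on paired (state, image) data from the robot; the inverse network's state estimates are what the abstract compares against an Extended Kalman Filter for trajectory estimation.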
