International Conference on Emerging Technologies

Hash Table Based Feed Forward Neural Networks: A Scalable Approach towards Think Aloud Imitation



Abstract

In this paper, we address the problem of inefficient context modules in recurrent networks (RNs), which form the basis of think aloud: a strategy for imitation. Learning from observation provides an effective way to acquire knowledge of a demonstrated task. To learn complex tasks rather than simply learning action sequences, the think aloud imitation learning strategy applies a recurrent network model (RNM) [1]. We propose a dynamic task imitation architecture that is efficient in both time and storage: the inefficient recurrent nodes are replaced with a modified feed forward network (FFN) based on a hash table, so that a single hash store is used in place of multiple recurrent nodes. The input history is saved in this store to support experience-based task learning. Performance evaluation shows that the approach supports successful robot training, and it can replace the inefficient recurrent network in applications that currently rely on one.
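The abstract describes replacing recurrent context units with a single hash-table store that maps the recent input history to a saved context vector feeding a plain feed-forward network. The sketch below is a minimal illustration of that idea, not the authors' implementation: the class name HashContextFFN, the history-hashing rule, and the context-update rule are assumptions introduced only for demonstration.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# a feed-forward network whose recurrent context units are replaced by a
# single hash-table store keyed on the recent input history.
import numpy as np


class HashContextFFN:
    def __init__(self, n_in, n_ctx, n_hidden, n_out, history_len=3, seed=0):
        rng = np.random.default_rng(seed)
        self.n_ctx = n_ctx
        self.history_len = history_len
        # Single hash store replacing multiple recurrent context nodes:
        # maps a hashed input-history key to a saved context vector.
        self.context_store = {}
        # Plain two-layer feed-forward weights (input + context -> hidden -> output).
        self.W1 = rng.normal(0.0, 0.1, (n_in + n_ctx, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.history = []

    def _context_key(self):
        # Hash the recent (quantized) input history so that repeated
        # situations map to the same stored context.
        if not self.history:
            return hash(())
        flat = tuple(np.round(np.concatenate(self.history), 2))
        return hash(flat)

    def forward(self, x):
        key = self._context_key()
        ctx = self.context_store.get(key, np.zeros(self.n_ctx))
        h = np.tanh(np.concatenate([x, ctx]) @ self.W1)
        y = h @ self.W2
        # Save an experience-derived context for this history key
        # (here simply the truncated hidden activation, a placeholder rule).
        self.context_store[key] = h[: self.n_ctx].copy()
        self.history = (self.history + [x])[-self.history_len:]
        return y


if __name__ == "__main__":
    net = HashContextFFN(n_in=4, n_ctx=2, n_hidden=8, n_out=3)
    for t in range(5):
        x = np.full(4, float(t))  # toy observation sequence
        print(t, net.forward(x))
    print("stored contexts:", len(net.context_store))
```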
