
METHODS AND SYSTEMS FOR BUDGETED AND SIMPLIFIED TRAINING OF DEEP NEURAL NETWORKS


Abstract

Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image. A tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism to the generated plurality of feature maps by selecting a subset of the generated feature maps. Soft attention is applied by the local attention mechanism to the selected subset by assigning weights to those feature maps to obtain weighted feature maps. The weighted feature maps are stored in the LSTM. A Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
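The first example can be read as: down-sample the training image, then cut the down-sampled image into sub-images that serve as the training set. A minimal sketch of such a derivation is given below; the scale factor, sub-image size, and stride are illustrative assumptions, not values taken from the patent.

import torch
import torch.nn.functional as F

def training_sub_images(image, scale=0.5, sub_size=64, stride=64):
    # image: (channels, height, width) tensor for a single training image.
    # Down-sample the image, then split it into fixed-size sub-images.
    small = F.interpolate(image.unsqueeze(0), scale_factor=scale,
                          mode="bilinear", align_corners=False)
    # Sliding windows over height and width produce a grid of sub-images.
    patches = small.unfold(2, sub_size, stride).unfold(3, sub_size, stride)
    c = small.size(1)
    # Flatten the grid into a batch of (channels, sub_size, sub_size) sub-images.
    return patches.reshape(1, c, -1, sub_size, sub_size).squeeze(0).permute(1, 0, 2, 3)

The second example outlines a concrete pipeline: a CNN produces feature maps, a local attention mechanism first selects a subset of them (hard attention) and then weights that subset (soft attention), and the weighted maps drive an LSTM whose state feeds a per-action Q-value head. The PyTorch sketch below shows one way such a pipeline could be wired together; the 84x84 input size, the global-average scoring rule, the top-k selection, and all module and parameter names are assumptions for illustration, not the patent's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAttentionRDQN(nn.Module):
    # Sketch: CNN -> hard attention (select top-k feature maps)
    #         -> soft attention (softmax weights on the selected maps)
    #         -> LSTM -> per-action Q values.
    def __init__(self, in_channels=4, n_maps=64, k_selected=16,
                 map_hw=9, lstm_hidden=256, n_actions=6):
        super().__init__()
        # CNN that turns an (assumed) 84x84 input into n_maps feature maps of 9x9.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, n_maps, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.k = k_selected
        self.lstm = nn.LSTM(input_size=k_selected * map_hw * map_hw,
                            hidden_size=lstm_hidden, batch_first=True)
        self.q_head = nn.Linear(lstm_hidden, n_actions)

    def forward(self, frames, hidden=None):
        # frames: (batch, time, channels, 84, 84)
        b, t, c, h, w = frames.shape
        maps = self.cnn(frames.reshape(b * t, c, h, w))      # (b*t, n_maps, 9, 9)
        # Per-map relevance score: global average activation (an assumed scoring rule).
        scores = maps.mean(dim=(2, 3))                       # (b*t, n_maps)
        # Hard attention: keep only the k highest-scoring feature maps.
        top_scores, top_idx = scores.topk(self.k, dim=1)
        idx = top_idx[:, :, None, None].expand(-1, -1, maps.size(2), maps.size(3))
        selected = maps.gather(1, idx)                       # (b*t, k, 9, 9)
        # Soft attention: weight the selected maps by their normalized scores.
        weights = F.softmax(top_scores, dim=1)[:, :, None, None]
        weighted = selected * weights
        # The weighted feature maps are fed to the LSTM, one time step per frame.
        out, hidden = self.lstm(weighted.reshape(b, t, -1), hidden)
        # Q value for each action at every time step.
        return self.q_head(out), hidden

Under these assumptions, a forward pass on a (batch, time, 4, 84, 84) stack of frames returns Q values of shape (batch, time, n_actions) together with the updated LSTM state.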
