Annual Meeting of the Association for Computational Linguistics

Invited Talk: Deep Neural Networks, and what they're not very good at

Abstract

Deep Neural Networks have had an incredible impact in a variety of areas within machine learning, including computer vision and natural language processing. Deep Neural Networks use very high-dimensional implicit representations, however, and are thus particularly well suited to problems that can be solved by associative recall of previous solutions. They are ill-suited to problems that require human-interpretable representations, explicit manipulation of symbols, or reasoning. The dependency of Deep Neural Networks on large volumes of training data also means that they are typically only applicable when the problem itself, and the nature of the test data, are predictable long in advance. The application of Deep Neural Networks to Visual Question Answering has achieved results that would have been thought impossible only a few years ago. It has also thrown a spotlight on the shortcomings of current Deep Nets in solving problems that require explicit reasoning, the use of a knowledge base, or the ability to learn on the fly. In this talk I will illustrate some of the steps being taken to address these problems, and a new learning-to-learn approach that we hope will combine the power of Deep Learning with the significant benefits of explicit-reasoning-based methods.
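To make the "associative recall" point concrete, the sketch below shows the standard joint-embedding style of VQA baseline the abstract alludes to: image features and a question encoding are fused and mapped to a fixed answer vocabulary, so the model effectively classifies into answers it has seen during training rather than reasoning explicitly. This is not the speaker's model; all class names, layer sizes, and the fusion choice are illustrative assumptions.

```python
# Minimal joint-embedding VQA baseline (illustrative sketch, not the talk's method).
import torch
import torch.nn as nn

class JointEmbeddingVQA(nn.Module):
    def __init__(self, vocab_size=1000, num_answers=500,
                 img_feat_dim=2048, q_embed_dim=300, hidden_dim=512):
        super().__init__()
        # Question encoder: word embeddings followed by an LSTM.
        self.embed = nn.Embedding(vocab_size, q_embed_dim)
        self.lstm = nn.LSTM(q_embed_dim, hidden_dim, batch_first=True)
        # Project precomputed image features (e.g. from a CNN) into the same space.
        self.img_proj = nn.Linear(img_feat_dim, hidden_dim)
        # The answer is a class label over a fixed vocabulary: recall, not reasoning.
        self.classifier = nn.Linear(hidden_dim, num_answers)

    def forward(self, img_feats, question_tokens):
        _, (h_n, _) = self.lstm(self.embed(question_tokens))
        q_vec = h_n[-1]                                        # final hidden state
        fused = torch.relu(self.img_proj(img_feats)) * q_vec   # element-wise fusion
        return self.classifier(fused)                          # answer scores

# Toy usage with random data, just to show the shapes involved.
model = JointEmbeddingVQA()
img = torch.randn(4, 2048)                # 4 precomputed image feature vectors
q = torch.randint(0, 1000, (4, 12))       # 4 questions of 12 token ids each
print(model(img, q).shape)                # torch.Size([4, 500])
```

Because the output space is a fixed answer list learned from the training set, such a model cannot consult a knowledge base or adapt on the fly, which is exactly the limitation the talk targets with explicit-reasoning and learning-to-learn approaches.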