International Conference on Speech and Computer

Improving Neural Models of Language with Input-Output Tensor Contexts

Abstract

Tensor contexts enhance the performance and computational power of many neural models of language by generating a double filtering of incoming data. Applied to the linguistic domain, their implementation enables a very efficient disambiguation of polysemous and homonymous words. For the neuro-computational modeling of language, the simultaneous tensor contextualization of inputs and outputs inserts strategic passwords into the models that route words towards key natural targets, thus allowing for the creation of meaningful phrases. In this work, we present the formal properties of these models and describe possible ways to use contexts to represent plausible neural organizations of sequences of words. We include an illustration of how these contexts generate a topographic or thematic organization of data. Finally, we show that double contextualization opens promising ways to explore the neural coding of episodes, one of the most challenging problems in neural computation.
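
The abstract does not reproduce the model equations, but tensor contexts of this kind are commonly formalized as matrix memories acting on Kronecker products of word and context vectors. The NumPy sketch below is a minimal illustration under that assumption: it stores associations of the form M = Σᵢ (gᵢ ⊗ qᵢ)(fᵢ ⊗ pᵢ)ᵀ, so that a polysemous word f presented with input context p is routed to a contextualized output g ⊗ q. All names and vectors here (`bank`, `river_ctx`, `geo_ctx`, the dimension `d`, and so on) are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # dimensionality of word and context vectors (an assumption)

def rand_unit(n):
    """A random unit vector; in high dimension these are nearly orthogonal."""
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

# Hypothetical vocabulary: one ambiguous word, two input contexts,
# two target meanings, and an output context attached to each target.
bank      = rand_unit(d)   # polysemous input word
river_ctx = rand_unit(d)   # input contexts acting as filters
money_ctx = rand_unit(d)
shore     = rand_unit(d)   # disambiguated output words
deposit   = rand_unit(d)
geo_ctx   = rand_unit(d)   # output contexts ("passwords")
fin_ctx   = rand_unit(d)

# Input-output tensor-context memory:
#   M = sum_i (g_i (x) q_i) (f_i (x) p_i)^T,
# where (x) is the Kronecker product: both inputs and outputs are
# contextualized, giving the double filtering described in the abstract.
M  = np.outer(np.kron(shore,   geo_ctx), np.kron(bank, river_ctx))
M += np.outer(np.kron(deposit, fin_ctx), np.kron(bank, money_ctx))

# Retrieval: the same word under different input contexts is routed
# to different contextualized outputs.
out = M @ np.kron(bank, river_ctx)        # approximately kron(shore, geo_ctx)

# Contracting the retrieved pattern with its output context releases
# the target word.
word = out.reshape(d, d) @ geo_ctx
print(np.dot(word, shore))    # ~1: 'bank' in a river context -> shore
print(np.dot(word, deposit))  # ~0: the financial meaning is filtered out
```

In this sketch the output context plays the role of the "password" the abstract mentions: only contraction with the matching context releases the stored word, and feeding a retrieved output context back as the next input context is one plausible way such memories could chain words into phrases or episodes.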
