
Learning with Latent Linguistic Structure

Abstract

Neural networks provide a powerful tool for modeling language, but they depart from standard methods of linguistic representation, which usually consist of discrete tag, tree, or graph structures. These structures are useful for several reasons: they are more interpretable, and they can also feed downstream tasks. In this talk, I will discuss models that explicitly incorporate these structures as latent variables, allowing for unsupervised or semi-supervised discovery of interpretable linguistic structure, with applications to part-of-speech and morphological tagging, as well as syntactic and semantic parsing.
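The abstract names the idea but not the mechanics, so here is a minimal, hedged sketch of the simplest instance of "discrete structure as a latent variable": an HMM-style tagger whose tag sequence is never observed, so unsupervised training maximizes the marginal likelihood Σ_z p(x, z), computed by the forward algorithm. All sizes, names, and parameterizations below are illustrative assumptions, not details from the talk.

```python
import numpy as np

def logsumexp(a, axis=None):
    # numerically stable log-sum-exp (written out to avoid a SciPy dependency)
    m = np.max(a, axis=axis, keepdims=True)
    return np.log(np.sum(np.exp(a - m), axis=axis)) + np.squeeze(m, axis=axis)

def log_forward(log_pi, log_A, log_B, obs):
    """Log marginal likelihood log p(x) = log sum_z p(x, z), summing over
    all latent tag sequences z via the forward algorithm.

    log_pi: (K,)   initial tag log-probs
    log_A:  (K, K) transition log-probs, log_A[i, j] = log p(z_t = j | z_{t-1} = i)
    log_B:  (K, V) emission log-probs,   log_B[k, w] = log p(x_t = w | z_t = k)
    obs:    sequence of word ids
    """
    alpha = log_pi + log_B[:, obs[0]]          # log p(x_1, z_1 = k)
    for w in obs[1:]:
        # marginalize out the previous tag, then emit the next word
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[:, w]
    return logsumexp(alpha)                    # log p(x_1 .. x_T)
```

Unsupervised training would follow gradients of this marginal likelihood (or run EM); a semi-supervised variant adds a supervised log p(x, z) term on labeled sentences. The same marginalization pattern extends to latent trees and graphs by swapping the forward recursion for the inside or matrix-tree algorithm.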
