
UMDeep at SemEval-2017 Task 1: End-to-End Shared Weight LSTM Model for Semantic Textual Similarity


Abstract

We describe a modified shared-LSTM network for the Semantic Textual Similarity (STS) task at SemEval-2017. The network builds on previously explored Siamese network architectures. We treat max sentence length as an additional hyperparameter to be tuned (beyond learning rate, regularization, and dropout). Our results demonstrate that hand-tuning max sentence training length significantly improves final accuracy. After optimizing hyperparameters, we train the network on the multilingual semantic similarity task using pre-translated sentences. We achieved a correlation of 0.4792 for all the subtasks. We achieved the fourth highest team correlation for Task 4b, which was our best relative placement.
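The abstract describes the architecture at a high level: both sentences of a pair are encoded by the same weight-sharing LSTM, and the maximum sentence length is tuned alongside learning rate, regularization, and dropout. The following is a minimal PyTorch sketch of such a Siamese shared-weight LSTM, given as a rough illustration only; the vocabulary size, layer dimensions, dropout rate, MAX_LEN value, and the cosine-based scoring head are assumptions for illustration, not the authors' reported configuration.

```python
# Minimal sketch of a shared-weight (Siamese) LSTM for sentence-pair similarity.
# All sizes and the scoring head are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

MAX_LEN = 30        # max sentence length, treated here as a tunable hyperparameter
VOCAB_SIZE = 20000  # illustrative vocabulary size
EMBED_DIM = 300
HIDDEN_DIM = 128

class SiameseLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM, padding_idx=0)
        # The same LSTM (shared weights) encodes both sentences of a pair.
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.dropout = nn.Dropout(0.3)

    def encode(self, tokens):
        x = self.dropout(self.embed(tokens))
        _, (h, _) = self.lstm(x)        # final hidden state as the sentence vector
        return h.squeeze(0)

    def forward(self, sent_a, sent_b):
        va, vb = self.encode(sent_a), self.encode(sent_b)
        # Cosine similarity rescaled to the [0, 5] STS score range.
        return 2.5 * (F.cosine_similarity(va, vb) + 1.0)

# Usage: score a batch of two padded/truncated sentence pairs (random token ids here).
model = SiameseLSTM()
a = torch.randint(1, VOCAB_SIZE, (2, MAX_LEN))
b = torch.randint(1, VOCAB_SIZE, (2, MAX_LEN))
print(model(a, b))  # tensor of predicted similarity scores
```

Because the two inputs pass through one encoder, sentences in either position map to the same representation space, which is the property the Siamese setup relies on; truncating or padding inputs to MAX_LEN is where the max-sentence-length hyperparameter discussed in the abstract enters the pipeline.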


