Workshop on Domain Adaptation for NLP

Locality Preserving Loss: Neighbors that Live together, Align together


Abstract

We present a locality preserving loss (LPL) that improves the alignment between vector space embeddings while separating uncorrelated representations. Given two pretrained embedding manifolds, LPL optimizes a model to project an embedding and maintain its local neighborhood while aligning one manifold to the other. This reduces the overall size of the dataset required to align the two in tasks such as cross-lingual word alignment. We show that the LPL-based alignment between input vector spaces acts as a regularizer, leading to better and more consistent accuracy than the baseline, especially when the training set is small. We demonstrate the effectiveness of LPL-optimized alignment on semantic text similarity (STS), natural language inference (SNLI), multi-genre natural language inference (MNLI) and cross-lingual word alignment (CLA), showing consistent improvements and up to a 16% improvement over our baseline in lower-resource settings.
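The abstract does not give the exact formulation, but the core idea it describes can be sketched: a supervised alignment term pulls projected source embeddings onto their target counterparts, while a neighborhood term keeps each projected point close to a reconstruction from its nearest neighbors in the source space. Below is a minimal PyTorch sketch under those assumptions; the linear projection `W`, the uniform-weight neighbor reconstruction, the hyperparameters `k` and `lam`, and all function names are illustrative and not the paper's exact method.

```python
# A minimal sketch of a locality-preserving alignment loss, assuming a linear
# projection W from the source embedding space to the target space. The
# neighborhood term follows an LLE-style idea (reconstruct each projected
# point from its projected k nearest neighbors); the paper's formulation may
# differ, and every name and hyperparameter here is illustrative.
import torch
import torch.nn.functional as F


def knn_indices(x: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the k nearest neighbors of each row of x (excluding itself)."""
    dists = torch.cdist(x, x)                    # (n, n) pairwise distances
    dists.fill_diagonal_(float("inf"))           # exclude each point itself
    return dists.topk(k, largest=False).indices  # (n, k)


def locality_preserving_loss(src: torch.Tensor,
                             tgt: torch.Tensor,
                             W: torch.Tensor,
                             k: int = 5,
                             lam: float = 1.0) -> torch.Tensor:
    """Alignment MSE plus a neighborhood-preservation penalty.

    src: (n, d_src) source embeddings of the seed/training pairs
    tgt: (n, d_tgt) corresponding target embeddings
    W:   (d_src, d_tgt) learnable projection
    """
    proj = src @ W                    # project source into the target space
    align = F.mse_loss(proj, tgt)     # supervised alignment term

    nbrs = knn_indices(src, k)        # neighborhoods computed in the source space
    nbr_proj = proj[nbrs]             # (n, k, d_tgt) projected neighbors
    recon = nbr_proj.mean(dim=1)      # simple uniform-weight reconstruction
    locality = F.mse_loss(proj, recon)  # projected points stay near their neighbors

    return align + lam * locality


# Usage: optimize W on a small seed dictionary of aligned embedding pairs.
if __name__ == "__main__":
    torch.manual_seed(0)
    src = torch.randn(100, 300)
    tgt = torch.randn(100, 300)
    W = torch.randn(300, 300, requires_grad=True)
    opt = torch.optim.Adam([W], lr=1e-3)
    for _ in range(10):
        opt.zero_grad()
        loss = locality_preserving_loss(src, tgt, W)
        loss.backward()
        opt.step()
```

In this sketch the locality term acts as a regularizer on the projection, which is consistent with the abstract's claim that the benefit of LPL is largest when the aligned training set is small.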
