Journal: ACM transactions on intelligent systems

Few-Shot Text and Image Classification via Analogical Transfer Learning



Abstract

Learning from very few samples is a challenge for machine learning tasks such as text and image classification. The performance of such tasks can be enhanced by transferring helpful knowledge from related domains, which is referred to as transfer learning. In previous transfer learning work, instance transfer learning algorithms mostly focus on selecting source domain instances similar to the target domain instances for transfer. However, the selected instances usually do not contribute directly to learning performance in the target domain. Hypothesis transfer learning algorithms focus on model/parameter-level transfer: they treat the source hypotheses as well trained and transfer their knowledge, in the form of parameters, to learn the target hypothesis. Such algorithms directly optimize the target hypothesis based on observable performance improvements. However, they fail to consider that instances contributing to the source hypotheses may be harmful for the target hypothesis, as analyzed in instance transfer learning. To alleviate these problems, we propose a novel transfer learning algorithm that follows an analogical strategy. In particular, the proposed algorithm first learns a revised source hypothesis using only the instances that contribute to the target hypothesis. It then transfers both the revised source hypothesis and the target hypothesis (trained with only a few samples) to learn an analogical hypothesis. We call our algorithm Analogical Transfer Learning. Extensive experiments on one synthetic dataset and three real-world benchmark datasets demonstrate the superior performance of the proposed algorithm.
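The three-step workflow the abstract outlines (train a target hypothesis on the few samples, revise the source hypothesis using only contributing instances, then combine the two) can be sketched as follows. This is a hedged illustration only: nearest-centroid classifiers stand in for the "hypotheses", and both the instance-selection criterion and the centroid-averaging combination rule are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def fit_centroids(X, y):
    """Fit a nearest-centroid 'hypothesis': one centroid per class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    """Assign each row of X to the class of its nearest centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)

# Source domain: many labelled instances (labels from a linear rule).
Xs = rng.normal(0.0, 1.0, size=(500, 2))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)

# Target domain: only five labelled samples (the few-shot setting).
Xt = np.array([[-1.0, 0.1], [-0.8, -0.3], [-1.2, 0.2], [0.9, 0.4], [1.1, -0.1]])
yt = np.array([0, 0, 0, 1, 1])

# 1. Target hypothesis trained on the few target samples.
c_target = fit_centroids(Xt, yt)

# 2. Revised source hypothesis: keep only source instances consistent with
#    the target hypothesis (an assumed "contribution" criterion), then refit.
mask = predict(c_target, Xs) == ys
c_revised = fit_centroids(Xs[mask], ys[mask])

# 3. Analogical hypothesis: combine the revised source hypothesis and the
#    target hypothesis (here, by averaging their centroids).
c_analogical = 0.5 * (c_target + c_revised)
pred = predict(c_analogical, Xt)
```

The filtering in step 2 mirrors the abstract's point that instances useful for the source hypothesis can harm the target hypothesis: source instances the few-shot target model disagrees with are discarded before the revised source hypothesis is fit.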
