Neural Networks: The Official Journal of the International Neural Network Society

Improving importance estimation in pool-based batch active learning for approximate linear regression



Abstract

Pool-based batch active learning is aimed at choosing training inputs from a 'pool' of test inputs so that the generalization error is minimized. P-ALICE (Pool-based Active Learning using Importance-weighted least-squares learning based on Conditional Expectation of the generalization error) is a state-of-the-art method that can cope with model misspecification by weighting training samples according to the importance (i.e., the ratio of test and training input densities). However, importance estimation in the original P-ALICE is based on the assumption that the number of training samples to gather is small, which is not always true in practice. In this paper, we propose an alternative scheme for importance estimation based on the inclusion probability, and show its validity through numerical experiments.
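To make the weighting scheme concrete, here is a minimal sketch (not the authors' P-ALICE implementation) of importance-weighted least squares for a linear-in-parameters model, where each training sample is reweighted by an estimated importance such as a density ratio or, as the paper proposes, a quantity derived from the inclusion probability. The basis functions, data, and weights below are illustrative assumptions.

```python
# Minimal sketch of importance-weighted least squares (IWLS) for a model
# y ~ phi(x)^T theta. Each training sample i gets a weight w_i that
# approximates the importance p_test(x_i) / p_train(x_i).
# This is an illustrative example, not the authors' P-ALICE code.
import numpy as np

def iwls_fit(Phi, y, w):
    """Solve min_theta sum_i w_i * (y_i - phi(x_i)^T theta)^2.

    Phi : (n, d) design matrix of basis-function values phi(x_i)
    y   : (n,)   observed outputs
    w   : (n,)   importance weights (e.g., density ratios, or weights
                 derived from inclusion probabilities as proposed here)
    """
    W = np.diag(w)
    A = Phi.T @ W @ Phi
    b = Phi.T @ W @ y
    return np.linalg.solve(A, b)

# Toy usage: quadratic basis, placeholder importance weights.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)
Phi = np.column_stack([np.ones_like(x), x, x**2])
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(50)
w = np.exp(-x)  # placeholder weights standing in for estimated importances
theta = iwls_fit(Phi, y, w)
print(theta)
```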


