On batch learning in a binary weight setting

Abstract

Considers the problem of inferring a finite binary sequence w* ∈ {-1,1}^n from a random sequence of half-space data {u^(t) ∈ {-1,1}^n : ⟨w*, u^(t)⟩ ≥ 0, t ≥ 1}. In this context, we show that a previously proposed randomised on-line learning algorithm dubbed directed drift [Venkatesh, 1993] has minimal space complexity but an expected mistake bound exponential in n. We show that batch incarnations of the algorithm allow massive improvements in running time. In particular, using a batch of ½πn log n examples at each update epoch reduces the expected mistake bound to 𝒪(n) in a single-bit update mode, while using a batch of πn log n examples at each update epoch in a multiple-bit update mode leads to convergence to w* with a constant (independent of n) expected mistake bound.
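The abstract does not reproduce the update rules themselves, but the style of algorithm it describes can be sketched. Below is a minimal, hedged illustration: an online directed-drift-style step that flips one randomly chosen disagreeing weight bit on each mistake, and a single-bit batch step that flips the bit disagreeing most often over a batch of misclassified examples. The function names and the batch tally rule are illustrative assumptions, not the paper's exact procedure.

```python
import random


def directed_drift_step(w, u, rng=random):
    """Sketch of one online directed-drift-style update (assumed form):
    if example u is misclassified (<w, u> < 0), flip one randomly
    chosen bit of w that disagrees in sign with u."""
    dot = sum(wi * ui for wi, ui in zip(w, u))
    if dot >= 0:
        return w, False  # already consistent with the half-space constraint
    # coordinates where w and u disagree contribute -1 to the dot product
    disagree = [i for i in range(len(w)) if w[i] * u[i] < 0]
    i = rng.choice(disagree)
    w = list(w)
    w[i] = -w[i]  # flipping raises the dot product by exactly 2
    return w, True


def batch_directed_drift_step(w, batch):
    """Sketch of a single-bit batch update (assumed form): over the
    batch's misclassified examples, flip the weight bit that disagrees
    most often."""
    counts = [0] * len(w)
    for u in batch:
        if sum(wi * ui for wi, ui in zip(w, u)) < 0:
            for i in range(len(w)):
                if w[i] * u[i] < 0:
                    counts[i] += 1
    if max(counts) == 0:
        return w  # no mistakes in this batch
    i = counts.index(max(counts))
    w = list(w)
    w[i] = -w[i]
    return w
```

Each online flip raises ⟨w, u⟩ by exactly 2, which is the "drift" toward consistency; the batch variant pools many examples before committing to a flip, which is the trade-off the abstract quantifies (fewer expected mistakes at the cost of ½πn log n or πn log n examples per update epoch).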
