LIPIcs: Leibniz International Proceedings in Informatics

Algorithms and Adaptivity Gaps for Stochastic k-TSP


Abstract

Given a metric (V,d) and a root ρ ∈ V, the classic k-TSP problem is to find a tour of minimum length, originating at the root, that visits at least k nodes in V. In this work, motivated by applications where the input to an optimization problem is uncertain, we study two stochastic versions of k-TSP.

In Stoch-Reward k-TSP, originally defined by Ene-Nagarajan-Saket [Ene et al., 2018], each vertex v in the given metric (V,d) contains a stochastic reward R_v. The goal is to adaptively find a tour of minimum expected length that collects reward at least k; here "adaptively" means that each decision may depend on previous outcomes. Ene et al. give an O(log k)-approximation adaptive algorithm for this problem, and leave open whether there is an O(1)-approximation algorithm. We resolve their open question completely, and in fact give an O(1)-approximation non-adaptive algorithm for Stoch-Reward k-TSP.

We also introduce, and obtain similar results for, the Stoch-Cost k-TSP problem. Here each vertex v has a stochastic cost C_v, and the goal is to visit and select at least k vertices so as to minimize the expected sum of the tour length and the costs of the selected vertices. Besides being a natural stochastic generalization of k-TSP, this problem is also interesting because it generalizes the Price of Information framework [Singla, 2018] from deterministic probing costs to metric probing costs.

Our techniques are based on two crucial ideas: "repetitions" and "critical scaling". In general, replacing a random variable by its expectation gives very poor results. We show that for our problems, if we truncate the random variables at an ideal threshold, then their expected values form a good surrogate. Here we rely on running several repetitions of our algorithm with the same threshold, and we argue concentration using Freedman's and Jogdeo-Samuels' inequalities. Unfortunately, the ideal threshold depends on how far we are from achieving the target k, which a non-adaptive algorithm does not know. To overcome this barrier, we truncate the random variables at several different scales and identify a "critical" scale.
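To make the "repetitions" and "critical scaling" ideas concrete, below is a minimal Python sketch on a toy instance. Everything instance-specific is an assumption chosen for illustration: the line metric, the Bernoulli reward distributions, and the greedy reward-per-distance rule standing in for the orienteering subroutine an actual O(1)-approximation would call. The sketch only demonstrates the mechanics described in the abstract: plan tours against the deterministic surrogate E[min(R_v, θ)], repeat each plan a constant number of times, and sweep θ over geometrically decreasing scales so that some scale lands near the unknown remaining target.

```python
import random

random.seed(0)

# Toy Stoch-Reward k-TSP instance (hypothetical, for illustration only).
# Line metric: the root sits at position 0 and d(u, v) = |pos_u - pos_v|.
# The reward of vertex v is Bernoulli(p_v) * size_v.
K = 20
VERTICES = [  # (position, success probability, reward size)
    (1.0, 0.9, 2), (2.0, 0.5, 6), (3.5, 0.2, 30),
    (5.0, 0.8, 4), (7.0, 0.3, 25), (9.0, 0.6, 10),
]

def truncated_mean(p, size, theta):
    """E[min(R, theta)] for R = Bernoulli(p) * size."""
    return p * min(size, theta)

def plan_tour(theta, skip):
    """Plan a tour whose surrogate reward, the sum of E[min(R_v, theta)],
    reaches theta, picking vertices greedily by surrogate reward per unit
    distance. (This greedy is a stand-in for the orienteering subroutine a
    real algorithm would use; the plan never looks at reward outcomes.)"""
    tour, here, total = [], 0.0, 0.0
    candidates = [v for v in range(len(VERTICES)) if v not in skip]
    while candidates and total < theta:
        v = max(candidates, key=lambda u: truncated_mean(
            VERTICES[u][1], VERTICES[u][2], theta)
            / (abs(VERTICES[u][0] - here) + 1e-9))
        candidates.remove(v)
        tour.append(v)
        total += truncated_mean(VERTICES[v][1], VERTICES[v][2], theta)
        here = VERTICES[v][0]
    return tour

def non_adaptive_run(k):
    """Concatenate tours planned at geometrically decreasing truncation
    scales theta = k, k/2, k/4, ..., repeating each scale a constant
    number of times ("repetitions"). Whatever the unknown remaining target
    is, some scale in this sequence is within a factor of 2 of it; that is
    the "critical" scale at which the truncated expectations are a
    faithful surrogate."""
    collected, length, here, probed = 0.0, 0.0, 0.0, set()
    theta = float(k)
    while theta >= 1 and len(probed) < len(VERTICES):
        for _ in range(2):  # repetitions at this scale
            for v in plan_tour(theta, probed):
                pos, p, size = VERTICES[v]
                length += abs(pos - here)
                here = pos
                probed.add(v)
                if random.random() < p:  # probe v's reward exactly once
                    collected += size
                if collected >= k:       # target met: walk back to root
                    return length + here
        theta /= 2
    return length + here  # all vertices probed or scales exhausted

print("tour length:", non_adaptive_run(K))
```

Truncation matters here because a large-but-unlikely reward would otherwise dominate the plain expectation: in this toy instance, the vertex with size 30 and probability 0.2 contributes only 0.2 · min(30, θ) to the surrogate, so at small scales θ it no longer looks more attractive than nearby reliable vertices.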
