Deep Reinforcement Learning for Task-Driven Discovery of Incomplete Networks

Abstract

Complex networks are often either too large for full exploration, partially accessible, or partially observed. Downstream learning tasks on these incomplete networks can produce low-quality results. In addition, reducing the incompleteness of the network can be costly and nontrivial. As a result, network discovery algorithms optimized for specific downstream learning tasks given resource collection constraints are of great interest. In this paper, we formulate the task-specific network discovery problem in an incomplete network setting as a sequential decision-making problem. Our downstream task is selective harvesting, the optimal collection of vertices with a particular attribute. We propose a framework, called Network Actor Critic (NAC), which learns a policy and notion of future reward in an offline setting via a deep reinforcement learning algorithm. A quantitative study is presented on several synthetic and real benchmarks. We show that offline models of reward and network discovery policies lead to significantly improved performance when compared to competitive online discovery algorithms.
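To make the formulation above concrete, here is a minimal sketch of one selective-harvesting episode as a sequential decision process, written with networkx. Everything in it is illustrative rather than taken from the paper: the function names, the "target" attribute, and the uniform-random policy (a stand-in for the learned NAC actor-critic) are all assumptions.

import random
import networkx as nx

def selective_harvesting(full_graph, target_attr, seed, budget, policy):
    """One episode: starting from a seed vertex, repeatedly pick a
    revealed-but-unprobed vertex, probe it to uncover its attribute
    and neighborhood, and collect reward 1 whenever the probed vertex
    carries the target attribute. A hypothetical sketch, not the
    authors' implementation."""
    observed = nx.Graph()                 # the agent's incomplete view
    observed.add_node(seed)
    observed.add_edges_from((seed, u) for u in full_graph.neighbors(seed))
    probed = {seed}
    reward = 0
    for _ in range(budget):
        frontier = set(observed.nodes) - probed   # candidate actions
        if not frontier:
            break
        v = policy(observed, frontier)            # action selection
        probed.add(v)
        # Probing v reveals its neighborhood in the hidden network.
        observed.add_edges_from((v, u) for u in full_graph.neighbors(v))
        if full_graph.nodes[v].get(target_attr):  # task-specific reward
            reward += 1
    return reward

def random_policy(observed, frontier):
    # Placeholder policy; NAC would replace this with an actor-critic
    # model trained offline to estimate future reward.
    return random.choice(sorted(frontier))

# Toy usage on a small benchmark graph with a planted attribute.
G = nx.karate_club_graph()
for n in G.nodes:
    G.nodes[n]["target"] = (G.nodes[n]["club"] == "Officer")
print(selective_harvesting(G, "target", seed=0, budget=10, policy=random_policy))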
