International Journal of Machine Learning and Cybernetics

Joint resource allocation for emotional 5G IoT systems using deep reinforcement learning

Abstract

In emotional-computing-related IoT systems, emotional sensors are deployed as IoT devices to collect emotional data from humans. These devices need wireless connections to send the collected data to a server, which runs the prediction and returns instructions to the user. Mobile edge computing (MEC) is a promising technology for this scenario. However, the IoT devices usually have a limited energy supply, and purely local computation yields less accurate emotional computing results. To address this, the paper aims to maximize the total energy efficiency of communication and computation across the MEC servers and sensors by jointly optimizing the allocation of channels and computing resources. The formulated problem is non-convex and is usually solved with the successive convex approximation (SCA) method. Instead of SCA, this paper uses a deep Q network (DQN) method, which incurs less computation cost and is therefore more practical to deploy. Simulation results show that the DQN solution outperforms the benchmark solutions, and the total energy consumption of the system is effectively reduced while guaranteeing emotional computing accuracy.
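The abstract does not spell out the underlying Markov decision process, so the sketch below only illustrates, under stated assumptions, how a DQN agent could select a joint (channel, computing-resource) action and be trained toward an energy-efficiency-style reward. The state layout, action encoding, reward shape, network sizes, and the toy fake_env_step environment are all illustrative assumptions, not the authors' system model.

```python
# Minimal DQN sketch (PyTorch) for joint channel / computing-resource allocation.
# All problem dimensions and the environment below are assumptions for illustration.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

N_DEVICES = 4                          # emotional IoT sensors (assumed)
N_CHANNELS = 4                         # orthogonal wireless channels (assumed)
N_CPU_LEVELS = 3                       # discrete MEC computing-resource shares (assumed)
STATE_DIM = N_DEVICES * 2              # e.g. channel-gain + task-size feature per device
N_ACTIONS = N_CHANNELS * N_CPU_LEVELS  # joint (channel, CPU share) choice for one device


class QNet(nn.Module):
    """Small MLP mapping a state to Q-values of all joint allocation actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


def fake_env_step(state, action):
    """Toy stand-in for the MEC environment: returns (next_state, reward).

    The reward is a synthetic proxy for energy efficiency (work done per joule);
    the paper's actual communication/computation energy model is not reproduced.
    """
    cpu_level = action % N_CPU_LEVELS
    channel = action // N_CPU_LEVELS
    gain = state[channel % N_DEVICES]               # crude channel-quality proxy
    energy = 0.5 + 0.3 * (cpu_level + 1)            # more computing share -> more energy
    reward = float(gain * (cpu_level + 1) / energy) # efficiency-like signal
    next_state = np.random.rand(STATE_DIM).astype(np.float32)
    return next_state, reward


q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, eps = 0.95, 0.1

state = np.random.rand(STATE_DIM).astype(np.float32)
for step in range(2000):
    # epsilon-greedy selection over joint (channel, CPU-share) actions
    if random.random() < eps:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.from_numpy(state)).argmax())

    next_state, reward = fake_env_step(state, action)
    replay.append((state, action, reward, next_state))
    state = next_state

    if len(replay) >= 64:
        batch = random.sample(replay, 64)
        s, a, r, s2 = map(np.array, zip(*batch))
        s, s2 = torch.from_numpy(s), torch.from_numpy(s2)
        a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)

        q_sa = q_net(s).gather(1, a).squeeze(1)
        with torch.no_grad():
            target = r + gamma * target_net(s2).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if step % 200 == 0:
        target_net.load_state_dict(q_net.state_dict())
```

In the paper's setting the reward would come from the actual communication and computation energy model with an accuracy constraint, and the action space would cover all sensors jointly; the sketch keeps a single-device decision and a random toy environment to stay minimal.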

