Experimental Study on Application of Distributed Deep Reinforcement Learning to Closed-loop Flow Separation Control over an Airfoil

AIAA SciTech Forum and Exposition


Abstract

This paper experimentally investigates a closed-loop flow separation control system on a NACA 0015 airfoil using a DBD plasma actuator at a chord Reynolds number of 63,000. The closed-loop control system is constructed using deep reinforcement learning (DRL). The plasma actuator is installed on the surface of the airfoil at 5% of the chord length from the leading edge and is driven with an AC voltage. Time-series data of the surface pressure are used as the input to a neural network, which is trained to select the optimum burst frequency of the actuator at an angle of attack of 15 degrees. Ape-X DQN, a recent distributed DRL algorithm, is used to improve the training of the neural network. As a result, the neural network is trained more stably with Ape-X DQN than with the earlier Deep Q-Network (DQN) algorithm. Time-averaged pressure measurements indicate that, at an angle of attack of 15 degrees, flow separation is suppressed more effectively by the network trained with Ape-X DQN than by the network trained with DQN.
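The abstract describes the control policy only in prose; the following is a minimal sketch of that interface, assuming a PyTorch implementation: a Q-network takes a window of surface-pressure samples and outputs one Q-value per candidate burst frequency, and the controller selects a frequency epsilon-greedily. The window length, layer sizes, frequency set, and all names here are hypothetical assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch (not from the paper): a Q-network mapping a window of
# surface-pressure samples to Q-values over candidate burst frequencies.
import torch
import torch.nn as nn

N_SAMPLES = 256                               # pressure-window length (assumed)
BURST_FREQS_HZ = [30.0, 60.0, 120.0, 240.0]   # candidate actions (assumed)


class QNetwork(nn.Module):
    """Maps a pressure time-series window to one Q-value per burst frequency."""

    def __init__(self, n_inputs: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def select_burst_frequency(q_net: QNetwork,
                           pressure_window: torch.Tensor,
                           epsilon: float = 0.05) -> float:
    """Epsilon-greedy choice over the discrete set of burst frequencies."""
    if torch.rand(()).item() < epsilon:
        idx = int(torch.randint(len(BURST_FREQS_HZ), ()).item())
    else:
        with torch.no_grad():
            q_values = q_net(pressure_window.unsqueeze(0))  # add batch dim
        idx = int(q_values.argmax(dim=1).item())
    return BURST_FREQS_HZ[idx]


if __name__ == "__main__":
    # One control step with an untrained network and synthetic pressures.
    q_net = QNetwork(N_SAMPLES, len(BURST_FREQS_HZ))
    pressure_window = torch.randn(N_SAMPLES)  # stand-in for measured data
    print(select_burst_frequency(q_net, pressure_window))
```

In Ape-X DQN (Horgan et al., 2018), many actor processes run a loop like select_burst_frequency in parallel, each with a different exploration rate, writing their transitions to a shared prioritized replay buffer from which a single learner updates the network; the abstract attributes the more stable training, relative to plain DQN, to this distributed setup.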
