Abstract:
In this study, an RLID (reinforcement learning inverse design) framework is proposed and applied to the inverse design of three-dimensional morphing wings for adaptive morphing flight missions under variable operating conditions. The CST (class-shape transformation) parameterization method is chosen to define the three-dimensional morphing wings, and Latin hypercube sampling is used to generate sample points in the design space. Computational fluid dynamics simulations are performed to obtain the corresponding aerodynamic parameters, and a deep belief network surrogate model is constructed to map the input-output relationship between the morphing design parameters and the aerodynamic parameters. To address the variable operating conditions, a DQN (deep Q-network) reinforcement learning agent, which learns without labeled training data, is used to provide real-time morphing strategies that achieve the expected aerodynamic performance. Furthermore, the design results obtained with the DQN agent are compared with those obtained with a G-CGAN (greedy-based conditional generative adversarial network) agent. The results indicate that the proposed RLID framework efficiently obtains satisfactory morphing strategies under variable operating conditions and that the DQN agent places greater emphasis on the overall task reward than the G-CGAN agent does.