Dynamic Scene Path Planning of UAVs Based on Deep Reinforcement Learning

- Tang, Jin (1,2)
- Liang, Yangang (1,2)
- Li, Kebo (1,2)
- González-Aguilera, Diego, ed. lit. (3)

1 National University of Defense Technology
2 Hunan Key Laboratory of Intelligent Planning and Simulation for Aerospace Mission, Changsha 410073, China
3 Universidad de Salamanca
ISSN: 2504-446X
Year of publication: 2024
Volume: 8
Issue: 2
Pages: 60
Type: Article
Journal: Drones
Bibliographic References
- Bulka, (2019), J. Intell. Robot. Syst., 93, pp. 85, 10.1007/s10846-018-0790-z
- Chen, (2020), Aerodyn. Missile J., 5, pp. 54
- Chen, (2014), Electron. Des. Eng., 19, pp. 96
- Liu, (2017), J. Shenyang Ligong Univ., 1, pp. 61
- LaValle, S. (1998). Rapidly-exploring random trees: A new tool for path planning. Res. Rep. 9811, 293–308.
- Li, (2012), Comput. Sci., 39, pp. 334
- Xu, (2014), Comput. Simul., 31, pp. 357
- Kang, (2014), J. Jilin Univ., 44, pp. 1062
- Li, (2013), Control. Decis., 28, pp. 873
- Wang, (2017), J. Nanjing Univ. Technol. Nat. Sci. Ed., 41, pp. 627
- Shi, (2014), Trans. Chin. Soc. Agric. Mach., 45, pp. 53
- Wang, (2016), Eng. Optim., 48, pp. 299, 10.1080/0305215X.2015.1005084
- Contreras, (2015), Appl. Soft Comput. J., 30, pp. 319, 10.1016/j.asoc.2015.01.067
- Mnih, (2015), Nature, 518, pp. 529, 10.1038/nature14236
- Zhao, Y., Zheng, Z., Zhang, X., and Liu, Y. (2017, January 26–28). Q learning algorithm-based UAV path learning and obstacle avoidance approach. Proceedings of the 36th Chinese Control Conference, Dalian, China.
- Zhou, (2021), Acta Aeronaut. et Astronaut. Sin., 42, pp. 506
- Huang, (2020), Comput. Eng. Appl., 56, pp. 30
- Feng, (2021), Comput. Appl. Softw., 38, pp. 250
- Cao, (2023), Proc. Inst. Mech. Eng. Part D J. Automob. Eng., 237, pp. 2295, 10.1177/09544070221106037
- Li, S., Xin, X., and Lei, Z. (2015, January 8–10). Dynamic path planning of a mobile robot with improved Q-learning algorithm. Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China.
- Yan, (2020), J. Intell. Robot. Syst., 98, pp. 297, 10.1007/s10846-019-01073-3
- Gao, (2006), J. Syst. Simul., 18, pp. 2570
- Wen, (2017), Int. J. Mach. Learn. Cybern., 8, pp. 469, 10.1007/s13042-015-0339-4
- Silver, (2018), Science, 362, pp. 1140, 10.1126/science.aar6404
- Silver, (2016), Nature, 529, pp. 484, 10.1038/nature16961
- Sutton, (1998), IEEE Trans. Neural Netw., 9, pp. 1054, 10.1109/TNN.1998.712192
- Liu, Z., Lan, F., and Yang, H. (2019, January 20–22). Partition Heuristic RRT Algorithm of Path Planning Based on Q-learning. Proceedings of the 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chengdu, China.
- Tai, L., and Liu, M. (2016). Towards cognitive exploration through deep reinforcement learning for mobile robots. arXiv.
- Guez, (2016), Proc. AAAI Conf. Artif. Intell., 30, pp. 2094
- Wang, Z., Schaul, T., Hessel, M., Hasselt, H., Lanctot, M., and Freitas, N. (2016). Dueling network architectures for deep reinforcement learning. Proceedings of the 33rd International Conference on Machine Learning (ICML), New York, NY, USA.
- Schaul, T., Quan, J., Antonoglou, I., and Silver, D. (2015). Prioritized experience replay. Comput. Sci., 1–17.
- Maniatopoulos, A., and Mitianoudis, N. (2021). Learnable Leaky ReLU (LeLeLU): An Alternative Accuracy-Optimized Activation Function. Information, 12.
- Sui, Z., Pu, Z., Yi, J., and Xiong, T. (2019, January 14–19). Formation control with collision avoidance through deep reinforcement learning. Proceedings of the International Joint Conference on Neural Networks, Budapest, Hungary.
- Xie, (2022), Front. Neurorobotics, 16, pp. 817168, 10.3389/fnbot.2022.817168