Deep Reinforcement Learning for the Visual Servoing Control of UAVs with FOV Constraint

  1. Fu, Gui 1,2
  2. Chu, Hongyu 1
  3. Liu, Liwen 2
  4. Fang, Linyi 2
  5. Zhu, Xinyu 2
  1. 1 School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
  2. 2 Institute of Electronic and Electrical Engineering, Civil Aviation Flight University of China, Guanghan 618307, China




ISSN: 2504-446X

Year of publication: 2023

Volume: 7

Issue: 6

Article number: 375

Type: Article

DOI: 10.3390/drones7060375

Journal: Drones


Visual servoing is a control method that uses image feedback to drive robot motion, and it has been widely applied to unmanned aerial vehicle (UAV) motion control. However, field-of-view (FOV) constraints still pose challenges for visual servoing, such as easy loss of the target and low control efficiency. To address these issues, a visual servoing control method for UAVs based on deep reinforcement learning (DRL) is proposed, which dynamically adjusts the servo gain in real time to avoid target loss and improve control efficiency. Firstly, a Markov model of visual servoing control for a UAV under FOV constraints is established; it consists of a quintuple and explicitly accounts for control efficiency. Secondly, an improved deep Q-network (DQN) algorithm with a target network and experience replay is designed to solve the Markov model. In addition, two independent agents are designed to adjust the linear and angular velocity servo gains, respectively, in order to enhance control performance. The effectiveness of the proposed method was verified in a simulation environment using a monocular camera.
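The gain-scheduling idea described in the abstract, a Q-learning agent with experience replay and a target network that selects a servo-gain adjustment, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the linear Q-function, the discrete action set of gain multipliers, and all hyperparameter values are assumptions; the paper uses a deep network and its own state, action, and reward definitions.

```python
import random
from collections import deque

import numpy as np


class GainAgentDQN:
    """Sketch of a DQN-style gain-scheduling agent with experience replay
    and a periodically synced target network. A linear Q-function stands
    in for the paper's deep network; state/action/reward are hypothetical."""

    ACTIONS = (0.5, 1.0, 1.5)  # assumed multiplicative adjustments to the servo gain

    def __init__(self, state_dim, gamma=0.9, lr=0.01, buffer_size=1000,
                 batch_size=32, sync_every=50, epsilon=0.1):
        self.w = np.zeros((len(self.ACTIONS), state_dim))  # online Q weights
        self.w_target = self.w.copy()                      # target network weights
        self.buffer = deque(maxlen=buffer_size)            # experience replay buffer
        self.gamma, self.lr = gamma, lr
        self.batch_size, self.sync_every = batch_size, sync_every
        self.epsilon, self.steps = epsilon, 0

    def act(self, s):
        """Epsilon-greedy action selection over the gain multipliers."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.ACTIONS))
        return int(np.argmax(self.w @ s))

    def store(self, s, a, r, s_next, done):
        """Append one transition to the replay buffer."""
        self.buffer.append((s, a, r, s_next, done))

    def learn(self):
        """One training step: sample a minibatch, do TD updates against the
        target network, and sync the target network periodically."""
        if len(self.buffer) < self.batch_size:
            return
        for s, a, r, s_next, done in random.sample(self.buffer, self.batch_size):
            target = r if done else r + self.gamma * np.max(self.w_target @ s_next)
            td_error = target - self.w[a] @ s
            self.w[a] += self.lr * td_error * s            # gradient step on TD error
        self.steps += 1
        if self.steps % self.sync_every == 0:
            self.w_target = self.w.copy()                  # refresh target network
```

Following the abstract, two independent instances of such an agent would run side by side, one scaling the linear-velocity gain and one the angular-velocity gain; a reward that penalizes feature points approaching the image border (the FOV constraint) while rewarding fast error decay would capture the stated objectives, though the paper's actual reward design is not reproduced here.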

Funding information


  • National Natural Science Foundation of China
    • 61601382
  • Fundamental Research Funds for the Central Universities
    • J2022-024
    • J2022-07
  • Independent research project of the Key Laboratory of Flight Techniques and Flight Safety
    • FZ2021ZZ04

Bibliographic References

  • Alzahrani, (2020), J. Netw. Comput. Appl., 166, pp. 102706, 10.1016/j.jnca.2020.102706
  • Mahony, (2012), IEEE Robot. Autom. Mag., 19, pp. 20, 10.1109/MRA.2012.2206474
  • Zhen, (2020), Aerosp. Sci. Technol., 100, pp. 105826, 10.1016/j.ast.2020.105826
  • Liu, (2020), Aerosp. Sci. Technol., 98, pp. 105671, 10.1016/j.ast.2019.105671
  • Wu, (2021), Control. Decis., 36, pp. 2851
  • Zheng, (2017), IEEE/ASME Trans. Mechatron., 22, pp. 972, 10.1109/TMECH.2016.2639531
  • Chaumette, (2006), IEEE Robot. Autom. Mag., 13, pp. 82, 10.1109/MRA.2006.250573
  • Chen, C., Tian, Y., Lin, L., Chen, S., Li, H., Wang, Y., and Su, K. (2020). Obtaining World Coordinate Information of UAV in GNSS Denied Environments. Sensors, 20.
  • Abdessameud, (2015), Automatica, 53, pp. 111, 10.1016/j.automatica.2014.12.032
  • Zhang, (2017), IEEE Trans. Ind. Electron., 64, pp. 390, 10.1109/TIE.2016.2598526
  • Zhang, (2020), IEEE Trans. Ind. Inform., 16, pp. 7624, 10.1109/TII.2020.2974485
  • Ceren, (2012), J. Intell. Robot. Syst., 65, pp. 325, 10.1007/s10846-011-9582-4
  • Liu, (2019), Nonlinear Dyn., 95, pp. 2605, 10.1007/s11071-018-4700-5
  • Santamaria-Navarro, À., and Andrade-Cetto, J. (2013, January 6–10). Uncalibrated image based visual servoing. Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany.
  • Miao, (2021), IEEE Trans. Control. Syst. Technol., 29, pp. 2231, 10.1109/TCST.2020.3023415
  • Lopez-Nicolas, G., Aranda, M., and Mezouar, Y. (2017, May 29–June 3). Formation of differential-drive vehicles with field-of-view constraints for enclosing a moving target. Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore.
  • Bhagat, S., and Pb, S. (2020, January 1–4). UAV Target Tracking in Urban Environments Using Deep Reinforcement Learning. Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece.
  • Bruno, (2021), Neurocomputing, 455, pp. 97, 10.1016/j.neucom.2021.05.027
  • Hajiloo, (2016), IEEE Trans. Ind. Electron., 63, pp. 2242
  • Chesi, (2009), IEEE Trans. Robot., 25, pp. 281, 10.1109/TRO.2009.2014131
  • Huang, (2022), IEEE Trans. Veh. Technol., 71, pp. 2385, 10.1109/TVT.2021.3138912
  • Zhang, (2020), IEEE/ASME Trans. Mechatron., 25, pp. 1912, 10.1109/TMECH.2020.2993617
  • Zheng, (2019), IEEE/ASME Trans. Mechatron., 24, pp. 1087, 10.1109/TMECH.2019.2906430
  • Krichen, M., Mihoub, A., Alzahrani, M.Y., Adoni, W.Y.H., and Nahhal, T. (2022, January 9–11). Are Formal Methods Applicable to Machine Learning and Artificial Intelligence?. Proceedings of the 2022 2nd International Conference of Smart Systems and Emerging Technologies (SMARTTECH), Riyadh, Saudi Arabia.
  • Seshia, (2022), Commun. ACM, 65, pp. 46, 10.1145/3503914
  • Sutton, R.S., and Barto, A.G. (2018). Reinforcement Learning: An Introduction, MIT Press.
  • Wang, (2010), IEEE/ASME Trans. Mechatron., 15, pp. 757, 10.1109/TMECH.2009.2034740
  • Shi, (2018), IEEE Trans. Ind. Inform., 14, pp. 241, 10.1109/TII.2016.2617464
  • Shi, (2020), IEEE Trans. Cogn. Dev. Syst., 12, pp. 417, 10.1109/TCDS.2019.2908923