End-to-End Nano-Drone Obstacle Avoidance for Indoor Exploration
- Zhang, Ning 1
- Nex, Francesco 1
- Vosselman, George 1
- Kerle, Norman 1
- González-Aguilera, Diego (ed.) 2
- 1 Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, 7522 NH Enschede, The Netherlands
- 2 Department of Cartographic and Land Engineering, Higher Polytechnic School of Avila, University of Salamanca, Hornos Caleros 50, 05003 Avila, Spain
ISSN: 2504-446X
Year of publication: 2024
Volume: 8
Issue: 2
Pages: 33
Type: Article
More publications in: Drones
Funding information
Funders
- European Union’s Horizon 2020 Research and Innovation Programme, under Grant Agreement 833435
- Korean Government
Bibliographic References
- Paliotta, C., Ening, K., and Albrektsen, S.M. (2021, January 23–26). Micro indoor-drones (mins) for localization of first responders. Proceedings of the ISCRAM, Blacksburg, VA, USA.
- Smolyanskiy, (2017), Front. Robot. AI, 4, pp. 11, 10.3389/frobt.2017.00011
- Schmid, K., Tomic, T., Ruess, F., Hirschmüller, H., and Suppa, M. (2013, January 3–8). Stereo vision based indoor/outdoor navigation for flying robots. Proceedings of the IROS, Tokyo, Japan.
- Chiella, A.C., Machado, H.N., Teixeira, B.O., and Pereira, G.A. (2019). GNSS/LiDAR-based navigation of an aerial robot in sparse forests. Sensors, 19.
- Moffatt, A., Platt, E., Mondragon, B., Kwok, A., Uryeu, D., and Bhandari, S. (2020, January 1–4). Obstacle detection and avoidance system for small uavs using a lidar. Proceedings of the ICUAS, Athens, Greece.
- Park, J., and Cho, N. (2020). Collision avoidance of hexacopter UAV based on LiDAR data in dynamic environment. Remote Sens., 12.
- Akbari, A., Chhabra, P.S., Bhandari, U., and Bernardini, S. (2020, January 25–29). Intelligent exploration and autonomous navigation in confined spaces. Proceedings of the IROS, Las Vegas, NV, USA.
- Yang, T., Li, P., Zhang, H., Li, J., and Li, Z. (2018). Monocular vision SLAM-based UAV autonomous landing in emergencies and unknown environments. Electronics, 7.
- von Stumberg, L., Usenko, V., Engel, J., Stückler, J., and Cremers, D. (2017, January 6–8). From monocular SLAM to autonomous drone exploration. Proceedings of the ECMR, Paris, France.
- Tulldahl, M., Holmberg, M., Karlsson, O., Rydell, J., Bilock, E., Axelsson, L., Tolt, G., and Svedin, J. (2020, January 1–3). Laser sensing from small UAVs. Proceedings of the Electro-Optical Remote Sensing XIV, San Francisco, CA, USA.
- Kouris, A., and Bouganis, C.S. (2018, January 1–5). Learning to fly by myself: A self-supervised cnn-based approach for autonomous navigation. Proceedings of the IROS, Madrid, Spain.
- Loquercio, (2018), IEEE Robot. Autom. Lett., 3, pp. 1088, 10.1109/LRA.2018.2795643
- Gandhi, D., Pinto, L., and Gupta, A. (2017, January 24–28). Learning to fly by crashing. Proceedings of the IROS, Vancouver, BC, Canada.
- Yang, (2019), IEEE Trans. Intell. Transport. Syst., 22, pp. 156, 10.1109/TITS.2019.2955598
- Chakravarty, P., Kelchtermans, K., Roussel, T., Wellens, S., Tuytelaars, T., and Van Eycken, L. (2017, May 29–June 3). CNN-based single image obstacle avoidance on a quadrotor. Proceedings of the ICRA, Singapore.
- Zhang, Z., Xiong, M., and Xiong, H. (2019, January 6–7). Monocular depth estimation for UAV obstacle avoidance. Proceedings of the CCIOT, Changchun, China.
- Zhang, N., Nex, F., Vosselman, G., and Kerle, N. (2023, January 18–22). Lite-mono: A lightweight cnn and transformer architecture for self-supervised monocular depth estimation. Proceedings of the CVPR, Vancouver, BC, Canada.
- McGuire, (2019), Sci. Robot., 4, pp. eaaw9710, 10.1126/scirobotics.aaw9710
- Duisterhof, B.P., Li, S., Burgués, J., Reddi, V.J., and de Croon, G.C. (2021, September 27–October 1). Sniffy bug: A fully autonomous swarm of gas-seeking nano quadcopters in cluttered environments. Proceedings of the IROS, Prague, Czech Republic.
- Niculescu, V., Müller, H., Ostovar, I., Polonelli, T., Magno, M., and Benini, L. (2022, January 16–19). Towards a Multi-Pixel Time-of-Flight Indoor Navigation System for Nano-Drone Applications. Proceedings of the I2MTC, Ottawa, ON, Canada.
- Geebelen, (2021), IEEE Trans. Veh. Technol., 71, pp. 961
- Briod, A., Zufferey, J.C., and Floreano, D. (2013, January 3–8). Optic-flow based control of a 46g quadrotor. Proceedings of the IROS Workshop, Tokyo, Japan.
- Bouwmeester, R.J., Paredes-Vallés, F., and de Croon, G.C. (2022). NanoFlowNet: Real-time Dense Optical Flow on a Nano Quadcopter. arXiv.
- McGuire, (2017), IEEE Robot. Autom. Lett., 2, pp. 1070, 10.1109/LRA.2017.2658940
- Palossi, (2019), IEEE Internet Things J., 6, pp. 8357, 10.1109/JIOT.2019.2917066
- Zhilenkov, A.A., and Epifantsev, I.R. (2018, January 29–February 1). System of autonomous navigation of the drone in difficult conditions of the forest trails. Proceedings of the EIConRus, Moscow, Russia.
- Godard, C., Mac Aodha, O., Firman, M., and Brostow, G.J. (2019, October 27–November 2). Digging into self-supervised monocular depth estimation. Proceedings of the ICCV, Seoul, Republic of Korea.
- Jung, H., Park, E., and Yoo, S. (2021, January 11–17). Fine-grained semantics-aware representation enhancement for self-supervised monocular depth estimation. Proceedings of the ICCV, Montreal, QC, Canada.
- Yin, Z., and Shi, J. (2018, January 18–22). Geonet: Unsupervised learning of dense depth, optical flow and camera pose. Proceedings of the CVPR, Salt Lake City, UT, USA.
- Poggi, M., Aleotti, F., Tosi, F., and Mattoccia, S. (2020, January 16–18). On the uncertainty of self-supervised monocular depth estimation. Proceedings of the CVPR, Seattle, WA, USA.
- Yang, N., Stumberg, L.v., Wang, R., and Cremers, D. (2020, January 16–18). D3vo: Deep depth, deep pose and deep uncertainty for monocular visual odometry. Proceedings of the CVPR, Seattle, WA, USA.
- Yan, J., Zhao, H., Bu, P., and Jin, Y. (2021, January 1–3). Channel-wise attention-based network for self-supervised monocular depth estimation. Proceedings of the 3DV, Online.
- Bae, J., Moon, S., and Im, S. (2023, January 7–14). Deep Digging into the Generalization of Self-supervised Monocular Depth Estimation. Proceedings of the AAAI, Washington, DC, USA.
- Lyu, X., Liu, L., Wang, M., Kong, X., Liu, L., Liu, Y., Chen, X., and Yuan, Y. (2021, January 2–9). Hr-depth: High resolution self-supervised monocular depth estimation. Proceedings of the AAAI, Online.
- Wofk, D., Ma, F., Yang, T.J., Karaman, S., and Sze, V. (2019, January 20–24). Fastdepth: Fast monocular depth estimation on embedded systems. Proceedings of the ICRA, Montreal, QC, Canada.
- Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv.
- Zhou, Z., Fan, X., Shi, P., and Xin, Y. (2021, January 11–17). R-msfm: Recurrent multi-scale feature modulation for monocular depth estimating. Proceedings of the ICCV, Montreal, QC, Canada.
- Hinton, G., Vinyals, O., and Dean, J. (2015). Distilling the knowledge in a neural network. arXiv.
- Zhang, Y., Xiang, T., Hospedales, T.M., and Lu, H. (2018, January 18–22). Deep mutual learning. Proceedings of the CVPR, Salt Lake City, UT, USA.
- Romero, A., Ballas, N., Kahou, S.E., Chassang, A., Gatta, C., and Bengio, Y. (2014). Fitnets: Hints for thin deep nets. arXiv.
- Komodakis, N., and Zagoruyko, S. (2017, January 24–26). Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. Proceedings of the ICLR, Toulon, France.
- Shu, C., Liu, Y., Gao, J., Yan, Z., and Shen, C. (2021, January 11–17). Channel-wise knowledge distillation for dense prediction. Proceedings of the ICCV, Montreal, QC, Canada.
- Zhou, Z., Zhuge, C., Guan, X., and Liu, W. (2020). Channel distillation: Channel-wise attention for knowledge distillation. arXiv.
- Wang, Y., Li, X., Shi, M., Xian, K., and Cao, Z. (2021, January 19–25). Knowledge distillation for fast and accurate monocular depth estimation on mobile devices. Proceedings of the CVPR, Nashville, TN, USA.
- He, K., Zhang, X., Ren, S., and Sun, J. (2016, June 26–July 1). Deep residual learning for image recognition. Proceedings of the CVPR, Las Vegas, NV, USA.
- Hu, J., Fan, C., Jiang, H., Guo, X., Gao, Y., Lu, X., and Lam, T.L. (2021). Boosting Light-Weight Depth Estimation Via Knowledge Distillation. arXiv.
- Pilzer, A., Lathuiliere, S., Sebe, N., and Ricci, E. (2019, January 16–20). Refine and distill: Exploiting cycle-inconsistency and knowledge distillation for unsupervised monocular depth estimation. Proceedings of the CVPR, Long Beach, CA, USA.
- Cho, J.H., and Hariharan, B. (2019, October 27–November 2). On the efficacy of knowledge distillation. Proceedings of the ICCV, Seoul, Republic of Korea.
- Stanton, (2021), NeurIPS, 34, pp. 6906
- Lin, S., Xie, H., Wang, B., Yu, K., Chang, X., Liang, X., and Wang, G. (2022, January 21–23). Knowledge distillation via the target-aware transformer. Proceedings of the CVPR, New Orleans, LA, USA.
- Wang, (2004), IEEE Trans. Image Process., 13, pp. 600, 10.1109/TIP.2003.819861
- Althaus, P., and Christensen, H.I. (2002, September 30–October 4). Behaviour coordination for navigation in office environments. Proceedings of the IROS, Lausanne, Switzerland.
- Geiger, (2013), Int. J. Robot. Res., 32, pp. 1231
- Eigen, D., and Fergus, R. (2015, January 13–16). Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. Proceedings of the ICCV, Santiago, Chile.
- Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., and Fei-Fei, L. (2009, January 20–25). Imagenet: A large-scale hierarchical image database. Proceedings of the CVPR, Miami, FL, USA.
- Bhat, S.F., Birkl, R., Wofk, D., Wonka, P., and Müller, M. (2023). Zoedepth: Zero-shot transfer by combining relative and metric depth. arXiv.
- Mur-Artal, (2017), IEEE Trans. Robot., 33, pp. 1255, 10.1109/TRO.2017.2705103