Dark-SLAM: A Robust Visual Simultaneous Localization and Mapping Pipeline for an Unmanned Driving Vehicle in a Dark Night Environment
- Chen, Jie 1
- Wang, Yan 1
- Hou, Pengshuai 1
- Chen, Xingquan 1
- Shao, Yule 1
- González-Aguilera, Diego (ed.) 2
- 1 School of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China
- 2 Universidad de Salamanca
Publication information
ISSN: 2504-446X
Year of publication: 2024
Volume: 8
Issue: 8
Pages: 390
Type: Article
Journal: Drones
Funding information
Funders
- National Natural Science Foundation of China (Grant No. 52175004)
- Fundamental Research Funds for the Central Universities (Grant No. N2203013)
Bibliographic References
- Gu, N., Xing, F., and You, Z. (2022). Visual/Inertial/GNSS Integrated Navigation System under GNSS Spoofing Attack. Remote Sens., 14.
- Veneruso, P., Opromolla, R., Tiana, C., Gentile, G., and Fasano, G. (2022). Sensing Requirements and Vision-Aided Navigation Algorithms for Vertical Landing in Good and Low Visibility UAM Scenarios. Remote Sens., 14.
- Hong, (2024), Mach. Vis. Appl., 35, pp. 5, 10.1007/s00138-023-01488-x
- Engel, (2018), IEEE Trans. Pattern Anal. Mach. Intell., 40, pp. 611, 10.1109/TPAMI.2017.2658577
- Tardos, (2017), IEEE Trans. Robot., 33, pp. 1255, 10.1109/TRO.2017.2705103
- Teng, Z., Han, B., Cao, J., Hao, Q., Tang, X., and Li, Z. (2023). PLI-SLAM: A Tightly-Coupled Stereo Visual-Inertial SLAM System with Point and Line Features. Remote Sens., 15.
- Hao, L., Li, H., Zhang, Q., Hu, X., and Cheng, J. (2019, December 6–8). LMVI-SLAM: Robust Low-Light Monocular Visual-Inertial Simultaneous Localization and Mapping. Proceedings of the 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dali, China.
- Fang, Y., Shan, G., Wang, T., Li, X., Liu, W., and Snoussi, H. (2018, November 30–December 2). HE-SLAM: A Stereo SLAM System Based on Histogram Equalization and ORB Features. Proceedings of the 2018 Chinese Automation Congress (CAC), Xi’an, China.
- Cheng, W., Zhang, Y., Qi, Y., Liu, J., and Liu, F. (2020, December 11–14). A Fast Global Adaptive Solution to Low-Light Images Enhancement in Visual SLAM. Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China.
- Savinykh, A., Kurenkov, M., Kruzhkov, E., Yudin, E., Potapov, A., Karpyshev, P., and Tsetserukou, D. (2022, June 19–22). DarkSLAM: GAN-Assisted Visual SLAM for Reliable Operation in Low-Light Conditions. Proceedings of the 2022 IEEE 95th Vehicular Technology Conference (VTC2022-Spring), Helsinki, Finland.
- Ross, P., English, A., Ball, D., and Corke, P. (2014, December 2–4). A Method to Quantify a Descriptor’s Illumination Variance. Proceedings of the 16th Australasian Conference on Robotics and Automation (ACRA 2014), Australian Robotics and Automation Association (ARAA), Melbourne, Australia.
- Quan, Y., Fu, D., Chang, Y., and Wang, C. (2022). 3D Convolutional Neural Network for Low-Light Image Sequence Enhancement in SLAM. Remote Sens., 14.
- Nuske, (2009), J. Field Robot., 26, pp. 728, 10.1002/rob.20306
- Wang, (2022), IEEE Robot. Autom. Lett., 7, pp. 2008, 10.1109/LRA.2022.3142854
- Pratap Singh, S., Mazotti, B., Mayilvahanan, S., Li, G., Manish Rajani, D., and Ghaffari, M. (2023). Twilight SLAM: A Comparative Study of Low-Light Visual SLAM Pipelines. arXiv.
- Chen, C., Chen, Q., Xu, J., and Koltun, V. (2018, June 18–23). Learning to See in the Dark. Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.
- Jiang, S., Liu, J., Li, Y., Weng, D., and Chen, W. (2023). Reliable Feature Matching for Spherical Images via Local Geometric Rectification and Learned Descriptor. Remote Sens., 15.
- Xu, (2022), IEEE Robot. Autom. Lett., 8, pp. 752, 10.1109/LRA.2022.3231983
- Chen, (2022), IEEE Robot. Autom. Lett., 7, pp. 11894, 10.1109/LRA.2022.3207800
- Kim, P., Coltin, B., Alexandrov, O., and Kim, H.J. (2017, May 29–June 3). Robust Visual Localization in Changing Lighting Conditions. Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore.
- Gridseth, (2022), IEEE Robot. Autom. Lett., 7, pp. 1016, 10.1109/LRA.2021.3136867
- Han, (2023), IEEE/ASME Trans. Mechatron., 28, pp. 2225, 10.1109/TMECH.2023.3234316
- Land, (1971), J. Opt. Soc. Am., 61, pp. 1, 10.1364/JOSA.61.000001
- Pisano, (1998), J. Digit. Imaging, 11, pp. 193, 10.1007/BF03178082
- Jobson, (1997), IEEE Trans. Image Process., 6, pp. 451, 10.1109/83.557356
- Jobson, (1997), IEEE Trans. Image Process., 6, pp. 965, 10.1109/83.597272
- Brainard, (1986), J. Opt. Soc. Am. A, 3, pp. 1651, 10.1364/JOSAA.3.001651
- Wei, C., Wang, W., Yang, W., and Liu, J. (2018). Deep retinex decomposition for low-light enhancement. arXiv.
- Lore, (2017), Pattern Recognit., 61, pp. 650, 10.1016/j.patcog.2016.06.008
- Li, (2023), Mach. Vis. Appl., 34, pp. 13, 10.1007/s00138-022-01365-z
- Li, (2021), IEEE Trans. Multimed., 23, pp. 3153, 10.1109/TMM.2020.3021243
- Zhou, S., Li, C., and Loy, C.C. (2022). LEDNet: Joint Low-Light Enhancement and Deblurring in the Dark. Computer Vision—ECCV 2022, Proceedings of the 17th European Conference, Tel Aviv, Israel, 23–27 October 2022, Springer Nature.
- Rahman, (2016), EURASIP J. Image Video Process., 2016, pp. 35, 10.1186/s13640-016-0138-1
- DeTone, D., Malisiewicz, T., and Rabinovich, A. (2018, June 18–22). SuperPoint: Self-Supervised Interest Point Detection and Description. Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA.
- Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv.
- Choy, C.B., Gwak, J., Savarese, S., and Chandraker, M. (2016, December 5–10). Universal Correspondence Network. Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
- Yamashita, (2016), Pattern Recognit., 52, pp. 459, 10.1016/j.patcog.2015.10.002
- Lowe, (2004), Int. J. Comput. Vis., 60, pp. 91, 10.1023/B:VISI.0000029664.99615.94
- Fischler, (1981), Commun. ACM, 24, pp. 381, 10.1145/358669.358692
- Zhang, (2010), Int. J. Mach. Learn. Cybern., 1, pp. 43, 10.1007/s13042-010-0001-0
- Horn, (1987), J. Opt. Soc. Am. A, 4, pp. 629, 10.1364/JOSAA.4.000629
- Wenzel, P., Wang, R., Yang, N., Cheng, Q., Khan, Q., von Stumberg, L., Zeller, N., and Cremers, D. (2021). 4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous Driving. Pattern Recognition, Proceedings of the 42nd DAGM German Conference, DAGM GCPR 2020, Tübingen, Germany, 28 September–1 October 2020, Springer International Publishing.
- Kim, G., and Kim, A. (2018, October 1–5). Scan Context: Egocentric Spatial Descriptor for Place Recognition within 3D Point Cloud Map. Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain.
- Bian, J., Lin, W.-Y., Matsushita, Y., Yeung, S.-K., Nguyen, T.-D., and Cheng, M.-M. (2017, July 21–26). GMS: Grid-Based Motion Statistics for Fast, Ultra-Robust Feature Correspondence. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.
- Moreno, (2019), IEEE Trans. Robot., 35, pp. 734, 10.1109/TRO.2019.2899783
- Michaud, (2019), J. Field Robot., 36, pp. 416, 10.1002/rob.21831