Research on the Key Techniques of Space-Sky Heterogeneous Image Matching
Author:
Affiliation:

1. Aerospace Times Feihong Technology Limited Company, Beijing 100094, China; 2. School of Mechanical and Electrical Engineering, Beijing Institute of Technology, Beijing 100081, China; 3. Intelligent Unmanned System Overall Technology Research and Development Center, China Aerospace Science and Technology Corporation Limited, Beijing 100094, China; 4. Unmanned System Center, 9th Institute, China Aerospace Science and Technology Corporation, Beijing 100094, China

About the authors:

Liu Gengchen, born in 2000, master's degree candidate.
Wu Guoqiang, born in 1984, Ph.D. candidate.
Huang Kun, born in 1990, holds a master's degree.
Liu Qifei, born in 2000, master's degree candidate.
Huang Shuling, born in 1986, holds a Ph.D.

CLC number:

TP751

    Abstract:

    Heterogeneous space-sky image matching is one of the important research directions for unmanned aerial vehicles (UAVs). This paper systematically constructs a framework for heterogeneous space-sky image matching and provides an in-depth analysis of its key components. Based on a literature review, the key technologies of the framework are grouped into three major categories: image quality assessment, image preprocessing, and image matching. The paper summarizes the latest advances in each category, focusing on how the methods differ when applied in the UAV field. On this basis, cross-comparison experiments on datasets are conducted to analyze the concrete performance of each method. Finally, the paper summarizes the challenges faced by heterogeneous space-sky image matching and offers an outlook on future research directions and development trends.
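The three technique categories named in the abstract (image quality assessment, image preprocessing, image matching) can be illustrated with a minimal, self-contained NumPy sketch. This is our own toy illustration, not the paper's implementation: variance of a Laplacian response stands in for no-reference quality assessment, global histogram equalization stands in for preprocessing (CLAHE in reference [10] is its contrast-limited refinement), and zero-mean normalized cross-correlation performs a simple area-based match of an "aerial" patch against a "satellite" reference.

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """No-reference sharpness proxy: variance of a 4-neighbour Laplacian response."""
    f = img.astype(np.float64)
    lap = (-4.0 * f[1:-1, 1:-1]
           + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return float(lap.var())

def equalize_hist(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize CDF to [0, 1]
    return np.round(cdf[img] * 255).astype(np.uint8)

def ncc_match(image: np.ndarray, template: np.ndarray):
    """Exhaustive zero-mean normalized cross-correlation template search.

    Returns ((row, col) of the best-matching window, NCC score in [-1, 1]).
    Zero-mean NCC is invariant to affine brightness/contrast changes, a toy
    analogue of the radiometric gap between heterogeneous sensors."""
    th, tw = template.shape
    t = template.astype(np.float64)
    t -= t.mean()
    tn = np.linalg.norm(t)
    best_score, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw].astype(np.float64)
            w -= w.mean()
            denom = np.linalg.norm(w) * tn
            if denom == 0.0:
                continue
            score = float((w * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy demo: a synthetic "satellite" reference and an "aerial" patch of the
# same scene under an affine brightness/contrast change.
rng = np.random.default_rng(0)
satellite = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
aerial = 0.8 * satellite[17:33, 23:39].astype(np.float64) + 20.0
pos, score = ncc_match(satellite, aerial)
print(pos, round(score, 3))  # the patch is recovered at (17, 23)
```

In a full pipeline, the quality score would gate whether preprocessing (denoising, equalization, super-resolution) is applied before the matcher runs; the learned matchers surveyed in the paper (e.g. SuperPoint with SuperGlue, or LoFTR) replace the correlation stage with feature-based matching that tolerates larger viewpoint and modality gaps.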

Cite this article:

Liu Gengchen, Wu Guoqiang, Huang Kun, Liu Qifei, Huang Shuling. Research on the key techniques of space-sky heterogeneous image matching[J]. Journal of Telemetry, Tracking and Command, 2025, 46(2): 1-11.

    References
    [1] MOORTHY A K, BOVIK A C. A two-step framework for constructing blind image quality indices[J]. IEEE Signal Processing Letters, 2010, 17(5): 513-516.
    [2] MOORTHY A K, BOVIK A C. Blind image quality assessment: From natural scene statistics to perceptual quality[J]. IEEE Transactions on Image Processing, 2011, 20(12): 3350-3364. DOI: 10.1109/TIP.2011.2160799.
    [3] SAAD M A, BOVIK A C, CHARRIER C. Blind image quality assessment: A natural scene statistics approach in the DCT domain[J]. IEEE Transactions on Image Processing, 2012, 21(8): 3339-3352. DOI: 10.1109/TIP.2012.2197527.
    [4] MITTAL A, SOUNDARARAJAN R, BOVIK A C. Making a "completely blind" image quality analyzer[J]. IEEE Signal Processing Letters, 2013, 20(3): 209-212. DOI: 10.1109/LSP.2013.2248785.
    [5] MITTAL A, MOORTHY A K, BOVIK A C. No-reference image quality assessment in the spatial domain[J]. IEEE Transactions on Image Processing, 2012, 21(12): 4695-4708. DOI: 10.1109/TIP.2012.2214050.
    [6] GU S Y, BAO J M, CHEN D, et al. GIQA: Generated image quality assessment[EB/OL]. [2022-11-08]. https://arxiv.org/abs/2003.08932.
    [7] SU S, YAN Q, ZHU Y, et al. Blindly assess image quality in the wild guided by a self-adaptive hyper network[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA. New York: IEEE, 2020: 3664-3673. DOI: 10.1109/CVPR42600.2020.00372.
    [8] YOU J, KORHONEN J. Transformer for image quality assessment[C]//Proceedings of the 2021 IEEE International Conference on Image Processing. Anchorage, AK, USA. New York: IEEE, 2021: 1389-1393. DOI: 10.1109/ICIP42928.2021.9506075.
    [9] KIM J, NGUYEN A D, LEE S. Deep CNN-based blind image quality predictor[J]. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(1): 11-24. DOI: 10.1109/TNNLS.2018.2829819.
    [10] ZUIDERVELD K. Contrast limited adaptive histogram equalization[M]//Graphics Gems IV. San Diego: Academic Press, 1994: 474-485. DOI: 10.1016/B978-0-12-336156-1.50061-6.
    [11] ZHANG K, ZUO W, CHEN Y, et al. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising[J]. IEEE Transactions on Image Processing, 2017, 26(7): 3142-3155. DOI: 10.1109/TIP.2017.2662206.
    [12] ZHANG K, ZUO W, ZHANG L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising[J]. IEEE Transactions on Image Processing, 2018, 27(9): 4608-4622. DOI: 10.1109/TIP.2018.2839891.
    [13] ZHANG K, ZUO W, ZHANG L. Learning a single convolutional super-resolution network for multiple degradations[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2018. DOI: 10.48550/arXiv.1712.06116.
    [14] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. https://arxiv.org/abs/1409.1556. DOI: 10.48550/arXiv.1409.1556.
    [15] WANG Y, HUANG H, XU Q, et al. Practical deep raw image denoising on mobile devices[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer, 2020. DOI: 10.1007/978-3-030-58539-6_1.
    [16] HUI Z, GAO X, YANG Y, et al. Lightweight image super-resolution with information multi-distillation network[C]//Proceedings of the 27th ACM International Conference on Multimedia. New York: ACM, 2019: 2024-2032. DOI: 10.1145/3343031.3351084.
    [17] XU W, YAO Y, CAO J, et al. UAV-VisLoc: A large-scale dataset for UAV visual localization[EB/OL]. 2024.
    [18] ZHANG K, ZUO W, ZHANG L. Deep plug-and-play super-resolution for arbitrary blur kernels[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2019. DOI: 10.1109/CVPR.2019.00177.
    [19] WANG X, YU K, WU S, et al. ESRGAN: Enhanced super-resolution generative adversarial networks[C]//Proceedings of the European Conference on Computer Vision Workshops. Cham: Springer, 2018. DOI: 10.1007/978-3-030-11021-5_5.
    [20] DETONE D, MALISIEWICZ T, RABINOVICH A. SuperPoint: Self-supervised interest point detection and description[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. New York: IEEE, 2018. DOI: 10.48550/arXiv.1712.07629.
    [21] SARLIN P E, DETONE D, MALISIEWICZ T, et al. SuperGlue: Learning feature matching with graph neural networks[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2020. DOI: 10.1109/CVPR42600.2020.00499.
    [22] LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110. DOI: 10.1023/B:VISI.0000029664.99615.94.
    [23] BAY H, TUYTELAARS T, VAN GOOL L. SURF: Speeded up robust features[C]//Proceedings of the 9th European Conference on Computer Vision. Berlin: Springer-Verlag, 2006: 404-417. DOI: 10.1007/11744023_32.
    [24] SUN J, SHEN Z, WANG Y, et al. LoFTR: Detector-free local feature matching with transformers[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2021. DOI: 10.1109/CVPR46437.2021.00881.
    [25] WANG Y, HE X, PENG S, et al. Efficient LoFTR: Semi-dense local feature matching with sparse-like speed[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2024: 21666-21675.
    [26] CHEN H, LUO Z, ZHOU L, et al. ASpanFormer: Detector-free image matching with adaptive span transformer[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 20-36.
    [27] LINDENBERGER P, SARLIN P E, POLLEFEYS M. LightGlue: Local feature matching at light speed[C]//Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision, Paris, France. New York: IEEE, 2023: 17581-17592. DOI: 10.1109/ICCV51070.2023.01616.
History
  • Received: 2024-10-14
  • Revised: 2024-11-21
  • Published online: 2025-03-19