An Algorithm Based on a Feature Interaction-based Keypoint Detector and Sim-CSPNet for SAR Image Registration

XIANG Deliang, XU Yihao, CHENG Jianda, HU Canbin, SUN Xiaokun

Citation: XIANG Deliang, XU Yihao, CHENG Jianda, et al. An algorithm based on a feature interaction-based keypoint detector and Sim-CSPNet for SAR image registration[J]. Journal of Radars, in press. doi: 10.12000/JR22110


doi: 10.12000/JR22110
Funds: The National Natural Science Foundation of China (62171015)
    Author biographies:

    XIANG Deliang (1989–), male, from Luoshan, Henan. He received a doctorate in engineering from the National University of Defense Technology and a Ph.D. from the KTH Royal Institute of Technology in 2016, and is currently a professor at Beijing University of Chemical Technology. He is a recipient of an outstanding doctoral dissertation award within his system, a Humboldt Research Fellow (Germany), and was selected for a young talent support program in a certain field. His research interests include SAR/PolSAR information processing and ground penetrating radar.

    XU Yihao (1997–), male, from Zhoukou, Henan, is a master's student at the College of Information Science and Technology, Beijing University of Chemical Technology. His research interests include artificial intelligence and SAR image registration.

    CHENG Jianda (1996–), male, from Linyi, Shandong, is a Ph.D. student at the College of Information Science and Technology, Beijing University of Chemical Technology. His research interests include polarimetric SAR image classification, target detection, and registration.

    HU Canbin (1985–), male, from Yugan, Jiangxi, received his doctorate in engineering from the National University of Defense Technology and is currently a lecturer at Beijing University of Chemical Technology. His research interests include SAR image target detection and recognition and polarimetric SAR information processing.

    SUN Xiaokun (1980–), female, from Zhaoxian, Hebei, received her doctorate in engineering from the National University of Defense Technology in 2008. She joined the College of Information Science and Technology, Beijing University of Chemical Technology, through a high-level talent recruitment program in 2021, where she is an associate research fellow. Her research interests include SAR information processing and SAR image quality assessment and its applications.

    Corresponding author:

    CHENG Jianda, cjd_buct@163.com

  • Corresponding Editor: QIU Xiaolan
  • CLC number: TP75


  • Abstract: Synthetic Aperture Radar (SAR) images suffer from inherent speckle noise and geometric distortion, and nonlinear radiometric differences exist between images acquired at different times, which makes SAR image registration one of the most challenging tasks in recent years. The repeatability of keypoints and the effectiveness of feature descriptors directly determine the accuracy of feature-based registration methods. This paper proposes a novel feature interaction-based keypoint detector consisting of three parallel detectors: a Phase Congruency (PC) detector, a horizontal- and vertical-gradient detector, and a local coefficient-of-variation detector. By keeping only the keypoints where the three detector responses converge, the proposed detector not only extracts keypoints with high repeatability but also greatly reduces the number of false keypoints, thereby lowering the computational cost of feature description and matching. A Siamese Cross Stage Partial Network (Sim-CSPNet) is also designed to rapidly extract feature descriptors that combine deep and shallow features; compared with traditional hand-crafted shallow descriptors, it yields more accurate matching point pairs. Registration experiments on several pairs of SAR images, with comparisons against three other methods, verify that the proposed method achieves good registration results.
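The detection stage described in the abstract keeps only pixels where three detector responses converge (a logical AND). The sketch below is a minimal illustration of that idea, not the paper's implementation: the gradient-magnitude, coefficient-of-variation, and high-pass responses are simplified stand-ins (the last a crude proxy for the phase-congruency measure), and all function names are our assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_stats(img, win=7):
    """Local mean and standard deviation over win x win windows (edge-padded)."""
    pad = win // 2
    w = sliding_window_view(np.pad(img, pad, mode="edge"), (win, win))
    return w.mean(axis=(-2, -1)), w.std(axis=(-2, -1))

def intersection_keypoints(img, win=7, q=0.90):
    """Keep only pixels flagged as salient by all three parallel detectors."""
    img = np.log1p(img.astype(np.float64))   # log transform: multiplicative speckle -> additive
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)                  # horizontal/vertical gradient response
    mean, std = local_stats(img, win)
    cv = std / (mean + 1e-8)                 # local coefficient of variation
    hp = np.abs(img - mean)                  # crude high-pass stand-in for phase congruency
    masks = [r >= np.quantile(r, q) for r in (grad, cv, hp)]
    return np.logical_and.reduce(masks)      # feature convergence: AND of the three masks
```

Because the AND keeps at most the top (1 − q) fraction that any single detector keeps, the candidate set shrinks sharply, which is what cuts the cost of the later description and matching steps.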

     

  • Figure 1. Overall flow chart of the proposed algorithm

    Figure 2. Flowchart of the feature interaction-based keypoint detector

    Figure 3. Keypoint detection results of the PC detector before and after logarithmic transformation of SAR images

    Figure 4. Keypoint detection results obtained by different keypoint detectors

    Figure 5. Keypoint matching results obtained by different keypoint detectors

    Figure 6. Architecture of the Siamese cross stage partial network

    Figure 7. SAR experimental data of four different scenarios

    Figure 8. Feature matching results on pair A with different methods

    Figure 9. Feature matching results on pair B with different methods

    Figure 10. Feature matching results on pair C with different methods

    Figure 11. Feature matching results on pair D with different methods

    Figure 12. Checkerboard overlays for different algorithms

    Figure 13. Registration results with different keypoint detectors on pair A

    Figure 14. Registration results with different keypoint detectors on pair B

    Figure 15. Registration results with different keypoint detectors on pair C

    Figure 16. Registration results with different keypoint detectors on pair D

    Table 1. Sim-CSPNet model structure

    | Network module | Network layer | Output size |
    |---|---|---|
    | Input layer | Input | 64×64×1 |
    | Conv layer | Conv(3×3), stride(2) | 32×32×32 |
    | Block 1 | Half of previous layer | 32×32×16 |
    | | Conv(1×1), stride(1) | 32×32×48 |
    | | Conv(3×3), stride(1) | 32×32×12 |
    | | Connect | 32×32×28 |
    | | Conv(1×1), stride(1) | 32×32×48 |
    | | Conv(3×3), stride(1) | 32×32×12 |
    | | Connect | 32×32×40 |
    | | Conv(1×1), stride(1) | 32×32×20 |
    | | Connect | 32×32×36 |
    | Transition layer | Conv(1×1), stride(1) | 32×32×18 |
    | | Average pooling(2×2), stride(2) | 16×16×18 |
    | Block 2 | | 16×16×25 |
    | Transition layer | Conv(1×1), stride(1) | 16×16×12 |
    | | Average pooling(2×2), stride(2) | 8×8×12 |
    | Block 3 | | 8×8×21 |
    | Output layer | Conv(8×8), stride(1) | 256×1 |
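The spatial sizes in Table 1 follow from the standard convolution/pooling output-size formula. The short check below walks the spatial dimension down the table; the padding values are our assumptions, chosen to reproduce the listed sizes.

```python
def conv_out(size, kernel, stride, pad=0):
    """Output spatial size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

s = conv_out(64, 3, 2, pad=1)          # stem Conv(3x3), stride 2: 64 -> 32
assert s == 32
assert conv_out(s, 3, 1, pad=1) == 32  # block convs are stride-1 'same': size unchanged
s = conv_out(s, 2, 2)                  # transition Average pooling(2x2), stride 2: 32 -> 16
assert s == 16
s = conv_out(s, 2, 2)                  # second transition: 16 -> 8
assert s == 8
assert conv_out(s, 8, 1) == 1          # output Conv(8x8) collapses 8x8 -> 1x1, so the
                                       # 256 channels flatten to the 256-D descriptor
```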

    Table 2. Information of SAR image pairs used in the experiment

    | Sensor | Pair | Image size | Resolution (m) | Acquisition date |
    |---|---|---|---|---|
    | GF-3 | Pair A | 1214×1130 (left) / 1480×1207 (right) | 2×6 | 20200715 (left) / 20200726 (right) |
    | GF-3 | Pair B | 1000×1000 (left) / 1000×1000 (right) | 2×6 | 20200715 (left) / 20200726 (right) |
    | Sentinel-1 | Pair C | 500×500 (left) / 600×600 (right) | 11×14 | 20201010 (left) / 20211222 (right) |
    | Sentinel-1 | Pair D | 1374×1349 (left) / 1597×1462 (right) | 11×14 | 20201010 (left) / 20211222 (right) |

    Table 3. Comparison of different methods on four pairs of SAR images (each cell: RMSE / NCM / Time (s); '–': not available)

    | Method | Pair A | Pair B | Pair C | Pair D |
    |---|---|---|---|---|
    | SAR-SIFT | 0.97 / 523 / 215.9 | 0.89 / 511 / 859.40 | 0.87 / 143 / 69.9 | 0.91 / 494 / 232.1 |
    | KAZE-SAR | – / – / 41.5 | 2.40 / 18 / 19.60 | – / – / 5.7 | 4.80 / 6 / 179.2 |
    | HardNet | 1.13 / 111 / 58.9 | – / – / 63.50 | 0.80 / 21 / 28.9 | 0.96 / 65 / 146.2 |
    | Proposed | 0.74 / 209 / 18.1 | 0.68 / 594 / 7.95 | 0.71 / 51 / 3.6 | 0.65 / 306 / 24.7 |
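The RMSE reported in Table 3 (and in Tables 4 and 5) is the root-mean-square residual of the estimated transform on matched point pairs. A minimal sketch of how such an RMSE is computed, assuming an affine transform model fitted by least squares (the function names are ours, not from the paper):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N,2) points onto dst (N,2)."""
    A = np.hstack([src, np.ones((len(src), 1))])      # homogeneous coordinates [x y 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3x2 affine parameter matrix
    return params

def registration_rmse(src, dst, params):
    """Root-mean-square residual of the fitted transform, in pixels."""
    A = np.hstack([src, np.ones((len(src), 1))])
    res = A @ params - dst
    return float(np.sqrt(np.mean(np.sum(res ** 2, axis=1))))
```

An RMSE below one pixel, as in the last row of Table 3, is conventionally read as sub-pixel registration accuracy.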

    Table 4. Quantitative comparison of different keypoint detectors

    | Detector | Metric | Pair A | Pair B | Pair C | Pair D |
    |---|---|---|---|---|---|
    | DoG | Keypoints (reference image) | 10651 | 3475 | 2992 | 13422 |
    | | Keypoints (sensed image) | 19288 | 3383 | 4246 | 19923 |
    | | Time (s) | 14.9 | 6.64 | 3.56 | 21.28 |
    | | NCM | 81 | 173 | 61 | 28 |
    | | RMSE | 0.94 | 0.74 | 1.01 | 0.79 |
    | Harris | Keypoints (reference image) | 11513 | 8114 | 1867 | 14905 |
    | | Keypoints (sensed image) | 14209 | 6370 | 2494 | 15053 |
    | | Time (s) | 12.47 | 6.79 | 3.39 | 13.32 |
    | | NCM | 60 | 40 | 25 | 49 |
    | | RMSE | 1.07 | 0.81 | 0.80 | 0.86 |
    | SAR-Harris | Keypoints (reference image) | 22746 | 9193 | 3391 | 23660 |
    | | Keypoints (sensed image) | 32693 | 10215 | 5584 | 37767 |
    | | Time (s) | 49.4 | 11.31 | 7.6 | 34.68 |
    | | NCM | 134 | 132 | 39 | 164 |
    | | RMSE | 0.95 | 0.87 | 0.79 | 0.76 |
    | Feature interaction detector | Keypoints (reference image) | 7848 | 6133 | 752 | 10481 |
    | | Keypoints (sensed image) | 7961 | 5424 | 715 | 11499 |
    | | Time (s) | 18.1 | 7.95 | 3.6 | 24.7 |
    | | NCM | 209 | 594 | 51 | 306 |
    | | RMSE | 0.74 | 0.68 | 0.71 | 0.65 |

    Table 5. Quantitative comparison of different networks (each cell: RMSE / NCM / Time (s))

    | Method | Pair A | Pair B | Pair C | Pair D |
    |---|---|---|---|---|
    | FI+L2Net | 1.13 / 51 / 34.35 | 0.81 / 22 / 30.38 | 1.56 / 18 / 3.63 | 0.84 / 41 / 28.17 |
    | FI+HardNet | 0.85 / 55 / 32.50 | 0.77 / 42 / 29.41 | 0.79 / 29 / 3.99 | 0.76 / 68 / 28.42 |
    | FI+SOSNet | 0.83 / 93 / 14.21 | 0.70 / 79 / 9.24 | 0.86 / 48 / 2.77 | 0.92 / 121 / 11.70 |
    | Proposed | 0.74 / 209 / 18.10 | 0.68 / 594 / 7.95 | 0.71 / 51 / 3.60 | 0.65 / 306 / 24.70 |
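The NCM differences in Table 5 reflect how well each descriptor network separates true from false correspondences. A generic sketch of the matching step that produces candidate correspondences from 256-D descriptors, assuming L2-normalised descriptors and Lowe's nearest-neighbour ratio test (the paper's exact matching strategy may differ):

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    """Nearest-neighbour matching with a ratio test; returns (index1, index2) pairs."""
    d1 = d1 / np.linalg.norm(d1, axis=1, keepdims=True)
    d2 = d2 / np.linalg.norm(d2, axis=1, keepdims=True)
    # For unit vectors, ||a - b||^2 = 2 - 2 a.b, so all distances come from one matmul.
    dist = np.sqrt(np.maximum(0.0, 2.0 - 2.0 * d1 @ d2.T))
    nn = np.argsort(dist, axis=1)[:, :2]        # two nearest neighbours per query
    matches = []
    for i, (j1, j2) in enumerate(nn):
        if dist[i, j1] < ratio * dist[i, j2]:   # reject ambiguous (non-distinctive) matches
            matches.append((i, int(j1)))
    return matches
```

A more discriminative descriptor pushes true matches far below the ratio threshold while keeping impostors near it, which raises NCM without loosening the test.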
  • [1] SUN Yili, LEI Lin, LI Xiao, et al. Structure consistency-based graph for unsupervised change detection with homogeneous and heterogeneous remote sensing images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 1–21. doi: 10.1109/TGRS.2021.3053571
    SU Juan, LI Bin, and WANG Yanzhao. SAR image registration algorithm based on closed uniform regions[J]. Journal of Electronics & Information Technology, 2016, 38(12): 3282–3288. doi: 10.11999/JEIT160141 (in Chinese)
    [3] ZHANG Wangfei, CHEN Erxue, LI Zengyuan, et al. Review of applications of radar remote sensing in agriculture[J]. Journal of Radars, 2020, 9(3): 444–461. doi: 10.12000/JR20051 (in Chinese)
    [4] ZHOU Rongrong. Research on registration method of mountainous SAR images[D]. [Master dissertation], Chang’an University, 2019 (in Chinese).
    [5] SURI S and REINARTZ P. Mutual-information-based registration of TerraSAR-X and Ikonos imagery in urban areas[J]. IEEE Transactions on Geoscience and Remote Sensing, 2010, 48(2): 939–949. doi: 10.1109/TGRS.2009.2034842
    [6] YOO J C and HAN T H. Fast normalized cross-correlation[J]. Circuits, Systems and Signal Processing, 2009, 28(6): 819–843. doi: 10.1007/s00034-009-9130-7
    [7] SHI Wei, SU Fenzhen, WANG Ruirui, et al. A visual circle based image registration algorithm for optical and SAR imagery[C]. 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 2012: 2109–2112.
    [8] WANG Fei and VEMURI B C. Non-rigid multi-modal image registration using cross-cumulative residual entropy[J]. International Journal of Computer Vision, 2007, 74(2): 201–215. doi: 10.1007/s11263-006-0011-2
    [9] PAUL S and PATI U C. SAR image registration using an improved SAR-SIFT algorithm and Delaunay-triangulation-based local matching[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019, 12(8): 2958–2966. doi: 10.1109/JSTARS.2019.2918211
    [10] LOWE D G. Object recognition from local scale-invariant features[C]. Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 1999: 1150–1157.
    [11] MIKOLAJCZYK K and SCHMID C. A performance evaluation of local descriptors[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(10): 1615–1630. doi: 10.1109/TPAMI.2005.188
    [12] MA Wenping, WEN Zelian, WU Yue, et al. Remote sensing image registration with modified SIFT and enhanced feature matching[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(1): 3–7. doi: 10.1109/LGRS.2016.2600858
    [13] XIANG Yuming, WANG Feng, and YOU Hongjian. OS-SIFT: A robust SIFT-like algorithm for high-resolution optical-to-SAR image registration in suburban areas[J]. IEEE Transactions on Geoscience and Remote Sensing, 2018, 56(6): 3078–3090. doi: 10.1109/TGRS.2018.2790483
    [14] SCHWIND P, SURI S, REINARTZ P, et al. Applicability of the SIFT operator to geometric SAR image registration[J]. International Journal of Remote Sensing, 2010, 31(8): 1959–1980. doi: 10.1080/01431160902927622
    [15] DELLINGER F, DELON J, GOUSSEAU Y, et al. SAR-SIFT: A SIFT-like algorithm for SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(1): 453–466. doi: 10.1109/TGRS.2014.2323552
    [16] WANG Shanhu, YOU Hongjian, and FU Kun. BFSIFT: A novel method to find feature matches for SAR image registration[J]. IEEE Geoscience and Remote Sensing Letters, 2012, 9(4): 649–653. doi: 10.1109/LGRS.2011.2177437
    [17] FAN Jianwei, WU Yan, WANG Fan, et al. SAR image registration using phase congruency and nonlinear diffusion-based SIFT[J]. IEEE Geoscience and Remote Sensing Letters, 2015, 12(3): 562–566. doi: 10.1109/LGRS.2014.2351396
    [18] ELTANANY A S, AMEIN A S, and ELWAN M S. A modified corner detector for SAR images registration[J]. International Journal of Engineering Research in Africa, 2021, 53(106): 123–156. doi: 10.4028/www.scientific.net/JERA.53.123
    [19] YE Yuanxin, WANG Mengmeng, HAO Siyuan, et al. A novel keypoint detector combining corners and blobs for remote sensing image registration[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 18(3): 451–455. doi: 10.1109/LGRS.2020.2980620
    [20] ZHANG Han, NI Weiping, YAN Weidong, et al. Registration of multimodal remote sensing image based on deep fully convolutional neural network[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019, 12(8): 3028–3042. doi: 10.1109/JSTARS.2019.2916560
    [21] GE Ynchen, XIONG Zhaolong, and LAI Zuomei. Image registration of SAR and optical based on salient image sub-patches[J]. Journal of Physics: Conference Series, 2021, 1961(1): 012017. doi: 10.1088/1742-6596/1961/1/012017
    [22] ZHU Hao, JIAO Licheng, MA Wenping, et al. A novel neural network for remote sensing image matching[J]. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(9): 2853–2865. doi: 10.1109/TNNLS.2018.2888757
    [23] MISHCHUK A, MISHKIN D, RADENOVIC F, et al. Working hard to know your neighbor’s margins: Local descriptor learning loss[C]. The 31st International Conference on Neural Information Processing Systems, Long Beach, USA, 2017: 4829–4840.
    [24] DU Wenliang, ZHOU Yong, ZHAO Jiaqi, et al. Exploring the potential of unsupervised image synthesis for SAR-optical image matching[J]. IEEE Access, 2021, 9: 71022–71033. doi: 10.1109/ACCESS.2021.3079327
    [25] YE Famao, SU Yanfei, XIAO Hui, et al. Remote sensing image registration using convolutional neural network features[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(2): 232–236. doi: 10.1109/LGRS.2017.2781741
    [26] WANG C Y, LIAO H Y M, WU Y H, et al. CSPNet: A new backbone that can enhance learning capability of CNN[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, USA, 2020: 1571–1580.
    [27] WANG Lina, SUN Mingchao, LIU Jinghong, et al. A robust algorithm based on phase congruency for optical and SAR image registration in suburban areas[J]. Remote Sensing, 2020, 12(20): 3339. doi: 10.3390/rs12203339
    [28] XIANG Yuming, TAO Rongshu, WANG Feng, et al. Automatic registration of optical and SAR images VIA improved phase congruency[C]. IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 2019: 931–934.
    [29] KOVESI P. Image features from phase congruency[J]. Videre: Journal of Computer Vision Research, 1999, 1(3): 1–26.
    [30] XIE Hua, PIERCE L E, and ULABY F T. Statistical properties of logarithmically transformed speckle[J]. IEEE Transactions on Geoscience and Remote Sensing, 2002, 40(3): 721–727. doi: 10.1109/TGRS.2002.1000333
    [31] HARRIS C and STEPHENS M. A combined corner and edge detector[C]. Alvey Vision Conference, Manchester, UK, 1988.
    [32] HAN Xufeng, LEUNG T, JIA Yangqing, et al. MatchNet: Unifying feature and metric learning for patch-based matching[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 3279–3286.
    [33] DETONE D, MALISIEWICZ T, and RABINOVICH A. Deep image homography estimation[EB/OL]. https://doi.org/10.48550/arXiv.1606.03798, 2016.
    [34] MERKLE N, LUO Wenjie, AUER S, et al. Exploiting deep matching and SAR data for the geo-localization accuracy improvement of optical satellite images[J]. Remote Sensing, 2017, 9(6): 586. doi: 10.3390/rs9060586
    [35] BALNTAS V, RIBA E, PONSA D, et al. Learning local feature descriptors with triplets and shallow convolutional neural networks[C]. British Machine Vision Conference 2016, York, UK, 2016.
    [36] HUANG Gao, LIU Zhuang, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 2261–2269.
    [37] HERMANS A, BEYER L, and LEIBE B. In defense of the triplet loss for person re-identification[EB/OL]. https://doi.org/10.48550/arXiv.1703.07737, 2017.
    [38] POURFARD M, HOSSEINIAN T, SAEIDI R, et al. KAZE-SAR: SAR image registration using KAZE detector and modified SURF descriptor for tackling speckle noise[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5207612. doi: 10.1109/TGRS.2021.3084411
    [39] TIAN Yurun, FAN Bin, and WU Fuchao. L2-Net: Deep learning of discriminative patch descriptor in euclidean space[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honnolulu, USA, 2017: 6128–6136.
    [40] TIAN Yurun, YU Xin, FAN Bin, et al. SOSNet: Second order similarity regularization for local descriptor learning[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 11008–11017.
    [41] TOUZI R. A review of speckle filtering in the context of estimation theory[J]. IEEE Transactions on Geoscience and Remote Sensing, 2002, 40(11): 2392–2404. doi: 10.1109/TGRS.2002.803727
Publication history
  • Received: 2022-06-08
  • Revised: 2022-07-20
  • Available online: 2022-08-03
