面向SAR图像解译的物理可解释深度学习技术进展与探讨

黄钟泠 姚西文 韩军伟

引用本文: 黄钟泠, 姚西文, 韩军伟. 面向SAR图像解译的物理可解释深度学习技术进展与探讨[J]. 雷达学报, 2022, 11(1): 107–125. doi: 10.12000/JR21165
Citation: HUANG Zhongling, YAO Xiwen, and HAN Junwei. Progress and perspective on physically explainable deep learning for synthetic aperture radar image interpretation[J]. Journal of Radars, 2022, 11(1): 107–125. doi: 10.12000/JR21165


DOI: 10.12000/JR21165
基金项目: 国家自然科学基金(62101459),中国博士后科学基金(BX2021248),中央高校基本科研业务费专项资金(G2021KY05104)
    作者简介:

    黄钟泠(1994–),女,重庆人,2020年获中国科学院大学博士学位,现为西北工业大学自动化学院准聘副教授,硕士生导师。主要研究方向为SAR图像解译、深度学习和可解释人工智能

    姚西文(1988–),男,山东人,2016年获西北工业大学博士学位,现为西北工业大学自动化学院副研究员,博士生导师。主要研究方向为计算机视觉、遥感图像处理、细粒度图像分类和目标识别

    韩军伟(1977–),男,陕西人,2003年获西北工业大学博士学位,现为西北工业大学自动化学院教授,博士生导师。主要研究方向为计算机视觉与脑成像分析

    通讯作者:

    黄钟泠 huangzhongling@nwpu.edu.cn

  • 责任主编:计科峰 Corresponding Editor: JI Kefeng
  • 中图分类号: TN957.51

Progress and Perspective on Physically Explainable Deep Learning for Synthetic Aperture Radar Image Interpretation

Funds: The National Natural Science Foundation of China (62101459), China Postdoctoral Science Foundation (BX2021248), Fundamental Research Funds for the Central Universities (G2021KY05104)
  • 摘要:

    深度学习技术近年来在合成孔径雷达(SAR)图像解译领域发展迅速,但当前基于数据驱动的方法通常忽视了SAR潜在的物理特性,预测结果高度依赖训练数据,甚至违背了物理认知。深层次地整合理论驱动和数据驱动的方法在 SAR 图像解译领域尤为重要,数据驱动的方法擅长从大规模数据中自动挖掘新模式,对物理过程能起到有效的补充;反之,在数据驱动方法中加入可解释的物理模型能提升深度学习算法的透明度,并降低模型对标记样本的依赖。该文提出在SAR图像解译应用领域发展物理可解释的深度学习技术,从SAR信号、特性理解到图像语义和应用场景等多个维度开展研究,并结合物理机器学习提出了几种在SAR解译中融合物理模型和深度学习模型的研究思路,逐步发展可学习且可解释的智能化SAR图像解译新范式。在此基础上,该文回顾了近两三年在SAR图像解译相关领域中整合数据驱动深度学习和理论驱动物理模型的相关工作,主要聚焦信号特性理解和图像语义理解两大方向,并结合研究现状和其他领域的相关研究探讨了目前面临的挑战和未来可能的发展方向。

     

  • 极化合成孔径雷达(Polarimetric Synthetic Aperture Radar, PolSAR)具有全天时和几乎全天候的工作能力,通过收发极化状态正交的电磁波以获取目标的全极化散射信息[1]。地物分类是农作物生长监控、农村与城市用地普查、环境监测等应用领域的共性基础问题,也是极化SAR图像理解与解译的重要应用方向。高精度的地物分类结果能够为上述应用领域提供可靠的信息支撑。

通常,提高极化SAR地物分类精度主要有两种途径[2]。第1种途径专注于极化特征的挖掘与优选,通过精细化的极化散射机理建模与解译,从全极化信息中提取出对不同地物类别具有更强区分度的特征。常用的极化散射机理解译方法有基于特征值分解的方法和基于模型分解的方法。基于这些极化目标分解方法所得到的极化特征参数经常被用于极化SAR地物分类,例如Cloude-Pottier分解所得的极化熵/极化平均角/极化反熵(H/α/A)参数[3],以及Freeman-Durden分解[4]、Yamaguchi分解[5]和近年来提出的精细化极化目标分解[6]所得的各散射机理的散射能量参数(如奇次散射、偶次散射、体散射、螺旋散射等)[7]。第2种途径则从分类器入手,使用性能更好的分类器,以对现有的极化特征进行充分利用。常用的分类器包括C均值分类器、Wishart分类器、支持向量机(Support Vector Machine, SVM)分类器、随机森林分类器、神经网络分类器,以及近年来在诸多领域取得成功应用的以卷积神经网络为代表的深度学习分类方法等[8–11]。当然,对特征和分类器同时进行优化和优选也是提高极化SAR地物分类精度的有效途径。

在传统基于特征的极化SAR地物分类中,具有旋转不变特性的极化特征参数得到了广泛应用。例如,基于H/α/A和总散射能量SPAN的极化SAR地物分类就是一种常用的分类方法。然而,目标的极化响应与目标和SAR的相对几何关系密切相关:同一目标在不同方位取向下,其后向散射可以显著不同;而不同目标在某些特定方位取向下,其后向散射又可以十分相似。例如,具有不同方位取向的建筑物与森林等植被就是极化SAR图像解译的难点。这是诸多传统极化目标分解方法存在散射机理解译模糊的重要原因之一,同时也限制了基于旋转不变极化特征参数的传统分类方法所得精度的进一步提升。为避免这种解译模糊,一种思路是构建更精细化的目标散射模型和极化目标分解方法;另一种思路则是挖掘利用目标方位取向与其后向散射机理之间的隐含关系,文献[12]提出的统一的极化矩阵旋转理论就是一种代表性方法。该理论提出了在绕雷达视线的旋转域中理解目标散射特性的新思路,并导出了一系列旋转域极化特征。部分旋转域极化特征参数已经在农作物辨识[13]、目标对比增强[12]、人造目标提取[14]等领域获得了成功应用。
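上文提到的Cloude-Pottier分解按标准定义由极化相干矩阵的特征值分解得到H/α/A参数。下面给出一个最简的numpy示意实现(仅为示意,非原文实现;函数名与变量名均为此处假定):

```python
import numpy as np

def cloude_pottier(T):
    """由3x3极化相干矩阵T的特征值分解计算H、平均alpha角、A与SPAN(示意实现)。"""
    lam, V = np.linalg.eigh(T)            # eigh返回升序特征值及对应特征向量列
    lam = lam[::-1].clip(min=0)           # 降序排列, 截断数值误差导致的负值
    V = V[:, ::-1]
    p = lam / lam.sum()                   # 伪概率
    H = -sum(pi * np.log(pi) / np.log(3) for pi in p if pi > 0)   # 极化熵(以3为底)
    alpha = float(np.sum(p * np.arccos(np.abs(V[0, :]))))         # 平均alpha角(弧度)
    A = (lam[1] - lam[2]) / (lam[1] + lam[2] + 1e-300)            # 极化反熵
    span = float(lam.sum())               # 总散射能量SPAN
    return H, alpha, A, span
```

例如对单位矩阵(三种散射机理能量相等),极化熵为1、反熵为0,符合完全去极化的情形。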

这些旋转域极化特征包含目标在旋转域中隐含的极化散射信息,且与目标的方位取向具有一定关系。若将它们与传统的旋转不变极化特征参数H/α/A/SPAN联合作为地物分类特征集,则从极化特征挖掘的角度来看,两类不同的极化特征对不同地物类别的区分能力势必会形成一定程度的互补,进而使分类精度得到进一步提升。基于这一思路,本文提出了一种结合旋转域极化特征与旋转不变特征H/α/A/SPAN的极化SAR地物分类方法:基于不同地物类别样本集类间距最大的特征优选准则,以部分优选的旋转域极化特征参数与H/α/A/SPAN联合作为地物分类所用特征,并选用性能较为稳定的SVM[15]作为分类器进行分类处理。由于该分类方法额外使用了目标在方位取向方面的隐含信息,故相较于仅使用旋转不变特征H/α/A/SPAN作为输入的SVM分类器[10],其能够达到更优的分类性能。

    本文第2节简要介绍了统一的极化矩阵旋转理论及其所导出的旋转域极化特征参数;第3节提出结合旋转域极化特征的极化SAR地物分类方法;第4节基于AIRSAR和多时相UAVSAR实测数据开展了地物分类对比实验及分析;第5节总结本文方法并对后续研究工作进行展望。

极化SAR获得的目标全极化信息可以通过极化相干矩阵$\boldsymbol{T}$表示。满足互易性原理时,极化相干矩阵$\boldsymbol{T}$可以表示为:

$$\boldsymbol{T} = \left\langle \boldsymbol{k}_{\rm P}\boldsymbol{k}_{\rm P}^{\rm H} \right\rangle = \begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix} \tag{1}$$

其中,$\boldsymbol{k}_{\rm P} = \dfrac{1}{\sqrt{2}}\begin{bmatrix} S_{\rm HH}+S_{\rm VV} & S_{\rm HH}-S_{\rm VV} & 2S_{\rm HV} \end{bmatrix}^{\rm T}$为Pauli散射矢量,$S_{\rm HV}$为以垂直极化天线发射并以水平极化天线接收条件下的散射系数,$\boldsymbol{k}_{\rm P}$中其它元素可类似定义;$\langle\cdot\rangle$表示集合平均;$T_{ij}$表示极化相干矩阵$\boldsymbol{T}$中第$i$行第$j$列所对应的元素。
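式(1)中Pauli散射矢量的构造与集合平均可以用如下numpy代码示意(仅为示意,假定互易条件$S_{\rm HV}=S_{\rm VH}$成立,函数名为此处假定):

```python
import numpy as np

def pauli_vector(S):
    """由2x2散射矩阵S构造Pauli散射矢量k_P(假定互易: S_HV = S_VH)。"""
    s_hh, s_hv, s_vv = S[0, 0], S[0, 1], S[1, 1]
    return np.array([s_hh + s_vv, s_hh - s_vv, 2 * s_hv]) / np.sqrt(2)

def coherency_matrix(S_looks):
    """对多视样本做集合平均, 得到3x3极化相干矩阵 T = <k_P k_P^H>。"""
    ks = [pauli_vector(S) for S in S_looks]
    return np.mean([np.outer(k, k.conj()) for k in ks], axis=0)
```

例如对三面角反射器($\boldsymbol{S}$为单位矩阵),能量全部集中在$T_{11}$,且$\boldsymbol{T}$的迹等于SPAN。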

将极化相干矩阵$\boldsymbol{T}$绕雷达视线进行旋转处理,则可得到旋转域中极化相干矩阵的表达式为:

$$\boldsymbol{T}(\theta) = \left\langle \boldsymbol{k}_{\rm P}(\theta)\boldsymbol{k}_{\rm P}^{\rm H}(\theta) \right\rangle = \boldsymbol{R}_3(\theta)\,\boldsymbol{T}\,\boldsymbol{R}_3^{\rm H}(\theta) \tag{2}$$

其中,旋转矩阵为:

$$\boldsymbol{R}_3(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos 2\theta & \sin 2\theta \\ 0 & -\sin 2\theta & \cos 2\theta \end{bmatrix} \tag{3}$$

在旋转域中,极化相干矩阵$\boldsymbol{T}(\theta)$的每个元素经过相应的数学变换即可被统一地由一个正弦函数进行表征[12]:

$$f(\theta) = A\sin\left[\omega(\theta+\theta_0)\right] + B \tag{4}$$

其中,$A$为振荡幅度,$B$为振荡中心,$\omega$为角频率,$\theta_0$为初始角度。文献[12]将这4类极化特征参数$\{A, B, \omega, \theta_0\}$称为振荡参数集,其完整表征极化相干矩阵的各元素在旋转域中的特性。由此可以导出一系列旋转域极化特征参数,如表1所示。其中,${\rm Angle}\{a\}$表示复数$a$的相位,相应取值范围为$[-\pi, \pi]$。
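表1中振荡参数的含义可以数值验证:对任意Hermitian相干矩阵$\boldsymbol{T}$,按式(2)、式(3)旋转后,元素${\rm Re}[T_{12}(\theta)]$应与正弦模型$f(\theta)$逐点吻合。以下为一个numpy示意(非原文代码,函数名为此处假定):

```python
import numpy as np

def rotate_T(T, theta):
    """式(2)、式(3): 将相干矩阵T绕雷达视线旋转角度theta。"""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    R3 = np.array([[1, 0, 0], [0, c, s], [0, -s, c]], dtype=complex)
    return R3 @ T @ R3.conj().T

def osc_params_ReT12(T):
    """表1第1行: Re[T12(theta)]的振荡参数 {A, B, omega, theta0}。"""
    a, b = T[0, 1].real, T[0, 2].real          # Re[T12], Re[T13]
    A = np.hypot(a, b)                         # A = sqrt(Re^2[T12] + Re^2[T13])
    theta0 = 0.5 * np.angle(b + 1j * a)        # theta0 = (1/2)Angle{Re[T13]+j*Re[T12]}
    return A, 0.0, 2.0, theta0
```

对若干旋转角逐点比较两种计算方式,即可核对表1中该行参数的正确性。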

表  1  旋转域极化特征参数[12]
Table  1.  Polarimetric feature parameters derived from rotation domain[12]

| 散射矩阵元素项 | $A$ | $B$ | $\omega$ | $\theta_0 = \frac{1}{\omega}{\rm Angle}\{\cdot\}$ |
| --- | --- | --- | --- | --- |
| ${\rm Re}[T_{12}(\theta)]$ | $\sqrt{{\rm Re}^2[T_{12}]+{\rm Re}^2[T_{13}]}$ | $0$ | $2$ | ${\rm Re}[T_{13}]+{\rm j}\,{\rm Re}[T_{12}]$ |
| ${\rm Re}[T_{13}(\theta)]$ | $\sqrt{{\rm Re}^2[T_{12}]+{\rm Re}^2[T_{13}]}$ | $0$ | $2$ | $-{\rm Re}[T_{12}]+{\rm j}\,{\rm Re}[T_{13}]$ |
| ${\rm Im}[T_{12}(\theta)]$ | $\sqrt{{\rm Im}^2[T_{12}]+{\rm Im}^2[T_{13}]}$ | $0$ | $2$ | ${\rm Im}[T_{13}]+{\rm j}\,{\rm Im}[T_{12}]$ |
| ${\rm Im}[T_{13}(\theta)]$ | $\sqrt{{\rm Im}^2[T_{12}]+{\rm Im}^2[T_{13}]}$ | $0$ | $2$ | $-{\rm Im}[T_{12}]+{\rm j}\,{\rm Im}[T_{13}]$ |
| ${\rm Re}[T_{23}(\theta)]$ | $\sqrt{\frac{1}{4}(T_{33}-T_{22})^2+{\rm Re}^2[T_{23}]}$ | $0$ | $4$ | $\frac{1}{2}(T_{33}-T_{22})+{\rm j}\,{\rm Re}[T_{23}]$ |
| $T_{22}(\theta)$ | $\sqrt{\frac{1}{4}(T_{33}-T_{22})^2+{\rm Re}^2[T_{23}]}$ | $\frac{1}{2}(T_{22}+T_{33})$ | $4$ | ${\rm Re}[T_{23}]+{\rm j}\,\frac{1}{2}(T_{22}-T_{33})$ |
| $T_{33}(\theta)$ | $\sqrt{\frac{1}{4}(T_{33}-T_{22})^2+{\rm Re}^2[T_{23}]}$ | $\frac{1}{2}(T_{22}+T_{33})$ | $4$ | $-{\rm Re}[T_{23}]+{\rm j}\,\frac{1}{2}(T_{33}-T_{22})$ |
| $\lvert T_{12}(\theta)\rvert^2$ | $\sqrt{{\rm Re}^2[T_{12}T_{13}^*]+\frac{1}{4}(\lvert T_{13}\rvert^2-\lvert T_{12}\rvert^2)^2}$ | $\frac{1}{2}(\lvert T_{12}\rvert^2+\lvert T_{13}\rvert^2)$ | $4$ | ${\rm Re}[T_{12}T_{13}^*]+{\rm j}\,\frac{1}{2}(\lvert T_{12}\rvert^2-\lvert T_{13}\rvert^2)$ |
| $\lvert T_{13}(\theta)\rvert^2$ | $\sqrt{{\rm Re}^2[T_{12}T_{13}^*]+\frac{1}{4}(\lvert T_{13}\rvert^2-\lvert T_{12}\rvert^2)^2}$ | $\frac{1}{2}(\lvert T_{12}\rvert^2+\lvert T_{13}\rvert^2)$ | $4$ | $-{\rm Re}[T_{12}T_{13}^*]+{\rm j}\,\frac{1}{2}(\lvert T_{13}\rvert^2-\lvert T_{12}\rvert^2)$ |
| $\lvert T_{23}(\theta)\rvert^2$ | $\sqrt{\frac{1}{4}\left\{\frac{1}{4}(T_{33}-T_{22})^2+{\rm Re}^2[T_{23}]\right\}^2}$ | $\frac{1}{2}\left\{\frac{1}{4}(T_{33}-T_{22})^2+{\rm Re}^2[T_{23}]\right\}+{\rm Im}^2[T_{23}]$ | $8$ | $\frac{1}{2}(T_{33}-T_{22}){\rm Re}[T_{23}]+{\rm j}\,\frac{1}{2}\left[{\rm Re}^2[T_{23}]-\frac{1}{4}(T_{33}-T_{22})^2\right]$ |

基于上述振荡参数集,文献[12]还导出了一系列的极化角参数集,如极化零角参数、极化最大化角参数以及极化最小化角参数等。其中,极化零角参数的定义为在绕雷达视线的旋转域中使极化相干矩阵某元素取值为零的旋转角,即:

$$f(\theta_{\rm null}) = A\sin\left[\omega(\theta_{\rm null}+\theta_0)\right] + B = 0 \;\Rightarrow\; \theta_{\rm null} = -\theta_0 \tag{5}$$

其中,$\theta_{\rm null}$即极化零角参数。由于表1中相互独立的5个初始角度$\theta_0$分别为 θ0_Re[T12(θ)], θ0_Im[T12(θ)], θ0_Re[T23(θ)], θ0_|T12(θ)|² 和 θ0_|T23(θ)|²,故相应的极化零角参数有 θnull_Re[T12(θ)], θnull_Im[T12(θ)], θnull_Re[T23(θ)], θnull_|T12(θ)|² 和 θnull_|T23(θ)|²。由文献[12]可知,各初始角度与其相应极化零角参数所包含的极化信息是相互等价的,且极化零角参数具有相对明确的物理意义,故在本文的后续部分均以极化零角参数代替相应的初始角度。
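极化零角参数$\theta_{\rm null}=-\theta_0$的物理含义可以数值验证:以 θnull_Re[T12(θ)] 为例,将$\boldsymbol{T}$旋转该角度后,元素${\rm Re}[T_{12}]$应为零(numpy示意,非原文代码,函数名为此处假定):

```python
import numpy as np

def rotate_T(T, theta):
    """式(2)、式(3): 将相干矩阵T绕雷达视线旋转角度theta。"""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    R3 = np.array([[1, 0, 0], [0, c, s], [0, -s, c]], dtype=complex)
    return R3 @ T @ R3.conj().T

def null_angle_ReT12(T):
    """极化零角 theta_null = -theta0, 对应元素 Re[T12(theta)](表1第1行)。"""
    return -0.5 * np.angle(T[0, 2].real + 1j * T[0, 1].real)
```

其余$B=0$的元素项对应的极化零角可按表1中相应的$\theta_0$类似计算。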

文献[12]使用极化零角参数 θnull_Re[T12(θ)] 与 θnull_Im[T12(θ)] 的组合能够成功辨识7类不同农作物,初步证实了极化零角参数集对于不同地物类别具有较好的区分能力。在此基础上,本文挖掘利用旋转域极化特征所蕴含的目标在旋转域中的隐含信息,并将其应用于极化SAR地物分类。

    在此之前,需要基于地物分类的应用背景对众多的旋转域极化特征进行优选处理。在文献[12]所导出的一系列旋转域极化特征之中,以不同地物类别样本集相互之间的“类间距最大化”为准则,进行相应的旋转域极化特征优选。具体步骤为:首先对各旋转域极化特征参数进行归一化处理;然后将不同的地物类别两两组合形成若干的地物类别对;接着针对各地物类别对,以其中两地物类别之间的类间距为标准,优选出使其取值达到最大的旋转域极化特征,则每个地物类别对均对应于一个优选的旋转域极化特征;最后,将各地物类别对的优选结果进行“取并集”处理,进而得到最终的优选结果。
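上述"类间距最大化"优选流程可用如下Python代码示意。原文未给出类间距的具体定义,此处采用均值差与标准差之和的比值这一常见度量,仅为假定;函数名与数据组织方式也均为示意:

```python
import numpy as np

def select_features(samples_by_class, feature_names):
    """按"类间距最大化"准则优选特征: 对每个地物类别对选出类间距最大的特征, 再对各对的结果取并集。
    samples_by_class: {类别名: (样本数, 特征数)数组}。"""
    all_X = np.vstack(list(samples_by_class.values()))
    lo, hi = all_X.min(axis=0), all_X.max(axis=0)
    norm = {c: (X - lo) / (hi - lo + 1e-12) for c, X in samples_by_class.items()}  # 步骤1: 归一化
    classes = list(norm)
    selected = set()
    for i in range(len(classes)):                       # 步骤2: 类别两两组合
        for j in range(i + 1, len(classes)):
            a, b = norm[classes[i]], norm[classes[j]]
            # 假定的类间距度量: |均值差| / (标准差之和)
            d = np.abs(a.mean(axis=0) - b.mean(axis=0)) / (a.std(axis=0) + b.std(axis=0) + 1e-12)
            selected.add(feature_names[int(np.argmax(d))])  # 步骤3: 每对选出类间距最大的特征
    return selected                                     # 步骤4: 取并集
```

对于两类样本仅在第2个特征上可分的简单情形,该流程只会选出该特征。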

文献[12]所导出相互独立的旋转域极化特征共有12个,分别为 θnull_Re[T12(θ)], θnull_Im[T12(θ)], θnull_Re[T23(θ)], θnull_|T12(θ)|², θnull_|T23(θ)|², A_Re[T12(θ)], A_Im[T12(θ)], A_|T12(θ)|², A_|T23(θ)|², B_|T12(θ)|², B_T33(θ), B_|T23(θ)|²。针对之后实验部分所使用的AIRSAR数据(15类地物,两两组合形成105个地物类别对;其它说明见4.1节)以及多时相UAVSAR数据(7类地物,两两组合形成21个地物类别对;4个数据获取日期;其它说明见4.2节),上述特征优选流程所得结果如表2所示。

    表  2  针对不同极化SAR实测数据的特征优选结果
    Table  2.  Selected features for different PolSAR data
    实测数据 优选所得旋转域极化特征(相应地物类别对的个数)
    AIRSAR θnull_Re[T12(θ)](18), θnull_Im[T12(θ)](15), θnull_Re[T23(θ)](71), B_T33(θ)(1)
    UAVSAR 6月17日 θnull_Re[T12(θ)](5), θnull_Im[T12(θ)](12), θnull_Re[T23(θ)](4)
    6月22日 θnull_Re[T12(θ)](5), θnull_Im[T12(θ)](14), θnull_Re[T23(θ)](2)
    7月03日 θnull_Im[T12(θ)](3), θnull_Re[T23(θ)](18)
    7月17日 θnull_Re[T12(θ)](7), θnull_Im[T12(θ)](5), θnull_Re[T23(θ)](9)

综合考虑表2中的优选结果,并在追求较高地物分类精度的同时将两组实测数据优选得到的旋转域极化特征进行统一,本文优选部分的最终结果为3个极化零角参数,即 θnull_Re[T12(θ)], θnull_Im[T12(θ)] 和 θnull_Re[T23(θ)]。

为了将目标在旋转域中的隐含信息充分利用在极化SAR地物分类中,同时又发挥传统的旋转不变极化特征参数H/A/α/SPAN在极化散射机理解译方面的优点,本文提出了一种结合旋转域极化特征的极化SAR地物分类方法,其流程图如图1所示,相应的具体操作如下:

    图  1  本文方法具体流程图
    Figure  1.  Flowchart of proposed method

    (1) 在进行Cloude-Pottier分解之前,需要对极化SAR数据进行相干斑滤波处理。本文采用新近提出的一种基于矩阵相似性检验的SimiTest自适应相干斑滤波方法[16]对极化SAR数据进行滤波预处理。

    (2) 基于滤波后的极化相干矩阵,计算总散射能量SPAN。

(3) 同样地,基于滤波后的极化相干矩阵,进行Cloude-Pottier分解,得到极化特征量H/α/A。

    (4) 同时,将滤波后的极化相干矩阵绕雷达视线旋转,计算上述优选部分所得的3个极化零角参数。

    (5) 对上述7个极化特征参数分别进行归一化处理,以作为地物分类特征集输入至SVM分类器。

    (6) 通过SVM相应的训练与测试过程,实现对不同地物类别的分类处理。
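步骤(5)、(6)的归一化与SVM分类流程可用scikit-learn示意如下(训练/测试各半,与文中实验设置一致;函数名与合成数据均为示意,非原文实现):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def classify_with_svm(features, labels, seed=0):
    """features: (n, 7)数组, 对应 H/alpha/A/SPAN 与3个极化零角参数(示意);
    样本一半训练、一半测试, 返回测试集总体分类精度。"""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    half = len(labels) // 2
    tr, te = idx[:half], idx[half:]
    scaler = MinMaxScaler().fit(features[tr])                                # 步骤(5): 特征归一化
    clf = SVC(kernel='rbf').fit(scaler.transform(features[tr]), labels[tr])  # 步骤(6): SVM训练
    return clf.score(scaler.transform(features[te]), labels[te])             # 测试集总体精度
```

对两类在特征空间中充分可分的合成样本,该流程应给出接近1的总体分类精度。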

为了验证新极化特征(即3个旋转域极化零角参数)的引入对于传统地物分类方法性能的提升作用,在对极化相干矩阵中全部极化信息进行利用的前提之下,将本文方法与仅使用旋转不变特征H/A/α/SPAN作为SVM分类器输入的传统方法进行对比。首先使用AIRSAR数据15类地物的分类验证本文方法的分类性能,再使用多时相UAVSAR数据7类地物的分类进一步验证本文方法对多时相数据的稳健性。在对此两组数据分别进行SimiTest相干斑滤波[16]时,所用滑窗大小均为15×15。对SVM分类器,各类地物样本的一半用于训练,另一半用于测试。

    本文首先使用NASA/JPL AIRSAR系统在荷兰Flevoland地区所获取的L波段全极化SAR数据进行地物分类实验。该数据方位向分辨率为12.1 m,距离向分辨率为6.6 m,所用区域大小为736×1010。SimiTest相干斑滤波后的Pauli RGB图如图2(a)所示。该区域的真值图如图2(b)所示,其中主要包含茎豆、豌豆、森林、苜蓿、小麦1、甜菜、土豆、裸地、草地、油菜籽、大麦、小麦2、小麦3、水域以及建筑物等15类地物。

    图  2  AIRSAR数据
    Figure  2.  AIRSAR data

    使用传统方法和本文方法分别对滤波后的数据进行分类处理,所得结果如图3所示。

    图  3  AIRSAR数据的分类结果
    Figure  3.  Classification results of AIRSAR data

    两种方法对AIRSAR数据15类地物分类处理所得精度如表3所示。通过比较可知,本文方法得到的总体分类精度为92.3%,优于传统方法91.1%的分类精度。且本文方法对草地77.3%的分类精度相较于传统方法的59.3%提升了18个百分点。另外,由于SVM分类器所用分类策略以总体分类精度的最大化为目标,无法保证单一地物类别的分类精度均达到最优。例如,本文方法在苜蓿、小麦1、裸地、大麦以及建筑物等5种地物类别区域所得分类精度均不及传统方法。针对其中分类精度差距最大(约8.3%)的裸地,由于其相应区域的主要散射机制为“面散射”,不同方位取向对其后向散射的影响较小,使用传统的旋转不变极化特征已经能较好地对其进行区分与辨识,本文方法额外引入的3个旋转域极化零角参数可能造成了分类信息的冗余,进而导致所得分类精度的较大幅度下降。

    表  3  两种方法所得AIRSAR数据15类地物及总体的分类精度(%)
    Table  3.  Classification accuracy of different terrains in AIRSAR data using two methods (%)
    地物 传统方法 本文方法
    茎豆 97.2 98.0
    豌豆 93.7 96.9
    森林 92.6 93.7
    苜蓿 96.8 96.6
    小麦1 88.7 85.9
    甜菜 93.8 93.8
    土豆 92.6 93.3
    裸地 95.5 87.2
    草地 59.3 77.3
    油菜籽 83.9 88.0
    大麦 92.6 91.5
    小麦2 89.2 89.4
    小麦3 94.3 95.9
    水域 98.0 98.5
    建筑物 84.9 83.2
    总体精度 91.1 92.3

    本文使用NASA/JPL UAVSAR系统在加拿大Manitoba地区所获取的多时相L波段全极化SAR数据进行地物分类实验。该数据方位向分辨率为7 m,距离向分辨率为5 m,所用区域大小为1325×1011。多时相极化SAR数据分别获取于6月17日、6月22日、7月3日以及7月17日。SimiTest相干斑滤波处理之后多时相极化SAR数据对应的Pauli RGB图如图4所示。该区域的主要地物类型是以谷物和油种产品为代表的混合型牧场农作物。相应的真值图如图5所示,其中主要包含阔叶林、草料、大豆、玉米、小麦、油菜籽以及燕麦等7类地物。

    图  4  多时相UAVSAR数据滤波后Pauli RGB图
    Figure  4.  Filtered Pauli RGB images of multi-temporal UAVSAR data
    图  5  所用区域的真值图
Figure  5.  Ground truth of the multi-temporal data

使用传统方法和本文方法分别对滤波后的多时相极化SAR数据进行相互独立的分类处理,所得结果分别如图6和图7所示。

    图  6  传统方法对多时相UAVSAR数据分类结果
    Figure  6.  Classification results of multi-temporal UAVSAR data using conventional method
    图  7  本文方法对多时相UAVSAR数据分类结果
    Figure  7.  Classification results of multi-temporal UAVSAR data using proposed method

如图6(c)和图7(c)所示,基于7月3日获取的数据,传统方法将红色圆框内小麦与燕麦的绝大部分错分为了大豆,而本文方法在该区域的分类性能相较于前者有显著提升。又如图6(d)和图7(d)所示,基于7月17日获取的数据,传统方法将白色圆框内小麦的绝大部分错分为了大豆,而本文方法在该区域的分类精度相较于前者也有较大提升。

    两种方法对多时相UAVSAR数据7类地物分类处理所得精度如表4所示。通过比较可知,对不同日期获取的数据,本文方法所得各类地物及总体的分类精度均优于或相当于传统方法。其中,对6月17日、6月22日、7月3日以及7月17日4个不同日期所获取的数据,本文方法得到的总体分类精度分别为94.98%, 95.12%, 95.99%以及96.78%,而传统方法所得总体分类精度则波动于80.87%至90.75%之间,出现约10%的起伏。具体就小麦和燕麦而言,本文方法得到的分类精度均分别保持在94%和92%以上,而传统方法所得相应分类精度则分别出现了约30%和23%的波动起伏。另外,本文方法95.72%的平均总体分类精度相较于传统方法的87.80%提升了约8个百分点。故本文方法较好的分类性能对于同一系统的多时相数据更具稳健性。

    表  4  两种方法所得多时相UAVSAR数据7类地物及总体的分类精度 (%)
    Table  4.  The classification accuracy of different terrains in multi-temporal UAVSAR data using two methods (%)
    日期 方法 阔叶林 草料 大豆 玉米 小麦 油菜籽 燕麦 总体
    6月17日 传统 98.47 62.24 92.64 96.12 93.63 91.70 86.37 90.19
    本文 98.49 81.65 96.76 98.19 96.08 92.25 96.32 94.98
    6月22日 传统 98.05 61.38 94.14 97.30 97.89 93.82 77.29 90.75
    本文 97.96 72.60 96.86 98.18 97.07 96.84 95.13 95.12
    7月3日 传统 97.41 54.38 90.45 98.89 68.75 98.81 63.46 80.87
    本文 97.77 76.68 98.12 99.08 96.95 98.93 94.22 95.99
    7月17日 传统 96.86 64.51 97.38 99.78 84.76 92.19 82.98 89.39
    本文 97.27 93.15 99.31 99.58 94.73 99.71 92.16 96.78
    平均 传统 97.70 60.63 93.65 98.02 86.26 94.13 77.53 87.80
    本文 97.87 81.02 97.76 98.76 96.21 96.93 94.46 95.72

    另外,对于6月22日所获取数据中的阔叶林和小麦,以及7月17日所获取数据中的玉米,本文方法所得分类精度均略低于传统方法,且分类精度的差距均在1%以内。

    在上述两组相互独立的对比实验所得结果中,本文方法所得分类精度均优于传统方法。故本文方法所表现出的较好分类性能对于不同系统的数据也具有较强稳健性。

目标方位取向对其后向散射响应的直接影响极易引起散射机理的解译模糊,进而限制仅使用旋转不变特征参数作为分类特征集的极化SAR地物分类所得精度。针对这一问题,本文将刻画目标旋转域隐含信息的旋转域极化特征用于极化SAR地物分类,并提出了一种结合旋转域极化特征和旋转不变特征H/A/α/SPAN的极化SAR地物分类方法,该方法将旋转域极化零角参数和H/A/α/SPAN联合作为分类特征集输入至SVM分类器。

将本文方法与仅使用旋转不变特征H/A/α/SPAN作为SVM分类器输入的传统方法进行比较:对AIRSAR数据15类地物分类而言,本文方法总体分类精度达到92.3%,优于传统方法的91.1%;对多时相UAVSAR数据7类地物分类而言,本文方法平均总体分类精度达到95.72%,显著优于传统方法的87.80%,表明本文方法对同一系统的多时相数据更具稳健性。这两组对比实验也表明本文方法较好的分类性能对于不同系统的数据具有较强稳健性。

    通过对旋转域中目标极化散射信息的深入挖掘,能够为极化SAR图像的解译与应用提供一条新的可行途径。下一步将考虑旋转域极化特征与具有深度学习能力的卷积神经网络等分类器相结合,以实现更高的分类精度。另外,对极化特征参数更优的选择准则及相互融合也是我们未来将要深入研究讨论的内容。

  • 图  1  Sentinel-1卫星在不同成像条件下拍摄的SAR图像[2]

    Figure  1.  The SAR images obtained by Sentinel-1 under different imaging conditions[2]

    图  2  物理可解释的深度学习 SAR 图像解译应从多个维度开展研究,充分结合数据驱动和知识驱动的模型,逐步发展可学习且可解释的智能化图像解译新范式

    Figure  2.  The PXDL for SAR image interpretation is supposed to be carried out from multiple aspects, that deeply integrates the data-driven and knowledge-driven models to develop the novel learnable and explainable intelligent paradigm

    图  3  SAR图像解译思路,①②③④⑤表示可以发展物理可解释深度学习方法的模块

    Figure  3.  The SAR image interpretation guideline, ①②③④⑤ are the potential modules to develop PXDL

    图  4  文献[50]给出的全极化SAR图像 H/α 平面,以及选取的部分地物样本在其中的分布

    Figure  4.  The H/α plane for full-polarized SAR data and the selected land-use and land-cover samples distributed in Ref. [50]

    图  5  基于时频分析和极化特征扩展时频分析模型的无监督学习方法在不同极化SAR图像上的结果比较[92]

    Figure  5.  The unsupervised learning results of different polarized SAR images based on TFA and pol-extended TFA models[92]

    图  6  物理引导与注入式学习

    Figure  6.  Physics guided and injected learning

    图  7  文献[11]所提的SAR图像分类框架Deep SAR-Net (DSN)

    Figure  7.  The SAR image classification framework Deep SAR-Net (DSN) in Ref. [11]

    图  8  无监督的物理引导学习与CNN监督分类学习在训练集与测试集数据上的特征可视化[100]

    Figure  8.  The feature visualization of the unsupervised physics guided learning and supervised CNN classification on training and test set[100]

    图  9  基于ASC模型初始化的复数卷积神经网络第一层卷积核幅度可视化[106]

    Figure  9.  The amplitude images of convolution kernels in the first layer of CV-CNN based on ASC model initialization[106]

    图  10  不同SAR图像建筑物分割数据集和算法示例[121,123]

    Figure  10.  The different SAR image building segmentation datasets and algorithms[121,123]


  • [1] CUMMING I G, WONG F H, 洪文, 胡东辉, 韩冰, 等译. 合成孔径雷达成像算法与实现[M]. 北京: 电子工业出版社, 2019, 93–100.

    CUMMING I G, WONG F H, HONG Wen, HU Donghui, HAN Bing, et al. translation. Digital Processing of Synthetic Aperture Radar Data Algorithms and Implementation[M]. Beijing: Publishing House of Electronics Industry, 2019, 93–100.
    [2] 黄钟泠. 面向合成孔径雷达图像分类的深度学习方法研究[D]. [博士论文], 中国科学院大学, 2020: 59.

    HUANG Zhongling. A study on synthetic aperture radar image classification with deep learning[D]. [Ph. D. dissertation], University of Chinese Academy of Sciences, 2020: 59.
    [3] 谷秀昌, 付琨, 仇晓兰. SAR图像判读解译基础[M]. 北京: 科学出版社, 2017.

GU Xiuchang, FU Kun, and QIU Xiaolan. Fundamentals of SAR Image Interpretation[M]. Beijing: Science Press, 2017.
    [4] OLIVER C and QUEGAN S. Understanding Synthetic Aperture Radar Images[M]. London: SciTech Publishing, 2004.
    [5] GAO Gui, OUYANG Kewei, LUO Yongbo, et al. Scheme of parameter estimation for generalized gamma distribution and its application to ship detection in SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(3): 1812–1832. doi: 10.1109/TGRS.2016.2634862
    [6] LENG Xiangguang, JI Kefeng, ZHOU Shilin, et al. Ship detection based on complex signal kurtosis in single-channel SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(9): 6447–6461. doi: 10.1109/TGRS.2019.2906054
    [7] CHEN Sizhe, WANG Haipeng, XU Feng, et al. Target classification using the deep convolutional networks for SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(8): 4806–4817. doi: 10.1109/TGRS.2016.2551720
    [8] HUANG Zhongling, DUMITRU C O, PAN Zongxu, et al. Classification of large-scale high-resolution SAR images with deep transfer learning[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 18(1): 107–111. doi: 10.1109/LGRS.2020.2965558
    [9] HUANG Zhongling, PAN Zongxu, and LEI Bin. Transfer learning with deep convolutional neural network for SAR target classification with limited labeled data[J]. Remote Sensing, 2017, 9(9): 907. doi: 10.3390/rs9090907
    [10] HUANG Zhongling, PAN Zongxu, and LEI Bin. What, where, and how to transfer in SAR target recognition based on deep CNNs[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(4): 2324–2336. doi: 10.1109/TGRS.2019.2947634
    [11] HUANG Zhongling, DATCU M, PAN Zongxu, et al. Deep SAR-Net: Learning objects from signals[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 161: 179–193. doi: 10.1016/j.isprsjprs.2020.01.016
    [12] 金亚秋. 多模式遥感智能信息与目标识别: 微波视觉的物理智能[J]. 雷达学报, 2019, 8(6): 710–716. doi: 10.12000/JR19083

    JIN Yaqiu. Multimode remote sensing intelligent information and target recognition: Physical intelligence of microwave vision[J]. Journal of Radars, 2019, 8(6): 710–716. doi: 10.12000/JR19083
    [13] 张钹, 朱军, 苏航. 迈向第三代人工智能[J]. 中国科学:信息科学, 2020, 50(9): 1281–1302. doi: 10.1360/SSI-2020-0204

    ZHANG Bo, ZHU Jun, and SU Hang. Toward the third generation of artificial intelligence[J]. SCIENTIA SINICA Informationis, 2020, 50(9): 1281–1302. doi: 10.1360/SSI-2020-0204
    [14] DAS A and RAD P. Opportunities and challenges in explainable artificial intelligence (XAI): A survey[OL]. arXiv: 2006.11371, 2020.
    [15] BAI Xiao, WANG Xiang, LIU Xianglong, et al. Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments[J]. Pattern Recognition, 2021, 120: 108102. doi: 10.1016/j.patcog.2021.108102
    [16] ANGELOV P and SOARES E. Towards explainable deep neural networks (xDNN)[J]. Neural Networks, 2020, 130: 185–194. doi: 10.1016/j.neunet.2020.07.010
    [17] MOLNAR C. Interpretable machine learning: A guide for making black box models explainable[EB/OL]. https://christophm.github.io/interpretable-ml-book/, 2021.
    [18] CAMBURU O M. Explaining deep neural networks[D]. [Ph. D. dissertation], Oxford University, 2020.
    [19] 李玮杰, 杨威, 刘永祥, 等. 雷达图像深度学习模型的可解释性研究与探索[J]. 中国科学: 信息科学, 待出版. doi: 10.1360/SSI-2021-0102.

    LI Weijie, YANG Wei, LIU Yongxiang, et al. Research and exploration on interpretability of deep learning model in radar image[J]. SCIENTIA SINICA Informationis, in press. doi: 10.1360/SSI-2021-0102.
    [20] BELLONI C, BALLERI A, AOUF N, et al. Explainability of deep SAR ATR through feature analysis[J]. IEEE Transactions on Aerospace and Electronic Systems, 2021, 57(1): 659–673. doi: 10.1109/TAES.2020.3031435
    [21] 郭炜炜, 张增辉, 郁文贤, 等. SAR图像目标识别的可解释性问题探讨[J]. 雷达学报, 2020, 9(3): 462–476. doi: 10.12000/JR20059

    GUO Weiwei, ZHANG Zenghui, YU Wenxian, et al. Perspective on explainable SAR target recognition[J]. Journal of Radars, 2020, 9(3): 462–476. doi: 10.12000/JR20059
    [22] KARNIADAKIS G E, KEVREKIDIS I G, LU Lu, et al. Physics-informed machine learning[J]. Nature Reviews Physics, 2021, 3(6): 422–440. doi: 10.1038/s42254-021-00314-5
    [23] THUEREY N, HOLL P, MUELLER M, et al. Physics-based deep learning[OL]. arXiv: 2109.05237, 2021.
    [24] RAISSI M, PERDIKARIS P, and KARNIADAKIS G E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations[J]. Journal of Computational Physics, 2019, 378: 686–707. doi: 10.1016/j.jcp.2018.10.045
    [25] MENG Xuhui, LI Zhen, ZHANG Dongkun, et al. PPINN: Parareal physics-informed neural network for time-dependent PDEs[J]. Computer Methods in Applied Mechanics and Engineering, 2020, 370: 113250. doi: 10.1016/j.cma.2020.113250
    [26] GOSWAMI S, ANITESCU C, CHAKRABORTY S, et al. Transfer learning enhanced physics informed neural network for phase-field modeling of fracture[J]. Theoretical and Applied Fracture Mechanics, 2020, 106: 102447. doi: 10.1016/j.tafmec.2019.102447
    [27] KARPATNE A, EBERT-UPHOFF I, RAVELA S, et al. Machine learning for the geosciences: Challenges and opportunities[J]. IEEE Transactions on Knowledge and Data Engineering, 2019, 31(8): 1544–1554. doi: 10.1109/TKDE.2018.2861006
    [28] CAMPS-VALLS G, REICHSTEIN M, ZHU Xiaoxiang, et al. Advancing deep learning for earth sciences: From hybrid modeling to interpretability[C]. IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 3979–3982. doi: 10.1109/IGARSS39084.2020.9323558.
    [29] REICHSTEIN M, CAMPS-VALLS G, STEVENS B, et al. Deep learning and process understanding for data-driven Earth system science[J]. Nature, 2019, 566(7743): 195–204. doi: 10.1038/s41586-019-0912-1
    [30] CAMPS-VALLS G, SVENDSEN D H, CORTÉS-ANDRÉS J, et al. Physics-aware machine learning for geosciences and remote sensing[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 2086–2089. doi: 10.1109/IGARSS47720.2021.9554521.
    [31] JIA Xiaowei, WILLARD J, KARPATNE A, et al. Physics guided RNNs for modeling dynamical systems: A case study in simulating lake temperature profiles[C]. The 2019 SIAM International Conference on Data Mining, Calgary, Canada, 2019: 558–566. doi: 10.1137/1.9781611975673.63.
    [32] DAW A, KARPATNE A, WATKINS W, et al. Physics-guided neural networks (PGNN): An application in lake temperature modeling[OL]. arXiv: 1710.11431, 2021. doi: https://arxiv.org/abs/1710.11431.
    [33] BEUCLER T, PRITCHARD M, GENTINE P, et al. Towards physically-consistent, data-driven models of convection[C]. IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 3987–3990. doi: 10.1109/IGARSS39084.2020.9324569.
    [34] SHEN Huanfeng, JIANG Menghui, LI Jie, et al. Coupling model-driven and data-driven methods for remote sensing image restoration and fusion[OL]. arXiv: 2108.06073, 2021.
    [35] WANG Yuqing, WANG Qi, LU Wenkai, et al. Physics-constrained seismic impedance inversion based on deep learning[J]. IEEE Geoscience and Remote Sensing Letters, 2021: 1–5. doi: 10.1109/LGRS.2021.3072132
    [36] XIA Wenchao, ZHENG Gan, WONG K K, et al. Model-driven beamforming neural networks[J]. IEEE Wireless Communications, 2020, 27(1): 68–75. doi: 10.1109/MWC.001.1900239
    [37] ZHANG Juping, XIA Wenchao, YOU Minglei, et al. Deep learning enabled optimization of downlink beamforming under per-antenna power constraints: Algorithms and experimental demonstration[J]. IEEE Transactions on Wireless Communications, 2020, 19(6): 3738–3752. doi: 10.1109/TWC.2020.2977340
    [38] ZHU Xiaoxiang, MONTAZERI S, ALI M, et al. Deep learning meets SAR: Concepts, models, pitfalls, and perspectives[J]. IEEE Geoscience and Remote Sensing Magazine, in press. doi: 10.1109/MGRS.2020.3046356.
    [39] MALMGREN-HANSEN D, KUSK A, DALL J, et al. Improving SAR automatic target recognition models with transfer learning from simulated data[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(9): 1484–1488. doi: 10.1109/LGRS.2017.2717486
    [40] 文贡坚, 朱国强, 殷红成, 等. 基于三维电磁散射参数化模型的SAR目标识别方法[J]. 雷达学报, 2017, 6(2): 115–135. doi: 10.12000/JR17034

    WEN Gongjian, ZHU Guoqiang, YIN Hongcheng, et al. SAR ATR based on 3D parametric electromagnetic scattering model[J]. Journal of Radars, 2017, 6(2): 115–135. doi: 10.12000/JR17034
    [41] 罗迎, 倪嘉成, 张群. 基于“数据驱动+智能学习”的合成孔径雷达学习成像[J]. 雷达学报, 2020, 9(1): 107–122. doi: 10.12000/JR19103

    LUO Ying, NI Jiacheng, and ZHANG Qun. Synthetic aperture radar learning-imaging method based on data-driven technique and artificial intelligence[J]. Journal of Radars, 2020, 9(1): 107–122. doi: 10.12000/JR19103
    [42] CHAN T H, JIA Kui, GAO Shenghua, et al. PCANet: A simple deep learning baseline for image classification?[J]. IEEE Transactions on Image Processing, 2015, 24(12): 5017–5032. doi: 10.1109/TIP.2015.2475625
    [43] LI Mengke, LI Ming, ZHANG Peng, et al. SAR image change detection using PCANet guided by saliency detection[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(3): 402–406. doi: 10.1109/LGRS.2018.2876616
    [44] WANG Rongfang, ZHANG Jie, CHEN Jiawei, et al. Imbalanced learning-based automatic SAR images change detection by morphologically supervised PCA-net[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(4): 554–558. doi: 10.1109/LGRS.2018.2878420
    [45] CLOUDE S and POTTIER E. An entropy based classification scheme for land applications of polarimetric SAR[J]. IEEE Transactions on Geoscience and Remote Sensing, 1997, 35(1): 68–78. doi: 10.1109/36.551935
    [46] YAMAGUCHI Y, YAJIMA Y, and YAMADA H. A four-component decomposition of POLSAR images based on the coherency matrix[J]. IEEE Geoscience and Remote Sensing Letters, 2006, 3(3): 292–296. doi: 10.1109/LGRS.2006.869986
    [47] FERRO-FAMIL L, REIGBER A, and POTTIER E. Scene characterization using sub-aperture polarimetric interferometric SAR data[C]. IGARSS 2003-2003 IEEE International Geoscience and Remote Sensing Symposium, Toulouse, France, 2003: 702–704. doi: 10.1109/IGARSS.2003.1293889.
    [48] POTTER L C and MOSES R L. Attributed scattering centers for SAR ATR[J]. IEEE Transactions on Image Processing, 1997, 6(1): 79–91. doi: 10.1109/83.552098
    [49] JI Kefeng and WU Yonghui. Scattering mechanism extraction by a modified cloude-pottier decomposition for dual polarization SAR[J]. Remote Sensing, 2015, 7(6): 7447–7470. doi: 10.3390/rs70607447
    [50] YONEZAWA C, WATANABE M, and SAITO G. Polarimetric decomposition analysis of ALOS PALSAR observation data before and after a landslide event[J]. Remote Sensing, 2012, 4(8): 2314–2328. doi: 10.3390/rs4082314
    [51] NIU Shengren, QIU Xiaolan, LEI Bin, et al. Parameter extraction based on deep neural network for SAR target simulation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(7): 4901–4914. doi: 10.1109/TGRS.2020.2968493
    [52] NIU Shengren, QIU Xiaolan, LEI Bin, et al. A SAR target image simulation method with DNN embedded to calculate electromagnetic reflection[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 2593–2610. doi: 10.1109/JSTARS.2021.3056920
    [53] GUO Jiayi, LEI Bin, DING Chibiao, et al. Synthetic aperture radar image synthesis by using generative adversarial nets[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(7): 1111–1115. doi: 10.1109/LGRS.2017.2699196
    [54] OH J and KIM M. PeaceGAN: A GAN-based multi-task learning method for SAR target image generation with a pose estimator and an auxiliary classifier[J]. Remote Sensing, 2021, 13(19): 3939. doi: 10.3390/rs13193939
    [55] CUI Zongyong, ZHANG Mingrui, CAO Zongjie, et al. Image data augmentation for SAR sensor via generative adversarial nets[J]. IEEE Access, 2019, 7: 42255–42268. doi: 10.1109/ACCESS.2019.2907728
    [56] SONG Qian, XU Feng, and JIN Yaqiu. SAR image representation learning with adversarial autoencoder networks[C]. IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 2019: 9498–9501. doi: 10.1109/IGARSS.2019.8898922.
    [57] WANG Ke, ZHANG Gong, LENG Yang, et al. Synthetic aperture radar image generation with deep generative models[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(6): 912–916. doi: 10.1109/LGRS.2018.2884898
    [58] HU Xiaowei, FENG Weike, GUO Yiduo, et al. Feature learning for SAR target recognition with unknown classes by using CVAE-GAN[J]. Remote Sensing, 2021, 13(18): 3554. doi: 10.3390/rs13183554
    [59] XIE You, FRANZ E, CHU Mengyu, et al. TempoGAN: A temporally coherent, volumetric GAN for super-resolution fluid flow[J]. ACM Transactions on Graphics, 2018, 37(4): 95.
    [60] CHU Mengyu, THUEREY N, SEIDEL H P, et al. Learning meaningful controls for fluids[J]. ACM Transactions on Graphics, 2021, 40(4): 100. doi: 10.1145/3450626.3459845
    [61] QIAN Jiang, HUANG Shaoyin, WANG Lu, et al. Super-resolution ISAR imaging for maneuvering target based on deep-learning-assisted time-frequency analysis[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 60: 5201514. doi: 10.1109/TGRS.2021.3050189
    [62] LIANG Jiadian, WEI Shunjun, WANG Mou, et al. ISAR compressive sensing imaging using convolution neural network with interpretable optimization[C]. IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 2483–2486. doi: 10.1109/IGARSS39084.2020.9323601.
    [63] GREGOR K and LECUN Y. Learning fast approximations of sparse coding[C]. 27th International Conference on Machine Learning, Haifa, Israel, 2010: 399–406.
    [64] LIU Jialin, CHEN Xiaohan, WANG Zhangyang, et al. ALISTA: Analytic weights are as good as learned weights in LISTA[C]. The 7th International Conference on Learning Representations, New Orleans, USA, 2019, 1–33.
    [65] BEHRENS F, SAUDER J, and JUNG P. Neurally augmented ALISTA[C]. The 9th International Conference on Learning Representations, Virtual Event, Austria, 2021: 1–10.
    [66] YANG Yan, SUN Jian, LI Huibin, et al. Deep ADMM-Net for compressive sensing MRI[C]. The 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 2016: 10–18. doi: 10.5555/3157096.3157098.
    [67] YANG Yan, SUN Jian, LI Huibin, et al. ADMM-CSNet: A deep learning approach for image compressive sensing[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(3): 521–538. doi: 10.1109/TPAMI.2018.2883941
    [68] MASON E, YONEL B, and YAZICI B. Deep learning for SAR image formation[C]. SPIE 10201, Algorithms for Synthetic Aperture Radar Imagery XXIV, Anaheim, USA, 2017: 1020104. doi: 10.1117/12.2267831.
[69] GAO Jingkun, DENG Bin, QIN Yuliang, et al. Enhanced radar imaging using a complex-valued convolutional neural network[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(1): 35–39. doi: 10.1109/LGRS.2018.2866567
    [70] HU Changyu, WANG Ling, LI Ze, et al. Inverse synthetic aperture radar imaging using a fully convolutional neural network[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(7): 1203–1207. doi: 10.1109/LGRS.2019.2943069
    [71] ALVER M B, SALEEM A, and ÇETIN M. Plug-and-play synthetic aperture radar image formation using deep priors[J]. IEEE Transactions on Computational Imaging, 2021, 7: 43–57. doi: 10.1109/TCI.2020.3047473
    [72] WANG Mou, WEI Shunjun, LIANG Jiadian, et al. TPSSI-Net: Fast and enhanced two-path iterative network for 3D SAR sparse imaging[J]. IEEE Transactions on Image Processing, 2021, 30: 7317–7332. doi: 10.1109/TIP.2021.3104168
    [73] HU Changyu, LI Ze, WANG Ling, et al. Inverse synthetic aperture radar imaging using a deep ADMM network[C]. 20th International Radar Symposium (IRS), Ulm, Germany, 2019: 1–9. doi: 10.23919/IRS.2019.8768138.
    [74] LI Xiaoyong, BAI Xueru, and ZHOU Feng. High-resolution ISAR imaging and autofocusing via 2d-ADMM-net[J]. Remote Sensing, 2021, 13(12): 2326. doi: 10.3390/rs13122326
    [75] LI Ruize, ZHANG Shuanghui, ZHANG Chi, et al. Deep learning approach for sparse aperture ISAR imaging and autofocusing based on complex-valued ADMM-net[J]. IEEE Sensors Journal, 2021, 21(3): 3437–3451. doi: 10.1109/JSEN.2020.3025053
    [76] HU Xiaowei, XU Feng, GUO Yiduo, et al. MDLI-Net: Model-driven learning imaging network for high-resolution microwave imaging with large rotating angle and sparse sampling[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–17. doi: 10.1109/TGRS.2021.3110579
    [77] RATHA D, GAMBA P, BHATTACHARYA A, et al. Novel techniques for built-up area extraction from polarimetric SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(1): 177–181. doi: 10.1109/LGRS.2019.2914913
    [78] AO Dongyang, DATCU M, SCHWARZ G, et al. Moving ship velocity estimation using TanDEM-X data based on subaperture decomposition[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(10): 1560–1564. doi: 10.1109/LGRS.2018.2846399
    [79] LIAO Mingsheng, WANG Ru, YANG Mengshi, et al. Techniques and applications of spaceborne time-series InSAR in urban dynamic monitoring[J]. Journal of Radars, 2020, 9(3): 409–424. doi: 10.12000/JR20022
    [80] SICA F, GOBBI G, RIZZOLI P, et al. Φ-Net: Deep residual learning for InSAR parameters estimation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(5): 3917–3941. doi: 10.1109/TGRS.2020.3020427
    [81] SONG Qian, XU Feng, and JIN Yaqiu. Radar image colorization: Converting single-polarization to fully polarimetric using deep neural networks[J]. IEEE Access, 2018, 6: 1647–1661. doi: 10.1109/ACCESS.2017.2779875
    [82] ZHAO Juanping, DATCU M, ZHANG Zenghui, et al. Contrastive-regulated CNN in the complex domain: A method to learn physical scattering signatures from flexible PolSAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(12): 10116–10135. doi: 10.1109/TGRS.2019.2931620
    [83] QU Junrong, QIU Xiaolan, and DING Chibiao. A study of recovering POLSAR information from single-polarized data using DNN[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 812–815. doi: 10.1109/IGARSS47720.2021.9554304.
    [84] CHENG Zezhou, YANG Qingxiong, and SHENG Bin. Deep colorization[C]. The IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 415–423. doi: 10.1109/ICCV.2015.55.
    [85] LUAN Fujun, PARIS S, SHECHTMAN E, et al. Deep photo style transfer[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 6997–7005. doi: 10.1109/CVPR.2017.740.
    [86] JI Guang, WANG Zhaohui, ZHOU Lifan, et al. SAR image colorization using multidomain cycle-consistency generative adversarial network[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 18(2): 296–300. doi: 10.1109/LGRS.2020.2969891
    [87] TUPIN F and TISON C. Sub-aperture decomposition for SAR urban area analysis[C]. European Conference on Synthetic Aperture Radar (EUSAR), Ulm, Germany, 2004: 431–434.
    [88] BOVENGA F, DERAUW D, RANA F M, et al. Multi-chromatic analysis of SAR images for coherent target detection[J]. Remote Sensing, 2014, 6(9): 8822–8843. doi: 10.3390/rs6098822
    [89] SPIGAI M, TISON C, and SOUYRIS J C. Time-frequency analysis in high-resolution SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(7): 2699–2711. doi: 10.1109/TGRS.2011.2107914
    [90] FERRO-FAMIL L, REIGBER A, POTTIER E, et al. Scene characterization using subaperture polarimetric SAR data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2003, 41(10): 2264–2276. doi: 10.1109/TGRS.2003.817188
    [91] HUANG Zhongling, DATCU M, PAN Zongxu, et al. HDEC-TFA: An unsupervised learning approach for discovering physical scattering properties of single-polarized SAR image[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(4): 3054–3071. doi: 10.1109/TGRS.2020.3014335
    [92] HUANG Zhongling, DATCU M, PAN Zongxu, et al. A hybrid and explainable deep learning framework for SAR images[C]. IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 1727–1730. doi: 10.1109/IGARSS39084.2020.9323845.
    [93] DE S, CLANTON C, BICKERTON S, et al. Exploring the relationships between scattering physics and auto-encoder latent-space embedding[C]. IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 3501–3504. doi: 10.1109/IGARSS39084.2020.9323410.
    [94] HUANG Zhongling, YAO Xiwen, DUMITRU C O, et al. Physically explainable CNN for SAR image classification[OL]. arXiv: 2110.14144, 2021.
    [95] ZHANG Jinsong, XING Mengdao, and XIE Yiyuan. FEC: A feature fusion framework for SAR target recognition based on electromagnetic scattering features and deep CNN features[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(3): 2174–2187. doi: 10.1109/TGRS.2020.3003264
    [96] LEI Songlin, QIU Xiaolan, DING Chibiao, et al. A feature enhancement method based on the sub-aperture decomposition for rotating frame ship detection in SAR images[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 3573–3576. doi: 10.1109/IGARSS47720.2021.9553635.
    [97] THEAGARAJAN R, BHANU B, ERPEK T, et al. Integrating deep learning-based data driven and model-based approaches for inverse synthetic aperture radar target recognition[J]. Optical Engineering, 2020, 59(5): 051407. doi: 10.1117/1.OE.59.5.051407
    [98] HORI C, HORI T, LEE T Y, et al. Attention-based multimodal fusion for video description[C]. The IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017: 4203–4212. doi: 10.1109/ICCV.2017.450.
    [99] PORIA S, CAMBRIA E, BAJPAI R, et al. A review of affective computing: From unimodal analysis to multimodal fusion[J]. Information Fusion, 2017, 37: 98–125. doi: 10.1016/j.inffus.2017.02.003
    [100] HUANG Zhongling, DUMITRU C O, and REN Jun. Physics-aware feature learning of SAR images with deep neural networks: A case study[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 1264–1267. doi: 10.1109/IGARSS47720.2021.9554842.
    [101] LEE J S, GRUNES M R, AINSWORTH T L, et al. Unsupervised classification using polarimetric decomposition and the complex Wishart classifier[J]. IEEE Transactions on Geoscience and Remote Sensing, 1999, 37(5): 2249–2258. doi: 10.1109/36.789621
    [102] RATHA D, BHATTACHARYA A, and FRERY A C. Unsupervised classification of PolSAR data using a scattering similarity measure derived from a geodesic distance[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(1): 151–155. doi: 10.1109/LGRS.2017.2778749
    [103] LI Yi, DU Lan, and WEI Di. Multiscale CNN based on component analysis for SAR ATR[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–12. doi: 10.1109/TGRS.2021.3100137
    [104] FENG Sijia, JI Kefeng, ZHANG Linbin, et al. SAR target classification based on integration of ASC parts model and deep learning algorithm[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 10213–10225. doi: 10.1109/JSTARS.2021.3116979
    [105] LIU Qingshu and LANG Liang. MMFF: Multi-manifold feature fusion based neural networks for target recognition in complex-valued SAR imagery[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, 180: 151–162. doi: 10.1016/j.isprsjprs.2021.08.008
    [106] LIU Jiaming, XING Mengdao, YU Hanwen, et al. EFTL: Complex convolutional networks with electromagnetic feature transfer learning for SAR target recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–11. doi: 10.1109/TGRS.2021.3083261
    [107] CUI Yuanhao, LIU Fang, JIAO Licheng, et al. Polarimetric multipath convolutional neural network for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–18. doi: 10.1109/TGRS.2021.3071559
    [108] DAW A, THOMAS R Q, CAREY C C, et al. Physics-guided architecture (PGA) of neural networks for quantifying uncertainty in lake temperature modeling[C]. The 2020 SIAM International Conference on Data Mining (SDM), Cincinnati, USA, 2020: 532–540.
    [109] SUN Jian, NIU Zhan, INNANEN K A, et al. A theory-guided deep-learning formulation and optimization of seismic waveform inversion[J]. Geophysics, 2020, 85(2): R87–R99. doi: 10.1190/geo2019-0138.1
    [110] HE Qishan, ZHAO Lingjun, JI Kefeng, et al. SAR target recognition based on task-driven domain adaptation using simulated data[J]. IEEE Geoscience and Remote Sensing Letters, 2021: 1–5. doi: 10.1109/LGRS.2021.3116707
    [111] ZHANG Linbin, LENG Xiangguang, FENG Sijia, et al. Domain knowledge powered two-stream deep network for few-shot SAR vehicle recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–15. doi: 10.1109/TGRS.2021.3116349
    [112] AGARWAL T, SUGAVANAM N, and ERTIN E. Sparse signal models for data augmentation in deep learning ATR[C]. IEEE Radar Conference, Florence, Italy, 2020: 1–6. doi: 10.1109/RadarConf2043947.2020.9266382.
    [113] DIEMUNSCH J R and WISSINGER J. Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: Search technology for a robust ATR[C]. Proceedings of SPIE 3370, Algorithms for Synthetic Aperture Radar Imagery V, Orlando, USA, 1998: 481–492. doi: 10.1117/12.321851.
    [114] HUANG Lanqing, LIU Bin, LI Boying, et al. OpenSARShip: A dataset dedicated to Sentinel-1 ship interpretation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(1): 195–208. doi: 10.1109/JSTARS.2017.2755672
    [115] SUN Xian, WANG Zhirui, SUN Yuanrui, et al. AIR-SARShip-1.0: High-resolution SAR ship detection dataset[J]. Journal of Radars, 2019, 8(6): 852–862. doi: 10.12000/JR19097
    [116] DU Lan, WANG Zhaocheng, WANG Yan, et al. Survey of research progress on target detection and discrimination of single-channel SAR images for complex scenes[J]. Journal of Radars, 2020, 9(1): 34–54. doi: 10.12000/JR19104
    [117] CHEN Siwei and TAO Chensong. PolSAR image classification using polarimetric-feature-driven deep convolutional neural network[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(4): 627–631. doi: 10.1109/LGRS.2018.2799877
    [118] LIU Xu, JIAO Licheng, TANG Xu, et al. Polarimetric convolutional network for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(5): 3040–3054. doi: 10.1109/TGRS.2018.2879984
    [119] BI Haixia, SUN Jian, and XU Zongben. A graph-based semisupervised deep learning model for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(4): 2116–2132. doi: 10.1109/TGRS.2018.2871504
    [120] VINAYARAJ P, SUGIMOTO R, NAKAMURA R, et al. Transfer learning with CNNs for segmentation of PALSAR-2 power decomposition components[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, 13: 6352–6361. doi: 10.1109/JSTARS.2020.3031020
    [121] XIA Junshi, YOKOYA N, ADRIANO B, et al. A benchmark high-resolution GaoFen-3 SAR dataset for building semantic segmentation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 5950–5963. doi: 10.1109/JSTARS.2021.3085122
    [122] WU Fan, WANG Chao, ZHANG Hong, et al. Built-up area mapping in China from GF-3 SAR imagery based on the framework of deep learning[J]. Remote Sensing of Environment, 2021, 262: 112515. doi: 10.1016/j.rse.2021.112515
    [123] CHEN Jiankun, QIU Xiaolan, DING Chibiao, et al. CVCMFF Net: Complex-valued convolutional and multifeature fusion network for building semantic segmentation of InSAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–14. doi: 10.1109/TGRS.2021.3068124
    [124] SHI Xianzheng, FU Shilei, CHEN Jin, et al. Object-level semantic segmentation on the high-resolution Gaofen-3 FUSAR-map dataset[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 3107–3119. doi: 10.1109/JSTARS.2021.3063797
    [125] QIU Xiaolan, JIAO Zekun, PENG Lingxiao, et al. SARMV3D-1.0: Synthetic aperture radar microwave vision 3D imaging dataset[J]. Journal of Radars, 2021, 10(4): 485–498. doi: 10.12000/JR21112

Publication history
  • Received: 2021-11-04
  • Revised: 2021-12-08
  • Available online: 2021-12-31
  • Issue published: 2022-02-28

目录

/

返回文章
返回