Progress and Perspective on Physically Explainable Deep Learning for Synthetic Aperture Radar Image Interpretation
-
Abstract: Deep learning technologies have developed rapidly in Synthetic Aperture Radar (SAR) image interpretation. However, current data-driven methods neglect the latent physical characteristics of SAR; thus, their predictions are highly dependent on training data and may even violate physical laws. Deep integration of theory-driven and data-driven approaches is therefore of vital importance for SAR image interpretation: data-driven methods specialize in automatically discovering patterns from large amounts of data and serve as effective complements to physical processes, whereas integrating interpretable physical models improves the transparency of deep learning algorithms and reduces their dependence on labeled samples. This study aims to develop physically explainable deep learning for SAR image interpretation across signals, scattering mechanisms, semantics, and applications. Strategies for blending theory-driven and data-driven methods in SAR interpretation are proposed on the basis of physics-informed machine learning, with the goal of developing a novel learnable and explainable paradigm for SAR image interpretation. Recent studies on hybrid methods are then reviewed, covering SAR signal processing, physical characteristics, and semantic image interpretation. Finally, challenges and future perspectives are discussed on the basis of the research status and related studies in other fields, which can serve as inspiration.
-
1. Introduction
Imaging radar, with its stability and high resolution, is widely used in military reconnaissance and civilian surveying, with Synthetic Aperture Radar (SAR) as the representative imaging regime [1,2]. However, constrained by their imaging mechanisms, conventional SAR imaging and Doppler Beam Sharpening (DBS) cannot image the region directly ahead of the platform, and both squint and side-looking SAR modes have limited applicability there [3]. Moreover, SAR imaging performance is constrained by accumulation time, the Rayleigh limit of beam resolution, and signal-to-noise ratio, so its applicability in complex electromagnetic environments is severely challenged.
Vortex electromagnetic wave imaging radar adopts a new imaging regime in which the wavefront phase is modulated by Orbital Angular Momentum (OAM) [4,5], producing a characteristic phase distribution. In theory, an unlimited number of orthogonal modes can be generated, which simplifies the mathematical treatment of vortex waves and can improve imaging resolution and algorithmic efficiency. For electromagnetic vortex radar applications, researchers have studied real-aperture imaging, SAR, and Inverse Synthetic Aperture Radar (ISAR) [6−8], and vortex waves have shown excellent performance in typical forward-looking radar imaging applications [9].
Regarding vortex radar imaging algorithms, Guo et al. [10] studied the mechanism and feasibility of electromagnetic vortex radar imaging in 2013, laying the foundation for applying vortex waves to radar imaging and sensing. Around 2018, researchers combined the SAR regime with single-mode two-dimensional imaging [11] and multimode three-dimensional synthetic aperture imaging [12], showing that electromagnetic vortex waves can be used for SAR imaging and outperform plane waves. In 2019, Fang et al. [13] modified the conventional Chirp Scaling algorithm to realize vortex-wave SAR imaging, and Bu et al. [14] combined vortex waves with Interferometric Synthetic Aperture Radar (InSAR) measurement to image targets in three dimensions. In 2020, Yuan et al. [15] applied vortex waves to human gait recognition and established a vortex-wave radar echo model of the human body. In 2022, high-resolution vortex radar techniques were studied [16] in pursuit of breaking existing resolution limits and achieving high-precision vortex-wave imaging.
For forward-looking radar imaging, three-dimensional imaging algorithms under a large field of view are still immature, and electromagnetic vortex radar imaging remains in its infancy. To image large-field-of-view scenes accurately, this paper adopts a time-division, independently transmitted multimode vortex-wave scanning scheme and builds a vortex echo signal model from the forward-looking imaging geometry and the uniform circular array configuration. On the basis of the Back Projection (BP) algorithm, Doppler grids and Bessel amplitude compensation for large fields of view are added, and the duality between the target vortex azimuth angle and the OAM mode is exploited to obtain target azimuth-elevation information. A signal processing gain is defined on the imaging results, and its variation with elevation angle is computed: over the elevation range required by practical scenes, the normalized gain drops by no more than 1.8 dB, showing that the algorithm exploits the energy distributed across the modes and achieves good imaging performance. Finally, range-elevation-azimuth three-dimensional imaging of forward-looking scenes is analyzed through both simulation and real measurements, including analysis of the output signal-to-noise ratio and the signal processing gain; the algorithm also performs well on measured data.
2. Imaging Geometry Model
An electromagnetic vortex field is formed by combining multiple excitation sources and can be generated in several ways, such as multipoint, line, or surface sources. The Uniform Circular Array (UCA) regime uses multichannel phase control to steer different beam patterns, giving the system strong flexibility and adaptability; because it can excite vortex waves of multiple modes simultaneously, it is one of the preferred schemes for electromagnetic vortex radar systems. In the forward-looking vortex radar imaging model shown in Fig. 1, azimuth resolution comes from the distinct spatial phase modulation of the multimode vortex waves, while range resolution comes from the signal bandwidth. Without loss of generality, taking the target area as the reference frame, a Cartesian coordinate system O-XYZ is established with the initial platform position as the origin; the platform moves along the X axis with speed $v_\eta$, and the imaging area lies ahead of and to the side of the platform's direction of motion.
Zooming into Fig. 1, the target parameters in the UCA local coordinate system are defined as in Fig. 2. At slow time $\eta$, a Cartesian system O'-XYZ is established with the current platform position $(x_\eta, 0, 0)$ as the origin, so the UCA lies in the YO'Z plane. The N transmit antennas are uniformly distributed on a circle of radius $r_a$ centered at O', and the receive antenna is located at the origin O' of the YO'Z plane. The distance from a target to the receive antenna is defined as the target slant range $r_m$; the target elevation angle $\theta_m$ is defined as the angle between the slant range vector and the X axis; and the target azimuth angle $\varphi_m$ is defined as the angle between the projection of the slant range vector onto the YO'Z plane and the positive Y axis.
3. Echo Signal Modeling
The transmitted signal adopts the time-division multimode scheme shown in Fig. 3, where the time-domain waveform of each transmit element carries an initial phase modulation determined by the current mode. Each mode lasts one pulse duration $T_p$; the radar transmits $N_\alpha$ vortex modes in turn within one period, giving a total accumulation time $T = N_\alpha K_n T_p$.
Time-division multimode transmission places low demands on the system and is well compatible with conventional SAR systems. Compared with the simultaneous multimode transmission mechanism that relies on waveform diversity for OAM demodulation [17], it can scan a large spatial region quickly and gathers more spatial information within one period. Because the signals of different modes are separated in time, the multimode signals can be separated without OAM demodulation, which speeds up processing and suits airborne or missile-borne radar scenarios requiring real-time, few-snapshot imaging of large areas. Whichever waveform scheme is used, the aim is to acquire information in the mode dimension, and the imaging method proposed below remains applicable.
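As a minimal sketch (not the authors' code), the time-division schedule of Fig. 3 can be laid out as below; the mode range follows Table 2, while the number of pulses per mode `Kn` is an assumed illustrative value.

```python
import numpy as np

# Time-division multimode schedule: the radar steps through N_alpha OAM
# modes, dwelling Kn pulses of width Tp on each, so the total accumulation
# time is T = N_alpha * Kn * Tp.
Tp = 0.54e-6                        # pulse duration (s), from Table 2
Kn = 4                              # pulses per mode (illustrative assumption)
modes = np.arange(-30, 31)          # mode range [-30, 30], from Table 2
N_alpha = modes.size

pulse_mode = np.repeat(modes, Kn)             # mode carried by each pulse
pulse_t0 = np.arange(pulse_mode.size) * Tp    # transmit instant of each pulse
T = N_alpha * Kn * Tp
assert np.isclose(pulse_t0[-1] + Tp, T)       # the schedule fills one period
```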
Each element of the uniform circular array transmits a linear frequency modulated signal of bandwidth B and chirp rate K with identical amplitude. The azimuth angle of the n-th element is $\phi_n = 2\pi(n-1)/N$, $n = 1, 2, \cdots, N$, where N is the number of elements. To synthesize a vortex wave of mode $\alpha$ in the far field, different phase shifts are applied to the elements. The transmitted signal of the n-th element is
$$S_n(\alpha, t) = p(t)\cdot\exp(\mathrm{j}\alpha\phi_n),\quad n = 1, 2, \cdots, N \tag{1}$$
where t is the fast-time variable and p(t) is the linear frequency modulated envelope, given by
$$p(t) = \mathrm{rect}\left(\frac{t}{T_p}\right)\cdot\exp(\mathrm{j}\pi K t^2) \tag{2}$$
According to the forward-looking imaging geometry of Figs. 1 and 2 and the UCA element positions, the position vector of the n-th element in the YO'Z plane is $\boldsymbol{r}_n = r_a(\hat{\boldsymbol{y}}\cos\phi_n + \hat{\boldsymbol{z}}\sin\phi_n)$, where $\hat{\boldsymbol{y}}$ and $\hat{\boldsymbol{z}}$ are the unit vectors of the Y and Z axes and $r_a$ is the UCA radius. The unit vector of the position vector $\boldsymbol{r}_m$ of a far-field target $P_m(r_m, \theta_m, \varphi_m)$ is $\hat{\boldsymbol{r}}_m = \hat{\boldsymbol{y}}\sin\theta_m\cos\varphi_m + \hat{\boldsymbol{z}}\sin\theta_m\sin\varphi_m + \hat{\boldsymbol{x}}\cos\theta_m$. The actual distance from the n-th transmit element to the target is $|\boldsymbol{r}_m - \boldsymbol{r}_n|$, and the distance from the target to the receive element is $|\boldsymbol{r}_m| = r_m$. Under this two-way propagation, the pulse echo at the receive element is
$$S_{nr}(\alpha, t) = \sum_{n=1}^{N}\sigma_m\cdot p(t-\tau_m)\cdot\exp(\mathrm{j}\alpha\phi_n)\cdot\exp[\mathrm{j}2\pi f_\mathrm{d}(t-\tau_m)]\cdot\exp[-\mathrm{j}k(r_m + |\boldsymbol{r}_m - \boldsymbol{r}_n|)] \tag{3}$$
where $\sigma_m$ is the backscattering coefficient, $\tau_m = 2r_m/c$ with c the speed of light, $\exp[-\mathrm{j}k(r_m + |\boldsymbol{r}_m - \boldsymbol{r}_n|)]$ is the propagation phase, $k = 2\pi/\lambda$ is the wavenumber, $\lambda$ is the wavelength, $f_\mathrm{c}$ is the carrier frequency, and $f_\mathrm{d}$ is the Doppler shift. Under the far-field condition, the propagation phase can be approximated using $r_m + |\boldsymbol{r}_m - \boldsymbol{r}_n| \approx 2r_m - \hat{\boldsymbol{r}}_m\cdot\boldsymbol{r}_n = 2r_m - r_a\sin\theta_m\cos(\varphi_m - \phi_n)$, so Eq. (3) becomes
$$S_{nr}(\alpha, t) = \sigma_m\cdot p(t-\tau_m)\cdot\exp[\mathrm{j}2\pi f_\mathrm{d}(t-\tau_m)]\cdot\sum_{n=1}^{N}\left\{\exp(\mathrm{j}\alpha\phi_n)\cdot\exp(-\mathrm{j}2kr_m)\cdot\exp[\mathrm{j}kr_a\sin\theta_m\cos(\varphi_m - \phi_n)]\right\} \tag{4}$$
When the number of elements N is sufficiently large, the sum in Eq. (4) can be approximated by an integral:
$$\begin{aligned}
S_{nr}(\alpha,t) &\approx \sigma_m p(t-\tau_m)\exp[\mathrm{j}2\pi f_\mathrm{d}(t-\tau_m)]\exp(-\mathrm{j}2kr_m)\cdot\frac{N}{2\pi}\int_0^{2\pi}\exp(\mathrm{j}\alpha\phi)\exp[\mathrm{j}kr_a\sin\theta_m\cos(\varphi_m-\phi)]\,\mathrm{d}\phi\\
&\overset{\varphi_m-\phi=\varphi}{=}\sigma_m p(t-\tau_m)\exp[\mathrm{j}2\pi f_\mathrm{d}(t-\tau_m)]\exp(-\mathrm{j}2kr_m)\cdot\frac{N}{2\pi}\exp(\mathrm{j}\alpha\varphi_m)\int_0^{2\pi}\exp(-\mathrm{j}\alpha\varphi)\exp[\mathrm{j}kr_a\sin\theta_m\cos\varphi]\,\mathrm{d}\varphi\\
&= N\mathrm{j}^{\alpha}\sigma_m p(t-\tau_m)\exp[\mathrm{j}2\pi f_\mathrm{d}(t-\tau_m)]\exp(-\mathrm{j}2kr_m)\exp(\mathrm{j}\alpha\varphi_m)J_{\alpha}(kr_a\sin\theta_m)
\end{aligned} \tag{5}$$
where $J_{\alpha}(\cdot)$ is the first-kind Bessel function of order $\alpha$. Eq. (5) uses the following transformation of the integral form of the Bessel function [18]:
$$J_{\alpha}(kr_a\sin\theta_m) = \frac{\mathrm{j}^{\alpha}}{2\pi}\int_0^{2\pi}\exp(-\mathrm{j}kr_a\sin\theta_m\cos\varphi)\exp(-\mathrm{j}\alpha\varphi)\,\mathrm{d}\varphi = \frac{(-1)^{\alpha}\mathrm{j}^{\alpha}}{2\pi}\int_0^{2\pi}\exp(\mathrm{j}kr_a\sin\theta_m\cos\varphi)\exp(-\mathrm{j}\alpha\varphi)\,\mathrm{d}\varphi \tag{6}$$
Considering the relative motion between radar and targets in the imaging scene, the slant ranges $r_m$ and elevation angles $\theta_m$, $m = 1, 2, \cdots, M$, of multiple scatterers vary with slow time $\eta$, giving the echo of an M-point extended target during imaging:
$$S_r(t, \alpha, \eta) = N\mathrm{j}^{\alpha}\sum_{m=1}^{M}\left\{\sigma_m J_{\alpha}[kr_a\sin\theta_m(\eta)]\exp(\mathrm{j}\alpha\varphi_m)\exp[\mathrm{j}2\pi f_\mathrm{d}(t-\tau_m)]\exp[-\mathrm{j}2kr_m(\eta)]\cdot\mathrm{rect}\left[\frac{t - 2r_m(\eta)/c}{T_p}\right]\exp\left[\mathrm{j}\pi K\left(t - \frac{2r_m(\eta)}{c}\right)^2\right]\right\} \tag{7}$$
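The closed form of Eq. (5) can be checked numerically against the discrete element sum of Eq. (4). The sketch below (illustrative, with parameters borrowed from Table 2 and Section 4.2) verifies that the UCA sum converges to $N\mathrm{j}^{\alpha}J_\alpha(kr_a\sin\theta_m)\exp(\mathrm{j}\alpha\varphi_m)$ when $|\alpha| \ll N/2$:

```python
import numpy as np
from scipy.special import jv

# Compare the N-element sum of Eq. (4) with the Bessel closed form of Eq. (5).
lam = 3e8 / 35e9                 # wavelength at 35 GHz (Table 2)
k = 2 * np.pi / lam
ra = 12 * lam                    # UCA radius used in Section 4.2
N = 64                           # number of elements
alpha, theta, phi_m = 5, 0.10 * np.pi, 0.3   # mode, elevation, azimuth (example)

phi_n = 2 * np.pi * np.arange(N) / N
x = k * ra * np.sin(theta)
discrete = np.sum(np.exp(1j * alpha * phi_n) *
                  np.exp(1j * x * np.cos(phi_m - phi_n)))
closed = N * (1j ** alpha) * jv(alpha, x) * np.exp(1j * alpha * phi_m)
print(abs(discrete - closed) / abs(closed))   # small when |alpha| << N/2
```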
4. Imaging Processing Algorithm
4.1 Three-dimensional target imaging based on back projection and range-Doppler processing
The goal of imaging processing is to separate the target information from echo phases coupled across the range, elevation, and azimuth dimensions. The overall flow of the proposed algorithm is shown in Fig. 4. First, range compression is performed by matched filtering. Second, range and elevation grids are constructed as in the back projection algorithm, and a Doppler grid is added on top of them. The grids are then projected along slow time while phase and amplitude compensation is applied, yielding matched target elevation information. Finally, a Fourier transform over the mode dimension extracts the target azimuth information, and noncoherent accumulation over the Doppler dimension gathers more off-grid target information and raises the target signal-to-noise ratio.
Pulse compression of the signal in Eq. (7) yields
$$S_{r\mathrm{comp}}(t, \alpha, \eta) = N\mathrm{j}^{\alpha}\sum_{m=1}^{M}\left\{\sigma_m J_{\alpha}[kr_a\sin\theta_m(\eta)]\exp(\mathrm{j}\alpha\varphi_m)\exp[-\mathrm{j}2kr_m(\eta)]\cdot p_r(\cdot)\right\} \tag{8}$$
where $p_r(\cdot)$ is the range envelope after matched filtering:
$$p_r(\cdot) = |K|T_p\cdot\mathrm{sinc}\left\{|K|T_p\left(t - \frac{2r_m(\eta)}{c} - \frac{f_\mathrm{d}}{K}\right)\right\}$$
For each mode-slow-time sample, the reflected signal represents the aggregate scattering of all targets illuminated by the mainlobe and sidelobes of that mode; it contains the target range cell, the propagation phase, the vortex phase of the current mode, and the range-migration phase caused by the radar-target relative motion and the Doppler shift.
The BP algorithm first discretizes the scene into grids. At each slow-time sample, it selects the range cell corresponding to each grid point according to its slant range, computes the grid point's phase factor for phase compensation, and finally accumulates the compensated signals into an image at the grid points. In essence, BP designs a time-domain matched filter for every grid point, which effectively resolves the difficulty of decoupling the multidimensional spatial phases of the echo. However, under a large field of view, the grid density becomes a limitation: a sparse grid loses much of the information from off-grid targets, whereas a dense grid greatly increases the computational load. How to capture more target information without increasing the number of grid points is therefore a pressing problem for few-snapshot, wide-area imaging.
This paper combines the Doppler shift with the BP algorithm: Doppler grids are added using the known relative velocity to compensate the range offsets that Doppler frequency variations introduce into the range and elevation grid transforms; considering the computational load of few-snapshot imaging, the projection computation over the BP-RD grids is replaced by a linear approximation. For either the time-division multimode transmission of Fig. 3 or a simultaneous multimode regime based on waveform diversity, the BP-RD algorithm can precisely compensate the multidimensionally coupled phase and focus the target points. The implementation of the BP-RD imaging algorithm is detailed below.
(1) Gridding the imaging scene
With the platform at the starting point O, the imaging scene is divided into a three-dimensional $(R_g, \theta_g, \varphi_g)$ grid, as shown in Fig. 5. The range grid extends along the radar line of sight; the azimuth grid $\varphi_g$ is divided along the direction of increasing angle between the line of sight and the Y axis, with a maximum span of $-\pi \sim \pi$; the elevation grid $\theta_g$ is divided along the direction of increasing angle between the line of sight and the X axis, with a maximum span of $0 \sim \pi/2$. Finally, a Doppler grid $D_g$ is added on the existing range and elevation grids, computed as in Eq. (9). In practice, the grid extents can be narrowed according to the forward-looking radar parameters and the specific imaging requirements to speed up computation and reduce its cost.
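A minimal sketch of this gridding step follows; the grid spacings and the chirp rate $K = B/T_p$ are assumptions for illustration, not values fixed by the paper.

```python
import numpy as np

# Step (1): discretize the forward-looking scene into range / elevation /
# azimuth grids, then attach the Doppler grid D_g of Eq. (9) to each
# (R_g, theta_g) cell (evaluated here at eta = 0, where theta_eta = theta_g).
c, lam = 3e8, 3e8 / 35e9
K = 300e6 / 0.54e-6                              # chirp rate K = B/Tp (assumed)
v = 200.0                                        # platform speed (m/s), Sec. 4.3

Rg = np.arange(250.0, 350.0, 0.5)                # range grid (m), assumed spacing
theta_g = np.linspace(0, 0.15 * np.pi, 61)       # elevation grid (rad)
phi_g = np.linspace(-np.pi, np.pi, 121)          # azimuth grid (rad)

# Doppler range offset D_g = c * v * cos(theta) / (lam * K), one per elevation
Dg = c * v * np.cos(theta_g) / (lam * K)
R_mesh, T_mesh = np.meshgrid(Rg, theta_g, indexing="ij")   # 2D cell layout
```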
(2) Elevation transformation and range-Doppler delay computation
According to the geometric relation between the radar trajectory and the scene grid shown in Fig. 6, at the current along-track sample the starting-point grid is iteratively transformed onto the true range and elevation grids. The grid transformation is
$$\left\{\begin{aligned}
R_\eta(R_g, \theta_g) &= \sqrt{R_g^2 - 2R_g x_\eta\cos\theta_g + x_\eta^2}\\
\theta_\eta(R_g, \theta_g) &= \arccos[R_g\cos\theta_g/R_\eta]\\
D_g(R_g, \theta_g) &= c v_\eta\cos[\theta_\eta(R_g, \theta_g)]/(\lambda K)
\end{aligned}\right. \tag{9}$$
where $v_\eta$ is the platform speed and $x_\eta = \int v_\eta\,\mathrm{d}\eta$ is the X coordinate of each along-track position sample. Under the few-snapshot forward-looking imaging condition, $x_\eta \ll R_g$, and first-order Taylor expansion of $R_\eta$ and $\theta_\eta$ about $x_\eta/R_g \to 0$ gives
$$\left\{\begin{aligned}
R_\eta(R_g, \theta_g) &= R_g\sqrt{1 - 2\frac{x_\eta}{R_g}\cos\theta_g + \left(\frac{x_\eta}{R_g}\right)^2} \approx R_g - \cos\theta_g\cdot x_\eta\\
\theta_\eta(R_g, \theta_g) &= \arccos\left(\frac{\cos\theta_g}{\sqrt{1 - 2\dfrac{x_\eta}{R_g}\cos\theta_g + \left(\dfrac{x_\eta}{R_g}\right)^2}}\right) \approx \theta_g - \frac{\cos^2\theta_g}{2\sin\theta_g}\cdot\frac{x_\eta}{R_g}
\end{aligned}\right. \tag{10}$$
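The exact transform of Eq. (9) is cheap to evaluate directly, as sketched below for one grid cell over the slow-time samples; the slow-time interval and cell coordinates are assumed example values.

```python
import numpy as np

# Step (2): evaluate the exact grid transform of Eq. (9) over slow time,
# giving the instantaneous slant range, elevation, and Doppler range offset
# seen from each platform position x_eta (few-snapshot regime, x_eta << R_g).
c, lam = 3e8, 3e8 / 35e9
K = 300e6 / 0.54e-6                      # chirp rate K = B/Tp (assumed)
v = 200.0                                # platform speed (m/s)
eta = np.arange(64) * 1e-4               # slow-time instants (s), assumed PRI
x_eta = v * eta                          # platform positions along X

Rg, theta_g = 300.0, 0.12 * np.pi        # one (R_g, theta_g) cell
R_eta = np.sqrt(Rg**2 - 2 * Rg * x_eta * np.cos(theta_g) + x_eta**2)
theta_eta = np.arccos(Rg * np.cos(theta_g) / R_eta)
D_g = c * v * np.cos(theta_eta) / (lam * K)   # Doppler range offset per sample
```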
(3) Bessel amplitude modulation and propagation phase compensation
As Eq. (8) shows, the Bessel function couples the target elevation information with the azimuth patterns of the different vortex modes, so information at different elevations is modulated by the Bessel function in the amplitude spectrum over the mode dimension. This disturbs the correspondence between target azimuth and OAM mode, raising the pulse sidelobes in azimuth imaging and even producing grating-lobe interference. The Bessel function must therefore be compensated within the BP-RD algorithm to suppress grating lobes and eliminate, as far as possible, its influence on the subsequent accumulation. Under the positive-and-negative integer mode transmission scheme adopted here, the Bessel function exhibits the following symmetry over the mode dimension [9]:
$$J_{-\alpha}(\theta) = \begin{cases}-J_{\alpha}(\theta), & \alpha\ \text{odd}\\ J_{\alpha}(\theta), & \alpha\ \text{even}\end{cases} \tag{11}$$
Consequently, amplitude cancellation occurs when signals of odd modes are accumulated. Because the BP algorithm behaves like a matched filter, the output signal-to-noise ratio is maximized when the compensation factor is the complex conjugate of the input signal. Using the conjugate form implied by Eq. (11) both removes the amplitude cancellation over the mode dimension and attains the maximum signal-to-noise ratio:
$$H_{\alpha} = (-\mathrm{j})^{\alpha}\cdot J_{\alpha}(kr_a\sin\theta_\eta)\cdot\exp[\mathrm{j}4\pi f_\mathrm{c} R_\eta(R_g, \theta_g)/c] \tag{12}$$
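A short sketch of Eq. (12) follows (parameters assumed from Table 2 and Section 4.2): the $(-\mathrm{j})^\alpha$ term and the signed Bessel amplitude undo the odd-mode sign flips of Eq. (11) before modal accumulation, and the exponential removes the two-way propagation phase of the grid cell.

```python
import numpy as np
from scipy.special import jv

# Step (3): compensation factor H_alpha of Eq. (12), one value per mode.
c, fc = 3e8, 35e9
lam = c / fc
k, ra = 2 * np.pi / lam, 12 * lam
modes = np.arange(-30, 31)                       # transmitted mode set
theta_eta, R_eta = 0.12 * np.pi, 298.14          # transformed grid cell (example)

H = ((-1j) ** modes) * jv(modes, k * ra * np.sin(theta_eta)) \
    * np.exp(1j * 4 * np.pi * fc * R_eta / c)
```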
(4) Fourier transform over modes and noncoherent Doppler accumulation
Owing to the duality between the target vortex azimuth angle $\varphi_m$ and the OAM mode number $\alpha$, a classical spectral estimation method suffices to obtain the target azimuth profile. The echo projected into each grid cell is multiplied by the compensation factor $H_\alpha$ to remove the range and phase offsets, then Fourier transformed over the mode dimension and noncoherently accumulated over the Doppler dimension, giving the imaging result of the target area in Eq. (13). Grid points containing no target scatterers accumulate little energy, whereas accumulating the echoes at grid points containing scatterers builds up energy and produces image peaks.
$$\delta(R_g, \theta_g, \varphi_g) = \sum_{f}\left|\mathrm{fft}_\eta\left\{\mathrm{fft}_\alpha\left\{S_{r\mathrm{comp}}[2(R_\eta + D_\eta)/c, \alpha, \eta]\cdot H_\alpha\right\}\right\}\right|^2 \tag{13}$$
where f is the Doppler frequency. By Parseval's theorem, the final expression is
$$\delta(R_g, \theta_g, \varphi_g) = \sum_{\eta}\left|\mathrm{fft}_\alpha\left\{S_{r\mathrm{comp}}[2(R_\eta + D_\eta)/c, \alpha, \eta]\cdot H_\alpha\right\}\right|^2 \tag{14}$$
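Eq. (14) reduces to a few array operations per grid cell; a minimal sketch with assumed array shapes (not the authors' implementation) is shown below.

```python
import numpy as np

def image_cell(S_rcomp: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Azimuth profile of one (R_g, theta_g) cell per Eq. (14).

    S_rcomp: pulse-compressed samples selected for the cell, [slow time, mode].
    H: compensation factors over the mode dimension, Eq. (12).
    """
    spectra = np.fft.fft(S_rcomp * H[None, :], axis=1)  # FFT over modes
    return (np.abs(spectra) ** 2).sum(axis=0)           # noncoherent sum over eta

# usage with placeholder data: 64 slow-time samples, 61 modes
rng = np.random.default_rng(0)
S = rng.standard_normal((64, 61)) + 1j * rng.standard_normal((64, 61))
profile = image_cell(S, np.ones(61, dtype=complex))
```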
4.2 Imaging performance analysis
Before analyzing imaging performance, the array synthesis effect should be examined. The simulation parameters in Fig. 7 are: UCA radius $12\lambda$, mode range [–34, 34], and target elevation angles of $0.10\pi$ and $0.17\pi$.
Figure 7 shows that at low elevations and low modes, a small number of elements already fits the ideal pattern well, but the required element count grows with elevation angle and mode number. For the forward-looking application, 64 elements support a maximum undistorted mode range of [–31, 31], corresponding to an elevation angle of $0.15\pi$, which meets the forward-looking spatial coverage requirement. The subsequent performance analysis is therefore based on vortex waves synthesized by 64 elements.
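The element-count requirement can be probed numerically, as in the sketch below (assumed 10% error tolerance, target azimuth set to zero): the N-element sum of Eq. (4) is compared with the ideal Bessel modulation of Eq. (5) for several modes at the edge elevation.

```python
import numpy as np
from scipy.special import jv

lam = 3e8 / 35e9
k, ra = 2 * np.pi / lam, 12 * lam        # Section 4.2 setup: ra = 12*lambda

def mode_error(N: int, alpha: int, theta: float) -> float:
    """Relative error of the N-element sum against N*j^alpha*J_alpha(.)."""
    phi_n = 2 * np.pi * np.arange(N) / N
    x = k * ra * np.sin(theta)
    s = np.sum(np.exp(1j * alpha * phi_n) * np.exp(1j * x * np.cos(phi_n)))
    ref = N * (1j ** alpha) * jv(alpha, x)
    return abs(s - ref) / max(abs(ref), 1e-12)

for alpha in (20, 31, 34):               # low and high modes
    ok = [N for N in range(16, 129, 8)
          if mode_error(N, alpha, 0.15 * np.pi) < 0.1]
    print(alpha, ok[:3])                 # smallest element counts that suffice
```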
The vortex waves synthesized by 64 elements are shown in Fig. 8, with element-circle radius $12\lambda$ and mode range [–34, 34]. As Fig. 8 shows, energy spreads increasingly across the modes as the elevation angle grows: as the elevation angle increases from 0 to $0.15\pi$, the peak response of the system decreases gradually, reaching a minimum of –13 dB at $0.15\pi$. Beyond $0.15\pi$, vortex-wave distortion sets in: the high modes break the Bessel modulation relation, the peak-response position shifts, and the peak response rises. Taking $0.16\pi$ as an example, in Figs. 8(b) and 8(c) the Bessel peak response at elevation $0.16\pi$ should lie at modes ±34, but the array-synthesized signal peaks at modes ±31, with the peak response raised by 4 dB. Therefore, with distortion at high modes and an amplitude modulation whose peak response varies by up to –13 dB at high elevations, it is necessary to verify that the algorithm still achieves a high signal processing gain and good imaging performance. Table 1 lists the signal processing gain at different target elevations, and Fig. 9 normalizes the gain with respect to its value at zero elevation. To approximate a realistic environment, receiver noise is added, and the multimode signal-to-noise ratio is taken as the input SNR.
$$\mathrm{SNR}_\mathrm{in} = 10\lg\left(\frac{P_r}{N_\alpha\cdot P_{N_0}}\right) \tag{15}$$
where $P_r$ is the total noise-free multimode echo power, $P_{N_0}$ is the receiver noise power, and $N_\alpha$ is the number of transmitted vortex modes.
Let $\delta_\mathrm{noise}$ denote the image amplitude produced by noise alone in a target-free scene, and $\delta_\mathrm{tar}$ the image amplitude with a target present. Taking the portion of $\delta_\mathrm{tar}$ within the 3 dB mainlobe width as $\delta_{3\,\mathrm{dB}}$, which can be regarded as the multimode imaging response of the target, the output signal-to-noise ratio is defined as
$$\mathrm{SNR}_\mathrm{out} = 10\lg\left(\frac{\sum\delta_{3\,\mathrm{dB}}}{\sum\delta_\mathrm{noise}}\right) \tag{16}$$
On this basis, the signal processing gain G measures how much the processing strengthens the signal while suppressing the input noise, defined as
$$G = \mathrm{SNR}_\mathrm{out} - \mathrm{SNR}_\mathrm{in} \tag{17}$$

Table 1. Signal processing gain at different elevation angles

| Elevation θ (rad) | Effective modes | Input SNR (dB) | Output SNR (dB) | Normalized signal processing gain (dB) |
| --- | --- | --- | --- | --- |
| 0 | [0] | 12.4140 | 52.3270 | 39.9130 |
| 0.01π | [–2, 2] | 12.4140 | 52.3231 | 39.9091 |
| 0.02π | [–4, 4] | 12.4140 | 52.3115 | 39.8975 |
| 0.03π | [–6, 6] | 12.4140 | 52.2922 | 39.8782 |
| 0.04π | [–8, 8] | 12.4140 | 52.2653 | 39.8513 |
| 0.05π | [–10, 10] | 12.4140 | 52.2307 | 39.8167 |
| 0.06π | [–12, 12] | 12.4140 | 52.1887 | 39.7747 |
| 0.07π | [–14, 14] | 12.4140 | 52.1393 | 39.7253 |
| 0.08π | [–16, 16] | 12.4140 | 52.0826 | 39.6686 |
| 0.09π | [–18, 18] | 12.4140 | 52.0188 | 39.6048 |
| 0.10π | [–20, 20] | 12.4140 | 51.9482 | 39.5342 |
| 0.11π | [–23, 23] | 12.4133 | 51.8693 | 39.4560 |
| 0.12π | [–25, 25] | 12.4010 | 51.7580 | 39.3570 |
| 0.13π | [–27, 27] | 12.2969 | 51.4341 | 39.1372 |
| 0.14π | [–28, 28] | 11.9384 | 50.5159 | 38.5775 |
| 0.15π | [–30, 30] | 11.7014 | 49.8458 | 38.1444 |
| 0.16π | [–32, 32] | 12.2512 | 51.0137 | 38.7625 |
| 0.17π | [–34, 34] | 12.2907 | 50.9907 | 38.7001 |

From Fig. 9 and Table 1, within the target region covered by the multimode beams, fewer modes are effective at low elevations, but each of those modes has a high amplitude-pattern gain; the situation is reversed at high elevations. As the elevation angle varies over $[0, 0.15\pi]$, the normalized equivalent gain after accumulation (mode gain plus accumulation gain) decreases with increasing elevation, with a maximum loss of –1.77 dB. At $0.16\pi$ and $0.17\pi$, both the input and output SNRs rise. In Fig. 8(c), at elevations $0.16\pi$ and $0.17\pi$ the zero-mode response is unchanged while the high-mode peak responses grow, so the noise-free multimode echo power $P_r$ in Eq. (15) increases and the accumulated $\delta_{3\,\mathrm{dB}}$ power increases with it, whereas the noise power and its image $\delta_\mathrm{noise}$ change little, which raises both the input and output SNRs. In the Ka band with a UCA radius of $12\lambda$, the normalized gain at 27° elevation is no lower than –1.8 dB: the algorithm exploits the energy distributed across the modes, the system remains highly stable over the required elevation range, and good imaging performance is achieved.
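Eqs. (15)–(17) translate directly into the small helper below (a sketch with placeholder inputs, not the authors' evaluation code); $\delta$ is treated as the power image of Eq. (14), so the 3 dB mainlobe is the region above half the peak.

```python
import numpy as np

def processing_gain(Pr, PN0, N_alpha, delta_tar, delta_noise):
    """Signal processing gain G = SNR_out - SNR_in, Eqs. (15)-(17)."""
    snr_in = 10 * np.log10(Pr / (N_alpha * PN0))          # Eq. (15)
    mainlobe = delta_tar >= delta_tar.max() / 2           # 3 dB region of power image
    snr_out = 10 * np.log10(delta_tar[mainlobe].sum()
                            / delta_noise.sum())          # Eq. (16)
    return snr_out - snr_in                               # Eq. (17)
```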
4.3 Comparative analysis
To quantify imaging performance, the proposed method is compared with methods proposed in earlier studies. Three-dimensional profiles of a single scatterer at (300.5 m, 0.12π rad, 0.19π rad) are shown in Fig. 10: Fig. 10(a) compares the BP-FFT and BP-FFT(Hp) methods, Fig. 10(b) compares the BP-FFT and BP-RD-FFT methods, and Fig. 10(c) compares the FFT and Burg methods. In particular, to distinguish the FFT and Burg estimators, BP-RD-FFT denotes in this subsection the BP-RD imaging method proposed in Section 4.1.
First consider the case without Doppler grids. The final image of the BP-FFT method is $\delta(R_g, \theta_g, \varphi_g) = \sum_\eta \mathrm{fft}_\alpha\{S_{r\mathrm{comp}}[2R_\eta/c, \alpha, \eta]\cdot H_\alpha\}$, while the BP-FFT(Hp) method uses the phase compensation proposed in Ref. [19]:
$$H_p = \begin{cases}-(-\mathrm{j})^{\alpha}, & J_{\alpha}(kr_a\sin\theta_\eta) < 0\\ (-\mathrm{j})^{\alpha}, & J_{\alpha}(kr_a\sin\theta_\eta) > 0\end{cases}$$
which accounts only for the sign of the Bessel function. Relative to BP-FFT(Hp), the BP-FFT method improves the elevation sidelobe level only slightly, indicating that Bessel amplitude compensation by itself contributes little to imaging performance. Once Doppler grids are added, the BP-RD-FFT method in Fig. 10(b) narrows the mainlobe and lowers the sidelobe level in both the range and elevation dimensions relative to BP-FFT, showing that the Doppler grids improve imaging performance appreciably. Super-resolution algorithms can improve performance further: compared with classical spectral analysis, the AR-model-based Burg method lowers sidelobe levels and achieves higher resolution [20]. In Fig. 10(c), BP-Burg and BP-RD-Burg are the Burg-based counterparts of BP-FFT and BP-RD-FFT; the Burg method markedly improves the imaging in the elevation and azimuth dimensions. The underlying reason is that azimuth resolution is limited by the number of modes, and Burg achieves azimuth super-resolution through extrapolation. Super-resolution, however, brings extra computation, so in practice the method should be chosen by balancing imaging performance against timeliness.
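For reference, a minimal Burg implementation is sketched below (a generic textbook form, not the authors' code); it replaces the modal FFT in `image_cell` with an AR pseudo-spectrum, trading computation for azimuth resolution when the mode count is small.

```python
import numpy as np

def burg_ar(x: np.ndarray, order: int) -> np.ndarray:
    """Burg's method: AR coefficients a (a[0] = 1) for a 1-D complex sequence."""
    a = np.array([1.0 + 0j])
    f, b = x[1:].astype(complex), x[:-1].astype(complex)  # forward/backward errors
    for _ in range(order):
        k = -2 * np.sum(f * np.conj(b)) / np.sum(np.abs(f)**2 + np.abs(b)**2)
        a_ext = np.append(a, 0)
        a = a_ext + k * np.conj(a_ext[::-1])              # Levinson-type update
        f, b = f[1:] + k * b[1:], b[:-1] + np.conj(k) * f[:-1]
    return a

# azimuth pseudo-spectrum from a short modal sequence (placeholder data):
# a single scatterer at azimuth 0.8 rad appears as the tone exp(j*0.8*n)
rng = np.random.default_rng(1)
n = np.arange(15)                                  # e.g. 15 modes in [-7, 7]
sig = np.exp(1j * 0.8 * n) + 0.05 * (rng.standard_normal(15)
                                     + 1j * rng.standard_normal(15))
spec = 1 / np.abs(np.fft.fft(burg_ar(sig, 6), 512)) ** 2
print(np.argmax(spec) * 2 * np.pi / 512)           # peak close to 0.8
```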
In practice, whether on an airborne or a missile-borne platform, the imaging algorithm must handle fast relative motion between platform and target. Beyond the performance gains above, the proposed algorithm effectively addresses the failure to focus correctly when the velocity estimate is inaccurate. With a nominal speed of 200 m/s and absolute velocity errors of up to 25 m/s, the five methods above were simulated under different estimation errors, yielding the focusing deviation versus velocity estimation error shown in Figs. 11(a) and 11(b). According to the simulations, the methods without Doppler grids focus poorly under large estimation errors, with maximum deviations approaching 20° in elevation and 90° in azimuth. The BP-RD-FFT and BP-RD-Burg methods with Doppler grids show far smaller focusing deviations in both dimensions, with a maximum elevation deviation below 1° and a greatly reduced azimuth sensitivity to velocity error, confirming the performance advantage of the proposed algorithm over the others.
5. Experiments and Analysis
5.1 Simulation experiments
Range-azimuth-elevation three-dimensional imaging of targets was simulated using multimode continuous constant-amplitude scanning beams, pulse compression, and BP-RD imaging processing, with the simulation parameters listed in Table 2.
Table 2. Simulation parameters

| Parameter | Value |
| --- | --- |
| R−θ−φ coordinates of target 1 (m, rad, rad) | (300, 0.10π, 0.055π) |
| R−θ−φ coordinates of target 2 (m, rad, rad) | (300, 0.15π, 0.055π) |
| Number of UCA elements N | 64 |
| UCA radius r_a (m) | 0.09 |
| Carrier frequency f_c (GHz) | 35 |
| Pulse duration T_p (μs) | 0.54 |
| Bandwidth B (MHz) | 300 |
| OAM mode range | [–30, 30] |

With the parameters of Table 2, good imaging resolution is obtained in all three dimensions. In Figs. 12 and 13, the image peaks of targets 1 and 2 are clearly focused at the correct positions, demonstrating the algorithm's focusing quality. Multiple point targets were then imaged: five targets at elevation angles [0π, 0.04π, 0.08π, 0.12π, 0.15π] were imaged together, with the results shown in Fig. 14. In the azimuth-elevation profile, sidelobes from the multiple targets cause individual peak responses to rise with elevation, but the overall trend remains a decay of less than 1.5 dB, consistent with the predicted variation of the signal processing gain. After smoothing the imaging results, the peak positions yield the three-dimensional target information.
5.2 Measured data analysis
A radar imaging scene was emulated in a microwave anechoic chamber (Fig. 15). An aircraft model was placed at the center of the chamber on a white foam support, in a level attitude with the wings slightly tilted. The vortex radar used a UCA with 16 transmit elements; the single receive element at the center of the circular array was aimed at the fuselage, and the whole model lay within the radar's multimode beam coverage. Theoretically, the maximum undistorted elevation angle of the test configuration is 10°, which places the wings at the edge of the field of view. The aircraft echo data were analyzed; the experimental parameters are listed in Table 3.
Table 3. Experimental parameters

| Parameter | Value |
| --- | --- |
| Distance to aircraft model center (m) | 4.5 |
| Model extent along X, Y, Z (m) | (1.5, 0.08, 1.15) |
| Number of elements N | 16 |
| UCA radius r_a (m) | 0.0615 |
| Carrier frequency f_c (GHz) | 35.025 |
| Pulse duration T_p (μs) | 0.54 |
| Bandwidth B (MHz) | 300 |
| OAM mode range | [–7, 7] |

Figure 16 shows the pulse-compression results for the different mode values within one period. At mode $\alpha = 0$, the phase modulation of the transmit elements is zero and the transmitted signal is a plain linear frequency modulated wave; the delay of its compressed echo peak corresponds to the target's range position and agrees with the aircraft placement. The Bessel amplitude modulation causes the range-domain peak amplitudes to differ across modes, and the range extent of the aircraft determines the mainlobe span over which each mode exceeds the background clutter.
Figures 17 and 18 show the final imaging results. In the measurements, limited by the model size and the number of transmit elements, imaging is good overall along the X and Z axes, where the model spans many range cells; along the Y axis, where the span is small, sidelobes grow and the image blurs. Consequently, low-RCS parts such as the fuselage and tail image poorly, whereas strong-RCS parts such as the nose and wings show high mainlobes in the image and are imaged well.
6. Conclusion
This paper combines vortex electromagnetic waves with forward-looking radar imaging. An electromagnetic vortex forward-looking radar imaging model based on a multiple-transmit, single-receive uniform circular array is established, and a time-division multimode scanning regime for spatial imaging is proposed. Doppler grids are added and the BP imaging algorithm is improved in amplitude and phase compensation, achieving accurate matching of the three-dimensional target position, as verified through simulations and measurements. The proposed method applies to different electromagnetic vortex radar regimes, including time-division multimode scanning and simultaneous multimode transmission and reception. Point-target results verify that, over the large field of view covered by the multimode vortex beams, the normalized equivalent gain of the target echo is comparable at low and high elevations; in the example given, with a field of view of ±27°, the equivalent gain at the maximum elevation is no more than 1.8 dB below that at 0°. The algorithm was also validated on an aircraft target, and the imaging results accurately reconstruct the three-dimensional structure of a complex target. Future work will exploit the multimode nature of vortex waves in specific forward-looking applications and use fractional-mode scanning to further remove the influence of the Bessel function on imaging performance.
-
Figure 2. The PXDL for SAR image interpretation should be carried out from multiple aspects, deeply integrating data-driven and knowledge-driven models to develop a novel learnable and explainable intelligent paradigm
Figure 4. The H/α plane for full-polarized SAR data and the selected land-use and land-cover samples distributed in Ref. [50]
Figure 5. The unsupervised learning results of different polarized SAR images based on TFA and pol-extended TFA models [92]
Figure 7. The SAR image classification framework Deep SAR-Net (DSN) [11]
Figure 8. The feature visualization of the unsupervised physics-guided learning and supervised CNN classification on the training and test sets [100]
Figure 9. The amplitude images of convolution kernels in the first layer of CV-CNN based on ASC model initialization [106]
-
References
[1] CUMMING I G and WONG F H. Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation[M]. HONG Wen, HU Donghui, HAN Bing, et al. trans. Beijing: Publishing House of Electronics Industry, 2019: 93–100.
[2] HUANG Zhongling. A study on synthetic aperture radar image classification with deep learning[D]. [Ph.D. dissertation], University of Chinese Academy of Sciences, 2020: 59.
[3] GU Xiuchang, FU Kun, and QIU Xiaolan. Fundamentals of SAR Image Interpretation[M]. Beijing: Science Press, 2017.
[4] OLIVER C and QUEGAN S. Understanding Synthetic Aperture Radar Images[M]. London: SciTech Publishing, 2004.
[5] GAO Gui, OUYANG Kewei, LUO Yongbo, et al. Scheme of parameter estimation for generalized gamma distribution and its application to ship detection in SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(3): 1812–1832. doi: 10.1109/TGRS.2016.2634862
[6] LENG Xiangguang, JI Kefeng, ZHOU Shilin, et al. Ship detection based on complex signal kurtosis in single-channel SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(9): 6447–6461. doi: 10.1109/TGRS.2019.2906054
[7] CHEN Sizhe, WANG Haipeng, XU Feng, et al. Target classification using the deep convolutional networks for SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(8): 4806–4817. doi: 10.1109/TGRS.2016.2551720
[8] HUANG Zhongling, DUMITRU C O, PAN Zongxu, et al. Classification of large-scale high-resolution SAR images with deep transfer learning[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 18(1): 107–111. doi: 10.1109/LGRS.2020.2965558
[9] HUANG Zhongling, PAN Zongxu, and LEI Bin. Transfer learning with deep convolutional neural network for SAR target classification with limited labeled data[J]. Remote Sensing, 2017, 9(9): 907. doi: 10.3390/rs9090907
[10] HUANG Zhongling, PAN Zongxu, and LEI Bin. What, where, and how to transfer in SAR target recognition based on deep CNNs[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(4): 2324–2336. doi: 10.1109/TGRS.2019.2947634
[11] HUANG Zhongling, DATCU M, PAN Zongxu, et al. Deep SAR-Net: Learning objects from signals[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 161: 179–193. doi: 10.1016/j.isprsjprs.2020.01.016
[12] JIN Yaqiu. Multimode remote sensing intelligent information and target recognition: Physical intelligence of microwave vision[J]. Journal of Radars, 2019, 8(6): 710–716. doi: 10.12000/JR19083
[13] ZHANG Bo, ZHU Jun, and SU Hang. Toward the third generation of artificial intelligence[J]. SCIENTIA SINICA Informationis, 2020, 50(9): 1281–1302. doi: 10.1360/SSI-2020-0204
[14] DAS A and RAD P. Opportunities and challenges in explainable artificial intelligence (XAI): A survey[OL]. arXiv: 2006.11371, 2020.
[15] BAI Xiao, WANG Xiang, LIU Xianglong, et al. Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments[J]. Pattern Recognition, 2021, 120: 108102. doi: 10.1016/j.patcog.2021.108102
[16] ANGELOV P and SOARES E. Towards explainable deep neural networks (xDNN)[J]. Neural Networks, 2020, 130: 185–194. doi: 10.1016/j.neunet.2020.07.010
[17] MOLNAR C. Interpretable machine learning: A guide for making black box models explainable[EB/OL]. https://christophm.github.io/interpretable-ml-book/, 2021.
[18] CAMBURU O M. Explaining deep neural networks[D]. [Ph.D. dissertation], Oxford University, 2020.
[19] LI Weijie, YANG Wei, LIU Yongxiang, et al. Research and exploration on interpretability of deep learning model in radar image[J]. SCIENTIA SINICA Informationis, in press. doi: 10.1360/SSI-2021-0102
[20] BELLONI C, BALLERI A, AOUF N, et al. Explainability of deep SAR ATR through feature analysis[J]. IEEE Transactions on Aerospace and Electronic Systems, 2021, 57(1): 659–673. doi: 10.1109/TAES.2020.3031435
[21] GUO Weiwei, ZHANG Zenghui, YU Wenxian, et al. Perspective on explainable SAR target recognition[J]. Journal of Radars, 2020, 9(3): 462–476. doi: 10.12000/JR20059
[22] KARNIADAKIS G E, KEVREKIDIS I G, LU Lu, et al. Physics-informed machine learning[J]. Nature Reviews Physics, 2021, 3(6): 422–440. doi: 10.1038/s42254-021-00314-5
[23] THUEREY N, HOLL P, MUELLER M, et al. Physics-based deep learning[OL]. arXiv: 2109.05237, 2021.
[24] RAISSI M, PERDIKARIS P, and KARNIADAKIS G E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations[J]. Journal of Computational Physics, 2019, 378: 686–707. doi: 10.1016/j.jcp.2018.10.045
[25] MENG Xuhui, LI Zhen, ZHANG Dongkun, et al. PPINN: Parareal physics-informed neural network for time-dependent PDEs[J]. Computer Methods in Applied Mechanics and Engineering, 2020, 370: 113250. doi: 10.1016/j.cma.2020.113250
[26] GOSWAMI S, ANITESCU C, CHAKRABORTY S, et al. Transfer learning enhanced physics informed neural network for phase-field modeling of fracture[J]. Theoretical and Applied Fracture Mechanics, 2020, 106: 102447. doi: 10.1016/j.tafmec.2019.102447
[27] KARPATNE A, EBERT-UPHOFF I, RAVELA S, et al. Machine learning for the geosciences: Challenges and opportunities[J]. IEEE Transactions on Knowledge and Data Engineering, 2019, 31(8): 1544–1554. doi: 10.1109/TKDE.2018.2861006
[28] CAMPS-VALLS G, REICHSTEIN M, ZHU Xiaoxiang, et al. Advancing deep learning for earth sciences: From hybrid modeling to interpretability[C]. IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 3979–3982. doi: 10.1109/IGARSS39084.2020.9323558
[29] REICHSTEIN M, CAMPS-VALLS G, STEVENS B, et al. Deep learning and process understanding for data-driven Earth system science[J]. Nature, 2019, 566(7743): 195–204. doi: 10.1038/s41586-019-0912-1
[30] CAMPS-VALLS G, SVENDSEN D H, CORTÉS-ANDRÉS J, et al. Physics-aware machine learning for geosciences and remote sensing[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 2086–2089. doi: 10.1109/IGARSS47720.2021.9554521
[31] JIA Xiaowei, WILLARD J, KARPATNE A, et al. Physics guided RNNs for modeling dynamical systems: A case study in simulating lake temperature profiles[C]. The 2019 SIAM International Conference on Data Mining, Calgary, Canada, 2019: 558–566. doi: 10.1137/1.9781611975673.63
[32] DAW A, KARPATNE A, WATKINS W, et al. Physics-guided neural networks (PGNN): An application in lake temperature modeling[OL]. arXiv: 1710.11431, 2021.
[33] BEUCLER T, PRITCHARD M, GENTINE P, et al. Towards physically-consistent, data-driven models of convection[C]. IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 3987–3990. doi: 10.1109/IGARSS39084.2020.9324569
[34] SHEN Huanfeng, JIANG Menghui, LI Jie, et al. Coupling model-driven and data-driven methods for remote sensing image restoration and fusion[OL]. arXiv: 2108.06073, 2021.
[35] WANG Yuqing, WANG Qi, LU Wenkai, et al. Physics-constrained seismic impedance inversion based on deep learning[J]. IEEE Geoscience and Remote Sensing Letters, 2021: 1–5. doi: 10.1109/LGRS.2021.3072132
[36] XIA Wenchao, ZHENG Gan, WONG K K, et al. Model-driven beamforming neural networks[J]. IEEE Wireless Communications, 2020, 27(1): 68–75. doi: 10.1109/MWC.001.1900239
[37] ZHANG Juping, XIA Wenchao, YOU Minglei, et al. Deep learning enabled optimization of downlink beamforming under per-antenna power constraints: Algorithms and experimental demonstration[J]. IEEE Transactions on Wireless Communications, 2020, 19(6): 3738–3752. doi: 10.1109/TWC.2020.2977340
[38] ZHU Xiaoxiang, MONTAZERI S, ALI M, et al. Deep learning meets SAR: Concepts, models, pitfalls, and perspectives[J]. IEEE Geoscience and Remote Sensing Magazine, in press. doi: 10.1109/MGRS.2020.3046356
[39] MALMGREN-HANSEN D, KUSK A, DALL J, et al. Improving SAR automatic target recognition models with transfer learning from simulated data[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(9): 1484–1488. doi: 10.1109/LGRS.2017.2717486
[40] WEN Gongjian, ZHU Guoqiang, YIN Hongcheng, et al. SAR ATR based on 3D parametric electromagnetic scattering model[J]. Journal of Radars, 2017, 6(2): 115–135. doi: 10.12000/JR17034
[41] LUO Ying, NI Jiacheng, and ZHANG Qun. Synthetic aperture radar learning-imaging method based on data-driven technique and artificial intelligence[J]. Journal of Radars, 2020, 9(1): 107–122. doi: 10.12000/JR19103
[42] CHAN T H, JIA Kui, GAO Shenghua, et al. PCANet: A simple deep learning baseline for image classification?[J]. IEEE Transactions on Image Processing, 2015, 24(12): 5017–5032. doi: 10.1109/TIP.2015.2475625
[43] LI Mengke, LI Ming, ZHANG Peng, et al. SAR image change detection using PCANet guided by saliency detection[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(3): 402–406. doi: 10.1109/LGRS.2018.2876616
[44] WANG Rongfang, ZHANG Jie, CHEN Jiawei, et al. Imbalanced learning-based automatic SAR images change detection by morphologically supervised PCA-net[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(4): 554–558. doi: 10.1109/LGRS.2018.2878420
[45] CLOUDE S and POTTIER E. An entropy based classification scheme for land applications of polarimetric SAR[J]. IEEE Transactions on Geoscience and Remote Sensing, 1997, 35(1): 68–78. doi: 10.1109/36.551935
[46] YAMAGUCHI Y, YAJIMA Y, and YAMADA H. A four-component decomposition of POLSAR images based on the coherency matrix[J]. IEEE Geoscience and Remote Sensing Letters, 2006, 3(3): 292–296. doi: 10.1109/LGRS.2006.869986
[47] FERRO-FAMIL L, REIGBER A, and POTTIER E. Scene characterization using sub-aperture polarimetric interferometric SAR data[C]. IGARSS 2003 - 2003 IEEE International Geoscience and Remote Sensing Symposium, Toulouse, France, 2003: 702–704. doi: 10.1109/IGARSS.2003.1293889
[48] POTTER L C and MOSES R L. Attributed scattering centers for SAR ATR[J]. IEEE Transactions on Image Processing, 1997, 6(1): 79–91. doi: 10.1109/83.552098
[49] JI Kefeng and WU Yonghui. Scattering mechanism extraction by a modified Cloude-Pottier decomposition for dual polarization SAR[J]. Remote Sensing, 2015, 7(6): 7447–7470. doi: 10.3390/rs70607447
[50] YONEZAWA C, WATANABE M, and SAITO G. Polarimetric decomposition analysis of ALOS PALSAR observation data before and after a landslide event[J]. Remote Sensing, 2012, 4(8): 2314–2328. doi: 10.3390/rs4082314
[51] NIU Shengren, QIU Xiaolan, LEI Bin, et al. Parameter extraction based on deep neural network for SAR target simulation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(7): 4901–4914. doi: 10.1109/TGRS.2020.2968493
[52] NIU Shengren, QIU Xiaolan, LEI Bin, et al. A SAR target image simulation method with DNN embedded to calculate electromagnetic reflection[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 2593–2610. doi: 10.1109/JSTARS.2021.3056920
[53] GUO Jiayi, LEI Bin, DING Chibiao, et al. Synthetic aperture radar image synthesis by using generative adversarial nets[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(7): 1111–1115. doi: 10.1109/LGRS.2017.2699196
[54] OH J and KIM M. PeaceGAN: A GAN-based multi-task learning method for SAR target image generation with a pose estimator and an auxiliary classifier[J]. Remote Sensing, 2021, 13(19): 3939. doi: 10.3390/rs13193939
[55] CUI Zongyong, ZHANG Mingrui, CAO Zongjie, et al. Image data augmentation for SAR sensor via generative adversarial nets[J]. IEEE Access, 2019, 7: 42255–42268. doi: 10.1109/ACCESS.2019.2907728
[56] SONG Qian, XU Feng, and JIN Yaqiu. SAR image representation learning with adversarial autoencoder networks[C]. IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 2019: 9498–9501. doi: 10.1109/IGARSS.2019.8898922
[57] WANG Ke, ZHANG Gong, LENG Yang, et al. Synthetic aperture radar image generation with deep generative models[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(6): 912–916. doi: 10.1109/LGRS.2018.2884898
[58] HU Xiaowei, FENG Weike, GUO Yiduo, et al. Feature learning for SAR target recognition with unknown classes by using CVAE-GAN[J]. Remote Sensing, 2021, 13(18): 3554. doi: 10.3390/rs13183554
[59] XIE You, FRANZ E, CHU Mengyu, et al. TempoGAN: A temporally coherent, volumetric GAN for super-resolution fluid flow[J]. ACM Transactions on Graphics, 2018, 37(4): 95.
[60] CHU Mengyu, THUEREY N, SEIDEL H P, et al. Learning meaningful controls for fluids[J]. ACM Transactions on Graphics, 2021, 40(4): 100. doi: 10.1145/3450626.3459845
[61] QIAN Jiang, HUANG Shaoyin, WANG Lu, et al. Super-resolution ISAR imaging for maneuvering target based on deep-learning-assisted time-frequency analysis[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 60: 5201514. doi: 10.1109/TGRS.2021.3050189
[62] LIANG Jiadian, WEI Shunjun, WANG Mou, et al. ISAR compressive sensing imaging using convolution neural network with interpretable optimization[C]. IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 2483–2486. doi: 10.1109/IGARSS39084.2020.9323601
[63] GREGOR K and LECUN Y. Learning fast approximations of sparse coding[C]. 27th International Conference on Machine Learning, Haifa, Israel, 2010: 399–406.
[64] LIU Jialin, CHEN Xiaohan, WANG Zhangyang, et al. ALISTA: Analytic weights are as good as learned weights in LISTA[C]. The 7th International Conference on Learning Representations, New Orleans, USA, 2019: 1–33.
[65] BEHRENS F, SAUDER J, and JUNG P. Neurally augmented ALISTA[C]. The 9th International Conference on Learning Representations, Virtual Event, Austria, 2021: 1–10.
[66] YANG Yan, SUN Jian, LI Huibin, et al. Deep ADMM-Net for compressive sensing MRI[C]. The 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 2016: 10–18. doi: 10.5555/3157096.3157098
[67] YANG Yan, SUN Jian, LI Huibin, et al. ADMM-CSNet: A deep learning approach for image compressive sensing[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(3): 521–538. doi: 10.1109/TPAMI.2018.2883941
[68] MASON E, YONEL B, and YAZICI B. Deep learning for SAR image formation[C]. SPIE 10201, Algorithms for Synthetic Aperture Radar Imagery XXIV, Anaheim, USA, 2017: 1020104. doi: 10.1117/12.2267831
[69] GAO Jingkun, DENG Bin, QIN Yuliang, et al. Enhanced radar imaging using a complex-valued convolutional neural network[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(1): 35–39. doi: 10.1109/LGRS.2018.2866567
[70] HU Changyu, WANG Ling, LI Ze, et al. Inverse synthetic aperture radar imaging using a fully convolutional neural network[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(7): 1203–1207. doi: 10.1109/LGRS.2019.2943069
[71] ALVER M B, SALEEM A, and ÇETIN M. Plug-and-play synthetic aperture radar image formation using deep priors[J]. IEEE Transactions on Computational Imaging, 2021, 7: 43–57. doi: 10.1109/TCI.2020.3047473
[72] WANG Mou, WEI Shunjun, LIANG Jiadian, et al. TPSSI-Net: Fast and enhanced two-path iterative network for 3D SAR sparse imaging[J]. IEEE Transactions on Image Processing, 2021, 30: 7317–7332. doi: 10.1109/TIP.2021.3104168
[73] HU Changyu, LI Ze, WANG Ling, et al. Inverse synthetic aperture radar imaging using a deep ADMM network[C]. 20th International Radar Symposium (IRS), Ulm, Germany, 2019: 1–9. doi: 10.23919/IRS.2019.8768138
[74] LI Xiaoyong, BAI Xueru, and ZHOU Feng. High-resolution ISAR imaging and autofocusing via 2D-ADMM-Net[J]. Remote Sensing, 2021, 13(12): 2326. doi: 10.3390/rs13122326
[75] LI Ruize, ZHANG Shuanghui, ZHANG Chi, et al. Deep learning approach for sparse aperture ISAR imaging and autofocusing based on complex-valued ADMM-Net[J]. IEEE Sensors Journal, 2021, 21(3): 3437–3451. doi: 10.1109/JSEN.2020.3025053
[76] HU Xiaowei, XU Feng, GUO Yiduo, et al. MDLI-Net: Model-driven learning imaging network for high-resolution microwave imaging with large rotating angle and sparse sampling[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–17. doi: 10.1109/TGRS.2021.3110579
[77] RATHA D, GAMBA P, BHATTACHARYA A, et al. Novel techniques for built-up area extraction from polarimetric SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(1): 177–181. doi: 10.1109/LGRS.2019.2914913
[78] AO Dongyang, DATCU M, SCHWARZ G, et al. Moving ship velocity estimation using TanDEM-X data based on subaperture decomposition[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(10): 1560–1564. doi: 10.1109/LGRS.2018.2846399
[79] LIAO Mingsheng, WANG Ru, YANG Mengshi, et al. Techniques and applications of spaceborne time-series InSAR in urban dynamic monitoring[J]. Journal of Radars, 2020, 9(3): 409–424. doi: 10.12000/JR20022
[80] SICA F, GOBBI G, RIZZOLI P, et al. Φ-Net: Deep residual learning for InSAR parameters estimation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(5): 3917–3941. doi: 10.1109/TGRS.2020.3020427
[81] SONG Qian, XU Feng, and JIN Yaqiu. Radar image colorization: Converting single-polarization to fully polarimetric using deep neural networks[J]. IEEE Access, 2018, 6: 1647–1661. doi: 10.1109/ACCESS.2017.2779875
[82] ZHAO Juanping, DATCU M, ZHANG Zenghui, et al. Contrastive-regulated CNN in the complex domain: A method to learn physical scattering signatures from flexible PolSAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(12): 10116–10135. doi: 10.1109/TGRS.2019.2931620
[83] QU Junrong, QIU Xiaolan, and DING Chibiao. A study of recovering POLSAR information from single-polarized data using DNN[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 812–815. doi: 10.1109/IGARSS47720.2021.9554304
[84] CHENG Zezhou, YANG Qingxiong, and SHENG Bin. Deep colorization[C]. The IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 415–423. doi: 10.1109/ICCV.2015.55
[85] LUAN Fujun, PARIS S, SHECHTMAN E, et al. Deep photo style transfer[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 6997–7005. doi: 10.1109/CVPR.2017.740
[86] JI Guang, WANG Zhaohui, ZHOU Lifan, et al. SAR image colorization using multidomain cycle-consistency generative adversarial network[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 18(2): 296–300. doi: 10.1109/LGRS.2020.2969891
[87] TUPIN F and TISON C. Sub-aperture decomposition for SAR urban area analysis[C]. European Conference on Synthetic Aperture Radar (EUSAR), Ulm, Germany, 2004: 431–434.
[88] BOVENGA F, DERAUW D, RANA F M, et al. Multi-chromatic analysis of SAR images for coherent target detection[J]. Remote Sensing, 2014, 6(9): 8822–8843. doi: 10.3390/rs6098822
[89] SPIGAI M, TISON C, and SOUYRIS J C. Time-frequency analysis in high-resolution SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(7): 2699–2711. doi: 10.1109/TGRS.2011.2107914
[90] FERRO-FAMIL L, REIGBER A, POTTIER E, et al. Scene characterization using subaperture polarimetric SAR data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2003, 41(10): 2264–2276. doi: 10.1109/TGRS.2003.817188
[91] HUANG Zhongling, DATCU M, PAN Zongxu, et al. HDEC-TFA: An unsupervised learning approach for discovering physical scattering properties of single-polarized SAR image[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(4): 3054–3071. doi: 10.1109/TGRS.2020.3014335
[92] HUANG Zhongling, DATCU M, PAN Zongxu, et al. A hybrid and explainable deep learning framework for SAR images[C]. IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 1727–1730. doi: 10.1109/IGARSS39084.2020.9323845
[93] DE S, CLANTON C, BICKERTON S, et al. Exploring the relationships between scattering physics and auto-encoder latent-space embedding[C]. IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 3501–3504. doi: 10.1109/IGARSS39084.2020.9323410
[94] HUANG Zhongling, YAO Xiwen, DUMITRU C O, et al. Physically explainable CNN for SAR image classification[OL]. arXiv: 2110.14144, 2021.
[95] ZHANG Jinsong, XING Mengdao, and XIE Yiyuan. FEC: A feature fusion framework for SAR target recognition based on electromagnetic scattering features and deep CNN features[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(3): 2174–2187. doi: 10.1109/TGRS.2020.3003264
[96] LEI Songlin, QIU Xiaolan, DING Chibiao, et al. A feature enhancement method based on the sub-aperture decomposition for rotating frame ship detection in SAR images[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 3573–3576. doi: 10.1109/IGARSS47720.2021.9553635
[97] THEAGARAJAN R, BHANU B, ERPEK T, et al. Integrating deep learning-based data driven and model-based approaches for inverse synthetic aperture radar target recognition[J]. Optical Engineering, 2020, 59(5): 051407. doi: 10.1117/1.OE.59.5.051407
[98] HORI C, HORI T, LEE T Y, et al. Attention-based multimodal fusion for video description[C]. The IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017: 4203–4212. doi: 10.1109/ICCV.2017.450
[99] PORIA S, CAMBRIA E, BAJPAI R, et al. A review of affective computing: From unimodal analysis to multimodal fusion[J]. Information Fusion, 2017, 37: 98–125. doi: 10.1016/j.inffus.2017.02.003
[100] HUANG Zhongling, DUMITRU C O, and REN Jun. Physics-aware feature learning of SAR images with deep neural networks: A case study[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 1264–1267. doi: 10.1109/IGARSS47720.2021.9554842
[101] LEE J S, GRUNES M R, AINSWORTH T L, et al. Unsupervised classification using polarimetric decomposition and the complex Wishart classifier[J]. IEEE Transactions on Geoscience and Remote Sensing, 1999, 37(5): 2249–2258. doi: 10.1109/36.789621
[102] RATHA D, BHATTACHARYA A, and FRERY A C. Unsupervised classification of PolSAR data using a scattering similarity measure derived from a geodesic distance[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(1): 151–155. doi: 10.1109/LGRS.2017.2778749
[103] LI Yi, DU Lan, and WEI Di. Multiscale CNN based on component analysis for SAR ATR[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–12. doi: 10.1109/TGRS.2021.3100137
[104] FENG Sijia, JI Kefeng, ZHANG Linbin, et al. SAR target classification based on integration of ASC parts model and deep learning algorithm[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 10213–10225. doi: 10.1109/JSTARS.2021.3116979
[105] LIU Qingshu and LANG Liang. MMFF: Multi-manifold feature fusion based neural networks for target recognition in complex-valued SAR imagery[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, 180: 151–162. doi: 10.1016/j.isprsjprs.2021.08.008
[106] LIU Jiaming, XING Mengdao, YU Hanwen, et al. EFTL: Complex convolutional networks with electromagnetic feature transfer learning for SAR target recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–11. doi: 10.1109/TGRS.2021.3083261
[107] CUI Yuanhao, LIU Fang, JIAO Licheng, et al. Polarimetric multipath convolutional neural network for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–18. doi: 10.1109/TGRS.2021.3071559
[108] DAW A, THOMAS R Q, CAREY C C, et al. Physics-guided architecture (PGA) of neural networks for quantifying uncertainty in lake temperature modeling[C]. The 2020 SIAM International Conference on Data Mining (SDM), Cincinnati, USA, 2020: 532–540.
[109] SUN Jian, NIU Zhan, INNANEN K A, et al. A theory-guided deep-learning formulation and optimization of seismic waveform inversion[J]. Geophysics, 2020, 85(2): R87–R99. doi: 10.1190/geo2019-0138.1
[110] HE Qishan, ZHAO Lingjun, JI Kefeng, et al. SAR target recognition based on task-driven domain adaptation using simulated data[J]. IEEE Geoscience and Remote Sensing Letters, 2021: 1–5. doi: 10.1109/LGRS.2021.3116707
[111] ZHANG Linbin, LENG Xiangguang, FENG Sijia, et al. Domain knowledge powered two-stream deep network for few-shot SAR vehicle recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–15. doi: 10.1109/TGRS.2021.3116349
[112] AGARWAL T, SUGAVANAM N, and ERTIN E. Sparse signal models for data augmentation in deep learning ATR[C]. IEEE Radar Conference, Florence, Italy, 2020: 1–6. doi: 10.1109/RadarConf2043947.2020.9266382
[113] DIEMUNSCH J R and WISSINGER J. Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: Search technology for a robust ATR[C]. Proceedings of SPIE 3370, Algorithms for Synthetic Aperture Radar Imagery V, Orlando, USA, 1998: 481–492. doi: 10.1117/12.321851
[114] HUANG Lanqing, LIU Bin, LI Boying, et al. OpenSARShip: A dataset dedicated to Sentinel-1 ship interpretation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(1): 195–208. doi: 10.1109/JSTARS.2017.2755672
[115] SUN Xian, WANG Zhirui, SUN Yuanrui, et al. AIR-SARShip-1.0: High-resolution SAR ship detection dataset[J]. Journal of Radars, 2019, 8(6): 852–862. doi: 10.12000/JR19097
[116] DU Lan, WANG Zhaocheng, WANG Yan, et al. Survey of research progress on target detection and discrimination of single-channel SAR images for complex scenes[J]. Journal of Radars, 2020, 9(1): 34–54. doi: 10.12000/JR19104
[117] CHEN Siwei and TAO Chensong. PolSAR image classification using polarimetric-feature-driven deep convolutional neural network[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(4): 627–631. doi: 10.1109/LGRS.2018.2799877
[118] LIU Xu, JIAO Licheng, TANG Xu, et al. Polarimetric convolutional network for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(5): 3040–3054. doi: 10.1109/TGRS.2018.2879984
[119] BI Haixia, SUN Jian, and XU Zongben. A graph-based semisupervised deep learning model for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(4): 2116–2132. doi: 10.1109/TGRS.2018.2871504
[120] VINAYARAJ P, SUGIMOTO R, NAKAMURA R, et al. Transfer learning with CNNs for segmentation of PALSAR-2 power decomposition components[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, 13: 6352–6361. doi: 10.1109/JSTARS.2020.3031020
[121] XIA Junshi, YOKOYA N, ADRIANO B, et al. A benchmark high-resolution GaoFen-3 SAR dataset for building semantic segmentation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 5950–5963. doi: 10.1109/JSTARS.2021.3085122
[122] WU Fan, WANG Chao, ZHANG Hong, et al. Built-up area mapping in China from GF-3 SAR imagery based on the framework of deep learning[J]. Remote Sensing of Environment, 2021, 262: 112515. doi: 10.1016/j.rse.2021.112515
[123] CHEN Jiankun, QIU Xiaolan, DING Chibiao, et al. CVCMFF Net: Complex-valued convolutional and multifeature fusion network for building semantic segmentation of InSAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–14. doi: 10.1109/TGRS.2021.3068124
[124] SHI Xianzheng, FU Shilei, CHEN Jin, et al. Object-level semantic segmentation on the high-resolution Gaofen-3 FUSAR-map dataset[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 3107–3119. doi: 10.1109/JSTARS.2021.3063797
[125] QIU Xiaolan, JIAO Zekun, PENG Lingxiao, et al. SARMV3D-1.0: Synthetic aperture radar microwave vision 3D imaging dataset[J]. Journal of Radars, 2021, 10(4): 485–498. doi: 10.12000/JR21112