ZHU Hangui, FENG Weike, FENG Cunqian, et al. Deep unfolding based space-time adaptive processing method for airborne radar[J]. Journal of Radars, 2022, 11(4): 676–691. doi: 10.12000/JR22051
Citation: XING Mengdao, XIE Yiyuan, GAO Yuexin, et al. Electromagnetic scattering characteristic extraction and imaging recognition algorithm: a review[J]. Journal of Radars, 2022, 11(6): 921–942. doi: 10.12000/JR22232

Electromagnetic Scattering Characteristic Extraction and Imaging Recognition Algorithm: A Review

DOI: 10.12000/JR22232
Funds:  The National Natural Science Foundation of China for Distinguished Young Scholars (61825105)
More Information
  • Corresponding author: XING Mengdao, xmd@xidian.edu.cn
  • Received Date: 2022-11-30
  • Rev Recd Date: 2022-12-22
  • Available Online: 2022-12-26
  • Publish Date: 2022-12-27
  • One remarkable trend in the application of synthetic aperture radar technology is the automatic interpretation of Synthetic Aperture Radar (SAR) images. Electromagnetic scattering characteristics are strongly correlated with the target structure and therefore provide key support for SAR image interpretation. Accordingly, how to extract accurate electromagnetic characteristics and how to use them to retrieve target properties has attracted wide attention in recent years. This study discusses the research accomplishments, summarizes the key elements and ideas of electromagnetic characteristic extraction and electromagnetic-characteristic-based target recognition, and details the extended applications of the electromagnetic scattering mechanism in imaging and recognition. Finally, future research directions for electromagnetic scattering characteristic extraction and application are proposed.

     

  • Space-Time Adaptive Processing (STAP) is a key technique for ground/sea clutter suppression and moving target detection in airborne radar [1,2]. To design a space-time filter that adaptively suppresses clutter, STAP methods generally require a certain number of Independent Identically Distributed (IID) training range cells to estimate the Clutter Covariance Matrix (CCM) of the Range cell Under Test (RUT). To keep the output signal-to-clutter-plus-noise ratio loss within 3 dB of the ideal case, conventional STAP needs at least twice the system degrees of freedom in IID training cells. In practical heterogeneous clutter environments, however, enough IID training cells are usually unavailable. To address this problem, reduced-dimension, reduced-rank, direct-data-domain, knowledge-aided, and sparse-recovery STAP methods have been proposed [3-8]. Among them, Sparse Recovery STAP (SR-STAP) exploits the sparsity of clutter in the angle-Doppler domain (i.e., the two-dimensional space-time plane): with only a few training range cells it obtains an accurate estimate of the clutter space-time spectrum, from which the CCM or the clutter subspace is reconstructed and a space-time filter is built to suppress clutter [9-16].

    In practice, airborne radar inevitably suffers from array errors, including amplitude and phase errors. Because the error information is implicitly contained in the estimated CCM, conventional STAP methods have a strong capability of adaptive error compensation. SR-STAP methods, however, usually build the clutter space-time spectrum estimation model with ideal space-time steering vectors, so their performance is strongly affected by such errors. Array errors degrade the accuracy of the estimated clutter space-time spectrum and CCM, and hence severely impair the clutter suppression and target detection performance of SR-STAP. To tackle this problem, Ref. [17] proposed an SR-STAP method based on the Iterative Alternating Descent (IAD) algorithm, which estimates the clutter space-time spectrum and the array error parameters simultaneously but has high computational complexity; Ref. [18] proposed an ADMM-based joint estimation method for the clutter space-time spectrum and the array error parameters, which is computationally cheaper than the IAD method but requires several iteration parameters to be set at the same time.

    Given a clutter space-time spectrum estimation model, or a joint model for the clutter space-time spectrum and array error parameters, the performance of existing SR-STAP methods largely depends on the SR algorithm employed. Typical SR algorithms are model-driven and therefore enjoy strong theoretical guarantees and interpretability, but they usually require one or more parameters to be specified, such as the regularization factor and the iteration step size. Improper parameter settings affect the convergence speed and accuracy of the SR algorithm, which raises the computational cost and degrades the clutter suppression performance of SR-STAP, limiting its practical application. Inspired by deep learning, Deep Unfolding (DU) methods have been proposed to address these shortcomings of model-driven SR algorithms [19-23]. A DU method unrolls a specific SR algorithm into a deep neural network, treating the number of iterations as the number of layers and the algorithm parameters as learnable network parameters; the iteration parameters are then trained on a dataset to obtain their optimal values, improving both the convergence speed and the accuracy of the SR algorithm. For example, Gregor et al. [19] proposed the Learned ISTA (LISTA) based on the Iterative Soft Thresholding Algorithm (ISTA); Borgerding et al. [21] unrolled the Approximate Message Passing (AMP) algorithm into LAMP; and Yang et al. [22] proposed LePOM based on Proximal Operator Methods (POM). Compared with the underlying SR algorithms, DU methods combine model-driven and data-driven approaches, effectively reducing complexity and improving performance.

    To date, DU methods have not been introduced into SR-STAP for airborne radar, and the DU methods above can only estimate the clutter space-time spectrum; they cannot estimate the array error parameters simultaneously. Therefore, to overcome the difficult parameter tuning and high computational complexity of existing SR-STAP methods, this paper proposes a DU-STAP method for airborne radar and verifies the applicability of DU to airborne radar clutter suppression and target detection. First, the airborne radar echo model under array errors is established, and the ADMM algorithm is used to jointly estimate the clutter space-time spectrum and the array error parameters. Next, based on an analysis of its iteration steps and data flow graph, the ADMM algorithm is unrolled into a deep neural network, AE-ADMM-Net, whose learnable parameters include the regularization factor, iteration step size, quadratic penalty factor, and scale factors. Then, a network loss function is defined and AE-ADMM-Net is trained on a sufficiently complete dataset to obtain the optimal parameters. Finally, the trained AE-ADMM-Net processes the training range cell data to quickly and accurately estimate the clutter space-time spectrum and array error parameters, from which a space-time filter is designed for clutter suppression and target detection. Simulations show that, compared with typical SR-STAP methods based on the Sparse Bayesian Learning (SBL) algorithm [13], the Focal Under-determined System Solver (FOCUSS) [10], and the ADMM algorithm, the proposed DU-STAP method improves clutter suppression performance while keeping the computational complexity low.

    As shown in Fig. 1, suppose the airborne radar flies along the $y$-axis at height $H$ with constant velocity $v$; the side-looking uniform linear array has $M$ elements with spacing $d=\lambda/2$ ($\lambda$ is the wavelength); the pulse repetition frequency is $f_{\rm r}$, and there are $N$ pulses in one coherent processing interval.

    Figure 1. Geometry model of airborne radar

    Ignoring range-ambiguous clutter, assume that the range ring corresponding to each range cell contains $N_{\rm c}$ clutter patches uniformly distributed over the azimuth angle $\theta\in[0,\pi]$. The space-time snapshot of the RUT containing a moving target can then be written as

    $$y_0=\alpha_T s_{{\rm ds}T}+\sum_{i=1}^{N_{\rm c}}\alpha_i s_{{\rm ds}i}+\varepsilon_0=y_{T0}+y_{C0}+\varepsilon_0\in\mathbb{C}^{NM\times 1} \tag{1}$$

    where $\alpha_T$ and $\alpha_i$ are the complex amplitudes of the target and of the $i$-th clutter patch; $s_{{\rm ds}T}=s_{{\rm d}T}\otimes s_{{\rm s}T}\in\mathbb{C}^{NM\times 1}$ and $s_{{\rm ds}i}=s_{{\rm d}i}\otimes s_{{\rm s}i}\in\mathbb{C}^{NM\times 1}$ are the space-time steering vectors of the target and the $i$-th clutter patch; $s_{{\rm d}i}=[1,{\rm e}^{{\rm j}2\pi f_{{\rm d}i}},\cdots,{\rm e}^{{\rm j}2\pi(N-1)f_{{\rm d}i}}]^{\rm T}\in\mathbb{C}^{N\times 1}$ and $s_{{\rm s}i}=[1,{\rm e}^{{\rm j}2\pi f_{{\rm s}i}},\cdots,{\rm e}^{{\rm j}2\pi(M-1)f_{{\rm s}i}}]^{\rm T}\in\mathbb{C}^{M\times 1}$ are the temporal and spatial steering vectors of the $i$-th clutter patch; $f_{{\rm d}i}=2v\cos\varphi_i\cos\theta_i/(\lambda f_{\rm r})$ and $f_{{\rm s}i}=d\cos\varphi_i\cos\theta_i/\lambda$ are its normalized Doppler frequency and spatial frequency; $\varphi_i$ and $\theta_i$ are its elevation and azimuth angles; $\otimes$ denotes the Kronecker product, $[\cdot]^{\rm T}$ the transpose, and ${\rm j}=\sqrt{-1}$; $y_{T0}$, $y_{C0}$, and $\varepsilon_0$ denote the target, clutter, and noise components, respectively.
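
    The following Python/NumPy sketch illustrates the signal model of Eq. (1): temporal and spatial steering vectors combined via the Kronecker product and superimposed over the clutter patches, plus noise. The parameter values, the zero elevation angle, and the random clutter amplitudes are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

M, N = 10, 10                              # array elements, pulses (illustrative)
lam, d, fr, v = 0.2, 0.1, 2000.0, 100.0    # wavelength, spacing, PRF, velocity (illustrative)

def space_time_steering(fd, fs, N, M):
    """s_ds = s_d (Kronecker) s_s, as defined after Eq. (1)."""
    s_d = np.exp(2j * np.pi * fd * np.arange(N))   # temporal steering vector
    s_s = np.exp(2j * np.pi * fs * np.arange(M))   # spatial steering vector
    return np.kron(s_d, s_s)                       # length N*M

rng = np.random.default_rng(0)
Nc = 361                                           # clutter patches on the range ring
theta = np.linspace(0.0, np.pi, Nc)                # azimuth angles
phi = 0.0                                          # elevation of the range ring (assumed)
fd = 2 * v * np.cos(phi) * np.cos(theta) / (lam * fr)   # normalized Doppler frequencies
fs = d * np.cos(phi) * np.cos(theta) / lam              # normalized spatial frequencies
amp = rng.standard_normal(Nc) + 1j * rng.standard_normal(Nc)   # clutter amplitudes

y0 = sum(a * space_time_steering(fdi, fsi, N, M) for a, fdi, fsi in zip(amp, fd, fs))
y0 += (rng.standard_normal(N * M) + 1j * rng.standard_normal(N * M)) / np.sqrt(2)  # noise
print(y0.shape)   # (100,) -- one clutter-plus-noise space-time snapshot
```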

    Assume that the clutter patches are mutually independent and uncorrelated with the noise, and that the noise follows a complex Gaussian distribution with zero mean and covariance matrix $R_N=\sigma^2 I_{NM}$. The Clutter plus Noise Covariance Matrix (CNCM) can then be written as

    $$R_C={\rm E}\left[(y_{C0}+\varepsilon_0)(y_{C0}+\varepsilon_0)^{\rm H}\right]=\sum_{i=1}^{N_{\rm c}}{\rm E}\left[|\alpha_i|^2\right]s_{{\rm ds}i}\,s_{{\rm ds}i}^{\rm H}+R_N\in\mathbb{C}^{NM\times NM} \tag{2}$$

    where ${\rm E}[\cdot]$ denotes expectation, $[\cdot]^{\rm H}$ the conjugate transpose, and $I_{NM}$ the $NM\times NM$ identity matrix.

    STAP suppresses clutter and noise and detects moving targets by computing a weighted combination of the space-time snapshot. To maximize the output Signal to Clutter plus Noise Ratio (SCNR), the optimal space-time filter weight is obtained from Eq. (3):

    $$w_{\rm opt}=R_C^{-1}s_{{\rm ds}T}\big/\left[s_{{\rm ds}T}^{\rm H}R_C^{-1}s_{{\rm ds}T}\right]\in\mathbb{C}^{NM\times 1} \tag{3}$$

    where $(\cdot)^{-1}$ denotes the matrix inverse.

    In practice, the CNCM of the RUT is unknown and must be estimated from a number of target-free training range cells. Assuming the clutter in the training cells is IID with that of the RUT, the CNCM of the RUT can be estimated with the Sample Matrix Inversion (SMI) method [1] as

    $$\hat{R}_C=(1/L)\sum_{l=1}^{L}y_l y_l^{\rm H} \tag{4}$$

    where $l=1,2,\cdots,L$, $L$ is the number of IID training range cells, and $y_l$ is the space-time snapshot of the $l$-th training range cell.
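
    A minimal sketch of Eqs. (3) and (4), assuming the training snapshots are already available (random placeholders here) and using a placeholder target steering vector:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, L = 10, 10, 200                       # pulses, elements, training range cells (L >= 2NM here)
Y = rng.standard_normal((N * M, L)) + 1j * rng.standard_normal((N * M, L))  # placeholder snapshots y_l

R_hat = Y @ Y.conj().T / L                  # Eq. (4): sample covariance (SMI) estimate
s_t = np.ones(N * M, dtype=complex)         # target space-time steering vector (placeholder)

Rinv_s = np.linalg.solve(R_hat, s_t)        # R_C^{-1} s_dsT without forming the inverse explicitly
w_opt = Rinv_s / (s_t.conj() @ Rinv_s)      # Eq. (3): optimal space-time weight
print(w_opt.shape)                          # (100,)
```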

    According to the RMB criterion [2], the SMI method needs at least twice the system degrees of freedom in IID training range cells to keep the output SCNR loss below 3 dB, a condition that is hard to satisfy in practical heterogeneous environments. Moreover, practical airborne radar inevitably has array amplitude and phase errors. In this case, the RUT space-time snapshot, the CNCM, and the optimal space-time weight become

    $$\tilde{y}_0=\alpha_T\tilde{s}_{{\rm ds}T}+\sum_{i=1}^{N_{\rm c}}\alpha_i\,s_{{\rm d}i}\otimes(e\odot s_{{\rm s}i})+\varepsilon_0=\alpha_T\tilde{s}_{{\rm ds}T}+\sum_{i=1}^{N_{\rm c}}\alpha_i\tilde{s}_{{\rm ds}i}+\varepsilon_0=\tilde{y}_{T0}+\tilde{y}_{C0}+\varepsilon_0 \tag{5}$$
    $$\tilde{R}_C=\sum_{i=1}^{N_{\rm c}}{\rm E}\left[|\alpha_i|^2\right]\left[s_{{\rm d}i}\otimes(e\odot s_{{\rm s}i})\right]\left[s_{{\rm d}i}\otimes(e\odot s_{{\rm s}i})\right]^{\rm H}+R_N=\sum_{i=1}^{N_{\rm c}}{\rm E}\left[|\alpha_i|^2\right]s_{{\rm d}i}s_{{\rm d}i}^{\rm H}\otimes\left(ee^{\rm H}\odot s_{{\rm s}i}s_{{\rm s}i}^{\rm H}\right)+R_N \tag{6}$$
    $$\tilde{w}_{\rm opt}=\tilde{R}_C^{-1}\tilde{s}_{{\rm ds}T}\big/\left[\tilde{s}_{{\rm ds}T}^{\rm H}\tilde{R}_C^{-1}\tilde{s}_{{\rm ds}T}\right] \tag{7}$$

    where $\odot$ denotes the Hadamard product, $\tilde{s}_{{\rm ds}T}=s_{{\rm d}T}\otimes(e\odot s_{{\rm s}T})$ is the target space-time steering vector under array errors, $e=[e_1,e_2,\cdots,e_M]^{\rm T}$ is the array amplitude-phase error vector, $e_m=(1+\nu_m){\rm e}^{{\rm j}\phi_m}$, $m=1,2,\cdots,M$, and $\nu_m\in\mathbb{R}$ and $\phi_m\in\mathbb{R}$ are the amplitude error and phase error of the $m$-th actual element relative to the ideal element.

    Eq. (1) shows that the clutter signal is a superposition of space-time signals with different spatial and Doppler frequencies. If the spatial and Doppler frequencies are discretized into $N_{\rm s}=\kappa_{\rm s}M$ and $N_{\rm d}=\kappa_{\rm d}N$ grid points (where $\kappa_{\rm s}>1$ and $\kappa_{\rm d}>1$ are scale factors), the space-time snapshot of the $l$-th target-free training range cell can be written as

    $$y_l=\sum_{q=1}^{N_{\rm s}N_{\rm d}}\alpha_{ql}\,s_{{\rm d}q}\otimes s_{{\rm s}q}+\varepsilon_l=\sum_{q=1}^{N_{\rm s}N_{\rm d}}\alpha_{ql}\,s_{{\rm ds}q}+\varepsilon_l=A\alpha_l+\varepsilon_l \tag{8}$$

    where $\alpha_{ql}$ is the complex amplitude of the $q$-th grid point, $q=1,2,\cdots,N_{\rm s}N_{\rm d}$; $\alpha_l=[\alpha_{1l},\alpha_{2l},\cdots,\alpha_{N_{\rm s}N_{\rm d}l}]^{\rm T}\in\mathbb{C}^{N_{\rm s}N_{\rm d}\times 1}$ collects the complex amplitudes of all grid points, i.e., the clutter space-time spectrum; $s_{{\rm d}q}=[1,{\rm e}^{{\rm j}2\pi f_{{\rm d}q}},\cdots,{\rm e}^{{\rm j}2\pi(N-1)f_{{\rm d}q}}]^{\rm T}$ and $s_{{\rm s}q}=[1,{\rm e}^{{\rm j}2\pi f_{{\rm s}q}},\cdots,{\rm e}^{{\rm j}2\pi(M-1)f_{{\rm s}q}}]^{\rm T}$ are the temporal and spatial steering vectors of the $q$-th grid point, with $f_{{\rm d}q}$ and $f_{{\rm s}q}$ its Doppler and spatial frequencies; $A=[s_{{\rm ds}1},s_{{\rm ds}2},\cdots,s_{{\rm ds}N_{\rm s}N_{\rm d}}]\in\mathbb{C}^{NM\times N_{\rm s}N_{\rm d}}$ is the space-time steering-vector dictionary; and $\varepsilon_l$ is the noise.
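
    A minimal sketch of the space-time dictionary $A$ in Eq. (8); the grid ranges and the scale factors $\kappa_{\rm s}=\kappa_{\rm d}=5$ are assumptions matching the later simulation settings:

```python
import numpy as np

N, M = 10, 10
kappa_d, kappa_s = 5, 5
Nd, Ns = kappa_d * N, kappa_s * M                    # Doppler / spatial grid sizes
fd_grid = np.linspace(-0.5, 0.5, Nd)
fs_grid = np.linspace(-0.5, 0.5, Ns)

cols = []
for fd in fd_grid:                                   # one column per (fd, fs) grid point
    s_d = np.exp(2j * np.pi * fd * np.arange(N))
    for fs in fs_grid:
        s_s = np.exp(2j * np.pi * fs * np.arange(M))
        cols.append(np.kron(s_d, s_s))               # s_dsq = s_dq (Kronecker) s_sq
A = np.stack(cols, axis=1)                           # dictionary A, shape NM x (Ns*Nd)
print(A.shape)                                       # (100, 2500)
```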

    Exploiting the sparsity of the clutter space-time spectrum, the underdetermined problem (8) can be converted into the following constrained optimization problem:

    $$\mathop{\arg\min}\limits_{\alpha_l}\ \|\alpha_l\|_0\quad{\rm s.t.}\ \|y_l-A\alpha_l\|_2^2\le\xi \tag{9}$$

    where $\|\cdot\|_0$ and $\|\cdot\|_2$ denote the $L_0$ and $L_2$ norms of a vector, and $\xi$ is the noise level.

    With $L$ training range cells, Eq. (9) extends to the multiple-measurement model

    $$\mathop{\arg\min}\limits_{\varLambda}\ \|\varLambda\|_{2,0}\quad{\rm s.t.}\ \|Y-A\varLambda\|_{\rm F}^2\le L\xi \tag{10}$$

    where $Y=[y_1,y_2,\cdots,y_L]\in\mathbb{C}^{NM\times L}$, $\varLambda=[\alpha_1,\alpha_2,\cdots,\alpha_L]\in\mathbb{C}^{N_{\rm s}N_{\rm d}\times L}$, $\|\cdot\|_{2,0}$ first takes the $L_2$ norm of each row of a matrix and then the $L_0$ norm of the resulting column vector, and $\|\cdot\|_{\rm F}$ is the Frobenius norm.

    Solving Eq. (9) or Eq. (10) with a sparse recovery algorithm such as $L_1$ convex optimization, FOCUSS, or SBL yields a high-resolution estimate of $\alpha_l$ or $\varLambda$. The CNCM can then be computed from Eq. (11) and the space-time filter designed from Eq. (3):

    $$\hat{R}_C=(1/L)\sum_{l=1}^{L}\sum_{q=1}^{N_{\rm s}N_{\rm d}}|\alpha_{ql}|^2\,s_{{\rm ds}q}s_{{\rm ds}q}^{\rm H}+R_N \tag{11}$$
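
    A minimal sketch of Eq. (11) for $L=1$, assuming a placeholder dictionary and a placeholder sparse spectrum estimate; the identity $\sum_q|\alpha_q|^2 s_{{\rm ds}q}s_{{\rm ds}q}^{\rm H}=A\,{\rm diag}(|\alpha|^2)A^{\rm H}$ is used to avoid the explicit loop:

```python
import numpy as np

rng = np.random.default_rng(2)
NM, Q = 100, 2500                                  # NM = N*M, Q = Ns*Nd
A = np.exp(2j * np.pi * rng.random((NM, Q)))       # placeholder dictionary (random phases)
alpha = np.zeros(Q, dtype=complex)
alpha[rng.choice(Q, 30, replace=False)] = 10.0     # a sparse spectrum estimate (placeholder)
sigma2 = 1.0                                       # noise power

# sum_q |alpha_q|^2 s_dsq s_dsq^H  ==  A diag(|alpha|^2) A^H, via column-wise scaling
R_hat = (A * np.abs(alpha) ** 2) @ A.conj().T + sigma2 * np.eye(NM)
print(np.allclose(R_hat, R_hat.conj().T))          # the reconstructed CNCM is Hermitian
```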

    Similarly, in the presence of array errors, the space-time snapshot of the $l$-th training range cell becomes

    $$\tilde{y}_l=\sum_{q=1}^{N_{\rm s}N_{\rm d}}\alpha_{ql}\,s_{{\rm d}q}\otimes(e\odot s_{{\rm s}q})+\varepsilon_l=EA\alpha_l+\varepsilon_l \tag{12}$$

    where $E=I_N\otimes{\rm diag}(e)$, $I_N$ is the $N\times N$ identity matrix, and ${\rm diag}(\cdot)$ forms a diagonal matrix from a vector.

    In this case, the clutter space-time spectrum $\alpha_l$ and the array error parameters $e$ must be estimated jointly:

    $$\mathop{\arg\min}\limits_{\alpha_l,\,e}\ \|\alpha_l\|_0\quad{\rm s.t.}\ \|\tilde{y}_l-EA\alpha_l\|_2^2\le\xi \tag{13}$$

    After solving Eq. (13), the CNCM is computed from Eq. (14), and the space-time filter is designed from Eq. (7):

    $$\hat{\tilde{R}}_C=\sum_{q=1}^{N_{\rm s}N_{\rm d}}|\alpha_{ql}|^2\,s_{{\rm d}q}s_{{\rm d}q}^{\rm H}\otimes\left(ee^{\rm H}\odot s_{{\rm s}q}s_{{\rm s}q}^{\rm H}\right)+R_N \tag{14}$$

    SR-STAP obtains an accurate CNCM estimate from only a few, or even a single, training range cell and can therefore suppress clutter effectively in practical heterogeneous environments. For simplicity, this paper considers only the single-training-cell case, i.e., $L=1$; the multiple-cell case can be handled by a straightforward extension of the proposed algorithm. In addition, it should be noted that, when range ambiguity exists, optimization models of the form of Eq. (9) or Eq. (10) can still be established and solved with SR algorithms to obtain a high-resolution estimate of the range-ambiguous clutter space-time spectrum; see Refs. [24,25] for details.

    To reduce the computational complexity and improve clutter suppression performance, this paper solves the joint clutter-spectrum and array-error estimation model (13) with a DU method. As shown in Ref. [21], for the sparse recovery problem $y=A\alpha+\varepsilon$, most iterative SR algorithms take the form $\alpha^{k+1}=P\left(\alpha^{k}-\gamma^{k}A^{\rm H}(A\alpha^{k}-y)\right)$, where $\alpha^{k}$ is the estimate at the $k$-th iteration, $\gamma^{k}$ is the step size, and $P(\cdot)$ is a nonlinear operator. Letting $W^{k}=I-\gamma^{k}A^{\rm H}A$ and $B^{k}=\gamma^{k}A^{\rm H}$, the $k$-th iteration is equivalent to $\alpha^{k+1}=P\left(W^{k}\alpha^{k}+B^{k}y\right)$. If $W^{k}$ and $B^{k}$ are treated as the weights of a deep neural network, $P(\cdot)$ as its activation function, and $\alpha^{k}$ and $\alpha^{k+1}$ as the input and output of the $k$-th layer, then the $k$-th iteration of the SR algorithm is exactly the computation of the $k$-th network layer. DU can thus be viewed as designing the structure and parameters of a deep neural network from the iteration steps of an SR algorithm. In principle, DU methods such as LISTA, LAMP, and LePOM [19-21] can estimate the clutter space-time spectrum, i.e., solve Eq. (9); however, they cannot estimate the array error parameters at the same time, i.e., they cannot solve Eq. (13). To address this, this paper analyzes the ADMM algorithm [18], unrolls it into a deep neural network, and constructs AE-ADMM-Net for fast and accurate joint estimation of the clutter space-time spectrum and the array error parameters.
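
    A minimal sketch of the unfolding idea just described: $K$ unrolled iterations $\alpha^{k+1}=P(W^{k}\alpha^{k}+B^{k}y)$ with a soft-threshold nonlinearity. Here $W$, $B$, and the threshold are fixed; in a DU network such as LISTA they become per-layer learnable parameters. All dimensions and values are illustrative assumptions.

```python
import numpy as np

def soft(x, c):
    """Complex soft-threshold: max(|x| - c, 0) * x / |x|."""
    mag = np.abs(x)
    return np.where(mag > c, (mag - c) * x / np.maximum(mag, 1e-12), 0.0 + 0.0j)

rng = np.random.default_rng(3)
NM, Q, K = 100, 400, 10                              # measurement size, grid size, layers
A = np.exp(2j * np.pi * rng.random((NM, Q))) / np.sqrt(NM)   # columns with unit norm
y = 10.0 * A[:, 5] + 0.01 * (rng.standard_normal(NM) + 1j * rng.standard_normal(NM))

gamma = 0.9 / np.linalg.norm(A, 2) ** 2              # step size (fixed here, learnable per layer in DU)
theta = 0.05                                         # threshold (fixed here, learnable per layer in DU)
W = np.eye(Q) - gamma * A.conj().T @ A               # W^k = I - gamma^k A^H A
B = gamma * A.conj().T                               # B^k = gamma^k A^H
alpha = np.zeros(Q, dtype=complex)
for k in range(K):                                   # K unrolled iterations = K network layers
    alpha = soft(W @ alpha + B @ y, theta)
print(int(np.argmax(np.abs(alpha))))                 # the dominant entry should be index 5
```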

    Define $T=I_N\otimes{\rm diag}(t)$, where $t=[t_1,t_2,\cdots,t_M]^{\rm T}$ and $t_m=e_m^{-1}=(1+\nu_m)^{-1}{\rm e}^{-{\rm j}\phi_m}$. Eq. (12) can then be transformed into

    $$Ty=TEA\alpha+T\varepsilon=A\alpha+\varepsilon \tag{15}$$

    where $y=\tilde{y}_l$, $\alpha=\alpha_l$, and $\varepsilon=\varepsilon_l$; since only a single training range cell is considered in this paper, the subscript $l$ is omitted.

    Eq. (15) converts the array amplitude-phase error vector $e$ into the parameter vector $t$ and can be solved via Eq. (16):

    $$\mathop{\arg\min}\limits_{\alpha,\,t}\ \|\alpha\|_1+1/(2\rho)\,\|Ty-A\alpha\|_2^2 \tag{16}$$

    where $\rho>0$ is the regularization factor.

    Introducing the auxiliary variable $\eta=Ty-A\alpha$, Eq. (16) is equivalent to

    $$\mathop{\arg\min}\limits_{\alpha,\,\eta}\ \|\alpha\|_1+1/(2\rho)\,\|\eta\|_2^2\quad{\rm s.t.}\ A\alpha+\eta=Ty \tag{17}$$

    The augmented Lagrangian of the equality-constrained problem (17) can be written as

    $$\mathop{\arg\min}\limits_{\alpha,\eta,\lambda,t}\ \|\alpha\|_1+1/(2\rho)\,\|\eta\|_2^2-{\rm R}\left\{\lambda^{\rm H}(A\alpha+\eta-Ty)\right\}+\gamma/2\,\|A\alpha+\eta-Ty\|_2^2 \tag{18}$$

    where $\lambda\in\mathbb{C}^{NM\times 1}$ is the Lagrange multiplier, $\gamma>0$ is the quadratic penalty factor, and ${\rm R}\{\cdot\}$ takes the real part.

    To avoid the all-zero solution, the convex constraint $\sum_{m=1}^{M}t_m=(\delta+{\rm j}w)$ is introduced, and Eq. (18) becomes

    $$\mathop{\arg\min}\limits_{\alpha,\eta,\lambda,t}\ \|\alpha\|_1+1/(2\rho)\,\|\eta\|_2^2-{\rm R}\left\{\beta^{*}\left(\sum_{m=1}^{M}t_m-\delta-{\rm j}w\right)\right\}-{\rm R}\left\{\lambda^{\rm H}(A\alpha+\eta-Ty)\right\}+\gamma/2\,\|A\alpha+\eta-Ty\|_2^2 \tag{19}$$

    where $\delta\in\mathbb{R}$ and $w\in\mathbb{R}$ are scale factors, $\beta$ is an auxiliary parameter, and $(\cdot)^{*}$ denotes conjugation.

    The ADMM algorithm solves Eq. (19) by alternately solving the following four subproblems over $K$ iterations [18]:

    $$\left\{\begin{aligned} \eta^{(k+1)}&=\mathop{\arg\min}\limits_{\eta}\ 1/(2\rho)\,\|\eta\|_2^2+\gamma/2\,\|A\alpha^{(k)}+\eta-T^{(k)}y-\lambda^{(k)}/\gamma\|_2^2\\ \alpha^{(k+1)}&=\mathop{\arg\min}\limits_{\alpha}\ \|\alpha\|_1+\gamma/2\,\|A\alpha+\eta^{(k+1)}-T^{(k)}y-\lambda^{(k)}/\gamma\|_2^2\\ t^{(k+1)}&=\mathop{\arg\min}\limits_{t}\ \gamma/2\,\|A\alpha^{(k+1)}+\eta^{(k+1)}-Ty-\lambda^{(k)}/\gamma\|_2^2-{\rm R}\left\{\beta^{*}\left(\sum_{m=1}^{M}t_m-\delta-{\rm j}w\right)\right\}\\ \lambda^{(k+1)}&=\mathop{\arg\min}\limits_{\lambda}\ {\rm R}\left\{\lambda^{\rm H}\left(A\alpha^{(k+1)}+\eta^{(k+1)}-T^{(k+1)}y\right)\right\} \end{aligned}\right. \tag{20}$$

    where $T^{(k+1)}=I_N\otimes{\rm diag}(t^{(k+1)})$, and $\alpha^{(k+1)}$, $\lambda^{(k+1)}$, $\eta^{(k+1)}$, and $t^{(k+1)}$ are the estimates of $\alpha$, $\lambda$, $\eta$, and $t$ at the $(k+1)$-th iteration, $k=0,1,\cdots,K-1$.

    The solutions of the four subproblems in Eq. (20) can be written as

    $$\left\{\begin{aligned} X^{(k+1)}:\ &\eta^{(k+1)}=\rho\gamma/(1+\rho\gamma)\left(\lambda^{(k)}/\gamma-A\alpha^{(k)}+T^{(k)}y\right)\\ O^{(k+1)}:\ &\alpha^{(k+1)}={\rm soft}\left(\alpha^{(k)}+\tau A^{\rm H}\eta^{(k+1)}/(\rho\gamma),\ \tau/\gamma\right)\\ Z^{(k+1)}:\ &t^{(k+1)}=\left[(b_1+\beta)/a_1,\cdots,(b_M+\beta)/a_M\right]^{\rm T}\\ M^{(k+1)}:\ &\lambda^{(k+1)}=\lambda^{(k)}-\gamma\left(A\alpha^{(k+1)}+\eta^{(k+1)}-T^{(k+1)}y\right) \end{aligned}\right. \tag{21}$$

    where $\tau$ is the step size for $\alpha$, ${\rm soft}(x,c)=\max\{|x|-c,0\}\cdot x/|x|$ is the soft-threshold operator [19], and

    $$a_m=\sum_{n=1}^{N}|y_{(n-1)M+m}|^2,\quad b_m=\sum_{n=1}^{N}y_{(n-1)M+m}^{*}z_{(n-1)M+m}^{(k)},\quad z^{(k)}=A\alpha^{(k+1)}+\eta^{(k+1)}-\lambda^{(k)}/\gamma,$$
    $$\beta=\left[\delta+{\rm j}w-\sum_{m=1}^{M}(b_m/a_m)\right]\Big/\sum_{m=1}^{M}(1/a_m)$$

    In summary, the steps of the ADMM algorithm for solving Eq. (13) are listed in Table 1 (a code sketch of the iteration loop follows the table). It should be emphasized that, when there is no array error, the ADMM algorithm of Table 1 can also solve Eq. (9): one can skip Step 4 and keep $T^{(k+1)}=T^{(0)}$ unchanged, or set the scale factors $\delta=M$ and $w=0$, in which case the ADMM algorithm outputs an array error estimate $e\to 1_M$, i.e., $\nu_m,\phi_m\to 0$.

    Table 1. The ADMM algorithm
     Input: $A$, $y$, number of iterations $K$, regularization factor $\rho$, quadratic penalty factor $\gamma$, step size $\tau$, scale factors $\delta$ and $w$
     Step 1 (initialization): $\alpha^{(0)}=0_{N_{\rm d}N_{\rm s}}$ (the $N_{\rm d}N_{\rm s}\times 1$ all-zero vector), $\lambda^{(0)}=0_{NM}$ (the $NM\times 1$ all-zero vector), $t^{(0)}=1_{M}$ (the $M\times 1$ all-one vector), $T^{(0)}=I_N\otimes{\rm diag}(t^{(0)})$, $k=0$
     Step 2: $\eta^{(k+1)}=\rho\gamma/(1+\rho\gamma)\left(\lambda^{(k)}/\gamma-A\alpha^{(k)}+T^{(k)}y\right)$
     Step 3: $\alpha^{(k+1)}={\rm soft}\left(\alpha^{(k)}+\tau A^{\rm H}\eta^{(k+1)}/(\rho\gamma),\ \tau/\gamma\right)$
     Step 4-1: $z^{(k)}=A\alpha^{(k+1)}+\eta^{(k+1)}-\lambda^{(k)}/\gamma$, $b_m=\sum_{n=1}^{N}y_{(n-1)M+m}^{*}z_{(n-1)M+m}^{(k)}$, $a_m=\sum_{n=1}^{N}|y_{(n-1)M+m}|^{2}$, $\beta=\left[\delta+{\rm j}w-\sum_{m=1}^{M}(b_m/a_m)\right]\big/\sum_{m=1}^{M}(1/a_m)$
     Step 4-2: $t^{(k+1)}=\left[(b_1+\beta)/a_1,(b_2+\beta)/a_2,\cdots,(b_M+\beta)/a_M\right]^{\rm T}$
     Step 5: $\lambda^{(k+1)}=\lambda^{(k)}-\gamma\left(A\alpha^{(k+1)}+\eta^{(k+1)}-T^{(k+1)}y\right)$
     Step 6: let $k\leftarrow k+1$; if $k\le K-1$, return to Step 2, otherwise stop.
     Output: $\alpha=\alpha^{K}$, $e_m=1/t_m^{K}$, $e=[e_1,e_2,\cdots,e_M]^{\rm T}$
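
    A minimal Python/NumPy sketch of the iteration loop of Table 1 / Eq. (21). The dictionary, the snapshot, and the parameter values are placeholders; in the paper they come from the radar model and from cross-validation.

```python
import numpy as np

def soft(x, c):
    mag = np.abs(x)
    return np.where(mag > c, (mag - c) * x / np.maximum(mag, 1e-12), 0.0 + 0.0j)

rng = np.random.default_rng(4)
N, M, Q = 10, 10, 400                                  # pulses, elements, dictionary size (illustrative)
A = np.exp(2j * np.pi * rng.random((N * M, Q)))        # placeholder dictionary
y = rng.standard_normal(N * M) + 1j * rng.standard_normal(N * M)   # placeholder snapshot

rho, gamma, tau, delta, w = 0.5, 0.01, 0.04, float(M), 0.0   # fixed iteration parameters (assumed)
alpha = np.zeros(Q, dtype=complex)                     # alpha^(0)
lam = np.zeros(N * M, dtype=complex)                   # lambda^(0)
t = np.ones(M, dtype=complex)                          # t^(0)

for k in range(50):                                    # K iterations
    T = np.kron(np.eye(N), np.diag(t))                 # T^(k) = I_N kron diag(t^(k))
    eta = rho * gamma / (1 + rho * gamma) * (lam / gamma - A @ alpha + T @ y)        # Step 2
    alpha = soft(alpha + tau * A.conj().T @ eta / (rho * gamma), tau / gamma)        # Step 3
    z = A @ alpha + eta - lam / gamma                                                # Step 4-1
    yr, zr = y.reshape(N, M), z.reshape(N, M)          # element m of pulse n sits at index (n-1)M+m
    a = np.sum(np.abs(yr) ** 2, axis=0)                # a_m
    b = np.sum(yr.conj() * zr, axis=0)                 # b_m
    beta = (delta + 1j * w - np.sum(b / a)) / np.sum(1.0 / a)
    t = (b + beta) / a                                 # Step 4-2
    T = np.kron(np.eye(N), np.diag(t))                 # T^(k+1)
    lam = lam - gamma * (A @ alpha + eta - T @ y)      # Step 5

e_hat = 1.0 / t                                        # array error estimate: e_m = 1 / t_m
print(e_hat.shape)
```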

    ADMM is a model-driven algorithm: its regularization factor $\rho$, quadratic penalty factor $\gamma$, step size $\tau$, and scale factors $\delta$ and $w$ must all be specified in advance, which is difficult in practice. Improper settings affect the convergence speed and accuracy of ADMM, raising the cost of solving Eq. (13) and degrading the accuracy of the estimated clutter space-time spectrum and array error parameters. Even if suitable parameters are chosen by theoretical analysis or cross-validation, a fixed setting cannot guarantee the best convergence behavior of ADMM. To address this, following the DU idea, this paper unrolls the ADMM algorithm into the deep neural network AE-ADMM-Net and learns its optimal iteration parameters from data. To construct AE-ADMM-Net, the iteration steps of ADMM are first mapped to a data flow graph, as shown in Fig. 2.

    Figure 2. The data flow graph of the ADMM algorithm

    The data flow graph in Fig. 2 consists of graph nodes corresponding to the ADMM operations and directed edges representing the data flow between them. Layer $k+1$ of the graph represents the $(k+1)$-th ADMM iteration; Steps 2-5 of Table 1 correspond to four nodes: the auxiliary-variable update node ($X^{(k+1)}$), the clutter space-time spectrum update node ($O^{(k+1)}$), the error-parameter update node ($Z^{(k+1)}$), and the Lagrange-multiplier update node ($M^{(k+1)}$). Thus, $K$ ADMM iterations map to a $K$-layer data flow graph: the input space-time snapshot propagates along the graph and yields the estimates of the clutter space-time spectrum and the array amplitude-phase errors.

    For the optimization problem (13), when the radar parameters are given and the clutter complex amplitudes, array errors, and noise follow certain distributions, the training snapshots $y_l$ also follow a certain distribution; likewise, for a given space-time dictionary $A$, the clutter space-time spectrum $\alpha_l$ follows a certain sparse distribution. One may then assume that there exists an optimal parameter sequence with which ADMM solves Eq. (13) quickly and accurately for all snapshots, spectra, and array errors drawn from these distributions. Therefore, combining the interpretability of model-driven algorithms with the nonlinear fitting capability of data-driven deep learning, this section builds AE-ADMM-Net from the ADMM iteration steps and data flow graph and uses it to solve Eq. (13). Training AE-ADMM-Net on a sufficiently complete dataset yields the optimal iteration parameters, improving the speed and accuracy of the joint clutter-spectrum and array-error estimation. The network structure, dataset construction, and network initialization and training are described below.

    3.2.1   Network structure

    According to the algorithm steps in Table 1 and the data flow graph in Fig. 2, the ADMM algorithm is equivalent to the $K$-layer network AE-ADMM-Net shown in Fig. 3. Its inputs are $y$, $A$, $\alpha^{(0)}$, $\lambda^{(0)}$, and $t^{(0)}$; its learnable parameters are $\Theta=\{\Theta^{(k+1)}\}_{k=0}^{K-1}=\{\rho_{k+1},\gamma_{k+1},\tau_{k+1},\delta_{k+1},\omega_{k+1},\varsigma_{k+1}\}_{k=0}^{K-1}$; and its outputs are $\alpha^{(K)}$ and $t^{(K)}$, from which the clutter space-time spectrum $\alpha=\alpha^{K}$ and the array errors $e_m=1/t_m^{K}$, $e=[e_1,e_2,\cdots,e_M]^{\rm T}$ are obtained. The computation of the $(k+1)$-th layer of AE-ADMM-Net can be expressed as

    Figure 3. The network structure of AE-ADMM-Net

    $$\left\{\alpha^{(k+1)},\lambda^{(k+1)},t^{(k+1)}\right\}=F_{k+1}\left\{y,A,\alpha^{(k)},\lambda^{(k)},t^{(k)},\Theta^{(k+1)}\right\} \tag{22}$$

    where $F_{k+1}\{\cdot\}$ corresponds to a four-layer sub-network composed of the auxiliary-variable update layer ($X^{(k+1)}$), the clutter space-time spectrum update layer ($O^{(k+1)}$), the error-parameter update layer ($Z^{(k+1)}$), and the Lagrange-multiplier update layer ($M^{(k+1)}$), as shown in Fig. 4, where solid arrows indicate forward propagation and dashed arrows indicate backward propagation. The four layers are described as follows:

    Figure 4. Four sub-layers of AE-ADMM-Net

    (1) Auxiliary-variable update layer ($X^{(k+1)}$): taking $y$, $A$, and the outputs $\alpha^{(k)}$, $t^{(k)}$, and $\lambda^{(k)}$ of $O^{(k)}$, $Z^{(k)}$, and $M^{(k)}$ in the $k$-th layer of AE-ADMM-Net as inputs, the output of $X^{(k+1)}$ is

    $$\eta^{(k+1)}=\rho_{k+1}\gamma_{k+1}/(1+\rho_{k+1}\gamma_{k+1})\left[\lambda^{(k)}/\gamma_{k+1}-A\alpha^{(k)}+\left(I_N\otimes{\rm diag}(t^{(k)})\right)y\right] \tag{23}$$

    where $\rho_{k+1}$ and $\gamma_{k+1}$ are the learnable regularization factor and quadratic penalty factor of the $(k+1)$-th layer. The output $\eta^{(k+1)}$ of $X^{(k+1)}$ feeds $O^{(k+1)}$, $Z^{(k+1)}$, and $M^{(k+1)}$ in the $(k+1)$-th layer.

    (2) Clutter space-time spectrum update layer ($O^{(k+1)}$): taking $A$, the output $\alpha^{(k)}$ of $O^{(k)}$ in the $k$-th layer, and the output $\eta^{(k+1)}$ of $X^{(k+1)}$ in the $(k+1)$-th layer as inputs, the output of $O^{(k+1)}$ is

    $$\alpha^{(k+1)}={\rm soft}\left(\alpha^{(k)}+\tau_{k+1}A^{\rm H}\eta^{(k+1)}/(\rho_{k+1}\gamma_{k+1}),\ \tau_{k+1}/\gamma_{k+1}\right) \tag{24}$$

    where $\tau_{k+1}$ is the learnable step size of the $(k+1)$-th layer. The output $\alpha^{(k+1)}$ of $O^{(k+1)}$ feeds $M^{(k+1)}$ and $Z^{(k+1)}$ in the $(k+1)$-th layer and $X^{(k+2)}$ and $O^{(k+2)}$ in the $(k+2)$-th layer.

    (3) Error-parameter update layer ($Z^{(k+1)}$): taking $y$, $A$, the output $\lambda^{(k)}$ of $M^{(k)}$ in the $k$-th layer, and the outputs $\eta^{(k+1)}$ and $\alpha^{(k+1)}$ of $X^{(k+1)}$ and $O^{(k+1)}$ in the $(k+1)$-th layer as inputs, the output of $Z^{(k+1)}$ is

    $$t^{(k+1)}=\left[(b_1+\beta)/a_1,(b_2+\beta)/a_2,\cdots,(b_M+\beta)/a_M\right]^{\rm T} \tag{25}$$

    where $z^{(k)}=A\alpha^{(k+1)}+\eta^{(k+1)}-\lambda^{(k)}/\gamma_{k+1}$, $b_m=\sum_{n=1}^{N}y_{(n-1)M+m}^{*}z_{(n-1)M+m}^{(k)}$, $a_m=\sum_{n=1}^{N}|y_{(n-1)M+m}|^2$, $\beta=\left[\delta_{k+1}+{\rm j}w_{k+1}-\sum_{m=1}^{M}(b_m/a_m)\right]\big/\sum_{m=1}^{M}(1/a_m)$, and $\delta_{k+1}$ and $w_{k+1}$ are the learnable scale factors of the $(k+1)$-th layer. The output $t^{(k+1)}$ of $Z^{(k+1)}$ feeds $M^{(k+1)}$ in the $(k+1)$-th layer and $X^{(k+2)}$ in the $(k+2)$-th layer.

    (4) Lagrange-multiplier update layer ($M^{(k+1)}$): taking $y$, $A$, the output $\lambda^{(k)}$ of $M^{(k)}$ in the $k$-th layer, and the outputs $\eta^{(k+1)}$, $\alpha^{(k+1)}$, and $t^{(k+1)}$ of $X^{(k+1)}$, $O^{(k+1)}$, and $Z^{(k+1)}$ in the $(k+1)$-th layer as inputs, the output of $M^{(k+1)}$ is

    $$\lambda^{(k+1)}=\lambda^{(k)}-\varsigma_{k+1}\left[A\alpha^{(k+1)}+\eta^{(k+1)}-\left(I_N\otimes{\rm diag}(t^{(k+1)})\right)y\right] \tag{26}$$

    where $\varsigma_{k+1}$ is the learnable multiplier-update parameter of the $(k+1)$-th layer. The output $\lambda^{(k+1)}$ of $M^{(k+1)}$ feeds $M^{(k+2)}$, $X^{(k+2)}$, and $Z^{(k+2)}$ in the $(k+2)$-th layer. It should be emphasized that, instead of reusing $\gamma_{k+1}$ as the multiplier-update parameter (as in Eq. (21)), the additional parameter $\varsigma_{k+1}$ is introduced to further enhance the learning capability and the performance of AE-ADMM-Net.
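
    A minimal sketch of one AE-ADMM-Net layer, i.e., Eqs. (23)-(26) with the per-layer parameters $(\rho_{k+1},\gamma_{k+1},\tau_{k+1},\delta_{k+1},w_{k+1},\varsigma_{k+1})$ passed in explicitly. It is written with NumPy for readability; a trainable version would implement the same operations in an automatic-differentiation framework. Data and parameter values are placeholders.

```python
import numpy as np

def soft(x, c):
    mag = np.abs(x)
    return np.where(mag > c, (mag - c) * x / np.maximum(mag, 1e-12), 0.0 + 0.0j)

def ae_admm_layer(y, A, alpha, lam, t, theta, N, M):
    """One layer F_{k+1} of Eqs. (23)-(26): (alpha, lam, t) -> updated (alpha, lam, t)."""
    rho, gamma, tau, delta, w, sig = theta             # per-layer learnable parameters Theta^{(k+1)}
    T = np.kron(np.eye(N), np.diag(t))
    eta = rho * gamma / (1 + rho * gamma) * (lam / gamma - A @ alpha + T @ y)      # Eq. (23)
    alpha = soft(alpha + tau * A.conj().T @ eta / (rho * gamma), tau / gamma)      # Eq. (24)
    z = A @ alpha + eta - lam / gamma
    yr, zr = y.reshape(N, M), z.reshape(N, M)
    a = np.sum(np.abs(yr) ** 2, axis=0)
    b = np.sum(yr.conj() * zr, axis=0)
    beta = (delta + 1j * w - np.sum(b / a)) / np.sum(1.0 / a)
    t = (b + beta) / a                                                              # Eq. (25)
    T = np.kron(np.eye(N), np.diag(t))
    lam = lam - sig * (A @ alpha + eta - T @ y)                                     # Eq. (26)
    return alpha, lam, t

# Tiny forward pass: K layers with identical (illustrative) per-layer parameters.
rng = np.random.default_rng(5)
N, M, Q, K = 10, 10, 400, 25
A = np.exp(2j * np.pi * rng.random((N * M, Q)))
y = rng.standard_normal(N * M) + 1j * rng.standard_normal(N * M)
alpha, lam, t = np.zeros(Q, complex), np.zeros(N * M, complex), np.ones(M, complex)
theta_layers = [(0.5, 0.01, 0.04, float(M), 0.0, 0.01)] * K   # learned values would differ per layer
for theta in theta_layers:
    alpha, lam, t = ae_admm_layer(y, A, alpha, lam, t, theta, N, M)
```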

    3.2.2   Dataset construction

    As with existing DU methods, the proposed AE-ADMM-Net is a jointly model- and data-driven SR method, so constructing a dataset with adequate generalization capability is key to its effectiveness. Moreover, most DU methods are trained in a supervised manner with given data and labels. To ensure that the space-time snapshots, clutter space-time spectra, and array amplitude-phase errors all follow certain distributions, the dataset is constructed as follows: set the radar parameters, set the clutter distribution, set the array amplitude-phase error distribution, generate the space-time snapshots, split them into training and test sets, construct the space-time steering-vector dictionary, and obtain the training and test label sets. The steps are detailed below (a dataset-generation sketch follows Step 7):

    Step 1 For the side-looking uniform linear array of the airborne radar, set the platform height $H$, platform velocity $v$, number of elements $M$, number of pulses $N$, element spacing $d$, wavelength $\lambda$, pulse repetition frequency $f_{\rm r}$, and range interval $[R_{\min}, R_{\max}]$;

    Step 2 Divide the range interval into $L$ range cells according to the radar range resolution, and divide the range ring of each range cell into $N_{\rm c}$ clutter patches over the azimuth angle $\theta\in[0,\pi]$; the clutter patches are mutually independent with complex Gaussian amplitudes;

    Step 3 Let the element amplitude error $\nu_m$ and phase error $\phi_m$ be uniformly distributed on $[-\nu_{\max},\nu_{\max}]$ and $[-\phi_{\max},\phi_{\max}]$, respectively, and randomly generate $P$ array error vectors $\{e_p\}_{p=1}^{P}$, where $e_p=[e_{p1},e_{p2},\cdots,e_{pM}]^{\rm T}$, $e_{pm}=(1+\nu_{pm}){\rm e}^{{\rm j}\phi_{pm}}$, $\nu_{pm}\sim U(-\nu_{\max},\nu_{\max})$, $\phi_{pm}\sim U(-\phi_{\max},\phi_{\max})$, and $\nu_{\max}$ and $\phi_{\max}$ are the maximum amplitude and phase errors;

    Step 4 For each array error vector $e_p$, generate $L$ space-time snapshots $\{y_{l,p}\}_{l=1}^{L}$ according to $y_{l,p}=\sum_{i=1}^{N_{\rm c}}\alpha_{l,i}\,s_{{\rm d}l,i}\otimes(e_p\odot s_{{\rm s}l,i})+\varepsilon_l$, where $s_{{\rm d}l,i}$ and $s_{{\rm s}l,i}$ are the temporal and spatial steering vectors of the $i$-th clutter patch in the $l$-th range cell, $\alpha_{l,i}$ is its complex amplitude, $\varepsilon_l$ is complex white Gaussian noise, and the clutter-to-noise ratio is CNR;

    Step 5 Randomly split the $PL$ space-time snapshots $\{\{y_{l,p}\}_{l=1}^{L}\}_{p=1}^{P}$ into a training set $\{y_o^{\rm train}\}_{o=1}^{O}$ with $O$ snapshots and a test set $\{y_s^{\rm test}\}_{s=1}^{S}$ with $S=(PL-O)$ snapshots;

    Step 6 Set the spatial and Doppler frequency ranges $[f_{\rm s\min},f_{\rm s\max}]$ and $[f_{\rm d\min},f_{\rm d\max}]$ and the grid numbers $N_{\rm s}=\kappa_{\rm s}M$ and $N_{\rm d}=\kappa_{\rm d}N$, and construct the space-time steering-vector dictionary $A=[s_{{\rm ds}1},s_{{\rm ds}2},\cdots,s_{{\rm ds}N_{\rm s}N_{\rm d}}]$;

    Step 7 Set the ADMM parameters $\rho=\rho_0$, $\gamma=\gamma_0$, $\tau=\tau_0$, $\delta=\delta_0$, $\omega=\omega_0$, and $K=K_0$ by theoretical analysis and cross-validation, solve Eq. (13), and obtain the training label set $\{\alpha_o^{\rm train},e_o^{\rm train}\}_{o=1}^{O}$ and the test label set $\{\alpha_s^{\rm test},e_s^{\rm test}\}_{s=1}^{S}$. Specifically: derive from theoretical analysis the conditions that $\rho$, $\gamma$, and $\tau$ must satisfy for convergence [26,27]; set several parameter combinations within the range satisfying those conditions; since the actual array errors are assumed uniformly distributed, fix $\delta=M$ and $\omega=0$; process the space-time snapshots with the different parameter combinations, terminating the iteration when the normalized difference between the $K$-th and $(K-1)$-th iterates falls below $10^{-6}$; and take as the ADMM parameters the combination $\rho_0$, $\gamma_0$, $\tau_0$, $\delta_0$, $\omega_0$, and $K_0$ that yields accurate clutter space-time spectrum estimates and good clutter suppression for all snapshots, together with the corresponding training and test label sets.
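
    A minimal sketch of Steps 1-5 of the dataset construction (snapshot generation and train/test split). The elevation angle is assumed zero, the CNR scaling is approximate, and the labels of Step 7 (obtained by running the ADMM solver of Table 1 on each snapshot) are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
M, N, Nc, L, P = 10, 10, 361, 100, 100          # elements, pulses, clutter patches, range cells, error vectors
lam_, d, fr, v = 0.2, 0.1, 2000.0, 100.0        # wavelength, spacing, PRF, platform velocity
nu_max, phi_max = 0.1, np.deg2rad(10.0)         # max amplitude / phase errors (one of the four cases)
cnr_db = 60.0

theta = np.linspace(0.0, np.pi, Nc)             # azimuth of the clutter patches
fd = 2 * v * np.cos(theta) / (lam_ * fr)
fs = d * np.cos(theta) / lam_
Sd = np.exp(2j * np.pi * np.outer(np.arange(N), fd))        # N x Nc temporal steering vectors
Ss = np.exp(2j * np.pi * np.outer(np.arange(M), fs))        # M x Nc spatial steering vectors
amp_scale = np.sqrt(10 ** (cnr_db / 10) / Nc / 2)           # sets the CNR against unit noise power

snapshots, errors = [], []
for p in range(P):                                           # Step 3: random array error vector e_p
    e = (1 + rng.uniform(-nu_max, nu_max, M)) * np.exp(1j * rng.uniform(-phi_max, phi_max, M))
    for l in range(L):                                       # Step 4: L snapshots per error vector
        amp = amp_scale * (rng.standard_normal(Nc) + 1j * rng.standard_normal(Nc))
        clutter = np.einsum('ni,mi,i->nm', Sd, e[:, None] * Ss, amp).ravel()
        noise = (rng.standard_normal(N * M) + 1j * rng.standard_normal(N * M)) / np.sqrt(2)
        snapshots.append(clutter + noise)
        errors.append(e)

snapshots = np.array(snapshots)                              # (P*L, N*M)
idx = rng.permutation(P * L)                                 # Step 5: random train/test split
train_y, test_y = snapshots[idx[:7500]], snapshots[idx[7500:]]
print(train_y.shape, test_y.shape)                           # (7500, 100) (2500, 100)
```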

    3.2.3   Initialization and training

    The initialization and training scheme affects the performance of AE-ADMM-Net: a good scheme makes the network converge more easily and helps avoid poor local optima. The parameters of AE-ADMM-Net are initialized according to Step 7 of Section 3.2.2, i.e., $\rho_{1:K}=\rho_0$, $\gamma_{1:K}=\gamma_0$, $\tau_{1:K}=\tau_0$, $\delta_{1:K}=\delta_0$, $\omega_{1:K}=\omega_0$, and $\varsigma_{1:K}=\gamma_0$. Compared with ADMM under fixed parameter settings, the trained AE-ADMM-Net preserves the convergence performance while greatly increasing the convergence speed (i.e., reducing the number of iterations) and shortening the time needed to solve Eq. (13).

    Given the constructed training dataset $\{\alpha_o^{\rm train},e_o^{\rm train},y_o^{\rm train}\}_{o=1}^{O}$ and the number of layers $K$, the Normalized Mean Square Error (NMSE) is used as the network loss function, and the optimal parameters $\Theta^{*}=\{\rho_{k+1},\gamma_{k+1},\tau_{k+1},\delta_{k+1},\omega_{k+1},\varsigma_{k+1}\}_{k=0}^{K-1}$ of AE-ADMM-Net are obtained by solving the following problem with the Back Propagation (BP) method [28]:

    $$\Theta^{*}=\mathop{\arg\min}\limits_{\Theta}\ \frac{1}{O}\sum_{o=1}^{O}\left(0.5\,L_o^{\alpha}+0.5\,L_o^{e}\right) \tag{27}$$

    where

    $$\left\{\begin{aligned} L_o^{\alpha}&=\left\|\alpha^{(K)}\left(\Theta,A,\alpha^{(0)},t^{(0)},\lambda^{(0)},y_o^{\rm train}\right)-\alpha_o^{\rm train}\right\|_2^2\Big/\left\|\alpha_o^{\rm train}\right\|_2^2\\ L_o^{e}&=\left\|e^{(K)}\left(\Theta,A,\alpha^{(0)},t^{(0)},\lambda^{(0)},y_o^{\rm train}\right)-e_o^{\rm train}\right\|_2^2\Big/\left\|e_o^{\rm train}\right\|_2^2 \end{aligned}\right. \tag{28}$$

    $\alpha^{(K)}(y_o^{\rm train},A,\alpha^{(0)},\lambda^{(0)},t^{(0)},\Theta)$ denotes the output of the clutter space-time spectrum update layer in the $K$-th layer of AE-ADMM-Net with inputs $y_o^{\rm train}$, $A$, $\alpha^{(0)}=0_{N_{\rm d}N_{\rm s}}$, $\lambda^{(0)}=0_{NM}$, and $t^{(0)}=1_M$ and parameters $\Theta$; $e^{(K)}(\Theta,A,\alpha^{(0)},t^{(0)},\lambda^{(0)},y_o^{\rm train})$ corresponds to the output $t^{(K)}(\Theta,A,\alpha^{(0)},t^{(0)},\lambda^{(0)},y_o^{\rm train})$ of the error-parameter update layer in the $K$-th layer, with $e^{(K)}=[e_1^{(K)},e_2^{(K)},\cdots,e_M^{(K)}]$ and $e_m^{(K)}=1/t_m^{(K)}$.
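
    A minimal sketch of the per-sample loss of Eqs. (27)-(28); the network outputs are stood in for by perturbed labels, since the forward pass itself is sketched above.

```python
import numpy as np

def nmse_loss(alpha_K, e_K, alpha_label, e_label):
    """0.5*L_alpha + 0.5*L_e for one training sample, Eqs. (27)-(28)."""
    L_alpha = np.sum(np.abs(alpha_K - alpha_label) ** 2) / np.sum(np.abs(alpha_label) ** 2)
    L_e = np.sum(np.abs(e_K - e_label) ** 2) / np.sum(np.abs(e_label) ** 2)
    return 0.5 * L_alpha + 0.5 * L_e

# Placeholder example: perturbed labels stand in for the K-th layer outputs of AE-ADMM-Net.
rng = np.random.default_rng(7)
alpha_label = rng.standard_normal(2500) + 1j * rng.standard_normal(2500)
e_label = np.ones(10, dtype=complex)
print(nmse_loss(alpha_label + 0.01, e_label + 0.01j, alpha_label, e_label))
```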

    After training yields the optimal parameters, AE-ADMM-Net can be applied to actual training range cell snapshots. For the test data $\{y_s^{\rm test}\}_{s=1}^{S}$, the estimated clutter space-time spectrum and array error parameters are

    $$\left\{\begin{aligned} \hat{\alpha}_s^{\rm test}&=\alpha^{(K)}\left(y_s^{\rm test},A,\alpha^{(0)},t^{(0)},\lambda^{(0)},\Theta^{*}\right)\\ \hat{e}_s^{\rm test}&=e^{(K)}\left(y_s^{\rm test},A,\alpha^{(0)},t^{(0)},\lambda^{(0)},\Theta^{*}\right) \end{aligned}\right. \tag{29}$$

    where $\alpha^{(K)}(y_s^{\rm test},A,\alpha^{(0)},t^{(0)},\lambda^{(0)},\Theta^{*})$ denotes the output of the clutter space-time spectrum update layer in the $K$-th layer of AE-ADMM-Net with inputs $y_s^{\rm test}$, $A$, $\alpha^{(0)}=0_{N_{\rm d}N_{\rm s}}$, $\lambda^{(0)}=0_{NM}$, and $t^{(0)}=1_M$ and parameters $\Theta^{*}$, and $e^{(K)}(y_s^{\rm test},A,\alpha^{(0)},t^{(0)},\lambda^{(0)},\Theta^{*})$ corresponds to the output $t^{(K)}(y_s^{\rm test},A,\alpha^{(0)},t^{(0)},\lambda^{(0)},\Theta^{*})$ of the error-parameter update layer in the $K$-th layer.

    This section validates the AE-ADMM-Net-based DU-STAP method by simulation and compares it with typical SR-STAP methods based on the SBL, FOCUSS, and ADMM algorithms; the simulation parameters are listed in Table 2. All simulations were run in MATLAB R2020b on a system with an Intel(R) Core(TM) i9-10900K CPU @ 3.70 GHz and an NVIDIA GeForce RTX 2080 Ti GPU.

    Table 2. Simulation parameters
    Platform height $H$: 3000 m; platform velocity $v$: 100 m/s
    Number of elements $M$: 10; number of pulses $N$: 10
    Element spacing $d$: 0.1 m; wavelength $\lambda$: 0.2 m
    Pulse repetition frequency $f_{\rm r}$: 2000 Hz; range interval $[R_{\min}, R_{\max}]$: [21, 31] km
    Number of range cells $L$: 100; number of clutter patches $N_{\rm c}$: 361
    Number of array error vectors $P$: 100; clutter-to-noise ratio CNR: 60 dB
    Training set size $O$: 7500; test set size $S$: 2500
    Frequency ranges $f_{\rm s}$, $f_{\rm d}$: [–0.5, 0.5]; grid numbers $N_{\rm s}$, $N_{\rm d}$: 50

    To verify the proposed method under different array error conditions, the maximum array amplitude and phase errors $(\nu_{\max},\phi_{\max})$ are set to $(0,0^{\circ})$, $(0.1,10^{\circ})$, $(0.2,20^{\circ})$, and $(0.3,30^{\circ})$, and four datasets are constructed following Steps 1-5 of Section 3.2.2. The ADMM iteration parameters are then set to $\rho_0=0.5$, $\gamma_0=0.01$, $\tau_0=0.04$, $\delta_0=M$, $\omega_0=0$, and $K_0=3000$, and the label sets are built following Steps 6-7 of Section 3.2.2. Fig. 5 shows the clutter space-time spectra and array error parameters estimated by the ADMM algorithm for one snapshot under the different array error conditions; from left to right the columns correspond to $(\nu_{\max},\phi_{\max})$ equal to $(0,0^{\circ})$, $(0.1,10^{\circ})$, $(0.2,20^{\circ})$, and $(0.3,30^{\circ})$, and from top to bottom the rows show the estimated clutter space-time spectrum, array amplitude error, and array phase error. With the fixed parameters above, ADMM obtains fairly accurate estimates in all cases, so the constructed datasets can be used to train AE-ADMM-Net.

    Figure 5. Clutter space-time spectra and array error parameters estimated by the ADMM algorithm with fixed parameters (a1-a4: clutter space-time spectrum estimates under different array error parameters; b1-b4: amplitude error estimates; c1-c4: phase error estimates)

    This section verifies the convergence of AE-ADMM-Net and compares it with the ADMM algorithm under fixed iteration parameters. AE-ADMM-Net is initialized and trained as described in Section 3.2.3 (Adam optimizer, 3000 training iterations) for different numbers of layers $K$; the results are shown in Fig. 6. Figs. 6(a) and 6(b) show the training and test NMSE of AE-ADMM-Net with $K=25$ layers; Fig. 6(c) shows the NMSE of AE-ADMM-Net and ADMM for $K=15$ to $45$ layers (iterations); and Fig. 6(d) shows the NMSE of ADMM for $K=60$ to $180$ iterations. Figs. 6(a) and 6(b) show that, with or without array errors, the training and test NMSE of AE-ADMM-Net decrease as training proceeds and essentially converge after 1500 training iterations. Fig. 6(c) shows that the NMSE of both AE-ADMM-Net and ADMM decreases as the number of layers (iterations) grows, but the NMSE of the former is much smaller than that of the latter. Figs. 6(c) and 6(d) show that ADMM needs about four times as many iterations as AE-ADMM-Net to reach a comparable NMSE. It can therefore be concluded that, with or without array errors, AE-ADMM-Net learns the optimal iteration parameters from the constructed dataset and achieves better convergence. Note that once the number of layers reaches about 35-40, AE-ADMM-Net already obtains an accurate clutter space-time spectrum estimate; adding more layers does not noticeably improve clutter suppression but increases the computational cost. In practice, AE-ADMM-Net can be trained offline under different simulation conditions to determine the range of layer numbers that yields good clutter suppression at low complexity, and the value can then be chosen according to the actual situation.

    Figure 6. Convergence performance of AE-ADMM-Net and its comparison with the ADMM algorithm

    This section verifies the clutter space-time spectrum estimation performance of AE-ADMM-Net against FOCUSS, SBL, and ADMM with fixed iteration parameters. Fig. 7 shows the clutter space-time spectra estimated by the different algorithms for the data of Fig. 5 under different array error conditions; from left to right the columns correspond to $(\nu_{\max},\phi_{\max})$ equal to $(0,0^{\circ})$, $(0.1,10^{\circ})$, $(0.2,20^{\circ})$, and $(0.3,30^{\circ})$, and from top to bottom the rows correspond to ADMM with 25 iterations, ADMM with 45 iterations, FOCUSS with 200 iterations (regularization parameter $10^{-3}$), SBL with 400 iterations (initial noise power $10^{-6}$), AE-ADMM-Net with 25 layers, and AE-ADMM-Net with 45 layers. It can be seen that: (1) compared with Fig. 5, ADMM with fixed parameters and few iterations fails to estimate the clutter space-time spectrum accurately; (2) without array errors, SBL and FOCUSS both estimate the spectrum accurately, but their performance degrades sharply when array errors are present; (3) with or without array errors, AE-ADMM-Net accurately estimates the clutter space-time spectrum with only a small number of layers (iterations). Compared with typical SR algorithms, AE-ADMM-Net therefore obtains fast and accurate estimates of the clutter space-time spectrum under all tested conditions.

    Figure 7. Clutter space-time spectra estimated by different algorithms under different conditions (a1-a4: ADMM with 25 iterations; b1-b4: ADMM with 45 iterations; c1-c4: FOCUSS with 200 iterations; d1-d4: SBL with 400 iterations; e1-e4: AE-ADMM-Net with 25 layers; f1-f4: AE-ADMM-Net with 45 layers; columns correspond to different array error parameters)

    This section verifies the array error estimation performance of AE-ADMM-Net; the results are shown in Fig. 8. From left to right the columns correspond to $(\nu_{\max},\phi_{\max})$ equal to $(0,0^{\circ})$, $(0.1,10^{\circ})$, $(0.2,20^{\circ})$, and $(0.3,30^{\circ})$; the upper and lower rows show the amplitude and phase error estimates, respectively. Under all conditions, AE-ADMM-Net accurately estimates both the array amplitude and phase errors.

    Figure 8. Array error parameters estimated by AE-ADMM-Net under different conditions (a1-a4: amplitude error estimates under different array error parameters; b1-b4: phase error estimates)

    This section verifies the clutter suppression performance of the AE-ADMM-Net-based DU-STAP method and compares it with SR-STAP methods based on FOCUSS, SBL, and ADMM with fixed iteration parameters. Since SBL and FOCUSS cannot effectively estimate the array error parameters, the comparison ignores the array error parameters and processes only the clutter space-time spectra produced by the different algorithms, i.e., the CNCM $\hat{R}_C$ is estimated from Eq. (11) and the optimal space-time weight $w_{\rm opt}$ from Eq. (3). The clutter suppression performance of the different methods is then measured by the SCNR loss,

    $${\rm SCNR}_{\rm Loss}=\frac{\sigma^2\left|w_{\rm opt}^{\rm H}s_{{\rm ds}T}\right|^2}{NM\,w_{\rm opt}^{\rm H}\hat{R}_C w_{\rm opt}} \tag{30}$$

    Assuming a target with spatial frequency 0 (i.e., $s_{{\rm s}T}=1_M$) and a normalized Doppler frequency varying over $[-0.5,0.5]$, the SCNR loss curves of the different methods are shown in Fig. 9; from left to right the columns correspond to $(\nu_{\max},\phi_{\max})$ equal to $(0,0^{\circ})$, $(0.1,10^{\circ})$, $(0.2,20^{\circ})$, and $(0.3,30^{\circ})$, and the lower row gives enlarged views of the upper row. It can be seen that: (1) the FOCUSS- and SBL-based SR-STAP methods are effective only without array errors, and their clutter suppression performance degrades sharply once array errors exist; (2) the SR-STAP method based on ADMM with fixed parameters suppresses clutter effectively with many iterations (ADMM-opt, $K=3000$), but performs poorly with few iterations ($K=25$ and $45$) because the clutter space-time spectrum estimate is inaccurate; (3) the AE-ADMM-Net-based DU-STAP method obtains an accurate clutter space-time spectrum estimate, and hence effective clutter suppression, with only a few layers (iterations); with $K=45$ layers its performance is comparable to ADMM-opt. Compared with typical SR-STAP methods, DU-STAP therefore achieves good clutter suppression under all tested conditions.
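
    A minimal sketch of the SCNR-loss evaluation of Eq. (30) over normalized Doppler for a target at spatial frequency 0. The estimated CNCM here is a random placeholder standing in for the output of any of the compared methods.

```python
import numpy as np

N, M = 10, 10
sigma2 = 1.0                                                  # noise power
rng = np.random.default_rng(8)
B = rng.standard_normal((N * M, N * M)) + 1j * rng.standard_normal((N * M, N * M))
R_hat = B @ B.conj().T / (N * M) + sigma2 * np.eye(N * M)     # placeholder estimated CNCM

fd_axis = np.linspace(-0.5, 0.5, 201)                         # normalized target Doppler
scnr_loss_db = np.empty_like(fd_axis)
for i, fd in enumerate(fd_axis):
    s_t = np.kron(np.exp(2j * np.pi * fd * np.arange(N)), np.ones(M))   # target steering, f_s = 0
    w = np.linalg.solve(R_hat, s_t)                           # R^{-1} s
    w = w / (s_t.conj() @ w)                                  # Eq. (3)
    num = sigma2 * np.abs(w.conj() @ s_t) ** 2
    den = N * M * np.real(w.conj() @ R_hat @ w)
    scnr_loss_db[i] = 10 * np.log10(num / den)                # Eq. (30), in dB
```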

    Figure 9. SCNR loss curves of different methods under different conditions (a1-a4: SCNR loss curves under different array error parameters; b1-b4: enlarged views)

    This section analyzes the computational complexity of AE-ADMM-Net and compares it with the FOCUSS and SBL algorithms. Since offline training and online application can be adopted [25,29], the analysis does not include the cost of network training. Moreover, once trained, AE-ADMM-Net performs exactly the same computations as ADMM and differs only in the iteration parameters, so for the same number of layers (iterations) the two have the same computational complexity in application. Taking the number of multiplications as the metric, Table 3 lists the per-iteration complexity of the different algorithms; the complexity of AE-ADMM-Net is much lower than that of FOCUSS and SBL. To verify this, the running times of AE-ADMM-Net, FOCUSS, and SBL were measured with MATLAB's TIC and TOC commands under different conditions, as shown in Fig. 10: Fig. 10(a) corresponds to $M=N=10$, $N_{\rm d}=N_{\rm s}=50$, and $K=15$ to $45$ iterations; Fig. 10(b) to $M=N=4$ to $16$, $N_{\rm d}=N_{\rm s}=50$, and $K=45$; Fig. 10(c) to $M=N=10$, $N_{\rm d}=N_{\rm s}=20$ to $80$, and $K=45$; and Fig. 10(d) to $M=N=N_{\rm d}/5=N_{\rm s}/5=4$ to $16$ and $K=45$. Under all conditions the running time of AE-ADMM-Net is far shorter than that of FOCUSS and SBL. Furthermore, like ADMM, FOCUSS and SBL with fixed parameters usually also need more iterations than AE-ADMM-Net to converge. Compared with FOCUSS- and SBL-based SR-STAP, the AE-ADMM-Net-based DU-STAP method therefore has lower computational complexity.

    Table 3. Computational complexities of different algorithms (per iteration)
    FOCUSS: $O\left(3NMN_{\rm s}N_{\rm d}+(NM)^{3}+2(NM)^{2}N_{\rm s}N_{\rm d}\right)$
    SBL: $O\left(5NMN_{\rm s}N_{\rm d}+(NM)^{3}+2(NM)^{2}N_{\rm s}N_{\rm d}+NM+N_{\rm s}N_{\rm d}\right)$
    AE-ADMM-Net: $O\left(2NMN_{\rm s}N_{\rm d}+(NM)^{2}+2NM+N_{\rm s}N_{\rm d}\right)$
    Figure 10. Running time of different algorithms under different conditions

    This section verifies the practical performance of the proposed DU-STAP method on the Mountain Top measured dataset [16] and compares it with the SR-STAP method based on ADMM with fixed iteration parameters; the ADMM parameters are the same as in the simulations, and DU-STAP directly uses the AE-ADMM-Net trained on simulated data. The Mountain Top data have 14 array elements and 16 pulses, with the target located in the 147th range cell; to match the simulations, the data of 10 elements and 10 pulses are used. Assuming no array errors and 4 guard range cells, the space-time snapshot of the 152nd range cell is processed with ADMM and AE-ADMM-Net to estimate the clutter space-time spectrum and design the space-time filter for clutter suppression and target detection; the results are shown in Fig. 11. The first three subfigures correspond to ADMM with 3000 iterations, ADMM with 45 iterations, and AE-ADMM-Net with 45 layers; the fourth subfigure shows the target detection result. The proposed DU-STAP method still performs well on the measured data: with the same number of iterations, both its clutter space-time spectrum estimate and its target detection performance are better than those of the SR-STAP method based on ADMM with fixed iteration parameters.

    Figure 11. Processing results of the Mountain Top measured data

    This paper has proposed a DU-based STAP method for airborne radar. Under array errors, the ADMM-based joint estimation of the clutter space-time spectrum and the array errors was analyzed; to address its shortcomings, the deep neural network AE-ADMM-Net was constructed, and its network structure, dataset construction, and initialization and training were described. Simulations of the AE-ADMM-Net-based DU-STAP method show that, compared with typical SR algorithms, AE-ADMM-Net learns the optimal iteration parameters from data and quickly obtains accurate estimates of the clutter space-time spectrum and array error parameters under different array error conditions; compared with typical SR-STAP methods, DU-STAP achieves better clutter suppression at lower computational complexity. Future work will extend and analyze the algorithm under non-ideal conditions such as platform yaw, range ambiguity, internal clutter motion, and grid mismatch.

