Li Yuqian, Yi Jianxin, Wan Xianrong, Liu Yuqi, Zhan Weijie. Helicopter Rotor Parameter Estimation Method for Passive Radar[J]. Journal of Radars, 2018, 7(3): 313-319. doi: 10.12000/JR17125

     

  • The micro-Doppler effect produced by target micro-motions such as vibration and rotation carries information about the target's structure and motion and is widely used for target classification and recognition [1–3]. Research on micro-Doppler-based target classification and recognition with passive radar is still in its infancy. Passive radar is a new radar system that detects, classifies, and recognizes targets using non-cooperative illuminators. Since it radiates no electromagnetic energy of its own, it saves spectrum resources, offers good covertness, requires little equipment, and is easy to deploy and network [4–6]. For micro-Doppler-based target classification and recognition, passive radar has distinctive advantages: (1) the separated transmitter and receiver provide spatial diversity, effectively avoiding detection blind zones; (2) third-party illuminators mostly transmit continuous waves, so long coherent integration can record multiple consecutive echo flashes and improves the detection, classification, and recognition of micro-motion targets with low Radar Cross-Section (RCS); (3) extracting micro-Doppler features does not require high range resolution, so parameter estimation is not limited by the bandwidth of the third-party illuminator [7,8].

    Regarding micro-Doppler parameter estimation, Refs. [9,10] exploit the sinusoidal signature of micro-motion targets and use the Hough transform to extract the micro-motion curve through a multidimensional search in the parameter domain. Refs. [11,12] estimate the micro-motion parameters by sparse approximation with the Orthogonal Matching Pursuit (OMP) algorithm. These methods are robust, but the high dimensionality of the estimated parameters makes them computationally expensive. Ref. [13] exploits the periodicity of micro-motion targets in the time-frequency domain and estimates the micro-motion period with a cyclic correlation coefficient method, but the computation grows sharply when the signal period is long. Ref. [14] computes high-order moment functions of the signal and obtains the rotation rate quickly by detecting the peak positions of the accumulated partial Fourier transforms of the high-order moments at different time lags; its complexity is lower than that of image-processing and OMP-decomposition methods, but its noise robustness is poor. Moreover, the third-party illuminators used by passive radar mostly transmit continuous-wave signals with uncontrollable waveforms; their energy mainly covers the ground, the clutter environment is complex, and the gain toward airborne targets is low. Long coherent integration improves the processing gain but creates a huge data volume. These factors require a passive-radar parameter estimation method that combines good noise robustness with low computational cost.

    When helicopter rotor blades rotate, they modulate the radar signal periodically; when a blade produces specular reflection, the rotor echo peaks, producing an echo flash. In the time-frequency image a flash appears as a frequency band of a certain width, and the flash time and flash interval are closely related to the rotor's micro-motion parameters. Addressing parameter estimation for passive radar, this paper exploits these time-frequency characteristics of the flash signal and estimates the helicopter rotor micro-motion parameters through time-frequency analysis combined with the orthogonal matching pursuit algorithm. We first present the passive-radar signal model of the helicopter rotor, then describe how the flash parameters are extracted from the time-frequency image and how OMP estimates the rotor micro-motion parameters, and finally validate the method with simulated and measured data.

    The geometry between the rotating blades and the passive radar is shown in Fig. 1. A coordinate system $(x, y, z)$ is established with the center of the rotor as the origin $o$, the rotor plane as the $xy$-plane, and the $x$-axis parallel to the line connecting the transmitter and the receiver. The ranges from the helicopter to the transmitter and the receiver are $r_T$ and $r_R$, the azimuth angles are $\gamma$ and $\alpha$, and the elevation angles are $\beta_T$ and $\beta_R$ (with the effective elevation $\beta$ defined by $\cos\beta_T\cos\beta_R=\cos^2\beta$). A scatterer $p$ on a blade lies at distance $l_P$ from the origin $o$, with azimuth angle $\varphi(t)$.

    Figure  1.  Helicopter rotor echo model for passive radar

    Assume that the helicopter's translational motion has been compensated. At time $t$, the propagation distance from the transmitter via scatterer $p$ to the receiver is

    $r_P(t)=\left\|{\boldsymbol{R}}_T-{\boldsymbol{R}}_P(t)\right\|+\left\|{\boldsymbol{R}}_R-{\boldsymbol{R}}_P(t)\right\|$ (1)

    where $\boldsymbol{R}_T$, $\boldsymbol{R}_R$, and $\boldsymbol{R}_P(t)$ are the position vectors of the transmitter, the receiver, and scatterer $p$ in the coordinate system $(x, y, z)$.

    Following the monostatic helicopter model of Ref. [1] and treating each blade as a line scatterer, the helicopter rotor echo of the passive radar can be written as

    $s(t)=L\exp\left\{-{\rm j}\dfrac{2\pi}{\lambda}(r_T+r_R)\right\}\displaystyle\sum_{k=1}^{N}{\rm sinc}\{\phi_k(t)\}\exp\{{\rm j}\phi_k(t)\}$ (2)

    where

    $\phi_k(t)=\dfrac{4\pi}{\lambda}\cdot\dfrac{L}{2}\cos\beta\cos\left(\dfrac{\alpha-\gamma}{2}\right)\cos\left(\varphi_k(t)\right)$ (3)
    $\varphi_k(t)=2\pi f_r t+\varphi_0+(k-1)\dfrac{2\pi}{N}-\dfrac{\alpha+\gamma}{2}$ (4)

    Here $f_r$ is the rotor rotation rate, $L$ the blade length, $N$ the number of blades, the integer $k$ ($0<k\le N$) indexes the $k$-th blade, $\varphi_0$ is the initial blade phase, and $\lambda$ is the wavelength of the illuminator signal.
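A minimal numerical sketch of the echo model in Eqs. (2)–(4), written in Python with NumPy. The parameter values below are illustrative assumptions, the constant carrier-phase factor is dropped, and note that `np.sinc` is the normalized sinc, so the argument is divided by π:

```python
import numpy as np

# Illustrative parameters (assumed for this sketch)
lam = 0.456                      # wavelength (m), roughly 658 MHz
fr = 200 / 60                    # rotation rate (Hz) for 200 rpm
L, N, phi0 = 5.0, 3, 0.0         # blade length (m), blade count, initial phase
alpha, gamma = np.deg2rad(76), np.deg2rad(33)   # Rx / Tx azimuths
beta = np.deg2rad(23)            # effective elevation angle

t = np.arange(0.0, 0.3, 1e-4)

def varphi(t, k):
    # Eq. (4): rotation phase of blade k
    return 2*np.pi*fr*t + phi0 + (k - 1)*2*np.pi/N - (alpha + gamma)/2

def phi(t, k):
    # Eq. (3): modulation phase of blade k
    amp = (4*np.pi/lam) * (L/2) * np.cos(beta) * np.cos((alpha - gamma)/2)
    return amp * np.cos(varphi(t, k))

# Eq. (2) without the constant carrier-phase factor;
# np.sinc(x) = sin(pi x)/(pi x), hence the division by pi
s = sum(np.sinc(phi(t, k)/np.pi) * np.exp(1j*phi(t, k)) for k in range(1, N + 1))
```

At a flash instant one blade satisfies $\varphi_k=\pm\pi/2$, its sinc factor reaches 1, and $|s(t)|$ peaks; between flashes every sinc factor is small.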

    From Eq. (3), the instantaneous Doppler shift induced by the $k$-th blade is

    $f_k(t)=-\dfrac{2\pi f_r L}{\lambda}\cos\beta\cos\left(\dfrac{\alpha-\gamma}{2}\right)\sin\left(\varphi_k(t)\right)$ (5)

    Eq. (2) shows that the time-domain amplitude is modulated by the sinc function. Combining with Eq. (3), when $\varphi_k(t)$ satisfies Eq. (6) we have $\phi_k(t)=0$ and the time-domain amplitude reaches its maximum, i.e., a time-domain flash occurs.

    $\varphi_k(t)=\pm\dfrac{\pi}{2}+2\pi n$ (6)

    From Eq. (2), the time interval between two consecutive flashes is

    $\Delta t=\begin{cases}\dfrac{1}{2Nf_r}, & N\ \text{odd}\\[2mm] \dfrac{1}{Nf_r}, & N\ \text{even}\end{cases}$ (7)
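Eq. (7) is easy to sanity-check numerically; a small helper (illustrative):

```python
def flash_interval(N, fr):
    """Eq. (7): interval between consecutive echo flashes for a rotor
    with N blades rotating at fr revolutions per second."""
    return 1.0/(2*N*fr) if N % 2 else 1.0/(N*fr)

# Example: a 3-blade rotor at 200 rpm flashes every 0.05 s,
# while a 4-blade rotor at the same speed flashes every 0.075 s
dt_odd = flash_interval(3, 200/60)
dt_even = flash_interval(4, 200/60)
```

With an odd blade count, the positive- and negative-Doppler flashes interleave and halve the interval; with an even count, opposite blades flash simultaneously.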

    The micro-Doppler of the helicopter rotor echo varies nonlinearly, and time-frequency analysis of the echo reveals the time-varying behavior of the signal frequency. The Short-Time Fourier Transform (STFT) is computationally simple and produces no cross terms. Applying the STFT to the rotor echo $s(t)$ gives

    $TF(t,f)=\displaystyle\int s(\tau)w(\tau-t){\rm e}^{-{\rm j}2\pi f\tau}{\rm d}\tau$ (8)

    where $w(t)$ is the window function. The micro-Doppler signature of the rotor is a sinusoidal curve; at the instants of the time-domain flashes, frequency bands perpendicular to the time axis appear, i.e., time-frequency-domain "flashes" [15].

    Fig. 2 shows the time-frequency image of a helicopter rotor echo. When the number of blades is odd (Fig. 2(a)), the positive- and negative-Doppler "flashes" appear alternately in the time-frequency domain; when it is even (Fig. 2(b)), they appear simultaneously.

    Figure  2.  Time-frequency analysis of the rotor echo

    Let $t_0$ be the time of a positive-frequency "flash" in the time-frequency domain. From Eqs. (5) and (6), $t_0$ satisfies

    $\varphi_k(t_0)=\dfrac{\pi}{2}+2\pi n\quad(n\ \text{an integer})$ (9)

    From Eqs. (4) and (9), the initial phase of the $k$-th blade is related to the number of blades by

    $\varphi_0=\begin{cases}-\pi\dfrac{t_0}{\Delta t}\cdot\dfrac{1}{N}-2\pi(k-1)\dfrac{1}{N}+\varphi_1, & N\ \text{odd}\\[2mm] -2\pi\dfrac{t_0}{\Delta t}\cdot\dfrac{1}{N}-2\pi(k-1)\dfrac{1}{N}+\varphi_1, & N\ \text{even}\end{cases}$ (10)

    where

    $\varphi_1=\dfrac{\alpha+\gamma}{2}+2\pi n+\dfrac{\pi}{2}\quad(0\le\varphi_0<2\pi)$ (11)

    Since the flash frequency bands in the time-frequency image are perpendicular to the time axis, the positive-frequency flash times can be obtained by accumulating the magnitudes along the positive-frequency axis and locating the local peaks of the accumulated data. Likewise, accumulating the magnitudes along the negative-frequency axis yields the negative-frequency flash times. The flash interval then follows directly.
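The flash-time extraction just described can be sketched with SciPy's STFT. The toy echo below lumps the geometry factor of Eq. (3) into a single assumed amplitude `A`, and the peak threshold is illustrative:

```python
import numpy as np
from scipy import signal

fs = 10_000
fr, N, A = 200/60, 3, 59.0          # rotation rate, blades, lumped phase amplitude (assumed)
t = np.arange(0.0, 0.4, 1/fs)

def phi(k):                          # Eq. (3) with the geometry factor lumped into A
    return A*np.cos(2*np.pi*fr*t + (k - 1)*2*np.pi/N)

s = sum(np.sinc(phi(k)/np.pi)*np.exp(1j*phi(k)) for k in range(1, N + 1))

# Accumulate STFT magnitudes along the positive / negative frequency axes
f, tt, Z = signal.stft(s, fs=fs, nperseg=256, return_onesided=False)
mag = np.abs(Z)
pos = mag[f > 0].sum(axis=0)
neg = mag[f < 0].sum(axis=0)

# Local peaks of the accumulated columns mark the flash times
pos_peaks = tt[signal.find_peaks(pos, height=pos.mean() + 2*pos.std())[0]]
neg_peaks = tt[signal.find_peaks(neg, height=neg.mean() + 2*neg.std())[0]]
```

For this 3-blade example the positive-frequency flashes recur every $1/(Nf_r)=0.1$ s, interleaved with the negative-frequency flashes at the $\Delta t=0.05$ s spacing of Eq. (7).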

    Eq. (7) shows that the flash interval is closely related to the rotor rotation rate and the number of blades, and Eq. (10) shows that the flash time is closely related to the blade initial phase, the number of blades, and the integer $k$. Hence, using the measured flash interval, the rotor rotation rate can be expressed in terms of the number of blades; using the measured flash time, the initial phase of the $k$-th blade can be expressed in terms of the number of blades and the integer $k$.

    From Eq. (2), the time-domain echo can be decomposed as

    $s(t)=\displaystyle\sum_{m=1}^{M}c_m g_m(t;\Lambda)=\boldsymbol{D}\boldsymbol{\alpha}$ (12)

    where $g_m$ is the $m$-th atom, $\boldsymbol{D}=[g_1\ g_2\ g_3\cdots g_M]\in\mathbb{C}^{N_t\times M}$ is the dictionary matrix whose columns are the atoms, $M$ is the number of atoms, $N_t$ is the number of discrete time samples, $\Lambda$ denotes the parameters to be estimated, $c_m$ is the atom coefficient, and $\boldsymbol{\alpha}\in\mathbb{C}^{M}$ is the coefficient vector, which is sparse. The sparse vector can be recovered by solving the equivalent optimal $l_0$-norm problem. OMP is commonly used for such problems: it builds a dictionary matrix and repeatedly selects the atom that best matches the signal for sparse approximation [16]. By orthogonalizing the selected atoms of the dictionary, OMP guarantees the optimality of each iteration.
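A compact, generic OMP sketch (not the paper's code): each iteration picks the atom most correlated with the residual and then re-fits all selected atoms by least squares, which is what keeps the iterations orthogonal:

```python
import numpy as np

def omp(D, y, n_iter):
    """Orthogonal Matching Pursuit on dictionary D (columns = unit-norm atoms)."""
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(n_iter):
        # pick the atom best matching the current residual
        support.append(int(np.argmax(np.abs(D.conj().T @ residual))))
        # re-fit on the whole support (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

# Toy check: recover a 2-sparse signal over a random orthonormal dictionary
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((20, 10)))
y = 0.9*D[:, 3] + 0.1*D[:, 7]
support, coef = omp(D, y, 2)
```

Because the toy dictionary is orthonormal, the first iteration picks atom 3 (the strongest component) and the second picks atom 7 from the residual.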

    From Eq. (2), the helicopter rotor echo is determined by the parameters $(f_r, L, \varphi_0, N, k)$. Using Eqs. (7) and (10), which relate the number of blades $N$ and the integer $k$ ($0<k\le N$) to the rotor rotation rate and the blade initial phase, the time-domain echo can be parameterized by $(L, N, k)$ alone. With $N_t$ time samples, the target echo is an $N_t\times 1$ vector. The ranges of the parameters to be estimated are determined and discretized: blade length $L\in(L_1,\cdots,L_r,\cdots,L_{N_L})$, number of blades $N\in(N_1,\cdots,N_p,\cdots,N_{N_N})$, and integer $k\in(1,\cdots,k_q,\cdots,k_{N_k})$ ($k_{N_k}\le N_{N_N}$).

    By the principle of OMP, the dictionary atoms can be constructed according to the intrinsic characteristics of the signal to be decomposed [16]. From the time-domain echo model of the micro-motion target, Eq. (2), the $m$-th atom can be written as

    $a(m)={\rm sinc}\left(\phi(L_r,N_p,k_q)\right)\exp\left\{{\rm j}\phi(L_r,N_p,k_q)\right\}$ (13)

    where the atom index $m$ enumerates the parameter triples

    $m\leftrightarrow(r,p,q)$ (14)

    and each atom in the set is normalized to unit energy:

    $a(m)\leftarrow a(m)/\left\|a(m)\right\|_{\rm F}$ (15)

    where $\|\cdot\|_{\rm F}$ denotes the Frobenius norm.

    The estimation of the five parameters $(f_r, L, \varphi_0, N, k)$ is thus converted into the estimation of the three parameters $(L, N, k)$. Let $N_L$, $N_N$, $N_k$ be the numbers of candidate values of $L$, $N$, $k$. Since common helicopter main rotors have 3, 5, or 7 blades (odd) or 2, 4, or 8 blades (even), $N_N$ and $N_k$ are small, which reduces the dictionary dimension to $N_L\times N_k\times N_N$ and thereby the computational cost.
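Dictionary construction over the reduced grid $(L, N, k)$ can be sketched as follows. This is an illustrative reading of Eqs. (13)–(15): the geometry factor $\cos\beta\cos\left(\frac{\alpha-\gamma}{2}\right)$ is lumped into an assumed constant `ang`, each atom is a single-blade echo, an odd blade count is assumed, and the flash measurements `dt`, `t0`, `phi1` are hypothetical inputs from Step 4:

```python
import numpy as np

# Assumed measurements / constants for the sketch
dt, t0, phi1 = 0.05, 0.066, 2.52     # flash interval, flash time, Eq. (11) term
lam, ang = 0.456, 0.856              # wavelength; lumped cos(beta)cos((alpha-gamma)/2)
t = np.arange(0.0, 0.3, 1e-4)

def atom(L, N, k):
    """Eq. (13): single-blade echo atom for the triple (L, N, k),
    unit-normalized per Eq. (15). Odd blade count assumed, so Eq. (7)
    gives fr and the first case of Eq. (10) gives phi0."""
    fr = 1/(2*N*dt)
    phi0 = -np.pi*(t0/dt)/N - 2*np.pi*(k - 1)/N + phi1
    ph = (4*np.pi/lam)*(L/2)*ang*np.cos(2*np.pi*fr*t + phi0)
    a = np.sinc(ph/np.pi)*np.exp(1j*ph)
    return a/np.linalg.norm(a)

Ls = np.arange(4.0, 6.01, 0.25)      # candidate blade lengths (m)
Ns = [3, 5]                          # candidate odd blade counts
D = np.column_stack([atom(L, N, k) for L in Ls for N in Ns for k in range(1, N + 1)])
```

Running OMP on such a dictionary selects one atom per blade (same $L$ and $N$, different $k$), which is consistent with the per-blade length estimates reported later.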

    The helicopter rotor parameter estimation proceeds as follows:

    Step 1: Apply the short-time Fourier transform to the rotor signal to obtain the time-frequency image $TF(t,f)$.

    Step 2: Accumulate the magnitudes along the positive-frequency axis of the time-frequency image and locate the local peaks of the accumulated data, which correspond to the positive-frequency "flash" times. Likewise, accumulate the magnitudes along the negative-frequency axis to obtain the negative-frequency "flash" times.

    Step 3: From the flash times found in Step 2, determine whether the positive- and negative-Doppler "flashes" alternate. If they do, the number of blades is odd; otherwise it is even.

    Step 4: Read the time $t_0$ of a positive-frequency flash and the flash interval $\Delta t$. Express the rotor rotation rate in terms of the number of blades $N$ via Eq. (7), and the initial phase of the $k$-th blade in terms of $N$ and the integer $k$ via Eqs. (10) and (11).

    Step 5: Determine and discretize the ranges of $(L, N, k)$: $L\in(L_1,\cdots,L_r,\cdots,L_{N_L})$, $N\in(N_1,\cdots,N_p,\cdots,N_{N_N})$, $k\in(1,\cdots,k_q,\cdots,k_{N_k})$ ($k_{N_k}\le N_{N_N}$). Using the rotation rate and initial phase expressed in Step 4, build the dictionary matrix via Eqs. (13) and (15).

    Step 6: Use OMP to find the optimal number of blades and blade length; substitute them into Eq. (7) to compute the rotor rotation rate, and into Eqs. (10) and (11) to compute the blade initial phases.

    The helicopter rotor echo is simulated with the above model; the simulation parameters are listed in Table 1.

    Table  1.  Simulation parameters of the helicopter rotor echo model for passive radar
    Carrier frequency | Blades | Blade length | Rotation rate | Tx azimuth | Rx azimuth | Tx elevation | Rx elevation | SNR
    658 MHz | 3 | 5 m | 200 rpm | 33° | 76° | 23° | 23° | –5 dB

    Fig. 3(a) shows the joint time-frequency signature of the signal. The flashes and the noise severely affect the detection of the rotor micro-Doppler curve, making it difficult to extract.

    Accumulating the magnitudes along the positive- and negative-frequency axes of the time-frequency image yields the positive- and negative-Doppler "flash" times shown in Fig. 3(b). The positive and negative flashes alternate at equal intervals, so the number of blades is odd. The measured flash interval is 0.05 s, so by Eq. (7) the rotation rate is

    $f_r=10/N$ (16)

    The time of one positive-frequency flash is read as 0.066 s (the first positive-frequency flash in Fig. 3(b)); by Eqs. (10) and (11), the initial phase of the $k$-th blade is

    $\varphi_0=-4.14/N-6.28\times(k-1)/N+2.52$ (17)

    Fig. 3(c) shows the OMP estimate of $(L, N, k)$: the number of blades is 3, and the corresponding cut planes are shown in the figure. The three blade lengths are 4.99 m, 5.00 m, and 4.98 m, mean 4.99 m, in good agreement with theory. Substituting into Eq. (17) gives blade initial phases of 1.14 rad, 3.24 rad, and 5.34 rad, and Eq. (16) gives a rotation rate of 200 rpm, consistent with the theoretical value. The proposed method therefore estimates the helicopter rotor parameters accurately.
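The numbers quoted above follow directly from Eqs. (7), (10), and (11); a quick arithmetic check (with $n=0$ in Eq. (11) and phases folded into $[0, 2\pi)$):

```python
import numpy as np

dt, t0 = 0.05, 0.066                     # flash interval and first positive flash (s)
alpha, gamma = np.deg2rad(76), np.deg2rad(33)
phi1 = (alpha + gamma)/2 + np.pi/2       # Eq. (11) with n = 0

N = 3                                    # blade count found by OMP
fr = 1/(2*N*dt)                          # Eq. (7), odd blade count
rpm = fr*60
phi0 = [(-np.pi*(t0/dt)/N - 2*np.pi*(k - 1)/N + phi1) % (2*np.pi) for k in (1, 2, 3)]
```

The rotation rate comes out at 200 rpm, and the three phases agree with the quoted 1.14, 3.24, and 5.34 rad to within rounding.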

    Figure  3.  Parameter estimation results of the proposed method

    Fig. 4 shows the estimates of $(f_r, \varphi_0, L)$ obtained with the conventional Hough transform by detecting the micro-Doppler curve $f=f_{\max}\sin(2\pi f_r t+\varphi_0)$, where $f_{\max}$ is the maximum Doppler shift:

    $f_{\max}=\dfrac{4\pi f_r L}{\lambda}\cos\beta\cos\left(\dfrac{\alpha-\gamma}{2}\right)$ (18)

    Fig. 4 marks the centers of the local peaks in the parameter space. The estimated rotor rotation rate is 200 rpm. The maximum Doppler shifts of the three blades are 385.6 Hz, 393.9 Hz, and 389.8 Hz, mean 390.0 Hz; Eq. (18) then gives a blade length of 4.96 m, close to the theoretical value. The three blade initial phases are 0.91 rad, 3.16 rad, and 5.24 rad; after correction with Eq. (4) they become 1.86 rad, 4.11 rad, and 6.19 rad, which contain large errors. This is because the STFT is limited by the uncertainty principle: the limited time-frequency resolution spreads the local peaks over a wide region of the parameter space, so their positions can only be located roughly and the estimation accuracy is low.
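The blade length quoted for the Hough estimate can be reproduced by inverting Eq. (18) (assuming the effective elevation satisfies $\cos^2\beta=\cos\beta_T\cos\beta_R$ with equal 23° elevations, so $\beta=23°$):

```python
import numpy as np

lam = 299792458/658e6                    # wavelength at 658 MHz (m)
fr = 200/60                              # estimated rotation rate (Hz)
fmax = 390.0                             # mean of the three peak Doppler shifts (Hz)
alpha, gamma, beta = np.deg2rad(76), np.deg2rad(33), np.deg2rad(23)

# Invert Eq. (18) for the blade length
L = fmax*lam/(4*np.pi*fr*np.cos(beta)*np.cos((alpha - gamma)/2))
```

The result is about 4.95 m, matching the 4.96 m quoted in the text to within rounding of $f_{\max}$.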

    Figure  4.  Parameter estimation results of the conventional Hough transform

    Let the time-frequency image be $N_t\times N_f$ pixels with $N_f\le N_t$. Detecting the micro-Doppler curve $f=f_{\max}\sin(2\pi f_r t+\varphi_0)$ with the conventional Hough transform, with the parameters $(f_r,\varphi_0,L)$ quantized into $N_{f_r}$, $N_{\varphi_0}$, and $N_L$ levels respectively, requires approximately $2N_{f_r}N_{\varphi_0}N_LN_t^2$ multiplications.

    Estimating $(f_r,\varphi_0,L)$ directly with OMP, with $K$ iterations, requires approximately $KN_{f_r}N_{\varphi_0}N_LN_t^2$ multiplications.

    The computation of the proposed method is concentrated in the OMP stage. Using the flash times extracted from the time-frequency domain and Eqs. (7) and (10), the problem finally reduces to estimating $(L, N, k)$. Common helicopters have only a few possible blade counts, and the parity of the blade count is determined by the method of Section 3.1, so $N_NN_k$ is far smaller than $N_{f_r}N_{\varphi_0}$. The proposed method requires approximately $KN_NN_kN_LN_t^2$ multiplications.

    For helicopter rotor micro-motion parameter estimation, $N_NN_k$ is typically of order $10^1$–$10^2$ and the number of iterations $K$ of order $10^0$–$10^1$. With an initial-phase accuracy of 7° for $\varphi_0$, $2N_{\varphi_0}$ is of order $10^2$; with a rotation-rate accuracy of 10 rpm for $f_r$, $N_{f_r}$ is of order $10^1$. In terms of multiplications, the conventional Hough transform therefore costs $10^0$–$10^2$ times as much as the proposed method, and the gap widens further as the estimation accuracy of $f_r$ and $\varphi_0$ is raised. Under the same configuration on the Matlab simulation platform, the conventional Hough transform method ran for 13006 s, whereas the proposed method took only 145 s in total.
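The order-of-magnitude claim can be checked with representative grid sizes (all values below are assumptions for illustration, not the paper's exact settings):

```python
# Multiplication counts from the complexity expressions above
Nt = 1024                          # time samples (cancels in the ratio)
N_fr, N_phi0, N_L = 20, 50, 40     # quantization of fr, phi0, L (assumed)
K, N_N, N_k = 5, 6, 8              # OMP iterations; blade-count and k grids (assumed)

hough = 2*N_fr*N_phi0*N_L*Nt**2    # conventional Hough transform
ours = K*N_N*N_k*N_L*Nt**2         # proposed method
ratio = hough/ours                 # N_L and Nt cancel: 2*N_fr*N_phi0/(K*N_N*N_k)
```

With these assumed grids the ratio is about 8, inside the stated $10^0$–$10^2$ range; refining the $f_r$ and $\varphi_0$ grids grows the numerator only.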

    The Radio Propagation Laboratory of Wuhan University conducted a field experiment on the micro-Doppler effect with an EC120B helicopter, whose main rotor has 3 blades of length 5 m and a rated speed of 406 rpm. The digital television signal of the Wuhan Guishan TV tower served as the illuminator, with a center frequency of 658 MHz and a bandwidth of 8 MHz. The receiver was located on the roof of the laboratory, 7.56 km from the transmitter; the experimental scene is shown in Fig. 5. The coherent integration time of this data set is 0.8 s, over which the target position can be regarded as unchanged and the rotor rotation rate as constant.

    Figure  5.  Experimental scene

    Fig. 6(a) shows the time-frequency image of the rotor echo after the STFT, with the contribution of the helicopter body removed. The flashes are visible, but the micro-Doppler curve can no longer be observed. Accumulating the magnitudes along the positive- and negative-frequency axes of the time-frequency image yields the positive- and negative-Doppler "flash" times shown in Fig. 6(b); the flashes alternate at equal intervals, so the number of blades is odd.

    Figure  6.  Parameter estimation results of the proposed method on measured data

    The measured flash interval is 26.2 ms; Eq. (7) then expresses the rotation rate in terms of $N$. The time of one positive-frequency flash is read as 0.25 s, and Eqs. (10) and (11) express the initial phase of the $k$-th blade. Fig. 6(c) shows the OMP estimate of $(L, N, k)$: the number of blades is 3, and the three blade lengths are 4.93 m, 5.00 m, and 4.66 m, mean 4.86 m. The small errors are related to inaccuracies in the estimated elevation and azimuth angles. Eq. (7) gives a rotor rotation rate of 382 rpm, consistent with the actual situation.

    Based on the passive-radar micro-motion signal model of the helicopter rotor, this paper fully exploits the flash signatures in the time-frequency domain and the intrinsic characteristics of the micro-motion signal for parameter estimation. Using time-frequency analysis and the orthogonal matching pursuit algorithm, the rotor rotation rate, blade length, number of blades, and initial phases are estimated. A field experiment was also carried out. Results on both simulated and measured data demonstrate the feasibility of the proposed method for passive-radar helicopter rotor parameter estimation.

  • [1]
    杜兰, 王兆成, 王燕, 等. 复杂场景下单通道SAR目标检测及鉴别研究进展综述[J]. 雷达学报, 2020, 9(1): 34–54. doi: 10.12000/JR19104

    DU Lan, WANG Zhaocheng, WANG Yan, et al. Survey of research progress on target detection and discrimination of single-channel SAR images for complex scenes[J]. Journal of Radars, 2020, 9(1): 34–54. doi: 10.12000/JR19104
    [2]
    吴亮, 雷斌, 韩冰, 等. 卫星姿态误差对多通道SAR成像质量的影响[J]. 测绘通报, 2015, (1): 124–130. doi: 10.13474/j.cnki.11-2246.2015.0026

    WU Liang, LEI Bin, HAN Bing, et al. The impact of satellite attitude error on multi-channel SAR image quality[J]. Bulletin of Surveying and Mapping, 2015, (1): 124–130. doi: 10.13474/j.cnki.11-2246.2015.0026
    [3]
    张双喜, 乔宁, 邢孟道, 等. 多普勒频谱模糊情况下的星载方位向多通道高分宽幅SAR-GMTI杂波抑制方法[J]. 雷达学报, 2020, 9(2): 295–303. doi: 10.12000/JR20005

    ZHANG Shuangxi, QIAO Ning, XING Mengdao, et al. A novel clutter suppression approach for the space-borne multiple channel in the azimuth high-resolution and wide-swath SAR-GMTI system with an ambiguous Doppler spectrum[J]. Journal of Radars, 2020, 9(2): 295–303. doi: 10.12000/JR20005
    [4]
    ZHANG Shuangxi, XING Mengdao, XIA Xianggen, et al. Multichannel HRWS SAR imaging based on range-variant channel calibration and multi-Doppler-direction restriction ambiguity suppression[J]. IEEE Transactions on Geoscience and Remote Sensing, 2014, 52(7): 4306–4327. doi: 10.1109/TGRS.2013.2281329
    [5]
    PAN Zongxu, LIU Lei, QIU Xiaolan, et al. Fast vessel detection in Gaofen-3 SAR images with ultrafine strip-map mode[J]. Sensors, 2017, 17(7): 1578. doi: 10.3390/s17071578
    [6]
    DI MARTINO G, IODICE A, RICCIO D, et al. Filtering of azimuth ambiguity in stripmap synthetic aperture radar images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2014, 7(9): 3967–3978. doi: 10.1109/JSTARS.2014.2320155
    [7]
    温雪娇, 仇晓兰, 尤红建, 等. 高分辨率星载SAR起伏运动目标精细聚焦与参数估计方法[J]. 雷达学报, 2017, 6(2): 213–220. doi: 10.12000/JR17005

    WEN Xuejiao, QIU Xiaolan, YOU Hongjian, et al. Focusing and parameter estimation of fluctuating targets in high resolution spaceborne SAR[J]. Journal of Radars, 2017, 6(2): 213–220. doi: 10.12000/JR17005
    [8]
    WEN Xuejiao, QIU Xiaolan, and YOU Hongjian. Focusing and parameter estimating of fluctuating target in high resolution spaceborne SAR[C]. 2016 CIE International Conference on Radar, Guangzhou, China, 2016: 1–5. doi: 10.1109/RADAR.2016.8059537.
    [9]
    REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149. doi: 10.1109/TPAMI.2016.2577031
    [10]
    LIU Wei, ANGUELOV D, ERHAN D, et al. SSD: Single shot MultiBox detector[C]. The 14th European Conference on Computer Vision, Amsterdam, Holland, 2016. doi: 10.1007/978-3-319-46448-0_2.
    [11]
    LI Jianwei, QU Changwen, and SHAO Jiaqi. Ship detection in SAR images based on an improved faster R-CNN[C]. 2017 SAR in Big Data Era: Models, Methods and Applications, Beijing, China, 2017: 1–6. doi: 10.1109/BIGSARDATA.2017.8124934.
    [12]
    KANG Miao, LENG Xiangguang, LIN Zhao, et al. A modified faster R-CNN based on CFAR algorithm for SAR ship detection[C]. 2017 International Workshop on Remote Sensing with Intelligent Processing, Shanghai, China, 2017: 1–4. doi: 10.1109/RSIP.2017.7958815.
    [13]
    LIU Lei, CHEN Guowei, PAN Zongxu, et al. Inshore ship detection in SAR images based on deep neural networks[C]. 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018: 25–28. doi: 10.1109/IGARSS.2018.8519555.
    [14]
    ZHANG Fan, WANG Yunchong, NI Jun, et al. SAR target small sample recognition based on CNN cascaded features and AdaBoost rotation forest[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(6): 1008–1012. doi: 10.1109/LGRS.2019.2939156
    [15]
    LENG Xiangguang, JI Kefeng, ZHOU Shilin, et al. Ship detection based on complex signal kurtosis in single-channel SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(9): 6447–6461. doi: 10.1109/TGRS.2019.2906054
    [16]
    LENG Xiangguang, JI Kefeng, ZHOU Shilin, et al. Discriminating ship from radio frequency interference based on noncircularity and non-gaussianity in sentinel-1 SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(1): 352–363. doi: 10.1109/TGRS.2018.2854661
    [17]
    ZHANG Zhimian, WANG Haipeng, XU Feng, et al. Complex-valued convolutional neural network and its application in polarimetric SAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(12): 7177–7188. doi: 10.1109/TGRS.2017.2743222
    [18]
    HUANG Zhongling, DATCU M, PAN Zongxu, et al. Deep SAR-Net: Learning objects from signals[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 161: 179–193. doi: 10.1016/j.isprsjprs.2020.01.016
    [19]
    TANG Jiaxin, ZHANG Fan, ZHOU Yongsheng, et al. A fast inference networks for SAR target few-shot learning based on improved siamese networks[C]. 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 2019: 1212–1215. doi: 10.1109/IGARSS.2019.8898180.
    [20]
    OUCHI K, TAMAKI S, YAGUCHI H, et al. Ship detection based on coherence images derived from cross correlation of multilook SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2004, 1(3): 184–187. doi: 10.1109/LGRS.2004.827462
    [21]
    MARINO A, SANJUAN-FERRER M J, HAJNSEK I, et al. Ship detection with spectral analysis of synthetic aperture radar: A comparison of new and well-known algorithms[J]. Remote Sensing, 2015, 7(5): 5416–5439. doi: 10.3390/rs70505416
    [22]
    RENGA A, GRAZIANO M D, and MOCCIA A. Segmentation of marine SAR images by sublook analysis and application to sea traffic monitoring[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(3): 1463–1477. doi: 10.1109/TGRS.2018.2866934
    [23]
    BREKKE C, ANFINSEN S N, and LARSEN Y. Subband extraction strategies in ship detection with the subaperture cross-correlation magnitude[J]. IEEE Geoscience and Remote Sensing Letters, 2013, 10(4): 786–790. doi: 10.1109/LGRS.2012.2223656
    [24]
    SOUYRIS J C, HENRY C, and ADRAGNA F. On the use of complex SAR image spectral analysis for target detection: Assessment of polarimetry[J]. IEEE Transactions on Geoscience and Remote Sensing, 2003, 41(12): 2725–2734. doi: 10.1109/TGRS.2003.817809
    [25]
    FERRO-FAMIL L, REIGBER A, POTTIER E, et al. Scene characterization using subaperture polarimetric SAR data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2003, 41(10): 2264–2276. doi: 10.1109/TGRS.2003.817188
    [26]
    DUMITRU C O, SCHWARZ G, and DATCU M. Land cover semantic annotation derived from high-resolution SAR images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2016, 9(6): 2215–2232. doi: 10.1109/JSTARS.2016.2549557
    [27]
    HUANG Zhongling, DUMITRU C O, PAN Zongxu, et al. Classification of large-scale high-resolution SAR images with deep transfer learning[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 18(1): 107–111. doi: 10.1109/LGRS.2020.2965558
    [28]
    TerraSAR-X Basic Product Specification Document, Issue1.9[Online]. http://sss.terrasar-x.dlr.de/pdfs/TX-GS-DD-3302.pdf, 2013.
    [29]
    HUANG Zhongling, PAN Zongxu, and LEI Bin. What, where, and how to transfer in SAR target recognition based on deep CNNs[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(4): 2324–2336. doi: 10.1109/TGRS.2019.2947634