
Citation: WAN Xuanshen, LIU Wei, NIU Chaoyang, et al. Black-box attack algorithm for SAR-ATR deep neural networks based on MI-FGSM[J]. Journal of Radars, 2024, 13(3): 714–729. doi: 10.12000/JR23220
With advances in science and technology, equipment in modern warfare trends toward integration, yet adding ever more devices must not further degrade the electromagnetic environment around a platform or increase its load. An unmanned aerial vehicle, for example, must integrate multiple equipment functions on a small platform while retaining maneuverability and versatility. Radar and communication systems are two electronic systems widely fitted on such platforms [1]; integrating radar and communications [2,3] would greatly improve the overall utilization of the electronic systems.
Since the concept of integrated radar and communication emerged in the 1960s, research has followed three regimes: time-division, beam-division, and simultaneous operation. The time-division regime cannot perform radar detection while communicating, leaving a radar blind period during communication, but it is the easiest to implement and has therefore been studied most. The beam-division regime partitions a phased array into regions and uses each sub-aperture for a different function. The simultaneous regime fuses the radar and communication signals so that detection and data transmission occur at the same time on the same platform; it offers the highest degree of integration and is the direction in which integrated radar-communication is heading. Its key technology lies mainly in shared-signal design, which must reconcile communication data transmission with radar detection. Existing shared-signal designs fall into three classes: ① superposing independently generated radar and communication signals [4]; ② starting from a communication signal and reshaping it into a radar probing waveform; ③ starting from a radar signal and modulating communication data onto it [5]. Ref. [6] studied an integrated system that uses the up- and down-sweeps of a linear frequency modulated (chirp) signal as the radar and communication waveforms respectively, superposes them into a shared signal, and separates them at the receiver by their orthogonality; however, the communication rate is severely limited and radar performance degrades. Ref. [7] studied an integrated waveform based on Orthogonal Frequency Division Multiplexing (OFDM), but OFDM is not constant-envelope: its high peak-to-average power ratio is ill-suited to amplification in a radar's class-C amplifier, and its sensitivity to Doppler shift restricts it to short-range communication and detection. Ref. [8] studied modulating communication data by Minimum Shift Keying (MSK) onto a linear frequency modulated signal, obtaining a waveform that supports both radar detection and communication. Ref. [9] studied keying the initial frequency of a chirp signal to modulate communication data, but the radar's matched filter must then change with every transmitted pulse.
This paper proposes a multicarrier radar-communication shared signal based on chirp parameter modulation. The main carrier performs radar detection, while the sub-carrier's chirp rate and initial frequency are selectable and thus carry the communication data. In shared-signal design, the randomness of the communication data usually weakens the correlation between pulses, yet coherent integration for radar detection then requires a receiver matched filter that tracks each transmitted pulse, greatly burdening the radar system. The proposed signal exploits the deterministic main carrier to preserve pulse-to-pulse correlation, so the radar processor needs no extra units and uses the same processing chain as the original radar, while chirps of different initial frequencies and chirp rates balance bandwidth utilization against orthogonality. The paper analyzes the ambiguity function of the designed shared waveform and the orthogonality between the main and sub-carriers; at the receiver, the data are demodulated by a fractional Fourier transform according to where the detected energy concentrates.
In shared-signal design, modulating communication information onto the radar waveform makes the pulses differ from one another because the data are random, requiring extra radar signal-processing units and adding burden. To reduce pulse-to-pulse differences and simplify radar target detection, the shared signal is designed with a main carrier and a sub-carrier: the main carrier performs radar target detection while the sub-carrier carries the communication information [10,11].
The sub-carrier is selected by the symbols to be transmitted from a bank of chirp signals

$$s_{kl}(t)=A_2\exp\left[\mathrm{j}\left(\pi\mu_k t^2+2\pi f_l t\right)\right],\quad k=0,1,\cdots,N_1-1;\ l=0,1,\cdots,N_2-1 \tag{1}$$
where the chirp rates $\mu_k$ are equally spaced and the initial frequencies $f_l$ are selectable; each $(k,l)$ combination carries one communication symbol.
The main carrier is fixed as a chirp whose chirp rate exceeds the selection range of the sub-carrier chirp rates, i.e.

$$s_r(t)=A_1\exp\left[\mathrm{j}\left(\pi\mu_r t^2+2\pi f_r t\right)\right] \tag{2}$$
The diversity of chirp rates gives the main and sub-carriers good quasi-orthogonality [13]. The parameter selection ranges are shown in Fig. 2: the main carrier is a fixed chirp, the sub-carrier is one chirp chosen from the many parameter combinations, and the shared signal is
$$s(t)=s_r(t)+s_{kl}(t),\quad -\tau/2\le t\le\tau/2 \tag{3}$$
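A minimal numerical sketch of the construction in Eqs. (1)–(3); all parameter values below are illustrative, not the paper's simulation settings:

```python
import numpy as np

def chirp(t, mu, f0, amp=1.0):
    # complex chirp amp*exp(j*(pi*mu*t^2 + 2*pi*f0*t)), the form of Eqs. (1)-(2)
    return amp * np.exp(1j * (np.pi * mu * t**2 + 2 * np.pi * f0 * t))

tau = 10e-6                    # pulse width (illustrative)
fs = 100e6                     # sampling rate (illustrative)
t = np.arange(-tau / 2, tau / 2, 1 / fs)

mu_r, f_r = 8e12, 0.0          # fixed main-carrier parameters, Eq. (2)
mu_k, f_l = 2e12, 5e6          # sub-carrier parameters selected by the data, Eq. (1)

s_r = chirp(t, mu_r, f_r)      # main carrier for radar detection
s_kl = chirp(t, mu_k, f_l)     # sub-carrier carrying one communication symbol
s = s_r + s_kl                 # shared signal, Eq. (3)
```

Because the two carriers simply superpose, their amplitudes $A_1$ and $A_2$ can later be scaled to set the main-to-sub power ratio.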
The communication symbols in the shared signal are demodulated by the FRactional Fourier Transform (FRFT); the FRFT of a chirp is
$$S_\alpha(u)=A\sqrt{1-\mathrm{j}\cot\alpha}\,\exp\left(\mathrm{j}\pi u^2\cot\alpha\right)\int_{-\tau/2}^{\tau/2}\mathrm{e}^{\,\mathrm{j}\pi(\mu+\cot\alpha)t^2+\mathrm{j}2\pi(f_0-u\csc\alpha)t}\,\mathrm{d}t \tag{4}$$
where $\alpha$ is the rotation angle of the transform, $\alpha=p\pi/2$ with $p$ the transform order. When the order and the fractional-domain coordinate satisfy

$$\begin{cases}\mu=-\cot\alpha\\ f=u\csc\alpha\end{cases}\;\Rightarrow\;\begin{cases}p=-2\operatorname{arccot}\mu/\pi\\ u=-f\sin(\operatorname{arccot}\mu)\end{cases} \tag{5}$$

an amplitude peak appears.
As shown in Fig. 3, only the optimal-order FRFT produces a peak in the fractional Fourier domain. The receiver applies the FRFT to the received shared signal, detects the peak, and obtains the transform order and fractional-domain coordinate at which it occurs; from the relation between the FRFT and the chirp parameters, the sub-carrier's chirp rate and initial frequency are solved, which maps back to the modulated symbols [15].
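The peak search above can be sketched with a dechirp-and-FFT equivalent of the optimal-order FRFT: a chirp dechirped at its own rate collapses to a pure tone, mirroring the energy concentration at the optimal order. The candidate rates and signal parameters here are hypothetical:

```python
import numpy as np

def dechirp_detect(s, t, mu_candidates, fs):
    """For each candidate chirp rate, dechirp and FFT; the (rate, frequency)
    pair with the strongest spectral peak identifies the sub-carrier,
    mirroring the optimal-order FRFT peak search."""
    best_mu, best_f, best_mag = None, None, -np.inf
    freqs = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
    for mu in mu_candidates:
        spec = np.fft.fftshift(np.fft.fft(s * np.exp(-1j * np.pi * mu * t**2)))
        k = int(np.argmax(np.abs(spec)))
        if np.abs(spec[k]) > best_mag:
            best_mu, best_f, best_mag = mu, freqs[k], np.abs(spec[k])
    return best_mu, best_f

fs, tau = 100e6, 10e-6
t = np.arange(-tau / 2, tau / 2, 1 / fs)
mu_true, f_true = 2e12, 5e6                       # transmitted symbol
s = np.exp(1j * (np.pi * mu_true * t**2 + 2 * np.pi * f_true * t))
mu_hat, f_hat = dechirp_detect(s, t, [1e12, 2e12, 3e12], fs)
```

A mismatched rate leaves a residual chirp that smears the spectrum, so only the correct candidate yields a sharp peak.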
First, according to the agreed mapping rule, the communication data are mapped to chirp sequences with the corresponding initial frequencies and chirp rates, and the receiver demodulates them by the fractional Fourier transform. The integrated block diagram is shown in Fig. 4: at the transmitter, the communication data are serial-to-parallel converted and divided into groups.
At the receiver, the radar processing chain is identical to that of a conventional radar and adds no extra processing units. The communication processor applies the fractional Fourier transform to the echo, searching the candidate transform orders in turn.
To satisfy both communication demodulation accuracy and radar detection resolution, the parameters of the shared signal must be designed.
For the sub-carrier chirp rates, the spacing between the optimal transform orders of two adjacent rates is

$$\Delta p=|p_k-p_{k+1}|=2\left|\operatorname{arccot}\mu_k-\operatorname{arccot}\mu_{k+1}\right|/\pi \tag{6}$$
When a chirp undergoes a non-optimal-order FRFT, its fractional-domain spectrum loses the energy-concentration property, and the output peak falls off as the transform order deviates from the optimal one; the order spacing $\Delta p$ must therefore be large enough for adjacent chirp rates to be distinguished.
For chirps sharing the same chirp rate $\mu_k$ but carrying different initial frequencies to be separable, the spacing of their peaks in the fractional Fourier domain is

$$\Delta u=|u_l-u_{l+1}|=|(f_l-f_{l+1})\sin\alpha|=|\Delta f\sin\alpha| \tag{7}$$
When a chirp undergoes its optimal-order FRFT, the magnitude over the fractional domain is

$$|S_\alpha(u)|=\left|A\tau\sqrt{1-\mathrm{j}\cot\alpha}\cdot\mathrm{Sa}\left[\pi(f_0-u\csc\alpha)\tau\right]\right| \tag{8}$$

The distance between the first nulls of the $\mathrm{Sa}(\cdot)$ lobe is $2|\sin\alpha|/\tau$, so adjacent peaks must satisfy

$$\Delta u=|\Delta f\sin\alpha|>|2\sin\alpha/\tau| \tag{9}$$

that is, the initial-frequency spacing must satisfy $\Delta f>2/\tau$.
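The spacing rules of Eqs. (6) and (9) can be checked numerically; the chirp-rate and frequency grids below are hypothetical:

```python
import numpy as np

tau = 10e-6                              # pulse width (illustrative)

# spacing of optimal transform orders for adjacent chirp rates, Eq. (6);
# rates here are normalized (dimensionless), as in a discrete FRFT, and
# arccot(x) = arctan(1/x) for x > 0
mu = np.array([0.5, 1.0, 2.0, 4.0])
p = -2 * np.arctan(1.0 / mu) / np.pi     # optimal orders, from Eq. (5)
delta_p = np.abs(np.diff(p))             # order spacing between neighbours

# first-null condition of Eq. (9): initial frequencies more than 2/tau apart
delta_f_min = 2 / tau                    # 200 kHz for this pulse width
f_grid = np.arange(8) * 2 * delta_f_min  # grid with a 2x safety margin
```

For a 10 µs pulse the frequency grid must be coarser than 200 kHz; the factor of 2 above is an arbitrary margin.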
The ambiguity function characterizes a waveform's range and Doppler resolution. For the shared signal, the ambiguity function is

$$\chi(\tau,f_d)=\int_{-\infty}^{\infty}s(t)s^*(t-\tau)\mathrm{e}^{\mathrm{j}2\pi f_d t}\,\mathrm{d}t=\int_{t_1}^{t_2}\Big[\underbrace{s_r(t)s_r^*(t-\tau)+s_{kl}(t)s_{kl}^*(t-\tau)}_{\chi_M}+\underbrace{s_r(t)s_{kl}^*(t-\tau)+s_{kl}(t)s_r^*(t-\tau)}_{\chi_I}\Big]\mathrm{e}^{\mathrm{j}2\pi f_d t}\,\mathrm{d}t \tag{10}$$

with integration limits

$$\begin{cases}t_1=-\tau'/2+\tau,\ t_2=\tau'/2,&0\le\tau\le\tau'\\ t_1=-\tau'/2,\ t_2=\tau'/2+\tau,&-\tau'\le\tau\le 0\end{cases} \tag{11}$$
The expression shows that the ambiguity function of the shared signal separates into a mainlobe region $\chi_M$ and an adjacent-channel interference term $\chi_I$. The mainlobe region is

$$\chi_M=A_1^2\,\mathrm{e}^{\mathrm{j}\pi(2f_r+f_d)\tau}\,\frac{\sin\left[\pi(\mu_r\tau+f_d)(\tau'-|\tau|)\right]}{\pi(\mu_r\tau+f_d)}+A_2^2\,\mathrm{e}^{\mathrm{j}\pi(2f_l+f_d)\tau}\,\frac{\sin\left[\pi(\mu_k\tau+f_d)(\tau'-|\tau|)\right]}{\pi(\mu_k\tau+f_d)} \tag{12}$$
while the adjacent-channel interference term is

$$\begin{aligned}\chi_I&=A_1A_2\sum_{\substack{i=1,\,s=1\\ s\ne i}}^{2}\exp\left(\mathrm{j}2\pi f_i\tau-\mathrm{j}\pi\mu_i\tau^2\right)\int_{t_1}^{t_2}\exp\left[-\mathrm{j}\pi(\mu_i-\mu_s)t^2\right]\exp\left[-\mathrm{j}2\pi(f_i-\mu_i\tau-f_s-f_d)t\right]\mathrm{d}t\\&=A_1A_2\sum_{\substack{i=1,\,s=1\\ s\ne i}}^{2}\exp(\alpha)\int_{t_1}^{t_2}\exp(-\beta^2)\,\mathrm{d}t=A_1A_2\sum_{\substack{i=1,\,s=1\\ s\ne i}}^{2}\frac{1}{2\sqrt{\mathrm{j}(\mu_i-\mu_s)}}\exp(\alpha)\left\{\operatorname{erf}[\beta(t_2)]-\operatorname{erf}[\beta(t_1)]\right\}\end{aligned} \tag{13}$$
where $\alpha$ collects the phase terms outside the integral, $\beta(t)$ is the linear function of $t$ obtained by completing the square, and $\operatorname{erf}(\cdot)$ is the error function.
Since a closed-form ambiguity function cannot be obtained, it was analyzed by simulation in a statistical sense, with the same parameters as Section 5. The mainlobe region is the superposition of the two chirps' ambiguity functions, while the adjacent-channel interference is the sum of the two main/sub-carrier cross-ambiguity terms, whose amplitude is low compared with the ambiguity-function peak: over repeated simulations, the mean peak amplitude of the interference term is only 2.2% of the ambiguity-function peak, with variance 0.000115, and its peak does not lie at the origin of the range-velocity plane, so the ambiguity function is considered to be dominated by the mainlobe region. As the modulated data change, the variation range of the mainlobe width of the Doppler cut stays below a small bound.
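The statistical observations above can be reproduced qualitatively with a brute-force ambiguity surface (parameters hypothetical); by the ambiguity-function property $|\chi(\tau,f_d)|\le|\chi(0,0)|$, the peak should sit at the range-Doppler origin:

```python
import numpy as np

def ambiguity(s, fs, delay_samples, dopplers):
    """Brute-force |chi(tau, fd)| of Eq. (10): correlate the waveform with
    delayed, Doppler-shifted copies of itself (zero-filled outside the pulse)."""
    t = np.arange(len(s)) / fs
    chi = np.zeros((len(delay_samples), len(dopplers)))
    for i, d in enumerate(delay_samples):
        shifted = np.roll(s, d)
        if d > 0:
            shifted[:d] = 0          # zero the samples wrapped from the end
        elif d < 0:
            shifted[d:] = 0          # zero the samples wrapped from the start
        prod = s * np.conj(shifted)
        for j, fd in enumerate(dopplers):
            chi[i, j] = np.abs(np.sum(prod * np.exp(1j * 2 * np.pi * fd * t)) / fs)
    return chi

fs, tau = 50e6, 10e-6
t = np.arange(-tau / 2, tau / 2, 1 / fs)
s = (np.exp(1j * np.pi * 8e12 * t**2)                             # main carrier
     + np.exp(1j * (np.pi * 2e12 * t**2 + 2 * np.pi * 5e6 * t)))  # sub-carrier
delays = list(range(-20, 21, 5))          # index 4 is zero delay
dopplers = np.linspace(-2e5, 2e5, 9)      # index 4 is zero Doppler
chi = ambiguity(s, fs, delays, dopplers)
```

A finer grid would also expose the low off-origin cross-terms discussed above.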
The cross-correlation between the main and sub-carriers determines the sub-carrier residue after matched filtering at the receiver. The main- and sub-carrier signals are

$$s_r(t)=A_1\exp\left(\mathrm{j}\pi\mu_r t^2+\mathrm{j}2\pi f_r t\right) \tag{14}$$

$$s_{kl}(t)=A_2\exp\left(\mathrm{j}\pi\mu_k t^2+\mathrm{j}2\pi f_l t\right) \tag{15}$$
Their cross-correlation function is

$$\begin{aligned}R_{s_r,s_{kl}}(\tau)&=\int_{-\infty}^{+\infty}s_r^*(t)s_{kl}(t+\tau)\,\mathrm{d}t\\&=A_1A_2\int_{t_1}^{t_2}\exp\Bigg\{\mathrm{j}2\pi f_l\tau-\mathrm{j}\pi\frac{(f_l+\mu_k\tau-f_r)^2}{\mu_k-\mu_r}+\mathrm{j}\frac{\pi}{2}\left(\frac{2(f_l+\mu_k\tau-f_r)}{\sqrt{2(\mu_k-\mu_r)}}+\sqrt{2(\mu_k-\mu_r)}\,t\right)^2+\mathrm{j}\pi\mu_k\tau^2\Bigg\}\,\mathrm{d}t\end{aligned} \tag{16}$$
where the integration interval $[t_1,t_2]$ is determined by the overlap of the two pulses, as in Eq. (11). After the change of variable, this becomes

$$R_{s_r,s_{kl}}(\tau)=\frac{A_1A_2\exp\left(\mathrm{j}2\pi f_l\tau+\mathrm{j}\pi\mu_k\tau^2\right)}{\sqrt{2(\mu_k-\mu_r)}}\exp\left[-\mathrm{j}\pi\frac{(f_l+\mu_k\tau-f_r)^2}{\mu_k-\mu_r}\right]\int_{\gamma(t_1)}^{\gamma(t_2)}\exp\left(\mathrm{j}\frac{\pi}{2}\gamma^2(t)\right)\mathrm{d}\gamma \tag{17}$$
where $\gamma(t)=2(f_l+\mu_k\tau-f_r)/\sqrt{2(\mu_k-\mu_r)}+\sqrt{2(\mu_k-\mu_r)}\,t$, and the remaining integral is a Fresnel integral.
The cross-correlation value depends on the chirp-rate difference between the main and sub-carriers and on the ratio of the frequency difference to the chirp-rate difference. With the main- and sub-carrier parameters designed as above, the cross-correlation stays low for every sub-carrier parameter combination.
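The smallness of the main/sub-carrier cross-correlation can be checked numerically (hypothetical parameters); the residue after matched filtering is the ratio of the cross-correlation peak to the autocorrelation peak:

```python
import numpy as np

fs, tau = 100e6, 10e-6
t = np.arange(-tau / 2, tau / 2, 1 / fs)
s_r = np.exp(1j * np.pi * 8e12 * t**2)                           # main carrier
s_kl = np.exp(1j * (np.pi * 2e12 * t**2 + 2 * np.pi * 5e6 * t))  # sub-carrier

auto = np.abs(np.correlate(s_r, s_r, mode="full"))    # R_{s_r,s_r}
cross = np.abs(np.correlate(s_kl, s_r, mode="full"))  # R_{s_r,s_kl}, Eq. (16)

# sub-carrier residue relative to the matched-filter peak
leak = cross.max() / auto.max()
```

With a chirp-rate difference of this order the leak comes out at a few percent, consistent with the Fresnel-integral bound of Eq. (17).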
The shared signal is the superposition of the main and sub-carriers, so the power available for radar detection drops. Since the main- and sub-carrier functions are mutually independent, however, the power ratio between them can be adjusted to increase the power devoted to radar detection.
Radar detection applies pulse compression with a matched filter referenced to the main carrier. The result rests on the correlation between the main carrier and the shared signal of each pulse, expressed as the sum of the main carrier's autocorrelation and the main/sub-carrier cross-correlation:

$$R_{s,s_r}(\tau)=\int_{-\infty}^{+\infty}s^*(t)s_r(t+\tau)\,\mathrm{d}t=\int_{-\infty}^{+\infty}\left[s_r^*(t)+s_{kl}^*(t)\right]s_r(t+\tau)\,\mathrm{d}t=R_{s_r,s_r}(\tau)+R_{s_{kl},s_r}(\tau) \tag{18}$$
From the analysis in Section 4.2, the amplitude of the main/sub-carrier cross-correlation is much lower than that of the main carrier's autocorrelation, so the sub-carrier component has little effect on the radar detection result.
The main/sub-carrier power allocation determines the power available for radar detection, and the main-carrier power can be raised within a suitable range. Table 1 lists the correlation coefficients between the main carrier and shared signals of several parameter sets under different main-to-sub power ratios: the larger the main carrier's share of the power, the closer the coefficient is to 1 and the better the radar detection performance.
| Main/sub power ratio | s13 | s24 | s35 | s46 | s57 |
|---|---|---|---|---|---|
| 1:1 | 0.7349 | 0.7456 | 0.7341 | 0.7323 | 0.7672 |
| 4:1 | 0.9082 | 0.9053 | 0.9022 | 0.9015 | 0.9115 |
| 9:1 | 0.9522 | 0.9530 | 0.9519 | 0.9515 | 0.9554 |
Raising the main-carrier power necessarily lowers the sub-carrier power, so FRFT demodulation performance degrades somewhat. As Fig. 8 shows, at a main-to-sub power ratio of 9:1 the FRFT sidelobes rise slightly but remain near –10 dB, a clear peak is still detected, and the symbols can be demodulated. The ratio cannot grow without bound, however: with too much main-carrier power the peak is buried in the FRFT output spectrum and the data cannot be demodulated, and the higher the ratio the worse the bit error rate, so the power ratio should be chosen according to the application.
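The trend in Table 1, the correlation coefficient approaching 1 as the main carrier's power share grows, can be reproduced with the sketch below. The chirp parameters are hypothetical, so the exact values differ from the table, but the ordering is the point:

```python
import numpy as np

fs, tau = 100e6, 10e-6
t = np.arange(-tau / 2, tau / 2, 1 / fs)
s_r = np.exp(1j * np.pi * 8e12 * t**2)                           # unit-power main carrier
s_kl = np.exp(1j * (np.pi * 2e12 * t**2 + 2 * np.pi * 5e6 * t))  # unit-power sub-carrier

def corr_coef(p_main, p_sub):
    # correlation coefficient between the shared signal and the main carrier
    s = np.sqrt(p_main) * s_r + np.sqrt(p_sub) * s_kl
    return np.abs(np.vdot(s, s_r)) / (np.linalg.norm(s) * np.linalg.norm(s_r))

rho = {ratio: corr_coef(ratio, 1) for ratio in (1, 4, 9)}
# approaches sqrt(P_main / (P_main + P_sub)) because the carriers are quasi-orthogonal
```

For quasi-orthogonal carriers the coefficient is near $\sqrt{P_{\text{main}}/(P_{\text{main}}+P_{\text{sub}})}$: roughly 0.71, 0.89, and 0.95 for ratios 1:1, 4:1, and 9:1, in line with Table 1.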
Relative motion between the radar and the communication receiver introduces a Doppler shift $f_d$, so the received signal becomes

$$s(t)=A\exp\left[\mathrm{j}\left(2\pi f' t+\pi\mu t^2+\varphi\right)\right],\quad t\in[-\tau/2,\tau/2],\ f'=f_c+f_d \tag{19}$$
The Doppler shift leaves the chirp rate, and hence the optimal transform order, unchanged; it displaces the peak in the fractional Fourier domain by

$$\Delta u=f_d\sin\left(\operatorname{arccot}(-\mu)\right) \tag{20}$$

and the peak power at the original detection point becomes

$$|S_\alpha(u)|^2=A^2\sin\alpha\cdot\frac{\sin^2(\pi f_d\tau)}{(\pi f_d)^2} \tag{21}$$
Relative to the Doppler-free peak power $A^2\tau^2\sin\alpha$, the Doppler shift therefore attenuates the output by the factor

$$\gamma=\frac{A^2\sin^2(\pi f_d\tau)\sin\alpha/(\pi f_d)^2}{A^2\tau^2\sin\alpha}=\frac{\sin^2(\pi f_d\tau)}{(\pi f_d\tau)^2} \tag{22}$$
Eq. (22) shows that the attenuation coefficient depends only on the product of the Doppler shift $f_d$ and the pulse width $\tau$: the smaller $f_d\tau$, the closer $\gamma$ is to 1 and the weaker the attenuation.
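Eq. (22) is a squared sinc in the product $f_d\tau$; a small sketch of the attenuation coefficient (Doppler values illustrative):

```python
import numpy as np

def doppler_loss(fd, tau):
    # gamma = sin^2(pi*fd*tau) / (pi*fd*tau)^2, Eq. (22);
    # np.sinc(x) = sin(pi*x)/(pi*x), so this is sinc(fd*tau)^2
    return np.sinc(fd * tau) ** 2

tau = 10e-6                              # illustrative pulse width
fd = np.array([0.0, 1e3, 1e4, 5e4])      # Doppler shifts in Hz
gamma = doppler_loss(fd, tau)            # attenuation factor, 1 means no loss
```

For a 10 µs pulse even a 10 kHz Doppler shift costs under 2% of peak power, which is why the scheme tolerates Doppler well.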
The proposed modulation must weigh the allocation between chirp-rate keying and frequency keying. Let $T_b$ be the bit width, $T_s$ the symbol width, and $n$ the number of bits per symbol; the bandwidth efficiency of chirp-rate keying then satisfies

$$\eta_\mu=\frac{R_b}{B}=\frac{1}{T_b B}=\frac{n}{T_s B}\ge\frac{n}{\mu_{k\max}T_s^2} \tag{23}$$
The theoretical channel bandwidth of MFSK (Multiple Frequency-Shift Keying) gives a bandwidth efficiency of

$$\eta_f=\frac{R_b}{B}=\frac{2n}{M+3} \tag{24}$$
Therefore, which keying dimension offers the better bandwidth efficiency depends on the chosen parameters, and the split between chirp-rate and initial-frequency keying can be adjusted to trade bandwidth efficiency against error performance.
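A hypothetical side-by-side evaluation of the two efficiency expressions, Eqs. (23) and (24); the value of $\mu_{k\max}$ below is invented for illustration:

```python
def eta_chirprate(n, mu_kmax, Ts):
    # lower bound on chirp-rate keying bandwidth efficiency, Eq. (23)
    return n / (mu_kmax * Ts**2)

def eta_mfsk(n):
    # theoretical MFSK bandwidth efficiency, Eq. (24), with M = 2^n levels
    M = 2 ** n
    return 2 * n / (M + 3)

Ts = 10e-6                                       # symbol width (illustrative)
eta_mu = eta_chirprate(3, mu_kmax=2e11, Ts=Ts)   # hypothetical mu_kmax
eta_f = eta_mfsk(3)                              # n = 3 bits, M = 8
```

With these assumed numbers frequency keying is the more bandwidth-efficient of the two, but the comparison flips as $\mu_{k\max}T_s^2$ shrinks.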
In the simulation experiments, binary data are modulated onto 64 chirp signals spanning 8 chirp rates and 8 initial frequencies, with the simulation parameters, including the RF carrier frequency, designed according to the requirements of Section 2.
The initial frequency of the main-carrier chirp is fixed by design, so the main carrier is fully known at both the transmitter and the receiver.
The analysis shows that the main/sub-carrier cross-correlation is low and does not impair radar target detection: the first sidelobe remains near –13 dB, the sub-carrier residue after matched filtering is small, with amplitude held below –20 dB, and raising the main-carrier power lowers the residue further. The pulse train after matched filtering is highly correlated, so Moving Target Detection (MTD) through the Doppler filter bank directly yields the target's relative velocity, showing that a single matched filter suffices at the receiver. The velocity measurement result is shown in Fig. 10.
With a false-alarm probability of 10⁻⁴, Fig. 11 plots the detection probability of the shared signal versus signal-to-noise ratio (SNR) after pulse compression and MTD processing with different numbers of integrated pulses. Pulse compression and MTD raise the SNR through coherent integration. Because the shared signal carries the communication sub-carrier component, its detection probability is lower than that of a single chirp with the same parameters; MTD coherent processing improves the SNR and hence the detection probability, and the more pulses integrated the better, so a larger number of integrated pulses compensates for the radar detection loss of the shared signal.
The sub-carrier modulates the communication data by keying the chirp rate and initial frequency of the chirp; the communication receiver applies the FRFT to the echo and demodulates by peak detection.
When modulating the data, the communication data are serial-to-parallel converted and grouped, and multiple sub-carriers can be stacked in one shared signal according to the ordering of the group values: if the following group is greater than the preceding one, both groups are modulated onto the same shared signal, so that pulse carries several sub-carriers. Reception needs no change of demodulation method, every sub-carrier's data can still be demodulated, and the groups need only be sorted by value, which raises the communication rate. If the following group is not greater than the preceding one, it is modulated onto the next pulse instead. The communication rate of this shared signal therefore varies with the transmitted data pattern within a bounded range.
Fig. 12 shows simulated bit-error-rate curves of the shared signal in an AWGN channel. Counting from the top, the 3rd, 4th, and 5th curves are three 64-ary allocations, 16K-4F, 8K-8F, and 4K-16F, where K denotes chirp rates and F carrier frequencies; their BER performance improves in that order. Per the analysis of Section 4.5, this is because FSK outperforms chirp-rate modulation in BER, so varying the split between chirp rates and initial frequencies tunes the shared signal's noise robustness against its bandwidth efficiency. The theoretical BER curves of the digitally keyed schemes MFSK, MASK, and MPSK are included for reference: as the modulation order M grows, MASK and MPSK lose noise robustness while gaining spectral efficiency, whereas MFSK retains better noise robustness and BER at the cost of spectral efficiency [17].
$$\begin{cases}P_{\mathrm{MASK}}=\left(1-\dfrac{1}{M}\right)\operatorname{erfc}\left(\sqrt{\dfrac{3r}{M^2-1}}\right)\\[2mm]P_{\mathrm{MFSK}}=\dfrac{M-1}{2}\operatorname{erfc}\left(\sqrt{\dfrac{r}{2}}\right)\\[2mm]P_{\mathrm{MPSK}}\approx\operatorname{erfc}\left(\sqrt{2r}\sin\dfrac{\pi}{2M}\right)\end{cases} \tag{25}$$
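The theoretical expressions of Eq. (25) can be evaluated directly; at an illustrative 10 dB SNR and M = 8, the MFSK error probability comes out lowest of the three, consistent with the comparison above:

```python
import math

def p_mask(M, r):
    # MASK symbol error rate, Eq. (25)
    return (1 - 1 / M) * math.erfc(math.sqrt(3 * r / (M**2 - 1)))

def p_mfsk(M, r):
    # MFSK symbol error rate bound, Eq. (25)
    return (M - 1) / 2 * math.erfc(math.sqrt(r / 2))

def p_mpsk(M, r):
    # MPSK approximate symbol error rate, Eq. (25)
    return math.erfc(math.sqrt(2 * r) * math.sin(math.pi / (2 * M)))

r = 10.0                  # SNR of 10 dB as a linear power ratio
M = 8
errs = {"MASK": p_mask(M, r), "MFSK": p_mfsk(M, r), "MPSK": p_mpsk(M, r)}
```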
This paper designed and studied a multicarrier radar-communication shared signal that keys the chirp rate and initial frequency of the sub-carrier chirp to modulate communication data while the main carrier performs radar target detection. The ambiguity function of the shared signal and the orthogonality between the main and sub-carriers were analyzed, the relations among the chirp parameters were designed, FRFT demodulation was adopted at the communication receiver, and the Doppler tolerance of the shared signal was analyzed. The design unifies the energy and time of radar and communication signals on complex integrated electronic platforms, an important direction for future integrated electronic warfare systems.
[1] XU Yan and SCOOT K A. Sea ice and open water classification of SAR imagery using CNN-based transfer learning[C]. 2017 IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 2017: 3262–3265. doi: 10.1109/IGARSS.2017.8127693.
[2] ZHANG Yue, SUN Xian, SUN Hao, et al. High resolution SAR image classification with deeper convolutional neural network[C]. International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018: 2374–2377. doi: 10.1109/IGARSS.2018.8518829.
[3] SHAO Jiaqi, QU Changwen, and LI Jianwei. A performance analysis of convolutional neural network models in SAR target recognition[C]. 2017 SAR in Big Data Era: Models, Methods and Applications, Beijing, China, 2017: 1–6. doi: 10.1109/BIGSARDATA.2017.8124917.
[4] ZHANG Ming, AN Jubai, YU Dahua, et al. Convolutional neural network with attention mechanism for SAR automatic target recognition[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 4004205. doi: 10.1109/LGRS.2020.3031593.
[5] CHEN Sizhe, WANG Haipeng, XU Feng, et al. Target classification using the deep convolutional networks for SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(8): 4806–4817. doi: 10.1109/TGRS.2016.2551720.
[6] XU Feng, WANG Haipeng, and JIN Yaqiu. Deep learning as applied in SAR target recognition and terrain classification[J]. Journal of Radars, 2017, 6(2): 136–148. doi: 10.12000/JR16130.
[7] LYU Yixuan, WANG Zhirui, WANG Peijin, et al. Scattering information and meta-learning based SAR images interpretation for aircraft target recognition[J]. Journal of Radars, 2022, 11(4): 652–665. doi: 10.12000/JR22044.
[8] HUANG Teng, ZHANG Qixiang, LIU Jiabao, et al. Adversarial attacks on deep-learning-based SAR image target recognition[J]. Journal of Network and Computer Applications, 2020, 162: 102632. doi: 10.1016/j.jnca.2020.102632.
[9] SUN Hao, CHEN Jin, LEI Lin, et al. Adversarial robustness of deep convolutional neural network-based image recognition models: A review[J]. Journal of Radars, 2021, 10(4): 571–594. doi: 10.12000/JR21048.
[10] GAO Xunzhang, ZHANG Zhiwei, LIU Mei, et al. Intelligent radar image recognition countermeasures: A review[J]. Journal of Radars, 2023, 12(4): 696–712. doi: 10.12000/JR23098.
[11] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[C]. The 2nd International Conference on Learning Representations, Banff, Canada, 2014.
[12] GOODFELLOW I J, SHLENS J, and SZEGEDY C. Explaining and harnessing adversarial examples[C]. The 3rd International Conference on Learning Representations, San Diego, CA, USA, 2015: 1050.
[13] KURAKIN A, GOODFELLOW I J, and BENGIO S. Adversarial examples in the physical world[C]. The 5th International Conference on Learning Representations, Toulon, France, 2017: 99–112.
[14] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]. 2016 IEEE European Symposium on Security and Privacy, Saarbruecken, Germany, 2016: 372–387. doi: 10.1109/EuroSP.2016.36.
[15] BRENDEL W, RAUBER J, and BETHGE M. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models[C]. The 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
[16] CARLINI N and WAGNER D. Towards evaluating the robustness of neural networks[C]. 2017 IEEE Symposium on Security and Privacy, San Jose, CA, USA, 2017: 39–57. doi: 10.1109/SP.2017.49.
[17] SU Jiawei, VARGAS D V, and SAKURAI K. One pixel attack for fooling deep neural networks[J]. IEEE Transactions on Evolutionary Computation, 2019, 23(5): 828–841. doi: 10.1109/TEVC.2019.2890858.
[18] CHEN Pinyu, ZHANG Huan, SHARMA Y, et al. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models[C]. The 10th ACM Workshop on Artificial Intelligence and Security, Dallas, USA, 2017: 15–26. doi: 10.1145/3128572.3140448.
[19] CHEN Jianbo, JORDAN M I, and WAINWRIGHT M J. HopSkipJumpAttack: A query-efficient decision-based attack[C]. 2020 IEEE Symposium on Security and Privacy, San Francisco, CA, USA, 2020: 1277–1294. doi: 10.1109/SP40000.2020.00045.
[20] DONG Yinpeng, LIAO Fengzhou, PANG Tianyu, et al. Boosting adversarial attacks with momentum[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018: 9185–9193. doi: 10.1109/CVPR.2018.00957.
[21] ZHAO Haojun, LIN Yun, GAO Song, et al. Evaluating and improving adversarial attacks on DNN-based modulation recognition[C]. GLOBECOM 2020–2020 IEEE Global Communications Conference, Taipei, China, 2020: 1–5. doi: 10.1109/GLOBECOM42002.2020.9322088.
[22] WANG Xiaosen and HE Kun. Enhancing the transferability of adversarial attacks through variance tuning[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 2021: 1924–1933. doi: 10.1109/CVPR46437.2021.00196.
[23] XIE Cihang, ZHANG Zhishuai, ZHOU Yuyin, et al. Improving transferability of adversarial examples with input diversity[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019: 2725–2734. doi: 10.1109/CVPR.2019.00284.
[24] CZAJA W, FENDLEY N, PEKALA M J, et al. Adversarial examples in remote sensing[C]. The 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, USA, 2018: 408–411. doi: 10.1145/3274895.3274904.
[25] CHEN Li, XU Zewei, LI Qi, et al. An empirical study of adversarial examples on remote sensing image scene classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(9): 7419–7433. doi: 10.1109/TGRS.2021.3051641.
[26] DU Chuan, HUO Chaoying, ZHANG Lei, et al. Fast C&W: A fast adversarial attack algorithm to fool SAR target recognition with deep convolutional neural networks[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 4010005. doi: 10.1109/LGRS.2021.3058011.
[27] DU Chuan and ZHANG Lei. Adversarial attack for SAR target recognition based on UNet-generative adversarial network[J]. Remote Sensing, 2021, 13(21): 4358. doi: 10.3390/rs13214358.
[28] ZHOU Junfan, SUN Hao, and KUANG Gangyao. Template-based universal adversarial perturbation for SAR target classification[C]. The 8th China High Resolution Earth Observation Conference, Singapore, 2023: 351–360. doi: 10.1007/978-981-19-8202-6_32.
[29] XIA Weijie, LIU Zhe, and LI Yi. SAR-PeGA: A generation method of adversarial examples for SAR image target recognition network[J]. IEEE Transactions on Aerospace and Electronic Systems, 2023, 59(2): 1910–1920. doi: 10.1109/TAES.2022.3206261.
[30] PENG Bowen, PENG Bo, ZHOU Jie, et al. Scattering model guided adversarial examples for SAR target recognition: Attack and defense[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5236217. doi: 10.1109/TGRS.2022.3213305.
[31] HANSEN L K and SALAMON P. Neural network ensembles[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(10): 993–1001. doi: 10.1109/34.58871.
[32] DING Jun, CHEN Bo, LIU Hongwei, et al. Convolutional neural network with data augmentation for SAR target recognition[J]. IEEE Geoscience and Remote Sensing Letters, 2016, 13(3): 364–368. doi: 10.1109/LGRS.2015.2513754.
[33] LEE J S. Digital image enhancement and noise filtering by use of local statistics[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1980, PAMI-2(2): 165–168. doi: 10.1109/TPAMI.1980.4766994.
[34] ZHUANG Juntang, TANG T, DING Yifan, et al. AdaBelief optimizer: Adapting stepsizes by the belief in observed gradients[C]. The 34th International Conference on Neural Information Processing Systems, 2020: 795–806.
[35] NESTEROV Y. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²)[J]. Mathematics, 1983, 269: 543–547.
[36] MA J and YARATS D. Quasi-hyperbolic momentum and Adam for deep learning[C]. The 7th International Conference on Learning Representations, New Orleans, LA, USA, 2019: 1–38.
[37] KEYDEL E R, LEE S W, and MOORE J T. MSTAR extended operating conditions: A tutorial[C]. SPIE 2757, Algorithms for Synthetic Aperture Radar Imagery III, Orlando, USA, 1996: 228–242. doi: 10.1117/12.242059.
[38] HOU Xiyue, AO Wei, SONG Qian, et al. FUSAR-Ship: Building a high-resolution SAR-AIS matchup dataset of Gaofen-3 for ship detection and recognition[J]. Science China Information Sciences, 2020, 63(4): 140303. doi: 10.1007/s11432-019-2772-5.
[39] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[C]. The 25th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2012: 1106–1114.
[40] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]. The 3rd International Conference on Learning Representations, San Diego, CA, USA, 2015.
[41] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90.
[42] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016: 2818–2826. doi: 10.1109/CVPR.2016.308.
[43] HOWARD A G, ZHU Menglong, CHEN Bo, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[EB/OL]. https://arxiv.org/abs/1704.04861, 2017.
[44] IANDOLA F N, HAN Song, MOSKEWICZ M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size[EB/OL]. https://arxiv.org/abs/1602.07360, 2016.
[45] WANG Wenhai, XIE Enze, LI Xiang, et al. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 2021: 548–558. doi: 10.1109/ICCV48922.2021.00061.
[46] MEHTA S and RASTEGARI M. MobileViT: Light-weight, general-purpose, and mobile-friendly vision transformer[C]. The Tenth International Conference on Learning Representations, 2022.
[47] KINGMA D P and BA J. Adam: A method for stochastic optimization[C]. The 3rd International Conference on Learning Representations, San Diego, CA, USA, 2015: 1–15.
[48] WANG Zhou, BOVIK A C, SHEIKH H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600–612. doi: 10.1109/TIP.2003.819861.