Citation: WAN Xuanshen, LIU Wei, NIU Chaoyang, et al. Black-box attack algorithm for SAR-ATR deep neural networks based on MI-FGSM[J]. Journal of Radars, 2024, 13(3): 714–729. doi: 10.12000/JR23220

Black-box Attack Algorithm for SAR-ATR Deep Neural Networks Based on MI-FGSM

DOI: 10.12000/JR23220
Funds:  The National Natural Science Foundation of China (42201472)
More Information
  • Corresponding author: LIU Wei, greatliuliu@163.com
  • Received Date: 2023-11-17
  • Rev Recd Date: 2024-01-14
  • Available Online: 2024-01-18
  • Publish Date: 2024-02-02
  • The field of Synthetic Aperture Radar Automatic Target Recognition (SAR-ATR) lacks effective black-box attack algorithms. Therefore, this research proposes a transfer-based black-box attack algorithm built on the idea of the Momentum Iterative Fast Gradient Sign Method (MI-FGSM). First, random speckle-noise transformation is performed according to the characteristics of SAR images to alleviate model overfitting to speckle noise and improve the generalization of the algorithm. Second, an AdaBelief-Nesterov optimizer is designed to rapidly find the optimal gradient-descent direction, improving attack effectiveness through rapid convergence of the model gradient. Finally, a quasi-hyperbolic momentum operator is introduced to obtain a stable gradient-descent direction, so that the gradient avoids falling into local optima during rapid convergence and the black-box attack success rate of the adversarial examples is further enhanced. Simulation experiments show that, compared with existing adversarial attack algorithms, the proposed algorithm improves the ensemble-model black-box attack success rate on mainstream SAR-ATR deep neural networks by 3%–55% and 6.0%–57.5% on the MSTAR and FUSAR-Ship datasets, respectively; the generated adversarial examples are highly concealable.

     

  • With advances in science and technology, equipment for modern warfare must move toward integration, yet adding more devices must not further degrade the electromagnetic environment around the platform or increase its load. An unmanned aerial vehicle, for example, has to integrate multiple equipment functions on a small platform while preserving its maneuverability and versatility. Radar and communication systems are the two electronic systems most widely carried by such platforms [1]; integrating radar and communication [2,3] would therefore greatly improve the overall utilization of the electronic system.

Since the concept of integrated radar-communication emerged in the 1960s, research has followed three regimes: time-division, beam-division, and simultaneous operation. The time-division regime cannot perform radar detection while communicating, leaving a detection blind period during communication, but it is the easiest to realize and has therefore been studied most. The beam-division regime partitions the phased array into regions and uses each sub-aperture for a different function. The simultaneous regime fuses the radar and communication signals so that detection and data transmission are carried out at the same time on the same platform; it offers the highest degree of integration and is the direction in which integrated radar-communication is heading. Its key technology is shared-signal design, which must reconcile communication data transmission with radar detection. Existing shared-signal designs fall into three classes: (1) radar and communication signals generated independently and then superimposed [4]; (2) a communication signal reshaped into a radar probing waveform; (3) a radar signal onto which communication data are modulated [5]. Ref. [6] studied a system that uses the up- and down-sweeps of a linear frequency modulated (chirp) signal as the radar and communication waveforms respectively, superimposes them into a shared signal, and separates them at the receiver by their orthogonality, but the communication rate is severely limited and the radar performance degraded. Ref. [7] studied an integrated waveform based on Orthogonal Frequency Division Multiplexing (OFDM), but OFDM is not constant-envelope; its high peak-to-average power ratio is unsuitable for amplification in a radar's class-C amplifier, and its sensitivity to Doppler shift restricts it to short-range communication and detection. Ref. [8] studied modulating communication Minimum Shift Keying (MSK) onto a chirp signal, yielding a waveform that communicates while performing radar detection. Ref. [9] studied keying the initial frequency of a chirp signal with the communication data, but the matched filter for radar detection must then change with every transmitted signal.

This paper proposes a multi-carrier radar-communication shared signal based on parameter modulation of chirp signals. The main carrier provides the radar detection function, while the chirp rate and initial frequency of the subcarrier are selectable and thus carry the communication data. In shared-signal design, the randomness of the communication data usually weakens the correlation between pulses, whereas radar detection requires coherent integration and would need a matched filter adapted to every pulse, greatly burdening the radar system. The signal designed here exploits the deterministic main carrier to keep the pulses correlated, so the radar processing chain needs no extra units and follows the same flow as the original radar, while chirp signals with different initial frequencies and chirp rates balance bandwidth utilization against orthogonality. The paper analyzes the ambiguity function of the designed waveform and the orthogonality between main carrier and subcarrier; at the receiver, demodulation is performed with the fractional Fourier transform according to where the energy of the detection points concentrates.

In shared-signal design, modulating communication information onto the radar waveform makes the pulses differ from one another because the data are random, requiring additional radar signal processing units. To reduce pulse-to-pulse differences and ease radar target detection, a main-carrier/subcarrier structure is adopted: the main carrier performs radar target detection and the subcarrier carries the communication information [10,11].

The subcarrier is selected by the symbols to be transmitted from a set of chirp signals \{s_{kl}(t)\}:

s_{kl}(t) = A_2\exp\left[{\rm j}\left(\pi\mu_k t^2 + 2\pi f_l t\right)\right],\quad k = 0,1,\cdots,N_1-1;\ l = 0,1,\cdots,N_2-1   (1)

where the equally spaced chirp rates are \mu_k = \mu_0 + k\Delta\mu and the equally spaced initial frequencies are f_l = f_0 + l\Delta f. Of the n = n_1 + n_2 binary digits, n_1 bits map to the N_1 chirp rates and n_2 bits map to the N_2 initial frequencies; keying by the communication data yields the chirp with chirp rate \mu_k and initial frequency f_l, so a single chirp carries n bits of data [12]. Taking 6-bit modulation with n_1 = 3, n_2 = 3 as an example, the scheme is illustrated in Fig. 1; there is no constraint between the \mu and f directions.

Figure  1.  64-ary data modulation
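The keying rule of Eq. (1) can be sketched as follows (a minimal illustration: the function and variable names are hypothetical, and the 8-rate/8-frequency parameter sets are taken from the simulation section later in the paper):

```python
import cmath

MU_SET = [15, 27, 39, 51, 63, 75, 87, 99]   # N1 = 8 chirp rates (MHz/us)
F_SET = [0, 3, 6, 9, 12, 15, 18, 21]        # N2 = 8 initial frequencies (MHz)

def map_bits(bits):
    """Map n = n1 + n2 = 6 bits to the (mu_k, f_l) pair of one subcarrier chirp."""
    assert len(bits) == 6
    k = int("".join(map(str, bits[:3])), 2)  # first n1 bits -> chirp-rate index
    l = int("".join(map(str, bits[3:])), 2)  # last n2 bits -> frequency index
    return MU_SET[k], F_SET[l]

def subcarrier_sample(t_us, mu, f, a2=1.0):
    """s_kl(t) = A2 exp[j(pi*mu*t^2 + 2*pi*f*t)], with t in microseconds."""
    return a2 * cmath.exp(1j * (cmath.pi * mu * t_us ** 2 + 2 * cmath.pi * f * t_us))
```

With this mapping, `map_bits([1, 1, 1, 1, 1, 1])` selects the fastest, highest-frequency chirp (99 MHz/us, 21 MHz), and each pulse's subcarrier carries 6 bits.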

The main carrier is fixed as a chirp whose chirp rate exceeds the selection range of the subcarrier chirp rates, i.e., \mu_r > \mu_k, and whose bandwidth covers the usable bandwidth of the subcarrier, so that the bandwidth of the shared signal stays constant:

s_r(t) = A_1\exp\left[{\rm j}\left(\pi\mu_r t^2 + 2\pi f_r t\right)\right]   (2)

The diversity of chirp rates gives the main carrier and subcarrier good quasi-orthogonality [13]. The parameter selection ranges are shown in Fig. 2: the main carrier is a fixed chirp, the subcarrier is one chirp chosen from the many parameter combinations, and the shared signal is expressed as

Figure  2.  Shared-signal parameter design
s(t) = s_r(t) + s_{kl}(t),\quad -\tau/2 \le t \le \tau/2   (3)

The communication symbols in the shared signal are demodulated with the FRactional Fourier Transform (FRFT). The FRFT of a chirp signal is

S_\alpha(u) = A\sqrt{1-{\rm j}\cot\alpha}\,\exp\left({\rm j}\pi u^2\cot\alpha\right)\int_{-\tau/2}^{\tau/2}{\rm e}^{{\rm j}\pi(\mu+\cot\alpha)t^2 + {\rm j}2\pi(f_0-u\csc\alpha)t}\,{\rm d}t   (4)

where the rotation angle is \alpha = p\pi/2 and p is the transform order. When the FRFT and the chirp-rate and initial-frequency parameters of the chirp satisfy

\left\{\begin{aligned}\mu &= \cot\alpha\\ f &= u\csc\alpha\end{aligned}\right.\ \Rightarrow\ \left\{\begin{aligned}p &= 2\,{\rm arccot}\,\mu/\pi\\ u &= f\sin({\rm arccot}\,\mu)\end{aligned}\right.   (5)

the amplitude peak |S_\alpha(u)| = A\tau/\sqrt{|\sin\alpha|} appears. That is, the chirp rate \mu determines the unique optimal FRFT order p_{\rm m}, and the carrier frequency f_0 determines the optimal energy-concentration position in the p_{\rm m}-order fractional Fourier domain [14].
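A numeric reading of Eq. (5) (a sketch in normalized units; the function name is illustrative): given a chirp's rate \mu and initial frequency f_0, the optimal order p_{\rm m} and peak position u follow directly.

```python
import math

def frft_peak_location(mu, f0):
    """Optimal FRFT order p_m and peak position u of a chirp, per Eq. (5)."""
    alpha = math.atan2(1.0, mu)      # rotation angle alpha = arccot(mu)
    p_m = 2.0 * alpha / math.pi      # optimal transform order
    u = f0 * math.sin(alpha)         # energy-concentration position
    return p_m, u
```

As a sanity check, for \mu \to 0 the chirp degenerates to a pure tone and p_{\rm m} \to 1, the ordinary Fourier transform.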

As shown in Fig. 3, a peak appears in the fractional Fourier domain only at the optimal transform order. The receiver applies the FRFT to the received shared signal, detects the peak, obtains the transform order and the fractional-domain coordinate at which it occurs, and, from the relation between the FRFT and the chirp parameters, solves for the chirp rate and initial frequency of the subcarrier, thereby recovering the modulated symbol data [15].

Figure  3.  FRFT at different transform orders

First, according to the prescribed mapping rule, the communication data are mapped to a chirp sequence with the corresponding initial frequency and chirp rate, and the receiver demodulates via the fractional Fourier transform. The integration block diagram is shown in Fig. 4. At the transmitter, the communication data are serial-to-parallel converted and grouped into n bits; the first n_1 bits key the chirp rate and the last n_2 bits key the initial frequency, producing one specific chirp, which is superimposed on the fixed main-carrier chirp to form the shared signal and sent into the white Gaussian noise channel.

Figure  4.  Block diagram of shared-signal implementation

At the receiver, the radar processing system is identical to that of a conventional radar, with no extra processing units. The communication processing system applies FRFTs of orders p_0 through p_{N_1-1} to the echo in turn, takes the samples at positions u_0 through u_{N_2-1} of the transformed FRFT domain as detection points, thresholds their values to obtain the order p_k and position u_l of the detection points exceeding the threshold, and demodulates the communication symbols according to the mapping rule.

To satisfy both the communication demodulation accuracy and the radar detection resolution, the shared-signal parameters must be designed carefully.

The subcarrier chirp-rate set \{\mu_k\} directly affects the data rate and the demodulation performance. For a fixed pulse width \tau, a larger \mu occupies more bandwidth, so the maximum of \{\mu_k\} is limited by the available bandwidth. \{\mu_k\} corresponds to the n_1 information bits; for the same initial frequency, the spacing \Delta p between the optimal FRFT orders p_k and p_{k+1} of adjacent rates \mu_k and \mu_{k+1} determines how well FRFT demodulation separates adjacent symbols, i.e., \Delta p determines the inter-symbol interference. From Eq. (5), \Delta p relates to \mu_k, \mu_{k+1} as

\Delta p = |p_k - p_{k+1}| = 2\left|{\rm arccot}\,\mu_k - {\rm arccot}\,\mu_{k+1}\right|/\pi   (6)

When a chirp is transformed at a non-optimal order, its fractional-domain spectrum loses its concentration, and the FRFT peak drops markedly as the order deviation \Delta p from p grows; moreover, the larger the chirp rate, the faster the peak falls with increasing \Delta p [16]. Fig. 5 simulates the normalized peak versus \Delta p for normalized chirp rates 0.1, 0.3, 0.5, 0.7, and 0.9 at the same pulse width: the peak amplitude decreases as \Delta p grows. For the FRFT demodulation output to identify the parameters of the peak uniquely, a peak-amplitude threshold can be set as required; if the FRFT peak-amplitude threshold is set to –10 dB, \{\mu_k\} must be designed so that \Delta p is at least 0.03.

Figure  5.  FRFT peak amplitude versus order deviation
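The order spacings of Eq. (6) can be checked directly (an illustrative sketch using the normalized chirp rates of Fig. 5; the 0.03 floor is the –10 dB design threshold quoted above):

```python
import math

def delta_p(mu_a, mu_b):
    """Order separation between two chirps of rates mu_a, mu_b, per Eq. (6)."""
    return 2.0 * abs(math.atan2(1.0, mu_a) - math.atan2(1.0, mu_b)) / math.pi

rates = [0.1, 0.3, 0.5, 0.7, 0.9]
gaps = [delta_p(a, b) for a, b in zip(rates, rates[1:])]
```

All adjacent spacings here exceed 0.03, and the spacing shrinks toward larger rates because arccot flattens, which is why densely packed large chirp rates are the binding constraint.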

So that chirps with the same chirp rate \mu_k but different initial frequencies f_l can be distinguished in the fractional Fourier domain, the spacing \Delta u between the spectral peak positions u_l and u_{l+1} of adjacent frequencies f_l and f_{l+1} determines how well FRFT demodulation separates adjacent symbols among the n_2 bits, i.e., \Delta u determines the inter-symbol interference. From Eq. (5), \Delta u relates to f_l, f_{l+1} as

\Delta u = |u_l - u_{l+1}| = |(f_l - f_{l+1})\sin\alpha| = |\Delta f\sin\alpha|   (7)

When a chirp is transformed at its optimal order, f_l and f_{l+1} must be distinguishable by peak amplitude along the u axis. From the fractional-domain amplitude spectrum of a chirp,

|S_\alpha(u)| = \left|A\tau\sqrt{1-{\rm j}\cot\alpha}\,{\rm Sa}\left[\pi(f_0 - u\csc\alpha)\tau\right]\right|   (8)

the distance between the first nulls is |2\sin\alpha/\tau|, so \{f_l\} should be designed such that \Delta u satisfies

\Delta u = |\Delta f\sin\alpha| > |2\sin\alpha/\tau|   (9)

i.e., \Delta f > 2/\tau.
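The constraint of Eq. (9) is easy to verify numerically (a sketch: the pulse width is the 1 us value used in the simulation section, and the 3 MHz spacing is the step of the f_l grid there):

```python
# Design check of Eq. (9): adjacent initial frequencies must differ by more
# than 2/tau so each FRFT peak falls outside its neighbor's first nulls.
tau = 1e-6                   # pulse width, s
delta_f = 3e6                # initial-frequency spacing, Hz
first_null = 2.0 / tau       # minimum separable spacing 2/tau, Hz (2 MHz)
separable = delta_f > first_null
```

With these values the 3 MHz grid clears the 2 MHz floor with margin.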

The ambiguity function characterizes the range and Doppler resolution of a waveform. The ambiguity function of the shared signal s(t) = s_r(t) + s_{kl}(t), -\tau/2 \le t \le \tau/2, is

\chi(\tau_{\rm d}, f_{\rm d}) = \int s(t)s^*(t-\tau_{\rm d})\,{\rm e}^{{\rm j}2\pi f_{\rm d}t}{\rm d}t = \int_{t_1}^{t_2}\Big\{\underbrace{\big[s_r(t)s_r^*(t-\tau_{\rm d}) + s_{kl}(t)s_{kl}^*(t-\tau_{\rm d})\big]}_{\chi_M} + \underbrace{\big[s_r(t)s_{kl}^*(t-\tau_{\rm d}) + s_{kl}(t)s_r^*(t-\tau_{\rm d})\big]}_{\chi_I}\Big\}{\rm e}^{{\rm j}2\pi f_{\rm d}t}{\rm d}t   (10)

\left\{\begin{aligned}&0 \le \tau_{\rm d} \le \tau:\ &t_1 = -\tau/2 + \tau_{\rm d},\ t_2 = \tau/2\\ &-\tau \le \tau_{\rm d} \le 0:\ &t_1 = -\tau/2,\ t_2 = \tau/2 + \tau_{\rm d}\end{aligned}\right.   (11)

where \tau_{\rm d} denotes the delay and f_{\rm d} the Doppler shift, kept distinct from the pulse width \tau.

The expression shows that the ambiguity function of the shared signal splits into a mainlobe region \chi_M and an adjacent-channel interference term \chi_I. The mainlobe region \chi_M is the sum of the auto-ambiguity functions of the main carrier and subcarrier:

\chi_M = A_1^2\,{\rm e}^{{\rm j}\pi(2f_r+f_{\rm d})\tau_{\rm d}}\frac{\sin\left[\pi(\mu_r\tau_{\rm d}+f_{\rm d})(\tau-|\tau_{\rm d}|)\right]}{\pi(\mu_r\tau_{\rm d}+f_{\rm d})} + A_2^2\,{\rm e}^{{\rm j}\pi(2f_l+f_{\rm d})\tau_{\rm d}}\frac{\sin\left[\pi(\mu_k\tau_{\rm d}+f_{\rm d})(\tau-|\tau_{\rm d}|)\right]}{\pi(\mu_k\tau_{\rm d}+f_{\rm d})}   (12)

The adjacent-channel interference term \chi_I is the sum of the cross-ambiguity functions between the main carrier and subcarrier, and is the part that should be suppressed as far as possible:

\chi_I = A_1A_2\sum_{\substack{i=1,2\\ s\ne i}}\exp\left({\rm j}2\pi f_i\tau_{\rm d} - {\rm j}\pi\mu_i\tau_{\rm d}^2\right)\int_{t_1}^{t_2}\exp\left[{\rm j}\pi(\mu_i-\mu_s)t^2\right]\exp\left[{\rm j}2\pi(f_i - \mu_i\tau_{\rm d} - f_s - f_{\rm d})t\right]{\rm d}t = A_1A_2\sum_{\substack{i=1,2\\ s\ne i}}\exp(\alpha)\int_{t_1}^{t_2}\exp(-\beta^2)\,{\rm d}t = A_1A_2\sum_{\substack{i=1,2\\ s\ne i}}\frac{1}{2\sqrt{-{\rm j}(\mu_i-\mu_s)}}\exp(\alpha)\left\{{\rm erf}[\beta(t_2)] - {\rm erf}[\beta(t_1)]\right\}   (13)

where f_1, \mu_1 are the main-carrier parameters, f_2, \mu_2 the subcarrier parameters, and

\alpha = \frac{{\rm j}\pi(f_i - \mu_i\tau_{\rm d} - f_s - f_{\rm d})^2}{\mu_s - \mu_i} + {\rm j}2\pi f_i\tau_{\rm d} - {\rm j}\pi\mu_i\tau_{\rm d}^2,

\beta(t) = \sqrt{-{\rm j}\pi}\left[\frac{f_i - \mu_i\tau_{\rm d} - f_s - f_{\rm d}}{\sqrt{\mu_i - \mu_s}} + \sqrt{\mu_i - \mu_s}\,t\right],\quad {\rm erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x\exp(-z^2)\,{\rm d}z

Because a closed-form expression of the ambiguity function cannot be obtained, it was analyzed by simulation in a statistical sense, with the same parameters as in Section 5. The mainlobe region is the superposition of the two chirp auto-ambiguity functions, while the adjacent-channel interference term is the sum of the two main/subcarrier cross-ambiguity functions, whose amplitude is low compared with the peak of the signal ambiguity function: over many simulation runs, the mean of the interference-term peak is only 2.2% of the ambiguity-function peak, with a variance of 0.000115, and that peak does not lie at the origin of the velocity-range plane, so the ambiguity function is considered to be dominated by the mainlobe region. As the modulated data change, the mainlobe width of the Doppler cut varies by less than 1/\tau and its first sidelobe fluctuates around –13 dB; the mainlobe width of the delay cut varies around 1/B and its first sidelobe around –12 dB. Hence matched filtering with the transmitted shared signal itself suffers some performance loss. Fig. 6 shows the mainlobe-region and interference-term ambiguity functions of a shared signal modulated with one particular data pattern; the mainlobe peak at the origin of the velocity-range plane is the highest.

Figure  6.  Ambiguity function of the shared signal

The cross-correlation between main carrier and subcarrier determines the subcarrier residue after matched filtering at the receiver. The main carrier and subcarrier are expressed as

s_r(t) = A_1\exp\left({\rm j}\pi\mu_r t^2 + {\rm j}2\pi f_r t\right)   (14)
s_{kl}(t) = A_2\exp\left({\rm j}\pi\mu_k t^2 + {\rm j}2\pi f_l t\right)   (15)

where -\tau/2 \le t \le \tau/2 and \tau is the signal pulse width. The cross-correlation function is

R_{s_r,s_{kl}}(\tau_{\rm d}) = \int_{-\infty}^{+\infty}s_r^*(t)s_{kl}(t+\tau_{\rm d})\,{\rm d}t = A_1A_2\int_{t_1}^{t_2}\exp\left\{{\rm j}2\pi f_l\tau_{\rm d} - \frac{{\rm j}\pi(f_l+\mu_k\tau_{\rm d}-f_r)^2}{\mu_k-\mu_r} + \frac{{\rm j}\pi}{2}\left[\frac{2(f_l+\mu_k\tau_{\rm d}-f_r)}{\sqrt{2(\mu_k-\mu_r)}} + \sqrt{2(\mu_k-\mu_r)}\,t\right]^2 + {\rm j}\pi\mu_k\tau_{\rm d}^2\right\}{\rm d}t   (16)

where the integration interval [t_1, t_2] depends on the delay \tau_{\rm d}: for 0 \le \tau_{\rm d} \le \tau, t_1 = -\tau/2, t_2 = \tau/2 - \tau_{\rm d}; for -\tau \le \tau_{\rm d} \le 0, t_1 = -\tau/2 - \tau_{\rm d}, t_2 = \tau/2. Letting \gamma(t) = \sqrt{2(\mu_k-\mu_r)}\,t + \dfrac{2(f_l+\mu_k\tau_{\rm d}-f_r)}{\sqrt{2(\mu_k-\mu_r)}}, so that {\rm d}\gamma = \sqrt{2(\mu_k-\mu_r)}\,{\rm d}t, Eq. (16) becomes

R_{s_r,s_{kl}}(\tau_{\rm d}) = \frac{A_1A_2\exp\left({\rm j}2\pi f_l\tau_{\rm d} + {\rm j}\pi\mu_k\tau_{\rm d}^2\right)}{\sqrt{2(\mu_k-\mu_r)}}\exp\left[-\frac{{\rm j}\pi(f_l+\mu_k\tau_{\rm d}-f_r)^2}{\mu_k-\mu_r}\right]\int_{\gamma(t_1)}^{\gamma(t_2)}\exp\left(\frac{{\rm j}\pi}{2}\gamma^2\right){\rm d}\gamma   (17)

where \int_{\gamma(t_1)}^{\gamma(t_2)}\exp\left(\frac{{\rm j}\pi}{2}\gamma^2\right){\rm d}\gamma = C(\gamma(t_2)) + {\rm j}S(\gamma(t_2)) - C(\gamma(t_1)) - {\rm j}S(\gamma(t_1)), with the Fresnel integrals C(\gamma) = \int_0^{\gamma}\cos\left(\frac{\pi v^2}{2}\right){\rm d}v \approx \frac{1}{2} + \frac{1}{\pi\gamma}\sin\left(\frac{\pi}{2}\gamma^2\right) and S(\gamma) = \int_0^{\gamma}\sin\left(\frac{\pi v^2}{2}\right){\rm d}v \approx \frac{1}{2} - \frac{1}{\pi\gamma}\cos\left(\frac{\pi}{2}\gamma^2\right).

The cross-correlation value depends on the chirp-rate difference between main carrier and subcarrier and on the ratio of the frequency difference to the chirp-rate difference. With the main carrier designed as \mu_r = 120\ {\rm MHz/\mu s}, f_r = 0\ {\rm MHz} and the subcarrier as \mu_k = 99\ {\rm MHz/\mu s}, f_l = 21\ {\rm MHz}, the auto- and cross-correlation properties were simulated. Fig. 7 shows that the main/subcarrier cross-correlation is 35 dB below the main-carrier autocorrelation, so the subcarrier residue after matched filtering at the receiver is very small. The ambiguity function characterizes matched filtering with the full transmitted signal, whose performance degrades noticeably; in this paper, however, the main carrier is fixed, and when only the main carrier is used as the matched-filter reference, the loss in detection performance becomes smaller.

Figure  7.  Cross-correlation function of main carrier and subcarrier
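A small numerical cross-check (a sketch, not the paper's exact simulation): sample the designed main carrier (\mu_r = 120 MHz/us, f_r = 0 MHz) and one subcarrier (\mu_k = 99 MHz/us, f_l = 21 MHz) over \tau = 1 us and compare the peak of their cross-correlation with the main carrier's autocorrelation peak.

```python
import cmath

def chirp(mu, f, n=256):
    """Unit-amplitude chirp samples over t in [-0.5, 0.5) us (mu: MHz/us, f: MHz)."""
    return [cmath.exp(1j * (cmath.pi * mu * t * t + 2 * cmath.pi * f * t))
            for t in ((i + 0.5) / n - 0.5 for i in range(n))]

def xcorr_peak(a, b):
    """Peak magnitude of the discrete sliding correlation over all lags."""
    n = len(a)
    return max(abs(sum(a[i].conjugate() * b[i + lag]
                       for i in range(max(0, -lag), min(n, n - lag))))
               for lag in range(-n + 1, n))

s_r = chirp(120, 0)
s_kl = chirp(99, 21)
ratio = xcorr_peak(s_r, s_kl) / xcorr_peak(s_r, s_r)  # well below 1
```

The computed ratio confirms qualitatively that the cross-correlation peak sits well below the autocorrelation peak; the exact level depends on sampling, normalization, and the lag range considered.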

Since the shared signal is the superposition of main carrier and subcarrier, the power available for radar detection decreases; but because the two functions are mutually independent, the power ratio between main carrier and subcarrier can be adjusted to increase the power devoted to radar detection.

Radar targets are detected by pulse compression with a matched filter referenced to the main carrier. The result rests on the correlation between the main carrier and the shared signal of each pulse, expressed as the sum of the main-carrier autocorrelation and the main/subcarrier cross-correlation:

R_{s,s_r}(\tau_{\rm d}) = \int_{-\infty}^{+\infty}s(t)s_r^*(t+\tau_{\rm d})\,{\rm d}t = \int_{-\infty}^{+\infty}\left[s_r(t)+s_{kl}(t)\right]s_r^*(t+\tau_{\rm d})\,{\rm d}t = R_{s_r,s_r}(\tau_{\rm d}) + R_{s_{kl},s_r}(\tau_{\rm d})   (18)

From the analysis in Section 4.2, the main/subcarrier cross-correlation is much lower in amplitude than the main-carrier autocorrelation, so the radar detection result is barely affected by the subcarrier component.

The main/subcarrier power split determines the power available for radar detection, and the main-carrier power can be raised within a suitable range for that purpose. Table 1 lists the cross-correlation coefficients between the main carrier and several shared signals with different parameter sets under different main/sub power ratios: the larger the main-carrier share, the closer the coefficient is to 1 and the better the radar detection performance.

Table  1.  Cross-correlation coefficients between the main carrier and shared signals under different main/sub power ratios

Power ratio   s13      s24      s35      s46      s57
1:1           0.7349   0.7456   0.7341   0.7323   0.7672
4:1           0.9082   0.9053   0.9022   0.9015   0.9115
9:1           0.9522   0.9530   0.9519   0.9515   0.9554

Raising the main-carrier power necessarily lowers the subcarrier power, so FRFT demodulation performance degrades somewhat. As shown in Fig. 8, at a main/sub power ratio of 9:1 the FRFT sidelobes rise slightly but stay near –10 dB; a clear peak can still be detected and the symbols demodulated. The ratio cannot grow without bound, however: with too much main-carrier power the peak is buried in the FRFT demodulation output and the data cannot be recovered, and the higher the ratio, the worse the bit error rate, so the main/sub power ratio should be chosen according to the application.

Figure  8.  FRFT output at a 9:1 main/sub power ratio

Relative motion between the radar and the communication receiver introduces a Doppler shift f_{\rm d}, and the received chirp becomes

s(t) = A\exp\left[{\rm j}\left(2\pi f t + \pi\mu t^2 + \varphi\right)\right],\quad t\in[-\tau/2, \tau/2],\ f = f_{\rm c} + f_{\rm d}   (19)

The Doppler shift f_{\rm d} can be viewed as an offset of the initial frequency f_{\rm c}; the chirp rate \mu is unaffected, so the peak position in the fractional Fourier domain shifts while the optimal order is unchanged. The peak-position offset at the demodulator output and the squared amplitude at the detection point are, respectively,

\Delta u = f_{\rm d}\sin({\rm arccot}\,\mu)   (20)
|S_\alpha(u)|^2 = \frac{A^2}{\sin\alpha}\cdot\frac{\sin^2(\pi f_{\rm d}\tau)}{(\pi f_{\rm d})^2}   (21)

The squared-amplitude attenuation factor caused by the Doppler shift f_{\rm d} is

\gamma = \frac{A^2\sin^2(\pi f_{\rm d}\tau)}{(\pi f_{\rm d})^2\sin\alpha}\bigg/\frac{A^2\tau^2}{\sin\alpha} = \frac{\sin^2(\pi f_{\rm d}\tau)}{(\pi f_{\rm d}\tau)^2}   (22)

Eq. (22) shows that the attenuation factor depends only on the Doppler shift f_{\rm d} and the pulse width \tau; even when \pi f_{\rm d}\tau reaches 0.5, the amplitude attenuation is still below 0.1, so the Doppler shift has little effect on the detection-point amplitude and the designed shared signal is robust to Doppler.
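Eq. (22) reduces to a sinc-squared law, so the robustness claim is a one-line check (function name illustrative):

```python
import math

def doppler_attenuation(x):
    """gamma = sin^2(x)/x^2 with x = pi * f_d * tau (gamma -> 1 as x -> 0)."""
    return 1.0 if x == 0 else (math.sin(x) / x) ** 2

loss = 1.0 - doppler_attenuation(0.5)   # squared-amplitude loss at pi*f_d*tau = 0.5
```

At x = 0.5 the loss evaluates to about 0.08, below the 0.1 figure quoted in the text.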

The modulation scheme here must balance chirp-rate keying against frequency keying. Let the bit width be T_{\rm b}; with chirp-rate keying alone, the M = 2^n-ary symbol width is T_{\rm s} = nT_{\rm b}. Since the chirp rate \mu_k varies, the swept bandwidth \mu_k T_{\rm s} of the keyed chirps also varies, its maximum set by the largest rate \mu_{k\max}. The bandwidth efficiency \eta_{\mu} of chirp-rate keying therefore satisfies

\eta_{\mu} = \frac{R_{\rm b}}{B} = \frac{1}{T_{\rm b}B} = \frac{n}{T_{\rm s}B} \ge \frac{n}{\mu_{k\max}T_{\rm s}^2}   (23)

The theoretical channel bandwidth of coherent MFSK (Multiple Frequency-Shift Keying) is R_{\rm b}(M+3)/(2n), so the bandwidth efficiency of MFSK is

\eta_{\rm f} = \frac{R_{\rm b}}{B} = \frac{2n}{M+3}   (24)

Hence, when \mu_{k\max}T_{\rm s}^2 < (M+3)/2, i.e., when the maximum time-bandwidth product of the chirps used for chirp-rate keying satisfies \tau B_{\max} < (M+3)/2, chirp-rate keying has better bandwidth efficiency than MFSK. MFSK, on the other hand, has better bit-error-rate performance than chirp-rate keying, so when the two keying schemes are used together, their mix can be adjusted to trade off error performance against bandwidth efficiency.
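Eqs. (23) and (24) side by side (a sketch; \mu_{k\max}T_{\rm s}^2 is the maximum time-bandwidth product of the keyed chirps, and the numeric values below are chosen only for illustration):

```python
def eta_mu(n, mu_kmax_ts2):
    """Lower bound on chirp-rate-keying bandwidth efficiency, Eq. (23)."""
    return n / mu_kmax_ts2

def eta_f(n):
    """Coherent MFSK bandwidth efficiency, Eq. (24), with M = 2**n."""
    return 2 * n / (2 ** n + 3)
```

With n = 3 (M = 8), the crossover sits at (M+3)/2 = 5.5: a time-bandwidth product of 4 gives 3/4 > 6/11 in favor of chirp-rate keying, while a product of 6 reverses the ordering.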

In the simulation, binary data modulate 64 chirps formed from 8 chirp rates and 8 initial frequencies. Following the requirements of Section 2, the parameters are: RF frequency f_{\rm p} = 10\ {\rm GHz}; pulse width \tau = 1\ {\rm \mu s}; duty cycle 10%; bandwidth B = 120\ {\rm MHz}; main-carrier frequency f_{\rm r} = 0\ {\rm MHz} and chirp rate \mu_{\rm r} = 120\ {\rm MHz/\mu s}; subcarrier chirp-rate set \mu_k = [15, 27, 39, 51, 63, 75, 87, 99]\ {\rm MHz/\mu s}; frequency set f_l = [0, 3, 6, 9, 12, 15, 18, 21]\ {\rm MHz}; main/sub power ratio 1:1; target parameters [1000\ {\rm m}, 200\ {\rm m/s}].

Since the main-carrier initial frequency f_{\rm r} = f_1 and its chirp rate \mu_{\rm r} > \mu_k is the largest in the usable range, the main-carrier spectrum covers the entire spectrum used by the subcarrier for communication. The main carrier performs radar target detection; the receiver pulse-compresses the echo with the matched filter, with the result shown in Fig. 9.

Figure  9.  Pulse compression result

The analysis shows that the main/subcarrier cross-correlation is low and does not impair radar target detection: the first sidelobe remains near –13 dB, and the subcarrier residue after matched filtering is small, staying below –20 dB; raising the main-carrier power lowers it further. The matched-filtered pulse train is highly correlated, so Moving Target Detection (MTD) processing with a Doppler filter bank yields the target's relative velocity, meaning a single filter at the receiver suffices. The velocity measurement is shown in Fig. 10.

Figure  10.  MTD result

At a false-alarm probability of 10^{-4}, Fig. 11 plots detection probability versus Signal-to-Noise Ratio (SNR) for the shared signal after pulse compression and after MTD with different numbers of integrated pulses. Pulse compression and MTD raise the SNR through coherent integration. Because the shared signal contains a communication subcarrier component, its detection probability is lower than that of a single chirp with the same parameters; but MTD coherent processing improves the SNR and hence the detection probability, and the more pulses integrated, the better, so a larger number of integrated pulses can compensate for the radar-detection loss of the shared signal.

Figure  11.  Detection probability versus SNR

The subcarrier modulates the communication data by keying the chirp rate and initial frequency of a chirp. The communication receiver applies the 2^{n_1} fixed-order FRFTs corresponding to \{\mu_k\} to the echo and tests the amplitudes at the 2^{n_2} fixed positions corresponding to \{f_l\}; among these 2^{n_1}\times 2^{n_2} detection points, the order p and position u of the points whose amplitude exceeds the set threshold demodulate the transmitted M-ary data.

When modulating, after the communication data are serial-to-parallel converted and grouped, multiple subcarriers can be superimposed in one shared signal depending on the ordering of the groups: if the next group is numerically larger than the previous one, both can be modulated onto the same shared signal, so that pulse carries multiple subcarriers; demodulation at the receiver is unchanged, every subcarrier's data can be recovered, and sorting the groups by size in this way raises the communication rate. If the next group is not larger, it is modulated on the following pulse. The communication rate of this shared signal therefore varies between n\cdot{\rm PRF} and 2^n\cdot n\cdot{\rm PRF}, where n = n_1 + n_2 is the number of bits carried by a single subcarrier. With only the main carrier plus a single subcarrier, the rate under these simulation parameters is n\cdot{\rm PRF} = 600\ {\rm kb/s}; higher rates follow from increasing the number of modulation bits through the parameters.
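The single-subcarrier rate follows directly from the stated simulation parameters (a sketch: \tau = 1 us, 10% duty cycle, n_1 = n_2 = 3):

```python
tau = 1e-6                 # pulse width, s
duty = 0.10                # duty cycle
prf = duty / tau           # pulse repetition frequency, Hz (100 kHz)
n = 3 + 3                  # bits per subcarrier, n1 + n2
rate = n * prf             # single-subcarrier data rate, b/s
```

This reproduces the 600 kb/s figure quoted in the text; with group sorting, up to 2^n subcarriers per pulse scale the rate accordingly.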

Fig. 12 shows the simulated bit-error-rate curves of the shared signal in an AWGN channel. The third, fourth, and fifth curves from the top are three different mixes of 64-ary modulation, namely 16K-4F, 8K-8F, and 4K-16F, where K denotes chirp-rate keying and F carrier-frequency keying; Fig. 12 shows their error performance improving in that order. As analyzed in Section 4.5, this is because FSK outperforms chirp-rate keying in error rate; changing the mix of chirp rates and initial frequencies thus trades the shared signal's noise robustness against its bandwidth efficiency. The theoretical error-rate curves of the digital keying schemes MFSK, MASK, and MPSK are shown for reference: as the modulation order M grows, the noise performance of MASK and MPSK degrades while their spectral efficiency rises, whereas MFSK has better noise performance and error rate but poorer spectral efficiency [17].

Figure  12.  Bit error rate versus SNR

\left\{\begin{aligned}P_{\rm MASK} &= \left(1-\frac{1}{M}\right){\rm erfc}\left(\sqrt{\frac{3r}{M^2-1}}\right)\\ P_{\rm MFSK} &= \frac{M-1}{2}{\rm erfc}\left(\sqrt{\frac{r}{2}}\right)\\ P_{\rm MPSK} &\approx {\rm erfc}\left(\sqrt{2r}\sin\frac{\pi}{2M}\right)\end{aligned}\right.   (25)
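The reference curves of Eq. (25) can be evaluated directly (r denotes the SNR; used here only to reproduce the qualitative ordering cited from Ref. [17]):

```python
import math

def p_mask(m, r):
    """Theoretical MASK error rate, Eq. (25)."""
    return (1 - 1 / m) * math.erfc(math.sqrt(3 * r / (m * m - 1)))

def p_mfsk(m, r):
    """Theoretical coherent MFSK error rate, Eq. (25)."""
    return (m - 1) / 2 * math.erfc(math.sqrt(r / 2))

def p_mpsk(m, r):
    """Approximate MPSK error rate, Eq. (25)."""
    return math.erfc(math.sqrt(2 * r) * math.sin(math.pi / (2 * m)))
```

At M = 8 and moderate SNR, MFSK gives the lowest error rate of the three, consistent with the noise-performance ordering discussed above.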

This paper designed and studied a multi-carrier radar-communication shared signal that keys the communication data onto the chirp rate and initial frequency of a subcarrier chirp while using the main carrier for radar target detection. The ambiguity function of the shared signal and the orthogonality between main carrier and subcarrier were analyzed, the relations among the chirp-signal parameters were designed, FRFT demodulation was adopted at the communication receiver, and the Doppler robustness of the shared signal was analyzed. The design realizes the integration of radar and communication signals in energy and time on complex integrated electronic-equipment platforms, which will be an important development direction for future integrated electronic-warfare systems.

  • [1]
    XU Yan and SCOOT K A. Sea ice and open water classification of SAR imagery using CNN-based transfer learning[C]. 2017 IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 2017: 3262–3265. doi: 10.1109/IGARSS.2017.8127693.
    [2]
    ZHANG Yue, SUN Xian, SUN Hao, et al. High resolution SAR image classification with deeper convolutional neural network[C]. International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018: 2374–2377. doi: 10.1109/IGARSS.2018.8518829.
    [3]
    SHAO Jiaqi, QU Changwen, and LI Jianwei. A performance analysis of convolutional neural network models in SAR target recognition[C]. 2017 SAR in Big Data Era: Models, Methods and Applications, Beijing, China, 2017: 1–6. doi: 10.1109/BIGSARDATA.2017.8124917.
    [4]
    ZHANG Ming, AN Jubai, YU Dahua, et al. Convolutional neural network with attention mechanism for SAR automatic target recognition[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 4004205. doi: 10.1109/LGRS.2020.3031593.
    [5]
    CHEN Sizhe, WANG Haipeng, XU Feng, et al. Target classification using the deep convolutional networks for SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(8): 4806–4817. doi: 10.1109/TGRS.2016.2551720.
[6]
XU Feng, WANG Haipeng, and JIN Yaqiu. Deep learning as applied in SAR target recognition and terrain classification[J]. Journal of Radars, 2017, 6(2): 136–148. doi: 10.12000/JR16130.
[7]
LYU Yixuan, WANG Zhirui, WANG Peijin, et al. Scattering information and meta-learning based SAR images interpretation for aircraft target recognition[J]. Journal of Radars, 2022, 11(4): 652–665. doi: 10.12000/JR22044.
    [8]
    HUANG Teng, ZHANG Qixiang, LIU Jiabao, et al. Adversarial attacks on deep-learning-based SAR image target recognition[J]. Journal of Network and Computer Applications, 2020, 162: 102632. doi: 10.1016/j.jnca.2020.102632.
[9]
SUN Hao, CHEN Jin, LEI Lin, et al. Adversarial robustness of deep convolutional neural network-based image recognition models: A review[J]. Journal of Radars, 2021, 10(4): 571–594. doi: 10.12000/JR21048.
[10]
GAO Xunzhang, ZHANG Zhiwei, LIU Mei, et al. Intelligent radar image recognition countermeasures: A review[J]. Journal of Radars, 2023, 12(4): 696–712. doi: 10.12000/JR23098.
    [11]
    SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[C]. The 2nd International Conference on Learning Representations, Banff, Canada, 2014.
    [12]
    GOODFELLOW I J, SHLENS J, and SZEGEDY C. Explaining and harnessing adversarial examples[C]. The 3rd International Conference on Learning Representations, San Diego, CA, USA, 2015: 1050.
    [13]
    KURAKIN A, GOODFELLOW L J, and BENGIO S. Adversarial examples in the physical world[C]. The 5th International Conference on Learning Representations, Toulon, France, 2017: 99–112.
    [14]
    PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]. 2016 IEEE European Symposium on Security and Privacy, Saarbruecken, Germany, 2016: 372–387. doi: 10.1109/EuroSP.2016.36.
    [15]
    BRENDEL W, RAUBER J, and BETHGE M. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models[C]. The 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
    [16]
    CARLINI N and WAGNER D. Towards evaluating the robustness of neural networks[C]. 2017 IEEE Symposium on Security and Privacy, San Jose, CA, USA, 2017: 39–57. doi: 10.1109/SP.2017.49.
    [17]
    SU Jiawei, VARGAS D V, and SAKURAI K. One pixel attack for fooling deep neural networks[J]. IEEE Transactions on Evolutionary Computation, 2019, 23(5): 828–841. doi: 10.1109/TEVC.2019.2890858.
    [18]
    CHEN Pinyu, ZHANG Huan, SHARMA Y, et al. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models[C]. The 10th ACM Workshop on Artificial Intelligence and Security, Dallas, USA, 2017: 15–26. doi: 10.1145/3128572.3140448.
    [19]
    CHEN Jianbo, JORDAN M I, and WAINWRIGHT M J. HopSkipJumpAttack: A query-efficient decision-based attack[C]. 2020 IEEE Symposium on Security and Privacy, San Francisco, CA, USA, 2020: 1277–1294. doi: 10.1109/SP40000.2020.00045.
    [20]
    DONG Yinpeng, LIAO Fengzhou, PANG Tianyu, et al. Boosting adversarial attacks with momentum[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018: 9185–9193. doi: 10.1109/CVPR.2018.00957.
    [21]
    ZHAO Haojun, LIN Yun, GAO Song, et al. Evaluating and improving adversarial attacks on DNN-based modulation recognition[C]. GLOBECOM 2020–2020 IEEE Global Communications Conference, Taipei, China, 2020: 1–5. doi: 10.1109/GLOBECOM42002.2020.9322088.
    [22]
    WANG Xiaosen and HE Kun. Enhancing the transferability of adversarial attacks through variance tuning[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 2021: 1924–1933. doi: 10.1109/CVPR46437.2021.00196.
    [23]
    XIE Cihang, ZHANG Zhishuai, ZHOU Yuyin, et al. Improving transferability of adversarial examples with input diversity[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019: 2725–2734. doi: 10.1109/CVPR.2019.00284.
    [24]
    CZAJA W, FENDLEY N, PEKALA M J, et al. Adversarial examples in remote sensing[C]. The 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, USA, 2018: 408–411. doi: 10.1145/3274895.3274904.
    [25]
    CHEN Li, XU Zewei, LI Qi, et al. An empirical study of adversarial examples on remote sensing image scene classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(9): 7419–7433. doi: 10.1109/TGRS.2021.3051641.
    [26]
    DU Chuan, HUO Chaoying, ZHANG Lei, et al. Fast C&W: A fast adversarial attack algorithm to fool SAR target recognition with deep convolutional neural networks[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 4010005. doi: 10.1109/LGRS.2021.3058011.
    [27]
    DU Chuan and ZHANG Lei. Adversarial attack for SAR target recognition based on UNet-generative adversarial network[J]. Remote Sensing, 2021, 13(21): 4358. doi: 10.3390/rs13214358.
    [28]
    ZHOU Junfan, SUN Hao, and KUANG Gangyao. Template-based universal adversarial perturbation for SAR target classification[C]. The 8th China High Resolution Earth Observation Conference, Singapore, Singapore, 2023: 351–360. doi: 10.1007/978-981-19-8202-6_32.
    [29]
    XIA Weijie, LIU Zhe, and LI Yi. SAR-PeGA: A generation method of adversarial examples for SAR image target recognition network[J]. IEEE Transactions on Aerospace and Electronic Systems, 2023, 59(2): 1910–1920. doi: 10.1109/TAES.2022.3206261.
    [30]
    PENG Bowen, PENG Bo, ZHOU Jie, et al. Scattering model guided adversarial examples for SAR target recognition: Attack and defense[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5236217. doi: 10.1109/TGRS.2022.3213305.
    [31]
    HANSEN L K and SALAMON P. Neural network ensembles[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(10): 993–1001. doi: 10.1109/34.58871.
    [32]
    DING Jun, CHEN Bo, LIU Hongwei, et al. Convolutional neural network with data augmentation for SAR target recognition[J]. IEEE Geoscience and Remote Sensing Letters, 2016, 13(3): 364–368. doi: 10.1109/LGRS.2015.2513754.
    [33]
    LEE J S. Digital image enhancement and noise filtering by use of local statistics[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1980, PAMI-2(2): 165–168. doi: 10.1109/TPAMI.1980.4766994.
    [34]
    ZHUANG Juntang, TANG T, DING Yifan, et al. AdaBelief optimizer: Adapting stepsizes by the belief in observed gradients[C]. The 34th International Conference on Neural Information Processing Systems, 2020: 795–806.
    [35]
    NESTEROV Y. A method for unconstrained convex minimization problem with the rate of convergence[J]. Mathematics, 1983, 269: 543–547.
    [36]
    MA J and YARATS D. Quasi-hyperbolic momentum and Adam for deep learning[C]. The 7th International Conference on Learning Representations, New Orleans, LA, USA, 2019: 1–38.
    [37]
    KEYDEL E R, LEE S W, and MOORE J T. MSTAR extended operating conditions: A tutorial[C]. SPIE 2757, Algorithms for Synthetic Aperture Radar Imagery III, Orlando, USA, 1996: 228–242. doi: 10.1117/12.242059.
    [38]
    HOU Xiyue, AO Wei, SONG Qian, et al. FUSAR-Ship: Building a high-resolution SAR-AIS matchup dataset of Gaofen-3 for ship detection and recognition[J]. Science China Information Sciences, 2020, 63(4): 140303. doi: 10.1007/s11432-019-2772-5.
    [39]
    KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[C]. The 25th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2012: 1106–1114.
    [40]
    SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]. The 3rd International Conference on Learning Representations, San Diego, CA, USA, 2015.
    [41]
    HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90.
    [42]
    SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016: 2818–2826. doi: 10.1109/CVPR.2016.308.
    [43]
    HOWARD A G, ZHU Menglong, CHEN Bo, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[EB/OL]. https://arxiv.org/abs/1704.04861, 2017.
    [44]
    IANDOLA F N, HAN Song, MOSKEWICZ M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size[EB/OL]. https://arxiv.org/abs/1602.07360, 2016.
    [45]
    WANG Wenhai, XIE Enze, LI Xiang, et al. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 2021: 548–558. doi: 10.1109/ICCV48922.2021.00061.
    [46]
    MEHTA S and RASTEGARI M. MobileViT: Light-weight, general-purpose, and mobile-friendly vision transformer[C]. The Tenth International Conference on Learning Representations, 2022.
    [47]
    KINGMA D P and BA J. Adam: A method for stochastic optimization[C]. The 3rd International Conference on Learning Representations, San Diego, CA, USA, 2015: 1–15.
    [48]
    WANG Zhou, BOVIK A C, SHEIKH H R, et al. Image quality assessment: From error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600–612. doi: 10.1109/TIP.2003.819861.

