
In recent years, thanks to the powerful feature-extraction ability of Deep Neural Networks (DNNs), deep learning has achieved remarkable success in Synthetic Aperture Radar Automatic Target Recognition (SAR-ATR) [1−7]. However, studies have shown that DNN-based SAR-ATR models are vulnerable to adversarial examples [8−10]. The concept of the adversarial example was first proposed by Szegedy et al. [11]: by adding carefully crafted, tiny perturbations to an input sample, an adversarial example is produced that causes the recognition model to misclassify, thereby attacking the neural network. Research on adversarial attacks against SAR-ATR models can also expand SAR target-recognition datasets: retraining with the generated adversarial examples improves the robustness of SAR-ATR models. Research on SAR adversarial attack algorithms is therefore of great significance to the security of SAR-ATR models.
Research on adversarial attacks against SAR-ATR models is still in its infancy; most work transfers adversarial attack algorithms from optical imagery to SAR imagery. In the optical domain, many attack algorithms have been proposed. According to how much prior knowledge of the target model is available, they are generally divided into white-box attacks [12−16] and black-box attacks. Black-box attacks can be further divided into score-based [17,18], decision-based [19], and transfer-based [20−23] attacks. The first two require a large number of queries to the network, which is impractical in real scenarios, so transfer-based black-box attacks, almost all of which are built on gradient-based attacks, have become the research focus. Among gradient-based attacks, Goodfellow et al. [12] proposed the Fast Gradient Sign Method (FGSM), which finds the direction in which the model's gradient changes most and adds a small perturbation along it. Kurakin et al. [13] proposed the Iterative Fast Gradient Sign Method (I-FGSM), which adds smaller perturbations over multiple iterations, lowering the probability of the perturbation being detected and addressing FGSM's low attack success rate. Dong et al. [20] first introduced momentum on top of I-FGSM, proposing the Momentum Iterative Fast Gradient Sign Method (MI-FGSM), which further improves the black-box attack capability, i.e., transferability, of adversarial examples. Zhao et al. [21] integrated the Nesterov-Adam algorithm into I-FGSM and proposed NAM, which improves white-box effectiveness while mounting successful black-box attacks. Wang et al. [22] proposed the variance-tuning iterative fast gradient sign method VMI-FGSM, which reduces the variance of the gradient at each iteration to avoid local optima and thereby strengthens the transferability of adversarial examples. Xie et al. [23] proposed the diversity-input iterative fast gradient sign method DI-FGSM, which applies random transformations to the input image during the iterative attack, increasing input diversity and further improving black-box effectiveness.

The above studies focus on optical imagery, but adversarial examples also exist in remote sensing. Czaja et al. [24] first verified experimentally that adversarial examples exist for deep-CNN-based remote sensing image recognition. Chen et al. [25] studied adversarial examples in remote sensing scene classification comprehensively and showed that they are ubiquitous. In the SAR domain, early work mainly used optical-domain attack algorithms to verify that SAR adversarial examples exist; for example, Huang et al. [8] demonstrated security and robustness problems of deep learning in SAR target recognition. Building on transferred and reproduced optical-domain attacks, some studies further accelerated the generation of SAR adversarial examples. Fast C&W [26] introduces an encoder network that produces adversarial examples in a single forward mapping, markedly improving generation efficiency over C&W [16]. Du et al. [27] built adversarial examples with a U-Net Generative Adversarial Network (GAN), improving both attack success rate and computational efficiency. Zhou et al. [28] proposed a SAR Universal Adversarial Perturbation (UAP), greatly shortening generation time. Considering realization in real scenarios, Xia et al. [29] generated SAR adversarial examples in the signal domain: the proposed deceptive-jamming model flexibly produces adversarial examples, and experiments showed that the interference is hard to perceive while effectively attacking four classic DNNs (LeNet, VGGNet16, ResNet18, and ResNet50). To ensure physical feasibility, Peng et al. [30] proposed a parametric-model-based method for generating SAR image adversarial examples, with good attack performance demonstrated on eight DNN models.
Most existing adversarial attacks against SAR-ATR models are white-box attacks, which require prior knowledge of the adversary model's architecture and parameters. In realistic military applications, however, the targeted SAR is non-cooperative and its SAR-ATR model is unknown to the attacker, so white-box attacks are usually infeasible. There is therefore an urgent need for SAR adversarial attack algorithms that require no prior information about the adversary model, i.e., black-box attacks.
To address this problem, this paper proposes a Transfer-based Black-box Attack Algorithm (TBAA) that effectively attacks SAR-ATR models under black-box conditions. The contributions are as follows: (1) Exploiting the characteristics of SAR imagery, the algorithm accounts for speckle noise: at each iteration of adversarial example generation, speckle is removed with the Lee filter and the image is reconstructed by multiplying with noise drawn from a truncated exponential distribution. This diversifies the input SAR images and mitigates overfitting to the surrogate model. (2) To stabilize the gradient update direction and accelerate convergence, the algorithm introduces a gradient-direction optimizer and a gradient-direction stabilization step, producing adversarial examples with strong transferability. (3) Drawing on ensemble learning, the algorithm attacks an ensemble of models so that the perturbation remains effective against several models simultaneously, further raising the black-box attack success rate.
This paper designs a black-box attack on SAR-ATR models based on the Momentum Iterative Fast Gradient Sign Method (MI-FGSM); the framework is shown in Fig. 1. First, in view of the speckle noise in SAR imagery, the SAR image undergoes Lee filtering followed by a random speckle-noise transformation with probability p (yellow box in Fig. 1). A gradient-direction optimizer is then designed (green box in Fig. 1), whose look-ahead behavior and adaptive learning rate further raise the black-box attack success rate. Next, using model ensembling (gray box in Fig. 1), the logit outputs Logits_i (i = 1, 2, ⋯, n) of n models are combined by weighted summation into Logits, and a new loss function is computed from the label and the fused Logits. Finally, the gradient of this loss is passed through the Quasi-Hyperbolic Momentum (QHM) operator to obtain a stable update direction (blue box in Fig. 1), improving the black-box attack performance of the proposed algorithm.
The proposed algorithm builds on the idea of MI-FGSM [20], a gradient-based attack that exploits the transferability of adversarial examples to attack black-box models. Transferability is illustrated in Fig. 2. For an image multi-classification task, f1(⋅), f2(⋅), and fs(⋅) are trained deep neural networks, δ is the added perturbation, and x and x^adv denote the original input and the resulting adversarial example. fs(⋅) is the white-box model used to generate the adversarial example, whereas f1(⋅) and f2(⋅) are black-box models. The bar charts on the right visualize the classification results: taller bars indicate higher confidence, red bars mark the class the network finally (wrongly) predicts, and yellow bars mark the correct class. As the figure shows, x^adv successfully fools fs(⋅) and also causes f1(⋅) and f2(⋅) to output wrong classifications; hence adversarial examples generated on fs(⋅) are transferable.
Most transfer-based black-box attacks are gradient-based, and their procedure mirrors the training of a deep neural network. Let f(⋅) be a deep neural network with loss function J, input SAR image x, and true label y. Training iteratively updates the model parameters θ along the direction of gradient descent so as to reduce J; the parameter update is
$\tilde{\theta} = \theta - \dfrac{\partial J}{\partial y} \cdot \dfrac{\partial y}{\partial \theta}$ | (1) |
A gradient-based adversarial example is instead generated by iteratively updating the input SAR image x along the direction of gradient ascent, so that the loss J of the deep neural network grows and the model outputs a wrong recognition result:
$x^{adv} = x + \dfrac{\partial J}{\partial y} \cdot \dfrac{\partial y}{\partial x}$ | (2) |
From the above analysis, gradient-based attacks and network training are both back-propagation processes that update parameters. Training is prone to overfitting: the model is accurate on the training set but not on the test set. Analogously, adversarial examples produced by gradient-based attacks perform well against the white-box model but poorly under black-box conditions, i.e., their transferability is low. Methods that improve a network's generalization can therefore be repurposed to improve adversarial transferability. A better optimizer is one such method: as shown in Eq. (3), where μ is the decay factor and g_t is the gradient accumulated over the first t iterations, MI-FGSM introduces a momentum term into the gradient update to obtain a stable update direction, improving transferability and enabling effective black-box attacks.
$g_{t+1} = \mu \cdot g_t + \dfrac{\nabla_x J(x^{adv}_t, y)}{\left\| \nabla_x J(x^{adv}_t, y) \right\|_1}$ | (3) |
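The momentum accumulation of Eq. (3) and the subsequent sign step can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation; the helper names and the L-infinity clip around the clean image are assumptions consistent with the standard MI-FGSM formulation.

```python
import numpy as np

def mi_fgsm_step(grad, g_prev, mu=1.0):
    """One momentum accumulation step of Eq. (3): the current gradient is
    L1-normalized and added to the decayed accumulator, which stabilizes
    the update direction across iterations."""
    return mu * g_prev + grad / (np.sum(np.abs(grad)) + 1e-12)

def apply_step(x_adv, g_next, alpha=0.01, x=None, eps=0.06):
    """Sign-gradient ascent with an L-infinity clip around the clean image x."""
    x_new = x_adv + alpha * np.sign(g_next)
    if x is not None:
        x_new = np.clip(x_new, x - eps, x + eps)  # stay inside the eps-ball
    return np.clip(x_new, 0.0, 1.0)               # stay a valid image
```

Because only the normalized gradient enters the accumulator, a single large gradient cannot dominate the update direction.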
In addition, following the idea of ensemble learning [31], Eqs. (4) and (5) fuse the fully-connected-layer logit outputs of multiple models and construct a new loss function from the true label y and the fused logits. Remaining effective against several models at once further improves the transferability of the adversarial examples, making other models easier to attack.
$l(x) = \sum_{k=1}^{K} w_k\, l_k(x)$ | (4) |
$J(x, y) = -\mathbf{1}_y \cdot \log\left(\mathrm{softmax}(l(x))\right)$ | (5) |
where K is the number of models, l_k(x) is the logits of the k-th model, w_k is the ensemble weight, and $\mathbf{1}_y$ is the one-hot encoding of the label.
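Eqs. (4) and (5) can be sketched as a small function: weighted logit fusion followed by cross-entropy against the one-hot label. This is an illustrative numpy version under the assumption of precomputed per-model logits.

```python
import numpy as np

def ensemble_loss(logits_list, weights, y_onehot):
    """Eqs. (4)-(5): fuse the logits of K models with weights w_k, then
    compute cross-entropy against the one-hot label. Attacking this fused
    loss keeps the perturbation effective against every ensemble member."""
    fused = sum(w * l for w, l in zip(weights, logits_list))  # Eq. (4)
    z = fused - np.max(fused)                                 # numerically stable softmax
    p = np.exp(z) / np.sum(np.exp(z))
    return -np.sum(y_onehot * np.log(p + 1e-12))              # Eq. (5)
```

Fusing logits (rather than probabilities or losses) preserves each model's relative confidence, which is the variant the paper adopts.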
Speckle noise in SAR images supplies the deep neural network with a large number of high-dimensional features. Unlike genuine SAR target features, these features are not robust; when the model overfits to them, the transferability of the resulting adversarial examples suffers.

To address this, we propose a random Speckle-noise Transformation (ST) for SAR images, which increases the diversity of the training inputs, mitigates the model's overfitting to speckle, and thereby raises the black-box attack success rate. The processed SAR images are assumed to be single-look, and the observed scene is modeled multiplicatively: the intensity of each resolution cell of an actual SAR image is the product of a deterministic value reflecting the cell's true RCS and an exponentially distributed factor [32]:
$I = s \times n$ | (6) |
where I is the observed intensity, s is the RCS of the SAR scene, and n is speckle noise following a truncated exponential distribution.
Since the Lee filter [33] is widely used for SAR image denoising, the proposed algorithm, with probability p, first Lee-filters the observed SAR image x and then multiplies the filtered image by speckle noise drawn from a truncated exponential distribution:
$\mathrm{ST}(x) = \mathrm{Lee}(x) \cdot \dfrac{e^{-z}}{1 - e^{-a}} = \left[\bar{x} + b(x - \bar{x})\right] \cdot \dfrac{e^{-z}}{1 - e^{-a}},\ z > 0$ | (7) |
where x is the SAR image, Lee(⋅) denotes Lee filtering, x̄ is the mean of the SAR image, a is the parameter of the truncated exponential distribution, and b is a weight coefficient.
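A sketch of the ST transformation in Eq. (7), assuming the simplified Lee form x̄ + b·(x − x̄) given above; the default values of p, a, and b and the truncation strategy are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def speckle_transform(x, p=0.5, a=3.0, b=0.8, rng=None):
    """Random speckle-noise transformation ST of Eq. (7): with probability p,
    Lee-filter the image (simplified form) and re-multiply by a factor built
    from speckle noise z drawn from an exponential distribution truncated
    to (0, a]. Otherwise pass the image through unchanged."""
    rng = rng or np.random.default_rng(0)
    if rng.random() >= p:
        return x                                   # unchanged with prob. 1 - p
    x_bar = x.mean()
    lee = x_bar + b * (x - x_bar)                  # simplified Lee filter
    z = rng.exponential(size=x.shape)
    z = np.where(z > a, rng.uniform(1e-3, a, size=x.shape), z)  # truncate to (0, a]
    noise = np.exp(-z) / (1.0 - np.exp(-a))        # multiplicative factor of Eq. (7)
    return lee * noise
```

Applying the transform stochastically (rather than always) preserves some unmodified inputs each iteration, which is what diversifies the attack's view of the image.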
To raise the black-box attack success rate by quickly finding the optimal gradient descent direction, this paper combines AdaBelief [34] with the Nesterov algorithm [35] into a gradient-direction optimizer, abbreviated ABN. As shown in Eqs. (8) and (9), ABN first computes the exponential moving average m_t of the gradient g_t and the moving average s_t of (g_t − m_t)². When g_t deviates strongly from m_t, the current gradient departs from its recent history and the effective learning rate shrinks; when the deviation is small, the learning rate grows. This yields fast convergence. Meanwhile, in Eq. (10) the optimizer first steps ahead and uses the gradient at the look-ahead point in place of the current gradient; this foresight helps it escape local optima.
$m_t = \beta_1 \cdot m_{t-1} + (1 - \beta_1)\, g_t$ | (8) |
$s_t = \beta_2 \cdot s_{t-1} + (1 - \beta_2)(g_t - m_t)^2 + \varepsilon$ | (9) |
$g^{*}_t = \nabla_{x^{adv}_t} J\left(x^{adv}_t + \alpha \dfrac{m_t}{\sqrt{s_t} + \varepsilon},\ y\right)$ | (10) |
where the subscript t denotes the t-th iteration, α is the learning-rate parameter, and ε is a small constant that prevents division by zero.
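One step of the ABN direction-finder (Eqs. (8)–(10), matching Algorithm 1 steps 4–8) can be sketched as below. The bias-correction exponents use t+1 with a zero-based loop counter (an assumption to avoid division by zero at t = 0); hyperparameter defaults follow the paper's settings.

```python
import numpy as np

def abn_lookahead(x_adv, g_t, m_prev, s_prev, t,
                  beta1=0.99, beta2=0.999, alpha=0.006, zeta=1e-8):
    """ABN optimizer step: AdaBelief-style moments on the gradient and its
    deviation from the mean, with bias correction, produce a Nesterov-style
    look-ahead point at which the next loss gradient will be evaluated."""
    m_t = beta1 * m_prev + (1 - beta1) * g_t               # Eq. (8)
    s_t = beta2 * s_prev + (1 - beta2) * (g_t - m_t) ** 2  # Eq. (9)
    m_hat = m_t / (1 - beta1 ** (t + 1))                   # bias correction
    s_hat = (s_t + zeta) / (1 - beta2 ** (t + 1))
    x_look = x_adv + alpha * m_hat / (np.sqrt(s_hat) + zeta)  # look-ahead point
    return x_look, m_t, s_t
```

Evaluating the loss gradient at `x_look` rather than at `x_adv` is what gives ABN its foresight.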
To stabilize the gradient update direction, escape local optima, and further improve black-box attack performance, this paper introduces the Quasi-Hyperbolic Momentum operator [36] (QHM). Its advantages are twofold: first, it is a simple variant of momentum gradient descent that involves no second-order derivatives, so it is cheap and straightforward to compute; second, it fully exploits historical gradient momentum to correct the descent direction, effectively avoiding local optima. The operator is computed as
$g_{t+1} = \beta \cdot g_t + (1 - \beta) \cdot \dfrac{g^{*}_t}{\left\| g^{*}_t \right\|_1}$ | (11) |
$\tilde{g}_{t+1} = (1 - v) \cdot \dfrac{g^{*}_t}{\left\| g^{*}_t \right\|_1} + v\, g_{t+1}$ | (12) |
where g_t is the gradient accumulated up to iteration t, g*_t is the gradient obtained through the ABN optimizer, and v and β are momentum coefficients.
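Eqs. (11)–(12) translate directly into a two-line update; a minimal numpy sketch with the paper's default coefficients:

```python
import numpy as np

def qhm_update(g_star, g_prev, beta=0.999, v=0.7):
    """Quasi-hyperbolic momentum (Eqs. (11)-(12)): blend the L1-normalized
    instantaneous gradient with the momentum buffer, giving a stable update
    direction without any second-order computation."""
    g_norm = g_star / (np.sum(np.abs(g_star)) + 1e-12)
    g_next = beta * g_prev + (1 - beta) * g_norm           # Eq. (11), momentum buffer
    g_tilde = (1 - v) * g_norm + v * g_next                # Eq. (12), blended direction
    return g_tilde, g_next
```

With v = 0.7, 30% of the step always follows the fresh gradient, so the direction reacts to the current loss surface while the momentum term damps oscillation.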
The full TBAA procedure is given in Algorithm 1. Steps 1 and 2 initialize the parameters. Steps 4 to 8 are the ABN gradient-direction optimizer: m_t accumulates the gradients of the first t iterations with decay factor β₁; s_t accumulates the squared deviation between the gradient and m_t with decay factor β₂; and the stabilizing constant ζ prevents a zero denominator in step 8. Step 9 feeds the speckle-transformed image ST(x̃^adv_t) into the model(s), and step 10 computes the gradient g*_t through the model's loss function; steps 11 and 12 apply the gradient-direction stabilization operator QHM to obtain g̃_{t+1}; and step 13 generates the adversarial example. In step 9, K = 1 corresponds to a single-model attack and K > 1 to an ensemble-model attack.
Input: clean sample x; K deep neural network models f₁, f₂, ⋯, f_K with logits l₁, l₂, ⋯, l_K and ensemble weights w₁, w₂, ⋯, w_K; perturbation budget ε; step size α; number of iterations T; coefficients v, β, β₁, and β₂ |
Output: adversarial example x^adv |
Step 1 α ← ε/T |
Step 2 g₀ ← 0, m₀ ← 0, s₀ ← 0, x₀^adv ← x |
Step 3 For t = 0 to T−1 do |
Step 4 Update m_t by m_t = β₁·m_{t−1} + (1−β₁)·g_t |
Step 5 Update m̂_t = m_t / (1 − β₁^{t+1}) |
Step 6 Update s_t = β₂·s_{t−1} + (1−β₂)·(g_t − m_t)² |
Step 7 Update ŝ_t = (s_t + ζ) / (1 − β₂^{t+1}) |
Step 8 x̃^adv_t = x^adv_t + α·m̂_t / (√ŝ_t + ζ) |
Step 9 l(x̃^adv_t) = Σ_{k=1}^{K} w_k·l_k(ST(x̃^adv_t; p)) |
Step 10 Update g*_t by g*_t = ∇_{x^adv_t} J(ST(x̃^adv_t; p), y) |
Step 11 Update g_{t+1} by g_{t+1} = β·g_t + (1−β)·g*_t / ‖g*_t‖₁ |
Step 12 Update g̃_{t+1} by g̃_{t+1} = (1−v)·g*_t / ‖g*_t‖₁ + v·g_{t+1} |
Step 13 x^adv_{t+1} = Clip^ε_x{x^adv_t + α·sign(g̃_{t+1})} |
Step 14 End for |
Step 15 Return x^adv = x^adv_T |
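Algorithm 1 can be exercised end to end on toy linear surrogate models (logits = W·x), for which the cross-entropy gradient is available in closed form. This is a structural sketch only: the linear models stand in for the paper's DNNs, and the speckle transform is omitted for brevity.

```python
import numpy as np

def tbaa_attack(x, y_onehot, W_list, weights, eps=0.06, T=10,
                beta1=0.99, beta2=0.999, beta=0.999, v=0.7, zeta=1e-8):
    """Toy sketch of Algorithm 1 on linear surrogates. ABN look-ahead
    (steps 4-8), fused-logit loss gradient (steps 9-10), QHM stabilization
    (steps 11-12), and the clipped sign step (step 13)."""
    alpha = eps / T
    g = m = s = np.zeros_like(x)
    x_adv = x.copy()
    for t in range(T):
        # ABN look-ahead (steps 4-8)
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2
        m_hat = m / (1 - beta1 ** (t + 1))
        s_hat = (s + zeta) / (1 - beta2 ** (t + 1))
        x_look = x_adv + alpha * m_hat / (np.sqrt(s_hat) + zeta)
        # fused logits and cross-entropy gradient at the look-ahead point (steps 9-10)
        fused_W = sum(w * W for w, W in zip(weights, W_list))
        logits = fused_W @ x_look
        z = logits - logits.max()
        p = np.exp(z) / np.exp(z).sum()
        g_star = fused_W.T @ (p - y_onehot)   # d(CE)/dx for linear logits
        # QHM direction stabilization (steps 11-12)
        g_norm = g_star / (np.sum(np.abs(g_star)) + 1e-12)
        g = beta * g + (1 - beta) * g_norm
        g_tilde = (1 - v) * g_norm + v * g
        # sign ascent with L-infinity clip (step 13)
        x_adv = np.clip(x_adv + alpha * np.sign(g_tilde), x - eps, x + eps)
    return x_adv
```

On a correctly classified input, the loop should push the perturbed sample away from its true class while staying within the ε-ball.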
To validate the proposed algorithm, two SAR datasets are used: the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset [37] and the FUSAR-Ship dataset [38].

MSTAR is provided by the U.S. Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL). It was collected with a high-resolution spotlight SAR at 0.3 m × 0.3 m resolution and is widely used for SAR-ATR performance evaluation worldwide. The images are 128 × 128 pixels and cover 10 classes of military vehicles: 2S1, BMP2, BRDM2, BTR60, BTR70, D7, T62, T72, ZIL131, and ZSU23/4, shown in Fig. 3. The dataset is divided into Standard Operating Condition (SOC) and Extended Operating Condition (EOC) subsets. This paper uses the 10 SOC target classes; as is customary, data at a 17° depression angle form the training set and data at 15° form the test set. The class distribution is given in Table 1.
Target class | Training set depression (°) | Training set count | Test set depression (°) | Test set count |
2S1 | 17 | 299 | 15 | 274 |
BRDM2 | 17 | 298 | 15 | 274 |
BTR60 | 17 | 233 | 15 | 195 |
D7 | 17 | 299 | 15 | 274 |
T62 | 17 | 299 | 15 | 273 |
ZIL131 | 17 | 299 | 15 | 274 |
BMP2 | 17 | 233 | 15 | 195 |
ZSU23/4 | 17 | 299 | 15 | 274 |
T72 | 17 | 232 | 15 | 196 |
BTR70 | 17 | 233 | 15 | 196 |
The FUSAR-Ship dataset is provided by the Key Laboratory for Information Science of Electromagnetic Waves at Fudan University. It is derived from Gaofen-3 satellite imagery at 1.124 m × 1.728 m resolution with DH and DV polarizations, covering diverse sea, land, coastal, river, and island scenes. FUSAR-Ship is suited to ship detection and recognition on complex sea surfaces and contains more than 5000 ship images of various classes, all 512 × 512 pixels. Four sub-classes are selected for the experiments: BulkCarrier, CargoShip, Fishing, and Tanker, shown in Fig. 4; the train/test split is given in Table 2.
Target class | Training set count | Test set count |
BulkCarrier | 97 | 25 |
CargoShip | 126 | 32 |
Fishing | 75 | 19 |
Tanker | 36 | 10 |
Ten widely used deep neural networks are chosen as SAR-ATR models: AlexNet [39], VGGNet16 [40], ResNet18 [41], ResNet50 [41], InceptionV3 [42], A-ConvNet [5], MobileNet [43], SqueezeNet [44], PVTv2 [45], and MobileViTv2 [46]. During preprocessing, the SAR images are augmented with random flips, rotations, and brightness changes. For training, 10% of the training data is uniformly sampled to form a validation set; the learning rate is set to 0.001, the number of epochs to 50, and the batch size to 64, with the Adam optimizer [47]. The recognition accuracies of these SAR-ATR models on the MSTAR and FUSAR-Ship test sets are listed in Table 3; only 5 of the 10 models are trained on FUSAR-Ship. Experiments run on Windows 10 with the PyTorch framework and Python, on an Intel Core i9-11900H CPU and an NVIDIA GeForce RTX 3080 Laptop GPU.
Model | MSTAR ACC (%) | FUSAR-Ship ACC (%) |
AlexNet | 95.1 | 69.47 |
VGG16 | 95.6 | 70.23 |
ResNet18 | 96.6 | 68.10 |
ResNet50 | 97.7 | — |
InceptionV3 | 99.1 | — |
A-ConvNet | 99.8 | — |
MobileNet | 97.8 | — |
SqueezeNet | 95.4 | 72.25 |
PVTv2 | 98.8 | — |
MobileViTv2 | 99.4 | 72.70 |
The proposed algorithm is compared with MI-FGSM [20], NAM [21], VMI-FGSM [22], DI-FGSM [23], Attack-Unet-GAN [27], and Fast C&W [26]. MI-FGSM, NAM, VMI-FGSM, and DI-FGSM are widely used transfer-based black-box attacks; Attack-Unet-GAN and Fast C&W are mainstream SAR-ATR attack algorithms.

For MI-FGSM, NAM, VMI-FGSM, DI-FGSM, and TBAA, the maximum perturbation ε is set to 0.06 and the number of iterations T to 10. Attack-Unet-GAN and Fast C&W follow the parameter settings of Refs. [27] and [26]. For TBAA, the following defaults are used: decay factors β = 0.999, β₁ = 0.99, β₂ = 0.999; moving-average coefficient v = 0.7; stability parameter ζ = 1e−8; probability p = 0.5.
The results are evaluated in terms of attack effectiveness and attack stealthiness.

For attack effectiveness, the attack success rate [20] is used as the metric, as in Eq. (13):
$\text{Attack success rate} = \dfrac{\text{number of misclassified samples}}{\text{number of correctly classified samples}}$ | (13) |
where the number of correctly classified samples counts those the SAR-ATR model classifies correctly, and the number of misclassified samples counts those that the SAR-ATR model misclassifies once the perturbation is added.
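Eq. (13) amounts to a simple counting function over clean and adversarial predictions; a minimal sketch:

```python
def attack_success_rate(preds_clean, preds_adv, labels):
    """Eq. (13): among samples the model classifies correctly when clean,
    the fraction that become misclassified after the perturbation is added."""
    correct = [i for i, (p, y) in enumerate(zip(preds_clean, labels)) if p == y]
    if not correct:
        return 0.0
    fooled = sum(1 for i in correct if preds_adv[i] != labels[i])
    return fooled / len(correct)
```

Restricting the denominator to samples that were correct when clean keeps the metric from crediting the attack for the model's own baseline errors.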
For attack stealthiness, the Average Structural Similarity (ASS) [48] is used; the higher the ASS, the stealthier the attack. It is computed as
$\mathrm{ASS}(x, x^{adv}) = \dfrac{1}{M} \sum_{i=1}^{M} \mathrm{SSIM}(x_i, x^{adv}_i) = \dfrac{1}{M} \sum_{i=1}^{M} \dfrac{\left(2\mu_{x_i}\mu_{x^{adv}_i} + C_1\right)\left(2\sigma_{x_i x^{adv}_i} + C_2\right)}{\left(\mu^2_{x_i} + \mu^2_{x^{adv}_i} + C_1\right)\left(\sigma^2_{x_i} + \sigma^2_{x^{adv}_i} + C_2\right)}$ | (14) |
where M is the number of samples, x^adv is the adversarial example, μ_{x_i}, μ_{x^adv_i} and σ_{x_i}, σ_{x^adv_i} are the means and standard deviations of the corresponding images, σ_{x_i x^adv_i} is their covariance, and C₁ and C₂ are constants that keep the metric stable.
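A whole-image sketch of Eq. (14) follows. Note that a production SSIM averages over local windows; this single-window version only illustrates the formula, and the C₁, C₂ values (the usual constants for 8-bit images) are assumptions.

```python
import numpy as np

def ass(clean_batch, adv_batch, C1=6.5025, C2=58.5225):
    """Eq. (14): average a global (single-window) SSIM over M image pairs.
    Higher values mean the perturbation is less perceptible."""
    scores = []
    for x, xa in zip(clean_batch, adv_batch):
        mu_x, mu_a = x.mean(), xa.mean()
        var_x, var_a = x.var(), xa.var()
        cov = ((x - mu_x) * (xa - mu_a)).mean()
        s = ((2 * mu_x * mu_a + C1) * (2 * cov + C2)) / \
            ((mu_x**2 + mu_a**2 + C1) * (var_x + var_a + C2))
        scores.append(s)
    return float(np.mean(scores))
```

Identical images score exactly 1; any perturbation lowers the mean, variance, or covariance terms and pulls the score below 1.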
To verify the attack performance of the proposed algorithm, single-model adversarial attacks are mounted on the MSTAR and FUSAR-Ship datasets. The single-model attack success rates are reported in Tables 4 and 5, where values marked with * are white-box attack success rates and all other values are black-box success rates.
Surrogate model | Attack algorithm | Victim model |
AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2 | ||
AlexNet | MI-FGSM | 10.9 | 12.0 | 9.0 | 5.0 | 28.0 | 35.0 | 18.9 | 14.0 | 19.6 | |
NAM | 12.0 | 13.0 | 10.0 | 6.9 | 37.0 | 22.9 | 20.7 | ||||
VMI-FGSM | 19.5 | 19.5 | 6.0 | 29.5 | 39.5 | 27.0 | 21.5 | ||||
DI-FGSM | 16.0 | 29.5 | 32.0 | 20.5 | |||||||
Attack-Unet-GAN | 7.0 | 8.0 | 7.5 | 4.0 | 20.5 | 32.5 | 14.5 | 12.5 | 9.5 | ||
Fast C&W | 4.5 | 7.0 | 6.0 | 3.0 | 17.5 | 19.5 | 12.5 | 8.0 | 3.5 | ||
TBAA | |||||||||||
VGGNet16 | MI-FGSM | 61.0 | 58.0 | 56.0 | 40.0 | 55.0 | 41.0 | 43.0 | 26.0 | 30.0 | |
NAM | 60.0 | 61.0 | 59.0 | 42.0 | 47.0 | 31.0 | 35.0 | ||||
VMI-FGSM | 62.5 | 59.5 | 58.5 | 42.5 | 57.5 | 41.0 | 46.5 | ||||
DI-FGSM | 59.5 | 42.5 | 37.5 | ||||||||
Attack-Unet-GAN | 53.0 | 40.5 | 32.5 | 24.5 | 32.5 | 38.5 | 39.0 | 23.0 | 24.5 | ||
Fast C&W | 44.5 | 31.0 | 37.5 | 24.0 | 31.5 | 22.0 | 24.5 | 13.5 | 14.5 | ||
TBAA | |||||||||||
ResNet18 | MI-FGSM | 13.0 | 9.9 | 13.9 | 39.0 | 26.0 | 15.0 | 14.0 | 5.0 | ||
NAM | 15.0 | 9.0 | 16.0 | 38.0 | 31.0 | 17.0 | 21.0 | 5.3 | |||
VMI-FGSM | 17.0 | 15.8 | |||||||||
DI-FGSM | 14.0 | 21.0 | 41.0 | 29.5 | 23.5 | 8.6 | |||||
Attack-Unet-GAN | 12.5 | 6.5 | 11.5 | 5.0 | 19.5 | 18.5 | 11.5 | 11.0 | 3.0 | ||
Fast C&W | 10.0 | 4.0 | 6.0 | 3.0 | 9.0 | 11.5 | 12.0 | 13.5 | 4.0 | ||
TBAA | |||||||||||
ResNet50 | MI-FGSM | 8.0 | 12.0 | 10.5 | 21.0 | 16.0 | 22.9 | 10.0 | 12.0 | 9.0 | |
NAM | 10.0 | 14.0 | 14.0 | 22.0 | 24.0 | 13.0 | 17.0 | ||||
VMI-FGSM | 14.5 | 22.0 | 33.0 | 21.0 | 13.5 | ||||||
DI-FGSM | 26.5 | 23.0 | 28.0 | 11.5 | |||||||
Attack-Unet-GAN | 6.5 | 10.5 | 6.5 | 7.0 | 15.0 | 18.0 | 8.0 | 8.0 | 7.0 | ||
Fast C&W | 5.0 | 4.5 | 7.5 | 13.0 | 7.5 | 10.5 | 7.5 | 10.5 | 6.0 | ||
TBAA | |||||||||||
InceptionV3 | MI-FGSM | 29.0 | 31.4 | 65.5 | 38.0 | 65.0 | 31.0 | 39.0 | 12.0 | 28.0 | |
NAM | 66.9 | 33.9 | 18.0 | ||||||||
VMI-FGSM | 33.0 | 31.5 | 52.5 | 39.0 | 43.0 | 30.0 | |||||
DI-FGSM | 34.0 | 34.5 | 56.0 | 41.0 | 66.0 | 33.5 | 41.5 | 28.5 | |||
Attack-Unet-GAN | 20.6 | 24.5 | 53.0 | 31.0 | 32.5 | 25.0 | 26.5 | 9.0 | 24.5 | ||
Fast C&W | 11.0 | 16.5 | 30.0 | 28.0 | 20.0 | 12.0 | 15.0 | 10.5 | 16.5 | ||
TBAA | |||||||||||
A-ConvNet | MI-FGSM | 19.9 | 15.5 | 29.5 | 20.9 | 11.5 | 29.0 | 15.0 | 21.9 | 9.0 | |
NAM | 23.5 | 17.5 | 35.5 | 24.5 | 18.9 | 32.5 | 18.0 | 24.0 | 13.0 | ||
VMI-FGSM | 25.5 | 19.5 | |||||||||
DI-FGSM | 17.5 | 23.0 | 29.5 | 26.5 | 10.5 | ||||||
Attack-Unet-GAN | 10.8 | 5.6 | 9.0 | 13.0 | 7.0 | 11.6 | 11.0 | 14.7 | 8.0 | ||
Fast C&W | 11.5 | 4.0 | 8.5 | 5.0 | 3.0 | 97.5* | 10.5 | 12.5 | 13.5 | 4.0 | |
TBAA | |||||||||||
MobileNet | MI-FGSM | 16.0 | 15.1 | 10.0 | 15.0 | 15.6 | 18.0 | 18.9 | 8.0 | 9.0 | |
NAM | 18.0 | 14.9 | 18.9 | 9.5 | 10.5 | ||||||
VMI-FGSM | 18.0 | 23.0 | 23.5 | 14.0 | |||||||
DI-FGSM | 19.0 | 17.5 | 10.5 | 17.5 | 18.0 | 19.5 | 20.5 | ||||
Attack-Unet-GAN | 9.0 | 3.5 | 7.5 | 7.8 | 2.5 | 12.5 | 11.0 | 7.3 | 5.0 | ||
Fast C&W | 10.0 | 4.0 | 6.0 | 5.0 | 3.0 | 7.0 | 10.0 | 6.5 | 4.0 | ||
TBAA | |||||||||||
SqueezeNet | MI-FGSM | 19.5 | 9.5 | 20.5 | 18.0 | 6.0 | 40.5 | 31.4 | 18.0 | 18.0 | |
NAM | 18.5 | 10.3 | 20.9 | 19.5 | 6.5 | 40.5 | 24.0 | 21.0 | |||
VMI-FGSM | 28.5 | 11.0 | 32.0 | 19.5 | |||||||
DI-FGSM | 21.0 | 11.5 | 22.5 | 41.0 | 31.5 | 23.0 | |||||
Attack-Unet-GAN | 13.0 | 8.0 | 16.5 | 17.0 | 4.5 | 17.5 | 17.0 | 12.5 | 14.5 | ||
Fast C&W | 10.0 | 4.5 | 7.0 | 5.5 | 3.0 | 18.0 | 10.0 | 13.5 | 14.0 | ||
TBAA | |||||||||||
PVTv2 | MI-FGSM | 10.0 | 7.3 | 9.0 | 12.0 | 15.5 | 6.0 | 18.0 | 7.8 | 11.3 | |
NAM | 10.7 | 13.5 | 21.5 | 10.4 | 19.9 | 9.0 | 18.5 | ||||
VMI-FGSM | 12.0 | 12.0 | 9.5 | 22.5 | 23.0 | 11.0 | |||||
DI-FGSM | 11.0 | 15.0 | 12.0 | 13.0 | |||||||
Attack-Unet-GAN | 8.5 | 5.0 | 7.5 | 7.9 | 12.5 | 3.5 | 11.6 | 4.5 | 9.0 | ||
Fast C&W | 10.0 | 4.0 | 6.5 | 4.5 | 13.0 | 5.5 | 9.0 | 3.7 | 4.0 | ||
TBAA | |||||||||||
MobileViTv2 | MI-FGSM | 14.0 | 16.0 | 19.0 | 18.3 | 7.9 | 43.8 | 30.0 | 18.0 | 52.0 | |
NAM | 21.4 | 24.0 | 26.2 | 20.7 | 33.9 | 25.4 | 58.0 | ||||
VMI-FGSM | 21.0 | 21.5 | 11.5 | 45.0 | 35.0 | 27.0 | 56.0 | ||||
DI-FGSM | 23.1 | 10.5 | 46.3 | 98.0* | |||||||
Attack-Unet-GAN | 11.0 | 6.5 | 14.0 | 11.5 | 5.5 | 29.0 | 15.5 | 15.0 | 46.0 | ||
Fast C&W | 11.0 | 4.0 | 8.5 | 5.5 | 3.0 | 17.5 | 10.5 | 11.5 | 45.0 | ||
TBAA | |||||||||||
Note: red indicates the best value and blue the second best. * marks white-box attack success rates; all other values are black-box.
Surrogate model | Attack algorithm | Victim model |
AlexNet | VGGNet16 | ResNet18 | SqueezeNet | MobileViTv2 | ||
AlexNet | MI-FGSM | 38.00 | 40.00 | 33.90 | 68.00 | |
NAM | 62.00 | |||||
VMI-FGSM | 47.10 | 42.60 | 70.00 | |||
DI-FGSM | 98.41* | 47.40 | 63.56 | 44.93 | 74.94 | |
Attack-Unet-GAN | 23.80 | 33.20 | 15.60 | 30.00 | ||
Fast C&W | 18.70 | 29.10 | 12.40 | 24.00 | ||
TBAA | ||||||
VGGNet16 | MI-FGSM | 28.00 | 24.00 | 40.00 | 46.00 | |
NAM | 33.90 | 30.00 | 38.00 | 50.00 | ||
VMI-FGSM | 33.90 | 98.62* | 28.10 | 52.40 | ||
DI-FGSM | 98.76* | 42.40 | ||||
Attack-Unet-GAN | 13.50 | 20.40 | 24.00 | 26.00 | ||
Fast C&W | 9.30 | 19.60 | 22.90 | 24.00 | ||
TBAA | ||||||
ResNet18 | MI-FGSM | 6.00 | 7.90 | 15.90 | 40.00 | |
NAM | 7.90 | 9.90 | 21.90 | 50.00 | ||
VMI-FGSM | 9.30 | 10.60 | 28.60 | 53.80 | ||
DI-FGSM | 99.96* | |||||
Attack-Unet-GAN | 4.50 | 5.60 | 10.40 | 17.80 | ||
Fast C&W | 5.30 | 6.20 | 6.70 | 12.90 | ||
TBAA | ||||||
SqueezeNet | MI-FGSM | 16.00 | 9.90 | 28.00 | 45.90 | |
NAM | 21.90 | 14.00 | 43.90 | 56.00 | ||
VMI-FGSM | 44.20 | 53.30 | ||||
DI-FGSM | 25.07 | 16.32 | 99.59* | |||
Attack-Unet-GAN | 14.50 | 7.90 | 22.00 | 28.00 | ||
Fast C&W | 10.10 | 6.30 | 20.10 | 98.69* | 26.90 | |
TBAA | ||||||
MobileViTv2 | MI-FGSM | 4.00 | 7.90 | 42.00 | 21.90 | |
NAM | 9.00 | 16.00 | 26.00 | |||
VMI-FGSM | 12.80 | 45.90 | 24.10 | |||
DI-FGSM | 18.60 | 47.00 | ||||
Attack-Unet-GAN | 2.90 | 5.30 | 25.60 | 18.00 | ||
Fast C&W | 2.60 | 4.20 | 21.60 | 14.60 | ||
TBAA | ||||||
Note: red indicates the best value and blue the second best. * marks white-box attack success rates; all other values are black-box.
As Table 4 shows, on MSTAR the TBAA algorithm outperforms all baseline attacks on every black-box model while maintaining high success rates on all white-box models. Taking adversarial examples generated on InceptionV3 as an example, all seven attack algorithms reach 100% white-box success, while their success rates on MobileNet are 31.0%, 33.9%, 34.0%, 33.5%, 25.0%, 12.0%, and 49.5%, respectively.

As Table 5 shows, on FUSAR-Ship, taking adversarial examples generated on VGGNet16 as an example, all seven algorithms achieve white-box success rates above 98%, while their success rates on MobileViTv2 are 46.0%, 50.0%, 52.4%, 52.6%, 26.0%, 24.0%, and 56.0%, respectively.

Clearly, TBAA attains the highest black-box success rate among all compared algorithms. Combined with the analysis in Section 2.1, we attribute this as follows: MI-FGSM, NAM, and VMI-FGSM improve black-box capability by incorporating better optimization; DI-FGSM mitigates overfitting to the adversarial example via diversified inputs and thereby attacks black-box models effectively; Attack-Unet-GAN and Fast C&W tend to overfit the white-box model during generation. TBAA, by contrast, obtains diversified inputs through the random speckle-noise transformation while also employing a stronger optimizer, and therefore generates the most transferable adversarial examples.
Although Section 3.2 shows that the proposed attack already improves transferability in single-model black-box attacks, even stronger transferability can be obtained by attacking an ensemble; this paper ensembles the logit outputs of multiple networks. In this experiment, MI-FGSM, NAM, VMI-FGSM, DI-FGSM, Attack-Unet-GAN, Fast C&W, and TBAA each generate adversarial examples against an ensemble of normally trained SAR-ATR models and are tested on all networks.

Table 6 reports the black-box attack success rates of the seven algorithms against ensemble models. In the "AlexNet" column for MSTAR, the adversarial examples are generated on an ensemble of the other 9 models, i.e., AlexNet is the black-box model; likewise, the "AlexNet" column for FUSAR-Ship uses an ensemble of the other 4 models. Under this challenging black-box setting, TBAA always generates adversarial examples with better transferability than the baselines on every network. For example, with VGGNet16 as the black-box model, the seven algorithms reach 39.0%, 41.5%, 43.5%, 44.3%, 30.5%, 26.8%, and 62.0% on MSTAR, and 40.9%, 50.5%, 53.2%, 56.0%, 25.0%, 22.5%, and 62.0% on FUSAR-Ship. Compared with the baselines, the proposed algorithm improves the black-box success rate by 3%~55% on MSTAR and by 6.0%~57.5% on FUSAR-Ship.
Dataset | Attack algorithm | AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2 |
MSTAR | MI-FGSM | 62.9 | 39.0 | 52.0 | 65.0 | 42.0 | 67.9 | 50.0 | 51.5 | 68.0 | 46.0 |
NAM | 63.1 | 41.5 | 68.2 | 45.0 | 75.7 | 53.2 | 54.0 | 75.6 | 51.4 | ||
VMI-FGSM | 66.4 | 43.5 | 72.5 | 65.8 | 46.5 | 74.6 | 52.0 | 53.0 | |||
DI-FGSM | 70.0 | 70.0 | 51.0 | ||||||||
Attack-Unet-GAN | 53.6 | 30.5 | 47.0 | 35.0 | 30.0 | 35.0 | 41.0 | 43.0 | 52.3 | 31.0 | |
Fast C&W | 46.0 | 26.8 | 35.0 | 38.0 | 28.0 | 33.0 | 28.5 | 30.0 | 51.0 | 24.0 | |
TBAA | |||||||||||
FUSAR-Ship |
MI-FGSM | 31.9 | 40.9 | 48.0 | — | — | — | — | 45.9 | — | 71.9 |
NAM | 34.5 | 50.5 | — | — | — | — | 48.5 | — | |||
VMI-FGSM | 35.8 | 53.2 | 67.0 | — | — | — | — | — | 76.5 | ||
DI-FGSM | 68.0 | — | — | — | — | 50.0 | — | ||||
Attack-Unet-GAN | 16.0 | 25.0 | 38.0 | — | — | — | — | 28.5 | — | 38.4 | |
Fast C&W | 12.5 | 22.5 | 34.2 | — | — | — | — | 26.0 | — | 32.0 | |
TBAA | — | — | — | — | — | ||||||
Note: red numbers are the best values and blue the second best.
Moreover, on MSTAR, compared with the highest single-model black-box success rate of 50% against VGGNet16 in Table 4, the ensemble attack in this section clearly improves the transferability of the adversarial examples.
This section further analyzes the influence of different parameters on TBAA using the MSTAR dataset.

Probability p: First, we study how p affects the success rate under white-box and black-box models, with p ranging from 0 to 1. Fig. 5 plots the white-box and black-box attack success rates of TBAA. As p increases, the white-box success rate gradually drops while the black-box rate rises. We attribute this to the increased diversity of the input samples, which alleviates overfitting and thus improves black-box effectiveness. The trend in Fig. 5 offers practical guidance for building effective attacks: for instance, one can choose a value of p that maximizes the black-box success rate subject to the white-box success rate remaining at least 90%.

Maximum perturbation ε: Next, we study how ε affects the black-box success rate, varying ε from 0.02 to 0.10. Fig. 6(a) shows the success rates across network models: as ε grows, TBAA's attack success rate increases.

Number of iterations T: Finally, we study how T affects the black-box success rate, varying T from 2 to 12 in steps of 2; the results are shown in Fig. 6(b). When T is small, the success rate climbs relatively quickly; once T ≥ 8, the rise slows. By extrapolation, the success rate levels off as T keeps growing.
This section analyzes the contribution of each module of the proposed algorithm to attack effectiveness. MI-FGSM serves as the baseline, and the proposed modules are added one by one to generate adversarial examples, as configured in Table 7. The four variants of Table 7 are compared under ensemble-model attacks on the MSTAR and FUSAR-Ship datasets; the results are given in Table 8.
Attack algorithm | QHM | ABN | ST |
MI-FGSM | — | — | — |
AN-QHMI-FGSM | √ | — | — |
ABN-QHMI-FGSM | √ | √ | — |
TBAA | √ | √ | √ |
Dataset | Attack algorithm | AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2 |
MSTAR | MI-FGSM | 62.9 | 39.0 | 52.0 | 65.0 | 42.0 | 67.9 | 50.0 | 51.5 | 68.0 | 46 |
AN-QHMI-FGSM | 65.7 | 48.0 | 75.0 | 78.0 | 56.0 | 82.0 | 56.0 | 57.2 | 82.0 | 58.0 | |
ABN-QHMI-FGSM | |||||||||||
TBAA | |||||||||||
FUSAR-Ship |
MI-FGSM | 31.9 | 40.9 | 38.0 | — | — | — | — | 45.9 | — | 71.9 |
AN-QHMI-FGSM | 36.9 | 52.0 | 76.0 | — | — | — | — | 50.0 | — | 81.6 | |
ABN-QHMI-FGSM | — | — | — | — | — | ||||||
TBAA | — | — | — | — | — | ||||||
Note: red numbers are the best values and blue the second best.
Comparing AN-QHMI-FGSM with MI-FGSM on the two datasets shows that the gradient-direction stabilization operator QHM helps the algorithm find the global optimum by stabilizing the update direction, improving the transferability of the adversarial examples; when the update direction is unstable, optimization may oscillate or stall, converging slowly or to a poor solution. QHM therefore strengthens black-box attack capability. ABN-QHMI-FGSM adds the gradient-direction optimizer ABN on top of AN-QHMI-FGSM; the experiments show its black-box success rate exceeds AN-QHMI-FGSM's, which we attribute to ABN's foresight and adaptivity: it combines historical and look-ahead gradient information to update the direction, effectively avoiding local optima and further improving transferability. Finally, TBAA adds the random speckle-noise transformation to ABN-QHMI-FGSM: just as data augmentation improves generalization in deep learning, the transformation diversifies the inputs, relieves the white-box model's overfitting to the adversarial example, and ultimately strengthens transferability.
In practical applications, SAR adversarial examples face two challenges: deceiving the SAR target-recognition network and deceiving human inspection. The former merely requires a wrong prediction, whereas for the latter the adversarial example must remain highly similar to the original image. To verify stealthiness, this section evaluates the difference between original SAR images and adversarial examples using the average structural similarity of Section 3.1.5, focusing on examples generated by ensemble attacks on the MSTAR dataset. As Table 9 shows, the proposed algorithm achieves the highest mean ASS across all DNN models compared with the baselines. We attribute this to the fast convergence and foresight of the ABN optimizer together with QHM's stable descent direction, which let the algorithm find the most aggressive gradient direction; moreover, the L∞ norm and the Clip(⋅) function bound the perturbation at each iteration, so pixel values change only slightly, preserving structural similarity while raising the black-box success rate.
攻击算法 | AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2 | Mean |
MI-FGSM | 0.951 | 0.959 | 0.968 | 0.976 | 0.970 | 0.962 | 0.969 | 0.960 | 0.963 | 0.960 | 0.9638 |
NAM | 0.962 | 0.965 | 0.971 | 0.978 | 0.973 | 0.967 | 0.973 | 0.966 | 0.968 | 0.962 | 0.9685 |
VMI-FGSM | 0.965 | 0.961 | 0.972 | 0.976 | 0.975 | 0.969 | 0.977 | 0.967 | 0.969 | 0.965 | 0.9696 |
DI-FGSM | 0.960 | 0.970 | 0.974 | 0.974 | 0.976 | 0.971 | 0.979 | 0.974 | 0.970 | 0.963 | 0.9711 |
Attack-Unet-GAN | 0.975 | 0.978 |
Fast C&W | 0.980 | 0.9744 | |||||||||
TBAA | |||||||||||
Note: red numbers are the best values and blue the second best.
Fig. 7 shows adversarial examples generated by TBAA, the corresponding clean originals, and the perturbation images. The difference between the originals and the generated adversarial examples is tiny, i.e., the perturbation is nearly invisible to the human eye.

Because the proposed algorithm augments MI-FGSM with the ABN optimizer, the QHM operator, and the random speckle-noise transformation, its complexity increases somewhat. This section therefore measures, under black-box conditions, the time the baseline attacks need to generate adversarial examples in single-model and ensemble attacks, using runtime as the efficiency metric.

As shown in Table 10, taking AlexNet as the example, single-model and ensemble black-box attack runtimes are measured on MSTAR. Attack-Unet-GAN and Fast C&W are the fastest and nearly identical, since both obtain the adversarial example through a single forward mapping of a trained U-Net generator, which is far faster than gradient-based attacks. Among the remaining five gradient-based attacks, MI-FGSM is fastest and TBAA slowest. In single-model black-box attacks, the runtime gap between TBAA and MI-FGSM is at most about 0.0659 s and at least about 0.0459 s. Thus, although the proposed attack increases runtime and slightly reduces generation efficiency, the overhead remains within an acceptable range and does not add a large amount of extra computation.
Attack method | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | Squeezenet | Ensemble |
MI-FGSM | 0.2970 | 0.2404 | 0.3621 | 0.5217 | 0.1797 | 0.3258 | 0.2550 | 1.8953 |
NAM | 0.3014 | 0.2410 | 0.3623 | 0.5303 | 0.1822 | 0.3248 | 0.2257 | 1.8973 |
VMI-FGSM | 0.2980 | 0.2498 | 0.3625 | 0.5289 | 0.1826 | 0.3289 | 0.2274 | 1.9766 |
DI-FGSM | 0.2984 | 0.2485 | 0.3623 | 0.5280 | 0.1823 | 0.3283 | 0.2294 | 1.9795 |
Attack-Unet-GAN | ||||||||
Fast C&W | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 |
TBAA | ||||||||
Note: red numbers are the largest values and blue the smallest.
Drawing on the idea of MI-FGSM, this paper observes that training a neural network and mounting a gradient-based adversarial attack are similar processes: both update parameters with iterative gradient algorithms. Hence techniques that improve a network's generalization can also improve the black-box performance of adversarial examples. Accordingly, the proposed algorithm exploits the characteristics of SAR imagery and uses the random speckle-noise transformation to effectively mitigate overfitting; it designs the gradient-direction optimizer ABN to accelerate convergence and raise the black-box attack success rate; and it leverages the QHM operator, which stabilizes the gradient descent direction in deep network training, to further strengthen black-box effectiveness. Ensemble learning is also incorporated so that the attack stays effective against multiple models simultaneously, making other models easier to attack. Experiments show that the algorithm markedly improves black-box attack capability and stealthiness without adding a large amount of runtime. With the development of intelligent SAR recognition, and given that the adversary's SAR beam-position parameters are hard to obtain in real confrontation scenarios, future work will explore universal adversarial attack algorithms that do not depend on viewing angle or imaging parameters.
|
[55] |
MARKLEIN R, MAYER K, HANNEMANN R, et al. Linear and nonlinear inversion algorithms applied in nondestructive evaluation[J]. Inverse Problems, 2002, 18(6): 1733–1759. doi: 10.1088/0266-5611/18/6/319.
|
[56] |
ZOUGHI R and KHARKOVSKY S. Microwave and millimetre wave sensors for crack detection[J]. Fatigue &Fracture of Engineering Materials &Structures, 2008, 31(8): 695–713. doi: 10.1111/j.1460-2695.2008.01255.x.
|
[57] |
NEAL A. Ground-penetrating radar and its use in sedimentology: Principles, problems and progress[J]. Earth-Science Reviews, 2004, 66(3/4): 261–330. doi: 10.1016/j.earscirev.2004.01.004.
|
[58] |
ABUBAKAR A, HABASHY T M, DRUSKIN V L, et al. 2.5D forward and inverse modeling for interpreting low-frequency electromagnetic measurements[J]. Geophysics, 2008, 73(4): F165–F177. doi: 10.1190/1.2937466.
|
[59] |
BOND E J, LI Xu, HAGNESS S C, et al. Microwave imaging via space-time beamforming for early detection of breast cancer[J]. IEEE Transactions on Antennas and Propagation, 2003, 51(8): 1690–1705. doi: 10.1109/TAP.2003.815446.
|
[60] |
NIKOLOVA N K. Microwave imaging for breast cancer[J]. IEEE Microwave Magazine, 2011, 12(7): 78–94. doi: 10.1109/MMM.2011.942702.
|
[61] |
SHEEN D M, MCMAKIN D L, and HALL T E. Three-dimensional millimeter-wave imaging for concealed weapon detection[J]. IEEE Transactions on Microwave Theory and Techniques, 2001, 49(9): 1581–1592. doi: 10.1109/22.942570.
|
[62] |
ZHUGE Xiaodong and YAROVOY A G. A sparse aperture MIMO-SAR-based UWB imaging system for concealed weapon detection[J]. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(1): 509–518. doi: 10.1109/TGRS.2010.2053038.
|
[63] |
VAN DEN BERG P M and KLEINMAN R E. A contrast source inversion method[J]. Inverse Problems, 1997, 13(6): 1607–1620. doi: 10.1088/0266-5611/13/6/013.
|
[64] |
VAN DEN BERG P M, ABUBAKAR A, and FOKKEMA J T. Multiplicative regularization for contrast profile inversion[J]. Radio Science, 2003, 38(2): 8022. doi: 10.1029/2001RS002555.
|
[65] |
CHEW W C and WANG Yiming. Reconstruction of two-dimensional permittivity distribution using the distorted Born iterative method[J]. IEEE Transactions on Medical Imaging, 1990, 9(2): 218–225. doi: 10.1109/42.56334.
|
[66] |
LI Lianlin, WANG Longgang, DING Jun, et al. A probabilistic model for the nonlinear electromagnetic inverse scattering: TM case[J]. IEEE Transactions on Antennas and Propagation, 2017, 65(11): 5984–5991. doi: 10.1109/TAP.2017.2751654.
|
[67] |
PASTORINO M. Stochastic optimization methods applied to microwave imaging: A review[J]. IEEE Transactions on Antennas and Propagation, 2007, 55(3): 538–548. doi: 10.1109/TAP.2007.891568.
|
[68] |
SALUCCI M, ARREBOLA M, SHAN Tao, et al. Artificial intelligence: New frontiers in real-time inverse scattering and electromagnetic imaging[J]. IEEE Transactions on Antennas and Propagation, 2022, 70(8): 6349–6364. doi: 10.1109/TAP.2022.3177556.
|
[69] |
SUN Yu, XIA Zhihao, and KAMILOV U S. Efficient and accurate inversion of multiple scattering with deep learning[J]. Optics Express, 2018, 26(11): 14678–14688. doi: 10.1364/OE.26.014678.
|
[70] |
RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241.
|
[71] |
WEI Zhun and CHEN Xudong. Deep-learning schemes for full-wave nonlinear inverse scattering problems[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(4): 1849–1860. doi: 10.1109/TGRS.2018.2869221.
|
[72] |
XU Kuiwen, WU Liang, YE Xiuzhu, et al. Deep learning-based inversion methods for solving inverse scattering problems with phaseless data[J]. IEEE Transactions on Antennas and Propagation, 2020, 68(11): 7457–7470. doi: 10.1109/TAP.2020.2998171.
|
[73] |
ZHANG Huanhuan, YAO Heming, JIANG Lijun, et al. Enhanced two-step deep-learning approach for electromagnetic-inverse-scattering problems: Frequency extrapolation and scatterer reconstruction[J]. IEEE Transactions on Antennas and Propagation, 2023, 71(2): 1662–1672. doi: 10.1109/TAP.2022.3225532.
|
[74] |
ZHOU Yulong, ZHONG Yu, WEI Zhun, et al. An improved deep learning scheme for solving 2-D and 3-D inverse scattering problems[J]. IEEE Transactions on Antennas and Propagation, 2021, 69(5): 2853–2863. doi: 10.1109/TAP.2020.3027898.
|
[75] |
ZHONG Yu, LAMBERT M, LESSELIER D, et al. A new integral equation method to solve highly nonlinear inverse scattering problems[J]. IEEE Transactions on Antennas and Propagation, 2016, 64(5): 1788–1799. doi: 10.1109/TAP.2016.2535492.
|
[76] |
GUO Liang, SONG Guanfeng, and WU Hongsheng. Complex-valued pix2pix-deep neural network for nonlinear electromagnetic inverse scattering[J]. Electronics, 2021, 10(6): 752. doi: 10.3390/electronics10060752.
|
[77] |
ISOLA P, ZHU Junyan, ZHOU Tinghui, et al. Image-to-image translation with conditional adversarial networks[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 5967–5976.
|
[78] |
SONG Rencheng, HUANG Youyou, YE Xiuzhu, et al. Learning-based inversion method for solving electromagnetic inverse scattering with mixed boundary conditions[J]. IEEE Transactions on Antennas and Propagation, 2022, 70(8): 6218–6228. doi: 10.1109/TAP.2021.3139645.
|
[79] |
LI Lianlin, WANG Longgang, TEIXEIRA F L, et al. DeepNIS: Deep neural network for nonlinear electromagnetic inverse scattering[J]. IEEE Transactions on Antennas and Propagation, 2019, 67(3): 1819–1825. doi: 10.1109/TAP.2018.2885437.
|
[80] |
WEI Zhun and CHEN Xudong. Physics-inspired convolutional neural network for solving full-wave inverse scattering problems[J]. IEEE Transactions on Antennas and Propagation, 2019, 67(9): 6138–6148. doi: 10.1109/TAP.2019.2922779.
|
[81] |
GUO Rui, JIA Zekui, SONG Xiaoqian, et al. Pixel- and model-based microwave inversion with supervised descent method for dielectric targets[J]. IEEE Transactions on Antennas and Propagation, 2020, 68(12): 8114–8126. doi: 10.1109/TAP.2020.2999741.
|
[82] |
GUO Rui, LIN Zhichao, SHAN Tao, et al. Physics embedded deep neural network for solving full-wave inverse scattering problems[J]. IEEE Transactions on Antennas and Propagation, 2022, 70(8): 6148–6159. doi: 10.1109/TAP.2021.3102135.
|
[83] |
LIU Che, ZHANG Hongrui, LI Lianlin, et al. Towards intelligent electromagnetic inverse scattering using deep learning techniques and information metasurfaces[J]. IEEE Journal of Microwaves, 2023, 3(1): 509–522. doi: 10.1109/JMW.2022.3225999.
|
[84] |
ZHU Junyan, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 2242–2251.
|
[85] |
ZHANG Hongrui, CHEN Yanjin, CUI Tiejun, et al. Probabilistic deep learning solutions to electromagnetic inverse scattering problems using conditional renormalization group flow[J]. IEEE Transactions on Microwave Theory and Techniques, 2022, 70(11): 4955–4965. doi: 10.1109/TMTT.2022.3205890.
|
[86] |
CHEN Yanjin, ZHANG Hongrui, CUI Tiejun, et al. A mesh-free 3-D deep learning electromagnetic inversion method based on point clouds[J]. IEEE Transactions on Microwave Theory and Techniques, 2023.
|
[87] |
VESELAGO V G. The electrodynamics of substances with simultaneously negative values of ε and μ[J]. Soviet Physics Uspekhi, 1968, 10(4): 509–514. doi: 10.1070/PU1968v010n04ABEH003699.
|
[88] |
PARAZZOLI C G, GREEGOR R B, LI K, et al. Experimental verification and simulation of negative index of refraction using Snell’s law[J]. Physical Review Letters, 2003, 90(10): 107401. doi: 10.1103/PhysRevLett.90.107401.
|
[89] |
DRACHEV V P, CAI W, CHETTIAR U, et al. Experimental verification of an optical negative-index material[J]. Laser Physics Letters, 2006, 3(1): 49–55. doi: 10.1002/lapl.200510062.
|
[90] |
LIU Ruopeng, CHENG Qiang, HAND T, et al. Experimental demonstration of electromagnetic tunneling through an epsilon-near-zero metamaterial at microwave frequencies[J]. Physical Review Letters, 2008, 100(2): 023903. doi: 10.1103/PhysRevLett.100.023903.
|
[91] |
KUNDTZ N and SMITH D R. Extreme-angle broadband metamaterial lens[J]. Nature Materials, 2010, 9(2): 129–132. doi: 10.1038/nmat2610.
|
[92] |
CUI Tiejun, QI Meiqing, WAN Xiang, et al. Coding metamaterials, digital metamaterials and programmable metamaterials[J]. Light:Science &Applications, 2014, 3(10): e218. doi: 10.1038/lsa.2014.99.
|
[93] |
CUI Tiejun, LI Lianlin, LIU Shuo, et al. Information metamaterial systems[J]. iScience, 2020, 23(8): 101403. doi: 10.1016/j.isci.2020.101403.
|
[94] |
CUI Tiejun, LIU Shuo, and ZHANG Lei. Information metamaterials and metasurfaces[J]. Journal of Materials Chemistry C, 2017, 5(15): 3644–3668. doi: 10.1039/C7TC00548B.
|
[95] |
LI Lianlin and CUI Tiejun. Information metamaterials - from effective media to real-time information processing systems[J]. Nanophotonics, 2019, 8(5): 703–724. doi: 10.1515/nanoph-2019-0006.
|
[96] |
CUI Tiejun, LIU Shuo, and LI Lianlin. Information entropy of coding metasurface[J]. Light:Science &Applications, 2016, 5(11): e16172. doi: 10.1038/lsa.2016.172.
|
[97] |
刘彻, 马骞, 李廉林, 等. 人工智能超材料[J]. 光学学报, 2021, 41(8): 0823004. doi: 10.3788/AOS202141.0823004.
LIU Che, MA Qian, LI Lianlin, et al. Artificial intelligence metamaterials[J]. Acta Optica Sinica, 2021, 41(8): 0823004. doi: 10.3788/AOS202141.0823004.
|
[98] |
SHAN Tao, PAN Xiaotian, LI Maokun, et al. Coding programmable metasurfaces based on deep learning techniques[J]. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2020, 10(1): 114–125. doi: 10.1109/JETCAS.2020.2972764.
|
[99] |
LI Shangyang, LIU Zhuoyang, FU Shilei, et al. Intelligent beamforming via physics-inspired neural networks on programmable metasurface[J]. IEEE Transactions on Antennas and Propagation, 2022, 70(6): 4589–4599. doi: 10.1109/TAP.2022.3140891.
|
[100] |
LIU Che, YU Wenming, MA Qian, et al. Intelligent coding metasurface holograms by physics-assisted unsupervised generative adversarial network[J]. Photonics Research, 2021, 9(4): B159–B167. doi: 10.1364/PRJ.416287.
|
[101] |
CHEN Xiaoqing, ZHANG Lei, and CUI Tiejun. Intelligent autoencoder for space-time-coding digital metasurfaces[J]. Applied Physics Letters, 2023, 122(16): 161702. doi: 10.1063/5.0132635.
|
[102] |
QIAN Chao, ZHENG Bin, SHEN Yichen, et al. Deep-learning-enabled self-adaptive microwave cloak without human intervention[J]. Nature Photonics, 2020, 14(6): 383–390. doi: 10.1038/s41566-020-0604-2.
|
[103] |
JIA Yuetian, QIAN Chao, FAN Zhixiang, et al. A knowledge-inherited learning for intelligent metasurface design and assembly[J]. Light:Science &Applications, 2023, 12(1): 82. doi: 10.1038/s41377-023-01131-4.
|
[104] |
REN Haoran, LI Xiangping, ZHANG Qiming, et al. On-chip noninterference angular momentum multiplexing of broadband light[J]. Science, 2016, 352(6287): 805–809. doi: 10.1126/science.aaf1112.
|
[105] |
LIN Xing, RIVENSON Y, YARDIMCI N T, et al. All-optical machine learning using diffractive deep neural networks[J]. Science, 2018, 361(6406): 1004–1008. doi: 10.1126/science.aat8084.
|
[106] |
KHORAM E, CHEN Ang, LIU Dianjing, et al. Nanophotonic media for artificial neural inference[J]. Photonics Research, 2019, 7(8): 823–827. doi: 10.1364/PRJ.7.000823.
|
[107] |
LIU Che, MA Qian, LUO Zhangjie, et al. A programmable diffractive deep neural network based on a digital-coding metasurface array[J]. Nature Electronics, 2022, 5(2): 113–122. doi: 10.1038/s41928-022-00719-9.
|
[108] |
PENDRY J B, MARTÍN-MORENO L, and GARCIA-VIDAL F J. Mimicking surface plasmons with structured surfaces[J]. Science, 2004, 305(5685): 847–848. doi: 10.1126/science.1098999.
|
[109] |
GAO Xinxin, MA Qian, GU Ze, et al. Programmable surface plasmonic neural networks for microwave detection and processing[J]. Nature Electronics, 2023, 6(4): 319–328. doi: 10.1038/s41928-023-00951-x.
|
[110] |
李廉林, 崔铁军. 智能电磁感知的若干进展[J]. 雷达学报, 2021, 10(2): 183–190. doi: 10.12000/JR21049.
LI Lianlin and CUI Tiejun. Recent progress in intelligent electromagnetic sensing[J]. Journal of Radars, 2021, 10(2): 183–190. doi: 10.12000/JR21049.
|
[111] |
ZHAO Hanting, HU Shengguo, ZHANG Hongrui, et al. Intelligent indoor metasurface robotics[J]. National Science Review, 2023, 10(8): nwac266. doi: 10.1093/NSR/NWAC266.
|
[112] |
WANG Zhuo, ZHANG Hongrui, ZHAO Hanting, et al. Multi-task and multi-scale intelligent electromagnetic sensing with distributed multi-frequency reprogrammable metasurfaces[J]. Advanced Optical Materials, 2023: 2203153.
|
[113] |
LI Weihan, MA Qian, LIU Che, et al. Intelligent metasurface system for automatic tracking of moving targets and wireless communications based on computer vision[J]. Nature Communications, 2023, 14(1): 989. doi: 10.1038/s41467-023-36645-3.
|
Input: clean sample x; K deep neural network models f_1, f_2, …, f_K with corresponding logits l_1, l_2, …, l_K and ensemble weights w_1, w_2, …, w_K; perturbation budget ε; step size α; number of iterations T; coefficients v, β, β_1, and β_2
Output: adversarial example x^adv
Step 1  α ← ε/T
Step 2  g_0 ← 0, m_0 ← 0, s_0 ← 0, x^adv_0 ← x
Step 3  For t = 0 to T−1 do
Step 4    Update m_t by m_t = β_1·m_{t−1} + (1−β_1)·g_t
Step 5    Update m̂_t = m_t / (1 − β_1^t)
Step 6    Update s_t = β_2·s_{t−1} + (1−β_2)·(ĝ_t − m_t)^2
Step 7    Update ŝ_t = (s_t + ζ) / (1 − β_2^t)
Step 8    x̃^adv_t = x^adv_t + α/(√(ŝ_t) + ζ)·m̂_t
Step 9    l(x̃^adv_t) = Σ_{k=1}^{K} w_k·l_k(ST(x̃^adv_t; p))
Step 10   Update g*_t by g*_t = ∇_{x^adv_t} J(ST(x̃^adv_t; p), y)
Step 11   Update g_{t+1} by g_{t+1} = β·g_t + (1−β)·g*_t/‖g*_t‖_1
Step 12   Update g̃_{t+1} by g̃_{t+1} = (1−v)·g_{t+1} + v·g*_t/‖g*_t‖_1
Step 13   x^adv_{t+1} = Clip^ε_x{x^adv_t + α·sign(g̃_{t+1})}
Step 14 End for
Step 15 Return x^adv = x^adv_T
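The update loop above can be sketched in plain Python. This is only an illustrative toy, not the paper's implementation: the ensemble-weighted logits (Step 9) and the scale transform ST are abstracted into a single `grad_fn` callback supplied by the caller, and the names `tbaa_like_attack` and `grad_fn` are assumptions made for this sketch.

```python
import math

def sign(v):
    """Elementwise sign used for the FGSM-style step."""
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def tbaa_like_attack(x, grad_fn, eps=0.1, T=10, beta=0.9, v=0.5,
                     beta1=0.9, beta2=0.999, zeta=1e-8):
    """Toy sketch of the Adam/QHM-flavoured iterative sign attack.

    x       : clean sample as a list of floats
    grad_fn : callable returning the loss gradient at a point (stands in
              for the ensemble logits + scale transform ST of the paper)
    """
    alpha = eps / T
    n = len(x)
    g = [0.0] * n       # accumulated momentum gradient g_t
    m = [0.0] * n       # first moment
    s = [0.0] * n       # second moment
    x_adv = list(x)
    for t in range(1, T + 1):          # t starts at 1 so bias correction is defined
        # Steps 4-7: Adam-style moment estimates with bias correction
        m = [beta1 * mi + (1 - beta1) * gi for mi, gi in zip(m, g)]
        m_hat = [mi / (1 - beta1 ** t) for mi in m]
        s = [beta2 * si + (1 - beta2) * (gi - mi) ** 2
             for si, gi, mi in zip(s, g, m)]
        s_hat = [(si + zeta) / (1 - beta2 ** t) for si in s]
        # Step 8: look-ahead point at which the gradient is evaluated
        x_nes = [xa + alpha / (math.sqrt(sh) + zeta) * mh
                 for xa, sh, mh in zip(x_adv, s_hat, m_hat)]
        # Steps 9-10: gradient of the (abstracted) ensemble loss
        g_star = grad_fn(x_nes)
        l1 = sum(abs(gs) for gs in g_star) or 1.0
        # Step 11: momentum accumulation; Step 12: quasi-hyperbolic mix
        g = [beta * gi + (1 - beta) * gs / l1 for gi, gs in zip(g, g_star)]
        g_tilde = [(1 - v) * gi + v * gs / l1 for gi, gs in zip(g, g_star)]
        # Step 13: sign step, then projection into the L-inf eps-ball around x
        x_adv = [min(max(xa + alpha * sign(gt), xi - eps), xi + eps)
                 for xa, gt, xi in zip(x_adv, g_tilde, x)]
    return x_adv
```

With a toy quadratic loss J(x) = Σ x_i², whose gradient is 2x, the attack pushes each coordinate outward by α per iteration and ends exactly on the ε-ball boundary, which is the expected behaviour of a sign-based iterative attack.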
Target class | Training set | Test set | |||
Depression angle (°) | Count | Depression angle (°) | Count | |
2S1 | 17 | 299 | 15 | 274 | |
BRDM2 | 17 | 298 | 15 | 274 | |
BTR60 | 17 | 233 | 15 | 195 | |
D7 | 17 | 299 | 15 | 274 | |
T62 | 17 | 299 | 15 | 273 | |
ZIL131 | 17 | 299 | 15 | 274 | |
BMP2 | 17 | 233 | 15 | 195 | |
ZSU23/4 | 17 | 299 | 15 | 274 | |
T72 | 17 | 232 | 15 | 196 | |
BTR70 | 17 | 233 | 15 | 196 |
Target class | Training set size | Test set size |
BulkCarrier | 97 | 25 |
CargoShip | 126 | 32 |
Fishing | 75 | 19 |
Tanker | 36 | 10 |
Model | MSTAR ACC (%) | FUSAR-Ship ACC (%) |
AlexNet | 95.1 | 69.47 |
VGG16 | 95.6 | 70.23 |
ResNet18 | 96.6 | 68.10 |
ResNet50 | 97.7 | — |
InceptionV3 | 99.1 | — |
A-ConvNet | 99.8 | — |
MobileNet | 97.8 | — |
SqueezeNet | 95.4 | 72.25 |
PVTv2 | 98.8 | — |
MobileViTv2 | 99.4 | 72.70 |
Surrogate model | Attack algorithm | Victim model | |||||||||
AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2 | ||
AlexNet | MI-FGSM | 10.9 | 12.0 | 9.0 | 5.0 | 28.0 | 35.0 | 18.9 | 14.0 | 19.6 | |
NAM | 12.0 | 13.0 | 10.0 | 6.9 | 37.0 | 22.9 | 20.7 | ||||
VMI-FGSM | 19.5 | 19.5 | 6.0 | 29.5 | 39.5 | 27.0 | 21.5 | ||||
DI-FGSM | 16.0 | 29.5 | 32.0 | 20.5 | |||||||
Attack-Unet-GAN | 7.0 | 8.0 | 7.5 | 4.0 | 20.5 | 32.5 | 14.5 | 12.5 | 9.5 | ||
Fast C&W | 4.5 | 7.0 | 6.0 | 3.0 | 17.5 | 19.5 | 12.5 | 8.0 | 3.5 | ||
TBAA | |||||||||||
VGGNet16 | MI-FGSM | 61.0 | 58.0 | 56.0 | 40.0 | 55.0 | 41.0 | 43.0 | 26.0 | 30.0 | |
NAM | 60.0 | 61.0 | 59.0 | 42.0 | 47.0 | 31.0 | 35.0 | ||||
VMI-FGSM | 62.5 | 59.5 | 58.5 | 42.5 | 57.5 | 41.0 | 46.5 | ||||
DI-FGSM | 59.5 | 42.5 | 37.5 | ||||||||
Attack-Unet-GAN | 53.0 | 40.5 | 32.5 | 24.5 | 32.5 | 38.5 | 39.0 | 23.0 | 24.5 | ||
Fast C&W | 44.5 | 31.0 | 37.5 | 24.0 | 31.5 | 22.0 | 24.5 | 13.5 | 14.5 | ||
TBAA | |||||||||||
ResNet18 | MI-FGSM | 13.0 | 9.9 | 13.9 | 39.0 | 26.0 | 15.0 | 14.0 | 5.0 | ||
NAM | 15.0 | 9.0 | 16.0 | 38.0 | 31.0 | 17.0 | 21.0 | 5.3 | |||
VMI-FGSM | 17.0 | 15.8 | |||||||||
DI-FGSM | 14.0 | 21.0 | 41.0 | 29.5 | 23.5 | 8.6 | |||||
Attack-Unet-GAN | 12.5 | 6.5 | 11.5 | 5.0 | 19.5 | 18.5 | 11.5 | 11.0 | 3.0 | ||
Fast C&W | 10.0 | 4.0 | 6.0 | 3.0 | 9.0 | 11.5 | 12.0 | 13.5 | 4.0 | ||
TBAA | |||||||||||
ResNet50 | MI-FGSM | 8.0 | 12.0 | 10.5 | 21.0 | 16.0 | 22.9 | 10.0 | 12.0 | 9.0 | |
NAM | 10.0 | 14.0 | 14.0 | 22.0 | 24.0 | 13.0 | 17.0 | ||||
VMI-FGSM | 14.5 | 22.0 | 33.0 | 21.0 | 13.5 | ||||||
DI-FGSM | 26.5 | 23.0 | 28.0 | 11.5 | |||||||
Attack-Unet-GAN | 6.5 | 10.5 | 6.5 | 7.0 | 15.0 | 18.0 | 8.0 | 8.0 | 7.0 | ||
Fast C&W | 5.0 | 4.5 | 7.5 | 13.0 | 7.5 | 10.5 | 7.5 | 10.5 | 6.0 | ||
TBAA | |||||||||||
InceptionV3 | MI-FGSM | 29.0 | 31.4 | 65.5 | 38.0 | 65.0 | 31.0 | 39.0 | 12.0 | 28.0 | |
NAM | 66.9 | 33.9 | 18.0 | ||||||||
VMI-FGSM | 33.0 | 31.5 | 52.5 | 39.0 | 43.0 | 30.0 | |||||
DI-FGSM | 34.0 | 34.5 | 56.0 | 41.0 | 66.0 | 33.5 | 41.5 | 28.5 | |||
Attack-Unet-GAN | 20.6 | 24.5 | 53.0 | 31.0 | 32.5 | 25.0 | 26.5 | 9.0 | 24.5 | ||
Fast C&W | 11.0 | 16.5 | 30.0 | 28.0 | 20.0 | 12.0 | 15.0 | 10.5 | 16.5 | ||
TBAA | |||||||||||
A-ConvNet | MI-FGSM | 19.9 | 15.5 | 29.5 | 20.9 | 11.5 | 29.0 | 15.0 | 21.9 | 9.0 | |
NAM | 23.5 | 17.5 | 35.5 | 24.5 | 18.9 | 32.5 | 18.0 | 24.0 | 13.0 | ||
VMI-FGSM | 25.5 | 19.5 | |||||||||
DI-FGSM | 17.5 | 23.0 | 29.5 | 26.5 | 10.5 | ||||||
Attack-Unet-GAN | 10.8 | 5.6 | 9.0 | 13.0 | 7.0 | 11.6 | 11.0 | 14.7 | 8.0 | ||
Fast C&W | 11.5 | 4.0 | 8.5 | 5.0 | 3.0 | 97.5* | 10.5 | 12.5 | 13.5 | 4.0 | |
TBAA | |||||||||||
MobileNet | MI-FGSM | 16.0 | 15.1 | 10.0 | 15.0 | 15.6 | 18.0 | 18.9 | 8.0 | 9.0 | |
NAM | 18.0 | 14.9 | 18.9 | 9.5 | 10.5 | ||||||
VMI-FGSM | 18.0 | 23.0 | 23.5 | 14.0 | |||||||
DI-FGSM | 19.0 | 17.5 | 10.5 | 17.5 | 18.0 | 19.5 | 20.5 | ||||
Attack-Unet-GAN | 9.0 | 3.5 | 7.5 | 7.8 | 2.5 | 12.5 | 11.0 | 7.3 | 5.0 | ||
Fast C&W | 10.0 | 4.0 | 6.0 | 5.0 | 3.0 | 7.0 | 10.0 | 6.5 | 4.0 | ||
TBAA | |||||||||||
SqueezeNet | MI-FGSM | 19.5 | 9.5 | 20.5 | 18.0 | 6.0 | 40.5 | 31.4 | 18.0 | 18.0 | |
NAM | 18.5 | 10.3 | 20.9 | 19.5 | 6.5 | 40.5 | 24.0 | 21.0 | |||
VMI-FGSM | 28.5 | 11.0 | 32.0 | 19.5 | |||||||
DI-FGSM | 21.0 | 11.5 | 22.5 | 41.0 | 31.5 | 23.0 | |||||
Attack-Unet-GAN | 13.0 | 8.0 | 16.5 | 17.0 | 4.5 | 17.5 | 17.0 | 12.5 | 14.5 | ||
Fast C&W | 10.0 | 4.5 | 7.0 | 5.5 | 3.0 | 18.0 | 10.0 | 13.5 | 14.0 | ||
TBAA | |||||||||||
PVTv2 | MI-FGSM | 10.0 | 7.3 | 9.0 | 12.0 | 15.5 | 6.0 | 18.0 | 7.8 | 11.3 | |
NAM | 10.7 | 13.5 | 21.5 | 10.4 | 19.9 | 9.0 | 18.5 | ||||
VMI-FGSM | 12.0 | 12.0 | 9.5 | 22.5 | 23.0 | 11.0 | |||||
DI-FGSM | 11.0 | 15.0 | 12.0 | 13.0 | |||||||
Attack-Unet-GAN | 8.5 | 5.0 | 7.5 | 7.9 | 12.5 | 3.5 | 11.6 | 4.5 | 9.0 | ||
Fast C&W | 10.0 | 4.0 | 6.5 | 4.5 | 13.0 | 5.5 | 9.0 | 3.7 | 4.0 | ||
TBAA | |||||||||||
MobileViTv2 | MI-FGSM | 14.0 | 16.0 | 19.0 | 18.3 | 7.9 | 43.8 | 30.0 | 18.0 | 52.0 | |
NAM | 21.4 | 24.0 | 26.2 | 20.7 | 33.9 | 25.4 | 58.0 | ||||
VMI-FGSM | 21.0 | 21.5 | 11.5 | 45.0 | 35.0 | 27.0 | 56.0 | ||||
DI-FGSM | 23.1 | 10.5 | 46.3 | 98.0* | |||||||
Attack-Unet-GAN | 11.0 | 6.5 | 14.0 | 11.5 | 5.5 | 29.0 | 15.5 | 15.0 | 46.0 | ||
Fast C&W | 11.0 | 4.0 | 8.5 | 5.5 | 3.0 | 17.5 | 10.5 | 11.5 | 45.0 | ||
TBAA | |||||||||||
Note: red indicates the best value and blue the second-best. * marks white-box attack success rates; all other values are black-box attack success rates.
Surrogate model | Attack algorithm | Victim model | ||||
AlexNet | VGGNet16 | ResNet18 | SqueezeNet | MobileViTv2 | ||
AlexNet | MI-FGSM | 38.00 | 40.00 | 33.90 | 68.00 | |
NAM | 62.00 | |||||
VMI-FGSM | 47.10 | 42.60 | 70.00 | |||
DI-FGSM | 98.41* | 47.40 | 63.56 | 44.93 | 74.94 | |
Attack-Unet-GAN | 23.80 | 33.20 | 15.60 | 30.00 | ||
Fast C&W | 18.70 | 29.10 | 12.40 | 24.00 | ||
TBAA | ||||||
VGGNet16 | MI-FGSM | 28.00 | 24.00 | 40.00 | 46.00 | |
NAM | 33.90 | 30.00 | 38.00 | 50.00 | ||
VMI-FGSM | 33.90 | 98.62* | 28.10 | 52.40 | ||
DI-FGSM | 98.76* | 42.40 | ||||
Attack-Unet-GAN | 13.50 | 20.40 | 24.00 | 26.00 | ||
Fast C&W | 9.30 | 19.60 | 22.90 | 24.00 | ||
TBAA | ||||||
ResNet18 | MI-FGSM | 6.00 | 7.90 | 15.90 | 40.00 | |
NAM | 7.90 | 9.90 | 21.90 | 50.00 | ||
VMI-FGSM | 9.30 | 10.60 | 28.60 | 53.80 | ||
DI-FGSM | 99.96* | |||||
Attack-Unet-GAN | 4.50 | 5.60 | 10.40 | 17.80 | ||
Fast C&W | 5.30 | 6.20 | 6.70 | 12.90 | ||
TBAA | ||||||
SqueezeNet | MI-FGSM | 16.00 | 9.90 | 28.00 | 45.90 | |
NAM | 21.90 | 14.00 | 43.90 | 56.00 | ||
VMI-FGSM | 44.20 | 53.30 | ||||
DI-FGSM | 25.07 | 16.32 | 99.59* | |||
Attack-Unet-GAN | 14.50 | 7.90 | 22.00 | 28.00 | ||
Fast C&W | 10.10 | 6.30 | 20.10 | 98.69* | 26.90 | |
TBAA | ||||||
MobileViTv2 | MI-FGSM | 4.00 | 7.90 | 42.00 | 21.90 | |
NAM | 9.00 | 16.00 | 26.00 | |||
VMI-FGSM | 12.80 | 45.90 | 24.10 | |||
DI-FGSM | 18.60 | 47.00 | ||||
Attack-Unet-GAN | 2.90 | 5.30 | 25.60 | 18.00 | ||
Fast C&W | 2.60 | 4.20 | 21.60 | 14.60 | ||
TBAA | ||||||
Note: red indicates the best value and blue the second-best. * marks white-box attack success rates; all other values are black-box attack success rates.
Dataset | Attack algorithm | AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2
MSTAR | MI-FGSM | 62.9 | 39.0 | 52.0 | 65.0 | 42.0 | 67.9 | 50.0 | 51.5 | 68.0 | 46.0 |
NAM | 63.1 | 41.5 | 68.2 | 45.0 | 75.7 | 53.2 | 54.0 | 75.6 | 51.4 | ||
VMI-FGSM | 66.4 | 43.5 | 72.5 | 65.8 | 46.5 | 74.6 | 52.0 | 53.0 | |||
DI-FGSM | 70.0 | 70.0 | 51.0 | ||||||||
Attack-Unet-GAN | 53.6 | 30.5 | 47.0 | 35.0 | 30.0 | 35.0 | 41.0 | 43.0 | 52.3 | 31.0 | |
Fast C&W | 46.0 | 26.8 | 35.0 | 38.0 | 28.0 | 33.0 | 28.5 | 30.0 | 51.0 | 24.0 | |
TBAA | |||||||||||
FUSAR-Ship | MI-FGSM | 31.9 | 40.9 | 48.0 | — | — | — | — | 45.9 | — | 71.9
NAM | 34.5 | 50.5 | — | — | — | — | 48.5 | — | |||
VMI-FGSM | 35.8 | 53.2 | 67.0 | — | — | — | — | — | 76.5 | ||
DI-FGSM | 68.0 | — | — | — | — | 50.0 | — | ||||
Attack-Unet-GAN | 16.0 | 25.0 | 38.0 | — | — | — | — | 28.5 | — | 38.4 | |
Fast C&W | 12.5 | 22.5 | 34.2 | — | — | — | — | 26.0 | — | 32.0 | |
TBAA | — | — | — | — | — | ||||||
Note: red numbers indicate the best values and blue numbers the second-best.
Attack algorithm | QHM | ABN | ST
MI-FGSM | — | — | — |
AN-QHMI-FGSM | √ | — | — |
ABN-QHMI-FGSM | √ | √ | — |
TBAA | √ | √ | √ |
Dataset | Attack algorithm | AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2
MSTAR | MI-FGSM | 62.9 | 39.0 | 52.0 | 65.0 | 42.0 | 67.9 | 50.0 | 51.5 | 68.0 | 46.0
AN-QHMI-FGSM | 65.7 | 48.0 | 75.0 | 78.0 | 56.0 | 82.0 | 56.0 | 57.2 | 82.0 | 58.0 | |
ABN-QHMI-FGSM | |||||||||||
TBAA | |||||||||||
FUSAR-Ship | MI-FGSM | 31.9 | 40.9 | 38.0 | — | — | — | — | 45.9 | — | 71.9
AN-QHMI-FGSM | 36.9 | 52.0 | 76.0 | — | — | — | — | 50.0 | — | 81.6 | |
ABN-QHMI-FGSM | — | — | — | — | — | ||||||
TBAA | — | — | — | — | — | ||||||
Note: red numbers indicate the best values and blue numbers the second-best.
Attack algorithm | AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2 | Mean
MI-FGSM | 0.951 | 0.959 | 0.968 | 0.976 | 0.970 | 0.962 | 0.969 | 0.960 | 0.963 | 0.960 | 0.9638 |
NAM | 0.962 | 0.965 | 0.971 | 0.978 | 0.973 | 0.967 | 0.973 | 0.966 | 0.968 | 0.962 | 0.9685 |
VMI-FGSM | 0.965 | 0.961 | 0.972 | 0.976 | 0.975 | 0.969 | 0.977 | 0.967 | 0.969 | 0.965 | 0.9696 |
DI-FGSM | 0.960 | 0.970 | 0.974 | 0.974 | 0.976 | 0.971 | 0.979 | 0.974 | 0.970 | 0.963 | 0.9711 |
Attack-Unet-GAN | 0.975 | 0.978 |
Fast C&W | 0.980 | 0.9744 | |||||||||
TBAA | |||||||||||
注:标红数字为最优值,标蓝数字为次优值。 |
Attack method | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | Ensemble
MI-FGSM | 0.2970 | 0.2404 | 0.3621 | 0.5217 | 0.1797 | 0.3258 | 0.2550 | 1.8953 |
NAM | 0.3014 | 0.2410 | 0.3623 | 0.5303 | 0.1822 | 0.3248 | 0.2257 | 1.8973 |
VMI-FGSM | 0.2980 | 0.2498 | 0.3625 | 0.5289 | 0.1826 | 0.3289 | 0.2274 | 1.9766 |
DI-FGSM | 0.2984 | 0.2485 | 0.3623 | 0.5280 | 0.1823 | 0.3283 | 0.2294 | 1.9795 |
Attack-Unet-GAN | ||||||||
Fast C&W | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 |
TBAA | ||||||||
Note: red numbers indicate the maximum values and blue numbers the minimum.
输入:干净样本x,K个深度神经网络模型f1,f2,⋯,fK,对应 的网络模型逻辑值l1,l2,⋯,lK以及相应的网络模型集成权重 w1,w2,⋯,wK,扰动量大小ε,步长α,迭代次数T,系数v,β, β1和β2 |
输出:对抗样本xadv |
步骤1 α←ε/T,g0←0,m0←0,n0←0 |
步骤2 g0←0,m0←0,s0←0,xadv0←x |
步骤3 For t=0 to T−1 do |
步骤4 Update mt by mt=β1⋅mt−1+(1−β1)gt |
步骤5 Update ˆmt=mt1−βt1 |
步骤6 Update st=β2⋅st−1+(1−β2)(ˆgt−mt)2 |
步骤7 Update ˆst=st+ζ1−βt2 |
步骤8 ˜xadvt=xadvt+α√ˆst+ζˆmt |
步骤9 l(˜xadvt)=K∑k=1wklk(ST(˜xadvt;p)) |
步骤10 Update g∗t by g∗t=∇xadvtJ(ST(˜xadvt;p),y) |
步骤11 Update gt+1 by gt+1=βgt+(1−β)⋅g∗t‖g∗t‖1 |
步骤12 Update ˜gt+1 by ˜gt+1=(1−v)gt+1+v⋅g∗t‖g∗t‖1 |
步骤13 xadvt+1=Clipεx{xadvt+α⋅sign(˜gt+1)} |
步骤14 End for |
步骤15 Return xadvt=xadvt+1 |
目标类别 | 训练集 | 测试集 | |||
俯仰角(°) | 数量 | 俯仰角(°) | 数量 | ||
2S1 | 17 | 299 | 15 | 274 | |
BRDM2 | 17 | 298 | 15 | 274 | |
BTR60 | 17 | 233 | 15 | 195 | |
D7 | 17 | 299 | 15 | 274 | |
T62 | 17 | 299 | 15 | 273 | |
ZIL131 | 17 | 299 | 15 | 274 | |
BMP2 | 17 | 233 | 15 | 195 | |
ZSU23/4 | 17 | 299 | 15 | 274 | |
T72 | 17 | 232 | 15 | 196 | |
BTR70 | 17 | 233 | 15 | 196 |
目标类别 | 训练集数量 | 测试集数量 |
BulkCarrier | 97 | 25 |
CargoShip | 126 | 32 |
Fishing | 75 | 19 |
Tanker | 36 | 10 |
模型 | MSTAR ACC (%) | FUSAR-Ship ACC (%) |
AlexNet | 95.1 | 69.47 |
VGG16 | 95.6 | 70.23 |
ResNet18 | 96.6 | 68.10 |
ResNet50 | 97.7 | — |
InceptionV3 | 99.1 | — |
A-ConvNet | 99.8 | — |
MobileNet | 97.8 | — |
SqueezeNet | 95.4 | 72.25 |
PVTv2 | 98.8 | — |
MobileViTv2 | 99.4 | 72.70 |
| Surrogate model | Attack algorithm | AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| AlexNet | MI-FGSM | | 10.9 | 12.0 | 9.0 | 5.0 | 28.0 | 35.0 | 18.9 | 14.0 | 19.6 |
| | NAM | | 12.0 | 13.0 | 10.0 | 6.9 | 37.0 | 22.9 | 20.7 | | |
| | VMI-FGSM | | 19.5 | 19.5 | 6.0 | 29.5 | 39.5 | 27.0 | 21.5 | | |
| | DI-FGSM | | 16.0 | 29.5 | 32.0 | 20.5 | | | | | |
| | Attack-Unet-GAN | | 7.0 | 8.0 | 7.5 | 4.0 | 20.5 | 32.5 | 14.5 | 12.5 | 9.5 |
| | Fast C&W | | 4.5 | 7.0 | 6.0 | 3.0 | 17.5 | 19.5 | 12.5 | 8.0 | 3.5 |
| | TBAA | | | | | | | | | | |
| VGGNet16 | MI-FGSM | 61.0 | | 58.0 | 56.0 | 40.0 | 55.0 | 41.0 | 43.0 | 26.0 | 30.0 |
| | NAM | 60.0 | | 61.0 | 59.0 | 42.0 | 47.0 | 31.0 | 35.0 | | |
| | VMI-FGSM | 62.5 | | 59.5 | 58.5 | 42.5 | 57.5 | 41.0 | 46.5 | | |
| | DI-FGSM | 59.5 | | 42.5 | 37.5 | | | | | | |
| | Attack-Unet-GAN | 53.0 | | 40.5 | 32.5 | 24.5 | 32.5 | 38.5 | 39.0 | 23.0 | 24.5 |
| | Fast C&W | 44.5 | | 31.0 | 37.5 | 24.0 | 31.5 | 22.0 | 24.5 | 13.5 | 14.5 |
| | TBAA | | | | | | | | | | |
| ResNet18 | MI-FGSM | 13.0 | 9.9 | | 13.9 | 39.0 | 26.0 | 15.0 | 14.0 | 5.0 | |
| | NAM | 15.0 | 9.0 | | 16.0 | 38.0 | 31.0 | 17.0 | 21.0 | 5.3 | |
| | VMI-FGSM | 17.0 | 15.8 | | | | | | | | |
| | DI-FGSM | 14.0 | 21.0 | | 41.0 | 29.5 | 23.5 | 8.6 | | | |
| | Attack-Unet-GAN | 12.5 | 6.5 | | 11.5 | 5.0 | 19.5 | 18.5 | 11.5 | 11.0 | 3.0 |
| | Fast C&W | 10.0 | 4.0 | | 6.0 | 3.0 | 9.0 | 11.5 | 12.0 | 13.5 | 4.0 |
| | TBAA | | | | | | | | | | |
| ResNet50 | MI-FGSM | 8.0 | 12.0 | 10.5 | | 21.0 | 16.0 | 22.9 | 10.0 | 12.0 | 9.0 |
| | NAM | 10.0 | 14.0 | 14.0 | | 22.0 | 24.0 | 13.0 | 17.0 | | |
| | VMI-FGSM | 14.5 | 22.0 | 33.0 | | 21.0 | 13.5 | | | | |
| | DI-FGSM | 26.5 | 23.0 | 28.0 | | 11.5 | | | | | |
| | Attack-Unet-GAN | 6.5 | 10.5 | 6.5 | | 7.0 | 15.0 | 18.0 | 8.0 | 8.0 | 7.0 |
| | Fast C&W | 5.0 | 4.5 | 7.5 | | 13.0 | 7.5 | 10.5 | 7.5 | 10.5 | 6.0 |
| | TBAA | | | | | | | | | | |
| InceptionV3 | MI-FGSM | 29.0 | 31.4 | 65.5 | 38.0 | | 65.0 | 31.0 | 39.0 | 12.0 | 28.0 |
| | NAM | 66.9 | 33.9 | 18.0 | | | | | | | |
| | VMI-FGSM | 33.0 | 31.5 | 52.5 | 39.0 | | 43.0 | 30.0 | | | |
| | DI-FGSM | 34.0 | 34.5 | 56.0 | 41.0 | | 66.0 | 33.5 | 41.5 | 28.5 | |
| | Attack-Unet-GAN | 20.6 | 24.5 | 53.0 | 31.0 | | 32.5 | 25.0 | 26.5 | 9.0 | 24.5 |
| | Fast C&W | 11.0 | 16.5 | 30.0 | 28.0 | | 20.0 | 12.0 | 15.0 | 10.5 | 16.5 |
| | TBAA | | | | | | | | | | |
| A-ConvNet | MI-FGSM | 19.9 | 15.5 | 29.5 | 20.9 | 11.5 | | 29.0 | 15.0 | 21.9 | 9.0 |
| | NAM | 23.5 | 17.5 | 35.5 | 24.5 | 18.9 | | 32.5 | 18.0 | 24.0 | 13.0 |
| | VMI-FGSM | 25.5 | 19.5 | | | | | | | | |
| | DI-FGSM | 17.5 | 23.0 | 29.5 | 26.5 | 10.5 | | | | | |
| | Attack-Unet-GAN | 10.8 | 5.6 | 9.0 | 13.0 | 7.0 | | 11.6 | 11.0 | 14.7 | 8.0 |
| | Fast C&W | 11.5 | 4.0 | 8.5 | 5.0 | 3.0 | 97.5* | 10.5 | 12.5 | 13.5 | 4.0 |
| | TBAA | | | | | | | | | | |
| MobileNet | MI-FGSM | 16.0 | 15.1 | 10.0 | 15.0 | 15.6 | 18.0 | | 18.9 | 8.0 | 9.0 |
| | NAM | 18.0 | 14.9 | 18.9 | 9.5 | 10.5 | | | | | |
| | VMI-FGSM | 18.0 | 23.0 | 23.5 | 14.0 | | | | | | |
| | DI-FGSM | 19.0 | 17.5 | 10.5 | 17.5 | 18.0 | 19.5 | | 20.5 | | |
| | Attack-Unet-GAN | 9.0 | 3.5 | 7.5 | 7.8 | 2.5 | 12.5 | | 11.0 | 7.3 | 5.0 |
| | Fast C&W | 10.0 | 4.0 | 6.0 | 5.0 | 3.0 | 7.0 | | 10.0 | 6.5 | 4.0 |
| | TBAA | | | | | | | | | | |
| SqueezeNet | MI-FGSM | 19.5 | 9.5 | 20.5 | 18.0 | 6.0 | 40.5 | 31.4 | | 18.0 | 18.0 |
| | NAM | 18.5 | 10.3 | 20.9 | 19.5 | 6.5 | 40.5 | 24.0 | | 21.0 | |
| | VMI-FGSM | 28.5 | 11.0 | 32.0 | 19.5 | | | | | | |
| | DI-FGSM | 21.0 | 11.5 | 22.5 | 41.0 | 31.5 | 23.0 | | | | |
| | Attack-Unet-GAN | 13.0 | 8.0 | 16.5 | 17.0 | 4.5 | 17.5 | 17.0 | | 12.5 | 14.5 |
| | Fast C&W | 10.0 | 4.5 | 7.0 | 5.5 | 3.0 | 18.0 | 10.0 | | 13.5 | 14.0 |
| | TBAA | | | | | | | | | | |
| PVTv2 | MI-FGSM | 10.0 | 7.3 | 9.0 | 12.0 | 15.5 | 6.0 | 18.0 | 7.8 | | 11.3 |
| | NAM | 10.7 | 13.5 | 21.5 | 10.4 | 19.9 | 9.0 | 18.5 | | | |
| | VMI-FGSM | 12.0 | 12.0 | 9.5 | 22.5 | 23.0 | 11.0 | | | | |
| | DI-FGSM | 11.0 | 15.0 | 12.0 | 13.0 | | | | | | |
| | Attack-Unet-GAN | 8.5 | 5.0 | 7.5 | 7.9 | 12.5 | 3.5 | 11.6 | 4.5 | | 9.0 |
| | Fast C&W | 10.0 | 4.0 | 6.5 | 4.5 | 13.0 | 5.5 | 9.0 | 3.7 | | 4.0 |
| | TBAA | | | | | | | | | | |
| MobileViTv2 | MI-FGSM | 14.0 | 16.0 | 19.0 | 18.3 | 7.9 | 43.8 | 30.0 | 18.0 | 52.0 | |
| | NAM | 21.4 | 24.0 | 26.2 | 20.7 | 33.9 | 25.4 | 58.0 | | | |
| | VMI-FGSM | 21.0 | 21.5 | 11.5 | 45.0 | 35.0 | 27.0 | 56.0 | | | |
| | DI-FGSM | 23.1 | 10.5 | 46.3 | | | | | | | 98.0* |
| | Attack-Unet-GAN | 11.0 | 6.5 | 14.0 | 11.5 | 5.5 | 29.0 | 15.5 | 15.0 | 46.0 | |
| | Fast C&W | 11.0 | 4.0 | 8.5 | 5.5 | 3.0 | 17.5 | 10.5 | 11.5 | 45.0 | |
| | TBAA | | | | | | | | | | |

Note: Red marks the best value and blue the second-best. Values marked with * are white-box attack success rates; all other values are black-box attack success rates.
| Surrogate model | Attack algorithm | AlexNet | VGGNet16 | ResNet18 | SqueezeNet | MobileViTv2 |
|---|---|---|---|---|---|---|
| AlexNet | MI-FGSM | | 38.00 | 40.00 | 33.90 | 68.00 |
| | NAM | | 62.00 | | | |
| | VMI-FGSM | | 47.10 | 42.60 | 70.00 | |
| | DI-FGSM | 98.41* | 47.40 | 63.56 | 44.93 | 74.94 |
| | Attack-Unet-GAN | | 23.80 | 33.20 | 15.60 | 30.00 |
| | Fast C&W | | 18.70 | 29.10 | 12.40 | 24.00 |
| | TBAA | | | | | |
| VGGNet16 | MI-FGSM | 28.00 | | 24.00 | 40.00 | 46.00 |
| | NAM | 33.90 | | 30.00 | 38.00 | 50.00 |
| | VMI-FGSM | 33.90 | 98.62* | 28.10 | 52.40 | |
| | DI-FGSM | | 98.76* | 42.40 | | |
| | Attack-Unet-GAN | 13.50 | | 20.40 | 24.00 | 26.00 |
| | Fast C&W | 9.30 | | 19.60 | 22.90 | 24.00 |
| | TBAA | | | | | |
| ResNet18 | MI-FGSM | 6.00 | 7.90 | | 15.90 | 40.00 |
| | NAM | 7.90 | 9.90 | | 21.90 | 50.00 |
| | VMI-FGSM | 9.30 | 10.60 | | 28.60 | 53.80 |
| | DI-FGSM | | | 99.96* | | |
| | Attack-Unet-GAN | 4.50 | 5.60 | | 10.40 | 17.80 |
| | Fast C&W | 5.30 | 6.20 | | 6.70 | 12.90 |
| | TBAA | | | | | |
| SqueezeNet | MI-FGSM | 16.00 | 9.90 | 28.00 | | 45.90 |
| | NAM | 21.90 | 14.00 | 43.90 | | 56.00 |
| | VMI-FGSM | 44.20 | 53.30 | | | |
| | DI-FGSM | 25.07 | 16.32 | | 99.59* | |
| | Attack-Unet-GAN | 14.50 | 7.90 | 22.00 | | 28.00 |
| | Fast C&W | 10.10 | 6.30 | 20.10 | 98.69* | 26.90 |
| | TBAA | | | | | |
| MobileViTv2 | MI-FGSM | 4.00 | 7.90 | 42.00 | 21.90 | |
| | NAM | 9.00 | 16.00 | 26.00 | | |
| | VMI-FGSM | 12.80 | 45.90 | 24.10 | | |
| | DI-FGSM | 18.60 | 47.00 | | | |
| | Attack-Unet-GAN | 2.90 | 5.30 | 25.60 | 18.00 | |
| | Fast C&W | 2.60 | 4.20 | 21.60 | 14.60 | |
| | TBAA | | | | | |

Note: Red marks the best value and blue the second-best. Values marked with * are white-box attack success rates; all other values are black-box attack success rates.
| Dataset | Attack algorithm | AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MSTAR | MI-FGSM | 62.9 | 39.0 | 52.0 | 65.0 | 42.0 | 67.9 | 50.0 | 51.5 | 68.0 | 46.0 |
| | NAM | 63.1 | 41.5 | | 68.2 | 45.0 | 75.7 | 53.2 | 54.0 | 75.6 | 51.4 |
| | VMI-FGSM | 66.4 | 43.5 | 72.5 | 65.8 | 46.5 | 74.6 | 52.0 | 53.0 | | |
| | DI-FGSM | 70.0 | 70.0 | 51.0 | | | | | | | |
| | Attack-Unet-GAN | 53.6 | 30.5 | 47.0 | 35.0 | 30.0 | 35.0 | 41.0 | 43.0 | 52.3 | 31.0 |
| | Fast C&W | 46.0 | 26.8 | 35.0 | 38.0 | 28.0 | 33.0 | 28.5 | 30.0 | 51.0 | 24.0 |
| | TBAA | | | | | | | | | | |
| FUSAR-Ship | MI-FGSM | 31.9 | 40.9 | 48.0 | — | — | — | — | 45.9 | — | 71.9 |
| | NAM | 34.5 | 50.5 | | — | — | — | — | 48.5 | — | |
| | VMI-FGSM | 35.8 | 53.2 | 67.0 | — | — | — | — | | — | 76.5 |
| | DI-FGSM | | | 68.0 | — | — | — | — | 50.0 | — | |
| | Attack-Unet-GAN | 16.0 | 25.0 | 38.0 | — | — | — | — | 28.5 | — | 38.4 |
| | Fast C&W | 12.5 | 22.5 | 34.2 | — | — | — | — | 26.0 | — | 32.0 |
| | TBAA | | | | — | — | — | — | | — | |

Note: Red numbers mark the best values and blue numbers the second-best.
| Attack algorithm | QHM | ABN | ST |
|---|---|---|---|
| MI-FGSM | — | — | — |
| AN-QHMI-FGSM | √ | — | — |
| ABN-QHMI-FGSM | √ | √ | — |
| TBAA | √ | √ | √ |
| Dataset | Attack algorithm | AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MSTAR | MI-FGSM | 62.9 | 39.0 | 52.0 | 65.0 | 42.0 | 67.9 | 50.0 | 51.5 | 68.0 | 46.0 |
| | AN-QHMI-FGSM | 65.7 | 48.0 | 75.0 | 78.0 | 56.0 | 82.0 | 56.0 | 57.2 | 82.0 | 58.0 |
| | ABN-QHMI-FGSM | | | | | | | | | | |
| | TBAA | | | | | | | | | | |
| FUSAR-Ship | MI-FGSM | 31.9 | 40.9 | 38.0 | — | — | — | — | 45.9 | — | 71.9 |
| | AN-QHMI-FGSM | 36.9 | 52.0 | 76.0 | — | — | — | — | 50.0 | — | 81.6 |
| | ABN-QHMI-FGSM | | | | — | — | — | — | | — | |
| | TBAA | | | | — | — | — | — | | — | |

Note: Red numbers mark the best values and blue numbers the second-best.
| Attack algorithm | AlexNet | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | PVTv2 | MobileViTv2 | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MI-FGSM | 0.951 | 0.959 | 0.968 | 0.976 | 0.970 | 0.962 | 0.969 | 0.960 | 0.963 | 0.960 | 0.9638 |
| NAM | 0.962 | 0.965 | 0.971 | 0.978 | 0.973 | 0.967 | 0.973 | 0.966 | 0.968 | 0.962 | 0.9685 |
| VMI-FGSM | 0.965 | 0.961 | 0.972 | 0.976 | 0.975 | 0.969 | 0.977 | 0.967 | 0.969 | 0.965 | 0.9696 |
| DI-FGSM | 0.960 | 0.970 | 0.974 | 0.974 | 0.976 | 0.971 | 0.979 | 0.974 | 0.970 | 0.963 | 0.9711 |
| Attack-Unet-GAN | 0.975 | 0.978 | | | | | | | | | |
| Fast C&W | 0.980 | | | | | | | | | | 0.9744 |
| TBAA | | | | | | | | | | | |

Note: Red numbers mark the best values and blue numbers the second-best.
| Attack method | VGGNet16 | ResNet18 | ResNet50 | InceptionV3 | A-ConvNet | MobileNet | SqueezeNet | Ensemble |
|---|---|---|---|---|---|---|---|---|
| MI-FGSM | 0.2970 | 0.2404 | 0.3621 | 0.5217 | 0.1797 | 0.3258 | 0.2550 | 1.8953 |
| NAM | 0.3014 | 0.2410 | 0.3623 | 0.5303 | 0.1822 | 0.3248 | 0.2257 | 1.8973 |
| VMI-FGSM | 0.2980 | 0.2498 | 0.3625 | 0.5289 | 0.1826 | 0.3289 | 0.2274 | 1.9766 |
| DI-FGSM | 0.2984 | 0.2485 | 0.3623 | 0.5280 | 0.1823 | 0.3283 | 0.2294 | 1.9795 |
| Attack-Unet-GAN | | | | | | | | |
| Fast C&W | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0053 |
| TBAA | | | | | | | | |

Note: Red numbers mark the largest values and blue numbers the smallest.