Adversarial Attacks on Gait Recognition Based on Radar Micro-Doppler Signatures

YANG Yang, YANG Jingwen

Citation: YANG Yang and YANG Jingwen. Adversarial attacks on gait recognition based on radar micro-Doppler signatures[J]. Journal of Radars, in press. doi: 10.12000/JR26056


DOI: 10.12000/JR26056 CSTR: 32380.14.JR26056
Funds: The National Natural Science Foundation of China (62471329)
    About the authors:

    YANG Yang: Ph.D., Associate Professor. Research interests: radar signal processing, radar-based human sensing, deep learning, remote sensing, and computer vision.

    YANG Jingwen: M.S. candidate. Research interests: deep learning, adversarial attacks, and radar signal processing.

    Corresponding author:

    YANG Yang, yang_yang@tju.edu.cn

    Corresponding Editor: QU Xiaodong

  • CLC number: TN957

Adversarial Attacks on Gait Recognition Based on Radar Micro-Doppler Signatures
  • Abstract: Assessing the security boundary of radar micro-Doppler gait recognition systems under adversarial attack is of great significance. Most existing attack methods are transferred directly from the optical image domain and ignore the fine-grained feature distribution and time-frequency structure of micro-Doppler spectrograms, which limits their transferability in cross-model black-box targeted attack scenarios. To address this, this paper proposes GAC-Attack (Gradient guidance and Adaptive Cropping radar gait targeted attack), a black-box targeted attack framework for human gait micro-Doppler signatures. Because inter-class feature distributions lie close together and the targeted attack direction is prone to semantic drift, a robust gradient optimization mechanism guided by inter-class relations is constructed. Because the discriminative information is concentrated in local time-frequency regions, an adaptive local cropping mechanism is designed to strengthen the perturbation's interference with discriminative features shared across models. A single-action gait recognition dataset and a multi-action identity recognition dataset are constructed, and systematic comparative experiments are conducted over 7 network architectures and 7 black-box targeted attack algorithms. The results show that the targeted attack success rate of the proposed method exceeds the second-best baseline by about 7% on the gait dataset and about 4% on the identity dataset, and remains ahead for most model combinations, verifying the method's effectiveness in fine-grained complex scenarios and the stability of its cross-model transferability.
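The inter-class relation guidance described in the abstract relies on soft target labels $ \tilde{\boldsymbol{y}}_{\text{target}} $ with a temperature $T$ (cf. Fig. 15 and Algorithm 1). A minimal sketch of one plausible construction, assuming the soft label mixes a one-hot target with a temperature-softened surrogate distribution; the mixing weight `lam` and the function name are illustrative, not the paper's exact formulation:

```python
import numpy as np

def soft_target_label(logits, target_idx, T=5.0, lam=0.7):
    """Illustrative soft target label: one-hot target mixed with a
    temperature-softened class distribution (hypothetical form)."""
    z = np.asarray(logits, dtype=float) / T   # temperature T > 1 flattens the softmax
    z -= z.max()                              # numerical stability
    p = np.exp(z) / np.exp(z).sum()           # softened distribution keeps
                                              # inter-class similarity structure
    onehot = np.zeros_like(p)
    onehot[target_idx] = 1.0
    # lam trades target sharpness against preserved inter-class relations
    return lam * onehot + (1.0 - lam) * p

# Example: 4-class surrogate logits, target class 2
y_soft = soft_target_label([1.0, 2.5, 6.0, 0.5], target_idx=2)
```

A label built this way still points the loss at the target class but never zeroes out the non-target classes, which is one way to keep the targeted gradient from drifting semantically.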

     

  • 图  1  面向雷达步态识别的黑盒目标攻击过程示例

    Figure  1.  Example of a black-box targeted attack process for radar gait recognition

    图  2  GAC-Attack 算法框图

    Figure  2.  Flowchart of the GAC-Attack algorithm

    图  3  雷达步态特征空间分析

    Figure  3.  Analysis of the radar gait feature space

    图  4  雷达步态图像结构空间分析

    Figure  4.  Analysis of the structural space of radar gait images

    图  5  信号采集系统与场景

    Figure  5.  Signal acquisition system and experimental scenario

    图  6  步态数据集微多普勒谱图样例

    Figure  6.  Examples of micro-Doppler spectrograms in the gait dataset

    图  7  身份数据集微多普勒谱图样例

    Figure  7.  Examples of micro-Doppler spectrograms in the identity dataset

    图  8  基于步态数据集的不同对比算法在不同代理模型下的攻击成功率

    Figure  8.  Attack success rates of different baseline algorithms across different surrogate models on the gait dataset

    图  9  基于身份数据集的不同对比算法在不同代理模型下的攻击成功率

    Figure  9.  Attack success rates of different baseline algorithms across different surrogate models on the identity dataset

    图  10  干净样本、对抗扰动以及对抗样本主观结果展示

    Figure  10.  Visual comparison of clean samples, adversarial perturbations, and adversarial examples

    图  11  目标类别在不同目标模型上的置信度可视化

    Figure  11.  Visualization of target-class confidence across different target models

    图  12  消融变体Var(7)与GAC-Attack方法在不同代理模型下的攻击成功率

    Figure  12.  Attack success rates of ablation variant Var(7) and GAC-Attack across different surrogate models

    图  13  单区域与多区域裁剪关键局部对比

    Figure  13.  Comparison of key local regions between single-region and multi-region cropping

    图  14  攻击成功率随百分位数p的变化趋势图

    Figure  14.  Attack success rate versus percentile p

    图  15  攻击成功率随温度T的变化趋势图

    Figure  15.  Attack success rate versus temperature T

    图  16  不同扰动预算下各代理模型的平均攻击成功率变化图

    Figure  16.  Average attack success rates of different surrogate models under varying perturbation budgets

    图  17  攻击成功率随$ \lambda $变化图

    Figure  17.  Attack success rate versus $ \lambda $

    图  18  攻击成功率随裁剪数量k变化图

    Figure  18.  Attack success rate versus the number of crops k

    Algorithm 1.   Adversarial example generation procedure of GAC-Attack

     Input: clean sample $ \boldsymbol{x}_{\text{clean}} $, surrogate model $f$, one-hot target label $ \boldsymbol{y}_{\text{target}} $, soft labels of the other classes $ \tilde{\boldsymbol{y}}_{\text{target}} $
     Parameters: number of iterations $I$, gradient step size $ \alpha $, perturbation budget $ \epsilon $, number of crops $k$, temperature $T$, threshold percentile $p$, scale parameters $ \beta $ and $ \lambda $
     Output: adversarial example $ {\boldsymbol{x}}_{\text{adv}} $
     Step 1  $ \boldsymbol{x}_{\text{adv}}^{0}=\boldsymbol{x}_{\text{clean}},\ {\boldsymbol{g}}_{0}=0,\ {\delta }_{0}=0 $
     Step 2  For $ i=0 $ to $ I-1 $ do:
     Step 3    If $ i=0 $: crop randomly; else: obtain $ X_{\text{local}}^{i} $ by cropping according to Eq. (32)
     Step 4    Pad every local adversarial sample $ \boldsymbol{x}_{\text{local}}^{i,l} $ in $ X_{\text{local}}^{i} $ with the background value to the size of $ \boldsymbol{x}_{\text{adv}}^{i} $
     Step 5    $ \tilde{\boldsymbol{y}}_{\text{model}}^{\boldsymbol{x}_{\text{adv}}^{i}}\leftarrow f(\boldsymbol{x}_{\text{adv}}^{i}) $, $ \tilde{\boldsymbol{y}}_{\text{model}}^{X_{\text{local}}^{i}}\leftarrow f(X_{\text{local}}^{i}) $, where $ \tilde{\boldsymbol{y}}_{\text{model}}^{X_{\text{local}}^{i}} $ denotes the set of predictions obtained by forward-propagating each local sample in $ X_{\text{local}}^{i} $ through the surrogate model
     Step 6    Compute losses: $ {L}_{\text{global}} $ from $ \boldsymbol{x}_{\text{adv}}^{i} $ by Eqs. (2)–(5); $ {L}_{\text{local},l} $ from each local sample $ \boldsymbol{x}_{\text{local}}^{(l)} $ in $ X_{\text{local}}^{i} $ by Eqs. (2)–(5), giving $ {L}_{\text{local}}=\displaystyle\sum\nolimits_{l=1}^{k}{L}_{\text{local},l} $ and $ {L}_{\text{total}}={L}_{\text{global}}+{L}_{\text{local}} $
     Step 7    Compute $ \boldsymbol{g}_{\text{r}}^{\text{norm}} $ by Eq. (10)
     Step 8    Gradient: $ \boldsymbol{g}_{\text{sum}}^{i+1}=\nabla {L}_{\text{total}}-\boldsymbol{g}_{\text{r}}^{\text{norm}} $
     Step 9    MI update: $ \boldsymbol{g}_{\text{sum}}^{i+1}=\boldsymbol{g}_{\text{sum}}^{i}+\dfrac{\boldsymbol{g}_{\text{sum}}^{i+1}}{{\left\|\boldsymbol{g}_{\text{sum}}^{i+1}\right\|}_{2}} $
     Step 10   If $ i \gt 0 $ and $ i\ \%\ 20=0 $: compute $ {\boldsymbol{n}}_{\text{final}} $ by Eq. (15) and set $ \boldsymbol{g}_{\text{total}}^{i+1}=\boldsymbol{g}_{\text{sum}}^{i+1}+{\boldsymbol{n}}_{\text{final}} $; otherwise $ \boldsymbol{g}_{\text{total}}^{i+1}=\boldsymbol{g}_{\text{sum}}^{i+1} $
     Step 11   $ \boldsymbol{x}_{\text{adv}}^{i+1}=\text{Clamp}_{\boldsymbol{x}}^{ \epsilon }(\boldsymbol{x}_{\text{adv}}^{i}+\alpha \cdot \text{sign}(\boldsymbol{g}_{\text{total}}^{i+1})) $
     Step 12   $ \boldsymbol{x}_{\text{adv}}^{i}=\boldsymbol{x}_{\text{adv}}^{i+1} $
     End for
     Return $ \boldsymbol{x}_{\text{adv}}^{I} $
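Stripped to its transferable core, the update in Steps 8–11 is a momentum-accumulated sign step projected back onto the $ \epsilon $-ball around the clean sample. A simplified, self-contained sketch on a toy linear-softmax surrogate; the local-crop loss, the gradient correction $ \boldsymbol{g}_{\text{r}}^{\text{norm}} $, and the periodic noise exploration of GAC-Attack are omitted, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def targeted_mi_attack(x, W, y_target, eps=0.1, alpha=0.01, iters=50):
    """Momentum (MI) sign-step targeted attack on a linear softmax model."""
    x_adv, g = x.copy(), np.zeros_like(x)
    for _ in range(iters):
        p = softmax(W @ x_adv)
        # Gradient of the *negative* cross-entropy to y_target w.r.t. the
        # input, so ascending it pushes the prediction towards the target.
        grad = -W.T @ (p - y_target)
        g = g + grad / (np.linalg.norm(grad) + 1e-12)  # MI accumulation (Step 9)
        x_adv = x_adv + alpha * np.sign(g)             # sign step (Step 11)
        x_adv = np.clip(x_adv, x - eps, x + eps)       # Clamp_x^eps projection
        x_adv = np.clip(x_adv, 0.0, 1.0)               # valid spectrogram range
    return x_adv

# Toy check: drive a random 32-dim input towards class 3 of a 6-class model.
W = rng.standard_normal((6, 32))
x = rng.random(32)
y_t = np.eye(6)[3]
x_adv = targeted_mi_attack(x, W, y_t)
```

Momentum normalizes each new gradient before accumulating it, so early update directions keep influencing later steps, which is what stabilizes the targeted direction across iterations.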

    表  1  步态数据集构成

    Table  1.   Composition of the gait dataset

    Target class  Training samples  Test samples
    Subject 1  730  146
    Subject 2  676  135
    Subject 3  718  143
    Subject 4  594  118
    Subject 5  585  116
    Subject 6  662  132
    Total  3965  790

    表  2  身份数据集构成

    Table  2.   Composition of the identity dataset

    Target class  Training samples  Test samples
    Subject 1  133  27
    Subject 2  120  24
    Subject 3  132  27
    Subject 4  103  21
    Subject 5  57  12
    Subject 6  149  30
    Total  694  141

    表  3  目标模型识别精度

    Table  3.   Recognition accuracy of the target model

    Model  Gait dataset ACC (%)  Identity dataset ACC (%)
    ResNet50  98.61  97.16
    ResNet18  99.24  99.29
    DenseNet121  99.24  98.58
    VGGNet16  97.72  98.58
    MSF-Net  95.32  98.58
    Deform-DCGAN  95.95  99.29
    MF-CNN  97.97  93.62

    表  4  基于步态数据集不同对比算法在多个代理模型下的平均攻击成功率 (%)

    Table  4.   Average attack success rates of different baseline algorithms across multiple surrogate models on the gait dataset (%)

    Surrogate model  CFM  DL  TAFT  GI  SU  TI-FGSM  DI2-FGSM  GAC-Attack
    ResNet50  30.4  37.7  29.3  25.7  26.5  17.2  16.9  46.6
    ResNet18  31.7  39.8  29.1  23.1  26.5  16.9  18.6  48.3
    DenseNet121  26.5  31.4  25.2  21.2  24.1  14.7  18.5  36.4
    VGGNet16  23.2  23.7  15.7  17.6  21.2  10.6  14.2  30.6
    MSF-Net  25.7  28.5  23.3  21.0  25.1  15.0  20.4  25.3
    Deform-DCGAN  21.4  21.9  20.4  21.2  21.7  16.2  17.4  23.7
    MF-CNN  26.5  16.8  22.7  6.8  24.9  9.3  13.8  37.4
    Average  26.5  28.5  23.7  19.5  24.3  14.3  17.1  35.5
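The Average row of Table 4 is the unweighted mean over the seven surrogate models; it can be checked for the GAC-Attack, DL, and CFM columns (values copied from the table):

```python
import numpy as np

# Per-surrogate targeted ASR (%) from Table 4, rows ResNet50 ... MF-CNN
gac = [46.6, 48.3, 36.4, 30.6, 25.3, 23.7, 37.4]
dl  = [37.7, 39.8, 31.4, 23.7, 28.5, 21.9, 16.8]
cfm = [30.4, 31.7, 26.5, 23.2, 25.7, 21.4, 26.5]

avg = {name: round(float(np.mean(v)), 1)
       for name, v in {"GAC-Attack": gac, "DL": dl, "CFM": cfm}.items()}
# avg -> {'GAC-Attack': 35.5, 'DL': 28.5, 'CFM': 26.5}
```

The roughly 7-point margin between GAC-Attack (35.5) and the next-best baseline DL (28.5) matches the improvement quoted in the abstract.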

    表  5  基于身份数据集不同对比算法在多个代理模型下的平均攻击成功率(%)

    Table  5.   Average attack success rates of different baseline algorithms across multiple surrogate models on the identity dataset (%)

    Surrogate model  CFM  DL  TAFT  GI  SU  TI-FGSM  DI2-FGSM  GAC-Attack
    ResNet50  37.2  35.6  28.5  24.7  25.5  11.9  13.6  43.5
    ResNet18  32.6  37.2  28.2  24.1  26.6  16.5  14.1  45.7
    DenseNet121  27.8  27.6  23.0  23.1  22.6  16.5  15.5  31.0
    VGGNet16  24.0  21.0  14.2  17.6  20.2  11.5  10.7  26.2
    MSF-Net  26.2  25.8  25.4  20.8  22.0  19.4  18.1  24.0
    Deform-DCGAN  20.4  20.5  20.0  19.5  19.2  15.1  15.7  21.2
    MF-CNN  28.8  15.3  15.7  5.2  20.5  12.4  11.0  33.3
    Average  28.1  26.1  22.1  19.3  22.4  15.1  14.1  32.1

    表  6  不同消融变体的平均攻击成功率

    Table  6.   Average attack success rates of different ablation variants

    Variant  Log weighting  Soft-label loss  Gradient correction  Noise exploration  Candidate region selection  Key region screening  Crop parameter determination  Average attack success rate (%)
    Baseline  20.5
    Var(1)  25.4
    Var(2)  28.6
    Var(3)  31.7
    Var(4)  31.5
    Var(5)  33.5
    Var(6)  34.5
    Var(7)  30.9
    Var(8)  33.9
    Var(9)  34.0
    GAC-Attack  35.5

    表  7  不同加权方式下的平均目标攻击成功率(%)

    Table  7.   Average targeted attack success rate (%) under different weighting schemes

    Weighting scheme  ResNet50  ResNet18  DenseNet121  VGGNet16  MSF-Net  Deform-DCGAN  MF-CNN  Average
    ln(·)  46.6  48.3  36.4  30.6  25.3  23.7  37.4  35.5
    Square root  43.4  45.9  32.6  26.9  23.3  21.2  35.6  32.7
    Reciprocal  45.5  47.6  32.0  25.6  24.1  20.2  38.1  33.3

    表  8  不同SNR条件下各代理模型的目标攻击成功率(%)

    Table  8.   Targeted attack success rates (%) of different surrogate models under different SNR conditions

    SNR ResNet50 ResNet18 DenseNet121 VGGNet16 MSF-Net Deform-DCGAN MF-CNN Average
    Clean 46.6 48.3 36.4 30.6 25.3 23.7 37.4 35.5
    30 dB 47.4 49.6 37.2 31.2 24.5 22.7 36.5 35.6
    20 dB 47.9 49.6 37.2 31.6 24.9 22.8 36.5 35.8
    10 dB 43.9 44.3 31.1 31.2 19.6 24.2 33.3 32.5

    表  9  不同攻击算法在各代理模型下单样本生成时间开销对比(ms)

    Table  9.   Comparison of per-sample generation time (ms) of different attack algorithms across surrogate models

    Algorithm  ResNet18  DenseNet121  MSF-Net  MF-CNN
    DL  361.9  1106.1  196.8  222.3
    CFM  326.8  1194.3  123.4  149.4
    GI  419.4  1579.1  558.0  296.2
    SU  408.5  1249.6  513.7  287.1
    TAFT  633.9  2055.6  358.4  371.5
    TI-FGSM  348.3  1099.4  139.8  166.4
    DI2-FGSM  359.6  1191.8  213.4  209.7
    GAC-Attack  1249.7  4943.4  779.7  776.9

    表  10  GAC-Attack部分消融变体在各代理模型下单样本生成时间开销对比(ms)

    Table  10.   Comparison of per-sample generation time (ms) of GAC-Attack partial ablation variants across surrogate models

    Algorithm  ResNet18  DenseNet121  MSF-Net  MF-CNN
    Var(7)  1275.8  3997.5  619.3  684.5
    Var(1)  774.0  2739.6  427.3  423.9
    Var(5)  994.6  3512.9  516.6  574.0
    Var(4)  1106.8  3780.0  594.0  553.6
    Var(6)  1231.2  4059.3  761.8  740.7
    GAC-Attack  1249.7  4943.4  779.7  776.9

    表  11  采集顺序划分条件下的目标模型识别精度(%)

    Table  11.   Target model recognition accuracy under data partitioning based on acquisition order (%)

    Model  Gait dataset ACC (%)
    ResNet50  83.58
    ResNet18  83.24
    DenseNet121  82.34
    VGGNet16  72.44
    MSF-Net  78.18
    Deform-DCGAN  75.93
    MF-CNN  77.50

    表  12  采集顺序划分条件下不同攻击算法在各代理模型下的目标攻击成功率(%)

    Table  12.   Targeted attack success rates (%) of different attack algorithms across surrogate models under data partitioning based on acquisition order

    Algorithm  ResNet50  ResNet18  DenseNet121  VGGNet16  MSF-Net  Deform-DCGAN  MF-CNN  Average
    GAC-Attack  60.2  60.1  49.3  41.5  27.8  41.4  47.0  46.8
    DL  45.7  48.1  39.1  33.4  29.9  34.8  26.0  36.7
    CFM  37.0  34.4  29.7  29.3  27.0  26.8  32.7  31.0
Publication history
  • Received: 2026-03-09
  • Revised: 2026-05-10
