基于多域协同训练的半监督雷达人体动作识别方法

赵兴鹏 徐航 常朋发 刘丽 李静霞 张建国 王冰洁

赵兴鹏, 徐航, 常朋发, 等. 基于多域协同训练的半监督雷达人体动作识别方法[J]. 雷达学报(中英文), 待出版. doi: 10.12000/JR25223
ZHAO Xingpeng, XU Hang, CHANG Pengfa, et al. A semisupervised radar-based human action recognition method via multidomain collaborative training[J]. Journal of Radars, in press. doi: 10.12000/JR25223


DOI: 10.12000/JR25223 CSTR: 32380.14.JR25223
基金项目: 国家密码科学基金(2025NCSF02059),国家自然科学基金(42174175, 62105233),山西省基础研究计划资助项目(202203021221090)
    Author biographies:

    ZHAO Xingpeng, M.S. candidate. Main research interest: semisupervised human action recognition methods.

    XU Hang, Ph.D., associate professor. Main research interests: MIMO radar for human behavior sensing and vital-sign monitoring.

    CHANG Pengfa, Ph.D., lecturer. Main research interests: whispering-gallery-mode microcavities, physical unclonable functions, and secure communication.

    LIU Li, Ph.D., professor. Main research interests: ground-penetrating radar signal processing and applications.

    LI Jingxia, Ph.D., associate professor. Main research interests: novel ground-penetrating radar systems and applications.

    ZHANG Jianguo, Ph.D., associate professor. Main research interests: millimeter-wave radar signal processing.

    WANG Bingjie, Ph.D., professor. Main research interests: chaotic-laser ranging and imaging radar.

    Corresponding author:

    XU Hang xuhang@tyut.edu.cn

    Corresponding Editor: JIN Tian

  • CLC number: TN957

A Semisupervised Radar-Based Human Action Recognition Method via Multidomain Collaborative Training

Funds: The National Cryptologic Science Fund of China (2025NCSF02059), the National Natural Science Foundation of China (42174175, 62105233), and the Fundamental Research Program of Shanxi Province (202203021221090)
  • Abstract: To address the shortage of labeled data in radar-based human action recognition (HAR), this paper proposes a semisupervised learning method based on multidomain collaborative training. The method fuses action features from the slow-time-range, slow-time-Doppler-frequency, and range-Doppler-frequency domains in a decision-level ensemble framework, in which an inter-domain consistency evaluation mechanism dynamically adjusts each domain's weight in the ensemble prediction. A hierarchical-confidence dynamic pseudo-labeling strategy balances pseudo-label quality against utilization through multilevel quality assessment and dynamic threshold calibration. In addition, a feature-alignment constraint is introduced: fast principal component analysis extracts the principal components of the multidomain features, guiding the deep network to learn compact feature representations and strengthening its discriminative ability. On a through-wall human action dataset collected with a random code radar, the proposed method achieves an average recognition accuracy of (93.6±1.6)% with a 5% labeling ratio; on an indoor human action dataset collected with a frequency-modulated continuous-wave (FMCW) radar, it achieves (91.3±1.9)% at the same ratio, exceeding both supervised methods (Bi-LSTM, LH-ViT, and MFAFN) and semisupervised methods (FixMatch, C-TGAN, MF-Match, and LW-HGR). The results show that the method performs stably under two radar regimes (random code and FMCW) and two detection scenarios (through-wall and indoor), verifying its cross-regime and cross-scenario adaptability. Moreover, the model contains 1.30 M parameters, requires 26.16 M floating-point operations, and occupies 5.01 MB, demonstrating high computational efficiency.
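The decision-level ensemble and pseudo-labeling pipeline summarized in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the agreement-based consistency score, the single fixed threshold, and the toy data are simplifying assumptions (the paper uses a hierarchical, dynamically calibrated threshold).

```python
import numpy as np

def consistency_weights(probs):
    """probs: (D, N, K) softmax outputs of D per-domain classifiers.
    Weight each domain by how well its hard predictions agree with the
    other domains' predictions (a simple stand-in for the paper's
    inter-domain consistency evaluation)."""
    D = probs.shape[0]
    preds = probs.argmax(axis=2)                    # (D, N) hard labels per domain
    agree = np.zeros(D)
    for d in range(D):
        others = [e for e in range(D) if e != d]
        agree[d] = np.mean([np.mean(preds[d] == preds[e]) for e in others])
    return agree / (agree.sum() + 1e-12)            # normalize so weights sum to 1

def ensemble_pseudo_labels(probs, threshold):
    """Weighted decision-level fusion, then keep only confident pseudo-labels."""
    w = consistency_weights(probs)
    fused = np.tensordot(w, probs, axes=1)          # (N, K) weighted soft vote
    conf = fused.max(axis=1)
    labels = fused.argmax(axis=1)
    mask = conf >= threshold                        # the paper calibrates this dynamically
    return labels, mask, w

# toy example: 3 domains observe 5 unlabeled samples over 4 classes,
# modeled as three noisy views of one shared signal
rng = np.random.default_rng(0)
base = rng.normal(size=(1, 5, 4))
logits = base + 0.5 * rng.normal(size=(3, 5, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=2, keepdims=True)
labels, mask, w = ensemble_pseudo_labels(probs, threshold=0.4)
```

Only samples whose fused confidence clears the threshold contribute pseudo-labels; raising the threshold over training trades utilization for label quality.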

     

  • 图  1  基于多域协同训练的半监督雷达HAR方法

    Figure  1.  Semi-supervised radar-based HAR method via multi-domain collaborative training

    图  2  随机码雷达探测的10种穿墙人体动作

    Figure  2.  Ten through-wall human actions detected by the random code radar

    图  3  随机码雷达探测的10种穿墙人体动作的可视化结果

    Figure  3.  Visualization results of ten through-wall human actions detected by the random code radar

    图  4  FMCW雷达探测的6种室内人体动作

    Figure  4.  Six indoor human actions detected by the FMCW radar

    图  5  FMCW雷达探测的6种室内人体动作的可视化结果

    Figure  5.  Visualization results of six indoor human actions detected by the FMCW radar

    图  6  基于穿墙人体动作数据集的HAR训练过程曲线

    Figure  6.  HAR training process curves based on through-wall human action dataset

    图  7  损失函数中权重参数的敏感性分析

    Figure  7.  Sensitivity analysis of weight parameters in loss function

    图  8  基于穿墙人体动作数据集的特征空间聚类分布与混淆矩阵

    Figure  8.  Feature space clustering distribution and confusion matrix based on through-wall human action dataset

    图  9  基于室内人体动作数据集的特征空间聚类分布与混淆矩阵

    Figure  9.  Feature space clustering distribution and confusion matrix based on indoor human action dataset

    表  1  CNN分类器的结构参数

    Table  1.   Structural parameters of CNN classifiers

    Layer   Configuration                                  Output dimension
    Conv1   7×7, 16 channels, BN, ReLU, MaxPool, Dropout   16×H×W
    Conv2   5×5, 32 channels, BN, ReLU, MaxPool, Dropout   32×H×W
    FC1     128, BN, ReLU, Dropout                         128
    FC2     64, BN, ReLU, Dropout                          64
    FC3     K                                              10/6
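Table 1's classifier can be sketched in PyTorch as follows. The kernel sizes, channel counts, and FC widths follow the table; the padding, 2×2 pooling, dropout rate, and 64×64 single-channel input are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class DomainCNN(nn.Module):
    """Per-domain classifier following Table 1: two conv blocks
    (7×7/16 and 5×5/32, each with BN, ReLU, MaxPool, Dropout),
    then FC 128 -> 64 -> K, where K is 10 (through-wall dataset)
    or 6 (indoor dataset)."""
    def __init__(self, num_classes=10, in_size=64, p=0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, padding=3),   # assumed 'same' padding
            nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout(p),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout(p),
        )
        flat = 32 * (in_size // 4) ** 2                   # two 2×2 poolings quarter H and W
        self.classifier = nn.Sequential(
            nn.Linear(flat, 128), nn.BatchNorm1d(128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, num_classes),
        )
    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DomainCNN(num_classes=10).eval()
out = model(torch.randn(2, 1, 64, 64))
```

In the multidomain framework, one such network would be instantiated per domain representation and their softmax outputs combined at the decision level.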

    表  2  两个数据集样本的统计信息

    Table  2.   Statistical information of two dataset samples

    Dataset                             Action classes   Original samples   Augmented samples   Distribution   Augmentation factor
    Through-wall human action dataset   10               1200               10800               Balanced       9×
    Indoor human action dataset         6                1754               15786               Imbalanced     9×

    表  3  基于穿墙人体动作数据集的融合策略对比实验 (%)

    Table  3.   Comparison experiment of fusion strategies based on through-wall human action dataset (%)

    Fusion strategy        Labeling ratio
                           3%     5%     10%    15%
    Feature-level fusion   80.2   84.6   88.9   92.0
    Proposed method        84.8   93.6   94.8   98.2
    Note: Bold values indicate the best results.

    表  4  基于穿墙人体动作数据集的权重机制对比实验 (%)

    Table  4.   Comparison experiment of weighting mechanisms based on through-wall human action dataset (%)

    Weighting mechanism    Labeling ratio
                           3%     5%     10%    15%
    Uniform weights        79.6   87.6   89.3   91.2
    Adaptive weighting     82.1   89.1   91.5   94.9
    Proposed method        84.8   93.6   94.8   98.2
    Note: Bold values indicate the best results.

    表  5  基于穿墙人体动作数据集的多域协同方法消融实验 (%)

    Table  5.   Ablation experiment of multi-domain collaborative method based on through-wall human action dataset (%)

    Configuration                          Labeling ratio
                                           3%         5%         10%        15%
    Single domain    ST-DF                 68.8±3.7   75.6±3.1   77.3±2.5   83.9±2.2
                     R-DF                  72.1±3.2   78.3±2.8   83.5±2.3   85.9±1.9
                     ST-R                  74.5±3.3   80.6±2.6   84.3±2.1   86.7±1.7
    Dual domain      ST-DF+R-DF            75.4±3.2   79.7±2.7   84.9±2.2   87.5±1.8
                     ST-R+ST-DF            75.2±2.9   82.3±2.4   86.3±2.0   88.9±1.8
                     ST-R+R-DF             77.6±2.8   83.1±2.3   86.8±1.9   89.6±1.6
    Multi-domain     Multi-domain only     79.2±2.9   84.9±2.5   89.6±1.9   92.6±1.3
                     + feature alignment   83.1±2.6   87.4±2.3   91.7±1.7   95.6±1.5
                     + dynamic threshold   80.4±2.8   85.2±2.3   90.7±1.8   94.4±1.3
    Proposed method                        84.8±2.5   93.6±1.6   94.8±1.5   98.2±1.2
    Note: Bold values indicate the best results.
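The "+ feature alignment" row in Table 5 corresponds to the PCA-based alignment constraint mentioned in the abstract. Below is a minimal numpy sketch of the idea, assuming a simple residual-energy loss and an arbitrary number of components `k`; the paper's fast PCA procedure and exact loss form may differ.

```python
import numpy as np

def pca_alignment_loss(features, k=8):
    """features: (N, F) pooled multidomain feature vectors.
    Extract the top-k principal components via SVD and measure how far
    the centered features lie from the k-dimensional principal subspace.
    A network trained to shrink this residual is pushed toward a compact
    (approximately low-rank) representation."""
    mu = features.mean(axis=0, keepdims=True)
    X = features - mu
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = principal directions
    P = Vt[:k]                                        # (k, F) top-k components
    recon = X @ P.T @ P                               # projection onto principal subspace
    return np.mean((X - recon) ** 2)                  # residual energy outside the subspace

rng = np.random.default_rng(1)
# features that are exactly rank-3 give a near-zero loss for k >= 3 ...
low_rank = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 32))
loss_low = pca_alignment_loss(low_rank, k=3)
# ... while full-rank random features leave substantial residual
full = rng.normal(size=(100, 32))
loss_full = pca_alignment_loss(full, k=3)
```

Used as an auxiliary training term, this residual would be added to the classification and consistency losses with a small weight.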

    表  6  基于穿墙人体动作数据集的HAR方法性能对比 (%)

    Table  6.   Performance comparison of various HAR methods based on through-wall human action dataset (%)

    Type             Method         Input          Labeling ratio
                                                   3%         5%         10%        15%
    Supervised       Bi-LSTM[13]    ST-DF          41.9±5.8   44.2±5.3   54.7±4.5   64.4±3.6
                                    ST-R           69.3±4.2   73.0±3.7   79.5±3.1   85.3±2.5
                                    R-DF           59.5±4.9   68.2±4.1   68.5±4.2   76.8±3.3
                     LH-ViT[18]     ST-DF          36.8±6.5   48.2±5.7   53.9±4.8   62.2±3.8
                                    ST-R           81.0±3.0   84.3±2.6   88.2±2.2   91.1±2.0
                                    R-DF           85.1±2.5   91.3±2.1   91.7±2.0   93.2±1.7
                     MFAFN[29]      ST-DF+ST-R     81.3±2.3   86.5±2.4   92.2±1.7   94.5±1.4
    Semisupervised   FixMatch[32]   ST-DF          64.3±4.0   74.7±3.3   83.6±2.6   87.7±2.1
                                    ST-R           83.7±2.6   90.3±2.2   91.9±2.2   93.0±1.6
                                    R-DF           84.5±2.7   88.7±2.3   90.4±2.0   92.7±1.7
                     C-TGAN[21]     ST-DF          49.5±5.1   60.0±4.4   68.7±3.6   76.0±3.0
                                    ST-R           78.2±3.2   83.9±2.7   87.4±2.7   92.0±2.1
                                    R-DF           84.4±2.5   89.3±2.2   91.6±1.9   93.3±1.9
                     MF-Match[22]   ST-DF          71.6±3.7   75.9±3.2   86.9±2.3   91.9±1.8
                                    ST-R           85.6±2.9   91.4±2.1   93.7±1.9   95.4±1.3
                                    R-DF           84.7±2.9   89.1±2.3   91.5±1.9   93.1±1.7
                     LW-HGR[30]     ST-DF+ST-R     80.0±2.0   83.2±2.1   87.8±2.0   91.3±1.8
    Proposed method                 Multi-domain   84.8±2.5   93.6±1.6   94.8±1.5   98.2±1.2
    Note: Bold values indicate the best results.

    表  7  基于室内人体动作数据集的HAR方法性能与计算效率对比 (%)

    Table  7.   Performance and computational efficiency comparison of various HAR methods based on indoor human action dataset (%)

    Type             Method         Input          Labeling ratio                                 Computational efficiency
                                                   3%         5%         10%        15%          Params    FLOPs     Model size   Inference time
    Supervised       Bi-LSTM[13]    ST-DF          64.3±3.3   71.5±3.0   73.2±2.9   83.4±2.4     0.37 M    0.13 G    1.27 MB      3.46 ms
                     LH-ViT[18]     ST-DF          79.9±2.3   81.2±2.0   84.5±2.2   88.9±2.1     1.41 M    5.44 G    5.38 MB      5.79 ms
                     MFAFN[29]      ST-DF+ST-R     78.5±2.5   81.9±2.2   83.7±1.9   91.2±1.6     21.79 M   5.14 G    83.21 MB     7.57 ms
    Semisupervised   FixMatch[32]   ST-DF          80.8±2.4   82.3±2.3   85.7±1.8   89.8±1.8     11.17 M   8.95 G    42.70 MB     6.65 ms
                     C-TGAN[21]     ST-DF          82.2±2.8   84.5±2.6   89.9±2.3   92.8±1.9     1.55 M    3.58 G    5.93 MB      2.69 ms
                     MF-Match[22]   ST-DF          82.9±2.3   85.7±2.2   90.5±1.8   93.8±1.6     22.52 M   8.96 G    86.06 MB     6.89 ms
                     LW-HGR[30]     ST-DF+ST-R     77.8±2.2   82.1±2.1   85.6±1.9   90.6±1.8     0.21 M    9.62 M    0.81 MB      2.51 ms
    Proposed method                 Multi-domain   83.8±2.4   91.3±1.9   93.2±1.6   95.6±1.4     1.30 M    26.16 M   5.01 MB      3.09 ms
    Note: Bold values indicate the best results.
  • [1] LI Xinyu, HE Yuan, and JING Xiaojun. A survey of deep learning-based human activity recognition in radar[J]. Remote Sensing, 2019, 11(9): 1068. doi: 10.3390/rs11091068.
    [2] 丁一鹏, 厍彦龙. 穿墙雷达人体动作识别技术的研究现状与展望[J]. 电子与信息学报, 2022, 44(4): 1156–1175. doi: 10.11999/JEIT211051.

    DING Yipeng and SHE Yanlong. Research status and prospect of human movement recognition technique using through-wall radar[J]. Journal of Electronics & Information Technology, 2022, 44(4): 1156–1175. doi: 10.11999/JEIT211051.
    [3] JARAMILLO I E, JEONG J G, LOPEZ P R, et al. Real-time human activity recognition with IMU and encoder sensors in wearable exoskeleton robot via deep learning networks[J]. Sensors, 2022, 22(24): 9690. doi: 10.3390/s22249690.
    [4] WANG Zhengwei, SHE Qi, and SMOLIC A. ACTION-Net: Multipath excitation for action recognition[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 13214–13223. doi: 10.1109/CVPR46437.2021.01301.
    [5] GUAN Qiuju, YIN Xuguang, GUO Xuemei, et al. A novel infrared motion sensing system for compressive classification of physical activity[J]. IEEE Sensors Journal, 2016, 16(8): 2251–2259. doi: 10.1109/JSEN.2016.2514606.
    [6] 杨小鹏, 高炜程, 渠晓东. 基于微多普勒角点特征与Non-Local机制的穿墙雷达人体步态异常终止行为辨识技术[J]. 雷达学报(中英文), 2024, 13(1): 68–86. doi: 10.12000/JR23181.

    YANG Xiaopeng, GAO Weicheng, and QU Xiaodong. Human anomalous gait termination recognition via through-the-wall radar based on micro-Doppler corner features and Non-Local mechanism[J]. Journal of Radars, 2024, 13(1): 68–86. doi: 10.12000/JR23181.
    [7] SONG Yongkun, DAI Yongpeng, JIN Tian, et al. Dual-task human activity sensing for pose reconstruction and action recognition using 4-D imaging radar[J]. IEEE Sensors Journal, 2023, 23(19): 23927–23940. doi: 10.1109/JSEN.2023.3308788.
    [8] AMIN M G, ZHANG Y D, AHMAD F, et al. Radar signal processing for elderly fall detection: The future for in-home monitoring[J]. IEEE Signal Processing Magazine, 2016, 33(2): 71–80. doi: 10.1109/MSP.2015.2502784.
    [9] 金添, 李志, 戴永鹏, 等. RIS-4D生物雷达多人体定位与生命体征监测[J]. 信号处理, 2024, 40(2): 225–235. doi: 10.16798/j.issn.1003-0530.2024.02.001.

    JIN Tian, LI Zhi, DAI Yongpeng, et al. Multi-subject localization and vital-sign monitoring with RIS-4D bioradar[J]. Journal of Signal Processing, 2024, 40(2): 225–235. doi: 10.16798/j.issn.1003-0530.2024.02.001.
    [10] KIM Y and LING Hao. Human activity classification based on micro-Doppler signatures using a support vector machine[J]. IEEE Transactions on Geoscience and Remote Sensing, 2009, 47(5): 1328–1337. doi: 10.1109/TGRS.2009.2012849.
    [11] CHOWDHURY A, DAS T, RANI S, et al. Activity recognition using ultra wide band range-time scan[C]. 2020 28th European Signal Processing Conference (EUSIPCO), Amsterdam, Netherlands, 2021: 1338–1342. doi: 10.23919/Eusipco47968.2020.9287598.
    [12] WANG Mingyang, ZHANG Y D, and CUI Guolong. Human motion recognition exploiting radar with stacked recurrent neural network[J]. Digital Signal Processing, 2019, 87: 125–131. doi: 10.1016/j.dsp.2019.01.013.
    [13] SHRESTHA A, LI Haobo, LE KERNEC J, et al. Continuous human activity classification from FMCW radar with Bi-LSTM networks[J]. IEEE Sensors Journal, 2020, 20(22): 13607–13619. doi: 10.1109/JSEN.2020.3006386.
    [14] NGUYEN N, NGUYEN T, PHAM M, et al. Improving human activity classification based on micro-Doppler signatures separation of FMCW radar[C]. 2023 12th International Conference on Control, Automation and Information Sciences (ICCAIS), Hanoi, Vietnam, 2023: 454–459. doi: 10.1109/ICCAIS59597.2023.10382332.
    [15] 蒋留兵, 魏光萌, 车俐. 基于卷积神经网络的雷达人体动作识别方法[J]. 计算机应用与软件, 2019, 36(11): 168–174,234. doi: 10.3969/j.issn.1000-386x.2019.11.028.

    JIANG Liubing, WEI Guangmeng, and CHE Li. Human motion recognition method by radar based on CNN[J]. Computer Applications and Software, 2019, 36(11): 168–174,234. doi: 10.3969/j.issn.1000-386x.2019.11.028.
    [16] ZHENG Zhijie, PAN Jun, NI Zhikang, et al. Human posture reconstruction for through-the-wall radar imaging using convolutional neural networks[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 3505205. doi: 10.1109/LGRS.2021.3073073.
    [17] 渠晓东, 王文远, 孟昊宇, 等. 基于鲁棒主成分分析及YOLOv8的穿墙雷达运动人员检测方法[J]. 信号处理, 2025, 41(8): 1390–1403. doi: 10.12466/xhcl.2025.08.008.

    QU Xiaodong, WANG Wenyuan, MENG Haoyu, et al. Moving human detection method in through-the-wall radar based on robust principal component analysis and YOLOv8[J]. Journal of Signal Processing, 2025, 41(8): 1390–1403. doi: 10.12466/xhcl.2025.08.008.
    [18] HUAN Sha, WANG Zhaoyue, WANG Xiaoqiang, et al. A lightweight hybrid vision transformer network for radar-based human activity recognition[J]. Scientific Reports, 2023, 13(1): 17996. doi: 10.1038/s41598-023-45149-5.
    [19] WU Yizhuo, FIORANELLI F, and GAO Chang. RadMamba: Efficient human activity recognition through radar-based micro-Doppler-oriented Mamba state-space model[EB/OL]. https://arxiv.org/abs/2504.12039, 2025.
    [20] LI Xinyu, HE Yuan, FIORANELLI F, et al. Semisupervised human activity recognition with radar micro-Doppler signatures[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5103112. doi: 10.1109/TGRS.2021.3090106.
    [21] LIU Li, WANG Shengyao, SONG Chenyan, et al. Radar-based human motion recognition using semisupervised triple-GAN[J]. IEEE Sensors Journal, 2023, 23(24): 30691–30702. doi: 10.1109/JSEN.2023.3327963.
    [22] YUN Tianhe and WANG Zhangang. MF-Match: A semi-supervised model for human action recognition[J]. Sensors, 2024, 24(15): 4940. doi: 10.3390/s24154940.
    [23] SADEGHI-ADL Z and AHMAD F. Semi-supervised convolutional autoencoder with attention mechanism for activity recognition[C]. 2023 31st European Signal Processing Conference, Helsinki, Finland, 2023: 785–789. doi: 10.23919/EUSIPCO58844.2023.10289719.
    [24] PINYOANUNTAPONG E, ALI A, JAKKALA K, et al. GaitSADA: Self-aligned domain adaptation for mmWave gait recognition[C]. 2023 IEEE 20th International Conference on Mobile Ad Hoc and Smart Systems (MASS), Toronto, Canada, 2023: 218–226. doi: 10.1109/MASS58611.2023.00034.
    [25] XU Hang, LI Yong, LI Yingxin, et al. Through-wall human motion recognition using random code radar sensor with multi-domain feature fusion[J]. IEEE Sensors Journal, 2022, 22(15): 15123–15132. doi: 10.1109/JSEN.2022.3183292.
    [26] XU Hang, LI Yong, DONG Qingran, et al. Random code radar with range-time-frequency points and improved PointConv network for through-wall human action recognition[J]. IEEE Sensors Journal, 2025, 25(8): 13719–13728. doi: 10.1109/JSEN.2025.3548121.
    [27] DING Chuanwei, ZHANG Li, CHEN Haoyu, et al. Sparsity-based human activity recognition with PointNet using a portable FMCW radar[J]. IEEE Internet of Things Journal, 2023, 10(11): 10024–10037. doi: 10.1109/JIOT.2023.3235808.
    [28] DING Wen, GUO Xuemei, and WANG Guoli. Radar-based human activity recognition using hybrid neural network model with multidomain fusion[J]. IEEE Transactions on Aerospace and Electronic Systems, 2021, 57(5): 2889–2898. doi: 10.1109/TAES.2021.3068436.
    [29] CAO Lin, LIANG Song, ZHAO Zongmin, et al. Human activity recognition method based on FMCW radar sensor with multi-domain feature attention fusion network[J]. Sensors, 2023, 23(11): 5100. doi: 10.3390/s23115100.
    [30] WU Yajie, WANG Xiang, GUO Shisheng, et al. A lightweight network with multifeature fusion for mmWave radar-based hand gesture recognition[J]. IEEE Sensors Journal, 2024, 24(12): 19553–19561. doi: 10.1109/JSEN.2024.3395638.
    [31] BU Yuqing, WANG Xiang, ZHANG Bo, et al. Multidomain fusion method for human head movement recognition[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 2504608. doi: 10.1109/TIM.2023.3238750.
    [32] SOHN K, BERTHELOT D, LI Chunliang, et al. FixMatch: Simplifying semi-supervised learning with consistency and confidence[C]. The 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, 2020: 51.
    [33] BLUM A and MITCHELL T. Combining labeled and unlabeled data with co-training[C]. The Eleventh Annual Conference on Computational Learning Theory, Madison, USA, 1998: 92–100. doi: 10.1145/279943.279962.
    [34] XU Yi, SHANG Lei, YE Jinxing, et al. Dash: Semi-supervised learning with dynamic thresholding[C]. The 38th International Conference on Machine Learning, Virtual Event, 2021: 11525–11536.
    [35] ZHU Rui, LV Songlin, WANG Zikang, et al. Bi-CoG: Bi-consistency-guided self-training for vision-language models[EB/OL]. https://arxiv.org/abs/2510.20477, 2025.
    [36] FIORANELLI F, SHAH S A, LI H, et al. Radar signatures of human activities[DS/OL]. University of Glasgow. https://doi.org/10.5525/gla.researchdata.848, 2019.
Figures: 9 / Tables: 7
Publication history
  • Received: 2025-11-04
