MMRGait-1.0: A Radar Time-frequency Spectrogram Dataset for Gait Recognition under Multi-view and Multi-wearing Conditions

DU Lan, CHEN Xiaoyang, SHI Yu, XUE Shikun, XIE Meng

Citation: DU Lan, CHEN Xiaoyang, SHI Yu, et al. MMRGait-1.0: A radar time-frequency spectrogram dataset for gait recognition under multi-view and multi-wearing conditions[J]. Journal of Radars, 2023, 12(4): 892–905. doi: 10.12000/JR22227


doi: 10.12000/JR22227
Funds: The National Natural Science Foundation of China (U21B2039)
    Author information:

    DU Lan: Ph.D., professor. Research interests: radar target recognition, radar signal processing, and machine learning.

    CHEN Xiaoyang: master's student. Research interests: radar signal processing and radar gait recognition.

    SHI Yu: Ph.D. student. Research interests: radar target detection and recognition, and cross-domain learning from multi-source information.

    XUE Shikun: master's student. Research interests: radar gait recognition and action recognition.

    XIE Meng: master's student. Research interests: radar signal processing and radar target tracking.

    Corresponding author: DU Lan, dulan@mail.xidian.edu.cn

  • Corresponding Editor: JIN Tian
  • CLC number: TN957

  • Abstract: As a biometric technology, gait recognition is usually treated as a retrieval task in real-world applications. However, limited by the scale of existing radar gait recognition datasets, current research mainly addresses the classification task and is restricted to a single walking view and identical wearing conditions, which limits the application of radar-based gait recognition in practical scenarios. This paper releases a radar gait recognition dataset under multi-view and multi-wearing conditions. Using a millimeter-wave radar, time-frequency spectrogram data were collected from 121 subjects walking along different views under several wearing conditions; each subject was recorded from 8 views, with 10 groups per view, of which 6 groups are in normal wear, 2 groups wear a coat, and 2 groups carry a bag. In addition, this paper proposes a radar gait recognition method based on the retrieval task and evaluates it on the released dataset; the results can serve as baseline performance, enabling further research on this dataset.
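
    The collection protocol above fixes the dataset size. The sketch below enumerates it; the NM/BG/CT group labels follow the query-set names used in Tables 2 and 3, while the 0º entry among the eight views is inferred and the naming is illustrative rather than the dataset's actual on-disk layout.

        # Hypothetical enumeration of the MMRGait-1.0 protocol; label and view
        # names are assumptions for illustration, not the released structure.
        subjects = [f"{i:03d}" for i in range(1, 122)]        # 121 subjects
        views = [0, 30, 45, 60, 90, 300, 315, 330]            # 8 walking views (deg)
        conditions = ([f"NM{i:02d}" for i in range(1, 7)]     # 6 normal-wear groups
                      + ["BG01", "BG02"]                      # 2 bag-carrying groups
                      + ["CT01", "CT02"])                     # 2 coat-wearing groups
        sequences = [(s, v, c) for s in subjects for v in views for c in conditions]
        assert len(sequences) == 121 * 8 * 10                 # 9680 sequences in total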

  • Figure 1. Antenna array distribution of the millimeter-wave radar

    Figure 2. Data collection platform and indoor collection scene

    Figure 3. Schematic diagram of the walking views

    Figure 4. Examples of the three wearing conditions

    Figure 5. Signal processing flow
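
    The processing chain of Figure 5 ends in the time-frequency spectrograms that make up the dataset. The micro-Doppler spectrogram of a walking person is typically obtained by a short-time Fourier transform over the slow-time (chirp-to-chirp) signal of the target range bins; the sketch below illustrates only this step on a synthetic signal, using the chirp timing from Table 1, and is not the authors' exact pipeline.

        import numpy as np
        from scipy.signal import stft

        prf = 1.0 / 78.125e-6                  # chirp repetition frequency (Table 1)
        t = np.arange(255 * 60) / prf          # 60 frames x 255 chirps per frame

        # Synthetic slow-time signal with a sinusoidally modulated micro-Doppler,
        # mimicking limb motion (cadence 1.5 Hz, Doppler swing +/-400 Hz).
        f_m, f_dev = 1.5, 400.0
        slow_time = np.exp(1j * (f_dev / f_m) * np.sin(2 * np.pi * f_m * t))

        # STFT over slow time gives the time-frequency (micro-Doppler) spectrogram.
        f, tau, Z = stft(slow_time, fs=prf, nperseg=256, noverlap=192,
                         return_onesided=False)
        spectrogram_db = 20 * np.log10(np.abs(Z) + 1e-12)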

    Figure 6. Time-frequency spectrograms for eight walking views and three wearing conditions

    Figure 7. Structure of the dataset

    Figure 8. Flowchart of radar gait recognition based on the retrieval task
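
    In the retrieval formulation of Figure 8, gallery and query spectrograms are mapped to embedding vectors, and a query is assigned the identity of its nearest gallery embedding. A minimal sketch of this matching step, assuming L2-normalized features already extracted by the network:

        import numpy as np

        def rank1_identify(gallery_feats, gallery_ids, probe_feats):
            """Nearest-neighbor identification with cosine similarity.
            gallery_feats: (G, D) and probe_feats: (P, D), both L2-normalized."""
            sim = probe_feats @ gallery_feats.T           # (P, G) similarities
            return gallery_ids[np.argmax(sim, axis=1)]    # closest gallery identity

        # Synthetic usage: 4 gallery samples of 2 identities, 2 noisy probes.
        rng = np.random.default_rng(0)
        g = rng.normal(size=(4, 128)); g /= np.linalg.norm(g, axis=1, keepdims=True)
        ids = np.array([0, 0, 1, 1])
        p = g[[1, 3]] + 0.05 * rng.normal(size=(2, 128))
        p /= np.linalg.norm(p, axis=1, keepdims=True)
        print(rank1_identify(g, ids, p))                  # -> [0 1]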

    Figure 9. Framework of the feature extraction network based on the retrieval task
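
    Because the network in Figure 9 is trained for retrieval rather than closed-set classification, a metric-learning objective is needed; a triplet margin loss is a standard choice for this setting. The sketch below uses a linear stub in place of the real backbone and is an assumption about the training objective, not the authors' exact loss configuration.

        import torch
        import torch.nn as nn

        # Stand-in feature extractor; the real model is a deep CNN.
        embed = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))
        criterion = nn.TripletMarginLoss(margin=0.2, p=2)

        spec = lambda: torch.randn(8, 1, 64, 64)          # synthetic spectrograms
        anchor, positive, negative = embed(spec()), embed(spec()), embed(spec())

        loss = criterion(anchor, positive, negative)      # pull same-ID pairs closer,
        loss.backward()                                   # push different-ID pairs apart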

    Figure 10. Spatial attention map calculation process
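
    The spatial attention of Figure 10 (the SA module in the Table 5 ablation) reweights feature maps over the time-frequency plane. One common construction, shown here only as a plausible sketch rather than the authors' exact design, pools channel-wise statistics and passes them through a convolution and a sigmoid:

        import torch
        import torch.nn as nn

        class SpatialAttention(nn.Module):
            """Illustrative CBAM-style spatial attention map."""
            def __init__(self, kernel_size=7):
                super().__init__()
                self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

            def forward(self, x):                      # x: (B, C, H, W)
                avg = x.mean(dim=1, keepdim=True)      # channel-average map
                mx, _ = x.max(dim=1, keepdim=True)     # channel-max map
                attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
                return x * attn                        # reweighted features

        out = SpatialAttention()(torch.randn(4, 32, 16, 16))   # same shape as input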

    Figure 11. Calculation process of the long-short time feature extraction module
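
    The name of the module in Figure 11 (LST in the Table 5 ablation) indicates that temporal features are extracted at both long and short time scales. One plausible realization, offered purely as a sketch under that assumption, runs parallel 1D convolutions with small and large temporal kernels over the per-frame feature sequence and fuses the two branches:

        import torch
        import torch.nn as nn

        class LongShortTimeFeature(nn.Module):
            """Illustrative long/short-time branches; the real module may differ."""
            def __init__(self, channels=64):
                super().__init__()
                self.short = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
                self.long = nn.Conv1d(channels, channels, kernel_size=9, padding=4)

            def forward(self, x):                  # x: (B, C, T) frame features
                return torch.relu(self.short(x)) + torch.relu(self.long(x))

        fused = LongShortTimeFeature()(torch.randn(4, 64, 30))   # (4, 64, 30)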

    Figure 12. Two sets of time-frequency spectrograms of the same person walking with different starting states

    1. Release webpage of the MMRGait-1.0 dataset

    Table 1. Parameter configurations of the radar transmitting waveform

    Parameter                            Value
    Start frequency (GHz)                77
    Chirp repetition period (μs)         78.125
    Frequency modulation slope (MHz/μs)  9.753
    Chirp duration (μs)                  60
    ADC sampling time (μs)               40.96
    Number of ADC samples                512
    Chirps per frame                     255
    Frame period (ms)                    19.922
    Consecutive frames transmitted       60
    Continuous transmission time (s)     1.195
    Idle time (s)                        0.005
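
    Several Table 1 entries are mutually consistent and can be checked directly: the sampled sweep bandwidth is the FM slope times the ADC sampling time, the frame period is 255 chirps times the chirp repetition period, and the continuous transmission time is 60 frames times the frame period. A quick check (range resolution c/2B shown for reference):

        # Consistency checks on the Table 1 waveform parameters.
        slope_mhz_per_us, adc_time_us = 9.753, 40.96
        chirp_period_us, chirps_per_frame, n_frames = 78.125, 255, 60

        bandwidth_mhz = slope_mhz_per_us * adc_time_us      # ~399.5 MHz sampled sweep
        range_res_m = 3e8 / (2 * bandwidth_mhz * 1e6)       # ~0.375 m range resolution

        frame_period_ms = chirps_per_frame * chirp_period_us / 1e3
        assert abs(frame_period_ms - 19.922) < 1e-2         # matches Table 1

        burst_s = n_frames * frame_period_ms / 1e3
        assert abs(burst_s - 1.195) < 1e-3                  # matches Table 1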

    Table 2. Recognition accuracy of different gait recognition methods under multi-view conditions (%)

    Query sample  Method    0º     30º    45º    60º    90º    300º   315º   330º   Mean
    NM05-06       Method 1  40.70  39.53  37.21  38.37  37.21  38.37  36.05  34.88  37.79
                  Method 2  32.39  39.53  53.49  46.51  44.19  43.02  47.67  55.81  45.35
                  Method 3  48.84  60.47  60.47  66.28  55.81  58.14  63.95  53.49  58.43
                  Method 4  73.26  76.74  75.58  74.42  62.79  75.58  69.77  73.26  72.68
                  Method 5  61.63  56.98  55.81  58.14  46.51  60.47  62.79  64.71  58.38
                  Proposed  91.86  94.19  96.51  94.19  91.86  98.84  91.86  96.51  94.48
    BG01-02       Method 1  20.93  29.07  29.07  23.26  32.56  38.82  39.53  30.23  30.43
                  Method 2  36.05  36.05  43.02  44.19  39.53  47.06  43.02  32.56  40.19
                  Method 3  41.86  34.88  46.51  37.21  39.53  49.31  53.49  44.19  43.39
                  Method 4  46.51  54.65  63.95  51.16  43.02  60.00  60.47  59.30  54.88
                  Method 5  52.33  41.86  48.84  53.49  43.02  50.00  47.67  45.35  47.82
                  Proposed  80.23  75.58  77.91  83.72  80.23  83.72  84.88  83.72  81.25
    CT01-02       Method 1  27.91  33.72  36.05  27.91  34.88  33.72  27.91  39.53  32.70
                  Method 2  33.72  32.56  32.56  45.35  36.05  36.05  34.88  34.88  35.76
                  Method 3  37.21  32.56  54.65  44.19  34.88  45.35  45.35  41.86  42.01
                  Method 4  45.35  47.67  55.81  61.63  45.35  47.67  60.47  51.16  51.89
                  Method 5  44.19  41.86  56.98  43.02  34.88  52.33  50.00  50.00  46.66
                  Proposed  74.42  76.74  83.72  84.88  80.23  76.74  82.56  74.42  79.21

    Table 3. Recognition accuracy of different gait recognition methods under cross-view conditions (%)

    Query sample  Method    0º     30º    45º    60º    90º    300º   315º   330º   Mean
    NM05-06       Method 1  27.91  26.41  30.40  28.41  22.92  26.91  28.90  32.72  28.07
                  Method 2  29.57  31.56  40.03  34.22  32.39  32.39  36.05  39.20  34.43
                  Method 3  32.56  35.72  39.89  34.55  28.74  40.70  37.38  34.39  35.49
                  Method 4  32.39  43.85  46.35  39.04  27.24  39.04  43.85  45.68  39.68
                  Method 5  35.22  33.72  39.37  41.69  24.25  36.71  39.37  39.50  36.23
                  Proposed  50.66  64.95  68.11  64.62  50.17  63.62  70.10  66.78  62.38
    BG01-02       Method 1  24.09  24.25  23.75  23.42  21.10  26.55  26.74  24.09  24.25
                  Method 2  29.90  28.57  35.21  31.56  28.41  36.47  38.54  31.39  32.51
                  Method 3  33.22  23.59  31.39  28.24  24.09  33.61  39.04  32.23  30.68
                  Method 4  30.07  35.55  36.88  34.55  21.93  36.30  37.87  36.71  33.73
                  Method 5  41.53  36.88  39.04  33.72  28.24  33.39  39.37  35.71  35.98
                  Proposed  52.84  54.98  61.79  57.64  41.36  60.46  66.11  56.48  56.46
    CT01-02       Method 1  22.42  26.58  26.41  21.93  24.59  23.26  21.76  26.08  24.13
                  Method 2  26.25  25.42  30.73  31.06  25.58  29.24  31.06  26.91  28.28
                  Method 3  26.41  28.07  33.39  29.73  21.10  26.91  34.55  31.40  28.95
                  Method 4  24.92  34.05  37.51  35.38  19.77  30.73  36.55  32.56  31.44
                  Method 5  31.06  36.71  41.53  34.22  23.75  32.89  36.71  35.38  34.03
                  Proposed  50.50  56.15  61.79  57.97  43.86  52.66  57.64  54.15  54.34
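
    In both Tables 2 and 3, the Mean column is the arithmetic average of the eight per-view accuracies, which can be verified row by row; for example, for the proposed method's NM05-06 row of Table 2:

        import numpy as np

        row = np.array([91.86, 94.19, 96.51, 94.19, 91.86, 98.84, 91.86, 96.51])
        print(row.mean())    # ~94.4775, reported as 94.48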

    Table 4. Model complexity of different gait recognition methods

    Method    FLOPs (G)  Parameters (M)
    Method 1  13.68      40.47
    Method 2  2.98       9.96
    Method 3  0.14       0.90
    Method 4  2.35       0.72
    Method 5  2.26       4.33
    Proposed  14.01      21.29
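
    The parameter counts in Table 4 can be reproduced for any PyTorch model by summing tensor sizes (the FLOPs column requires a profiler such as fvcore or thop; only parameter counting is sketched here, on a toy stand-in network rather than one of the compared methods):

        import torch.nn as nn

        def count_params_millions(model: nn.Module) -> float:
            return sum(p.numel() for p in model.parameters()) / 1e6

        net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))
        print(count_params_millions(net))    # 0.00065 M for this toy network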

    Table 5. Recognition accuracy of the ablation studies (%)

    Method           NM     BG     CT     Mean
    Base+HPM         66.28  53.64  55.67  58.53
    Base+LST         80.96  65.99  64.24  70.40
    Base+LST+MSF     90.70  76.60  75.44  80.91
    Base+LST+MSF+SA  94.48  81.25  79.21  84.98
Publication history
  • Received: 2022-11-24
  • Revised: 2023-02-10
  • Published online: 2023-03-06
  • Issue date: 2023-08-28
