UWB-HA4D-1.0: An Ultra-Wideband Radar Human Activity 4D Imaging Dataset

JIN Tian, SONG Yongkun, DAI Yongpeng, HU Xikun, SONG Yongping, ZHOU Xiaolong, QIU Zhifeng

JIN Tian, SONG Yongkun, DAI Yongpeng, et al. UWB-HA4D-1.0: An ultra-wideband radar human activity 4D imaging dataset[J]. Journal of Radars, 2022, 11(1): 27–39. doi: 10.12000/JR22008


DOI: 10.12000/JR22008
Funding: National Natural Science Foundation of China (61971430)
    Author biographies:

    JIN Tian (1980–), male, from Hubei, is a professor and doctoral supervisor at the National University of Defense Technology. His research interests include novel radar systems and intelligent sensing and processing. He is a recipient of a National Excellent Doctoral Dissertation award and the URSI Young Scientist Award, was selected for the Ministry of Education's New Century Excellent Talents program, and was named an Outstanding Scientific and Technological Worker by the Chinese Institute of Electronics. He is the lead lecturer of the national-level course "Signal Processing and Systems" and a core member of the national-level teaching team for the signal processing course series. He has published 4 monographs, 1 translated book, and 1 textbook, authored more than 100 papers, and holds more than 10 national invention patents. His awards include one provincial/ministerial first prize and two second prizes for scientific and technological progress, and one CIE Natural Science second prize. He is a member of the Radar Branch and the Signal Processing Branch of the Chinese Institute of Electronics, serves on the editorial boards of Journal of Radars, Signal Processing, Radar Science and Technology, and Modern Radar, and has repeatedly served as a TPC member or session chair for the APSAR, CIE International Radar, and IET International Radar conferences.

    SONG Yongkun (1993–), male, from Henan, is a Ph.D. student in information and communication engineering at the National University of Defense Technology. His research interests include ultra-wideband radar signal processing and deep learning.

    DAI Yongpeng (1992–), male, from Shandong, is a lecturer (Ph.D.) at the College of Electronic Science, National University of Defense Technology. His research interests include MIMO array radar imaging and image enhancement.

    HU Xikun (1994–), male, from Hubei, is a Ph.D. student in information and communication engineering at the National University of Defense Technology. His research interests include remote sensing image processing and deep learning.

    SONG Yongping (1989–), male, from Sichuan, is an assistant research fellow (Ph.D.) at the College of Electronic Science, National University of Defense Technology. His research interests include through-wall detection, MIMO radar imaging, and weak target detection.

    ZHOU Xiaolong (1992–), male, from Jiangxi, is a Ph.D. student in information and communication engineering at the National University of Defense Technology. His research interests include radar signal processing and human activity recognition.

    QIU Zhifeng (1999–), male, from Jiangxi, is a master's student at the College of Electronic Science, National University of Defense Technology. His research interests include radar signal processing and deep learning.

    Corresponding author:

    JIN Tian, tianjin@nudt.edu.cn

  • Corresponding Editor: LI Lianlin
  • Chinese Library Classification (CLC): TN957

  • Abstract: Radar-based human activity sensing systems can penetrate obstacles, which gives them broad application prospects in security, rescue, healthcare, and related fields. In recent years, the rise of deep learning has accelerated the use of radar sensors for human activity perception, while placing higher demands on the sample size and diversity of the associated datasets. This paper releases an ultra-wideband (UWB) radar human activity 4D imaging dataset. Using a UWB multiple-input multiple-output (MIMO) radar as the sensing front end, range-azimuth-height-time 4D activity data were collected from 11 human targets, yielding 2757 groups of data that cover 10 common activities such as walking, waving, and punching, recorded in both through-wall (occluded) and free-space (unoccluded) scenes. The paper describes the dataset's system parameters, production workflow, and data distribution in detail. In addition, human activity recognition experiments were run on the dataset using deep learning algorithms widely applied in computer vision, implemented on the PaddlePaddle platform; the comparative results provide a reference baseline and technical support for researchers who wish to build further studies on this dataset.
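For orientation, the range-azimuth-height frames described above are often collapsed onto 2D planes before being fed to 2D CNNs (as in the three-view projection of Fig. 6). A minimal NumPy sketch; the (time, range, azimuth, height) array layout and the max-projection choice are illustrative assumptions, not the dataset's actual file format:

```python
import numpy as np

def project_frames(sample):
    """Max-project each 3D radar frame onto three orthogonal planes.

    sample: (T, R, A, H) array -- time, range, azimuth, height
    (assumed layout, for illustration only).
    """
    top   = sample.max(axis=3)  # range-azimuth view (collapse height)
    front = sample.max(axis=1)  # azimuth-height view (collapse range)
    side  = sample.max(axis=2)  # range-height view (collapse azimuth)
    return top, front, side

# Toy 4D sequence: 30 frames of a 64 x 32 x 32 voxel image
clip = np.random.rand(30, 64, 32, 32)
top, front, side = project_frames(clip)
print(top.shape, front.shape, side.shape)  # (30, 64, 32) (30, 32, 32) (30, 64, 32)
```

Each projected view is then an image sequence that any off-the-shelf video-classification network can consume.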


  • Figure 1. Three-dimensional UWB MIMO radar system

    Figure 2. Two-dimensional MIMO array

    Figure 3. Data collection and processing flow

    Figure 4. Dataset collection scenes

    Figure 5. Activity types

    Figure 6. Projection of three-dimensional radar images

    Figure 7. TSN structure

    Figure 8. The core structure of the TSM network

    Figure 9. Res3D network structure

    Figure 10. SFN structure

    Figure 11. TSM network test results

    1. Release webpage of the ultra-wideband radar human activity 4D imaging dataset 1.0
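Of the networks in the figures above, TSM's core operation (Fig. 8) is simply shifting a fraction of feature channels along the time axis so that plain 2D convolutions can mix temporal information. A NumPy sketch of that shift, using the 1/8 channel fold of the original TSM paper as an illustrative default:

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """Shift 1/fold_div of the channels one step back in time and
    another 1/fold_div one step forward, zero-padding the boundaries
    (the core idea of the Temporal Shift Module)."""
    t, c, h, w = x.shape                                # (time, channels, height, width)
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                      # first fold: shift left
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]      # second fold: shift right
    out[:, 2 * fold:] = x[:, 2 * fold:]                 # remaining channels untouched
    return out

# Tiny example: 4 time steps, 8 channels, 1x1 spatial map
x = np.arange(4 * 8, dtype=float).reshape(4, 8, 1, 1)
y = temporal_shift(x)
```

In the real network this shift is inserted inside residual blocks at zero FLOPs cost, which is why TSM outperforms TSN at similar complexity in Table 6.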

    Table 1. Radar system parameters

    | Parameter                  | Value                                          |
    |----------------------------|------------------------------------------------|
    | Operating band             | 1.78–2.78 GHz                                  |
    | Signal bandwidth           | 1 GHz                                          |
    | Waveform                   | Stepped-frequency signal                       |
    | Frequency step             | 4 MHz                                          |
    | Pulse repetition frequency | 10 Hz                                          |
    | Antenna elements           | 10 Tx, 10 Rx (MIMO)                            |
    | Transmit power             | 20 dBm (100 mW)                                |
    | System size                | 60 cm × 88 cm                                  |
    | Penetrable media           | Curtain, wood board, plastic, foam, brick wall, etc. |
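The stepped-frequency parameters in Table 1 fix the basic imaging limits. The quick check below applies the standard stepped-frequency relations; these derived numbers are our own back-of-envelope figures, not values quoted by the paper:

```python
C = 3e8           # speed of light (m/s)
BANDWIDTH = 1e9   # 1 GHz synthesized bandwidth (Table 1)
FREQ_STEP = 4e6   # 4 MHz frequency step (Table 1)

range_resolution = C / (2 * BANDWIDTH)   # slant-range resolution
num_steps = int(BANDWIDTH / FREQ_STEP)   # frequency points per sweep
unambiguous_range = C / (2 * FREQ_STEP)  # maximum unambiguous range

print(range_resolution, num_steps, unambiguous_range)  # 0.15 250 37.5
```

So the system resolves roughly 15 cm in range over an unambiguous window of 37.5 m, comfortably covering an indoor scene.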

    Table 2. Dataset collection scene information

    | Scene | Occlusion          | Training set | Test set |
    |-------|--------------------|--------------|----------|
    | S1    | None               | √            | √        |
    | S2    | 3 cm plastic board | ×            | √        |
    | S3    | 27 cm brick wall   | ×            | √        |

    Note: √ = included, × = not included.

    Table 3. The amount of data for each activity (groups)

    | No. | Activity        | S1 train | S1 test | S2 test | S3 test | Total |
    |-----|-----------------|----------|---------|---------|---------|-------|
    | 1   | Opening arms    | 149      | 40      | 40      | 40      | 269   |
    | 2   | Punching        | 155      | 40      | 40      | 40      | 275   |
    | 3   | Sitting still   | 156      | 40      | 40      | 40      | 276   |
    | 4   | Kicking         | 158      | 40      | 40      | 40      | 278   |
    | 5   | Sitting down    | 155      | 40      | 40      | 40      | 275   |
    | 6   | Standing up     | 156      | 40      | 40      | 40      | 276   |
    | 7   | Walking forward | 157      | 40      | 40      | 40      | 277   |
    | 8   | Walking left    | 156      | 40      | 40      | 40      | 276   |
    | 9   | Walking right   | 158      | 40      | 40      | 40      | 278   |
    | 10  | Waving          | 157      | 40      | 40      | 40      | 277   |
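The per-activity counts in Table 3 are internally consistent; a small script to verify the row totals and the overall 2757-group figure quoted in the abstract:

```python
# (S1 train, S1 test, S2 test, S3 test) group counts transcribed from Table 3
COUNTS = {
    "opening arms":    (149, 40, 40, 40),
    "punching":        (155, 40, 40, 40),
    "sitting still":   (156, 40, 40, 40),
    "kicking":         (158, 40, 40, 40),
    "sitting down":    (155, 40, 40, 40),
    "standing up":     (156, 40, 40, 40),
    "walking forward": (157, 40, 40, 40),
    "walking left":    (156, 40, 40, 40),
    "walking right":   (158, 40, 40, 40),
    "waving":          (157, 40, 40, 40),
}

row_totals = {name: sum(c) for name, c in COUNTS.items()}
grand_total = sum(row_totals.values())
print(grand_total)  # 2757, matching the abstract
```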

    Table 4. Human target information

    | Target | Height (cm) | Weight (kg) | S1 | S2 | S3 |
    |--------|-------------|-------------|----|----|----|
    | H1     | 175         | 70          | √  | ×  | ×  |
    | H2     | 172         | 72          | √  | ×  | ×  |
    | H3     | 178         | 68          | √  | ×  | ×  |
    | H4     | 182         | 85          | √  | ×  | ×  |
    | H5     | 170         | 75          | √  | ×  | ×  |
    | H6     | 179         | 74          | √  | √  | √  |
    | H7     | 165         | 60          | √  | ×  | ×  |
    | H8     | 169         | 65          | √  | √  | √  |
    | H9     | 162         | 53          | √  | ×  | ×  |
    | H10    | 186         | 80          | √  | ×  | ×  |
    | H11    | 171         | 67          | √  | ×  | ×  |

    Table 5. Human activity labels

    | Activity ID | Activity type   | Ground-truth label |
    |-------------|-----------------|--------------------|
    | A1          | Opening arms    | 0                  |
    | A2          | Punching        | 1                  |
    | A3          | Sitting still   | 2                  |
    | A4          | Kicking         | 3                  |
    | A5          | Sitting down    | 4                  |
    | A6          | Standing up     | 5                  |
    | A7          | Walking forward | 6                  |
    | A8          | Walking left    | 7                  |
    | A9          | Walking right   | 8                  |
    | A10         | Waving          | 9                  |
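In a data loader, the Table 5 mapping reduces to a dictionary. A minimal sketch; the English activity names are our renderings of the original labels, not identifiers defined by the dataset's files:

```python
# Ground-truth label -> activity type (Table 5)
ID2NAME = {
    0: "opening arms",    1: "punching",     2: "sitting still",
    3: "kicking",         4: "sitting down", 5: "standing up",
    6: "walking forward", 7: "walking left", 8: "walking right",
    9: "waving",
}

# Inverse lookup for converting annotations back to integer targets
NAME2ID = {name: i for i, name in ID2NAME.items()}

print(NAME2ID["waving"])  # 9
```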

    Table 6. Comparison of experimental results

    | Method | Network | S1 accuracy | S2 accuracy | S3 accuracy |
    |--------|---------|-------------|-------------|-------------|
    | 2D CNN | TSN     | 85.75%      | 83.5%       | 60.75%      |
    | 2D CNN | TSM     | 91.50%      | 88.0%       | 73.75%      |
    | 3D CNN | SFN     | 88.00%      | 80.5%       | 70.25%      |
    | 3D CNN | Res3D   | 92.25%      | 90.0%       | 77.00%      |

    Table 7. Activity recognition accuracy of the Res3D network in different scenes (%)

    | Scene | Opening arms | Punching | Sitting still | Kicking | Sitting down | Standing up | Walking forward | Walking left | Walking right | Waving | Average |
    |-------|--------------|----------|---------------|---------|--------------|-------------|-----------------|--------------|---------------|--------|---------|
    | S1    | 90           | 90.0     | 97.5          | 82.5    | 100          | 85.0        | 97.5            | 100          | 100           | 80     | 92.25   |
    | S2    | 85           | 92.5     | 100.0         | 85.0    | 100          | 82.5        | 85.0            | 100          | 100           | 70     | 90.00   |
    | S3    | 90           | 82.5     | 100.0         | 42.5    | 100          | 65.0        | 50.0            | 70           | 100           | 70     | 77.00   |
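The per-scene averages in Table 7 can be reproduced directly from the per-activity accuracies; a quick consistency check:

```python
# Res3D per-activity accuracy (%) per scene, in Table 7 column order
ACC = {
    "S1": [90, 90.0, 97.5, 82.5, 100, 85.0, 97.5, 100, 100, 80],
    "S2": [85, 92.5, 100.0, 85.0, 100, 82.5, 85.0, 100, 100, 70],
    "S3": [90, 82.5, 100.0, 42.5, 100, 65.0, 50.0, 70, 100, 70],
}

averages = {scene: sum(v) / len(v) for scene, v in ACC.items()}
print(averages)  # {'S1': 92.25, 'S2': 90.0, 'S3': 77.0}
```

The averages confirm the last column of Table 7 and make the through-wall degradation explicit: kicking and walking forward lose the most accuracy behind the 27 cm brick wall.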
Publication history
  • Received: 2022-01-09
  • Revised: 2022-02-16
  • Published online: 2022-02-24
  • Issue date: 2022-02-28
