Single-channel Ultrawideband Radar Human Pose-incremental Estimation Technology

LI Kemeng, DAI Yongpeng, SONG Yongping, ZHOU Xiaolong, SONG Yongkun, JIN Tian

Citation: LI Kemeng, DAI Yongpeng, SONG Yongping, et al. Single-channel ultrawideband radar human pose-incremental estimation technology[J]. Journal of Radars, in press. doi: 10.12000/JR24109


DOI: 10.12000/JR24109
Funds: The National Natural Science Foundation of China (61971430)
    About the authors:

    LI Kemeng: Ph.D. candidate. His main research interests include ultrawideband radar human pose estimation, machine learning, and artificial intelligence.

    DAI Yongpeng: Ph.D., lecturer. His main research interests include MIMO array radar imaging and image enhancement.

    SONG Yongping: Ph.D., lecturer. His main research interests include MIMO radar imaging, radar target detection, and radar anti-jamming.

    ZHOU Xiaolong: Ph.D. candidate. His main research interests include ultrawideband radar human pose estimation, machine learning, and artificial intelligence.

    SONG Yongkun: Ph.D., lecturer. His main research interests include MIMO radar imaging, radar signal processing, and machine learning.

    JIN Tian: Ph.D., professor. His main research interests include novel radar systems, intelligent perception, and processing.

    Corresponding authors:

    DAI Yongpeng, dai_yongpeng@nudt.edu.cn

    JIN Tian, tianjin@nudt.edu.cn

  • Corresponding Editor: CHEN Yan
  • CLC classification: TN958.95

  • Abstract: This paper addresses human pose estimation with fused optical and radar sensors. Based on the physical correspondence between continuous-time micro-motion accumulation and pose increments, it proposes a single-channel ultrawideband (UWB) radar human pose-increment estimation scheme. Specifically, a spatiotemporal step-by-step increment estimation network is constructed, in which spatial pseudo-3D (P3D) convolution layers and temporal dilated convolution (TDC) layers extract spatiotemporal micro-motion features step by step and map them to the human pose increment over a time interval; combined with an initial pose provided by the optical sensor, three-dimensional human pose estimation is achieved. Results on measured data show that the fused pose estimation attains a 5.38 cm estimation error on an in-situ action set and supports continuous pose estimation over a period of walking. Comparisons with other radar pose estimation methods and ablation experiments demonstrate the advantages of the proposed method.
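
To make the abstract's pipeline concrete, the sketch below outlines the spatiotemporal step-by-step idea in PyTorch: P3D blocks factor a 3D convolution into spatial and temporal parts, temporal dilated convolutions widen the receptive field, and a linear head regresses the per-interval pose increment. The class names (P3DBlock, IncrementNet), layer widths, input shapes, and the 14-joint output layout are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class P3DBlock(nn.Module):
    """Pseudo-3D block: a spatial 1x3x3 convolution followed by a temporal
    3x1x1 convolution, factoring a full 3x3x3 convolution (cf. ref [23])."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.temporal(self.relu(self.spatial(x))))

class IncrementNet(nn.Module):
    """Maps a radar clip (batch, 1, T, H, W) to a pose increment of
    14 joints x 3 coordinates over the clip's time interval."""
    def __init__(self, n_joints=14):
        super().__init__()
        self.backbone = nn.Sequential(
            P3DBlock(1, 16), nn.MaxPool3d((1, 2, 2)),
            P3DBlock(16, 32), nn.MaxPool3d((1, 2, 2)),
        )
        # Temporal dilated convolutions: dilations 1/2/4 widen the temporal
        # receptive field without extra parameters per layer (cf. refs [24,25]).
        self.tdc = nn.Sequential(
            nn.Conv1d(32, 64, 3, dilation=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv1d(64, 64, 3, dilation=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv1d(64, 64, 3, dilation=4, padding=4), nn.ReLU(inplace=True),
        )
        self.head = nn.Linear(64, n_joints * 3)

    def forward(self, x):              # x: (B, 1, T, H, W)
        f = self.backbone(x)           # spatial step  -> (B, 32, T, H/4, W/4)
        f = f.mean(dim=(3, 4))         # pool space    -> (B, 32, T)
        f = self.tdc(f).mean(dim=2)    # temporal step -> (B, 64)
        return self.head(f)            # pose increment (B, n_joints*3)

net = IncrementNet()
delta_pose = net(torch.randn(2, 1, 16, 64, 64))  # -> shape (2, 42)
```

Adding delta_pose to the optical initial pose then yields the absolute 3D pose, as the abstract describes.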

     

  • Figure 1. Human micro-motion echo characteristics

    Figure 2. Framework of spatiotemporal step-by-step pose-increment estimation

    Figure 3. Different organizational forms of the P3D block

    Figure 4. Experimental scenario and joint definitions

    Figure 5. Impact of the temporal receptive field on increment estimation for in-situ actions

    Figure 6. Impact of the temporal receptive field on increment estimation for walking

    Figure 7. Visualization of pose estimation results

    Figure 8. Impact of noise on pose estimation

    Figure 9. Iterative estimation results for walking
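
Two relations may help when reading Figs. 5, 6, and 9. First, the temporal receptive field of stacked dilated convolutions grows with the dilation schedule (standard for dilated convolutions, cf. refs [24,25]). Second, the iterative walking estimation of Fig. 9 accumulates radar-estimated increments onto the optical initial pose; the symbols below are our notation, not necessarily the paper's:

```latex
% Receptive field of L stacked temporal dilated convolutions
% with kernel size k and dilations d_1, ..., d_L:
R = 1 + (k - 1) \sum_{l=1}^{L} d_l
% Pose accumulation from the optical initial pose \hat{\mathbf{p}}_0,
% with \Delta\hat{\mathbf{p}}_k the radar-estimated increment over interval k:
\hat{\mathbf{p}}_n = \hat{\mathbf{p}}_0 + \sum_{k=1}^{n} \Delta\hat{\mathbf{p}}_k
```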

    Table 1. Incremental estimation error for in-situ actions (cm)

    Joint            Fall   Punch  Step   Bend   Turn   Average
    Head             7.47   2.23   2.14   10.47  5.99   5.66
    Chest            3.76   3.52   2.24   4.16   4.81   3.70
    Right shoulder   4.72   2.45   2.30   6.60   10.18  5.25
    Right elbow      8.48   3.75   2.69   6.93   9.03   6.18
    Right wrist      6.94   3.33   2.20   6.96   13.99  6.68
    Left shoulder    6.07   3.10   5.74   5.93   8.20   5.81
    Left elbow       6.78   1.96   3.62   6.68   8.52   5.51
    Left wrist       6.75   1.35   5.83   9.11   11.17  6.84
    Right hip        10.18  3.91   2.74   4.90   8.81   6.11
    Right knee       3.42   2.05   2.50   8.59   8.70   5.05
    Right foot       13.64  1.09   2.89   1.42   8.08   5.42
    Left hip         6.95   1.94   1.99   3.58   6.67   4.23
    Left knee        4.48   2.65   3.75   3.67   8.00   4.51
    Left foot        5.67   1.36   2.78   1.55   10.24  4.32
    Average          6.81   2.48   3.10   5.75   8.74   5.37

    Table 2. Comparison between the proposed method and other methods (cm)

    Method       Head   Chest  Shoulder  Elbow  Wrist  Hip    Knee   Ankle  Average
    JGLNet[18]   14.70  9.82   14.10     17.40  24.60  8.90   18.80  21.60  16.24
    KCL[17]      13.60  4.37   6.49      11.70  12.90  1.67   6.86   10.10  8.46
    Ours         5.66   3.67   5.53      5.85   6.76   5.17   4.78   4.87   5.29

    Table 3. Performance and computational cost of each component

    Method    Average (cm)  Δ (%)   Parameters (M)  Inference time (ms)
    2D        6.14          –       85.88           8.61
    P3D       5.82          +5.21   35.48           6.16
    2D-TDC    5.79          +5.70   84.73           9.26
    P3D-TDC   5.38          +12.38  34.33           6.73

    Note: Δ is the relative reduction in average error with respect to the 2D baseline.
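
As a back-of-the-envelope check on why the P3D factorization is cheaper than a full 3D convolution (the design choice examined in Table 3 and Fig. 3), the snippet below counts per-layer weights for C input and C output channels. Network-level totals such as those in Table 3 also depend on channel widths and layer counts, so this is an illustration of the trend, not a reconstruction of the paper's figures.

```python
# Per-layer weight counts (biases ignored) for C -> C channels.
C = 64
full_3d = 3 * 3 * 3 * C * C                    # full 3x3x3 convolution: 27*C^2
p3d = (1 * 3 * 3 + 3 * 1 * 1) * C * C          # P3D: spatial 1x3x3 + temporal 3x1x1 = 12*C^2
print(full_3d, p3d, round(p3d / full_3d, 3))   # 110592 49152 0.444
```

The factorized block thus needs roughly 44% of the weights of the full 3D convolution it approximates.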
  • [1] LI Ming, QIN Hao, HUANG M, et al. RGB-D image-based pose estimation with Monte Carlo localization[C]. 2017 3rd International Conference on Control, Automation and Robotics, Nagoya, Japan, 2017: 109–114. DOI: 10.1109/ICCAR.2017.7942670.
    [2] KHAN A, GUPTA S, and GUPTA S K. Multi-hazard disaster studies: Monitoring, detection, recovery, and management, based on emerging technologies and optimal techniques[J]. International Journal of Disaster Risk Reduction, 2020, 47: 101642. doi: 10.1016/j.ijdrr.2020.101642.
    [3] LU Yong, LV Shaohe, WANG Xiaodong, et al. A survey on WiFi based human behavior analysis technology[J]. Chinese Journal of Computers, 2019, 42(2): 231–251. doi: 10.11897/SP.J.1016.2019.00231. (in Chinese)
    [4] VON MARCARD T, ROSENHAHN B, BLACK M J, et al. Sparse inertial poser: Automatic 3D human pose estimation from sparse IMUs[J]. Computer Graphics Forum, 2017, 36(2): 349–360. doi: 10.1111/cgf.13131.
    [5] DAI Yongpeng, JIN Tian, LI Haoran, et al. Imaging enhancement via CNN in MIMO virtual array-based radar[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(9): 7449–7458. doi: 10.1109/TGRS.2020.3035064.
    [6] JIN Tian, HE Yuan, LI Xinyu, et al. Advances in human activity sensing using ultra-wide band radar[J]. Journal of Electronics & Information Technology, 2022, 44(4): 1147–1155. doi: 10.11999/JEIT211044. (in Chinese)
    [7] ADIB F, HSU C Y, MAO Hongzi, et al. Capturing the human figure through a wall[J]. ACM Transactions on Graphics (TOG), 2015, 34(6): 219. doi: 10.1145/2816795.2818072.
    [8] ZHAO Mingmin, LI Tianhong, ALSHEIKH M A, et al. Through-wall human pose estimation using radio signals[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 7356–7365. DOI: 10.1109/CVPR.2018.00768.
    [9] ZHAO Mingmin, TIAN Yonglong, ZHAO Hang, et al. RF-based 3D skeletons[C]. The 2018 Conference of the ACM Special Interest Group on Data Communication, Budapest, Hungary, 2018: 267–281. DOI: 10.1145/3230543.3230579.
    [10] SENGUPTA A, JIN Feng, ZHANG Renyuan, et al. mm-Pose: Real-time human skeletal posture estimation using mmWave radars and CNNs[J]. IEEE Sensors Journal, 2020, 20(17): 10032–10044. doi: 10.1109/JSEN.2020.2991741.
    [11] YU Cong, ZHANG Dongheng, WU Zhi, et al. RFPose-OT: RF-based 3D human pose estimation via optimal transport theory[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(10): 1445–1457. doi: 10.1631/FITEE.2200550.
    [12] XIE Chunyang, ZHANG Dongheng, WU Zhi, et al. RPM: RF-based pose machines[J]. IEEE Transactions on Multimedia, 2024, 26: 637–649. doi: 10.1109/TMM.2023.3268376.
    [13] XIE Chunyang, ZHANG Dongheng, WU Zhi, et al. RPM 2.0: RF-based pose machines for multi-person 3D pose estimation[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(1): 490–503. doi: 10.1109/TCSVT.2023.3287329.
    [14] SONG Yongkun, JIN Tian, DAI Yongpeng, et al. Through-wall human pose reconstruction via UWB MIMO radar and 3D CNN[J]. Remote Sensing, 2021, 13(2): 241. doi: 10.3390/rs13020241.
    [15] CHEN V C. The Micro-Doppler Effect in Radar[M]. Boston: Artech House, 2011.
    [16] ZHOU Xiaolong, JIN Tian, DAI Yongpeng, et al. MD-Pose: Human pose estimation for single-channel UWB radar[J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2023, 5(4): 449–463. doi: 10.1109/TBIOM.2023.3265206.
    [17] DING Wen, CAO Zhongping, ZHANG Jianxiong, et al. Radar-based 3D human skeleton estimation by kinematic constrained learning[J]. IEEE Sensors Journal, 2021, 21(20): 23174–23184. doi: 10.1109/JSEN.2021.3107361.
    [18] CAO Zhongping, DING Wen, CHEN Rihui, et al. A joint global–local network for human pose estimation with millimeter wave radar[J]. IEEE Internet of Things Journal, 2023, 10(1): 434–446. doi: 10.1109/JIOT.2022.3201005.
    [19] DU Hao, JIN Tian, SONG Yongping, et al. A three-dimensional deep learning framework for human behavior analysis using range-Doppler time points[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(4): 611–615. doi: 10.1109/LGRS.2019.2930636.
    [20] BOULIC R, THALMANN N M, and THALMANN D. A global human walking model with real-time kinematic personification[J]. The Visual Computer, 1990, 6(6): 344–358. doi: 10.1007/BF01901021.
    [21] ZHENG Ce, ZHU Sijie, MENDIETA M, et al. 3D human pose estimation with spatial and temporal transformers[C]. The 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 11636–11645. DOI: 10.1109/ICCV48922.2021.01145.
    [22] FANG Yuming, DING Guanqun, LI Jia, et al. Deep3DSaliency: Deep stereoscopic video saliency detection model by 3D convolutional networks[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2305–2318. doi: 10.1109/TIP.2018.2885229.
    [23] QIU Zhaofan, YAO Ting, and MEI Tao. Learning spatio-temporal representation with pseudo-3d residual networks[C]. The 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 5534–5542. DOI: 10.1109/ICCV.2017.590.
    [24] PAVLLO D, FEICHTENHOFER C, GRANGIER D, et al. 3D human pose estimation in video with temporal convolutions and semi-supervised training[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 7745–7754. DOI: 10.1109/CVPR.2019.00794.
    [25] YU F and KOLTUN V. Multi-scale context aggregation by dilated convolutions[C]. The 4th International Conference on Learning Representations (ICLR), San Juan, Puerto Rico, 2016.
    [26] WANG Panqu, CHEN Pengfei, YUAN Ye, et al. Understanding convolution for semantic segmentation[C]. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, USA, 2018: 1451–1460. DOI: 10.1109/WACV.2018.00163.
Publication history
  • Received: 2024-06-05
  • Revised: 2024-08-14
  • Published online: 2024-09-14
