Current Issue

Papers
Low-frequency Ultra-WideBand (UWB) radar offers significant advantages in the field of human activity recognition owing to its excellent penetration and resolution. To address the issues of high computational complexity and extensive network parameters in existing action recognition algorithms, this study proposes an efficient and lightweight human activity recognition method using UWB radar based on spatiotemporal point clouds. First, four-dimensional motion data of the human body are collected using UWB radar. A discrete sampling method is then employed to convert the radar images into point cloud representations. Because human activity recognition is a classification problem on time series, this paper combines the PointNet++ network with the Transformer network to propose a lightweight spatiotemporal network. By extracting and analyzing the spatiotemporal features of four-dimensional point clouds, end-to-end human activity recognition is achieved. During the model training process, a multithreshold fusion method is proposed for point cloud data to further enhance the model’s generalization and recognition capabilities. The proposed method is then validated using a public four-dimensional radar imaging dataset and compared with existing methods. The results show that the proposed method achieves a human activity recognition rate of 96.75% while consuming fewer parameters and computational resources, thereby verifying its effectiveness.
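As a hedged illustration of the discrete sampling and multithreshold fusion steps described above, the sketch below converts a radar intensity volume into fixed-size point clouds at several amplitude thresholds and stacks the results. The threshold values, the point budget, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def volume_to_point_cloud(volume, threshold, n_points=1024, rng=None):
    """Keep voxels above `threshold` and subsample to a fixed-size cloud.
    Returns (n_points, 4) rows of (x, y, z, amplitude)."""
    rng = rng or np.random.default_rng(0)
    idx = np.argwhere(volume > threshold)              # (k, 3) voxel coords
    if len(idx) == 0:
        return np.zeros((n_points, 4))
    amp = volume[tuple(idx.T)][:, None]                # per-point amplitude
    pts = np.hstack([idx.astype(float), amp])          # (k, 4)
    sel = rng.choice(len(pts), n_points, replace=len(pts) < n_points)
    return pts[sel]

def multithreshold_fusion(volume, thresholds=(0.2, 0.4, 0.6)):
    """Sample the same frame at several thresholds and stack the clouds,
    one plausible reading of the multithreshold fusion idea."""
    return np.concatenate([volume_to_point_cloud(volume, t)
                           for t in thresholds])
```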
This study focuses on integrating optical and radar sensors for human pose estimation. Based on the physical correspondence between continuous-time micromotion accumulation and pose increments, a single-channel ultra-wideband radar human pose-incremental estimation scheme is proposed. Specifically, by constructing a spatiotemporal incremental estimation network, using spatiotemporal pseudo-3D convolutional and time-domain-dilated convolutional layers to extract spatiotemporal micromotion features step by step, mapping these features to human pose increments within a time period, and combining them with the initial pose values provided by optics, we can realize 3D pose estimation of the human body. The measured data results show that the fused pose estimation achieves an estimation error of 5.38 cm on the original action set and can achieve continuous pose estimation during walking actions. Comparison and ablation experiments with other radar pose estimation methods demonstrate the advantages of the proposed method.
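A minimal sketch of the pose-incremental idea, under the assumption that the network outputs per-interval 3D joint displacements: the estimated pose trajectory is the optical initial pose plus the accumulated radar-predicted increments. All names and array shapes are illustrative.

```python
import numpy as np

def accumulate_pose(initial_pose, increments):
    """initial_pose: (J, 3) joint positions from the optical sensor;
    increments: (T, J, 3) per-interval displacements predicted from radar.
    Returns the (T, J, 3) pose trajectory."""
    # Pose at time t = optical initial value + sum of increments up to t.
    return initial_pose + np.cumsum(increments, axis=0)
```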
Ultra-WideBand (UWB) radar exhibits strong antijamming capabilities and high penetrability, making it widely used for through-wall human-target detection. Although single-transmitter, single-receiver radar offers the advantages of a compact size and lightweight design, it cannot achieve Two-Dimensional (2D) target localization. Multiple-Input Multiple-Output (MIMO) array radar can localize targets but faces a trade-off between size and resolution and involves longer computation durations. This paper proposes an automatic multitarget detection method based on distributed through-wall radar. First, the echo signal is preprocessed in the time domain and then transformed into the time-frequency domain. Target candidate distance cells are identified using a constant false alarm rate detection method, and candidate signals are enhanced using a filtering matrix. The enhanced signals are then correlated based on vital information, such as breathing, to achieve target matching. Finally, a positioning module is employed to determine the radar’s location, enabling rapid and automatic detection of the target’s location. To mitigate the effect of occasional errors on the final positioning results, a scene segmentation method is used to achieve 2D localization of human targets in through-wall scenarios. Experimental results demonstrate that the proposed method can successfully detect and localize multiple targets in through-wall scenarios, with a computation duration of 0.95 s based on the measured data. In particular, the method is over four times faster than other methods.
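The candidate-cell detection step names constant false alarm rate detection; below is a minimal cell-averaging CFAR sketch over a 1D range profile. The training/guard window sizes and scaling factor are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def ca_cfar(power, n_train=8, n_guard=2, scale=3.0):
    """Cell-averaging CFAR: flag range cells whose power exceeds `scale`
    times the mean of the surrounding training cells (guard cells
    excluded). Returns the indices of detected cells."""
    detections = []
    for i in range(n_train + n_guard, len(power) - n_train - n_guard):
        lead = power[i - n_guard - n_train : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise = np.mean(np.concatenate([lead, lag]))
        if power[i] > scale * noise:
            detections.append(i)
    return np.array(detections, dtype=int)
```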
Through-wall human pose reconstruction and behavior recognition have enormous potential in fields like intelligent security and virtual reality. However, existing methods for through-wall human sensing often fail to adequately model four-Dimensional (4D) spatiotemporal features and overlook the influence of walls on signal quality. To address these issues, this study proposes an innovative architecture for through-wall human sensing using a 4D imaging radar. The core of this approach is the ST2W-AP fusion network, which is designed using a stepwise spatiotemporal separation strategy. This network overcomes the limitations of mainstream deep learning libraries that currently lack 4D convolution capabilities, which hinders the effective use of multiframe three-Dimensional (3D) voxel spatiotemporal domain information. By preserving 3D spatial information and using long-sequence temporal information, the proposed ST2W-AP network considerably enhances the pose estimation and behavior recognition performance. Additionally, to address the influence of walls on signal quality, this paper introduces a deep echo domain compensator that leverages the powerful fitting performance and parallel output characteristics of deep learning, thereby reducing the computational overhead of traditional wall compensation methods. Extensive experimental results demonstrate that compared with the best existing methods, the ST2W-AP network reduces the average joint position error by 33.57% and improves the F1 score for behavior recognition by 0.51%.
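The stepwise spatiotemporal separation strategy can be pictured as factorizing the unavailable 4D convolution into a per-frame 3D convolution followed by a 1D convolution over time. The PyTorch sketch below shows this factorization under illustrative layer sizes and pooling; it conveys the idea and does not reproduce the ST2W-AP design.

```python
import torch
import torch.nn as nn

class FactorizedSpatioTemporal(nn.Module):
    """Per-frame 3D convolution followed by a 1D convolution over time,
    a stand-in for the 4D convolution mainstream libraries lack."""
    def __init__(self, c_in=1, c_mid=16, c_out=32):
        super().__init__()
        self.spatial = nn.Conv3d(c_in, c_mid, kernel_size=3, padding=1)
        self.temporal = nn.Conv1d(c_mid, c_out, kernel_size=3, padding=1)

    def forward(self, x):                    # x: (B, T, C, D, H, W) voxels
        b, t, c, d, h, w = x.shape
        f = self.spatial(x.reshape(b * t, c, d, h, w))  # 3D conv per frame
        f = f.mean(dim=(2, 3, 4)).reshape(b, t, -1)     # pool spatial dims
        return self.temporal(f.transpose(1, 2))         # conv across frames
```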
Unmanned Aerial Vehicle (UAV)-borne radar technology can solve the problems associated with noncontact vital sign sensing, such as limited detection range, slow moving speed, and difficult access to certain areas. In this study, we mount a 4D imaging radar on a multirotor UAV and propose a UAV-borne radar-based method for sensing vital signs through point cloud registration. Through registration and motion compensation of the radar point cloud, motion errors introduced by UAV hovering are eliminated; vital sign signals are then obtained after aligning the human target. Simulation results show that the proposed method can effectively align the 4D radar point cloud sequence and accurately extract the respiration and heartbeat signals of human targets, thereby providing a way to realize UAV-borne vital sign sensing.
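As a hedged stand-in for the registration and motion-compensation stage, the sketch below removes frame-to-frame translation by aligning point cloud centroids to the first frame; a full pipeline would refine this with rigid registration such as ICP. Names are illustrative.

```python
import numpy as np

def compensate_sequence(frames):
    """frames: list of (N_i, 3) point clouds from consecutive radar frames.
    Translate each frame so its centroid matches the first frame's,
    removing hover-induced translational drift before extracting the
    chest-motion signal."""
    ref = frames[0].mean(axis=0)
    return [f - f.mean(axis=0) + ref for f in frames]
```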
Millimeter-wave radar is increasingly being adopted for smart home systems, elder care, and surveillance monitoring, owing to its adaptability to environmental conditions, high resolution, and privacy-preserving capabilities. A key factor in effectively utilizing millimeter-wave radar is the analysis of point clouds, which are essential for recognizing human postures. However, the sparse nature of these point clouds poses significant challenges for accurate and efficient human action recognition. To overcome these issues, we present a 3D point cloud dataset tailored for human actions captured using millimeter-wave radar (mmWave-3DPCHM-1.0). This dataset is enhanced with advanced data processing techniques and cutting-edge human action recognition models. Data collection is conducted using Texas Instruments (TI)’s IWR1443-ISK and Vayyar’s vBlu radio imaging module, covering 12 common human actions, including walking, waving, standing, and falling. At the core of our approach is the Point EdgeConv and Transformer (PETer) network, which integrates edge convolution with transformer models. For each 3D point cloud frame, PETer constructs a locally directed neighborhood graph through edge convolution to extract spatial geometric features effectively. The network then leverages a series of Transformer encoding models to uncover temporal relationships across multiple point cloud frames. Extensive experiments reveal that the PETer network achieves exceptional recognition rates of 98.77% on the TI dataset and 99.51% on the Vayyar dataset, outperforming the best traditional baseline model by approximately 5%. With a compact model size of only 1.09 MB, PETer is well-suited for deployment on edge devices, providing an efficient solution for real-time human action recognition in resource-constrained environments.
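The edge-convolution step can be illustrated with a DGCNN-style local graph feature: for each point, gather its k nearest neighbors and form edge features [x_i, x_j − x_i]. The sketch below renders that construction; the value of k and the feature layout are assumptions, and this is not the PETer implementation.

```python
import numpy as np

def edge_features(points, k=8):
    """points: (N, 3) one radar point cloud frame.
    Returns (N, k, 6) edge features [x_i, x_j - x_i] over a kNN graph."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.argsort(d, axis=1)[:, 1 : k + 1]           # skip self
    neighbors = points[knn]                             # (N, k, 3)
    center = np.repeat(points[:, None, :], k, axis=1)   # (N, k, 3)
    return np.concatenate([center, neighbors - center], axis=-1)
```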
This study proposes a computer vision-assisted millimeter-wave wireless channel simulation method incorporating the scattering characteristics of human motions. The aim is to rapidly and cost-effectively generate a training dataset for wireless human motion recognition, thereby avoiding the laborious and cost-intensive efforts associated with physical measurements. Specifically, the simulation process includes the following steps. First, the human body is modeled as 35 interconnected ellipsoids using a primitive-based model, and motion data of these ellipsoids are extracted from videos of human motion. A simplified ray tracing method is then used to obtain the channel response for each snapshot of the primitive model during the motion process. Finally, Doppler analysis is performed on the channel responses of the snapshots to obtain the Doppler spectrograms. The Doppler spectrograms obtained from the simulation can be used to train a deep neural network for real wireless human motion recognition. This study examines the channel simulation and action recognition results for four common human actions (“walking”, “running”, “falling”, and “sitting down”) in the 60 GHz band. Experimental results indicate that the deep neural network trained with the simulated dataset achieves an average recognition accuracy of 73.0% in real-world wireless motion recognition. Furthermore, the recognition accuracy can be increased to 93.75% via unlabeled transfer learning and fine-tuning with a small amount of actual data.
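The Doppler analysis step maps naturally to a short-time Fourier transform of the complex channel-response sequence. The sketch below produces a Doppler spectrogram in dB; the slow-time sampling rate and window length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

def doppler_spectrogram(channel_response, fs=1000.0, nperseg=128):
    """channel_response: complex (T,) snapshot sequence sampled at the
    slow-time rate fs (Hz). Returns Doppler frequencies, frame times,
    and the spectrogram in dB, with zero Doppler centered."""
    f, t, z = stft(channel_response, fs=fs, nperseg=nperseg,
                   return_onesided=False)
    z = np.fft.fftshift(z, axes=0)
    return np.fft.fftshift(f), t, 20 * np.log10(np.abs(z) + 1e-12)
```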
Sleep Apnea Hypopnea Syndrome (SAHS) is a common chronic sleep-related breathing disorder that affects individuals’ sleep quality and physical health. This article presents a sleep apnea and hypopnea detection framework based on multisource signal fusion. Integrating millimeter-wave radar micromotion signals and pulse wave signals from PhotoPlethysmoGraphy (PPG) achieves a highly reliable and light-contact diagnosis of SAHS, addressing the drawbacks of traditional medical methods that rely on PolySomnoGraphy (PSG) for sleep monitoring, such as poor comfort and high costs. This study used a radar and pulse wave data preprocessing algorithm to extract time-frequency information and artificial features from the signals, balancing the accuracy and robustness of sleep-breathing abnormality event detection. Additionally, a deep neural network was designed to fuse the two types of signals for precise identification of sleep apnea and hypopnea events, and to estimate the Apnea-Hypopnea Index (AHI) for quantitative assessment of sleep-breathing abnormality severity. Experimental results on a clinical trial dataset from Shanghai Jiaotong University School of Medicine Affiliated Sixth People’s Hospital demonstrated that the AHI estimated by the proposed approach correlates with the gold-standard PSG with a coefficient of 0.93, indicating good consistency. This approach is a promising tool for home sleep-breathing monitoring and preliminary diagnosis of SAHS.
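For the quantitative severity measure, the Apnea-Hypopnea Index is by definition the number of apnea/hypopnea events per hour of sleep; a minimal computation is shown below, with the event count supplied by the detection network.

```python
def apnea_hypopnea_index(n_events, sleep_seconds):
    """AHI = detected apnea/hypopnea events per hour of sleep."""
    return n_events / (sleep_seconds / 3600.0)

# e.g. 45 detected events over 6 hours of sleep -> AHI = 7.5 events/hour
```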
In recent years, there has been increasing interest in respiratory monitoring in multiperson environments and in simultaneously monitoring the health status of multiple people. Among the algorithms developed for multiperson respiratory detection, blind source separation algorithms have attracted the attention of researchers because they require no prior information and are less dependent on hardware performance. However, in the context of multiperson respiratory monitoring, current blind source separation algorithms usually treat phase signals as the source signals. This article compares the distance-dimension and phase signals under Frequency-Modulated Continuous-Wave (FMCW) radar, calculates the approximation error associated with using the phase signal as the source signal, and verifies the separation effect through simulations; the results indicate that the distance-dimension signal is the better choice of source signal. In addition, this article proposes a multiperson respiratory signal separation algorithm based on noncircular complex independent component analysis and analyzes the impact of different respiratory signal parameters on the separation effect. Simulation and experimental measurements show that the proposed method is suitable for detecting multiperson respiratory signals under controlled conditions and can accurately separate respiratory signals when the angular separation of the two targets relative to the radar is 9.46°.
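Because common libraries offer no noncircular complex ICA, the sketch below uses real-valued FastICA from scikit-learn as a stand-in to illustrate the blind-source-separation shape of the problem: mixed slow-time signals in, estimated respiration sources out. It is not the authors' algorithm.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_respiration(mixtures, n_sources=2):
    """mixtures: (T, n_channels) real-valued slow-time signals taken from
    several range cells. Returns (T, n_sources) estimated sources."""
    ica = FastICA(n_components=n_sources, random_state=0)
    return ica.fit_transform(mixtures)
```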
In noncontact radar vital sign monitoring, frequency-modulated radars (such as Frequency-Modulated Continuous-Wave (FMCW) and Ultra-WideBand (UWB)) are more effective than Continuous Wave (CW) radars at distinguishing targets from clutter in terms of distance. Using the range Fourier transform, heartbeat and breathing signals can be extracted from quasi-static targets across various distance intervals, thereby improving monitoring accuracy. However, the commonly used range Fast Fourier Transform (FFT) presents certain limitations: the breathing amplitude of the subject may cross range bin boundaries, compromising signal integrity, while breathing movements can cause amplitude modulation of physiological signals, hindering waveform recovery. To address these limitations, we propose an algorithm architecture featuring range tap reconstruction and dynamic demodulation. We tested the algorithm performance in simulations and experiments for cross-range-bin cases. Simulation results indicate that processing signals crossing range bins with our algorithm improves the signal-to-noise ratio by 17±5 dB. In addition, experiments recorded Doppler Heartbeat Diagram (DHD) signals from eight subjects, comparing the consistency between the DHD signals and the BallistoCardioGram (BCG). The root mean square error of the C-C interval in the DHD signal relative to the J-J interval in the BCG signal was 21.58±13.26 ms (3.40%±2.08%).
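The baseline pipeline that this work improves on, a range FFT followed by phase extraction from a chosen range bin, can be sketched as follows; the cross-bin failure mode discussed above arises when chest motion straddles the boundary of `target_bin`. Variable names are illustrative.

```python
import numpy as np

def range_phase(iq_frames, target_bin):
    """iq_frames: complex (n_chirps, n_samples) raw data, one row per chirp.
    FFT each chirp into range bins, then unwrap the phase of the chosen
    bin to recover chest displacement over slow time."""
    range_profile = np.fft.fft(iq_frames, axis=1)   # slow time x range
    return np.unwrap(np.angle(range_profile[:, target_bin]))
```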
Recent research on radar-based human activity recognition has typically focused on activities that move toward or away from radar in radial directions. Conventional Doppler-based methods can barely describe the true characteristics of nonradial activities, especially static postures or tangential activities, resulting in a considerable decline in recognition performance. To address this issue, a method for recognizing tangential human postures based on sequential images of a Multiple-Input Multiple-Output (MIMO) radar system is proposed. A time sequence of high-quality images is generated to describe the structure of the human body and corresponding dynamic changes, from which spatial and temporal features are extracted to enhance recognition performance. First, a Constant False Alarm Rate (CFAR) algorithm is applied to locate the human target. A sliding window along the slow time axis is then utilized to divide the received signal into sequential frames. Next, a fast Fourier transform and the 2D Capon algorithm are performed on each frame to estimate range, pitch angle, and azimuth angle information, which are fused to create a tangential posture image. These images are concatenated to form a time sequence of tangential posture images. To improve image quality, a modified joint multidomain adaptive threshold-based denoising algorithm is applied to suppress noise and enhance the human body outline and structure. Finally, a Spatio-Temporal-Convolution Long Short-Term Memory (ST-ConvLSTM) network is designed to process the sequential images. In particular, the ConvLSTM cell is used to extract continuous image features by combining the convolution operation with the LSTM cell. Moreover, spatial and temporal attention modules are utilized to emphasize intraframe and interframe focus for improving recognition performance. Extensive experiments show that our proposed method can achieve an accuracy rate of 96.9% in classifying eight typical tangential human postures, demonstrating its feasibility and superiority in tangential human posture recognition.
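The angle estimation step names the 2D Capon algorithm; a minimal 1D Capon (MVDR) spectrum over a uniform linear array with half-wavelength spacing is sketched below to convey the idea. The array geometry, diagonal loading, and angle grid are illustrative assumptions.

```python
import numpy as np

def capon_spectrum(snapshots, n_angles=181, diag_load=1e-3):
    """snapshots: complex (n_antennas, n_snapshots) array data from a
    uniform linear array with half-wavelength spacing.
    Returns the angle grid (deg) and the Capon power spectrum."""
    n_ant = snapshots.shape[0]
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]
    r_inv = np.linalg.inv(r + diag_load * np.eye(n_ant))
    angles = np.linspace(-90.0, 90.0, n_angles)
    p = np.empty(n_angles)
    for i, th in enumerate(angles):
        a = np.exp(-1j * np.pi * np.arange(n_ant) * np.sin(np.radians(th)))
        p[i] = 1.0 / np.real(a.conj() @ r_inv @ a)   # MVDR output power
    return angles, p
```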
Amidst the global aging trend and a growing emphasis on healthy living, there is an increased demand for unobtrusive home health monitoring systems. However, the current mainstream detection methods in this regard suffer from low privacy trust, poor electromagnetic compatibility, and high manufacturing costs. To address these challenges, this paper introduces a noncontact vital sign collection device using Ultrasonic radar (U-Sodar), comprising a hardware set based on a three-transmitter, four-receiver Multiple-Input Multiple-Output (MIMO) architecture and a set of signal processing algorithms. The U-Sodar local oscillator uses frequency division technology with low phase noise and high detection accuracy; the receiver employs front-end direct sampling technology to simplify the structure and effectively reduce external noise; and the transmitter uses an adjustable PWM direct drive to emit various ultrasonic waveforms, giving the system software-defined ultrasonic characteristics. The U-Sodar signal processing algorithm adopts a graph processing technique based on the signal chord length, accurately recovering the signal phase at a 5 dB Signal-to-Noise Ratio (SNR) through image filtering and reconstruction. Experimental tests on the U-Sodar system demonstrated its anti-interference and penetration capabilities, showing that ultrasonic penetration relies on material porosity rather than on intermedium vibration conduction. The minimum measurable displacement for a given SNR with correct demodulation probability is also derived. The results of actual human vital sign measurement experiments indicate that U-Sodar can accurately measure respiration and heartbeat at 3.0 m and 1.5 m, respectively, and that heartbeat waveforms can be measured within 1.0 m. Overall, the experimental results demonstrate the feasibility and application potential of U-Sodar in noncontact vital sign detection.
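A hedged note on the displacement sensitivity discussed above: in ultrasonic (and radar) vibrometry, a two-way phase change Δφ corresponds to a displacement of Δφ·λ/(4π). The sketch below applies this standard relation; the 40 kHz carrier and the speed of sound in air are illustrative assumptions, not U-Sodar's parameters.

```python
import numpy as np

def phase_to_displacement(phase_rad, f_carrier=40e3, c=343.0):
    """Convert demodulated two-way phase (rad) to displacement (m);
    the wavelength is about 8.6 mm at a 40 kHz carrier in air."""
    wavelength = c / f_carrier
    return phase_rad * wavelength / (4 * np.pi)
```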
Since 2010, the utilization of commercial WiFi devices for contact-free respiration monitoring has garnered significant attention. However, existing WiFi-based respiration detection methods are susceptible to constraints imposed by hardware limitations and require the person to directly face the WiFi device. Specifically, signal reflection from the thoracic cavity diminishes when the body is oriented sideways or with the back toward the device, leading to complexities in respiratory monitoring. To mitigate these hardware-associated limitations and enhance robustness, we leveraged the signal-amplifying potential of Intelligent Reflecting Surfaces (IRS) to establish a high-precision respiration detection system. This system capitalizes on IRS technology to manipulate signal propagation within the environment to enhance signal reflection from the body, finally achieving posture-resilient respiratory monitoring. Furthermore, the system can be easily deployed without prior knowledge of antenna placement or environmental intricacies. Compared with conventional techniques, our experimental results validate that this system markedly enhances respiratory monitoring across various postural configurations in indoor environments.
Reviews
Owing to their many advantages, such as simple structure, low transmission power, strong penetration capability, high resolution, and high transmission speed, Ultra-WideBand (UWB) radars have been widely used for detecting life information in various scenarios. To effectively detect life information, the key is to use radar echo information-processing technology to extract the breathing and heartbeat signals of the involved person from UWB radar echoes. This technology is crucial for determining life information in different scenarios, such as obtaining location information, monitoring and preventing diseases, and ensuring personnel safety. Therefore, this paper introduces UWB radar and its classification, electromagnetic scattering mechanisms, and detection principles. It also analyzes the current state of radar echo model construction for breathing and heartbeat signals. The paper then reviews existing methods for extracting breathing and heartbeat signals, including time-domain, frequency-domain, and time-frequency-domain analysis methods. Finally, it summarizes research progress in breathing and heartbeat signal extraction in various scenarios, such as mine rescue, earthquake rescue, medical health, and through-wall detection, as well as the main problems in current research and focus areas for future research.
Human pose estimation holds tremendous potential in fields such as human-computer interaction, motion capture, and virtual reality, making it a focus in human perception research. However, optical image-based pose estimation methods are often limited by lighting conditions and privacy concerns. Therefore, the use of wireless signals that can operate under various lighting conditions and obstructions while ensuring privacy is gaining increasing attention for human pose estimation. Wireless signal-based pose estimation technologies can be categorized into high-frequency and low-frequency methods. These methods differ in their hardware systems, signal characteristics, noise processing, and deep learning algorithm design based on the signal frequency used. This paper highlights research advancements and notable works in human pose reconstruction using millimeter-wave radar, through-wall radar, and WiFi. It analyzes the advantages and limitations of each signal type and explores potential research challenges and future developments in the field.