Due to the side-looking and coherent imaging mechanisms, feature differences between high-resolution Synthetic Aperture Radar (SAR) images increase when the imaging viewpoint changes considerably, making image registration highly challenging. Traditional registration techniques for high-resolution multiview SAR images mainly face issues such as insufficient keypoint localization accuracy and low matching precision. This work designs an end-to-end high-resolution multiview SAR image registration network to address these challenges. The main contributions of this study are as follows. First, a high-resolution SAR image feature extraction method based on a local pixel offset model is proposed; it introduces a diversity peak loss to guide response weight allocation in the keypoint extraction network and optimizes keypoint coordinates by detecting pixel offsets. Second, a descriptor extraction method is developed based on adaptive adjustment of convolution kernel sampling positions, which utilizes a sparse cross-entropy loss to supervise descriptor matching in the network. Experimental results show that, compared with other registration methods, the proposed algorithm achieves remarkable improvements in high-resolution multiview SAR image registration, with an average error reduction of over 65%, a 3–5-fold increase in the number of correctly matched point pairs, and an average runtime reduction of over 50%.
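The keypoint-coordinate optimization described above can be illustrated with a classical sub-pixel refinement. The paper's network learns the offsets; the sketch below substitutes a simple parabolic peak fit over a synthetic response map, purely to show what refining an integer keypoint by a fractional offset looks like (all names and parameters are illustrative):

```python
import numpy as np

def refine_keypoint(response, y, x):
    # Fit a 1D parabola through the response values on each axis and move
    # the keypoint to the parabola's vertex (classical sub-pixel peak fit).
    def parabolic_offset(f_m, f_0, f_p):
        denom = f_m - 2.0 * f_0 + f_p
        return 0.0 if denom == 0 else 0.5 * (f_m - f_p) / denom

    dy = parabolic_offset(response[y - 1, x], response[y, x], response[y + 1, x])
    dx = parabolic_offset(response[y, x - 1], response[y, x], response[y, x + 1])
    return y + dy, x + dx

# Synthetic keypoint response map whose true peak lies between pixel centers
yy, xx = np.mgrid[0:9, 0:9]
resp = np.exp(-((yy - 4.3) ** 2 + (xx - 3.7) ** 2) / 2.0)
y0, x0 = np.unravel_index(np.argmax(resp), resp.shape)  # integer maximum
ry, rx = refine_keypoint(resp, y0, x0)                  # close to (4.3, 3.7)
```

The learned-offset variant in the paper plays the same role as `parabolic_offset` here: it turns an integer response-map maximum into a fractional coordinate.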
Aiming to address the problems of increased radar jamming in complex electromagnetic environments and the difficulty of accurately estimating a target signal close to a strong jamming signal, this paper proposes a sparse Direction of Arrival (DOA) estimation method based on Riemann averaging under strong intermittent jamming. First, under the extended coprime array data model, Riemann averaging is introduced to suppress the jamming signal by leveraging the property that the target signal is continuously active while the strong jamming signal is only intermittently active. Then, the covariance matrix of the processed data is vectorized to obtain virtual array reception data. Finally, the sparse iterative covariance-based estimation method is employed in the virtual domain to reconstruct the sparse signal and estimate the DOA of the target signal under strong intermittent interference. Simulation results show that the method provides highly accurate DOA estimation for weak target signals whose angles are closely adjacent to strong interference signals, even when the number of signal sources is unknown. Compared with existing subspace and sparse reconstruction algorithms, the proposed algorithm achieves higher estimation accuracy and angular resolution with fewer snapshots and at a lower signal-to-noise ratio.
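The Riemannian averaging step can be sketched with a generic Karcher-mean iteration over Hermitian positive-definite covariance matrices. This is a textbook fixed-point scheme, not necessarily the paper's exact implementation, but it shows why intermittent jamming is suppressed: power present in only some looks enters the average geometrically rather than arithmetically.

```python
import numpy as np

def spd_log(M):
    # Matrix logarithm of a Hermitian positive-definite matrix via eigh
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.conj().T

def spd_exp(M):
    # Matrix exponential of a Hermitian matrix via eigh
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.conj().T

def riemannian_mean(covs, iters=20):
    # Karcher (Riemannian) mean by fixed-point iteration, initialized at
    # the arithmetic mean. Illustrative sketch of the averaging step.
    G = np.mean(covs, axis=0)
    for _ in range(iters):
        G_half = spd_exp(0.5 * spd_log(G))
        G_ihalf = np.linalg.inv(G_half)
        T = np.mean([spd_log(G_ihalf @ C @ G_ihalf) for C in covs], axis=0)
        G = G_half @ spd_exp(T) @ G_half
    return G

# Jammer active in only half of the looks: its power enters geometrically,
# so the Riemannian mean suppresses it relative to the arithmetic mean.
covs = [np.diag([1.0, 1.0]), np.diag([100.0, 1.0])]
G = riemannian_mean(covs)     # jammer entry -> 10 (geometric), vs 50.5 arithmetic
z = G.reshape(-1, order="F")  # vectorized covariance -> virtual array data
```

The vectorization in the last line mirrors the virtual-array step described above; the subsequent sparse reconstruction is not shown.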
Land-sea clutter classification is essential for boosting the target positioning accuracy of skywave over-the-horizon radar. This classification involves discriminating whether each azimuth-range cell in the Range-Doppler (RD) map lies over land or sea. Traditional deep learning methods for this task require extensive, high-quality, and class-balanced labeled samples, leading to long training periods and high costs. In addition, these methods typically use the clutter of a single azimuth-range cell without considering intra-class and inter-class relationships, resulting in poor model performance. To address these challenges, this study analyzes the correlation between adjacent azimuth-range cells and converts land-sea clutter data from Euclidean space into graph data in non-Euclidean space, thereby incorporating sample relationships. We propose Multi-Channel Graph Convolutional Networks (MC-GCN) for land-sea clutter classification. MC-GCN decomposes graph data from a single channel into multiple channels, each containing a single type of edge and a weight matrix. This approach restricts node information aggregation, effectively reducing node attribute misjudgment caused by data heterogeneity. For validation, RD maps from various seasons, times, and detection areas were selected. Based on radar parameters, data characteristics, and sample proportions, we construct an original land-sea clutter dataset containing 12 different scenes and a scarce land-sea clutter dataset containing 36 different configurations. The effectiveness of MC-GCN is confirmed: the approach outperforms state-of-the-art classification methods, achieving a classification accuracy of at least 92%.
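The multi-channel decomposition can be sketched as one GCN layer per edge type, each with its own normalized adjacency and weight matrix. The fusion rule (summing channel outputs) and the toy graph below are assumptions for illustration, not the paper's exact architecture:

```python
import numpy as np

def normalize_adj(A):
    # Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def mc_gcn_layer(X, adjs, weights):
    # Each channel holds a single edge type with its own weight matrix;
    # channel outputs are summed (assumed fusion rule) and passed through ReLU.
    out = sum(normalize_adj(A) @ X @ W for A, W in zip(adjs, weights))
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))        # 4 azimuth-range cells, 8 clutter features
A_az = np.array([[0, 1, 0, 0],     # channel 1: azimuth-adjacent cells
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], float)
A_rg = np.array([[0, 0, 1, 0],     # channel 2: range-adjacent cells
                 [0, 0, 0, 1],
                 [1, 0, 0, 0],
                 [0, 1, 0, 0]], float)
H = mc_gcn_layer(X, [A_az, A_rg],
                 [rng.normal(size=(8, 16)), rng.normal(size=(8, 16))])
```

Restricting each channel to one edge type means a node only aggregates neighbors of one relationship kind per channel, which is the aggregation restriction described above.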
In unobtrusive radar vital sign monitoring, frequency-modulated radars (such as Frequency Modulated Continuous Wave (FMCW) and Ultra WideBand (UWB) radars) are more effective than Continuous Wave (CW) radars at distinguishing targets from clutter in range. Using the range Fourier transform, heartbeat and breathing signals can be extracted from quasi-static targets across various distance intervals, thereby improving monitoring accuracy. However, the commonly used range Fast Fourier Transform (FFT) presents certain limitations: the breathing amplitude of the subject may cross a range bin boundary, compromising signal integrity, while breathing movements can cause amplitude modulation of physiological signals, hindering waveform recovery. To address these issues, we propose an algorithm architecture featuring range tap reconstruction and dynamic demodulation. We tested the algorithm's performance in simulations and experiments for cross-range-bin cases. Simulation results indicate that processing signals crossing range bins with our algorithm improves the signal-to-noise ratio by 17±5 dB. In addition, experiments recorded Doppler Heartbeat Diagram (DHD) signals from eight subjects, comparing their consistency with the BallistoCardioGram (BCG). The root-mean-square error of the C-C interval in the DHD signal relative to the J-J interval in the BCG signal was 21.58±13.26 ms (3.40%±2.08%).
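The range-FFT extraction described above can be sketched as follows: the beat-frequency bin localizes the target, while the phase of that bin tracks sub-wavelength chest motion. The radar parameters are assumed for illustration; the sketch also makes the limitation visible, since a breathing amplitude larger than `range_res` would move the peak across bins mid-recording:

```python
import numpy as np

c, fc, B = 3e8, 79e9, 4e9        # assumed radar parameters, not the paper's
N, frames, fps = 256, 200, 20.0  # fast-time samples, slow-time frames, frame rate
range_res = c / (2 * B)          # 3.75 cm range bin
lam = c / fc

t = np.arange(frames) / fps
disp = 1e-3 * np.sin(2 * np.pi * 0.3 * t)   # 1 mm breathing motion at 0.3 Hz
r = 0.525 + disp                            # target centered in range bin 14

# Beat signal: the bin index tracks range, the phase tracks sub-mm motion
n = np.arange(N)
cube = np.exp(1j * (2 * np.pi * (r[:, None] / range_res) * n[None, :] / N
                    + 4 * np.pi * r[:, None] / lam))
rprof = np.fft.fft(cube, axis=1)
peak = int(np.argmax(np.abs(rprof[0])))      # strongest range bin
phase = np.unwrap(np.angle(rprof[:, peak]))
recovered = phase * lam / (4 * np.pi)        # displacement up to a constant
recovered -= recovered.mean()
```

Here the 1 mm motion stays inside one 3.75 cm bin, so a single bin's phase suffices; the paper's range tap reconstruction targets exactly the case where this assumption fails.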
Imaging of passive jamming objects has been a hot topic in radar imaging and countermeasures research, as it directly affects radar target detection and recognition capabilities. In the microwave band, the long dwell time required to generate a single image with the desired azimuthal resolution makes it difficult to directly distinguish passive jamming objects from imaging results, and time-dimensional resolution is lacking. In comparison, terahertz imaging systems require a shorter synthetic aperture to achieve the same azimuthal resolution, making it easier to obtain low-latency, high-resolution, and high-frame-rate imaging results. Hence, terahertz radar has considerable potential in Video Synthetic Aperture Radar (ViSAR) technology. First, the aperture division and imaging resolutions of airborne terahertz ViSAR are briefly analyzed. Subsequently, imaging results and characteristics of stationary passive jamming objects, such as corner reflector arrays and camouflage mats, are explored before and after motion compensation. Further, the phenomenon that camouflage mats with fluctuating grids exhibit roughness in the terahertz band is demonstrated, revealing the special scattering characteristics of this band. Next, considering rotating corner reflectors as an example of moving passive jamming objects, their characteristics regarding suppressive interference are analyzed. Because stationary scenes exhibit similarity under adjacent apertures, rotating corner reflectors can be directly detected by incoherent image subtraction after inter-frame image and amplitude registration, followed by the extraction of signals of interest and non-parametric compensation. To date, few field experiments on the imaging of passive jamming objects using terahertz ViSAR have been reported. Airborne field experiments have been performed to effectively demonstrate the high-resolution and high-frame-rate imaging capabilities of terahertz ViSAR.
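The detection scheme for rotating corner reflectors, inter-frame registration followed by incoherent subtraction, can be sketched with a simple phase-correlation registration and image difference. This is a minimal stand-in for the paper's registration pipeline, using a synthetic speckle scene:

```python
import numpy as np

def phase_correlation_shift(a, b):
    # Estimate the integer translation of frame b relative to frame a by
    # phase correlation (a common registration step; the actual inter-frame
    # image/amplitude registration is more elaborate).
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
scene = rng.rayleigh(size=(64, 64))           # stationary speckle background
frame1 = scene.copy()
frame2 = np.roll(scene, (2, 3), axis=(0, 1))  # platform-induced image shift
frame2[40, 40] += 10.0                        # "rotating reflector" change

dy, dx = phase_correlation_shift(frame1, frame2)
aligned = np.roll(frame2, (-dy, -dx), axis=(0, 1))
diff = np.abs(aligned - frame1)               # incoherent subtraction
```

After alignment the stationary speckle cancels, and the only large residue in `diff` marks the changing scatterer, which mirrors the change-detection logic described above.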
The miniature multistatic Synthetic Aperture Radar (SAR) system uses a flexible transceiver-separated configuration compared with the miniature monostatic SAR system, thereby affording the advantage of multi-angle imaging. As the transceiver-separated SAR system uses mutually independent oscillator sources, phase synchronization is necessary for high-precision imaging of the miniature multistatic SAR. Although current research on phase synchronization schemes for bistatic SAR is relatively mature, these schemes are primarily based on the pulsed SAR system, and little research exists on phase synchronization for the miniature multistatic Frequency Modulated Continuous Wave (FMCW) SAR. In contrast to pulsed SAR, the FMCW SAR system has no temporal interval between transmitted pulses; consequently, some phase synchronization schemes developed for the pulsed SAR system cannot be directly applied to the FMCW SAR system. To this end, this study proposes a novel phase synchronization method for the miniature multistatic FMCW SAR that effectively resolves this problem. The method uses the generalized Short-Time Shift-Orthogonal (STSO) waveform as the phase synchronization signal of the disparate radar platforms. The phase error between the radar platforms can be effectively extracted through pulse compression to realize phase synchronization. Compared with the conventional linear frequency-modulated waveform, when the generalized STSO waveform is pulse-compressed by the same pulse compression function, the interference signal energy is concentrated away from the peak of the matched signal, enhancing phase synchronization accuracy. Furthermore, the proposed method is adapted to the characteristics of dechirp reception in FMCW miniature multistatic SAR systems, and ground and numerical simulation experiments verify that the method achieves high synchronization accuracy.
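The phase-error extraction via pulse compression can be sketched as follows, with an ordinary linear frequency-modulated chirp standing in for the generalized STSO waveform (whose exact construction is not given here). The oscillator phase offset between platforms appears directly as the phase of the compressed peak:

```python
import numpy as np

fs, T, B = 100e6, 10e-6, 50e6                 # assumed sampling rate, duration, bandwidth
t = np.arange(int(fs * T)) / fs
chirp = np.exp(1j * np.pi * (B / T) * (t - T / 2) ** 2)  # LFM stand-in waveform

rng = np.random.default_rng(2)
true_phase = 1.2345   # unknown oscillator phase offset between platforms
rx = chirp * np.exp(1j * true_phase) + 0.05 * (
    rng.normal(size=t.size) + 1j * rng.normal(size=t.size))

mf = np.correlate(rx, chirp, mode="full")     # pulse compression (matched filter)
peak = int(np.argmax(np.abs(mf)))
est_phase = float(np.angle(mf[peak]))         # phase error at the compressed peak
```

The STSO waveform's advantage claimed above is precisely that, under the same compression function, interfering-signal energy lands away from this peak, so `est_phase` is corrupted less than with a plain LFM exchange.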
Human pose estimation holds tremendous potential in fields such as human-computer interaction, motion capture, and virtual reality, making it a focus in human perception research. However, optical image-based pose estimation methods are often limited by lighting conditions and privacy concerns. Therefore, the use of wireless signals that can operate under various lighting conditions and obstructions while ensuring privacy is gaining increasing attention for human pose estimation. Wireless signal-based pose estimation technologies can be categorized into high-frequency and low-frequency methods. These methods differ in their hardware systems, signal characteristics, noise processing, and deep learning algorithm design based on the signal frequency used. This paper highlights research advancements and notable works in human pose reconstruction using millimeter-wave radar, through-wall radar, and WiFi. It analyzes the advantages and limitations of each signal type and explores potential research challenges and future developments in the field.
In recent years, target recognition systems based on radar sensor networks have been widely studied in the field of automatic target recognition. These systems observe the target from multiple angles to achieve robust recognition, which also raises the problem of how to exploit the correlation and difference information among the echo data of multiple radar sensors. Furthermore, most existing studies use large-scale labeled data to obtain prior knowledge of the target. Considering that a large amount of unlabeled data is not effectively used in target recognition tasks, this paper proposes a High-Resolution Range Profile (HRRP) unsupervised target feature extraction method based on Multiple Contrastive Loss (MCL) in radar sensor networks. The proposed method combines instance-level loss, Fisher loss, and semantic consistency loss constraints to identify consistent and discriminative feature vectors among the echoes of multiple radar sensors, which are then used in subsequent target recognition tasks. Specifically, the original echo data are mapped to the contrastive loss space and the semantic label space. In the contrastive loss space, the contrastive loss is used to constrain the similarity and aggregation of samples so that the relative and absolute distances between different echoes of the same target obtained by different sensors are reduced, while those between echoes of different targets are increased. In the semantic label space, the extracted discriminative features are used to constrain the semantic labels so that the semantic information and discriminative features remain consistent. Experiments on an actual civil aircraft dataset revealed that the target recognition accuracy of the MCL-based method is improved by 0.4% and 1.4% compared with the state-of-the-art unsupervised algorithm CC and the supervised target recognition algorithm PNN, respectively. Further, MCL can effectively improve target recognition performance when multiple radar sensors are used in conjunction.
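The instance-level contrastive constraint can be sketched with a generic InfoNCE-style loss between two "views" of the same targets (e.g. echoes of the same targets from two sensors). This is a common formulation assumed for illustration, not the paper's exact MCL:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    # Instance-level contrastive loss: matching rows of z1/z2 are positives,
    # all other rows act as negatives (generic InfoNCE formulation).
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                    # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                  # features of 8 targets from sensor 1
loss_aligned = info_nce(z, z)                 # sensor 2 agrees target-by-target
loss_mismatched = info_nce(z, np.roll(z, 1, axis=0))  # pairings scrambled
```

Minimizing such a loss pulls same-target echoes from different sensors together and pushes different-target echoes apart, which is the distance behavior described above; the Fisher and semantic consistency terms are additional constraints not sketched here.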
The ionosphere can distort received signals, degrade imaging quality, and decrease the interferometric and polarimetric accuracies of spaceborne Synthetic Aperture Radars (SAR). Low-frequency systems operating at L-band and P-band are particularly susceptible to such problems. From another viewpoint, low-frequency spaceborne SARs can capture ionospheric structures of different spatial scales over the observed scope, and their echo and image data carry sufficient ionospheric information, offering great potential for high-precision, high-resolution ionospheric probing. This paper reviews the research progress of ionospheric probing based on spaceborne SARs. The technological system of this field is summarized from three aspects: mapping of background ionospheric total electron content, tomography of ionospheric electron density, and probing of ionospheric irregularities. The potential of low-frequency spaceborne SARs in mapping both locally refined ionospheric structures and global tendencies is emphasized, and future development directions are discussed.
Passive radar plays an important role in early warning detection and Low Slow Small (LSS) target detection. Because the radiation sources of passive radar are uncontrollable, target characteristics are more complex, making target detection and identification extremely difficult. In this paper, a passive radar LSS detection dataset (LSS-PR-1.0) is constructed, containing the radar echo signals of four typical sea and air targets, namely helicopters, unmanned aerial vehicles, speedboats, and passenger ships, as well as sea clutter data at low and high sea states, providing data support for radar research. For target feature extraction and analysis, a singular-value-decomposition sea clutter suppression method is first adopted to remove the influence of the strong Bragg peak of sea clutter on the target echo. On this basis, ten multi-domain feature extraction and analysis methods in four categories are proposed: time-domain features (relative average amplitude), frequency-domain features (spectral features, Doppler waterfall plots, and range-Doppler features), time-frequency-domain features, and motion features (heading difference, trajectory parameters, speed variation interval, speed variation coefficient, and acceleration). Based on measured data, a comparative analysis of the characteristics of the four types of sea and air targets is conducted, summarizing the patterns of the various target characteristics and laying a foundation for subsequent target recognition.
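The singular-value-decomposition sea clutter suppression step can be sketched as follows: the strong Bragg component dominates the leading singular values of the slow-time × range echo matrix, so zeroing them and reconstructing removes most clutter while preserving a weak target (synthetic data, illustrative parameters):

```python
import numpy as np

def svd_clutter_suppress(echo, n_clutter=1):
    # Zero the dominant singular components (the strong Bragg clutter) and
    # reconstruct the slow-time x range echo matrix from the remainder.
    U, s, Vh = np.linalg.svd(echo, full_matrices=False)
    s[:n_clutter] = 0.0
    return (U * s) @ Vh

rng = np.random.default_rng(3)
m = np.arange(64)                                        # slow-time samples
bragg = 50 * np.outer(np.exp(1j * 2 * np.pi * 0.10 * m), np.ones(32))
target = np.outer(np.exp(1j * 2 * np.pi * 0.37 * m),     # weak target, range bin 10
                  (np.arange(32) == 10).astype(float))
noise = 0.1 * (rng.normal(size=(64, 32)) + 1j * rng.normal(size=(64, 32)))
echo = bragg + target + noise
clean = svd_clutter_suppress(echo, n_clutter=1)
```

How many singular values to zero (`n_clutter`) is a tuning choice; in measured data the Bragg peak may occupy more than one component.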
Amidst the global aging trend and a growing emphasis on healthy living, there is an increased demand for unobtrusive home health monitoring systems. However, the current mainstream detection methods in this regard suffer from low privacy trust, poor electromagnetic compatibility, and high manufacturing costs. To address these challenges, this paper introduces a noncontact vital signal collection device using Ultrasonic radar (U-Sodar), including a set of hardware based on a three-transmitter four-receiver Multiple Input Multiple Output (MIMO) architecture and a set of signal processing algorithms. The U-Sodar local oscillator uses frequency division technology with low phase noise and high detection accuracy; the receiver employs front-end direct sampling technology to simplify the involved structure and effectively reduce external noise, and the transmitter uses an adjustable PWM direct drive to emit various ultrasonic waveforms, possessing software-defined ultrasonic system characteristics. The signal processing algorithm of U-Sodar adopts the graph processing technique of signal chord length and realizes accurate recovery of signal phase under 5 dB Signal-to-noise ratio (SNR) using picture filtering and then reconstruction. Experimental tests on the U-Sodar system demonstrated its anti-interference and penetration capabilities, proving that ultrasonic penetration relies on material porosity rather than intermedium vibration conduction. The minimum measurable displacement for a given SNR with correct demodulation probability is also derived. The results of actual human vital sign signal measurement experiments indicate that U-Sodar can accurately measure respiration and heartbeat at 3.0 m and 1.5 m, respectively, and the heartbeat waveforms can be measured within 1.0 m. Overall, the experimental results demonstrate the feasibility and application potential of U-Sodar in noncontact vital sign detection. 
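The displacement sensing underlying such phase-demodulating systems rests on the standard monostatic relation d = λΔφ/(4π) between round-trip echo phase and target displacement. A minimal sketch; the 40 kHz carrier and the phase samples are assumed for illustration and are not the paper's parameters.

```python
import numpy as np

c = 343.0                       # speed of sound in air at ~20 degC, m/s
f0 = 40e3                       # assumed ultrasonic carrier, Hz
lam = c / f0                    # wavelength, ~8.6 mm

# Demodulated echo phase samples (illustrative values)
phi = np.deg2rad(np.array([0.0, 5.0, 12.0, 8.0]))

# Monostatic round trip: displacement d = lam * phi / (4 * pi)
disp_um = lam * phi / (4 * np.pi) * 1e6   # micrometres
print(disp_um)
```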
Unmanned Aerial Vehicle (UAV)-borne radar technology can solve the problems associated with noncontact vital sign sensing, such as limited detection range, slow moving speed, and difficult access to certain areas. In this study, we mount a 4D imaging radar on a multirotor UAV and propose a UAV-borne radar-based method for sensing vital signs through point cloud registration. Through registration and motion compensation of the radar point cloud, the motion error interference of UAV hovering is eliminated; vital sign signals are then obtained after aligning the human target. Simulation results show that the proposed method can effectively align the 4D radar point cloud sequence and accurately extract the respiration and heartbeat signals of human targets, thereby providing a way to realize UAV-borne vital sign sensing.
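Point-cloud registration for hover-motion compensation can be illustrated with a least-squares rigid alignment (the Kabsch/SVD solution); the paper's actual registration pipeline may differ, and the frames below are simulated.

```python
import numpy as np

def register_rigid(src, dst):
    """Least-squares rigid transform (R, t) aligning src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(1)
frame0 = rng.standard_normal((50, 3))   # reference radar point cloud
# Simulated hover perturbation: small rotation about z plus a translation
a = 0.05
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
frame1 = frame0 @ Rz.T + np.array([0.1, -0.05, 0.02])

R, t = register_rigid(frame1, frame0)   # map the perturbed frame back
aligned = frame1 @ R.T + t
print(np.allclose(aligned, frame0, atol=1e-6))
```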
Due to their many advantages, such as simple structure, low transmission power, strong penetration capability, high resolution, and high transmission speed, UWB (Ultra-Wide Band) radars have been widely used for detecting life information in various scenarios. To effectively detect life information, the key is to use radar echo information–processing technology to extract the breathing and heartbeat signals of the involved person from UWB radar echoes. This technology is crucial for determining life information in different scenarios, such as obtaining location information, monitoring and preventing diseases, and ensuring personnel safety. Therefore, this paper introduces a UWB radar and its classification, electromagnetic scattering mechanisms, and detection principles. It also analyzes the current state of radar echo model construction for breathing and heartbeat signals. The paper then reviews existing methods for extracting breathing and heartbeat signals, including time domain, frequency domain, and time–frequency domain analysis methods. Finally, it summarizes research progress in breathing and heartbeat signal extraction in various scenarios, such as mine rescue, earthquake rescue, medical health, and through-wall detection, as well as the main problems in current research and focus areas for future research.
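The frequency-domain extraction idea common to these methods can be illustrated by picking spectral peaks in separate respiration and heartbeat bands of a demodulated chest-motion signal; the rates, band edges, and sampling rate below are assumed for illustration.

```python
import numpy as np

fs = 50.0
t = np.arange(0, 60, 1 / fs)
# Synthetic chest displacement: respiration at 0.3 Hz, heartbeat at 1.2 Hz (assumed)
x = 5.0 * np.sin(2 * np.pi * 0.3 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)

X = np.abs(np.fft.rfft(x))
f = np.fft.rfftfreq(x.size, 1 / fs)

def peak_in_band(f, X, lo, hi):
    """Frequency of the strongest spectral line inside [lo, hi] Hz."""
    m = (f >= lo) & (f <= hi)
    return f[m][np.argmax(X[m])]

resp = peak_in_band(f, X, 0.1, 0.6)    # respiration band (assumed)
heart = peak_in_band(f, X, 0.8, 2.5)   # heartbeat band (assumed)
print(resp, heart)
```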
Ultra-WideBand (UWB) radar exhibits strong antijamming capabilities and high penetrability, making it widely used for through-wall human-target detection. Although single-transmitter, single-receiver radar offers the advantages of a compact size and lightweight design, it cannot achieve Two-Dimensional (2D) target localization. Multiple-Input Multiple-Output (MIMO) array radar can localize targets but faces a trade-off between size and resolution and involves longer computation durations. This paper proposes an automatic multitarget detection method based on distributed through-wall radar. First, the echo signal is preprocessed in the time domain and then transformed into the time-frequency domain. Target candidate distance cells are identified using a constant false alarm rate detection method, and candidate signals are enhanced using a filtering matrix. The enhanced signals are then correlated based on vital information, such as breathing, to achieve target matching. Finally, a positioning module is employed to determine the radar’s location, enabling rapid and automatic detection of the target’s location. To mitigate the effect of occasional errors on the final positioning results, a scene segmentation method is used to achieve 2D localization of human targets in through-wall scenarios. Experimental results demonstrate that the proposed method can successfully detect and localize multiple targets in through-wall scenarios, with a computation duration of 0.95 s based on the measured data. In particular, the method is over four times faster than other methods.
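The constant false alarm rate detection step can be sketched with a basic cell-averaging CFAR over range cells; the training/guard sizes and threshold scale below are illustrative, not the paper's settings.

```python
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, scale=8.0):
    """Cell-averaging CFAR: flag cells exceeding scale x the mean of
    training cells on both sides of a guard interval."""
    n = power.size
    hits = np.zeros(n, dtype=bool)
    for i in range(n):
        left = power[max(0, i - n_guard - n_train): max(0, i - n_guard)]
        right = power[i + n_guard + 1: i + n_guard + 1 + n_train]
        train = np.concatenate([left, right])
        if train.size and power[i] > scale * train.mean():
            hits[i] = True
    return hits

rng = np.random.default_rng(2)
power = rng.exponential(1.0, 200)   # exponential noise floor (power domain)
power[60] += 40.0                   # injected target
power[140] += 35.0                  # injected target
detections = np.flatnonzero(ca_cfar(power))
print(detections)
```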
Since 2010, the utilization of commercial WiFi devices for contact-free respiration monitoring has garnered significant attention. However, existing WiFi-based respiration detection methods are susceptible to constraints imposed by hardware limitations and require the person to directly face the WiFi device. Specifically, signal reflection from the thoracic cavity diminishes when the body is oriented sideways or with the back toward the device, leading to complexities in respiratory monitoring. To mitigate these hardware-associated limitations and enhance robustness, we leveraged the signal-amplifying potential of Intelligent Reflecting Surfaces (IRS) to establish a high-precision respiration detection system. This system capitalizes on IRS technology to manipulate signal propagation within the environment to enhance signal reflection from the body, finally achieving posture-resilient respiratory monitoring. Furthermore, the system can be easily deployed without prior knowledge of antenna placement or environmental intricacies. Compared with conventional techniques, our experimental results validate that this system markedly enhances respiratory monitoring across various postural configurations in indoor environments.
Sleep Apnea Hypopnea Syndrome (SAHS) is a common chronic sleep-related breathing disorder that affects individuals’ sleep quality and physical health. This article presents a sleep apnea and hypopnea detection framework based on multisource signal fusion. Integrating millimeter-wave radar micro-motion signals and PhotoPlethysmoGraphy (PPG) pulse wave signals achieves a highly reliable, light-contact diagnosis of SAHS, addressing the drawbacks of traditional medical methods that rely on PolySomnoGraphy (PSG) for sleep monitoring, such as poor comfort and high costs. This study used a radar and pulse wave data preprocessing algorithm to extract time-frequency information and artificial features from the signals, balancing the accuracy and robustness of sleep-breathing abnormality event detection. Additionally, a deep neural network was designed to fuse the two types of signals for precise identification of sleep apnea and hypopnea events and to estimate the Apnea-Hypopnea Index (AHI) for quantitative assessment of sleep-breathing abnormality severity. Experimental results on a clinical trial dataset from Shanghai Jiaotong University School of Medicine Affiliated Sixth People’s Hospital demonstrated that the AHI estimated by the proposed approach correlates with the gold-standard PSG with a coefficient of 0.93, indicating good consistency. This approach is a promising tool for home sleep-breathing monitoring and preliminary diagnosis of SAHS.
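The AHI reported above is simply the number of apnea and hypopnea events per hour of sleep; a tiny sketch with commonly used clinical severity cut-offs (not specific to this paper).

```python
def ahi(n_apnea, n_hypopnea, sleep_hours):
    """Apnea-Hypopnea Index: abnormal respiratory events per hour of sleep."""
    return (n_apnea + n_hypopnea) / sleep_hours

def severity(index):
    """Commonly used clinical cut-offs (general convention, not this paper's)."""
    if index < 5:
        return "normal"
    if index < 15:
        return "mild"
    if index < 30:
        return "moderate"
    return "severe"

value = ahi(n_apnea=12, n_hypopnea=30, sleep_hours=7.0)
print(value, severity(value))  # 6.0 mild
```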
In recent years, there has been increasing interest in respiratory monitoring in multiperson environments and simultaneous monitoring of the health status of multiple people. Among the algorithms developed for multiperson respiratory detection, blind source separation algorithms have attracted the attention of researchers because they do not require prior information and are less dependent on hardware performance. However, in the context of multiperson respiratory monitoring, current blind source separation algorithms usually take phase signals as the source signals. This article compares the distance dimension and phase signals under frequency-modulated continuous-wave radar, calculates the approximation error associated with using the phase signal as the source signal, and verifies the separation effect through simulations; the results indicate that the distance dimension signal is the better choice of source signal. In addition, this article proposes a multiperson respiratory signal separation algorithm based on noncircular complex independent component analysis and analyzes the impact of different respiratory signal parameters on the separation effect. Simulations and experimental measurements show that the proposed method is suitable for detecting multiperson respiratory signals under controlled conditions and can accurately separate respiratory signals when the angle between the two targets relative to the radar is 9.46°.
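The blind-source-separation idea can be illustrated with a real-valued, second-order method (AMUSE-style whitening plus lagged-covariance diagonalization) as a simplified stand-in for the paper's noncircular complex ICA; the two "breathing" rates and the mixing matrix are assumed.

```python
import numpy as np

# Two synthetic breathing-like sources at different rates, linearly mixed
N = 5000
n = np.arange(N)
S = np.vstack([np.sin(2 * np.pi * 0.010 * n), np.sin(2 * np.pi * 0.017 * n)])
A = np.array([[1.0, 0.6], [0.5, 1.0]])   # unknown mixing (assumed)
X = A @ S

# Whiten the mixtures
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / N)
W = E @ np.diag(d ** -0.5) @ E.T
Z = W @ X

# Diagonalize a symmetrized time-lagged covariance to find the rotation
L = 25
C = Z[:, :-L] @ Z[:, L:].T / (N - L)
C = (C + C.T) / 2
_, V = np.linalg.eigh(C)
Y = V.T @ Z                               # recovered sources (order/sign ambiguous)

corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(corr.max(axis=1))                   # match quality per recovered source
```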
As a representative of China’s new generation of space-borne long-wavelength Synthetic Aperture Radar (SAR), the LuTan-1A (LT-1A) satellite was launched into a sun-synchronous orbit in January 2022. The SAR onboard the LT-1A satellite operates in the L band and offers various earth observation capabilities, including single-polarization, linear dual-polarization, compressed dual-polarization, and quad-polarization observation modes. Existing research has mainly focused on LT-1A interferometric data acquisition capabilities and the accuracy evaluation of digital elevation models and displacement measurements, whereas research on the radiometric and polarimetric accuracy of the LT-1A satellite is limited. This article uses tropical rainforest vegetation as a reference to evaluate and analyze the radiometric error and polarimetric stability of the LT-1A satellite in the full polarization observation mode through a self-calibration method that does not rely on artificial calibrators. The experiment demonstrates that the LT-1A satellite has good radiometric stability and polarimetric accuracy, exceeding the specifications recommended by the Committee on Earth Observation Satellites (CEOS). Fluctuations in the Normalized Radar Cross-Section (NRCS) error within 1,000 km of continuous observation are less than 1 dB (3σ), and system radiometric errors show no significant changes, remaining below 0.5 dB (3σ), when observation is resumed within five days. In the full polarization observation mode, the system crosstalk is less than −35 dB, reaching as low as −45 dB. Further, the cross-polarization channel imbalance is better than 0.2 dB and 2°, while the co-polarization channel imbalance is better than 0.5 dB and 10°. The equivalent thermal noise ranges from −42 to −22 dB, and the average equivalent thermal noise of the system is better than −25 dB.
The level of thermal noise may increase to some extent with increasing continuous observation duration. Additionally, this study found that the ionosphere significantly affects the quality of the LT-1A satellite polarization data, with a Faraday rotation angle of approximately 5° causing crosstalk of nearly −20 dB. In middle- and low-latitude regions, the Faraday rotation angle commonly ranges from 3° to 20°, which can cause polarimetric distortion errors between channels ranging from −21.16 to −8.78 dB. The interference from the atmospheric observation environment is thus considerably greater than the influence of the approximately −40 dB system crosstalk errors. This research carefully assesses the radiometric and polarimetric quality of LT-1A satellite data using the dense vegetation of the Amazon rainforest and provides valuable information to industrial users. Thus, this research holds significant scientific importance and reference value.
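The Faraday-rotation distortion figures quoted above follow from the standard monostatic model M = R(Ω) S R(Ω), in which a one-way rotation Ω produces channel distortion of roughly 20·log10(tan Ω); the sketch below reproduces the −21.16 dB (Ω = 5°) and −8.78 dB (Ω = 20°) endpoints. The model and geometry are the standard textbook ones, not details taken from the paper.

```python
import numpy as np

def faraday_matrix(omega_deg):
    """One-way Faraday rotation acting on the (H, V) polarization basis."""
    w = np.deg2rad(omega_deg)
    return np.array([[np.cos(w), np.sin(w)], [-np.sin(w), np.cos(w)]])

def observed_scattering(S, omega_deg):
    """Monostatic model: the wave crosses the ionosphere twice, M = R S R."""
    R = faraday_matrix(omega_deg)
    return R @ S @ R

def crosstalk_db(omega_deg):
    """Channel distortion level 20*log10(tan(omega)) induced by rotation omega."""
    return 20 * np.log10(np.tan(np.deg2rad(omega_deg)))

print(round(crosstalk_db(5), 2), round(crosstalk_db(20), 2))  # -21.16 -8.78
```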
This study proposes a computer vision-assisted millimeter-wave wireless channel simulation method incorporating the scattering characteristics of human motion. The aim is to rapidly and cost-effectively generate a training dataset for wireless human motion recognition, thereby avoiding the laborious and cost-intensive efforts associated with physical measurements. Specifically, the simulation process includes the following steps. First, the human body is modeled as 35 interconnected ellipsoids using a primitive-based model, and motion data of these ellipsoids are extracted from videos of human motion. A simplified ray tracing method is then used to obtain the channel response for each snapshot of the primitive model during the motion process. Finally, Doppler analysis is performed on the channel responses of the snapshots to obtain Doppler spectrograms. The Doppler spectrograms obtained from the simulation can be used to train a deep neural network for real wireless human motion recognition. This study examines the channel simulation and action recognition results for four common human actions (“walking”, “running”, “falling”, and “sitting down”) in the 60 GHz band. Experimental results indicate that the deep neural network trained with the simulated dataset achieves an average recognition accuracy of 73.0% in real-world wireless motion recognition. Furthermore, the recognition accuracy can be increased to 93.75% via unlabeled transfer learning and fine-tuning with a small amount of actual data.
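The Doppler-analysis step can be sketched as a short-time Fourier transform over the per-snapshot channel response; the sinusoidal Doppler trajectory, window, and hop sizes below are assumed for illustration.

```python
import numpy as np

fs = 1000.0                     # snapshot rate, Hz (assumed)
t = np.arange(0, 2, 1 / fs)
# A scatterer with sinusoidally varying radial velocity yields a
# time-varying Doppler shift (illustrative parameters)
doppler = 100 * np.sin(2 * np.pi * 0.5 * t)      # instantaneous Doppler, Hz
phase = 2 * np.pi * np.cumsum(doppler) / fs
h = np.exp(1j * phase)                            # per-snapshot channel response

# Short-time Fourier transform -> Doppler spectrogram
win, hop = 128, 32
frames = np.lib.stride_tricks.sliding_window_view(h, win)[::hop]
stft = np.fft.fftshift(np.fft.fft(frames * np.hanning(win), axis=1), axes=1)
spec = 20 * np.log10(np.abs(stft).T + 1e-12)      # (Doppler bin, frame) in dB
freqs = np.fft.fftshift(np.fft.fftfreq(win, 1 / fs))
peak_track = freqs[np.abs(stft).argmax(axis=1)]   # dominant Doppler per frame
print(spec.shape, peak_track.min(), peak_track.max())
```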
This study focuses on integrating optical and radar sensors for human pose estimation. Based on the physical correspondence between continuous-time micromotion accumulation and pose increments, a single-channel ultrawideband radar human-pose incremental estimation scheme is proposed. Specifically, by constructing a spatiotemporal incremental estimation network that uses spatiotemporal pseudo-3D convolutional and time-domain-dilated convolutional layers to extract spatiotemporal micromotion features step by step, mapping these features to human pose increments within a time period, and combining them with the initial pose values provided by the optical sensor, 3D human pose estimation is realized. Results on measured data show that the fused pose estimation achieves an estimation error of 5.38 cm on the original action set and supports continuous pose estimation over periods of walking. Comparison and ablation experiments against other radar-based pose estimation methods demonstrate the advantages of the proposed method.
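The incremental scheme itself reduces to accumulating network-predicted pose increments onto the optically supplied initial pose; a minimal sketch with simulated increments (the 17-joint skeleton is an assumption, not the paper's layout).

```python
import numpy as np

rng = np.random.default_rng(3)
# Initial 3D pose from the optical sensor (assumed 17-joint skeleton)
pose0 = rng.standard_normal((17, 3))
# Per-interval pose increments, as the radar network would predict (simulated)
increments = 0.01 * rng.standard_normal((100, 17, 3))
# Continuous pose track: accumulate micromotion-derived increments onto pose0
poses = pose0 + np.cumsum(increments, axis=0)
print(poses.shape)
```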
Through-wall human pose reconstruction and behavior recognition have enormous potential in fields like intelligent security and virtual reality. However, existing methods for through-wall human sensing often fail to adequately model four-Dimensional (4D) spatiotemporal features and overlook the influence of walls on signal quality. To address these issues, this study proposes an innovative architecture for through-wall human sensing using a 4D imaging radar. The core of this approach is the ST2W-AP fusion network, which is designed using a stepwise spatiotemporal separation strategy. This network overcomes the limitations of mainstream deep learning libraries that currently lack 4D convolution capabilities, which hinders the effective use of multiframe three-Dimensional (3D) voxel spatiotemporal domain information. By preserving 3D spatial information and using long-sequence temporal information, the proposed ST2W-AP network considerably enhances the pose estimation and behavior recognition performance. Additionally, to address the influence of walls on signal quality, this paper introduces a deep echo domain compensator that leverages the powerful fitting performance and parallel output characteristics of deep learning, thereby reducing the computational overhead of traditional wall compensation methods. Extensive experimental results demonstrate that compared with the best existing methods, the ST2W-AP network reduces the average joint position error by 33.57% and improves the F1 score for behavior recognition by 0.51%.
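The stepwise spatiotemporal separation strategy can be illustrated with a separable filter: a 3D spatial pass per frame followed by a 1D temporal pass across frames factorizes a 4D operation that deep learning libraries lack as a single primitive. The numpy sketch below uses fixed kernels purely to show the factorization; the actual network uses learned convolutions.

```python
import numpy as np

def conv_axis(x, k, axis):
    """'Same'-length 1D convolution applied along one axis of an ndarray."""
    return np.apply_along_axis(np.convolve, axis, x, k, mode="same")

rng = np.random.default_rng(4)
voxels = rng.standard_normal((8, 6, 6, 6))   # T frames of 3D voxels (assumed)
k_space = np.array([0.25, 0.5, 0.25])        # illustrative spatial kernel
k_time = np.array([0.25, 0.5, 0.25])         # illustrative temporal kernel

# Stepwise separation: 3D spatial filtering per frame, then temporal filtering
out = voxels
for ax in (1, 2, 3):                         # x, y, z passes
    out = conv_axis(out, k_space, ax)
out_stepwise = conv_axis(out, k_time, 0)     # temporal pass across frames

# The factorized operator is order-independent: temporal-first gives the same result
out2 = conv_axis(voxels, k_time, 0)
for ax in (1, 2, 3):
    out2 = conv_axis(out2, k_space, ax)
print(np.allclose(out_stepwise, out2))
```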
Low-frequency Ultra-WideBand (UWB) radar offers significant advantages in the field of human activity recognition owing to its excellent penetration and resolution. To address the issues of high computational complexity and extensive network parameters in existing action recognition algorithms, this study proposes an efficient and lightweight human activity recognition method using UWB radar based on spatiotemporal point clouds. First, four-dimensional motion data of the human body are collected using UWB radar. A discrete sampling method is then employed to convert the radar images into point cloud representations. Because human activity recognition is a classification problem on time series, this paper combines the PointNet++ network with the Transformer network to propose a lightweight spatiotemporal network. By extracting and analyzing the spatiotemporal features of four-dimensional point clouds, end-to-end human activity recognition is achieved. During the model training process, a multithreshold fusion method is proposed for point cloud data to further enhance the model’s generalization and recognition capabilities. The proposed method is then validated using a public four-dimensional radar imaging dataset and compared with existing methods. The results show that the proposed method achieves a human activity recognition rate of 96.75% while consuming fewer parameters and computational resources, thereby verifying its effectiveness.
A discrete sampling method is then employed to convert the radar images into point cloud representations. Because human activity recognition is a classification problem on time series, this paper combines the PointNet++ network with the Transformer network to propose a lightweight spatiotemporal network. By extracting and analyzing the spatiotemporal features of four-dimensional point clouds, end-to-end human activity recognition is achieved. During the model training process, a multithreshold fusion method is proposed for point cloud data to further enhance the model’s generalization and recognition capabilities. The proposed method is then validated using a public four-dimensional radar imaging dataset and compared with existing methods. The results show that the proposed method achieves a human activity recognition rate of 96.75% while consuming fewer parameters and computational resources, thereby verifying its effectiveness.
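The discrete sampling step — turning a radar image into a point cloud by keeping cells above a threshold — and a simple multithreshold fusion can be sketched as below. This is a minimal interpretation, not the paper's exact procedure: the thresholds and the concatenation-style fusion are assumptions.

```python
import numpy as np

def image_to_points(frame, threshold):
    """Discrete sampling: keep cells above threshold as (index..., intensity) points."""
    idx = np.argwhere(frame > threshold)
    vals = frame[tuple(idx.T)]            # intensities in the same row order as idx
    return np.column_stack([idx, vals])

def multithreshold_fusion(frame, thresholds):
    """Toy fusion: concatenate point clouds sampled at several thresholds."""
    return np.vstack([image_to_points(frame, t) for t in thresholds])
```

Sampling at several thresholds keeps both strong scatterers (which survive every threshold and are therefore duplicated, i.e. upweighted) and weaker body returns, which is one plausible reading of why the fusion improves generalization.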
Recent research on radar-based human activity recognition has typically focused on activities that move toward or away from radar in radial directions. Conventional Doppler-based methods can barely describe the true characteristics of nonradial activities, especially static postures or tangential activities, resulting in a considerable decline in recognition performance. To address this issue, a method for recognizing tangential human postures based on sequential images of a Multiple-Input Multiple-Output (MIMO) radar system is proposed. A time sequence of high-quality images is obtained to describe the structure of the human body and its dynamic changes, from which spatial and temporal features are extracted to enhance recognition performance. First, a Constant False Alarm Rate (CFAR) algorithm is applied to locate the human target. A sliding window along the slow-time axis is then utilized to divide the received signal into sequential frames. Next, a fast Fourier transform and the 2D Capon algorithm are performed on each frame to estimate range, pitch angle, and azimuth angle information, which are fused to create a tangential posture image. These images are concatenated to form a time sequence of tangential posture images. To improve image quality, a modified joint multidomain adaptive threshold-based denoising algorithm is applied to suppress noise and enhance the outline and structure of the human body. Finally, a Spatio-Temporal-Convolution Long Short Term Memory (ST-ConvLSTM) network is designed to process the sequential images. In particular, the ConvLSTM cell is used to extract continuous image features by combining the convolution operation with the LSTM cell. Moreover, spatial and temporal attention modules are utilized to emphasize intraframe and interframe focus for improving recognition performance. Extensive experiments show that our proposed method achieves an accuracy rate of 96.9% in classifying eight typical tangential human postures, demonstrating its feasibility and superiority in tangential human posture recognition.
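The CFAR detection step in the pipeline above can be illustrated with a 1D cell-averaging CFAR detector. The paper does not specify its CFAR variant; the window sizes and scale factor here are arbitrary assumptions.

```python
import numpy as np

def ca_cfar(x, num_train=8, num_guard=2, scale=5.0):
    """1D cell-averaging CFAR: compare each cell against the mean of its training cells."""
    n = len(x)
    det = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - num_guard - num_train)
        hi = min(n, i + num_guard + num_train + 1)
        # training cells: window around i, excluding the guard cells and the cell itself
        train = np.concatenate([x[lo:max(0, i - num_guard)],
                                x[min(n, i + num_guard + 1):hi]])
        if train.size:
            det[i] = x[i] > scale * train.mean()
    return det
```

In the 2D range-angle images of the paper, the same idea applies with a 2D training window around each cell.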
Bistatic Synthetic Aperture Radar (BiSAR) needs to suppress ground background clutter when detecting and imaging ground moving targets. However, due to the spatial configuration of BiSAR, the clutter exhibits a serious space-time nonstationarity problem, which deteriorates clutter suppression performance. Although Space-Time Adaptive Processing based on Sparse Recovery (SR-STAP) can mitigate the nonstationarity problem by reducing the number of required samples, the off-grid dictionary problem arises during processing, degrading space-time spectrum estimation. Although most typical SR-STAP methods have clear mathematical relations and interpretability, they also suffer from problems such as improper parameter settings and complicated operation in complex and changeable scenes. To solve these problems, a complex-valued neural network based on the Alternating Direction Method of Multipliers (ADMM) is proposed for BiSAR space-time adaptive clutter suppression. First, a sparse recovery model of the continuous clutter space-time domain of BiSAR is constructed based on Atomic Norm Minimization (ANM) to overcome the off-grid problem associated with the traditional discrete dictionary model. Second, ADMM is used to rapidly and iteratively solve the BiSAR clutter spectral sparse recovery model. Third, according to the iterative and data flow diagrams, the manually tuned hyperparameter iteration is unrolled into ANM-ADMM-Net. Then, a normalized root-mean-square-error loss function is set up and the network is trained with the obtained dataset. Finally, the trained ANM-ADMM-Net is used to rapidly process BiSAR echo data, and the space-time spectrum of BiSAR clutter is accurately estimated and efficiently suppressed. The effectiveness of this approach is validated through simulations and airborne BiSAR clutter suppression experiments.
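The iteration that ANM-ADMM-Net unrolls can be illustrated, under simplifying assumptions, with plain ADMM for an ℓ1 (discrete-dictionary) sparse recovery problem. This is not the atomic-norm model of the paper: the real-valued data, the `lam`/`rho` values, and the fixed iteration count are all placeholder choices standing in for the learned hyperparameters.

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=500):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.inv(AtA + rho * np.eye(n))  # cached; fine for a small sketch
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = L @ (Atb + rho * (z - u))         # quadratic x-update
        z = soft(x + u, lam / rho)            # sparsifying z-update
        u = u + x - z                         # dual ascent
    return z
```

Unrolling means fixing `iters` to a small number of network layers and letting training choose `lam` and `rho` per layer instead of hand-tuning them.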
Radar Signal and Data Processing
To reduce the large over-the-horizon localization errors of long-range shortwave emitters, a novel cooperative positioning method is proposed. This method combines two-Dimensional (2D) Direction-Of-Arrival (DOA) and Time-Difference-Of-Arrival (TDOA) measurements under scenarios in which observation stations can simultaneously obtain the two types of parameters. Initially, based on the single-hop ionospheric virtual height model, the nonlinear measurement models of 2D DOA and TDOA are established for over-the-horizon shortwave localization. Subsequently, by combining the over-the-horizon localization geometric and algebraic models, the two types of nonlinear measurement equations are successively transformed into the corresponding pseudo-linear measurement equations. On this basis, a novel two-stage cooperative positioning method is proposed without iteration. In the first stage, the closed-form solution of the target position vector is obtained by solving the roots of a sixth-order polynomial. In the second stage, an equality-constrained optimization problem is established to refine the localization result obtained in the first stage, yielding a more accurate target position estimate using the Lagrange multiplier technique. In addition, the estimation performance of the proposed cooperative positioning method is theoretically analyzed based on constrained error perturbation theory, and the asymptotic efficiency of the new estimator is proved. Meanwhile, the influence of emitter altitude information error on positioning accuracy is quantitatively analyzed by applying the theory of constrained error perturbation, and the maximum threshold of this error that ensures the constrained solution remains better than the unconstrained one is deduced. Simulation results show that the newly proposed method can achieve significant cooperative gain.
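The pseudo-linearization step — rewriting nonlinear angle measurements as linear equations in the target position — can be shown with a 2D bearings-only toy. The paper's model is 3D, includes TDOA and ionospheric geometry, and adds a constrained second stage; none of that is reproduced here.

```python
import numpy as np

def bearings_to_position(stations, thetas):
    """Pseudo-linear LS: each bearing theta_i from station s_i gives
    sin(theta_i)*x - cos(theta_i)*y = sin(theta_i)*s_x - cos(theta_i)*s_y."""
    A = np.column_stack([np.sin(thetas), -np.cos(thetas)])
    b = np.sin(thetas) * stations[:, 0] - np.cos(thetas) * stations[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

The trick is that the cross product of the bearing direction and the station-to-target vector is zero, which is linear in (x, y); the same device is what turns the paper's DOA/TDOA equations into a polynomial-rooting problem.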
The target detection performance of skywave Over-the-Horizon Radar (OTHR) often suffers from transient interference. To address this issue, we have developed a transient interference suppression algorithm that uses a Time-Frequency Sparsity Prior (TFSP). TFSP uses the sparse nature of transient interference in the slow-time domain along with the sparse prior of sea clutter and targets in the Doppler frequency domain to construct an objective function, which is optimized using the Alternating Direction Method of Multipliers (ADMM) to effectively suppress transient interference. Unlike traditional methods that focus on locating and eliminating interference before recovering data, TFSP can directly separate transient interference components and restore an interference-free Doppler spectrum. Experimental results from OTHR data confirm that TFSP effectively suppresses transient interference in sea and air modes. TFSP offers a higher output Signal-to-Noise Ratio (SNR) and higher computational efficiency than most existing methods. In particular, it increases the output SNR by approximately 3~5 dB while maintaining computational complexity at a linear-logarithmic order.
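The two-domain separation idea — a component sparse in slow time (transient interference) plus a component sparse in the Doppler domain (clutter and targets) — can be sketched with alternating soft-thresholding in the two domains. This is a toy standing in for the TFSP objective and its ADMM solver; the thresholds `lam_t`, `lam_f` and the iteration count are arbitrary assumptions.

```python
import numpy as np

def soft(z, k):
    """Complex-capable soft-thresholding: shrink magnitudes by k."""
    mag = np.abs(z)
    return np.where(mag > k, (1 - k / np.maximum(mag, 1e-12)) * z, 0.0)

def separate(x, lam_t=0.5, lam_f=20.0, iters=20):
    """Split x into s (sparse in slow time) + c (sparse in Doppler)."""
    s = np.zeros_like(x)
    for _ in range(iters):
        C = soft(np.fft.fft(x - s), lam_f)   # clutter/targets: few Doppler bins
        c = np.fft.ifft(C).real
        s = soft(x - c, lam_t)               # transient: few slow-time samples
    return s, c
```

Because everything is FFTs and elementwise shrinkage, the per-iteration cost is O(N log N), consistent with the linear-logarithmic complexity claimed above.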
Ground Penetrating Radar (GPR) image detection currently faces challenges such as low accuracy, false detections, and missed detections. To overcome these challenges, we propose a novel model referred to as GDS-YOLOv8n for detecting common underground targets in GPR images. The model incorporates the DRRB (Dilated Residual Reparam Block) feature extraction module to achieve enhanced multiscale feature extraction, with certain C2f modules in the YOLOv8n architecture being effectively replaced. In addition, the space-to-depth Conv downsampling module is used to replace the Conv modules corresponding to feature maps with a resolution of 320×320 pixels or smaller. This replacement assists in mitigating information loss during the downsampling of GPR images, particularly for images with limited resolution and small targets. Furthermore, the detection performance is enhanced using an auxiliary training module, ensuring performance improvement without increasing inference complexity. The introduction of the Inner-SIoU loss function refines bounding box predictions by imposing new constraints tailored to GPR image characteristics. Experimental results on real-world GPR datasets demonstrate the effectiveness of the GDS-YOLOv8n model. For six classes of common underground targets, including metal pipes, PVC pipes, and cables, the model achieves a precision of 97.1%, recall of 96.2%, and mean average precision at 50% IoU (mAP50) of 96.9%. These results indicate improvements of 4.0%, 6.1%, and 4.1%, respectively, compared to the corresponding values of the YOLOv8n model, with notable improvements observed when detecting PVC pipes and cables. Compared with models such as YOLOv5n, YOLOv7-tiny, and SSD (Single Shot multibox Detector), our model’s mAP50 is improved by 7.20%, 5.70%, and 14.48%, respectively. Finally, the application of our model on an NVIDIA Jetson Orin NX embedded system results in an increase in the detection speed from 22 to 40.6 FPS after optimization via TensorRT and FP16 quantization, meeting the demands for the real-time detection of underground targets in mobile scenarios.
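The box-overlap computation underlying IoU-family losses such as Inner-SIoU can be sketched as follows. Plain IoU is standard; the "inner" auxiliary boxes scaled about the box centers follow the Inner-IoU idea, while the SIoU angle/distance/shape penalty terms are omitted, and the `ratio` value is an assumption.

```python
def iou(b1, b2):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
    iy = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
    inter = ix * iy
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter + 1e-12)

def shrink(b, ratio):
    """Auxiliary box: same center, sides scaled by ratio."""
    cx, cy = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    w, h = (b[2] - b[0]) * ratio, (b[3] - b[1]) * ratio
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def inner_iou(b1, b2, ratio=0.75):
    """IoU of the scaled auxiliary boxes (Inner-IoU idea); loss would be 1 - inner_iou."""
    return iou(shrink(b1, ratio), shrink(b2, ratio))
```

Shrinking the boxes makes the overlap criterion stricter, which sharpens gradients for nearly-correct boxes — useful for the small targets typical of GPR images.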
Ground Penetrating Radar (GPR) is essential for detecting buried targets in civilian and military applications, especially given the increasing demand for detecting and imaging small targets within walls. The complex structures and materials of walls pose substantial challenges for precisely reconstructing small targets. To address this issue, this study proposes a multistage cascaded U-Net approach for the three-dimensional reconstruction of small targets within walls. First, we developed a high-resolution detection model and a dataset tailored to handle complex wall scenes. Thereafter, using the Monte Carlo sampling method, we sampled aggregate particle sizes to create a physical three-dimensional aggregate scattering model that satisfies grading requirements, thus enhancing the realism and accuracy of the simulated scenes. Our multistage network design effectively suppresses noise and inhomogeneous clutter in C-scan data, thereby improving signal quality. The preprocessed data are then fed into subsequent network stages to reconstruct the three-dimensional distribution of reconstruction values. In addition, we propose an adaptive multiscale module and a cascaded network training strategy to better fit small-target information in complex scenes. Through comparisons with simulated and measured data, we confirmed the effectiveness and generalizability of our method. Unlike existing techniques, our approach successfully reconstructs small targets within three-dimensional walls, thereby considerably enhancing the peak signal-to-noise ratio and providing critical technical support for accurately detecting small targets within walls.
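The Monte Carlo sampling of aggregate particle sizes against a grading requirement can be sketched with inverse-CDF sampling: a grading curve gives the cumulative fraction of particles finer than each size, and drawing uniform variates through its inverse reproduces that distribution. The example grading curve below is made up for illustration.

```python
import numpy as np

def sample_grading(sizes, passing, n, rng):
    """Inverse-CDF sampling of particle diameters.

    sizes:   sieve sizes in ascending order
    passing: cumulative fraction passing (finer than) each size, from 0 to 1
    """
    u = rng.random(n)
    # np.interp inverts the piecewise-linear CDF: passing -> sizes
    return np.interp(u, passing, sizes)
```

Any target grading band can be matched this way, which is presumably what "satisfies grading requirements" refers to; the sampled diameters then seed the aggregate scattering model.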
To address the challenges associated with the data association and stable long-term tracking of multiple targets in complex environments, this study proposes an innovative end-to-end multitarget tracking model called Track-MT3 based on a transformer network. First, a dual-query mechanism comprising detection and tracking queries is introduced to implicitly perform measurement-to-target data association and enable accurate target state estimation. Subsequently, a cross-frame target alignment strategy is employed to enhance the temporal continuity of tracking trajectories, ensuring consistent target identities across frames. In addition, a query transformation and temporal feature encoding module is designed to improve target motion pattern modeling by adaptively combining target dynamics information at different time scales. During model training, a collective average loss function is adopted to achieve the global optimization of tracking performance, considering the entire tracking process in an end-to-end manner. Finally, the performance of Track-MT3 is extensively evaluated under various complex multitarget tracking scenarios using multiple metrics. Experimental results demonstrate that Track-MT3 exhibits superior long-term tracking performance compared with baseline methods such as MT3. Specifically, Track-MT3 achieves overall performance improvements of 6% and 20% over JPDA and MHT, respectively. By effectively exploiting temporal information, Track-MT3 ensures stable and robust multitarget tracking in complex dynamic environments.
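The measurement-to-target association that the dual-query mechanism learns implicitly can be made explicit with a toy globally optimal assignment: pick the one-to-one pairing of tracks and measurements minimizing total squared distance. The brute-force search below assumes equal, small numbers of tracks and measurements; real trackers use the Hungarian algorithm or, as here, learn the association end to end.

```python
import itertools
import numpy as np

def associate(tracks, meas):
    """Optimal one-to-one track-to-measurement assignment (brute force).

    Returns (perm, cost): perm[i] is the measurement index assigned to track i.
    """
    # pairwise squared distances between track states and measurements
    cost = np.linalg.norm(tracks[:, None, :] - meas[None, :, :], axis=2) ** 2
    best, best_perm = np.inf, None
    for perm in itertools.permutations(range(len(meas))):
        c = cost[np.arange(len(tracks)), np.array(perm)].sum()
        if c < best:
            best, best_perm = c, perm
    return list(best_perm), best
```

The factorial search makes the combinatorial nature of data association obvious, and hence why learned (or approximate) association is attractive for dense scenes.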
In practical applications, the field of view and computation resources of an individual sensor are limited, and the development and application of multisensor networks provide more possibilities for solving challenging target tracking problems. Compared with multitarget tracking, group target tracking encounters more challenging data association and computation problems due to factors such as the proximity of targets within groups, coordinated motions, the large number of involved targets, and group splitting and merging, all of which are further complicated in multisensor fusion systems. For group target tracking under sensors with limited fields of view, we propose a scalable multisensor group target tracking method via belief propagation. Within the Bayesian framework, the method considers the uncertainty of the group structure, constructs the decomposition of the joint posterior probability density of the multisensor group targets and the corresponding factor graph, and efficiently solves the data association problem by running belief propagation on the devised factor graph. Furthermore, the method has excellent scalability and low computational complexity, scaling linearly with the numbers of sensors, preserved group partitions, and sensor measurements, and quadratically with the number of targets. Finally, simulation experiments compare the performance of different methods on GOSPA and OSPA(2), verifying that the proposed method can seamlessly track grouped and ungrouped targets, fully utilize the complementary information among sensors, and improve tracking accuracy.
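The OSPA family of metrics used in the evaluation can be sketched as follows for plain OSPA between two point sets (order `p` and cutoff `c` are free parameters; the brute-force assignment is only viable for small sets, and the trajectory-level OSPA(2) and GOSPA variants are not shown).

```python
import itertools
import numpy as np

def ospa(X, Y, c=5.0, p=1):
    """OSPA distance between point sets X and Y with cutoff c and order p."""
    if len(X) > len(Y):
        X, Y = Y, X                      # ensure |X| <= |Y|
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    best = np.inf
    # optimal injective assignment of the smaller set into the larger one
    for perm in itertools.permutations(range(n), m):
        s = sum(min(c, float(np.linalg.norm(np.asarray(x) - np.asarray(Y[j])))) ** p
                for x, j in zip(X, perm))
        best = min(best, s)
    # unassigned (cardinality-mismatch) points are charged the cutoff c
    return ((best + c ** p * (n - m)) / n) ** (1 / p)
```

The cutoff `c` caps per-target localization error and prices cardinality errors, which is why OSPA-style metrics suit group tracking, where the number of targets itself is uncertain.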
Traditional Low Probability of Intercept (LPI) array radars based on phased array or Multiple-Input Multiple-Output (MIMO) systems can control radiation energy only at specific angles and cannot achieve energy control over specific range-angle regions. To address these issues, this paper proposes an LPI waveform design method for Frequency Diverse Array (FDA)-MIMO radar utilizing neural networks. This method jointly designs the transmit waveform and receive beamforming in FDA-MIMO radars to ensure target detection probability while uniformly distributing radar energy across the spatial domain. This minimizes energy directed toward the target, thereby reducing the probability of the radar signal being intercepted. Initially, we formulate an optimization objective function aimed at LPI performance for transmit waveform design and receive beamforming by minimizing pattern matching errors. This function is then used as the loss function of a neural network. Through iterative training, the neural network minimizes this loss function until convergence, yielding optimized transmit signal waveforms and the corresponding receive weighting vectors. Simulation results indicate that our proposed method significantly enhances radar power distribution control. Compared with traditional methods, it shows a 5 dB improvement in beam energy distribution control across nontarget regions of the transmit beam pattern. Furthermore, the receive beam pattern achieves more concentrated energy, with deep nulls below −50 dB at multiple interference locations, demonstrating excellent interference suppression capabilities.
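The pattern-matching error at the heart of the loss function can be illustrated with a conventional uniform linear array: compute the beampattern of a weight vector over a grid of angles and measure the mean squared deviation from a desired mask. The FDA frequency offsets, range dimension, and the network itself are omitted; the half-wavelength spacing and normalization are assumptions.

```python
import numpy as np

def steering(n_elem, theta, d=0.5):
    """ULA steering vector; d is the element spacing in wavelengths."""
    k = np.arange(n_elem)
    return np.exp(2j * np.pi * d * k * np.sin(theta))

def beampattern(w, thetas, d=0.5):
    """Power pattern |a(theta)^H w|^2 over a grid of angles."""
    A = np.stack([steering(len(w), t, d) for t in thetas])
    return np.abs(A @ np.conj(w)) ** 2

def matching_error(w, thetas, desired):
    """MSE between the normalized pattern and a desired mask (the training loss idea)."""
    p = beampattern(w, thetas)
    p = p / p.max()
    return np.mean((p - desired) ** 2)
```

Training would backpropagate this error through the waveform parameters; here a matched-steering weight simply shows the pattern peaking at the steered angle.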
Distinguishing between ships and corner reflectors is challenging in radar observations of the sea. Traditional identification methods, including high-resolution range profiles, polarization decomposition, and polarization modulation, improve radial range resolution to the target by transmitting signals with a large bandwidth; the latter two methods additionally use polarization to improve target identification. Single-carrier pulse signals, often used in civil marine radars owing to their low hardware cost, make identifying ships and corner reflectors difficult owing to their low range resolution and low pulse compression gain. This article proposes a novel method for identifying ships and corner reflectors using polarization modulation in civil marine radars, aiming to fully exploit the target identification potential of narrowband signals combined with polarization modulation technology. By constructing polarization-range 2D images, the method differentiates ships from corner reflectors via their distinct polarization scattering characteristics. The process involves calculating the average Pearson correlation coefficient between each polarization image and the range image, which serves as the correlation feature parameter. A support vector machine is then employed to achieve accurate target identification. Electromagnetic simulations show that by increasing the device bandwidth to 2~6 times the original signal bandwidth (2 MHz), civil marine radar can achieve a comprehensive identification rate of 90.18%~92.31% at a Signal-to-Noise Ratio (SNR) of 15 dB and a sampling rate of 100 MHz. The study also explores the influence of missing 50% of the pitch-angle and azimuth-angle data in the training set, finding that identification rates in all four cases exceed 85% when the SNR is above 15 dB.
Comparisons with the polarization decomposition method under the same narrowband observation conditions show that when the SNR is 15 dB or higher and the device bandwidth is increased sixfold, the average identification rate of the proposed method improves by 22.67%. This strongly supports the effectiveness of the proposed method. In addition, two cases with different polarization scattering characteristics are constructed in the anechoic chamber using dihedral and trihedral setups. Five sets of measured data show that when the SNR of the echo is 8~12 dB, the experiments demonstrate strong intra-class aggregation and clear inter-class separability. These results effectively support the electromagnetic simulation findings.
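The correlation feature at the heart of this pipeline, the average Pearson correlation coefficient between each polarization image and the range image, is easy to sketch. The example below is illustrative only: the 1-D "images", the direction of the effect (corner reflectors assumed more correlated than ships), and all the data are invented, and no SVM is included so that the sketch stays self-contained.

```python
import numpy as np

def avg_pearson_feature(pol_images, range_image):
    """Average Pearson correlation of each polarization image
    (one row per polarization state) with the range image."""
    r = [np.corrcoef(p, range_image)[0, 1] for p in pol_images]
    return float(np.mean(r))

rng = np.random.default_rng(42)
range_img = rng.random(64)                       # toy range image
# Assumption for illustration: a corner reflector's stable polarization
# response yields profiles that closely track the range image ...
corner = range_img + 0.05 * rng.standard_normal((4, 64))
# ... while a ship's distributed scatterers decorrelate them.
ship = rng.random((4, 64))

f_corner = avg_pearson_feature(corner, range_img)
f_ship = avg_pearson_feature(ship, range_img)
print(f_corner > f_ship)
```

In the paper this scalar feature is fed to a support vector machine; here the separation of the two feature values stands in for the classifier's decision.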
As one of the most promising next-generation radar concepts, Moving platform based Distributed Aperture Radar (MDAR) can not only coherently combine distributed apertures to obtain the detection performance of a single large aperture but also enhance detection and anti-damage capabilities through mobility and flexible deployment. However, time and phase synchronization among the radars must be performed before coherent combination because of internal clock differences and external propagation path differences. Moreover, grating lobes are generated because the distance between radars usually exceeds half a wavelength, which degrades the estimation accuracy of the target angle. To obtain Coherent Parameters (CPs), this paper establishes a cognitive framework for MDAR based on a closed-loop structure, and a multi-pulse correlation CP estimation method that considers motion conditions is proposed to improve estimation accuracy. Meanwhile, an unambiguous angle estimation method based on array configuration accumulation is proposed that accounts for platform motion characteristics. Finally, based on simulation verification and the proposed framework, a prototype 3-node ground Moving platform based Distributed Coherent Aperture Radar (MDCAR) system is designed and experiments are conducted. Compared with a single radar, a signal-to-noise ratio improvement of up to 14.2 dB is achieved, which can further enhance range detection accuracy. In addition, unambiguous angle estimation is realized under certain conditions. This work is expected to support the research and development of MDCAR.
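The reported 14.2 dB gain is close to the ideal full-coherence bound for three nodes: coherent transmit combining contributes a factor of N² at the target and coherent receive combining a further factor of N, giving N³ overall.

```python
import math

# Ideal full-coherence SNR gain of an N-node distributed coherent
# aperture radar over a single node: N^2 (transmit) * N (receive).
def ideal_coherence_gain_db(n_nodes):
    return 10 * math.log10(n_nodes ** 3)

gain = ideal_coherence_gain_db(3)
print(round(gain, 1))  # → 14.3, consistent with the measured 14.2 dB
```

The small shortfall from the ideal bound reflects residual synchronization and estimation errors in the prototype.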
Radar Countermeasure Technique
Deep Neural Network (DNN)-based Synthetic Aperture Radar (SAR) image target recognition has become a prominent area of interest in SAR applications. However, deep neural network models are vulnerable to adversarial example attacks. Adversarial examples are inputs crafted by adding minute perturbations that cause the model to make highly confident yet incorrect judgments. Existing SAR adversarial example generation techniques fundamentally operate on two-dimensional images and are classified as digital-domain adversarial examples. Although recent research has started to incorporate SAR imaging scattering mechanisms in adversarial example generation, two important flaws still remain: (1) imaging scattering mechanisms are only applied to SAR images without being integrated into the actual SAR imaging process, and (2) the mechanisms achieve only pseudo-physical-domain adversarial attacks, failing to realize true three-dimensional physical-domain adversarial attacks. This study investigates the current state and development trends in adversarial attacks on SAR intelligent target recognition. First, the development trajectory of traditional SAR-image adversarial example generation technologies is traced and a comparative analysis of the various technologies is conducted, summarizing their deficiencies. Building on the principles and actual processes of SAR imaging, physical-domain adversarial attack techniques are then proposed. These techniques manipulate the target object’s backscattering properties or emit interference signals finely adjustable in amplitude and phase to counter SAR intelligent target recognition algorithms. The paper also envisions practical implementations of SAR adversarial attacks in the physical domain. Finally, this paper concludes by discussing the future directions of SAR intelligent adversarial attack technologies.
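A concrete feel for the digital-domain attacks the survey covers can be given with a sign-gradient perturbation in the style of FGSM, one representative generation technique (the survey itself covers many). The sketch below is a deliberately toy setup: a linear classifier stands in for a SAR recognition DNN, and the perturbation budget is chosen just large enough to flip the toy decision.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(256)       # toy linear "model" weights
x = rng.standard_normal(256)       # toy flattened "image"

def score(x):
    return float(w @ x)            # >0 -> class A, <0 -> class B

# Sign-gradient step: nudge every pixel by eps against the current
# decision. eps is set just large enough to flip the toy classifier,
# so the per-pixel change stays small.
eps = (abs(score(x)) + 1.0) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(score(x)) * np.sign(w)
print(score(x) * score(x_adv) < 0)  # decision flipped
```

The physical-domain attacks proposed in the paper differ precisely in that the perturbation is realized on the target's backscattering or via transmitted interference, not on image pixels as here.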
The performance of Synthetic Aperture Radar (SAR) active deception jamming detection based on the interferometric phase is analyzed. Based on the slant-range local fringe frequency probability distributions of a real scene and a false target, the influences of the vertical baseline length, jamming-to-signal ratio, and local fringe frequency estimation window size on the True Positive Rate (TPR) are analyzed. Furthermore, when the False Positive Rate (FPR) is known, the vertical baseline length required for the SAR system to meet the detection probability requirements is analyzed, thereby providing a theoretical basis for the baseline design of the SAR system. Finally, the result of theoretical analysis is verified by simulation. The theoretical analysis and experimental results show that, for a certain false alarm probability, as the vertical baseline length, jamming-to-signal ratio, or local fringe frequency estimation window value increases, the detection probability also increases.
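The TPR/FPR trade described here can be sketched with a simple two-hypothesis model: treat the local fringe-frequency estimate as Gaussian under the real scene and under the false target, fix the FPR, and watch the TPR grow as the separation between the two distributions increases. All numbers below are illustrative; the separation parameter plays the role of the quantities the paper shows to help (longer vertical baseline, higher jamming-to-signal ratio, larger estimation window).

```python
from statistics import NormalDist

# Toy Neyman-Pearson sketch: real scene centered at 0, false target
# at `separation` (both unit-variance Gaussians, an assumption).
def tpr_at_fpr(fpr, separation, sigma=1.0):
    thresh = NormalDist(0.0, sigma).inv_cdf(1.0 - fpr)
    return 1.0 - NormalDist(separation, sigma).cdf(thresh)

tprs = [tpr_at_fpr(0.01, s) for s in (1.0, 2.0, 3.0)]
print(tprs[0] < tprs[1] < tprs[2])  # more separation -> higher TPR
```

This monotonic behavior is exactly the qualitative conclusion of the paper's analysis; the paper derives the actual distributions rather than assuming Gaussians.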
Interrupted Sampling Repeater Jamming (ISRJ) is a type of intra-pulse coherent jamming that can easily generate false targets resembling real ones, posing a severe threat to radar systems. Traditional methods for countering ISRJ are relatively passive and often fail to adapt to evolving jamming techniques, leading to residual jamming and signal loss. To improve radar anti-jamming capabilities, a novel scheme integrating “jamming perception, parameter estimation, and jamming suppression” is developed in this study. The method first uses a bidirectional double sliding-window pulse edge detector and a sliding truncated matched filter to extract the ISRJ components of received radar signals and accurately estimate parameters such as the sampling duration and period. The jamming components are then reconstructed and eliminated, allowing effective target detection. Simulation experiments demonstrate that the proposed method effectively suppresses ISRJ across different modulation modes with almost no loss of signal energy. When the jamming-to-noise ratio is 9 dB, the method boosts the signal-to-jamming ratio by over 33 dB after jamming suppression, ensuring robust anti-ISRJ performance.
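The leading-edge half of a sliding-window pulse edge detector is easy to sketch: slide two adjacent windows along the signal envelope and flag an edge wherever the leading window's mean power jumps relative to the trailing one. This is a stripped-down stand-in, not the paper's detector: it finds rising edges only (the bidirectional version also locates falling edges), and the envelope, window length, and threshold are invented; a constant noise floor keeps the sketch deterministic.

```python
import numpy as np

def edge_detect(env, win=16, thresh=10.0):
    """Flag rising edges where the leading window's mean power
    jumps relative to the trailing window's."""
    edges = []
    for n in range(win, env.size - win):
        front = np.mean(env[n:n + win] ** 2)          # window after n
        back = np.mean(env[n - win:n] ** 2) + 1e-12   # window before n
        if front / back > thresh and (not edges or n - edges[-1] > win):
            edges.append(n)
    return edges

# Idealized envelope: two jamming "sampling" bursts on a flat floor.
env = np.full(400, 0.05)
env[100:140] = 1.0
env[260:300] = 1.0
print(edge_detect(env))  # → [85, 245]: fires as soon as the leading
                         #   window overlaps a burst (within win samples)
```

Once the burst edges are known, the sampling duration and period follow from their spacing, which is the parameter-estimation step the abstract describes.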
To address the ineffectiveness of single-base radar in suppressing adjoint main-lobe interference, an equivalent large-aperture array can be formed by deploying sparse auxiliary arrays to separate main-lobe interference from targets in the spatial domain. However, this approach is prone to generating spatial grating lobes. To overcome this problem, this study proposes a dual-parameter iterative optimization framework comprising two parts: array configuration optimization and subarray element number optimization. Array configuration optimization accounts for the number of subarray elements and creates nulls in the main-lobe interference direction on the basis of the minimum variance distortionless response criterion. To suppress grating lobes of the beam, an improved adaptive genetic particle swarm algorithm is used to optimize the array configuration under constraints such as aperture size, minimum subarray spacing, and null depth in the main-lobe interference direction. Subarray element number optimization uses the same algorithm to optimize the number of subarray elements under constraints such as a limited number of subarray elements and null depth in the main-lobe interference direction, further suppressing beam grating lobes. Finally, numerical simulations confirm the effectiveness of the dual-parameter iterative optimization framework for array configuration and element number under identical parameter conditions. Additionally, this study explores the performance boundaries of main-lobe interference suppression and grating-lobe suppression for typical distributed mobile platform cooperative detection scenarios.
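The array-configuration step can be illustrated with a bare-bones particle swarm: place sparse elements within a fixed aperture so that the peak sidelobe/grating-lobe level of the array factor drops. This sketch omits everything that makes the paper's algorithm "improved adaptive genetic" PSO (no genetic operators, no adaptive coefficients, no spacing or null-depth constraints), and all sizes and PSO parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N_EL, APERTURE = 8, 20.0                 # elements, aperture (wavelengths)
u = np.linspace(-1, 1, 401)              # sin(theta) grid
main = np.abs(u) < 0.05                  # main-lobe region to exclude

def peak_sll_db(pos):
    """Peak sidelobe/grating-lobe level of an isotropic sparse array."""
    af = np.abs(np.exp(2j * np.pi * np.outer(u, pos)).sum(axis=1)) / len(pos)
    return 20 * np.log10(af[~main].max())

P = 30                                   # particles: candidate layouts
pos = rng.uniform(0, APERTURE, (P, N_EL))
vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([peak_sll_db(p) for p in pos])
g = pbest[pcost.argmin()].copy()         # global best layout
for _ in range(60):
    r1, r2 = rng.random((2, P, N_EL))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0.0, APERTURE)
    cost = np.array([peak_sll_db(p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    g = pbest[pcost.argmin()].copy()

# A uniform 8-element grid over the same aperture has ~2.9-wavelength
# spacing and therefore full-height grating lobes (0 dB).
print(peak_sll_db(g) < peak_sll_db(np.linspace(0.0, APERTURE, N_EL)))
```

The comparison at the end shows why the optimization matters: the evenly spaced sparse array has grating lobes as tall as the main lobe, while the optimized irregular layout pushes the peak below that level.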