To reduce the large over-the-horizon localization errors of long-range shortwave emitters, a novel cooperative positioning method is proposed. This method combines Two-Dimensional (2D) Direction-Of-Arrival (DOA) and Time-Difference-Of-Arrival (TDOA) measurements under scenarios in which observation stations can simultaneously obtain both types of parameters. Initially, based on the single-hop ionospheric virtual height model, the nonlinear measurement models of 2D DOA and TDOA are established for over-the-horizon shortwave localization. Subsequently, by combining the geometric and algebraic models of over-the-horizon localization, the two types of nonlinear measurement equations are successively transformed into the corresponding pseudo-linear measurement equations. On this basis, a novel iteration-free two-stage cooperative positioning method is proposed. In the first stage, a closed-form solution for the target position vector is obtained by finding the roots of a sixth-order polynomial. In the second stage, an equality-constrained optimization problem is formulated to refine the first-stage localization result, yielding a more accurate target position estimate via the Lagrange multiplier technique. In addition, the estimation performance of the proposed cooperative positioning method is theoretically analyzed based on constrained error perturbation theory, and the asymptotic efficiency of the new estimator is proved. Moreover, the influence of emitter altitude error on positioning accuracy is quantitatively analyzed by applying the same perturbation theory, and the maximum threshold of this error that ensures the constrained solution remains better than the unconstrained one is derived. Simulation results show that the newly proposed method achieves significant cooperative gain.
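The second-stage refinement described above can be illustrated with a generic equality-constrained least-squares step solved in closed form via a Lagrange multiplier (KKT system). This is only a sketch of the technique: the matrices `A`, `b`, `C`, `d` stand in for the paper's actual pseudo-linear model and constraint, which are not reproduced here.

```python
import numpy as np

def constrained_ls(A, b, C, d):
    """Solve min ||A u - b||^2 subject to C u = d in closed form.

    Stationarity: 2 A^T A u + C^T lam = 2 A^T b, plus the constraint C u = d,
    stacked into one linear KKT system and solved directly (no iteration).
    """
    n, m = A.shape[1], C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # discard the multiplier, return the refined estimate
```

For example, projecting b = (1, 2, 3) onto the plane u1 + u2 + u3 = 0 yields (-1, 0, 1), i.e., b minus its mean.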
Ground Penetrating Radar (GPR) image detection currently faces challenges such as low accuracy, false detections, and missed detections. To overcome these challenges, we propose a novel model, GDS-YOLOv8n, for detecting common underground targets in GPR images. The model incorporates the Dilated Residual Reparam Block (DRRB) feature extraction module to achieve enhanced multiscale feature extraction, replacing certain C2f modules in the YOLOv8n architecture. In addition, the space-to-depth Conv downsampling module replaces the Conv modules corresponding to feature maps with a resolution of 320×320 pixels or less. This replacement mitigates information loss during the downsampling of GPR images, particularly for images with limited resolution and small targets. Furthermore, detection performance is enhanced using an auxiliary training module, ensuring performance improvement without increasing inference complexity. The Inner-SIoU loss function refines bounding box predictions by imposing new constraints tailored to GPR image characteristics. Experimental results on real-world GPR datasets demonstrate the effectiveness of the GDS-YOLOv8n model. For six classes of common underground targets, including metal pipes, PVC pipes, and cables, the model achieves a precision of 97.1%, a recall of 96.2%, and a mean average precision at 50% IoU (mAP50) of 96.9%. These results represent improvements of 4.0%, 6.1%, and 4.1%, respectively, over the YOLOv8n model, with notable gains observed when detecting PVC pipes and cables. Compared with YOLOv5n, YOLOv7-tiny, and SSD (Single Shot multibox Detector), our model’s mAP50 is higher by 7.20%, 5.70%, and 14.48%, respectively.
Finally, deploying our model on an NVIDIA Jetson Orin NX embedded system increases the detection speed from 22 to 40.6 FPS after optimization via TensorRT and FP16 quantization, meeting the demands for the real-time detection of underground targets in mobile scenarios.
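The space-to-depth idea behind the downsampling replacement can be shown with a minimal NumPy rearrangement (a sketch of the general operation, not the exact SPD-Conv module): each 2×2 spatial block moves into the channel dimension, so resolution halves without discarding any pixels.

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange (C, H, W) -> (C*block^2, H/block, W/block), losslessly.

    Unlike strided convolution, no pixel is dropped: every sample of each
    block x block spatial patch survives as a separate channel.
    """
    C, H, W = x.shape
    x = x.reshape(C, H // block, block, W // block, block)
    x = x.transpose(0, 2, 4, 1, 3)           # (C, bh, bw, H', W')
    return x.reshape(C * block * block, H // block, W // block)
```

A 1×4×4 input becomes 4×2×2; channel 0 collects the even-row, even-column samples.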
Passive radar plays an important role in early warning and Low–Slow–Small (LSS) target detection. Because the radiation sources of passive radar are uncontrollable, target characteristics are more complex, which makes target detection and identification extremely difficult. In this paper, a passive radar LSS detection dataset (LSS-PR-1.0) is constructed, containing the radar echo signals of four typical sea and air targets, namely helicopters, unmanned aerial vehicles, speedboats, and passenger ships, as well as sea clutter data at low and high sea states, thereby providing data support for radar research. For target feature extraction and analysis, a singular-value-decomposition sea-clutter-suppression method is first adopted to remove the influence of the strong Bragg peak of sea clutter on target echoes. On this basis, ten multidomain feature extraction and analysis methods in four categories are proposed: time-domain features (relative average amplitude), frequency-domain features (spectral features, Doppler waterfall plot, and range-Doppler features), time-frequency-domain features, and motion features (heading difference, trajectory parameters, speed variation interval, speed variation coefficient, and acceleration). Based on actual measurement data, a comparative analysis of the characteristics of the four types of sea and air targets is conducted, summarizing the patterns of the various target characteristics and laying the foundation for subsequent target recognition.
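The SVD-based sea clutter suppression step can be sketched generically: arrange echoes into a slow-time-by-range matrix, zero the dominant singular components (which capture the strong, highly correlated Bragg clutter), and reconstruct. The number of removed components is a tuning choice, not a value taken from the paper.

```python
import numpy as np

def svd_clutter_suppress(X, n_clutter=2):
    """Remove the n_clutter strongest singular components of echo matrix X.

    Strong sea clutter concentrates in the leading singular values; zeroing
    them leaves a residual matrix in which weaker target returns stand out.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = s.copy()
    s[:n_clutter] = 0.0
    return U @ np.diag(s) @ Vt
```

With a rank-1 clutter 100× stronger than a single-cell target, one removed component drops the residual energy by roughly two orders of magnitude while the target's contribution survives.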
To address the challenges associated with the data association and stable long-term tracking of multiple targets in complex environments, this study proposes an innovative end-to-end multitarget tracking model called Track-MT3 based on a transformer network. First, a dual-query mechanism comprising detection and tracking queries is introduced to implicitly perform measurement-to-target data association and enable accurate target state estimation. Subsequently, a cross-frame target alignment strategy is employed to enhance the temporal continuity of tracking trajectories, ensuring consistent target identities across frames. In addition, a query transformation and temporal feature encoding module is designed to improve target motion pattern modeling by adaptively combining target dynamics information at different time scales. During model training, a collective average loss function is adopted to achieve the global optimization of tracking performance, considering the entire tracking process in an end-to-end manner. Finally, the performance of Track-MT3 is extensively evaluated under various complex multitarget tracking scenarios using multiple metrics. Experimental results demonstrate that Track-MT3 exhibits long-term tracking performance superior to that of baseline methods such as MT3. Specifically, Track-MT3 achieves overall performance improvements of 6% and 20% over JPDA and MHT, respectively. By effectively exploiting temporal information, Track-MT3 ensures stable and robust multitarget tracking in complex dynamic environments.
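Stripped of the transformer machinery, cross-frame identity alignment reduces to matching current estimates to previous tracks so identities persist. The greedy nearest-neighbor sketch below (with a made-up gating distance) conveys only that idea; it is emphatically not Track-MT3's learned alignment mechanism.

```python
import numpy as np

def align_ids(prev_pos, curr_pos, prev_ids, gate=2.0):
    """Assign each current estimate the ID of the nearest unclaimed previous
    track within `gate`; otherwise start a fresh identity."""
    prev_pos = np.asarray(prev_pos, float)
    out, used = [], set()
    next_id = max(prev_ids, default=-1) + 1
    for p in np.asarray(curr_pos, float):
        d = np.linalg.norm(prev_pos - p, axis=1)
        for j in np.argsort(d):                 # try closest tracks first
            if j not in used and d[j] < gate:
                out.append(prev_ids[j]); used.add(int(j))
                break
        else:                                    # no track close enough
            out.append(next_id); next_id += 1
    return out
```

Two swapped estimates still recover their original identities, and a far-away estimate spawns a new track.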
Land–sea clutter classification is essential for boosting the target positioning accuracy of skywave over-the-horizon radar. This classification process involves discriminating whether each azimuth-range cell in the Range–Doppler (RD) map lies over land or sea. Traditional deep learning methods for this task require extensive, high-quality, and class-balanced labeled samples, leading to long training periods and high costs. In addition, these methods typically use the clutter of a single azimuth-range cell without considering intra-class and inter-class relationships, resulting in poor model performance. To address these challenges, this study analyzes the correlation between adjacent azimuth-range cells and converts land–sea clutter data from Euclidean space into graph data in non-Euclidean space, thereby incorporating sample relationships. We propose a Multi-Channel Graph Convolutional Network (MC-GCN) for land–sea clutter classification. MC-GCN decomposes graph data from a single channel into multiple channels, each containing a single type of edge and a weight matrix. This approach restricts node information aggregation, effectively reducing node attribute misjudgment caused by data heterogeneity. For validation, RD maps from various seasons, times, and detection areas were selected. Based on radar parameters, data characteristics, and sample proportions, we construct a land–sea clutter original dataset containing 12 different scenes and a land–sea clutter scarce dataset containing 36 different configurations. The effectiveness of MC-GCN is confirmed, with the approach outperforming state-of-the-art classification methods with a classification accuracy of at least 92%.
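The per-channel aggregation idea can be sketched as one forward layer: each channel gets its own adjacency (one edge type) and weight matrix, channel outputs are summed, and a nonlinearity is applied. The normalization and the sum-combination are generic GCN conventions assumed here, not details taken from the paper.

```python
import numpy as np

def norm_adj(A):
    """Symmetric GCN normalization with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A = A + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def mc_gcn_layer(A_list, H, W_list):
    """One multi-channel GCN layer: propagate node features H through each
    single-edge-type channel with its own weights, then sum and ReLU."""
    out = sum(norm_adj(A) @ H @ W for A, W in zip(A_list, W_list))
    return np.maximum(out, 0.0)
```

With two 3-node edge-type channels, 2-dim features, and 4 output units, the layer maps a (3, 2) feature matrix to a nonnegative (3, 4) output.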
In noncontact radar vital sign monitoring, frequency-modulated radars (such as frequency modulated continuous wave and ultra wideband) are more effective than continuous wave radars at distinguishing targets from clutter in terms of distance. Using the range Fourier transform, heartbeat and breathing signals can be extracted from quasi-static targets across various distance intervals, thereby improving monitoring accuracy. However, the commonly used range fast Fourier transform presents certain limitations: the breathing amplitude of the subject may cross a range bin boundary, compromising signal integrity, while breathing movements can cause amplitude modulation of physiological signals, hindering waveform recovery. To address these limitations, we propose an algorithm architecture featuring range tap reconstruction and dynamic demodulation. We tested the algorithm's performance in simulations and experiments for cross-range-bin cases. Simulation results indicate that processing signals crossing range bins with our algorithm improves the signal-to-noise ratio by 17 ± 5 dB. In addition, experiments recorded Doppler heartbeat diagram (DHD) signals from eight subjects, comparing the consistency between the DHD signals and the Ballistocardiogram (BCG). The root mean square error of the C–C interval in the DHD signal relative to the J–J interval in the BCG signal was 21.58 ± 13.26 ms (3.40% ± 2.08%).
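The baseline range-FFT-plus-phase pipeline that this work builds on can be simulated in a few lines. All radar parameters below are invented for illustration, and the cross-bin and amplitude-modulation effects the paper actually tackles are deliberately absent: the target sits squarely in one range bin, so plain phase demodulation at that bin recovers the displacement exactly.

```python
import numpy as np

# Invented parameters: 20 Hz chirp rate, 64 fast-time samples, 4 mm wavelength.
fs, n_chirps, n_samples = 20.0, 400, 64
t = np.arange(n_chirps) / fs
disp = 0.003 * np.sin(2 * np.pi * 0.25 * t)   # 3 mm breathing at 0.25 Hz
lam = 0.004
rng_bin = 10                                   # target fixed in one range bin

# Beat signal per chirp: range tone at rng_bin, phase-modulated by motion.
k = np.arange(n_samples)
beat = np.zeros((n_chirps, n_samples), complex)
for i in range(n_chirps):
    phi = 4 * np.pi * disp[i] / lam            # two-way phase shift
    beat[i] = np.exp(1j * (2 * np.pi * rng_bin * k / n_samples + phi))

rp = np.fft.fft(beat, axis=1)                  # range FFT per chirp
phase = np.unwrap(np.angle(rp[:, rng_bin]))    # slow-time phase at target bin
recovered = phase * lam / (4 * np.pi)          # back to displacement in meters
```

In this idealized setting `recovered` reproduces `disp`; the paper's range tap reconstruction is needed precisely when the motion does not stay inside one bin.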
Amidst the global aging trend and a growing emphasis on healthy living, there is an increasing demand for unobtrusive home health monitoring systems. However, current mainstream detection methods suffer from low privacy trust, poor electromagnetic compatibility, and high manufacturing costs. To address these challenges, this paper introduces a noncontact vital sign collection device using Ultrasonic radar (U-Sodar), comprising hardware based on a three-transmitter, four-receiver Multiple Input Multiple Output (MIMO) architecture and a set of signal processing algorithms. The U-Sodar local oscillator uses frequency division technology with low phase noise and high detection accuracy; the receiver employs front-end direct sampling to simplify the structure and effectively reduce external noise; and the transmitter uses an adjustable PWM direct drive to emit various ultrasonic waveforms, giving the system software-defined characteristics. The signal processing algorithm of U-Sodar adopts a graph processing technique based on signal chord length and accurately recovers the signal phase at a Signal-to-Noise Ratio (SNR) of 5 dB using image filtering followed by reconstruction. Experimental tests on the U-Sodar system demonstrated its anti-interference and penetration capabilities, showing that ultrasonic penetration relies on material porosity rather than intermedium vibration conduction. The minimum measurable displacement for a given SNR with correct demodulation probability is also derived. Measurements of actual human vital signs indicate that U-Sodar can accurately measure respiration and heartbeat at 3.0 m and 1.5 m, respectively, and heartbeat waveforms can be measured within 1.0 m. Overall, the experimental results demonstrate the feasibility and application potential of U-Sodar in noncontact vital sign detection.
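The paper's minimum-measurable-displacement derivation is not reproduced here. As a rough, hedged stand-in, a standard small-error argument gives a phase estimation standard deviation of about 1/sqrt(2·SNR) radians for a tone in additive noise, hence a displacement floor of roughly lam/(4π·sqrt(2·SNR)); the wavelength used below is an assumed ~40 kHz ultrasonic figure, not a value from the paper.

```python
import numpy as np

def d_min(lam, snr_db):
    """Back-of-envelope displacement floor for phase-based sensing.

    Assumes phase std ~ 1/sqrt(2*SNR) rad and two-way phase 4*pi*d/lam,
    so the smallest displacement distinguishable from phase noise is
    d_min ~ lam / (4*pi) / sqrt(2*SNR).
    """
    snr = 10 ** (snr_db / 10.0)
    return lam / (4 * np.pi) / np.sqrt(2 * snr)
```

At an assumed 8.6 mm wavelength and 5 dB SNR this lands in the sub-millimeter range, consistent with resolving breathing-scale chest motion.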
The performance of Synthetic Aperture Radar (SAR) active deception jamming detection based on the interferometric phase is analyzed. Based on the slant-range local fringe frequency probability distributions of a real scene and a false target, the influences of the vertical baseline length, jamming-to-signal ratio, and local fringe frequency estimation window size on the True Positive Rate (TPR) are analyzed. Furthermore, when the False Positive Rate (FPR) is known, the vertical baseline length required for the SAR system to meet the detection probability requirements is analyzed, thereby providing a theoretical basis for the baseline design of the SAR system. Finally, the result of theoretical analysis is verified by simulation. The theoretical analysis and experimental results show that, for a given false alarm probability, as the vertical baseline length, jamming-to-signal ratio, or local fringe frequency estimation window value increases, the detection probability also increases.
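Detection here hinges on estimating the slant-range local fringe frequency inside a window. A minimal windowed-FFT estimator over a 1-D interferometric phase profile (a generic sketch, not the paper's estimator) looks like this:

```python
import numpy as np

def local_fringe_freq(ifg_phase, win=32):
    """Estimate the dominant local fringe frequency (cycles/sample) of a 1-D
    interferometric phase profile from the FFT peak of exp(j*phase) over the
    first `win` samples."""
    seg = np.exp(1j * ifg_phase[:win])
    k = int(np.argmax(np.abs(np.fft.fft(seg))))
    if k > win // 2:          # map upper bins to negative frequencies
        k -= win
    return k / win
```

A linear phase ramp of 0.125 cycles/sample is recovered exactly when it falls on an FFT bin; a real-scene interferogram concentrates near the terrain-induced fringe frequency, while a coherent false target does not, which is what makes the statistic discriminative.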
The ionosphere can distort received signals, degrade imaging quality, and decrease the interferometric and polarimetric accuracies of spaceborne Synthetic Aperture Radars (SARs). Low-frequency systems operating at L-band and P-band are particularly susceptible to such problems. From another viewpoint, low-frequency spaceborne SARs can capture ionospheric structures of different spatial scales over the observed scope, and their echo and image data carry sufficient ionospheric information, offering great potential for high-precision, high-resolution ionospheric probing. This paper reviews the research progress of ionospheric probing based on spaceborne SARs. The technological system of this field is summarized from three aspects: mapping of background ionospheric total electron content, tomography of ionospheric electron density, and probing of ionospheric irregularities. The potential of low-frequency spaceborne SARs for mapping locally refined ionospheric structures and global trends is emphasized, and future development directions are discussed.
Unmanned Aerial Vehicle (UAV)-borne radar technology can solve the problems associated with noncontact vital sign sensing, such as limited detection range, slow moving speed, and difficult access to certain areas. In this study, we mount a 4D imaging radar on a multirotor UAV and propose a UAV-borne radar-based method for sensing vital signs through point cloud registration. Through registration and motion compensation of the radar point cloud, the motion error interference of UAV hovering is eliminated; vital sign signals are then obtained after aligning the human target. Simulation results show that the proposed method can effectively align the 4D radar point cloud sequence and accurately extract the respiration and heartbeat signals of human targets, thereby providing a way to realize UAV-borne vital sign sensing.
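The registration step can be sketched with the classic SVD-based (Kabsch) rigid alignment for corresponding point sets; the paper's pipeline additionally handles noisy 4D radar clouds and correspondence search, which this sketch assumes away.

```python
import numpy as np

def register(P, Q):
    """Find rotation R and translation t minimizing ||R p_i + t - q_i|| for
    corresponding (N, 3) point sets P and Q, via the Kabsch/SVD solution."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Applying a known 90° yaw and a translation to four non-coplanar points, the solver recovers both exactly; in the UAV setting this transform is the hover-motion error to be compensated before extracting the micro-motion of the human target.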
Interrupted Sampling Repeater Jamming (ISRJ) is a type of intra-pulse coherent jamming that can easily generate false targets resembling real ones, posing a severe threat to radar systems. Traditional methods for countering ISRJ are relatively passive and often fail to adapt to evolving jamming techniques, leading to residual jamming effects and signal loss. To improve radar anti-jamming capabilities, a novel scheme integrating “jamming perception, parameter estimation, and jamming suppression” has been developed in this study. The method first uses a bidirectional double sliding window pulse edge detector and a sliding truncated matched filter to extract the ISRJ components of received radar signals and accurately estimate parameters such as the sampling duration and period. The jamming components are then reconstructed and eliminated, allowing effective target detection. Simulation experiments demonstrate that the proposed method effectively counters ISRJ across different modulation modes with almost no loss of signal energy. When the jamming-to-noise ratio is 9 dB, the method boosts the signal-to-jamming ratio by over 33 dB after jamming suppression, ensuring robust anti-ISRJ performance.
These devices are used to extract the ISRJ components of received radar signals and accurately estimate the parameters such as sampling duration and period. The jamming components are then reconstructed and eliminated, allowing for effective target detection. Simulation experiments demonstrate that the proposed method effectively overcomes ISRJ across different modulation modes with almost no loss of signal energy. When the jamming-to-noise ratio is 9 dB, the method boosts the signal-to-jamming ratio by over 33 dB after jamming suppression, ensuring robust anti-ISRJ performance.
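A minimal stand-in for the double-sliding-window idea (the window length `win` and threshold `thresh` below are illustrative, not the paper's settings): compare the mean power in a trailing window against a leading window at every sample, so that large ratios mark the rising and falling edges of the repeated jamming slices:

```python
import numpy as np

def sliding_edge_detect(x, win=16, thresh=4.0):
    """Locate power jumps with a double sliding window.

    For each sample n, the mean power of the trailing window
    [n-win, n) is compared with that of the leading window
    [n, n+win). A front/back ratio above thresh flags a rising
    edge (jamming slice onset); the reverse flags a falling edge.
    """
    p = np.abs(x) ** 2
    rising, falling = [], []
    for n in range(win, len(x) - win):
        back = p[n - win:n].mean() + 1e-12
        front = p[n:n + win].mean() + 1e-12
        if front / back > thresh:
            rising.append(n)
        elif back / front > thresh:
            falling.append(n)
    return rising, falling
```

The spacing between consecutive rising edges then gives a direct estimate of the ISRJ sampling period, and the rising-to-falling gap gives the sampling duration.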
Due to their many advantages, such as simple structure, low transmission power, strong penetration capability, high resolution, and high transmission speed, UWB (Ultra-Wide Band) radars have been widely used for detecting life information in various scenarios. To effectively detect life information, the key is to use radar echo information–processing technology to extract the breathing and heartbeat signals of the involved person from UWB radar echoes. This technology is crucial for determining life information in different scenarios, such as obtaining location information, monitoring and preventing diseases, and ensuring personnel safety. Therefore, this paper introduces a UWB radar and its classification, electromagnetic scattering mechanisms, and detection principles. It also analyzes the current state of radar echo model construction for breathing and heartbeat signals. The paper then reviews existing methods for extracting breathing and heartbeat signals, including time domain, frequency domain, and time–frequency domain analysis methods. Finally, it summarizes research progress in breathing and heartbeat signal extraction in various scenarios, such as mine rescue, earthquake rescue, medical health, and through-wall detection, as well as the main problems in current research and focus areas for future research.
Ultra-WideBand (UWB) radar exhibits strong antijamming capabilities and high penetrability, making it widely used for through-wall human-target detection. Although single-transmitter, single-receiver radar offers the advantages of a compact size and lightweight design, it cannot achieve Two-Dimensional (2D) target localization. Multiple-Input Multiple-Output (MIMO) array radar can localize targets but faces a trade-off between size and resolution and involves longer computation durations. This paper proposes an automatic multitarget detection method based on distributed through-wall radar. First, the echo signal is preprocessed in the time domain and then transformed into the time-frequency domain. Target candidate distance cells are identified using a constant false alarm rate detection method, and candidate signals are enhanced using a filtering matrix. The enhanced signals are then correlated based on vital information, such as breathing, to achieve target matching. Finally, a positioning module is employed to determine the radar’s location, enabling rapid and automatic detection of the target’s location. To mitigate the effect of occasional errors on the final positioning results, a scene segmentation method is used to achieve 2D localization of human targets in through-wall scenarios. Experimental results demonstrate that the proposed method can successfully detect and localize multiple targets in through-wall scenarios, with a computation duration of 0.95 s based on the measured data. In particular, the method is over four times faster than other methods.
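The candidate-cell selection step can be illustrated with a textbook cell-averaging CFAR detector (a generic sketch; the paper does not specify which CFAR variant or parameters it uses — `guard`, `train`, and `pfa` below are assumptions):

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR over a 1D range profile.

    Each cell under test is compared against a threshold scaled
    from the mean of the training cells on both sides, with guard
    cells excluded; alpha is the standard CA-CFAR scaling that
    fixes the false alarm probability for exponential noise.
    """
    n_train = 2 * train
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    hits = []
    for i in range(guard + train, len(power) - guard - train):
        left = power[i - guard - train:i - guard]
        right = power[i + guard + 1:i + guard + train + 1]
        noise = np.concatenate([left, right]).mean()
        if power[i] > alpha * noise:
            hits.append(i)
    return hits
```

Cells returned by the detector become the candidate distance cells that are subsequently enhanced and matched across radar nodes via their respiration signatures.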
Since 2010, the utilization of commercial WiFi devices for contact-free respiration monitoring has garnered significant attention. However, existing WiFi-based respiration detection methods are susceptible to constraints imposed by hardware limitations and require the person to directly face the WiFi device. Specifically, signal reflection from the thoracic cavity diminishes when the body is oriented sideways or with the back toward the device, leading to complexities in respiratory monitoring. To mitigate these hardware-associated limitations and enhance robustness, we leveraged the signal-amplifying potential of Intelligent Reflecting Surfaces (IRS) to establish a high-precision respiration detection system. This system capitalizes on IRS technology to manipulate signal propagation within the environment to enhance signal reflection from the body, finally achieving posture-resilient respiratory monitoring. Furthermore, the system can be easily deployed without prior knowledge of antenna placement or environmental intricacies. Compared with conventional techniques, our experimental results validate that this system markedly enhances respiratory monitoring across various postural configurations in indoor environments.
Traditional Low Probability of Intercept (LPI) array radars that use phased array or Multiple-Input Multiple-Output (MIMO) systems face limitations in terms of controlling radiation energy only at specific angles and cannot achieve energy control over specific areas of range and angle. To address these issues, this paper proposes an LPI waveform design method for Frequency Diverse Array (FDA)-MIMO radar utilizing neural networks. This method jointly designs the transmit waveform and receive beamforming in FDA-MIMO radars to ensure target detection probability while uniformly distributing radar energy across the spatial domain. This minimizes energy directed toward the target, thereby reducing the probability of the radar signal being intercepted. Initially, we formulate an optimization objective function aimed at LPI performance for transmitting waveform design and receiving beamforming by focusing on minimizing pattern matching errors. This function is then used as the loss function in a neural network. Through iterative training, the neural network minimizes this loss function until convergence, resulting in optimized transmit signal waveforms and solving the corresponding receive weighting vectors. Simulation results indicate that our proposed method significantly enhances radar power distribution control. Compared to traditional methods, it shows a 5 dB improvement in beam energy distribution control across nontarget regions of the transmit beam pattern. Furthermore, the receiver beam pattern achieves more concentrated energy, with deep nulls below −50 dB at multiple interference locations, demonstrating excellent interference suppression capabilities.
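The pattern-matching loss at the heart of such designs can be sketched for a plain half-wavelength uniform linear array (the FDA frequency offsets, range dimension, and network training loop are omitted; `n_elem` and the angle grid are illustrative assumptions):

```python
import numpy as np

def beampattern(weights, n_elem=10, n_grid=181):
    """Transmit beampattern |a(theta)^H w|^2 of a ULA on an angle grid."""
    theta = np.linspace(-90.0, 90.0, n_grid)
    n = np.arange(n_elem)
    # steering matrix for half-wavelength element spacing
    A = np.exp(1j * np.pi * np.outer(np.sin(np.deg2rad(theta)), n))
    return theta, np.abs(A @ weights) ** 2

def pattern_match_loss(weights, desired):
    """Mean-squared error between the normalized achieved pattern
    and a desired pattern -- the quantity a network would minimize."""
    _, p = beampattern(weights)
    p = p / p.max()
    return np.mean((p - desired) ** 2)
```

In the paper's scheme, a loss of this form (extended to the FDA-MIMO range-angle plane) drives the iterative training that shapes the transmit energy distribution while the receive weights are solved to preserve detection.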
As one of the most promising next-generation radars, the Moving platform based Distributed Aperture Radar (MDAR) can not only coherently combine distributed apertures to obtain the same detection performance as a single large aperture but also enhance detection and anti-damage capabilities through mobility and flexible deployment. However, time and phase synchronization among the radars must be performed before coherent combination because of internal clock differences and external propagation path differences. Moreover, grating lobes arise because the distance between the radars usually exceeds half a wavelength, which degrades the accuracy of target angle estimation. To obtain Coherent Parameters (CPs), this paper establishes a cognitive framework for MDAR based on a closed-loop structure, and a multi-pulse correlation CP estimation method that accounts for motion conditions is proposed to improve estimation accuracy. Meanwhile, an unambiguous angle estimation method based on array configuration accumulation is proposed that considers the platform motion characteristics. Finally, based on simulation verification and the proposed framework, a prototype 3-node ground Moving platform based Distributed Coherent Aperture Radar (MDCAR) system is designed and experiments are conducted. Compared with a single radar, a maximum signal-to-noise ratio improvement of 14.2 dB is achieved, which can further enhance range detection accuracy. In addition, unambiguous angle estimation is realized under certain conditions. This work is expected to support the research and development of MDCAR.
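For context on the reported gain: under full transmit-and-receive coherence, N nodes ideally provide an N² transmit gain and a further N receive gain, i.e. 10·log10(N³) dB over a single radar, which for N = 3 is about 14.3 dB — close to the 14.2 dB measured by the prototype. A one-line check (the N³ rule is the standard idealized model, not a figure from the paper):

```python
import math

def ideal_coherent_gain_db(n_nodes):
    """Ideal SNR gain of full transmit-and-receive coherence.

    Coherent transmit combining contributes N^2 and coherent
    receive combining a further N, so the ideal gain over a
    single radar is 10*log10(N^3).
    """
    return 10.0 * math.log10(n_nodes ** 3)
```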
Sleep Apnea Hypopnea Syndrome (SAHS) is a common chronic sleep-related breathing disorder that affects individuals’ sleep quality and physical health. This article presents a sleep apnea and hypopnea detection framework based on multisource signal fusion. Integrating millimeter-wave radar micro-motion signals and pulse wave signals of PhotoPlethysmoGraphy (PPG) achieves a highly reliable and light-contact diagnosis of SAHS, addressing the drawbacks of traditional medical methods that rely on PolySomnoGraphy (PSG) for sleep monitoring, such as poor comfort and high costs. This study used a radar and pulse wave data preprocessing algorithm to extract time-frequency information and artificial features from the signals, balancing the accuracy and robustness of sleep-breathing abnormality event detection. Additionally, a deep neural network was designed to fuse the two types of signals for precise identification of sleep apnea and hypopnea events and to estimate the Apnea-Hypopnea Index (AHI) for quantitative assessment of sleep-breathing abnormality severity. Experimental results on a clinical trial dataset from Shanghai Jiaotong University School of Medicine Affiliated Sixth People’s Hospital demonstrated that the AHI estimated by the proposed approach correlates with the gold-standard PSG with a coefficient of 0.93, indicating good consistency. This approach is a promising tool for home sleep-breathing monitoring and preliminary diagnosis of SAHS.
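The AHI itself is simple arithmetic once events have been identified: the number of apnea and hypopnea events per hour of sleep. A minimal helper (the severity cutoffs are the commonly used clinical bands, not values stated in this abstract):

```python
def apnea_hypopnea_index(n_events, sleep_minutes):
    """AHI = (apnea + hypopnea events) per hour of sleep.

    Commonly used bands: < 5 normal, 5-15 mild,
    15-30 moderate, > 30 severe.
    """
    return n_events / (sleep_minutes / 60.0)
```

For example, 24 detected events over a 6-hour recording give an AHI of 4, which would fall in the normal band.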
In recent years, there has been an increasing interest in respiratory monitoring in multiperson environments and simultaneous monitoring of the health status of multiple people. Among the algorithms developed for multiperson respiratory detection, blind source separation algorithms have attracted the attention of researchers because they do not require prior information and are less dependent on hardware performance. However, in the context of multiperson respiratory monitoring, current blind source separation algorithms usually take phase signals as the source signals. This article compares the distance dimension and phase signals under Frequency-Modulated Continuous-Wave (FMCW) radar, calculates the approximation error associated with using the phase signal as the source signal, and verifies the separation effect through simulations; the results indicate that the distance dimension signal is the better choice of source signal. In addition, this article proposes a multiperson respiratory signal separation algorithm based on noncircular complex independent component analysis and analyzes the impact of different respiratory signal parameters on the separation effect. Simulation and experimental measurements show that the proposed method is suitable for detecting multiperson respiratory signals under controlled conditions and can accurately separate respiratory signals when the angle between the two targets relative to the radar is 9.46°.
As a representative of China’s new generation of space-borne long-wavelength Synthetic Aperture Radar (SAR), the LuTan-1A (LT-1A) satellite was launched into a sun-synchronous orbit in January 2022. The SAR onboard the LT-1A satellite operates in the L band and exhibits various earth observation capabilities, including single-polarization, linear dual-polarization, compressed dual-polarization, and quad-polarization observation capabilities. Existing research has mainly focused on LT-1A interferometric data acquisition capabilities and the accuracy evaluation of digital elevation models and displacement measurements; research on the radiometric and polarimetric accuracy of the LT-1A satellite is limited. This article uses tropical rainforest vegetation as a reference to evaluate and analyze the radiometric error and polarimetric stability of the LT-1A satellite in the full polarization observation mode through a self-calibration method that does not rely on artificial calibrators. The experiment demonstrates that the LT-1A satellite has good radiometric stability and polarimetric accuracy, exceeding the recommended specifications of the Committee on Earth Observation Satellites (CEOS). Fluctuations in the Normalized Radar Cross-Section (NRCS) error within 1,000 km of continuous observation are less than 1 dB (3σ), and there are no significant changes in system radiometric errors of less than 0.5 dB (3σ) when observation is resumed within five days. In the full polarization observation mode, the system crosstalk is less than −35 dB, reaching as low as −45 dB. Further, the cross-polarization channel imbalance is better than 0.2 dB and 2°, while the co-polarization channel imbalance is better than 0.5 dB and 10°. The equivalent thermal noise ranges from −42 to −22 dB, and the average equivalent thermal noise of the system is better than −25 dB. The level of thermal noise may increase to some extent with increasing continuous observation duration. Additionally, this study found that the ionosphere significantly affects the quality of the LT-1A satellite polarization data, with a Faraday rotation angle of approximately 5° causing crosstalk of nearly −20 dB. In middle- and low-latitude regions, the Faraday rotation angle commonly ranges from 3° to 20° and can cause polarimetric distortion errors between channels ranging from −21.16 to −8.78 dB. The interference from the atmospheric observation environment is thus considerably greater than the influence of the approximately −40 dB system crosstalk errors. This research carefully assesses the radiometric and polarimetric quality of LT-1A satellite data over dense vegetation in the Amazon rainforest and provides valuable information to industrial users; it therefore holds significant scientific importance and reference value.
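The quoted crosstalk levels are consistent with a common first-order model (an assumption on our part, not a formula given in the abstract): a one-way Faraday rotation by Ω mixes the co-polarized return into the cross-polarized channel with an amplitude ratio of tan Ω, i.e. a leakage of 20·log10(tan Ω) dB. For Ω = 5° this gives about −21.2 dB, matching the "nearly −20 dB" figure, and Ω = 20° gives about −8.78 dB:

```python
import math

def faraday_crosstalk_db(omega_deg):
    """First-order cross-polarization leakage induced by a one-way
    Faraday rotation of omega_deg degrees: 20*log10(tan(omega))."""
    return 20.0 * math.log10(math.tan(math.radians(omega_deg)))
```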
This study proposes a computer vision-assisted millimeter wave wireless channel simulation method incorporating the scattering characteristics of human motions. The aim is to rapidly and cost-effectively generate a training dataset for wireless human motion recognition, thereby avoiding the laborious and cost-intensive efforts associated with physical measurements. Specifically, the simulation process includes the following steps. First, the human body is modeled as 35 interconnected ellipsoids using a primitive-based model, and motion data of these ellipsoids are extracted from videos of human motion. A simplified ray tracing method is then used to obtain the channel response for each snapshot of the primitive model during the motion process. Finally, Doppler analysis is performed on the channel responses of the snapshots to obtain the Doppler spectrograms. The Doppler spectrograms obtained from the simulation can be used to train deep neural networks for real wireless human motion recognition. This study examines the channel simulation and action recognition results for four common human actions (“walking”, “running”, “falling”, and “sitting down”) in the 60 GHz band. Experimental results indicate that the deep neural network trained with the simulated dataset achieves an average recognition accuracy of 73.0% in real-world wireless motion recognition. Furthermore, the recognition accuracy can be increased to 93.75% via unlabeled transfer learning and fine-tuning with a small amount of actual data.
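The final Doppler-analysis step amounts to a short-time Fourier transform over the snapshot sequence of channel responses. A minimal numpy sketch (window and hop sizes are illustrative assumptions):

```python
import numpy as np

def doppler_spectrogram(x, win=64, hop=16):
    """Doppler spectrogram of a slow-time channel response sequence.

    Frames the complex sequence x, applies a Hann window, and takes
    an FFT per frame; rows are Doppler bins (fftshifted), columns
    are time frames.
    """
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.fft.fftshift(np.fft.fft(np.array(frames), axis=1), axes=1).T
```

A scatterer moving at constant radial velocity produces a channel response exp(j2πf_d n/f_s), whose spectrogram concentrates at the Doppler frequency f_d; superposing the 35 ellipsoid contributions yields the micro-Doppler signatures used as training images.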
This study focuses on integrating optical and radar sensors for human pose estimation. Based on the physical correspondence between the continuous-time micromotion accumulation and pose increment, a single-channel ultrawideband radar human-pose incremental estimation scheme is proposed. Specifically, by constructing a spatiotemporal incremental estimation network, using spatiotemporal pseudo-3D convolutional and time-domain-dilated convolutional layers to extract spatiotemporal micromotion features step by step, mapping these features to human pose increments within a time period, and combining them with the initial pose values provided by optics, we can realize a 3D pose estimation of the human body. The measured data results show that fusion attitude estimation achieves an estimation error of 5.38 cm in the original action set and can achieve continuous attitude estimation for the period of walking actions. Comparison and ablation experiments with other radar attitude estimation methods demonstrate the advantages of the proposed method. This study focuses on integrating optical and radar sensors for human pose estimation. Based on the physical correspondence between the continuous-time micromotion accumulation and pose increment, a single-channel ultrawideband radar human-pose incremental estimation scheme is proposed. Specifically, by constructing a spatiotemporal incremental estimation network, using spatiotemporal pseudo-3D convolutional and time-domain-dilated convolutional layers to extract spatiotemporal micromotion features step by step, mapping these features to human pose increments within a time period, and combining them with the initial pose values provided by optics, we can realize a 3D pose estimation of the human body. The measured data results show that fusion attitude estimation achieves an estimation error of 5.38 cm in the original action set and can achieve continuous attitude estimation for the period of walking actions. 
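The increment-plus-initial-value fusion described above amounts to cumulatively summing radar-predicted pose increments onto an optical initial pose. A minimal sketch (the array shapes and function name are illustrative assumptions, not the authors' code):

```python
import numpy as np

def fuse_pose(initial_pose, increments):
    """Reconstruct a 3D pose track from an optical initial pose and
    per-interval pose increments predicted from radar micromotion features.

    initial_pose: (J, 3) joint positions from the optical sensor.
    increments: (T, J, 3) network-predicted per-interval joint displacements.
    Returns (T+1, J, 3) poses via cumulative summation onto the initial pose.
    """
    zero = np.zeros((1,) + initial_pose.shape)  # frame 0 carries no increment
    return initial_pose[None] + np.concatenate([zero, np.cumsum(increments, axis=0)])
```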
Through-wall human pose reconstruction and behavior recognition have enormous potential in fields like intelligent security and virtual reality. However, existing methods for through-wall human sensing often fail to adequately model four-Dimensional (4D) spatiotemporal features and overlook the influence of walls on signal quality. To address these issues, this study proposes an innovative architecture for through-wall human sensing using a 4D imaging radar. The core of this approach is the ST2W-AP fusion network, which is designed using a stepwise spatiotemporal separation strategy. This network overcomes the limitations of mainstream deep learning libraries that currently lack 4D convolution capabilities, which hinders the effective use of multiframe three-Dimensional (3D) voxel spatiotemporal domain information. By preserving 3D spatial information and using long-sequence temporal information, the proposed ST2W-AP network considerably enhances the pose estimation and behavior recognition performance. Additionally, to address the influence of walls on signal quality, this paper introduces a deep echo domain compensator that leverages the powerful fitting performance and parallel output characteristics of deep learning, thereby reducing the computational overhead of traditional wall compensation methods. Extensive experimental results demonstrate that compared with the best existing methods, the ST2W-AP network reduces the average joint position error by 33.57% and improves the F1 score for behavior recognition by 0.51%.
Low-frequency Ultra-WideBand (UWB) radar offers significant advantages in the field of human activity recognition owing to its excellent penetration and resolution. To address the issues of high computational complexity and extensive network parameters in existing action recognition algorithms, this study proposes an efficient and lightweight human activity recognition method using UWB radar based on spatiotemporal point clouds. First, four-dimensional motion data of the human body are collected using UWB radar. A discrete sampling method is then employed to convert the radar images into point cloud representations. Because human activity recognition is a classification problem on time series, this paper combines the PointNet++ network with the Transformer network to propose a lightweight spatiotemporal network. By extracting and analyzing the spatiotemporal features of four-dimensional point clouds, end-to-end human activity recognition is achieved. During the model training process, a multithreshold fusion method is proposed for point cloud data to further enhance the model’s generalization and recognition capabilities. The proposed method is then validated using a public four-dimensional radar imaging dataset and compared with existing methods. The results show that the proposed method achieves a human activity recognition rate of 96.75% while consuming fewer parameters and computational resources, thereby verifying its effectiveness.
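The radar-image-to-point-cloud conversion step can be sketched as threshold-based discrete sampling: voxels above an intensity threshold become points carrying their coordinates and intensity. The threshold ratio and point budget below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def image_to_point_cloud(volume, threshold_ratio=0.2, max_points=1024):
    """Convert a 3D radar intensity volume into an (N, 4) point cloud.

    Each point is (x, y, z, intensity); only voxels whose intensity exceeds
    threshold_ratio * max are kept, capped at the max_points strongest voxels.
    """
    thresh = threshold_ratio * volume.max()
    mask = volume > thresh
    idx = np.argwhere(mask)                      # voxel coordinates, C order
    vals = volume[mask]                          # matching intensities
    order = np.argsort(vals)[::-1][:max_points]  # keep the strongest returns
    return np.concatenate([idx[order].astype(float), vals[order, None]], axis=1)
```

Applying several different `threshold_ratio` values and pooling the resulting clouds would correspond to the multithreshold fusion idea mentioned above.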
Recent research on radar-based human activity recognition has typically focused on activities that move toward or away from the radar in radial directions. Conventional Doppler-based methods can barely describe the true characteristics of nonradial activities, especially static postures or tangential activities, resulting in a considerable decline in recognition performance. To address this issue, a method for recognizing tangential human postures based on sequential images of a Multiple-Input Multiple-Output (MIMO) radar system is proposed. A time sequence of high-quality images is generated to describe the structure of the human body and its dynamic changes, from which spatial and temporal features are extracted to enhance recognition performance. First, a Constant False Alarm Rate (CFAR) algorithm is applied to locate the human target. A sliding window along the slow time axis is then utilized to divide the received signal into sequential frames. Next, a fast Fourier transform and the 2D Capon algorithm are performed on each frame to estimate range, pitch angle, and azimuth angle information, which are fused to create a tangential posture image. These images are connected to form a time sequence of tangential posture images. A modified joint multidomain adaptive threshold-based denoising algorithm is applied to improve image quality by suppressing noise and enhancing the human body outline and structure. Finally, a Spatio-Temporal-Convolution Long Short-Term Memory (ST-ConvLSTM) network is designed to process the sequential images. In particular, the ConvLSTM cell is used to extract continuous image features by combining the convolution operation with the LSTM cell. Moreover, spatial and temporal attention modules are utilized to emphasize intraframe and interframe focus for improving recognition performance.
Extensive experiments show that our proposed method can achieve an accuracy rate of 96.9% in classifying eight typical tangential human postures, demonstrating its feasibility and superiority in tangential human posture recognition.
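The target-localization step above can be illustrated with a textbook one-dimensional cell-averaging CFAR; the guard/training cell counts and false alarm rate below are illustrative, and the paper's detector may differ.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-3):
    """1D cell-averaging CFAR over a power profile.

    For each cell, the noise level is estimated from `train` cells on each
    side (skipping `guard` cells around the cell under test), and the cell is
    declared a detection if it exceeds alpha times that estimate.
    Returns a boolean detection mask.
    """
    n = 2 * train                        # total number of training cells
    alpha = n * (pfa ** (-1.0 / n) - 1)  # CA-CFAR scaling for the desired Pfa
    det = np.zeros(len(power), dtype=bool)
    for i in range(train + guard, len(power) - train - guard):
        lead = power[i - train - guard : i - guard]
        lag = power[i + guard + 1 : i + guard + train + 1]
        noise = (lead.sum() + lag.sum()) / n
        det[i] = power[i] > alpha * noise
    return det
```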
Bistatic Synthetic Aperture Radar (BiSAR) needs to suppress ground background clutter when detecting and imaging ground moving targets. However, due to the spatial configuration of BiSAR, the clutter poses a serious space-time nonstationary problem, which deteriorates the clutter suppression performance. Although Space-Time Adaptive Processing based on Sparse Recovery (SR-STAP) can reduce the nonstationary problem by reducing the number of samples, the off-grid dictionary problem will occur during processing, resulting in a decrease in the space-time spectrum estimation effect. Although most of the typical SR-STAP methods have clear mathematical relations and interpretability, they also have some problems, such as improper parameter setting and complicated operation in complex and changeable scenes. To solve the aforementioned problems, a complex neural network based on the Alternating Direction Method of Multipliers (ADMM) is proposed for BiSAR space-time adaptive clutter suppression. First, a sparse recovery model of the continuous clutter space-time domain of BiSAR is constructed based on Atomic Norm Minimization (ANM) to overcome the off-grid problem associated with the traditional discrete dictionary model. Second, ADMM is used to rapidly and iteratively solve the BiSAR clutter spectral sparse recovery model. Third, according to the iterative and data flow diagrams, the artificial hyperparameter iterative process is transformed into ANM-ADMM-Net. Then, the normalized root-mean-square-error network loss function is set up and the network model is trained with the obtained dataset. Finally, the trained ANM-ADMM-Net architecture is used to quickly process BiSAR echo data, and the space-time spectrum of BiSAR clutter is accurately estimated and efficiently suppressed. The effectiveness of this approach is validated through simulations and airborne BiSAR clutter suppression experiments.
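The kind of ADMM iteration that ANM-ADMM-Net unrolls can be illustrated on a simpler sparse recovery problem (l1-regularized least squares rather than atomic norm minimization, purely for brevity). In an unrolled network, the iteration count becomes the layer depth and the scalar parameters become learnable per-layer weights.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm (the nonlinearity each layer keeps)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def admm_lasso(A, y, lam=0.1, rho=1.0, n_iter=50):
    """ADMM for min_x 0.5||Ax - y||^2 + lam * ||x||_1.

    In an unrolled network, n_iter is the layer count and (lam, rho)
    become learnable parameters of each layer.
    """
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    inv = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Aty = A.T @ y
    for _ in range(n_iter):
        x = inv @ (Aty + rho * (z - u))       # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)  # sparsifying step
        u = u + x - z                         # dual (multiplier) update
    return z
```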
In practical applications, the field of view and computation resources of an individual sensor are limited, and the development and application of multisensor networks provide more possibilities for solving challenging target tracking problems. Compared with multitarget tracking, group target tracking encounters more challenging data association and computation problems due to factors such as the proximity of targets within groups, coordinated motions, the large number of involved targets, and group splitting and merging, which are further complicated in multisensor fusion systems. For group target tracking under sensors with limited fields of view, we propose a scalable multisensor group target tracking method via belief propagation. Within the Bayesian framework, the method considers the uncertainty of the group structure, constructs the decomposition of the joint posterior probability density of the multisensor group targets and the corresponding factor graph, and efficiently solves the data association problem by running belief propagation on the devised factor graph. Furthermore, the method has excellent scalability and low computational complexity, scaling linearly in the numbers of sensors, preserved group partitions, and sensor measurements, and quadratically in the number of targets. Finally, simulation experiments compare the performance of different methods on GOSPA and OSPA(2), verifying that the proposed method can seamlessly track grouped and ungrouped targets, fully utilize the complementary information among sensors, and improve tracking accuracy.
Synthetic Aperture Radar
Three-Dimensional (3D) Synthetic Aperture Radar (SAR) holds great potential for applications in fields such as mapping and disaster management, making it an important research focus in SAR technology. To advance the application and development of 3D SAR, especially by reducing the number of observations or antenna array elements, the Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS) has pioneered the development of the full-polarimetric Microwave Vision 3D SAR (MV3DSAR) experimental system. This system is designed to serve as an experimental platform and a source of data for microwave vision SAR 3D imaging studies. This study introduces the MV3DSAR experimental system along with its full-polarimetric SAR dataset. It also proposes a full-polarimetric data processing scheme that covers essential steps such as polarization correction, polarization coherent enhancement, microwave vision 3D imaging, and 3D fusion visualization. The results from the 3D imaging dataset confirm the full-polarimetric capabilities of the MV3DSAR experimental system and validate the effectiveness of the proposed processing method. The full-polarimetric unmanned aerial vehicle-borne array interferometric SAR dataset, released through this study, offers enhanced data resources for advancing 3D SAR imaging research.
Range Cell Migration Correction (RCMC) is an important step in the estimation of moving target parameters and imaging of targets in high-resolution Synthetic Aperture Radar (SAR) systems. When the motion of a target or platform becomes complex, traditional low-order RCMC methods may no longer be suitable. Meanwhile, existing high-order RCMC methods based on parameterization are susceptible to issues such as model mismatch and high computational complexity. Additionally, their performance may decrease significantly under a low Signal-to-Noise Ratio (SNR). This research utilizes an Extended Kalman Filter (EKF) to track the phase responsible for RCM and develops a phase compensation function to achieve RCMC. The proposed approach is model-independent and can track high-order components in the phase, thereby enabling high-order RCMC of moving targets in SAR. In addition, the EKF can filter signals during phase tracking to effectively lower the SNR threshold of the proposed method. Thus, this method offers broad applicability, moderate computational complexity, and the ability to correct non-negligible high-order residual range cell migrations, thereby distinguishing it from traditional methods. This study thoroughly explains the principles and mathematical model behind the proposed method, demonstrating its effectiveness and superiority through multiple sets of simulations and measured data processing.
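The idea of tracking an RCM-inducing phase with an EKF can be sketched with a constant-acceleration phase-state model; the state layout, noise settings, and measurement model below are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def ekf_phase_track(signal, dt, q=1e-4, r=1e-2):
    """Track the instantaneous phase of a unit-amplitude complex signal.

    State s = [phase, phase rate, phase acceleration]; the measurement is the
    complex sample itself, h(s) = [cos(phase), sin(phase)], linearized at each
    step. Returns the tracked (unwrapped) phase history in radians, from which
    a compensation function exp(-1j * phase) could be formed.
    """
    F = np.array([[1.0, dt, 0.5 * dt**2], [0.0, 1.0, dt], [0.0, 0.0, 1.0]])
    Q = q * np.eye(3)
    R = r * np.eye(2)
    s = np.array([np.angle(signal[0]), 0.0, 0.0])
    P = np.eye(3)
    phases = []
    for z in signal:
        s = F @ s                      # predict
        P = F @ P @ F.T + Q
        c, si = np.cos(s[0]), np.sin(s[0])
        H = np.array([[-si, 0.0, 0.0], [c, 0.0, 0.0]])  # Jacobian of h(s)
        innov = np.array([z.real - c, z.imag - si])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        s = s + K @ innov              # update
        P = (np.eye(3) - K @ H) @ P
        phases.append(s[0])
    return np.array(phases)
```

Because the state carries the phase derivatives, the same filter follows linear and quadratic phase histories without a parametric signal model.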
The success of deep supervised learning in Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) relies on a large number of labeled samples. However, label noise often exists in large-scale datasets, which strongly influences network training. This study proposes loss curve fitting-based label noise uncertainty modeling and a noise uncertainty-based correction method. The loss curve serves as a discriminative feature for modeling label noise uncertainty using an unsupervised fuzzy clustering algorithm. Then, according to this uncertainty, the sample set is divided into different subsets: the noisy-label set, clean-label set, and fuzzy-label set, which are weighted differently in the training loss to correct label noise. Experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset prove that our method can deal with varying ratios of label noise during network training and correct label noise effectively. When the training dataset contains a relatively small ratio of label noise (40%), the proposed method corrects 98.6% of these labels and trains the network to 98.7% classification accuracy. Even when the proportion of label noise is large (80%), the proposed method corrects 87.8% of the label noise and trains the network to 82.3% classification accuracy.
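The unsupervised fuzzy-clustering step can be illustrated with a two-cluster fuzzy C-means over per-sample loss values, with membership thresholds splitting the data into clean, noisy, and fuzzy subsets. The thresholds and fuzzifier below are illustrative, not the paper's settings.

```python
import numpy as np

def fcm_clean_membership(losses, n_iter=100, m=2.0):
    """2-cluster fuzzy C-means on scalar per-sample losses.

    Returns each sample's membership in the low-loss ("clean") cluster.
    """
    x = np.asarray(losses, dtype=float)
    centers = np.array([x.min(), x.max()])       # init: clean vs. noisy
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u = u / u.sum(axis=1, keepdims=True)     # fuzzy memberships
        centers = (u**m * x[:, None]).sum(0) / (u**m).sum(0)
    return u[:, np.argmin(centers)]

def split_by_uncertainty(losses, hi=0.9, lo=0.1):
    """Partition sample indices into clean / noisy / fuzzy subsets."""
    clean_u = fcm_clean_membership(losses)
    clean = np.where(clean_u >= hi)[0]
    noisy = np.where(clean_u <= lo)[0]
    fuzzy = np.where((clean_u > lo) & (clean_u < hi))[0]
    return clean, noisy, fuzzy
```

The three subsets would then enter the training loss with different weights (e.g., full weight for clean samples, corrected or down-weighted labels for the others).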
Radar Countermeasure Technique
Spaceborne Synthetic Aperture Radar (SAR) systems are often subject to strong electromagnetic interference, resulting in degraded imaging quality. However, existing image domain-based interference suppression methods are prone to image distortion and loss of texture detail information, among other difficulties. To address these problems, this paper proposes a method for suppressing active suppression interference in spaceborne SAR images based on perceptual learning of regional feature refinement. First, an active suppression interference signal and image model is established in the spaceborne SAR image domain. Second, a high-precision interference recognition network based on regional feature perception is designed to extract the active suppression interference pattern features of the involved SAR image using an efficient channel attention mechanism, resulting in effective recognition of the interference region of the SAR image. Third, a multivariate regional feature refinement interference suppression network is constructed based on joint learning of the SAR image and suppression interference features. The network slices the SAR image into multivariate regions and adopts multi-module collaborative processing of the suppression interference features on these regions to realize refined suppression of the active suppression interference of the SAR image under complex conditions. Finally, a simulation dataset of SAR image active suppression interference is constructed, and the evaluated Sentinel-1 data are used for experimental verification and analysis.
The experimental results show that the proposed method can effectively recognize and suppress various typical active suppression interferences in spaceborne SAR images.
Achieving robust joint utilization of multidomain characteristics and deep-network features while maintaining high jamming-recognition accuracy with limited samples is challenging. To address this issue, this paper proposes a multidomain characteristic-guided multimodal contrastive recognition method for active radar jamming. The method first thoroughly extracts the multidomain characteristics of active jamming and then designs an optimization unit to automatically select effective characteristics and generate a text modality imbued with implicit expert knowledge. The text modality and the corresponding time-frequency transformation image are separately fed into text and image encoders to construct multimodal feature pairs and map them to a high-dimensional space for modal alignment. The text features are used as anchors that guide the time-frequency image features to aggregate around them through contrastive learning, optimizing the image encoder’s representation capability and achieving tight intraclass and well-separated interclass distributions of active jamming. Experiments show that compared with existing methods, which directly combine multidomain characteristics and deep-network features, the proposed guided-joint method achieves differential feature processing, thereby enhancing the discriminative and generalization capabilities of recognition features. Moreover, under extremely small-sample conditions (2 to 3 training samples for each type of jamming), the accuracy of our method is 9.84% higher than that of comparative methods, proving its effectiveness and robustness.
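The text-anchored contrastive alignment can be sketched with a standard symmetric InfoNCE objective over matched text/image feature pairs; this is the generic contrastive loss, not necessarily the paper's exact objective.

```python
import numpy as np

def info_nce(text_feats, image_feats, temperature=0.07):
    """Symmetric InfoNCE loss between matched text/image feature pairs.

    Row i of each matrix is one jamming sample; matched pairs share an index.
    Text features act as anchors that image features are pulled toward.
    """
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    v = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    logits = (v @ t.T) / temperature          # image-to-text similarities
    labels = np.arange(len(logits))

    def xent(l):
        # cross-entropy with the matched pair as the positive class
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each image embedding toward its own text anchor and pushes it away from the anchors of other jamming types.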
Interrupted Sampling Repeater Jamming (ISRJ) is a type of intrapulse coherent jamming that can form multiple realistic false targets that lead or lag behind the actual target, severely affecting radar detection; countering it is a hotspot of current research on electronic counter-countermeasures. To address this problem, an anti-ISRJ method based on an intrapulse frequency-coded joint Frequency Modulation (FM) slope agile waveform is proposed in this paper. In this method, the radar first transmits an intrapulse frequency-coded joint FM slope agile signal to improve the mutual coverability of subpulses by manipulating subpulse center frequency and FM slope agility. Next, the echo signal is divided into several slices according to the subpulse timing of the transmitted signal. Then, the Fuzzy C-Means (FCM) algorithm is used to classify the echo slices. Finally, the interference is suppressed via joint fractional-domain and time-domain filtering. Simulation results show that the FCM-based method can identify 100% of the interfered echo slices in a jammer synchronous sampling scenario when the Signal-to-Noise Ratio (SNR) is greater than −2.5 dB and the Jamming-to-Signal Ratio (JSR) is greater than 5 dB. For high JSRs and low SNRs, the proposed method effectively reduces target energy loss and suppresses the range sidelobes generated by residual interference. Moreover, the target detection probability after interference suppression exceeds 90% when JSR = 50 dB.
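The slice-classification step can be illustrated with a minimal two-cluster fuzzy C-means on one-dimensional slice features. Per-slice energy is our stand-in feature here; the paper's actual feature set for discriminating interfered slices is richer.

```python
import numpy as np

def fcm_two_cluster(x, m=2.0, n_iter=50, seed=0):
    """Minimal two-cluster fuzzy C-means on 1-D slice features.
    Returns the membership matrix U of shape (n, 2) and the two centers.
    (Illustrative sketch with a toy feature, not the paper's pipeline.)"""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    u = rng.random((len(x), 2))
    u /= u.sum(axis=1, keepdims=True)               # memberships sum to 1 per slice
    for _ in range(n_iter):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)    # standard FCM membership update
    return u, centers
```

Slices whose membership concentrates in the high-energy cluster would then be flagged as interfered and handed to the filtering stage.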
In the context of counter-reconnaissance against airborne interferometers, this study proposes a jamming method designed to disrupt the parameter measurement capabilities of interferometers by generating distributed signals based on an interrupted-sampling repeating technique. An emitter and a transmitting jammer are combined to form a distributed jamming system. The transmitting jammer samples the emitter signal and transmits the repeating signal to an interferometer. A quasi-synchronization constraint is established according to the change in the positional relation between the airborne interferometer and the jamming system. Additionally, a model for the superposition of distributed signals is provided. Then, the mathematical principle underlying distributed signal jamming is expounded according to the pulse spatial and temporal parameter measurement using the interferometer system. Moreover, the influence of various signal parameters on the jamming effect is analyzed to propose a principle for distributed signal design. Simulation and darkroom experiments show that the proposed method can effectively disrupt the accurate measurement of the pulse spatial domain and time domain parameters, such as azimuth-of-arrival, pulse width, and repetition interval.
Radar Signal and Data Processing
In multichannel adaptive radar target detection, diverse nonhomogeneous background factors can cause considerable outlier interference, making it challenging to meet the requirements of independent and identically distributed training data. Current methods for screening training data rely on prior knowledge of the number of outliers, often leading to poor performance in real-world scenarios where this number is usually unknown. This paper addresses these issues by focusing on adaptive training data screening when the number of outliers is unknown. First, the outlier set is estimated using maximum likelihood estimation, assuming known covariance matrices of clutter and noise. In particular, the training data is initially ranked based on the generalized inner product of each range cell data, approximately transforming the maximum likelihood estimation of the outlier set to the estimation of the number of outliers. Second, a fast maximum likelihood estimation algorithm is employed to calculate the unknown covariance matrix, and an adaptive screening approach is designed for scenarios with an unspecified number of outliers. Furthermore, to address the adverse effects of outliers on ranking performance, a normalized generalized inner product form is devised utilizing the normalized sampling covariance matrix. This form is subsequently incorporated into an iterative estimation procedure to improve the adaptive screening accuracy of training data. Simulation results demonstrate that the screening accuracy of the normalized generalized inner product exceeds that of the generalized inner product. Moreover, through even a small number of reiterations, maintaining a consistent enhancement in terms of the Normalized Signal-to-Interference Ratio (NSIR) is still possible. Compared with existing methods, the proposed algorithm considerably improves screening performance, especially when the number of outliers is unknown. 
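The Generalized Inner Product (GIP) ranking at the heart of the screening step can be sketched as follows. This is a simplified illustration with a known covariance matrix; the paper replaces it with a fast maximum-likelihood estimate and a normalized GIP variant.

```python
import numpy as np

def gip_rank(cells, R):
    """Rank training range cells by the Generalized Inner Product
    x^H R^{-1} x; larger values indicate more outlier-like cells.
    (Sketch only: assumes the covariance matrix R is known.)"""
    R_inv = np.linalg.inv(R)
    gip = np.array([float(np.real(x.conj() @ R_inv @ x)) for x in cells])
    order = np.argsort(gip)[::-1]   # most outlier-like first
    return gip, order
```

Given this ranking, estimating the outlier *set* reduces to estimating how many cells from the top of the list to discard, which is the reformulation the paper exploits.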
Airborne radar receivers that utilize subarray processing face challenges owing to the complex space-time coupling distribution caused by grating-lobe clutter. This results in multiple performance notches in the main beam, which severely affects target detection performance. To address this issue, we analyze the characteristics of grating-lobe clutter distribution in subarray processing and propose an approach for space-time clutter suppression based on the design of a receiving subarray beam pattern. Our approach leverages an overlapping subarray scheme to form wide nulls, through beam pattern design, in the regions between subarrays where grating-lobe clutter is prevalent. This design facilitates grating-lobe clutter pre-filtering between subarrays. Furthermore, we develop a subarray-level space-time processor that avoids grating-lobe clutter coupling diffusion in the space-time two-dimensional plane by performing clutter pre-filtering within each subarray. This strategy enhances clutter suppression and moving-target-detection capabilities. Simulation results verify that the proposed method can remarkably improve the output signal-to-clutter-plus-noise ratio loss performance in grating-lobe clutter regions.
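Wide-null beam pattern design of the kind used above can be illustrated with a generic quadratically constrained synthesis: minimize the response over a sector of steering vectors subject to unit gain at the look direction. This is our textbook sketch, not the paper's subarray-specific design; `delta` is a loading factor we introduce for numerical conditioning.

```python
import numpy as np

def wide_null_weights(a0, null_steerings, delta=1e-3):
    """Minimize ||C^H w||^2 + delta*||w||^2 subject to w^H a0 = 1, where the
    columns of C = null_steerings span the sector to be suppressed.
    (Generic wide-null synthesis sketch, not the paper's method.)"""
    C = null_steerings
    P = np.linalg.inv(C @ C.conj().T + delta * np.eye(len(a0)))
    w = P @ a0
    return w / (a0.conj() @ P @ a0)   # enforce unit gain at the look direction
```

Sampling the sector densely enough widens the null, which is what makes the pre-filtering robust to the spread of grating-lobe clutter.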
Real Aperture Radar (RAR) observes wide-scope target information by scanning its antenna. However, because of the limited antenna size, the angular resolution of RAR is much lower than its range resolution. Angular super-resolution methods can enhance the angular resolution of RAR by inverting the low-rank steering matrix based on the convolution relationship between the antenna pattern and target scattering. Because of the low-rank characteristics of the antenna steering matrix, traditional angular super-resolution methods suffer from manual parameter selection and high computational complexity; in particular, they achieve poor angular resolution at low signal-to-noise ratios. To address these problems, an iterative adaptive approach for angular super-resolution imaging of scanning RAR, named IAA-Net, is proposed by combining the traditional Iterative Adaptive Approach (IAA) with a deep network framework. First, the angular super-resolution problem for RAR is transformed into an echo autocorrelation matrix inversion problem to mitigate the ill-posedness of the inversion. Second, a learnable repairing matrix is introduced into the IAA procedure to combine the IAA algorithm with the deep network framework. Finally, the echo autocorrelation matrix is updated via iterative learning to improve the angular resolution. Simulation and experimental results demonstrate that the proposed method avoids manual parameter selection and reduces computational complexity, and that it provides high angular resolution under low signal-to-noise ratios owing to the learning ability of the deep network.
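The classical IAA recursion that IAA-Net unrolls can be sketched for a single snapshot as follows. This is a textbook IAA implementation, not the network itself: the learnable repairing matrix is omitted, and the small diagonal loading term is our addition for numerical stability.

```python
import numpy as np

def iaa_spectrum(A, y, n_iter=10):
    """Classical Iterative Adaptive Approach on a scan grid.
    A: (M, K) steering matrix; y: (M,) snapshot; returns grid powers p (K,).
    (Sketch of the recursion unrolled by IAA-Net, without the learned matrix.)"""
    M, K = A.shape
    p = np.abs(A.conj().T @ y) ** 2 / M ** 2          # matched-filter initialization
    for _ in range(n_iter):
        R = (A * p) @ A.conj().T + 1e-9 * np.eye(M)   # covariance from current spectrum
        R_inv = np.linalg.inv(R)
        num = A.conj().T @ (R_inv @ y)
        den = np.sum(A.conj() * (R_inv @ A), axis=0)  # a_k^H R^{-1} a_k per grid point
        p = np.abs(num / den) ** 2                    # amplitude update per grid point
    return p
```

Each iteration rebuilds the autocorrelation matrix from the current spectrum estimate; in IAA-Net that update step is made learnable.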
Due to the short wavelength of millimeter-waves, active electrical scanning millimeter-wave imaging systems require large imaging scenes and high resolutions in practical applications. These requirements lead to a large uniform array size and a highly complex feed network if the Nyquist sampling theorem is to be satisfied. Accordingly, the system faces contradictions among imaging accuracy, imaging speed, and system cost. To this end, a novel Credible Bayesian Inference of near-field Sparse Array Synthesis (CBI-SAS) algorithm is proposed under the framework of sparse Bayesian learning. The algorithm optimizes the complex-valued excitation weights based on Bayesian inference in a sparse manner, thereby obtaining the full posterior Probability Density Function (PDF) of these weights. This enables the algorithm to utilize higher-order statistical information to obtain the optimal values, confidence intervals, and confidence levels of the excitation weights. In the Bayesian inference, to synthesize the desired beam pattern with a small number of array elements, a heavy-tailed Laplace sparse prior is introduced on the excitation weights. However, because this prior is not conjugate to the reference-pattern data likelihood, the prior model is encoded in a hierarchical Bayesian manner so that the full posterior distribution can be represented in closed form. To avoid the high-dimensional integral in the full posterior distribution, a variational Bayesian expectation-maximization method is employed to calculate the posterior PDF of the excitation weights, enabling reliable Bayesian inference. Simulation results show that compared with conventional sparse array synthesis algorithms, the proposed algorithm achieves a sparser array, a smaller normalized mean square error, and higher accuracy in matching the desired pattern. In addition, based on measured raw data from near-field 1D electrical scanning and 2D planar electrical scanning, an improved 3D time-domain algorithm is applied for 3D image reconstruction. The results verify that the proposed CBI-SAS algorithm can guarantee imaging quality while reducing system complexity.
Academic Information
Vortex Electromagnetic Waves (VEMWs) have unique wavefront phase modulation characteristics. As a new degree of freedom in the diversity of radar transmitters, the VEMW Radar (VEMWR) provides Radar Cross-Section (RCS) diversity and improves signal- and information-processing dimensions and performance. The detection and imaging performance of VEMWR has been verified in various radar systems. This article focuses on the application background of forward-looking radar imaging and proposes a time-division multiple-mode scanning imaging method based on a Uniform Circular Array (UCA) system with multiple transmitters and a single receiver at the UCA center. First, we establish the forward-looking VEMWR imaging mode and the corresponding signal model. Next, an improved three-Dimensional (3D) back-projection and range-Doppler algorithm is proposed, which utilizes the magnitude difference of multimode VEMWs at various elevation angles, the phase difference at different azimuth angles, and the Doppler effect resulting from the relative motion of the radar and target to achieve 3D imaging of the target. As the elevation angle increases, the beam pattern gain of high-mode VEMWs decreases sharply due to the energy divergence of the VEMW. The proposed method remains stable at both low and high elevation angles by exploiting the energy distribution of multiple modes in the spatial domain. Imaging results of point targets reveal that the normalized gain of the target-imaging results is equivalent at low and high elevation angles within the multimode VEMW field of view. The proposed method is validated through experiments with an aircraft target; the imaging results verify that it can accurately reconstruct the 3D structure of complex targets.
To improve the accuracy of Direction Of Arrival (DOA) estimation in Multiple Input Multiple Output (MIMO) radar systems under unknown mutual coupling, we propose a mutual coupling calibration and DOA estimation algorithm based on Sparse Learning via Iterative Minimization (SLIM). The proposed algorithm utilizes the spatial sparsity of target signals and estimates the spatial pseudo-spectra and the mutual coupling matrices of MIMO arrays through cyclic optimization. Moreover, it is hyperparameter-free and guarantees convergence. Numerical examples demonstrate that for MIMO radar systems under unknown mutual coupling conditions, the proposed algorithm can accurately estimate the DOA of targets with small angle separations and relatively high Signal-to-Noise Ratios (SNRs), even with a limited number of samples. In addition, low DOA estimation errors are achieved for targets with large angle separations and small sample sizes, even under low-SNR conditions.
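A bare-bones SLIM-style cycle can be sketched as below, alternating a sparsity-reweighted spectrum update with a noise-power update. The joint mutual-coupling calibration step of the paper is omitted, and the sparsity exponent `q` and small numerical floors are our choices for this illustration.

```python
import numpy as np

def slim_spectrum(A, y, q=0.5, n_iter=30):
    """Minimal SLIM-style cyclic optimization: alternately update the sparse
    spatial spectrum s and the noise power eta from a single snapshot y.
    (Sketch only; the paper also calibrates the mutual coupling matrix.)"""
    M, K = A.shape
    s = A.conj().T @ y / M                   # matched-filter initialization
    eta = 1e-2
    for _ in range(n_iter):
        p = np.abs(s) ** (2.0 - q) + 1e-12   # sparsity-promoting reweighting
        R = (A * p) @ A.conj().T + eta * np.eye(M)
        s = p * (A.conj().T @ np.linalg.solve(R, y))
        eta = max(np.linalg.norm(y - A @ s) ** 2 / M, 1e-9)
    return s
```

Because both `s` and `eta` are re-estimated each cycle, no user-tuned regularization parameter is needed, which mirrors the hyperparameter-free property highlighted above.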


  • EI
  • Scopus
  • DOAJ
  • JST
  • CSCD
  • CSTPCD
  • CNKI
  • Chinese Core Journals