Ground terrain classification using Polarimetric Synthetic Aperture Radar (PolSAR) is a research hotspot in the intelligent interpretation of SAR images. To further promote research in this field, this paper organizes and releases AIR-PolSAR-Seg-2.0, a polarimetric SAR ground terrain classification dataset for large-scale complex scenes. The dataset comprises three L1A-level complex SAR images of the Gaofen-3 satellite from different regions, with a spatial resolution of 8 m. It includes four polarization modes (HH, HV, VH, and VV) and covers six typical ground terrain categories: water bodies, vegetation, bare land, buildings, roads, and mountains. The dataset is characterized by large-scale complex scenes, diverse strong and weak scattering, irregularly distributed boundaries, varied category scales, and an unbalanced sample distribution. To facilitate experimental verification, the three complete SAR images are cropped into 24,672 slices of 512×512 pixels, on which a series of common deep learning methods is evaluated. The experimental results show that DANet, based on a dual-channel self-attention mechanism, performs best, with a mean Intersection over Union (mIoU) of 85.96% on amplitude data and 87.03% on amplitude-phase fusion data. The dataset and benchmark results should help other scholars carry out further research on polarimetric SAR ground terrain classification.
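The 512×512 slicing step can be sketched as follows (an illustrative Python reimplementation, not the authors' preprocessing code; zero-padding of partial edge tiles is our assumption):

```python
import numpy as np

def tile_image(img: np.ndarray, patch: int = 512, stride: int = 512):
    """Cut a large 2-D (or multi-channel) scene into patch x patch tiles.

    Edge rows/columns that do not fill a whole tile are zero-padded so
    every pixel of the scene is covered.
    """
    h, w = img.shape[:2]
    ph = -(-h // stride) * stride  # ceil to a multiple of the stride
    pw = -(-w // stride) * stride
    pad = [(0, ph - h), (0, pw - w)] + [(0, 0)] * (img.ndim - 2)
    padded = np.pad(img, pad)
    tiles = []
    for r in range(0, ph, stride):
        for c in range(0, pw, stride):
            tiles.append(padded[r:r + patch, c:c + patch])
    return tiles

# Toy scene: 4 polarization channels (HH, HV, VH, VV) as the last axis.
scene = np.zeros((1100, 900, 4), dtype=np.float32)
tiles = tile_image(scene)
print(len(tiles), tiles[0].shape)  # 3 x 2 = 6 tiles of 512 x 512 x 4
```

With non-overlapping 512-pixel strides, a 1100×900 scene yields a 3×2 grid of tiles; the real dataset's 24,672 slices come from three much larger scenes.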
With the rapid development of electronic technology, the electromagnetic environment is becoming increasingly complex. For instance, adaptive beamforming cannot suppress main-lobe jammers for traditional phased array radars; therefore, developing measures to tackle this common problem is an urgent need in radar technology. This study addresses the problem of main-lobe deceptive jammer suppression using space-time multidimensional coding. The first step is to design a three-dimensional phase coding scheme applicable across transmit channels, pulses, and subpulses. A Doppler division multiple access technique is employed at the receiver to separate the transmit signals. To solve the problem of waveform misalignment caused by high-speed moving targets, a novel approach is proposed to estimate the compensation index according to differences in beamforming energy. Subsequently, a dual-phase compensation method that leverages the phase differences between the main-lobe deceptive jammers and the target is proposed; this method can distinguish the true target, pulse-delayed jammers, and rapidly generated jammers in the transmit spatial frequency domain. Moreover, spatial filtering is applied to suppress all the main-lobe deceptive jammers by designing an appropriate transmit–receive weight vector. Additionally, an optimization problem aiming to maximize the output Signal-to-Interference-plus-Noise Ratio (SINR) is formulated to address the problem of performance degradation due to the direction of arrival errors. Further, to solve this problem, an alternating optimization method is utilized to obtain the optimized weight vector and transmit and receive coding coefficients iteratively to improve the SINR. Simulation results demonstrate that the proposed method suppresses the main-lobe deceptive jammers more effectively than other radar frameworks. 
Specifically, compared to the conventional multiple-input multiple-output radar, the proposed method achieves an SINR improvement of 34 dB in the presence of four main-lobe deceptive jammers.
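The Doppler division multiple access idea behind the transmit-signal separation can be illustrated with a minimal numpy sketch (the channel count and CPI length are arbitrary choices, not the paper's parameters):

```python
import numpy as np

M, K = 4, 64          # transmit channels, pulses in a CPI
k = np.arange(K)

# DDMA coding: channel m gets a pulse-to-pulse phase ramp of 2*pi*m/M,
# which offsets its slow-time spectrum by m/M of the PRF.
codes = np.exp(1j * 2 * np.pi * np.outer(np.arange(M), k) / M)  # (M, K)

# Superposed slow-time return from all channels (stationary scatterer).
rx = codes.sum(axis=0)

# Slow-time FFT: each channel lands in its own Doppler bin k = m*K/M,
# so the receiver can separate the transmit signals by Doppler filtering.
spec = np.abs(np.fft.fft(rx))
bins = np.sort(np.argsort(spec)[-M:])
print(bins)  # [ 0 16 32 48]
```

Each transmit channel occupies a distinct, evenly spaced Doppler bin, which is what allows the receiver to demultiplex the channels before the subsequent jammer discrimination steps.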
Most existing specific emitter identification technologies rely on supervised learning, making them unsuitable for scenarios with missing labels caused by factors such as the acquisition environment (e.g., weather conditions, terrain, obstacles, and interference sources), device performance (e.g., radar resolution, signal processing capabilities, and hardware failures), and annotator expertise. In this study, a weakly labeled specific emitter identification algorithm based on the Weakly Supervised War-KAN (WSW-KAN) network is proposed. First, a WSW-KAN baseline network is constructed by integrating the unique learnable edge functions of the KAN network with the multiresolution analysis of the wavelet function. The weakly labeled dataset is then divided into a small labeled subset and a large unlabeled subset, with the labeled subset used for initial model training. Finally, based on the pretrained model, Adaptive Pseudo-Label Weighted Selection (APLWS) is used to extract features from the unlabeled data via contrastive learning, followed by iterative training, thereby effectively improving the generalization capability of the model. Experimental validation on a dataset of real acquired radar signals demonstrates that the proposed algorithm achieves a recognition accuracy of approximately 95% for specific emitters while maintaining high efficiency, a small parameter scale, and strong adaptability, making it suitable for practical applications.
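The pseudo-label selection stage might look roughly like the following sketch (the per-class adaptive threshold is our stand-in for the paper's APLWS criterion; all names are illustrative):

```python
import numpy as np

def select_pseudo_labels(probs: np.ndarray, base_tau: float = 0.9):
    """Confidence-based pseudo-label selection with per-class adaptive
    thresholds and confidence weights.

    probs: (N, C) softmax outputs of the pretrained model on unlabeled data.
    Returns indices, pseudo-labels, and per-sample weights. The adaptive
    rule (scaling the threshold by each class's mean confidence) is an
    assumption standing in for the paper's APLWS criterion.
    """
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    # Relax the threshold for classes the model is currently less sure about.
    class_mean = np.array([conf[labels == c].mean() if (labels == c).any() else 1.0
                           for c in range(probs.shape[1])])
    tau = base_tau * class_mean[labels]
    keep = conf >= tau
    return np.where(keep)[0], labels[keep], conf[keep]

rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
idx, y_hat, w = select_pseudo_labels(probs)
print(idx.size, y_hat.shape, w.shape)
```

The selected samples and their confidence weights would then be mixed into the labeled pool for the next round of iterative training.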
Integrated Sensing And Communications (ISAC) based on reusing random communication signals within the existing network architecture may drastically reduce implementation costs, thereby accelerating the integration of sensing functionalities into current communication networks. However, the randomness of communication data introduces fluctuations in sensing performance across different signal realizations, leading to unstable sensing accuracy. To address this issue, we delve into random ISAC signal processing methods and propose a joint transceiver precoding optimization design for Multiple-Input Multiple-Output ISAC (MIMO-ISAC) systems. Specifically, considering target impulse response matrix estimation, we first define the Ergodic Cramér–Rao Bound (ECRB) as an average sensing performance metric under random signaling. By deriving the closed-form expression of the ECRB based on the distribution of complex inverse Wishart matrices, we theoretically reveal the performance loss arising when using random signals for sensing compared to the conventional deterministic orthogonal signals. Furthermore, we formulate the sensing-optimal subproblem by minimizing the ECRB and the communication-optimal subproblem of multiantenna multiuser signal estimation and derive the corresponding sensing-optimal and communication-optimal precoding designs. Subsequently, we extend the proposed transceiver precoding optimization framework to ISAC scenarios by explicitly constraining the communication requirements. Finally, through numerous simulations, we validate the effectiveness of the proposed method. The results demonstrate that the joint transceiver precoding design may allow high-accuracy target response matrix estimation while enabling flexible trade-offs between communication signal estimation and target response matrix estimation errors.
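The penalty of random signaling can be reproduced with a toy Monte-Carlo experiment: for least-squares estimation of H in Y = HX + Z, the estimation error scales with tr((XX^H)^{-1}), which averages Nt/(L-Nt) for random Gaussian symbols versus Nt/L for deterministic orthogonal probing. This is a proxy for the ECRB analysis, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, L, trials = 4, 16, 2000   # transmit antennas, symbols, Monte-Carlo runs

def crb_proxy(X):
    # LS estimation error of H in Y = H X + Z scales with tr((X X^H)^{-1}).
    G = X @ X.conj().T
    return np.trace(np.linalg.inv(G)).real

# Deterministic orthogonal probing: Nt rows of an L-point DFT matrix,
# unit power per symbol, so X X^H = L * I.
F = np.exp(-2j * np.pi * np.outer(np.arange(Nt), np.arange(L)) / L)
orth = crb_proxy(F)

# Random Gaussian communication symbols with the same per-symbol power:
# X X^H is now a complex Wishart matrix, and the trace of its inverse
# fluctuates around Nt / (L - Nt) > Nt / L.
rand = np.mean([crb_proxy((rng.standard_normal((Nt, L)) +
                           1j * rng.standard_normal((Nt, L))) / np.sqrt(2))
                for _ in range(trials)])

print(orth, rand)   # random signaling pays an average penalty
```

The gap between the two numbers is exactly the kind of average loss the ECRB quantifies, and what the precoding optimization then works to shrink.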
The resolving power of traditional radar is mainly analyzed using the ambiguity function, and its limit is generally characterized by the Rayleigh limit. Bats have a remarkably sensitive auditory system. Researchers have proposed the Spectrogram Correlation And Transformation (SCAT) model to represent the bat auditory system, explored its super-resolution principle, and thereby provided a possible means of breaking through the conventional (Rayleigh) limit on resolving radar targets. To further enhance the discriminative performance of the SCAT model, two bat-auditory-system-based super-resolution models, namely the base vector deconvolution method and baseband SCAT (BSCT), are improved by suppressing redundant lobes on the negative semiaxis of the range profile and at the origin. Meanwhile, the concept and computation method of reliable discriminative power are proposed to unify the measurement of the SCAT and Rayleigh discriminative powers. A comparison validates the rationality of the reliable discriminative power concept, and the effectiveness of the improved models is verified. Simulations and real-data experiments show that the improved super-resolution models achieve a sizable increase in resolving power. Notably, the improved base vector deconvolution method performs best, improving the resolving power of the original method by ~2 dB and that of matched filtering by ~5 dB.
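The super-resolution principle exploited by such models rests on two-path spectral interference: two reflections separated by a delay d imprint a cosine ripple of period 1/d on the power spectrum, from which d can be read off even when the echoes overlap in time. A minimal sketch (our illustration of the principle, not the BSCT or deconvolution implementation):

```python
import numpy as np

N, d = 4096, 25                    # samples; true two-scatterer separation
band = np.zeros(N)
band[:400] = 1.0                   # flat passband -> a band-limited pulse
p = np.fft.ifft(band)
echo = p + np.roll(p, d)           # two overlapping point reflections

# Two-path interference imprints a cosine ripple on the power spectrum:
# |S(f)|^2 = |P(f)|^2 * 2 * (1 + cos(2*pi*f*d)), ripple period 1/d.
power = np.abs(np.fft.fft(echo)) ** 2
ceps = np.abs(np.fft.ifft(power))  # cepstrum-like transform over frequency
est = np.argmax(ceps[12:N // 2]) + 12   # skip the zero-quefrency skirt
print(est)
```

The estimated separation recovers the true 25-sample delay even though the two pulses overlap; suppressing the residual lobes around the origin, as the improved models do, is what makes such estimates reliable at small separations.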
Traditional multifunctional radar systems optimize transmission resources solely based on target characteristics. However, this approach struggles in dynamic electromagnetic environments owing to the intelligent, time-varying nature of jamming and the mismatch between traditional optimization models and real-world scenarios. To address these limitations, this paper proposes a data-driven integrated transmission resource management scheme designed to enhance the Multiple Target Tracking (MTT) performance of multifunctional radars in complex and dynamic electromagnetic environments by enabling the online perception and utilization of dynamic jamming information. The scheme first establishes a Markov Decision Process (MDP) to mathematically model the risks associated with radar interception and adversarial jamming. The MDP provides a structured approach to perceiving jamming information, which is then integrated into the MTT calculation. The integrated resource management task is formulated as an optimization problem with constraints on the action space, and a greedy sorting backtracking algorithm is introduced to solve it effectively. Simulation results demonstrate the efficacy of the proposed method, showing that it significantly reduces the probability of radar interception in dynamic jamming environments. Furthermore, the method mitigates the impact of jamming on radar performance during adversarial interference, thereby improving MTT performance.
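A budget-constrained greedy selection with skip-on-violation can serve as a toy stand-in for the constrained action-space search (the gain/cost numbers are invented for illustration and are not the paper's algorithm):

```python
def greedy_backtrack(gains, costs, budget):
    """Greedy selection by gain-per-cost ratio under a hard budget.

    Candidates are sorted by marginal gain per unit cost; any candidate
    that would break the budget is skipped and the next one is tried.
    A toy stand-in for the paper's greedy sorting backtracking search
    over a constrained action space.
    """
    order = sorted(range(len(gains)),
                   key=lambda i: gains[i] / costs[i], reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return sorted(chosen), spent

# Toy example: 5 candidate radar actions under a unit transmit budget.
gains = [5.0, 4.0, 3.0, 2.0, 1.0]   # e.g., tracking-accuracy improvements
costs = [0.5, 0.6, 0.2, 0.3, 0.1]   # e.g., normalized power/dwell costs
print(greedy_backtrack(gains, costs, 1.0))
```

In the paper's setting, the gains would come from the MTT performance metric informed by the MDP's jamming perception, and the constraint set would be the feasible radar action space.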
Inverse Synthetic Aperture Radar (ISAR) is an important tool for imaging and monitoring space targets. The large rotation angles of space targets can exacerbate Migration Through Resolution Cells (MTRC), seriously degrading ISAR imaging performance. For the fast estimation and compensation of the echo phase errors caused by the motion of space targets, this paper proposes an ISAR space-target imaging method based on the rapid joint estimation of motion parameters. The method combines the high efficiency of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization algorithm with the high compensation accuracy of the Polar Format Algorithm (PFA). It formulates an image entropy minimization model for the joint estimation of the translation and rotation parameters of the target. To reduce the possibility of the optimization falling into local optima, the problem is split into two sub-steps, coarse and fine estimation of the target motion parameters, each solved with the BFGS algorithm. The proposed method rapidly estimates target rotation parameters and performs quick MTRC compensation under large rotation angles. Point-target simulations and imaging results on real civil aircraft data show that, compared with the Particle Swarm Optimization-Polar Format Algorithm (PSO-PFA), the proposed method estimates motion parameters more accurately under low signal-to-noise ratio conditions while improving computational efficiency by more than five times, which is a significant advantage.
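The core of such a parameter search, minimizing image entropy with BFGS over a phase-error coefficient, can be sketched for a single scatterer and a single quadratic term (a toy version of the coarse estimation step; the signal model and coefficient are our assumptions, not the paper's full model):

```python
import numpy as np
from scipy.optimize import minimize

K = 128                                  # pulses (cross-range samples)
m = np.arange(K) / K - 0.5
a_true = 40.0                            # quadratic phase-error coefficient

# One dominant scatterer defocused by a residual quadratic phase across
# pulses -- the kind of error left after coarse motion compensation.
echo = np.exp(1j * (2 * np.pi * 10 * m + a_true * m ** 2))

def image_entropy(a):
    # Compensate with coefficient a[0], form the cross-range profile,
    # and score its sharpness by Shannon entropy (lower = sharper).
    img = np.fft.fft(echo * np.exp(-1j * a[0] * m ** 2))
    p = np.abs(img) ** 2
    p = p / p.sum()
    return -(p * np.log(p + 1e-12)).sum()

res = minimize(image_entropy, x0=[0.0], method='BFGS')
print(res.x[0], image_entropy(res.x) < image_entropy([0.0]))
```

In the actual method, the search runs jointly over translation and rotation parameters and the compensation is applied through the PFA rather than a single phase term.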
Beamforming enhances the received signal power by transmitting signals in specific directions. However, in high-speed, dynamic vehicular network scenarios, frequent channel state updates and beam adjustments impose substantial system overhead. Furthermore, real-time alignment between the beam and user location becomes challenging, and the resulting misalignment undermines communication stability. Obstructions and channel fading in complex road environments further constrain the effectiveness of beamforming. To address these challenges, this study proposes a multimodal feature fusion beamforming method based on a convolutional neural network and an attention mechanism to achieve sensor-assisted, high-reliability communication. Data heterogeneity is addressed by customizing data conversion and standardization strategies for the radar and lidar data collected by sensors. Three-dimensional convolutional residual blocks are employed to extract multimodal features, while a cross-attention mechanism integrates these features for beamforming. Experimental results show that the proposed method achieves an average Top-3 accuracy of nearly 90% in high-speed environments, a substantial improvement over single-modal beamforming schemes.
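The cross-attention fusion step can be sketched in numpy as scaled dot-product attention in which radar features query lidar features (random projections stand in for learned weights; all dimensions are illustrative, not the paper's architecture):

```python
import numpy as np

def cross_attention(q_feat, kv_feat, d=32, seed=0):
    """Scaled dot-product cross-attention: radar tokens query lidar tokens.

    q_feat: (Nq, Dq) radar features; kv_feat: (Nk, Dk) lidar features.
    Random projection matrices stand in for learned parameters.
    """
    rng = np.random.default_rng(seed)
    Wq = rng.normal(size=(q_feat.shape[1], d)) / np.sqrt(q_feat.shape[1])
    Wk = rng.normal(size=(kv_feat.shape[1], d)) / np.sqrt(kv_feat.shape[1])
    Wv = rng.normal(size=(kv_feat.shape[1], d)) / np.sqrt(kv_feat.shape[1])
    Q, K, V = q_feat @ Wq, kv_feat @ Wk, kv_feat @ Wv
    scores = Q @ K.T / np.sqrt(d)
    # Row-wise softmax: each radar token attends over all lidar tokens.
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)
    return attn @ V      # (Nq, d) lidar-informed radar features

radar = np.random.default_rng(1).normal(size=(16, 64))   # 16 radar tokens
lidar = np.random.default_rng(2).normal(size=(50, 128))  # 50 lidar tokens
fused = cross_attention(radar, lidar)
print(fused.shape)  # (16, 32)
```

In the proposed method the token features would come from the three-dimensional convolutional residual blocks, and the fused output would feed the beam-selection head.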
Compared with ground-based external radiation source radar, satellite-signal-based external radiation source radar offers advantages such as global, all-time, and all-weather coverage, compensating for the limited maritime coverage of ground-based systems. In contrast to medium- and high-orbit satellite signals, Low-Earth Orbit (LEO) communication satellite signals offer strong received power and a large number of satellites, providing substantial detection range and accuracy for the passive detection of maritime targets. In response to future development needs, this paper discusses in detail the research status and application prospects of satellite-signal external radiation source radar and presents a feasibility analysis for constructing an LEO communication satellite signal external radiation source radar system, integrating high and low frequencies with both wide and narrow bandwidths, based on the Iridium and Starlink LEO communication satellite systems. On this basis, the paper summarizes the technical challenges and potential solutions in developing LEO communication satellite signal external radiation source radar systems. This research can serve as an important reference for wide-area external radiation source radar detection.
Hyperspectral LiDAR (HSL) can obtain high-precision, high-resolution spatial data along with the spectral information of a target, providing effective multidimensional data for various research and application fields. However, differences in the transmitting signal intensities of HSL at various wavelengths lead to variations in the corresponding echo intensities, making it challenging to reconstruct the accurate optical characteristics (reflectance spectral profile) of a target directly from echo intensities. To obtain the target reflectance spectral profile, a common solution is to correct the echo intensity using standard diffuse reflectance whiteboards (the standard reference correction method). However, in complex detection environments, whiteboards are susceptible to contamination, and the transmitting intensity of the laser may fluctuate with environmental and equipment conditions, potentially degrading calculation accuracy. Reconstructing reflectance spectral profiles directly from full-waveform signals is a more efficient approach. Therefore, we propose an echo intensity correction method based on HSL full-waveform data for the rapid generation of target reflectance spectral profiles. The first step is a theoretical analysis illustrating the similarity between the echo and transmitting signals in terms of their waveforms. A skew-normal Gaussian function is then employed to fit the transmitting and echo signals of the HSL full waveform. Thereafter, the transmit-to-echo signal peak ratios (normalization factors) of the standard diffuse reflectance whiteboard at different wavelengths are calculated under ideal conditions. Finally, the reflectance spectral profile of the target is constructed by combining the normalization factor of the standard diffuse reflectance whiteboard with that of the target.
To verify the effectiveness of the proposed method, we conducted experiments comparing its results with the reflectance spectral profiles calculated using the standard reference correction method. Moreover, we performed wood-leaf separation and target classification experiments to assess its reliability and usability. The experimental results reveal the following: (1) The reconstructed reflectance spectral profiles of the target can be obtained by correcting the echo intensity with the transmitting signals and are similar to those obtained by the standard reference correction method. Moreover, the method demonstrates excellent stability under various temperature and lighting conditions. Compared with the standard reference correction method, it effectively overcomes the influence of laser emission energy fluctuations, thereby considerably improving the measurement accuracy and consistency of reflectance spectral curves, especially during prolonged HSL operation. (2) Wood-leaf separation and multiple-target classification can be conducted using the reconstructed target reflectance spectral profiles, with a classification accuracy of over 90%. Overall, the proposed method simplifies the correction of echo intensity for full-waveform HSL and is suitable for the rapid reconstruction of target hyperspectral information during data acquisition.
Hyperspectral LiDAR (HSL) can obtain high precision and resolution spatial data along with the spectral information of the target, which can provide effective and multidimensional data for various research and application fields. However, differences in transmitting signal intensities of HSL at various wavelengths lead to variations in corresponding echo intensities, making it challenging to directly reconstruct accurate optical characteristics (reflectance spectral profile) of the target with echo intensities. To obtain the target reflectance spectral profile, a common solution is to correct the echo intensity (standard reference correction method) using standard diffuse reflectance whiteboards. However, in complex detection environments, whiteboards are susceptible to contamination, and the transmitting intensity of the laser may fluctuate due to changes in the environment and equipment conditions, which may potentially impact the calculation accuracy. The direct transmission of information from the full-waveform signals to the reconstruction of the reflectance spectral profiles is a more efficient approach. Therefore, we propose an echo intensity correction method based on HSL full-waveform data for the rapid generation of reflectance spectral profiles of targets. The initial step is to conduct a theoretical analysis that illustrates the similarity between the echo signals and the transmitting signals in terms of their waveforms. A skew-normal Gaussian function is then employed to fit the transmitting and echo signals of the HSL full waveform. Thereafter, the transmit-to-echo signal peak ratios (normalization factors) of the standard diffuse reflectance whiteboard at different wavelengths are calculated under ideal conditions. Finally, the reflectance spectral profile of the target is constructed by combining the normalization factor of the standard diffuse reflectance whiteboard with that of the target. 
To verify the effectiveness of the proposed method, we conducted experiments comparing the reconstructed reflectance spectral profiles with those calculated using the standard reference correction method. Moreover, we performed wood–leaf separation and target classification experiments to assess its reliability and usability. The experimental results reveal the following: (1) The reconstructed reflectance spectral profiles of the target can be obtained by correcting the echo intensity with the transmitting signals, and they are similar to those obtained by the standard reference correction method. Moreover, the method demonstrates excellent stability under various temperature and lighting conditions. Compared with the standard reference correction method, this approach effectively overcomes the influence of laser emission energy fluctuations, thereby considerably improving the measurement accuracy and consistency of reflectance spectral curves, especially under prolonged HSL operation conditions. (2) Wood–leaf separation and multiple-target classification can be performed using the reconstructed target reflectance spectral profiles, with a classification accuracy of over 90%. Overall, the proposed method simplifies the correction of echo intensity for full-waveform HSL, which is suitable for the rapid reconstruction of target hyperspectral information during data acquisition.
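As a rough illustration of the correction idea (not the authors' implementation), the per-wavelength reflectance can be built from transmit-to-echo peak ratios. Here `peak` uses a simple maximum as a stand-in for the skew-normal Gaussian fit, and the function names and whiteboard reflectance value are assumptions:

```python
import numpy as np

def peak(signal):
    """Peak amplitude of a pulse (simple max as a stand-in for the
    skew-normal Gaussian fit used on the full-waveform signals)."""
    return float(np.max(signal))

def reflectance_profile(tx_target, echo_target, k_whiteboard, rho_whiteboard=0.99):
    """Per-wavelength target reflectance from transmit/echo peak ratios.

    tx_target, echo_target : lists of waveforms, one per wavelength
    k_whiteboard           : transmit-to-echo peak ratios of the standard
                             whiteboard, measured once under ideal conditions
    """
    rho = []
    for tx, echo, k in zip(tx_target, echo_target, k_whiteboard):
        # echo ~ reflectance * gain * transmit, so the per-shot ratio
        # times the whiteboard normalization factor cancels both the
        # channel gain and any transmit-energy fluctuation.
        rho.append(rho_whiteboard * (peak(echo) / peak(tx)) * k)
    return np.array(rho)
```

Because the target's own transmit pulse appears in the ratio, a fluctuation in laser emission energy cancels out, which is the stability advantage claimed over the whiteboard-only correction.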
With the widespread application of Wi-Fi sensing technology in intelligent health monitoring, constructing high-quality perception datasets has become a key challenge. Particularly in monitoring abnormal behaviors, such as falls, traditional methods rely on repeated human experiments, which not only poses safety risks but also raises ethical concerns. To address these issues, this paper proposes a time-domain digital coding metasurface-assisted data acquisition method. By simulating the Doppler effect and micro-Doppler characteristics of the human body, the time-domain digital coding metasurface can effectively replace human experiments and assist in constructing Wi-Fi sensing datasets. To verify the feasibility of this method, we develop a time-domain digital coding metasurface with 0°–360° full-phase modulation capability. Experimental results show that the signals generated by the metasurface retain the motion characteristics of the human body, complement real samples, reduce the complexity of data collection, and finally improve the monitoring accuracy of the classification model significantly. This method provides an innovative and feasible solution for data acquisition for Wi-Fi sensing technology.
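The key physical idea, that a time-domain coding metasurface with full 0°–360° phase control can impose a controllable frequency shift on the reflected carrier and thereby mimic a Doppler signature, can be sketched as follows. The sample rate, shift value, and baseband simplification are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

fs = 10_000          # sample rate (Hz), illustrative
f_d = 120.0          # micro-Doppler shift to synthesize (Hz)
t = np.arange(0, 1.0, 1 / fs)

# Baseband stand-in for the incident Wi-Fi carrier.
incident = np.ones_like(t, dtype=complex)

# Full-phase (0..360 deg) sawtooth coding: the reflection coefficient
# exp(j*phase(t)) shifts the reflected carrier by exactly f_d, which is
# how the metasurface imitates a moving human reflector.
phase = 2 * np.pi * f_d * t % (2 * np.pi)
reflected = incident * np.exp(1j * phase)

spectrum = np.abs(np.fft.fft(reflected))
freqs = np.fft.fftfreq(len(t), 1 / fs)
peak_freq = freqs[np.argmax(spectrum)]
```

Sweeping `f_d` over a recorded micro-Doppler profile (rather than holding it constant) is what lets such hardware stand in for repeated human fall experiments.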
The Dual-Function Radar and Communication (DFRC) integrated electronic equipment platform, which combines detection and communication functions, effectively addresses issues such as platform limitations, resource constraints, and electromagnetic compatibility by sharing hardware platforms and transmit waveforms. Therefore, it has become a research hotspot in recent years. DFRC technology, centered on detection functionality and incorporating limited communication capabilities, has remarkable application prospects in typical detection scenarios, such as early warning and surveillance and tracking guidance under future combat conditions. This paper focuses on signal design methods that optimize radar detection performance by effectively adjusting the trade-off between detection and communication in multi-domain resource utilization while guaranteeing a minimum communication performance. First, the performance measurement criteria of DFRC systems are summarized. Then, the paper provides a comprehensive introduction to DFRC signal design methods under typical detection scenarios and a thorough analysis of the problems of each signal design method and their current solutions. Finally, a summary and future research directions are outlined.
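One elementary instance of the detection-communication trade-off, chosen here as an illustration rather than taken from the survey, is phase-embedding a small bit payload onto a constant-modulus radar chirp: the waveform stays detection-friendly (constant envelope), while a cooperative receiver that knows the chirp recovers the bits by de-chirping. All parameters below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T, B = 1e6, 1e-4, 2e5              # sample rate, pulse width, bandwidth
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # constant-modulus LFM pulse

bits = rng.integers(0, 2, 8)
# Map each bit to a 0/pi phase on one of 8 equal sub-pulses: a minimal
# communication payload riding on the radar waveform.
seg = len(t) // 8
phases = np.repeat(np.pi * bits, seg)
dfrc = chirp[:len(phases)] * np.exp(1j * phases)

# Cooperative receiver: de-chirp, then threshold the mean phase per sub-pulse.
residual = np.angle(dfrc / chirp[:len(phases)])
decoded = (np.abs(residual.reshape(8, seg).mean(axis=1)) > np.pi / 2).astype(int)
```

Raising the payload (more sub-pulses, finer phase alphabets) improves the communication rate but distorts the ambiguity function, which is exactly the trade-off the surveyed design methods manage.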
Bistatic Synthetic Aperture Radar (SAR), with its separated transmitter and receiver working in coordination, not only achieves high-resolution imaging in the forward-looking mode but also possesses outstanding concealment and anti-interference capabilities. Therefore, bistatic SAR thrives in both civilian and military applications, such as ocean monitoring and reconnaissance imaging. However, ship targets are typically influenced by sea waves, generating unknown and complex three-dimensional oscillations. These random oscillations and radar motions vary with slow time, making the imaging view of bistatic SAR ship targets strongly time-dependent, so that it is extremely difficult to extract effective target features from the final imaging results. Moreover, target oscillations are coupled with the motion of the bistatic platforms, which causes severe nonlinear spatial Doppler shifts in the target echoes, so that bistatic SAR images are usually defocused. To address these problems, this paper proposes an imaging method for bistatic SAR ship targets based on imaging time optimization, which generates well-focused bistatic SAR ship target images with optimal views. Firstly, the short-time Fourier transform is utilized to extract the time-frequency information of the ship. Secondly, based on this time-frequency information from multiple strong scatterers, the optimal three-dimensional rotation parameters are estimated, revealing the time-varying characteristics of the imaging projection plane. Then, the optimal imaging time centers are selected based on the optimal imaging projection planes, while the corresponding optimal imaging time intervals are chosen based on the optimal imaging resolutions. Finally, with the selected optimal imaging times, the desired images of the bistatic SAR ship target are produced.
Simulation experiments verify the accuracy of target rotation parameter estimation under different bistatic configurations and noise conditions, as well as the effectiveness of imaging projection plane selection. In general, this method tackles the issues of time-varying imaging views and nonlinear spatial Doppler shifts for bistatic SAR ship targets, obtaining well-focused and optimally viewed target images, which significantly enhances the accuracy of subsequent target feature extraction.
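The first step, reading off a scatterer's time-varying micro-Doppler from short-time spectra, can be sketched with a hand-rolled STFT bin on a synthetic wave-induced oscillation. The oscillation model and every parameter here are assumptions for illustration, not the paper's simulation setup:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Scatterer with sinusoidal (wave-induced) radial oscillation: the phase
# (fd_max/f_osc)*sin(2*pi*f_osc*t) yields an instantaneous Doppler of
# fd_max*cos(2*pi*f_osc*t), i.e. a shift that drifts with slow time.
f_osc, fd_max = 0.5, 100.0
sig = np.exp(1j * (fd_max / f_osc) * np.sin(2 * np.pi * f_osc * t))

def stft_peak_freq(x, start, win, fs):
    """Dominant frequency of one short-time window (a single STFT column)."""
    seg = x[start:start + win] * np.hanning(win)
    spec = np.abs(np.fft.fft(seg))
    freqs = np.fft.fftfreq(win, 1 / fs)
    return freqs[int(np.argmax(spec))]
```

Tracking this peak over many strong scatterers is what supplies the data from which the three-dimensional rotation parameters, and hence the time-varying projection plane, are estimated.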
With the emergence of the low-altitude economy, the communication and detection issues of Unmanned Aerial Vehicles (UAVs) have gained considerable attention. This paper investigates sensing reference signal design for Integrated Sensing And Communication (ISAC) in Orthogonal Frequency Division Multiplexing (OFDM) systems aimed at detecting long-range, high-speed UAVs. To address the ambiguity problem in long-range and high-speed UAV detection, traditional reference signal designs require densely arranged reference signals, leading to significant resource overhead. In addition, long-range detection based on OFDM waveforms faces challenges from Inter-Symbol Interference (ISI). To address these issues, this paper first proposes a reference signal pattern that supports long-range detection and resists ISI, achieving the maximum unambiguous detection range of the system with reduced resource overhead. Then, to address the challenge of high-speed detection, the paper incorporates range-rate into the Chinese Remainder Theorem-based method. Through the proper configuration of sensing reference signals and the cancellation of ghost targets, this approach significantly increases the unambiguous detection velocity while minimizing resource usage and avoiding the generation of ghost targets. The effectiveness of the proposed methods is validated through simulations. Simulation results show that, compared with the traditional sensing reference signal design, the proposed scheme reduces the reference signal overhead by 72% for long-range and high-speed UAV detection.
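The Chinese-remainder flavor of the velocity disambiguation can be shown with a toy search: two reference-signal spacings give two aliased velocity readings with different unambiguous spans, and the true velocity must be congruent to both. The function name, span values, and brute-force search are illustrative assumptions:

```python
def resolve_velocity(alias1, span1, alias2, span2, v_max, tol=1e-6):
    """Recover the true radial velocity from two aliased measurements.

    alias1, alias2 : velocities measured modulo span1, span2 respectively
    v_max          : search limit on the true velocity magnitude
    Returns every candidate consistent with both readings; a well-chosen
    span pair leaves exactly one, and any extras are the ghost targets
    that the configuration/cancellation step must eliminate.
    """
    candidates = []
    k_max = int(v_max / span1) + 1
    for k in range(-k_max, k_max + 1):
        v = alias1 + k * span1
        if abs(v) > v_max:
            continue
        r = (v - alias2) % span2
        if min(r, span2 - r) < tol:   # congruent to alias2 modulo span2
            candidates.append(v)
    return candidates
```

With co-prime-like spans the effective unambiguous span grows to roughly their least common multiple, which is why sparser (lower-overhead) reference signals can still cover high velocities.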
This paper proposes an intelligent framework based on a cell-free network architecture, called HRT-Net. HRT-Net is designed to address multi-station collaborative sensing problems for joint radar and communication systems, offering accurate and resource-efficient target location estimation. First, the sensing area is divided into sub-regions, and a lightweight region selection network employing depthwise separable convolutions coarsely identifies the target’s sub-region, reducing computational demands and enabling extensive area coverage. To tackle interstation data disparity, we propose a channel-wise unidimensional attention mechanism. This mechanism aggregates multi-station sensing data effectively, enhancing feature extraction and representation by generating attention weight maps that refine the original features. Finally, we design a target localization network featuring multi-scale and multi-residual connections. This network extracts comprehensive, deep features and achieves multi-level feature fusion, allowing for reliable mapping of data to the target coordinates. Extensive simulations and real-world experiments validate the effectiveness and robustness of our scheme. The results show that, compared with existing methods, HRT-Net achieves centimeter-level target localization with low computational complexity and minimal storage overhead.
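A minimal sketch of channel-wise gating over multi-station sensing data, in the squeeze-and-excite style, is shown below. This is an assumed simplification of the paper's attention mechanism (no learned weights, a plain sigmoid gate), meant only to show how attention weights rescale the original features:

```python
import numpy as np

def channelwise_attention(features):
    """features: (stations, channels, length) sensing data.

    Squeeze each channel by global average pooling, turn the squeezed
    vector into per-channel gates, and rescale the original features,
    so channels carrying stronger station data are emphasized.
    """
    squeezed = features.mean(axis=-1)            # (stations, channels)
    weights = 1.0 / (1.0 + np.exp(-squeezed))    # sigmoid gate per channel
    return features * weights[..., None]         # broadcast back over length
```

In a trained network the gate would come from a small learned projection of the squeezed vector rather than a raw sigmoid, but the refine-by-reweighting structure is the same.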
Modern radar systems face increasingly complex challenges in tasks such as detection, tracking, and identification. The diversity of task types, limited data resources, and strict execution time requirements make radar task scheduling a strongly NP-hard problem. Existing scheduling algorithms struggle to efficiently handle multiradar collaborative tasks involving complex logical constraints, so Artificial Intelligence (AI)-based scheduling algorithms have gained significant attention. However, their efficiency depends heavily on effectively extracting the key features of the problem: the ability to quickly and comprehensively extract the common features of multiradar scheduling problems is essential for improving the efficiency of such AI scheduling algorithms. Therefore, this paper proposes a Model Knowledge Embedded Graph Neural Network (MKEGNN) scheduling algorithm. This method frames the radar task collaborative scheduling problem as a heterogeneous network graph, leveraging model knowledge to optimize the training process of the Graph Neural Network (GNN). A key innovation of this algorithm is its capability to capture critical model knowledge using low-complexity calculations, which helps to further optimize the GNN model. During the feature extraction stage, the algorithm employs a random unitary matrix transformation. This approach utilizes the spectral features of the random Laplacian matrix of the task’s heterogeneous graph as global features, enhancing the GNN’s ability to extract shared problem features while downplaying individual characteristics. In the parameterized decision-making stage, the algorithm leverages the upper- and lower-bound knowledge derived from guiding and empirical solutions of the problem model. This strategy significantly reduces the decision space, enabling the network to optimize quickly and accelerating the learning process.
Extensive simulation experiments confirm the effectiveness of the MKEGNN algorithm. Compared to existing approaches, it demonstrates improved stability and accuracy across all task sets, boosting the scheduling success rate by 3%–10% and the weighted success rate by 5%–15%. For particularly challenging task sets involving complex multiradar collaborations, the success rate improves by over 4%. The results highlight the algorithm’s stability and robustness.
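To make the "Laplacian spectral features as global graph descriptors" idea concrete, here is a minimal computation of a task graph's Laplacian spectrum. This is a generic sketch, not MKEGNN's actual feature pipeline (which applies a random unitary transformation first):

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the graph Laplacian L = D - A.

    The sorted spectrum is permutation-invariant, so it captures shared
    structural features of a scheduling graph (connectivity, clustering)
    while downplaying node-specific details: a cheap global descriptor
    of the kind a GNN can consume as extra context.
    """
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    return np.sort(np.linalg.eigvalsh(lap))   # eigvalsh: symmetric input
```

Two task graphs that are relabelings of each other get identical spectra, which is precisely why spectral features behave as "common" rather than "individual" characteristics.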
Light Detection And Ranging (LiDAR) systems lack texture and color information, while cameras lack depth information. Thus, the information obtained from LiDAR and cameras is highly complementary, and combining these two types of sensors yields rich observation data and improves the accuracy and stability of environmental perception. Accurate joint calibration of the external parameters of these two types of sensors is the prerequisite for data fusion. At present, most joint calibration methods rely on calibration targets and manual point selection, which makes them unusable in dynamic application scenarios. This paper presents ResCalib, a deep neural network model for the online joint calibration of LiDAR and a camera. The method takes LiDAR point clouds, monocular images, and the camera intrinsic parameter matrix as input to solve for the external parameters between the LiDAR and the camera, with little dependence on external features or targets. ResCalib is a geometrically supervised deep neural network that automatically estimates the six-degree-of-freedom external parameter relationship between LiDAR and cameras through supervised learning that maximizes the geometric and photometric consistencies of the input images and point clouds. Experiments show that the proposed method can correct calibration errors of ±10° in rotation and ±0.2 m in translation. The average absolute errors of the rotation and translation components of the calibration solution are 0.35° and 0.032 m, respectively, and a single calibration takes 0.018 s, which provides technical support for realizing automatic joint calibration in dynamic environments.
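The geometric operation underlying such photometric/geometric consistency losses is the pinhole projection of LiDAR points into the image through the candidate extrinsics. A minimal version (function name and example intrinsics are assumptions) looks like this:

```python
import numpy as np

def project_points(points, R, t, K):
    """Project LiDAR points (N, 3) into pixel coordinates.

    R, t : candidate extrinsic rotation and translation (LiDAR -> camera)
    K    : 3x3 camera intrinsic parameter matrix
    Points behind the camera are dropped; the remaining (u, v) pairs are
    what a consistency loss compares against image intensities or depth.
    """
    cam = points @ R.T + t            # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]          # keep points in front of the camera
    uv = cam @ K.T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]     # perspective divide
```

A network like ResCalib is trained so that the projection under its predicted `(R, t)` lines up with the image, which is what "maximizing geometric and photometric consistency" operationalizes.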
This paper addresses the task allocation problem in swarm Unmanned Aerial Vehicle (UAV) Synthetic Aperture Radar (SAR) systems and proposes a method based on low-redundancy chromosome encoding. It starts with a thorough analysis of the relationship between imaging performance and geometric configurations in SAR imaging tasks and accordingly constructs a path function that reflects imaging resolution performance. The task allocation problem is then formulated as a generalized, balanced multiple traveling salesman problem. To enhance the search efficiency and accuracy of the algorithm, a two-part chromosome encoding scheme with low redundancy is introduced. Additionally, considering possible unexpected situations and dynamic changes in practical applications, a dynamic task allocation strategy integrating a contract net protocol and attention mechanisms is proposed. This method can flexibly adjust task allocation strategies based on actual conditions, ensuring the robustness of the system. Simulation experiments validate the effectiveness of the proposed method.
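The two-part, low-redundancy encoding for the balanced multiple traveling salesman formulation can be sketched directly; the decode function below is an assumed illustration of the general scheme, not the paper's exact operator set:

```python
def decode_two_part(chromosome, n_tasks, n_uavs):
    """Decode a two-part chromosome into per-UAV task routes.

    Part 1 (first n_tasks genes) is a permutation of task indices;
    part 2 (last n_uavs genes) says how many consecutive tasks each UAV
    takes, summing to n_tasks. Every gene string maps to exactly one
    valid allocation, which is the low-redundancy property: the search
    space contains no duplicate or infeasible encodings to waste effort on.
    """
    tasks, counts = chromosome[:n_tasks], chromosome[n_tasks:]
    assert len(counts) == n_uavs and sum(counts) == n_tasks
    routes, i = [], 0
    for c in counts:
        routes.append(tasks[i:i + c])
        i += c
    return routes
```

A genetic algorithm then mutates the permutation and the count vector separately, while the path function over each decoded route scores imaging resolution performance.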
The meter-wave radar, known for its wide beamwidth, often faces challenges in detecting low-elevation targets due to interference from multipath signals. These reflected signals diminish the strength of the direct signal, leading to poor accuracy in low-elevation angle measurements. To solve this problem, this paper proposes a multipath suppression and high-precision angle measurement method. This method, based on a signal-level feature game approach, incorporates two interconnected components working together. The direct signal extractor mines the direct signal submerged within the multipath signal. The direct signal feature discriminator ensures the integrity and validity of the extracted direct signal. By continuously interacting and optimizing one another, these components suppress the multipath interference effectively and enhance the quality of the direct signal. The refined signal is then processed using advanced super-resolution algorithms to estimate the direction of arrival. Computer simulations have shown that the proposed algorithm achieves high performance without relying on strict target angle information, effectively suppressing multipath signals. This approach noticeably enhances the estimation accuracy of classic super-resolution algorithms. Compared to existing supervised learning models, the proposed algorithm offers better generalization to unknown signal parameters and multipath distribution models.
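The final stage, super-resolution direction-of-arrival estimation on the cleaned direct signal, can be illustrated with a compact MUSIC estimator on a uniform linear array. This is a generic sketch under assumed parameters (half-wavelength spacing, single source), not the paper's full pipeline:

```python
import numpy as np

def music_doa(X, n_sources, n_grid=181):
    """MUSIC DOA estimate (degrees) from ULA snapshots X (elements, snapshots),
    half-wavelength spacing. Applied after multipath suppression, so the
    snapshots are assumed to carry only the direct signal plus noise."""
    M = X.shape[0]
    Rxx = X @ X.conj().T / X.shape[1]         # sample covariance
    _, vecs = np.linalg.eigh(Rxx)             # ascending eigenvalues
    En = vecs[:, :M - n_sources]              # noise subspace
    angles = np.linspace(-90.0, 90.0, n_grid)
    m = np.arange(M)
    spectrum = []
    for a in angles:
        sv = np.exp(1j * np.pi * m * np.sin(np.deg2rad(a)))
        # Pseudospectrum peaks where the steering vector is orthogonal
        # to the noise subspace.
        spectrum.append(1.0 / np.abs(sv.conj() @ En @ En.conj().T @ sv))
    return float(angles[int(np.argmax(spectrum))])
```

Without the suppression stage, the coherent multipath replica would rank-deplete the covariance and bias exactly this estimator, which is why the extractor/discriminator game precedes it.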
Synthetic Aperture Radar (SAR) image target recognition technology based on deep learning has matured. However, challenges remain because scattering phenomena and noise interference cause significant intraclass variability in imaging results. Invariant features, which represent the essential attributes of a specific target class with consistent expressions, are crucial for high-precision recognition. We define these invariant features, drawn from the target entity, its surrounding environment, and their combined context, as the target’s essential features. Guided by multilevel essential feature modeling theory, we propose a SAR image target recognition method based on graph networks and invariant feature perception. This method employs a dual-branch network to process multiview SAR images simultaneously, using a rotation-learnable unit to adaptively align the dual-branch features and reinforce invariant features with rotational immunity by minimizing intraclass feature differences. Specifically, to support essential feature extraction in each branch, we design a feature-guided graph feature perception module based on multilevel essential feature modeling. This module uses salient points for target feature analysis and comprises a target ontology feature enhancement unit, an environment feature sampling unit, and a context-based adaptive fusion update unit. The outputs are analyzed with a graph neural network and constructed into a topological representation of essential features, yielding a target category vector. The t-Distributed Stochastic Neighbor Embedding (t-SNE) method is used to qualitatively evaluate the algorithm’s classification ability, while metrics such as accuracy, recall, and F1 score are used to quantitatively analyze the key units and overall network performance. Additionally, class activation map visualization methods are employed to validate the extraction and analysis of invariant features at different stages and branches.
The proposed method achieves recognition accuracies of 98.56% on the MSTAR dataset, 94.11% on SAR-ACD dataset, and 86.20% on OpenSARShip dataset, demonstrating its effectiveness in extracting essential target features.
Obtaining internal layout information before entering unfamiliar buildings is crucial for various applications, such as counter-terrorism operations, disaster relief, and surveillance, highlighting its great practical significance and research value. To enable the acquisition of the building layout information, this paper presents a building layout tomography method based on joint multidomain direct wave estimation. First, a linear approximation model is established to map the relationship between the propagation delay of direct wave signals and the layout of the unknown building. Using this model, the distribution characteristics of direct wave and multipath signals in the fast-time, slow-time, and Doppler domains are analyzed in the tomographic imaging mode. A joint multidomain direct wave estimation algorithm is then proposed to achieve the suppression of multipath interference and precise estimation of direct wave signals. Additionally, a projection matrix adaptive correction algebraic reconstruction algorithm with total variation constraints is proposed, which enhances building layout inversion quality under limited data scenarios. Finally, electromagnetic simulation and experimental results demonstrate the effectiveness of the proposed building layout tomography method, with structural similarity indices of 91.2% and 81.7% for the reconstructed results, significantly outperforming existing building layout tomography methods.
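The reconstruction step above builds on the Algebraic Reconstruction Technique (ART). A minimal sketch of the plain ART (Kaczmarz) sweep for the linear tomography model A x = b, without the paper's adaptive projection-matrix correction or total-variation constraint:

```python
import numpy as np

def art_reconstruct(A, b, n_iter=50, relax=1.0):
    """Plain ART (Kaczmarz) iterations for A x = b.

    A: (n_rays, n_pixels) projection matrix mapping wall attenuation/delay
    contributions to each ray; b: measured per-ray sums. Each sweep projects
    the current estimate onto the hyperplane defined by one ray equation.
    """
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            r = b[i] - A[i] @ x
            x += relax * r / row_norms[i] * A[i]
    return x

# usage: recover a tiny two-pixel "layout" from three ray sums
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_hat = art_reconstruct(A, A @ x_true)
```

Under limited-data scenarios such as the one the abstract targets, the plain sweep above is what the total-variation constraint and projection-matrix correction are designed to stabilize.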
Due to the side-looking and coherent imaging mechanisms, feature differences between high-resolution Synthetic Aperture Radar (SAR) images increase when the imaging viewpoint changes considerably, making image registration highly challenging. Traditional registration techniques for high-resolution multi-view SAR images mainly face issues such as insufficient keypoint localization accuracy and low matching precision. This work designs an end-to-end high-resolution multi-view SAR image registration network to address these challenges. The main contributions of this study are as follows: A high-resolution SAR image feature extraction method based on a local pixel offset model is proposed. This method introduces a diversity peak loss to guide response weight allocation in the keypoint extraction network and optimizes keypoint coordinates by detecting pixel offsets. A descriptor extraction method is developed based on adaptive adjustment of convolution kernel sampling positions, which utilizes sparse cross-entropy loss to supervise descriptor matching in the network. Experimental results show that compared with other registration methods, the proposed algorithm achieves remarkable improvements in high-resolution multi-view SAR image registration, with an average error reduction of over 65%, a 3~5-fold increase in the number of correctly matched point pairs, and an average runtime reduction of over 50%.
Passive radar plays an important role in early warning detection and Low Slow Small (LSS) target detection. Because the radiation sources of passive radar are uncontrollable, target characteristics are more complex, which makes target detection and identification extremely difficult. In this paper, a passive radar LSS detection dataset (LSS-PR-1.0) is constructed, which contains the radar echo signals of four typical sea and air targets, namely helicopters, unmanned aerial vehicles, speedboats, and passenger ships, as well as sea clutter data at low and high sea states. It provides data support for radar research. In terms of target feature extraction and analysis, the singular-value-decomposition sea-clutter-suppression method is first adopted to remove the influence of the strong Bragg peak of sea clutter on the target echo. On this basis, ten multi-domain feature extraction and analysis methods in four categories are proposed, including time-domain features (relative average amplitude), frequency-domain features (spectral features, Doppler waterfall plot, and range-Doppler features), time-frequency-domain features, and motion features (heading difference, trajectory parameters, speed variation interval, speed variation coefficient, and acceleration). Based on the actual measurement data, a comparative analysis is conducted on the characteristics of the four types of sea and air targets, summarizing the patterns of various target characteristics and laying the foundation for subsequent target recognition.
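The singular-value-decomposition clutter suppression named above can be sketched minimally: the strong Bragg components dominate the largest singular values of the slow-time × range data matrix, so zeroing them suppresses the clutter while largely preserving weak target energy. A hedged sketch (the dataset's actual processing chain may differ in detail):

```python
import numpy as np

def svd_clutter_suppress(X, n_clutter=1):
    """Suppress dominant (Bragg-like) clutter by zeroing the n_clutter
    largest singular values of the slow-time x range data matrix X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    s[:n_clutter] = 0.0                      # remove strongest components
    return (U * s) @ Vh

# usage: a strong rank-1 "clutter" ridge plus weak target returns
rng = np.random.default_rng(1)
u = rng.standard_normal(64); v = rng.standard_normal(32)
clutter = 100.0 * np.outer(u / np.linalg.norm(u), v / np.linalg.norm(v))
target = 0.5 * rng.standard_normal((64, 32))
cleaned = svd_clutter_suppress(clutter + target, n_clutter=1)
```

Choosing how many singular values to null is the key tuning decision: too few leaves Bragg residue, too many begins to eat into slow-moving target returns.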
Aiming to address the problem of increased radar jamming in complex electromagnetic environments and the difficulty of accurately estimating a target signal close to a strong jamming signal, this paper proposes a sparse Direction of Arrival (DOA) estimation method based on Riemannian averaging under strong intermittent jamming. First, under the extended coprime array data model, Riemannian averaging is introduced to suppress the jamming signal by leveraging the property that the target signal is continuously active while the strong jamming signal is intermittently active. Then, the covariance matrix of the processed data is vectorized to obtain virtual array reception data. Finally, the sparse iterative covariance-based estimation method is employed in the virtual domain to reconstruct the sparse signal and estimate the DOA of the target signal. The simulation results show that the method can provide highly accurate DOA estimation for weak target signals whose angles are closely adjacent to strong interference signals when the number of signal sources is unknown. Compared with existing subspace algorithms and sparse reconstruction algorithms, the proposed algorithm achieves higher estimation accuracy and angular resolution with fewer snapshots and at lower signal-to-noise ratios.
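The averaging step exploits that a strong but intermittent jammer inflates only some segment covariance matrices. As one illustrative choice of Riemannian mean on the SPD manifold (the log-Euclidean mean; the paper's exact averaging rule is not reproduced here), a sketch of why it down-weights the jammed segments relative to an arithmetic mean:

```python
import numpy as np

def spd_log(M):
    """Matrix logarithm of a symmetric/Hermitian positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.conj().T

def spd_exp(M):
    """Matrix exponential of a symmetric/Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.conj().T

def log_euclidean_mean(covs):
    """Log-Euclidean mean of SPD covariance matrices: exp(mean(log R_k)).
    A geometric-style mean, so a single jammed segment with a hugely
    inflated covariance pulls the average up far less than it would an
    arithmetic mean."""
    return spd_exp(np.mean([spd_log(R) for R in covs], axis=0))

# usage: three clean noise segments, one segment hit by strong jamming
covs = [np.eye(4), np.eye(4), np.eye(4), 100.0 * np.eye(4)]
R_arith = np.mean(covs, axis=0)      # jammer dominates: diagonal = 25.75
R_riem = log_euclidean_mean(covs)    # jammer suppressed: diagonal ~ 3.16
```

The averaged covariance is then vectorized into the virtual-array data on which the sparse estimator operates.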
Land-sea clutter classification is essential for boosting the target positioning accuracy of skywave over-the-horizon radar. This classification process involves discriminating whether each azimuth-range cell in the Range-Doppler (RD) map is over land or sea. Traditional deep learning methods for this task require extensive, high-quality, and class-balanced labeled samples, leading to long training periods and high costs. In addition, these methods typically use a single azimuth-range cell clutter without considering intra-class and inter-class relationships, resulting in poor model performance. To address these challenges, this study analyzes the correlation between adjacent azimuth-range cells and converts land-sea clutter data from Euclidean space into graph data in non-Euclidean space, thereby incorporating sample relationships. We propose a Multi-Channel Graph Convolutional Network (MC-GCN) for land-sea clutter classification. MC-GCN decomposes graph data from a single channel into multiple channels, each containing a single type of edge and a weight matrix. This approach restricts node information aggregation, effectively reducing node attribute misjudgment caused by data heterogeneity. For validation, RD maps from various seasons, times, and detection areas were selected. Based on radar parameters, data characteristics, and sample proportions, we construct a land-sea clutter original dataset containing 12 different scenes and a land-sea clutter scarce dataset containing 36 different configurations. The effectiveness of MC-GCN is confirmed, with the approach outperforming state-of-the-art classification methods with a classification accuracy of at least 92%.
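The per-channel aggregation described above can be sketched in a few lines: each edge type gets its own adjacency matrix and weight matrix, aggregation runs within a channel only, and the channels are then combined. A minimal numpy sketch under these assumptions (the paper's layer design, normalization, and channel fusion may differ):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mc_gcn_layer(X, adjs, weights):
    """One multi-channel graph-convolution layer.

    X: (N, F_in) node features; adjs: list of (N, N) adjacency matrices,
    one per edge type; weights: list of (F_in, F_out) weight matrices.
    Each channel aggregates over a single edge type, and channel outputs
    are summed before the nonlinearity.
    """
    out = 0.0
    for A, W in zip(adjs, weights):
        A_hat = A + np.eye(A.shape[0])           # add self-loops
        deg = A_hat.sum(axis=1, keepdims=True)
        out = out + (A_hat / deg) @ X @ W        # row-normalized aggregation
    return relu(out)

# usage: 4 azimuth-range cells, 2 edge types, 3 -> 2 features
rng = np.random.default_rng(2)
X = rng.standard_normal((4, 3))
A1 = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
A2 = np.array([[0, 0, 1, 0], [0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]], float)
H = mc_gcn_layer(X, [A1, A2], [rng.standard_normal((3, 2)) for _ in range(2)])
```

Because each channel carries only one edge type, a node never mixes neighbor information across heterogeneous relations inside a single aggregation, which is the restriction the abstract credits for reducing misjudgment.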
Imaging of passive jamming objects has been a hot topic in radar imaging and countermeasures research, as it directly affects the detection and recognition capabilities of radar targets. In the microwave band, the long dwell time required to generate a single image with the desired azimuthal resolution makes it difficult to directly distinguish passive jamming objects based on imaging results. In addition, there is a lack of time-dimensional resolution. In comparison, terahertz imaging systems require a shorter synthetic aperture to achieve the same azimuthal resolution, making it easier to obtain low-latency, high-resolution, and high-frame-rate imaging results. Hence, terahertz radar has considerable potential in Video Synthetic Aperture Radar (ViSAR) technology. First, the aperture division and imaging resolutions of airborne terahertz ViSAR are briefly analyzed. Subsequently, imaging results and characteristics of stationary passive jamming objects, such as corner reflector arrays and camouflage mats, are explored before and after motion compensation. Further, the phenomenon that camouflage mats with fluctuating grids exhibit roughness in the terahertz band is demonstrated, revealing the special scattering characteristics of this band. Next, considering rotating corner reflectors as an example of moving passive jamming objects, their characteristics regarding suppressive interference are analyzed. Because stationary scenes remain similar under adjacent apertures, rotating corner reflectors can be directly detected by incoherent image subtraction after inter-frame image and amplitude registrations, followed by the extraction of signals of interest and non-parametric compensation. Currently, few field experiments regarding the imaging of passive jamming objects using terahertz ViSAR have been reported.
Airborne field experiments have been performed to effectively demonstrate the high-resolution and high-frame-rate imaging capabilities of terahertz ViSAR.
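The incoherent image-subtraction detection described above can be sketched simply: after inter-frame registration, the stationary scene cancels in the amplitude difference of adjacent frames, while a rotating reflector's fluctuating response survives. A toy sketch with a median-based threshold (a crude stand-in for a proper CFAR detector; the paper's registration and compensation steps are not reproduced):

```python
import numpy as np

def detect_by_frame_subtraction(frame_a, frame_b, k=5.0):
    """Incoherent change detection between two registered ViSAR amplitude
    frames: threshold the absolute difference image at k times its median."""
    diff = np.abs(frame_a - frame_b)
    thr = k * np.median(diff)
    return diff > thr

# usage: identical clutter scene in both frames, one flickering point target
rng = np.random.default_rng(3)
scene = np.abs(rng.standard_normal((64, 64)))
f1 = scene + 0.01 * rng.standard_normal((64, 64))   # frame noise
f2 = scene + 0.01 * rng.standard_normal((64, 64))
f2[32, 32] += 50.0                                  # rotating-reflector response
mask = detect_by_frame_subtraction(f1, f2)
```

The quality of the inter-frame registration directly bounds how well the stationary background cancels, which is why the abstract performs image and amplitude registration before subtraction.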
The miniature multistatic Synthetic Aperture Radar (SAR) system uses a flexible transceiver-separated configuration compared with the miniature monostatic SAR system, thereby affording the advantages of multi-angle imaging. As the transceiver-separated SAR system uses mutually independent oscillator sources, phase synchronization is necessary for high-precision imaging of the miniature multistatic SAR. Although current research on phase synchronization schemes for bistatic SAR is relatively mature, these schemes are primarily based on the pulsed SAR system. However, a paucity of research exists on phase synchronization for the miniature multistatic Frequency Modulated Continuous Wave (FMCW) SAR. In comparison with pulsed SAR, the FMCW SAR system lacks a temporal interval between the transmitted pulses. Consequently, some phase synchronization schemes developed for the pulsed SAR system cannot be directly applied to the FMCW SAR system. To this end, this study proposes a novel phase synchronization method for the miniature multistatic FMCW SAR, effectively resolving the synchronization problem of the FMCW SAR system. This method uses the generalized Short-Time Shift-Orthogonal (STSO) waveform as the phase synchronization signal of the disparate radar platforms. The phase error between the radar platforms can be effectively extracted through pulse compression to realize phase synchronization. Compared with the conventional linear frequency-modulated waveform, after the generalized STSO waveform is compressed by the same pulse compression function, the interference signal energy is concentrated away from the peak of the matching signal and the phase synchronization accuracy is enhanced. Furthermore, the proposed method is adapted to the characteristics of dechirp reception in FMCW miniature multistatic SAR systems, and ground and numerical simulation experiments verify that the proposed method has high synchronization accuracy.
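The generic extraction step the scheme builds on is to pulse-compress the received synchronization pulse against the known replica and read the residual phase at the correlation peak. A minimal sketch with a toy chirp (the STSO waveform design itself is not reproduced here):

```python
import numpy as np

def extract_sync_phase(rx, replica):
    """Pulse-compress rx against the known replica and return the phase at
    the correlation peak; for an oscillator offset exp(j*phi) applied to
    the whole pulse, the peak phase recovers phi."""
    corr = np.correlate(rx, replica, mode="full")   # conjugates the replica
    return np.angle(corr[np.argmax(np.abs(corr))])

# usage: chirp replica, received copy carrying a 0.7 rad oscillator offset
n = np.arange(256)
replica = np.exp(1j * np.pi * (n / 256.0) * n / 4)  # toy linear FM chirp
rx = replica * np.exp(1j * 0.7)
phase = extract_sync_phase(rx, replica)             # recovers ~0.7 rad
```

The abstract's point is that with an STSO synchronization waveform, energy from the other platform's interfering signal lands away from this correlation peak, so the phase read off the peak is cleaner than with a conventional chirp.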
In recent years, target recognition systems based on radar sensor networks have been widely studied in the field of automatic target recognition. These systems observe the target from multiple angles to achieve robust recognition, which also raises the problem of how to exploit the correlation and difference information among multiradar sensor echo data. Furthermore, most existing studies rely on large-scale labeled data to obtain prior knowledge of the target. Considering that a large amount of unlabeled data is not effectively used in target recognition tasks, this paper proposes an HRRP unsupervised target feature extraction method based on Multiple Contrastive Loss (MCL) in radar sensor networks. The proposed method combines instance-level loss, Fisher loss, and semantic consistency loss constraints to identify consistent and discriminative feature vectors among the echoes of multiple radar sensors and then uses them in subsequent target recognition tasks. Specifically, the original echo data are mapped to the contrastive loss space and the semantic label space. In the contrastive loss space, the contrastive loss is used to constrain the similarity and aggregation of samples so that the relative and absolute distances between different echoes of the same target obtained by different sensors are reduced while the relative and absolute distances between different target echoes are increased. In the semantic label space, the extracted discriminant features are used to constrain the semantic labels so that the semantic information and discriminant features are consistent. Experiments on an actual civil aircraft dataset revealed that the target recognition accuracy of the MCL-based method is improved by 0.4% and 1.4%, respectively, compared with the most advanced unsupervised algorithm CC and the supervised target recognition algorithm PNN. Further, MCL can effectively improve target recognition performance when multiple radar sensors are used jointly.
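The instance-level term described above can be sketched as an NT-Xent-style contrastive loss: features of the same target seen by two sensors form positive pairs, all other pairings are negatives. A minimal numpy sketch of this one term only (the Fisher and semantic consistency terms of the full MCL are not reproduced):

```python
import numpy as np

def instance_contrastive_loss(za, zb, tau=0.5):
    """NT-Xent-style instance-level loss between two sensors' features.

    Row i of za and zb are two views of target i; matching rows are pulled
    together while mismatched rows are pushed apart. tau is the temperature.
    """
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    sim = za @ zb.T / tau                       # (N, N) similarity logits
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))              # cross-entropy on the diagonal

# usage: aligned sensor features give a lower loss than mismatched ones
rng = np.random.default_rng(4)
z = rng.standard_normal((8, 16))
loss_match = instance_contrastive_loss(z, z + 0.01 * rng.standard_normal((8, 16)))
loss_shuffled = instance_contrastive_loss(z, np.roll(z, 1, axis=0))
```

Minimizing this term is what shrinks the distances between a target's echoes across sensors while enlarging the distances between different targets, as the abstract describes.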
The ionosphere can distort received signals, degrade imaging quality, and decrease the interferometric and polarimetric accuracies of spaceborne Synthetic Aperture Radars (SAR). Low-frequency systems operating at L-band and P-band are especially susceptible to such problems. From another viewpoint, low-frequency spaceborne SARs can capture ionospheric structures with different spatial scales over the observed scope, and their echo and image data carry sufficient ionospheric information, offering great potential for high-precision and high-resolution ionospheric probing. The research progress of ionospheric probing based on spaceborne SARs is reviewed in this paper. The technological system of this field is summarized from three aspects: mapping of background ionospheric total electron content, tomography of ionospheric electron density, and probing of ionospheric irregularities. The potential of low-frequency spaceborne SARs in mapping both local refined ionospheric structures and global tendencies is emphasized, and future development directions are discussed.
As a representative of China’s new generation of spaceborne long-wavelength Synthetic Aperture Radar (SAR), the LuTan-1A (LT-1A) satellite was launched into a sun-synchronous orbit in January 2022. The SAR onboard the LT-1A satellite operates in the L band and offers various earth observation capabilities, including single-polarization, linear dual-polarization, compressed dual-polarization, and quad-polarization modes. Existing research has mainly focused on LT-1A interferometric data acquisition capabilities and the accuracy evaluation of digital elevation models and displacement measurements; research on the radiometric and polarimetric accuracy of the LT-1A satellite is limited. This article uses tropical rainforest vegetation as a reference to evaluate and analyze the radiometric error and polarimetric stability of the LT-1A satellite in the full polarization observation mode through a self-calibration method that does not rely on artificial calibrators. The experiments demonstrate that the LT-1A satellite has good radiometric stability and polarimetric accuracy, exceeding the specifications recommended by the Committee on Earth Observation Satellites (CEOS). Fluctuations in the Normalized Radar Cross-Section (NRCS) error within 1,000 km of continuous observation are less than 1 dB (3σ), and system radiometric errors show no significant changes, remaining below 0.5 dB (3σ), when observation is resumed within five days. In the full polarization observation mode, the system crosstalk is less than −35 dB, reaching as low as −45 dB. Further, the cross-polarization channel imbalance is better than 0.2 dB and 2°, while the co-polarization channel imbalance is better than 0.5 dB and 10°. The equivalent thermal noise ranges from −42 to −22 dB, and the average equivalent thermal noise of the system is better than −25 dB.
The level of thermal noise may increase to some extent with increasing continuous observation duration. Additionally, this study found that the ionosphere significantly affects the quality of LT-1A polarization data: a Faraday rotation angle of approximately 5° causes crosstalk of nearly −20 dB. In middle- and low-latitude regions, the Faraday rotation angle commonly ranges from 3° to 20°, which can cause polarimetric distortion errors between channels ranging from −21.16 to −8.78 dB. This interference from the ionospheric environment is considerably greater than the influence of the roughly −40 dB system crosstalk errors. This research carefully assesses the radiometric and polarimetric quality of LT-1A data over the dense vegetation of the Amazon rainforest and provides valuable information for industrial users, and thus holds significant scientific importance and reference value.
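The channel-distortion figures quoted above can be reproduced with a simple first-order relation in which a Faraday rotation by Ω leaks a fraction tan Ω of the co-polarized return into the cross-polarized channel. This is a hypothetical illustration of the magnitude of the effect, not the self-calibration procedure used in the study:

```python
from math import tan, radians, log10

def faraday_crosstalk_db(omega_deg):
    """Cross-polarized leakage (dB) induced by a Faraday rotation of
    omega_deg, under a simple first-order tan(omega) leakage model."""
    return 20 * log10(tan(radians(omega_deg)))

# The model matches the figures quoted above: a ~5 deg rotation gives
# about -21 dB crosstalk, and 20 deg gives about -8.8 dB.
for omega in (3, 5, 20):
    print(f"{omega:2d} deg -> {faraday_crosstalk_db(omega):6.2f} dB")
```

Under this model the −21.16 to −8.78 dB distortion range corresponds directly to rotation angles between about 5° and 20°.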
Bistatic Synthetic Aperture Radar (BiSAR) needs to suppress ground background clutter when detecting and imaging ground moving targets. However, due to the spatial configuration of BiSAR, the clutter exhibits severe space-time nonstationarity, which degrades clutter suppression performance. Although Space-Time Adaptive Processing based on Sparse Recovery (SR-STAP) can mitigate the nonstationarity by reducing the number of samples required, an off-grid dictionary problem arises during processing, degrading the space-time spectrum estimation. Moreover, although most typical SR-STAP methods have clear mathematical formulations and interpretability, they suffer from problems such as improper parameter settings and complicated operation in complex, changeable scenes. To solve these problems, a complex-valued neural network based on the Alternating Direction Method of Multipliers (ADMM) is proposed for BiSAR space-time adaptive clutter suppression. First, a sparse recovery model of the continuous clutter space-time domain of BiSAR is constructed based on Atomic Norm Minimization (ANM) to overcome the off-grid problem associated with the traditional discrete dictionary model. Second, ADMM is used to rapidly and iteratively solve the BiSAR clutter spectral sparse recovery model. Third, according to the iterative and data flow diagrams, the hand-tuned hyperparameter iteration is unrolled into ANM-ADMM-Net. Then, a normalized root-mean-square-error loss function is set up and the network is trained with the obtained dataset. Finally, the trained ANM-ADMM-Net is used to rapidly process BiSAR echo data, so that the space-time spectrum of BiSAR clutter is accurately estimated and efficiently suppressed. The effectiveness of this approach is validated through simulations and airborne BiSAR clutter suppression experiments.
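The idea of unrolling an iterative sparse-recovery solver into network layers can be sketched with a toy real-valued ISTA loop: each "layer" is one gradient step followed by soft-thresholding, and in a learned unrolling the step size and threshold would be trainable per layer. The paper's ANM-ADMM-Net operates on complex space-time clutter data with an atomic-norm model; this is only a conceptual sketch:

```python
def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return max(v - t, 0.0) - max(-v - t, 0.0)

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def unrolled_ista(A, y, layers=50, step=0.4, lam=0.01):
    """Each loop pass plays the role of one network layer; in a learned
    unrolling, `step` and `lam` differ per layer and are trained."""
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(layers):
        r = [yi - ai for yi, ai in zip(y, matvec(A, x))]  # residual y - Ax
        g = matvec(At, r)                                 # gradient direction
        x = [soft(xi + step * gi, step * lam) for xi, gi in zip(x, g)]
    return x

A = [[1.0, 0.5], [0.5, 1.0]]
y = [1.0, 0.5]              # generated by the sparse vector x_true = [1, 0]
x_hat = unrolled_ista(A, y)  # recovers approximately [1, 0]
```

The fixed iteration count is what makes the procedure expressible as a feed-forward network with interpretable layers.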
2025, 14(1): 1-15.
Low-frequency Ultra-WideBand (UWB) radar offers significant advantages in the field of human activity recognition owing to its excellent penetration and resolution. To address the issues of high computational complexity and extensive network parameters in existing action recognition algorithms, this study proposes an efficient and lightweight human activity recognition method using UWB radar based on spatiotemporal point clouds. First, four-dimensional motion data of the human body are collected using UWB radar. A discrete sampling method is then employed to convert the radar images into point cloud representations. Because human activity recognition is a classification problem on time series, this paper combines the PointNet++ network with the Transformer network to propose a lightweight spatiotemporal network. By extracting and analyzing the spatiotemporal features of four-dimensional point clouds, end-to-end human activity recognition is achieved. During the model training process, a multithreshold fusion method is proposed for point cloud data to further enhance the model’s generalization and recognition capabilities. The proposed method is then validated using a public four-dimensional radar imaging dataset and compared with existing methods. The results show that the proposed method achieves a human activity recognition rate of 96.75% while consuming fewer parameters and computational resources, thereby verifying its effectiveness.
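The discrete-sampling step that converts a radar image into a point cloud can be illustrated by amplitude thresholding: every pixel above a chosen level becomes a point whose coordinates come from its bin indices. This is a minimal sketch with hypothetical bin spacings, not the paper's exact procedure:

```python
def image_to_point_cloud(image, threshold, d_range=0.05, d_doppler=0.1):
    """Keep pixels above `threshold` as (range, doppler, amplitude) points.
    d_range / d_doppler are assumed bin sizes (metres, m/s)."""
    points = []
    for i, row in enumerate(image):
        for j, amp in enumerate(row):
            if amp > threshold:
                points.append((i * d_range, j * d_doppler, amp))
    return points

image = [
    [0.1, 0.2, 0.1],
    [0.1, 0.9, 0.8],   # a strong target response
    [0.1, 0.2, 0.1],
]
cloud = image_to_point_cloud(image, threshold=0.5)
# Two pixels exceed the threshold, so the cloud has two points.
```

A sparse point list like this, rather than the dense image, is what the PointNet++/Transformer stages then consume.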
2025, 14(1): 16-27.
This study focuses on integrating optical and radar sensors for human pose estimation. Based on the physical correspondence between continuous-time micromotion accumulation and pose increments, a single-channel ultra-wideband radar scheme for incremental human pose estimation is proposed. Specifically, by constructing a spatiotemporal incremental estimation network that uses spatiotemporal pseudo-3D convolutional and time-domain-dilated convolutional layers to extract spatiotemporal micromotion features step by step, mapping these features to human pose increments within a time period, and combining them with the initial pose provided by optics, 3D human pose estimation is realized. Results on measured data show that the fused pose estimation achieves an error of 5.38 cm on the original action set and supports continuous pose estimation over periods of walking. Comparison and ablation experiments against other radar-based pose estimation methods demonstrate the advantages of the proposed method.
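The increment-based scheme reduces to simple bookkeeping: optics supplies the initial joint positions and the radar network supplies per-interval increments, which are accumulated over time. The numbers below are made up purely for illustration:

```python
def accumulate_pose(initial_pose, increments):
    """initial_pose: list of (x, y, z) joints from the optical sensor.
    increments: per-interval lists of (dx, dy, dz) estimated from radar.
    Returns the pose trajectory. Errors accumulate over time, which is
    why an accurate optical initial value matters."""
    trajectory = [list(map(list, initial_pose))]
    for inc in increments:
        prev = trajectory[-1]
        trajectory.append([[c + d for c, d in zip(joint, delta)]
                           for joint, delta in zip(prev, inc)])
    return trajectory

pose0 = [(0.0, 0.0, 1.5)]                      # one joint, e.g. the head
incs = [[(0.1, 0.0, 0.0)], [(0.1, 0.0, -0.1)]]
traj = accumulate_pose(pose0, incs)
# final joint position: approximately (0.2, 0.0, 1.4)
```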
2025, 14(1): 28-44.
Ultra-WideBand (UWB) radar exhibits strong antijamming capabilities and high penetrability, making it widely used for through-wall human-target detection. Although single-transmitter, single-receiver radar offers the advantages of a compact size and lightweight design, it cannot achieve Two-Dimensional (2D) target localization. Multiple-Input Multiple-Output (MIMO) array radar can localize targets but faces a trade-off between size and resolution and involves longer computation durations. This paper proposes an automatic multitarget detection method based on distributed through-wall radar. First, the echo signal is preprocessed in the time domain and then transformed into the time-frequency domain. Candidate target range cells are identified using a constant false alarm rate detection method, and candidate signals are enhanced using a filtering matrix. The enhanced signals are then correlated based on vital information, such as breathing, to achieve target matching. Finally, a positioning module is employed to determine the radar’s location, enabling rapid and automatic detection of the target’s location. To mitigate the effect of occasional errors on the final positioning results, a scene segmentation method is used to achieve 2D localization of human targets in through-wall scenarios. Experimental results demonstrate that the proposed method can successfully detect and localize multiple targets in through-wall scenarios, with a computation duration of 0.95 s on the measured data. In particular, the method is over four times faster than other methods.
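The constant false alarm rate step can be sketched as a classic 1D cell-averaging CFAR, in which each cell's detection threshold scales with the mean of its surrounding training cells. This is the generic textbook detector, not necessarily the exact variant used in the paper:

```python
def ca_cfar(x, n_train=8, n_guard=2, pfa=1e-3):
    """1D cell-averaging CFAR. Returns indices declared as detections."""
    n = 2 * n_train                          # total training cells
    alpha = n * (pfa ** (-1.0 / n) - 1.0)    # scaling for the desired Pfa
    hits = []
    for i in range(n_train + n_guard, len(x) - n_train - n_guard):
        lead = x[i - n_guard - n_train : i - n_guard]
        lag = x[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise = sum(lead + lag) / n          # local noise-level estimate
        if x[i] > alpha * noise:
            hits.append(i)
    return hits

profile = [1.0] * 100
profile[50] = 30.0                           # a strong target in flat noise
print(ca_cfar(profile))                      # -> [50]
```

Because the threshold adapts to the local noise estimate, the false-alarm rate stays fixed even when the clutter level varies along the range profile.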
2025, 14(1): 44-61.
Through-wall human pose reconstruction and behavior recognition have enormous potential in fields like intelligent security and virtual reality. However, existing methods for through-wall human sensing often fail to adequately model four-Dimensional (4D) spatiotemporal features and overlook the influence of walls on signal quality. To address these issues, this study proposes an innovative architecture for through-wall human sensing using a 4D imaging radar. The core of this approach is the ST2W-AP fusion network, which is designed using a stepwise spatiotemporal separation strategy. This network overcomes the limitations of mainstream deep learning libraries that currently lack 4D convolution capabilities, which hinders the effective use of multiframe three-Dimensional (3D) voxel spatiotemporal domain information. By preserving 3D spatial information and using long-sequence temporal information, the proposed ST2W-AP network considerably enhances the pose estimation and behavior recognition performance. Additionally, to address the influence of walls on signal quality, this paper introduces a deep echo domain compensator that leverages the powerful fitting performance and parallel output characteristics of deep learning, thereby reducing the computational overhead of traditional wall compensation methods. Extensive experimental results demonstrate that compared with the best existing methods, the ST2W-AP network reduces the average joint position error by 33.57% and improves the F1 score for behavior recognition by 0.51%.
2025, 14(1): 62-72.
Unmanned Aerial Vehicle (UAV)-borne radar technology can solve the problems associated with noncontact vital sign sensing, such as limited detection range, slow moving speed, and difficult access to certain areas. In this study, we mount a 4D imaging radar on a multirotor UAV and propose a UAV-borne radar-based method for sensing vital signs through point cloud registration. Through registration and motion compensation of the radar point cloud, the motion error interference of UAV hovering is eliminated; vital sign signals are then obtained after aligning the human target. Simulation results show that the proposed method can effectively align the 4D radar point cloud sequence and accurately extract the respiration and heartbeat signals of human targets, thereby providing a way to realize UAV-borne vital sign sensing.
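The registration idea can be sketched in its simplest form: if the UAV's hover motion shifts every point in a frame by a common offset, subtracting each frame's centroid removes that offset, and the residual motion of a chest point recovers the breathing waveform. This is a toy translational model; the paper's 4D point cloud registration is more general:

```python
from math import sin, pi

def align_frames(frames):
    """Subtract each frame's centroid so a common platform shift cancels."""
    aligned = []
    for pts in frames:
        cx = sum(p[0] for p in pts) / len(pts)
        aligned.append([(p[0] - cx, p[1], p[2]) for p in pts])
    return aligned

# Simulated body: 9 static points plus one chest point breathing at 0.3 Hz
# with 4 mm amplitude; UAV hover drift adds a common offset to every frame.
frames = []
for k in range(200):
    t = k / 10.0
    drift = 0.05 * sin(2 * pi * 0.1 * t)        # hover motion, metres
    breath = 0.004 * sin(2 * pi * 0.3 * t)      # chest displacement
    pts = [(0.5 * i + drift, 0.0, 1.0) for i in range(9)]
    pts.append((5.0 + drift + breath, 0.0, 1.3))  # chest point
    frames.append(pts)

chest_x = [f[-1][0] for f in align_frames(frames)]
swing = max(chest_x) - min(chest_x)             # recovered breathing p-p
```

After alignment the 5 cm hover drift is gone and the recovered peak-to-peak swing is close to the simulated 8 mm breathing motion (slightly reduced because the chest point also contributes to the centroid).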
2025, 14(1): 73-90.
Millimeter-wave radar is increasingly being adopted for smart home systems, elder care, and surveillance monitoring, owing to its adaptability to environmental conditions, high resolution, and privacy-preserving capabilities. A key factor in effectively utilizing millimeter-wave radar is the analysis of point clouds, which are essential for recognizing human postures. However, the sparse nature of these point clouds poses significant challenges for accurate and efficient human action recognition. To overcome these issues, we present a 3D point cloud dataset tailored for human actions captured using millimeter-wave radar (mmWave-3DPCHM-1.0). This dataset is enhanced with advanced data processing techniques and cutting-edge human action recognition models. Data collection is conducted using Texas Instruments (TI)’s IWR1443-ISK and Vayyar’s vBlu radio imaging module, covering 12 common human actions, including walking, waving, standing, and falling. At the core of our approach is the Point EdgeConv and Transformer (PETer) network, which integrates edge convolution with transformer models. For each 3D point cloud frame, PETer constructs a locally directed neighborhood graph through edge convolution to extract spatial geometric features effectively. The network then leverages a series of Transformer encoding models to uncover temporal relationships across multiple point cloud frames. Extensive experiments reveal that the PETer network achieves exceptional recognition rates of 98.77% on the TI dataset and 99.51% on the Vayyar dataset, outperforming the traditional optimal baseline model by approximately 5%. With a compact model size of only 1.09 MB, PETer is well-suited for deployment on edge devices, providing an efficient solution for real-time human action recognition in resource-constrained environments.
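The edge-convolution step can be sketched in pure Python: for each point, find its k nearest neighbours, form edge features that concatenate the point with the neighbour offsets, and max-pool over the neighbourhood. This is a simplified, weight-free version of what EdgeConv computes before its learned MLP:

```python
def edge_conv(points, k=2):
    """points: list of (x, y, z). Returns one 6-D feature per point:
    [x_i, x_j - x_i] max-pooled over the k nearest neighbours j."""
    feats = []
    for i, p in enumerate(points):
        # k nearest neighbours by squared Euclidean distance
        others = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(points[j], p)),
        )[:k]
        edges = [list(p) + [a - b for a, b in zip(points[j], p)]
                 for j in others]
        # channel-wise max pooling over the neighbourhood
        feats.append([max(e[c] for e in edges) for c in range(6)])
    return feats

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (5.0, 5.0, 5.0)]
features = edge_conv(cloud)
```

The neighbour offsets make the feature capture local geometry (edge directions) rather than absolute position alone, which is what makes EdgeConv effective on sparse radar point clouds.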
2025, 14(1): 90-102.
This study proposes a computer vision-assisted millimeter-wave wireless channel simulation method that incorporates the scattering characteristics of human motion. The aim is to rapidly and cost-effectively generate a training dataset for wireless human motion recognition, avoiding the laborious and cost-intensive effort of physical measurements. Specifically, the simulation proceeds as follows. First, the human body is modeled as 35 interconnected ellipsoids using a primitive-based model, and motion data for these ellipsoids are extracted from videos of human motion. A simplified ray tracing method is then used to obtain the channel response for each snapshot of the primitive model during the motion. Finally, Doppler analysis is performed on the channel responses of the snapshots to obtain Doppler spectrograms. The simulated Doppler spectrograms can be used to train a deep neural network for real wireless human motion recognition. This study examines the channel simulation and action recognition results for four common human actions (“walking”, “running”, “falling”, and “sitting down”) in the 60 GHz band. Experimental results indicate that the deep neural network trained with the simulated dataset achieves an average recognition accuracy of 73.0% in real-world wireless motion recognition. Furthermore, the recognition accuracy can be increased to 93.75% via unlabeled transfer learning and fine-tuning with a small amount of real data.
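The final Doppler-analysis step can be sketched for a single scatterer: the slow-time channel response of a point moving at radial velocity v carries a Doppler shift f_d = 2v/λ, and a short-time DFT of that response produces one column of the spectrogram. A minimal single-window sketch:

```python
from cmath import exp
from math import pi

def dft_mag(x):
    """Magnitude of the DFT of a complex sequence (direct computation)."""
    n = len(x)
    return [abs(sum(x[m] * exp(-2j * pi * k * m / n) for m in range(n)))
            for k in range(n)]

wavelength = 0.005          # 60 GHz band
v = 1.0                     # radial velocity, m/s
fd = 2 * v / wavelength     # Doppler shift: 400 Hz
fs = 2000.0                 # slow-time sampling rate, Hz

# Slow-time channel response of the moving scatterer (one STFT window).
window = [exp(2j * pi * fd * n / fs) for n in range(64)]
spectrum = dft_mag(window)
peak_bin = max(range(64), key=spectrum.__getitem__)
# fd / (fs / 64) = 12.8, so the peak lands in the nearest bin, 13.
```

Sliding this window along slow time and stacking the spectra column by column yields the Doppler spectrogram used as network input.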
2025, 14(1): 102-116.
Sleep Apnea Hypopnea Syndrome (SAHS) is a common chronic sleep-related breathing disorder that affects individuals’ sleep quality and physical health. This article presents a sleep apnea and hypopnea detection framework based on multisource signal fusion. Integrating millimeter-wave radar micro-motion signals and PhotoPlethysmoGraphy (PPG) pulse wave signals achieves a highly reliable and light-contact diagnosis of SAHS, addressing the drawbacks of traditional medical methods that rely on PolySomnoGraphy (PSG) for sleep monitoring, such as poor comfort and high costs. This study used a radar and pulse wave data preprocessing algorithm to extract time-frequency information and handcrafted features from the signals, balancing the accuracy and robustness of sleep-breathing abnormality event detection. Additionally, a deep neural network was designed to fuse the two types of signals for precise identification of sleep apnea and hypopnea events, and to estimate the Apnea-Hypopnea Index (AHI) for quantitative assessment of sleep-breathing abnormality severity. Experimental results on a clinical trial dataset from Shanghai Jiaotong University School of Medicine Affiliated Sixth People’s Hospital demonstrated that the AHI estimated by the proposed approach correlates with the gold-standard PSG with a coefficient of 0.93, indicating good consistency. This approach is a promising tool for home sleep-breathing monitoring and preliminary diagnosis of SAHS.
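The AHI itself is simple to compute once events have been identified: it is the number of apnea and hypopnea events per hour of sleep, commonly banded into severity grades at 5, 15, and 30 events per hour. The banding below is the standard clinical convention, not something specific to this paper:

```python
def apnea_hypopnea_index(n_events, sleep_hours):
    """AHI = detected apnea + hypopnea events per hour of sleep."""
    return n_events / sleep_hours

def severity(ahi):
    """Conventional clinical severity bands for the AHI."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

ahi = apnea_hypopnea_index(n_events=42, sleep_hours=6.0)   # 7.0 events/h
print(ahi, severity(ahi))                                  # 7.0 mild
```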
2025, 14(1): 117-134.
In recent years, there has been an increasing interest in respiratory monitoring in multiperson environments and simultaneous monitoring of the health status of multiple people. Among the algorithms developed for multiperson respiratory detection, blind source separation algorithms have attracted the attention of researchers because they do not require prior information and are less dependent on hardware performance. However, in the context of multiperson respiratory monitoring, current blind source separation algorithms usually take phase signals as the source signals to be separated. This article compares the distance dimension and phase signals of Frequency-Modulated Continuous-Wave (FMCW) radar, calculates the approximate error associated with using the phase signal as the source signal, and verifies the separation effect through simulations, showing that the distance dimension signal is the better choice of source signal. In addition, this article proposes a multiperson respiratory signal separation algorithm based on noncircular complex independent component analysis and analyzes the impact of different respiratory signal parameters on the separation effect. Simulation and experimental measurements show that the proposed method is suitable for detecting multiperson respiratory signals under controlled conditions and can accurately separate respiratory signals when the angle of the two targets to the radar is 9.46°.
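The blind-source-separation idea can be illustrated with a minimal real-valued FastICA (tanh nonlinearity) recovering two simulated breathing signals of different rates from linear mixtures. The paper's algorithm is a noncircular complex ICA operating on radar data, so this real-valued toy is only an assumption-laden stand-in for the general principle:

```python
import numpy as np

def fastica(X, n_iter=500, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity); rows of X are samples."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (E @ np.diag(d ** -0.5) @ E.T)          # ZCA whitening
    W = rng.standard_normal((Z.shape[1], Z.shape[1]))
    for _ in range(n_iter):
        G = np.tanh(Z @ W.T)
        W = (G.T @ Z) / len(Z) - np.diag((1 - G ** 2).mean(axis=0)) @ W
        U, _, Vt = np.linalg.svd(W)                 # symmetric decorrelation
        W = U @ Vt
    return Z @ W.T

# Two subjects breathing at 15 and 24 breaths/min, mixed at two observation channels
fs = 20.0
t = np.arange(0, 60, 1 / fs)
S = np.c_[np.sin(2 * np.pi * 0.25 * t), np.sin(2 * np.pi * 0.40 * t + 1.0)]
X = S @ np.array([[1.0, 0.6], [0.5, 1.0]]).T        # mixed observations
S_hat = fastica(X)                                  # recovered (up to order/sign/scale)
```

ICA recovers the sources only up to permutation, sign, and scale, which is why any check must match each recovered component against its best-correlated source.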
2025, 14(1): 135-150.
In non-inductive radar vital sign monitoring, frequency-modulated radars (such as Frequency Modulated Continuous Wave (FMCW) and Ultra-WideBand (UWB)) are more effective than Continuous Wave (CW) radars at distinguishing targets from clutter in terms of distance. Using the range Fourier transform, the heartbeat and breathing signals can be extracted from quasi-static targets across various distance intervals, thereby improving monitoring accuracy. However, the commonly used range Fast Fourier Transform (FFT) presents certain limitations: the breathing amplitude of the subject may cross the range bin boundary, compromising signal integrity, while breathing movements can cause amplitude modulation of physiological signals, hindering waveform recovery. To address these limitations, we propose an algorithm architecture featuring range tap reconstruction and dynamic demodulation. We tested the algorithm performance in simulations and experiments for cross-range-bin cases. Simulation results indicate that processing signals crossing range bins with our algorithm improves the signal-to-noise ratio by 17±5 dB. In addition, experiments recorded Doppler Heartbeat Diagram (DHD) signals from eight subjects and compared their consistency with the BallistoCardioGram (BCG). The root-mean-square error of the C-C interval in the DHD signal relative to the J-J interval in the BCG signal was 21.58±13.26 ms (3.40%±2.08%).
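The baseline range-FFT-then-phase pipeline that this work builds on can be sketched for the simple case where the chest stays within one range bin: take a range FFT per chirp, pick the target's bin, and unwrap the slow-time phase there into displacement via d = φ·λ/(4π). All parameters below (77 GHz carrier, 30 MHz/µs slope, 2 mm breathing motion) are illustrative assumptions, not the paper's system:

```python
import numpy as np

c = 3e8
f0 = 77e9                          # carrier frequency (illustrative)
lam = c / f0
slope = 30e12                      # chirp slope, Hz/s (illustrative)
fs, n_samp = 2e6, 256              # fast-time sampling
n_chirps, chirp_rate = 400, 20.0   # slow time: 20 chirps/s for 20 s

t_fast = np.arange(n_samp) / fs
t_slow = np.arange(n_chirps) / chirp_rate
disp = 2e-3 * np.sin(2 * np.pi * 0.3 * t_slow)   # 2 mm breathing at 0.3 Hz
r = 1.0 + disp[:, None]                          # target ~1 m away

# Ideal dechirped beat signal: phase = 2*pi*(f_beat*t + 2*r/lambda)
beat = np.exp(1j * 2 * np.pi * (2 * slope * r / c * t_fast + 2 * r / lam))

rp = np.fft.fft(beat, axis=1)            # range FFT per chirp
k = np.argmax(np.abs(rp).mean(axis=0))   # strongest range bin
phase = np.unwrap(np.angle(rp[:, k]))    # slow-time phase at that bin
disp_hat = phase * lam / (4 * np.pi)     # phase -> displacement
disp_hat -= disp_hat.mean()              # remove the constant range offset
```

When the motion crosses a bin boundary, the single-bin assumption in line `rp[:, k]` breaks down, which is exactly the failure mode the paper's range tap reconstruction targets.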
2025, 14(1): 151-167.
Recent research on radar-based human activity recognition has typically focused on activities that move toward or away from radar in radial directions. Conventional Doppler-based methods can barely describe the true characteristics of nonradial activities, especially static postures or tangential activities, resulting in a considerable decline in recognition performance. To address this issue, a method for recognizing tangential human postures based on sequential images of a Multiple-Input Multiple-Output (MIMO) radar system is proposed. A time sequence of high-quality images is generated to describe the structure of the human body and its dynamic changes, from which spatial and temporal features are extracted to enhance recognition performance. First, a Constant False Alarm Rate (CFAR) algorithm is applied to locate the human target. A sliding window along the slow time axis is then utilized to divide the received signal into sequential frames. Next, a fast Fourier transform and the 2D Capon algorithm are performed on each frame to estimate range, pitch angle, and azimuth angle information, which are fused to create a tangential posture image; these images are concatenated to form a time sequence of tangential posture images. To improve image quality, a modified joint multidomain adaptive threshold-based denoising algorithm is applied to suppress noise and enhance the human body outline and structure. Finally, a Spatio-Temporal-Convolution Long Short Term Memory (ST-ConvLSTM) network is designed to process the sequential images. In particular, the ConvLSTM cell is used to extract continuous image features by combining the convolution operation with the LSTM cell. Moreover, spatial and temporal attention modules are utilized to emphasize intraframe and interframe focus for improving recognition performance.
Extensive experiments show that our proposed method can achieve an accuracy rate of 96.9% in classifying eight typical tangential human postures, demonstrating its feasibility and superiority in tangential human posture recognition.
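The first step of the pipeline above, CFAR target detection, can be sketched as a 1D cell-averaging CFAR over a range power profile. This is the generic textbook CA-CFAR, not the authors' exact variant; the training/guard sizes and false-alarm rate are illustrative:

```python
import numpy as np

def ca_cfar(power, n_train=8, n_guard=2, pfa=1e-3):
    """1D cell-averaging CFAR on a power profile. Returns a boolean detection mask."""
    n_cells = 2 * n_train                              # total training cells
    alpha = n_cells * (pfa ** (-1.0 / n_cells) - 1.0)  # threshold scaling factor
    det = np.zeros(len(power), dtype=bool)
    half = n_train + n_guard
    for i in range(half, len(power) - half):
        lead = power[i - half : i - n_guard]           # training cells before CUT
        lag = power[i + n_guard + 1 : i + half + 1]    # training cells after CUT
        noise = (lead.sum() + lag.sum()) / n_cells     # local noise estimate
        det[i] = power[i] > alpha * noise
    return det

# Exponentially distributed clutter with one strong target at bin 50
rng = np.random.default_rng(1)
profile = rng.exponential(1.0, size=200)
profile[50] += 100.0
mask = ca_cfar(profile)   # mask[50] should be True, with few false alarms elsewhere
```

The guard cells keep the target's own energy out of the noise estimate, which is what lets the threshold adapt to local clutter while still firing on the target cell.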
2025, 14(1): 168-188.
Amidst the global aging trend and a growing emphasis on healthy living, there is an increased demand for unobtrusive home health monitoring systems. However, the current mainstream detection methods in this regard suffer from low privacy trust, poor electromagnetic compatibility, and high manufacturing costs. To address these challenges, this paper introduces a noncontact vital sign collection device using Ultrasonic radar (U-Sodar), including a set of hardware based on a three-transmitter four-receiver Multiple Input Multiple Output (MIMO) architecture and a set of signal processing algorithms. The U-Sodar local oscillator uses frequency division technology with low phase noise and high detection accuracy; the receiver employs front-end direct sampling technology to simplify the structure and effectively reduce external noise; and the transmitter uses an adjustable PWM direct drive to emit various ultrasonic waveforms, giving the system software-defined ultrasonic characteristics. The signal processing algorithm of U-Sodar adopts a graph processing technique based on signal chord length and achieves accurate recovery of the signal phase at a 5 dB Signal-to-Noise Ratio (SNR) using image filtering and reconstruction. Experimental tests on the U-Sodar system demonstrated its anti-interference and penetration capabilities, showing that ultrasonic penetration relies on material porosity rather than intermedium vibration conduction. The minimum measurable displacement for a given SNR with correct demodulation probability is also derived. The results of actual human vital sign measurement experiments indicate that U-Sodar can accurately measure respiration and heartbeat at 3.0 m and 1.5 m, respectively, and heartbeat waveforms can be measured within 1.0 m. Overall, the experimental results demonstrate the feasibility and application potential of U-Sodar in noncontact vital sign detection.
2025, 14(1): 189-203.
Since 2010, the utilization of commercial WiFi devices for contact-free respiration monitoring has garnered significant attention. However, existing WiFi-based respiration detection methods are susceptible to constraints imposed by hardware limitations and require the person to directly face the WiFi device. Specifically, signal reflection from the thoracic cavity diminishes when the body is oriented sideways or with the back toward the device, complicating respiratory monitoring. To mitigate these hardware-associated limitations and enhance robustness, we leveraged the signal-amplifying potential of Intelligent Reflecting Surfaces (IRS) to establish a high-precision respiration detection system. This system capitalizes on IRS technology to manipulate signal propagation within the environment and enhance signal reflection from the body, achieving posture-resilient respiratory monitoring. Furthermore, the system can be easily deployed without prior knowledge of antenna placement or the environment. Compared with conventional techniques, our experimental results validate that this system markedly enhances respiratory monitoring across various postural configurations in indoor environments.
2025, 14(1): 204-228.
Due to their many advantages, such as simple structure, low transmission power, strong penetration capability, high resolution, and high transmission speed, Ultra-WideBand (UWB) radars have been widely used for detecting life information in various scenarios. To effectively detect life information, the key is to use radar echo information-processing technology to extract the breathing and heartbeat signals of the person from UWB radar echoes. This technology is crucial for determining life information in different scenarios, such as obtaining location information, monitoring and preventing diseases, and ensuring personnel safety. Therefore, this paper introduces UWB radar, covering its classification, electromagnetic scattering mechanisms, and detection principles. It also analyzes the current state of radar echo model construction for breathing and heartbeat signals. The paper then reviews existing methods for extracting breathing and heartbeat signals, including time domain, frequency domain, and time-frequency domain analysis methods. Finally, it summarizes research progress in breathing and heartbeat signal extraction in various scenarios, such as mine rescue, earthquake rescue, medical health, and through-wall detection, as well as the main problems in current research and focus areas for future research.
2025, 14(1): 229-247.
Human pose estimation holds tremendous potential in fields such as human-computer interaction, motion capture, and virtual reality, making it a focus in human perception research. However, optical image-based pose estimation methods are often limited by lighting conditions and privacy concerns. Therefore, the use of wireless signals that can operate under various lighting conditions and obstructions while ensuring privacy is gaining increasing attention for human pose estimation. Wireless signal-based pose estimation technologies can be categorized into high-frequency and low-frequency methods. These methods differ in their hardware systems, signal characteristics, noise processing, and deep learning algorithm design based on the signal frequency used. This paper highlights research advancements and notable works in human pose reconstruction using millimeter-wave radar, through-wall radar, and WiFi. It analyzes the advantages and limitations of each signal type and explores potential research challenges and future developments in the field.