Interrupted Sampling Repeater Jamming (ISRJ) is a type of intrapulse coherent jamming that can form multiple realistic false targets leading or lagging the actual target, severely degrading radar detection, and it is a current research hotspot in electronic counter-countermeasures. To address this problem, this paper proposes an anti-ISRJ method based on an intrapulse frequency-coded joint Frequency Modulation (FM) slope agile waveform. In this method, the radar first transmits an intrapulse frequency-coded joint FM slope agile signal, in which agility of the subpulse center frequency and FM slope improves the mutual coverability of the subpulses. Next, the echo signal is divided into slices according to the subpulse timing of the transmitted signal. Then, the Fuzzy C-Means (FCM) algorithm is used to classify the echo slices. Finally, the interference is suppressed via joint fractional-domain and time-domain filtering. Simulation results show that the FCM-based method identifies 100% of the interfered echo slices in a jammer synchronous sampling scenario when the Signal-to-Noise Ratio (SNR) exceeds −2.5 dB and the Jamming-to-Signal Ratio (JSR) exceeds 5 dB. For high JSRs and low SNRs, the proposed method effectively reduces target energy loss and suppresses the range sidelobes generated by residual interference. Moreover, the target detection probability after interference suppression exceeds 90% when JSR = 50 dB.
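The slice-classification step described above can be sketched with a minimal Fuzzy C-Means implementation. This is an illustrative toy, not the paper's implementation: the per-slice feature (a single energy value per slice) and the two-cluster setup (interfered vs. clean) are assumptions made here for demonstration.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, tol=1e-6, seed=0):
    # Fuzzy C-Means: X is an (n, d) feature matrix, c clusters, fuzzifier m.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)              # random fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                      # avoid division by zero
        p = 2.0 / (m - 1.0)
        Unew = 1.0 / (d**p * np.sum((1.0 / d)**p, axis=1, keepdims=True))
        if np.abs(Unew - U).max() < tol:
            U = Unew
            break
        U = Unew
    return U, centers
```

Because strongly interfered slices carry far more energy than clean ones at high JSR, even this one-feature clustering separates the two groups cleanly.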
Real Aperture Radar (RAR) observes wide-scope target information by scanning its antenna. However, because of the limited antenna size, the angular resolution of RAR is much lower than its range resolution. Angular super-resolution methods can enhance the angular resolution of RAR by inverting the low-rank steering matrix based on the convolution relationship between the antenna pattern and target scattering. Because of the low-rank characteristics of the antenna steering matrix, traditional angular super-resolution methods suffer from manual parameter selection and high computational complexity; in particular, they achieve poor angular resolution at low signal-to-noise ratios. To address these problems, an iterative adaptive approach for angular super-resolution imaging of scanning RAR, namely IAA-Net, is proposed by combining the traditional Iterative Adaptive Approach (IAA) with a deep network framework. First, the angular super-resolution problem for RAR is transformed into an echo autocorrelation matrix inversion problem to mitigate the ill-posed condition of the inverse matrix. Second, a learnable repairing matrix is introduced into the IAA procedure to combine the IAA algorithm with the deep network framework. Finally, the echo autocorrelation matrix is updated via iterative learning to improve the angular resolution. Simulation and experimental results demonstrate that the proposed method avoids manual parameter selection, reduces computational complexity, and provides high angular resolution at low signal-to-noise ratios because of the learning ability of the deep network.
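For reference, the classical IAA iteration that IAA-Net builds on can be sketched as follows. The learnable repairing matrix and network training are not shown, and the uniform-array steering dictionary in the test is an assumption for illustration.

```python
import numpy as np

def iaa(y, A, iters=10):
    # Iterative Adaptive Approach: y is an (M,) snapshot, A an (M, K)
    # steering dictionary over the angle grid; returns the complex spectrum.
    M, K = A.shape
    s = (A.conj().T @ y) / M                       # matched-filter initialization
    for _ in range(iters):
        P = np.abs(s)**2
        R = (A * P) @ A.conj().T + 1e-9 * np.eye(M)    # covariance from spectrum
        RiA = np.linalg.inv(R) @ A
        num = (RiA.conj() * y[:, None]).sum(axis=0)      # a_k^H R^-1 y
        den = np.einsum('mk,mk->k', A.conj(), RiA).real  # a_k^H R^-1 a_k
        s = num / den
    return s
```

Each pass rebuilds the covariance from the current spectrum estimate and re-solves a per-angle weighted least-squares problem, which is the costly step the deep unrolling is meant to accelerate.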
Bistatic Synthetic Aperture Radar (BiSAR) must suppress ground background clutter when detecting and imaging ground moving targets. However, due to the spatial configuration of BiSAR, the clutter exhibits severe space-time nonstationarity, which deteriorates clutter suppression performance. Although Space-Time Adaptive Processing based on Sparse Recovery (SR-STAP) can mitigate the nonstationarity problem by reducing the number of required samples, the off-grid dictionary problem arises during processing, degrading the space-time spectrum estimation. Moreover, although most typical SR-STAP methods have clear mathematical formulations and interpretability, they suffer from improper parameter settings and complicated operation in complex and changeable scenes. To solve these problems, a complex-valued neural network based on the Alternating Direction Method of Multipliers (ADMM) is proposed for BiSAR space-time adaptive clutter suppression. First, a sparse recovery model of the continuous clutter space-time domain of BiSAR is constructed based on Atomic Norm Minimization (ANM) to overcome the off-grid problem associated with the traditional discrete dictionary model. Second, ADMM is used to rapidly and iteratively solve the BiSAR clutter spectral sparse recovery model. Third, according to the iterative and data flow diagrams, the hand-tuned hyperparameter iterative process is unrolled into ANM-ADMM-Net. Then, a normalized root-mean-square-error loss function is set up and the network is trained on the obtained dataset. Finally, the trained ANM-ADMM-Net is used to rapidly process BiSAR echo data, so that the space-time spectrum of BiSAR clutter is accurately estimated and efficiently suppressed. The effectiveness of this approach is validated through simulations and airborne BiSAR clutter suppression experiments.
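As a hedged illustration of the ADMM building block, the sketch below applies ADMM to a generic l1 sparse-recovery problem (LASSO) rather than the paper's ANM model; the dictionary, regularization weight, and penalty parameter are illustrative assumptions.

```python
import numpy as np

def admm_lasso(A, b, lam=0.01, rho=1.0, iters=300):
    # ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1 (a sparse-recovery stand-in).
    m, n = A.shape
    AtA = A.T @ A
    Atb = A.T @ b
    Minv = np.linalg.inv(AtA + rho * np.eye(n))    # cached x-update solve
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        x = Minv @ (Atb + rho * (z - u))           # quadratic x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # shrinkage
        u += x - z                                 # dual (multiplier) update
    return z
```

Unrolling a fixed number of such iterations, with the hand-set lam and rho replaced by learned parameters, is the pattern behind ADMM-Net-style architectures.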
To improve the accuracy of Direction Of Arrival (DOA) estimation in Multiple Input Multiple Output (MIMO) radar systems under unknown mutual coupling, we propose a mutual coupling calibration and DOA estimation algorithm based on Sparse Learning via Iterative Minimization (SLIM). The proposed algorithm utilizes the spatial sparsity of target signals and estimates the spatial pseudo-spectra and the mutual coupling matrices of MIMO arrays through cyclic optimization. Moreover, it is hyperparameter-free and guarantees convergence. Numerical examples demonstrate that for MIMO radar systems under unknown mutual coupling conditions, the proposed algorithm can accurately estimate the DOA of targets with small angle separations and relatively high Signal-to-Noise Ratios (SNRs), even with a limited number of samples. In addition, low DOA estimation errors are achieved for targets with large angle separations and small sample sizes, even under low-SNR conditions.
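A minimal single-snapshot form of the SLIM iteration can be sketched as follows. The mutual coupling estimation and the MIMO-specific virtual-array model are omitted, and the grid, initialization, and q = 1 sparsity choice are assumptions for illustration.

```python
import numpy as np

def slim(y, A, iters=15, q=1.0):
    # Sparse Learning via Iterative Minimization (basic single-snapshot form):
    # cyclically re-estimate the sparse spectrum s and the noise power eta,
    # so no user-tuned regularization hyperparameter is needed.
    M, K = A.shape
    s = (A.conj().T @ y) / M                       # matched-filter initialization
    eta = 1e-3 * np.linalg.norm(y)**2 / M          # initial noise-power estimate
    for _ in range(iters):
        p = np.abs(s) ** (2.0 - q)                 # sparsity-promoting weights
        R = (A * p) @ A.conj().T + eta * np.eye(M)
        w = np.linalg.solve(R, y)
        s = p * (A.conj().T @ w)
        eta = np.linalg.norm(y - A @ s)**2 / M     # data-driven noise update
    return s
```

The eta update is what makes the loop hyperparameter-free: the regularization level is re-derived from the residual at each cycle instead of being set by hand.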
Synthetic Aperture Radar (SAR) has the advantage of around-the-clock noncontact monitoring and is an important tool for closed-space security monitoring. However, when SAR is employed in complex closed spaces, it is susceptible to multipath effects, which produce numerous virtual images in the imagery and severely hinder interpretation. Existing methods require scene priors for multipath estimation or subaperture weighted fusion to suppress multipath; however, accurately distinguishing multipath virtual images from target images remains challenging. This paper proposes a novel multi-angle dual-layer deviation measurement method that effectively distinguishes multipath virtual images from targets. The proposed method uses a large viewing-angle difference to conduct multi-angle observation of the target scene, capitalizing on the fact that the position of a multipath virtual image varies with the observation angle whereas the actual target position remains constant; a dual-layer deviation measurement algorithm is then applied. Based on the sparsity of multipath in the multi-angle sequence, the algorithm twice calculates the deviation between the sequence amplitude values and their mean. It thereby accurately detects and removes sparse, unstable multipath components, while the remaining stable components are averaged, effectively suppressing multipath while retaining target information. Finally, simulations and processing of measured millimeter-wave radar data verify the effectiveness of the proposed method.
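The dual-layer deviation screening can be illustrated per pixel as below. The exact deviation statistic and threshold used in the paper are not specified here, so the mean-absolute-deviation test and the factor k are assumptions made for this sketch.

```python
import numpy as np

def dual_layer_deviation(seq, k=2.0):
    # Two-pass deviation screening of one pixel's multi-angle amplitude
    # sequence: flag samples deviating from the mean by more than k times
    # the mean absolute deviation, drop them, repeat once, average the rest.
    seq = np.asarray(seq, dtype=float)
    for _ in range(2):                             # "dual-layer": two passes
        dev = np.abs(seq - seq.mean())
        keep = dev <= k * dev.mean() + 1e-12
        if keep.all():
            break
        seq = seq[keep]
    return seq.mean()
```

A stable target contributes a nearly constant amplitude across angles and survives both passes, while a multipath ghost appears in only a few angles as an outlier spike and is removed before averaging.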
In the context of counter-reconnaissance against airborne interferometers, this study proposes a jamming method designed to disrupt the parameter measurement capabilities of interferometers by generating distributed signals based on an interrupted-sampling repeating technique. An emitter and a transmitting jammer are combined to form a distributed jamming system. The transmitting jammer samples the emitter signal and transmits the repeating signal to an interferometer. A quasi-synchronization constraint is established according to the change in the positional relation between the airborne interferometer and the jamming system. Additionally, a model for the superposition of distributed signals is provided. Then, the mathematical principle underlying distributed signal jamming is expounded according to the pulse spatial and temporal parameter measurement using the interferometer system. Moreover, the influence of various signal parameters on the jamming effect is analyzed to propose a principle for distributed signal design. Simulation and darkroom experiments show that the proposed method can effectively disrupt the accurate measurement of the pulse spatial domain and time domain parameters, such as azimuth-of-arrival, pulse width, and repetition interval.
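The underlying effect can be illustrated with a two-element interferometer: a quasi-synchronized repeated copy arriving from a different direction superposes with the direct signal and biases the phase-difference measurement, and hence the estimated azimuth-of-arrival. All signal parameters below are hypothetical.

```python
import numpy as np

def interferometer_aoa(x1, x2, d, lam):
    # Two-element interferometer: estimate azimuth-of-arrival (degrees) from
    # the cross-channel phase difference (element spacing d, wavelength lam).
    dphi = np.angle(np.vdot(x1, x2))               # phase of conj(x1) . x2
    return np.degrees(np.arcsin(lam * dphi / (2 * np.pi * d)))
```

With only the emitter present, the phase difference maps back to the true angle; adding a stronger coherent repeated copy from another direction pulls the estimate far off, which is the jamming mechanism the abstract describes.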
Spaceborne Synthetic Aperture Radar (SAR) systems are often subject to strong electromagnetic interference, resulting in degraded imaging quality. However, existing image-domain interference suppression methods are prone to image distortion and loss of texture detail, among other difficulties. To address these problems, this paper proposes a method for suppressing active suppression interference in spaceborne SAR images based on perceptual learning of regional feature refinement. First, an active suppression interference signal and image model is established in the spaceborne SAR image domain. Second, a high-precision interference recognition network based on regional feature perception is designed; it extracts the active suppression interference pattern features of the SAR image using an efficient channel attention mechanism, enabling effective recognition of the interference region. Third, a multivariate regional feature refinement interference suppression network is constructed based on joint learning of the SAR image and suppression interference features. The network slices the SAR image into multivariate regions and adopts multi-module collaborative processing of the suppression interference features in these regions to achieve refined suppression of active suppression interference in SAR images under complex conditions. Finally, a simulation dataset of SAR image active suppression interference is constructed, and measured Sentinel-1 data are used for experimental verification and analysis. The experimental results show that the proposed method can effectively recognize and suppress various typical active suppression interferences in spaceborne SAR images.
Due to height limitations, the traditional handheld or vehicle-mounted Through-the-Wall Radar (TWR) cannot provide the perspective imaging of internal targets in urban high-rise buildings. Unmanned Aerial Vehicle-TWR (UAV-TWR) offers flexibility, efficiency, convenience, and no height limitations, allowing for large-scale three-Dimensional (3D) penetration detection of urban high-rise buildings. While the multibaseline scanning mode is widely used in 3D tomographic Synthetic Aperture Radar (SAR) imaging to provide resolution in the altitude direction, it often suffers from the grating lobe problem owing to under-sampling in the altitude spatial domain. Therefore, this paper proposes a trajectory planning algorithm for UAV-through-the-wall 3D SAR imaging based on a genetic algorithm to address this issue. By nonuniformizing flight trajectories, the periodic radar echo energy superposition is weakened, thereby suppressing grating lobes to achieve better imaging quality. The proposed algorithm combines the inherent relationship between the flight distance and TWR imaging quality and establishes a cost function for UAV-TWR trajectory planning. We use the genetic algorithm to encode genes for three typical flight trajectory control points and optimize the population and individuals through gene hybridization and mutation. The optimal flight trajectory for each of the three flight modes is selected by minimizing the cost function. Compared with the traditional equidistant multibaseline flight mode, the imaging results from simulations and measured data show that the proposed algorithm significantly suppresses the grating lobe effect of targets. In addition, oblique UAV flight trajectories are significantly shortened, improving the efficiency of through-the-wall SAR imaging.
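The trajectory-optimization loop can be sketched with a minimal real-coded genetic algorithm. The toy quadratic cost below stands in for the paper's imaging-quality cost over flight control points and is purely an assumption; only the selection/crossover/mutation skeleton is the point.

```python
import numpy as np

def ga_minimize(cost, dim, pop=40, gens=120, pm=0.2, seed=0):
    # Minimal real-coded genetic algorithm on [0, 1]^dim: truncation
    # selection, arithmetic crossover ("gene hybridization"), Gaussian mutation.
    rng = np.random.default_rng(seed)
    P = rng.random((pop, dim))
    for _ in range(gens):
        f = np.array([cost(p) for p in P])
        elite = P[np.argsort(f)[:pop // 2]]        # keep the better half
        a = elite[rng.integers(0, len(elite), pop // 2)]
        b = elite[rng.integers(0, len(elite), pop // 2)]
        w = rng.random((pop // 2, 1))
        kids = w * a + (1 - w) * b                 # crossover of control points
        mut = rng.random(kids.shape) < pm
        kids = np.clip(kids + mut * rng.normal(0, 0.1, kids.shape), 0, 1)
        P = np.vstack([elite, kids])
    f = np.array([cost(p) for p in P])
    return P[np.argmin(f)]
```

In the paper's setting, each individual would encode trajectory control points and the cost would combine a grating-lobe (imaging quality) term with flight distance; here a known minimum is used so the result can be checked.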
In practical applications, the field of view and computation resources of an individual sensor are limited, and the development of multisensor networks provides more possibilities for solving challenging target tracking problems. Compared with multitarget tracking, group target tracking encounters more challenging data association and computation problems due to factors such as the proximity of targets within groups, coordinated motions, the large number of involved targets, and group splitting and merging; these problems are further complicated in multisensor fusion systems. For group target tracking under sensors with a limited field of view, we propose a scalable multisensor group target tracking method via belief propagation. Within the Bayesian framework, the method accounts for the uncertainty of the group structure, constructs the decomposition of the joint posterior probability density of the multisensor group targets and the corresponding factor graph, and efficiently solves the data association problem by running belief propagation on the devised factor graph. Furthermore, the method has excellent scalability and low computational complexity, scaling linearly with the numbers of sensors, preserved group partitions, and sensor measurements, and quadratically with the number of targets. Finally, simulation experiments compare the performance of different methods in terms of GOSPA and OSPA(2), verifying that the proposed method can seamlessly track grouped and ungrouped targets, fully utilize the complementary information among sensors, and improve tracking accuracy.
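The sum-product principle the method relies on can be shown on a deliberately tiny factor graph, where belief propagation on a tree is exact and can be checked against brute-force enumeration; the paper's factor graph for multisensor data association is of course far larger, but the message-passing mechanics are the same.

```python
import numpy as np

# Chain factor graph f1(x1) - f2(x1, x2) - f3(x2) over binary variables.
f1 = np.array([0.7, 0.3])
f2 = np.array([[0.9, 0.1],
               [0.2, 0.8]])
f3 = np.array([0.4, 0.6])

# Sum-product messages along the chain (a tree, so marginals are exact).
m_f2_to_x2 = f1 @ f2              # marginalize x1 out of f2, weighted by f1
m_f2_to_x1 = f2 @ f3              # marginalize x2 out of f2, weighted by f3

belief_x1 = f1 * m_f2_to_x1
belief_x1 /= belief_x1.sum()
belief_x2 = f3 * m_f2_to_x2
belief_x2 /= belief_x2.sum()
```

The appeal for tracking is exactly the complexity behavior quoted in the abstract: messages are local, so the cost grows with the number of graph edges rather than with the exponentially large joint association space.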
This paper addresses the problem of high-resolution imaging of shadowed multiple targets with limited labeled data by proposing a transfer-learning-based method for through-the-wall radar imaging. First, a generative adversarial sub-network is developed to facilitate the migration of labeled simulation data to measured data, overcoming the difficulty of generating labeled data. The method incorporates an attention mechanism, adaptive residual blocks, and a multi-scale discriminator to improve the quality of image migration, as well as a structural consistency loss function to minimize perceptual differences between images. Finally, the labeled data are used to train the through-the-wall radar target-imaging sub-network, achieving high-resolution imaging of multiple targets through walls. Experimental results show that the proposed method effectively reduces discrepancies between simulated and measured images and generates pseudo-measured images with labels. It systematically addresses issues such as sidelobe/grating-lobe ghost interference, target image defocusing, and multi-target mutual interference, significantly improving the multi-target imaging quality of the through-the-wall radar. The imaging accuracy achieved is 98.24%, 90.97%, and 55.17% for single-, double-, and triple-target scenarios, respectively. Compared with CycleGAN, the imaging accuracy for the corresponding scenarios is improved by 2.29%, 40.28%, and 15.51%, respectively.
Non-Line-Of-Sight (NLOS) 3D imaging radar is an emerging technology that utilizes multipath scattering echoes to detect hidden targets. However, this technology faces challenges such as the separation of multipath echoes, reduction of aperture occlusion, and phase errors of reflective surfaces, which hinder the high-precision imaging of hidden targets when using traditional Line-Of-Sight (LOS) radar imaging methods. To address these challenges, this paper proposes a precise imaging method for NLOS hidden targets based on Sparse Iterative Reconstruction (NSIR). In this method, we first establish a multipath signal model for NLOS millimeter-wave 3D imaging radar. By exploiting the characteristics of LOS/NLOS echoes, we extract the multipath echoes from hidden targets using a model-driven approach to realize the separation of LOS/NLOS echo signals. Second, we formulate a total variation multiconstraint optimization problem for reconstructing hidden targets, integrating multipath reflective surface phase errors. Using the split Bregman total-variation regularization operator and the phase error estimation criterion based on the minimum mean square error, we jointly solve the multiconstraint optimization problem. This approach facilitates precise imaging and contour reconstruction of NLOS targets. Finally, we construct a planar scanning 3D imaging radar experimental platform and conduct experimental verification of targets such as knives and iron racks in a corner NLOS scenario. Results validate the capability of NLOS millimeter-wave 3D imaging radar in detecting hidden targets and the effectiveness of the method proposed in this paper. Non-Line-Of-Sight (NLOS) 3D imaging radar is an emerging technology that utilizes multipath scattering echoes to detect hidden targets. 
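The split Bregman step mentioned above can be illustrated with a minimal one-dimensional sketch. This is not the paper's multiconstraint solver (which couples TV regularization with phase-error estimation); it only shows the alternating quadratic-update / shrinkage / Bregman-variable pattern for a toy TV denoising problem, with all parameter values chosen for illustration:

```python
import numpy as np

def split_bregman_tv_1d(f, lam=0.5, mu=2.0, n_iter=100):
    """Minimal split Bregman solver for 1D total-variation denoising:
    min_u 0.5*||u - f||^2 + lam*||D u||_1, with the splitting d = D u."""
    n = len(f)
    # Forward-difference operator D ((n-1) x n)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = np.eye(n) + mu * D.T @ D              # system matrix of the u-subproblem
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))        # quadratic u-update
        t = D @ u + b
        d = np.sign(t) * np.maximum(np.abs(t) - lam / mu, 0)  # soft shrinkage
        b = b + D @ u - d                                     # Bregman update
    return u

# Piecewise-constant signal with additive noise (synthetic toy data)
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(40), np.ones(40), 0.3 * np.ones(40)])
noisy = clean + 0.15 * rng.standard_normal(clean.size)
denoised = split_bregman_tv_1d(noisy)
```

The shrinkage threshold lam/mu and the damping mu trade off edge preservation against noise suppression; the real reconstruction problem adds phase-error constraints on top of this basic loop.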
The imaging of aerial targets using Inverse Synthetic Aperture Radar (ISAR) is affected by micro-Doppler effects resulting from localized micromotions, such as rotation and vibration. These effects introduce additional Doppler frequency modulation into the echo, leading to spectral broadening. Under ultrahigh-resolution conditions, these micromotions interfere with the focusing process of subject scatterers, resulting in poorly focused images with significantly reduced quality. Furthermore, micro-Doppler signals exhibit temporal variability and nonstationary characteristics, posing difficulties in their estimation and differentiation from the echo. To address these challenges, this paper proposes a nonparametric method based on Variational Mode Decomposition (VMD) and mode optimization to separate the echo of the subject from micro-Doppler components. This separation is achieved by utilizing differences in their respective time-frequency distributions. This methodology mitigates the effect of micro-Doppler signals on the echo and obtains ultrahigh-resolution imaging results of a drone. The VMD algorithm is introduced and subsequently extended to the complex domain. The method entails the decomposition of the ISAR echo along the azimuth direction into several mode functions distributed uniformly across the Doppler sampling bandwidth. Subsequently, image entropy indices are employed to optimize the decomposition parameters and select the imaging modes. This ensures the effective suppression of micro-Doppler signals and preservation of the subject echo. Compared to existing methods based on Empirical Mode Decomposition (EMD) and Local Mean Decomposition (LMD), the proposed method exhibits superior performance in suppressing image blurring caused by micro-Doppler effects while ensuring complete retention of fuselage details.
Furthermore, the effectiveness and advantages of the proposed method are validated through simulations and processing of ultrawideband microwave photonic data obtained from drone measurements.
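The image-entropy index used for mode selection admits a compact sketch. The entropy below is the standard normalized-intensity Shannon entropy (lower entropy indicates a better-focused image); the ranking helper and the toy "modes" are illustrative stand-ins, not the paper's actual VMD outputs:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized image intensity distribution;
    a well-focused ISAR image concentrates energy and has low entropy."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def rank_modes_by_entropy(modes):
    """Indices of candidate modes ordered from most to least focused;
    the best-focused modes would be retained for imaging."""
    return sorted(range(len(modes)), key=lambda i: image_entropy(modes[i]))

# Two toy "modes": a focused scatterer response and a spread micro-Doppler smear
focused = np.zeros((32, 32)); focused[16, 16] = 1.0
smeared = np.ones((32, 32)) / 32.0
order = rank_modes_by_entropy([smeared, focused])
```

In the same spirit, the decomposition parameters (e.g., the number of modes) could be swept and the setting with the lowest entropy of the reconstructed image retained.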
Ultra-wideband through-wall radar, leveraging its ability to penetrate walls, can be used together with Multiple-Input Multiple-Output (MIMO) technology to image hidden targets behind walls. This approach provides rich information for detecting and locating people within buildings. This paper introduces a closed-loop interferometric calibration method based on a multitransmitter multireceiver ultrawideband wall-penetrating radar system in the Frequency Modulated Continuous Wave (FMCW) regime. This method aims to correct scattering issues caused by internal system errors. The presence of walls causes the target imaging position to deviate from the real position. To address this, this paper derives a Three-Dimensional (3D) wall compensation algorithm that jointly considers channels and pixel points. Then, a fast refocusing algorithm is proposed based on the geometric properties of the imaging area. The first step involves removing the influence of walls on delay time and determining the presence of the target. Subsequently, in view of the geometric properties of the region, a spherical coordinate grid division adapted to the region shape is selected. Localized refocusing is then performed in the subregion. This avoids the issue of electromagnetic wave attenuation causing strong targets to mask weak ones in the imaging results. At the same time, the adoption of spherical coordinates for gridding and localized imaging greatly reduces the overall time consumed by the algorithm. Through simulation analysis and experimental verification, the proposed calibration method can effectively compensate for system errors. The fast refocusing algorithm can be used to realize multitarget 3D localization of the human body behind walls, with localization errors below 10 cm in each dimension and computational speeds improving by five times compared with those of existing algorithms.
In terms of target detection probability, the proposed algorithm consistently identifies weak targets that other algorithms may overlook.
The advancement in the miniaturization technology of Synthetic Aperture Radar (SAR) systems and SAR three-dimensional (3D) imaging has enabled the 3D imaging of urban areas through Unmanned Aerial Vehicle (UAV)-borne array Interferometric SAR (array-InSAR), offering significant utility in urban cartography, complex environment reconstruction, and related domains. Despite the challenges posed by multipath signals in urban scene imaging, these signals serve as a crucial asset for imaging hidden targets in Non-Line-of-Sight (NLOS) areas. Hence, this paper studies NLOS targets in UAV-borne array-InSAR 3D imaging at low altitudes and establishes a multipath model for 3D imaging at low altitudes. Then, a calculation method is proposed for obtaining the multipath reachable range in urban canyon areas based on building plane fitting. Finally, a relocation method for NLOS targets is presented. The simulation and real data experiments of UAV-borne array InSAR show that the proposed method can effectively obtain 3D images and relocate NLOS targets in urban canyon areas, with errors typically below 0.5 m, which realizes the acquisition of hidden NLOS region information.
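For single-bounce multipath, relocating an NLOS ghost reduces to reflecting its apparent position across the fitted building facade plane. The sketch below assumes that simple mirror model; fit_plane and the synthetic facade points are illustrative, not the paper's plane-fitting pipeline:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns a unit normal n and
    offset d with n . x = d, via SVD of the centered coordinates."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]                       # direction of least variance
    return n, float(n @ c)

def mirror_about_plane(p, n, d):
    """Reflect a point across the plane n . x = d; under a single-bounce
    multipath model the mirrored ghost maps back to the true NLOS position."""
    return p - 2.0 * (p @ n - d) * n

# Facade points lying (noisily) on the plane x = 5 (synthetic toy data)
rng = np.random.default_rng(1)
pts = np.column_stack([np.full(50, 5.0) + 0.01 * rng.standard_normal(50),
                       rng.uniform(0, 10, 50), rng.uniform(0, 10, 50)])
n, d = fit_plane(pts)
ghost = np.array([8.0, 2.0, 1.0])    # apparent (mirrored) target position
relocated = mirror_about_plane(ghost, n, d)
```

Reflection across the fitted plane is an involution, so mirroring the relocated point recovers the original ghost position, which is a convenient sanity check.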
Airborne radar receivers that utilize subarray processing face challenges owing to the complex space-time coupling distribution caused by grating-lobe clutter. This results in multiple performance notches in the main beam, which severely affects target detection performance. To address this issue, we analyze the characteristics of grating-lobe clutter distribution in subarray processing and propose an approach for space-time clutter suppression based on the design of a receiving subarray beam pattern. Our approach leverages an overlapping subarray scheme to form wide nulls, through beam pattern design, in the regions between subarrays where grating-lobe clutter is prevalent. This design facilitates grating-lobe clutter pre-filtering between subarrays. Furthermore, we develop a subarray-level space-time processor that avoids grating-lobe clutter coupling diffusion in the space-time two-dimensional plane by performing clutter pre-filtering within each subarray. This strategy enhances clutter suppression and moving-target-detection capabilities. Simulation results verify that the proposed method can remarkably improve the output Signal-to-Clutter-plus-Noise Ratio (SCNR) loss performance in grating-lobe clutter regions.
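One classical way to place nulls in a receiving pattern, which gives a flavor of the wide-null idea, is to project the quiescent weights onto the orthogonal complement of the null-direction steering vectors; a cluster of closely spaced nulls approximates a wide notch. This is a generic uniform-linear-array sketch, not the paper's overlapped-subarray beam pattern optimization:

```python
import numpy as np

def steering(n_elem, theta, d=0.5):
    """Uniform linear array steering vector (element spacing d in wavelengths)."""
    k = np.arange(n_elem)
    return np.exp(2j * np.pi * d * k * np.sin(theta))

def null_steered_weights(n_elem, look, null_angles, d=0.5):
    """Project the quiescent (steered) weights onto the orthogonal complement
    of the null-direction steering vectors, forcing exact pattern nulls."""
    C = np.column_stack([steering(n_elem, t, d) for t in null_angles])
    wq = steering(n_elem, look, d)
    P = np.eye(n_elem) - C @ np.linalg.solve(C.conj().T @ C, C.conj().T)
    return P @ wq

def response(w, theta, d=0.5):
    """Magnitude of the array response in direction theta."""
    return abs(w.conj() @ steering(len(w), theta, d))

n_elem = 32
look = 0.0
nulls = np.deg2rad([28.0, 30.0, 32.0])   # cluster of nulls -> widened notch
w = null_steered_weights(n_elem, look, nulls)
```

The response stays high at the look direction while the clustered constraints carve a notch around 30 degrees, the role played here by the grating-lobe clutter region.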
In ship detection through remote sensing images, optical images often provide rich details and texture information; however, the quality of such optical images can be affected by cloud and fog interference. In contrast, Synthetic Aperture Radar (SAR) provides all-weather and all-day imaging capabilities; however, SAR images are susceptible to interference from complex sea clutter. Cooperative ship detection combining the advantages of optical and SAR images can enhance detection performance. In this paper, by focusing on the slight shift of ships within a small neighborhood range between prior and later temporal images, we propose a method for cooperative ship detection based on neighborhood saliency in multisource heterogeneous remote sensing images, including optical and SAR data. Initially, a sea-land segmentation algorithm for optical and SAR images is applied to reduce interference from land regions. Next, single-source ship detection from optical and SAR images is performed using the RetinaNet and YOLOv5s models, respectively. Then, we introduce a multisource cooperative ship target detection strategy based on opening neighborhood windows around single-source detection results and performing secondary detection of neighborhood-salient ships. This strategy further leverages the complementary advantages of optical and SAR heterogeneous images, reducing missed ships and false alarms to improve overall detection performance. The performance of the proposed method has been validated using optical and SAR remote sensing data measured from Yantai, China, in 2022. Compared with existing ship detection methods, our method improves the detection accuracy AP50 by ≥1.9%, demonstrating its effectiveness and superiority.
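The neighborhood-window cooperation can be sketched with point detections: a detection confirmed by both sensors within a window is accepted, while single-sensor detections are routed to a secondary saliency check rather than discarded. The function name, window size, and toy coordinates below are illustrative simplifications of the strategy, not the paper's implementation:

```python
import numpy as np

def fuse_detections(opt_dets, sar_dets, win=50.0):
    """Cooperative fusion of detection centers (in pixels): keep detections
    confirmed by both sensors within a neighborhood window; flag detections
    seen by only one sensor for secondary (saliency) re-checking."""
    confirmed, recheck = [], []
    for p in opt_dets:
        d = [np.hypot(p[0] - q[0], p[1] - q[1]) for q in sar_dets]
        if d and min(d) <= win:
            confirmed.append(p)                     # seen by both sensors
        else:
            recheck.append(('optical_only', p))     # SAR may have missed it
    for q in sar_dets:
        d = [np.hypot(q[0] - p[0], q[1] - p[1]) for p in opt_dets]
        if not d or min(d) > win:
            recheck.append(('sar_only', q))         # optical may have missed it
    return confirmed, recheck

opt = [(100.0, 100.0), (400.0, 120.0)]
sar = [(110.0, 95.0), (800.0, 300.0)]
confirmed, recheck = fuse_detections(opt, sar, win=50.0)
```

In the paper's strategy the re-check step re-examines the neighborhood window in the other modality's image, which recovers targets one sensor missed while keeping false alarms down.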
In this study, aiming at fulfilling the requirement of polarization acquisition and utilization, a method for active deception jamming recognition based on a time-varying polarization-conversion metasurface is investigated. First, an anisotropic phase-modulated metasurface supporting 3-bit phase quantization in the 9.6–10.1 GHz frequency band is designed. By optimizing the periodical phase coding, the polarization state can be converted on demand. The polarization-conversion metasurface is then loaded onto a single-polarization radar antenna so that the antenna's polarization state changes along a specific trajectory. By extracting the difference in the polarization domain between the target and active deception jamming, the active deception jamming can be distinguished from the radar echo. The simulation results show that under the constraints of three different polarization trajectories, the active deception jamming and targets exhibit a significant clustering effect, and the identification effect is stable. Compared with jamming identification methods that rely on dual-polarization or full-polarization radar systems, the proposed method offers both low cost and high efficiency and has great application potential in radar anti-jamming.
The conventional terahertz radar suffers from limited operation range for long-distance, noncooperative target detection due to the low transmitter power and atmospheric attenuation effect, both of which pose a hindrance in meeting the requirements of warning detection applications. To improve the radar detection capability, this paper studies an ultrasensitive target detection method based on single-photon detectors to replace traditional radar receivers. The method is expected to considerably expand the operation range of terahertz radars. First, the statistical law of the number of echo photons of a terahertz single-photon radar system is analyzed, and the echo characteristics of the target are expounded from a microscopic perspective. Furthermore, a terahertz single-photon target detection model, incorporating the characteristics of a quantum capacitor detector, is established. In addition, the mathematical expression of the target detection performance is derived, and the performance is evaluated via simulations. Further, a target detection performance curve is obtained. Finally, a time-resolved terahertz photon-counting mechanism experiment is performed, wherein we realize high-precision ranging by counting echo pulses. This work can provide support for the research and development of ultrasensitive target detection technologies and single-photon radar systems in the terahertz band.
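Under the usual Poisson model for photon counts, a threshold detector's performance follows directly from the count statistics; the sketch below computes detection and false-alarm probabilities for assumed mean signal and background photon numbers. The paper's quantum-capacitor-detector model is more detailed than this generic calculation:

```python
import math

def poisson_sf(k_th, lam):
    """P(N >= k_th) for N ~ Poisson(lam), via the complementary CDF sum."""
    cdf = sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(k_th))
    return 1.0 - cdf

def detection_curves(n_signal, n_background, k_th):
    """Threshold detector on photon counts: declare a target when the count
    in a range gate reaches k_th. Returns (P_detect, P_false_alarm)."""
    p_fa = poisson_sf(k_th, n_background)              # background only
    p_d = poisson_sf(k_th, n_signal + n_background)    # target present
    return p_d, p_fa

# Illustrative operating point: 5 mean signal photons, 0.5 background photons
p_d, p_fa = detection_curves(n_signal=5.0, n_background=0.5, k_th=3)
```

Sweeping the mean signal photon number at a fixed threshold traces out a detection performance curve of the kind the abstract describes.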
Through-wall radar systems with a single transmitter and receiver have the advantages of portability, simplicity, and independent operation; however, they cannot accomplish two-dimensional (2D) localization and tracking of targets. This paper proposes distributed wireless networking for through-wall radar systems based on a portable single-transmitter, single-receiver radar. Moreover, a target joint positioning method is proposed in this study, which can balance system portability, low cost, and target 2D information estimation. First, a complementary Gray code transmission waveform is utilized to overcome the issue of mutual interference when multiple radars operate simultaneously in the same frequency band, and each radar node communicates with the processing center via wireless modules, forming a distributed wireless networking radar system. In addition, a data synchronization method combining behavioral cognition theory and template matching is proposed, which identifies identical motion states in data obtained from different radars, realizing slow-time synchronization among distributed radars and thereby eliminating the strict hardware requirements of conventional synchronization methods. Finally, a joint localization method based on the Levenberg-Marquardt algorithm is proposed, which can simultaneously estimate the positions of radar nodes and targets without requiring prior radar position information. Simulation and field experiments are performed, and the results reveal that the distributed wireless networking radar system developed in this study can obtain 2D target positions and track moving targets in real time. The estimation accuracy of the radar’s own position is less than 0.06 m, and the positioning accuracy of moving human targets is less than 0.62 m.
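The Levenberg-Marquardt step can be sketched for the simpler sub-problem of locating one target from range measurements to known node positions (the paper additionally estimates the node positions themselves). The hand-rolled damped Gauss-Newton loop below is illustrative, with toy node geometry and noise-free ranges:

```python
import numpy as np

def lm_localize(nodes, ranges, x0, n_iter=50, lam=1e-2):
    """Levenberg-Marquardt estimate of a 2D position from ranges to known
    radar nodes: minimizes sum_i (||x - node_i|| - r_i)^2."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        diff = x - nodes                          # (n, 2)
        dist = np.linalg.norm(diff, axis=1)
        r = dist - ranges                         # range residuals
        J = diff / dist[:, None]                  # Jacobian of the residuals
        H = J.T @ J + lam * np.eye(2)             # damped normal equations
        x_new = x - np.linalg.solve(H, J.T @ r)
        # simple damping schedule: accept the step only if the cost decreases
        cost_new = np.sum((np.linalg.norm(x_new - nodes, axis=1) - ranges) ** 2)
        if cost_new < np.sum(r ** 2):
            x, lam = x_new, lam * 0.5
        else:
            lam *= 10.0
    return x

nodes = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])
ranges = np.linalg.norm(nodes - target, axis=1)   # noise-free toy ranges
est = lm_localize(nodes, ranges, x0=[5.0, 5.0])
```

Joint node-and-target estimation stacks the unknown node coordinates into the same parameter vector and enlarges the Jacobian accordingly; the damping logic is unchanged.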
In practical settings, the efficacy of Space-Time Adaptive Processing (STAP) algorithms relies on acquiring sufficient Independent Identically Distributed (IID) samples. However, sparse recovery STAP methods encounter challenges such as model parameter dependence and high computational complexity. Furthermore, current deep learning STAP methods lack interpretability, posing significant hurdles for network debugging and practical application. In response to these challenges, this paper introduces an innovative method: a Multi-module Deep Convolutional Neural Network (MDCNN). This network blends data- and model-driven techniques to precisely estimate clutter covariance matrices, particularly in scenarios where training samples are limited. MDCNN is built from four key modules: mapping, data, priori, and hyperparameter modules. The front- and back-end mapping modules manage the pre- and post-processing of data, respectively. During each equivalent iteration, a group of data and priori modules collaborate. The core network is formed by multiple groups of these two modules, enabling multiple equivalent iterative optimizations. Further, the hyperparameter module adjusts the trainable parameters in equivalent iterations. These modules are developed with precise mathematical expressions and practical interpretations, remarkably improving the network’s interpretability. Performance evaluation using real data demonstrates that our proposed method slightly outperforms existing small-sample STAP methods in nonhomogeneous clutter environments while significantly reducing computational time.
With the successive launch of high-resolution Synthetic Aperture Radar (SAR) satellites, conducting all-weather, all-time, high-precision observation of island regions with variable weather conditions has become feasible. As a key preprocessing step in various remote sensing applications, orthorectification relies on high-precision control points to correct the geometric positioning errors of SAR images. However, obtaining artificial control points that meet SAR correction requirements in island areas is costly and risky. To address this challenge, this study first proposes a rapid registration algorithm for optical and SAR heterogeneous images, and then automatically extracts control points based on an optical reference base map, achieving orthorectification of SAR images in island regions. The proposed registration algorithm consists of two stages: constructing dense common features of the heterogeneous images, and performing pixel-by-pixel matching on the down-sampled features to avoid the issue of low repeatability of feature points in heterogeneous images. To reduce the matching complexity, a land-sea segmentation mask is introduced to limit the search range. Subsequently, local fine matching is applied to the preliminary matched points to reduce inaccuracies introduced by down-sampling. Meanwhile, uniformly sampled coastline points are introduced to enhance the uniformity of the matching results, and orthorectified images are generated through a piecewise linear transformation model, ensuring overall correction accuracy in sparse island areas. The algorithm performs excellently on high-resolution SAR images of multiple scenes in island regions, with an average positioning error of 3.2 m and a complete scene correction time of only 17.3 s; both values are superior to those of various existing advanced heterogeneous registration and correction algorithms, demonstrating the great potential of the proposed algorithm in engineering applications.
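A piecewise linear transformation model amounts to fitting a separate affine transform per local region from the control points inside it. A minimal sketch of that building block (function names are illustrative, not from the paper):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N,2) points to dst (N,2).
    Fitting one such transform per local region (e.g. per triangle of the
    control-point mesh) yields a piecewise linear model."""
    G = np.hstack([src, np.ones((src.shape[0], 1))])      # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(G, dst, rcond=None)      # (3, 2) parameter matrix
    return params

def apply_affine(params, pts):
    """Apply a fitted affine transform to new points."""
    G = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return G @ params
```

With enough well-distributed control points per region, each local fit absorbs the locally varying geometric distortion that a single global transform cannot.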
Doppler through-wall radar faces two challenges when locating targets concealed behind walls: (1) precisely determining the instantaneous frequency of the target within the frequency aliasing region and (2) reducing the impact of the wall on positioning by determining accurate wall parameters. To address these issues, this paper introduces a target localization algorithm that combines the Hough transform and a support vector regression-BP neural network. First, a multiview fusion model framework is proposed for through-wall target detection, which enables the auxiliary estimation of wall parameter information by acquiring target positions from different perspectives. Second, a high-precision extraction and estimation algorithm for the instantaneous frequency curve of the target is proposed by combining the differential evolution algorithm and Chebyshev interpolation polynomials. Finally, a target motion trajectory compensation algorithm based on the Back Propagation (BP) neural network is proposed using the estimated wall parameter information, which suppresses the distorting effect of obstacles on target localization results and achieves accurate localization of the target behind a wall. Experimental results indicate that, compared with the conventional short-time Fourier method, the developed algorithm can accurately extract target instantaneous frequency curves within the time-frequency aliasing region. Moreover, it successfully reduces the impact caused by walls, facilitating the precise localization of multiple targets behind walls, and the overall localization accuracy is improved by approximately 85%.
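The role of the Chebyshev polynomials can be illustrated with a minimal numpy sketch: fitting a low-order Chebyshev series to a noisy instantaneous-frequency (IF) estimate keeps the smooth curve and rejects high-frequency ripple. The signal parameters below are made up for illustration; the paper additionally uses differential evolution to extract the curve itself:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

t = np.linspace(-1.0, 1.0, 201)                   # slow time mapped to [-1, 1]
f_true = 100.0 + 30.0 * t + 5.0 * (2 * t**2 - 1)  # smooth IF (T0, T1, T2 terms), Hz
f_noisy = f_true + 0.5 * np.cos(40 * np.pi * t)   # high-frequency estimation ripple

coef = C.chebfit(t, f_noisy, deg=2)               # low-order least-squares Chebyshev fit
f_fit = C.chebval(t, coef)                        # smoothed IF curve
```

Because the ripple is nearly orthogonal to the low-order Chebyshev basis, the fitted curve tracks `f_true` closely even though the raw estimate does not.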
Special Topic Papers: Radar Multi-dimension, Multi-domain and Multi-feature Information Processing
Weak target signal processing is the cornerstone and prerequisite for radar to achieve excellent detection performance. In complex practical applications, owing to strong clutter interference, weak target signals, unclear image features, and difficult effective feature extraction, weak target detection and recognition have always been challenging in the field of radar processing. Conventional model-based processing methods do not accurately match the actual working background and target characteristics, leading to weak universality. Recently, deep learning has made significant progress in the field of intelligent radar information processing. By building deep neural networks, deep learning algorithms can automatically learn feature representations from a large amount of radar data, improving the performance of target detection and recognition. This article systematically reviews and summarizes recent research progress in the intelligent processing of weak radar targets in terms of signal processing, image processing, feature extraction, target classification, and target recognition. It discusses noise and clutter suppression, target signal enhancement, low- and high-resolution radar image and feature processing, feature extraction, and fusion. In response to the limited generalization ability, single feature expression, and insufficient interpretability of existing intelligent processing applications for weak targets, this article underscores future developments in small-sample object detection (based on transfer learning and reinforcement learning), multidimensional and multifeature fusion, network model interpretability, and joint knowledge- and data-driven processing.
Fine terrain classification is one of the main applications of Synthetic Aperture Radar (SAR). In the multiband fully polarized SAR operating mode, information on the different frequency bands and polarization response characteristics of a target can be obtained, which can improve target classification accuracy. However, existing datasets in China and abroad contain only low-resolution fully polarized classification data for individual bands, limited regions, and small samples. Thus, a multidimensional SAR dataset from Hainan is used to construct a multiband fully polarized fine classification dataset with ample sample size, diverse land cover categories, and high classification reliability. This dataset will promote the development of multiband fully polarized SAR classification applications, supported by the high-resolution aerial observation system application calibration and verification project. This paper provides an overview of the composition of the dataset and describes the information and production methods for the first batch of published data (MPOLSAR-1.0). Furthermore, this study presents preliminary classification experimental results based on polarization feature classification and classical machine learning classification methods, providing support for the sharing and application of the dataset.
Detection of small, slow-moving targets, such as Unmanned Aerial Vehicles (UAVs), poses considerable challenges to radar target detection and recognition technology. There is an urgent need to establish relevant datasets to support the development and application of techniques for detecting such targets. This paper presents a dataset for detecting low-speed, small-size targets using multiband Frequency Modulated Continuous Wave (FMCW) radar. The dataset comprises echo data from six UAV types collected with Ku-band and L-band FMCW radar and exhibits diverse temporal and frequency domain resolutions and measurement capabilities obtained by modulating radar cycles and bandwidth, yielding the LSS-FMCWR-1.0 dataset (LSS: Low, Slow, Small). To further enhance the capability for extracting micro-Doppler features from UAVs, this paper proposes a method for UAV micro-Doppler extraction and parameter estimation based on the local maximum synchroextracting transform. Building on the Short-Time Fourier Transform (STFT), this method extracts values at the maximum energy point in the time-frequency domain to retain useful signals and refine the time-frequency energy representation. Validation and analysis using the LSS-FMCWR-1.0 dataset demonstrate that this approach reduces entropy by an average of 5.3 dB and decreases estimation errors in rotor blade length by 27.7% compared with traditional time-frequency methods. Moreover, the proposed method provides a foundation for subsequent target recognition efforts because it balances high time-frequency resolution and parameter estimation capability.
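The "keep the maximum-energy time-frequency point per frame" step can be sketched with a plain STFT. This is a simplified stand-in for the local maximum synchroextracting transform (window and hop lengths below are arbitrary choices, not the paper's):

```python
import numpy as np

def stft_peak_frequency(x, fs, win_len=64, hop=16):
    """For each STFT frame, keep only the frequency of the maximum-energy bin,
    discarding the rest of the spectrum -- a crude sketch of extracting values
    at the maximum energy point in the time-frequency domain."""
    win = np.hanning(win_len)
    n_frames = (len(x) - win_len) // hop + 1
    peaks = np.empty(n_frames)
    for m in range(n_frames):
        seg = x[m * hop : m * hop + win_len] * win
        spec = np.abs(np.fft.rfft(seg))
        peaks[m] = np.argmax(spec) * fs / win_len   # bin index -> Hz
    return peaks
```

For a rotor echo, the resulting per-frame peak track follows the dominant micro-Doppler component, from which blade parameters can then be estimated.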
Considering the problem of radar target detection in a sea clutter environment, this paper proposes a deep learning-based marine target detector. The proposed detector increases the differences between targets and clutter by fusing multiple complementary features extracted from different data sources, thereby improving the detection performance for marine targets. Specifically, the detector uses two feature extraction branches to extract multiple levels of fast-time and range features from the range profiles and the Range-Doppler (RD) spectrum, respectively. Subsequently, a local-global feature extraction structure is developed to extract the sequence relations along the slow-time or Doppler dimension of the features. Furthermore, a feature fusion block based on adaptive convolution weight learning is proposed to efficiently fuse the slow-fast-time and RD features. Finally, the detection results are obtained through upsampling and nonlinear mapping of the fused multilevel features. Experiments on two public radar databases validated the detection performance of the proposed detector.
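The adaptive-weight fusion idea reduces to a simple pattern: per-branch scores (in the paper these come from learned convolutions; here they are just given numbers) are softmax-normalized into weights and used to combine the branch feature maps:

```python
import numpy as np

def adaptive_fuse(feats, scores):
    """Softmax the per-branch scores into fusion weights, then return the
    weighted sum of the branch feature maps (toy stand-in for the paper's
    adaptive convolution weight-learning block)."""
    w = np.exp(scores - np.max(scores))   # numerically stable softmax
    w = w / w.sum()
    fused = sum(wi * f for wi, f in zip(w, feats))
    return fused, w
```

The point of learning the scores is that branches carrying more discriminative information for the current input receive larger fusion weights.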
In this study, a collaborative radar selection and transmit resource allocation strategy is proposed for multitarget tracking applications in multiple distributed phased array radar networks with imperfect detection performance. The closed-form expression for the Bayesian Cramér-Rao Lower Bound (BCRLB) under imperfect detection is derived and adopted as the criterion function to characterize the precision of target state estimates. The key concept of the developed strategy is to collaboratively adjust the radar node selection, transmitted power, and effective bandwidth allocation of the radar networks to minimize total transmit power consumption in an imperfect detection environment, under constraints on the predetermined tracking accuracy requirements of multiple targets and several illumination resource budgets, thereby improving radio frequency stealth performance. The formulated problem is a mixed-integer, nonlinear, nonconvex optimization model. By incorporating the barrier function approach and the cyclic minimization technique, an efficient four-step solution methodology is proposed to solve the resulting optimization problem. Numerical simulation examples demonstrate that, compared with other existing algorithms, the proposed strategy can effectively reduce the total power consumption of the radar networks by at least 32.3% and improve radio frequency stealth performance while meeting the given multitarget tracking accuracy requirements.
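The flavor of accuracy-constrained power minimization can be seen in a drastically simplified case. Assume, purely for illustration (the paper's BCRLB couples node selection, power, and bandwidth), that each target's estimation variance behaves as var_i = 1/(a_i p_i); the minimum power meeting var_i ≤ η then has a closed form:

```python
import numpy as np

def min_power_allocation(a, eta):
    """Closed-form minimum per-target power for the toy accuracy model
    var_i = 1 / (a_i * p_i) subject to var_i <= eta: p_i = 1 / (a_i * eta)."""
    return 1.0 / (np.asarray(a, dtype=float) * eta)

a = np.array([2.0, 0.5, 1.0])          # illustrative per-target coefficients
p = min_power_allocation(a, eta=0.1)   # tightest allocation meeting the bound
```

In the real problem, the coupling across nodes and the binary selection variables destroy this separability, which is why the paper resorts to barrier functions and cyclic minimization.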
Distributed radar with moving platforms can enhance the survivability and detection performance of a system; however, it is difficult to equip these platforms with sufficient communication bandwidth to transmit high-precision observed data, posing a great challenge to high-performance detection in a distributed radar system. Because low-bit quantization can effectively reduce the computation cost and resource consumption of distributed radar systems, in this paper we investigate the high-performance detection of multiple moving targets using a distributed radar system on moving platforms that adopts a low-bit quantization strategy. First, according to system resources, the multipulse observed data of each node may be quantized with a low-bit quantizer, and the likelihood function with respect to the quantizer and the states of multiple targets is derived. Subsequently, based on the convexity of the likelihood function with respect to the unknown reflection coefficients, a joint estimation algorithm is designed for the Doppler shifts and reflection coefficients. Then, a generalized likelihood ratio test based multitarget detector is designed for detecting multiple targets with unknown states in the surveillance area, and the constant false alarm rate detection threshold is derived. Finally, the optimal low-bit quantizer is designed by deriving the asymptotic detection performance of the system, which effectively improves detection performance and ensures robustness. Simulation experiments analyze the detection and estimation performance of the proposed algorithm, demonstrating its effectiveness for weak signals and showing that low-bit quantized data can achieve detection and estimation performance close to that of high-precision (16-bit quantized) data while consuming only about 20% of the communication bandwidth. Moreover, according to the simulated results, the two-bit quantization strategy may offer a good trade-off between the detection performance and resource consumption of the distributed radar system.
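A low-bit quantizer of the kind discussed above can be written in a couple of lines; with three thresholds this is the 2-bit (four-level) case. The threshold values below are illustrative placeholders, not the paper's optimized ones:

```python
import numpy as np

def lowbit_quantize(x, thresholds):
    """Map each sample to an integer code equal to the number of (sorted)
    thresholds lying at or below it; len(thresholds) == 3 gives the 2-bit,
    four-level scheme."""
    return np.searchsorted(np.asarray(thresholds), x)
```

The downstream detector then works on these integer codes, which is where the bandwidth saving comes from: each sample costs 2 bits instead of 16.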
Passive radars based on FM radio signals have low detection probability, high false alarm rates, and poor accuracy, presenting considerable challenges to target tracking in radar networks. Moreover, a high false alarm rate increases the computational burden and places high demands on the real-time performance of networking algorithms. In addition, low detection probability and poor azimuth accuracy result in a lack of redundant information, making measurement association and track initiation challenging. To address these issues, this paper proposes a track initiation algorithm for FM-based passive radar networks built on the concepts of elementary hypothesis points and elementary hypothesis tracks. First, we construct possible low-dimensional association hypotheses and solve for their corresponding elementary hypothesis points. Subsequently, we associate elementary hypothesis points from different frames to form multiple possible elementary hypothesis tracks. Finally, by combining multiframe radar network data for hypothesis track judgment, we confirm the elementary hypothesis tracks corresponding to real targets and eliminate the false elementary hypothesis tracks caused by incorrect associations. Results reveal that the proposed algorithm has lower computational complexity and faster track initiation than existing algorithms. Moreover, we verified the effectiveness of the proposed algorithm using simulation and experimental results.
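The multi-frame confirmation step has the same shape as a classic M-of-N track test. A greatly simplified sketch (plain nearest-neighbor gating on point measurements, not the paper's elementary-hypothesis machinery; all names are illustrative):

```python
import numpy as np

def confirm_tracks(frames, gate, m_required):
    """Start a candidate track from each frame-0 point, extend it by
    nearest-neighbour gating through later frames, and confirm it only if
    it is matched in at least m_required frames (an M-of-N style test)."""
    tracks = []
    for p0 in frames[0]:
        hits, pos = 1, np.asarray(p0)
        for frame in frames[1:]:
            if len(frame) == 0:
                continue
            d = np.linalg.norm(np.asarray(frame) - pos, axis=1)
            j = int(np.argmin(d))
            if d[j] <= gate:                # measurement falls inside the gate
                pos = np.asarray(frame[j])
                hits += 1
        if hits >= m_required:
            tracks.append(pos)              # confirmed: last associated position
    return tracks
```

Isolated false alarms fail the M-of-N test because they rarely find gated continuations across several frames, which is the same intuition behind eliminating false elementary hypothesis tracks.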
The modern radar confrontation situation is complex and changeable, and inter-system combat has become a basic feature; overall system performance affects the initiative on the battlefield and even the final victory or defeat. By optimizing the beam resources of the radar and jammers in a system, the overall performance can be improved and an effective low-intercept detection effect can be obtained in the spatial and temporal domains. However, joint optimization of cooperative beamforming in the spatial and temporal domains is a nonconvex problem with complex multiparameter coupling. In this paper, an optimization model is established for a multitasking dynamic scene in the spatial and temporal domains, with radar detection performance as the optimization goal and the interference performance and energy limitations of the jammers as constraints. To solve the model, a joint design method for space-time cooperative beamforming based on iterative optimization is proposed; that is, the radar transmitting, radar receiving, and multiple jammer transmitting beamforming vectors are alternately optimized. To solve the Quadratically Constrained Quadratic Program (QCQP) with indefinite matrices arising in multijammer collaborative optimization, this paper builds on the Feasible Point Pursuit Successive Convex Approximation (FPP-SCA) algorithm: on the basis of the SCA algorithm, feasibility is ensured through reasonable relaxation by introducing slack variables and a penalty term, which resolves the difficulty of obtaining a feasible solution when the problem contains indefinite matrices. Simulation results show that, under a given jammer energy constraint, the proposed method enables multiple jammers to interfere with each enemy platform in the spatial and temporal domains to cover our radar detection, while ensuring high-performance, interference-free radar detection of the target. Compared with traditional algorithms, the collaborative interference based on the FPP-SCA algorithm exhibits better performance in the dynamic scene.
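The SCA step at the heart of FPP-SCA can be shown on a one-dimensional toy problem: minimize x² subject to the nonconvex constraint 1 − x² ≤ 0 (i.e., |x| ≥ 1). Linearizing the concave term −x² at the current iterate x_k turns the constraint into x ≥ (1 + x_k²)/(2x_k) for x_k > 0, and each convex subproblem then has a closed-form solution. FPP-SCA additionally adds slack variables and a penalty so that every subproblem stays feasible; that part is omitted in this sketch:

```python
def sca_toy(x0, n_iter=30):
    """SCA for: minimize x^2 subject to 1 - x^2 <= 0, starting from x0 > 1.
    The linearized constraint is x >= c with c = (1 + x_k^2) / (2 x_k) >= 1,
    and the convex subproblem min x^2 s.t. x >= c has the solution x = c."""
    x = x0
    for _ in range(n_iter):
        x = (1.0 + x * x) / (2.0 * x)   # closed-form argmin of the subproblem
    return x
```

The iterates decrease monotonically toward the boundary x = 1 of the original feasible set, illustrating how successive convex surrogates drive the solution of a nonconvex QCQP.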
Radar Signal Processing
To address the challenges in tracking complex maneuvering extended targets, an effective maneuvering extended target tracking method was proposed for irregularly shaped star-convex using a Transformer network. Initially, the alpha-shape algorithm was used to model the variations in the star-convex shape. In addition, a recursive approach was proposed to estimate the irregular shape of an extended target by detailed derivation in the Bayesian filtering framework. This approach accurately estimated the shape of a static star convex extended target. Moreover, through the structural redesign of the target state transition matrix and the real-time estimation of the maneuvering extended target’s state transition matrix using a transformer network, the accurate tracking of complex maneuvering targets was achieved. Furthermore, the real-time tracking of star convex maneuvering extended targets was achieved by fusing the estimated shape contours with motion states. This study focused on constructing certain complex maneuvering extended target tracking scenarios to assess the performance of the proposed method and the comprehensive estimation capabilities of the algorithm considering both shapes and motion states using multiple performance indicators. To address the challenges in tracking complex maneuvering extended targets, an effective maneuvering extended target tracking method was proposed for irregularly shaped star-convex using a Transformer network. Initially, the alpha-shape algorithm was used to model the variations in the star-convex shape. In addition, a recursive approach was proposed to estimate the irregular shape of an extended target by detailed derivation in the Bayesian filtering framework. This approach accurately estimated the shape of a static star convex extended target. 
Moreover, through the structural redesign of the target state transition matrix and its real-time estimation using a Transformer network, accurate tracking of complex maneuvering targets was achieved. Furthermore, real-time tracking of star-convex maneuvering extended targets was achieved by fusing the estimated shape contours with motion states. This study constructed several complex maneuvering extended target tracking scenarios to assess the performance of the proposed method, evaluating the algorithm's joint estimation of shape and motion state using multiple performance indicators.
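The abstract's core idea of feeding a per-step, learned state transition matrix into a recursive Bayesian filter can be illustrated with a plain Kalman predict/update pair. This is a generic sketch, not the paper's method: the function names and the constant-velocity example matrix are illustrative assumptions, and in the paper's setting the matrix `F` passed to the predict step would be produced online by the Transformer network rather than fixed.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction with a (possibly time-varying) transition matrix F.
    In the paper's setting, F would be re-estimated at each step."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x_pred, P_pred, z, H, R):
    """Standard Bayesian measurement update."""
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```

With a constant-velocity `F = [[1, dt], [0, 1]]` this reduces to an ordinary Kalman filter; swapping in a freshly estimated `F` each step is the only change the learned-transition approach requires of the filter itself.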
Scanning radar angular super-resolution technology exploits the convolution relationship between the target scene and the antenna pattern, using deconvolution to obtain angular resolution beyond that of the real beam. Most current angular super-resolution methods assume an ideal, distortion-free antenna pattern and do not consider pattern changes caused in practice by factors such as the radar radome, antenna measurement errors, and non-ideal platform motion.
In practice, an antenna pattern often has unknown errors, which can reduce target resolution and even generate false targets. To address this problem, this paper proposes an angular super-resolution imaging method for airborne radar with unknown antenna errors. First, based on the Total Least Squares (TLS) criterion, the effect of the pattern error matrix is considered and the corresponding objective function is derived. Second, the objective function is solved with an iteratively reweighted optimization method using an alternating iteration strategy. Finally, an adaptive parameter update method is introduced for selecting the algorithm's hyperparameters. Simulation and experimental results demonstrate that the proposed method achieves super-resolution reconstruction even in the presence of unknown antenna errors, improving the robustness of the super-resolution algorithm.
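The iteratively reweighted optimization mentioned above can be sketched, in its most generic form, as iteratively reweighted least squares (IRLS) applied to the beam-convolution model `y ≈ Hx`. This is a textbook stand-in, not the paper's TLS-based variant (which additionally models errors in `H` itself); the parameter names and the sparsity-promoting l_p reweighting are illustrative assumptions.

```python
import numpy as np

def irls_deconv(H, y, n_iter=30, p=1.0, eps=1e-6, lam=1e-2):
    """Generic IRLS deconvolution for y ~= H x with an l_p penalty.
    H: convolution matrix built from the antenna pattern (assumed known here,
    unlike in the paper, where H carries unknown errors)."""
    x = np.linalg.lstsq(H, y, rcond=None)[0]          # initial estimate
    for _ in range(n_iter):
        # Reweighting: small-magnitude entries get large penalties,
        # which sharpens the deconvolved profile toward sparse peaks.
        w = (np.abs(x) ** 2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(H.T @ H + lam * np.diag(w), H.T @ y)
    return x
```

On a point-like scene smeared by a Gaussian beam, the reweighting drives near-zero samples toward zero while leaving the true peak, which is the basic mechanism the TLS-robust version also relies on.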
Forward-looking imaging of airborne scanning radar is widely used in situation awareness, autonomous navigation, and terrain following. When the radar is affected by unintentional, temporally sporadic electromagnetic interference or abnormal equipment performance, the echo signal contains outliers. Existing super-resolution methods can suppress outliers and improve azimuth resolution, but they do not consider real-time computation. In this study, we propose an airborne scanning radar super-resolution method that achieves fast forward-looking imaging when echo data are abnormal. First, we propose using the Student-t distribution to model the noise. Then, the expectation-maximization method is used to estimate the parameters.
Inspired by the truncated singular value decomposition method, we introduce a truncated unitary matrix into the estimation formula for the target scattering coefficient. Finally, the size of the inverse matrix and the computational complexity of parameter estimation are reduced through matrix transformation. Simulation results show that the proposed method improves the azimuth resolution of forward-looking imaging in a shorter time while suppressing outliers in the echo data.
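Why Student-t noise modeling suppresses outliers can be seen in a minimal EM sketch for a linear model `y = Hx + e`: the E-step assigns each sample a latent weight that shrinks automatically for large residuals, and the M-step is then a weighted least-squares fit. This is a generic illustration under an assumed fixed degrees-of-freedom `nu`, not the paper's fast truncated-SVD implementation.

```python
import numpy as np

def student_t_em(H, y, nu=3.0, n_iter=20):
    """EM for y = H x + e with Student-t noise: outlier samples are
    down-weighted automatically. Generic sketch, not the paper's fast version."""
    x = np.linalg.lstsq(H, y, rcond=None)[0]          # initialize with OLS
    sigma2 = np.var(y - H @ x) + 1e-12
    for _ in range(n_iter):
        r = y - H @ x
        w = (nu + 1.0) / (nu + r**2 / sigma2)         # E-step: latent weights
        W = np.diag(w)
        x = np.linalg.solve(H.T @ W @ H, H.T @ W @ y) # M-step: weighted LS
        sigma2 = np.sum(w * (y - H @ x) ** 2) / len(y) + 1e-12
    return x
```

A single corrupted sample barely moves this estimate, whereas it visibly biases an ordinary least-squares fit; the paper's contribution is making each M-step cheap enough for real-time use.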
SAR Image Interpretation
Inverse Synthetic Aperture Radar (ISAR) images of spacecraft are composed of discrete scatterers that exhibit weak texture, high dynamics, and discontinuity. These characteristics result in sparse point clouds when traditional algorithms are used for the Three-Dimensional (3D) reconstruction of spacecraft from ISAR images.
Furthermore, comprehensively describing the complete shape of a target with a point cloud is difficult, which hampers the accurate extraction of the target's structural and pose parameters. To address this problem, considering that space targets usually have specific modular structures, this paper proposes a method for abstracting parametric structural primitives from space target ISAR images to represent their 3D structures. First, an energy accumulation algorithm is used to obtain a sparse point cloud of the target from the ISAR images. Subsequently, the point cloud is fitted with parameterized primitives. Finally, the primitives are projected onto the ISAR imaging plane and optimized by maximizing their similarity with the target image, yielding the optimal 3D primitive representation of the target. Compared with traditional point cloud 3D reconstruction, this method provides a more complete description of the target's 3D structure. Meanwhile, the primitive parameters obtained with this method represent the attitude and structure of the target and can directly support subsequent tasks such as target recognition and analysis. Simulation experiments demonstrate that this method effectively achieves 3D abstraction of space targets from sequential ISAR images.
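The fit-then-project pipeline described above can be made concrete with a deliberately simple toy: fit an axis-aligned cuboid primitive (center plus half-extents) to a point cloud, then project 3D points onto an imaging plane spanned by two basis vectors. Both functions are illustrative assumptions; the paper's primitives are richer parameterized shapes and the projection basis would come from the ISAR imaging geometry.

```python
import numpy as np

def fit_box_primitive(points):
    """Fit a crude axis-aligned cuboid (center, half-extents) to a 3D point
    cloud -- a toy stand-in for the paper's parameterized primitives."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return (lo + hi) / 2.0, (hi - lo) / 2.0

def project_to_plane(points, u, v):
    """Project Nx3 points onto an imaging plane spanned by orthonormal
    basis vectors u and v (e.g., range and cross-range directions)."""
    B = np.stack([u, v], axis=1)   # 3x2 basis matrix
    return points @ B              # Nx2 plane coordinates
```

In the full method, the projected primitive silhouette would then be scored against the ISAR image and the primitive parameters adjusted to maximize that similarity.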
Radar Jamming Technique
Metasurfaces are two-dimensional artificial structures with numerous subwavelength elements arranged periodically or aperiodically. They have demonstrated exceptional capabilities in electromagnetic wave polarization manipulation, opening new avenues for controlling electromagnetic waves. Metasurfaces with electrically controlled reconfigurable polarization manipulation have garnered widespread research interest. These metasurfaces can dynamically adjust the polarization state of electromagnetic waves through real-time modification of their structure or material properties via electrical signals. This article provides a comprehensive overview of the development of electrically controlled, reconfigurable polarization-manipulating metasurfaces and examines in detail the technological advances of metasurfaces with different transmission characteristics in the microwave region.
Furthermore, it discusses and anticipates the future development of this technology.
The field of Synthetic Aperture Radar Automatic Target Recognition (SAR-ATR) lacks effective black-box attack algorithms. Therefore, this research proposes a transfer-based black-box attack algorithm that builds on the Momentum Iterative Fast Gradient Sign Method (MI-FGSM). First, random speckle noise transformation is performed according to the characteristics of SAR images to alleviate model overfitting to speckle noise and improve the generalization of the algorithm.
Second, an AdaBelief-Nesterov optimizer is designed to rapidly find the optimal gradient descent direction, improving the attack effectiveness of the algorithm through rapid convergence of the model gradient. Finally, a quasi-hyperbolic momentum operator is introduced to obtain a stable gradient descent direction, preventing the gradient from falling into a local optimum during rapid convergence and further enhancing the black-box attack success rate of the adversarial examples. Simulation experiments show that, compared with existing adversarial attack algorithms, the proposed algorithm improves the ensemble-model black-box attack success rate against mainstream SAR-ATR deep neural networks by 3%~55% on the MSTAR dataset and by 6.0%~57.5% on the FUSAR-Ship dataset, and the generated adversarial examples are highly concealable.
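The MI-FGSM baseline that this attack builds on is compact enough to sketch directly: accumulate an L1-normalized gradient into a momentum term, step along its sign, and stay inside an L-infinity ball around the input. The sketch below is the standard MI-FGSM update, not the paper's enhanced algorithm (no speckle transform, AdaBelief-Nesterov, or quasi-hyperbolic momentum); `grad_fn` is an assumed interface exposing the loss gradient with respect to the input.

```python
import numpy as np

def mi_fgsm(x, y, grad_fn, eps=0.1, n_iter=10, mu=1.0):
    """Momentum Iterative FGSM.
    grad_fn(x, y) -> gradient of the classification loss w.r.t. input x."""
    alpha = eps / n_iter                     # per-step L_inf budget
    g = np.zeros_like(x)                     # momentum accumulator
    x_adv = x.copy()
    for _ in range(n_iter):
        grad = grad_fn(x_adv, y)
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)  # L1-normalized momentum
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)            # project to L_inf ball
    return x_adv
```

The paper's modifications all act on the momentum/step computation inside this loop, which is why they inherit the same L-infinity perturbation constraint and transfer-attack setting.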


  • EI
  • Scopus
  • DOAJ
  • JST
  • CSCD
  • CSTPCD
  • CNKI
  • Chinese Core Journals (中文核心期刊)