2024 Vol. 13, No. 4

Special Topic Papers: Urban Building Penetration Detection and Information Perception
Due to height limitations, traditional handheld or vehicle-mounted Through-the-Wall Radar (TWR) cannot provide perspective imaging of targets inside urban high-rise buildings. Unmanned Aerial Vehicle-mounted TWR (UAV-TWR) offers flexibility, efficiency, and convenience without height limitations, allowing large-scale three-dimensional (3D) penetration detection of urban high-rise buildings. While the multibaseline scanning mode is widely used in 3D tomographic Synthetic Aperture Radar (SAR) imaging to provide resolution in the altitude direction, it often suffers from grating lobes owing to under-sampling in the altitude spatial domain. Therefore, this paper proposes a trajectory planning algorithm for UAV through-the-wall 3D SAR imaging based on a genetic algorithm to address this issue. By nonuniformizing the flight trajectories, the periodic superposition of radar echo energy is weakened, thereby suppressing grating lobes and improving imaging quality. The proposed algorithm exploits the inherent relationship between flight distance and TWR imaging quality to establish a cost function for UAV-TWR trajectory planning. The genetic algorithm encodes the control points of three typical flight trajectories as genes and optimizes the population and individuals through crossover and mutation; the optimal flight trajectory for each of the three flight modes is selected by minimizing the cost function. Imaging results from simulations and measured data show that, compared with the traditional equidistant multibaseline flight mode, the proposed algorithm significantly suppresses the grating lobes of targets. In addition, the oblique UAV flight trajectories are significantly shortened, improving the efficiency of through-the-wall SAR imaging.
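As a rough illustration of the optimization loop described above, the sketch below evolves a set of nonuniform altitude baselines with a genetic algorithm. The cost function is a stand-in (peak side/grating-lobe level of the altitude array factor plus a flight-distance penalty); the paper's actual cost function, control-point encoding, and three flight modes are not reproduced, and all parameter values are illustrative assumptions.

```python
# Minimal genetic-algorithm sketch for nonuniform baseline selection.
# The cost below is an assumed surrogate, not the paper's cost function.
import numpy as np

rng = np.random.default_rng(0)
WAVELEN = 0.1                      # assumed radar wavelength (m)
N_PASSES, SPAN = 16, 4.0           # number of passes, altitude aperture (m)

def cost(heights):
    """Peak side/grating lobe of the altitude array factor plus a path term."""
    u = np.linspace(-1, 1, 512)    # sin(angle) grid
    af = np.abs(np.exp(2j * np.pi / WAVELEN * np.outer(heights, u)).sum(0))
    af /= af.max()
    mainlobe = np.abs(u) < WAVELEN / (2 * SPAN)      # crude mainlobe mask
    psl = af[~mainlobe].max()                        # peak side/grating lobe
    path = np.abs(np.diff(np.sort(heights))).sum()   # flight-distance proxy
    return psl + 0.01 * path

def ga(pop=60, gens=200, pmut=0.2):
    P = rng.uniform(0, SPAN, (pop, N_PASSES))        # initial population
    for _ in range(gens):
        f = np.array([cost(ind) for ind in P])
        P = P[np.argsort(f)][: pop // 2]             # selection
        kids = []
        while len(kids) < pop - len(P):
            a, b = P[rng.integers(len(P), size=2)]
            m = rng.random(N_PASSES) < 0.5           # uniform crossover
            child = np.where(m, a, b)
            if rng.random() < pmut:                  # Gaussian mutation
                child = child + rng.normal(0, 0.05, N_PASSES)
            kids.append(np.clip(child, 0, SPAN))
        P = np.vstack([P, kids])
    f = np.array([cost(ind) for ind in P])
    return P[f.argmin()], f.min()

best, best_cost = ga()
print("baselines:", np.sort(best).round(3), "cost:", round(best_cost, 3))
```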
Through-wall radar systems with a single transmitter and a single receiver have the advantages of portability, simplicity, and independent operation; however, they cannot accomplish two-dimensional (2D) localization and tracking of targets. This paper proposes a distributed wireless networking through-wall radar system built from portable single-transmitter single-receiver radars, together with a joint target positioning method, which balances system portability, low cost, and 2D target information estimation. First, a complementary Golay code transmission waveform is utilized to overcome mutual interference when multiple radars operate simultaneously in the same frequency band, and each radar node communicates with the processing center via wireless modules, forming a distributed wireless networking radar system. In addition, a data synchronization method combining behavioral cognition theory and template matching is proposed, which identifies identical motion states in data obtained from different radars, realizing slow-time synchronization among distributed radars and thereby eliminating the strict hardware requirements of conventional synchronization methods. Finally, a joint localization method based on the Levenberg-Marquardt algorithm is proposed, which simultaneously estimates the positions of radar nodes and targets without requiring prior radar position information. Simulation and field experiments reveal that the developed system can obtain 2D target positions and track moving targets in real time: the estimation error of each radar's own position is less than 0.06 m, and the positioning error for moving human targets is less than 0.62 m.
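The following is a hedged sketch of the joint radar/target localization idea using Levenberg-Marquardt (via scipy's least_squares): several radars each measure a range to one moving target over time, and the radar positions and target track are solved for simultaneously. The gauge-fixing choice (radar 0 at the origin, radar 1 on the x-axis), the geometry, and the noise level are assumptions for demonstration, not the paper's setup.

```python
# Joint radar/target localization with Levenberg-Marquardt (illustrative).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
K, T = 3, 40
radars_true = np.array([[0.0, 0.0], [6.0, 0.0], [3.0, 5.0]])
track_true = np.c_[np.linspace(1, 5, T), np.linspace(4, 1, T)]
ranges = np.linalg.norm(radars_true[:, None] - track_true[None], axis=2)
ranges += rng.normal(0, 0.02, ranges.shape)          # 2 cm range noise

def unpack(x):
    # free parameters: radar1 x-coordinate, radar2 (x, y), target track (T, 2)
    radars = np.array([[0.0, 0.0], [x[0], 0.0], [x[1], x[2]]])
    return radars, x[3:].reshape(T, 2)

def residuals(x):
    radars, track = unpack(x)
    return (np.linalg.norm(radars[:, None] - track[None], axis=2) - ranges).ravel()

x0 = np.concatenate([[5.0, 2.0, 4.0], np.tile([3.0, 3.0], T)])  # rough guess
sol = least_squares(residuals, x0, method="lm")      # Levenberg-Marquardt
radars_est, _ = unpack(sol.x)
print("max radar position error:",
      np.abs(radars_est - radars_true).max().round(3), "m")
```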
Synthetic Aperture Radar (SAR) has the advantage of around-the-clock noncontact monitoring and is an important tool for security monitoring of closed spaces. However, when SAR is employed in complex closed spaces, it is susceptible to multipath effects, which produce numerous virtual (ghost) images and hamper interpretation. Existing methods either require scene priors for multipath estimation or rely on subaperture weighted fusion to suppress multipath; accurately distinguishing multipath virtual images from target images remains challenging. This paper proposes a novel multi-angle dual-layer deviation measurement method that effectively distinguishes multipath virtual images from targets. The method observes the target scene from multiple angles with large viewing-angle differences, capitalizing on the fact that the position of a multipath virtual image varies with the observation angle, whereas the actual target position remains constant; a dual-layer deviation measurement algorithm is then applied. Exploiting the sparsity of multipath in the multi-angle sequence, the algorithm twice computes the deviation of each sample's amplitude from the sequence mean, accurately detecting and removing the sparse, unstable multipath components, while the remaining stable components are averaged. This effectively suppresses multipath while retaining target information. Finally, simulations and processing of measured millimeter-wave radar data verify the effectiveness of the proposed method.
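A minimal sketch of a two-pass ("dual-layer") deviation screen over a multi-angle image stack is given below, assuming the property the paper exploits: a true target is stable across viewing angles while a multipath ghost appears in only a few of them. The function name, thresholds, and the sigma-rule screening are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def dual_layer_fusion(stack, k1=1.5, k2=2.0):
    """stack: (n_angles, H, W) amplitude images -> fused (H, W) image."""
    a = np.asarray(stack, dtype=float)
    # pass 1: flag samples that deviate strongly from the per-pixel mean
    dev1 = np.abs(a - a.mean(0))
    keep = dev1 <= k1 * a.std(0) + 1e-12
    # pass 2: recompute mean/std over kept samples only, screen again
    masked = np.where(keep, a, np.nan)
    mu2, sd2 = np.nanmean(masked, 0), np.nanstd(masked, 0)
    keep &= np.abs(a - mu2) <= k2 * sd2 + 1e-12
    # average only the stable (non-multipath) components
    return np.nanmean(np.where(keep, a, np.nan), 0)

# toy check: a ghost present in 2 of 8 angles is removed, the target kept
stack = np.ones((8, 4, 4)); stack[:2, 1, 1] += 5.0
print(dual_layer_fusion(stack)[1, 1])   # -> 1.0, ghost suppressed
```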
Advances in the miniaturization of Synthetic Aperture Radar (SAR) systems and in SAR three-dimensional (3D) imaging have enabled 3D imaging of urban areas with Unmanned Aerial Vehicle (UAV)-borne array Interferometric SAR (array-InSAR), offering significant utility in urban cartography, complex environment reconstruction, and related domains. Although multipath signals pose challenges for urban scene imaging, they are also a crucial asset for imaging hidden targets in Non-Line-of-Sight (NLOS) areas. Hence, this paper studies NLOS targets in low-altitude UAV-borne array-InSAR 3D imaging and establishes a multipath model for 3D imaging at low altitudes. A method is then proposed for computing the multipath reachable range in urban canyon areas based on building plane fitting. Finally, a relocation method for NLOS targets is presented. Simulations and real UAV-borne array-InSAR data experiments show that the proposed method can effectively obtain 3D images and relocate NLOS targets in urban canyon areas, with errors typically below 0.5 m, thereby enabling the acquisition of information from hidden NLOS regions.
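The core geometric step, relocating a single-bounce ghost by reflecting it back across the fitted building facade, can be sketched compactly. The plane fit via SVD and the toy facade points below are illustrative assumptions; the paper's full multipath reachable-range computation is not reproduced.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares fit of n.x + d = 0 to (N, 3) points; returns (n, d)."""
    c = pts.mean(0)
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]                      # smallest singular vector = plane normal
    return n, -n @ c

def mirror(p, n, d):
    """Reflect point p across the plane n.x + d = 0."""
    return p - 2 * (n @ p + d) * n

facade = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 3], [0, 1, 3.0]])  # x = 0 wall
n, d = fit_plane(facade)
ghost = np.array([-2.0, 0.5, 1.5])          # ghost imaged behind the facade
print(mirror(ghost, n, d))                  # -> [2.0, 0.5, 1.5], true position
```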
Non-Line-Of-Sight (NLOS) 3D imaging radar is an emerging technology that utilizes multipath scattering echoes to detect hidden targets. However, it faces challenges such as separating multipath echoes, mitigating aperture occlusion, and handling phase errors of reflective surfaces, which hinder high-precision imaging of hidden targets with traditional Line-Of-Sight (LOS) radar imaging methods. To address these challenges, this paper proposes a precise imaging method for NLOS hidden targets based on Sparse Iterative Reconstruction (NSIR). We first establish a multipath signal model for NLOS millimeter-wave 3D imaging radar. Exploiting the characteristics of LOS/NLOS echoes, we extract the multipath echoes of hidden targets with a model-driven approach, realizing the separation of LOS and NLOS echo signals. Second, we formulate a total variation multiconstraint optimization problem for reconstructing hidden targets that incorporates the phase errors of the multipath reflective surfaces. Using the split Bregman Total Variation (TV) regularization operator and a phase error estimation criterion based on the minimum mean square error, we jointly solve the multiconstraint optimization problem, enabling precise imaging and contour reconstruction of NLOS targets. Finally, we construct a planar-scanning 3D imaging radar experimental platform and verify the method on targets such as knives and iron racks in a corner NLOS scenario. The results validate the capability of NLOS millimeter-wave 3D imaging radar to detect hidden targets and the effectiveness of the proposed method.
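To make the regularization machinery concrete, here is a minimal split Bregman solver for anisotropic TV denoising, the building block the paper's joint multiconstraint problem extends (the phase-error estimation is omitted). Periodic boundaries, the FFT-based u-update, and the weights mu and lam are assumptions of this sketch.

```python
import numpy as np

def dx(u):  return np.roll(u, -1, 1) - u      # forward x-difference
def dy(u):  return np.roll(u, -1, 0) - u
def dxT(v): return np.roll(v, 1, 1) - v       # adjoint operators
def dyT(v): return np.roll(v, 1, 0) - v
def shrink(x, t): return np.sign(x) * np.maximum(np.abs(x) - t, 0)

def tv_split_bregman(f, mu=20.0, lam=10.0, iters=50):
    H, W = f.shape
    wy = 2 - 2 * np.cos(2 * np.pi * np.arange(H) / H)
    wx = 2 - 2 * np.cos(2 * np.pi * np.arange(W) / W)
    K = wy[:, None] + wx[None, :]              # spectrum of D^T D (periodic)
    u = f.copy()
    d_x = d_y = b_x = b_y = np.zeros_like(f)
    Ff = np.fft.fft2(f)
    for _ in range(iters):
        # u-update: solve (mu + lam D^T D) u = mu f + lam D^T (d - b) via FFT
        rhs = dxT(d_x - b_x) + dyT(d_y - b_y)
        u = np.real(np.fft.ifft2((mu * Ff + lam * np.fft.fft2(rhs))
                                 / (mu + lam * K)))
        d_x = shrink(dx(u) + b_x, 1 / lam)     # shrinkage step
        d_y = shrink(dy(u) + b_y, 1 / lam)
        b_x = b_x + dx(u) - d_x                # Bregman variable updates
        b_y = b_y + dy(u) - d_y
    return u

img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
noisy = img + 0.2 * np.random.default_rng(2).normal(size=img.shape)
print(np.abs(noisy - img).mean(), np.abs(tv_split_bregman(noisy) - img).mean())
```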
This paper addresses high-resolution imaging of shadowed multiple targets with limited labeled data by proposing a transfer-learning-based method for through-the-wall radar imaging. First, a generative adversarial sub-network is developed to migrate labeled simulation data to the measured-data domain, overcoming the difficulty of generating labeled data. The network incorporates an attention mechanism, adaptive residual blocks, and a multi-scale discriminator to improve the quality of image migration, as well as a structural consistency loss function to minimize perceptual differences between images. Finally, the labeled data are used to train the through-the-wall radar target-imaging sub-network, achieving high-resolution imaging of multiple targets through walls. Experimental results show that the proposed method effectively reduces discrepancies between simulated and measured images and generates labeled pseudo-measured images. It systematically addresses issues such as sidelobe/grating-lobe ghost interference, target image defocusing, and mutual interference among multiple targets, significantly improving the multi-target imaging quality of the through-the-wall radar. The imaging accuracy achieved is 98.24%, 90.97%, and 55.17% for single-, double-, and triple-target scenarios, respectively. Compared with CycleGAN, the imaging accuracy for the corresponding scenarios is improved by 2.29%, 40.28%, and 15.51%.
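The paper does not spell out its structural consistency loss here, so the following is a hedged stand-in: an L1 penalty on the difference between image gradient structures of the source and translated images, a common choice for such terms. The function names and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_maps(x):
    """Horizontal/vertical first differences of a (B, C, H, W) batch."""
    return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

def structural_consistency_loss(src, translated):
    gx_s, gy_s = gradient_maps(src)
    gx_t, gy_t = gradient_maps(translated)
    return F.l1_loss(gx_t, gx_s) + F.l1_loss(gy_t, gy_s)

src = torch.rand(2, 1, 64, 64)            # simulated radar images (toy)
fake = torch.rand(2, 1, 64, 64)           # generator output (toy)
print(structural_consistency_loss(src, fake).item())
```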
Ultra-wideband through-wall radar, leveraging its ability to penetrate walls, can be combined with Multiple-Input Multiple-Output (MIMO) technology to image hidden targets behind walls, providing rich information for detecting and locating people within buildings. This paper introduces a closed-loop interferometric calibration method for a multitransmitter multireceiver ultra-wideband through-wall radar system operating in the Frequency Modulated Continuous Wave (FMCW) regime, which corrects the distortions caused by internal system errors. Because walls cause the imaged target position to deviate from the real position, this paper derives a three-dimensional (3D) wall compensation algorithm that operates jointly over channels and pixels. A fast refocusing algorithm is then proposed based on the geometric properties of the imaging area: first, the influence of the wall on the delay time is removed and the presence of a target is determined; subsequently, a spherical-coordinate grid adapted to the shape of the region is selected, and localized refocusing is performed within each subregion. This avoids the problem of electromagnetic-wave attenuation causing strong targets to mask weak ones in the imaging results, while spherical-coordinate gridding and localized imaging greatly reduce the overall computation time. Simulation analysis and experimental verification show that the proposed calibration method effectively compensates for system errors, and that the fast refocusing algorithm achieves multitarget 3D localization of human bodies behind walls, with localization accuracy better than 10 cm in each dimension and computation speed improved fivefold compared with existing algorithms. In terms of target detection probability, the proposed algorithm consistently identifies weak targets that other algorithms may overlook.
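A schematic back-projection over a spherical sub-grid with a simple wall delay correction is sketched below for a single transmit/receive pair. The straight-ray wall model (extra delay from the wall's relative permittivity), the geometry values, and the single-channel simplification are assumptions for illustration, not the paper's exact compensation or MIMO summation.

```python
import numpy as np

C = 3e8
EPS_R, D_WALL = 4.0, 0.2            # assumed wall permittivity and thickness

def wall_delay(dist):
    """One-way delay: free-space path, plus slow-down inside the wall slab."""
    return (dist - D_WALL) / C + D_WALL * np.sqrt(EPS_R) / C

def backproject(echo, t_axis, tx, rx, r_grid, az_grid, el_grid):
    """Sample the echo over a spherical (range, azimuth, elevation) sub-grid."""
    r, az, el = np.meshgrid(r_grid, az_grid, el_grid, indexing="ij")
    pts = np.stack([r * np.cos(el) * np.cos(az),
                    r * np.cos(el) * np.sin(az),
                    r * np.sin(el)], axis=-1)
    tau = (wall_delay(np.linalg.norm(pts - tx, axis=-1))
           + wall_delay(np.linalg.norm(pts - rx, axis=-1)))
    idx = np.clip(np.searchsorted(t_axis, tau), 0, len(t_axis) - 1)
    return echo[idx]                # image over the (r, az, el) sub-grid

t_axis = np.arange(0, 60e-9, 0.1e-9)
echo = np.zeros_like(t_axis, dtype=complex); echo[300] = 1.0   # toy echo
img = backproject(echo, t_axis, np.zeros(3), np.zeros(3),
                  np.linspace(3, 6, 32), np.linspace(-0.4, 0.4, 32),
                  np.linspace(-0.2, 0.2, 16))
print(img.shape)                    # (32, 32, 16) sub-grid image
```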
Doppler through-wall radar faces two challenges when locating targets concealed behind walls: (1) precisely determining the instantaneous frequency of the target within the frequency-aliasing region and (2) reducing the impact of the wall on positioning by determining accurate wall parameters. To address these issues, this paper introduces a target localization algorithm that combines the Hough transform with a support vector regression-BP neural network. First, a multiview fusion framework is proposed for through-wall target detection, which enables auxiliary estimation of wall parameters by acquiring target positions from different perspectives. Second, a high-precision extraction and estimation algorithm for the target's instantaneous frequency curve is proposed by combining the differential evolution algorithm with Chebyshev interpolation polynomials. Finally, a target motion trajectory compensation algorithm based on the Back Propagation (BP) neural network is proposed using the estimated wall parameters, which suppresses the distorting effect of obstacles on target localization and achieves accurate localization of targets behind walls. Experimental results indicate that, compared with the conventional short-time Fourier method, the developed algorithm accurately extracts target instantaneous frequency curves within the time-frequency aliasing region. Moreover, it successfully reduces the impact of walls, facilitating precise localization of multiple targets behind walls, with overall localization accuracy improved by approximately 85%.
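A sketch of the instantaneous-frequency estimation step in the spirit described above: the IF curve is parameterized by a low-order Chebyshev polynomial and the coefficients are found with differential evolution so that the curve follows the high-energy ridge of the spectrogram. The test signal, polynomial order, and search bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
sig = np.cos(2 * np.pi * (100 * t + 40 * t ** 2))   # true IF: 100 + 80 t Hz
f_axis, t_axis, S = spectrogram(sig, fs, nperseg=128, noverlap=120)

# map spectrogram times to [-1, 1], the natural Chebyshev domain
tn = 2 * (t_axis - t_axis[0]) / (t_axis[-1] - t_axis[0]) - 1

def ridge_energy(coef):
    """Negative spectrogram energy accumulated along the candidate IF curve."""
    f_curve = np.polynomial.chebyshev.chebval(tn, coef)
    rows = np.clip(np.searchsorted(f_axis, f_curve), 0, len(f_axis) - 1)
    return -S[rows, np.arange(len(t_axis))].sum()

res = differential_evolution(ridge_energy, bounds=[(0, 300)] * 3, seed=3)
print("Chebyshev IF coefficients:", res.x.round(1))  # ~ [140, 40, 0]
```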
Synthetic Aperture Radar
The imaging of aerial targets using Inverse Synthetic Aperture Radar (ISAR) is affected by micro-Doppler effects arising from localized micromotions such as rotation and vibration. These effects introduce additional Doppler frequency modulation into the echo, leading to spectral broadening. Under ultrahigh-resolution conditions, these micromotions interfere with the focusing of the main-body scatterers, yielding poorly focused images of significantly reduced quality. Furthermore, micro-Doppler signals are time-varying and nonstationary, making them difficult to estimate and separate from the echo. To address these challenges, this paper proposes a nonparametric method based on Variational Mode Decomposition (VMD) and mode optimization that separates the main-body echo from micro-Doppler components by exploiting differences in their time-frequency distributions, thereby mitigating the effect of micro-Doppler signals and producing ultrahigh-resolution imaging results of a drone. The VMD algorithm is introduced and extended to the complex domain: the ISAR echo is decomposed along the azimuth direction into several mode functions distributed uniformly across the Doppler sampling bandwidth, and image entropy indices are then employed to optimize the decomposition parameters and select the imaging modes. This ensures effective suppression of micro-Doppler signals while preserving the main-body echo. Compared with existing methods based on Empirical Mode Decomposition (EMD) and Local Mean Decomposition (LMD), the proposed method exhibits superior performance in suppressing image blurring caused by micro-Doppler effects while fully retaining fuselage details. The effectiveness and advantages of the method are validated through simulations and the processing of ultrawideband microwave photonic data obtained from drone measurements.
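The mode-selection criterion uses image entropy as a focus measure (lower entropy indicating a better-focused ISAR image). The sketch below shows that criterion with a greedy mode-accumulation loop; the greedy search, the toy imaging function, and the function names are assumed simplifications, and the complex-domain VMD itself is omitted.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalised image intensity."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def select_modes(modes, imaging_fn):
    """Greedily keep modes whose accumulated image has the lowest entropy."""
    chosen, best = [], np.inf
    for m in sorted(modes, key=lambda m: image_entropy(imaging_fn(m))):
        e = image_entropy(imaging_fn(sum(chosen + [m])))
        if e < best:
            chosen.append(m); best = e
    return sum(chosen), best

rng = np.random.default_rng(4)
modes = [rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
         for _ in range(4)]                      # stand-ins for VMD modes
img_fn = lambda sig: np.fft.fft(sig, axis=0)     # toy azimuth imaging
fused, entropy = select_modes(modes, img_fn)
print(fused.shape, round(entropy, 2))
```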
With the successive launches of high-resolution Synthetic Aperture Radar (SAR) satellites, all-weather, all-time, high-precision observation of island regions with variable weather conditions has become feasible. As a key preprocessing step in various remote sensing applications, orthorectification relies on high-precision control points to correct the geometric positioning errors of SAR images. However, obtaining artificial control points that meet SAR correction requirements in island areas is costly and risky. To address this challenge, this study first proposes a rapid registration algorithm for optical and SAR heterogeneous images and then automatically extracts control points from an optical reference base map, achieving orthorectification of SAR images in island regions. The registration algorithm consists of two stages: constructing dense common features of the heterogeneous images, and performing pixel-by-pixel matching on the down-sampled features to avoid the low repeatability of feature points across heterogeneous images. To reduce matching complexity, a land-sea segmentation mask is introduced to limit the search range. Local fine matching is subsequently applied to the preliminary matched points to reduce the inaccuracies introduced by down-sampling. Meanwhile, uniformly sampled coastline points are introduced to improve the spatial uniformity of the matching results, and orthorectified images are generated through a piecewise linear transformation model, ensuring overall correction accuracy in sparse island areas. The algorithm performs excellently on high-resolution SAR images of multiple island scenes, with an average positioning error of 3.2 m and a full-scene correction time of only 17.3 s, both superior to those of various existing advanced heterogeneous registration and correction algorithms, demonstrating great potential for engineering applications.
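The final correction step, in which matched control points drive a piecewise linear (piecewise affine, Delaunay-triangulated) warp of the SAR image onto the reference geometry, can be sketched with scikit-image. The control points and their displacements below are toy values, not extracted matches.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

sar = np.random.default_rng(5).random((200, 200))   # toy SAR image
ref_pts = np.array([[0, 0], [0, 199], [199, 0], [199, 199], [100, 100.0]])
sar_pts = ref_pts + np.array([3.0, -2.0])           # toy control-point matches

tform = PiecewiseAffineTransform()
tform.estimate(ref_pts, sar_pts)    # maps reference coords -> SAR coords
corrected = warp(sar, tform)        # resample SAR onto the reference grid
print(corrected.shape)
```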
In ship detection from remote sensing images, optical images often provide rich detail and texture information, but their quality can be degraded by cloud and fog; in contrast, Synthetic Aperture Radar (SAR) provides all-weather, all-day imaging but is susceptible to interference from complex sea clutter. Cooperative ship detection that combines the advantages of optical and SAR images can therefore enhance detection performance. Focusing on the slight shift of ships within a small neighborhood between earlier and later temporal images, this paper proposes a cooperative ship detection method based on neighborhood saliency in multisource heterogeneous remote sensing images comprising optical and SAR data. Initially, a sea-land segmentation algorithm for optical and SAR images is applied to reduce interference from land regions. Next, single-source ship detection is performed on the optical and SAR images using the RetinaNet and YOLOv5s models, respectively. A multisource cooperative detection strategy is then introduced that opens neighborhood windows around the single-source detections and performs secondary detection of salient ships within these neighborhoods. This strategy further leverages the complementary advantages of the optical and SAR heterogeneous images, reducing missed detections and false alarms and improving overall detection performance. The method has been validated on optical and SAR remote sensing data measured at Yantai, China, in 2022. Compared with existing ship detection methods, it improves the AP50 detection accuracy by at least 1.9%, demonstrating its effectiveness and superiority.
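The cooperative strategy can be outlined schematically: detections from one modality open a slightly enlarged neighborhood window in the other modality, where a secondary detector looks for salient ships the first pass may have missed. The detector callbacks, the roi keyword, and the window margin are assumptions of this sketch; cross-modality non-maximum suppression is omitted.

```python
import numpy as np

def expand(box, margin, shape):
    """Grow (x0, y0, x1, y1) by `margin` pixels, clipped to the image."""
    x0, y0, x1, y1 = box
    return (max(x0 - margin, 0), max(y0 - margin, 0),
            min(x1 + margin, shape[1]), min(y1 + margin, shape[0]))

def cooperative_detect(opt_img, sar_img, det_opt, det_sar, margin=16):
    boxes_opt = list(det_opt(opt_img))      # first-pass detections
    boxes_sar = list(det_sar(sar_img))
    fused = boxes_opt + boxes_sar
    # secondary detection inside neighborhoods opened by the other modality
    for b in boxes_opt:
        fused += list(det_sar(sar_img, roi=expand(b, margin, sar_img.shape)))
    for b in boxes_sar:
        fused += list(det_opt(opt_img, roi=expand(b, margin, opt_img.shape)))
    return fused

det = lambda img, roi=None: []              # stub detectors for a dry run
print(cooperative_detect(np.zeros((512, 512)), np.zeros((512, 512)), det, det))
```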
Radar Signal Processing
Conventional terahertz radars suffer from a limited operating range in long-distance, noncooperative target detection owing to low transmitter power and atmospheric attenuation, which hinders meeting the requirements of warning detection applications. To improve radar detection capability, this paper studies an ultrasensitive target detection method based on single-photon detectors as replacements for traditional radar receivers, which is expected to considerably extend the operating range of terahertz radars. First, the statistics of the echo photon number of a terahertz single-photon radar system are analyzed, and the echo characteristics of the target are expounded from a microscopic perspective. A terahertz single-photon target detection model incorporating the characteristics of a quantum capacitance detector is then established, the mathematical expression of the target detection performance is derived, and the performance is evaluated via simulations, yielding a target detection performance curve. Finally, a time-resolved terahertz photon-counting experiment is performed, in which high-precision ranging is realized by counting echo pulses. This work supports the research and development of ultrasensitive target detection technologies and single-photon radar systems in the terahertz band.
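A minimal numerical illustration of photon-counting detection statistics of the kind analyzed above: echo photon numbers in a range gate are modeled as Poisson draws, and detection/false-alarm probabilities follow from thresholding the count. The mean photon numbers and threshold are illustrative values, not the paper's system parameters.

```python
from scipy.stats import poisson

n_signal, n_dark = 2.5, 0.3   # mean echo and dark/background photons per gate
threshold = 2                 # declare a detection when counts >= threshold

p_fa = poisson.sf(threshold - 1, n_dark)             # P(N >= k | noise only)
p_d = poisson.sf(threshold - 1, n_signal + n_dark)   # P(N >= k | target)
print(f"Pfa = {p_fa:.3e}, Pd = {p_d:.3f}")
```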
In practical settings, the efficacy of Space-Time Adaptive Processing (STAP) algorithms relies on acquiring sufficient Independent and Identically Distributed (IID) samples. However, sparse-recovery STAP methods encounter challenges such as model parameter dependence and high computational complexity, and current deep learning STAP methods lack interpretability, posing significant hurdles for network debugging and practical application. In response, this paper introduces a Multi-module Deep Convolutional Neural Network (MDCNN) that blends data-driven and model-driven techniques to precisely estimate clutter covariance matrices, particularly when training samples are limited. The MDCNN is built from four kinds of modules: mapping, data, prior, and hyperparameter modules. The front-end and back-end mapping modules handle the pre- and post-processing of data, respectively. In each equivalent iteration, a data module and a prior module operate as a pair, and the core network is formed by multiple such pairs, enabling multiple equivalent iterative optimizations; the hyperparameter module adjusts the trainable parameters across equivalent iterations. These modules have precise mathematical expressions and practical interpretations, markedly improving the network's interpretability. Performance evaluation on real data demonstrates that the proposed method slightly outperforms existing small-sample STAP methods in nonhomogeneous clutter environments while significantly reducing computation time.
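The unrolled structure can be sketched in PyTorch: each "equivalent iteration" pairs a data-fidelity step toward the sample covariance with a learned prior module, and a hyperparameter module supplies per-iteration step sizes. All layer sizes, the two-channel real/imaginary representation, and the specific fidelity step are assumptions; the paper's exact module equations are not reproduced.

```python
import torch
import torch.nn as nn

class PriorModule(nn.Module):
    """Learned residual refinement of the current covariance estimate."""
    def __init__(self, ch=2):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, ch, 3, padding=1))
    def forward(self, R):
        return R + self.net(R)

class MDCNNSketch(nn.Module):
    def __init__(self, iters=5):
        super().__init__()
        self.priors = nn.ModuleList([PriorModule() for _ in range(iters)])
        self.steps = nn.Parameter(torch.full((iters,), 0.1))  # hyperparameters
    def forward(self, R0, R_sample):
        R = R0
        for prior, step in zip(self.priors, self.steps):
            R = R - step * (R - R_sample)   # data module: fidelity step
            R = prior(R)                    # prior module
        return R

net = MDCNNSketch()
R = net(torch.zeros(1, 2, 32, 32), torch.rand(1, 2, 32, 32))
print(R.shape)
```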
Application of Information Metamaterial Radar
In this study, to meet the requirement of polarization acquisition and utilization, a method for active deception jamming recognition based on a time-varying polarization-conversion metasurface is investigated. First, an anisotropic phase-modulated metasurface supporting 3-bit phase quantization in the 9.6–10.1 GHz band is designed; by optimizing the periodic phase coding, the polarization state can be converted on demand. The polarization-conversion metasurface is then loaded onto a single-polarization radar antenna so that the antenna's polarization state changes along a specific trajectory. By extracting the difference between the target and active deception jamming in the polarization domain, the active deception jamming can be distinguished from the radar echo. Simulation results show that, under the constraints of three different polarization trajectories, the active deception jamming and targets exhibit a significant clustering effect and the identification performance is stable. Compared with jamming identification methods that rely on dual-polarization or full-polarization radar systems, the proposed method offers both low cost and high efficiency, showing great application potential in radar anti-jamming.
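The discrimination step can be illustrated with a toy clustering experiment: echoes collected under a time-varying polarization trajectory yield one polarization-response vector per return, and true targets separate from deception jamming in that feature space. The feature model below (a target tracks the commanded polarization trajectory while a repeater jammer returns a flat response) and the simple two-class k-means are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(6)
states = np.linspace(0, np.pi / 2, 8)            # commanded polarization angles
target = np.cos(states) ** 2 + 0.05 * rng.normal(size=(40, 8))  # follows trajectory
jammer = 0.5 + 0.05 * rng.normal(size=(40, 8))   # flat response, ignores it
X = np.vstack([target, jammer])

# two-class k-means on the polarization-response vectors,
# initialised with one sample from each end of the stack
centers = np.vstack([X[0], X[-1]])
for _ in range(20):
    labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(1)
    centers = np.array([X[labels == k].mean(0) for k in range(2)])
print("cluster sizes:", np.bincount(labels))     # ~40/40 split expected
```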