With the successive launch of high-resolution Synthetic Aperture Radar (SAR) satellites, all-weather, all-time, high-precision observation of island regions with variable weather conditions has become feasible. As a key preprocessing step in various remote sensing applications, orthorectification relies on high-precision control points to correct the geometric positioning errors of SAR images. However, obtaining artificial control points that meet SAR correction requirements in island areas is costly and risky. To address this challenge, this study first proposes a rapid registration algorithm for optical and SAR heterogeneous images and then automatically extracts control points from an optical reference base map, achieving orthorectification of SAR images in island regions. The proposed registration algorithm consists of two stages: constructing dense common features of the heterogeneous images, and performing pixel-by-pixel matching on the down-sampled features to avoid the low repeatability of feature points across heterogeneous images. To reduce matching complexity, a land-sea segmentation mask is introduced to limit the search range. Subsequently, local fine matching is applied to the preliminary matched points to reduce inaccuracies introduced by down-sampling. Meanwhile, uniformly sampled coastline points are introduced to enhance the uniformity of the matching results, and orthorectified images are generated through a piecewise linear transformation model, ensuring overall correction accuracy in sparse island areas. The algorithm performs excellently on high-resolution SAR images of multiple island scenes, with an average positioning error of 3.2 m and a complete scene correction time of only 17.3 s; both values are superior to those of various existing advanced heterogeneous registration and correction algorithms, demonstrating the great potential of the proposed algorithm for engineering applications.
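The coarse matching stage described above can be sketched as a masked template search on down-sampled imagery: for each candidate point, the optical patch is compared against SAR patches only at positions allowed by the land-sea mask. A minimal illustration in which the dense-feature construction and fine-matching stages of the paper are omitted, and all function and parameter names are my own:

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def coarse_match(opt, sar, mask, pt, half=4, search=3):
    """Find the best SAR offset for an optical patch centred at `pt`,
    searching only positions whose centre lies on the land-sea mask."""
    r, c = pt
    tpl = opt[r - half:r + half + 1, c - half:c + half + 1]
    best, best_off = -1.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if not mask[rr, cc]:          # skip open-sea positions
                continue
            win = sar[rr - half:rr + half + 1, cc - half:cc + half + 1]
            s = ncc(tpl, win)
            if s > best:
                best, best_off = s, (dr, dc)
    return best_off, best
```

On down-sampled features this exhaustive search stays cheap, which is the point of the paper's coarse-to-fine design; the mask further shrinks the candidate set in island scenes where most pixels are sea.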
To address the challenges in tracking complex maneuvering extended targets, an effective tracking method was proposed for irregularly shaped star-convex extended targets using a transformer network. Initially, the alpha-shape algorithm was used to model the variations in the star-convex shape. In addition, a recursive approach was derived in detail within the Bayesian filtering framework to estimate the irregular shape of an extended target; this approach accurately estimated the shape of a static star-convex extended target. Moreover, through the structural redesign of the target state transition matrix and its real-time estimation using a transformer network, accurate tracking of complex maneuvering targets was achieved. Furthermore, real-time tracking of star-convex maneuvering extended targets was achieved by fusing the estimated shape contours with the motion states. Complex maneuvering extended target tracking scenarios were constructed to assess the performance of the proposed method, and the algorithm's comprehensive estimation of both shape and motion state was evaluated using multiple performance indicators.
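The star-convex shape representation above can be illustrated with a crude radial stand-in for the alpha-shape model: bin scatterer measurements by angle around the target center and keep the largest radius per angular bin, giving a radial extent function that is star-convex by construction. The binning scheme and names here are illustrative only, not the paper's alpha-shape algorithm:

```python
import numpy as np

def radial_contour(points, center, n_bins=36):
    """Crude star-convex contour estimate: bin measurement points by
    angle around `center` and keep the largest radius in each bin.
    An illustrative stand-in for the alpha-shape modelling in the paper."""
    d = points - center
    ang = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    rad = np.hypot(d[:, 0], d[:, 1])
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    r = np.zeros(n_bins)
    for b, rr in zip(bins, rad):
        r[b] = max(r[b], rr)
    return r
```

A recursive (Bayesian) version would update each bin's radius from new measurements rather than taking a one-shot maximum, which is the role of the filtering derivation in the paper.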
In practical settings, the efficacy of Space-Time Adaptive Processing (STAP) algorithms relies on acquiring sufficient Independent Identically Distributed (IID) samples. However, sparse-recovery STAP methods encounter challenges such as model parameter dependence and high computational complexity. Furthermore, current deep learning STAP methods lack interpretability, posing significant hurdles for network debugging and practical application. In response to these challenges, this paper introduces an innovative method: a Multi-module Deep Convolutional Neural Network (MDCNN). The network blends data- and model-driven techniques to precisely estimate clutter covariance matrices, particularly in scenarios where training samples are limited. MDCNN is built from four key modules: mapping, data, prior, and hyperparameter modules. The front- and back-end mapping modules manage the pre- and post-processing of data, respectively. During each equivalent iteration, a data module and a prior module collaborate as a group, and the core network is formed by multiple groups of these two modules, enabling multiple equivalent iterative optimizations. Further, the hyperparameter module adjusts the trainable parameters across equivalent iterations. These modules are developed with precise mathematical expressions and practical interpretations, remarkably improving the network's interpretability. Performance evaluation using real data demonstrates that the proposed method slightly outperforms existing small-sample STAP methods in nonhomogeneous clutter environments while significantly reducing computational time.
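The "data module plus prior module per equivalent iteration" structure described above is the deep-unrolling pattern: a classical iterative solver is truncated to a fixed number of steps and its per-step parameters are made trainable. A minimal non-convolutional skeleton using ISTA-style updates; MDCNN's actual modules are convolutional and learned, so this only shows the iteration structure:

```python
import numpy as np

def soft(x, t):
    """Soft-threshold step -- the role of a 'prior module' (sparsity prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(A, y, steps, thresholds):
    """Unrolled ISTA: each 'equivalent iteration' pairs a data module
    (gradient step on ||Ax - y||^2) with a prior module (soft threshold).
    `thresholds` plays the role of the trainable hyperparameter module."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for k in range(steps):
        x = x - (A.T @ (A @ x - y)) / L    # data module: enforce data fidelity
        x = soft(x, thresholds[k])         # prior module: enforce the prior
    return x
```

Because every step corresponds to a known optimization operation, each layer of such a network has a mathematical interpretation, which is the interpretability argument the abstract makes.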
The modern radar confrontation situation is complex and changeable, and inter-system combat has become a basic feature; overall system performance affects the initiative on the battlefield and even the final victory or defeat. By optimizing the beam resources of the radar and jammers in a system, overall performance can be improved and an effective low-intercept detection effect can be obtained in the spatial and temporal domains. However, joint optimization of cooperative beamforming in the spatial and temporal domains is a nonconvex problem with complex multiparameter coupling. In this paper, an optimization model is established for a multitasking dynamic scene in the spatial and temporal domains, with radar detection performance as the optimization goal and the interference performance and energy limitation of the jammers as constraints. To solve the model, a joint design method for space-time cooperative beamforming based on iterative optimization is proposed; that is, the radar transmit, radar receive, and multiple jammer transmit beamforming vectors are alternately optimized. To solve the Quadratically Constrained Quadratic Programming (QCQP) problem with indefinite matrices arising in multijammer collaborative optimization, this paper adopts the Feasible-Point-Pursuit Successive Convex Approximation (FPP-SCA) algorithm: on the basis of the SCA algorithm, feasibility is ensured through reasonable relaxation by introducing slack variables and a penalty term, which overcomes the difficulty of obtaining a feasible solution when the problem contains indefinite matrices. Simulation results show that, under a given jammer energy constraint, the proposed method enables multiple jammers to interfere with each enemy platform in the spatial and temporal domains to cover our radar detection, while ensuring high-performance, interference-free radar detection of the target. Compared with traditional algorithms, collaborative interference based on the FPP-SCA algorithm exhibits better performance in the dynamic scene.
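The FPP-SCA relaxation mentioned above has a standard form. Splitting each indefinite constraint matrix into positive and negative semidefinite parts, $A_i = A_i^{(+)} + A_i^{(-)}$ with $A_i^{(+)} \succeq 0$ and $A_i^{(-)} \preceq 0$, each SCA round at the current point $\mathbf{z}$ keeps the convex part and linearizes the concave part, with a slack $s_i$ guaranteeing feasibility (notation assumed for illustration, not taken from this paper):

```latex
\mathbf{x}^{H} A_i^{(+)} \mathbf{x}
  + 2\,\mathrm{Re}\!\left\{\mathbf{z}^{H} A_i^{(-)} \mathbf{x}\right\}
  - \mathbf{z}^{H} A_i^{(-)} \mathbf{z} \;\le\; c_i + s_i,
\qquad s_i \ge 0,
```

and the objective is augmented with a penalty term $\lambda \sum_i s_i$ that drives the slacks toward zero. Because the linearization upper-bounds the original quadratic, any solution with all $s_i = 0$ is feasible for the original nonconvex QCQP.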
Inverse Synthetic Aperture Radar (ISAR) images of spacecraft are composed of discrete scatterers that exhibit weak texture, high dynamics, and discontinuity. These characteristics result in sparse point clouds when traditional algorithms are used for Three-Dimensional (3D) reconstruction from spacecraft ISAR images. Furthermore, comprehensively describing the complete shape of a target with such point clouds is difficult, which hampers accurate extraction of the target's structural and pose parameters. To address this problem, considering that space targets usually have specific modular structures, this paper proposes a method for abstracting parametric structural primitives from space target ISAR images to represent their 3D structures. First, the energy accumulation algorithm is used to obtain a sparse point cloud of the target from the ISAR images. Subsequently, the point cloud is fitted using parameterized primitives. Finally, the primitives are projected onto the ISAR imaging plane and optimized by maximizing their similarity with the target image to obtain the optimal 3D primitive representation of the target. Compared with traditional point cloud 3D reconstruction, this method provides a more complete description of the target's 3D structure. Meanwhile, the primitive parameters obtained using this method represent the attitude and structure of the target and can directly support subsequent tasks such as target recognition and analysis. Simulation experiments demonstrate that the method can effectively achieve 3D abstraction of space targets from sequential ISAR images.
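The projection step in the optimization loop above reduces to mapping 3D primitive points into the 2D range/cross-range (Doppler) coordinates of the imaging plane. A minimal sketch of that geometric step (the similarity measure and the optimizer are omitted; names are my own):

```python
import numpy as np

def project_to_image_plane(points, range_axis, doppler_axis):
    """Project 3-D primitive points onto an ISAR imaging plane spanned by
    orthonormal range and cross-range (Doppler) unit vectors.
    Returns (N, 2) image-plane coordinates."""
    R = np.vstack([range_axis, doppler_axis])   # 2x3 projection matrix
    return points @ R.T
```

In the paper's pipeline, points sampled on each candidate primitive would be projected this way for every ISAR frame, and the primitive parameters adjusted to maximize similarity between the projection and the observed image.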
Fine terrain classification is one of the main applications of Synthetic Aperture Radar (SAR). In the multiband fully polarized SAR operating mode, information on a target's response characteristics across different frequency bands and polarizations can be obtained, which can improve target classification accuracy. However, existing datasets in China and abroad provide only low-resolution fully polarized classification data covering individual bands, limited regions, and small sample sizes. Thus, a multidimensional SAR dataset from Hainan is used to construct a multiband fully polarized fine classification dataset with ample sample size, diverse land cover categories, and high classification reliability. This dataset, supported by the high-resolution aerial observation system application calibration and verification project, will promote the development of multiband fully polarized SAR classification applications. This paper provides an overview of the composition of the dataset and describes the information and production methods for the first batch of published data (MPOLSAR-1.0). Furthermore, it presents preliminary classification results based on polarization feature classification and classical machine learning methods, providing support for the sharing and application of the dataset.
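A classical machine-learning baseline of the kind reported for such datasets operates on per-pixel polarimetric feature vectors (for example, channel powers or H/A/alpha decompositions). A minimal stand-in classifier, not the specific baselines used for MPOLSAR-1.0:

```python
import numpy as np

class NearestCentroid:
    """Minimal classical classifier over per-pixel polarimetric features;
    an illustrative stand-in for the classical machine-learning baselines."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        # Assign each sample to the class with the nearest feature centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

With multiband fully polarized data, the feature vector simply grows to include each band's polarimetric features, which is the mechanism by which multiband observation can raise classification accuracy.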
Through-wall radar systems with a single transmitter and receiver have the advantages of portability, simplicity, and independent operation; however, they cannot accomplish two-dimensional (2D) localization and tracking of targets. This paper proposes distributed wireless networking for through-wall radar systems based on a portable single-transmitter, single-receiver radar, together with a joint target positioning method, balancing system portability, low cost, and target 2D information estimation. First, a complementary Gray code transmission waveform is utilized to overcome mutual interference when multiple radars operate simultaneously in the same frequency band, and each radar node communicates with the processing center via wireless modules, forming a distributed wireless networking radar system. In addition, a data synchronization method combining behavioral cognition theory and template matching is proposed, which identifies identical motion states in data obtained from different radars, realizing slow-time synchronization among distributed radars and thereby eliminating the strict hardware requirements of conventional synchronization methods. Finally, a joint localization method based on the Levenberg-Marquardt algorithm is proposed, which can simultaneously estimate the positions of radar nodes and targets without requiring prior radar position information. Simulation and field experiments reveal that the developed system can obtain 2D target positions and track moving targets in real time, with a radar self-position estimation error of less than 0.06 m and a moving human target positioning error of less than 0.62 m.
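The Levenberg-Marquardt localization above minimizes range residuals between predicted and measured node-target distances. A simplified sketch for a single 2D target with known node positions; the paper's method additionally estimates the node positions themselves, and all names here are my own:

```python
import numpy as np

def lm_locate(radars, ranges, x0, lam=1e-2, iters=50):
    """Levenberg-Marquardt estimate of a 2-D target position from
    node-to-target range measurements (node positions taken as known)."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        d = np.linalg.norm(radars - x, axis=1)
        r = d - ranges                       # range residuals
        J = (x - radars) / d[:, None]        # Jacobian of distances w.r.t. x
        H = J.T @ J + lam * np.eye(2)        # damped normal equations
        x = x - np.linalg.solve(H, J.T @ r)
    return x
```

The damping term `lam` keeps the step well-conditioned when the nodes are nearly collinear, which is the usual reason to prefer Levenberg-Marquardt over plain Gauss-Newton in this geometry.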
Scanning radar angular super-resolution technology exploits the convolution relationship between the target and the antenna pattern, using deconvolution to obtain angular resolution beyond the real beam. Most current angular super-resolution methods assume an ideal, distortion-free antenna pattern and do not consider pattern changes caused in practice by factors such as the radome, antenna measurement errors, and non-ideal platform motion. In practice, the antenna pattern often has unknown errors, which can reduce target resolution and even generate false targets. To address this problem, this paper proposes an angular super-resolution imaging method for airborne radar with unknown antenna errors. First, based on the total least squares criterion, the effect of the pattern error matrix is considered and the corresponding objective function is derived. Second, the objective function is solved with an iterative reweighted optimization method, adopting an alternating iterative solution scheme. Finally, an adaptive parameter update method is introduced for algorithm hyperparameter selection. Simulation and experimental results demonstrate that the proposed method achieves super-resolution reconstruction even in the presence of unknown antenna errors, improving the robustness of the super-resolution algorithm.
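The iterative reweighted optimization mentioned above can be illustrated on the basic deconvolution model y = Hx, where H is built from the antenna pattern. The sketch below solves a sparsity-regularized least-squares deconvolution by reweighting; the paper's total-least-squares treatment of the pattern error matrix and its adaptive parameter update are omitted, and `mu`, `eps` are assumed illustrative parameters:

```python
import numpy as np

def irls_deconv(H, y, iters=20, eps=1e-3, mu=1e-2):
    """Iteratively reweighted least-squares deconvolution sketch for beam
    sharpening: approximately solves min ||Hx - y||^2 + mu * sum|x_i|
    by reweighting a quadratic penalty from the current estimate."""
    x = H.T @ y                                 # initial back-projection
    for _ in range(iters):
        W = np.diag(1.0 / (np.abs(x) + eps))    # weights from current x
        x = np.linalg.solve(H.T @ H + mu * W, H.T @ y)
    return x
```

Each pass solves a weighted ridge problem whose weights shrink small coefficients harder, so the sequence converges toward a sparse angular profile that is sharper than the real-beam response.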
Weak target signal processing is the cornerstone and prerequisite for radar to achieve excellent detection performance. In complex practical applications, strong clutter interference, weak target signals, unclear image features, and difficult feature extraction have made weak target detection and recognition a long-standing challenge in radar processing. Conventional model-based processing methods do not accurately match the actual operating background and target characteristics, leading to weak generalizability. Recently, deep learning has made significant progress in radar intelligent information processing. By building deep neural networks, deep learning algorithms can automatically learn feature representations from large amounts of radar data, improving target detection and recognition performance. This article systematically reviews recent research progress in the intelligent processing of weak radar targets in terms of signal processing, image processing, feature extraction, target classification, and target recognition, covering noise and clutter suppression, target signal enhancement, low- and high-resolution radar image and feature processing, and feature extraction and fusion. In response to the limited generalization ability, single-feature expression, and insufficient interpretability of existing intelligent processing methods for weak targets, the article outlines future directions: small-sample object detection (based on transfer learning and reinforcement learning), multidimensional and multifeature fusion, network model interpretability, and joint knowledge- and data-driven processing.
Distributed radar with moving platforms can enhance the survivability and detection performance of a system; however, it is difficult to equip these platforms with sufficient communication bandwidth to transmit high-precision observed data, posing a great challenge to the high-performance detection of a distributed radar system. Because low-bit quantization can effectively reduce the computation cost and resource consumption of distributed radar systems, in this paper, we investigate the high-performance detection of multiple moving targets using the distributed radar system on moving platforms by adopting a low-bit quantization strategy. First, according to system resources, the multipulse observed data of each node may be quantized with a low-bit quantizer, and the likelihood function relative to the quantizer and the states of multiple targets is derived. Subsequently, based on the convexity of the likelihood function relative to the unknown reflection coefficients, a joint estimation algorithm is designed for the Doppler shifts and reflection coefficients. Then, a generalized likelihood ratio test based multi-target detector is designed for detecting multiple targets with unknown states in the surveillance area, and the constant false alarm rate detection threshold is derived. Finally, the optimal low-bit quantizer is designed by deriving the asymptotic detection performance of the system, which effectively improves the detection performance and ensures robustness. Simulation experiments are conducted to analyze the detection and estimation performance of the proposed algorithm, demonstrating its effectiveness for weak signals and showing that low-bit quantized data can achieve detection and estimation performance close to that of high-precision (16-bit quantization) data while consuming only about 20% of the communication bandwidth. Besides, according to the simulated results, the two-bit quantization strategy may offer a good trade-off between the detection performance and resource consumption of the distributed radar system.
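To make the bandwidth arithmetic behind low-bit quantization concrete, the sketch below implements a generic uniform mid-rise quantizer in pure Python. This is not the paper's optimal quantizer (which is derived from the system's asymptotic detection performance); the function name, the clipping range `v_max`, and the 2-bit default are assumptions for illustration only.

```python
def lowbit_quantize(x, n_bits=2, v_max=1.0):
    """Uniform mid-rise quantizer: map a real sample to one of 2**n_bits
    reconstruction levels spaced evenly across [-v_max, v_max]."""
    levels = 2 ** n_bits
    step = 2.0 * v_max / levels
    # clip into range, then map to the center of the containing cell
    x = max(-v_max, min(v_max - 1e-12, x))
    idx = int((x + v_max) / step)
    return -v_max + (idx + 0.5) * step

# Raw payload ratio: a 2-bit sample carries 2/16 = 12.5% of the bits of a
# 16-bit sample (reported bandwidth figures may also include overhead).
payload_ratio = 2 / 16
```

For example, with the 2-bit default every input collapses to one of the four levels {-0.75, -0.25, 0.25, 0.75}, which is what makes the per-node link so cheap.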
Metasurfaces are two-dimensional artificial structures with numerous subwavelength elements arranged periodically or aperiodically. They have demonstrated exceptional capabilities in electromagnetic wave polarization manipulation, opening new avenues for manipulating electromagnetic waves. Metasurfaces exhibiting electrically controlled reconfigurable polarization manipulation have garnered widespread research interest. These unique metasurfaces can dynamically adjust the polarization state of electromagnetic waves through real-time modification of their structure or material properties via electrical signals. This article provides a comprehensive overview of the development of metasurfaces exhibiting electrically controlled reconfigurable polarization manipulation and explores in detail the technological advancements of metasurfaces with different transmission characteristics in the microwave region. Furthermore, it delves into and anticipates the future development of this technology.
Forward-looking imaging of airborne scanning radar is widely used in situational awareness, autonomous navigation, and terrain following. When the radar is affected by unintentional, temporally sporadic electromagnetic interference or abnormal equipment performance, the echo signal contains outliers. Existing super-resolution methods can suppress outliers and improve azimuth resolution, but they do not consider real-time computation. In this study, we propose an airborne scanning radar super-resolution method to achieve fast forward-looking imaging when echo data are abnormal. First, we propose using the Student-t distribution to model the noise. Then, the expectation-maximization method is used to estimate the parameters. Inspired by the truncated singular value decomposition method, we introduce the truncated unitary matrix into the estimation formula of the target scattering coefficient. Finally, the size of the inverse matrix and the computational complexity of parameter estimation are reduced through matrix transformation. The simulation results show that the proposed method can improve the azimuth resolution of forward-looking imaging in a shorter time and suppress outliers in the echo data.
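To show why a Student-t noise model suppresses outliers, here is a minimal pure-Python sketch of EM for a scalar location parameter: the E-step computes latent precision weights that automatically downweight large residuals, and the M-step takes a weighted mean. The paper's method operates on radar echo matrices and additionally reduces complexity via a truncated unitary matrix; none of that is reproduced here, and `nu`, `sigma2`, and the toy data are illustrative.

```python
def studentt_em_mean(samples, nu=3.0, sigma2=1.0, iters=50):
    """Robust location estimate under Student-t noise via EM.
    E-step: weight w_i = (nu + 1) / (nu + r_i**2 / sigma2) shrinks the
    influence of outliers; M-step: weighted mean. (Scale update omitted.)"""
    mu = sum(samples) / len(samples)          # start from the plain mean
    for _ in range(iters):
        w = [(nu + 1.0) / (nu + (x - mu) ** 2 / sigma2) for x in samples]
        mu = sum(wi * xi for wi, xi in zip(w, samples)) / sum(w)
    return mu

data = [1.0, 1.1, 0.9, 1.05, 50.0]            # one gross outlier
mu_robust = studentt_em_mean(data)
```

With a Gaussian model the single outlier drags the mean above 10; the Student-t weights leave the estimate near the inlier cluster.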
The field of Synthetic Aperture Radar Automatic Target Recognition (SAR-ATR) lacks effective black-box attack algorithms. Therefore, this research proposes a transfer-based black-box attack algorithm that builds on the idea of the Momentum Iterative Fast Gradient Sign Method (MI-FGSM). First, random speckle noise transformation is performed according to the characteristics of SAR images to alleviate model overfitting to the speckle noise and improve the generalization performance of the algorithm. Second, an AdaBelief-Nesterov optimizer is designed to rapidly find the optimal gradient descent direction, and the attack effectiveness of the algorithm is improved through rapid convergence of the model gradient. Finally, a quasi-hyperbolic momentum operator is introduced to obtain a stable gradient descent direction, preventing the gradient from falling into a local optimum during rapid convergence and further enhancing the black-box attack success rate of the adversarial examples. Simulation experiments show that, compared with existing adversarial attack algorithms, the proposed algorithm improves the ensemble-model black-box attack success rate against mainstream SAR-ATR deep neural networks by 3%~55% and 6%~57.5% on the MSTAR and FUSAR-Ship datasets, respectively; the generated adversarial examples are highly concealable.
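For readers unfamiliar with the MI-FGSM baseline that the proposed attack extends, the toy sketch below shows the core update on a scalar input: accumulate an L1-normalized gradient into a momentum term, step by its sign, and project back into the ε-ball. The speckle-noise transformation, AdaBelief-Nesterov optimizer, and quasi-hyperbolic momentum of the paper are not modeled; `eps`, `mu`, and the toy loss are illustrative.

```python
def mi_fgsm(x0, grad_fn, eps=0.3, mu=1.0, steps=10):
    """Momentum Iterative FGSM on a scalar (toy sketch). grad_fn returns
    the gradient of the loss we want to MAXIMIZE at the current point."""
    alpha = eps / steps
    x, g = x0, 0.0
    for _ in range(steps):
        grad = grad_fn(x)
        g = mu * g + grad / (abs(grad) + 1e-12)   # L1-normalized momentum
        x = x + alpha * (1.0 if g > 0 else -1.0)  # sign step
        x = max(x0 - eps, min(x0 + eps, x))       # project into eps-ball
    return x

# Toy loss (x - 2)**2 around x0 = 1: its gradient 2*(x - 2) is negative,
# so the attack pushes x to the lower edge of the eps-ball.
x_adv = mi_fgsm(1.0, lambda x: 2.0 * (x - 2.0))
```

The momentum term is what distinguishes MI-FGSM from plain iterative FGSM: it stabilizes the step direction across iterations, which is the property the paper's quasi-hyperbolic variant strengthens further.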
Doppler through-wall radar faces two challenges when locating targets concealed behind walls: (1) precisely determining the instantaneous frequency of the target within the frequency aliasing region and (2) reducing the impact of the wall on positioning by determining accurate wall parameters. To address these issues, this paper introduces a target localization algorithm that combines the Hough transform with a support vector regression-BP neural network. First, a multiview fusion model framework is proposed for through-wall target detection, which enables the auxiliary estimation of wall parameter information by acquiring target positions from different perspectives. Second, a high-precision extraction and estimation algorithm for the instantaneous frequency curve of the target is proposed by combining the differential evolution algorithm with Chebyshev interpolation polynomials. Finally, a target motion trajectory compensation algorithm based on the Back Propagation (BP) neural network is proposed using the estimated wall parameter information, which suppresses the distorting effect of obstacles on target localization results and achieves accurate localization of targets behind a wall. Experimental results indicate that, compared with the conventional short-time Fourier method, the developed algorithm can accurately extract target instantaneous frequency curves within the time-frequency aliasing region. Moreover, it successfully reduces the impact caused by walls, facilitating the precise localization of multiple targets behind walls, and improves the overall localization accuracy by ~85%.
The detection of small, slow-moving targets, such as drones and other Unmanned Aerial Vehicles (UAVs), poses considerable challenges to radar target detection and recognition technology. There is an urgent need to establish relevant datasets to support the development and application of techniques for detecting small, slow-moving targets. This paper presents a dataset for detecting low-speed, small-size targets using a multiband Frequency Modulated Continuous Wave (FMCW) radar. The dataset uses Ku-band and L-band FMCW radar to collect echo data from six UAV types and exhibits diverse temporal and frequency domain resolutions and measurement capabilities by modulating radar cycles and bandwidth, yielding the LSS-FMCWR-1.0 dataset (LSS: Low Slow Small). To further enhance the capability for extracting micro-Doppler features from UAVs, this paper proposes a method for UAV micro-Doppler extraction and parameter estimation based on the local maximum synchroextracting transform. Based on the short-time Fourier transform, this method extracts values at the maximum energy point in the time-frequency domain to retain useful signals and refine the time-frequency energy representation. Validation and analysis using the LSS-FMCWR-1.0 dataset demonstrate that this approach reduces entropy by 4.7 dB on average and decreases estimation errors in rotor blade length by 10.9% compared with traditional time-frequency methods. Moreover, the proposed method provides a foundation for subsequent target recognition efforts because it balances high time-frequency resolution and parameter estimation capabilities.
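The "retain only the maximum-energy time-frequency point" step can be sketched in a few lines. Below, `spec[t][f]` is assumed to hold STFT energies for frame `t`; for each frame, only the peak bin survives. The actual local maximum synchroextracting transform also reassigns energy and operates on complex STFT coefficients, which this sketch omits.

```python
def local_max_extract(spec):
    """For each time frame, keep only the frequency bin with maximum
    energy and zero the rest -- a sketch of the 'energy ridge' idea.
    spec: list of frames, each a list of per-bin energies |STFT|**2."""
    out = []
    for frame in spec:
        k = max(range(len(frame)), key=frame.__getitem__)  # peak bin index
        out.append([e if i == k else 0.0 for i, e in enumerate(frame)])
    return out

# 3 frames x 3 bins of toy spectrogram energies
spec = [[0.1, 0.9, 0.2],
        [0.8, 0.1, 0.1],
        [0.2, 0.3, 0.7]]
ridge = local_max_extract(spec)
```

Concentrating each frame onto its peak bin is also what lowers the time-frequency entropy the abstract reports, since the energy distribution becomes maximally sparse per frame.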
Passive radars based on FM radio signals have low detection probability, high false alarm rates, and poor accuracy, presenting considerable challenges to target tracking in radar networks. Moreover, a high false alarm rate increases the computational burden and places high demands on the real-time performance of networking algorithms. In addition, low detection probability and poor azimuth accuracy result in a lack of redundant information, making measurement association and track initiation challenging. To address these issues, this paper proposes an FM-based passive radar network built on the concepts of elementary hypothesis points and elementary hypothesis tracks, as well as a track initiation algorithm. First, we construct possible low-dimensional association hypotheses and solve for their corresponding elementary hypothesis points. Subsequently, we associate elementary hypothesis points from different frames to form multiple possible elementary hypothesis tracks. Finally, by combining multi-frame radar network data for hypothesis track judgment, we confirm the elementary hypothesis tracks corresponding to real targets and eliminate the false elementary hypothesis tracks caused by incorrect associations. Results reveal that the proposed algorithm has lower computational complexity and faster track initiation than existing algorithms. Moreover, we verified the effectiveness of the proposed algorithm using simulation and experimental results.
Considering the problem of radar target detection in a sea clutter environment, this paper proposes a deep learning-based marine target detector. The proposed detector increases the differences between the target and clutter by fusing multiple complementary features extracted from different data sources, thereby improving the detection performance for marine targets. Specifically, the detector uses two feature extraction branches to extract multiple levels of fast-time and range features from the range profiles and the range-Doppler (RD) spectrum, respectively. Subsequently, a local-global feature extraction structure is developed to extract the sequence relations along the slow-time or Doppler dimension of the features. Furthermore, a feature fusion block based on adaptive convolution weight learning is proposed to efficiently fuse slow-fast time and RD features. Finally, the detection results are obtained through upsampling and nonlinear mapping of the fused multilevel features. Experiments on two public radar databases validated the detection performance of the proposed detector.
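As a minimal illustration of adaptively weighted feature fusion (not the paper's convolutional fusion block, whose weights are learned end-to-end), the sketch below gates two feature vectors with softmax-normalized scores; in practice the scores would come from a small learned subnetwork, and all names here are assumptions.

```python
import math

def adaptive_fuse(feat_a, feat_b, score_a, score_b):
    """Fuse two feature vectors with softmax-normalized adaptive weights,
    so the branch with the higher score dominates the fused output."""
    ea, eb = math.exp(score_a), math.exp(score_b)
    wa, wb = ea / (ea + eb), eb / (ea + eb)
    return [wa * a + wb * b for a, b in zip(feat_a, feat_b)]

# Equal scores -> an even 50/50 blend of the two branches
fused = adaptive_fuse([1.0, 0.0], [0.0, 1.0], score_a=0.0, score_b=0.0)
```

Raising one score shifts the blend toward that branch, which is the basic mechanism that lets a fusion block emphasize whichever data source is more informative for the current scene.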
In this study, a collaborative radar selection and transmit resource allocation strategy is proposed for multitarget tracking applications in multiple distributed phased array radar networks with imperfect detection performance. The closed-form expression for the Bayesian Cramér-Rao Lower Bound (BCRLB) with imperfect detection performance is derived and adopted as the criterion function to characterize the precision of target state estimates. The key concept of the developed strategy is to collaboratively adjust the radar node selection, transmitted power, and effective bandwidth allocation of multiple distributed phased array radar networks to minimize the total transmit power consumption in an imperfect detection environment. This is achieved under the constraints of the predetermined tracking accuracy requirements of multiple targets and several illumination resource budgets, thereby improving radio frequency stealth performance. The formulated problem is a mixed-integer, nonlinear, nonconvex optimization model. By incorporating the barrier function approach and the cyclic minimization technique, an efficient four-step solution methodology is proposed to solve the resulting optimization problem. Numerical simulation examples demonstrate that, compared with other existing algorithms, the proposed strategy can effectively reduce the total power consumption of multiple distributed phased array radar networks by at least 32.3% and improve radio frequency stealth performance while meeting the given multitarget tracking accuracy requirements.
Reviews
With the rapid development of high-resolution radar imaging technology, artificial intelligence, and big data technology, remarkable advancements have been made in the intelligent interpretation of radar imagery. Despite growing demands, radar image interpretation now faces various technical challenges, mainly because of the particularity of the radar sensor itself and the complexity of electromagnetic scattering phenomena. To address the problem of microwave radar imagery perception, this article proposes the development of cross-disciplinary microwave vision research, which integrates electromagnetic physics and radar imaging mechanisms with human visual perception principles and computer vision technologies. This article discusses the concept and implications of microwave vision, proposes a microwave vision perception model, and explains its basic scientific problems and technical roadmaps. Finally, it introduces the preliminary research progress on related issues achieved by the authors’ group.
Synthetic Aperture Radar (SAR), with its coherent imaging mechanism, has the unique advantage of all-day and all-weather imaging. As a typical and important topic, aircraft detection and recognition have been widely studied in the field of SAR image interpretation. With the introduction of deep learning, the performance of aircraft detection and recognition, which is based on SAR imagery, has considerably improved. This paper combines the expertise gathered by our research team on the theory, algorithms, and applications of SAR image-based target detection and recognition, particularly aircraft. Additionally, this paper presents a comprehensive review of deep learning-powered aircraft detection and recognition based on SAR imagery. This review includes a detailed analysis of the aircraft target characteristics and current challenges associated with SAR image-based detection and recognition. Furthermore, the review summarizes the latest research advancements, characteristics, and application scenarios of various technologies and collates public datasets and performance evaluation metrics. Finally, several challenges and potential research prospects are discussed.
Papers
Intelligent target recognition approaches for Synthetic Aperture Radar (SAR) continue to face challenges owing to their limited robustness, generalizability, and interpretability. Current research focuses on comprehending the microwave properties of SAR targets and integrating them with advanced deep learning algorithms to achieve effective and resilient SAR target recognition. The computational complexity of SAR target characteristic-inversion approaches is often considerable, rendering their integration with deep neural networks for real-time, end-to-end prediction challenging. To facilitate the use of the physical properties of SAR targets in intelligent recognition tasks, it is imperative to advance microwave physical property sensing technologies that are efficient, intelligent, and interpretable. This paper focuses on the nonstationary nature of high-resolution SAR targets and proposes an improved intelligent approach for analyzing target characteristics using time-frequency analysis. This method enhances the processing flow and calculation efficiency, making it more suitable for SAR targets. It is integrated with a deep neural network for SAR target recognition to achieve consistent performance improvement. The proposed approach exhibits robust generalization capabilities and notable computing efficiency, enabling classification outcomes of SAR target characteristics that are readily interpretable from a physical standpoint. The resulting enhancement in target recognition performance is comparable to that achieved by the attribute scattering center model.
Synthetic Aperture Radar (SAR) images are an important data source in microwave vision research; however, computer vision cannot interpret these images effectively based on optical perceptual principles. Therefore, microwave vision, which draws inspiration from human visual perception principles and combines computer vision techniques with electromagnetic physical principles, has become an important research direction in microwave remote sensing. Exploring the cognitive basis for microwave vision is crucial for improving the theoretical system of microwave vision. Therefore, as a preliminary attempt to enhance the theoretical understanding of microwave vision, this paper examines the effectiveness of optical perceptual principles for microwave vision. As a classical visual theory, Gestalt perceptual principles are commonly used for describing the perceptual principles of the human visual system for the external optical world and are a cognitive theoretical foundation of computer vision. In this context, this paper uses SAR images as the research object, focuses on the design process of cognitive psychology experiments, and preliminarily studies the effectiveness of Gestalt perceptual principles for SAR images, including the principles of perceptual grouping and perceptual invariance, exploring the cognitive basis of microwave vision. The experimental results indicate that the Gestalt perceptual principles cannot be directly applied to the algorithm design for SAR images, and the knowledge concepts and visual principles derived from the optical world using the human visual system do not perform well in SAR images. In the future, it will be necessary to summarize the corresponding visual cognitive principles based on the characteristics of microwave images, such as SAR images. 
Convolutional Neural Networks (CNNs) are widely used for image target classification in Synthetic Aperture Radar (SAR), but their lack of mechanism transparency prevents them from meeting practical application requirements, such as high reliability and trustworthiness. The Class Activation Mapping (CAM) method is often used to visualize the decision region of a CNN model. However, existing methods are primarily based on either channel-level or space-level class activation weights, and their research progress is still in its infancy regarding more complex SAR image datasets. Based on this, this paper proposes a CNN model visualization method for SAR images, considering the feature extraction ability of neurons and their current network decisions. Initially, neuronal activation values are used to visualize the capability of neurons to learn a target structure in their corresponding receptive fields. Further, a novel CAM-based method combining channel-wise and spatial-wise weights is proposed, which can reveal the basis of the decision-making process of trained CNN models by detecting the crucial areas in SAR images. Experimental results show that this method provides interpretability analysis of the model under different settings and effectively expands the application of CNNs for SAR image visualization.
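The idea of combining channel-level and spatial-level weights can be sketched generically as follows; the weight vectors here are illustrative placeholders (the paper derives its own weights), and the ReLU-then-normalize step follows the usual Grad-CAM convention:

```python
import numpy as np

def combined_cam(feature_maps, channel_weights, spatial_weights):
    """Heatmap from channel-wise AND spatial-wise class activation weights.

    feature_maps:    (C, H, W) activations of the last conv layer
    channel_weights: (C,)  per-channel importance (e.g. gradient-pooled)
    spatial_weights: (H, W) per-pixel importance
    """
    cam = np.einsum('c,chw->hw', channel_weights, feature_maps)
    cam = np.maximum(cam * spatial_weights, 0.0)     # ReLU, as in Grad-CAM
    rng = cam.max() - cam.min()
    return (cam - cam.min()) / rng if rng > 0 else cam

# toy example: 4 channels of 8x8 activations
fmap = np.random.default_rng(0).random((4, 8, 8))
heat = combined_cam(fmap, np.array([0.5, 0.1, 0.3, 0.1]), np.ones((8, 8)))
```

In practice the heatmap is upsampled to the input resolution and overlaid on the SAR image to show which areas drove the decision.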
With advances in satellite technology, Polarimetric Synthetic Aperture Radar (PolSAR) systems now have higher resolution and better data quality, providing excellent data conditions for the refined visual interpretation of artificial targets. The primary method currently used is multicomponent decomposition, but this method can result in pixel misdivision problems. Thus, we propose a non-fixed threshold division method for achieving advanced feature ship structure characterization in full-polarimetric SAR images. Yamaguchi decomposition can effectively identify the primary scattering mechanism and characterize artificial targets, and its modified volume scattering model is more consistent with actual data. The polarization entropy can serve as the target scattering mechanism at a specified equivalent point in the weakly depolarized state, which can effectively highlight the ship structure. This paper combines the three components of the Yamaguchi decomposition algorithm with the entropy and divides the result into a nine-class plane with a non-fixed threshold. This method reduces the category randomness generated by noise at the threshold boundary for complicated threshold treatments. Furthermore, the Mixed Scattering Mechanism (MSM), the region where both secondary scattering and single scattering are significant, is proposed to better match the scattering types of typical vessel structures in the experiment. The Generalized Similarity Parameter (GSP) is used to further shorten the intra-class distance and perform iterative clustering using a modified GSP-Wishart classifier. This method improves vessel distinguishability by enhancing the secondary and mixed scattering mechanisms. Finally, this paper uses full-polarimetric SAR data from a port in Shanghai, China, for the experiment. We collected and filtered ship information and optical data from this port through the Automatic Identification System (AIS) and matched them with the ships in full-polarimetric SAR images to verify the correct characterization of each vessel's features. The experimental results show that the proposed method can effectively distinguish three types of vessels: bulk carriers, container ships, and tankers.
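For reference, the polarization entropy used in such schemes is the standard Cloude-Pottier definition, computed from the eigenvalue spectrum of the 3×3 coherency matrix; a minimal sketch (independent of the paper's nine-class division) is:

```python
import numpy as np

def polarization_entropy(T):
    """Cloude-Pottier polarization entropy H of a 3x3 Hermitian coherency matrix T.

    H = -sum(p_i * log3(p_i)) with p_i the normalized eigenvalues; H is in [0, 1]:
    0 for a deterministic scatterer, 1 for a fully depolarized one.
    """
    lam = np.clip(np.linalg.eigvalsh(T), 0.0, None)   # eigenvalues, ascending
    p = lam / lam.sum()
    p = p[p > 0]                                      # 0 * log 0 := 0
    return float(-(p * np.log(p)).sum() / np.log(3))

T_iso = np.eye(3)                  # equal eigenvalues -> H = 1
T_det = np.diag([1.0, 0.0, 0.0])   # single eigenvalue -> H = 0
```

Low H (weak depolarization) is what makes the entropy useful for highlighting dominant ship structures against the clutter.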
Ship detection is one of the most important applications of polarimetric Synthetic Aperture Radar (SAR) systems. Current ship detection methods are susceptible to sidelobe interference, making it difficult to extract the target shape correctly. In addition, when ships are exceedingly dense and have different scales, adjacent ships may be merged into a single target because of the influence of strong sidelobes, causing missed detections. To address the issues of sidelobe interference and multi-scale dense ship detection, a ship detection method based on the polarimetric SAR gradient and the complex Wishart classifier is proposed. First, the Likelihood Ratio Test (LRT) gradient is introduced into the log-ratio gradient framework to apply it to polarimetric SAR data. Then, a Constant False Alarm Rate (CFAR) detector is applied to the gradient image to map the ship boundaries accurately. Second, the complex Wishart iterative classifier is used to detect the strongly scattering part of the ship, which can eliminate most clutter interference and maintain the ship's shape details. Finally, the LRT detection and complex Wishart classifier detection results are fused. Thus, not only is the strong sidelobe interference greatly suppressed, but dense targets with different scales are also distinguished and accurately located. This study performs comparative experiments on three polarimetric SAR images from the ALOS-2 satellite. Experimental results show that compared with existing methods, the proposed algorithm has fewer false alarms and missed detections and can effectively overcome sidelobe interference while maintaining shape details.
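As an illustration of the CFAR stage alone (the paper applies its own detector to the 2-D gradient image), here is a minimal 1-D cell-averaging CFAR sketch; the guard/training sizes and the exponential-clutter threshold factor are generic textbook choices, not the paper's parameters:

```python
import numpy as np

def ca_cfar_1d(x, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR along a 1-D profile (e.g. one row of a gradient image).

    The noise level at each cell is estimated from `train` cells on each side,
    skipping `guard` cells around the cell under test."""
    n = 2 * train
    alpha = n * (pfa ** (-1.0 / n) - 1.0)    # threshold factor for exponential clutter
    det = np.zeros(len(x), dtype=bool)
    for i in range(guard + train, len(x) - guard - train):
        left = x[i - guard - train:i - guard]
        right = x[i + guard + 1:i + guard + train + 1]
        noise = (left.sum() + right.sum()) / n
        det[i] = x[i] > alpha * noise
    return det

profile = np.ones(64)
profile[32] = 100.0       # one strong point target over unit-level clutter
hits = ca_cfar_1d(profile)
```

Because the threshold adapts to the local noise estimate, the false-alarm rate stays roughly constant as the clutter level varies along the profile.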
With the widespread application of Synthetic Aperture Radar (SAR) images in ship detection and recognition, accurate and efficient ship classification has become an urgent issue that needs to be addressed. In few-shot learning, conventional methods often suffer from limited generalization capability; introducing additional information and features can enhance a model's understanding of targets and its ability to generalize. To address this challenge, this study proposes a few-shot ship classification method for SAR images based on scattering point topology and a Dual-Branch Convolutional Neural Network (DB-CNN). First, a topology structure is constructed from scattering key points to characterize the structural and shape features of ship targets. Second, the Laplacian matrix of this topology is computed to express the topological relations between scattering points in matrix form. Finally, the original image and the Laplacian matrix are fed into the DB-CNN for feature extraction. Regarding network architecture, the DB-CNN comprises two independent convolution branches that process visual and topological features, respectively, with two cross-fusion attention modules collaboratively merging features from both branches. This approach effectively integrates the topological relations of target scattering points into the automated learning process of the network, enhancing the generalization capability and improving the classification accuracy of the model. Experimental results demonstrate that the proposed approach achieves average accuracies of 53.80% and 73.00% on 1-shot and 5-shot tasks, respectively, on the OpenSARShip dataset. Similarly, on the FUSAR-Ship dataset, it achieves average accuracies of 54.44% and 71.36% on 1-shot and 5-shot tasks, respectively. In both the 1-shot and 5-shot settings, the proposed approach outperforms the baseline by more than 15% in accuracy, underscoring the effectiveness of incorporating scattering point topology in few-shot ship classification of SAR images.
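The step of turning scattering-point topology into a matrix can be illustrated with a generic sketch: build a k-nearest-neighbour graph over the keypoint coordinates and form its Laplacian L = D − A (the paper's own topology construction may differ from this k-NN assumption):

```python
import numpy as np

def scattering_laplacian(points, k=2):
    """Graph Laplacian L = D - A of a k-nearest-neighbour graph over keypoints."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]   # index 0 is the point itself
        A[i, nbrs] = 1.0
    A = np.maximum(A, A.T)                    # symmetrize the adjacency
    return np.diag(A.sum(axis=1)) - A         # degree matrix minus adjacency

# four scattering keypoints in image coordinates (toy values)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
L = scattering_laplacian(pts)
```

The Laplacian is symmetric with zero row sums, so it encodes only the connectivity pattern of the scattering points, which is what the second network branch consumes.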
With the widespread application of deep learning methods in Synthetic Aperture Radar (SAR) image interpretation, the explainability of SAR target recognition deep networks has gradually attracted the attention of scholars. Class Activation Mapping (CAM), a commonly used explainability algorithm, can visually display the salient regions influencing the recognition task through heatmaps. However, as a post hoc explanation method, CAM can only statically display the salient regions during the current recognition process and cannot dynamically show the variation patterns of the salient regions upon changing the input. This study introduces the concept of perturbation into CAM, proposing an algorithm called SAR Clutter Characteristics CAM (SCC-CAM). By introducing globally distributed perturbations to the input image, interference is gradually applied to deep SAR recognition networks, causing decision flips. The degree of change in the activation values of network neurons is also calculated. This method addresses the issue of perturbation propagation and allows for dynamic observation and measurement of variation patterns of salient regions during the recognition process. Thus, SCC-CAM enhances the explainability of deep networks. Experiments on the MSTAR and OpenSARShip-1.0 datasets demonstrate that the proposed algorithm can more accurately locate salient regions. Compared with traditional methods, the algorithm in this study shows stronger explainability in terms of average confidence degradation rates, confidence ascent ratios, information content, and other evaluation metrics. This algorithm can serve as a universal method for enhancing the explainability of networks.
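The underlying perturbation idea — perturb the input and measure how the network's response changes — can be sketched generically with patch occlusion. This is not the SCC-CAM algorithm itself, and `score_fn` is a hypothetical stand-in for a trained network's class confidence:

```python
import numpy as np

def perturbation_saliency(image, score_fn, patch=4, fill=0.0):
    """Score drop when each patch is occluded: a larger drop marks a more
    salient patch for the (stand-in) classifier `score_fn`."""
    base = score_fn(image)
    h, w = image.shape
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            pert = image.copy()
            pert[i:i + patch, j:j + patch] = fill
            sal[i // patch, j // patch] = base - score_fn(pert)
    return sal

# toy "target" in the top-left corner; mean intensity stands in for the score
img = np.zeros((8, 8))
img[:4, :4] = 1.0
sal = perturbation_saliency(img, lambda im: im.mean())
```

Sweeping the perturbation strength until the decision flips — rather than occluding once — is the dynamic measurement the abstract describes.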
The feature extraction capability of Convolutional Neural Networks (CNNs) is related to the number of their parameters. Generally, using a large number of parameters leads to improved feature extraction capability of CNNs. However, a considerable amount of training data is required to effectively learn these parameters. In practical applications, Synthetic Aperture Radar (SAR) images available for model training are often limited. Reducing the number of parameters in a CNN can decrease the demand for training samples, but the feature expression ability of the CNN is simultaneously diminished, which affects its target recognition performance. To solve this problem, this paper proposes a deep network for SAR target recognition based on Attribute Scattering Center (ASC) convolutional kernel modulation. Given the electromagnetic scattering characteristics of SAR images, the proposed network extracts scattering structures and edge features that are more consistent with the characteristics of SAR targets by modulating a small number of CNN convolutional kernels using predefined ASC kernels with different orientations and lengths. This approach generates additional convolutional kernels, which can reduce the network parameters while ensuring feature extraction capability. In addition, the designed network uses ASC-modulated convolutional kernels at shallow layers to extract scattering structures and edge features that are more consistent with the characteristics of SAR images while utilizing CNN convolutional kernels at deeper layers to extract semantic features of SAR images. The proposed network focuses on the electromagnetic scattering characteristics of SAR targets and shows the feature extraction advantages of CNNs due to the simultaneous use of ASC-modulated and CNN convolutional kernels. Experiments based on the studied SAR images demonstrate that the proposed network can ensure excellent SAR target recognition performance while reducing the demand for training samples.
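A toy sketch of the modulation idea — multiplying one base kernel by predefined oriented priors to expand it into a bank of orientation-selective kernels without learning extra parameters — follows. The Gaussian line prior here is an assumption for illustration, not the paper's ASC kernel definition:

```python
import numpy as np

def oriented_line_kernel(size, theta):
    """Prior kernel concentrating energy along a line at angle theta
    through the kernel centre (assumed stand-in for an ASC prior)."""
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size] - c
    dist = np.abs(-np.sin(theta) * xs + np.cos(theta) * ys)  # distance to the line
    k = np.exp(-dist ** 2)
    return k / k.sum()

def modulate(base, thetas):
    """One base kernel x N orientation priors -> a bank of N modulated kernels."""
    return np.stack([base * oriented_line_kernel(base.shape[0], t) for t in thetas])

base = np.ones((5, 5))                                  # stand-in for a learned kernel
bank = modulate(base, np.linspace(0, np.pi, 4, endpoint=False))
```

Only `base` would be learned; the four orientation-selective kernels come for free, which is how modulation trades parameters for kernel diversity.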
Synthetic Aperture Radar (SAR) is extensively utilized in civilian and military domains due to its all-weather, all-time monitoring capabilities. In recent years, deep learning has been widely employed to automatically interpret SAR images. However, due to the constraints of satellite orbit and incident angle, SAR target samples face the issue of incomplete view coverage, which poses challenges for learning-based SAR target detection and recognition algorithms. This paper proposes a method for generating multi-view samples of SAR targets by integrating differentiable rendering, combining inverse Three-Dimensional (3D) reconstruction, and forward rendering techniques. By designing a Convolutional Neural Network (CNN), the proposed method inversely infers the 3D representation of targets from limited views of SAR target images and then utilizes a Differentiable SAR Renderer (DSR) to render new samples from more views, achieving sample interpolation in the view dimension. Moreover, the training process of the proposed method constructs the objective function using DSR, eliminating the need for 3D ground-truth supervision. According to experimental results on simulated data, this method can effectively increase the number of multi-view SAR target images and improve the recognition rate of typical SAR targets under few-shot conditions.
The global scattering-center model is a high-performance electromagnetic scattering parametric model for complex targets in an optical region. The traditional methods for constructing global scattering models are usually based on candidate-point screening and clustering and are prone to producing false scattering centers and ignoring actual scattering centers. To address this issue, this study proposes a novel modeling method based on the spectral peak analysis of the target electromagnetic scattering intensity field. First, the three-dimensional (3D) electromagnetic scattering intensity field of the target is estimated based on the multiperspective, one-dimensional scattering-center parameters of the target using the RANdom SAmple Consensus (RANSAC) and Parzen window methods. Next, the positions of the global 3D scattering centers are determined through spectral peak analysis, scattering-center association, and multivision measurement fusion. Finally, the scattering coefficients and type parameters of the global scattering centers are estimated after the visibility of the global scattering center is corrected through binary image morphological processing. Simulation results demonstrate that the global scattering center model extracted using this method, which is highly consistent with the geometrical structure of the target, achieves higher expression accuracy while using fewer scattering centers than those used in traditional methods.
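The Parzen-window estimation and peak-picking steps can be illustrated in 1-D (the paper works on a 3-D intensity field): estimate a Gaussian-kernel density over observed scatterer positions, then take strict local maxima of the density as candidate scattering-centre positions. Bandwidth and sample values below are arbitrary toy choices:

```python
import numpy as np

def parzen_density(samples, grid, h=0.5):
    """Parzen-window (Gaussian-kernel) density estimate on a 1-D grid."""
    u = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

def peak_positions(density, grid):
    """Strict local maxima of the estimated field: candidate scattering centres."""
    mid = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
    return grid[1:-1][mid]

samples = np.array([-2.0, -1.9, -2.1, 3.0, 3.1, 2.9])   # two clusters of detections
grid = np.linspace(-5.0, 5.0, 201)
peaks = peak_positions(parzen_density(samples, grid), grid)
```

Noisy multiview detections of the same physical scatterer merge into one density peak, which is why peak analysis suppresses the false centers that per-point clustering tends to produce.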
Microwave Photonic (MWP) radars have remarkably improved traditional microwave radar hardware architectures through photonic devices. With the exceptional physical properties of photonic devices, MWP radars can emit ultra-wideband, high-linearity, high-quality linear frequency modulation signals, allowing ultra-high-resolution target imaging and detection. Different target regions exhibit distinct responses to different frequency signals during target imaging and detection due to their diverse structures and characteristics. Therefore, MWP radars have the potential to generate pseudo-color images based on scattering differences, further enhancing the information retrieval capability of MWP Synthetic Aperture Radar (MWP-SAR). Pseudo-color images generated using traditional remote sensing techniques cannot achieve centimeter-level resolution. Therefore, we propose a method for generating pseudo-color images while maintaining the resolution of MWP-SAR. The algorithm first determines an optimal sub-band echo search model and subsequently employs the optimal sub-band search algorithm to process the ultra-wideband echoes to obtain sub-band echo channels with the largest scattering characteristic differences. The multi-sub-band images are then color-composited to generate pseudo-color images that best describe the target scattering characteristics. To preserve the high resolution of MWP-SAR, a fusion model is established to combine the full-resolution SAR image with the multi-sub-band image. Finally, full-resolution pseudo-color images are successfully synthesized using measured airborne MWP-SAR data, validating the effectiveness of the algorithm. This algorithm enables MWP-SAR to obtain more target information during imaging, offering assistance in implementing imaging radar and microwave vision.
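A minimal 1-D sketch of the sub-band idea (not the paper's optimal sub-band search or fusion model): split a wideband signal into frequency sub-bands, image each sub-band separately, and map three sub-band magnitudes to RGB channels. Equal-width bands are an assumption here:

```python
import numpy as np

def subband_images(wideband, n_bands=3):
    """Split a wideband signal into frequency sub-bands and image each separately."""
    spec = np.fft.fft(wideband)
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sub = np.zeros_like(spec)
        sub[lo:hi] = spec[lo:hi]             # keep one slice of the spectrum
        bands.append(np.abs(np.fft.ifft(sub)))
    return np.stack(bands)

def pseudo_color(bands):
    """Map three sub-band magnitude images to RGB channels, each scaled to [0, 1]."""
    peak = bands.max(axis=1, keepdims=True)
    return np.moveaxis(bands / np.where(peak > 0, peak, 1.0), 0, -1)

sig = np.cos(2 * np.pi * 5 * np.arange(64) / 64)
rgb = pseudo_color(subband_images(sig))
```

A scatterer whose response differs across sub-bands then acquires a distinct hue, which is the scattering-difference cue the pseudo-color composite is meant to expose; each sub-band image has coarser resolution than the full band, hence the need for the fusion step the abstract describes.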
