Current Issue

2026 Vol. 15, No. 1
Special Topic Papers: Multi-dimensional Radar Imaging and Information Acquisition Techniques
With the increasing demands on imaging accuracy, efficiency, and robustness in modern Three-Dimensional (3D) Synthetic Aperture Radar (SAR) imaging systems, the performance of traditional 3D imaging methods, such as matched filtering and compressed sensing, has become limited in these respects. In recent years, the rapid development of Deep Learning (DL) technology has provided new theoretical solutions for SAR 3D imaging by enabling the integration of neural networks with physical radar imaging models, leading to the emergence of a learning-based imaging paradigm that combines data-driven and model-driven approaches. This paper systematically reviews recent research progress in DL-based SAR 3D imaging. Focusing on two core issues, namely super-resolution imaging and enhanced imaging, it discusses current research advances and hotspots in SAR 3D imaging, including super-resolution 3D imaging methods based on feedforward neural networks and deep unfolding networks, as well as 3D enhancement techniques such as multichannel data preprocessing and point cloud post-processing. The paper also summarizes publicly available datasets for SAR 3D imaging and explores current research challenges in DL-based SAR 3D imaging, including high-generalization, high-precision super-resolution 3D imaging, elevation-dimension disambiguation, the integrated study of 3D imaging and image enhancement, and the construction of dedicated 3D imaging datasets. Finally, it provides an outlook on future development trends, aiming to offer research references and technical guidance for scholars in related fields.
Synthetic Aperture Radar (SAR) is an active microwave sensing technology capable of all-weather, day-and-night operation, making it a critical source of Earth observation data. However, conventional two-dimensional SAR imagery often suffers from echo overlap, limiting its effectiveness for target recognition. Although Three-Dimensional (3D) SAR imaging using multibaseline observations can mitigate target occlusion, single-pass airborne or spaceborne SAR systems are typically constrained by system complexity, resulting in sparse track sampling that is inadequate for conventional 3D imaging algorithms. To address this limitation, a novel microwave vision-based 3D imaging framework has recently been proposed, in which visual semantic information is extracted and fused to enhance imaging performance. However, the characterization and application of geometric continuity in SAR imagery remain largely unexplored. This study characterizes the geometric continuity properties of typical urban buildings in the SAR 3D imaging domain and proposes sparse-track 3D imaging methods constrained by both implicit and explicit geometric continuity. Experimental results obtained from measured airborne-array InSAR data demonstrate that incorporating geometric continuity constraints effectively enhances 3D imaging performance under sparse sampling conditions. These findings indicate that geometric continuity-based representations provide a practical and effective pathway toward realizing microwave-vision 3D SAR imaging.
Non-Line-Of-Sight (NLOS) millimeter wave radar 3D imaging leverages electromagnetic wave propagation characteristics such as reflection, diffraction, scattering, and penetration to detect, locate, and image hidden targets in occluded environments. It holds significant potential for applications in autonomous driving, disaster rescue, and urban warfare. However, uncertainties introduced by reflection surfaces and occluding objects in practical NLOS scenarios, such as phase errors, aperture shadowing, and multipath effects, lead to blurred images and increased artifacts in radar imaging. To address these challenges, this study proposes a 3D imaging method for NLOS millimeter wave radar based on Range Migration (RM) operator learning, leveraging the adaptive optimization properties of deep unfolding networks and prior environmental perception. First, a 3D imaging model for NLOS millimeter wave radar in Looking Around Corner (LAC) scenarios is established. An RM kernel operator is introduced to enhance imaging efficiency and reduce computational complexity. Second, a high-precision NLOS 3D imaging network is constructed based on the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA) framework. Utilizing features specific to NLOS scenes and designing algorithm parameters as functions of network weights, the method achieves high-precision, high-efficiency 3D reconstruction of NLOS targets. Finally, a near-field NLOS millimeter wave radar imaging platform is developed. Experimental validations are performed on targets, including metal letters “O” and “S”, an Eiffel Tower model, and an artificial satellite model, under both ideal and non-ideal reflection surface conditions. The results demonstrate that the proposed method significantly improves 3D imaging precision, achieving a two-orders-of-magnitude increase in computational speed over traditional sparse imaging algorithms.
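The unfolding network above is built around the classical FISTA iteration. As a minimal self-contained sketch of that iteration (plain NumPy with fixed, hand-chosen step size and threshold rather than learned parameters, and a random Gaussian sensing matrix standing in for the RM operator; this is not the paper's network), a sparse scene can be recovered from underdetermined measurements as follows:

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, y, lam=0.05, n_iter=200):
    # FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        x_new = soft(z - grad / L, lam / L)
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        z = x_new + (t - 1) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)    # underdetermined sensing matrix
x_true = np.zeros(100); x_true[[7, 30, 81]] = [1.0, -0.8, 0.6]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = fista(A, y)
```

In a deep-unfolding network, each loop iteration becomes one layer whose step size and threshold are trained from data instead of being derived from `L` and `lam`.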
Synthetic Aperture Radar Tomography (TomoSAR) has emerged as the primary technique for generating 3D SAR point clouds. In practice, ignoring the quadratic phase distribution in the elevation dimension causes defocusing artifacts due to inherent Fresnel diffraction in tomographic SAR processing. By comparing with optical diffraction theory, this paper identifies similar diffraction effects in the third dimension of SAR images and introduces a sparse matched filtering technique for tomographic focusing. Our method is based on deriving a sparse phase compensation factor to construct the matched filter. The proposed processing chain includes three key steps. First, a normalized sparse frequency profile in the tomographic dimension is constructed using the spatial geometric baseline of TomoSAR acquisitions. Next, we derive a frequency-domain sparse matched filter based on Fresnel integral properties incorporating system parameters such as wavelength, range, and aperture size. Finally, phase compensation is applied through the designed sparse filter, enabling elevation target detection using established sparse imaging techniques, including compressed sensing and likelihood ratio detection. This study employs airborne SAR data acquired by the Aerospace Information Research Institute, Chinese Academy of Sciences, to validate the proposed frequency-domain sparse matched filter. Experimental results demonstrate that our method effectively reduces tomographic defocusing artifacts caused by Fresnel diffraction, substantially improving the accuracy of both target localization and backscattering coefficient estimation.
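The quadratic-phase idea can be shown on a one-scatterer toy (all system parameters below are invented for illustration; this is a far simpler stand-in for the paper's frequency-domain sparse matched filter): the multibaseline phase history carries a Fresnel-like quadratic term in the baseline, and conjugate compensation before the elevation transform refocuses the profile.

```python
import numpy as np

wavelength = 0.03            # illustrative X-band wavelength, metres
r0 = 3000.0                  # slant range to scene centre, metres
b = np.linspace(-10.0, 10.0, 64)   # perpendicular baselines, metres
s_true = 5.0                 # elevation of a point scatterer, metres

# Multibaseline signal: linear (elevation) phase plus quadratic Fresnel phase.
phase = 4 * np.pi / wavelength * (b * s_true / r0 - b**2 / (2 * r0))
g = np.exp(1j * phase)

# Matched filter: conjugate of the quadratic term only.
h = np.exp(1j * 4 * np.pi / wavelength * b**2 / (2 * r0))

# Elevation profiles with and without quadratic-phase compensation.
s_grid = np.linspace(-20.0, 20.0, 401)
steering = np.exp(-1j * 4 * np.pi / wavelength * np.outer(s_grid, b) / r0)
profile_raw = np.abs(steering @ g)
profile_comp = np.abs(steering @ (g * h))
```

With these numbers the uncompensated quadratic phase spans roughly 7 rad across the aperture, so `profile_raw` is visibly defocused while `profile_comp` peaks sharply at the true elevation.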
Tomographic SAR (TomoSAR) has significant value in scientific research and applications, as it enables three-dimensional (3D) imaging to address limitations such as scene overlay and projection geometric distortion. The elevation resolution of TomoSAR is limited by the aperture in the elevation direction. Consequently, super-resolution algorithms such as Compressive Sensing (CS) are generally used to enhance the performance of 3D imaging. However, conventional CS methods suffer from grid mismatch issues due to predefined discrete grids, which lead to limited resolution under practical constraints, such as limited channels and low signal-to-noise ratios. To address these limitations, a novel gridless super-resolution algorithm based on the structured low-rankness of joint neighboring pixels is proposed herein for tomographic 3D SAR imaging. By enhancing the intrinsic structural observation, the efficacy of the model for 3D reconstruction can be effectively improved by increasing the number of valid observations. Specifically, a gridless structured low-rank nonconvex optimization model is constructed by leveraging the joint sparse characteristics of neighboring pixels, overcoming the limitations of traditional sparse grid-based approaches. Furthermore, an efficient solution is achieved using a projected gradient descent algorithm, and the dependence of the reconstruction performance on the sampling positions is reduced by introducing an incoherent feasible region constraint. Finally, the superiority of the proposed algorithm is validated through both simulation and analysis of real measured datasets, including the SARMV3D-1.0 airborne array dataset and the spaceborne LuTan-1 dataset. The proposed algorithm achieves superior 3D reconstruction accuracy and stability compared to most existing state-of-the-art methods.
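The projected-gradient-descent step above alternates a gradient move on the data fit with a projection onto a low-rank set. A generic toy of that pattern (matrix completion with an SVD-truncation projection; much simpler than, and only an analogue of, the paper's structured low-rank TomoSAR model, with all sizes invented) looks like:

```python
import numpy as np

def svd_project(X, r):
    # Projection onto the non-convex set of matrices with rank <= r.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(1)
M_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank-2 ground truth
mask = rng.random((30, 30)) < 0.6        # observe ~60% of the entries

X = np.zeros((30, 30))
for _ in range(500):
    grad = np.where(mask, X - M_true, 0.0)   # gradient of the observed-entry data fit
    X = svd_project(X - grad, r=2)           # gradient step, then rank projection
rel_err = np.linalg.norm(X - M_true) / np.linalg.norm(M_true)
```

The unobserved entries are filled in purely by the rank constraint, which is the same mechanism that lets a structured low-rank model exploit neighboring-pixel redundancy.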
Tomographic Synthetic Aperture Radar (TomoSAR) is a key technique for 3D reconstruction of urban buildings. Although existing methods improve imaging quality by incorporating geometric constraints and have evolved into Polarimetric TomoSAR (PolTomoSAR) with multi-polarization capabilities, challenges remain in handling complex structures due to heavy reliance on geometric accuracy and limitations in polarization modeling. To address these issues, this paper proposes a novel TomoSAR 3D imaging method based on joint geometric and polarimetric constraints. The approach integrates building geometry with Pauli scattering similarity and incorporates polarization coherence optimization and probability density-based constraints to significantly enhance point cloud quality. Experiments using airborne Ku-band multi-channel SAR data over Suzhou, China, demonstrate the superiority and effectiveness of the proposed method in both accuracy and completeness of 3D reconstruction.
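For reference, the Pauli scattering vector underlying the "Pauli scattering similarity" mentioned above is a standard polarimetric construction computed from a pixel's 2×2 scattering matrix; a minimal NumPy sketch (the similarity measure here is a simple normalized coherence, shown only to illustrate the idea, not the paper's exact constraint):

```python
import numpy as np

def pauli_vector(S):
    # Pauli scattering vector of a 2x2 scattering matrix
    # [[Shh, Shv], [Svh, Svv]], assuming reciprocity (Shv == Svh).
    shh, shv, svv = S[0, 0], S[0, 1], S[1, 1]
    return np.array([shh + svv, shh - svv, 2 * shv]) / np.sqrt(2)

def pauli_similarity(S1, S2):
    # Normalised coherence between two Pauli vectors, in [0, 1].
    k1, k2 = pauli_vector(S1), pauli_vector(S2)
    return np.abs(np.vdot(k1, k2)) / (np.linalg.norm(k1) * np.linalg.norm(k2))

trihedral = np.array([[1, 0], [0, 1]], dtype=complex)   # odd-bounce (surface-like)
dihedral = np.array([[1, 0], [0, -1]], dtype=complex)   # even-bounce (double-bounce)
```

The two canonical mechanisms map to orthogonal Pauli vectors, which is why the similarity can separate wall (double-bounce) from roof (surface) returns.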
Synthetic Aperture Radar Tomography (TomoSAR), by virtue of its three-dimensional (3D) resolution capability, can be used to study the 3D structure of semitransparent targets, such as forests, icebergs, and snowpacks. Currently, TomoSAR measurements, especially spaceborne TomoSAR, are mostly obtained through repeat-pass observations, which introduce two major problems: temporal decorrelation and signal delay caused by the troposphere or ionosphere. Severe temporal decorrelation and signal delay can lead to defocused tomograms, which make it impossible to reconstruct the 3D structure of a target. Unlike repeat-pass TomoSAR systems, multi-static TomoSAR systems simultaneously collect multibaseline images, reducing temporal decorrelation to zero and canceling all types of signal delay, making them an ideal tool for 3D TomoSAR reconstruction. The Hongtu-1 constellation, launched in 2023 and operated by PIESAT Information Technology Limited, is the world’s first spaceborne multi-static SAR system. In this paper, we conduct spaceborne multi-static TomoSAR processing and forest height estimation experiments using Hongtu-1 multi-static images. By comparing tomograms from tropical and temperate forests, it is found that the X-band signal from Hongtu-1 cannot reach the ground in dense tropical forests, but can in temperate forests, owing to their much lower tree and leaf density. This indicates that Hongtu-1 is capable of forest height measurement in temperate forests. By comparing forest height inversion in temperate forests obtained from Hongtu-1 TomoSAR and GEDI LiDAR, it is found that, at the test sites considered in this paper, Hongtu-1 TomoSAR measurements can provide more accurate forest height inversion (with a 35% improvement), more measurement points, and higher-resolution products than GEDI, which further demonstrates the capability and superiority of Hongtu-1 TomoSAR in forest height estimation.
Special Topic Papers: Advanced Array Signal Processing Technology
Moving target tracking is a fundamental task in bistatic Multiple-Input Multiple-Output (MIMO) radar systems, as it is essential for improving sensing accuracy and real-time adaptability in dynamic environments. This paper proposes a tracking algorithm based on Adaptive Tensor Decomposition (ATD) to address accuracy degradation caused by target dynamics and high-dimensional data coupling. A third-order streaming tensor is first established to model the time-varying, multi-dimensional structure of received signals from moving targets, which jointly incorporates the Direction Of Departure (DOD) and Direction Of Arrival (DOA). A dynamic mapping is then derived from the tensor to characterize the relationship between the target’s spatial state and the factor matrices. Next, a random dimensionality reduction strategy is integrated into the adaptive tensor decomposition, which iteratively updates the factor matrices that contain target state information, thereby enabling real-time and robust tracking of target angles. Finally, numerical simulations are conducted to evaluate the tracking performance of the proposed method. The results demonstrate that it provides continuous and stable tracking of moving targets under low Signal-to-Noise Ratio (SNR) conditions. Compared to classical approaches, the proposed algorithm reduces computational time by one to two orders of magnitude, demonstrating its effectiveness and real-time applicability in complex and dynamic environments.
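The link between factor matrices and target angles can be illustrated on a static rank-1 toy: build a DOD × DOA × slow-time tensor for a single target, extract the factors with a higher-order power iteration, and read the angles off the factors' phase ramps. This is only a stand-in for the paper's adaptive streaming decomposition, and every parameter below is invented:

```python
import numpy as np

def steer(n, theta_deg):
    # Half-wavelength ULA steering vector.
    return np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(theta_deg)))

rng = np.random.default_rng(2)
m, n, t = 8, 8, 20
dod_true, doa_true = 10.0, -20.0
a, b = steer(m, dod_true), steer(n, doa_true)
c = np.exp(1j * 2 * np.pi * 0.07 * np.arange(t))       # slow-time (Doppler) factor
X = np.einsum('i,j,k->ijk', a, b, c)                   # rank-1 signal tensor
X = X + 0.02 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

# Rank-1 CP factors via higher-order power iteration.
u = np.ones(m, complex); v = np.ones(n, complex); w = np.ones(t, complex)
for _ in range(30):
    u = np.einsum('ijk,j,k->i', X, v.conj(), w.conj()); u /= np.linalg.norm(u)
    v = np.einsum('ijk,i,k->j', X, u.conj(), w.conj()); v /= np.linalg.norm(v)
    w = np.einsum('ijk,i,j->k', X, u.conj(), v.conj()); w /= np.linalg.norm(w)

def angle_from_factor(f):
    # Average phase ramp of a steering-vector factor -> angle estimate.
    return np.rad2deg(np.arcsin(np.angle(np.vdot(f[:-1], f[1:])) / np.pi))

dod_hat, doa_hat = angle_from_factor(u), angle_from_factor(v)
```

In the streaming setting, each new slow-time slice would update `u` and `v` incrementally instead of re-running the full iteration, which is where the computational savings come from.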
In light of challenges related to weak target detection and limited communication performance in extended clutter environments, this paper proposes a joint design of a transmit waveform and receive filter within a Multiple-Input Multiple-Output (MIMO) Radar Communication Integration (RCI) system, considering the uncertainty in the extended Target Impulse Response (TIR). Due to difficulties in accurately determining the extended TIR, an objective function was formulated to maximize the minimum Signal-to-Interference-plus-Noise Ratio (SINR) over a spherical TIR uncertainty set. To ensure reliable information transmission for each user and to achieve desirable properties of the ambiguity function for the transmission waveform, per-user interference constraints were imposed, along with constraints on waveform similarity and peak-to-average ratio. A cyclic optimization algorithm was introduced to address the nonconvex quadratic constrained fractional programming problem. The optimal receive filter was first derived using a generalized Rayleigh quotient, and the nonconvex part of the original NP-Hard problem was then transformed into a convex problem using the Lagrange duality principle and subsequently solved by the semidefinite optimization method. Also, the convergence and computational complexity of the proposed algorithm are thoroughly discussed. Furthermore, the simulation results confirmed that the algorithm effectively enhances SINR in extended clutter environments and fulfills the communication needs of multiple users.
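The generalized Rayleigh quotient step has a well-known closed form: the SINR-maximizing receive filter is proportional to the inverse interference-plus-noise covariance applied to the signal signature. A minimal sketch with an invented array geometry and a single interferer (not the paper's extended-target model):

```python
import numpy as np

n = 16
s = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(15.0)))   # target signature

# Interference-plus-noise covariance: one strong interferer plus white noise.
j = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(-40.0)))
R = 100.0 * np.outer(j, j.conj()) + np.eye(n)

# Maximiser of the generalized Rayleigh quotient |w^H s|^2 / (w^H R w).
w = np.linalg.solve(R, s)

def sinr(w):
    return float(np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R @ w))
```

The whitened filter `w` places a null on the interferer, so its output SINR approaches the noise-limited value `s^H R^{-1} s`, while the plain matched filter `s` is dominated by interference leakage.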
Signal-level cooperative detection based on multichannel observations is a pivotal technique in distributed Multiple-Input Multiple-Output (MIMO) radar for probing targets via the joint processing of multiple echo channels. However, such cooperative processing imposes substantial demands on computational and communication resources. To address this challenge, moving target detection using distributed MIMO radar with low-bit quantization in the presence of generalized Gaussian noise was investigated herein. In particular, the detectors were designed based on the Generalized Likelihood Ratio Test (GLRT) and the Generalized Rao (G-Rao) test. The maximum likelihood of the target reflection coefficient and Doppler frequency is estimated using the GLRT, whereas the G-Rao test directly constructs statistics based on a score function. These methods avoid redundant parameter searches and effectively reduce computational complexity. A Dynamic Programming (DP) algorithm was used to optimize the quantization threshold and improve the detection performance. Experimental results demonstrate that the G-Rao test is more computationally efficient than the GLRT method. In addition, threshold optimization considerably improves target detection performance compared with a uniform quantization threshold, and DP exhibits lower computational complexity than existing algorithms, such as the Particle Swarm Optimization Algorithm (PSOA).
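Why the quantization threshold matters can be seen in a one-bit toy: the deflection (output signal-to-noise ratio) of a quantized detector depends sharply on where the threshold sits. The grid search below is only an illustration of the effect, not the paper's dynamic-programming optimizer, and it assumes plain Gaussian rather than generalized Gaussian noise:

```python
import numpy as np
from math import erf, exp, sqrt, pi

def deflection(tau):
    # Deflection coefficient of a 1-bit detector 1{x > tau} for a small
    # positive mean shift in unit-variance Gaussian noise:
    # sensitivity dP(one)/dmu at mu = 0, normalised by the noise-only
    # standard deviation of the bit.
    P0 = 0.5 * (1.0 - erf(tau / sqrt(2.0)))    # P(bit = 1 | noise only)
    dP = exp(-tau * tau / 2.0) / sqrt(2.0 * pi)
    return dP / sqrt(P0 * (1.0 - P0))

taus = np.linspace(-2.0, 2.0, 401)
best = taus[np.argmax([deflection(t) for t in taus])]
```

For a mean-shift signal, the deflection peaks at a zero threshold and degrades quickly away from it; in the multichannel, generalized Gaussian setting of the paper the optimum is channel-dependent, which is what makes a principled search such as DP worthwhile.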
To address the high computational complexity of beamforming in large-scale rectangular phased arrays, this paper proposes a low-complexity beampattern shaping method based on dimension decoupling, which markedly enhances design efficiency and beampattern adjustment flexibility. By fully exploiting the configuration characteristics of rectangular arrays, an analytical beamforming expression is derived that decouples the azimuth and elevation steering vectors. This transformation converts the traditional high-dimensional weight vector design problem into a joint optimization of two low-dimensional weight vectors, thereby substantially reducing computational complexity. On this basis, an optimization model is formulated that minimizes the peak sidelobe level under constraints on beam levels and noise output power. To solve the resultant optimization problem, an iterative algorithm based on the proximal alternating direction method of multipliers is developed, and sufficient conditions for algorithmic convergence are rigorously derived to ensure solution stability and reliability. Simulation results demonstrate that the proposed method substantially improves computational efficiency and enables flexible adjustment of mainlobe width and null depth according to prior information. Furthermore, it achieves a precise trade-off between peak sidelobe suppression and signal-to-noise ratio loss, exhibiting excellent potential for engineering applications.
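The decoupling step exploits the fact that a rectangular-array steering vector is the Kronecker product of an azimuth factor and an elevation factor, so a separable weight vector reduces one MN-dimensional design to two small ones. A minimal sketch, assuming half-wavelength spacing and illustrative array sizes:

```python
import numpy as np

M, N = 8, 10   # elements along the two array axes (illustrative)
d = 0.5        # element spacing in wavelengths

def steer(n_elem, sin_angle):
    # 1-D uniform-linear-array steering vector
    k = np.arange(n_elem)
    return np.exp(2j * np.pi * d * k * sin_angle)

az, el = np.deg2rad(20.0), np.deg2rad(10.0)
a_az = steer(M, np.sin(az))
a_el = steer(N, np.sin(el))

w_az = a_az / M   # low-dimensional azimuth weights
w_el = a_el / N   # low-dimensional elevation weights

# Full-array response equals the product of the two 1-D responses
# (Kronecker mixed-product property).
a_full = np.kron(a_el, a_az)
w_full = np.kron(w_el, w_az)
resp_full = np.vdot(w_full, a_full)
resp_sep = np.vdot(w_az, a_az) * np.vdot(w_el, a_el)
print(np.allclose(resp_full, resp_sep))  # True
```

The optimization described above then operates jointly on the two small vectors (here `w_az` and `w_el`) instead of the full MN-dimensional weight vector.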
To address beam pattern distortion and monopulse angle estimation precision degradation associated with adaptive beamforming processing in the presence of mainlobe self-defense or escort jamming, a joint Space-Polarization Jamming Suppression method based on Dimensional Decomposition (SPJS-DD) is proposed for digital phased array antennas. In SPJS-DD, the orthogonality between the spatial array steering vectors in the azimuth and elevation dimensions of a dual-polarized rectangular planar array antenna is derived first. The azimuth and elevation dimensions of the rectangular array are then alternately selected as the Angle Estimation Dimension (AED), with the other serving as the Non-Angle Estimation Dimension (NAED). The adaptive beamforming process in SPJS-DD is divided into two stages. The first-stage processing is applied in the NAED, where mainlobe jamming is adaptively suppressed using the degrees of freedom available in the joint spatial-polarized domain subject to a constraint on the desired steering direction. In the second stage, quiescent sum and difference weights are applied in the AED to preserve the monopulse beam pattern required for accurate angle estimation. Through this two-stage decomposition, SPJS-DD suppresses mainlobe jamming in the NAED while maintaining an undistorted monopulse beam pattern in the AED. Simulation results verify that the proposed SPJS-DD method effectively suppresses mainlobe jamming and achieves high-precision angle estimation.
The challenge of distinguishing multiple targets and mitigating image blurring caused by Doppler gradient disappearance in the forward-looking direction of moving platforms is addressed through a multichannel radar forward-looking imaging method based on dual-network collaboration. The proposed method establishes a hierarchical, cascaded, end-to-end processing framework. First, a target Numerical Estimation Network (NEN) predicts the number of targets within the main lobe by analyzing the characteristics of the echo covariance matrix. Then, according to the estimated target count, a pretrained Angle Estimation Network (AEN) model is dynamically selected to determine the azimuth angles of the targets. Finally, target intensity estimation and two-dimensional projection imaging are performed in combination with an improved iterative adaptive algorithm. Simulation and experimental results demonstrate that, compared with conventional super-resolution algorithms, the proposed method achieves more effective simultaneous estimation and accurate reconstruction of parameters for both strong and weak targets in the forward-looking region. Specifically, it attains 86.75% accuracy in target number estimation, while the root mean square error of angle estimation remains below 0.2° in two-target scenarios, significantly enhancing the quality of forward-looking imaging.
Synthetic Aperture Radar
Spaceborne Interferometric Synthetic Aperture Radar (InSAR) enables surface elevation measurement and deformation monitoring by measuring phase differences along the radar line of sight. However, meeting the future demand for higher-precision measurements remains challenging: analytical models linking InSAR system design parameters to measurement accuracy are still limited by incomplete key parameters and insufficient or unclear physical constraints. These limitations restrict the development of next-generation InSAR technology. This study examines the complex multifactor coupling between system design parameters and measurement accuracy. It provides a detailed analysis of the imaging mechanism and theoretical constraints of spaceborne InSAR with spatial and temporal baselines and presents a spatiotemporal error model integrating multisource decorrelation. The nonlinear relationship between baseline parameters and measurement accuracy is quantitatively characterized, and a comprehensive evaluation framework is established based on key indicators such as coherence, elevation accuracy, and coherent temporal baseline-based deformation sensitivity. Building on these analyses, the concept and system architecture of very large baseline spaceborne InSAR are proposed, and its performance is analyzed in detail. The associated technical challenges, including orbit configuration, system design, synchronization, error correction, and phase unwrapping, are systematically discussed. Potential applications of this type of InSAR system architecture in high-precision elevation measurement, deformation measurement, and distributed SAR systems are introduced. The proposed framework provides theoretical support for the design of next-generation high-precision, multidimensional InSAR systems and is expected to play a key role in the frontier of Earth science exploration and the safety assurance of major national engineering projects.
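The coherence and elevation-accuracy indicators above obey standard interferometric relations; as a hedged reference in conventional notation (repeat-pass geometry; symbols are the customary ones, not necessarily the paper's):

```latex
% Multilook interferometric phase standard deviation for coherence
% \gamma and L independent looks (Cramer-Rao form):
\sigma_{\varphi} = \sqrt{\frac{1 - \gamma^{2}}{2L\,\gamma^{2}}}
% Resulting elevation error for wavelength \lambda, slant range R,
% incidence angle \theta, and perpendicular baseline B_{\perp}:
\sigma_{h} = \frac{\lambda R \sin\theta}{4\pi B_{\perp}}\,\sigma_{\varphi}
```

These relations make the nonlinear trade-off explicit: enlarging the perpendicular baseline shrinks the elevation error for a fixed phase noise, but baseline decorrelation lowers the coherence and inflates the phase noise, which is why very large baseline designs require careful error modeling.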
Spaceborne Synthetic Aperture Radar (SAR) data may be prone to interrupted-sampling repeater jamming and many common unintentional interferences, such as linear frequency modulated pulses. In this paper, we first divide a single-look complex SAR image into multiple sub-band images of equal bandwidth in the range frequency domain. Then, we model the pixel intensity of these sub-band images and analyze the fluctuation mechanism of interfering and noninterfering pixels across the sub-bands. The findings reveal that the energy distribution of interfering pixels is uneven across different sub-bands, leading to substantial intensity fluctuations within the sub-band domain, whereas the intensity of noninterfering pixels remains relatively stable. Based on this observation, we define sub-band contrast and sub-band entropy as statistical measures to quantify fluctuation characteristics across the sub-bands. These measures are then compared with certain thresholds to obtain detection results. Statistical analysis revealed that under noninterfering conditions, these two statistics approximately follow the beta distribution. By leveraging this finding, we fit the distributions of these measures using the beta distribution and develop a method to determine detection thresholds under the constant-false-alarm-rate criterion. Experimental results showed that the proposed method can effectively detect interrupted-sampling repeater jamming and common unintentional interferences. In addition, we investigated the impact of the jamming-to-signal ratio on detection performance and verified the reliability and stability of the method via Monte Carlo simulations. Furthermore, we introduced an interference suppression technique based on a rank-1 model to reduce the adverse effects of interference on downstream tasks. This technique is capable of adaptively suppressing interference in detected regions.
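The fluctuation statistics can be sketched on toy per-pixel sub-band energies; `subband_stats`, the sub-band count, and the example energy profiles are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def subband_stats(intensities):
    # contrast: normalized spread of energy across sub-bands
    # entropy: evenness of the energy distribution across sub-bands
    p = intensities / intensities.sum()
    contrast = intensities.std() / intensities.mean()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return contrast, entropy

K = 8  # number of range-frequency sub-bands (illustrative)
flat = np.full(K, 1.0) + 0.05 * rng.random(K)  # noninterfering: even energy
spiky = np.full(K, 0.1)                        # interfering: energy
spiky[2] = 5.0                                 # concentrated in one band

c0, e0 = subband_stats(flat)
c1, e1 = subband_stats(spiky)
print(c0 < c1, e0 > e1)  # interference: higher contrast, lower entropy
```

Under no interference both statistics stay concentrated, which is what allows fitting them with beta distributions and setting constant-false-alarm-rate thresholds; interference drives contrast up and entropy down past those thresholds.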
Small rotorcraft Unmanned Aerial Vehicles (UAVs), owing to their compact size, lightweight nature, and excellent maneuverability, are often used as platforms for Synthetic Aperture Radar (SAR) systems. These UAVs exhibit great potential in complex environment detection at low altitudes. However, the operation of small rotorcraft UAVs involves sharp, random motion errors during flight at low altitudes. Additionally, the limited payload capacity of these vehicles further restricts their ability to carry high-precision positioning equipment. These motion errors become a key factor affecting the imaging accuracy of UAV-mounted through-the-wall SAR imaging. To address this drawback, a conventional error compensation approach based on the Stage by Stage Approaching (SSA) algorithm has been proposed. That approach is based on the spotlight (bunching) SAR imaging mechanism and assumes that the phase error is the same for all pixels in the scene; this assumption does not hold under wide-beam conditions. Therefore, this paper presents a wide-beam motion error compensation method for through-the-wall SAR imaging based on the SSA algorithm. The method employs the Back Projection (BP) algorithm to model the motion errors of the radar echo of the rotorcraft UAVs. Using the image entropy evaluation criterion of SAR, the SSA optimization algorithm was applied in this study to estimate the phase errors of the antenna phase center for each pixel in the imaging scene. Subsequently, the BP algorithm was used to perform high-precision phase compensation for each pixel, thereby addressing the spatial variation of motion errors in the wide-beam through-the-wall SAR system. The results of the simulation and experimental data processing reveal that the proposed algorithm can accurately compensate for spatially varying motion errors in wide-beam scenarios. It enables good focusing of multiple targets in the scene and effectively resolves the problem of spatially varying motion errors in wide-beam through-the-wall SAR imaging.
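The image-entropy criterion that scores candidate phase-error estimates can be sketched as follows (toy images; the paper evaluates it on BP images of the scene):

```python
import numpy as np

def image_entropy(img):
    # entropy of the normalized pixel power distribution:
    # low when energy is concentrated, high when it is smeared
    power = np.abs(img) ** 2
    p = power / power.sum()
    return -np.sum(p * np.log(p + 1e-12))

focused = np.zeros((16, 16))
focused[8, 8] = 1.0                  # well-focused point target
blurred = np.ones((16, 16)) / 256.0  # defocused, smeared energy

print(image_entropy(focused) < image_entropy(blurred))  # True
```

Lower entropy indicates energy concentrated into few pixels, i.e., better focusing, so the SSA search described above seeks phase-error estimates that minimize this quantity.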
Radar Signal and Data Processing
Detecting targets despite sea clutter is crucial in military and civilian applications. In complex marine environments, sea clutter exhibits target-like spikes and inherently broad-spectrum characteristics, posing a significant challenge for marine radars in detecting Low-Slow-Small (LSS) targets and leading to high false alarm rates. In this study, an S-band holographic staring radar with high-Doppler and high-range-resolution capabilities (i.e., “dual-high” capability) was utilized in sea detection experiments. We obtained sea clutter data, LSS target data (over the sea surface and in the air), ground truth data on target positions and trajectories, as well as wind and wave data. Using these data, we constructed an S-band holographic staring radar dataset for low-observable targets at sea. The time-domain, frequency-domain, and time-Doppler characteristics of the dataset were analyzed, and the results served as a reference for data utilization. Future work will involve continuing experiments to expand the maritime experimental environment (e.g., sea state and region) and target types toward enhancing data diversity. This open dataset will support the enhancement of new radar systems for detecting low-observable targets at sea and improving maritime target detection and recognition performance.
In radar systems that track multiple maneuvering targets, conventional approaches often suffer from performance degradation due to suboptimal resource allocation and insufficient utilization of prior information. To address this challenge and significantly enhance tracking performance under equivalent resource constraints, a resource allocation and precise tracking algorithm for multiple maneuvering targets is proposed. First, by integrating a multiple model interaction architecture with tracker feedback prediction, a probabilistic distribution model for target position prediction through multiple model interaction is constructed. This model establishes an integrated detection and tracking method based on multiple model interactions to achieve precise tracking of maneuvering targets. Subsequently, by analytically modeling the coupling mechanism between radar resources and tracking performance, and deriving the Bayesian Cramér-Rao Lower Bound (BCRLB) for maneuvering targets, a performance-driven multimodel weighted resource allocation framework is developed. Simulations validate that the proposed method can significantly enhance the overall tracking precision of multiple maneuvering targets under equivalent resource consumption.
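As a hedged illustration of performance-driven weighted allocation: if each target's tracking variance scales as c_k / p_k in the allocated resource p_k (a common BCRLB-style simplification, not the paper's exact bound), minimizing the weighted sum of variances under a total budget has a closed form, p_k proportional to sqrt(w_k * c_k). The weights and constants below are illustrative:

```python
import numpy as np

def allocate(w, c, P):
    # minimize sum_k w_k * c_k / p_k subject to sum_k p_k = P;
    # the Lagrangian stationarity condition gives p_k ~ sqrt(w_k * c_k)
    s = np.sqrt(w * c)
    return P * s / s.sum()

w = np.array([1.0, 2.0, 4.0])  # per-target importance weights
c = np.array([3.0, 1.0, 1.0])  # per-target bound constants

p = allocate(w, c, P=10.0)
print(p)  # sums to the power budget P = 10

def cost(alloc):
    return np.sum(w * c / alloc)

# the closed form beats uniform allocation
print(cost(p) <= cost(np.full(3, 10.0 / 3)))  # True
```

The same pattern, resource shifted toward targets whose weighted bound is largest, is what a BCRLB-driven allocation framework formalizes.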
In recent years, with the increasing diversification of mission requirements, radar imaging has expanded from conventional side-looking and squint-looking modes to the forward-looking mode. In this regard, the monopulse imaging method offers several advantages, including its forward-looking imaging capability, real-time processing ability, and effective anti-jamming performance. These features can help efficiently overcome the problems faced by conventional imaging methods, i.e., low azimuth resolution and Doppler ambiguity in the forward-looking region. Hence, this method has emerged as a key solution to these challenges. This study first explores the distinction between monopulse tracking and monopulse imaging, followed by a systematic review of the existing technical approaches and evaluation metrics for monopulse imaging. Subsequently, the performance of different methods is analyzed, and specific applications of monopulse imaging technology in various scenarios are introduced, including three-dimensional imaging, moving target localization and imaging, and multi-view image fusion. The paper ends with a discussion of the development trends of monopulse imaging technology and an analysis of future research directions, such as imaging quality improvement and the expansion of the application scope.
Radar Countermeasure Technology
The development of intelligent jamming decision-making technology has substantially enhanced the survival and confrontation capabilities of sensitive targets on the battlefield. However, existing jamming decision-making algorithms only consider active jamming while neglecting the optimization of passive jamming strategies. This limitation seriously restricts the application of adversarial models in jamming decision-making scenarios. To address this defect, this paper constructs a joint optimization method for active-passive jamming strategies based on Rainbow Deep Q-Network (DQN) and dichotomy. The method uses Rainbow DQN to determine the sequence of active and passive jamming styles and applies dichotomy (bisection search) to dynamically locate the optimal release position of passive jamming. Additionally, considering the partially observable nature of the jamming confrontation environment, this paper further designs an optimization method for active-passive jamming strategies based on Rainbow DQN and Baseline DQN. A reward function based on changes in the radar beam pointing is also introduced to accurately feed back the effectiveness of the jamming strategy. Through simulation experiments in jammer-radar confrontations, the proposed method is compared with three mainstream jamming decision models: Baseline DQN, Dueling DQN, and Double DQN. Results show that, compared with these models, the proposed method improves the Q value by an average of 2.43 times and the mean reward by an average of 3.09 times, and reduces the number of decision-making steps for locating the passive jamming release position by more than 50%. These experimental results show that the proposed joint active-passive jamming strategy optimization method based on Rainbow DQN and dichotomy substantially enhances decision-making effectiveness, improves the applicability of jamming strategy models, and markedly increases the value of the jammer in electronic countermeasures.
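The dichotomy (bisection) component can be sketched as interval halving driven by a hypothetical, unimodal jamming-effectiveness score; the score function and all names here are illustrative:

```python
def bisect_position(score, lo, hi, tol=1e-3):
    # halve the interval each step, keeping the half whose midpoint
    # scores better; for a unimodal score the optimum stays inside
    steps = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if score((lo + mid) / 2) >= score((mid + hi) / 2):
            hi = mid
        else:
            lo = mid
        steps += 1
    return (lo + hi) / 2, steps

# toy unimodal effectiveness, peaked at position x = 0.7
pos, steps = bisect_position(lambda x: -(x - 0.7) ** 2, 0.0, 1.0)
print(round(pos, 3), steps)  # close to 0.7 after 10 halvings
```

Each step halves the search interval, so the release position is located in logarithmically many decisions rather than by exhaustive position search.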
In the field of radar target recognition, the introduction of the Icosahedron Triangular Trihedral Corner Reflector (ITTCR) has increased the difficulty of target identification tasks, especially under moderate to high sea states. Under such conditions, the undulating sea surface can couple with an ITTCR to produce scattering characteristics similar to those of the target, degrading the performance of traditional target identification methods. As a solution, a joint matrix of polarization features and range was constructed by considering the dominant scattering mechanisms and scattering complexity. This matrix characterizes the component-level differences between ships and ITTCR arrays in the presence of sea clutter. Subsequently, a temporal neural network extracts features from the joint matrices of the vessels and ITTCR arrays, achieving effective target identification. The performance of the proposed method was validated using a dataset. The proposed method effectively reduces information loss during manual knowledge refinement. Under moderate to high sea states, the proposed method achieves an accuracy 10.14% higher than that of existing methods and considerably reduces false alarms caused by ITTCR arrays.
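The joint polarization-feature/range matrix can be illustrated with a generic Pauli-basis stand-in. This is a hedged sketch, not the paper's actual feature set: the input names and the choice of Pauli powers as proxies for the dominant-scattering and scattering-complexity descriptors are assumptions.

```python
import numpy as np

def polarization_range_matrix(s_hh, s_hv, s_vv):
    """Build a joint polarization-feature / range matrix from complex
    scattering coefficients of shape [time, range_bins].

    The three Pauli-basis powers below are generic stand-ins for the
    paper's dominant-scattering and scattering-complexity features.
    """
    surface = 0.5 * np.abs(s_hh + s_vv) ** 2   # odd-bounce (surface-like)
    dihedral = 0.5 * np.abs(s_hh - s_vv) ** 2  # even-bounce (dihedral-like)
    volume = 2.0 * np.abs(s_hv) ** 2           # cross-pol (volume-like)
    # Stack along a trailing feature axis: [time, range_bins, 3], a
    # shape a temporal network can consume frame by frame.
    return np.stack([surface, dihedral, volume], axis=-1)
```

Arranging per-range-bin polarization features along a time axis in this way is what lets a temporal network model how ship and ITTCR-array signatures evolve differently across pulses.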
Radar Remote Sensing Application
In recent years, the rapid development of Multimodal Large Language Models (MLLMs) and their applications in earth observation have garnered significant attention. Earth observation MLLMs achieve deep integration of multimodal information, including optical imagery, Synthetic Aperture Radar (SAR) imagery, and textual data, by designing bridging mechanisms between large language models and vision models combined with joint training strategies. This integration facilitates a paradigm shift in intelligent earth observation interpretation, from shallow semantic matching to higher-level understanding based on world knowledge. In this study, we systematically review the research progress in the applications of MLLMs in earth observation, specifically examining the development of Earth Observation MLLMs (EO-MLLMs), which provides a foundation for future research directions. Initially, we discuss the concept of EO-MLLMs and review their development in chronological order. Subsequently, we provide a detailed analysis and statistical summary of the proposed architectures, training methods, applications, and corresponding benchmark datasets, along with an introduction to Earth Observation Agents (EO-Agent). Finally, we summarize the research status of EO-MLLMs and discuss future research directions.