Accepted Article Preview
With the emergence of the low-altitude economy, the communication and detection of unmanned aerial vehicles (UAVs) have gained considerable attention. This paper investigates sensing reference signal design for integrated sensing and communication (ISAC) in orthogonal frequency division multiplexing (OFDM) systems aimed at detecting long-range, high-speed UAVs. To avoid ambiguity in long-range, high-speed UAV detection, traditional reference signal designs require densely arranged reference signals, leading to significant resource overhead. In addition, long-range detection based on OFDM waveforms faces challenges from inter-symbol interference (ISI). To address these issues, this paper first proposes a reference signal pattern that supports long-range detection and resists ISI, achieving the maximum unambiguous detection range of the system with reduced resource overhead. Then, to address the challenge of high-speed detection, the paper incorporates range rate into a Chinese Remainder Theorem-based method. Through proper configuration of the sensing reference signals and cancellation of ghost targets, this approach significantly increases the unambiguous detection velocity while minimizing resource usage and avoiding the generation of ghost targets. The effectiveness of the proposed methods is validated through simulations: compared with the traditional sensing reference signal design, the proposed scheme reduces the reference signal overhead by 72% for long-range, high-speed UAV detection.
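As a toy illustration of the Chinese Remainder Theorem idea used for velocity disambiguation, the sketch below resolves a true velocity from two aliased measurements taken with different unambiguous spans; the spans, tolerance, and brute-force search are illustrative assumptions, not the paper's reference-signal configuration.

```python
import numpy as np

def resolve_velocity(v1, v2, V1, V2, v_max, tol=0.5):
    """v1, v2: aliased velocity measurements with unambiguous spans V1, V2.
    Returns the candidate true velocity in [0, v_max) consistent with both
    measurements, found by brute-force search over the fold count of span V1."""
    best, best_err = None, np.inf
    for k1 in range(int(np.ceil(v_max / V1))):
        cand = v1 + k1 * V1                              # hypothesis from measurement 1
        err = abs((cand - v2 + V2 / 2) % V2 - V2 / 2)    # wrapped distance to measurement 2
        if err < best_err:
            best, best_err = cand, err
    return best if best_err < tol else None

# True velocity 173 m/s observed with unambiguous spans of 60 m/s and 77 m/s.
v_true, V1, V2 = 173.0, 60.0, 77.0
print(resolve_velocity(v_true % V1, v_true % V2, V1, V2, v_max=300.0))  # -> 173.0
```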
Light Detection And Ranging (LiDAR) systems lack texture and color information, while cameras lack depth information. Thus, the information obtained from LiDAR and cameras is highly complementary, and combining these two types of sensors yields rich observation data and improves the accuracy and stability of environmental perception. Accurate joint calibration of the external parameters of these two types of sensors is a prerequisite for data fusion. At present, most joint calibration methods rely on calibration targets and manual point selection, which makes them unusable in dynamic application scenarios. This paper presents ResCalib, a deep neural network model for the online joint calibration of LiDAR and a camera. The method takes LiDAR point clouds, monocular images, and the camera intrinsic parameter matrix as input to solve for the external parameters between the LiDAR and the camera, with little dependence on external features or targets. ResCalib is a geometrically supervised deep neural network that automatically estimates the six-degree-of-freedom external parameter relationship between LiDAR and cameras by implementing supervised learning to maximize the geometric and photometric consistency of the input images and point clouds. Experiments show that the proposed method can correct initial calibration errors of up to ±10° in rotation and ±0.2 m in translation. The mean absolute errors of the rotation and translation components of the calibration solution are 0.35° and 0.032 m, respectively, and the time required for a single calibration is 0.018 s, which provides technical support for automatic joint calibration in dynamic environments.
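The geometric ingredient behind such calibration networks is the projection of LiDAR points into the image with candidate extrinsics and camera intrinsics; the minimal sketch below (not ResCalib itself, with illustrative parameter values) shows that projection step, against which a geometric or photometric consistency loss could be evaluated.

```python
import numpy as np

def project_lidar_to_image(points, R, t, K, img_shape):
    """points: (N,3) LiDAR points; R: (3,3) rotation; t: (3,) translation;
    K: (3,3) camera intrinsics. Returns pixel coordinates and depths of the
    points that fall inside an image of shape (height, width)."""
    cam = points @ R.T + t                    # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0.1]                # keep points in front of the camera
    uvw = cam @ K.T                           # pinhole projection
    u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]
    h, w = img_shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return u[ok], v[ok], cam[ok, 2]

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts = np.random.rand(1000, 3) * [4, 3, 8] + [-2, -1.5, 2]   # synthetic points in view
u, v, d = project_lidar_to_image(pts, np.eye(3), np.zeros(3), K, (480, 640))
print(u.size)   # number of points landing inside the image
# A consistency loss would compare these projected depths/intensities with the
# image content at (u, v) for candidate extrinsics (R, t).
```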
The Dual-Function Radar and Communication (DFRC) platform integrates detection and communication functions on shared hardware and transmit waveforms, effectively addressing issues such as platform limitations, resource constraints, and electromagnetic compatibility; it has therefore become a research hotspot in recent years. DFRC technology, centered on detection functionality and incorporating limited communication capabilities, has remarkable application prospects in typical detection scenarios, such as early warning, surveillance, and tracking guidance under future combat conditions. This paper focuses on signal design methods that optimize radar detection performance by adjusting the trade-off between detection and communication across multi-domain resources while guaranteeing a minimum communication performance. First, the performance measurement criteria of DFRC systems are summarized. Then, the paper provides a comprehensive introduction to DFRC signal design methods under typical detection scenarios and a thorough analysis of the problems and current solutions of each signal design method. Finally, a summary and future research directions are outlined.
Modern radar systems face increasingly complex challenges in tasks such as detection, tracking, and identification. The diversity of task types, limited data resources, and strict execution time requirements make radar task scheduling a strongly NP-hard problem. Moreover, existing scheduling algorithms struggle to efficiently handle multiradar collaborative tasks involving complex logical constraints. Therefore, Artificial Intelligence (AI)-based scheduling algorithms have gained significant attention; however, their efficiency depends heavily on effectively extracting the key features of the problem. The ability to quickly and comprehensively extract common features of multiradar scheduling problems is essential for improving the efficiency of such AI scheduling algorithms. Therefore, this paper proposes a Model Knowledge Enhanced Graph Neural Network (MKEGNN) scheduling algorithm. This method frames the radar task collaborative scheduling problem as a heterogeneous network graph, leveraging model knowledge to optimize the training process of the Graph Neural Network (GNN). A key innovation of this algorithm is its capability to capture critical model knowledge using low-complexity calculations, which helps further optimize the GNN model. During the feature extraction stage, the algorithm employs a random unitary matrix transformation, using the spectral features of the random Laplacian matrix of the task's heterogeneous graph as global features; this enhances the GNN's ability to extract shared problem features while downplaying individual characteristics. In the parameterized decision-making stage, the algorithm leverages upper- and lower-bound knowledge derived from guiding and empirical solutions of the problem model. This strategy significantly reduces the decision space, enabling the network to optimize quickly and accelerating the learning process. Extensive simulation experiments confirm the effectiveness of the MKEGNN algorithm. Compared with existing approaches, it demonstrates improved stability and accuracy across all task sets, boosting the scheduling success rate by 3%~10% and the weighted success rate by 5%~15%. For particularly challenging task sets involving complex multiradar collaborations, the success rate improves by over 4%. The results highlight the algorithm's stability and robustness.
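As a simplified illustration of using graph-spectral information as a global feature of a task graph, the sketch below computes the spectrum of the normalized Laplacian of a small adjacency matrix; the randomized unitary transformation of MKEGNN is not reproduced, and the matrix and feature length are assumptions.

```python
import numpy as np

def laplacian_spectrum_feature(A, k=8):
    """A: (n,n) symmetric adjacency matrix of a task graph. Returns the k
    smallest eigenvalues of its normalized Laplacian, zero-padded to length k,
    as a fixed-length global graph descriptor."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    eig = np.sort(np.linalg.eigvalsh(L))      # Laplacian spectrum
    feat = np.zeros(k)
    feat[:min(k, eig.size)] = eig[:k]
    return feat

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(laplacian_spectrum_feature(A, k=4))
```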
This paper proposes an intelligent framework based on a cell-free network architecture, called HRT-Net, designed to address multi-station collaborative sensing for joint radar and communication systems and to offer accurate, resource-efficient target location estimation. First, the sensing area is divided into sub-regions, and a lightweight region selection network employing depthwise separable convolutions is designed; this network coarsely identifies the target's sub-region, reducing computational demands and enabling extensive area coverage. To tackle interstation data disparity, we propose a channel-wise unidimensional attention mechanism. This mechanism aggregates multi-station sensing data effectively, enhancing feature extraction and representation by generating attention weight maps that refine the original features. Finally, we design a target localization network featuring multi-scale and multi-residual connections. This network extracts comprehensive, deep features and achieves multi-level feature fusion, allowing reliable mapping of the data to target coordinates. Extensive simulations and real-world experiments validate the effectiveness and robustness of our scheme. The results show that compared with existing methods, HRT-Net achieves centimeter-level target localization with low computational complexity and minimal storage overhead.
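As a rough stand-in for a channel-wise attention step that re-weights multi-station features, the sketch below applies a squeeze-and-excitation-style bottleneck MLP per channel; the exact HRT-Net mechanism, dimensions, and weights are assumptions.

```python
import numpy as np

def channel_attention(x, W1, W2):
    """x: (C, L) features, one channel per station/feature map.
    W1: (C, C//r) and W2: (C//r, C) are the weights of a small bottleneck MLP.
    Returns the channel-wise re-weighted features."""
    s = x.mean(axis=1)                          # squeeze: per-channel statistic
    h = np.maximum(s @ W1, 0.0)                 # excitation MLP with ReLU
    w = 1.0 / (1.0 + np.exp(-(h @ W2)))         # sigmoid attention weights per channel
    return x * w[:, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))                # 8 channels of multi-station features
W1, W2 = 0.1 * rng.standard_normal((8, 2)), 0.1 * rng.standard_normal((2, 8))
print(channel_attention(x, W1, W2).shape)       # (8, 64)
```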
This paper addresses the task allocation problem in swarm Unmanned Aerial Vehicle (UAV) Synthetic Aperture Radar (SAR) systems and proposes a method based on low-redundancy chromosome encoding. It starts with a thorough analysis of the relationship between imaging performance and geometric configurations in SAR imaging tasks and accordingly constructs a path function that reflects imaging resolution performance. The task allocation problem is then formulated as a generalized, balanced multiple traveling salesman problem. To enhance the search efficiency and accuracy of the algorithm, a two-part chromosome encoding scheme with low redundancy is introduced. Additionally, considering possible unexpected situations and dynamic changes in practical applications, a dynamic task allocation strategy integrating a contract net protocol and attention mechanisms is proposed. This method can flexibly adjust task allocation strategies based on actual conditions, ensuring the robustness of the system. Simulation experiments validate the effectiveness of the proposed method.
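A minimal sketch of decoding a two-part chromosome for a multiple-traveling-salesman-style allocation is given below; the paper's low-redundancy encoding and SAR imaging-resolution path cost are not modeled, and the task counts are illustrative.

```python
import numpy as np

def decode_two_part(order, counts):
    """order: permutation of task indices (part 1 of the chromosome);
    counts: number of tasks assigned to each UAV (part 2), summing to len(order).
    Returns one task route per UAV."""
    assert sum(counts) == len(order)
    routes, start = [], 0
    for c in counts:
        routes.append(list(order[start:start + c]))
        start += c
    return routes

order = np.random.permutation(9)    # part 1: visiting order of 9 imaging tasks
counts = [3, 2, 4]                  # part 2: tasks per UAV for 3 UAVs
for uav, route in enumerate(decode_two_part(order, counts)):
    print(f"UAV {uav}: tasks {route}")
```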
Meter-wave radar, with its wide beamwidth, often faces challenges in detecting low-elevation targets due to interference from multipath signals. These reflected signals diminish the strength of the direct signal, leading to poor accuracy in low-elevation angle measurements. To solve this problem, this paper proposes a multipath suppression and high-precision angle measurement method. The method, based on a signal-level feature game approach, comprises two interconnected components that work together: a direct signal extractor, which mines the direct signal submerged within the multipath signal, and a direct signal feature discriminator, which ensures the integrity and validity of the extracted direct signal. By continuously interacting with and optimizing each other, these components effectively suppress multipath interference and enhance the quality of the direct signal. The refined signal is then processed using advanced super-resolution algorithms to estimate the direction of arrival. Computer simulations show that the proposed algorithm achieves high performance without relying on strict target angle information, effectively suppressing multipath signals. This approach noticeably enhances the estimation accuracy of classic super-resolution algorithms. Compared with existing supervised learning models, the proposed algorithm offers better generalization to unknown signal parameters and multipath distribution models.
Synthetic Aperture Radar (SAR) image target recognition technology based on deep learning has matured. However, challenges remain because scattering phenomena and noise interference cause significant intraclass variability in imaging results. Invariant features, which represent the essential attributes of a specific target class with consistent expressions, are crucial for high-precision recognition. We define these invariant features, drawn from the target entity, its surrounding environment, and their combined context, as the target's essential features. Guided by multilevel essential feature modeling theory, we propose a SAR image target recognition method based on graph networks and invariant feature perception. This method employs a dual-branch network to process multiview SAR images simultaneously, using a rotation-learnable unit to adaptively align dual-branch features and reinforce invariant features with rotational immunity by minimizing intraclass feature differences. Specifically, to support essential feature extraction in each branch, we design a feature-guided graph feature perception module based on multilevel essential feature modeling. This module uses salient points for target feature analysis and comprises a target ontology feature enhancement unit, an environment feature sampling unit, and a context-based adaptive fusion update unit. The outputs are analyzed with a graph neural network and constructed into a topological representation of essential features, yielding a target category vector. The t-Distributed Stochastic Neighbor Embedding (t-SNE) method is used to qualitatively evaluate the algorithm's classification ability, while metrics such as accuracy, recall, and F1 score are used to quantitatively analyze key units and overall network performance. Additionally, class activation map visualization methods are employed to validate the extraction and analysis of invariant features at different stages and branches. The proposed method achieves recognition accuracies of 98.56% on the MSTAR dataset, 94.11% on the SAR-ACD dataset, and 86.20% on the OpenSARShip dataset, demonstrating its effectiveness in extracting essential target features.
Obtaining internal layout information before entering unfamiliar buildings is crucial for applications such as counter-terrorism operations, disaster relief, and surveillance, giving it great practical significance and research value. To enable the acquisition of building layout information, this paper presents a building layout tomography method based on joint multidomain direct wave estimation. First, a linear approximation model is established to map the relationship between the propagation delay of direct wave signals and the layout of the unknown building. Using this model, the distribution characteristics of direct wave and multipath signals in the fast-time, slow-time, and Doppler domains are analyzed in the tomographic imaging mode. A joint multidomain direct wave estimation algorithm is then proposed to suppress multipath interference and precisely estimate the direct wave signals. Additionally, a projection-matrix adaptive-correction algebraic reconstruction algorithm with total variation constraints is proposed, which enhances building layout inversion quality under limited data. Finally, electromagnetic simulation and experimental results demonstrate the effectiveness of the proposed building layout tomography method, with structural similarity indices of 91.2% and 81.7% for the reconstructed results, significantly outperforming existing building layout tomography methods.
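As a simplified stand-in for the reconstruction stage, the sketch below runs a basic Algebraic Reconstruction Technique (Kaczmarz) loop on a random projection system; the paper's adaptive projection-matrix correction and total-variation constraint are omitted, and the matrix sizes and relaxation factor are assumptions.

```python
import numpy as np

def art(A, b, n_iter=50, relax=0.5):
    """Kaczmarz/ART iterations for A x ~= b, with a simple non-negativity clip.
    A: (m, n) projection matrix relating layout pixels to delay measurements b."""
    x = np.zeros(A.shape[1])
    row_norm = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norm[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm[i] * A[i]
        x = np.clip(x, 0, None)      # crude prior: wall attenuation is non-negative
    return x

rng = np.random.default_rng(1)
A = rng.random((40, 25))             # toy projection matrix
x_true = (rng.random(25) > 0.8).astype(float)
print(np.round(art(A, A @ x_true), 2))   # approaches x_true for this consistent system
```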
Due to the side-looking and coherent imaging mechanisms, feature differences between high-resolution Synthetic Aperture Radar (SAR) images increase when the imaging viewpoint changes considerably, making image registration highly challenging. Traditional registration techniques for high-resolution multi-view SAR images mainly face issues such as insufficient keypoint localization accuracy and low matching precision. This work designs an end-to-end high-resolution multi-view SAR image registration network to address the above challenges. The main contributions of this study include the following: A high-resolution SAR image feature extraction method based on a local pixel offset model is proposed. This method introduces a diversity peak loss to guide response weight allocation in the keypoint extraction network and optimizes keypoint coordinates by detecting pixel offsets. A descriptor extraction method is developed based on adaptive adjustment of convolution kernel sampling positions, which utilizes a sparse cross-entropy loss to supervise descriptor matching in the network. Experimental results show that compared with other registration methods, the proposed algorithm achieves remarkable improvements in high-resolution multi-view SAR image registration, with an average error reduction of over 65%, 3~5-fold increases in the number of correctly matched point pairs, and an average reduction of over 50% in runtime.
Passive radar plays an important role in early warning and Low Slow Small (LSS) target detection. Because the radiation sources exploited by passive radar are not controllable, target characteristics are more complex, which makes target detection and identification extremely difficult. In this paper, a passive radar LSS detection dataset (LSS-PR-1.0) is constructed, which contains the radar echo signals of four typical sea and air targets, namely helicopters, unmanned aerial vehicles, speedboats, and passenger ships, as well as sea clutter data at low and high sea states, providing data support for radar research. For target feature extraction and analysis, a singular-value-decomposition sea-clutter-suppression method is first adopted to remove the influence of the strong Bragg peak of sea clutter on the target echo. On this basis, ten multi-domain feature extraction and analysis methods in four categories are proposed: time-domain features (relative average amplitude), frequency-domain features (spectral features, Doppler waterfall plots, and range-Doppler features), time-frequency-domain features, and motion features (heading difference, trajectory parameters, speed variation interval, speed variation coefficient, and acceleration). Based on measured data, a comparative analysis is conducted on the characteristics of the four types of sea and air targets, summarizing the patterns of the various target characteristics and laying the foundation for subsequent target recognition.
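A minimal sketch of singular-value-decomposition clutter suppression is shown below: the dominant singular components of a range-by-pulse echo matrix, assumed to be dominated by the Bragg clutter, are nulled before reconstruction; the signal model and the number of removed components are illustrative.

```python
import numpy as np

def svd_clutter_suppression(echo, n_remove=1):
    """echo: (range_cells, pulses) complex echo matrix.
    Nulls the n_remove largest singular components before reconstruction."""
    U, s, Vh = np.linalg.svd(echo, full_matrices=False)
    s[:n_remove] = 0.0                         # remove dominant (clutter) components
    return (U * s) @ Vh

rng = np.random.default_rng(2)
pulses = np.arange(128)
clutter = np.outer(rng.standard_normal(64),    # strong, low-rank Bragg-like clutter
                   np.exp(1j * 2 * np.pi * 0.05 * pulses))
echo = clutter.copy()
echo[10] += 0.05 * np.exp(1j * 2 * np.pi * 0.30 * pulses)   # weak target in cell 10
cleaned = svd_clutter_suppression(echo)
print(np.abs(cleaned).max(axis=1).argmax())    # -> 10: target cell now dominates
```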
To address the problem of increased radar jamming in complex electromagnetic environments and the difficulty of accurately estimating a target signal close to a strong jamming signal, this paper proposes a sparse Direction of Arrival (DOA) estimation method based on Riemann averaging under strong intermittent jamming. First, under the extended coprime array data model, Riemann averaging is introduced to suppress the jamming signal by leveraging the property that the target signal is continuously active while the strong jamming signal is only intermittently active. Then, the covariance matrix of the processed data is vectorized to obtain virtual array reception data. Finally, the sparse iterative covariance-based estimation method is employed in the virtual domain to reconstruct the sparse signal and estimate the DOA of the target signal under strong intermittent interference. Simulation results show that the method provides highly accurate DOA estimation for weak target signals whose angles are closely adjacent to strong interference signals, even when the number of signal sources is unknown. Compared with existing subspace and sparse reconstruction algorithms, the proposed algorithm achieves higher estimation accuracy and angular resolution with fewer snapshots and at a lower signal-to-noise ratio.
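The covariance-vectorization step that produces the virtual (difference co-array) data is simple enough to sketch directly, as below; the Riemann-averaging and sparse iterative covariance-based estimation stages are not reproduced, and the array size and snapshot count are assumptions.

```python
import numpy as np

def virtual_array_data(X):
    """X: (M, T) snapshots from an M-element (e.g. coprime) array.
    Returns vec(R), the vectorized sample covariance, i.e. the single
    'snapshot' seen by the virtual difference co-array."""
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    return R.reshape(-1, order="F")            # column-wise vectorization

rng = np.random.default_rng(3)
X = (rng.standard_normal((6, 200)) + 1j * rng.standard_normal((6, 200))) / np.sqrt(2)
print(virtual_array_data(X).shape)             # (36,)
```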
Land-sea clutter classification is essential for boosting the target positioning accuracy of skywave over-the-horizon radar. This classification involves discriminating whether each azimuth-range cell in the Range-Doppler (RD) map lies over land or sea. Traditional deep learning methods for this task require extensive, high-quality, and class-balanced labeled samples, leading to long training periods and high costs. In addition, these methods typically use the clutter of a single azimuth-range cell without considering intra-class and inter-class relationships, resulting in poor model performance. To address these challenges, this study analyzes the correlation between adjacent azimuth-range cells and converts land-sea clutter data from Euclidean space into graph data in non-Euclidean space, thereby incorporating sample relationships. We propose a Multi-Channel Graph Convolutional Network (MC-GCN) for land-sea clutter classification. MC-GCN decomposes graph data from a single channel into multiple channels, each containing a single type of edge and a weight matrix. This approach restricts node information aggregation, effectively reducing node attribute misjudgment caused by data heterogeneity. For validation, RD maps from various seasons, times, and detection areas were selected. Based on radar parameters, data characteristics, and sample proportions, we construct an original land-sea clutter dataset containing 12 different scenes and a scarce land-sea clutter dataset containing 36 different configurations. The effectiveness of MC-GCN is confirmed: it outperforms state-of-the-art classification methods, achieving a classification accuracy of at least 92%.
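As an illustration of converting azimuth-range cells of an RD map into graph data, the sketch below connects each cell to its azimuth and range neighbors and labels the two edge types, which a multi-channel graph network could then aggregate separately; the grid size and edge typing are assumptions, not the MC-GCN construction itself.

```python
import numpy as np

def rd_grid_edges(n_az, n_rg):
    """Builds an edge list over an (n_az x n_rg) grid of azimuth-range cells.
    Each edge is (i, j, type) with type 0 for azimuth neighbors and 1 for
    range neighbors; cells are indexed row-major as i = az * n_rg + rg."""
    edges = []
    for a in range(n_az):
        for r in range(n_rg):
            i = a * n_rg + r
            if r + 1 < n_rg:
                edges.append((i, i + 1, 1))        # range-adjacent cells
            if a + 1 < n_az:
                edges.append((i, i + n_rg, 0))     # azimuth-adjacent cells
    return np.asarray(edges)

print(rd_grid_edges(4, 5)[:6])
```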
Imaging of passive jamming objects has been a hot topic in radar imaging and countermeasures research because it directly affects radar target detection and recognition capabilities. In the microwave band, the long dwell time required to generate a single image with the desired azimuthal resolution makes it difficult to directly distinguish passive jamming objects from imaging results, and time-dimensional resolution is lacking. In comparison, terahertz imaging systems require a shorter synthetic aperture to achieve the same azimuthal resolution, making it easier to obtain low-latency, high-resolution, and high-frame-rate imaging results. Hence, terahertz radar has considerable potential in Video Synthetic Aperture Radar (ViSAR) technology. First, the aperture division and imaging resolutions of airborne terahertz ViSAR are briefly analyzed. Subsequently, the imaging results and characteristics of stationary passive jamming objects, such as corner reflector arrays and camouflage mats, are explored before and after motion compensation. Further, the phenomenon that camouflage mats with fluctuating grids appear rough in the terahertz band is demonstrated, revealing the special scattering characteristics of this band. Next, taking rotating corner reflectors as an example of moving passive jamming objects, their characteristics regarding suppressive interference are analyzed. Because stationary scenes are similar under adjacent apertures, rotating corner reflectors can be directly detected by incoherent image subtraction after inter-frame image and amplitude registration, followed by the extraction of signals of interest and non-parametric compensation. To date, few field experiments on imaging passive jamming objects with terahertz ViSAR have been reported. Airborne field experiments have been performed to effectively demonstrate the high-resolution and high-frame-rate imaging capabilities of terahertz ViSAR.
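The detection step based on incoherent subtraction of registered adjacent frames can be illustrated as below; registration is assumed already done, the scene and threshold are synthetic, and the example is not the paper's processing chain.

```python
import numpy as np

def change_map(img_prev, img_curr, k=10.0):
    """img_prev, img_curr: co-registered amplitude images of adjacent frames.
    Flags cells whose amplitude change exceeds k times the median change;
    the stationary scene largely cancels in the difference."""
    diff = np.abs(img_curr - img_prev)
    return diff > k * np.median(diff)

rng = np.random.default_rng(4)
scene = rng.rayleigh(1.0, (128, 128))                 # stationary amplitude scene
frame1 = scene
frame2 = scene + rng.normal(0.0, 0.05, scene.shape)   # small frame-to-frame noise
frame2[64, 64] += 30.0                                # flash from a rotating reflector
print(np.argwhere(change_map(frame1, frame2)))        # -> [[64 64]]
```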
The miniature multistatic Synthetic Aperture Radar (SAR) system uses a flexible transceiver configuration compared with the miniature monostatic SAR system, thereby affording the advantage of multi-angle imaging. As the transceiver-separated SAR system uses mutually independent oscillator sources, phase synchronization is necessary for high-precision imaging of the miniature multistatic SAR. Although current research on phase synchronization schemes for bistatic SAR is relatively mature, these schemes are primarily based on pulsed SAR systems, and a paucity of research exists on phase synchronization for the miniature multistatic Frequency Modulated Continuous Wave (FMCW) SAR. In comparison with pulsed SAR, the FMCW SAR system lacks a temporal interval between transmitted pulses. Consequently, some phase synchronization schemes developed for pulsed SAR cannot be directly applied to FMCW SAR. To this end, this study proposes a novel phase synchronization method for the miniature multistatic FMCW SAR, effectively resolving the phase synchronization problem of FMCW SAR. This method uses the generalized Short-Time Shift-Orthogonal (STSO) waveform as the phase synchronization signal of the disparate radar platforms. The phase error between the radar platforms can be effectively extracted through pulse compression to realize phase synchronization. Compared with the conventional linear frequency-modulated waveform, when the generalized STSO waveform is pulse-compressed with the same compression function, the interference signal energy is concentrated away from the peak of the matched signal, and the phase synchronization accuracy is enhanced. Furthermore, the proposed method is adapted to the characteristics of dechirp reception in FMCW miniature multistatic SAR systems, and ground and numerical simulation experiments verify that the proposed method achieves high synchronization accuracy.
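The phase extraction step itself can be sketched as below: the received synchronization signal is pulse-compressed against the known reference, and the residual oscillator phase is read at the correlation peak; a plain chirp is used here instead of the generalized STSO waveform, and all parameters are illustrative.

```python
import numpy as np

def sync_phase(received, reference):
    """Pulse-compresses the received synchronization signal against the known
    reference and returns the phase (rad) at the correlation peak."""
    corr = np.correlate(received, reference, mode="full")
    return np.angle(corr[np.argmax(np.abs(corr))])

n = np.arange(256)
reference = np.exp(1j * np.pi * 0.002 * n**2)      # toy chirp as sync waveform
phase_error = 0.7                                  # oscillator phase offset to recover
received = reference * np.exp(1j * phase_error)
print(sync_phase(received, reference))             # ~0.7
```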
In recent years, target recognition systems based on radar sensor networks have been widely studied in the field of automatic target recognition. These systems observe the target from multiple angles to achieve robust recognition, which also raises the problem of how to exploit the correlation and difference information among multiradar sensor echoes. Furthermore, most existing studies use large-scale labeled data to obtain prior knowledge of the target. Considering that a large amount of unlabeled data is not effectively used in target recognition tasks, this paper proposes an unsupervised High-Resolution Range Profile (HRRP) target feature extraction method based on Multiple Contrastive Loss (MCL) in radar sensor networks. The proposed method combines instance-level loss, Fisher loss, and semantic consistency loss constraints to identify consistent and discriminative feature vectors among the echoes of multiple radar sensors, which are then used in subsequent target recognition tasks. Specifically, the original echo data are mapped to the contrastive loss space and the semantic label space. In the contrastive loss space, the contrastive loss constrains the similarity and aggregation of samples so that the relative and absolute distances between different echoes of the same target obtained by different sensors are reduced while the relative and absolute distances between echoes of different targets are increased. In the semantic label space, the extracted discriminant features are used to constrain the semantic labels so that the semantic information and discriminant features are consistent. Experiments on a measured civil aircraft dataset reveal that the target recognition accuracy of the MCL-based method is improved by 0.4% and 1.4% compared with the most advanced unsupervised algorithm CC and the supervised target recognition algorithm PNN, respectively. Further, MCL can effectively improve target recognition performance when multiple radar sensors are used in conjunction.
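A minimal sketch of the instance-level term, written as an NT-Xent-style contrastive loss between features of the same targets seen by two sensors, is given below; the Fisher and semantic-consistency terms of MCL are omitted, and the feature dimensions are assumptions.

```python
import numpy as np

def instance_contrastive_loss(za, zb, tau=0.1):
    """za, zb: (N, d) features of the same N targets observed by two sensors;
    row i of za and row i of zb form the positive pair."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    sim = za @ zb.T / tau                            # (N, N) cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)       # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # pull matching pairs together

rng = np.random.default_rng(5)
za, zb = rng.standard_normal((8, 32)), rng.standard_normal((8, 32))
print(instance_contrastive_loss(za, zb))
```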
The ionosphere can distort received signals, degrade imaging quality, and decrease the interferometric and polarimetric accuracies of spaceborne Synthetic Aperture Radars (SARs). Low-frequency systems operating at L-band and P-band are particularly susceptible to such problems. From another viewpoint, low-frequency spaceborne SARs can capture ionospheric structures with different spatial scales over the observed scope, and their echo and image data carry abundant ionospheric information, offering great potential for high-precision and high-resolution ionospheric probing. This paper reviews the research progress of ionospheric probing based on spaceborne SARs. The technological system of this field is summarized from three aspects: mapping of the background ionospheric total electron content, tomography of ionospheric electron density, and probing of ionospheric irregularities. The potential of low-frequency spaceborne SARs in mapping both local refined ionospheric structures and global tendencies is emphasized, and future development directions are discussed.
As a representative of China's new generation of spaceborne long-wavelength Synthetic Aperture Radar (SAR), the LuTan-1A (LT-1A) satellite was launched into a sun-synchronous orbit in January 2022. The SAR onboard the LT-1A satellite operates in the L band and offers various earth observation capabilities, including single-polarization, linear dual-polarization, compressed dual-polarization, and quad-polarization modes. Existing research has mainly focused on LT-1A interferometric data acquisition capabilities and the accuracy evaluation of digital elevation models and displacement measurements; research on the radiometric and polarimetric accuracy of the LT-1A satellite is limited. This article uses tropical rainforest vegetation as a reference to evaluate and analyze the radiometric error and polarimetric stability of the LT-1A satellite in the full polarization observation mode through a self-calibration method that does not rely on artificial calibrators. The experiment demonstrates that the LT-1A satellite has good radiometric stability and polarimetric accuracy, exceeding the specifications recommended by the Committee on Earth Observation Satellites (CEOS). Fluctuations in the Normalized Radar Cross-Section (NRCS) error within 1,000 km of continuous observation are less than 1 dB (3σ), and system radiometric errors change by less than 0.5 dB (3σ) when observation is resumed within five days. In the full polarization observation mode, the system crosstalk is less than −35 dB, reaching as low as −45 dB. Further, the cross-polarization channel imbalance is better than 0.2 dB and 2°, while the co-polarization channel imbalance is better than 0.5 dB and 10°. The equivalent thermal noise ranges from −42 to −22 dB, and the average equivalent thermal noise of the system is better than −25 dB. The level of thermal noise may increase to some extent with increasing continuous observation duration. Additionally, this study found that the ionosphere significantly affects the quality of the LT-1A satellite polarization data: a Faraday rotation angle of approximately 5° causes a crosstalk of nearly −20 dB. In middle- and low-latitude regions, the Faraday rotation angle commonly ranges from 3° to 20° and can cause polarimetric distortion errors between channels ranging from −21.16 to −8.78 dB. This interference from the observation environment is considerably greater than the influence of system crosstalk errors of about −40 dB. This research carefully assesses the radiometric and polarimetric quality of LT-1A satellite data over the dense vegetation of the Amazon rainforest and provides valuable information for industrial users; thus, it holds significant scientific importance and reference value.
Bistatic Synthetic Aperture Radar (BiSAR) must suppress ground background clutter when detecting and imaging ground moving targets. However, owing to the bistatic spatial configuration, the clutter exhibits severe space-time nonstationarity, which deteriorates clutter suppression performance. Although Space-Time Adaptive Processing based on Sparse Recovery (SR-STAP) can mitigate the nonstationarity by reducing the number of required samples, an off-grid dictionary problem arises during processing, degrading the space-time spectrum estimation. Moreover, while most typical SR-STAP methods have clear mathematical formulations and interpretability, they suffer from difficult parameter tuning and cumbersome operation in complex and changing scenes. To solve these problems, a complex-valued neural network based on the Alternating Direction Method of Multipliers (ADMM) is proposed for BiSAR space-time adaptive clutter suppression. First, a sparse recovery model of the continuous space-time clutter spectrum of BiSAR is constructed based on Atomic Norm Minimization (ANM) to overcome the off-grid problem of traditional discrete dictionary models. Second, ADMM is used to rapidly and iteratively solve the BiSAR clutter spectral sparse recovery model. Third, according to the iteration and data-flow diagrams, the hand-tuned iterative procedure is unrolled into a network, ANM-ADMM-Net. Then, a normalized root-mean-square-error loss function is defined and the network is trained with the obtained dataset. Finally, the trained ANM-ADMM-Net is used to rapidly process BiSAR echo data, so that the space-time spectrum of BiSAR clutter is accurately estimated and efficiently suppressed. The effectiveness of this approach is validated through simulations and airborne BiSAR clutter suppression experiments.
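As a rough illustration of two building blocks that an unrolled ANM-ADMM solver of this kind typically relies on, the sketch below (hypothetical, not the authors' implementation) shows the projection of a Hermitian matrix onto the positive semidefinite cone, which is the core step of each ADMM iteration when the atomic-norm problem is posed as a semidefinite program, and a normalized root-mean-square-error loss of the type described for network training.

```python
import numpy as np

def psd_projection(M):
    """Project a Hermitian matrix onto the positive semidefinite cone.

    Eigenvalue thresholding of this kind is the workhorse of each ADMM
    iteration when the atomic-norm problem is written as a semidefinite
    program over a structured PSD matrix.
    """
    M = 0.5 * (M + M.conj().T)          # enforce Hermitian symmetry
    w, V = np.linalg.eigh(M)
    w = np.maximum(w, 0.0)              # clip negative eigenvalues
    return (V * w) @ V.conj().T         # reconstruct V diag(w) V^H

def nrmse_loss(P_est, P_ref):
    """Normalized root-mean-square error between an estimated and a
    reference clutter space-time spectrum (both complex arrays)."""
    return np.linalg.norm(P_est - P_ref) / np.linalg.norm(P_ref)

# Example: project a random non-PSD Hermitian matrix.
A = np.random.randn(8, 8) + 1j * np.random.randn(8, 8)
A = 0.5 * (A + A.conj().T)
A_psd = psd_projection(A)
print(np.min(np.linalg.eigvalsh(A_psd)))   # >= 0 up to numerical precision
```

In an unrolled network of this type, quantities that would otherwise be hand-tuned, such as the ADMM penalty parameter, are typically made learnable per layer and optimized against a loss like nrmse_loss on labeled training spectra.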