Most Downloaded

1
Weak target signal processing is the cornerstone of and prerequisite for excellent radar detection performance. In complex practical applications, strong clutter interference, weak target signals, indistinct image features, and the difficulty of extracting effective features make weak target detection and recognition a long-standing challenge in radar processing. Conventional model-based processing methods do not accurately match the actual operating background and target characteristics, which limits their generality. Recently, deep learning has made significant progress in intelligent radar information processing: by building deep neural networks, deep learning algorithms can automatically learn feature representations from large amounts of radar data, improving target detection and recognition performance. This article systematically reviews recent research progress in the intelligent processing of weak radar targets in terms of signal processing, image processing, feature extraction, target classification, and target recognition, covering noise and clutter suppression, target signal enhancement, low- and high-resolution radar image and feature processing, and feature extraction and fusion. In response to the limited generalization ability, single-feature representation, and insufficient interpretability of existing intelligent processing methods for weak targets, the article outlines future developments in small-sample target detection (based on transfer learning and reinforcement learning), multidimensional and multifeature fusion, network model interpretability, and joint knowledge- and data-driven processing.
2
The detection of small, slow-moving targets such as Unmanned Aerial Vehicles (UAVs) poses considerable challenges to radar target detection and recognition technology, and there is an urgent need for relevant datasets to support the development and application of detection techniques for such targets. This paper presents a dataset for detecting low-speed, small-size targets using multiband Frequency Modulated Continuous Wave (FMCW) radar. Ku-band and L-band FMCW radars were used to collect echo data from six UAV types; by varying the radar cycle and bandwidth, the dataset provides diverse temporal and frequency-domain resolutions and measurement capabilities, yielding the LSS-FMCWR-1.0 dataset (LSS: Low, Slow, Small). To further enhance the extraction of micro-Doppler features from UAVs, the paper proposes a method for UAV micro-Doppler extraction and parameter estimation based on the local maximum synchroextracting transform. Building on the Short-Time Fourier Transform (STFT), the method extracts values at the maximum-energy points in the time-frequency domain to retain useful signal components and refine the time-frequency energy representation. Validation on the LSS-FMCWR-1.0 dataset demonstrates that this approach reduces entropy by 5.3 dB on average and decreases rotor blade length estimation errors by 27.7% compared with traditional time-frequency methods. Because it balances high time-frequency resolution with parameter estimation capability, the proposed method provides a foundation for subsequent target recognition efforts.
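As a rough, self-contained illustration of the maximum-energy extraction step described in this abstract (not the authors' exact algorithm), the sketch below computes an STFT of a simulated micro-Doppler signal and keeps only the maximum-energy frequency bin in each time slice; the signal model and all parameters are invented for the example.

```python
# Simplified sketch of "keep the maximum-energy time-frequency point per time
# slice", the idea behind local-maximum synchroextraction for micro-Doppler
# ridge sharpening. Signal model, sampling rate, and rotor parameters are assumed.
import numpy as np
from scipy.signal import stft

fs = 8000                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.5, 1 / fs)
# toy rotor echo: 1500 Hz carrier with sinusoidal Doppler modulation at a
# 30 Hz rotation rate (instantaneous frequency swings roughly 600..2400 Hz)
sig = np.cos(2 * np.pi * 1500 * t + 30 * np.sin(2 * np.pi * 30 * t))
sig += 0.1 * np.random.randn(t.size)        # additive noise

f, tau, Z = stft(sig, fs=fs, nperseg=128, noverlap=96)
power = np.abs(Z) ** 2

# Synchroextraction-style mask: for each time bin, retain only the bin of
# maximum energy and discard the rest, sharpening the time-frequency ridge.
mask = np.zeros_like(power, dtype=bool)
mask[np.argmax(power, axis=0), np.arange(power.shape[1])] = True
ridge_tf = np.where(mask, Z, 0)

ridge_freq = f[np.argmax(power, axis=0)]    # instantaneous ridge frequency (Hz)
print(ridge_freq[:10])
```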
3
To address radar target detection in sea clutter environments, this paper proposes a deep learning-based marine target detector. The detector increases the separability between target and clutter by fusing multiple complementary features extracted from different data sources, thereby improving detection performance for marine targets. Specifically, two feature extraction branches extract multiple levels of fast-time and range features from the range profiles and the Range-Doppler (RD) spectrum, respectively. A local-global feature extraction structure is then developed to capture the sequential relations along the slow-time or Doppler dimension of the features. Furthermore, a feature fusion block based on adaptive convolution weight learning is proposed to efficiently fuse slow-fast-time and RD features. Finally, detection results are obtained by applying upsampling and nonlinear mapping to the fused multilevel features. Experiments on two public radar databases validate the detection performance of the proposed detector.
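The following PyTorch skeleton is a hypothetical rendering of the architecture outlined above: two input branches, an adaptive weighted fusion block, and an upsampling detection head. Layer sizes, names, and the fusion rule are assumptions made for illustration and are not taken from the paper.

```python
# Minimal two-branch detector skeleton (illustrative only, not the authors' network).
import torch
import torch.nn as nn

class DualBranchDetector(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        branch = lambda: nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.rp_branch = branch()          # range-profile (slow-fast time) branch
        self.rd_branch = branch()          # range-Doppler branch
        self.weight_gen = nn.Sequential(   # adaptive per-branch fusion weights
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(2 * ch, 2, 1), nn.Softmax(dim=1)
        )
        self.head = nn.Sequential(         # upsampling + nonlinear mapping
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rp, rd):
        f1, f2 = self.rp_branch(rp), self.rd_branch(rd)
        w = self.weight_gen(torch.cat([f1, f2], dim=1))    # (B, 2, 1, 1)
        fused = w[:, :1] * f1 + w[:, 1:] * f2              # weighted feature fusion
        return self.head(fused)                            # per-cell detection map

det = DualBranchDetector()
out = det(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
print(out.shape)   # torch.Size([2, 1, 64, 64])
```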
4
This study proposes a Synthetic Aperture Radar (SAR) aircraft detection and recognition method that incorporates scattering perception to address target discreteness and false alarms caused by strong background interference in SAR images. Global information is enhanced through a context-guided feature pyramid module, which suppresses strong disturbances in complex images and improves detection and recognition accuracy. Additionally, scattering key points are used to locate targets, and a scattering-aware detection module is designed to finely correct the regression boxes and improve target localization accuracy. To verify the effectiveness of the proposed method and promote research on SAR aircraft detection and recognition, this study also constructs and releases the high-resolution SAR-AIRcraft-1.0 dataset. The images are obtained from the Gaofen-3 satellite; the dataset contains 4,368 images and 16,463 aircraft instances covering seven categories: A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and other. The proposed method and common deep learning algorithms are applied to the constructed dataset, and the experimental results demonstrate the effectiveness of the scattering-perception-based method. Furthermore, benchmarks are established for the dataset on different tasks, including SAR aircraft detection, recognition, and integrated detection and recognition.
5
Flying birds and Unmanned Aerial Vehicles (UAVs) are typical “low, slow, and small” targets with low observability. Effective monitoring and identification of these two target types has become an urgent need for ensuring the safety of air routes and urban areas. The many types of flying birds and UAVs are characterized by low flight altitudes, strong maneuverability, small radar cross sections, and complicated detection environments, which pose great challenges for target detection worldwide. Methods and technologies that are both “visible” (high detection capability) and “clear-cut” (high recognition probability) must be developed to finely describe and recognize UAVs, flying birds, and other low-slow-small targets. This paper reviews recent progress in detection and recognition technologies for rotor UAVs and flying birds in complex scenes and discusses effective detection and recognition methods, including echo modeling and recognition of micro-motion characteristics, enhancement and extraction of maneuvering features in ubiquitous observation mode, distributed multi-view feature fusion, differences in motion trajectories, and intelligent classification via deep learning. Finally, the problems of existing research approaches are summarized, and future development prospects of target detection and recognition technologies for flying birds and UAVs in complex scenarios are considered.
6
As one of the core components of Advanced Driver Assistance Systems (ADAS), automotive millimeter-wave radar has become a focus of scholars and manufacturers worldwide owing to its advantages of all-day, all-weather operation, miniaturization, high integration, and key sensing capabilities. Its core performance indicators are range, velocity, and angular resolution together with field of view, while accuracy, cost, real-time operation, detection performance, and size are the key issues to be considered. Increasing performance requirements pose several challenges for the signal processing of millimeter-wave radar systems, and radar signal processing technology is crucial for meeting these more stringent requirements. Obtaining dense radar point clouds, generating accurate radar imaging results, and mitigating mutual interference among multiple radars are key problems and the foundation for subsequent tracking, recognition, and other applications. Therefore, this paper discusses the practical application of automotive millimeter-wave radar systems from the perspective of key signal processing technologies, summarizes relevant research results, and focuses on point cloud imaging processing, synthetic aperture radar imaging processing, and interference suppression. Finally, the domestic and international research status is summarized, and future development trends for automotive millimeter-wave radar systems are forecast in the hope of informing readers in related fields.
7
Three-Dimensional (3D) Synthetic Aperture Radar (SAR) holds great potential for applications such as mapping and disaster management, making it an important research focus in SAR technology. To advance the application and development of 3D SAR, especially by reducing the number of observations or antenna array elements, the Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS) has developed the full-polarimetric Microwave Vision 3D SAR (MV3DSAR) experimental system, designed to serve as an experimental platform and data source for microwave vision SAR 3D imaging studies. This study introduces the MV3DSAR experimental system and its full-polarimetric SAR dataset, and proposes a full-polarimetric data processing scheme covering essential steps such as polarization correction, polarization coherent enhancement, microwave vision 3D imaging, and 3D fusion visualization. Results on the 3D imaging dataset confirm the full-polarimetric capabilities of the MV3DSAR experimental system and validate the effectiveness of the proposed processing scheme. The full-polarimetric UAV-borne array interferometric SAR dataset released through this study offers enhanced data resources for advancing 3D SAR imaging research.
8
Fine terrain classification is one of the main applications of Synthetic Aperture Radar (SAR). In the multiband fully polarimetric SAR operating mode, information on a target's response in different frequency bands and polarizations can be obtained, which can improve classification accuracy. However, existing domestic and international datasets provide only low-resolution fully polarimetric classification data for individual bands, limited regions, and small sample sizes. Therefore, a multidimensional SAR dataset from Hainan is used to construct a multiband fully polarimetric fine classification dataset with ample sample size, diverse land cover categories, and high classification reliability. Supported by the high-resolution aerial observation system application calibration and verification project, this dataset will promote the development of multiband fully polarimetric SAR classification applications. This paper provides an overview of the composition of the dataset and describes the information and production methods for the first batch of published data (MPOLSAR-1.0). Furthermore, preliminary classification results based on polarimetric feature classification and classical machine learning methods are presented, supporting the sharing and application of the dataset.
9
Synthetic Aperture Radar (SAR), with its coherent imaging mechanism, has the unique advantage of all-day, all-weather imaging. Aircraft detection and recognition is a typical and important topic that has been widely studied in SAR image interpretation, and with the introduction of deep learning, the performance of SAR-based aircraft detection and recognition has improved considerably. Drawing on our research team's expertise in the theory, algorithms, and applications of SAR-based target detection and recognition, particularly for aircraft, this paper presents a comprehensive review of deep learning-powered aircraft detection and recognition in SAR imagery. The review analyzes aircraft target characteristics and the current challenges of SAR-based detection and recognition, summarizes the latest research advances together with the characteristics and application scenarios of various techniques, and collates public datasets and performance evaluation metrics. Finally, remaining challenges and potential research directions are discussed.
10
With the rapid development of high-resolution radar imaging, artificial intelligence, and big data technologies, remarkable advances have been made in the intelligent interpretation of radar imagery. Despite growing demand, radar image interpretation now faces various technical challenges, mainly because of the particularities of the radar sensor itself and the complexity of electromagnetic scattering phenomena. To address the problem of microwave radar imagery perception, this article proposes developing the cross-disciplinary field of microwave vision, which integrates electromagnetic physics and radar imaging mechanisms with the principles of human visual perception and computer vision technologies. The article discusses the concept and implications of microwave vision, proposes a microwave vision perception model, and explains its basic scientific problems and technical roadmap. Finally, it introduces the preliminary research progress achieved by the authors' group on related issues.
11
Spaceborne Synthetic Aperture Radar (SAR), which can be mounted on space vehicles to collect information about the entire planet with all-day, all-weather imaging capability, has become an indispensable tool for Earth observation. Spaceborne SAR technology has improved considerably, with resolution advancing from meter to submeter level, imaging modes evolving from stripmap to azimuth beam steering modes such as sliding spotlight, multichannel approaches entering practical use, and single polarization giving way to full polarization. As SAR techniques develop, forthcoming systems will make breakthroughs in architectures, concepts, technologies, and modes, for example high-resolution wide-swath imaging, multistatic SAR, payload miniaturization, and intelligence, all of which will extend the observation dimensions and yield multidimensional data. This study focuses on these forthcoming developments in spaceborne SAR.
12
Coherently combining distributed apertures adjusts the transmitted and received signals of multiple small, distributed apertures so that a coordinated distributed system can obtain a high power-aperture product at much lower cost than a single large aperture, making it a promising and viable alternative to large apertures. This study describes the concept and principles of coherently combining distributed apertures. Depending on whether external signal inputs at the combination destination are necessary, the implementation architectures are classified into two categories: closed loop and open loop. The development of coherently combined distributed apertures and their applications in fields such as missile defense, deep-space telemetry and control, ultralong-range radar detection, and radio astronomy are then comprehensively presented. Furthermore, the key techniques required for coherent combination, namely aligning the time and phase of the transmitted/received signals of each aperture, are elaborated, including high-precision distributed time-frequency transfer and synchronization as well as estimation, measurement, calibration, and prediction of the coherent combination parameters. Finally, a summary is presented and future work in this field is outlined.
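A back-of-the-envelope sketch of the motivating scaling law (standard textbook reasoning, not a result quoted from the paper): with N identical apertures perfectly aligned in time and phase, coherent combining on both transmit and receive improves SNR by roughly N^3 relative to a single aperture, whereas incoherent receive-only combining gives only N.

```python
# Illustrative N^3 coherent-combining gain for N identical apertures,
# assuming perfect time/phase alignment (the hard part the paper addresses).
import numpy as np

N = 4
# transmit side: N unit-amplitude signals add in field amplitude at the target,
# so the illuminating power grows as N^2
tx_gain = N ** 2

# receive side: coherent sum of N aligned channels -> signal power N^2,
# independent unit-variance noises add in power -> noise power N
sig_power = np.sum(np.ones(N)) ** 2          # N^2
noise_power = N
rx_gain = sig_power / noise_power            # = N

print("coherent combining SNR gain:", tx_gain * rx_gain)   # N^3 = 64
print("incoherent (receive-only, power sum) gain:", N)     # = 4
```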
13
With the growing demand for radar target detection, Sparse Recovery (SR) technology based on the Compressive Sensing (CS) model has been widely used in radar signal processing. This paper first outlines the fundamental theory of SR and then introduces the sparse characteristics of radar signal processing from the perspectives of scene sparsity and observation sparsity. Exploiting these sparse properties, it provides an overview of CS applications in radar signal processing, including spatial-domain processing, pulse compression, coherent processing, radar imaging, and target detection, and concludes with a summary of these applications.
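To make the CS model concrete, the following generic sparse-recovery sketch (independent of any specific radar system discussed in the paper) recovers a K-sparse scene from compressed measurements with Orthogonal Matching Pursuit; all dimensions and the sensing matrix are arbitrary assumptions.

```python
# Generic sparse recovery under the CS model: y = A x + n with K-sparse x,
# solved greedily by Orthogonal Matching Pursuit (OMP).
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 64, 256, 5                      # measurements, scene cells, sparsity
A = rng.normal(size=(M, N)) / np.sqrt(M)  # random sensing matrix
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
y = A @ x_true + 0.01 * rng.normal(size=M)

def omp(A, y, K):
    """Greedily select K atoms most correlated with the residual, refit by least squares."""
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coeffs
    return x_hat

x_hat = omp(A, y, K)
print("support recovered:", sorted(np.flatnonzero(x_hat)) == sorted(np.flatnonzero(x_true)))
```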
14
Multi-Radar Collaborative Surveillance (MRCS) technology enables a geographically distributed detection configuration through the linkage of multiple radars, fully exploiting detection gains from spatial and frequency diversity and thereby enhancing the detection performance and survivability of radar systems in complex electromagnetic environments. MRCS is one of the key development directions in radar technology and has received extensive attention in recent years; considerable research has been conducted, with numerous achievements in system architecture design, signal processing, and resource scheduling. This paper first summarizes the concept of MRCS technology, elaborates the signal processing-based closed-loop mechanism of cognitive collaboration, and analyzes the challenges faced in implementing MRCS. It then focuses on cognitive tracking and resource scheduling algorithms, providing a technical summary of their essential characteristics, system configurations, tracking models, information fusion, performance evaluation, resource scheduling algorithms, optimization criteria, and the cognitive tracking process, and further analyzes the relationship between multi-radar cognitive tracking and system resource scheduling. Subsequently, recent research trends in cognitive tracking and resource scheduling algorithms are identified and summarized in five aspects: radar resource elements, information fusion architectures, tracking performance indicators, resource scheduling models, and complex task scenarios. Finally, the paper concludes and explores future technologies in this field to provide a reference for subsequent research.
15
Non-Line-Of-Sight (NLOS) 3D imaging radar is an emerging technology that exploits multipath scattering echoes to detect hidden targets. However, it faces challenges such as separating multipath echoes, mitigating aperture occlusion, and handling the phase errors of reflective surfaces, which prevent traditional Line-Of-Sight (LOS) radar imaging methods from achieving high-precision imaging of hidden targets. To address these challenges, this paper proposes a precise imaging method for NLOS hidden targets based on Sparse Iterative Reconstruction (NSIR). The method first establishes a multipath signal model for NLOS millimeter-wave 3D imaging radar and, exploiting the characteristics of LOS/NLOS echoes, extracts the multipath echoes of hidden targets with a model-driven approach to separate the LOS and NLOS signals. Second, a total variation multiconstraint optimization problem for reconstructing hidden targets is formulated that incorporates the phase errors of the multipath reflective surfaces. The problem is solved jointly using the split Bregman Total Variation (TV) regularization operator and a phase error estimation criterion based on the minimum mean square error, enabling precise imaging and contour reconstruction of NLOS targets. Finally, a planar-scanning 3D imaging radar experimental platform is constructed, and experiments on targets such as knives and iron racks in a corner NLOS scenario validate the ability of NLOS millimeter-wave 3D imaging radar to detect hidden targets and the effectiveness of the proposed method.
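As a minimal, standalone illustration of the split Bregman TV regularization building block mentioned above (a 1D denoising toy, not the paper's full NSIR reconstruction with multipath modeling and phase-error estimation), the sketch below solves min_u (mu/2)||u - f||^2 + ||D u||_1 with D the first-difference operator.

```python
# Split Bregman total-variation denoising in 1D; problem sizes and weights are assumed.
import numpy as np

def split_bregman_tv_1d(f, mu=10.0, lam=5.0, n_iter=100):
    n = f.size
    D = np.diff(np.eye(n), axis=0)                 # (n-1, n) first-difference operator
    A = mu * np.eye(n) + lam * D.T @ D             # normal-equation matrix for u-step
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))   # quadratic u-subproblem
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)                          # soft-threshold the gradient
        b = b + Du - d                                         # Bregman variable update
    return u

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(40), np.ones(40), 0.3 * np.ones(40)])
noisy = truth + 0.1 * rng.normal(size=truth.size)
recon = split_bregman_tv_1d(noisy)
print("RMSE noisy:", np.sqrt(np.mean((noisy - truth) ** 2)),
      "RMSE TV:", np.sqrt(np.mean((recon - truth) ** 2)))
```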
16
Spaceborne Synthetic Aperture Radar (SAR) systems are often subject to strong electromagnetic interference, which degrades imaging quality, and existing image domain-based interference suppression methods are prone to image distortion and loss of texture detail. To address these problems, this paper proposes a method for suppressing active suppression interference in spaceborne SAR images based on perceptual learning with regional feature refinement. First, an active suppression interference signal and image model is established in the spaceborne SAR image domain. Second, a high-precision interference recognition network based on regional feature perception is designed: using an efficient channel attention mechanism, it extracts the pattern features of active suppression interference from the SAR image and effectively recognizes the interference regions. Third, a multiregion feature refinement interference suppression network is constructed based on joint learning of SAR image and suppression interference features; the network slices the SAR image into multiple regions and applies multi-module collaborative processing of the suppression interference features in each region to achieve refined suppression of active suppression interference under complex conditions. Finally, a simulation dataset of SAR image active suppression interference is constructed, and Sentinel-1 data are used for experimental verification and analysis. The experimental results show that the proposed method can effectively recognize and suppress various typical active suppression interferences in spaceborne SAR images.
17
The Back Projection (BP) algorithm is an important direction in the development of synthetic aperture radar imaging algorithms, but its large computational load has hindered its adoption in engineering applications. Techniques for improving the computational efficiency of the BP algorithm have therefore received widespread attention. This paper discusses fast BP algorithms based on various imaging-plane coordinate systems, including the range-azimuth plane, the ground plane, and non-Euclidean coordinate systems. First, the principle of the original BP algorithm and the impact of different coordinate systems on accelerating it are introduced, and the development history of the BP algorithm is reviewed. Then, research progress on fast BP algorithms in different imaging-plane coordinate systems is examined, focusing on the recent work of the authors' research team. Finally, engineering applications of fast BP algorithms are introduced, and research trends for fast BP imaging are discussed.
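For reference, the sketch below implements textbook time-domain backprojection of range-compressed pulses onto a ground-plane grid: for every pixel, each pulse is interpolated at the pixel's two-way range and summed coherently after phase compensation. The geometry, wavelength, and sampling parameters are illustrative assumptions, and none of the fast-BP accelerations surveyed in the paper are included.

```python
# Minimal time-domain backprojection with a single simulated point target.
import numpy as np

c = 3e8
fc = 10e9                                   # carrier frequency (Hz), assumed
wavelength = c / fc

# synthetic aperture positions along x at fixed altitude (side-looking geometry)
apertures = np.stack([np.linspace(-50, 50, 201),
                      np.zeros(201), 500 * np.ones(201)], axis=1)

# range-compressed data: rc_data[pulse, range_bin] (complex), with slant-range axis
r_axis = np.linspace(400, 700, 1024)        # slant-range bins (m), assumed
rc_data = np.zeros((apertures.shape[0], r_axis.size), dtype=complex)

# simulate one point target's compressed response per pulse
target = np.array([10.0, 300.0, 0.0])
for k, pos in enumerate(apertures):
    R = np.linalg.norm(pos - target)
    idx = np.argmin(np.abs(r_axis - R))
    rc_data[k, idx] = np.exp(-1j * 4 * np.pi * R / wavelength)

# backprojection onto a ground-plane grid
x_grid, y_grid = np.meshgrid(np.linspace(-20, 40, 121), np.linspace(280, 320, 81))
image = np.zeros(x_grid.shape, dtype=complex)
pixels = np.stack([x_grid.ravel(), y_grid.ravel(), np.zeros(x_grid.size)], axis=1)
for k, pos in enumerate(apertures):
    R = np.linalg.norm(pixels - pos, axis=1)                       # pixel ranges for this pulse
    echo = np.interp(R, r_axis, rc_data[k].real) + 1j * np.interp(R, r_axis, rc_data[k].imag)
    image += (echo * np.exp(1j * 4 * np.pi * R / wavelength)).reshape(image.shape)

peak = np.unravel_index(np.argmax(np.abs(image)), image.shape)
print("peak at x,y ≈", x_grid[peak], y_grid[peak])                 # should be near (10, 300)
```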
18
Radar emitter signal deinterleaving is a key technology for radar signal reconnaissance and an essential part of battlefield situational awareness. This paper systematically reviews the mainstream technologies for radar emitter signal deinterleaving and summarizes the main research progress along three directions: methods based on interpulse modulation characteristics, methods based on intrapulse modulation characteristics, and machine learning-based methods. In particular, the paper explains the principles and technical characteristics of the latest deinterleaving techniques, such as those based on neural networks and data stream clustering. Finally, the shortcomings of current radar emitter deinterleaving technology are summarized, and future trends are predicted.
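As a toy illustration of the clustering direction mentioned above (a generic approach, not a specific method from the surveyed literature), the sketch below clusters an interleaved pulse stream by carrier frequency and pulse width and then reads off each cluster's Pulse Repetition Interval (PRI); all emitter parameters are invented.

```python
# Cluster interleaved pulse descriptor words in (RF, PW) space, then estimate PRI per cluster.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# three simulated emitters: (RF in MHz, PW in us, PRI in us), all hypothetical
emitters = [(9400.0, 1.0, 1000.0), (9600.0, 2.5, 730.0), (3100.0, 10.0, 2500.0)]
pulses = []
for rf, pw, pri in emitters:
    for toa in np.arange(0.0, 50000.0, pri):
        pulses.append((toa, rf + rng.normal(0, 2.0), pw + rng.normal(0, 0.05)))
pulses.sort()                                         # interleave by time of arrival
toa = np.array([p[0] for p in pulses])
feats = np.array([[p[1], p[2]] for p in pulses])

feats_n = (feats - feats.mean(0)) / feats.std(0)      # normalize (RF, PW) features
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(feats_n)

for k in sorted(set(labels) - {-1}):
    t = np.sort(toa[labels == k])
    print(f"cluster {k}: {t.size} pulses, median PRI ≈ {np.median(np.diff(t)):.1f} us")
```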
19
Multi-sensor multi-target tracking is a popular topic in the field of information fusion; it improves the accuracy and stability of target tracking by fusing information from multiple local sensors. According to the fusion architecture, multi-sensor multi-target tracking can be grouped into distributed, centralized, and hybrid fusion. Distributed fusion is widely applied in military and civilian fields owing to its strong reliability, high stability, and low requirements on network communication bandwidth. The key techniques of distributed multi-sensor multi-target tracking include multi-target tracking, sensor registration, track-to-track association, and data fusion. This paper reviews the theoretical basis and applicable conditions of these key techniques, highlights spatial registration and track association algorithms under incomplete measurements, and provides simulation results. Finally, the weaknesses of these key techniques are summarized, and their future development trends are surveyed.
20
As the electromagnetic spectrum becomes a key operational domain in modern warfare, radars will face increasingly complex, agile, and intelligent electromagnetic interference in future military operations. Cognitive Intelligent Radar (CIR) has become one of the key development directions in radar technology because its capabilities of active environmental perception, flexible transmit and receive design, intelligent signal processing, and resource scheduling allow it to adapt to the complex and changing electromagnetic confrontation environment of the battlefield. In this study, the CIR is decomposed into four functional modules: cognitive transmission, cognitive reception, intelligent signal processing, and intelligent resource scheduling. The anti-jamming principle of each link (interference perception, transmit design, receive design, signal processing, and resource scheduling) is then elucidated. Finally, representative literature from recent years is summarized and the technological development trends in this field are analyzed to provide a reference and basis for future research.