Most Downloaded

1
Weak target signal processing is the cornerstone and prerequisite for radar to achieve excellent detection performance. In complex practical applications, strong clutter interference, weak target signals, unclear image features, and the difficulty of extracting effective features have made weak target detection and recognition a long-standing challenge in radar processing. Conventional model-based processing methods do not accurately match the actual operating background and target characteristics, which limits their generality. Recently, deep learning has made significant progress in intelligent radar information processing. By building deep neural networks, deep learning algorithms can automatically learn feature representations from large amounts of radar data, improving target detection and recognition performance. This article systematically reviews recent research progress in the intelligent processing of weak radar targets in terms of signal processing, image processing, feature extraction, target classification, and target recognition. It discusses noise and clutter suppression, target signal enhancement, low- and high-resolution radar image and feature processing, and feature extraction and fusion. In view of the limited generalization ability, single-feature representation, and insufficient interpretability of existing intelligent processing methods for weak targets, the article outlines future directions, including small-sample target detection (based on transfer learning and reinforcement learning), multidimensional and multifeature fusion, network model interpretability, and joint knowledge- and data-driven processing.
2
To address radar target detection in sea clutter, this paper proposes a deep learning-based marine target detector. The detector increases the separability between target and clutter by fusing multiple complementary features extracted from different data sources, thereby improving detection performance for marine targets. Specifically, two feature extraction branches extract multiple levels of fast-time and range features from the range profiles and the Range-Doppler (RD) spectrum, respectively. A local-global feature extraction structure then captures sequence relations along the slow-time or Doppler dimension of the features. Furthermore, a feature fusion block based on adaptive convolution weight learning is proposed to efficiently fuse the slow-fast-time and RD features. Finally, detection results are obtained by upsampling and nonlinearly mapping the fused multilevel features. Experiments on two public radar databases validate the detection performance of the proposed detector.
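The adaptive-weight fusion idea can be illustrated with a short, self-contained sketch. The block below is not the paper's implementation; it only shows one common way to fuse two feature maps with input-dependent, per-channel branch weights, and the module name, channel count, and weight network are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveFusionBlock(nn.Module):
    """Fuse two feature maps with learned, input-dependent channel weights.

    A generic sketch of adaptive-weight fusion; the paper's block
    (convolution weight learning over slow-fast-time and RD features)
    may differ in structure and normalization.
    """
    def __init__(self, channels: int):
        super().__init__()
        # Predict one weight per branch and per channel from the pooled features.
        self.weight_net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
        )

    def forward(self, feat_rd: torch.Tensor, feat_rt: torch.Tensor) -> torch.Tensor:
        stacked = torch.cat([feat_rd, feat_rt], dim=1)              # (B, 2C, H, W)
        w = self.weight_net(stacked)                                # (B, 2C, 1, 1)
        w = torch.softmax(w.view(w.size(0), 2, -1, 1, 1), dim=1)    # normalize across the two branches
        return w[:, 0] * feat_rd + w[:, 1] * feat_rt                # (B, C, H, W)

# Example: fuse range-Doppler and range-time features of matching shape.
fusion = AdaptiveFusionBlock(channels=64)
fused = fusion(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(fused.shape)  # torch.Size([2, 64, 32, 32])
```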
3
This study proposes a Synthetic Aperture Radar (SAR) aircraft detection and recognition method that incorporates scattering perception to address target discreteness and the false alarms caused by strong background interference in SAR images. Global information is enhanced through a context-guided feature pyramid module, which suppresses strong disturbances in complex images and improves detection and recognition accuracy. In addition, scattering key points are used to locate targets, and a scattering-aware detection module is designed to finely correct the regression boxes and improve localization accuracy. To verify the effectiveness of the proposed method and promote research on SAR aircraft detection and recognition, this study also constructs and releases the high-resolution SAR-AIRcraft-1.0 dataset. The images are acquired by the Gaofen-3 satellite and comprise 4,368 images and 16,463 aircraft instances covering seven categories: A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and other. We apply the proposed method and common deep learning algorithms to the constructed dataset, and the experimental results demonstrate the effectiveness of the scattering-aware method. Furthermore, we establish performance benchmarks for the dataset on tasks such as SAR aircraft detection, recognition, and integrated detection and recognition.
4
Detecting small, slow-moving targets such as Unmanned Aerial Vehicles (UAVs) poses considerable challenges to radar target detection and recognition technology, and there is an urgent need for datasets that support the development and application of such techniques. This paper presents a dataset for detecting low-speed, small-size targets using multiband Frequency Modulated Continuous Wave (FMCW) radar. Ku-band and L-band FMCW radars are used to collect echo data from six UAV types; by varying the radar cycle and bandwidth, the data cover diverse time- and frequency-domain resolutions and measurement capabilities, yielding the LSS-FMCWR-1.0 dataset (Low, Slow, Small). To further enhance micro-Doppler feature extraction from UAVs, this paper proposes a method for UAV micro-Doppler extraction and parameter estimation based on the local maximum synchroextracting transform. Building on the Short-Time Fourier Transform (STFT), the method extracts the values at the maximum-energy points in the time-frequency domain to retain the useful signal and sharpen the time-frequency energy representation. Validation on the LSS-FMCWR-1.0 dataset shows that this approach reduces entropy by 5.3 dB on average and decreases rotor blade length estimation errors by 27.7% compared with traditional time-frequency methods. Because it balances high time-frequency resolution with parameter estimation capability, the proposed method lays a foundation for subsequent target recognition work.
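As a rough illustration of the "keep the maximum-energy time-frequency point" step built on the STFT, the sketch below extracts a single ridge per time slice. The actual local maximum synchroextracting transform additionally reassigns energy and handles multiple components; the function name, parameters, and toy signal here are placeholders.

```python
import numpy as np
from scipy.signal import stft

def max_energy_extract(x, fs, nperseg=128, noverlap=96):
    """Keep only the maximum-energy time-frequency point in each time slice.

    Simplified illustration of extracting the local energy maximum after an STFT;
    not the paper's full local maximum synchroextracting transform.
    """
    f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    mag = np.abs(Z)
    ridge = mag.argmax(axis=0)                     # strongest frequency bin per time slice
    cols = np.arange(Z.shape[1])
    sparse = np.zeros_like(Z)
    sparse[ridge, cols] = Z[ridge, cols]
    return f, t, sparse, f[ridge]                  # sparse TF map and instantaneous-frequency estimate

# Example: a toy rotor-like sinusoidal micro-Doppler signature.
fs = 2000.0
tt = np.arange(0, 1.0, 1 / fs)
sig = np.exp(1j * 2 * np.pi * (200 * tt + 50 * np.sin(2 * np.pi * 4 * tt)))
f, t, sparse_tf, inst_freq = max_energy_extract(sig, fs)
print(inst_freq[:5])
```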
5
With the rapid development of high-resolution radar imaging, artificial intelligence, and big data technologies, remarkable advances have been made in the intelligent interpretation of radar imagery. Despite growing demand, radar image interpretation faces various technical challenges, mainly because of the particularities of the radar sensor itself and the complexity of electromagnetic scattering phenomena. To address the problem of microwave radar imagery perception, this article proposes developing cross-disciplinary microwave vision research, which integrates electromagnetic physics and radar imaging mechanisms with the principles of human visual perception and computer vision technologies. The article discusses the concept and implications of microwave vision, proposes a microwave vision perception model, and explains its basic scientific problems and technical roadmap. Finally, it introduces the preliminary research progress of the authors' group on related issues.
6
Synthetic Aperture Radar (SAR), with its coherent imaging mechanism, offers the unique advantage of all-day, all-weather imaging. As a typical and important topic, aircraft detection and recognition have been widely studied in SAR image interpretation. With the introduction of deep learning, the performance of SAR-based aircraft detection and recognition has improved considerably. Drawing on our research team's expertise in the theory, algorithms, and applications of SAR target detection and recognition, particularly for aircraft, this paper presents a comprehensive review of deep learning-powered aircraft detection and recognition in SAR imagery. The review analyzes aircraft target characteristics and the current challenges of SAR-based detection and recognition, summarizes the latest research advances, the characteristics and application scenarios of various techniques, and collates public datasets and performance evaluation metrics. Finally, remaining challenges and potential research directions are discussed.
7
Fine terrain classification is one of the main applications of Synthetic Aperture Radar (SAR). In the multiband, fully polarized SAR operating mode, information on a target's response in different frequency bands and polarizations can be obtained, which can improve classification accuracy. However, existing domestic and international datasets provide only low-resolution, fully polarized classification data for individual bands, limited regions, and small sample sizes. Therefore, a multidimensional SAR dataset from Hainan is used to construct a multiband, fully polarized fine classification dataset with ample samples, diverse land cover categories, and high classification reliability. Supported by the high-resolution aerial observation system application calibration and verification project, this dataset is intended to promote multiband, fully polarized SAR classification applications. This paper describes the composition of the dataset and the contents and production methods of the first released batch (MPOLSAR-1.0). It also presents preliminary classification results based on polarimetric feature classification and classical machine learning methods, providing support for sharing and applying the dataset.
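For readers who want a feel for the classical machine-learning baseline, a minimal sketch follows. It trains a random forest on a synthetic polarimetric feature matrix; the real MPOLSAR-1.0 feature definitions, file formats, and class list are described in the paper, and every quantity here (feature count, class count, data) is a placeholder.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical setup: each sample is described by polarimetric features
# (e.g. channel powers or decomposition parameters) stacked across bands.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))        # 12 placeholder polarimetric features
y = rng.integers(0, 6, size=5000)      # 6 placeholder land-cover classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```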
8
The Back Projection (BP) algorithm is an important direction in the development of synthetic aperture radar imaging algorithms. However, its large computational load has hindered its adoption in engineering applications, so techniques that improve the computational efficiency of the BP algorithm have recently received widespread attention. This paper discusses fast BP algorithms based on different imaging-plane coordinate systems, including the range-azimuth plane coordinate system, the ground plane coordinate system, and non-Euclidean coordinate systems. First, the principle of the original BP algorithm and the impact of different coordinate systems on accelerating it are introduced, and the development history of the BP algorithm is reviewed. Then, research progress on fast BP algorithms in different imaging-plane coordinate systems is examined, focusing on the recent work of the authors' research team. Finally, engineering applications of fast BP algorithms are introduced, and research trends for fast BP imaging are discussed.
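For reference, a textbook time-domain BP kernel is sketched below to make the computational-load argument concrete: every pulse is back-projected onto every pixel, giving on the order of N_pulses x N_pixels interpolations. This is a generic illustration under simplifying assumptions (range-compressed echoes, stationary scene at zero height, no motion compensation) and is not any of the accelerated algorithms surveyed here.

```python
import numpy as np

def backprojection(echoes, platform_positions, fast_time, fc, grid_x, grid_y, c=3e8):
    """Textbook time-domain back projection onto a ground-plane grid.

    `echoes` are range-compressed pulses sampled on `fast_time` (two-way delay, s),
    `platform_positions` gives the antenna position (x, y, z) for each pulse,
    and `fc` is the carrier frequency.
    """
    image = np.zeros((grid_y.size, grid_x.size), dtype=complex)
    gx, gy = np.meshgrid(grid_x, grid_y)
    for echo, (px, py, pz) in zip(echoes, platform_positions):
        # Two-way delay from the platform position to every grid pixel.
        r = np.sqrt((gx - px) ** 2 + (gy - py) ** 2 + pz ** 2)
        tau = 2.0 * r / c
        # Interpolate the range-compressed echo at each pixel's delay,
        # then remove the residual carrier phase before coherent accumulation.
        samples = np.interp(tau, fast_time, echo.real) + 1j * np.interp(tau, fast_time, echo.imag)
        image += samples * np.exp(1j * 2 * np.pi * fc * tau)
    return image
```

The nested pixel/pulse loop is exactly the cost that fast BP variants attack, typically by splitting the aperture and choosing an imaging-plane coordinate system in which sub-images can be formed coarsely and then fused.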
9
Synthetic Aperture Radar (SAR) is extensively used in civilian and military domains owing to its all-weather, all-time monitoring capability. In recent years, deep learning has been widely employed to automatically interpret SAR images. However, because of constraints on satellite orbits and incidence angles, SAR target samples suffer from incomplete view coverage, which poses challenges for learning-based SAR target detection and recognition algorithms. This paper proposes a method for generating multi-view SAR target samples based on differentiable rendering, combining inverse Three-Dimensional (3D) reconstruction with forward rendering. Using a Convolutional Neural Network (CNN), the method infers a 3D representation of the target from a limited set of SAR views and then uses a Differentiable SAR Renderer (DSR) to render new samples from additional views, achieving sample interpolation in the view dimension. Moreover, the training objective is constructed with the DSR itself, eliminating the need for 3D ground-truth supervision. Experimental results on simulated data show that the method can effectively increase the number of multi-view SAR target images and improve the recognition rate of typical SAR targets under few-shot conditions.
10
Coherently combining distributed apertures adjusts the transmitted and received signals of multiple distributed small apertures so that a coordinated distributed system can obtain a high power-aperture product at a much lower cost than a single large aperture. It is therefore a promising and viable alternative to large apertures. This study describes the concept and principles of coherently combining distributed apertures. Depending on whether external signal inputs are required at the combination destination, implementation architectures are classified into two categories: closed-loop and open-loop. The development of distributed aperture coherent combining and its applications in fields such as missile defense, deep-space telemetry and control, ultralong-range radar detection, and radio astronomy are then comprehensively presented. Furthermore, the key techniques required to align the time and phase of the transmitted/received signals of each aperture are elaborated, including high-precision distributed time-frequency transfer and synchronization, and the estimation, measurement, calibration, and prediction of coherent combining parameters. Finally, a summary is presented, and directions for future work in this field are explored.
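The basic payoff of coherent combining can be seen in a toy numerical example: once per-aperture phases are aligned, signal voltages add while independent noise adds only in power, so the combined SNR grows roughly with the number of apertures N. The sketch below assumes ideal, known phase offsets and ignores time alignment, which the synchronization techniques above must estimate in practice.

```python
import numpy as np

# Toy receive-side coherent combining: aligned signal voltages add coherently,
# independent noise adds only in power, so SNR gain approaches N.
rng = np.random.default_rng(1)
N, samples = 8, 100_000

signal = np.exp(1j * 2 * np.pi * 0.1 * np.arange(samples))            # unit-power waveform
phases = rng.uniform(0, 2 * np.pi, size=N)                            # unknown per-aperture offsets
noise = (rng.normal(size=(N, samples)) + 1j * rng.normal(size=(N, samples))) / np.sqrt(2)
received = signal * np.exp(1j * phases)[:, None] + noise              # unit SNR per aperture

def measured_snr(x):
    # Project onto the known waveform to split x into signal and noise parts.
    s = np.vdot(signal, x) / samples
    return np.abs(s) ** 2 / np.mean(np.abs(x - s * signal) ** 2)

coherent = (received * np.exp(-1j * phases)[:, None]).sum(axis=0)     # ideal phase calibration
incoherent = received.sum(axis=0)                                     # no calibration
print("SNR gain with phase alignment    ~", round(float(measured_snr(coherent)), 2))    # close to N = 8
print("SNR gain without phase alignment ~", round(float(measured_snr(incoherent)), 2))  # about 1 on average
```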
11
Multi-sensor multi-target tracking is a popular topic in information fusion. It improves the accuracy and stability of target tracking by fusing information from multiple local sensors. According to the fusion architecture, multi-sensor multi-target tracking can be grouped into distributed, centralized, and hybrid fusion. Distributed fusion is widely applied in military and civilian fields owing to its strong reliability, high stability, and low demands on network communication bandwidth. The key techniques of distributed multi-sensor multi-target tracking include multi-target tracking, sensor registration, track-to-track association, and data fusion. This paper reviews the theoretical basis and applicable conditions of these key techniques, highlights spatial registration under incomplete measurements and track association algorithms, and provides simulation results. Finally, the weaknesses of these key techniques are summarized, and their future development trends are surveyed.
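Track-to-track association is often posed as an assignment problem. The sketch below shows a minimal global-nearest-neighbor variant with Euclidean cost and a fixed gate; practical distributed trackers use statistical distances that account for estimation error covariance and registration bias, so this is only an illustration (the function name, gate value, and track states are made up).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracks(tracks_a, tracks_b, gate=5.0):
    """Global-nearest-neighbor track-to-track association.

    Cost is the Euclidean distance between track state estimates; assigned pairs
    whose cost exceeds `gate` are rejected as unassociated.
    """
    cost = np.linalg.norm(tracks_a[:, None, :] - tracks_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]

# Example: tracks reported by two radars in a common coordinate frame (x, y in km).
radar_a = np.array([[10.0, 20.0], [35.0, 5.0], [50.0, 42.0]])
radar_b = np.array([[34.2, 5.6], [10.4, 19.5], [80.0, 80.0]])
print(associate_tracks(radar_a, radar_b))   # [(0, 1), (1, 0)]; the third pair is gated out
```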
12
Radars transmit electromagnetic waves that are reflected by different objects, and radar signal processing is highly significant because its analysis yields important information such as target situation and radial velocity. Deep learning has attracted much attention in many fields and can be applied to radar signal processing. Compared with traditional methods, deep learning can perform automatic feature extraction and yield highly accurate results; hence, this paper studies the application of deep learning algorithms to radar signal processing. In addition, the open research directions in this area are summarized around two issues, overfitting and interpretability, both of which are discussed in this paper.
13
Spaceborne Synthetic Aperture Radar (SAR), which can be mounted on space vehicles to collect information about the entire planet with all-day, all-weather imaging capability, has become an indispensable tool for Earth observation. Spaceborne SAR technology has advanced considerably, with resolution improving from meter to submeter level, imaging modes evolving from stripmap to azimuth beam steering such as the sliding spotlight mode, multichannel approaches entering practical use, and single polarization giving way to full polarization. As SAR techniques continue to develop, forthcoming systems will make breakthroughs in architectures, concepts, technologies, and modes, for example, high-resolution wide-swath imaging, multistatic SAR, payload miniaturization, and intelligence. All of these will extend the observation dimensions and provide multidimensional data. This study focuses on these forthcoming developments in spaceborne SAR.
14
With the growing demand for radar target detection, Sparse Recovery (SR) techniques based on the Compressive Sensing (CS) model have been widely used in radar signal processing. This paper first outlines the fundamental theory of SR and then introduces the sparse characteristics of radar signal processing from the perspectives of scene sparsity and observation sparsity. Exploiting these sparse properties, it reviews CS applications in radar signal processing, including spatial-domain processing, pulse compression, coherent processing, radar imaging, and target detection. Finally, the paper summarizes the applications of CS in radar signal processing.
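As a generic example of the sparse recovery machinery underlying these CS applications, the sketch below implements Orthogonal Matching Pursuit for y = Ax with a k-sparse x. It is a standard greedy solver and is not tied to any specific radar processing chain discussed in the paper.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x.

    One of the standard greedy solvers used in compressive sensing, shown here
    only as a generic illustration of sparse recovery.
    """
    residual, support = y.copy(), []
    x_hat = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(sparsity):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        # Least-squares re-estimate on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

# Example: a 3-sparse scene observed through a random measurement matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[[10, 97, 200]] = [1.0, -0.5, 2.0]
x_rec = omp(A, A @ x_true, sparsity=3)
print(np.nonzero(x_rec)[0], np.round(x_rec[np.nonzero(x_rec)], 3))
```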
15
Inverse Synthetic Aperture Radar (ISAR) images of spacecraft are composed of discrete scatterers and exhibit weak texture, high dynamics, and discontinuity. Consequently, traditional Three-Dimensional (3D) reconstruction algorithms yield only sparse point clouds from spacecraft ISAR images, and such point clouds cannot comprehensively describe the complete shape of a target, which hampers accurate extraction of its structural and pose parameters. To address this problem, and considering that space targets usually consist of specific modular structures, this paper proposes a method that abstracts parametric structural primitives from space target ISAR images to represent their 3D structure. First, an energy accumulation algorithm is used to obtain a sparse point cloud of the target from the ISAR images. The point cloud is then fitted with parameterized primitives. Finally, the primitives are projected onto the ISAR imaging plane and optimized by maximizing their similarity with the target image to obtain the optimal 3D primitive representation. Compared with traditional point cloud reconstruction, this method provides a more complete description of the target's 3D structure. Moreover, the obtained primitive parameters directly represent the attitude and structure of the target and can support subsequent tasks such as target recognition and analysis. Simulation experiments demonstrate that the method effectively achieves 3D abstraction of space targets from ISAR image sequences.
16
Non-Line-Of-Sight (NLOS) 3D imaging radar is an emerging technology that uses multipath scattering echoes to detect hidden targets. However, challenges such as separating multipath echoes, mitigating aperture occlusion, and handling reflective-surface phase errors prevent traditional Line-Of-Sight (LOS) radar imaging methods from achieving high-precision imaging of hidden targets. To address these challenges, this paper proposes a precise imaging method for NLOS hidden targets based on Sparse Iterative Reconstruction (NSIR). First, a multipath signal model for NLOS millimeter-wave 3D imaging radar is established; exploiting the characteristics of LOS/NLOS echoes, the multipath echoes of hidden targets are extracted with a model-driven approach to separate the LOS and NLOS signals. Second, a total-variation multiconstraint optimization problem for reconstructing hidden targets is formulated that incorporates the phase errors of the multipath reflective surfaces. The problem is solved jointly using the split Bregman total-variation regularization operator and a minimum mean square error criterion for phase-error estimation, enabling precise imaging and contour reconstruction of NLOS targets. Finally, a planar-scanning 3D imaging radar experimental platform is constructed, and targets such as knives and iron racks are measured in a corner NLOS scenario. The results validate the ability of NLOS millimeter-wave 3D imaging radar to detect hidden targets and the effectiveness of the proposed method.
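To give a feel for the split Bregman total-variation operator mentioned above, the sketch below solves a plain 1D TV denoising problem. It does not include the multipath imaging model or the reflective-surface phase-error estimation that the paper couples with the TV constraint, and all parameter values are illustrative.

```python
import numpy as np

def tv_denoise_split_bregman(f, mu=5.0, lam=1.0, iters=50):
    """1D total-variation denoising via split Bregman iterations.

    Solves  min_u (mu/2)||u - f||^2 + ||D u||_1  with D a finite-difference operator.
    A small illustration of the split Bregman TV machinery only.
    """
    n = f.size
    D = np.diff(np.eye(n), axis=0)                  # (n-1, n) finite differences
    A = mu * np.eye(n) + lam * D.T @ D              # fixed normal-equations matrix
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()

    def shrink(x, t):
        # Soft thresholding, the proximal operator of the l1 norm.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)
        b = b + Du - d
    return u

# Example: recover a piecewise-constant profile from noisy samples.
rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(60), np.ones(60), 0.3 * np.ones(80)])
noisy = truth + 0.1 * rng.normal(size=truth.size)
print("mean abs error:", np.round(np.abs(tv_denoise_split_bregman(noisy) - truth).mean(), 4))
```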
17
Multi-Radar Collaborative Surveillance (MRCS) technology links multiple radars into a geographically distributed detection configuration, fully exploiting detection gains from spatial and frequency diversity and thereby enhancing the detection performance and survivability of radar systems in complex electromagnetic environments. MRCS is one of the key development directions in radar technology and has received extensive attention in recent years. Considerable research has been conducted, with numerous results in system architecture design, signal processing, and resource scheduling. This paper first summarizes the concept of MRCS technology, elaborates the signal-processing-based closed-loop mechanism of cognitive collaboration, and analyzes the challenges faced in implementing MRCS. It then focuses on cognitive tracking and resource scheduling algorithms, summarizing their essential characteristics, system configurations, tracking models, information fusion, performance evaluation, resource scheduling algorithms, optimization criteria, and the cognitive process of cognitive tracking, and further analyzes the relationship between multi-radar cognitive tracking and system resource scheduling. Recent research trends in cognitive tracking and resource scheduling are then identified and summarized in terms of five aspects: radar resource elements, information fusion architectures, tracking performance indicators, resource scheduling models, and complex task scenarios. Finally, the paper is summarized and future technologies in this field are discussed to provide a reference for subsequent research.
18
The feature extraction capability of a Convolutional Neural Network (CNN) is related to the number of its parameters: more parameters generally provide stronger feature extraction, but they also require a large amount of training data to learn effectively. In practical applications, the Synthetic Aperture Radar (SAR) images available for model training are often limited. Reducing the number of CNN parameters lowers the demand for training samples, but it also diminishes the feature expression ability of the network and thus its target recognition performance. To solve this problem, this paper proposes a deep network for SAR target recognition based on Attributed Scattering Center (ASC) convolutional kernel modulation. Given the electromagnetic scattering characteristics of SAR images, the network modulates a small number of learnable CNN convolutional kernels with predefined ASC kernels of different orientations and lengths, extracting scattering structures and edge features that better match SAR target characteristics. This modulation generates additional convolutional kernels, reducing the number of network parameters while preserving feature extraction capability. In addition, the network uses ASC-modulated convolutional kernels in the shallow layers to extract scattering structures and edge features consistent with SAR image characteristics, and standard CNN convolutional kernels in the deeper layers to extract semantic features. By combining ASC-modulated and CNN kernels, the network accounts for the electromagnetic scattering characteristics of SAR targets while retaining the feature extraction advantages of CNNs. Experiments on SAR image data demonstrate that the proposed network achieves excellent target recognition performance while reducing the demand for training samples.
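The kernel-modulation mechanism can be sketched in a few lines: a small set of learnable base kernels is multiplied elementwise by a fixed bank of oriented, length-parameterized kernels to produce a much larger kernel bank. The oriented Gaussian ridges below are only stand-ins for the true ASC-derived kernels, and the module layout is an assumption rather than the paper's network.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def oriented_kernels(size=7, lengths=(3, 5), angles=(0, 45, 90, 135)):
    """Fixed, oriented line-segment kernels standing in for ASC kernels.

    The paper's kernels come from the attributed scattering center model
    (orientation and length parameters); elongated Gaussian ridges are used
    here only to illustrate the modulation mechanism.
    """
    ys, xs = torch.meshgrid(torch.arange(size) - size // 2,
                            torch.arange(size) - size // 2, indexing="ij")
    bank = []
    for L in lengths:
        for a in angles:
            t = math.radians(a)
            along = xs * math.cos(t) + ys * math.sin(t)     # coordinate along the segment
            across = -xs * math.sin(t) + ys * math.cos(t)   # coordinate across it
            k = torch.exp(-(across ** 2) / 2.0) * (along.abs() <= L / 2).float()
            bank.append(k / k.sum())
    return torch.stack(bank)                                # (n_asc, size, size)

class ASCModulatedConv(nn.Module):
    """Expand a few learnable kernels into many by modulating with fixed ASC-style kernels."""
    def __init__(self, in_ch, base_kernels=4, size=7):
        super().__init__()
        self.base = nn.Parameter(torch.randn(base_kernels, in_ch, size, size) * 0.1)
        self.register_buffer("asc", oriented_kernels(size))  # fixed, not trained

    def forward(self, x):
        # Each output kernel = learnable base kernel (elementwise) * one ASC-style kernel.
        w = (self.base[:, None] * self.asc[None, :, None]).flatten(0, 1)  # (base*n_asc, in_ch, k, k)
        return F.conv2d(x, w, padding=w.shape[-1] // 2)

out = ASCModulatedConv(in_ch=1)(torch.randn(2, 1, 64, 64))
print(out.shape)   # torch.Size([2, 32, 64, 64]) -> 4 base kernels x 8 oriented kernels
```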
19
As one of the core components of Advanced Driver Assistance Systems (ADAS), automotive millimeter-wave radar has become a focus of researchers and manufacturers worldwide because it offers all-day, all-weather operation, miniaturization, high integration, and key sensing capabilities. The core performance indicators of automotive millimeter-wave radar are range, velocity, angular resolution, and field of view, while accuracy, cost, real-time operation, detection performance, and size are the key issues to consider. Increasing performance requirements pose several challenges for the signal processing of millimeter-wave radar systems, and radar signal processing technology is crucial for meeting these more stringent requirements. Obtaining dense radar point clouds, generating accurate radar imaging results, and mitigating mutual interference among multiple radars are the key problems and the foundation for subsequent tracking, recognition, and other applications. Therefore, this paper discusses the practical application of automotive millimeter-wave radar systems from the perspective of key signal processing technologies, summarizes relevant research results, and focuses on point cloud imaging, synthetic aperture radar imaging, and interference suppression. Finally, we summarize the domestic and international research status and forecast future development trends for automotive millimeter-wave radar systems, with the hope of providing insight to readers in related fields.
20
Flying birds and Unmanned Aerial Vehicles (UAVs) are typical “low, slow, and small” targets with low observability. Effective monitoring and identification of these two classes of targets has become an urgent problem for ensuring the safety of air routes and urban areas. Flying birds and UAVs come in many types and are characterized by low flight altitudes, strong maneuverability, small radar cross-sections, and complicated detection environments, which pose great challenges for target detection worldwide. Methods and technologies must therefore be developed that make such targets “visible” (high detection ability) and “clear-cut” (high recognition probability), that is, that can finely describe and recognize UAVs, flying birds, and other “low-slow-small” targets. This paper reviews recent progress in detection and recognition technologies for rotor UAVs and flying birds in complex scenes and discusses effective detection and recognition methods, including echo modeling and recognition of micro-motion characteristics, enhancement and extraction of maneuvering features in ubiquitous observation mode, distributed multi-view feature fusion, differences in motion trajectories, and intelligent classification via deep learning. Finally, the problems of existing approaches are summarized, and future development prospects for detection and recognition of flying birds and UAVs in complex scenarios are considered.