Most Downloaded

1
Weak target signal processing is the cornerstone and prerequisite for radar to achieve excellent detection performance. In complex practical applications, weak target detection and recognition have always been challenging in the field of radar processing because of strong clutter interference, weak target signals, unclear image features, and the difficulty of extracting effective features. Conventional model-based processing methods do not accurately match the actual operating background and target characteristics, leading to poor generality. Recently, deep learning has made significant progress in radar intelligent information processing. By building deep neural networks, deep learning algorithms can automatically learn feature representations from large amounts of radar data, improving the performance of target detection and recognition. This article systematically reviews and summarizes recent research progress in the intelligent processing of weak radar targets in terms of signal processing, image processing, feature extraction, target classification, and target recognition. It discusses noise and clutter suppression, target signal enhancement, low- and high-resolution radar image and feature processing, and feature extraction and fusion. In response to the limited generalization ability, single-feature expression, and insufficient interpretability of existing intelligent processing methods for weak targets, the article highlights future developments in small-sample target detection (based on transfer learning and reinforcement learning), multidimensional and multifeature fusion, network model interpretability, and joint knowledge- and data-driven processing.
2
With the rapid development of high-resolution radar imaging technology, artificial intelligence, and big data technology, remarkable advancements have been made in the intelligent interpretation of radar imagery. Despite growing demands, radar image interpretation now faces various technical challenges, mainly because of the particularity of the radar sensor itself and the complexity of electromagnetic scattering phenomena. To address the problem of microwave radar imagery perception, this article proposes the development of cross-disciplinary microwave vision research, which integrates electromagnetic physics and the radar imaging mechanism with human visual perception principles and computer vision technologies. This article discusses the concept and implications of microwave vision, proposes a microwave vision perception model, and explains its basic scientific problems and technical roadmaps. Finally, it introduces the preliminary research progress on related issues achieved by the authors’ group.
3
Synthetic Aperture Radar (SAR), with its coherent imaging mechanism, has the unique advantage of all-day and all-weather imaging. As a typical and important topic, aircraft detection and recognition have been widely studied in the field of SAR image interpretation. With the introduction of deep learning, the performance of aircraft detection and recognition, which is based on SAR imagery, has considerably improved. This paper combines the expertise gathered by our research team on the theory, algorithms, and applications of SAR image-based target detection and recognition, particularly aircraft. Additionally, this paper presents a comprehensive review of deep learning-powered aircraft detection and recognition based on SAR imagery. This review includes a detailed analysis of the aircraft target characteristics and current challenges associated with SAR image-based detection and recognition. Furthermore, the review summarizes the latest research advancements, characteristics, and application scenarios of various technologies and collates public datasets and performance evaluation metrics. Finally, several challenges and potential research prospects are discussed.
4
This study proposes a Synthetic Aperture Radar (SAR) aircraft detection and recognition method combined with scattering perception to address the problems of target discreteness and false alarms caused by strong background interference in SAR images. Global information is enhanced through a context-guided feature pyramid module, which suppresses strong disturbances in complex images and improves the accuracy of detection and recognition. Additionally, scattering key points are used to locate targets, and a scattering-aware detection module is designed to finely correct the regression boxes and improve target localization accuracy. This study also constructs and releases the high-resolution SAR-AIRcraft-1.0 dataset to verify the effectiveness of the proposed method and promote research on SAR aircraft detection and recognition. The images in this dataset are obtained from the Gaofen-3 satellite; the dataset contains 4,368 images and 16,463 aircraft instances covering seven aircraft categories, namely A220, A320/321, A330, ARJ21, Boeing 737, Boeing 787, and other. We apply the proposed method and common deep learning algorithms to the constructed dataset. The experimental results demonstrate the effectiveness of the method combined with scattering perception. Furthermore, we establish performance benchmarks for the dataset on different tasks, such as SAR aircraft detection, recognition, and integrated detection and recognition.
5
Detection of small, slow-moving targets such as Unmanned Aerial Vehicles (UAVs) poses considerable challenges to radar target detection and recognition technology, and there is an urgent need to establish relevant datasets to support the development and application of techniques for detecting such targets. This paper presents a dataset for detecting low-speed, small-size targets using multiband Frequency Modulated Continuous Wave (FMCW) radar. Ku-band and L-band FMCW radars are used to collect echo data from six UAV types, and diverse time- and frequency-domain resolutions and measurement capabilities are obtained by varying the radar cycle and bandwidth, yielding the LSS-FMCWR-1.0 dataset (LSS: Low, Slow, Small). To further enhance the capability to extract micro-Doppler features from UAVs, this paper proposes a method for UAV micro-Doppler extraction and parameter estimation based on the local maximum synchroextracting transform. Building on the Short-Time Fourier Transform (STFT), this method extracts the values at the maximum-energy points in the time-frequency domain to retain the useful signal and sharpen the time-frequency energy representation. Validation and analysis on the LSS-FMCWR-1.0 dataset demonstrate that this approach reduces entropy by 5.3 dB on average and decreases rotor blade length estimation errors by 27.7% compared with traditional time-frequency methods. Moreover, the proposed method provides a foundation for subsequent target recognition because it balances high time-frequency resolution and parameter estimation capability.
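As a rough illustration of the time-frequency sharpening idea described above (a generic sketch, not the authors' exact algorithm), the following snippet computes an STFT of a synthetic slow-time signal and keeps only the maximum-energy frequency bin in each time slice, a simplification of the local-maximum synchroextraction mentioned in the abstract; the signal model, sampling rate, and window parameters are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import stft

# Assumed example parameters: 8 kHz slow-time sampling, a toy micro-Doppler tone.
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
x = np.exp(1j * 2 * np.pi * (200 * t + 150 * np.sin(2 * np.pi * 20 * t)))  # rotor-like modulation

# Standard STFT time-frequency representation (two-sided, since the signal is complex).
f, tau, Z = stft(x, fs=fs, nperseg=256, noverlap=192, return_onesided=False)
energy = np.abs(Z) ** 2

# Synchroextraction-style sharpening: at each time instant keep only the
# maximum-energy frequency bin and zero out the rest.
ridge_bins = np.argmax(energy, axis=0)                       # strongest bin per time column
sharpened = np.zeros_like(Z)
cols = np.arange(Z.shape[1])
sharpened[ridge_bins, cols] = Z[ridge_bins, cols]

# Instantaneous frequency estimate along the ridge (useful for rotor parameter estimation).
inst_freq = f[ridge_bins]
print(inst_freq[:10])
```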
6
Considering the problem of radar target detection in a sea clutter environment, this paper proposes a deep learning-based marine target detector. The proposed detector increases the difference between target and clutter by fusing multiple complementary features extracted from different data sources, thereby improving detection performance for marine targets. Specifically, the detector uses two feature extraction branches to extract multiple levels of fast-time and range features from the range profiles and the Range-Doppler (RD) spectrum, respectively. Subsequently, a local-global feature extraction structure is developed to extract sequence relations along the slow-time or Doppler dimension of the features. Furthermore, a feature fusion block based on adaptive convolution weight learning is proposed to efficiently fuse the slow-fast-time and RD features. Finally, the detection results are obtained by applying upsampling and nonlinear mapping to the fused multilevel features. Experiments on two public radar databases validate the detection performance of the proposed detector.
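The adaptive fusion idea, learning data-dependent weights with which to combine two feature branches, can be sketched roughly as follows. This is a generic PyTorch illustration under assumed layer sizes, not the paper's exact fusion block.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Fuse two feature maps with weights predicted from the features themselves."""
    def __init__(self, channels: int):
        super().__init__()
        # Predict one weight map per branch from the concatenated features.
        self.weight_net = nn.Sequential(
            nn.Conv2d(2 * channels, 2, kernel_size=1),
            nn.Softmax(dim=1),          # the two branch weights sum to 1 at each pixel
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        w = self.weight_net(torch.cat([feat_a, feat_b], dim=1))   # (B, 2, H, W)
        return w[:, 0:1] * feat_a + w[:, 1:2] * feat_b

# Toy usage with assumed shapes: batch 4, 64 channels, 32x32 feature maps.
fusion = AdaptiveFusion(64)
fused = fusion(torch.randn(4, 64, 32, 32), torch.randn(4, 64, 32, 32))
print(fused.shape)   # torch.Size([4, 64, 32, 32])
```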
7
Fine terrain classification is one of the main applications of Synthetic Aperture Radar (SAR). In the multiband fully polarimetric SAR operating mode, information on the target's response in different frequency bands and polarizations can be obtained, which can improve target classification accuracy. However, existing domestic and international datasets offer only low-resolution fully polarimetric classification data for individual bands, limited regions, and small sample sizes. Thus, a multidimensional SAR dataset from Hainan is used to construct a multiband fully polarimetric fine classification dataset with ample sample size, diverse land cover categories, and high classification reliability. This dataset, supported by the high-resolution aerial observation system application calibration and verification project, will promote the development of multiband fully polarimetric SAR classification applications. This paper provides an overview of the composition of the dataset and describes the information and production methods for the first batch of published data (MPOLSAR-1.0). Furthermore, it presents preliminary classification results based on polarimetric feature classification and classical machine learning methods, providing support for the sharing and application of the dataset.
8
The Back Projection (BP) algorithm is an important direction in the development of synthetic aperture radar imaging algorithms. However, the large computational load of the BP algorithm has hindered its adoption in engineering applications; therefore, techniques for improving its computational efficiency have recently received widespread attention. This paper discusses fast BP algorithms based on various imaging-plane coordinate systems, including the range-azimuth plane coordinate system, the ground plane coordinate system, and non-Euclidean coordinate systems. First, the principle of the original BP algorithm and the impact of different coordinate systems on accelerating it are introduced, and the development history of the BP algorithm is reviewed. Then, the research progress of fast BP algorithms based on different imaging-plane coordinate systems is examined, focusing on recent work completed by the authors’ research team. Finally, the engineering applications of fast BP algorithms are introduced, and research trends for fast BP imaging algorithms are discussed.
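To make the principle of the original BP algorithm concrete, the minimal sketch below back-projects range-compressed pulses onto a flat ground-plane grid with nearest-bin range interpolation. The geometry, sampling, and function signature are assumptions for illustration rather than an optimized or engineering-grade implementation.

```python
import numpy as np

def backprojection(rc_pulses, sensor_pos, grid_x, grid_y, fc, c=3e8, dr=1.0):
    """Naive back projection of range-compressed pulses onto a flat ground grid.

    rc_pulses : (num_pulses, num_range_bins) complex range-compressed data
    sensor_pos: (num_pulses, 3) antenna phase-center positions per pulse
    grid_x/y  : 2-D arrays of image pixel coordinates (pixel height z = 0 assumed)
    fc        : carrier frequency in Hz; dr: range-bin spacing in meters
    """
    image = np.zeros(grid_x.shape, dtype=complex)
    wavelength = c / fc
    for p in range(rc_pulses.shape[0]):
        # Range from this pulse's antenna position to every pixel.
        dx = grid_x - sensor_pos[p, 0]
        dy = grid_y - sensor_pos[p, 1]
        dz = -sensor_pos[p, 2]
        r = np.sqrt(dx**2 + dy**2 + dz**2)
        # Nearest range bin and phase compensation for coherent accumulation.
        bins = np.clip(np.round(r / dr).astype(int), 0, rc_pulses.shape[1] - 1)
        image += rc_pulses[p, bins] * np.exp(1j * 4 * np.pi * r / wavelength)
    return image
```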
9
With the growing demand for radar target detection, Sparse Recovery (SR) technology based on the Compressive Sensing (CS) model has been widely used in radar signal processing. This paper first outlines the fundamental theory of SR and then introduces the sparse characteristics in radar signal processing from the perspectives of scene sparsity and observation sparsity. Subsequently, it explores these sparse properties to provide an overview of CS applications in radar signal processing, including spatial domain processing, pulse compression, coherent processing, radar imaging, and target detection. Finally, the paper summarizes the applications of CS in radar signal processing.
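As a generic illustration of the SR framework built on the CS model (not a specific algorithm from the surveyed literature), the sketch below recovers a sparse scene from compressed measurements y = Ax + n using the Iterative Shrinkage-Thresholding Algorithm (ISTA); the dictionary, dimensions, and regularization weight are assumptions.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Minimize 0.5 * ||y - A x||^2 + lam * ||x||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - y)
        z = x - step * grad
        # Complex soft thresholding: keep the phase, shrink the magnitude.
        mag = np.maximum(np.abs(z) - lam * step, 0.0)
        x = mag * np.exp(1j * np.angle(z))
    return x

# Toy scene: 20 random measurements of a length-100 scene with 3 scatterers.
rng = np.random.default_rng(0)
A = (rng.standard_normal((20, 100)) + 1j * rng.standard_normal((20, 100))) / np.sqrt(20)
x_true = np.zeros(100, dtype=complex)
x_true[[10, 40, 77]] = [1.0, 0.5 + 0.5j, -0.8]
y = A @ x_true
x_hat = ista(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.2))          # indices with significant recovered amplitude
```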
10
As one of the core components of Advanced Driver Assistance Systems (ADAS), automotive millimeter-wave radar has become a focus of researchers and manufacturers worldwide because of its advantages of all-day, all-weather operation, miniaturization, high integration, and key sensing capabilities. The core performance indicators of automotive millimeter-wave radar are range, velocity, angular resolution, and field of view, while accuracy, cost, real-time and detection performance, and size are the key issues to be considered. Increasing performance requirements pose several challenges for the signal processing of millimeter-wave radar systems, and radar signal processing technology is crucial for meeting these more stringent requirements. Obtaining dense radar point clouds, generating accurate radar imaging results, and mitigating mutual interference among multiple radars are key problems and the foundation for subsequent tracking, recognition, and other applications. Therefore, this paper discusses the practical application of automotive millimeter-wave radar systems from the perspective of key signal processing technologies, summarizes relevant research results, and focuses on point cloud imaging processing, synthetic aperture radar imaging processing, and interference suppression. Finally, we summarize the state of research at home and abroad and forecast future development trends for automotive millimeter-wave radar systems, with the hope of informing readers in related fields.
11
Metasurfaces are two-dimensional artificial structures with numerous subwavelength elements arranged periodically or aperiodically. They have demonstrated their exceptional capabilities in electromagnetic wave polarization manipulation, opening new avenues for manipulating electromagnetic waves. Metasurfaces exhibiting electrically controlled reconfigurable polarization manipulation have garnered widespread research interest. These unique metasurfaces can dynamically adjust the polarization state of electromagnetic waves through real-time modification of their structure or material properties via electrical signals. This article provides a comprehensive overview of the development of metasurfaces exhibiting electrically controlled reconfigurable polarization manipulation and explores the technological advancements of metasurfaces with different transmission characteristics in the microwave region in detail. Furthermore, it delves into and anticipates the future development of this technology.
12
Flying birds and Unmanned Aerial Vehicles (UAVs) are typical “low, slow, and small” targets with low observability. The need for effective monitoring and identification of these two target types has become urgent and must be addressed to ensure the safety of air routes and urban areas. Flying birds and UAVs come in many types and are characterized by low flight altitudes, strong maneuverability, small radar cross sections, and complicated detection environments, posing great challenges for target detection worldwide. “Visible (high detection ability) and clear-cut (high recognition probability)” methods and technologies must be developed that can finely describe and recognize UAVs, flying birds, and other “low, slow, small” targets. This paper reviews recent progress in detection and recognition technologies for rotor UAVs and flying birds in complex scenes and discusses effective detection and recognition methods, including echo modeling and micro-motion feature recognition, the enhancement and extraction of maneuvering features in ubiquitous observation mode, distributed multi-view feature fusion, differences in motion trajectories, and intelligent classification via deep learning. Lastly, the problems of existing research approaches are summarized, and the future development prospects of target detection and recognition technologies for flying birds and UAVs in complex scenarios are considered.
13
Spaceborne Synthetic Aperture Radar (SAR), which can be mounted on space vehicles to collect information about the entire planet with all-day, all-weather imaging capability, has become an indispensable means of earth observation. Spaceborne SAR technology has achieved considerable improvements, including resolution advancing from meter to submeter level, imaging modes evolving from stripmap to azimuth beam steering such as sliding spotlight, the practical application of multichannel approaches, and the transition from single polarization to full polarization. With the development of SAR techniques, forthcoming SAR systems will make breakthroughs in architectures, concepts, technologies, and modes, for example, high-resolution wide-swath imaging, multistatic SAR, payload miniaturization, and intelligence, all of which will extend the observation dimensions and yield multidimensional data. This study focuses on these forthcoming developments of spaceborne SAR.
14
This paper proposes a novel multimodal collaborative perception framework to enhance the situational awareness of autonomous vehicles. First, a multimodal fusion baseline system is built that effectively integrates Light Detection and Ranging (LiDAR) point clouds and camera images. This system provides a comparable benchmark for subsequent research. Second, various well-known feature fusion strategies are investigated in the context of collaborative scenarios, including channel-wise concatenation, element-wise summation, and transformer-based methods. This study aims to seamlessly integrate intermediate representations from different sensor modalities, facilitating an exhaustive assessment of their effects on model performance. Extensive experiments were conducted on a large-scale open-source simulation dataset, i.e., OPV2V. The results showed that attention-based multimodal fusion outperforms alternative solutions, delivering more precise target localization during complex traffic scenarios, thereby enhancing the safety and reliability of autonomous driving systems.
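The three fusion strategies compared in the study, channel-wise concatenation, element-wise summation, and transformer-based fusion, can be contrasted in a few lines of PyTorch. The feature shapes and layer sizes below are assumptions for illustration, not the OPV2V baseline configuration.

```python
import torch
import torch.nn as nn

# Three common intermediate-fusion strategies for LiDAR and camera features.
# Shapes are assumptions for illustration: (batch, channels, H, W).
lidar_feat = torch.randn(2, 128, 64, 64)
camera_feat = torch.randn(2, 128, 64, 64)

# 1) Channel-wise concatenation followed by a 1x1 convolution.
concat_fuse = nn.Conv2d(256, 128, kernel_size=1)
fused_concat = concat_fuse(torch.cat([lidar_feat, camera_feat], dim=1))

# 2) Element-wise summation.
fused_sum = lidar_feat + camera_feat

# 3) A simple transformer-style fusion: cross-attention from LiDAR queries to camera keys/values.
attn = nn.MultiheadAttention(embed_dim=128, num_heads=4, batch_first=True)
q = lidar_feat.flatten(2).transpose(1, 2)      # (B, H*W, C)
kv = camera_feat.flatten(2).transpose(1, 2)
fused_attn, _ = attn(q, kv, kv)
fused_attn = fused_attn.transpose(1, 2).reshape_as(lidar_feat)

print(fused_concat.shape, fused_sum.shape, fused_attn.shape)
```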
15
Multi-sensor multi-target tracking is a popular topic in the field of information fusion. It improves the accuracy and stability of target tracking by fusing information from multiple local sensors. According to the fusion architecture, multi-sensor multi-target tracking can be grouped into distributed, centralized, and hybrid fusion. Distributed fusion is widely applied in military and civilian fields owing to its strong reliability, high stability, and low requirements on network communication bandwidth. Key techniques of distributed multi-sensor multi-target tracking include multi-target tracking, sensor registration, track-to-track association, and data fusion. This paper reviews the theoretical basis and applicable conditions of these key techniques, highlights the spatial registration algorithm with incomplete measurements and the track association algorithm, and provides simulation results. Finally, the weaknesses of the key techniques of distributed multi-sensor multi-target tracking are summarized, and their future development trends are surveyed.
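As a minimal illustration of the track-to-track association step (a generic global-nearest-neighbor assignment, not the specific algorithm highlighted in the paper), the sketch below pairs tracks from two sensors by solving an assignment problem over their pairwise distances; the gate value and toy positions are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracks(tracks_a, tracks_b, gate=5.0):
    """Globally assign tracks from two sensors by minimizing summed Euclidean distance.

    tracks_a, tracks_b : (Na, 2) and (Nb, 2) track positions in a common coordinate frame
    gate               : distances above this value are treated as infeasible pairings
    Returns a list of (i, j) index pairs that pass the gate.
    """
    # Pairwise distance matrix between the two track sets.
    cost = np.linalg.norm(tracks_a[:, None, :] - tracks_b[None, :, :], axis=-1)
    cost = np.where(cost > gate, 1e6, cost)          # soft-forbid gated-out pairs
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]

# Toy example with assumed positions from two radars after spatial registration.
radar1 = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, -3.0]])
radar2 = np.array([[9.5, 5.2], [0.3, -0.2], [40.0, 40.0]])
print(associate_tracks(radar1, radar2))   # [(0, 1), (1, 0)]
```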
16
The feature extraction capability of Convolutional Neural Networks (CNNs) is related to the number of their parameters: in general, more parameters yield stronger feature extraction, but a considerable amount of training data is required to learn them effectively. In practical applications, the Synthetic Aperture Radar (SAR) images available for model training are often limited. Reducing the number of parameters in a CNN decreases the demand for training samples, but it also diminishes the feature expression ability of the network, which degrades target recognition performance. To solve this problem, this paper proposes a deep network for SAR target recognition based on Attributed Scattering Center (ASC) convolutional kernel modulation. Considering the electromagnetic scattering characteristics of SAR images, the proposed network extracts scattering structures and edge features that better match SAR target characteristics by modulating a small number of CNN convolutional kernels with predefined ASC kernels of different orientations and lengths. This approach generates additional convolutional kernels, reducing the network parameters while maintaining feature extraction capability. In addition, the designed network uses ASC-modulated convolutional kernels at shallow layers to extract scattering structures and edge features consistent with SAR image characteristics, while using ordinary CNN convolutional kernels at deeper layers to extract semantic features. By simultaneously using ASC-modulated and CNN convolutional kernels, the network attends to the electromagnetic scattering characteristics of SAR targets while retaining the feature extraction advantages of CNNs. Experiments on SAR image datasets demonstrate that the proposed network achieves excellent SAR target recognition performance while reducing the demand for training samples.
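The kernel-modulation idea can be pictured with the rough PyTorch sketch below, in which a small set of learnable base kernels is element-wise multiplied by a fixed bank of oriented templates to spawn a larger kernel bank without extra learnable parameters. The template form (a simple oriented Gaussian ridge standing in for ASC responses) and all sizes are assumptions, not the paper's construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def oriented_templates(k=7, angles=(0, 45, 90, 135), length=2.0):
    """Fixed bank of oriented Gaussian-ridge templates (a stand-in for ASC kernels)."""
    ys, xs = torch.meshgrid(torch.arange(k) - k // 2, torch.arange(k) - k // 2, indexing="ij")
    bank = []
    for a in angles:
        theta = torch.tensor(a * torch.pi / 180.0)
        u = xs * torch.cos(theta) + ys * torch.sin(theta)    # along-ridge coordinate
        v = -xs * torch.sin(theta) + ys * torch.cos(theta)   # across-ridge coordinate
        bank.append(torch.exp(-(v ** 2) / 2.0 - (u ** 2) / (2.0 * length ** 2)))
    return torch.stack(bank)                                  # (num_angles, k, k)

class ModulatedConv(nn.Module):
    """Each learnable base kernel is modulated by every fixed template, multiplying
    the number of output channels without adding learnable parameters."""
    def __init__(self, in_ch, base_out, k=7):
        super().__init__()
        self.base = nn.Parameter(torch.randn(base_out, in_ch, k, k) * 0.1)
        self.register_buffer("templates", oriented_templates(k))   # (T, k, k)

    def forward(self, x):
        # (base_out, 1, in_ch, k, k) * (1, T, 1, k, k) -> (base_out*T, in_ch, k, k)
        kernels = (self.base.unsqueeze(1) * self.templates[None, :, None]).flatten(0, 1)
        return F.conv2d(x, kernels, padding=self.base.shape[-1] // 2)

layer = ModulatedConv(in_ch=1, base_out=8)
out = layer(torch.randn(2, 1, 64, 64))
print(out.shape)    # torch.Size([2, 32, 64, 64])
```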
17
Synthetic Aperture Radar (SAR) is extensively utilized in civilian and military domains due to its all-weather, all-time monitoring capabilities. In recent years, deep learning has been widely employed to automatically interpret SAR images. However, due to the constraints of satellite orbit and incident angle, SAR target samples face the issue of incomplete view coverage, which poses challenges for learning-based SAR target detection and recognition algorithms. This paper proposes a method for generating multi-view samples of SAR targets by integrating differentiable rendering, combining inverse Three-Dimensional (3D) reconstruction, and forward rendering techniques. By designing a Convolutional Neural Network (CNN), the proposed method inversely infers the 3D representation of targets from limited views of SAR target images and then utilizes a Differentiable SAR Renderer (DSR) to render new samples from more views, achieving sample interpolation in the view dimension. Moreover, the training process of the proposed method constructs the objective function using DSR, eliminating the need for 3D ground-truth supervision. According to experimental results on simulated data, this method can effectively increase the number of multi-view SAR target images and improve the recognition rate of typical SAR targets under few-shot conditions.
18
Multi-Radar Collaborative Surveillance (MRCS) technology enables a geographically distributed detection configuration through the linkage of multiple radars, fully exploiting detection gains from spatial and frequency diversity and thereby enhancing the detection performance and survivability of radar systems in complex electromagnetic environments. MRCS is one of the key development directions in radar technology and has received extensive attention in recent years. Considerable research on MRCS has been conducted, and numerous achievements in system architecture design, signal processing, and resource scheduling have been accumulated. This paper first summarizes the concept of MRCS technology, elaborates on the signal processing-based closed-loop mechanism of cognitive collaboration, and analyzes the challenges faced in implementing MRCS. The paper then focuses on cognitive tracking and resource scheduling algorithms, summarizing the connotation and characteristics, system configuration, tracking models, information fusion, performance evaluation, resource scheduling algorithms, optimization criteria, and cognitive process of cognitive tracking, and further analyzes the relation between multi-radar cognitive tracking and system resource scheduling. Subsequently, recent research trends in cognitive tracking and resource scheduling algorithms are identified and summarized in terms of five aspects: radar resource elements, information fusion architectures, tracking performance indicators, resource scheduling models, and complex task scenarios. Finally, the paper concludes and explores future technologies in this field, providing a reference for subsequent research.
19
Coherently combining distributed apertures adjusts the transmitted/received signals of multiple small, distributed apertures so that a coordinated distributed system obtains a high power-aperture product at much lower cost than a single large aperture, making it a promising and viable alternative to large apertures. This study describes the concept and principles of coherently combining distributed apertures. Depending on whether external signal inputs at the combination destination are necessary, the implementation architectures of coherent combination are classified into two categories: closed-loop and open-loop. The development of coherently combined distributed apertures and their applications in fields such as missile defense, deep-space telemetry and control, ultralong-range radar detection, and radio astronomy are then comprehensively presented. Furthermore, the key techniques required for aligning the time and phase of the transmitted/received signals of each aperture are elaborated, including high-precision distributed time-frequency transfer and synchronization, and the estimation, measurement, calibration, and prediction of coherent combination parameters. Finally, a summary is presented, and the scope of future work in this field is explored.
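As a brief quantitative aside (a standard idealized result, not a figure quoted from the article), if N identical apertures each transmit power P and their signals are perfectly aligned in time and phase at the target, the coherent gains on transmit and receive combine as follows.

```latex
% Coherent combination of N identical apertures (idealized, perfect time/phase alignment).
% Transmit: N fields of amplitude proportional to sqrt(P) add in phase,
% so power density at the target scales as N^2 P.
% Receive: coherent summation of N receiver outputs adds another factor of N in SNR.
\begin{align}
  \mathrm{SNR}_{\mathrm{combined}}
  \;\propto\;
  \underbrace{N^{2}}_{\text{coherent transmit}}
  \times
  \underbrace{N}_{\text{coherent receive}}
  \times \mathrm{SNR}_{\mathrm{single}}
  \;=\; N^{3}\,\mathrm{SNR}_{\mathrm{single}}.
\end{align}
```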
20
Intelligent target recognition approaches for Synthetic Aperture Radar (SAR) continue to be challenged by limited robustness, generalizability, and interpretability. Current research focuses on understanding the microwave properties of SAR targets and integrating them with advanced deep learning algorithms to achieve effective and robust SAR target recognition. However, the computational complexity of SAR target characteristic-inversion approaches is often considerable, making it difficult to integrate them with deep neural networks for real-time, end-to-end prediction. To facilitate the use of the physical properties of SAR targets in intelligent recognition tasks, it is imperative to develop microwave physical property sensing technologies that are efficient, intelligent, and interpretable. This paper focuses on the nonstationary nature of high-resolution SAR targets and proposes an improved intelligent approach for analyzing target characteristics using time-frequency analysis. The method streamlines the processing flow and improves computational efficiency, making it more suitable for SAR targets, and it is integrated with a deep neural network for SAR target recognition to achieve consistent performance improvements. The proposed approach exhibits robust generalization capability and notable computational efficiency, yielding classification results for SAR target characteristics that are readily interpretable from a physical standpoint. The improvement in target recognition performance is comparable to that achieved with the attributed scattering center model.