Most Downloaded
1
2024, 13(5): 941-954.
Three-Dimensional (3D) Synthetic Aperture Radar (SAR) holds great potential for applications in fields such as mapping and disaster management, making it an important research focus in SAR technology. To advance the application and development of 3D SAR, especially by reducing the number of observations or antenna array elements, the Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS) has pioneered the development of the full-polarimetric Microwave Vision 3D SAR (MV3DSAR) experimental system. This system is designed to serve as an experimental platform and a source of data for microwave vision SAR 3D imaging studies. This study introduces the MV3DSAR experimental system along with its full-polarimetric SAR dataset. It also proposes a full-polarimetric data processing scheme that covers essential steps such as polarization correction, polarization coherent enhancement, microwave vision 3D imaging, and 3D fusion visualization. The results on the 3D imaging dataset confirm the full-polarimetric capabilities of the MV3DSAR experimental system and validate the effectiveness of the proposed processing method. The full-polarimetric Unmanned Aerial Vehicle (UAV)-borne array interferometric SAR dataset released through this study offers enhanced data resources for advancing 3D SAR imaging research.
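As a hedged illustration of the kind of full-polarimetric processing the abstract outlines (not the authors' actual pipeline), the Pauli-basis decomposition commonly used to visualize quad-pol SAR data can be sketched in a few lines; the function name and the trihedral test values are illustrative assumptions:

```python
import numpy as np

def pauli_decomposition(shh, shv, svv):
    """Pauli-basis scattering vector for a quad-pol pixel (reciprocity
    assumed, S_hv = S_vh): the three components are commonly mapped to
    color channels for full-pol visualization."""
    k_surface = (shh + svv) / np.sqrt(2)   # odd-bounce (surface-like) term
    k_double = (shh - svv) / np.sqrt(2)    # even-bounce (dihedral-like) term
    k_volume = np.sqrt(2) * shv            # cross-pol (volume-like) term
    return k_surface, k_double, k_volume

# Idealized trihedral corner reflector: S_hh = S_vv, S_hv = 0,
# so all power lands in the surface-scattering channel.
ks, kd, kv = pauli_decomposition(1.0 + 0j, 0.0 + 0j, 1.0 + 0j)
```

For an image, the same arithmetic is applied per pixel to the co- and cross-polarized channels, and the three magnitudes form a pseudo-color composite.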
2
2020, 9(5): 803-827.
Flying birds and Unmanned Aerial Vehicles (UAVs) are typical “low, slow, and small” targets with low observability. The need for effective monitoring and identification of these two targets has become urgent and must be solved to ensure the safety of air routes and urban areas. There are many types of flying birds and UAVs, characterized by low flying heights, strong maneuverability, small radar cross-sectional areas, and complicated detection environments, which pose great challenges for target detection worldwide. “Visible (high detection ability) and clear-cut (high recognition probability)” methods and technologies must be developed that can finely describe and recognize UAVs, flying birds, and other “low-slow-small” targets. This paper reviews recent progress in detection and recognition technologies for rotor UAVs and flying birds in complex scenes and discusses effective methods for detecting and recognizing birds and drones, including echo modeling and recognition of micro-motion characteristics, the enhancement and extraction of maneuvering features in ubiquitous observation mode, distributed multi-view feature fusion, differences in motion trajectories, and intelligent classification via deep learning. Lastly, the problems of existing research approaches are summarized, and we consider the future development prospects of target detection and recognition technologies for flying birds and UAVs in complex scenarios.
3
2024, 13(3): 539-553.
Detection of small, slow-moving targets such as Unmanned Aerial Vehicles (UAVs) poses considerable challenges to radar target detection and recognition technology. There is an urgent need to establish relevant datasets to support the development and application of techniques for detecting small, slow-moving targets. This paper presents a dataset for detecting low-speed and small-size targets using multiband Frequency Modulated Continuous Wave (FMCW) radar. The dataset utilizes Ku-band and L-band FMCW radar to collect echo data from six UAV types and exhibits diverse temporal and frequency domain resolutions and measurement capabilities by modulating radar cycles and bandwidth, generating the LSS-FMCWR-1.0 dataset (LSS: Low, Slow, Small). To further enhance the capability for extracting micro-Doppler features from UAVs, this paper proposes a method for UAV micro-Doppler extraction and parameter estimation based on the local maximum synchroextracting transform. Building on the Short-Time Fourier Transform (STFT), this method extracts values at the maximum energy point in the time-frequency domain to retain useful signals and refine the time-frequency energy representation. Validation and analysis using the LSS-FMCWR-1.0 dataset demonstrate that this approach reduces entropy by an average of 5.3 dB and decreases estimation errors in rotor blade length by 27.7% compared with traditional time-frequency methods. Moreover, the proposed method provides a foundation for subsequent target recognition efforts because it balances high time-frequency resolution and parameter estimation capability.
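The extraction step described above (keeping the value at the maximum-energy point of each time slice of the STFT) can be sketched roughly as follows; this is a simplified stand-in for the local maximum synchroextracting transform, and the window/hop parameters and the simulated rotor-like signal are illustrative assumptions, not values from the paper:

```python
import numpy as np

def stft(x, win_len=64, hop=16):
    """Naive Short-Time Fourier Transform (Hann window, no padding)."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)   # shape: (time, freq)

def max_energy_extract(tf):
    """Keep only the maximum-energy frequency bin of each time slice,
    zeroing the rest -- a crude stand-in for synchroextraction."""
    out = np.zeros_like(tf)
    rows = np.arange(tf.shape[0])
    peaks = np.argmax(np.abs(tf), axis=1)
    out[rows, peaks] = tf[rows, peaks]
    return out

# Illustrative micro-Doppler-like signal: a sinusoidally modulated tone,
# loosely mimicking the periodic frequency shift of a rotating blade.
fs = 1024
t = np.arange(fs) / fs
sig = np.cos(2 * np.pi * (100 * t + 20 * np.sin(2 * np.pi * 2 * t)))

tf = stft(sig)
ridge = max_energy_extract(tf)
```

Concentrating each slice onto its energy peak sharpens the time-frequency ridge from which blade parameters (e.g., rotation rate) can then be estimated.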
4
2024, 13(3): 501-524.
Weak target signal processing is the cornerstone and prerequisite for radar to achieve excellent detection performance. In complex practical applications, owing to strong clutter interference, weak target signals, unclear image features, and the difficulty of extracting effective features, weak target detection and recognition have always been challenging in the field of radar processing. Conventional model-based processing methods do not accurately match the actual working background and target characteristics, leading to weak universality. Recently, deep learning has made significant progress in the field of radar intelligent information processing. By building deep neural networks, deep learning algorithms can automatically learn feature representations from a large amount of radar data, improving the performance of target detection and recognition. This article systematically reviews and summarizes recent research progress in the intelligent processing of weak radar targets in terms of signal processing, image processing, feature extraction, target classification, and target recognition. This article discusses noise and clutter suppression, target signal enhancement, low- and high-resolution radar image and feature processing, feature extraction, and fusion. In response to the limited generalization ability, single feature expression, and insufficient interpretability of existing intelligent processing applications for weak targets, this article underscores future developments in small-sample object detection (based on transfer learning and reinforcement learning), multidimensional and multifeature fusion, network model interpretability, and joint knowledge- and data-driven processing.
5
2023, 12(5): 923-970.
As one of the core components of Advanced Driver Assistance Systems (ADAS), automotive millimeter-wave radar has become the focus of scholars and manufacturers at home and abroad because it offers all-day and all-weather operation, miniaturization, high integration, and key sensing capabilities. The core performance indicators of automotive millimeter-wave radar are distance, speed, angular resolution, and field of view; accuracy, cost, real-time performance, detection performance, and size are the key issues to be considered. The increasing performance requirements pose several challenges for the signal processing of millimeter-wave radar systems, and radar signal processing technology is crucial for improving radar performance to meet more stringent requirements. Obtaining dense radar point clouds, generating accurate radar imaging results, and mitigating mutual interference among multiple radar systems are the key points and the foundation for subsequent tracking, recognition, and other applications. Therefore, this paper discusses the practical application of automotive millimeter-wave radar systems based on the key technologies of signal processing, summarizes relevant research results, and mainly discusses point cloud imaging processing, synthetic aperture radar imaging processing, and interference suppression. Finally, we summarize the research status at home and abroad, and future development trends for automotive millimeter-wave radar systems are forecast with the hope of enlightening readers in related fields.
6
Multidomain Characteristic-guided Multimodal Contrastive Recognition Method for Active Radar Jamming
2024, 13(5): 1004-1018.
Achieving robust joint utilization of multidomain characteristics and deep-network features while maintaining high jamming-recognition accuracy with limited samples is challenging. To address this issue, this paper proposes a multidomain characteristic-guided multimodal contrastive recognition method for active radar jamming. The method first thoroughly extracts the multidomain characteristics of active jamming and then designs an optimization unit to automatically select effective characteristics and generate a text modality imbued with implicit expert knowledge. The text modality and the corresponding time-frequency transformation image are separately fed into text and image encoders to construct multimodal feature pairs, which are mapped to a high-dimensional space for modal alignment. The text features serve as anchors that guide the time-frequency image features to aggregate around them through contrastive learning, optimizing the image encoder’s representation capability and achieving tight intraclass and well-separated interclass distributions of active jamming. Experiments show that compared with existing methods, which directly combine multidomain characteristics and deep-network features, the proposed guided-joint method achieves differential feature processing, thereby enhancing the discriminative and generalization capabilities of recognition features. Moreover, under extremely small-sample conditions (2-3 training samples for each type of jamming), the accuracy of our method is 9.84% higher than that of comparative methods, proving the effectiveness and robustness of the proposed method.
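The contrastive alignment step described above (text features as anchors, matched image features pulled toward them and mismatched ones pushed apart) is commonly implemented as a symmetric InfoNCE loss; the NumPy sketch below is a generic illustration under that assumption, not the paper's exact objective, and the temperature and feature sizes are illustrative:

```python
import numpy as np

def info_nce(text_feats, image_feats, temperature=0.07):
    """Symmetric InfoNCE loss: matched text/image pairs (same row index)
    are pulled together, mismatched pairs pushed apart."""
    # L2-normalize so the dot product is cosine similarity.
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    v = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    logits = t @ v.T / temperature          # (N, N) similarity logits
    labels = np.arange(len(t))              # diagonal entries are the matches

    def xent(lg):                           # cross-entropy toward the diagonal
        lg = lg - lg.max(axis=1, keepdims=True)
        p = np.exp(lg) / np.exp(lg).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # Average the text->image and image->text directions.
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))
loss_aligned = info_nce(feats, feats)         # perfectly matched pairs
loss_shuffled = info_nce(feats, feats[::-1])  # deliberately mismatched pairs
```

Minimizing this loss over batches is what drives the image features to cluster around their text anchors.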
7
2023, 12(4): 906-922.
This study proposes a Synthetic Aperture Radar (SAR) aircraft detection and recognition method combined with scattering perception to address the problems of target discreteness and false alarms caused by strong background interference in SAR images. Global information is enhanced through a context-guided feature pyramid module, which suppresses strong disturbances in complex images and improves the accuracy of detection and recognition. Additionally, scattering key points are used to locate targets, and a scattering-aware detection module is designed to finely correct the regression boxes and improve target localization accuracy. This study constructs and releases the high-resolution SAR-AIRcraft-1.0 dataset to verify the effectiveness of the proposed method and promote research on SAR aircraft detection and recognition. The images in this dataset are obtained from the Gaofen-3 satellite; the dataset contains 4,368 images and 16,463 aircraft instances, covering seven aircraft categories, namely A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and others. We apply the proposed method and common deep learning algorithms to the constructed dataset. The experimental results demonstrate the effectiveness of our scattering-perception method. Furthermore, we establish performance benchmarks on the dataset for different tasks such as SAR aircraft detection, recognition, and integrated detection and recognition.
8
2024, 13(3): 554-564.
To address the problem of radar target detection in sea clutter environments, this paper proposes a deep learning-based marine target detector. The proposed detector increases the differences between target and clutter by fusing multiple complementary features extracted from different data sources, thereby improving detection performance for marine targets. Specifically, the detector uses two feature extraction branches to extract multiple levels of fast-time and range features from the range profiles and the Range-Doppler (RD) spectrum, respectively. Subsequently, a local-global feature extraction structure is developed to extract sequence relations along the slow-time or Doppler dimension of the features. Furthermore, a feature fusion block based on adaptive convolution weight learning is proposed to efficiently fuse the slow-fast time and RD features. Finally, the detection results are obtained by upsampling and nonlinearly mapping the fused multilevel features. Experiments on two public radar databases validate the detection performance of the proposed detector.
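The adaptive convolution-weight fusion block is not specified in detail in the abstract; one plausible minimal reading, sketched below, learns a score per branch from pooled channel statistics and softmax-normalizes the scores into convex fusion weights. The projection matrix `proj`, the pooling choice, and the branch names are illustrative assumptions, not the paper's design:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def adaptive_fuse(feat_a, feat_b, proj):
    """Fuse two (C, T) feature maps with content-dependent convex weights:
    pooled channel statistics of both branches are projected to one score
    per branch, and a softmax turns the scores into fusion weights."""
    pooled = np.concatenate([feat_a.mean(axis=1), feat_b.mean(axis=1)])  # (2C,)
    alpha = softmax(proj @ pooled)       # (2,) weights, alpha.sum() == 1
    return alpha[0] * feat_a + alpha[1] * feat_b

C, T = 4, 8
feat_rd = np.ones((C, T))     # stand-in for range-Doppler branch features
feat_ft = np.zeros((C, T))    # stand-in for fast-time branch features
# With a zero projection the scores tie, so fusion reduces to averaging.
fused = adaptive_fuse(feat_rd, feat_ft, np.zeros((2, 2 * C)))
```

In a trained detector, `proj` would be learned end-to-end so that the weighting adapts to how informative each branch is for a given input.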
9
2024, 13(5): 985-1003.
Spaceborne Synthetic Aperture Radar (SAR) systems are often subject to strong electromagnetic interference, resulting in degraded imaging quality. However, existing image domain-based interference suppression methods are prone to image distortion and loss of texture detail, among other difficulties. To address these problems, this paper proposes a method for suppressing active suppression interference in spaceborne SAR images based on perceptual learning of regional feature refinement. First, an active suppression interference signal and image model is established in the spaceborne SAR image domain. Second, a high-precision interference recognition network based on regional feature perception is designed, which extracts the active suppression interference pattern features of the SAR image using an efficient channel attention mechanism and thereby effectively recognizes the interference region of the SAR image. Third, a multivariate regional feature refinement interference suppression network is constructed based on joint learning of the SAR image and suppression interference features. The network slices the SAR image into multivariate regions and applies multi-module collaborative processing of the suppression interference features in these regions, achieving refined suppression of active suppression interference in SAR images under complex conditions. Finally, a simulation dataset of SAR image active suppression interference is constructed, and real Sentinel-1 data are used for experimental verification and analysis. The experimental results show that the proposed method can effectively recognize and suppress various typical active suppression interferences in spaceborne SAR images.
10
2024, 13(1): 46-67.
With the growing demand for radar target detection, Sparse Recovery (SR) technology based on the Compressive Sensing (CS) model has been widely used in radar signal processing. This paper first outlines the fundamental theory of SR and then introduces the sparse characteristics in radar signal processing from the perspectives of scene sparsity and observation sparsity. Subsequently, it explores these sparse properties to provide an overview of CS applications in radar signal processing, including spatial domain processing, pulse compression, coherent processing, radar imaging, and target detection. Finally, the paper summarizes the applications of CS in radar signal processing.
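A concrete instance of sparse recovery under the CS model is Orthogonal Matching Pursuit (OMP), one of the standard greedy algorithms such an overview covers; the toy radar-like scene below (random measurement matrix, three point scatterers on a range grid) is an illustrative assumption, not an example from the paper:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Select the dictionary column most correlated with the residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares refit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy scene: 3 point scatterers on a 128-cell grid, 64 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 128)) / np.sqrt(64)
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [2.0, -1.5, 1.0]
x_hat = omp(A, A @ x_true, k=3)
```

The same recovery template applies whether the sparsity lives in the spatial, Doppler, or image domain; only the dictionary `A` changes.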
11
2023, 12(3): 471-499.
Multi-Radar Collaborative Surveillance (MRCS) technology enables a geographically distributed detection configuration through the linkage of multiple radars, which can fully exploit detection gains from spatial and frequency diversity, thereby enhancing the detection performance and survivability of radar systems in complex electromagnetic environments. MRCS is one of the key development directions in radar technology and has received extensive attention in recent years. Considerable research on MRCS has been conducted, and numerous achievements in system architecture design, signal processing, and resource scheduling for MRCS have been accumulated. This paper first summarizes the concept of MRCS technology, elaborates on the signal processing-based closed-loop mechanism of cognitive collaboration, and analyzes the challenges faced in implementing MRCS. Then, the paper focuses on cognitive tracking and resource scheduling algorithms and provides a technical summary of the connotative characteristics, system configuration, tracking model, information fusion, performance evaluation, resource scheduling algorithms, optimization criteria, and cognitive process of cognitive tracking. The relationship between multi-radar cognitive tracking and its system resource scheduling is further analyzed. Subsequently, recent research trends in cognitive tracking and resource scheduling algorithms are identified and summarized in terms of five aspects: radar resource elements, information fusion architectures, tracking performance indicators, resource scheduling models, and complex task scenarios. Finally, the full text is summarized and future technology in this field is explored to provide a reference for subsequent research on related technologies.
12
2024, 13(5): 1073-1091.
Real Aperture Radar (RAR) observes wide-scope target information by scanning its antenna. However, because of the limited antenna size, the angular resolution of RAR is much lower than its range resolution. Angular super-resolution methods can enhance the angular resolution of RAR by inverting the low-rank steering matrix, based on the convolution relationship between the antenna pattern and target scattering. Because of the low-rank characteristics of the antenna steering matrix, traditional angular super-resolution methods suffer from manual parameter selection and high computational complexity; in particular, they exhibit poor angular resolution at low signal-to-noise ratios. To address these problems, IAA-Net, an angular super-resolution imaging method for scanning RAR, is proposed by combining the traditional Iterative Adaptive Approach (IAA) with a deep network framework. First, the angular super-resolution problem for RAR is transformed into an echo autocorrelation matrix inversion problem to mitigate the ill-posedness of the matrix inverse. Second, a learnable repairing matrix is introduced into the IAA procedure to combine the IAA algorithm with the deep network framework. Finally, the echo autocorrelation matrix is updated via iterative learning to improve the angular resolution. Simulation and experimental results demonstrate that the proposed method avoids manual parameter selection, reduces computational complexity, and provides high angular resolution at low signal-to-noise ratios owing to the learning ability of the deep network.
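As context for what IAA-Net unrolls, the classical single-snapshot IAA iteration can be sketched in a few lines. This is a generic illustration, not the paper's network: a Fourier steering dictionary stands in for the antenna-pattern convolution matrix of scanning RAR, and the grid size and scatterer positions are made up.

```python
import numpy as np

# Minimal single-snapshot Iterative Adaptive Approach (IAA) sketch.
N, K = 16, 64                                  # samples, angular grid bins
n = np.arange(N)[:, None]
A = np.exp(2j * np.pi * n * np.arange(K) / K)  # steering matrix (N x K)

x_true = np.zeros(K, dtype=complex)            # two on-grid scatterers
x_true[20], x_true[40] = 1.0, 0.8
y = A @ x_true                                 # noiseless echo

p = np.abs(A.conj().T @ y / N) ** 2            # matched-filter initialization

for _ in range(10):                            # IAA iterations
    R = (A * p) @ A.conj().T                   # R = A diag(p) A^H
    R += 1e-8 * np.trace(R).real / N * np.eye(N)  # light loading for stability
    RiA = np.linalg.solve(R, A)                # R^{-1} A
    num = (RiA.conj() * y[:, None]).sum(axis=0)   # a_k^H R^{-1} y
    den = (A.conj() * RiA).sum(axis=0)            # a_k^H R^{-1} a_k
    p = np.abs(num / den) ** 2                 # updated power spectrum
```

After the iterations, the spectrum `p` is sharply peaked at the two scatterer bins, illustrating the super-resolution behavior that the learnable repairing matrix in IAA-Net is designed to accelerate.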
13
2020, 9(1): 1-33.
Spaceborne Synthetic Aperture Radar (SAR), which can be mounted on space vehicles to collect information about the entire planet with all-day, all-weather imaging capability, has become an indispensable tool for Earth observation. Spaceborne SAR technology has achieved considerable improvements, including resolution advancing from meter to submeter level, imaging modes evolving from stripmap to azimuth beam steering such as sliding spotlight, the practical application of multichannel approaches, and the conversion from single to full polarization. With the development of SAR techniques, forthcoming SAR systems will make breakthroughs in architectures, concepts, technologies, and modes, for example, high-resolution wide-swath imaging, multistatic SAR, payload miniaturization, and intelligence. All of these will extend the observation dimensions and yield multidimensional data. This study focuses on the forthcoming development of spaceborne SAR.
14
2024, 13(5): 1092-1108.
Due to the short wavelength of millimeter waves, active electronically scanned millimeter-wave imaging systems require large imaging scenes and high resolution in practical applications. These requirements lead to a large uniform array size and a highly complex feed network if the Nyquist sampling theorem is to be satisfied. Accordingly, such systems face contradictions among imaging accuracy, imaging speed, and system cost. To this end, a novel Credible Bayesian Inference of near-field Sparse Array Synthesis (CBI-SAS) algorithm is proposed under the framework of sparse Bayesian learning. The algorithm optimizes the complex-valued excitation weights in a sparse manner based on Bayesian inference, obtaining the full posterior Probability Density Function (PDF) of these weights. This enables the algorithm to utilize higher-order statistical information to obtain the optimal values, confidence intervals, and confidence levels of the excitation weights. In the Bayesian inference, to synthesize the desired beam pattern with a small number of array elements, a heavy-tailed Laplace sparse prior is placed on the excitation weights. However, because this prior is not conjugate to the likelihood of the reference pattern data, the prior model is encoded in a hierarchical Bayesian manner so that the full posterior distribution can be represented in closed form. To avoid the high-dimensional integral in the full posterior distribution, a variational Bayesian expectation-maximization method is employed to calculate the posterior PDF of the excitation weights, enabling credible Bayesian inference. Simulation results show that, compared with conventional sparse array synthesis algorithms, the proposed algorithm achieves a sparser array with fewer elements, a smaller normalized mean square error, and higher accuracy in matching the desired pattern.
In addition, based on measured raw data from near-field 1D and 2D planar electrical scanning, an improved 3D time-domain algorithm is applied for 3D image reconstruction. The results verify that the proposed CBI-SAS algorithm maintains imaging quality while reducing system complexity.
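The core appeal of inferring a full posterior over excitation weights, rather than a point estimate, is that each weight comes with a credible interval. The toy below illustrates that idea with a conjugate Gaussian prior, which has a closed-form posterior; it is only a stand-in for intuition, since CBI-SAS uses a hierarchical Laplace prior with variational Bayes, and all sizes and values here are invented.

```python
import numpy as np

# Toy Bayesian inference for array excitation weights: desired pattern
# d = F w + noise, Gaussian prior on w (precision alpha). The Gaussian
# posterior gives both a point estimate and per-weight uncertainty.
rng = np.random.default_rng(0)
M, N = 40, 8                         # pattern samples, array elements
F = rng.standard_normal((M, N))      # hypothetical pattern-synthesis matrix
w_true = np.array([0.0, 1.2, 0.0, -0.7, 0.0, 0.0, 0.9, 0.0])
sigma2, alpha = 1e-4, 1e-2           # noise variance, prior precision
d = F @ w_true + np.sqrt(sigma2) * rng.standard_normal(M)

# Closed-form posterior: Sigma = (F'F/sigma2 + alpha I)^-1, mu = Sigma F'd/sigma2
Sigma = np.linalg.inv(F.T @ F / sigma2 + alpha * np.eye(N))
mu = Sigma @ F.T @ d / sigma2        # posterior mean of the weights
ci = 1.96 * np.sqrt(np.diag(Sigma))  # 95% credible half-widths per weight
```

The hierarchical Laplace prior in the paper plays the same role as `alpha` here, but additionally drives many weights to exactly zero, which is what thins the array.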
15
2022, 11(3): 418-433.
Radar emitter signal deinterleaving is a key technology for radar signal reconnaissance and an essential part of battlefield situational awareness. This paper systematically reviews the mainstream technologies for radar emitter signal deinterleaving, summarizing the main research progress in three directions: methods based on interpulse modulation characteristics, methods based on intrapulse modulation characteristics, and machine learning-based methods. In particular, it explains the principles and technical characteristics of the latest deinterleaving techniques, such as those based on neural networks and data stream clustering. Finally, the shortcomings of current radar emitter deinterleaving technology are summarized, and future trends are predicted.
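A concrete example of the interpulse-characteristics direction is difference-histogram PRI estimation, the idea behind SDIF/CDIF-style deinterleaving. The sketch below interleaves two pulse trains with PRIs of 100 and 137 (arbitrary units chosen for illustration) and recovers both PRIs from the time-of-arrival difference histogram.

```python
from collections import Counter

# Two interleaved pulse trains: emitter A (PRI 100) and emitter B (PRI 137).
toa = sorted([100 * k for k in range(50)] +        # emitter A
             [13 + 137 * k for k in range(36)])    # emitter B, offset start

# Histogram all TOA differences below a search limit. Differences equal
# to a true PRI (or a multiple of it) pile up across the whole train,
# while cross-emitter differences occur only sporadically.
hist = Counter(t2 - t1
               for i, t1 in enumerate(toa)
               for t2 in toa[i + 1:]
               if t2 - t1 < 300)
peaks = sorted(d for d, c in hist.items() if c > 10)  # candidate PRIs
```

Here `peaks` contains 100 and 137 plus their low multiples; a real deinterleaver would then verify each candidate by extracting the corresponding pulse train.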
16
2024, 13(1): 23-45.
Multipath Exploitation Radar (MER) target detection technology is primarily based on the Non-Line-Of-Sight (NLOS) multipath propagation characteristics of electromagnetic waves, such as reflection and diffraction from surfaces, enabling the effective detection of targets hidden in visually blind areas, such as around urban street corners or behind occluding vehicles. The technology is thus feasible for various applications, including urban combat and intelligent driving, and has significant practical and research implications. This paper reviews the domestic and foreign literature in this field since the beginning of the 21st century to track developments and predict future trends. The review reveals that, according to the type of detection platform, MER target detection technology primarily comprises multipath detection technologies based on air and ground platforms, both of which have produced research results of practical significance. For air platforms, the following aspects are discussed: feasibility verification, analysis of influencing factors, architectural environment perception, and NLOS target detection. For ground platforms, four aspects are covered: target detection and recognition, two-dimensional target positioning, three-dimensional target information acquisition, and new detection methods. Finally, the prospects of MER target detection technology are summarized, and the potential issues and challenges in its current practical application are highlighted. These results show that MER target detection technology is evolving toward diversification and intelligence.
17
2024, 13(4): 731-746.
Due to height limitations, traditional handheld or vehicle-mounted Through-the-Wall Radar (TWR) cannot provide see-through imaging of targets inside urban high-rise buildings. Unmanned Aerial Vehicle-TWR (UAV-TWR) offers flexibility, efficiency, and convenience without height limitations, allowing for large-scale three-Dimensional (3D) penetration detection of urban high-rise buildings. While the multibaseline scanning mode is widely used in 3D tomographic Synthetic Aperture Radar (SAR) imaging to provide resolution in the altitude direction, it often suffers from grating lobes owing to under-sampling in the altitude spatial domain. To address this issue, this paper proposes a trajectory planning algorithm for UAV through-the-wall 3D SAR imaging based on a genetic algorithm. By making the flight trajectories nonuniform, the periodic superposition of radar echo energy is weakened, thereby suppressing grating lobes and achieving better imaging quality. The proposed algorithm exploits the inherent relationship between flight distance and TWR imaging quality and establishes a cost function for UAV-TWR trajectory planning. The genetic algorithm encodes genes for three typical flight trajectory control points and optimizes the population through gene crossover and mutation; the optimal trajectory for each of the three flight modes is selected by minimizing the cost function. Compared with the traditional equidistant multibaseline flight mode, imaging results from simulations and measured data show that the proposed algorithm significantly suppresses the grating lobes of targets. In addition, oblique UAV flight trajectories are significantly shortened, improving the efficiency of through-the-wall SAR imaging.
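The encode/crossover/mutate/select loop described above can be sketched generically. Everything below is an invented stand-in, not the paper's method: the cost function merely rewards nonuniform baseline spacing (a crude proxy for breaking the periodicity that causes grating lobes) while penalizing total span (a proxy for flight distance).

```python
import random

# Toy genetic algorithm over nonuniform altitude baselines.
random.seed(1)
N_BASE = 8                                    # number of baselines (made up)

def cost(offsets):
    # Hypothetical cost: reward high variance of the spacings (breaks
    # periodic echo superposition) and penalize the total span.
    z = sorted(offsets)
    gaps = [b - a for a, b in zip(z, z[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return -var + 0.1 * (z[-1] - z[0])

def mutate(ind):
    return [g + random.gauss(0, 0.1) for g in ind]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

pop = [[random.uniform(0, 4) for _ in range(N_BASE)] for _ in range(30)]
pop0_best = min(map(cost, pop))               # best cost before evolving
for _ in range(40):
    pop.sort(key=cost)
    elite = pop[:10]                          # elitist selection
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(20)]
best = min(pop, key=cost)
```

Elitism guarantees the best cost never worsens between generations, which is why trajectory planners of this kind reliably improve on a random initial population even with a nonconvex cost.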
18
2023, 12(6): 1229-1248.
Coherently combining distributed apertures adjusts the transmitted/received signals of multiple distributed small apertures, allowing a coordinated distributed system to obtain a high power-aperture product at much lower cost than a single large aperture. This is a promising and viable alternative to large apertures. This study describes the concept and principles of coherently combining distributed apertures. Depending on whether external signal inputs at the combination destination are necessary, implementation architectures are classified into two categories: closed-loop and open-loop. The development of coherently combining distributed apertures and their applications in fields such as missile defense, deep space telemetry and control, ultralong-range radar detection, and radio astronomy are then comprehensively presented. Furthermore, the key techniques for aligning the time and phase of the transmitted/received signals of each aperture are elaborated, including high-precision distributed time-frequency transfer and synchronization, and the estimation, measurement, calibration, and prediction of coherent-combining parameters. Finally, a summary is presented, and future work in this field is discussed.
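The payoff of the time/phase alignment described above can be checked with simple phasor arithmetic: N perfectly aligned apertures combine to an N-squared power gain, and residual phase errors erode it. The 30-degree error level below is purely illustrative.

```python
import numpy as np

# Phasor model of coherent combination across N apertures.
N = 4
signals = np.ones(N, dtype=complex)                  # perfectly synchronized
g_ideal = abs(signals.sum()) ** 2                    # = N**2 = 16

# Residual phase errors after imperfect synchronization (illustrative).
phase_err = np.exp(1j * np.deg2rad([0.0, 30.0, -30.0, 30.0]))
g_err = abs((signals * phase_err).sum()) ** 2        # degraded combining gain
```

Incoherent combination would give only a gain of N, which is why the synchronization and parameter-calibration techniques surveyed here are worth their considerable engineering cost.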
19
2020, 9(1): 86-106.
Synthetic Aperture Radar (SAR) has attracted much attention in recent decades owing to its all-weather, high-resolution imaging mode. As an active radar system, SAR is affected during high-resolution imaging by various strong, complex, and time-varying electromagnetic interferences that can severely degrade the final imaging results. Thus, effectively suppressing complex electromagnetic interference is a major challenge and focus of SAR research. In this paper, we summarize the key elements and main concepts underlying interference suppression in high-resolution SAR imaging, including interference patterns, interference sources, interference scattering mechanisms, radar antenna configurations, and target characteristics. We then consider the essential task of interference suppression algorithms. Recent papers detailing representative SAR algorithms for mitigating suppressive and deceptive jamming are introduced and summarized to provide references for future research.
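One classical baseline among the suppression methods such surveys cover is frequency-domain notch filtering of narrowband interference: the SAR chirp spreads its energy across the spectrum, so bins far above the RMS level can be attributed to the interferer and excised. The chirp, interference frequency, and threshold below are all illustrative.

```python
import numpy as np

# Frequency-domain notch filtering of narrowband interference in a chirp.
n = np.arange(512)
clean = np.exp(1j * np.pi * n**2 / 4096)         # unit-amplitude chirp echo
rfi = 20 * np.exp(2j * np.pi * 160 * n / 512)    # strong on-bin narrowband RFI
y = clean + rfi                                  # contaminated signal

Y = np.fft.fft(y)
mag = np.abs(Y)
notch = mag > 5 * np.sqrt(np.mean(mag**2))       # flag bins far above RMS
Y[notch] = 0                                     # excise the interference
y_hat = np.fft.ifft(Y)                           # cleaned signal
```

The excision also removes whatever chirp energy fell in the notched bin, which is the classic trade-off of this method and one motivation for the more refined algorithms the survey discusses.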
20
2024, 13(4): 747-760.
Through-wall radar systems with a single transmitter and receiver have the advantages of portability, simplicity, and independent operation; however, they cannot accomplish two-dimensional (2D) localization and tracking of targets. This paper proposes distributed wireless networking for through-wall radar systems based on a portable single-transmitter, single-receiver radar, together with a joint target positioning method that balances system portability, low cost, and 2D target information estimation. First, a complementary Gray code transmission waveform is utilized to overcome mutual interference when multiple radars operate simultaneously in the same frequency band, and each radar node communicates with the processing center via wireless modules, forming a distributed wireless networking radar system. In addition, a data synchronization method combining behavioral cognition theory and template matching is proposed, which identifies identical motion states in data obtained from different radars, realizing slow-time synchronization among distributed radars and thereby eliminating the strict hardware requirements of conventional synchronization methods. Finally, a joint localization method based on the Levenberg-Marquardt algorithm is proposed, which can simultaneously estimate the positions of radar nodes and targets without requiring prior radar position information. Simulation and field experiments reveal that the developed system can obtain 2D target positions and track moving targets in real time: the estimation accuracy of each radar's own position is better than 0.06 m, and the positioning accuracy for moving human targets is better than 0.62 m.
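The Levenberg-Marquardt step at the heart of such a localization method is easy to sketch. The example below is a simplified stand-in for the paper's joint estimator: node positions are assumed known and only one 2D target is solved from exact ranges, with all coordinates invented for illustration.

```python
import numpy as np

# Levenberg-Marquardt for 2D localization from range measurements.
nodes = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.7, 6.2])                  # ground truth (made up)
d = np.linalg.norm(nodes - target, axis=1)     # exact range measurements

p = np.array([5.0, 5.0])                       # initial position guess
lam = 1e-3                                     # LM damping factor
for _ in range(50):
    diff = p - nodes
    dist = np.linalg.norm(diff, axis=1)
    r = dist - d                               # range residuals
    J = diff / dist[:, None]                   # Jacobian of residuals w.r.t. p
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    p_new = p - step
    # Accept the step and shrink damping only if the cost decreased.
    if np.sum((np.linalg.norm(p_new - nodes, axis=1) - d) ** 2) < np.sum(r**2):
        p, lam = p_new, lam * 0.5
    else:
        lam *= 2.0
```

The damping term is what distinguishes LM from plain Gauss-Newton: it keeps the update well-behaved when `J.T @ J` is poorly conditioned, as happens in the joint node-and-target problem the paper actually solves.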