Most Downloaded

1
Marine target detection and recognition depend on the characteristics of marine targets and sea clutter. Therefore, understanding the essential features of marine targets from measured data is crucial for advancing target detection and recognition technology. To address the shortage of data on the scattering characteristics of marine targets, the Sea-Detecting Radar Data-Sharing Program (SDRDSP) was upgraded to collect data on marine targets and their environment under different polarizations and sea states. This upgrade expanded the physical dimensions of radar target observation and improved radar and auxiliary data acquisition capabilities. Furthermore, a dual-polarized multistate scattering characteristic dataset of marine targets was constructed, and its statistical distribution characteristics, temporal and spatial correlation, and Doppler spectrum were analyzed to support use of the data. In the future, the types and quantities of maritime targets will continue to accumulate, providing data support for improving the performance and intelligence of marine target detection and recognition.
2
Passive radar plays an important role in early warning and in detecting Low, Slow, and Small (LSS) targets. Because the illumination sources of passive radar are not under its control, target characteristics are more complex, which makes target detection and identification extremely difficult. In this paper, a passive radar LSS detection dataset (LSS-PR-1.0) is constructed. It contains the radar echo signals of four typical sea and air targets, namely helicopters, unmanned aerial vehicles, speedboats, and passenger ships, as well as sea clutter data at low and high sea states, providing data support for radar research. For target feature extraction and analysis, a singular-value-decomposition sea-clutter-suppression method is first adopted to remove the influence of the strong Bragg peak of sea clutter on the target echo. On this basis, ten multi-domain feature extraction and analysis methods in four categories are proposed: time-domain features (relative average amplitude), frequency-domain features (spectral features, Doppler waterfall plot, and range-Doppler features), time-frequency-domain features, and motion features (heading difference, trajectory parameters, speed variation interval, speed variation coefficient, and acceleration). Based on measured data, the characteristics of the four types of sea and air targets are compared, summarizing the patterns of the various target characteristics and laying the foundation for subsequent target recognition.
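The singular-value-decomposition clutter-suppression step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the echo is arranged as a range-by-pulse matrix whose strongest singular components correspond to sea clutter (e.g., the Bragg peak), and the number of components to remove is a free parameter.

```python
import numpy as np

def svd_clutter_suppress(echo, n_clutter=1):
    """Suppress the dominant (clutter) components of a range-by-pulse
    echo matrix by zeroing its largest singular values."""
    U, s, Vh = np.linalg.svd(echo, full_matrices=False)
    s_sup = s.copy()
    s_sup[:n_clutter] = 0.0          # drop the strongest components (assumed clutter)
    return (U * s_sup) @ Vh          # reconstruct without them

# toy demo: a strong rank-1 "clutter" term plus a weak target-like term
rng = np.random.default_rng(0)
clutter = 10.0 * np.outer(rng.standard_normal(64), rng.standard_normal(128))
target = 0.1 * rng.standard_normal((64, 128))
cleaned = svd_clutter_suppress(clutter + target, n_clutter=1)
print(np.linalg.norm(cleaned) < np.linalg.norm(clutter + target))  # True
```

In practice the number of clutter-dominated singular values is chosen from the singular-value spectrum rather than fixed in advance.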
3
This study proposes a Synthetic Aperture Radar (SAR) aircraft detection and recognition method that incorporates scattering perception to address target discreteness and false alarms caused by strong background interference in SAR images. Global information is enhanced through a context-guided feature pyramid module, which suppresses strong disturbances in complex images and improves detection and recognition accuracy. Additionally, scattering key points are used to locate targets, and a scattering-aware detection module is designed to finely correct the regression boxes and improve target localization accuracy. To verify the effectiveness of the proposed method and promote research on SAR aircraft detection and recognition, this study presents the high-resolution SAR-AIRcraft-1.0 dataset. Its images are obtained from the Gaofen-3 satellite and comprise 4,368 images with 16,463 aircraft instances, covering seven aircraft categories: A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and other. We apply the proposed method and common deep learning algorithms to the constructed dataset. The experimental results demonstrate the effectiveness of the method combined with scattering perception. Furthermore, we establish performance benchmarks for the dataset on different tasks, such as SAR aircraft detection, recognition, and integrated detection and recognition.
4

Flying birds and Unmanned Aerial Vehicles (UAVs) are typical “low, slow, and small” targets with low observability. Effective monitoring and identification of these two targets has become an urgent need that must be met to ensure the safety of air routes and urban areas. Flying birds and UAVs come in many types and are characterized by low flying heights, strong maneuverability, small radar cross sections, and complicated detection environments, which pose great challenges to target detection worldwide. “Visible (high detection ability) and clear-cut (high recognition probability)” methods and technologies must be developed that can finely describe and recognize UAVs, flying birds, and other “low-slow-small” targets. This paper reviews recent progress in detection and recognition technologies for rotor UAVs and flying birds in complex scenes and discusses effective detection and recognition methods, including echo modeling and recognition of micro-motion characteristics, the enhancement and extraction of maneuvering features in ubiquitous observation mode, distributed multi-view feature fusion, differences in motion trajectories, and intelligent classification via deep learning. Lastly, the problems of existing research approaches are summarized, and the future development prospects of target detection and recognition technologies for flying birds and UAVs in complex scenarios are considered.

5
Millimeter-wave radar is increasingly being adopted for smart home systems, elder care, and surveillance monitoring, owing to its adaptability to environmental conditions, high resolution, and privacy-preserving capabilities. A key factor in effectively utilizing millimeter-wave radar is the analysis of point clouds, which are essential for recognizing human postures. However, the sparse nature of these point clouds poses significant challenges for accurate and efficient human action recognition. To overcome these issues, we present a 3D point cloud dataset tailored for human actions captured using millimeter-wave radar (mmWave-3DPCHM-1.0). This dataset is enhanced with advanced data processing techniques and cutting-edge human action recognition models. Data collection is conducted using Texas Instruments (TI)’s IWR1443-ISK and Vayyar’s vBlu radio imaging module, covering 12 common human actions, including walking, waving, standing, and falling. At the core of our approach is the Point EdgeConv and Transformer (PETer) network, which integrates edge convolution with transformer models. For each 3D point cloud frame, PETer constructs a locally directed neighborhood graph through edge convolution to extract spatial geometric features effectively. The network then leverages a series of Transformer encoding models to uncover temporal relationships across multiple point cloud frames. Extensive experiments reveal that the PETer network achieves exceptional recognition rates of 98.77% on the TI dataset and 99.51% on the Vayyar dataset, outperforming the traditional optimal baseline model by approximately 5%. With a compact model size of only 1.09 MB, PETer is well-suited for deployment on edge devices, providing an efficient solution for real-time human action recognition in resource-constrained environments. 
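The edge-convolution front end of a network like PETer can be illustrated with a minimal numpy sketch. This is a simplification for intuition, not the authors' code: each point gathers its k nearest neighbours into a directed graph, forms edge features [x_i, x_j − x_i], and max-pools over the neighbourhood; a real EdgeConv would apply a learned MLP to the edge features before pooling.

```python
import numpy as np

def edge_conv_features(points, k=4):
    """One EdgeConv-style step on an (n, 3) point cloud: build a directed
    k-NN graph, form edge features [x_i, x_j - x_i], max-pool per point."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                    # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]            # directed k-NN graph
    edge = np.concatenate(
        [np.repeat(points[:, None, :], k, axis=1),  # x_i, repeated per edge
         points[nbrs] - points[:, None, :]],        # x_j - x_i offsets
        axis=-1)                                    # shape (n, k, 6)
    return edge.max(axis=1)                         # max-pool over neighbours

cloud = np.random.default_rng(1).standard_normal((32, 3))
feats = edge_conv_features(cloud, k=4)
print(feats.shape)  # (32, 6)
```

In the full model, such per-frame geometric features would then be fed to the Transformer encoders to capture temporal relationships across frames.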
6

In recent years, deep-learning technology has been widely used. However, in research on Synthetic Aperture Radar (SAR) ship target detection, the difficulty of data acquisition and the small scale of available samples make it hard to support the training of deep-learning network models. This paper provides a SAR ship detection dataset with high resolution and large-scale images. The dataset comprises 31 images from the Gaofen-3 satellite, covering harbors, islands, reefs, and the sea surface under different conditions, with backgrounds ranging from the near shore to the open sea. We conducted experiments using both traditional detection algorithms and deep-learning algorithms and found that a densely connected end-to-end neural network achieved the highest average precision, 88.1%. Based on the experiments and performance analysis, corresponding benchmarks are provided as a basis for further research on SAR ship detection using this dataset.

7

As one of the core components of Advanced Driver Assistance Systems (ADAS), automotive millimeter-wave radar has become a focus of scholars and manufacturers in China and abroad because of its advantages of all-day, all-weather operation, miniaturization, high integration, and key sensing capabilities. The core performance indicators of automotive millimeter-wave radar are distance, speed, angular resolution, and field of view; accuracy, cost, real-time and detection performance, and volume are the key issues to be considered. Increasing performance requirements pose several challenges for the signal processing of millimeter-wave radar systems, and radar signal processing technology is crucial for meeting these more stringent requirements. Obtaining dense radar point clouds, generating accurate radar imaging results, and mitigating mutual interference among multiple radar systems are key problems and the foundation for subsequent tracking, recognition, and other applications. Therefore, this paper discusses the practical application of automotive millimeter-wave radar systems in terms of the key signal processing technologies, summarizes relevant research results, and focuses on point cloud imaging processing, synthetic aperture radar imaging processing, and interference suppression. Finally, we summarize the state of research in China and abroad and forecast future development trends for automotive millimeter-wave radar systems, with the hope of enlightening readers in related fields.

8
The detection of small, slow-moving targets such as Unmanned Aerial Vehicles (UAVs) poses considerable challenges to radar target detection and recognition technology. There is an urgent need for datasets that support the development and application of techniques for detecting such targets. This paper presents a dataset for detecting low-speed, small-size targets using multiband Frequency Modulated Continuous Wave (FMCW) radar. Ku-band and L-band FMCW radars were used to collect echo data from six UAV types; by varying the radar cycle and bandwidth, the resulting LSS-FMCWR-1.0 dataset (LSS: Low, Slow, Small) offers diverse temporal- and frequency-domain resolutions and measurement capabilities. To further enhance the extraction of micro-Doppler features from UAVs, this paper proposes a method for UAV micro-Doppler extraction and parameter estimation based on the local maximum synchroextracting transform. Building on the Short-Time Fourier Transform (STFT), this method extracts the values at the maximum-energy points in the time-frequency domain to retain the useful signal and sharpen the time-frequency energy representation. Validation and analysis on the LSS-FMCWR-1.0 dataset demonstrate that this approach reduces entropy by an average of 5.3 dB and decreases the estimation error in rotor blade length by 27.7% compared with traditional time-frequency methods. Moreover, the proposed method provides a foundation for subsequent target recognition because it balances high time-frequency resolution with parameter estimation capability.
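The maximum-energy extraction step on top of the STFT can be sketched roughly as below. This numpy-only toy is an illustration under stated assumptions (the window length, hop, and test chirp are invented for the demo), and the real synchroextracting transform additionally reassigns energy using instantaneous-frequency estimates; here we simply keep the maximum-energy bin in each time slice.

```python
import numpy as np

def stft_mag(x, win=64, hop=16):
    """Magnitude STFT with a Hann window, returned as freq x time."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

def local_max_extract(tfr):
    """Keep only the maximum-energy bin in each time slice, yielding a
    sparse ridge - a crude stand-in for synchroextraction."""
    out = np.zeros_like(tfr)
    cols = np.arange(tfr.shape[1])
    rows = tfr.argmax(axis=0)
    out[rows, cols] = tfr[rows, cols]
    return out

fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * (50 * t + 40 * t ** 2))   # toy chirp standing in for a micro-Doppler ridge
sparse = local_max_extract(stft_mag(x))
print(np.count_nonzero(sparse) == sparse.shape[1])  # True: one ridge point per time slice
```

Concentrating each slice onto its ridge is what lowers the entropy of the time-frequency representation relative to the raw STFT.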
9
Weak target signal processing is the cornerstone and prerequisite for radar to achieve excellent detection performance. In complex practical applications, strong clutter interference, weak target signals, unclear image features, and difficult feature extraction have made weak target detection and recognition a long-standing challenge in radar processing. Conventional model-based processing methods do not accurately match the actual working background and target characteristics, limiting their generality. Recently, deep learning has made significant progress in radar intelligent information processing. By building deep neural networks, deep learning algorithms can automatically learn feature representations from large amounts of radar data, improving target detection and recognition performance. This article systematically reviews recent research progress in the intelligent processing of weak radar targets, covering signal processing, image processing, feature extraction, target classification, and target recognition, and discusses noise and clutter suppression, target signal enhancement, low- and high-resolution radar image and feature processing, and feature extraction and fusion. In response to the limited generalization ability, single-feature representation, and insufficient interpretability of existing intelligent processing methods for weak targets, the article outlines future directions: small-sample object detection (based on transfer learning and reinforcement learning), multidimensional and multifeature fusion, network model interpretability, and joint knowledge- and data-driven processing.
10
This study addresses fine-grained feature extraction and classification for Low, Slow, and Small (LSS) targets, such as birds and drones, by proposing a multi-band, multi-angle feature fusion classification method. First, data from five types of rotorcraft drones and bird models were collected at multiple angles using K-band and L-band frequency-modulated continuous-wave radars, forming a dataset for LSS target detection. Second, to capture the periodic vibration characteristics of the L-band target signals, empirical mode decomposition was applied to extract high-frequency features and reduce noise interference. For the K-band echo signals, the short-time Fourier transform was applied to obtain high-resolution micro-Doppler features from various angles. Based on these features, a Multi-band Multi-angle Feature Fusion Network (MMFFNet) was designed, incorporating an improved convolutional long short-term memory network for temporal feature extraction along with an attention fusion module and a multiscale feature fusion module. The proposed architecture improves target classification accuracy by integrating features across bands and angles. Validation on a real-world dataset showed that, compared with methods relying on single-radar features, the proposed approach improved the classification accuracy for seven types of LSS targets by 3.1% at a high Signal-to-Noise Ratio (SNR) of 5 dB and by 12.3% at a low SNR of −3 dB.
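The empirical-mode-decomposition step used on the L-band signals can be approximated with a basic sifting loop. The sketch below is a rough illustration under simplifying assumptions (linear envelope interpolation, a fixed number of sifting iterations, constant endpoint extrapolation), not a full EMD implementation; it extracts only the first intrinsic mode function, which carries the highest-frequency content.

```python
import numpy as np

def first_imf(x, n_sift=8):
    """Approximate the first Intrinsic Mode Function by basic sifting:
    interpolate upper/lower envelopes through the local extrema and
    repeatedly subtract their mean."""
    h = np.asarray(x, dtype=float).copy()
    t = np.arange(len(h))
    for _ in range(n_sift):
        up = np.flatnonzero((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])) + 1
        lo = np.flatnonzero((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:])) + 1
        if len(up) < 2 or len(lo) < 2:   # too few extrema to continue
            break
        env_up = np.interp(t, up, h[up])           # upper envelope (linear)
        env_lo = np.interp(t, lo, h[lo])           # lower envelope (linear)
        h = h - (env_up + env_lo) / 2.0            # remove the local mean
    return h

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 3 * t)  # fast + slow component
imf1 = first_imf(x)
# imf1 should be dominated by the faster 40 Hz component; the slow
# oscillation remains in the residual x - imf1
```

Production EMD implementations use spline envelopes and a formal stopping criterion, but the separation principle - fast oscillations come out first - is the same.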
11
As the electromagnetic spectrum becomes a key operational domain in modern warfare, radars will face an increasingly complex, agile, and intelligent electromagnetic interference environment in future military operations. Cognitive Intelligent Radar (CIR) has become one of the key development directions in radar technology because its capabilities of active environmental perception, flexible transmit and receive design, intelligent signal processing, and resource scheduling allow it to adapt to the complex and changeable battlefield electromagnetic confrontation environment. In this study, the CIR is decomposed into four functional modules: cognitive transmitting, cognitive receiving, intelligent signal processing, and intelligent resource scheduling. The antijamming principle of each link (i.e., interference perception, transmit design, receive design, signal processing, and resource scheduling) is then elucidated. Finally, we summarize representative literature from recent years and analyze the technological development trends in this field to provide a reference and basis for future research.
12
Spaceborne Synthetic Aperture Radar (SAR), which can be mounted on space vehicles to collect information about the entire planet with all-day, all-weather imaging capability, has become an indispensable instrument for Earth observation. Spaceborne SAR technology has achieved considerable improvement, including resolution refined from the meter to the submeter level, imaging modes extended from stripmap to azimuth beam steering (e.g., sliding spotlight), the practical application of multichannel approaches, and the transition from single to full polarization. With the development of SAR techniques, forthcoming SAR systems will make breakthroughs in architectures, concepts, technologies, and modes, for example, high-resolution wide-swath imaging, multistatic SAR, payload miniaturization, and intelligence. All of these will extend the observation dimensions and yield multidimensional data. This study focuses on the forthcoming development of spaceborne SAR.
13
With the expansion of China’s space interests and the growth of on-orbit assets, high-precision detection of dark and weak noncooperative space targets has become the core bottleneck in space security defense and debris removal. Traditional optical or radar detection technologies are limited by the diffraction limit and signal-to-noise ratio constraints, and their detection and identification accuracy for “fast, far, small, and dark” targets is insufficient. Light Detection and Ranging (LiDAR), with its high precision and anti-jamming advantages, has gradually become a core technical means of accurately detecting space targets. Technologies such as sub-pixel scanning, synthetic aperture, and reflective tomography enable long-range super-resolution imaging by breaking through the physical limitations of conventional LiDAR systems. This paper begins by summarizing and organizing the critical problems associated with LiDAR super-resolution technology. Progress on the key technologies is then reported, typical experimental systems and results are analyzed, and the characteristics, advantages, and shortcomings of each system are described with respect to the requirements of space exploration, remote sensing, and mapping missions. Finally, application prospects and development trends are presented.
14
The Back Projection (BP) algorithm is an important direction in the development of synthetic aperture radar imaging algorithms. However, its large computational load has hindered its adoption in engineering applications, so techniques that enhance the computational efficiency of the BP algorithm have recently received widespread attention. This paper discusses fast BP algorithms based on various imaging-plane coordinate systems, including the range-azimuth plane coordinate system, the ground-plane coordinate system, and non-Euclidean coordinate systems. First, the principle of the original BP algorithm and the impact of different coordinate systems on accelerating it are introduced, and the development history of the BP algorithm is traced. Then, the research progress of fast BP algorithms based on different imaging-plane coordinate systems is examined, focusing on recent work completed by the authors’ research team. Finally, engineering applications of fast BP algorithms are introduced, and research trends for fast BP imaging are discussed.
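The original BP principle referred to in this abstract can be sketched in a few lines: for every image pixel, interpolate each range-compressed pulse at the pixel's round-trip delay, compensate the matched-filter phase, and accumulate coherently. Below is a generic, illustrative implementation (the function name, ground-plane grid, and linear range interpolation are our assumptions), not one of the fast variants the paper surveys.

```python
import numpy as np

def backproject(data, pos, grid, r0, dr, wavelength):
    """Time-domain BP sketch.
    data: range-compressed echoes [n_pulse, n_range];
    pos: platform positions [n_pulse, 3]; grid: pixel coordinates (Ny, Nx, 3);
    r0/dr: range of the first bin and bin spacing; returns a complex image."""
    img = np.zeros(grid.shape[:2], dtype=complex)
    for n in range(data.shape[0]):
        # Distance from the antenna at pulse n to every pixel
        r = np.linalg.norm(grid - pos[n], axis=-1)
        bins = (r - r0) / dr
        idx = np.clip(bins.astype(int), 0, data.shape[1] - 2)
        frac = bins - idx
        # Linear interpolation of the range profile at each pixel's delay
        samp = (1 - frac) * data[n, idx] + frac * data[n, idx + 1]
        # Matched-filter phase compensation and coherent accumulation
        img += samp * np.exp(4j * np.pi * r / wavelength)
    return img
```

The per-pulse, per-pixel loop is exactly what makes the exact BP algorithm O(N^3) and motivates the fast variants discussed in the paper.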
15
Synthetic Aperture Radar (SAR) image target recognition technology based on deep learning has matured. However, challenges remain because scattering phenomena and noise interference cause significant intraclass variability in imaging results. Invariant features, which represent the essential attributes of a specific target class with consistent expressions, are crucial for high-precision recognition. We define the invariant features of the target entity, its surrounding environment, and their combined context as the target’s essential features. Guided by multilevel essential feature modeling theory, we propose a SAR image target recognition method based on graph networks and invariant feature perception. This method employs a dual-branch network that processes multiview SAR images simultaneously, using a rotation-learnable unit to adaptively align dual-branch features and reinforce rotation-invariant features by minimizing intraclass feature differences. Specifically, to support essential feature extraction in each branch, we design a feature-guided graph feature perception module based on multilevel essential feature modeling. This module uses salient points for target feature analysis and comprises a target ontology feature enhancement unit, an environment feature sampling unit, and a context-based adaptive fusion update unit. Its outputs are analyzed with a graph neural network and assembled into a topological representation of essential features, yielding a target category vector. The t-Distributed Stochastic Neighbor Embedding (t-SNE) method is used to qualitatively evaluate the algorithm’s classification ability, while metrics such as accuracy, recall, and F1 score are used to quantitatively analyze key units and overall network performance. Additionally, class activation map visualization is employed to validate the extraction and analysis of invariant features at different stages and branches. The proposed method achieves recognition accuracies of 98.56% on the MSTAR dataset, 94.11% on the SAR-ACD dataset, and 86.20% on the OpenSARShip dataset, demonstrating its effectiveness in extracting essential target features.
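As a concrete reference for the quantitative metrics named above (accuracy, recall, F1), here is a minimal hand-rolled sketch, not the paper's evaluation code, that derives all three from a confusion matrix, with recall and F1 macro-averaged over classes:

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Accuracy plus macro-averaged recall and F1 from integer labels 0..K-1."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                   # rows: true, cols: predicted
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)      # per-class precision
    recall = tp / np.maximum(cm.sum(axis=1), 1)         # per-class recall
    f1 = np.where(precision + recall > 0,
                  2 * precision * recall / np.maximum(precision + recall, 1e-12),
                  0.0)
    return {"accuracy": tp.sum() / cm.sum(),
            "recall": recall.mean(),                    # macro average
            "f1": f1.mean()}
```

Macro averaging weights every class equally, which matters for imbalanced SAR datasets such as OpenSARShip.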
16
Compared with ground-based external radiation source radar, satellite-signal external radiation source radar offers advantages such as global, all-time, and all-weather coverage, which can compensate for the limitations of ground-based systems in maritime coverage. In contrast to medium- and high-orbit satellite signals, Low-Earth Orbit (LEO) communication satellite signals offer strong received power and a large number of satellites, which can provide substantial detection range and accuracy for the passive detection of maritime targets. In response to future development needs, this paper discusses in detail the research status and application prospects of satellite-signal external radiation source radar and presents a feasibility analysis for constructing a LEO communication satellite-signal external radiation source radar system, based on the Iridium and Starlink constellations, that integrates high and low frequencies with both wide and narrow bandwidths. On this basis, the paper summarizes the technical challenges and potential solutions in developing such systems. This research can serve as an important reference for wide-area external radiation source radar detection.
17
Multi-sensor multi-target tracking is a popular topic in the field of information fusion. It improves the accuracy and stability of target tracking by fusing information from multiple local sensors. According to the fusion architecture, multi-sensor multi-target tracking is grouped into distributed, centralized, and hybrid fusion. Distributed fusion is widely applied in military and civilian fields owing to its strong reliability, high stability, and low demands on network communication bandwidth. Key techniques of distributed multi-sensor multi-target tracking include multi-target tracking, sensor registration, track-to-track association, and data fusion. This paper reviews the theoretical basis and applicable conditions of these key techniques, highlights the incomplete-measurement spatial registration algorithm and the track association algorithm, and provides simulation results. Finally, the weaknesses of the key techniques of distributed multi-sensor multi-target tracking are summarized, and their future development trends are surveyed.
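To make the data-fusion step concrete, below is a generic sketch of Covariance Intersection (CI), a widely used distributed track-fusion rule that remains consistent when the cross-correlation between local estimates is unknown. It is an illustrative example under our own naming, not necessarily the specific fusion algorithm reviewed in the paper.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two local estimates (x1, P1) and (x2, P2).
    The weight omega is chosen by grid search to minimize the fused trace."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        # Convex combination of information matrices; consistent for any
        # (unknown) cross-covariance between the two local tracks
        P = np.linalg.inv(w * I1 + (1 - w) * I2)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]
```

Because CI never assumes independence between local tracks, it avoids the double-counting of common process noise that naive convex fusion suffers from in distributed architectures.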
18
Joint radar communication leverages resource-sharing mechanisms to improve system spectrum utilization and achieve lightweight designs. It has wide applications in air traffic control, healthcare monitoring, and autonomous vehicles. Traditional joint radar communication algorithms often rely on precise mathematical modeling and channel estimation and cannot adapt to dynamic, complex environments that are difficult to describe. Artificial Intelligence (AI), with its powerful learning ability, automatically learns features from large amounts of data without explicit modeling, thereby promoting the deep fusion of radar and communication. This article provides a systematic review of research on AI-driven joint radar communication. Specifically, the model and challenges of the joint radar communication system are first elaborated. On this basis, the latest research progress on AI-driven joint radar communication is summarized from two aspects: radar-communication coexistence and dual-function radar communication. Finally, the article concludes with the potential technical challenges and future research directions in this field.
19
Considering the problem of radar target detection in sea clutter environments, this paper proposes a deep learning-based marine target detector. The proposed detector increases the differences between target and clutter by fusing multiple complementary features extracted from different data sources, thereby improving detection performance for marine targets. Specifically, the detector uses two feature extraction branches to extract multiple levels of fast-time and range features from the range profiles and the Range-Doppler (RD) spectrum, respectively. Subsequently, a local-global feature extraction structure is developed to extract sequence relations along the slow-time or Doppler dimension of the features. Furthermore, a feature fusion block based on adaptive convolution weight learning is proposed to efficiently fuse slow-fast time and RD features. Finally, the detection results are obtained through upsampling and nonlinear mapping of the fused multilevel features. Experiments on two public radar databases validate the detection performance of the proposed detector.
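For readers unfamiliar with the detector's second input representation: a range-Doppler map is conventionally formed by an FFT across the slow-time (pulse) dimension of range-compressed echoes, one spectrum per range cell. The sketch below is the standard textbook construction (function and parameter names are our assumptions, not the paper's code):

```python
import numpy as np

def range_doppler_map(echoes, window=True):
    """echoes: complex array [n_pulse, n_range] of range profiles.
    Returns the RD magnitude spectrum [n_doppler, n_range]."""
    n_pulse = echoes.shape[0]
    # Optional slow-time taper to suppress Doppler sidelobes of strong clutter
    w = np.hanning(n_pulse)[:, None] if window else 1.0
    # FFT over the pulse (slow-time) axis; fftshift centers zero Doppler
    rd = np.fft.fftshift(np.fft.fft(echoes * w, axis=0), axes=0)
    return np.abs(rd)
```

A target moving radially shows up as a peak displaced from the zero-Doppler clutter ridge, which is precisely the separation the detector's RD branch exploits.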
20
Integrated Sensing And Communications (ISAC), a key technology for 6G networks, has attracted extensive attention from both academia and industry. Leveraging the widespread deployment of communication infrastructure, integrating sensing functions into communication systems to achieve ISAC networks has emerged as a research focus. To this end, the signal design for communication-centric ISAC systems must be addressed first. Two main technical routes are considered for communication-centric signal design: (1) pilot-based sensing signal design and (2) data-based ISAC signal design. This paper provides an in-depth and systematic overview of signal design along both routes. First, a comprehensive review of the existing literature on pilot-based signal design for sensing is presented. Then, data-based ISAC signal design is analyzed. Finally, future research topics on ISAC signal design are proposed.
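As a toy illustration of the first route, pilot-based sensing commonly estimates a target delay from the per-subcarrier channel response measured on known OFDM pilots. The sketch below (all names, and the assumption of evenly spaced pilots across N subcarriers, are ours) transforms the least-squares channel estimates into a delay profile and picks its peak:

```python
import numpy as np

def pilot_delay_estimate(rx_pilots, tx_pilots, subcarrier_spacing):
    """rx/tx_pilots: complex symbols on N evenly spaced pilot subcarriers.
    Returns the estimated round-trip delay in seconds."""
    H = rx_pilots / tx_pilots                  # least-squares channel estimate
    profile = np.abs(np.fft.ifft(H))           # delay-domain profile
    n = len(H)
    tau_bin = np.argmax(profile)               # strongest scatterer bin
    return tau_bin / (n * subcarrier_spacing)  # delay resolution = 1/(N*delta_f)
```

The delay resolution 1/(N·Δf) shows why sensing performance in communication-centric ISAC is tied directly to the pilot bandwidth, one of the trade-offs the surveyed literature addresses.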