Most Downloaded

1
This study addresses the issue of fine-grained feature extraction and classification for Low-Slow-Small (LSS) targets, such as birds and drones, by proposing a multi-band multi-angle feature fusion classification method. First, data from five types of rotorcraft drones and bird models were collected at multiple angles using K-band and L-band frequency-modulated continuous-wave radars, forming a dataset for LSS target detection. Second, to capture the periodic vibration characteristics of the L-band target signals, empirical mode decomposition was applied to extract high-frequency features and reduce noise interference. For the K-band echo signals, short-time Fourier transform was applied to obtain high-resolution micro-Doppler features from various angles. Based on these features, a Multi-band Multi-angle Feature Fusion Network (MMFFNet) was designed, incorporating an improved convolutional long short-term memory network for temporal feature extraction, along with an attention fusion module and a multiscale feature fusion module. The proposed architecture improves target classification accuracy by integrating features from both bands and angles. Validation using a real-world dataset showed that compared with methods relying on single radar features, the proposed approach improved the classification accuracy for seven types of LSS targets by 3.1% under a high Signal-to-Noise Ratio (SNR) of 5 dB and by 12.3% under a low SNR of −3 dB.
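For readers unfamiliar with the micro-Doppler step above, the following minimal Python sketch shows how a micro-Doppler signature is typically obtained from a slow-time radar signal with the short-time Fourier transform; it illustrates the standard technique only, not the authors' processing chain, and the sampling rate and toy echo are assumed values.

```python
# Hedged sketch: micro-Doppler spectrogram of a slow-time radar signal via the
# short-time Fourier transform (not the paper's exact processing chain).
import numpy as np
from scipy.signal import stft

fs = 2000.0                          # assumed slow-time (pulse-repetition) sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
# Toy echo: body Doppler at 300 Hz plus a sinusoidally modulated rotor-blade component
echo = np.exp(1j * 2 * np.pi * 300 * t) \
     + 0.3 * np.exp(1j * 2 * np.pi * 80 * np.sin(2 * np.pi * 5 * t))

f, tau, Z = stft(echo, fs=fs, nperseg=128, noverlap=96, return_onesided=False)
spectrogram_db = 20 * np.log10(np.abs(Z) + 1e-12)   # micro-Doppler signature in dB
print(spectrogram_db.shape)          # (frequency bins, time frames)
```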
2
The bistatic Synthetic Aperture Radar (SAR) system, which employs spatially separated transmitting and receiving platforms, provides high-resolution imaging of terrestrial and maritime scenes and targets in complex environments. Its advantages include flexible configuration, strong concealment capabilities, high interference resistance, and comprehensive target information acquisition, making it valuable in high-precision remote sensing mapping, covert imaging, and precision strikes. Image processing is critical for obtaining high-resolution Bistatic SAR (BiSAR) images. However, the echo model and characteristics of BiSAR substantially differ from those of traditional monostatic SAR, necessitating specialized image processing methods tailored to various operational modes and configurations. This study examines key challenges and solutions for several BiSAR configurations, including airborne BiSAR, BiSAR with high-speed and highly maneuverable platforms, spaceborne heterogeneous BiSAR, and spaceborne homogeneous BiSAR. This study also addresses motion compensation approaches and moving target imaging in BiSAR systems, reviews relevant domestic and international research advancements, and provides an outlook on future trends in BiSAR image processing.
3
Marine target detection and recognition depend on the characteristics of marine targets and sea clutter. Therefore, understanding the essential features of marine targets based on measured data is crucial for advancing target detection and recognition technology. To address the shortage of data on the scattering characteristics of marine targets, the Sea-Detecting Radar Data-Sharing Program (SDRDSP) was upgraded to obtain data on marine targets and their environment under different polarizations and sea states. This upgrade expanded the physical dimensions of radar target observation and improved radar and auxiliary data acquisition capabilities. Furthermore, a dual-polarized multistate scattering characteristic dataset of marine targets was constructed, and the statistical distribution characteristics, temporal and spatial correlation, and Doppler spectrum were analyzed to support use of the data. In the future, the types and quantities of maritime target data will continue to accumulate, providing data support for improving the performance and intelligence of marine target detection and recognition.
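As context for the Doppler-spectrum analysis mentioned above, a minimal sketch of how a Doppler spectrum is commonly estimated for one range cell of a coherent pulse train is given below; the PRF and toy signal are assumptions, and this is not the SDRDSP processing itself.

```python
# Hedged sketch: Doppler spectrum estimate for one range cell of a coherent
# pulse train, via a windowed FFT over slow time (illustrative only).
import numpy as np

prf = 1000.0                                  # assumed pulse repetition frequency, Hz
n_pulses = 256
slow_time = np.arange(n_pulses) / prf
# Toy range-cell series: a 120 Hz Doppler line embedded in complex noise
x = np.exp(1j * 2 * np.pi * 120 * slow_time) \
  + 0.5 * (np.random.randn(n_pulses) + 1j * np.random.randn(n_pulses))

window = np.hanning(n_pulses)
spectrum = np.fft.fftshift(np.fft.fft(window * x))
doppler_axis = np.fft.fftshift(np.fft.fftfreq(n_pulses, d=1.0 / prf))
power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
print(doppler_axis[np.argmax(power_db)])      # ≈ 120 Hz for this toy example
```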
4

Flying birds and Unmanned Aerial Vehicles (UAVs) are typical “low, slow, and small” targets with low observability. The need for effective monitoring and identification of these two targets has become urgent and must be addressed to ensure the safety of air routes and urban areas. Flying birds and UAVs come in many types and are characterized by low flying heights, strong maneuverability, small radar cross sections, and complicated detection environments, which pose great challenges for target detection worldwide. “Visible (high detection ability) and clear-cut (high recognition probability)” methods and technologies must be developed that can finely describe and recognize UAVs, flying birds, and other “low-slow-small” targets. This paper reviews recent progress in detection and recognition technologies for rotor UAVs and flying birds in complex scenes and discusses effective detection and recognition methods, including echo modeling and recognition of micro-motion characteristics, the enhancement and extraction of maneuvering features in ubiquitous observation mode, distributed multi-view feature fusion, differences in motion trajectories, and intelligent classification via deep learning. Lastly, the problems of existing research approaches are summarized, and the future development prospects of target detection and recognition technologies for flying birds and UAVs in complex scenarios are considered.

5
Passive radar plays an important role in early warning and in the detection of Low-Slow-Small (LSS) targets. Because the illumination sources exploited by passive radar are not under the radar's control, target characteristics are more complex, which makes target detection and identification extremely difficult. In this paper, a passive radar LSS detection dataset (LSS-PR-1.0) is constructed, containing radar echo signals of four typical sea and air targets, namely helicopters, unmanned aerial vehicles, speedboats, and passenger ships, as well as sea clutter data at low and high sea states, providing data support for radar research. For target feature extraction and analysis, a singular-value-decomposition sea-clutter-suppression method is first adopted to remove the influence of the strong Bragg peaks of sea clutter on the target echo. On this basis, ten multi-domain feature extraction and analysis methods in four categories are proposed: time-domain features (relative average amplitude), frequency-domain features (spectral features, Doppler waterfall plots, and range-Doppler features), time-frequency-domain features, and motion features (heading difference, trajectory parameters, speed variation interval, speed variation coefficient, and acceleration). Based on measured data, a comparative analysis is conducted on the characteristics of the four types of sea and air targets, summarizing the patterns of the various target characteristics and laying the foundation for subsequent target recognition.
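The singular-value-decomposition clutter-suppression step can be illustrated with the common recipe below: arrange echoes into a range-by-pulse matrix, decompose it, and discard the few dominant singular components assumed to carry the strong Bragg-line energy. This is a generic sketch, not the paper's exact algorithm, and the number of discarded components is an assumed tuning parameter.

```python
# Hedged sketch: SVD-based clutter suppression on a range x pulse data matrix.
# The dominant singular components (assumed to hold strong sea-clutter energy,
# e.g. Bragg lines) are removed and the remainder is kept as the target echo.
import numpy as np

def svd_clutter_suppress(data, n_clutter=2):
    """data: complex array of shape (n_range, n_pulses); n_clutter: number of
    dominant singular components to discard (a tuning choice, not a fixed rule)."""
    U, s, Vh = np.linalg.svd(data, full_matrices=False)
    s_clean = s.copy()
    s_clean[:n_clutter] = 0.0                # zero out the strongest components
    return (U * s_clean) @ Vh                # reconstruct the clutter-suppressed data

rng = np.random.default_rng(0)
raw = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))
clean = svd_clutter_suppress(raw, n_clutter=2)
print(clean.shape)
```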
6
Conventional spaceborne monostatic radar systems incur huge engineering costs to detect small moving targets and offer limited anti-interference capability. By exploiting the transmitter-receiver separation of a spaceborne bistatic radar system, the target radar cross section can be effectively increased by adopting a configuration with a large azimuth bistatic angle, and the anti-interference ability can be improved because the receiver does not transmit signals. However, the characteristics of the background clutter echo in a spaceborne bistatic radar system differ drastically from those in a spaceborne monostatic radar system because of the transmitter-receiver separation. To overcome the limitations of existing empirical clutter scattering coefficient models, which typically do not capture the variation of the scattering coefficient with the azimuth bistatic angle, this study proposes a semiempirical bistatic clutter scattering coefficient model based on the two-scale model. In the proposed model, an empirical clutter backscattering coefficient model is converted to a bistatic clutter scattering coefficient model based on electromagnetic scattering theory, and the bistatic scattering coefficient is further modified using the two-scale model. The proposed model was validated against measured bistatic clutter scattering coefficients reported in the literature. Using the proposed model, clutter suppression performance under different azimuth bistatic angles was analyzed by employing space-time adaptive processing in spaceborne bistatic radar systems. The results show that, under HH polarization, the clutter suppression performance was relatively good when the azimuth bistatic angle was 30°~130°, whereas the performance was considerably degraded by high-power main-lobe clutter when the azimuth bistatic angle exceeded 150°.
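The clutter-suppression analysis referred to at the end rests on standard space-time adaptive processing; the sketch below shows the generic optimum-weight and output-SINR computation used in such analyses, with an assumed toy covariance rather than the paper's bistatic clutter model.

```python
# Hedged sketch: generic space-time adaptive processing (STAP) weight and
# output SINR for one space-time steering direction (illustrative only).
import numpy as np

n_elem, n_pulse = 8, 16                      # assumed array elements and coherent pulses
dof = n_elem * n_pulse
rng = np.random.default_rng(1)

# Toy clutter-plus-noise covariance: a few strong clutter components plus noise
A = rng.standard_normal((dof, 5)) + 1j * rng.standard_normal((dof, 5))
R = A @ A.conj().T * 10.0 + np.eye(dof)

# Space-time steering vector for a chosen spatial/Doppler frequency pair
fs_sp, fd = 0.1, 0.25                        # normalized spatial and Doppler frequencies
a = np.exp(2j * np.pi * fs_sp * np.arange(n_elem))
b = np.exp(2j * np.pi * fd * np.arange(n_pulse))
s = np.kron(b, a)

w = np.linalg.solve(R, s)                    # optimum weight up to a scale: w ∝ R^{-1} s
sinr = np.real(s.conj() @ np.linalg.solve(R, s))   # output SINR metric (unit-power target)
print(w.shape, float(sinr))
```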
7
Integrated Sensing And Communications (ISAC), a key technology for 6G networks, has attracted extensive attention from both academia and industry. Leveraging the widespread deployment of communication infrastructures, the integration of sensing functions into communication systems to achieve ISAC networks has emerged as a research focus. To this end, the signal design for communication-centric ISAC systems should be addressed first. Two main technical routes are considered for communication-centric signal design: (1) pilot-based sensing signal design and (2) data-based ISAC signal design. This paper provides an in-depth and systematic overview of signal design for the aforementioned technical routes. First, a comprehensive review of the existing literature on pilot-based signal design for sensing is presented. Then, the data-based ISAC signal design is analyzed. Finally, future research topics on the ISAC signal design are proposed.
8
This study proposes a Synthetic Aperture Radar (SAR) aircraft detection and recognition method combined with scattering perception to address the problems of target discreteness and false alarms caused by strong background interference in SAR images. Global information is enhanced through a context-guided feature pyramid module, which suppresses strong disturbances in complex images and improves the accuracy of detection and recognition. Additionally, scattering key points are used to locate targets, and a scatter-aware detection module is designed to finely correct the regression boxes and improve target localization accuracy. This study also constructs and releases the high-resolution SAR-AIRcraft-1.0 dataset to verify the effectiveness of the proposed method and to promote research on SAR aircraft detection and recognition. The images in this dataset are obtained from the Gaofen-3 satellite; the dataset contains 4,368 images and 16,463 aircraft instances, covering seven aircraft categories, namely A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and an "other" category. We apply the proposed method and common deep learning algorithms to the constructed dataset. The experimental results demonstrate the effectiveness of our scattering-aware method. Furthermore, we establish benchmarks for the performance indicators of the dataset on different tasks such as SAR aircraft detection, recognition, and integrated detection and recognition.
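The idea of locating targets via scattering key points can be illustrated, very loosely, by picking local amplitude maxima in a SAR image chip and fitting a box around them; the sketch below is not the paper's scatter-aware detection module, and the thresholds and window size are assumptions.

```python
# Hedged toy illustration: extract strong scattering key points from a SAR
# amplitude chip as local maxima above a threshold, then fit a box around them.
# This is NOT the paper's scatter-aware detection module, just the general idea.
import numpy as np
from scipy.ndimage import maximum_filter

def scattering_keypoints(amplitude, win=5, db_above_median=15.0):
    img_db = 20 * np.log10(amplitude + 1e-12)
    local_max = maximum_filter(img_db, size=win) == img_db
    strong = img_db > (np.median(img_db) + db_above_median)
    ys, xs = np.nonzero(local_max & strong)
    return np.stack([ys, xs], axis=1)        # (N, 2) keypoint coordinates

def box_from_keypoints(kpts, margin=2):
    (y0, x0), (y1, x1) = kpts.min(0), kpts.max(0)
    return y0 - margin, x0 - margin, y1 + margin, x1 + margin

chip = np.abs(np.random.default_rng(2).standard_normal((64, 64)))
chip[20:24, 30:34] += 50.0                   # toy bright scatterers
kpts = scattering_keypoints(chip)
print(box_from_keypoints(kpts))
```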
9
Weak target signal processing is the cornerstone and prerequisite for radar to achieve excellent detection performance. In complex practical applications, owing to strong clutter interference, weak target signals, unclear image features, and the difficulty of extracting effective features, weak target detection and recognition have always been challenging in the field of radar processing. Conventional model-based processing methods do not accurately match the actual working background and target characteristics, leading to weak universality. Recently, deep learning has made significant progress in the field of radar intelligent information processing. By building deep neural networks, deep learning algorithms can automatically learn feature representations from a large amount of radar data, improving the performance of target detection and recognition. This article systematically reviews and summarizes recent research progress in the intelligent processing of weak radar targets in terms of signal processing, image processing, feature extraction, target classification, and target recognition. It discusses noise and clutter suppression, target signal enhancement, low- and high-resolution radar image and feature processing, feature extraction, and fusion. In response to the limited generalization ability, single feature expression, and insufficient interpretability of existing intelligent processing applications for weak targets, this article underscores future developments from the aspects of small-sample object detection (based on transfer learning and reinforcement learning), multidimensional and multifeature fusion, network model interpretability, and joint knowledge- and data-driven processing.
10
Compared with the miniature monostatic SAR system, the miniature multistatic Synthetic Aperture Radar (SAR) system uses a flexible transmitter-receiver configuration, thereby affording the advantage of multi-angle imaging. Because the transceiver-separated SAR system uses mutually independent oscillator sources, phase synchronization is necessary for high-precision imaging with a miniature multistatic SAR. Although current research on phase synchronization schemes for bistatic SAR is relatively mature, these schemes are primarily based on pulsed SAR systems, and little research exists on phase synchronization for the miniature multistatic Frequency Modulated Continuous Wave (FMCW) SAR. In contrast to pulsed SAR, the FMCW SAR system has no temporal interval between transmitted pulses, so some phase synchronization schemes developed for pulsed SAR cannot be directly applied to FMCW SAR. To this end, this study proposes a novel phase synchronization method for the miniature multistatic FMCW SAR that effectively resolves this problem. The method uses a generalized Short-Time Shift-Orthogonal (STSO) waveform as the phase synchronization signal of the different radar platforms. The phase error between the radar platforms can be effectively extracted through pulse compression to realize phase synchronization. Compared with the conventional linear frequency-modulated waveform, after the generalized STSO waveform is pulse-compressed with the same compression function, the interference signal energy is concentrated away from the peak of the matched signal and the phase synchronization accuracy is enhanced. Furthermore, the proposed method is adapted to the dechirp reception used in FMCW miniature multistatic SAR systems, and ground experiments and numerical simulations verify that the proposed method achieves high synchronization accuracy.
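The central step described here, recovering the inter-platform phase error by pulse-compressing an exchanged synchronization waveform, can be sketched generically as follows; an ordinary linear chirp stands in for the paper's generalized STSO waveform, and all parameters are assumed values.

```python
# Hedged sketch: extract a phase offset between two platforms by matched
# filtering (pulse compression) of an exchanged synchronization waveform.
# An ordinary chirp stands in for the paper's generalized STSO waveform.
import numpy as np

fs, T, B = 20e6, 50e-6, 5e6                  # assumed sample rate, duration, bandwidth
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)  # reference synchronization waveform

true_phase = 0.7                              # unknown oscillator phase offset (rad)
received = chirp * np.exp(1j * true_phase)    # received copy of the waveform
received += 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# Pulse compression = correlation with the conjugated reference
compressed = np.correlate(received, chirp, mode="full")
peak = np.argmax(np.abs(compressed))
estimated_phase = np.angle(compressed[peak])  # phase at the compression peak
print(estimated_phase)                        # ≈ 0.7 rad
```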
11
In recent years, the rapid development of Multimodal Large Language Models (MLLMs) and their applications in remote sensing have garnered significant attention. Remote sensing MLLMs achieve deep integration of visual features and semantic information through the design of bridging mechanisms between large language models and vision models, combined with joint training strategies. This integration facilitates a paradigm shift in intelligent remote sensing interpretation—from shallow semantic matching to higher-level understanding based on world knowledge. In this study, we systematically review the research progress in the applications of MLLMs in remote sensing, specifically examining the development of Remote Sensing MLLMs (RS-MLLMs), which provides a foundation for future research directions. Initially, we discuss the concept of RS-MLLMs and review their development in chronological order. Subsequently, we provide a detailed analysis and statistical summary of the proposed architectures, training methods, applications, and corresponding benchmark datasets, along with an introduction to remote sensing agents. Finally, we summarize the research status of RS-MLLMs and discuss future research directions.
12
Bistatic Synthetic Aperture Radar (BiSAR) needs to suppress ground background clutter when detecting and imaging ground moving targets. However, because of the spatial configuration of BiSAR, the clutter exhibits severe space-time nonstationarity, which degrades clutter suppression performance. Although Space-Time Adaptive Processing based on Sparse Recovery (SR-STAP) can mitigate the nonstationarity problem by reducing the number of required samples, an off-grid dictionary problem arises during processing, degrading the space-time spectrum estimation. Moreover, although most typical SR-STAP methods have clear mathematical formulations and interpretability, they also suffer from problems such as difficult parameter tuning and complicated operation in complex and changeable scenes. To solve these problems, a complex-valued neural network based on the Alternating Direction Method of Multipliers (ADMM) is proposed for BiSAR space-time adaptive clutter suppression. First, a sparse recovery model of the continuous clutter space-time spectrum of BiSAR is constructed based on Atomic Norm Minimization (ANM) to overcome the off-grid problem associated with the traditional discrete dictionary model. Second, ADMM is used to rapidly and iteratively solve the BiSAR clutter spectral sparse recovery model. Third, according to the iterative and data-flow diagrams, the manually tuned iterative process is unrolled into ANM-ADMM-Net. Then, a normalized root-mean-square-error loss function is set up and the network is trained on the constructed dataset. Finally, the trained ANM-ADMM-Net is used to rapidly process BiSAR echo data, so that the space-time spectrum of BiSAR clutter is accurately estimated and the clutter is efficiently suppressed. The effectiveness of this approach is validated through simulations and airborne BiSAR clutter suppression experiments.
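The unrolling idea behind ANM-ADMM-Net can be conveyed with the generic sketch below, which unrolls a fixed number of ADMM iterations for an ordinary l1-regularized sparse recovery problem; the paper's atomic-norm formulation and learned layer parameters are not reproduced here.

```python
# Hedged sketch: unrolling a fixed number of ADMM iterations for a generic
# sparse recovery problem  min 0.5*||Ax - y||^2 + lam*||x||_1.
# ANM-ADMM-Net replaces such hand-set steps with learned layer parameters;
# the atomic-norm formulation itself is not reproduced here.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_unrolled(A, y, n_layers=50, rho=1.0, lam=0.1):
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Aty = A.T @ A, A.T @ y
    P = np.linalg.inv(AtA + rho * np.eye(n))      # per-"layer" linear solve
    for _ in range(n_layers):                     # each loop = one network layer
        x = P @ (Aty + rho * (z - u))             # x-update (least squares)
        z = soft_threshold(x + u, lam / rho)      # z-update (sparsifying proximal step)
        u = u + x - z                             # dual update
    return z

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.flatnonzero(np.abs(admm_unrolled(A, y)) > 0.5))   # recovered support, ≈ [5, 37, 80]
```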
13
Millimeter-wave radar is increasingly being adopted for smart home systems, elder care, and surveillance monitoring, owing to its adaptability to environmental conditions, high resolution, and privacy-preserving capabilities. A key factor in effectively utilizing millimeter-wave radar is the analysis of point clouds, which are essential for recognizing human postures. However, the sparse nature of these point clouds poses significant challenges for accurate and efficient human action recognition. To overcome these issues, we present a 3D point cloud dataset tailored for human actions captured using millimeter-wave radar (mmWave-3DPCHM-1.0). This dataset is enhanced with advanced data processing techniques and cutting-edge human action recognition models. Data collection is conducted using Texas Instruments (TI)’s IWR1443-ISK and Vayyar’s vBlu radio imaging module, covering 12 common human actions, including walking, waving, standing, and falling. At the core of our approach is the Point EdgeConv and Transformer (PETer) network, which integrates edge convolution with transformer models. For each 3D point cloud frame, PETer constructs a locally directed neighborhood graph through edge convolution to extract spatial geometric features effectively. The network then leverages a series of Transformer encoding models to uncover temporal relationships across multiple point cloud frames. Extensive experiments reveal that the PETer network achieves exceptional recognition rates of 98.77% on the TI dataset and 99.51% on the Vayyar dataset, outperforming the traditional optimal baseline model by approximately 5%. With a compact model size of only 1.09 MB, PETer is well-suited for deployment on edge devices, providing an efficient solution for real-time human action recognition in resource-constrained environments.
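The local-graph step that PETer builds on, EdgeConv over a k-nearest-neighbor graph of each point cloud frame, can be sketched in a framework-free way as below; a random linear map stands in for the learned MLP, so this shows only the standard EdgeConv idea rather than the PETer implementation.

```python
# Hedged sketch: a single EdgeConv-style layer on one point cloud frame.
# For each point, edge features built from [x_i, x_j - x_i] over its k nearest
# neighbors are max-aggregated. A random linear map stands in for the learned MLP.
import numpy as np

def knn_indices(points, k):
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]        # skip self (column 0)

def edgeconv(points, weight, k=8):
    """points: (N, 3); weight: (6, C) linear map over [x_i, x_j - x_i]."""
    idx = knn_indices(points, k)                      # (N, k)
    centers = np.repeat(points[:, None, :], k, axis=1)
    neighbors = points[idx]                           # (N, k, 3)
    edge_feat = np.concatenate([centers, neighbors - centers], axis=-1)  # (N, k, 6)
    out = np.maximum(edge_feat @ weight, 0.0)         # shared "MLP" + ReLU
    return out.max(axis=1)                            # max over the k edges -> (N, C)

rng = np.random.default_rng(4)
frame = rng.standard_normal((128, 3))                 # one radar point cloud frame
features = edgeconv(frame, rng.standard_normal((6, 32)) * 0.1, k=8)
print(features.shape)                                 # (128, 32)
```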
14
The detection of small, slow-moving targets such as Unmanned Aerial Vehicles (UAVs) poses considerable challenges to radar target detection and recognition technology, and there is an urgent need for datasets that support the development and application of such techniques. This paper presents a dataset for detecting low-speed, small-size targets using multiband Frequency Modulated Continuous Wave (FMCW) radar. Ku-band and L-band FMCW radars were used to collect echo data from six UAV types; by varying radar cycle and bandwidth, the dataset offers diverse temporal and frequency-domain resolutions and measurement capabilities, yielding the LSS-FMCWR-1.0 dataset (Low-Slow-Small, LSS). To further enhance the capability for extracting micro-Doppler features from UAVs, this paper proposes a method for UAV micro-Doppler extraction and parameter estimation based on the local maximum synchroextracting transform. Building on the Short-Time Fourier Transform (STFT), this method extracts the values at the maximum-energy points in the time-frequency domain to retain the useful signal and sharpen the time-frequency energy representation. Validation and analysis using the LSS-FMCWR-1.0 dataset demonstrate that this approach reduces entropy by 5.3 dB on average and decreases the estimation error of rotor blade length by 27.7% compared with traditional time-frequency methods. Moreover, because it balances high time-frequency resolution with parameter estimation capability, the proposed method provides a foundation for subsequent target recognition efforts.
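A greatly simplified illustration of the "keep only the maximum-energy time-frequency points" idea is given below: an STFT is computed and, in each time frame, only bins that are local energy maxima along frequency are retained. This is a crude stand-in for, not an implementation of, the authors' local maximum synchroextracting transform.

```python
# Hedged sketch: compute an STFT and keep, in each time frame, only bins that
# are local energy maxima along frequency. A crude stand-in for the idea of
# synchroextraction; the paper's local maximum synchroextracting transform
# is more elaborate.
import numpy as np
from scipy.signal import stft

fs = 4000.0
t = np.arange(0, 0.5, 1 / fs)
sig = np.cos(2 * np.pi * (500 * t + 300 * t**2))      # toy chirp-like return

f, tau, Z = stft(sig, fs=fs, nperseg=256, noverlap=224)
E = np.abs(Z) ** 2
# Local maximum along the frequency axis within each time frame
is_peak = (E >= np.roll(E, 1, axis=0)) & (E >= np.roll(E, -1, axis=0))
sparse_tfr = np.where(is_peak, Z, 0.0)                # "extracted" time-frequency points
print(np.count_nonzero(sparse_tfr), "of", Z.size, "bins retained")
```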
15
As the electromagnetic spectrum becomes a key operational domain in modern warfare, radars will face a more complex, dexterous, and smarter electromagnetic interference environment in future military operations. Cognitive Intelligent Radar (CIR) has become one of the key development directions in radar technology because it has the capabilities of active environmental perception, arbitrary transmit and receive design, intelligent signal processing, and resource scheduling, and can therefore adapt to the complex and changeable electromagnetic confrontation environment of the battlefield. In this study, the CIR is decomposed into four functional modules: cognitive transmitting, cognitive receiving, intelligent signal processing, and intelligent resource scheduling. The antijamming principle of each link (i.e., interference perception, transmit design, receive design, signal processing, and resource scheduling) of the CIR is then elucidated. Finally, we summarize the representative literature of recent years and analyze the technological development trend in this field to provide a reference and basis for future research.
16
Joint radar communication leverages resource-sharing mechanisms to improve system spectrum utilization and achieve lightweight design. It has wide applications in air traffic control, healthcare monitoring, and autonomous vehicles. Traditional joint radar communication algorithms often rely on precise mathematical modeling and channel estimation and cannot adapt to dynamic and complex environments that are difficult to describe. Artificial Intelligence (AI), with its powerful learning ability, automatically learns features from large amounts of data without the need for explicit modeling, thereby promoting the deep fusion of radar communication. This article provides a systematic review of the research on AI-driven joint radar communication. Specifically, the model and challenges of the joint radar communication system are first elaborated. On this basis, the latest research progress on AI-driven joint radar communication is summarized from two aspects: radar communication coexistence and dual-functional radar communication. Finally, the article is summarized, and the potential technical challenges and future research directions in this field are described.
17
As an important method of 3D (Three-Dimensional) data processing, point cloud fusion technology has shown great potential and promising applications in many fields. This paper systematically reviews the basic concepts, commonly used techniques, and applications of point cloud fusion and thoroughly analyzes the current status and future development trends of various fusion methods. Additionally, the paper explores the practical applications and challenges of point cloud fusion in fields such as autonomous driving, architecture, and robotics. Special attention is given to balancing algorithmic complexity with fusion accuracy, particularly in addressing issues like noise, data sparsity, and uneven point cloud density. This study serves as a strong reference for the future development of point cloud fusion technology by providing a comprehensive overview of the existing research progress and identifying possible research directions for further improving the accuracy, robustness, and efficiency of fusion algorithms.
18
Synthetic Aperture Radar (SAR) is an all-weather, all-time, high-resolution imaging radar that is widely used for reconnaissance, providing timely and accurate intelligence for wartime decision-making. Suppressing and disrupting the reconnaissance imaging of SAR systems to protect high-value targets and important strategic areas has become a hot issue in contemporary electronic warfare. This study discusses the development and future trends of SAR jamming techniques. First, the history of SAR jamming techniques is reviewed in detail. Then, the advantages and disadvantages of typical SAR jamming models are comparatively analyzed together with simulation experiments. Finally, the current shortcomings of SAR jamming techniques are summarized and future trends are pointed out, providing a reference for researchers in the field.
19
In recent years, target recognition systems based on radar sensor networks have been widely studied in the field of automatic target recognition. These systems observe the target from multiple angles to achieve robust recognition, which also raises the problem of how to exploit the correlation and difference information among the echo data of multiple radar sensors. Furthermore, most existing studies rely on large-scale labeled data to obtain prior knowledge of the target. Considering that large amounts of unlabeled data are not effectively used in target recognition tasks, this paper proposes an HRRP unsupervised target feature extraction method based on Multiple Contrastive Loss (MCL) for radar sensor networks. The proposed method combines instance-level loss, Fisher loss, and semantic consistency loss constraints to learn consistent and discriminative feature vectors from the echoes of multiple radar sensors, which are then used in subsequent target recognition tasks. Specifically, the original echo data are mapped to a contrastive loss space and a semantic label space. In the contrastive loss space, the contrastive loss constrains the similarity and aggregation of samples so that the relative and absolute distances between different echoes of the same target obtained by different sensors are reduced, while the relative and absolute distances between echoes of different targets are increased. In the semantic label space, the extracted discriminative features are used to constrain the semantic labels so that the semantic information and the discriminative features are consistent. Experiments on a measured civil aircraft dataset show that the target recognition accuracy of the MCL-based method is improved by 0.4% and 1.4% compared with the state-of-the-art unsupervised algorithm CC and the supervised target recognition algorithm PNN, respectively. Furthermore, MCL can effectively improve target recognition performance when multiple radar sensors are used jointly.
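The instance-level term described above, pulling together echoes of the same target seen by different sensors and pushing apart echoes of different targets, is in the spirit of a standard NT-Xent contrastive loss; the sketch below shows that generic loss only, whereas the paper's MCL additionally includes Fisher and semantic-consistency terms.

```python
# Hedged sketch: an NT-Xent-style instance-level contrastive loss between the
# embeddings of the same targets observed by two different radar sensors.
# The full MCL in the paper additionally uses Fisher and semantic-consistency
# terms, which are not reproduced here.
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of the same N targets from sensor 1 and sensor 2."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature                      # cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                   # exclude self-pairs
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

rng = np.random.default_rng(5)
feat = rng.standard_normal((16, 64))
loss = nt_xent(feat + 0.05 * rng.standard_normal((16, 64)),
               feat + 0.05 * rng.standard_normal((16, 64)))
print(float(loss))
```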
20

As one of the core components of Advanced Driver Assistance Systems (ADAS), automotive millimeter-wave radar has become a focus of researchers and manufacturers in China and abroad because of its all-day, all-weather operation, miniaturization, high integration, and key sensing capabilities. The core performance indicators of automotive millimeter-wave radar are range, speed, angular resolution, and field of view; accuracy, cost, real-time performance, detection performance, and size are the key issues to be considered. Increasing performance requirements pose several challenges for the signal processing of millimeter-wave radar systems, and radar signal processing technology is crucial for meeting these more stringent requirements. Obtaining dense radar point clouds, generating accurate radar imaging results, and mitigating mutual interference among multiple radars are the key points and the foundation for subsequent tracking, recognition, and other applications. Therefore, this paper discusses the practical application of automotive millimeter-wave radar systems from the perspective of key signal processing technologies, summarizes relevant research results, and focuses on point cloud imaging processing, synthetic aperture radar imaging processing, and interference suppression. Finally, we summarize the research status in China and abroad and forecast future development trends for automotive millimeter-wave radar systems, in the hope of enlightening readers in related fields.
