Most Downloaded

1
Deep learning is the dominant approach to target detection in Synthetic Aperture Radar (SAR) images; however, its performance relies heavily on large-scale labeled datasets. The detection performance of deep learning models degrades when they are applied to SAR data with differing distributions, hindering real-world applicability, and manual labeling of SAR data is costly. Hence, cross-domain learning strategies based on multisource information are being explored to address these challenges. Such strategies help detection models achieve cross-domain knowledge transfer by integrating prior information from optical remote sensing images or from heterogeneous SAR images acquired by different sensors.
This paper focuses on cross-domain learning technologies within the deep learning framework. It provides a systematic overview of the latest research progress in this field and analyzes the core issues, advantages, and applicable scenarios of existing techniques from a methodological perspective. It also outlines future research directions based on the trajectory of technological evolution, aiming to offer theoretical support and methodological references for enhancing the generalizability of target detection in SAR images.
2
Passive radar plays an important role in early warning and in the detection of Low Slow Small (LSS) targets. Because passive radar cannot control its illumination sources, target characteristics are more complex, making target detection and identification extremely difficult.
In this paper, a passive radar LSS detection dataset (LSS-PR-1.0) is constructed. It contains the radar echo signals of four typical sea and air targets, namely helicopters, unmanned aerial vehicles, speedboats, and passenger ships, as well as sea clutter data at low and high sea states, providing data support for radar research. For target feature extraction and analysis, a singular-value-decomposition sea-clutter-suppression method is first applied to remove the influence of the strong Bragg peak of sea clutter on the target echo. On this basis, ten multi-domain feature extraction and analysis methods in four categories are proposed: time-domain features (relative average amplitude), frequency-domain features (spectral features, Doppler waterfall plots, and range-Doppler features), time-frequency-domain features, and motion features (heading difference, trajectory parameters, speed variation interval, speed variation coefficient, and acceleration). Based on measured data, the characteristics of the four types of sea and air targets are compared, the patterns of the various target characteristics are summarized, and a foundation is laid for subsequent target recognition.
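The SVD-based sea-clutter-suppression step can be pictured with a generic numpy sketch. This is the textbook low-rank formulation, not the paper's exact implementation; the pulse-by-range matrix arrangement, the synthetic data, and the number of suppressed components are illustrative assumptions:

```python
import numpy as np

def svd_clutter_suppress(echo, n_clutter=1):
    """Null the leading singular components of a pulse-by-range echo
    matrix, assuming that strong, highly correlated sea clutter
    (e.g. the Bragg peak) dominates the first few components."""
    U, s, Vh = np.linalg.svd(echo, full_matrices=False)
    s = s.copy()
    s[:n_clutter] = 0.0                      # discard the clutter subspace
    return (U * s) @ Vh

# synthetic check: rank-1 "clutter" far stronger than a point target
rng = np.random.default_rng(0)
pulses, cells = 64, 128
clutter = 10.0 * np.outer(np.exp(2j * np.pi * 0.05 * np.arange(pulses)),
                          rng.standard_normal(cells))
target = np.zeros((pulses, cells), dtype=complex)
target[:, 40] = 0.5 * np.exp(2j * np.pi * 0.30 * np.arange(pulses))
echo = clutter + target
clean = svd_clutter_suppress(echo, n_clutter=1)
print(np.linalg.norm(clean) / np.linalg.norm(echo))
```

After suppression, almost all of the residual energy sits in the target's range cell, which is what makes the subsequent feature extraction tractable.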
3
Marine target detection and recognition depend on the characteristics of marine targets and sea clutter. Understanding the essential features of marine targets from measured data is therefore crucial for advancing detection and recognition technology. To address the shortage of data on the scattering characteristics of marine targets, the Sea-Detecting Radar Data-Sharing Program (SDRDSP) was upgraded to acquire data on marine targets and their environment under different polarizations and sea states. This upgrade expanded the physical dimensions of radar target observation and improved radar and auxiliary data acquisition capabilities.
Furthermore, a dual-polarized multistate scattering characteristic dataset of marine targets was constructed, and its statistical distribution characteristics, temporal and spatial correlation, and Doppler spectra were analyzed to support use of the data. The types and quantities of maritime targets will continue to accumulate, providing data support for improving the performance and intelligence of marine target detection and recognition.
4
China has one of the longest land borders in the world, featuring diverse terrain and a dense electromagnetic environment, so in practical applications airborne radar faces complex environments. Airborne radar detection performance deteriorates severely in regions with complex terrain and electromagnetic environments, limiting its ability to meet military operational requirements. Cognitive Space-Time Adaptive Processing (STAP) is an effective technical approach to this problem. In this study, a cognitive STAP architecture is proposed, and based on this architecture, the database, algorithm library, cognitive STAP technology, and feedback control are introduced. Analysis of simulated data reveals that, compared with traditional STAP, cognitive STAP can significantly enhance the ability of airborne radar to detect moving targets in complex environments.
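Whatever the cognitive layer decides, the core computation every STAP variant ultimately performs is the adaptive weight vector w ∝ R⁻¹v, from an estimated clutter-plus-noise covariance R and a target space-time steering vector v. A minimal numpy sketch follows, with diagonal loading as a standard remedy for limited training samples; the toy covariance and all sizes are illustrative assumptions, not the paper's system:

```python
import numpy as np

def stap_weights(R, v, load=1e-3):
    """Adaptive space-time weights w = R^{-1} v, normalised to unit
    target gain. Diagonal loading stabilises the solve when R is
    estimated from few training snapshots."""
    n = R.shape[0]
    Rl = R + load * np.trace(R).real / n * np.eye(n)
    w = np.linalg.solve(Rl, v)
    return w / (np.conj(v) @ w)              # v^H w = 1 after scaling

# toy example: one strong rank-1 clutter line plus white noise
n = 8
c = np.exp(2j * np.pi * 0.1 * np.arange(n))  # clutter steering vector
v = np.exp(2j * np.pi * 0.3 * np.arange(n))  # target steering vector
R = 100 * np.outer(c, np.conj(c)) + np.eye(n)
w = stap_weights(R, v)
print(abs(np.conj(w) @ c))                   # response toward clutter
print(abs(np.conj(v) @ w))                   # response toward target
```

The filter drives the response in the clutter direction toward zero while holding unit gain on the target, which is the behaviour the cognitive modules are tuning R and v to exploit.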
5
This study proposes a Synthetic Aperture Radar (SAR) aircraft detection and recognition method that incorporates scattering perception to address target discreteness and the false alarms caused by strong background interference in SAR images. Global information is enhanced through a context-guided feature pyramid module, which suppresses strong disturbances in complex images and improves detection and recognition accuracy.
Additionally, scattering key points are used to locate targets, and a scattering-aware detection module is designed to finely correct the regression boxes and improve target localization accuracy. To verify the effectiveness of the proposed method and promote research on SAR aircraft detection and recognition, this study also presents the high-resolution SAR-AIRcraft-1.0 dataset. Its images are obtained from the Gaofen-3 satellite and comprise 4,368 images with 16,463 aircraft instances covering seven categories: A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and other. We apply the proposed method and common deep learning algorithms to the constructed dataset, and the experimental results demonstrate the effectiveness of the scattering-perception method. Furthermore, we establish performance benchmarks on the dataset for tasks such as SAR aircraft detection, recognition, and integrated detection and recognition.
6
Millimeter-wave radar is increasingly being adopted for smart home systems, elder care, and surveillance monitoring, owing to its adaptability to environmental conditions, high resolution, and privacy-preserving capabilities. A key factor in effectively utilizing millimeter-wave radar is the analysis of point clouds, which are essential for recognizing human postures. However, the sparse nature of these point clouds poses significant challenges for accurate and efficient human action recognition. To overcome these issues, we present a 3D point cloud dataset tailored for human actions captured using millimeter-wave radar (mmWave-3DPCHM-1.0). This dataset is enhanced with advanced data processing techniques and cutting-edge human action recognition models. Data collection is conducted using Texas Instruments (TI)’s IWR1443-ISK and Vayyar’s vBlu radio imaging module, covering 12 common human actions, including walking, waving, standing, and falling. At the core of our approach is the Point EdgeConv and Transformer (PETer) network, which integrates edge convolution with transformer models. For each 3D point cloud frame, PETer constructs a locally directed neighborhood graph through edge convolution to extract spatial geometric features effectively. The network then leverages a series of Transformer encoding models to uncover temporal relationships across multiple point cloud frames. Extensive experiments reveal that the PETer network achieves exceptional recognition rates of 98.77% on the TI dataset and 99.51% on the Vayyar dataset, outperforming the traditional optimal baseline model by approximately 5%. With a compact model size of only 1.09 MB, PETer is well-suited for deployment on edge devices, providing an efficient solution for real-time human action recognition in resource-constrained environments. 
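The edge-convolution input that PETer builds per point-cloud frame can be pictured with a small numpy sketch of the DGCNN-style construction: each point is paired with its k nearest neighbours and the edge feature [x_i, x_j − x_i] is formed. This is a framework-free illustration under assumed conventions, not the paper's network code:

```python
import numpy as np

def knn_edge_features(points, k=4):
    """For each point x_i, find its k nearest neighbours x_j and build
    the EdgeConv input [x_i, x_j - x_i]: absolute position plus the
    local directed offset that encodes neighbourhood geometry."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)             # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]     # (n, k) neighbour indices
    centers = np.repeat(points[:, None, :], k, axis=1)
    return np.concatenate([centers, points[nbrs] - centers], axis=-1)

# five 3-D points from one (hypothetical) radar frame
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                [0., 0., 1.], [1., 1., 1.]])
ef = knn_edge_features(pts, k=2)
print(ef.shape)                              # (n points, k edges, 6 dims)
```

In the full network, a shared MLP plus max-pooling over the k edges would turn these (n, k, 6) features into per-point descriptors before the Transformer stage models inter-frame dynamics.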
7
Detection of small, slow-moving targets, such as drones using Unmanned Aerial Vehicles (UAVs) poses considerable challenges to radar target detection and recognition technology. There is an urgent need to establish relevant datasets to support the development and application of techniques for detecting small, slow-moving targets. This paper presents a dataset for detecting low-speed and small-size targets using a multiband Frequency Modulated Continuous Wave (FMCW) radar. The dataset utilizes Ku-band and L-band FMCW radar to collect echo data from six UAV types and exhibits diverse temporal and frequency domain resolutions and measurement capabilities by modulating radar cycles and bandwidth, generating an LSS-FMCWR-1.0 dataset (Low Slow Small, LSS). To further enhance the capability for extracting micro-Doppler features from UAVs, this paper proposes a method for UAV micro-Doppler extraction and parameter estimation based on the local maximum synchroextracting transform. Based on the Short Time Fourier Transform (STFT), this method extracts values at the maximum energy point in the time-frequency domain to retain useful signals and refine the time-frequency energy representation. Validation and analysis using the LSS-FMCWR-1.0 dataset demonstrate that this approach reduces entropy on an average by 5.3 dB and decreases estimation errors in rotor blade length by 27.7% compared with traditional time-frequency methods. Moreover, the proposed method provides the foundation for subsequent target recognition efforts because it balances high time-frequency resolution and parameter estimation capabilities. Detection of small, slow-moving targets, such as drones using Unmanned Aerial Vehicles (UAVs) poses considerable challenges to radar target detection and recognition technology. There is an urgent need to establish relevant datasets to support the development and application of techniques for detecting small, slow-moving targets. 
This paper presents a dataset for detecting low-speed and small-size targets using a multiband Frequency Modulated Continuous Wave (FMCW) radar. The dataset utilizes Ku-band and L-band FMCW radar to collect echo data from six UAV types and exhibits diverse temporal and frequency domain resolutions and measurement capabilities by modulating radar cycles and bandwidth, generating an LSS-FMCWR-1.0 dataset (Low Slow Small, LSS). To further enhance the capability for extracting micro-Doppler features from UAVs, this paper proposes a method for UAV micro-Doppler extraction and parameter estimation based on the local maximum synchroextracting transform. Based on the Short Time Fourier Transform (STFT), this method extracts values at the maximum energy point in the time-frequency domain to retain useful signals and refine the time-frequency energy representation. Validation and analysis using the LSS-FMCWR-1.0 dataset demonstrate that this approach reduces entropy on an average by 5.3 dB and decreases estimation errors in rotor blade length by 27.7% compared with traditional time-frequency methods. Moreover, the proposed method provides the foundation for subsequent target recognition efforts because it balances high time-frequency resolution and parameter estimation capabilities.
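The idea of retaining only maximum-energy time-frequency points on top of an STFT can be sketched in a few lines of numpy. This is a deliberately simplified stand-in (one ridge per frame, no frequency reassignment), not the full local maximum synchroextracting transform; the window length, hop, and test tone are assumptions:

```python
import numpy as np

def stft(x, win_len=64, hop=16):
    """Plain STFT with a Hann window (numpy only): frames x freq bins."""
    win = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i*hop:i*hop+win_len] * win for i in range(n_frames)])
    return np.fft.fft(frames, axis=1)

def max_energy_ridge(tf):
    """Keep only the maximum-energy frequency bin of each time frame,
    zeroing everything else: the ridge survives, the smear does not."""
    ridge = np.argmax(np.abs(tf), axis=1)
    out = np.zeros_like(tf)
    rows = np.arange(tf.shape[0])
    out[rows, ridge] = tf[rows, ridge]
    return out, ridge

fs = 1024.0
t = np.arange(2048) / fs
x = np.exp(2j * np.pi * 100 * t)             # 100 Hz complex tone
tf = stft(x)
sparse_tf, ridge = max_energy_ridge(tf)
freqs = np.fft.fftfreq(64, d=1/fs)
print(freqs[ridge[0]])                       # nearest bin to 100 Hz
```

On a rotor echo, the same per-frame maximum tracks the micro-Doppler ridge, concentrating energy that a plain STFT spreads across neighbouring bins.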
8
This study addresses fine-grained feature extraction and classification for Low-Slow-Small (LSS) targets, such as birds and drones, by proposing a multi-band, multi-angle feature fusion classification method. First, data from five types of rotorcraft drones and bird models were collected at multiple angles using K-band and L-band frequency-modulated continuous-wave radars, forming a dataset for LSS target detection.
Second, to capture the periodic vibration characteristics of the L-band target signals, empirical mode decomposition was applied to extract high-frequency features and reduce noise interference, while short-time Fourier transform was applied to the K-band echo signals to obtain high-resolution micro-Doppler features from various angles. Based on these features, a Multi-band Multi-angle Feature Fusion Network (MMFFNet) was designed, incorporating an improved convolutional long short-term memory network for temporal feature extraction along with an attention fusion module and a multiscale feature fusion module. The architecture improves classification accuracy by integrating features across bands and angles. Validation on a real-world dataset showed that, compared with methods relying on single-radar features, the proposed approach improved classification accuracy for seven types of LSS targets by 3.1% at a high Signal-to-Noise Ratio (SNR) of 5 dB and by 12.3% at a low SNR of −3 dB.
9
Integrated Sensing And Communications (ISAC), a key technology for 6G networks, has attracted extensive attention from both academia and industry. Leveraging the widespread deployment of communication infrastructure, integrating sensing functions into communication systems to form ISAC networks has emerged as a research focus. To this end, signal design for communication-centric ISAC systems must be addressed first. Two main technical routes are considered: (1) pilot-based sensing signal design and (2) data-based ISAC signal design. This paper provides an in-depth, systematic overview of signal design along both routes: it first comprehensively reviews the existing literature on pilot-based signal design for sensing, then analyzes data-based ISAC signal design, and finally proposes future research topics on ISAC signal design.
10
As the electromagnetic spectrum becomes a key operational domain in modern warfare, radars will face increasingly complex, agile, and smart electromagnetic interference in future military operations. Cognitive Intelligent Radar (CIR) has become a key development direction in radar technology because its capabilities of active environmental perception, flexible transmit and receive design, intelligent signal processing, and resource scheduling allow it to adapt to the complex and changeable electromagnetic confrontation environment of the battlefield. In this study, CIR is decomposed into four functional modules: cognitive transmitting, cognitive receiving, intelligent signal processing, and intelligent resource scheduling.
The antijamming principle of each link of the CIR chain (interference perception, transmit design, receive design, signal processing, and resource scheduling) is then elucidated. Finally, we summarize representative literature from recent years and analyze the technological development trend in this field to provide a reference and basis for future research.
11
Traditional airborne radar Pulse Compression (PC) and Space-Time Adaptive Processing (STAP) suffer performance degradation in complex target and clutter environments because they rely on predefined linear models. To address this issue, we developed a deep learning-based joint STAP-PC technique that employs dedicated networks: a super-resolution space-time spectrum network for nonlinear clutter estimation and a PC network for nonlinear pulse compression. The proposed architecture effectively mitigates model mismatch within the processing chain, improving clutter suppression and target detection.
Notably, we mathematically established the feasibility of post-pulse compensation, preventing nonlinear PC from introducing phase errors across elements and pulses. The implemented architecture uses multimodule convolutional neural networks for super-resolution space-time spectrum estimation and PC, with each module's functionality having a clear mathematical correspondence, thereby ensuring the reliability of the overall processing chain. Simulation results show that, in scenarios with dense weak targets and limited samples, the proposed nonlinear joint processing technique improves the signal-to-clutter-plus-noise ratio by approximately 20 dB over traditional methods.
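For context, the linear baseline that the learned PC network replaces is the classical matched filter. A minimal numpy sketch of matched-filter pulse compression of a linear-FM pulse (all parameters here are illustrative, not from the paper):

```python
import numpy as np

def lfm_pulse(bw, T, fs):
    """Baseband linear-FM (chirp) pulse: bandwidth bw, duration T."""
    t = np.arange(int(T * fs)) / fs
    k = bw / T                               # chirp rate in Hz/s
    return np.exp(1j * np.pi * k * t**2)

def pulse_compress(rx, ref):
    """Classical matched filtering via FFT-based cross-correlation."""
    n = len(rx) + len(ref) - 1               # pad to avoid wrap-around
    return np.fft.ifft(np.fft.fft(rx, n) * np.conj(np.fft.fft(ref, n)))

fs, bw, T = 10e6, 5e6, 20e-6
ref = lfm_pulse(bw, T, fs)
delay = 150                                  # point-target delay, samples
rx = np.zeros(1024, dtype=complex)
rx[delay:delay + len(ref)] = ref             # noiseless echo
out = pulse_compress(rx, ref)
print(np.argmax(np.abs(out)))
```

The compressed peak lands at the target's delay; it is exactly this fixed linear correlation that the learned PC network generalizes when the echo departs from the assumed pulse model.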
12
Maritime target detection and identification technologies are developed using large-scale, high-quality multi-sensor measurement data. Accordingly, the Sea Detection Radar Data Sharing Program (SDRDSP) was upgraded to the Maritime Target Data Sharing Program (MTDSP), which integrates multiple observation modalities, including HH-polarized radar, VV-polarized radar, electro-optical devices, and Automatic Identification System (AIS) equipment, to conduct multisource observation experiments on maritime vessel targets.
The program collects various data types, including radar intermediate-frequency/video echo slice data, visible and infrared imagery, AIS static and dynamic messages, and meteorological and hydrological data, covering representative sea conditions and multiple vessel types. A comprehensive multisource observation dataset was constructed, enabling the matching and annotation of multimodal data for the same target. Moreover, an automated data management system was implemented to support data storage, conditional retrieval, and batch export, providing a solid foundation for the automated acquisition, long-term accumulation, and efficient use of maritime target characteristic data. Using this system and the measured data, the time- and frequency-domain features of the same and different vessel targets are compared under different sea states, attitudes, and polarization conditions, and statistical conclusions on the variation of target features are drawn.
13
The Back Projection (BP) algorithm is an important direction in the development of synthetic aperture radar imaging algorithms. However, its large computational load has hindered its adoption in engineering applications. Therefore, techniques for enhancing the computational efficiency of the BP algorithm have recently received widespread attention. This paper discusses fast BP algorithms based on various imaging plane coordinate systems, including the range-azimuth plane coordinate system, the ground plane coordinate system, and non-Euclidean coordinate systems. First, the principle of the original BP algorithm and the impact of different coordinate systems on accelerating it are introduced, and the development history of the BP algorithm is traced. Then, the research progress of fast BP algorithms based on different imaging plane coordinate systems is examined, focusing on recent work by the authors' research team. Finally, the engineering applications of fast BP algorithms are introduced, and research trends for fast BP imaging algorithms are discussed.
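As context for the original BP principle described above, here is a minimal back-projection sketch: for every image pixel, the range-compressed echo of each pulse is sampled at that pixel's two-way range and summed after phase compensation. The geometry, parameter values, and the single point-target echo simulation are all illustrative assumptions, not any of the fast algorithms surveyed in the paper.

```python
import numpy as np

# Illustrative toy geometry (all values assumed for the sketch)
c = 3e8
fc = 10e9                               # X-band carrier
wavelength = c / fc
dr = 0.5                                # range-bin spacing [m]
n_bins = 256
r0 = 480.0                              # range of the first bin [m]
positions = np.linspace(-50, 50, 101)   # platform along-track positions [m]

# Scene: one point target; image grid around it
target = np.array([0.0, 500.0])         # (x along-track, y ground range)
xs = np.linspace(-10, 10, 41)
ys = np.linspace(490, 510, 41)

# Simulate range-compressed echoes: one peak per pulse at the target's
# range bin, carrying the two-way propagation phase exp(-j*4*pi*R/lambda).
echoes = np.zeros((len(positions), n_bins), dtype=complex)
for n, xp in enumerate(positions):
    R = np.hypot(target[0] - xp, target[1])
    b = int(round((R - r0) / dr))
    echoes[n, b] = np.exp(-1j * 4 * np.pi * R / wavelength)

# Back projection: for every pixel, sum phase-corrected samples over pulses.
image = np.zeros((len(ys), len(xs)), dtype=complex)
for n, xp in enumerate(positions):
    for iy, y in enumerate(ys):
        R = np.hypot(xs - xp, y)        # vectorized over the x axis
        b = np.clip(np.round((R - r0) / dr).astype(int), 0, n_bins - 1)
        image[iy] += echoes[n, b] * np.exp(1j * 4 * np.pi * R / wavelength)

iy, ix = np.unravel_index(np.argmax(np.abs(image)), image.shape)
print(ys[iy], xs[ix])                   # peak lands at the target position
```

The per-pixel, per-pulse loop is exactly the O(N^3) cost that motivates the fast BP variants discussed in the abstract.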
14
In recent years, the rapid development of Multimodal Large Language Models (MLLMs) and their applications in earth observation have garnered significant attention. Earth observation MLLMs achieve deep integration of multimodal information, including optical imagery, Synthetic Aperture Radar (SAR) imagery, and textual data, by designing bridging mechanisms between large language models and vision models, combined with joint training strategies. This integration facilitates a paradigm shift in intelligent earth observation interpretation, from shallow semantic matching to higher-level understanding based on world knowledge. In this study, we systematically review the research progress in applying MLLMs to earth observation, specifically examining the development of Earth Observation MLLMs (EO-MLLMs), which provides a foundation for future research directions. First, we discuss the concept of EO-MLLMs and review their development in chronological order. Subsequently, we provide a detailed analysis and statistical summary of the proposed architectures, training methods, applications, and corresponding benchmark datasets, along with an introduction to Earth Observation Agents (EO-Agents). Finally, we summarize the research status of EO-MLLMs and discuss future research directions.
15
One remarkable trend in applying synthetic aperture radar technology is the automatic interpretation of Synthetic Aperture Radar (SAR) images. Electromagnetic scattering characteristics have a robust correlation with the target structure, which provides key support for SAR image interpretation. Therefore, how to extract accurate electromagnetic characteristics and how to use them to retrieve target characteristics have recently attracted wide attention. This study discusses the research accomplishments, summarizes the key elements and ideas of electromagnetic characteristic extraction and electromagnetic-characteristic-based target recognition, and details the extended applications of the electromagnetic scattering mechanism in imaging and recognition. Finally, future research directions for electromagnetic scattering characteristic extraction and application are proposed.
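To make the idea of electromagnetic characteristic extraction concrete, the sketch below simulates a simplified point scattering-center model (an assumed toy form; the models and extraction methods covered by the paper are far more elaborate): the target's stepped-frequency response is a coherent sum of a few scattering centers, and an inverse FFT yields a range profile whose peaks reveal the scatterers' down-range positions.

```python
import numpy as np

c = 3e8
N = 512
df = 1e9 / N                            # frequency step for a ~1 GHz band
f = 9e9 + np.arange(N) * df             # stepped-frequency sweep

# Assumed scattering centers: down-range offsets [m] and amplitudes
ranges = np.array([0.0, 1.5, 3.0])
amps = np.array([1.0, 0.6, 0.8])

# Point scattering-center model: E(f) = sum_i A_i * exp(-j*4*pi*f*r_i/c)
E = sum(a * np.exp(-1j * 4 * np.pi * f * r / c) for a, r in zip(amps, ranges))

# IFFT over frequency -> high-resolution range profile
profile = np.abs(np.fft.ifft(E))
r_bin = c / (2 * N * df)                # range-bin spacing = 0.15 m
bins = np.sort(np.argsort(profile)[-3:])
print(bins * r_bin)                     # peaks at the assumed scatterer ranges
```

Inverting this forward model (estimating the positions, amplitudes, and frequency dependence of the centers from measured data) is the extraction problem the abstract refers to.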
16
As a novel radar system, the Multiple-Input Multiple-Output (MIMO) radar with waveform diversity has demonstrated excellent performance in several aspects, including target detection, parameter estimation, radio frequency stealth, and anti-jamming. After nearly 20 years of in-depth research, the MIMO radar theory based on orthogonal waveforms has matured significantly and has been widely applied in fields such as automobile-assisted driving and safety defense. In recent years, with the introduction of the concepts of electromagnetic environment perception and knowledge aid, and the application requirements of radar active anti-jamming, radio frequency stealth, and detection-communication integration, many new theories and methods have been developed for MIMO radar in system architecture, transmit waveform design, and signal processing. This paper reviews and summarizes the research on MIMO radar published in the past 20 years, including: the principle of the orthogonal-waveform MIMO radar, its target detection performance analysis, and typical applications; waveform design and characteristics of the orthogonal-waveform MIMO radar; knowledge-aided cognitive MIMO waveform design algorithms; MIMO detection-communication integrated waveform design algorithms; MIMO radar parameter estimation; MIMO radar target detection; and MIMO radar resource management and scheduling. Finally, the paper discusses clutter suppression and Space-Time Adaptive Processing (STAP) of MIMO radar in airborne applications, the signal processing of MIMO radar in imaging, and the signal processing of chirp millimeter-wave (mmWave) MIMO radar based on time-division multi-waveform diversity.
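As a minimal illustration of the orthogonal-waveform principle above, the sketch below assumes time-division transmission (one simple way to obtain orthogonal transmit waveforms) so that M transmitters and N receivers synthesize an M*N-element virtual array; the array layout and single-target scenario are assumptions for the example, not a design from the surveyed literature.

```python
import numpy as np

# Assumed toy TDM-MIMO array (illustrative parameters, ~77 GHz band)
wavelength = 4e-3
d = wavelength / 2
M, N = 3, 4
tx = np.arange(M) * N * d               # tx spacing N*d -> filled virtual ULA
rx = np.arange(N) * d
# Virtual phase centers are all pairwise sums of tx and rx positions:
virtual = (tx[:, None] + rx[None, :]).ravel()   # 12 elements, step d

theta_true = np.deg2rad(12.0)           # target direction of arrival
snapshot = np.exp(1j * 2 * np.pi * virtual * np.sin(theta_true) / wavelength)

# Conventional beamforming over the virtual array
grid = np.deg2rad(np.linspace(-60, 60, 2401))
steer = np.exp(1j * 2 * np.pi * virtual[:, None] * np.sin(grid)[None, :] / wavelength)
spectrum = np.abs(steer.conj().T @ snapshot)
est = np.rad2deg(grid[np.argmax(spectrum)])
print(est)                              # close to the true 12 degrees
```

The point of the construction is the aperture: 3 + 4 physical elements behave like a 12-element uniform array, which is why waveform orthogonality directly buys angular resolution.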
17
As an important method of Three-Dimensional (3D) data processing, point cloud fusion technology has shown great potential and promising applications in many fields. This paper systematically reviews the basic concepts, commonly used techniques, and applications of point cloud fusion and thoroughly analyzes the current status and future development trends of various fusion methods. Additionally, the paper explores the practical applications and challenges of point cloud fusion in fields such as autonomous driving, architecture, and robotics. Special attention is given to balancing algorithmic complexity with fusion accuracy, particularly in addressing issues such as noise, data sparsity, and uneven point cloud density. This study serves as a strong reference for the future development of point cloud fusion technology by providing a comprehensive overview of existing research progress and identifying possible research directions for further improving the accuracy, robustness, and efficiency of fusion algorithms.
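A common building block behind the fusion techniques surveyed above is rigid registration of two clouds. The sketch below is a bare-bones ICP loop (brute-force nearest neighbours plus a closed-form Kabsch alignment) on synthetic data, given purely as an assumed illustration rather than any method from the paper; a real pipeline would add outlier rejection, downsampling, and a k-d tree for the neighbour search.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.uniform(-1, 1, (200, 3))             # reference cloud

# Source cloud: the same points under a small, known rigid motion
ang = np.deg2rad(5.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.03, 0.02])
source = target @ R_true.T + t_true

def icp(src, dst, iters=20):
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point of cur
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Kabsch step: closed-form rigid transform mapping cur -> matched
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_d))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        cur = (cur - mu_s) @ R.T + mu_d
    return cur

aligned = icp(source, target)
print(np.abs(aligned - target).max())             # residual after alignment
```

The trade-off the abstract highlights is visible even here: the exact nearest-neighbour search is O(n^2) per iteration, and accuracy under noise and uneven density depends on how correspondences are chosen.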
18

As one of the core components of Advanced Driver Assistance Systems (ADAS), automotive millimeter-wave radar has become a focus of scholars and manufacturers worldwide because of its advantages of all-day, all-weather operation, miniaturization, high integration, and key sensing capabilities. The core performance indicators of automotive millimeter-wave radar are range, speed, and angular resolution and field of view; accuracy, cost, real-time performance, detection performance, and volume are the key issues to be considered. Increasing performance requirements pose several challenges for the signal processing of millimeter-wave radar systems, and radar signal processing technology is crucial for meeting these more stringent requirements. Obtaining dense radar point clouds, generating accurate radar imaging results, and mitigating mutual interference among multiple radar systems are key problems and the foundation for subsequent tracking, recognition, and other applications. Therefore, this paper discusses the practical application of automotive millimeter-wave radar systems in terms of the key signal processing technologies, summarizes relevant research results, and mainly covers point cloud imaging processing, synthetic aperture radar imaging processing, and interference suppression. Finally, we summarize the state of research in China and abroad and forecast future development trends for automotive millimeter-wave radar systems, with the aim of informing readers in related fields.
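To ground the point cloud and imaging processing topics above, here is a toy range-Doppler processing chain for an FMCW radar, the usual first stage before detection and point cloud generation: a 2-D FFT over fast time (within a chirp) and slow time (across chirps) maps the beat signal into range and velocity. Every parameter and the single-target beat-signal model are assumptions for illustration.

```python
import numpy as np

c, fc = 3e8, 77e9
B, Tc = 300e6, 50e-6                  # chirp bandwidth and duration (assumed)
Ns, Nc = 256, 128                     # samples per chirp, chirps per frame
fs = Ns / Tc
slope = B / Tc

R0, v0 = 30.0, 5.0                    # target range [m] and radial speed [m/s]
t = np.arange(Ns) / fs                # fast time within a chirp
n = np.arange(Nc)                     # chirp index (slow time)

# Beat signal: range -> beat frequency slope*2R/c,
# velocity -> per-chirp phase step 2*fc*v*Tc/c (times 2*pi)
R = R0 + v0 * n[:, None] * Tc
beat = np.exp(1j * 2 * np.pi * (slope * 2 * R / c * t[None, :] + 2 * fc * R / c))

rd_map = np.abs(np.fft.fftshift(np.fft.fft2(beat), axes=0))  # shift Doppler axis
ci, ri = np.unravel_index(rd_map.argmax(), rd_map.shape)

r_est = ri * c * fs / (2 * slope * Ns)                 # range-bin scaling
v_est = (ci - Nc // 2) * c / (2 * fc * Tc * Nc)        # Doppler-bin scaling
print(r_est, v_est)                   # near 30 m and 5 m/s
```

Peaks in this map are then passed to a CFAR detector and angle estimation to form the radar point cloud; interference from other radars shows up as raised noise floors or ghost ridges in the same map, which is where the suppression methods in the paper apply.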

19
Multi-sensor multi-target tracking is a popular topic in the field of information fusion. It improves the accuracy and stability of target tracking by fusing information from multiple local sensors. According to the fusion architecture, multi-sensor multi-target tracking can be grouped into distributed fusion, centralized fusion, and hybrid fusion. Distributed fusion is widely applied in military and civilian fields owing to its strong reliability, high stability, and low requirements on network communication bandwidth. Key techniques of distributed multi-sensor multi-target tracking include multi-target tracking, sensor registration, track-to-track association, and data fusion. This paper reviews the theoretical basis and applicable conditions of these key techniques, highlights the incomplete-measurement spatial registration algorithm and the track association algorithm, and provides simulation results. Finally, the weaknesses of the key techniques of distributed multi-sensor multi-target tracking are summarized, and their future development trends are surveyed.
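Among the data fusion techniques mentioned, a standard tool for combining two local track estimates whose cross-correlation is unknown (common information circulating in a distributed network would make naive Kalman-style fusion over-confident) is Covariance Intersection. The sketch below shows the idea on an assumed 2-D toy state, with the weight chosen by a simple grid search minimizing the fused covariance trace; it is an illustration of the general technique, not an algorithm from the paper.

```python
import numpy as np

# Two local track estimates (state, covariance) -- toy values
x1 = np.array([10.2, 1.1]); P1 = np.diag([1.0, 0.5])
x2 = np.array([ 9.8, 0.9]); P2 = np.diag([0.4, 2.0])

def ci_fuse(x1, P1, x2, P2):
    """Covariance Intersection: convex combination in information space,
    P^-1 = w*P1^-1 + (1-w)*P2^-1, consistent for any unknown correlation."""
    best = None
    for w in np.linspace(0.01, 0.99, 99):
        Pinv = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(Pinv)
        x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), x, P)
    return best[1], best[2]

x, P = ci_fuse(x1, P1, x2, P2)
print(x)                # fused state lies between the two local estimates
```

The price of robustness is conservatism: the CI covariance is never smaller than what an optimal fusion with known correlations would give, which is one of the trade-offs distributed architectures accept for bandwidth and reliability.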
20

Flying birds and Unmanned Aerial Vehicles (UAVs) are typical “low, slow, and small” targets with low observability. Effective monitoring and identification of these two targets has become an urgent problem that must be solved to ensure the safety of air routes and urban areas. Flying birds and UAVs come in many types and are characterized by low flying heights, strong maneuverability, small radar cross-sections, and complicated detection environments, which pose great challenges for target detection worldwide. “Visible (high detection ability) and clear-cut (high recognition probability)” methods and technologies must be developed to finely describe and recognize UAVs, flying birds, and other “low-slow-small” targets. This paper reviews recent progress in detection and recognition technologies for rotor UAVs and flying birds in complex scenes and discusses effective detection and recognition methods, including echo modeling and recognition of micro-motion characteristics, the enhancement and extraction of maneuvering features in the ubiquitous observation mode, distributed multi-view feature fusion, differences in motion trajectories, and intelligent classification via deep learning. Lastly, the problems of existing research approaches are summarized, and the future development prospects of target detection and recognition technologies for flying birds and UAVs in complex scenarios are considered.
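The echo modeling of micro-motion characteristics mentioned above can be illustrated with a toy signal model: a rotor adds a sinusoidal phase modulation on top of the body Doppler, producing Bessel-line sidebands spaced at the rotation rate, whereas a bird's wing beat gives a similar but much slower modulation. All parameter values below are assumptions for the sketch.

```python
import numpy as np

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)        # 1 s of samples
f_body = 500.0                       # body Doppler shift [Hz]
f_rot = 60.0                         # rotor rotation rate [Hz]
m = 200.0                            # peak micro-Doppler excursion [Hz]

# Sinusoidal phase modulation: instantaneous frequency
# f_body + m*cos(2*pi*f_rot*t)
sig = np.exp(1j * (2 * np.pi * f_body * t
                   + (m / f_rot) * np.sin(2 * np.pi * f_rot * t)))

spec = np.abs(np.fft.fft(sig)) / len(sig)
freqs = np.fft.fftfreq(len(sig), 1 / fs)

# Sinusoidal FM -> Bessel-line spectrum: the strongest lines sit at
# f_body + k*f_rot, i.e. sidebands spaced by the rotation rate -- the
# signature exploited to separate rotor UAVs from birds.
lines = np.sort(freqs[np.argsort(spec)[-7:]])
print(lines)
```

In practice a short-time (spectrogram) analysis is used instead of one long FFT, so that the sideband spacing and its time variation can be fed to a classifier.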
