Most Downloaded
1
2026, 15(1): 1-25.
Modern three-Dimensional (3D) Synthetic Aperture Radar (SAR) imaging systems place increasing demands on imaging accuracy, efficiency, and robustness, and the performance of traditional 3D imaging methods, such as matched filtering and compressed sensing, has become limited in these respects. In recent years, the rapid development of Deep Learning (DL) technology has provided new theoretical solutions for SAR 3D imaging by enabling the integration of neural networks with physical radar imaging models, leading to the emergence of a learning-based imaging paradigm that combines data-driven and model-driven approaches. This paper systematically reviews recent research progress in DL-based SAR 3D imaging. Focusing on two core issues, namely super-resolution imaging and enhanced imaging, it discusses current research advances and hotspots in SAR 3D imaging, including super-resolution 3D imaging methods based on feedforward neural networks and deep unfolding networks, as well as 3D enhancement techniques such as multichannel data preprocessing and point cloud post-processing. The paper also summarizes publicly available datasets for SAR 3D imaging. In addition, it examines current research challenges in DL-based SAR 3D imaging, including highly generalizable and high-precision DL super-resolution 3D imaging, DL-based elevation ambiguity resolution, the integrated study of DL SAR 3D imaging and image enhancement, and the construction of DL SAR 3D imaging datasets. Finally, the paper provides an outlook on future development trends, aiming to offer research references and technical guidance for scholars in related fields.
2
2026, 15(1): 42-63.
Non-Line-Of-Sight (NLOS) millimeter wave radar 3D imaging leverages electromagnetic wave propagation characteristics such as reflection, diffraction, scattering, and penetration to detect, locate, and image hidden targets in occluded environments. It holds significant potential for applications in autonomous driving, disaster rescue, and urban warfare. However, uncertainties introduced by reflection surfaces and occluding objects in practical NLOS scenarios, such as phase errors, aperture shadowing, and multipath effects, lead to blurred images and increased artifacts. To address these challenges, this study proposes a 3D imaging method for NLOS millimeter wave radar based on Range Migration (RM) operator learning, leveraging the adaptive optimization properties of deep unfolding networks and prior environmental perception. First, a 3D imaging model for NLOS millimeter wave radar in Looking Around Corner (LAC) scenarios is established, and an RM kernel operator is introduced to enhance imaging efficiency and reduce computational complexity. Second, a high-precision NLOS 3D imaging network is constructed based on the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA) framework; by exploiting features specific to NLOS scenes and designing the algorithm parameters as functions of network weights, the method achieves high-precision, high-efficiency 3D reconstruction of NLOS targets. Finally, a near-field NLOS millimeter wave radar imaging platform is developed, and experimental validations are performed on targets, including the metal letters “O” and “S”, an Eiffel Tower model, and an artificial satellite model, under both ideal and non-ideal reflection surface conditions. The results demonstrate that the proposed method significantly improves 3D imaging precision while achieving a two-orders-of-magnitude increase in computational speed over traditional sparse imaging algorithms.
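The FISTA framework that such deep unfolding networks are built on can be sketched in plain form. The NumPy snippet below is a generic sparse-recovery FISTA, not the paper's RM-operator network; the matrix `A`, the regularization weight `lam`, and all problem sizes are illustrative assumptions. In an unfolded network, each loop iteration becomes one layer, and `lam` and the step size become trainable weights.

```python
import numpy as np

def fista(A, b, lam, n_iter=500):
    """Plain FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    In a deep unfolding network, each iteration becomes one layer and
    lam and the step size become trainable parameters."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        z = y - A.T @ (A @ y - b) / L          # gradient step on the data term
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Illustrative sparse-recovery run (sizes and sparsity pattern are assumptions)
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]
x_hat = fista(A, A @ x_true, lam=0.05)
```

The soft-threshold step is what enforces sparsity; learning `lam` per layer is what distinguishes the unfolded network from this fixed-parameter baseline.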
3
2025, 14(2): 439-455.
With the rapid development of electronic technology, the electromagnetic environment is becoming increasingly complex. For instance, adaptive beamforming cannot suppress main-lobe jammers in traditional phased array radars; developing countermeasures for this common problem is therefore an urgent need in radar technology. This study addresses main-lobe deceptive jammer suppression using space-time multidimensional coding. The first step is to design a three-dimensional phase coding scheme applicable across transmit channels, pulses, and subpulses. A Doppler division multiple access technique is employed at the receiver to separate the transmit signals. To solve the problem of waveform misalignment caused by high-speed moving targets, a novel approach is proposed to estimate the compensation index from differences in beamforming energy. Subsequently, a dual-phase compensation method that leverages the phase differences between the main-lobe deceptive jammers and the target is proposed; this method can distinguish the true target, pulse-delayed jammers, and rapidly generated jammers in the transmit spatial frequency domain. Moreover, spatial filtering is applied to suppress all the main-lobe deceptive jammers by designing an appropriate transmit-receive weight vector. Additionally, an optimization problem that maximizes the output Signal-to-Interference-plus-Noise Ratio (SINR) is formulated to address the performance degradation caused by Direction of Arrival (DOA) errors; an alternating optimization method iteratively obtains the optimized weight vector and the transmit and receive coding coefficients to improve the SINR. Simulation results demonstrate that the proposed method suppresses the main-lobe deceptive jammers more effectively than other radar frameworks.
Specifically, compared to the conventional multiple-input multiple-output radar, the proposed method achieves an SINR improvement of 34 dB in the presence of four main-lobe deceptive jammers.
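In its simplest narrowband, receive-only form, output-SINR maximization reduces to the classic weight `w ∝ R⁻¹s`, where `R` is the interference-plus-noise covariance and `s` the target steering vector. The sketch below illustrates only that generic principle, not the paper's joint transmit-receive space-time coding design; the array size, angles, and jammer power are assumed for illustration.

```python
import numpy as np

def steer(n, theta):
    """Steering vector of an n-element half-wavelength ULA (theta in radians)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

n = 16
s = steer(n, np.deg2rad(0.0))                  # target at broadside
j = steer(n, np.deg2rad(20.0))                 # jammer direction (assumed)
R = np.eye(n) + 1e4 * np.outer(j, j.conj())    # interference-plus-noise covariance

w_opt = np.linalg.solve(R, s)                  # max-SINR weight: w ∝ R^{-1} s

def sinr(w):
    """Output SINR = |w^H s|^2 / (w^H R w)."""
    return np.abs(np.vdot(w, s)) ** 2 / np.real(np.vdot(w, R @ w))

gain_db = 10 * np.log10(sinr(w_opt) / sinr(np.ones(n)))   # gain over uniform weights
```

For this toy sidelobe jammer, the optimal weight yields roughly a 30 dB SINR gain over uniform weighting; main-lobe jammers defeat this simple spatial filter, which is the gap the paper's transmit-side coding is designed to close.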
4
2025, 14(6): 1343-1357.
China has one of the longest land borders in the world, spanning diverse terrain types and a dense electromagnetic environment. In practical applications, airborne radar therefore faces complex environments. Its target detection performance deteriorates severely in regions with complex terrain and electromagnetic conditions, limiting its ability to meet military operational requirements. Cognitive Space-Time Adaptive Processing (STAP) is an effective technical approach to this problem. In this study, a cognitive STAP architecture is proposed, and based on this architecture, the database, algorithm library, cognitive STAP technology, and feedback control are introduced. Analysis of simulated data reveals that, compared with traditional STAP, cognitive STAP can significantly enhance the moving-target detection performance of airborne radar in complex environments.
5
Direction of Arrival (DOA) estimation for low-elevation angle targets is a critical challenge in meter-wave and holographic staring radar systems, as its accuracy directly affects target height measurement performance. Traditional beamspace methods reduce computational complexity by projecting high-dimensional element-space data onto a low-dimensional beamspace using a beamformer. However, this lossy mapping leads to partial information loss, resulting in degraded elevation-angle estimation accuracy compared to that of element-space methods. To address this issue, this study proposes a high-accuracy beamspace DOA estimation method for low-elevation angle targets. First, the Cramér-Rao Bound (CRB) for both element-space and beamspace DOA estimation is derived, and the conditions under which these bounds are equal are analyzed. Since these conditions are difficult to satisfy in practical scenarios, an approximate-condition-based beamformer design strategy is developed to reduce data dimensionality while preserving effective target information. Finally, precise elevation-angle estimation is achieved using the maximum likelihood criterion. Simulation and experimental results show that the proposed method significantly reduces data dimensionality while maintaining estimation accuracy comparable to that of element-space methods at low-elevation angles, clearly outperforming existing beamspace algorithms.
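The beamspace pipeline described above, which projects element-space data through a beamformer `T` and then estimates the angle by a maximum-likelihood grid search, can be illustrated with a toy single-source, noise-free example. The beam centers and search grid below are illustrative assumptions, not the paper's approximate-condition-based beamformer design.

```python
import numpy as np

def steering(n, theta):
    """Steering vector of an n-element half-wavelength ULA (theta in radians)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

n_elem, n_beam = 32, 5
theta_true = 0.05                                   # low elevation angle (rad)
grid = np.linspace(-0.2, 0.2, 401)                  # search grid (rad)

# Beamformer: a few beams steered near broadside (a stand-in for the
# paper's approximate-condition-based design)
centers = np.linspace(-0.1, 0.1, n_beam)
T = np.stack([steering(n_elem, c) for c in centers], axis=1) / np.sqrt(n_elem)

y = T.conj().T @ steering(n_elem, theta_true)       # 32-dim data -> 5-dim beamspace

# Single-source ML grid search in beamspace: maximize the normalized correlation
def score(th):
    u = T.conj().T @ steering(n_elem, th)
    return np.abs(np.vdot(u, y)) / np.linalg.norm(u)

theta_hat = grid[int(np.argmax([score(th) for th in grid]))]
```

The search runs over 5-dimensional beamspace vectors instead of 32-dimensional element-space snapshots; the paper's contribution is designing `T` so this dimensionality reduction costs essentially no estimation accuracy.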
6
2026, 15(1): 276-291.
Detecting targets in sea clutter is crucial in military and civilian applications. In complex marine environments, sea clutter exhibits target-like spikes and inherently broad-spectrum characteristics, posing a significant challenge for marine radars in detecting Low-Slow-Small (LSS) targets and leading to high false alarm rates. In this study, an S-band holographic staring radar with high-Doppler and high-range-resolution capabilities (i.e., “dual-high” capability) was utilized in sea detection experiments. We obtained sea clutter data, LSS target data (over the sea surface and in the air), ground truth data on target positions and trajectories, as well as wind and wave data. Using these data, we constructed an S-band holographic staring radar dataset for low-observable targets at sea. The time-domain, frequency-domain, and time-Doppler characteristics of the dataset were analyzed, and the results serve as a reference for data utilization. Future work will involve continuing experiments to expand the maritime experimental environment (e.g., sea state and region) and target types, thereby enhancing data diversity. This open dataset will support the development of new radar systems for detecting low-observable targets at sea and help improve maritime target detection and recognition performance.
7
2026, 15(1): 361-386.
In recent years, the rapid development of Multimodal Large Language Models (MLLMs) and their applications in earth observation have garnered significant attention. Earth observation MLLMs achieve deep integration of multimodal information, including optical imagery, Synthetic Aperture Radar (SAR) imagery, and textual data, through the design of bridging mechanisms between large language models and vision models, combined with joint training strategies. This integration facilitates a paradigm shift in intelligent earth observation interpretation—from shallow semantic matching to higher-level understanding based on world knowledge. In this study, we systematically review the research progress in the applications of MLLMs in earth observation, specifically examining the development of Earth Observation MLLMs (EO-MLLMs), which provides a foundation for future research directions. First, we discuss the concept of EO-MLLMs and review their development in chronological order. Subsequently, we provide a detailed analysis and statistical summary of the proposed architectures, training methods, applications, and corresponding benchmark datasets, along with an introduction to Earth Observation Agents (EO-Agents). Finally, we summarize the research status of EO-MLLMs and discuss future research directions.
8
2026, 15(1): 307-330.
In recent years, with the increasing diversification of mission requirements, radar imaging has expanded from conventional side-looking and squint-looking modes to the forward-looking mode. In this regard, the monopulse imaging method offers several advantages, including its forward-looking imaging capability, real-time processing ability, and effective anti-jamming performance. These features can help efficiently overcome the problems faced by conventional imaging methods, i.e., low azimuth resolution and Doppler ambiguity in the forward-looking region. Hence, this method has emerged as a key solution to these challenges. This study first explores the distinction between monopulse tracking and monopulse imaging, followed by a systematic review of the existing technical approaches and evaluation metrics for monopulse imaging. Subsequently, the performance of different methods is analyzed, and specific applications of monopulse imaging technology in various scenarios are introduced, including three-dimensional imaging, moving target localization and imaging, and multi-view image fusion. The paper ends with a discussion of the development trends of monopulse imaging technology and an analysis of future research directions, such as imaging quality improvement and the expansion of the application scope.
9
2023, 12(5): 971-985.
Single-snapshot forward-looking imaging technology with high performance and resolution is crucial for the development of automotive radars. However, range migration can limit the implementation of coherent integration methods, and improving system resolution is generally difficult owing to hardware parameter limitations. Based on the Time-Division Multiplexing Multiple-Input-Multiple-Output (TDM-MIMO) forward-looking imaging systems of automotive millimeter wave radar, this paper proposes Doppler domain compensation and point-to-point echo correction measures for achieving multidomain signal decoupling. However, the accuracy of traditional single-dimension range and angle imaging is limited by the finite number of array elements and significant noise interference. Therefore, this paper proposes a multidomain joint estimation algorithm based on the Improved Bayesian Matching Pursuit (IBMP) method. The Bayesian method is based on the Bernoulli-Gaussian (BG) model; the estimated parameters and support domain are iteratively updated under the Maximum A Posteriori (MAP) criterion to achieve high-precision reconstruction of multidimensional joint signals. Simulation and measured results demonstrate that the proposed method can effectively solve the range migration problem and improve the angular resolution of radar forward-looking imaging while exhibiting excellent noise robustness.
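Bayesian matching pursuit methods build on the greedy matching-pursuit idea: repeatedly pick the dictionary atom most correlated with the residual, then re-fit the selected support by least squares. The sketch below shows plain Orthogonal Matching Pursuit (OMP), a simpler non-Bayesian relative of the paper's IBMP; the dictionary, sizes, and coefficients are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Plain Orthogonal Matching Pursuit: greedily pick the atom most
    correlated with the residual, then re-fit the support by least squares."""
    support, r = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))  # most correlated atom
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)    # re-fit selected atoms
        r = y - As @ coef                                # residual after re-fit
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Illustrative noise-free run (sizes and coefficients are assumptions)
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 80))
x_true = np.zeros(80)
x_true[[7, 30, 65]] = [3.0, -2.5, 2.0]
x_hat = omp(A, A @ x_true, k=3)
```

IBMP replaces the greedy correlation rule with a BG prior and MAP-scored support updates, which is what buys the noise robustness reported in the abstract; the overall pick-then-refit loop has the same shape.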
10
2026, 15(1): 26-41.
Synthetic Aperture Radar (SAR) is an active microwave sensing technology capable of all-weather, day-and-night operation, making it a critical source of Earth observation data. However, conventional two-dimensional SAR imagery often suffers from echo overlap, limiting its effectiveness for target recognition. Although three-Dimensional (3D) SAR imaging using multibaseline observations can mitigate target occlusion, single-pass airborne or spaceborne SAR systems are typically constrained by system complexity, resulting in sparse track sampling that is inadequate for conventional 3D imaging algorithms. To address this limitation, a novel microwave vision-based 3D imaging framework has recently been proposed, in which visual semantic information is extracted and fused to enhance imaging performance. However, the characterization and application of geometric continuity in SAR imagery remain largely unexplored. This study characterizes the geometric continuity properties of typical urban buildings in the SAR 3D imaging domain and proposes sparse-track 3D imaging methods constrained by both implicit and explicit geometric continuity. Experimental results obtained from measured airborne-array InSAR data demonstrate that incorporating geometric continuity constraints effectively enhances 3D imaging performance under sparse sampling conditions. These findings indicate that geometric continuity-based representations provide a practical and effective pathway toward realizing microwave-vision 3D SAR imaging.
11
2025, 14(1): 73-90.
Millimeter-wave radar is increasingly being adopted for smart home systems, elder care, and surveillance monitoring, owing to its adaptability to environmental conditions, high resolution, and privacy-preserving capabilities. A key factor in effectively utilizing millimeter-wave radar is the analysis of point clouds, which are essential for recognizing human postures. However, the sparse nature of these point clouds poses significant challenges for accurate and efficient human action recognition. To overcome these issues, we present a 3D point cloud dataset of human actions captured using millimeter-wave radar (mmWave-3DPCHM-1.0), together with advanced data processing techniques and human action recognition models. Data collection is conducted using a Texas Instruments (TI) IWR1443-ISK and a Vayyar vBlu radio imaging module, covering 12 common human actions, including walking, waving, standing, and falling. At the core of our approach is the Point EdgeConv and Transformer (PETer) network, which integrates edge convolution with transformer models. For each 3D point cloud frame, PETer constructs a locally directed neighborhood graph through edge convolution to extract spatial geometric features effectively. The network then leverages a series of Transformer encoders to uncover temporal relationships across multiple point cloud frames. Extensive experiments reveal that PETer achieves recognition rates of 98.77% on the TI dataset and 99.51% on the Vayyar dataset, outperforming the best traditional baseline model by approximately 5%. With a compact model size of only 1.09 MB, PETer is well suited for deployment on edge devices, providing an efficient solution for real-time human action recognition in resource-constrained environments.
12
2026, 15(1): 215-237.
Spaceborne Interferometric Synthetic Aperture Radar (InSAR) enables surface elevation measurement and deformation monitoring by measuring phase differences along the radar line of sight. However, meeting the future demand for higher-precision measurements remains challenging: analytical models linking InSAR system design parameters to measurement accuracy are still limited by incomplete key parameters and insufficient or unclear physical constraints. These limitations restrict the development of next-generation InSAR technology. This study examines the complex multifactor coupling between system design parameters and measurement accuracy. It provides a detailed analysis of the imaging mechanism and theoretical constraints of spaceborne InSAR with spatial and temporal baselines and presents a spatiotemporal error model integrating multisource decorrelation. The nonlinear relationship between baseline parameters and measurement accuracy is quantitatively characterized, and a comprehensive evaluation framework is established based on key indicators such as coherence, elevation accuracy, and coherent temporal baseline-based deformation sensitivity. Building on these analyses, the concept and system architecture of very-large-baseline spaceborne InSAR are proposed, and its performance is analyzed in detail. The associated technical challenges, including orbit configuration, system design, synchronization, error correction, and phase unwrapping, are systematically discussed. Potential applications of this system architecture in high-precision elevation measurement, deformation monitoring, and distributed SAR systems are introduced. The proposed framework provides theoretical support for the design of next-generation high-precision, multidimensional InSAR systems and is expected to play a key role in frontier Earth science exploration and the safety assurance of major national engineering projects.
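As a rough illustration of how coherence and baseline trade off against elevation accuracy, the snippet below evaluates the standard single-baseline textbook relations (height of ambiguity, a Cramér-Rao-type phase standard deviation, and the resulting elevation error). These are generic relations, not the paper's spatiotemporal error model, and all parameter values are illustrative.

```python
import numpy as np

# Illustrative X-band repeat-pass geometry (not from the paper)
wavelength = 0.031         # m
slant_range = 600e3        # m
incidence = np.deg2rad(35)
b_perp = 2000.0            # m, perpendicular (spatial) baseline
coherence = 0.8            # interferometric coherence
looks = 10                 # independent averaged looks

# Height of ambiguity: elevation change producing one 2*pi phase cycle
h_amb = wavelength * slant_range * np.sin(incidence) / (2.0 * b_perp)

# Cramer-Rao-type bound on the interferometric phase standard deviation
sigma_phi = np.sqrt((1.0 - coherence**2) / (2.0 * looks * coherence**2))

# Phase error mapped to elevation error
sigma_h = h_amb / (2.0 * np.pi) * sigma_phi
print(f"height of ambiguity: {h_amb:.2f} m, elevation std: {sigma_h:.3f} m")
```

The same relations show the coupling the paper analyzes: a longer baseline shrinks the height of ambiguity (better sensitivity), while lower coherence inflates the phase error, so accuracy depends nonlinearly on both.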
13
2025, 14(5): 1115-1141.
The bistatic Synthetic Aperture Radar (SAR) system, which employs spatially separated transmitting and receiving platforms, provides high-resolution imaging of terrestrial and maritime scenes and targets in complex environments. Its advantages include flexible configuration, strong concealment capabilities, high interference resistance, and comprehensive target information acquisition, making it valuable in high-precision remote sensing mapping, covert imaging, and precision strikes. Image processing is critical for obtaining high-resolution Bistatic SAR (BiSAR) images. However, the echo model and characteristics of BiSAR substantially differ from those of traditional monostatic SAR, necessitating specialized image processing methods tailored to various operational modes and configurations. This study examines key challenges and solutions for several BiSAR configurations, including airborne BiSAR, BiSAR with high-speed and highly maneuverable platforms, spaceborne heterogeneous BiSAR, and spaceborne homogeneous BiSAR. This study also addresses motion compensation approaches and moving target imaging in BiSAR systems, reviews relevant domestic and international research advancements, and provides an outlook on future trends in BiSAR image processing.
14
2025, 14(5): 1276-1293.
This study addresses the issue of fine-grained feature extraction and classification for Low-Slow-Small (LSS) targets, such as birds and drones, by proposing a multi-band multi-angle feature fusion classification method. First, data from five types of rotorcraft drones and bird models were collected at multiple angles using K-band and L-band frequency-modulated continuous-wave radars, forming a dataset for LSS target detection. Second, to capture the periodic vibration characteristics of the L-band target signals, empirical mode decomposition was applied to extract high-frequency features and reduce noise interference. For the K-band echo signals, short-time Fourier transform was applied to obtain high-resolution micro-Doppler features from various angles. Based on these features, a Multi-band Multi-angle Feature Fusion Network (MMFFNet) was designed, incorporating an improved convolutional long short-term memory network for temporal feature extraction, along with an attention fusion module and a multiscale feature fusion module. The proposed architecture improves target classification accuracy by integrating features from both bands and angles. Validation using a real-world dataset showed that compared with methods relying on single radar features, the proposed approach improved the classification accuracy for seven types of LSS targets by 3.1% under a high Signal-to-Noise Ratio (SNR) of 5 dB and by 12.3% under a low SNR of −3 dB.
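The micro-Doppler extraction step (a short-time Fourier transform of the radar echo) can be sketched on a simulated rotor-blade return as follows; the sampling rate, rotation rate, and modulation depth are illustrative assumptions, not the parameters of the K-band radar used in the study.

```python
import numpy as np

# Simulated echo with sinusoidal micro-Doppler modulation (rotor-blade-like)
fs = 2000.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
rotor_hz = 10.0                  # blade rotation rate, Hz
mod_depth = 200.0                # peak micro-Doppler shift, Hz
phase = (mod_depth / rotor_hz) * np.sin(2 * np.pi * rotor_hz * t)
echo = np.exp(1j * 2 * np.pi * phase)

def stft_mag(x, win=128, hop=32):
    """Magnitude spectrogram via sliding Hann-windowed FFTs."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1))

spec = stft_mag(echo)            # (time frames, Doppler bins)
print(spec.shape)                # (59, 128)
```

The spectrogram traces a sinusoidal micro-Doppler signature whose period reflects the rotation rate; it is this kind of time-frequency map that feeds the fusion network.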
15
2025, 14(4): 1019-1045.
Integrated Sensing And Communications (ISAC), a key technology for 6G networks, has attracted extensive attention from both academia and industry. Leveraging the widespread deployment of communication infrastructures, the integration of sensing functions into communication systems to achieve ISAC networks has emerged as a research focus. To this end, the signal design for communication-centric ISAC systems should be addressed first. Two main technical routes are considered for communication-centric signal design: (1) pilot-based sensing signal design and (2) data-based ISAC signal design. This paper provides an in-depth and systematic overview of signal design for the aforementioned technical routes. First, a comprehensive review of the existing literature on pilot-based signal design for sensing is presented. Then, the data-based ISAC signal design is analyzed. Finally, future research topics on the ISAC signal design are proposed.
16
2023, 12(2): 456-469.
Marine target detection and recognition depend on the characteristics of marine targets and sea clutter. Therefore, understanding the essential features of marine targets based on the measured data is crucial for advancing target detection and recognition technology. To address the issue of insufficient data on the scattering characteristics of marine targets, the Sea-Detecting Radar Data-Sharing Program (SDRDSP) was upgraded to obtain data on marine targets and their environment under different polarizations and sea states. This upgrade expanded the physical dimension of radar target observation and improved radar and auxiliary data acquisition capabilities. Furthermore, a dual-polarized multistate scattering characteristic dataset of marine targets was constructed, and the statistical distribution characteristics, time and space correlation, and Doppler spectrum were analyzed, supporting the data usage. In the future, the types and quantities of maritime targets will continue to accumulate, providing data support for improving marine target detection and recognition performance and intelligence.
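As an example of the kind of per-range-cell analysis such a dataset supports, the sketch below estimates the Doppler spectrum of a clutter time series by averaged windowed periodograms (Welch-style). The AR(1)-type simulated clutter and all parameters are illustrative, not drawn from the SDRDSP data.

```python
import numpy as np

rng = np.random.default_rng(0)
prf, n = 1000.0, 4096                      # pulse repetition frequency (Hz), pulse count
rho = 0.95                                 # pulse-to-pulse correlation coefficient
z = np.empty(n, dtype=complex)
z[0] = rng.standard_normal() + 1j * rng.standard_normal()
for i in range(1, n):                      # correlated (narrowband) simulated clutter
    z[i] = rho * z[i - 1] + np.sqrt(1 - rho**2) * (
        rng.standard_normal() + 1j * rng.standard_normal())

seg = 256                                  # segment length for periodogram averaging
segments = z[: n - n % seg].reshape(-1, seg) * np.hanning(seg)
psd = np.mean(np.abs(np.fft.fftshift(np.fft.fft(segments, axis=1), axes=1)) ** 2, axis=0)
freqs = np.fft.fftshift(np.fft.fftfreq(seg, d=1 / prf))
print(f"spectral peak at {freqs[np.argmax(psd)]:.1f} Hz")  # near 0 Hz for this clutter
```

On measured data, the same estimate applied per polarization and sea state yields the Doppler spectra whose statistics the dataset documents.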
17
2023, 12(4): 906-922.
This study proposes a Synthetic Aperture Radar (SAR) aircraft detection and recognition method combined with scattering perception to address the problem of target discreteness and false alarms caused by strong background interference in SAR images. The global information is enhanced through a context-guided feature pyramid module, which suppresses strong disturbances in complex images and improves the accuracy of detection and recognition. Additionally, scatter key points are used to locate targets, and a scatter-aware detection module is designed to realize fine correction of the regression boxes and improve target localization accuracy. This study also generates and presents the high-resolution SAR-AIRcraft-1.0 dataset to verify the effectiveness of the proposed method and to promote research on SAR aircraft detection and recognition. The images in this dataset are obtained from the Gaofen-3 satellite; the dataset contains 4,368 images and 16,463 aircraft instances, covering seven aircraft categories, namely A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and an “other” class. We apply the proposed method and common deep learning algorithms to the constructed dataset. The experimental results demonstrate the effectiveness of our method combined with scattering perception. Furthermore, we establish benchmarks for the performance indicators of the dataset in different tasks, such as SAR aircraft detection, recognition, and integrated detection and recognition.
18
2025, 14(4): 1071-1091.
Joint radar communication leverages resource-sharing mechanisms to improve system spectrum utilization and achieve lightweight design. It has wide applications in air traffic control, healthcare monitoring, and autonomous vehicles. Traditional joint radar communication algorithms often rely on precise mathematical modeling and channel estimation and cannot adapt to dynamic and complex environments that are difficult to describe. Artificial Intelligence (AI), with its powerful learning ability, automatically learns features from large amounts of data without the need for explicit modeling, thereby promoting the deep fusion of radar communication. This article provides a systematic review of the research on AI-driven joint radar communication. Specifically, the model and challenges of the joint radar communication system are first elaborated. On this basis, the latest research progress on AI-driven joint radar communication is summarized from two aspects: radar communication coexistence and dual-functional radar communication. Finally, the article is summarized, and the potential technical challenges and future research directions in this field are described.
19
2025, 14(3): 528-547.
As an important method of Three-Dimensional (3D) data processing, point cloud fusion technology has shown great potential and promising applications in many fields. This paper systematically reviews the basic concepts, commonly used techniques, and applications of point cloud fusion and thoroughly analyzes the current status and future development trends of various fusion methods. Additionally, the paper explores the practical applications and challenges of point cloud fusion in fields such as autonomous driving, architecture, and robotics. Special attention is given to balancing algorithmic complexity with fusion accuracy, particularly in addressing issues such as noise, data sparsity, and uneven point cloud density. By providing a comprehensive overview of existing research progress and identifying possible directions for further improving the accuracy, robustness, and efficiency of fusion algorithms, this study serves as a useful reference for the future development of point cloud fusion technology.
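A minimal fusion sketch under simplifying assumptions: two scans related by a known rigid translation are aligned, merged, and voxel-downsampled to even out point density (one of the density issues noted above). Real pipelines estimate the transform (e.g., with ICP) rather than assuming it; the voxel size and transform here are illustrative.

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Keep one centroid per occupied voxel to equalize point density."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = np.asarray(inv).ravel()
    counts = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, points.shape[1]))
    for d in range(points.shape[1]):       # per-voxel centroid, dimension by dimension
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

rng = np.random.default_rng(0)
shift = np.array([0.1, -0.2, 0.05])        # assumed-known rigid translation
cloud_a = rng.uniform(0, 10, size=(500, 3))
cloud_b = cloud_a + shift                  # second scan of the same scene
fused = voxel_downsample(np.vstack([cloud_b - shift, cloud_a]))
print(fused.shape[0] <= 1000)              # True: downsampling never adds points
```

The registration step (here a known shift) is where fusion accuracy is won or lost; the downsampling step then trades point density for uniformity and efficiency.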
20
2026, 15(1): 107-119.
Synthetic Aperture Radar Tomography (TomoSAR), by virtue of its three-dimensional (3D) resolution capability, can be used to study the 3D structure of semitransparent targets, such as forests, icebergs, and snowpacks. Currently, TomoSAR measurements, especially spaceborne TomoSAR, are mostly obtained through repeat-pass observations, which introduce two major problems: temporal decorrelation and signal delay caused by the troposphere or ionosphere. Severe temporal decorrelation and signal delay can lead to defocused tomograms, which make it impossible to reconstruct the 3D structure of a target. Unlike repeat-pass TomoSAR systems, multi-static TomoSAR systems simultaneously collect multibaseline images, reducing temporal decorrelation to zero and canceling all types of signal delay, making them an ideal tool for 3D TomoSAR reconstruction. The Hongtu-1 constellation, launched in 2023 and operated by PIESAT Information Technology Limited, is the world’s first spaceborne multi-static SAR system. In this paper, we conduct spaceborne multi-static TomoSAR processing and forest height estimation experiments using Hongtu-1 multi-static images. By comparing tomograms from tropical and temperate forests, we find that the X-band signal from Hongtu-1 cannot reach the ground in dense tropical forests but can in temperate forests, where tree and leaf density is much lower. This indicates that Hongtu-1 is capable of forest height measurement in temperate forests. By comparing forest height inversion in temperate forests obtained from Hongtu-1 TomoSAR and GEDI LiDAR, we find that, at the test sites considered in this paper, Hongtu-1 TomoSAR measurements provide more accurate forest height inversion (a 35% improvement), more measurement points, and higher-resolution products than GEDI, which further demonstrates the capability and superiority of Hongtu-1 TomoSAR in forest height estimation.
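The tomographic focusing idea can be sketched as matched-filter beamforming over a multibaseline stack: each perpendicular baseline contributes a height wavenumber, and steering over a height grid recovers the vertical backscatter profile. The geometry, baselines, and two-scatterer scene (ground plus canopy) below are illustrative, not Hongtu-1 parameters.

```python
import numpy as np

wavelength = 0.031               # m, X-band (illustrative)
slant_range = 600e3              # m
baselines = np.linspace(-500, 500, 11)     # perpendicular baselines, m
heights = np.array([0.0, 25.0])            # two scatterers: ground and canopy

# Height wavenumber per baseline: kz = 4*pi*b_perp / (lambda * r)
# (incidence-angle factor omitted for simplicity)
kz = 4 * np.pi * baselines / (wavelength * slant_range)
# Simulated multibaseline measurements g_n = sum_k exp(j * kz_n * s_k)
g = np.exp(1j * np.outer(kz, heights)).sum(axis=1)

# Beamforming over a height grid: P(s) = |a(s)^H g|^2
grid = np.arange(-10.0, 61.0, 1.0)
steering = np.exp(1j * np.outer(kz, grid))
profile = np.abs(steering.conj().T @ g) ** 2
peaks = grid[np.argsort(profile)[-2:]]
print(sorted(float(p) for p in peaks))     # two strongest responses, near 0 m and 25 m
```

With the single-pass multi-static geometry, all channels see the same scene state, so this inversion is free of the temporal decorrelation that defocuses repeat-pass tomograms.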