2022 Vol. 11, No. 3

Synthetic Aperture Radar
Video Synthetic Aperture Radar (SAR) shows great potential for ground moving target detection and tracking through high-frame-rate, high-resolution imaging. Target Doppler energy is essential for traditional SAR Ground Moving Target Indication (SAR-GMTI), while the target shadow can also be used for detection in video SAR. However, neither detection method alone can achieve robust detection in video SAR, owing to the distortion or smearing of the target energy and its shadow. This paper presents processing results on real airborne video SAR data using the Faster Region-based Convolutional Neural Network (Faster R-CNN) and the traditional track association based on dual-domain joint detection proposed in the literature. These two approaches successfully exploit the feature and space-time information of target Doppler energy and shadow in the detection of a maneuvering target.
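As an illustrative aside, the following minimal sketch shows how a generic off-the-shelf Faster R-CNN (here torchvision's implementation with a hypothetical two-class setup, not the authors' dual-domain network) could be run frame by frame on video SAR imagery to produce candidate detections:

```python
# Minimal sketch (assumption: torchvision's generic Faster R-CNN, not the
# authors' dual-domain network): run a two-class detector on each video SAR
# frame and keep boxes above a confidence threshold.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.eval()                                  # background + moving-target signature

frames = [torch.rand(3, 512, 512) for _ in range(3)]   # stand-in video SAR frames
with torch.no_grad():
    outputs = model(frames)                   # list of dicts: boxes, labels, scores

for t, out in enumerate(outputs):
    keep = out["scores"] > 0.5
    print(f"frame {t}: {int(keep.sum())} detections", out["boxes"][keep].shape)
```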
High-precision extraction of river boundaries in Synthetic Aperture Radar (SAR) images is of great significance for river monitoring. This paper focuses on assessing the state of the Yellow River after the rainstorm of 20 July 2021 in Zhengzhou. The refined-Lee filtering concept is combined with the filtering characteristics of the convolution operation, and an optimized convolution kernel with internal weights, the Refined-Lee Kernel, is proposed according to the geometric characteristics of the river channel. A novel river extraction deep neural network model, River-Net, is also proposed. To verify the effectiveness of the proposed model, 20 m resolution Interferometric Wide swath (IW) image data acquired by the European Space Agency Sentinel-1 satellite before and after the 20 July rainstorm in Zhengzhou are used, with the pre-rainstorm images employed to train the model. The trained model is then used to extract the Yellow River channel and analyze the rise of the river after the rainstorm. Experimental results show that the proposed model extracts river channels from SAR images more accurately than mainstream semantic segmentation models, giving it important application value for flood disaster detection and evaluation.
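As a rough illustration of encoding directional, edge-aligned averaging (in the spirit of refined-Lee filtering) inside a convolution, the sketch below builds four fixed 3×3 directional kernels and selects, per pixel, the response closest to the original value; the actual Refined-Lee Kernel weights and selection rule of the paper are not reproduced here:

```python
# Illustrative sketch only: edge-aligned directional averaging windows embedded
# as fixed weights of a convolution, followed by a crude direction selection.
import torch
import torch.nn.functional as F

def directional_kernels():
    horiz = torch.tensor([[0., 0., 0.], [1., 1., 1.], [0., 0., 0.]]) / 3
    vert  = horiz.T.clone()
    diag1 = torch.eye(3) / 3
    diag2 = torch.flip(torch.eye(3), dims=[1]) / 3
    return torch.stack([horiz, vert, diag1, diag2]).unsqueeze(1)   # (4, 1, 3, 3)

x = torch.rand(1, 1, 64, 64)                                # SAR intensity patch
responses = F.conv2d(x, directional_kernels(), padding=1)   # (1, 4, 64, 64)
# Keep, per pixel, the direction whose averaged response is closest to the
# centre pixel value (a stand-in for edge-aligned window selection).
best = torch.argmin((responses - x).abs(), dim=1, keepdim=True)
filtered = torch.gather(responses, 1, best)
print(filtered.shape)
```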
The anchor-free network represented by the Fully Convolutional One-Stage object detector (FCOS) avoids the hyperparameter setting issue caused by preset anchor boxes; however, its horizontal bounding boxes cannot indicate the precise boundary and orientation of arbitrarily oriented ships in synthetic aperture radar images. To solve this problem, this paper proposes a detection algorithm named FCOSR. First, an angle parameter is added to the FCOS regression branch to output rotatable bounding boxes. Second, 9-point features based on deformable convolution are introduced to predict the ship confidence and the bounding-box residual, reducing land false alarms and improving the accuracy of bounding-box regression. Finally, in the training stage, a rotatable adaptive sample selection strategy is used to allocate appropriate positive sample points to real ships to improve detection accuracy. Compared with FCOS and currently published anchor-based rotatable detection networks, the proposed network achieves faster detection speed and higher detection accuracy on the SSDD+ and HRSID datasets, with mAPs of 91.7% and 84.3%, respectively, and an average detection time of only 33 ms per image slice.
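The following sketch illustrates, under assumed layout choices, what adding an angle channel to a FCOS-style regression branch can look like; it is not the authors' FCOSR head:

```python
# Minimal sketch: extend a FCOS-style regression branch from 4 outputs
# (l, t, r, b) to 5 by adding an angle channel for rotatable boxes.
import torch
import torch.nn as nn

class RotatableRegressionHead(nn.Module):
    def __init__(self, in_ch=256):
        super().__init__()
        self.tower = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.box = nn.Conv2d(in_ch, 4, 3, padding=1)     # l, t, r, b distances
        self.angle = nn.Conv2d(in_ch, 1, 3, padding=1)   # orientation (radians)

    def forward(self, feat):
        x = self.tower(feat)
        ltrb = torch.exp(self.box(x))            # positive distances, as in FCOS
        theta = self.angle(x)
        return torch.cat([ltrb, theta], dim=1)   # (N, 5, H, W)

head = RotatableRegressionHead()
print(head(torch.rand(2, 256, 32, 32)).shape)    # torch.Size([2, 5, 32, 32])
```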
Wide-swath Synthetic Aperture Radar (SAR), represented by the TopSAR and ScanSAR acquisition modes, can observe vast ocean scenes. However, achieving a wide swath reduces the imaging resolution, so ships captured in wide-swath SAR images lack clear structural characteristics. This poses a great challenge to the identification of large maritime ships. Furthermore, the lack of wide-swath SAR sample data of large critical ships, such as moving aircraft carriers and amphibious ships, makes the identification of maritime moving ships difficult. To solve this problem, we construct a wide-swath SAR dataset of large maritime moving ships comprising 2291 samples, divided into the following categories: large military ships, large civilian ships longer than 250 m, and large civilian ships between 150 and 250 m in length. The dataset is constructed as follows: first, sample data of large military ships in port areas are obtained from prior knowledge; second, sample data of large civilian ships are obtained by length screening of the OpenSARShip dataset using its attribute information; finally, the imaging results of moving ships at sea are simulated by adding a quadratic phase error in the range-Doppler domain. This study also analyzes the recognition performance on the constructed dataset and the motion-simulated data using classical recognition algorithms and deep learning methods. Experimental results show that using complex SAR image information at low resolution can improve the recognition rate to a certain extent, and that the defocusing of moving ship targets has a considerable impact on recognition accuracy.
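The motion-simulation step can be illustrated with the following simplified sketch, which applies a quadratic phase error across the Doppler (azimuth-frequency) axis of a focused complex chip; the scaling of the error term is an arbitrary illustrative choice:

```python
# Sketch of the motion-simulation idea under simplified assumptions: transform
# a focused complex ship chip to the range-Doppler domain, multiply by a
# quadratic phase term across Doppler, and transform back, which smears
# (defocuses) the ship in azimuth.
import numpy as np

def add_quadratic_phase_error(chip: np.ndarray, beta: float) -> np.ndarray:
    """chip: complex image, azimuth along axis 0; beta: error strength (rad)."""
    n_az = chip.shape[0]
    rd = np.fft.fftshift(np.fft.fft(chip, axis=0), axes=0)    # azimuth -> Doppler
    f = np.fft.fftshift(np.fft.fftfreq(n_az))                 # normalised Doppler
    phase = np.exp(1j * beta * (f / f.max()) ** 2)[:, None]   # quadratic phase
    return np.fft.ifft(np.fft.ifftshift(rd * phase, axes=0), axis=0)

chip = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
defocused = add_quadratic_phase_error(chip, beta=40 * np.pi)
print(defocused.shape, defocused.dtype)
```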
Tomographic Synthetic Aperture Radar (TomoSAR) is an advanced technology for three-dimensional (3D) mountain reconstruction. However, TomoSAR mountain point clouds have a significant location error in the elevation direction, making high-precision 3D reconstruction of mountains difficult. To address this issue, a geometry-constrained Moving Least Squares (MLS)-based high-precision 3D reconstruction method is proposed. This method not only retains the benefit of traditional MLS, which uses the local subspace principle to fit complex surface structures, but also fully exploits the TomoSAR point cloud characteristic of monotonically increasing elevation with ground distance to correct reconstruction errors. The point clouds are first projected onto a new azimuth-ground-elevation domain. Subsequently, the proposed iterative geometry-constrained MLS corrects the location error in the elevation direction. Finally, a projection transformation is used to generate the 3D reconstruction results of the mountains. Simulations and measured airborne array TomoSAR mountain data, AW3D30 DSM data, and 1:10,000 DEM data validate the effectiveness of the proposed method and demonstrate the feasibility and superiority of airborne array TomoSAR for applications such as high-precision 3D mountain reconstruction.
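For readers unfamiliar with MLS, the sketch below shows plain moving least squares on a one-dimensional noisy elevation profile; the geometry constraint and azimuth-ground-elevation projection of the proposed method are omitted:

```python
# Minimal sketch of plain moving least squares: fit a local weighted quadratic
# around each query point of a noisy elevation profile.
import numpy as np

def mls_fit(x, z, x_query, h=2.0, deg=2):
    z_hat = np.empty_like(x_query, dtype=float)
    for i, xq in enumerate(x_query):
        sw = np.sqrt(np.exp(-((x - xq) / h) ** 2))   # sqrt of Gaussian weights
        A = np.vander(x - xq, deg + 1)               # local polynomial basis
        coeff, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
        z_hat[i] = coeff[-1]                         # fitted value at the query point
    return z_hat

x = np.linspace(0, 50, 300)                          # ground distance (m)
z = 0.02 * x**2 + np.random.normal(0, 3.0, x.size)   # noisy, rising elevation
print(mls_fit(x, z, np.array([10.0, 25.0, 40.0])))
```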
A geosynchronous (GEO) satellite can provide continuous illumination with broad beam coverage for a Low Earth Orbit (LEO) receiver when used as the transmitting station of a bistatic Synthetic Aperture Radar (SAR). Meanwhile, because the bistatic SAR system comprises a separate transmitter and receiver, the LEO receiver can realize multiview imaging such as downward-, forward-, and backward-looking. Therefore, GEO-LEO bistatic SAR is widely used in applications ranging from earth surveying and mapping to reconnaissance and surveillance. To realize large-scene imaging, the pulse repetition rate of the GEO SAR transmitter should be low. Meanwhile, the LEO SAR receiver introduces a wide Doppler bandwidth, resulting in azimuth undersampling of the GEO-LEO bistatic SAR. Although multichannel technology in the receiver can suppress the resulting ambiguity, the multichannel unambiguous recovery method requires numerous channels, hindering the miniaturization of the receiving system. To address the ambiguity problem in imaging complex observation scenes under severe azimuth undersampling, a sequential joint multiframe and multireceiving-channel recovery unambiguous imaging method is proposed. Unambiguous imaging is recovered jointly from the correlation between sequential multiframe observation scenes and multireceiving-channel sampling information. First, the unambiguous imaging problem of the GEO-LEO bistatic SAR is modeled as a joint low-rank and sparse tensor optimization problem. Second, in the iterative solution of the alternating direction method of multipliers, the multireceiving-channel information is used to realize unambiguous imaging of the GEO-LEO bistatic SAR for complex observation scenes. The proposed method can significantly reduce the number of receiving channels required for unambiguous imaging compared with the traditional multichannel-based imaging method. The results obtained by the proposed method are validated by simulations and experiments.
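The "joint low-rank and sparse" decomposition can be illustrated, in a heavily simplified matrix form, by the classic robust-PCA ADMM iteration below; the paper's tensor formulation and multireceiving-channel coupling are not reproduced:

```python
# A heavily simplified matrix sketch of the "low rank + sparse" idea: split a
# data matrix into a low-rank background L and a sparse component S via
# ADMM-style alternation (robust PCA).
import numpy as np

def svt(X, tau):                      # singular value thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):                     # elementwise soft thresholding
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def rpca(M, lam=None, mu=1.0, n_iter=100):
    lam = lam or 1.0 / np.sqrt(max(M.shape))
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)      # dual (multiplier) update
    return L, S

M = np.random.randn(64, 8) @ np.random.randn(8, 64)   # low-rank background
M[np.random.rand(*M.shape) < 0.05] += 20               # sparse outliers
L, S = rpca(M)
print("rank(L) ~", np.linalg.matrix_rank(L, tol=1e-3),
      " nnz(S) ~", int((np.abs(S) > 1e-3).sum()))
```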
Radar Signal Processing
Moving target indication using space-based early warning radar is important in military applications. For space-based early warning radar, complicated non-stationary clutter characteristics are induced by the high-speed movement of the radar platform and the Earth's rotation, and the large beam illumination region causes more serious clutter non-homogeneity than in the airborne radar scenario. Consequently, traditional Space-Time Adaptive Processing (STAP) methods, which have been widely used in airborne early warning radar, cannot be applied directly to space-based early warning radar. In this study, we analyze the characteristics of the clutter distribution and build a novel STAP framework in which the high-resolution clutter spectra used to construct the adaptive weights are estimated via a Convolutional Neural Network (CNN). First, clutter data sets were randomly simulated with different range bins, latitudes, spatial errors, internal clutter motion, and surface scattering coefficients, using the radar and satellite parameters as a priori knowledge. Then, we designed a two-dimensional CNN with five layers that converts a low-resolution clutter spectrum into a high-resolution spectrum. Finally, a space-time adaptive filter was calculated from the estimated high-resolution space-time spectrum and employed for clutter suppression and target detection. The simulation results show that the proposed CNN STAP achieves sub-optimal performance under limited sample conditions and a smaller computational load than a state-of-the-art sparse recovery STAP method. Therefore, this framework is suitable for practical application in space-based early warning radar.
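The final filtering step follows the classic MVDR form w = R⁻¹s / (sᴴR⁻¹s); the sketch below builds R from simulated clutter snapshots rather than from the CNN-estimated spectrum, purely for illustration:

```python
# Small numeric sketch of the space-time adaptive weight: once a clutter-plus-
# noise covariance matrix R is available, the weight is the MVDR solution.
import numpy as np

rng = np.random.default_rng(0)
N, K = 32, 200                                   # space-time DoF, training snapshots
clutter = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
R = clutter @ clutter.conj().T / K + 1e-3 * np.eye(N)   # sample covariance + loading

s = np.exp(1j * np.pi * np.arange(N) * 0.3)      # target space-time steering vector
w = np.linalg.solve(R, s)
w /= s.conj() @ w                                # w = R^{-1} s / (s^H R^{-1} s)

x = s + (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # target + noise
print(abs(w.conj() @ x))                         # filter output amplitude
```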
To solve the problem of simultaneously suppressing main-lobe and side-lobe interferences, this study introduces polarization information into the airborne bistatic radar and constructs an airborne bistatic polarization-sensitive array. The method is mainly realized through bistatic polarization hierarchical suppression. First, reconstructed covariance matrix methods are used to suppress the side-lobe interference of the primary and auxiliary radars. Next, the data received by the airborne bistatic radar are aligned in the time domain. Finally, the main-lobe interference steering vectors of the primary and auxiliary radars are corrected, and polarization cancellation is used to suppress the main-lobe interference. The simulation results show that the bistatic polarization hierarchical method can simultaneously suppress multiple main-lobe and side-lobe interferences and considerably improve the anti-interference capability of the radar system.
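One standard covariance-reconstruction idea, integrating the Capon spectrum outside the main lobe, is sketched below; it is in the spirit of, but not necessarily identical to, the reconstruction step used by the authors, and the sector limits and scenario are assumptions:

```python
# Hedged sketch: reconstruct an interference-plus-noise covariance by
# integrating the Capon spectrum over the sidelobe angular sector, then form an
# MVDR weight toward the desired direction.
import numpy as np

N = 16                                            # array elements

def steer(th):                                    # ULA steering vector (half-wavelength)
    return np.exp(1j * np.pi * np.arange(N) * np.sin(th))

jam = steer(np.deg2rad(40.0))                     # sidelobe interferer at 40 deg
R = 30 * np.outer(jam, jam.conj()) + np.eye(N)    # interference + noise covariance
Rinv = np.linalg.inv(R)

angles = np.deg2rad(np.linspace(-90, 90, 361))
mainlobe = np.abs(angles) < np.deg2rad(10)        # assumed main-lobe sector
R_rec = np.zeros((N, N), complex)
for th in angles[~mainlobe]:                      # integrate over sidelobe region
    a = steer(th)
    R_rec += np.outer(a, a.conj()) / np.real(a.conj() @ Rinv @ a)

a0 = steer(0.0)                                   # desired (main-lobe) direction
w = np.linalg.solve(R_rec + 1e-2 * np.eye(N), a0)
w /= a0.conj() @ w
print(20 * np.log10(abs(w.conj() @ jam)))         # jammer response (dB, should be low)
```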
This paper proposes an adaptive filtering method, the Adaptive Moving spectral Depolarization Ratio (AMsDR) filter, to mitigate clutter for dual-polarization weather radar based on the Jensen-Shannon divergence principle. Specifically, the spectral depolarization ratio in the range-Doppler domain is the main variable used to distinguish precipitation from clutter. The AMsDR filter removes clutter and noise while retaining precipitation, based on the differences in the spectral polarization features and spectral continuity between precipitation and clutter. The filter adaptively selects its threshold according to the echo difference between precipitation and clutter in different azimuths; consequently, its performance is better than that of current methods.
Radar emitter signal deinterleaving is a key technology for radar signal reconnaissance and an essential part of battlefield situational awareness. This paper systematically sorts out the mainstream technologies of radar emitter signal deinterleaving and summarizes the main research progress from three directions: interpulse modulation characteristics, intrapulse modulation characteristics, and machine learning. In particular, this paper explains the principles and technical characteristics of the latest deinterleaving techniques, such as those based on neural networks and data stream clustering. Finally, the shortcomings of current radar emitter deinterleaving technology are summarized, and future trends are predicted.
The direct position determination method based on compressed sensing depends on an accurate signal propagation model; when the propagation model parameters are partially unknown, its localization performance declines significantly. Thus, this study proposes a localization method based on multiple dictionaries and a hierarchical block sparse Bayesian framework. The emitter location problem is transformed into recovering signals from different dictionaries that share the same sparsity, and the emitter location with channel attenuation is solved by combining the multiple dictionaries. Simulation results reveal that the algorithm outperforms the traditional Sparse Bayesian Learning (SBL) method and Direct Position Determination (DPD) method under low signal-to-noise ratio and few snapshots.
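The shared-sparsity idea (several observations drawn from different dictionaries but constrained to the same support) can be illustrated with a simple simultaneous-OMP-style greedy recovery; this is not the hierarchical block sparse Bayesian solver of the paper:

```python
# Illustration of shared-sparsity recovery: score atoms across all dictionaries,
# pick a common support greedily, and re-fit each observation on that support.
import numpy as np

def shared_support_omp(ys, Phis, k):
    """ys: list of observations; Phis: list of dictionaries (same #columns)."""
    n_atoms = Phis[0].shape[1]
    support, residuals = [], [y.copy() for y in ys]
    for _ in range(k):
        scores = np.zeros(n_atoms)
        for r, Phi in zip(residuals, Phis):
            scores += np.abs(Phi.conj().T @ r)     # summed correlation per atom
        support.append(int(np.argmax(scores)))
        for i, (y, Phi) in enumerate(zip(ys, Phis)):
            sub = Phi[:, support]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
            residuals[i] = y - sub @ coef          # refit on shared support
    return sorted(set(support))

rng = np.random.default_rng(2)
n_atoms, m, true_pos = 100, 40, [17, 62]
Phis = [rng.standard_normal((m, n_atoms)) for _ in range(3)]
ys = [Phi[:, true_pos] @ rng.standard_normal(2) for Phi in Phis]
print(shared_support_omp(ys, Phis, k=2), "true:", true_pos)
```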
Radar Target Tracking
Low-altitude small targets, represented by rotor unmanned aerial vehicles, often adopt a slow move-and-stop strategy or exploit obstacle blocking to avoid radar detection and conduct pinpoint strikes or interference on important information equipment and strategic bases. Such a target can appear in and disappear from the radar Field of View (FoV) multiple times and is therefore referred to as a move-stop-move target. Handling this type of target with traditional tracking models and algorithms can lead to discontinuities in target identity and track fragmentation. To this end, this study investigates the tracking of move-stop-move targets with the Labeled Multi-Bernoulli (LMB) filter based on random finite set statistics. To describe the evolution characteristics of multiple entries into the radar FoV, we first introduce a third type of birth procedure, the Re-Birth (RB) procedure. Specifically, based on the spatial and kinematic relationships between target states before and after returning to the radar FoV, a Spatial Correlation-based RB (SC-RB) procedure is proposed. Then, in the framework of Bayesian filtering, we derive the SC-RB-LMB filter with the proposed SC-RB model, which can track move-stop-move targets continuously with their identities unchanged. The effectiveness of the proposed model and algorithm is demonstrated in typical low-altitude surveillance scenarios.
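One plausible, simplified form of a spatial-correlation-based re-birth weight is sketched below; the exact SC-RB model of the paper is not reproduced, and the Gaussian form and parameter values are assumptions:

```python
# Toy sketch of the re-birth idea: when a labelled track has left the FoV, score
# new measurements by a Gaussian likelihood around the track's predicted
# re-entry state, and use the score as its re-birth existence weight.
import numpy as np

def rebirth_weight(meas_xy, last_xy, last_vel, dt_gone, sigma=15.0, r_max=0.6):
    predicted = np.asarray(last_xy) + np.asarray(last_vel) * dt_gone
    d2 = np.sum((np.asarray(meas_xy) - predicted) ** 2)
    return r_max * np.exp(-0.5 * d2 / sigma ** 2)

# Track last seen at (100, 50) m moving at (5, 0) m/s, unseen for 10 s.
for z in [(151, 49), (150, 80), (400, 400)]:
    print(z, round(rebirth_weight(z, (100, 50), (5, 0), 10.0), 3))
```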
The Fields of View (FoVs) of radars in a distributed network only partially overlap owing to constraints on detection capability, waveform design, and antenna orientation, resulting in observation discrepancies between radars and posing a significant obstacle to subsequent information fusion. In this paper, we propose a distributed multitarget tracking method for partially overlapping radar FoVs based on the Gaussian Mixture Cardinalized Probability Hypothesis Density (GM-CPHD) filter. First, we employ the product of the multitarget densities to split the PHD functions and extract the part that characterizes the targets commonly observed by multiple radars. Then, a standard distributed fusion (arithmetic-average or geometric-average fusion) acts on the split information to improve tracking performance, and a compensation fusion acts on the remaining information to expand the observation FoV. The proposed method requires no prior knowledge of the radars' FoVs and can adapt to distributed multitarget tracking scenes in which the FoVs are unknown. Simulations verify the effectiveness of the proposed approach under unknown and time-varying radar FoVs and show that it outperforms the clustering method based on Gaussian matching.
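Arithmetic-average fusion of Gaussian-mixture PHD intensities reduces to taking the union of the components with weights scaled by the fusion coefficients, as in the minimal sketch below (the FoV splitting and compensation steps of the paper are omitted):

```python
# Minimal sketch of arithmetic-average fusion of two Gaussian-mixture PHDs.
from dataclasses import dataclass
import numpy as np

@dataclass
class GMComponent:
    w: float           # weight
    m: np.ndarray      # mean
    P: np.ndarray      # covariance

def aa_fuse(phd_a, phd_b, omega=0.5):
    fused = [GMComponent(omega * c.w, c.m, c.P) for c in phd_a]
    fused += [GMComponent((1 - omega) * c.w, c.m, c.P) for c in phd_b]
    return fused

phd_a = [GMComponent(0.9, np.array([0.0, 0.0]), np.eye(2))]
phd_b = [GMComponent(0.8, np.array([0.2, -0.1]), np.eye(2)),
         GMComponent(0.5, np.array([10.0, 3.0]), np.eye(2))]
fused = aa_fuse(phd_a, phd_b)
print(len(fused), sum(c.w for c in fused))   # total weight = expected cardinality
```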
Conventional multitarget-tracking data association algorithms require prior information, such as the target motion model and clutter density, that cannot be obtained timely and accurately before tracking. To address this issue, a data association algorithm for multitarget tracking based on a transformer network is proposed. First, considering that the radar may fail to detect targets accurately, virtual measurements are introduced to re-establish the data association model. Then, a data association method based on the transformer network is proposed to solve the matching problem between multiple targets and multiple measurements. Moreover, a loss function combining Masked Cross entropy loss and Dice (MCD) loss is designed to optimize the network parameters. Results on simulated and real measurement data show that the proposed algorithm outperforms classic data association algorithms and algorithms based on the bidirectional long short-term memory network under varying detection probability conditions.
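A rough sketch of transformer-based association under simplifying assumptions (neither the paper's architecture nor its MCD loss) is shown below: track and measurement embeddings interact through cross-attention, and a softmax over measurement slots, including a virtual miss slot, yields assignment probabilities:

```python
# Rough sketch: a transformer decoder attends from track queries to measurement
# embeddings; the output is a soft assignment matrix over measurement slots.
import torch
import torch.nn as nn

class AssocNet(nn.Module):
    def __init__(self, d_state=4, d_meas=2, d_model=64):
        super().__init__()
        self.track_embed = nn.Linear(d_state, d_model)
        self.meas_embed = nn.Linear(d_meas, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, tracks, meas):
        # tracks: (B, T, d_state); meas: (B, M+1, d_meas) incl. a virtual miss slot.
        q = self.track_embed(tracks)
        k = self.meas_embed(meas)
        h = self.decoder(q, k)                        # cross-attention tracks -> meas
        logits = self.out(h) @ k.transpose(1, 2)      # similarity to each slot
        return logits.softmax(dim=-1)                 # (B, T, M+1) assignment probs

net = AssocNet()
probs = net(torch.rand(1, 3, 4), torch.rand(1, 6, 2))
print(probs.shape, probs.sum(-1))                     # rows sum to 1
```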
Radar Application Technology
Owing to their enormous store of frozen water and their particular role in heat exchange, polar ice sheets act as an important indicator and amplifier of global climate change. However, the detection and understanding of the tomographic structure of polar ice sheets remain insufficient because of their special geographical location and harsh weather. Benefiting from strong penetrability and high-precision range measurement, ice sounding radar is an optimal instrument for the tomographic detection of polar ice sheets and significantly promotes the development of polar science. Nevertheless, existing radar satellites still cannot detect ice beds at depth because of the complex propagation of low-frequency signals in ice and the extremely long operating distance. Focusing on the scientific objectives (spatial resolution: 100 m; revisit time: 3 months), this study presents an in-depth analysis of the key problems of orbital ice sounding radar, including transmission attenuation, firn clutter, and cross-track resolution deterioration. With reference to the current state and trends of radar satellite technology, we demonstrate the feasibility of applying distributed Synthetic Aperture Radar (SAR) on microsatellite platforms for ice bed detection, identify the key parameters of distributed SAR and the technical challenges of an orbital radar sounding system for polar ice sheet tomographic observation, and further explore the implementation scheme.
A contactless health monitoring system can contribute to health assessment in daily life by reducing reliance on dedicated appliances and avoiding the discomfort of wearing electrodes or sensors. Such contactless approaches have the potential to continuously monitor the health status of users, alert patients and health personnel in time when acute medical emergencies occur, and meet the monitoring demands of special populations, such as newborns, burn patients, and patients with infectious diseases. Frequency-Modulated Continuous-Wave (FMCW) radar can measure the range and velocity of sensing targets and is widely applied in heart and respiration rate monitoring and fall detection. Moreover, advances in FMCW radar have enabled low-cost radar-on-chip and antenna-on-chip systems, giving FMCW radar vital application value in medical and health monitoring. In this study, we first introduce the basic knowledge underlying the application of FMCW radar to contactless health monitoring. We then systematically review the advanced applications and latest papers in this field. Finally, we summarize the present situation and limitations and provide a brief outlook on the application prospects and potential future research in the field.
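The basic FMCW range measurement principle can be illustrated with the numpy sketch below, which forms a range-Doppler map from a simulated dechirped beat signal; all parameter values are arbitrary illustrative choices:

```python
# Minimal sketch of FMCW processing: a 2-D FFT over fast time and slow time of
# the dechirped beat signal yields a range-Doppler map.
import numpy as np

c = 3e8
B, Tc = 4e9, 50e-6                 # sweep bandwidth (Hz) and chirp duration (s)
fc = 77e9                          # carrier frequency (Hz)
slope = B / Tc
n_fast, n_chirps = 256, 64
fs = n_fast / Tc                   # fast-time sampling rate

R0, v0 = 1.2, 0.05                 # target at 1.2 m moving 0.05 m/s (e.g. chest wall)
t = np.arange(n_fast) / fs
beat = np.zeros((n_chirps, n_fast), complex)
for k in range(n_chirps):
    R = R0 + v0 * k * Tc
    fb = 2 * slope * R / c                         # beat frequency from range
    fd = 2 * v0 * fc / c                           # Doppler frequency
    beat[k] = np.exp(1j * 2 * np.pi * (fb * t + fd * k * Tc))

rd_map = np.fft.fftshift(np.fft.fft2(beat), axes=0)      # Doppler axis shifted
rng_bin = np.argmax(np.abs(rd_map).max(axis=0))
range_est = rng_bin * fs / n_fast * c / (2 * slope)
print(f"estimated range ~ {range_est:.2f} m")
```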