Most Cited

(The citation data are collected from the whole network and updated monthly.)
1
The need for extra wireless spectrum is on the rise, given the rapid development of the global wireless communication industry. To this end, Radar and Communication Spectrum Sharing (RCSS) has recently gained considerable attention from both industry and academia. In particular, RCSS aims not only at enabling the spectral cohabitation of radar and communication systems, but also at designing a novel joint system that is capable of both functionalities. In this paper, a systematic overview of RCSS is provided, focusing on its two main research directions, i.e., Radar-Communication Coexistence (RCC) and Dual-Functional Radar-Communication (DFRC). We commence by discussing examples of radar and communication coexistence at various frequency bands, and then elaborate on practical application scenarios of DFRC techniques. As a further step, the state-of-the-art approaches to both RCC and DFRC are reviewed. Finally, we conclude the paper by identifying a number of open problems in the research area of RCSS.
2
There is an urgent need for radar-measured data to tackle the key technologies of radar maritime target detection. The "Sea-detecting X-band Radar and Data Acquisition Program", proposed in 2019, aims to obtain data through radar experiments and share them publicly. In 2020, the program continued to advance and conducted several experiments covering three aspects: Radar Cross-Section (RCS) calibration of radar targets, detection of sea clutter and targets under different sea conditions, and detection and tracking of maneuvering targets at sea. Measurement data were obtained for a stainless steel sphere calibrator at different distances in radar slow-scanning mode, sea clutter in radar staring mode in different directions, sea targets in radar staring mode, and a marine-engine speedboat in radar scanning mode. In addition, wind and wave data, Automatic Identification System (AIS) data of targets, visible/infrared data, and other associated sensor data were obtained synchronously.
3
With the advent of the aging population, fall detection has gradually become a research hotspot. Aiming at the detection of human falls using millimeter-wave radar, a Range-Doppler heat map Sequence detection Network (RDSNet) model that combines a convolutional neural network and a long short-term memory network is proposed in this study. First, feature extraction is performed using the convolutional neural network. The feature vectors corresponding to the dynamic sequence are then input to the long short-term memory network, which learns the temporal correlation information of the heat map sequence. Finally, the detection results are obtained using the classifier. Moreover, diverse human movement data from different subjects are collected using millimeter-wave radar, and a range-Doppler heat map dataset is built in this work. Comparative experiments show that the proposed RDSNet model reaches an accuracy of 96.67% with a computation delay of no more than 50 ms. The proposed RDSNet model has good generalization capability and provides new technical ideas for human fall detection and human posture recognition.
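A minimal sketch (not the authors' implementation; layer sizes, sequence length, and input shape are assumed) of the CNN-plus-LSTM idea: a shared convolutional feature extractor runs on each range-Doppler heat map, and an LSTM models the temporal correlation across the sequence before classification.

```python
# Minimal sketch (not the authors' code) of a CNN + LSTM pipeline for
# classifying range-Doppler heat map sequences, assuming PyTorch and
# hypothetical tensor shapes: (batch, time, 1, H, W) -> fall / no-fall.
import torch
import torch.nn as nn

class RangeDopplerSeqNet(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, num_classes=2):
        super().__init__()
        # Per-frame feature extractor (CNN)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 4 * 4, feat_dim),
        )
        # Temporal model over the frame features (LSTM)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):                      # x: (B, T, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))      # (B*T, feat_dim)
        feats = feats.view(b, t, -1)           # (B, T, feat_dim)
        out, _ = self.lstm(feats)              # (B, T, hidden)
        return self.classifier(out[:, -1])     # classify from the last time step

logits = RangeDopplerSeqNet()(torch.randn(2, 20, 1, 64, 64))  # (2, 2)
```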
4
Polarimetric Synthetic Aperture Radar (PolSAR) uses two-dimensional pulse compression to obtain high-resolution images containing polarimetric information. PolSAR has been widely used in military reconnaissance, topographic mapping, environmental and natural disaster monitoring, marine ship detection, and related fields. Addressing the problems of sea-clutter modelling and parameter estimation, slow and small target detection, dense target detection, and other issues remains a challenge in PolSAR ship detection. In this paper, four main classes of PolSAR ship detection are summarized: target polarimetric feature detection, slow and small target detection, ship wake detection, and deep learning detection. In addition, possible solutions to the existing problems in each class are given and their future development trends are predicted, which can provide valuable suggestions for interested researchers.
5
Multi-Target Tracking (MTT) is a difficult task in radar data processing. Compared with tracking in other fields or scenarios, Maritime MTT (MMTT) is particularly challenging. On the one hand, the low signal-to-clutter ratio in the highly complex marine environment limits detection performance for small targets at sea, and the plots obtained by the detector contain missing detections and a large number of false alarms, which makes MTT much more difficult. On the other hand, when marine targets move in multiple groups, or when high-resolution radar is used in marine detection applications, the measurements of a target exhibit the distribution characteristic of occupying multiple resolution cells. In this case, conventional MTT methods do not perform as desired. Currently, the number of papers on MMTT in China and abroad is very limited, and most of them focus only on a single target. This paper summarizes MMTT algorithms based on four categories of methods: conventional MTT methods, amplitude-aided MTT methods, multi-target track-before-detect methods, and multiple extended-target tracking methods. In addition, this paper comprehensively considers and analyzes the future perspective of MMTT.
6
Waveform design for joint radar and communication has become a focus of intense research in recent years. Some scholars have proposed using the odd and even carriers of an Orthogonal Frequency Division Multiplexing (OFDM) signal to modulate the radar and communication functions, respectively, to realize the integration. However, OFDM systems generally use a cyclic prefix to avoid the Inter-Carrier Interference (ICI) and Inter-Symbol Interference (ISI) caused by multipath effects, which reduces energy utilization and creates false targets that degrade radar performance. In addition, the traditional OFDM integrated signal is sensitive to Doppler shift: even a small Doppler frequency offset causes a considerable loss of orthogonality. On this basis, this paper proposes a new waveform design and processing method. This method uses blank guard intervals to replace cyclic prefixes, which can resist multipath effects while avoiding the false targets introduced by cyclic prefixes, effectively preventing ICI and ISI. In terms of signal processing, this paper proposes a method for channel estimation and Doppler compensation using the prior information of the radar signal. Compared with the traditional method, this new method reduces the system's resource overhead, such as pilots and training sequences, and improves energy utilization and spectrum efficiency. The peak sidelobe ratio, integrated sidelobe ratio, and bit error ratio are also improved. Simulation experiments verify the effectiveness of this method.
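A minimal sketch of the core waveform idea under assumed parameters (64 subcarriers, BPSK data, a chirp-like radar phase code): communication data on odd subcarriers, a radar sequence on even subcarriers, and a blank guard interval appended in place of a cyclic prefix. This is illustrative only, not the paper's exact design.

```python
# Minimal sketch (assumed parameters, not the paper's exact design): build an
# OFDM symbol whose odd subcarriers carry communication data and whose even
# subcarriers carry a radar sequence, then append a blank (zero) guard
# interval instead of a cyclic prefix to absorb multipath without repeating
# the symbol tail.
import numpy as np

N, GUARD = 64, 16                               # subcarriers, guard samples (assumed)
rng = np.random.default_rng(0)

comm_bits = rng.integers(0, 2, N // 2)
comm_syms = 2 * comm_bits - 1                   # BPSK on odd subcarriers
radar_seq = np.exp(1j * np.pi * np.arange(N // 2) ** 2 / (N // 2))  # chirp-like phase code

X = np.zeros(N, dtype=complex)
X[1::2] = comm_syms                             # odd carriers: communication
X[0::2] = radar_seq                             # even carriers: radar

x_time = np.fft.ifft(X) * np.sqrt(N)            # time-domain OFDM symbol
tx_symbol = np.concatenate([x_time, np.zeros(GUARD)])  # blank guard interval (no CP)
print(tx_symbol.shape)                          # (80,)
```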
7
In Synthetic Aperture Radar (SAR) remote sensing images, ships are visually salient targets on the sea surface: because they are made of metal, their backscatter is strong, whereas the sea surface is smooth and its backscatter is weak. However, wide-swath SAR remote sensing images have complicated sea backgrounds, and the features of different ship targets vary considerably. To solve this problem, a SAR remote sensing image ship detection model called NanoDet, based on visual saliency, is proposed. First, the image samples are divided into scene categories using an automatic clustering algorithm. Second, differentiated saliency detection is performed for images in the different scenes. Finally, the optimized lightweight network model, NanoDet, is used to perform feature learning on the training samples augmented with the saliency maps, so that the model can achieve fast and high-precision ship detection. This method is helpful for real-time SAR image applications, and the lightweight model is conducive to future hardware transplantation. This study conducts experiments on the public datasets SSDD and AIR-SARship-2.0, and the experimental results verify the effectiveness of our approach.
8
Sea surveillance is an important application of polarimetric Synthetic Aperture Radar (SAR), but ship detection in dense areas remains a major challenge. Owing to the crosstalk of multiple targets in dense ship areas, it can be difficult to collect pure sea clutter samples for threshold determination when using the traditional Constant False Alarm Rate (CFAR) moving window, which decreases detection performance. To address this issue, this paper proposes a polarimetric SAR ship detection method based on polarimetric rotation domain features and the superpixel technique, considering both feature selection and detector design. For feature selection, the backscattering of radar targets is sensitive to the relative geometry between the target orientation and the radar line of sight. The information hidden in this scattering diversity can be mined using polarimetric rotation domain analysis, from which the polarimetric correlation pattern and a set of polarimetric rotation domain features are obtained. Target-to-Clutter Ratio (TCR) analysis is conducted, and the three polarimetric features with the highest TCR values are selected for subsequent target detection. On this basis, a clutter superpixel selection method based on K-means clustering is developed for detector design, which effectively circumvents the influence of dense ship targets on nearby sea clutter. CFAR ship detection results are then obtained based on the selected clutter samples. Experimental studies on spaceborne Radarsat-2 and GaoFen-3 fully polarimetric SAR datasets indicate that the proposed method can effectively detect dense ship targets with figures of merit above 95%.
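A minimal sketch of the clutter-sample selection step under simplified assumptions (exponential clutter intensity, two K-means clusters over hypothetical superpixel statistics); it is not the paper's full rotation-domain pipeline, but it shows how clutter superpixels can be isolated before computing a CFAR threshold.

```python
# Minimal sketch (illustrative only, not the paper's full method): cluster
# superpixel-level statistics with K-means, treat the lowest-intensity cluster
# as sea clutter, and set a CFAR threshold from those clutter samples assuming
# an exponential intensity model.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical per-superpixel features: [mean intensity, intensity std]
clutter_sp = rng.exponential(1.0, (180, 2))
ship_sp = rng.exponential(8.0, (20, 2))
feats = np.vstack([clutter_sp, ship_sp])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
clutter_label = np.argmin([feats[labels == k, 0].mean() for k in range(2)])
clutter_samples = feats[labels == clutter_label, 0]   # mean intensities of clutter superpixels

pfa = 1e-3
mu = clutter_samples.mean()
threshold = -mu * np.log(pfa)   # exponential clutter: P(I > T) = exp(-T/mu) = pfa
print(f"estimated clutter mean {mu:.2f}, CFAR threshold {threshold:.2f}")
```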
9
The Bilateral Constant False Alarm Rate (BCFAR) detection algorithm calculates the spatial information of a Synthetic Aperture Radar (SAR) image with a Gaussian kernel density estimator and combines it with the intensity information of the image to obtain a joint image for target detection. Compared with the classical CFAR detection algorithm, which uses only intensity information, bilateral CFAR has better detection performance and robustness. However, with continuous high-intensity heterogeneous points (such as breakwaters, azimuth ambiguities, and phantoms) in a complex environment, the spatial information calculated by the kernel density estimator contains more errors, which leads to many false alarms in the detection results. In addition, weak targets with low similarity between adjacent pixels may be missed. To alleviate these problems, this paper designs an Improved Bilateral CFAR (IB-CFAR) algorithm for complex environments. The proposed IB-CFAR is divided into three stages: intensity level division based on nonuniform quantization, intensity-spatial domain information fusion, and parameter estimation after clutter truncation. The intensity level division based on nonuniform quantization improves the similarity and contrast information of weak targets, leading to an improved ship detection rate. The intensity-spatial domain information fusion combines spatial similarity, range direction, and intensity information, which further improves the detection rate and describes the ship structure information. Parameter estimation after clutter truncation removes continuous high-intensity heterogeneous points from the background window and retains the real sea clutter samples to the maximum extent, which makes parameter estimation more accurate. Finally, according to the estimated parameters, an accurate sea clutter statistical model is established for CFAR detection. The effectiveness and robustness of the proposed algorithm are verified using GaoFen-3 and TerraSAR-X data. The experimental results show that the proposed algorithm performs well in environments with a dense distribution of weak targets, achieving a 97.85% detection rate and a 3.52% false alarm rate. Compared with existing detection algorithms, the detection rate is increased by 5% and the false alarm rate is reduced by 10%. However, when the number of weak targets is small and the background is very complex, a few false alarms still appear.
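A minimal sketch of the clutter-truncation idea under an assumed lognormal clutter model (the paper's exact statistical model may differ): strong outliers are truncated from the background window before the clutter parameters and the CFAR threshold are estimated.

```python
# Minimal sketch (illustrative, assumed lognormal clutter model, not the
# paper's exact statistics): truncate high-intensity outliers in the CFAR
# background window before estimating clutter parameters, then derive the
# detection threshold for a given false alarm rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
window = rng.lognormal(mean=0.0, sigma=0.5, size=500)      # sea clutter samples
window[:20] += 25.0                                        # contaminating strong scatterers

# Clutter truncation: discard samples above an upper quantile of the window
truncated = window[window <= np.quantile(window, 0.90)]

# Parameter estimation on the truncated samples (lognormal assumption)
mu_hat = np.log(truncated).mean()
sigma_hat = np.log(truncated).std(ddof=1)

pfa = 1e-3
threshold = np.exp(mu_hat + sigma_hat * stats.norm.ppf(1.0 - pfa))
print(f"truncated {len(window) - len(truncated)} samples, threshold = {threshold:.2f}")
```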
10
Deep convolutional neural networks have achieved great success in recent years. They have been widely used in applications such as optical and SAR image scene classification, object detection and recognition, semantic segmentation, and change detection. However, deep neural networks rely on large-scale, high-quality training data and can only guarantee good performance when the training and test data are independently sampled from the same distribution. Deep convolutional neural networks have been found to be vulnerable to subtle adversarial perturbations. This adversarial vulnerability prevents the deployment of deep neural networks in security-sensitive applications such as medical, surveillance, autonomous driving, and military scenarios. This paper first presents a holistic view of the security issues of deep convolutional neural network-based image recognition systems. The entire information processing chain is analyzed with respect to safety and security risks. In particular, poisoning attacks and evasion attacks on deep convolutional neural networks are analyzed in detail, and the root causes of the adversarial vulnerability of deep recognition models are discussed. Then, we give a formal definition of adversarial robustness and present a comprehensive review of adversarial attacks, adversarial defenses, and adversarial robustness evaluation. Rather than listing existing research, we focus on the threat models of the adversarial attack and defense arms race. We perform a detailed analysis of several representative adversarial attacks on SAR image recognition models and provide an example of adversarial robustness evaluation. Finally, several open questions are discussed in light of recent research progress from our group. This paper can serve as a reference for developing more robust deep neural network-based image recognition models in dynamic adversarial scenarios.
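As a concrete illustration of one representative evasion attack (FGSM, which may or may not be among the attacks analyzed in the paper), the following sketch perturbs an input along the sign of the loss gradient within an L-infinity budget, assuming a PyTorch classifier.

```python
# Minimal sketch of one representative evasion attack, FGSM (not necessarily
# the attacks studied in the paper), assuming a PyTorch classifier: perturb
# the input along the sign of the loss gradient within an L-infinity budget.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    """Return an adversarial example x_adv with ||x_adv - x||_inf <= eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage with a hypothetical small classifier on 64x64 single-channel chips
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 10))
x = torch.rand(4, 1, 64, 64)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())   # bounded by eps
```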
11
Recently, reconfigurable metasurfaces have attracted intense attention in the field of electromagnetic metasurfaces. Compared with other metasurfaces, reconfigurable metasurfaces that use steerable devices or materials to control electromagnetic waves in real time are more versatile and show great promise in engineering applications. Our team has continuously explored advances in reconfigurable metasurfaces and has studied the microwave region from the perspectives of theory, technique, and applications. This study reviews the research history of reconfigurable metasurfaces and summarizes some of our previous works, including studies on the amplitude, phase, and polarization modulation of electromagnetic waves and their applications. Finally, the study discusses future challenges and possibilities for reconfigurable metasurfaces.
12
Under the constraints of the point scattering model, traditional Synthetic Aperture Radar (SAR) imaging algorithms can be regarded as a mapping from data space to image space. However, most objects in real scenes are extended targets, which are mismatched with the point scattering model used in traditional linear imaging algorithms. This mismatch distorts the SAR image representation; a common phenomenon is that extended targets appear as isolated scattered points, which hinders target recognition based on SAR images. SAR parametric nonlinear imaging techniques have been established to solve this model mismatch problem. Such methods are characterized by scattering models that consider both point targets and extended targets. Specifically, by using the sensitivity of the phase and amplitude characteristics of echoes or images to the observation angles, SAR parametric imaging methods can first identify the target type and estimate the scattering parameters, and then reconstruct the target image on the basis of the scattering model. SAR parametric imaging methods can thus obtain better image quality than traditional linear methods for extended targets. This article mainly introduces parametric imaging methods for linear extended targets, which correspond to isolated strong points and continuous edges of objects in real scenes, and discusses parametric imaging methods in the echo and image domains together with experimental results. Finally, future development trends of SAR parametric imaging methods are discussed.
13
Multi-sensor fusion perception is one of the key technologies for realizing intelligent automobile driving, and it has become a hot issue in the field of intelligent driving. However, because of the limited resolution of millimeter-wave radars, the interference of noise, clutter, and multipath, and the influence of weather on LiDAR, existing fusion algorithms cannot easily achieve accurate fusion of the data from the two sensors and obtain robust results. To address the problem of accurate and robust perception in intelligent driving, this study proposes a robust perception algorithm that combines millimeter-wave radar and LiDAR. Using a new spatial correction method based on feature-based two-step registration, precise spatial synchronization of the 3D LiDAR and 2D radar point clouds is realized. An improved millimeter-wave radar filtering algorithm is used to reduce the influence of noise and multipath on the radar point cloud. Then, according to the novel fusion method proposed in this study, the data from the two sensors are fused to obtain accurate and robust sensing results, which solves the problem of smoke degrading LiDAR performance. Finally, we conducted multiple sets of experiments in a real environment to verify the effectiveness and robustness of our method. Even in extreme environments such as smoke, we can still achieve accurate positioning and robust mapping. The environment map established by the proposed fusion method is more accurate than that established by a single sensor, and the localization error can be reduced by at least 50%.
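A minimal sketch of the spatial-alignment step under hypothetical extrinsic calibration values: 2D radar detections (range and azimuth) are projected into the 3D LiDAR frame by a rotation and translation so the two point clouds can be fused. The registration method itself is not reproduced here.

```python
# Minimal sketch (assumed calibration values) of the spatial-alignment step:
# apply an extrinsic rotation and translation to project 2D millimeter-wave
# radar points (range-azimuth on a plane) into the 3D LiDAR frame so the two
# point clouds can be fused.
import numpy as np

def radar_to_lidar(ranges, azimuths, R, t):
    """ranges/azimuths: (N,) radar detections; R: (3,3); t: (3,)."""
    pts_radar = np.stack([ranges * np.cos(azimuths),
                          ranges * np.sin(azimuths),
                          np.zeros_like(ranges)], axis=1)       # radar plane, z = 0
    return pts_radar @ R.T + t                                  # points in the LiDAR frame

# Hypothetical extrinsics from a two-step registration (yaw of 2 deg, small offset)
yaw = np.deg2rad(2.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([0.10, 0.00, -0.25])

pts = radar_to_lidar(np.array([5.0, 12.3]), np.deg2rad([10.0, -5.0]), R, t)
print(pts.round(3))
```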
14
Three-dimensional (3D) imaging is one of the leading trends in the development of Synthetic Aperture Radar (SAR) technology. Current SAR 3D imaging systems mainly include tomography and array interferometry, each with the drawback of either a long acquisition cycle or high system complexity. Therefore, a novel framework of SAR microwave vision 3D imaging is proposed, which effectively combines the SAR imaging model with the various 3D cues contained in the SAR microwave scattering mechanism and the perceptual semantics in SAR images, so as to significantly reduce system complexity and achieve high-efficiency, low-cost SAR 3D imaging. To promote the development of SAR microwave vision 3D imaging theory and technology, a comprehensive SAR microwave vision 3D imaging dataset is planned to be constructed with the support of NSFC major projects. This paper outlines the composition and construction plan of the dataset, gives a detailed description of the composition and information of the first published version of the data and the method of making the dataset, and thus provides helpful support for the SAR community.
15
It is difficult for traditional radar to suppress deceptive mainlobe interference and separate range-ambiguous clutter. The proposal of the waveform diverse array changes the way information is obtained by utilizing degrees of freedom in the transmit dimension. Through flexible system design and signal processing methods, this array enhances the ability of information extraction and improves anti-jamming and detection performance compared with the traditional phased array and Multiple-Input Multiple-Output (MIMO) radar. This paper summarizes the research progress of waveform diverse array radars in China and overseas and provides the basic concepts of array diversity systems regarding frequency, time, and phase modulation. Furthermore, the research trends of waveform diverse array radars are discussed. Based on the existing basic theory and key technology research, the advantages of the waveform diverse array in providing new information about targets and increasing the additional controllable degrees of freedom of the system are verified, thereby improving the multidimensional detection capability of the new radar system.
16
Owing to the influence of the environment on the scattering characteristics of ground objects in flooded areas, the false detection rate increases when change detection is performed on Synthetic Aperture Radar (SAR) images of these areas, which reduces the accuracy of the resulting difference map. To solve this problem, we propose a change-detection method based on a fusion difference map. This method combines the regional sensitivity of the entropy difference map with the regional retention of the mean difference map to construct a fusion difference map based on an improved relative entropy and mean value ratio. First, the initial clustering results of the fuzzy local information C-means clustering method are classified by their Pearson correlation coefficients; second, the secondary classification results are used for the initial image segmentation; third, the final segmentation results are obtained using the iterated conditional modes model and a Markov random field. To verify the flood-disaster-detection performance of the proposed method, we used European Remote Sensing satellite (ERS-2) data acquired over Bern, Switzerland, in April and May 1999 and Radarsat remote-sensing data for the Ottawa region of Canada in May and August 1997. We also applied the proposed method to data obtained for the Poyang Lake region of China in June and July 2020, and estimated the disaster area and change trend before and after the flood in Poyang Lake. The experimental results show that the algorithm has a low overall detection error, the false detection rate is somewhat reduced, and the accuracy of the detection results is improved.
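A minimal sketch of the difference-map fusion idea under simplified assumptions (a crude local-entropy helper and a fixed fusion weight; the paper's improved relative-entropy formulation is not reproduced): a mean-ratio map and an entropy-difference map are computed from two co-registered images and combined by weighted averaging.

```python
# Minimal sketch (illustrative, simplified fusion rule rather than the paper's
# exact improved relative-entropy formulation): build a mean-ratio difference
# map and a local-entropy difference map from two co-registered SAR images,
# then fuse them by weighted averaging.
import numpy as np
from scipy.ndimage import uniform_filter

def local_entropy(img, win=5, bins=16):
    """Crude per-pixel entropy of local histograms (assumed helper)."""
    ent = np.zeros_like(img, dtype=float)
    q = np.digitize(img, np.linspace(img.min(), img.max(), bins))
    for b in range(1, bins + 1):
        p = uniform_filter((q == b).astype(float), win)   # local probability of bin b
        ent -= np.where(p > 0, p * np.log(p), 0.0)
    return ent

rng = np.random.default_rng(3)
img1 = rng.gamma(2.0, 1.0, (64, 64))            # pre-flood image (simulated speckle)
img2 = img1.copy(); img2[20:40, 20:40] *= 4.0   # post-flood change region

m1, m2 = uniform_filter(img1, 5), uniform_filter(img2, 5)
mean_ratio = 1.0 - np.minimum(m1, m2) / np.maximum(m1, m2)      # mean-ratio difference map
ent_diff = np.abs(local_entropy(img1) - local_entropy(img2))    # entropy difference map
ent_diff /= ent_diff.max() + 1e-12

alpha = 0.5                                     # assumed fusion weight
fused = alpha * mean_ratio + (1 - alpha) * ent_diff
print(fused[25:35, 25:35].mean(), fused[:10, :10].mean())       # changed vs unchanged area
```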
17
The monopulse technique is used in scanning radar systems to improve image quality in the forward-looking area. However, monopulse measurements fail to resolve multiple targets in the same resolution cell because of angular glint, which often results in image blurring. In response to this, we propose a monopulse forward-looking imaging method utilizing Doppler estimates of sum-difference measurements. First, target multiplicity is resolved by exploiting the different Doppler shifts caused by the relative motion between the platform and targets at different directions. High azimuthal angle measurement accuracy of the Doppler estimates is then obtained using the Sum-Difference Amplitude-Comparison (SDAC) monopulse technique. Subsequently, the intensity of the sum-channel estimates is projected onto the image plane according to the range and angle measurements. To further improve the precision of the angle measurements, a Chirp-Z Transform (CZT)-based algorithm is proposed for the reconstruction of the Doppler estimates of the sum-difference channels. Simulation results demonstrate the capability of the proposed method to resolve multiple targets at high squint angles over a large scanning field. Real-data experiments show a significant improvement in image profiles using the CZT-based algorithm compared with the conventional monopulse imaging method.
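A minimal sketch (assumed PRF, pulse count, and band limits; not the authors' algorithm) of using the Chirp-Z Transform to evaluate a zoomed Doppler spectrum over a narrow band, which places the Doppler estimate on a much finer grid than a coarse FFT.

```python
# Minimal sketch (assumed parameters, not the authors' algorithm) of using the
# Chirp-Z Transform to evaluate a zoomed Doppler spectrum over a narrow band,
# which refines the Doppler estimate of a sum-channel echo compared with a
# coarse FFT grid.
import numpy as np
from scipy.signal import czt

fs, n = 1000.0, 256                       # PRF (Hz) and number of pulses (assumed)
f_true = 123.4                            # true Doppler frequency (Hz)
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f_true * t)       # slow-time signal of one range cell

# Zoom the spectrum into [100, 150] Hz with m points
f1, f2, m = 100.0, 150.0, 512
a = np.exp(2j * np.pi * f1 / fs)                      # starting point on the unit circle
w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))        # ratio between successive points
X = czt(x, m=m, w=w, a=a)

freqs = f1 + np.arange(m) * (f2 - f1) / m
f_est = freqs[np.argmax(np.abs(X))]
print(f"coarse FFT bin width {fs / n:.2f} Hz, CZT Doppler estimate {f_est:.2f} Hz")
```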
18
Deep-learning technology has enabled remarkable results for ship detection in SAR images. However, in view of the complex and changeable backgrounds of SAR ship images, accurately and efficiently extracting target features and improving detection accuracy and speed remain a huge challenge. To solve this problem, a ship detection algorithm based on multiscale feature fusion and channel-relation calibration of features is proposed in this paper. First, based on Faster R-CNN, a channel attention mechanism is introduced to calibrate the channel relationships among features in the feature extraction network, so as to improve the network's ability to express ship features in different scenes. Second, unlike the original method of generating candidate regions from single-scale features, this paper introduces an improved feature pyramid structure based on a neural architecture search algorithm, which helps improve the performance of the network. The multiscale features are effectively fused to address missed detections of small targets and adjacent inshore targets. Experimental results on the SSDD dataset show that, compared with the original Faster R-CNN, the proposed algorithm improves detection accuracy from 85.4% to 89.4% and detection speed from 2.8 FPS to 10.7 FPS. Thus, this method effectively achieves high-speed, high-accuracy SAR ship detection, which has practical benefits.
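A minimal sketch of a squeeze-and-excitation style channel attention block, one common way to calibrate channel relationships (the paper's exact module may differ), assuming a PyTorch feature map of shape (B, C, H, W).

```python
# Minimal sketch of a squeeze-and-excitation style channel attention block
# (one common way to calibrate channel relationships; not necessarily the
# exact module used in the paper), assuming PyTorch.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                  # squeeze: global average pooling -> (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # excitation: per-channel weights
        return x * w                            # recalibrate feature channels

feat = torch.randn(2, 256, 32, 32)
print(ChannelAttention(256)(feat).shape)        # torch.Size([2, 256, 32, 32])
```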
19
Automatic Target Recognition (ATR) in Synthetic Aperture Radar (SAR) has been extensively applied in military and civilian fields. However, SAR images are very sensitive to the imaging azimuth, as the same target can appear very different from different aspects. This calls for more reliable and robust multiaspect ATR. In this paper, we propose a multiaspect ATR model based on EfficientNet and BiGRU. To train this model, we use island loss, which is better suited to SAR ATR. Experimental results show that the proposed method achieves 100% accuracy for 10-class recognition on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database. For SAR targets in three special imaging cases, with large depression angles, version variants, and configuration variants, the recognition accuracies reach 99.68%, 99.95%, and 99.91%, respectively. In addition, the proposed method achieves satisfactory accuracy even with smaller datasets. Our experimental results show that the proposed method outperforms other state-of-the-art ATR methods on most MSTAR datasets and exhibits a certain degree of robustness.
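A minimal sketch of the multiaspect idea with a stand-in CNN backbone replacing EfficientNet to keep the example self-contained (shapes, aspect count, and class count are assumed): per-aspect features from a shared backbone are fed to a bidirectional GRU whose final output is classified.

```python
# Minimal sketch (a stand-in CNN replaces EfficientNet to keep the example
# self-contained; shapes and class count are assumed) of multiaspect SAR ATR:
# per-aspect features from a shared backbone are fed to a bidirectional GRU
# and the final state is classified.
import torch
import torch.nn as nn

class MultiAspectATR(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(           # placeholder for EfficientNet
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        self.bigru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                        # x: (B, A, 1, H, W), A = number of aspects
        b, a = x.shape[:2]
        f = self.backbone(x.flatten(0, 1)).view(b, a, -1)
        out, _ = self.bigru(f)                   # (B, A, 2*hidden)
        return self.head(out[:, -1])             # classify from the last aspect step

logits = MultiAspectATR()(torch.randn(2, 3, 1, 128, 128))   # (2, 10)
```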
20
Retrieving the working modes of a multifunction radar from electronic reconnaissance data is a difficult problem that has attracted widespread attention in the field of electronic reconnaissance. It is also an important task when extracting value from big electromagnetic data and provides direct support to applications such as radar type recognition, working state recognition, radar intention inference, and precise electronic jamming. Based on the assumption of model simplicity, this study defines a complexity measure for multifunction radar pulse trains and introduces semantic coding theory to analyze their temporal structure. The model complexity minimization criterion guides the semantic coding procedure to extract, from the pulse trains, radar pulse groups corresponding to different radar functions. Furthermore, based on the coded sequence of the pulse train, the switching matrix between different pulse groups is estimated, and the hierarchical working model of the multifunction radar is ultimately reconstructed. Simulations are conducted to verify the feasibility and performance of the new method. The simulation results indicate that the proposed method uses coding theory to automatically extract pulse groups and rebuild operating models from multifunction radar pulse trains, and that the method is robust to data imperfections such as missing pulses.