2021 Vol. 10, No. 4

Special Topic Papers: Intelligent Information Processing for Microwave Remote Sensing
Three-dimensional (3D) imaging is one of the leading trends in the development of Synthetic Aperture Radar (SAR) technology. Current SAR 3D imaging systems mainly include tomographic and array-interferometric systems, both of which suffer from either a long acquisition cycle or excessive system complexity. Therefore, a novel framework of SAR microwave vision 3D imaging is proposed, which effectively combines the SAR imaging model with the various 3D cues contained in the SAR microwave scattering mechanism and the perceptual semantics of SAR images, so as to significantly reduce system complexity and achieve high-efficiency, low-cost SAR 3D imaging. To promote the development of SAR microwave vision 3D imaging theory and technology, a comprehensive SAR microwave vision 3D imaging dataset is planned with the support of NSFC major projects. This paper outlines the composition and construction plan of the dataset, describes in detail the composition of the first released version of the data, and explains how the dataset was produced, so as to provide helpful support for the SAR community.
The Bilateral Constant False Alarm Rate (BCFAR) detection algorithm calculates the spatial information of a Synthetic Aperture Radar (SAR) image with a Gaussian kernel density estimator and combines it with the intensity information of the image to obtain a joint image for target detection. Compared with the classical CFAR detection algorithm, which uses only intensity information, bilateral CFAR has better detection performance and robustness. However, in a complex environment with contiguous high-intensity heterogeneous points (such as breakwaters, azimuth ambiguities, and ghost targets), the spatial information calculated by the kernel density estimator contains more errors, which leads to many false alarms in the detection results. In addition, weak targets with low similarity between adjacent pixels are missed. To address these problems, this paper designs an Improved Bilateral CFAR (IB-CFAR) algorithm for complex environments. The proposed IB-CFAR is divided into three stages: intensity-level division based on nonuniform quantization, intensity-spatial information fusion, and parameter estimation after clutter truncation. Intensity-level division based on nonuniform quantization improves the similarity and contrast information of weak targets, leading to an improved ship detection rate. The intensity-spatial information fusion stage fuses spatial similarity, range-direction, and intensity information, which further improves the detection rate and better describes ship structure. Parameter estimation after clutter truncation removes contiguous high-intensity heterogeneous points from the background window while retaining real sea clutter samples to the greatest extent, making parameter estimation more accurate. Finally, according to the estimated parameters, an accurate sea clutter statistical model is established for CFAR detection. The effectiveness and robustness of the proposed algorithm are verified using GaoFen-3 and TerraSAR-X data. The experimental results show that the proposed algorithm performs well in environments with densely distributed weak targets, achieving a 97.85% detection rate and a 3.52% false alarm rate. Compared with existing detection algorithms, the detection rate is increased by 5% and the false alarm rate is reduced by 10%. However, when the number of weak targets is small and the background is very complex, a few false alarms still appear.
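As an illustration of the clutter-truncation stage, the sketch below implements a sliding-window CFAR in which background samples above a truncation quantile are discarded before the detection threshold is fitted. This is a minimal sketch under stated assumptions: the Gaussian threshold, window geometry, and the `trunc_q` quantile are illustrative, and the paper's actual sea clutter statistical model and joint intensity-spatial image are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def truncated_cfar(image, guard=4, bg=12, pfa=1e-3, trunc_q=0.95):
    """Sliding-window CFAR with clutter truncation in the background ring.

    Background samples above the `trunc_q` quantile are discarded (assumed
    outliers such as breakwaters or azimuth ambiguities) and a Gaussian
    threshold is fitted to the remaining samples. Illustrative only.
    """
    image = np.asarray(image, dtype=float)
    h, w = image.shape
    k = guard + bg                        # half-size of the full window
    factor = norm.ppf(1.0 - pfa)          # Gaussian threshold multiplier
    det = np.zeros((h, w), dtype=bool)
    for i in range(k, h - k):
        for j in range(k, w - k):
            win = image[i - k:i + k + 1, j - k:j + k + 1].copy()
            # mask the guard area and the cell under test
            win[bg:bg + 2 * guard + 1, bg:bg + 2 * guard + 1] = np.nan
            ring = win[~np.isnan(win)]
            ring = ring[ring <= np.quantile(ring, trunc_q)]  # clutter truncation
            det[i, j] = image[i, j] > ring.mean() + factor * ring.std()
    return det
```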
In this paper, a Spatial-Channel Selective Kernel Fully Convolutional Network (SCSKFCN) and a Semi-supervised Preselection-United Optimization (SPUO) method are proposed for polarimetric Synthetic Aperture Radar (SAR) image classification. Integrated with a spatial-channel attention mechanism, SCSKFCN adaptively fuses features with receptive fields of different sizes and achieves promising classification performance. SPUO efficiently extracts the information contained in unlabeled samples according to the annotated samples: it uses the K-Wishart distance to preselect unlabeled samples for pseudo-label generation and then optimizes SCSKFCN with both labeled and pseudo-labeled samples. During the training of SCSKFCN, a two-step verification mechanism is applied to the pseudo-labeled samples to retain reliable samples for united optimization. The experimental results show that the proposed SCSKFCN-SPUO achieves promising performance and efficiency using a limited number of annotated pixels.
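To illustrate the preselection step, the following sketch assigns pseudo labels only to unlabeled samples that lie close to one class prototype and clearly away from the second-closest class. A Euclidean prototype distance stands in for the paper's K-Wishart distance, and `dist_thresh` and `margin` are illustrative parameters; at least two classes are assumed.

```python
import numpy as np

def preselect_pseudo_labels(feats_u, feats_l, y_l, dist_thresh=1.0, margin=0.2):
    """Distance-based preselection of unlabeled samples for pseudo-labeling.

    feats_u: (N, D) unlabeled features; feats_l/y_l: (M, D)/(M,) labeled set.
    Returns indices of the kept unlabeled samples and their pseudo labels.
    """
    classes = np.unique(y_l)
    # per-class prototypes from the annotated samples
    protos = np.stack([feats_l[y_l == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(feats_u[:, None, :] - protos[None, :, :], axis=-1)
    order = np.sort(d, axis=1)            # ascending distances per sample
    best = d.argmin(axis=1)
    # keep samples that are both close and unambiguous
    keep = (order[:, 0] < dist_thresh) & (order[:, 1] - order[:, 0] > margin)
    return np.flatnonzero(keep), classes[best[keep]]
```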
Deep-learning technology has enabled remarkable results for ship detection in SAR images. However, given the complex and changeable backgrounds of SAR ship images, accurately and efficiently extracting target features and improving detection accuracy and speed remain major challenges. To solve this problem, a ship detection algorithm based on multiscale feature fusion and channel-relation calibration of features is proposed in this paper. First, based on Faster R-CNN, a channel attention mechanism is introduced to calibrate the channel relationships among features in the feature extraction network, improving the network's ability to express ship features in different scenes. Second, unlike the original method of generating candidate regions from single-scale features, this paper introduces an improved feature pyramid structure based on a neural architecture search algorithm, which helps improve network performance. The multiscale features are effectively fused to address the problem of missed detections of small targets and adjacent inshore targets. Experimental results on the SSDD dataset show that, compared with the original Faster R-CNN, the proposed algorithm improves detection accuracy from 85.4% to 89.4% and the detection rate from 2.8 FPS to 10.7 FPS. Thus, this method achieves high-speed, high-accuracy SAR ship detection, which has practical benefits.
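The abstract does not specify the exact attention design, so the following is a minimal sketch of one common realization of channel recalibration: a squeeze-and-excitation-style block in PyTorch, inserted after a backbone stage. Layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel recalibration block (illustrative)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                    # squeeze: global pooling
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                               # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)                           # recalibrate channel responses

# e.g. recalibrated = ChannelAttention(256)(torch.randn(1, 256, 32, 32))
```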
Multiscale object detection in Synthetic Aperture Radar (SAR) images can locate and recognize key objects in large-scene SAR images, and it is one of the key technologies in SAR image interpretation. However, for the simultaneous detection of SAR objects with large size differences, that is, cross-scale object detection, existing object detection methods struggle to extract the features of cross-scale objects and to detect such objects simultaneously. In this study, we propose a multiscale object detection method based on a Feature-Transferable Pyramid Network (FTPN) for SAR images. In the feature extraction stage, a feature transfer method is used to effectively combine the feature maps of each layer and extract feature maps at different scales. Simultaneously, dilated (atrous) convolution is used to enlarge the receptive field of feature extraction and help the network extract large-object features. These steps effectively preserve the features of objects of different sizes, realizing the simultaneous detection of cross-scale objects in SAR images. Experiments based on the GaoFen-3 SAR dataset, the SAR Ship Detection Dataset (SSDD), and the high-resolution SSDD-2.0 show that the proposed method can detect cross-scale objects, such as airports and ships, in SAR images, and the mean Average Precision (mAP) reaches 96.5% on the existing dataset, which is 8.1% higher than that of the Feature Pyramid Network (FPN) algorithm. Moreover, the overall performance of the proposed method is better than that of the latest YOLOv4 and other object detection algorithms.
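A minimal PyTorch sketch of the dilated-convolution idea: parallel 3×3 branches with increasing dilation rates enlarge the receptive field without downsampling, which helps retain both small- and large-object features. The branch count and rates are illustrative assumptions, not the FTPN configuration.

```python
import torch
import torch.nn as nn

class DilatedBranch(nn.Module):
    """Parallel dilated convolutions; padding == dilation keeps spatial size."""
    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
            for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)  # 1x1 fusion

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# x = torch.randn(1, 64, 128, 128); y = DilatedBranch(64)(x)  # same spatial size
```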
Radar Electronic Countermeasures
Retrieving the working modes of a multifunction radar from electronic reconnaissance data is a difficult problem that has attracted widespread attention in the field of electronic reconnaissance. It is also an important task when extracting value from big electromagnetic data and provides direct support for applications such as radar type recognition, working-state recognition, radar intention inference, and precise electronic jamming. Based on the assumption of model simplicity, this study defines a complexity measure for multifunction radar pulse trains and introduces semantic coding theory to analyze their temporal structure. The model-complexity-minimization criterion guides the semantic coding procedure to extract, from the pulse trains, radar pulse groups corresponding to different radar functions. Furthermore, based on the coded sequence of the pulse train, the switching matrix between different pulse groups is estimated, and the hierarchical working model of the multifunction radar is ultimately reconstructed. Simulations are conducted to verify the feasibility and performance of the new method. The simulation results indicate that the proposed method successfully uses coding theory to automatically extract pulse groups and rebuild operating models from multifunction radar pulse trains. Moreover, the method is robust to data noise such as missing pulses.
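The paper's coding rule is not given in the abstract, so the sketch below only illustrates the general idea with a greedy grammar-based coder: it repeatedly replaces the most frequent adjacent pulse-word pair with a new symbol while doing so shortens the overall description, and the surviving non-terminals play the role of pulse groups. A switching matrix is then estimated from the coded sequence. Integer pulse-word symbols are assumed.

```python
from collections import Counter
import numpy as np

def code_pulse_train(symbols):
    """Greedy description-length-driven coding of an integer symbol sequence.

    Replacing n occurrences of a pair saves n symbols at a rule cost of 2,
    so replacement continues only while n - 2 > 0 (a simple MDL heuristic).
    """
    rules, nxt = {}, max(symbols) + 1
    seq = list(symbols)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, n = pairs.most_common(1)[0]
        if n - 2 <= 0:                     # no further description-length gain
            break
        rules[nxt] = pair
        out, i = [], 0
        while i < len(seq):                # left-to-right, non-overlapping replace
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nxt); i += 2
            else:
                out.append(seq[i]); i += 1
        seq, nxt = out, nxt + 1
    return seq, rules

def switching_matrix(coded):
    """Empirical transition (switching) probabilities between pulse groups."""
    states = sorted(set(coded))
    idx = {s: i for i, s in enumerate(states)}
    m = np.zeros((len(states), len(states)))
    for a, b in zip(coded, coded[1:]):
        m[idx[a], idx[b]] += 1
    row = m.sum(axis=1, keepdims=True)
    row[row == 0] = 1.0                    # avoid divide-by-zero for terminal states
    return states, m / row
```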
Deep convolutional neural networks have achieved great success in recent years. They have been widely used in various applications such as optical and SAR image scene classification, object detection and recognition, semantic segmentation, and change detection. However, deep neural networks rely on large-scale, high-quality training data and can only guarantee good performance when the training and test data are independently sampled from the same distribution. Moreover, deep convolutional neural networks have been found to be vulnerable to subtle adversarial perturbations. This adversarial vulnerability prevents the deployment of deep neural networks in security-sensitive applications such as medical, surveillance, autonomous driving, and military scenarios. This paper first presents a holistic view of the security issues of deep convolutional neural network-based image recognition systems. The entire information processing chain is analyzed with regard to safety and security risks; in particular, poisoning attacks and evasion attacks on deep convolutional neural networks are analyzed in detail. The root causes of the adversarial vulnerability of deep recognition models are also discussed. Then, we give a formal definition of adversarial robustness and present a comprehensive review of adversarial attacks, adversarial defenses, and adversarial robustness evaluation. Rather than listing existing research, we focus on the threat models of the adversarial attack-defense arms race. We perform a detailed analysis of several representative adversarial attacks on SAR image recognition models and provide an example of adversarial robustness evaluation. Finally, several open questions are discussed in light of recent research progress from our workgroup. This paper can serve as a reference for developing more robust deep neural network-based image recognition models in dynamic adversarial scenarios.
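As a concrete instance of an evasion attack of the kind the paper analyzes, the sketch below implements the classic Fast Gradient Sign Method (FGSM): a one-step attack that perturbs the input along the sign of the loss gradient. The perturbation budget `eps` is illustrative, and `model` is any differentiable classifier; this is not presented as the paper's specific attack.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM evasion attack on inputs normalized to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss w.r.t. the true labels
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0)          # stay in the valid image range
```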
An optimal joint allocation of multijammer resources is proposed for jamming a Netted Radar System (NRS) in the case of multitarget penetration. First, the multitarget detection probabilities of the NRS in a suppressive jamming environment are used as the jamming performance metric. Then, a resource optimization model is established with two optimization variables, namely jamming beam and transmit power, that accounts for the detection-performance requirements of different targets. Particle swarm optimization is used to solve the resource optimization problem. Finally, considering the generalization error of the detection probability caused by the parameter uncertainty of the NRS, a robust resource optimization model is established. The simulation results show that the proposed optimization model effectively suppresses the NRS and reduces the probability that penetrating targets are detected. Compared with the traditional method, the robust algorithm improves the cooperative jamming performance of multiple jammers against the NRS and exhibits better robustness.
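A minimal particle swarm optimizer over a box-constrained decision vector is sketched below. In the paper's setting, the objective would map the jamming-beam assignment and transmit powers to the NRS multitarget detection probabilities; here `f` is an arbitrary callable, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def pso_minimize(f, dim, n=30, iters=200, lo=0.0, hi=1.0,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: inertia + cognitive + social velocity update on a box."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))            # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()              # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)               # enforce resource bounds
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# e.g. pso_minimize(lambda p: np.sum((p - 0.3) ** 2), dim=4)
```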
Radar Application Technology
Space target state estimation aims to accurately obtain a target's on-orbit attitude, structure, motion, and other parameters. This process helps observers analyze the target's intentions, check for potential faults and threats, and predict the development of on-orbit situations, and it is a core technology in the field of space situational awareness. Currently, the estimation of the on-orbit state of space targets mainly relies on external observations from high-performance sensors, such as radars, and a series of representative methods has emerged. This paper briefly introduces the development status of inverse synthetic aperture radar for space target monitoring in China and abroad. Then, several representative methods, including data feature matching, three-dimensional (3D) imaging reconstruction, and multi-look fusion estimation, are introduced. Data feature matching performs well when a priori 3D target model and the scene conditions are given. State estimation with 3D geometric reconstruction has the potential to describe the target finely, but demanding observation conditions are required. Finally, the future development trends of this field are forecast.
Multi-sensor fusion perception is one of the key technologies for intelligent driving and has become a hot topic in the field. However, because of the limited resolution of millimeter-wave radars; the interference of noise, clutter, and multipath; and the influence of weather on LiDAR, existing fusion algorithms cannot easily fuse the data of the two sensors accurately and obtain robust results. To address the problem of accurate and robust perception in intelligent driving, this study proposes a robust perception algorithm that combines millimeter-wave radar and LiDAR. Using a new spatial calibration method based on feature-based two-step registration, precise spatial synchronization of the 3D LiDAR and 2D radar point clouds is achieved. An improved millimeter-wave radar filtering algorithm is used to reduce the influence of noise and multipath on the radar point cloud. Then, according to the novel fusion method proposed in this study, the data of the two sensors are fused to obtain accurate and robust sensing results, which overcomes the degradation of LiDAR performance in smoke. Finally, we conducted multiple sets of experiments in real environments to verify the effectiveness and robustness of our method. Even in extreme environments such as smoke, we can still achieve accurate positioning and robust mapping. The environment map built by the proposed fusion method is more accurate than that built from a single sensor, and the localization error is reduced by at least 50%.
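The two-step registration details are not given in the abstract; the sketch below shows only the closed-form core of such a step, a Kabsch/Umeyama rigid fit between matched 2D feature points (e.g., radar detections and height-projected LiDAR features of the same corner reflectors). Known point correspondences are an assumption here.

```python
import numpy as np

def fit_rigid_2d(radar_pts, lidar_pts):
    """Closed-form rigid alignment of matched 2D points (Kabsch/Umeyama).

    radar_pts, lidar_pts: (N, 2) arrays of corresponding features.
    Returns R, t such that lidar ~ R @ radar + t.
    """
    mu_r, mu_l = radar_pts.mean(axis=0), lidar_pts.mean(axis=0)
    H = (radar_pts - mu_r).T @ (lidar_pts - mu_l)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_l - R @ mu_r
    return R, t
```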
Radar Signal and Data Processing
An aircraft wake is a pair of counter-rotating vortices generated by a flying aircraft, which can pose a serious hazard to following aircraft. Predicting its behavior is a key issue for air traffic safety management. To this end, we propose a prediction method based on data assimilation, which can predict the evolution and hazard area of the aircraft wake vortex from the vortex cores' positions and circulation. To construct the wake vortex prediction model, we use a linear-shear wind model and least-squares estimation. In addition, we use a data assimilation model based on the unscented Kalman filter to correct the predicted trajectories online. Our experimental results show that the proposed method performs well and runs stably, providing an effective tool for aircraft wake vortex prediction and support for establishing dynamic wake separation in air traffic management.
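To illustrate the linear-shear and least-squares component of the prediction model, the sketch below regresses the lateral vortex-core velocity on height to estimate a crosswind profile u(z) = u0 + s·z and then advects the core forward. The variable names and simplifications (fixed core height, no unscented-Kalman-filter correction) are assumptions for illustration, not the paper's full model.

```python
import numpy as np

def fit_linear_shear(t, y, z):
    """Least-squares fit of a linear crosswind shear u(z) = u0 + s*z.

    t, y, z: tracked vortex-core times, lateral positions, and heights.
    The lateral core velocity dy/dt is regressed on height z.
    """
    vy = np.gradient(y, t)                       # lateral core velocity
    A = np.column_stack([np.ones_like(z), z])    # design matrix [1, z]
    (u0, s), *_ = np.linalg.lstsq(A, vy, rcond=None)
    return u0, s

def predict_lateral(t0, y0, z0, u0, s, horizon, dt=1.0):
    """Advect the core laterally under the fitted shear (height held fixed)."""
    ts = t0 + dt * np.arange(1, int(horizon / dt) + 1)
    return ts, y0 + (u0 + s * z0) * (ts - t0)
```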
Airborne Synthetic Aperture Radar (SAR) location error is affected by the position/velocity measurement error of the aircraft, system time error, and other factors, and it is also related to the residual error of motion compensation. However, existing airborne SAR location models rarely consider the effect of residual motion error. Considering that motion and trajectory measurement errors are common in practice, this paper derives a location error transfer model for airborne SAR images based on motion compensation and frequency-domain imaging algorithms. The proposed model clarifies the influence of trajectory measurement error on location deviation when residual motion error exists and provides a method for error calibration measurement. Simulation experiments validate the correctness of the proposed location error transfer model. The proposed method obtains more accurate error calibration results than a location error model that ignores residual motion error, demonstrating the superiority of the proposed model.
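The paper's transfer model itself is not reproduced here, but the sketch below shows the generic first-order mechanism such a model rests on: propagating the covariance of platform and measurement parameters through a geolocation function via a numerical Jacobian. `locate` is a hypothetical user-supplied function mapping parameters to image-point coordinates.

```python
import numpy as np

def propagate_location_error(locate, p, cov_p, eps=1e-6):
    """First-order error propagation: cov_loc = J @ cov_p @ J.T.

    locate: callable mapping a parameter vector (platform position/velocity,
            timing, ...) to a location vector. J is its numerical Jacobian.
    """
    p = np.asarray(p, dtype=float)
    f0 = np.asarray(locate(p), dtype=float)
    J = np.empty((f0.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = eps * max(1.0, abs(p[k]))        # scaled finite-difference step
        J[:, k] = (np.asarray(locate(p + dp)) - f0) / dp[k]
    return J @ cov_p @ J.T
```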
With the aging of the population, fall detection has gradually become a research hotspot. For human fall detection using millimeter-wave radar, this study proposes a Range-Doppler heat map Sequence detection Network (RDSNet) model that combines a convolutional neural network with a long short-term memory network. First, features are extracted with the convolutional neural network. The feature vectors corresponding to the dynamic sequence are then input to the long short-term memory network, which learns the temporal correlations of the heat map sequence, and finally the detection results are obtained by the classifier. Moreover, diverse movement data from different subjects were collected with a millimeter-wave radar to build a range-Doppler heat map dataset. Comparative experiments show that the proposed RDSNet model reaches an accuracy of 96.67% with a computational delay of no more than 50 ms. The RDSNet model has good generalization capability and provides new technical ideas for human fall detection and human posture recognition.
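A minimal PyTorch sketch of the CNN-plus-LSTM pattern the abstract describes: a small CNN encodes each range-Doppler frame, an LSTM aggregates the frame sequence, and a linear head classifies it. All layer sizes are illustrative assumptions, not the RDSNet configuration.

```python
import torch
import torch.nn as nn

class RDSequenceNet(nn.Module):
    """CNN per range-Doppler frame, LSTM across frames, classifier on top."""
    def __init__(self, n_classes=2, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)      # fall / non-fall logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, 1, H, W) sequence of range-Doppler heat maps
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, t, -1)   # encode each frame
        out, _ = self.lstm(f)                          # temporal aggregation
        return self.head(out[:, -1])                   # classify the sequence

# logits = RDSequenceNet()(torch.randn(2, 10, 1, 64, 64))
```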