Most Cited
Deep learning, exemplified by deep neural networks, has revolutionized computer vision, with deep learning-based algorithms surpassing conventional algorithms in performance by a significant margin. This paper reviews our work on applying deep convolutional neural networks to target recognition and terrain classification using SAR images. A convolutional neural network is employed to automatically extract a hierarchical feature representation from the data, on the basis of which target recognition and terrain classification are conducted. Experimental results on the MSTAR benchmark dataset show that a deep convolutional network achieves a state-of-the-art classification accuracy of 99% on the 10-class task. For polarimetric SAR image classification, we propose complex-valued convolutional neural networks for complex SAR images. This algorithm achieves a state-of-the-art accuracy of 95% on the 15-class task of the Flevoland benchmark dataset.
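As a rough illustration of the complex-valued convolution that such a network builds on, the following NumPy sketch applies a single complex 2-D kernel to a complex SAR patch. The shapes, values, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def complex_conv2d(x, w):
    """Naive complex-valued 2-D convolution (CNN-style cross-correlation, valid padding).

    x: complex input patch, shape (H, W)
    w: complex kernel, shape (kH, kW)
    Each complex multiply-accumulate (a+jb)(c+jd) = (ac-bd) + j(ad+bc),
    so one complex convolution costs four real convolutions.
    """
    H, W = x.shape
    kH, kW = w.shape
    oH, oW = H - kH + 1, W - kW + 1
    out = np.zeros((oH, oW), dtype=complex)
    for i in range(oH):
        for j in range(oW):
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)  # complex MAC over the window
    return out

# Toy example: a 6x6 complex-valued SAR patch and a 3x3 complex kernel.
rng = np.random.default_rng(0)
patch = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
kernel = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
feature_map = complex_conv2d(patch, kernel)
print(feature_map.shape)  # (4, 4)
```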
Space-Time Adaptive Processing (STAP), used to suppress strong clutter and jamming in airborne radar data, is a multidimensional adaptive filtering technique that jointly combines the signals from the elements of an antenna array and from multiple pulses of a coherent radar waveform. As a key technology for improving the performance of airborne radar, it has attracted much attention from the radar research community and from militarily powerful nations in recent years. In this paper, the research and development status of STAP technology is reviewed, including methodologies, experimental systems, and applications, with a focus on the key technical problems encountered during its development. The application of STAP technology in equipment is then introduced. Finally, development trends, future directions, and areas worthy of further research are presented.
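As a minimal sketch of the adaptive filtering idea underlying STAP (not taken from the paper), the snippet below forms an MVDR-style space-time weight vector w = R⁻¹s / (sᴴR⁻¹s) from an assumed clutter-plus-noise covariance R and a space-time steering vector s. The array sizes, covariance model, and frequencies are illustrative.

```python
import numpy as np

def stap_weights(R, s):
    """Optimal (MVDR-style) space-time weight vector w = R^{-1} s / (s^H R^{-1} s).

    R: (NM x NM) clutter-plus-noise space-time covariance matrix
    s: (NM,) space-time steering vector at the target angle/Doppler
    """
    Ri_s = np.linalg.solve(R, s)       # R^{-1} s without explicit matrix inversion
    return Ri_s / (s.conj() @ Ri_s)    # normalize for a distortionless target response

# Toy example with N = 4 array elements and M = 8 coherent pulses.
N, M = 4, 8
NM = N * M
rng = np.random.default_rng(1)
# Illustrative covariance: noise plus a few synthetic interference components.
A = rng.standard_normal((NM, 6)) + 1j * rng.standard_normal((NM, 6))
R = A @ A.conj().T + np.eye(NM)
# Space-time steering vector: Kronecker product of temporal and spatial steering.
fd, fs = 0.2, 0.1                       # normalized Doppler and spatial frequencies
b = np.exp(2j * np.pi * fd * np.arange(M))
a = np.exp(2j * np.pi * fs * np.arange(N))
s = np.kron(b, a)
w = stap_weights(R, s)
print(abs(w.conj() @ s))                # ~1.0: unit gain in the target direction/Doppler
```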
Phased array radar can simultaneously form multiple beams that scan without inertia, allowing flexible beam pointing. In this paper, we propose a joint beam and dwell-time allocation strategy for multi-target tracking in a phased array radar system, aiming to track multiple targets with fewer system resources. First, we formulate an optimization problem that minimizes the total dwell time over all targets while guaranteeing that a predetermined target-tracking accuracy requirement is met. The Bayesian Cramér-Rao Lower Bound (BCRLB) is adopted as the tracking performance metric, since it provides a lower bound on the error of the target state estimate. Second, after proving that the optimization problem is nonconvex, we propose a two-step decomposition algorithm that first determines the beam pointing and then allocates the beam dwell time. Finally, we perform multi-target tracking based on the resource allocation results. Simulation results show that, compared with an operating mode in which resources are allocated uniformly, the proposed strategy effectively saves resources and achieves better tracking performance for the worst-tracked targets.
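To make the dwell-time allocation step concrete, the sketch below uses a deliberately simplified scalar stand-in for the BCRLB, in which posterior Fisher information grows linearly with dwell time (via integrated SNR). The model, constants, and function names are assumptions for illustration and are not the formulation used in the paper.

```python
def min_dwell(j_pred, sigma2, eta):
    """Smallest dwell time meeting the accuracy requirement eta under a scalar model.

    Assumed model: posterior Fisher information J(t) = j_pred + t / sigma2,
    so the error bound is 1/J(t).  Requiring 1/J(t) <= eta and solving for the
    smallest t gives t = sigma2 * (1/eta - j_pred), clipped at zero.
    """
    t = sigma2 * (1.0 / eta - j_pred)
    return max(t, 0.0)              # no dwell needed if the prior already meets eta

# Toy example: 4 targets with different predicted information and sensor noise.
j_pred = [5.0, 2.0, 0.5, 8.0]       # illustrative predicted Fisher information per target
sigma2 = [1.0, 1.5, 0.8, 1.2]       # illustrative single-look measurement variance
eta = 0.1                           # common error-bound (accuracy) requirement
dwells = [min_dwell(j, s, eta) for j, s in zip(j_pred, sigma2)]
print(dwells, "total dwell:", sum(dwells))
```

In this simplified setting, targets that are already well predicted receive little or no dwell time, which mirrors the motivation for non-uniform allocation in the abstract.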
This paper presents a novel Synthetic Aperture Radar (SAR) image change-detection method that integrates effective image preprocessing with Convolutional Neural Network (CNN) classification. To validate the efficiency of the proposed method, two SAR images of the same devastated region, acquired by TerraSAR-X before and after the 2011 Tohoku earthquake, are investigated. During image preprocessing, image backgrounds such as mountains and water bodies are extracted and removed using a Digital Elevation Model (DEM) and Otsu's thresholding method. A CNN is employed to automatically extract a hierarchical feature representation from the data, and the SAR image is then classified using the thereby obtained features. The classification accuracies on the training and testing datasets are 98.25% and 97.86%, respectively. The changed areas between the two SAR images are detected using an image-differencing method. The accuracy and efficiency of the proposed method are validated, and its change-detection results are compared with those of traditional methods. Results show that the proposed method achieves higher accuracy than traditional change-detection methods.
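Two classical ingredients named in this abstract, Otsu's thresholding and image differencing, can be sketched as follows. The synthetic images and the use of Otsu's method on the difference map are illustrative and do not reproduce the paper's DEM-based preprocessing or CNN pipeline.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 probability up to each threshold
    w1 = 1.0 - w0                           # class-1 probability
    m = np.cumsum(p * centers)              # cumulative mean
    mt = m[-1]                              # total mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mt * w0 - m) ** 2 / (w0 * w1)
    return centers[np.nanargmax(between)]

def change_map(before, after, change_thresh):
    """Pixelwise change detection by image differencing and thresholding."""
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff > change_thresh

# Toy example on synthetic intensity images; real inputs would be co-registered
# SAR amplitude images with water bodies and mountains masked out beforehand.
rng = np.random.default_rng(2)
before = rng.random((64, 64))
after = before.copy()
after[20:30, 20:30] += 0.8                  # simulated changed region
t = otsu_threshold(np.abs(after - before))  # data-driven change threshold
mask = change_map(before, after, t)
print(mask.sum(), "changed pixels")
```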