2017 Vol. 6, No. 5

Reviews
Classification is one of the core components in the interpretation of Polarimetric Synthetic Aperture Radar (PolSAR) images. A new PolSAR image classification approach employs the structural properties of the Riemannian manifold formed by PolSAR covariance matrices. In this paper, we first review the Riemannian manifold metrics generally used in PolSAR image analysis. Then, we describe a sparse coding method for the covariance matrices in the Riemannian manifold. For supervised classification, we propose a PolSAR image classification method that considers spatial information based on kernel space sparse coding. As for unsupervised PolSAR image classification, a method that takes advantage of Riemannian sparse induced similarity is proposed. Experimental results on EMISAR and AIRSAR data demonstrate the effectiveness of the proposed methods.
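As a concrete illustration of the Riemannian metrics reviewed here (our own sketch, not code from the paper), the log-Euclidean and affine-invariant distances between Hermitian positive-definite covariance matrices can be written in a few lines of NumPy:

```python
import numpy as np

def herm_logm(M):
    """Matrix logarithm of a Hermitian positive-definite matrix
    via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.conj().T

def log_euclidean_dist(A, B):
    """Log-Euclidean distance: ||log(A) - log(B)||_F."""
    return np.linalg.norm(herm_logm(A) - herm_logm(B), 'fro')

def affine_invariant_dist(A, B):
    """Affine-invariant Riemannian distance:
    ||log(A^{-1/2} B A^{-1/2})||_F."""
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = (V / np.sqrt(w)) @ V.conj().T
    return np.linalg.norm(herm_logm(A_inv_sqrt @ B @ A_inv_sqrt), 'fro')
```

Both distances respect the curved geometry of the covariance-matrix manifold, which is what sparse coding in the Riemannian setting relies on; for instance, both assign distance sqrt(3) between the 3x3 identity and e times the identity.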
Backscattering of radar targets is sensitive to the relative geometry between target orientations and the radar line of sight. This scattering diversity greatly complicates information processing and applications for imaging radar, as represented by polarimetric Synthetic Aperture Radar (SAR), and has become one of the main bottlenecks in the interpretation of the target scattering mechanism and quantitative applications. In this work, we review and introduce a new interpretation of the target scattering mechanism in the rotation domain along the radar line of sight. This concept includes the recently established uniform polarimetric matrix rotation theory and polarimetric coherence pattern visualization and interpretation in the rotation domain. The core idea of target scattering interpretation in the rotation domain is to extend the amount of target information acquired at a given geometry to the rotation domain, which then provides fundamentals for the deep mining and utilization of target scattering information. This work mainly focuses on the investigation of derived new polarimetric feature sets and application demonstrations. Comparison study results validate the promising potential for the application of the established interpretation framework in the rotation domain with respect to target discrimination and classification.
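The rotation-domain idea can be made concrete with a small NumPy sketch (our own illustration under standard Pauli-basis conventions, not the authors' code): the coherency matrix is rotated about the radar line of sight, and any element sampled over the rotation angle yields a rotation-domain pattern.

```python
import numpy as np

def rotate_coherency(T, theta):
    """Rotate a 3x3 Pauli-basis coherency matrix about the radar
    line of sight by theta: T(theta) = R(theta) T R(theta)^H."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    R = np.array([[1, 0, 0],
                  [0, c, s],
                  [0, -s, c]], dtype=complex)
    return R @ T @ R.conj().T

def t33_pattern(T, n=181):
    """Sample the T33 element over the rotation domain, giving a
    1-D 'pattern' that reveals orientation-dependent scattering."""
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n)
    return thetas, np.array([rotate_coherency(T, th)[2, 2].real
                             for th in thetas])
```

Since R(theta) is unitary, the total power SPAN = tr(T) is invariant under the rotation, while individual elements such as T33 vary with theta and carry the hidden orientation information that rotation-domain features exploit.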
GaoFen-3 (GF-3) is the first commercial C-band multi-polarimetric Synthetic Aperture Radar (SAR) satellite launched by China. Its all-day and all-weather observation capability offers significant advantages for national sea area use dynamic monitoring. We thoroughly discuss both the imaging modes and the standard preprocessing of GF-3 imagery in the context of national sea area use dynamic monitoring. We present reclamation and aquaculture as significant examples of dynamic monitoring, report identification and classification results using various imaging modes of the GF-3 satellite in comparison with existing approaches, and finally elaborate on directions for future research.
Paper
GF-3, the first C-band full-polarimetric Synthetic Aperture Radar (SAR) satellite with a spatial resolution up to 1 m, has multiple strip and scan imaging modes. In this paper, we propose a maritime ship detection algorithm that detects ship targets via pixel classification in a Bayesian framework and employ effective enhancement methods to improve detection performance based on the data characteristics. We compare and analyze the results of detection experiments using the proposed algorithm with those of several Constant False Alarm Rate (CFAR) algorithms. The experimental results verify the effectiveness of the proposed algorithm.
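For reference, the CFAR baselines compared against can be sketched in their simplest cell-averaging form (an illustrative 1-D toy under an exponential-clutter assumption; the window sizes and false-alarm rate are arbitrary, and this is not the GF-3 processing chain):

```python
import numpy as np

def ca_cfar_1d(power, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR on a 1-D power profile: estimate the
    clutter level from 'train' cells on each side of the cell under
    test (skipping 'guard' cells), then compare the cell against an
    adaptive threshold alpha * clutter, with alpha set by the desired
    probability of false alarm for exponential clutter."""
    n = len(power)
    n_train = 2 * train
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    det = np.zeros(n, dtype=bool)
    for i in range(guard + train, n - guard - train):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + train + 1]
        clutter = (left.sum() + right.sum()) / n_train
        det[i] = power[i] > alpha * clutter
    return det
```

Because the threshold adapts to the local clutter estimate, a strong ship return stands out while the false-alarm rate over homogeneous sea clutter stays fixed.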
In this paper, we present a Synthetic Aperture Radar (SAR) image target recognition algorithm based on multi-feature multiple representation learning classifier fusion. First, we extract three features from the SAR images, namely principal component analysis, wavelet transform, and Two-Dimensional Slice Zernike Moments (2DSZM) features. Second, we harness the sparse representation classifier and the cooperative representation classifier with the above-mentioned features to obtain six predictive labels. Finally, we adopt classifier fusion to obtain the final recognition decision. We investigated three different classifier fusion algorithms in our experiments, and the results demonstrate that Bayesian decision fusion gives the best recognition performance. The method based on multi-feature multiple representation learning classifier fusion integrates the discrimination of multiple features and combines the sparse and cooperative representation classification performance to gain complementary advantages and to improve recognition accuracy. The experiments are based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, and they demonstrate the effectiveness of the proposed approach.
Object reconstruction is of vital importance in Synthetic Aperture Radar (SAR) image analysis. In this paper, we propose a novel method based on shape priors to reconstruct aircraft in high-resolution SAR images. The method mainly contains two stages. In the shape prior modeling stage, a generative deep learning method is used to model deep shape priors; a novel framework is then proposed in the reconstruction stage, which integrates the shape priors into the process of reconstruction. Specifically, to address the issue of object rotation, a novel pose estimation method is proposed to obtain candidate poses, which avoids an exhaustive search over all poses. In addition, an energy function combining a scattering region term and a shape prior term is proposed; this is optimized via an iterative optimization algorithm to achieve the goal of object reconstruction. To the best of our knowledge, this is the first attempt to reconstruct objects with complex shapes in SAR images using deep shape priors. Experiments are conducted on a dataset acquired by TerraSAR-X, and the results demonstrate the accuracy and robustness of the proposed method.
SAR image classification is an important task in SAR image interpretation. Supervised learning methods, such as the Convolutional Neural Network (CNN), demand samples that are accurately labeled. However, this presents a major challenge in SAR image labeling. Due to their unique imaging mechanism, SAR images are seriously affected by speckle, geometric distortion, and incomplete structural information. Thus, SAR images have a strong non-intuitive property, which causes difficulties in SAR image labeling and weakens the learning and generalization performance of many classifiers (including CNNs). In this paper, we propose a Probability Transition CNN (PTCNN) for patch-level SAR image classification with noisy labels. Based on the classical CNN, PTCNN builds a bridge between noise-free labels and their noisy versions via a noisy-label transition layer. As such, we derive a new CNN model trained with a noisily labeled training dataset that can potentially revise noisy labels and improve learning capacity with noisily labeled data. We use a 16-class land cover dataset and the MSTAR dataset to demonstrate the effectiveness of our model. Our experimental results show the PTCNN model to be robust with respect to label noise and demonstrate its promising classification performance compared with the classical CNN model. Therefore, the proposed PTCNN model could lower the label-quality requirements on training data and has a variety of practical applications.
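The noisy-label transition layer can be illustrated with a toy NumPy sketch (our own assumption-based illustration, not the PTCNN implementation): the base network's posterior over clean labels is mapped through a row-stochastic transition matrix Q, with Q[i, j] = P(observed label j | true label i), before the cross-entropy loss is taken against the noisy label.

```python
import numpy as np

def noisy_label_posterior(p_clean, Q):
    """Posterior over noisy labels, given the clean-label posterior
    p_clean and a row-stochastic transition matrix Q."""
    return p_clean @ Q

def noisy_cross_entropy(p_clean, Q, observed_label):
    """Cross-entropy of the observed (possibly noisy) label under
    the transition-adjusted posterior; training minimizes this."""
    return -np.log(noisy_label_posterior(p_clean, Q)[observed_label])
```

With Q equal to the identity, the layer reduces to ordinary cross-entropy; modeling or estimating Q is what lets the network absorb label noise instead of fitting it.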
Terrain classification is an important application for understanding and interpreting Polarimetric Synthetic Aperture Radar (PolSAR) images. One common PolSAR terrain classification approach uses roll-invariant feature parameters such as H/A/α/SPAN. However, the backscattering response of a target is closely related to its orientation and attitude. This frequently introduces ambiguity in the interpretation of scattering mechanisms and limits the accuracy of PolSAR terrain classification that uses only roll-invariant feature parameters. To address this problem, the uniform polarimetric matrix rotation theory, which interprets a target's scattering properties as its polarimetric matrix is rotated along the radar line of sight and derives a series of polarimetric features to describe hidden information of the target in the rotation domain, was proposed. Based on this theory, in this study, we apply the polarimetric features in the rotation domain to PolSAR terrain discrimination and classification, and develop a PolSAR terrain classification method that uses both the polarimetric features in the rotation domain and the roll-invariant features H/A/α/SPAN. The method feeds both the selected polarimetric feature parameters in the rotation domain and H/A/α/SPAN into a Support Vector Machine (SVM) classifier, achieving better classification performance by complementing the terrain discrimination abilities of both. Results from comparison experiments based on AIRSAR and UAVSAR data demonstrate that, compared with the conventional method that uses only H/A/α/SPAN as SVM input, the proposed method achieves higher classification accuracy and better robustness. For fifteen terrain classes of AIRSAR data, the total classification accuracy of the proposed method was 92.3%, higher than the 91.1% of the conventional method. Moreover, for seven terrain classes of multi-temporal UAVSAR data, the averaged total classification accuracy of the proposed method was 95.72%, much higher than the 87.80% of the conventional method. These results demonstrate that the proposed method is more robust for multi-temporal data. The research also demonstrates that mining and extracting polarimetric scattering information of a target deep in the rotation domain provides a feasible new approach for PolSAR image interpretation and application.
Unsupervised classification is a significant step in the automated interpretation of Polarimetric Synthetic Aperture Radar (PolSAR) images. However, determining the number of clusters in this process is still a challenging problem. To this end, we propose a region-based unsupervised classification method for PolSAR images by introducing Wishart mixture models and a Density Peaks Clustering (DPC) algorithm. More precisely, the Simple Linear Iterative Clustering (SLIC) algorithm is first used to segment the PolSAR image into superpixels. Subsequently, Wishart mixture models are adopted to model each superpixel, and the pairwise distances between different superpixels are measured by Cauchy-Schwarz divergence. Finally, the unsupervised classification result of the PolSAR image is obtained via clustering by fast search and find of density peaks. The experimental results obtained from different PolSAR images demonstrate that the proposed method is effective.
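The final step, clustering by fast search and find of density peaks, rests on two quantities per region: a local density rho and a separation delta. A minimal NumPy sketch of these (our illustration of the Rodriguez-Laio procedure, not the paper's code, operating on any precomputed distance matrix such as the Cauchy-Schwarz divergences between superpixels):

```python
import numpy as np

def density_peaks_core(dist, dc):
    """For each sample: rho = number of neighbors within cutoff dc;
    delta = distance to the nearest sample of higher density (for
    the densest sample, the largest distance to any sample).
    Cluster centers are samples with both large rho and large delta,
    so the number of clusters need not be fixed in advance."""
    n = dist.shape[0]
    rho = (dist < dc).sum(axis=1) - 1      # exclude self
    delta = np.empty(n)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        delta[i] = dist[i, higher].min() if higher.size else dist[i].max()
    return rho, delta
```

Samples that score high on both quantities are isolated density maxima, which is exactly what lets the cluster count emerge from the data rather than being specified beforehand.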
More features and contextual information can be extracted and exploited to improve classification accuracy in complex Polarimetric Synthetic Aperture Radar (PolSAR) imagery classification. However, overfitting and feature interference caused by the increased dimensionality of the features lead to poor classification performance. To address these problems, a PolSAR image classification method based on combined Conditional Random Fields (CRF) is proposed in this paper. Unlike the traditional way of utilizing multiple feature information, wherein multiple feature vectors are directly stacked to form a new one, combined CRF first forms multiple feature subsets according to feature type and uses these subsets to train the same CRF model, obtaining multiple child classifiers and thus multiple classification results. The final classification result is then gained by fusing the child classification results, with the normalized overall classification accuracy of each classifier as the weight. Extensive experiments conducted on two real-world PolSAR images demonstrate that the accuracy of the proposed method is significantly higher than that of any single child classifier. For the two data sets used for performance evaluation, the classification accuracies of the proposed method are 13.38% and 11.55% higher, respectively, than those of the feature-stacking method, and 13.78% and 14.75% higher, respectively, than those of the support vector machine-based method.
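The fusion rule can be sketched in a few lines of NumPy (our illustrative toy; array names and shapes are assumptions, not the paper's code): each child classifier casts a per-pixel vote for its predicted class weighted by its normalized overall accuracy, and the fused label is the arg-max of the accumulated votes.

```python
import numpy as np

def fuse_child_classifiers(label_maps, accuracies, n_classes):
    """Fuse per-pixel label maps from several child classifiers,
    weighting each classifier's vote by its normalized overall
    classification accuracy."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()                       # normalized accuracy weights
    votes = np.zeros(label_maps[0].shape + (n_classes,))
    for labels, weight in zip(label_maps, w):
        # accumulate a one-hot vote scaled by this classifier's weight
        votes += weight * (labels[..., None] == np.arange(n_classes))
    return votes.argmax(axis=-1)
```

A more accurate child classifier can thus overrule a majority of weaker ones only when its weight exceeds their combined weight, which is the intended complementary-advantage behavior.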
This paper proposes a classification method for the intertidal area using quad-polarimetric synthetic aperture radar data. A systematic comparison of four well-known multipolarization features is provided so that appropriate features can be selected based on the characteristics of the intertidal area. Analysis results show that the two most powerful multipolarization features are polarimetric entropy and anisotropy. Furthermore, through our detailed analysis of the scattering mechanisms underlying the polarimetric entropy, the Generalized Extreme Value (GEV) distribution is employed to describe the statistical characteristics of the intertidal area based on extreme value theory. Consequently, a new classification method is proposed by combining GEV mixture models and the EM algorithm. Finally, experiments are performed on Radarsat-2 quad-polarization data of the Dongtan intertidal area, Shanghai, to validate our method.

This paper presents a novel Synthetic Aperture Radar (SAR) image change detection method, which integrates effective image preprocessing and Convolutional Neural Network (CNN) classification. To validate the efficiency of the proposed method, two SAR images of the same devastated region obtained by TerraSAR-X before and after the 2011 Tohoku earthquake are investigated. During image preprocessing, image backgrounds such as mountains and water bodies are extracted and removed using a Digital Elevation Model (DEM) and Otsu's thresholding method. A CNN is employed to automatically extract hierarchical feature representations from the data. The SAR image is then classified with the obtained features. The classification accuracies on the training and testing datasets are 98.25% and 97.86%, respectively. The changed areas between the two SAR images are detected using the image difference method. The accuracy and efficiency of the proposed method are validated. In addition, change detection results of the proposed method are compared against traditional methods, showing that the proposed method achieves higher accuracy.
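Otsu's thresholding, used above to strip image backgrounds, picks the gray level that maximizes the between-class variance of the image histogram. A standard textbook sketch (not the paper's code):

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the threshold maximizing between-class variance
    sigma_b^2(k) = (mG*w0(k) - m(k))^2 / (w0(k) * (1 - w0(k)))."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()                 # gray-level probabilities
    w0 = np.cumsum(p)                     # class-0 (background) weight
    m = np.cumsum(p * centers)            # cumulative mean
    mG = m[-1]                            # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mG * w0 - m) ** 2 / (w0 * (1 - w0))
    return centers[np.argmax(np.nan_to_num(sigma_b))]
```

In a pipeline like the one described, such a threshold would separate background pixels (e.g. water) from the regions passed on to the CNN classifier.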

As a pre-processing technique, superpixel segmentation algorithms should offer high computational efficiency, accurate boundary adherence, and regular shape in homogeneous regions. A fast superpixel segmentation algorithm based on Iterative Edge Refinement (IER) has been shown to be applicable to optical images. However, it is difficult to obtain ideal results when IER is applied directly to PolSAR images, due to speckle noise and the small or slim regions in PolSAR images. To address these problems, in this study, the unstable pixel set is initialized as all the pixels in the PolSAR image instead of the initial grid edge pixels. In the local relabeling of the unstable pixels, the fast revised Wishart distance is utilized instead of the Euclidean distance in CIELAB color space. Then, a post-processing procedure based on a dissimilarity measure is employed to remove isolated small superpixels as well as to retain strong point targets. Finally, extensive experiments based on a simulated image and a real-world PolSAR image from Airborne Synthetic Aperture Radar (AIRSAR) are conducted, showing that the proposed algorithm, compared with three state-of-the-art methods, performs better in terms of several commonly used evaluation criteria, with high computational efficiency, accurate boundary adherence, and homogeneous regularity.
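The revised Wishart distance family used for relabeling can be illustrated with a small sketch; the symmetric variant below is one common form from the PolSAR literature and is shown as an assumption, since the paper's exact "fast revised Wishart distance" may differ in detail:

```python
import numpy as np

def sym_revised_wishart(Ci, Cj):
    """Symmetric revised Wishart dissimilarity between two q x q
    polarimetric covariance (or coherency) matrices:
        d = 0.5 * tr(Ci^{-1} Cj + Cj^{-1} Ci) - q,
    which is zero when Ci == Cj and avoids log-determinants,
    keeping per-pixel evaluation cheap."""
    q = Ci.shape[0]
    return 0.5 * np.trace(np.linalg.solve(Ci, Cj)
                          + np.linalg.solve(Cj, Ci)).real - q
```

Replacing the CIELAB Euclidean distance with a Wishart-derived measure like this one respects the statistics of multilook PolSAR covariance data, which is why it adheres better to boundaries under speckle.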