HUA Wenqiang, WANG Shuang, GUO Yanhe, et al. Semi-supervised PolSAR image classification based on the neighborhood minimum spanning tree[J]. Journal of Radars, 2019, 8(4): 458–470. doi: 10.12000/JR18104
Citation: LENG Xiangguang, JI Kefeng, XIONG Boli, et al. Statistical modeling methods of single-channel complex-valued SAR images for ship detection [J]. Journal of Radars, 2020, 9(3): 477–496. doi: 10.12000/JR20070

Statistical Modeling Methods of Single-channel Complex-valued SAR Images for Ship Detection

DOI: 10.12000/JR20070
Funds:  The National Natural Science Foundation of China (61601035, 61971426)
More Information
  • Corresponding author: LENG Xiangguang, luckight@163.com; JI Kefeng, jikefeng@nudt.edu.cn
  • Received Date: 2020-05-28
  • Rev Recd Date: 2020-06-19
  • Available Online: 2020-06-30
  • Publish Date: 2020-06-01
  • Synthetic Aperture Radar (SAR), which features rich imaging modes, wide coverage, and high resolution, is an effective technique for long-term, dynamic, and large-scale monitoring of the ocean. Under the assumption of fully developed speckle, traditional ship detection methods in single-channel SAR images focus mainly on amplitude information. Since conventional assumptions are not strictly true in high-resolution situations, this prevents the full exploitation of phase or complex-valued information in single-channel SAR images. In this paper, with a focus on ship detection applications, we categorize the methods used in the statistical modeling of single-channel complex-valued SAR images as amplitude-, phase-, or complex-valued-based. After providing a brief overview of amplitude statistical modeling methods, we focus on phase and complex-valued statistical modeling methods of single-channel SAR images, describing their modeling processes and parameter estimation methods. We then present the results of our recent ship detection research based on complex-valued statistical information in single-channel SAR images and make suggestions regarding future research.

     

  • PolSAR image classification is an important component of PolSAR image understanding and interpretation. It has attracted increasing attention in recent years and has been widely applied in many fields, such as land-cover type discrimination, ground target detection, geological exploration, and vegetation species identification[1-3]. According to how labeled and unlabeled samples are used, PolSAR terrain classification methods can be divided into three categories: unsupervised classification[4,5], supervised classification[6,7], and semi-supervised classification[8,9].

    For PolSAR image classification, supervised methods usually achieve better results than unsupervised ones, but they require sufficient labeled samples for training. In practice, labeled samples are difficult to obtain and require considerable manpower and material resources, whereas unlabeled data are relatively easy to acquire and also carry information that can effectively help train a classifier. Therefore, semi-supervised learning, which uses a large number of unlabeled samples to supplement a small number of labeled samples during training, has attracted wide attention and become a research hotspot. In recent years, many semi-supervised classification methods have been proposed, such as self-training[10], co-training and tri-training[11,12], label-propagation clustering, graph-based semi-supervised classification[13,14], and semi-supervised neural networks[15-17]. However, relatively few semi-supervised methods have been studied for PolSAR image classification. Hansch[18] proposed a semi-supervised PolSAR classification method based on clustering, which combines the semi-supervised idea with a clustering algorithm and uses selected unlabeled samples to constrain the cluster centers, thereby obtaining better classification results. To exploit the spatial information in PolSAR data, Liu et al.[19] proposed a PolSAR classification method based on neighborhood-constrained semi-supervised feature extraction. To make the unlabeled samples selected during semi-supervised training more reliable and diverse, Wang et al.[20] proposed a semi-supervised PolSAR classification method based on improved co-training, in which diverse samples are selected through co-training and the reliability of the selected samples is increased through pre-selection. In addition, combining deep learning with the semi-supervised idea, Geng et al.[21] proposed a superpixel-constrained deep neural network for semi-supervised PolSAR classification. However, all of these semi-supervised methods still require a certain number of labeled samples, and it is difficult for them to obtain good results when only a few labeled pixels are available. To address this problem, this paper proposes a semi-supervised PolSAR image classification method based on the neighborhood minimum spanning tree. The method uses the neighborhood minimum spanning tree to assist semi-supervised learning: during self-training, more reliable unlabeled samples are selected with the aid of the neighborhood minimum spanning tree to enlarge the training set and improve the classifier.

    Self-training is a typical semi-supervised learning method. A model trained on the available labeled data is used to predict the unlabeled samples; samples with high reliability, together with their predicted labels, are then added to the labeled set. By iterating this self-training procedure, the number of training samples gradually increases and the classifier improves step by step. The framework is shown in Fig. 1. As Fig. 1 shows, the key to self-training is selecting reliable samples: if incorrect samples are added to the training set, the classifier's performance will degrade rather than improve. How to select high-confidence samples is therefore the core of the self-training algorithm. In PolSAR image classification with only a few labeled samples, the classifier trained on them is weak, and samples selected directly from its output can hardly be guaranteed to be reliable; adding mislabeled samples to the labeled set would degrade the classifier. To increase the reliability of the selected samples, this paper combines the spatial information between PolSAR image pixels and proposes a sample selection method based on the neighborhood minimum spanning tree, which assists the selection process and improves the reliability of the chosen samples.
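    To make the generic scheme of Fig. 1 concrete, the following minimal Python sketch (scikit-learn assumed, with a hypothetical probability threshold as the reliability criterion) trains a classifier on the labeled pool, predicts the unlabeled pool, and moves only high-confidence predictions into the labeled pool. It illustrates plain self-training only; the selection rule actually proposed in this paper is the neighborhood-MST-assisted rule developed in the following sections.

```python
import numpy as np
from sklearn.svm import SVC

def self_training(X_lab, y_lab, X_unlab, n_rounds=5, conf_thresh=0.9):
    """Generic self-training loop: repeatedly add high-confidence predictions
    on the unlabeled pool to the labeled pool (hypothetical threshold rule)."""
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = SVC(kernel="rbf", probability=True)
    for _ in range(n_rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        prob = clf.predict_proba(X_unlab)           # class posteriors
        conf, pred = prob.max(axis=1), prob.argmax(axis=1)
        keep = conf >= conf_thresh                   # "reliable" samples only
        if not keep.any():
            break
        labels = clf.classes_[pred[keep]]            # map argmax back to labels
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, labels])
        X_unlab = X_unlab[~keep]
    return clf
```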

    Figure  1.  Self-training method

    The main contributions of this paper are therefore: (1) for PolSAR image classification with very few labeled samples, a new semi-supervised classification method based on the neighborhood minimum spanning tree is proposed, which uses the information of both unlabeled and labeled samples to effectively improve classification accuracy; (2) to increase the reliability of the samples selected during self-training, a sample selection method based on the neighborhood minimum spanning tree is proposed, which builds on the minimum spanning tree and the spatial information between PolSAR image pixels and is tailored to the characteristics of PolSAR classification.

    In PolSAR data, each pixel can be represented by a coherency matrix T or a covariance matrix C:

    C = \begin{bmatrix} |S_{\rm HH}|^2 & \sqrt{2}S_{\rm HH}S_{\rm HV}^* & S_{\rm HH}S_{\rm VV}^* \\ \sqrt{2}S_{\rm HV}S_{\rm HH}^* & 2|S_{\rm HV}|^2 & \sqrt{2}S_{\rm HV}S_{\rm VV}^* \\ S_{\rm VV}S_{\rm HH}^* & \sqrt{2}S_{\rm VV}S_{\rm HV}^* & |S_{\rm VV}|^2 \end{bmatrix}        (1)

    where HH denotes horizontal transmit and horizontal receive, VV vertical transmit and vertical receive, and HV horizontal transmit and vertical receive. From this representation it can be seen that the covariance matrix C is a complex conjugate-symmetric (Hermitian) matrix with real diagonal entries. The 9-dimensional real vector derived from the covariance matrix is commonly used as a feature representation of PolSAR data and has performed well in PolSAR image processing[9]. This vector is

    view = [C_{11}, C_{22}, C_{33}, real(C_{12}), imag(C_{12}), real(C_{13}), imag(C_{13}), real(C_{23}), imag(C_{23})]        (2)

    where real(·) denotes the real part and imag(·) the imaginary part.
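    As a concrete illustration of Eq. (2), the short numpy sketch below converts one pixel's 3×3 covariance matrix into the 9-dimensional real feature vector; the matrix values in the example are made up.

```python
import numpy as np

def covariance_to_feature(C):
    """Map a 3x3 complex covariance matrix to the 9-D real vector of Eq. (2):
    the three real diagonal terms plus real/imaginary parts of the
    upper-triangular off-diagonal terms."""
    C = np.asarray(C, dtype=complex)
    return np.array([
        C[0, 0].real, C[1, 1].real, C[2, 2].real,
        C[0, 1].real, C[0, 1].imag,
        C[0, 2].real, C[0, 2].imag,
        C[1, 2].real, C[1, 2].imag,
    ])

# Toy Hermitian 3x3 matrix (values are illustrative only)
A = np.array([[2.0, 0.3 + 0.1j, 0.2 - 0.4j],
              [0.3 - 0.1j, 1.5, 0.1 + 0.2j],
              [0.2 + 0.4j, 0.1 - 0.2j, 3.0]])
print(covariance_to_feature(A))
```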

    Fig. 2(a) shows PolSAR data of the San Francisco area, USA; Figs. 2(b)–2(j) show grayscale images of each element of the 9-dimensional feature vector derived from its covariance matrix, with the intensity enhanced by a factor of 10. The grayscale image of each element roughly captures the overall structure of the original scene, and the images of the different elements differ from and complement one another, so the vector can be used directly as the feature description of a PolSAR image.

    Figure  2.  The gray value of 9 elements in PolSAR covariance matrix

    To increase the reliability of the samples selected during self-training and to gradually refine the base classifier during training, this paper combines the spatial neighborhood information between PolSAR image pixels and proposes a sample selection method based on the neighborhood minimum spanning tree.

    In graph theory, a connected graph without cycles is called a tree, and a tree formed by removing all cycles from a connected graph is called a spanning tree of that graph; the spanning tree with the smallest total weight is the Minimum Spanning Tree (MST)[22]. Formally, in a weighted undirected connected graph G, let W(v_i, v_j) denote the weight of the edge between nodes i and j; the spanning tree of G whose edge weights sum to the minimum is the minimum spanning tree of G. Fig. 3 shows a weighted connected graph G and its minimum spanning tree.

    Figure  3.  Weighted undirected graph G and its minimum spanning tree

    In Fig. 3(a), every pair of nodes is connected by a weighted edge; for the undirected graph G, different spanning trees can be obtained by starting from different nodes. Fig. 3(b) shows the minimum spanning tree obtained by traversing all nodes along the lightest edges; for this graph G, Fig. 3(b) is its unique minimum spanning tree.

    This paper uses the Prim algorithm[23] to compute the minimum spanning tree. Starting from a given vertex, the algorithm repeatedly selects the vertex closest to the current tree and adds the connecting edge to the tree. Its formal description is as follows (a minimal code sketch is given after the steps):

    Step 1 Input: a weighted undirected graph G with vertex set V and weighted edge set E;

    Step 2 Initialization: V_r = {x}, where x is the initial vertex, and E_r = {} is empty;

    Step 3 Repeat the following until all vertices have been added to V_r: (1) from E select the minimum-weight edge [u, v] such that u ∈ V_r and v ∈ V but v ∉ V_r; (2) add v to V_r and add the edge [u, v] to E_r;

    Step 4 Output: the minimum spanning tree represented by the sets V_r and E_r.
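    The steps above can be sketched in Python as follows; the priority queue holds candidate edges leaving the current tree, and the example graph and its weights are arbitrary.

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm: grow a tree from `start`, always adding the lightest
    edge joining the current tree to a vertex not yet in it.
    `graph` maps each vertex to a list of (weight, neighbor) pairs."""
    in_tree = {start}
    mst_edges = []
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue
        in_tree.add(v)
        mst_edges.append((u, v, w))
        for w2, nxt in graph[v]:
            if nxt not in in_tree:
                heapq.heappush(heap, (w2, v, nxt))
    return mst_edges

# Small example graph (undirected, weights are illustrative only)
g = {
    "a": [(2, "b"), (4, "c")],
    "b": [(2, "a"), (1, "c"), (5, "d")],
    "c": [(4, "a"), (1, "b"), (3, "d")],
    "d": [(5, "b"), (3, "c")],
}
print(prim_mst(g, "a"))  # [('a', 'b', 2), ('b', 'c', 1), ('c', 'd', 3)]
```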

    Analyzing the MST algorithm shows that the way a minimum spanning tree grows matches the PolSAR classification process well: each pixel of the PolSAR image corresponds to a node of the tree, and the similarity between pixels plays the role of the edge weights, so the MST method is well suited to PolSAR image classification. However, building the MST first requires constructing the undirected graph G with vertex set V and edge set E, and for an N×N PolSAR image this means computing N²(N²−1)/2 edges, which is extremely time consuming. Since PolSAR classification assigns a label to every pixel and, by the spatial relationship between pixels, adjacent pixels are more likely to be similar, a Prim MST algorithm based on the spatial neighborhood of pixels is proposed. It is described as follows:

    Step 1 Construct the undirected graph G(V, E), where V is the set of vertices (the labeled pixels), and compute with Eq. (3) the set E of edges between each vertex and its 8-neighborhood;

    Step 2 For each vertex, select the minimum-weight edge within its 8-neighborhood, assign the vertex's label to the pixel at the other end of that edge, and add that pixel to the vertex set V as a labeled sample;

    Step 3 Repeat Steps 1–2 until all pixels in the image have been selected.

    This method requires computing the edge weight between vertices. Since PolSAR data follow the complex Wishart distribution, the similarity between two pixels of a PolSAR image is usually measured by the Wishart distance[24]:

    w_{i,j} = \frac{1}{2}{\rm Tr}\left( (T_i)^{-1}T_j + (T_j)^{-1}T_i \right) - q        (3)

    where Tr(·) denotes the matrix trace and T_i and T_j are the coherency matrices of pixels i and j. For a radar whose transmitter and receiver are co-located, reciprocity gives q = 3; when the transmitter and receiver are separate, q = 4.
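    A direct numpy sketch of Eq. (3) is given below. The two coherency matrices in the example are random Hermitian positive-definite matrices generated only for illustration, and q = 3 corresponds to the monostatic (reciprocal) case.

```python
import numpy as np

def wishart_distance(Ti, Tj, q=3):
    """Symmetric revised Wishart distance of Eq. (3):
    w_ij = 0.5 * Tr(Ti^{-1} Tj + Tj^{-1} Ti) - q."""
    Ti = np.asarray(Ti, dtype=complex)
    Tj = np.asarray(Tj, dtype=complex)
    d = 0.5 * np.trace(np.linalg.solve(Ti, Tj) + np.linalg.solve(Tj, Ti))
    return d.real - q

# Toy Hermitian positive-definite matrices (illustrative only)
rng = np.random.default_rng(0)
def random_hpd(n=3):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return A @ A.conj().T + n * np.eye(n)

T1, T2 = random_hpd(), random_hpd()
print(wishart_distance(T1, T1))  # ~0: distance of a matrix to itself
print(wishart_distance(T1, T2))  # > 0 in general
```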

    Fig. 4 illustrates the growth process of this algorithm. The green square denotes the initial vertex, the gray squares denote its neighboring pixels, and the number in each square is the distance between the center pixel and that neighbor; the smaller the distance, the more similar the pixels. In the first learning step, the neighbor connected by the smallest edge weight (distance '1') is selected, as shown in Fig. 4(b); then the minimum-weight edge within the neighborhood of the new vertex set is selected, as shown in Fig. 4(c), and the new pixel is added to the tree rooted at the initial vertex. This is repeated until all pixels have been selected.
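    The growth process of Fig. 4 can be sketched as follows: starting from the labeled seed pixels, the lightest edge between the current trees and an unlabeled 8-neighbor is repeatedly popped from a priority queue, and the newly reached pixel inherits the label of the tree it joins. The pixel-to-pixel weight is passed in as a callable; in this paper it would be the Wishart distance of Eq. (3), whereas the toy example below simply uses an intensity difference.

```python
import heapq
import numpy as np

def grow_neighborhood_mst(shape, seeds, weight):
    """Grow labels from seed pixels over an image grid.
    `seeds` maps (row, col) -> label; `weight(p, q)` returns the edge weight
    between two neighboring pixels. Returns a full label map."""
    rows, cols = shape
    labels = -np.ones(shape, dtype=int)
    heap = []

    def push_neighbors(p):
        r, c = p
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                q = (r + dr, c + dc)
                if (dr or dc) and 0 <= q[0] < rows and 0 <= q[1] < cols:
                    if labels[q] < 0:
                        heapq.heappush(heap, (weight(p, q), q, labels[p]))

    for p, lab in seeds.items():
        labels[p] = lab
    for p in seeds:
        push_neighbors(p)
    while heap:
        _, q, lab = heapq.heappop(heap)
        if labels[q] >= 0:
            continue                      # already claimed by a lighter edge
        labels[q] = lab                   # pixel inherits the label of its tree
        push_neighbors(q)
    return labels

# Toy example: a 5x5 "image" with two seeds; weight = absolute intensity difference
img = np.array([[0, 0, 0, 9, 9],
                [0, 0, 0, 9, 9],
                [0, 0, 5, 9, 9],
                [0, 0, 9, 9, 9],
                [0, 0, 9, 9, 9]], dtype=float)
w = lambda p, q: abs(img[p] - img[q])
print(grow_neighborhood_mst(img.shape, {(0, 0): 0, (4, 4): 1}, w))
```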

    Figure  4.  The spanning process of neighborhood minimum spanning tree

    To obtain high classification accuracy for PolSAR images when only a few labeled samples are available, this paper builds on traditional self-training and proposes a semi-supervised PolSAR image classification method based on the neighborhood minimum spanning tree. The core of the method is to select reliable samples from the large pool of unlabeled samples during self-training, add them to the labeled set, enlarge the number of labeled samples, and gradually improve the classifier, thereby increasing the classification accuracy. To this end, a sample selection method based on the neighborhood minimum spanning tree is proposed that combines the MST with the spatial information of PolSAR image pixels and increases the reliability of the selected samples. The overall framework of the proposed method is shown in Fig. 5, and the specific steps are as follows (a sketch of the overall loop is given after the steps):

    Figure  5.  Semi-supervised PolSAR classification based on the neighborhood minimum spanning tree

    Step 1 To reduce the influence of speckle noise on the PolSAR data, filter the data with the refined Lee filter[25] using a 7×7 window;

    Step 2 Taking the initially labeled pixels as the initial vertices, construct the undirected graph G and grow multiple neighborhood minimum spanning trees; all pixels within one tree share the same label;

    Step 3 Using the initial labeled samples, train an SVM classifier with the vector view as the feature of each pixel, and use the trained SVM to test the samples labeled by the neighborhood minimum spanning trees;

    Step 4 Select the samples whose classifier predictions agree with the labels given by the neighborhood minimum spanning trees, add them to the initial labeled set, and update the labeled set;

    Step 5 Repeat Steps 2 to 4 for t iterations until a satisfactory classifier is obtained;

    Step 6 Use the trained classifier to test the remaining samples.
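    Assuming a scikit-learn SVM as the base classifier, Steps 3–5 can be sketched as the loop below. The arrays `mst_labels` (labels proposed by the neighborhood minimum spanning trees) and `X` (the 9-dimensional features of Eq. (2)) are assumed to be precomputed, and the agreement test of Step 4 is what distinguishes this loop from plain self-training; in the full method the trees would also be regrown from the enlarged labeled set at each round.

```python
import numpy as np
from sklearn.svm import SVC

def mst_assisted_self_training(X, y_init, mst_labels, n_iter=8):
    """One possible reading of Steps 3-5: at each round, train an RBF-SVM on the
    current labeled set, test the pixels pre-labeled by the neighborhood MST,
    and add only those pixels whose SVM prediction agrees with their MST label.
    X          : (n_pixels, 9) feature vectors of Eq. (2)
    y_init     : (n_pixels,) initial labels, -1 for unlabeled
    mst_labels : (n_pixels,) labels proposed by the neighborhood MST, -1 if none
    """
    y = y_init.copy()
    clf = SVC(kernel="rbf")
    for _ in range(n_iter):
        labeled = y >= 0
        clf.fit(X[labeled], y[labeled])
        candidates = np.where((y < 0) & (mst_labels >= 0))[0]
        if candidates.size == 0:
            break
        pred = clf.predict(X[candidates])
        agree = candidates[pred == mst_labels[candidates]]  # Step 4: keep agreements
        if agree.size == 0:
            break
        y[agree] = mst_labels[agree]      # enlarge the labeled set (Step 5 loops)
        # In the full method the MSTs would be regrown from the enlarged set here.
    labeled = y >= 0
    clf.fit(X[labeled], y[labeled])
    return clf, y
```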

    Three sets of real PolSAR data are used: (1) L-band NASA/JPL AIRSAR data of the Flevoland area, the Netherlands, acquired in August 1989, containing 750×1024 pixels with a spatial resolution of 6 m×12.1 m and mainly covering 15 crop classes, as shown in Fig. 6; (2) C-band Radarsat-2 data of the Flevoland area, the Netherlands, acquired in April 2008, containing 1400×1200 pixels with a spatial resolution of 12 m×8 m and mainly covering four classes (urban, water, forest, and cropland), as shown in Fig. 7; (3) C-band Radarsat-2 data of the San Francisco area, USA, acquired in 2008, containing 1300×1300 pixels with a spatial resolution of 12 m×8 m and mainly covering five classes (high-density urban, low-density urban, water, vegetation, and developed area), as shown in Fig. 8.

    Figure  6.  Classification results of the Flevoland data acquired by AIRSAR L band
    Figure  7.  Classification results of the Flevoland data acquired by Radarsat-2 C band
    Figure  8.  Classification results of the San Francisco data acquired by Radarsat-2 C band

    The SVM is used as the base classifier, with a radial basis function (RBF) kernel and 5-fold cross-validation. To verify the effectiveness of the proposed algorithm, it is compared with the traditional self-training-based semi-supervised method (Self-training)[10], the supervised SVM classifier (also with an RBF kernel and 5-fold cross-validation)[26], and the supervised Wishart method[27]. The results are evaluated with the overall classification accuracy (OA) and the Kappa coefficient; every experiment is run 10 times, and the average is reported as the final result.
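    The base-classifier setup can be reproduced with scikit-learn along the following lines; the hyper-parameter grid is a hypothetical choice, since the text specifies only the RBF kernel and 5-fold cross-validation.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def make_base_classifier():
    """RBF-kernel SVM tuned by 5-fold cross-validation (grid values are
    illustrative; the paper specifies only the kernel and the 5 folds)."""
    param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1]}
    return GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)

# clf = make_base_classifier(); clf.fit(X_train, y_train); clf.predict(X_test)
```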

    In this experiment, different numbers of labeled samples per class (10, 8, 6, 4) are used as training samples. Fig. 6(a) shows the Pauli-decomposition RGB image and Fig. 6(a1) the ground truth. The results are given in Fig. 6, Table 1, and Table 2. Fig. 6(b) shows the classification result of the proposed method, Fig. 6(c) that of the traditional Self-training algorithm, Fig. 6(d) that of the supervised Wishart method, and Fig. 6(e) that of the SVM method. Table 1 lists the classification accuracy of the different methods with 10 training samples per class.

    Table  1.  Classification accuracy of the Flevoland area acquired by AIRSAR L band (%)

    Class          Wishart   SVM     Self-training   Proposed
    Stembeans      91.48     70.07   90.82           98.75
    Rapeseed       61.83     38.02   67.14           59.58
    Bare soil      97.51     86.89   70.97           96.75
    Potatoes       79.47     58.38   80.27           81.99
    Beet           92.35     85.61   95.05           94.60
    Wheat 2        67.43     71.80   67.39           89.86
    Peas           93.10     77.70   95.24           97.56
    Wheat 3        82.08     82.42   94.33           97.05
    Lucerne        84.53     40.77   81.67           95.06
    Barley         81.96     98.29   98.62           98.39
    Wheat          81.46     68.28   85.34           85.41
    Grasses        66.49     65.03   81.75           80.08
    Forest         84.21     61.03   77.66           94.77
    Water          46.85     65.32   69.39           93.35
    Building       81.77     78.91   2.18            85.58
    OA             79.40     70.30   77.19           89.92
    Table  2.  Classification results of the Flevoland area acquired by AIRSAR L band with different numbers of training samples

    Method           4 samples         6 samples         8 samples         10 samples
                     OA (%)   Kappa    OA (%)   Kappa    OA (%)   Kappa    OA (%)   Kappa
    Wishart          74.62    0.7215   76.19    0.7459   78.78    0.7656   80.26    0.7831
    SVM              56.07    0.5423   58.12    0.5611   64.42    0.6102   70.30    0.6682
    Self-training    63.36    0.6025   68.42    0.6569   73.89    0.7146   77.23    0.7489
    Proposed         79.33    0.7888   83.06    0.8093   86.90    0.8416   89.92    0.8852

    As Table 1 shows, the proposed method achieves an overall accuracy of 89.92%, which is 12.73% higher than Self-training, 19.62% higher than SVM, and 10.52% higher than the supervised Wishart method, and the per-class accuracy of the proposed method is higher than that of the comparison methods for most classes. This is mainly because the proposed semi-supervised algorithm effectively uses the information of both labeled and unlabeled samples and employs the neighborhood MST strategy to select highly reliable samples, which improves the base classifier. The accuracy of the proposed method on Rapeseed, however, is only 59.58%, 7.56% lower than that of Self-training. As Fig. 6(b) shows, part of Rapeseed is classified as Wheat 2 and Wheat 3 by the proposed method, mainly because the leaf shapes of these crops are very similar and hard to distinguish. Comparing Fig. 6(c), in the Self-training result part of Wheat 2 and Wheat 3 is misclassified as Rapeseed, so although Self-training has a higher accuracy on Rapeseed, its accuracies on Wheat 2 and Wheat 3 are lower than those of the proposed method. In addition, although the accuracy of the proposed method on Bare soil is lower than that of the Wishart method, it still exceeds 96%. Moreover, as Fig. 6(d) shows, the Wishart method misclassifies a large part of the Water region as Bare soil, so its accuracy on Water is only 46.85%, far below the 93.35% of the proposed method. Table 2 shows that for every number of labeled samples the classification accuracy of the proposed method is higher than that of the comparison methods, and its Kappa coefficient is also higher; comparing the classification maps in Fig. 6 also shows that the regional consistency of the proposed method is better than that of the other methods.

    In this experiment, different numbers of labeled samples per class (10, 8, 6, 4) are again used as training samples. Fig. 7(a) shows the Pauli-decomposition RGB image and Fig. 7(a1) the ground truth. The results are given in Fig. 7, Table 3, and Table 4. Fig. 7(b) shows the result of the proposed algorithm, Fig. 7(c) that of the Self-training method, Fig. 7(d) that of the supervised Wishart method, and Fig. 7(e) that of the SVM method. Table 3 lists the classification accuracy of the different methods with 10 labeled samples per class.

    Table  3.  Classification accuracy of the Flevoland area acquired by Radarsat-2 C band (%)

    Class       Wishart   SVM     Self-training   Proposed
    Urban       69.61     54.75   63.93           71.44
    Water       98.71     96.83   99.10           98.82
    Forest      91.65     65.25   73.83           83.63
    Cropland    55.27     78.97   79.23           82.24
    OA          78.81     73.95   79.02           84.03
    Table  4.  Classification results of the Flevoland area acquired by Radarsat-2 C band with different numbers of training samples

    Method           4 samples         6 samples         8 samples         10 samples
                     OA (%)   Kappa    OA (%)   Kappa    OA (%)   Kappa    OA (%)   Kappa
    Wishart          69.21    0.5803   73.65    0.6239   76.81    0.6854   78.81    0.7026
    SVM              50.79    0.4153   64.79    0.5471   70.05    0.5968   73.95    0.6394
    Self-training    65.69    0.5233   70.41    0.5911   74.40    0.6605   79.45    0.7144
    Proposed         76.71    0.6768   79.29    0.7235   82.02    0.7644   84.03    0.7882

    表3表4可以看出,本文方法的分类结果明显高于传统的Self-training方法,SVM方法和Wishart分类方法。由表4可以看出当每类训练样本数量10时,本文分类方法的分类正确率为84.03%,高于Self-training分类方法4.58%,高于SVM分类方法10.08%,高于监督Wishart方法5.22%。由表3可以看出本文方法在Urban和Cropland区域的分类正确率都要高于对比方法,但是在Forest区域的分类正确率低于监督Wishart方法的分类正确率。由图7(d)可以看出,这主要是因为Wishart方法中一部分Cropland区域被分为了Forest类,虽然Wishart方法的Water区域分类正确率高,但是Cropland区域的分类正确率只有55.27%,明显低于本文所提方法,而且本文方法Forest和Cropland区域总的分类正确率也要高于Wishart方法。而由表4可以看出选择不同数量的标记样本时,本文方法的分类正确率都要高于对比方法;同时本文方法的Kappa系数也高于对比方法的Kappa系数,而且通过对比图7中本文方法和对比方法的分类结果图,也可以看出本文方法的分类结果的区域一致性也比其它的对比方法要好。因此可以得出相同的结论,本文所提方法要明显优于传统的分类方法,尤其是在标记样本较少的情况下。

    In this experiment, different numbers of labeled samples per class (10, 8, 6, 4) are used as training samples. Fig. 8(a) shows the Pauli-decomposition RGB image and Fig. 8(a1) the ground truth. The results are given in Fig. 8, Table 5, and Table 6. Fig. 8(b) shows the result of the proposed method, Fig. 8(c) that of the Self-training method, Fig. 8(d) that of the supervised Wishart method, and Fig. 8(e) that of the SVM method. Table 5 lists the classification accuracy of the different methods with 10 labeled samples per class.

    Table  5.  Classification accuracy of the San Francisco area acquired by Radarsat-2 C band (%)

    Class                Wishart   SVM     Self-training   Proposed
    Water                98.70     90.04   98.04           99.92
    Vegetation           91.03     78.51   84.45           91.50
    Low-Density Urban    81.30     42.31   70.18           75.05
    High-Density Urban   42.58     77.15   33.01           68.27
    Developed            55.26     24.00   56.16           58.81
    OA                   73.77     62.40   68.37           78.71
    Table  6.  Classification results of the San Francisco area acquired by Radarsat-2 C band with different numbers of training samples

    Method           4 samples         6 samples         8 samples         10 samples
                     OA (%)   Kappa    OA (%)   Kappa    OA (%)   Kappa    OA (%)   Kappa
    Wishart          68.09    0.5181   70.44    0.5439   72.49    0.5867   73.77    0.6011
    SVM              50.24    0.2817   51.25    0.2905   56.31    0.3628   62.40    0.4342
    Self-training    52.34    0.3126   58.62    0.3669   63.27    0.4357   68.42    0.5308
    Proposed         70.87    0.5482   73.15    0.5986   75.23    0.6284   78.71    0.6852

    表5表6可以看出,本文方法的分类结果明显高于传统的Self-training方法,SVM方法和Wishart分类方法。由表6可以看出当每类训练样本数量10时,本文分类方法的分类正确率为78.71%,高于Self-training分类方法10.29%,高于SVM分类方法16.31%,高于监督Wishart方法4.94%。由表5可以看出本文方法在大部分区域的分类正确率都要高于对比方法,但是在Low-Density Urban区域的分类正确率低于监督Wishart方法的分类正确率。由图8(d)可以看出,这主要是因为Wishart方法中Low-Density Urban区域和High-Density Urban区域没有被有效地区分开,一部分的High-Density Urban区域被错分为Low-Density Urban,导致虽然Wishart方法的Low-Density Urban区域分类正确率高,但是High-Density Urban区域的分类正确率只有42.58%,明显低于本文所提方法,而且在本文方法中这两个区域总的分类正确率也要高于Wishart方法。而由表6可以看出当标记样本数量不同时,本文方法的分类正确率都要高于对比方法;对比本文方法的Kappa系数和对比方法的Kappa系数,可以发现本文方法的Kappa系数要明显高于对比方法的,而且通过对比图8中本文方法和对比方法的分类结果图,也可以看出本文方法的分类结果的区域一致性也比其它的对比方法要好。因此我们可以得出相同的结论,本文所提方法要明显优于传统的分类方法,尤其是在标记样本较少的情况下。

    The preceding experiments have verified the effectiveness of the proposed method; this section analyzes the influence of the number of iterations (self-training rounds) on the results. Fig. 9(a) shows the effect of the number of iterations on classification accuracy: the accuracy increases with the number of iterations, and the gain gradually levels off once the number of iterations exceeds 8. Fig. 9(b) shows the time cost: the time consumed grows rapidly with the number of iterations, mainly because more iterations mean more labeled samples and more MST seed points, so both the MST construction and the self-training of the classifier take longer.

    Figure  9.  The effects of the number of iterations in the proposed method

    This paper has proposed a semi-supervised PolSAR image classification method based on the neighborhood minimum spanning tree. The method makes effective use of both labeled and unlabeled samples: highly reliable samples are selected with the aid of the neighborhood minimum spanning tree and added to the labeled set, which is continually enlarged through self-training so that the classifier is refined and a high classification accuracy can be obtained even with very few labeled samples. Tests on three sets of real PolSAR data show that the proposed method yields satisfactory classification results, especially when labeled samples are very scarce, and experiments with different proportions of training samples show that it achieves higher accuracy than the traditional methods. Furthermore, the analysis of the number of iterations indicates that the unlabeled samples selected by the proposed method are reliable, and adding them to enlarge the labeled set gradually improves the classifier's performance.

  • [1]
    LEE J S and POTTIER E. Polarimetric Radar Imaging: From Basics to Applications[M]. Boca Raton: CRC Press, 2009.
    [2]
    OLIVER C and QUEGAN S. Understanding Synthetic Aperture Radar Images[M]. Boston: SciTech Publishing, 2004.
    [3]
    DENG Yunkai, ZHAO Fengjun, and WANG Yu. Brief analysis on the development and application of Spaceborne SAR[J]. Journal of Radars, 2012, 1(1): 1–10. doi: 10.3724/SP.J.1300.2012.20015
    [4]
    YANG Jianyu. Multi-directional evolution trend and law analysis of radar ground imaging technology[J]. Journal of Radars, 2019, 8(6): 669–692. doi: 10.12000/JR19099
    [5]
    JIN Yaqiu. Multimode remote sensing intelligent information and target recognition: Physical intelligence of microwave vision[J]. Journal of Radars, 2019, 8(6): 710–716. doi: 10.12000/JR19083
    [6]
    DU Lan, WANG Zhaocheng, WANG Yan, et al. Survey of research progress on target detection and discrimination of single-channel SAR images for complex scenes[J]. Journal of Radars, 2020, 9(1): 34–54. doi: 10.12000/JR19104
    [7]
    CRISP D J. The state-of-the-art in ship detection in synthetic aperture radar imagery[R]. DATO-RR-0272, 2004.
    [8]
    GAO G, GAO S, and HE J. Maritime Surveillance with SAR Data[M]. Chapter. Ship Detection. IET book, in publishing.
    [9]
    GAO Gui. Statistical modeling of SAR images: A survey[J]. Sensors, 2010, 10(1): 775–795. doi: 10.3390/s100100775
    [10]
    VESPE M and GREIDANUS H. SAR image quality assessment and indicators for vessel and oil spill detection[J]. IEEE Transactions on Geoscience and Remote Sensing, 2012, 50(11): 4726–4734. doi: 10.1109/TGRS.2012.2190293
    [11]
    VELOTTO D, SOCCORSI M, and LEHNER S. Azimuth ambiguities removal for ship detection using full polarimetric X-band SAR data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2014, 52(1): 76–88. doi: 10.1109/TGRS.2012.2236337
    [12]
    GREIDANUS H, CLAYTON P, INDREGARD M, et al. Benchmarking operational SAR ship detection[C]. 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, USA, 2004: 4215–4218.
    [13]
    OUCHI K. Current status on vessel detection and classification by synthetic aperture radar for maritime security and safety[C]. The 38th Symposium on Remote Sensing for Environmental Sciences, Gamagori, Aichi, Japan, 2016: 5–12.
    [14]
    PAN Zongxu, LIU Lei, QIU Xiaolan, et al. Fast vessel detection in Gaofen-3 SAR images with ultrafine strip-map mode[J]. Sensors, 2017, 17(7): 1578. doi: 10.3390/s17071578
    [15]
    AN Quanzhi, PAN Zongxu, and YOU Hongjian. Ship detection in Gaofen-3 SAR images based on sea clutter distribution analysis and deep convolutional neural network[J]. Sensors, 2018, 18(2): 334. doi: 10.3390/s18020334
    [16]
    WANG Shigang, WANG Min, YANG Shuyuan, et al. New hierarchical saliency filtering for fast ship detection in high-resolution SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(1): 351–362. doi: 10.1109/TGRS.2016.2606481
    [17]
    IERVOLINO P and GUIDA R. A novel ship detector based on the generalized-likelihood ratio test for SAR imagery[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2017, 10(8): 3616–3630. doi: 10.1109/JSTARS.2017.2692820
    [18]
    LENG Xiangguang, JI Kefeng, YANG Kai, et al. A bilateral CFAR algorithm for ship detection in SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2015, 12(7): 1536–1540. doi: 10.1109/LGRS.2015.2412174
    [19]
    LENG Xiangguang, JI Kefeng, ZHOU Shilin, et al. An adaptive ship detection scheme for spaceborne SAR imagery[J]. Sensors, 2016, 16(9): 1345. doi: 10.3390/s16091345
    [20]
    LENG Xiangguang, JI Kefeng, XING Xiangwei, et al. Area ratio invariant feature group for ship detection in SAR imagery[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(7): 2376–2388. doi: 10.1109/JSTARS.2018.2820078
    [21]
    LENG Xiangguang, JI Kefeng, ZHOU Shilin, et al. Ship detection based on complex signal kurtosis in single-channel SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(9): 6447–6461. doi: 10.1109/TGRS.2019.2906054
    [22]
    LENG Xiangguang, JI Kefeng, ZHOU Shilin, et al. Discriminating ship from radio frequency interference based on noncircularity and non-Gaussianity in Sentinel-1 SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(1): 352–363. doi: 10.1109/TGRS.2018.2854661
    [23]
    EL-DARYMLI K, MCGUIRE P, GILL E W, et al. Characterization and statistical modeling of phase in single-channel synthetic aperture radar imagery[J]. IEEE Transactions on Aerospace and Electronic Systems, 2015, 51(3): 2071–2092. doi: 10.1109/TAES.2015.140711
    [24]
    EL-DARYMLI K, MOLONEY C, GILL E, et al. Nonlinearity and the effect of detection on single-channel synthetic aperture radar imagery[C]. OCEANS 2014-TAIPEI, Taipei, China, 2014: 1–7.
    [25]
    OLLILA E. On the circularity of a complex random variable[J]. IEEE Signal Processing Letters, 2008, 15: 841–844. doi: 10.1109/LSP.2008.2005050
    [26]
    OLLILA E, KOIVUNEN V, and POOR H V. Complex-valued signal processing—essential models, tools and statistics[C]. 2011 Information Theory and Applications Workshop, La Jolla, USA, 2011: 1–10.
    [27]
    OLLILA E, ERIKSSON J, and KOIVUNEN V. Complex elliptically symmetric random variables—Generation, characterization, and circularity tests[J]. IEEE Transactions on Signal Processing, 2011, 59(1): 58–69. doi: 10.1109/TSP.2010.2083655
    [28]
    ERIKSSON J and KOIVUNEN V. Complex random vectors and ICA models: Identifiability, uniqueness, and separability[J]. IEEE Transactions on Information Theory, 2006, 52(3): 1017–1029. doi: 10.1109/TIT.2005.864440
    [29]
    ERIKSSON J, OLLILA E, and KOIVUNEN V. Essential statistics and tools for complex random variables[J]. IEEE Transactions on Signal Processing, 2010, 58(10): 5400–5408. doi: 10.1109/TSP.2010.2054085
    [30]
    NOVEY M, ADALI T, and ROY A. Circularity and Gaussianity detection using the complex generalized Gaussian distribution[J]. IEEE Signal Processing Letters, 2009, 16(11): 993–996. doi: 10.1109/LSP.2009.2028412
    [31]
    NOVEY M, ADALI T, and ROY A. A complex generalized Gaussian distribution—Characterization, generation, and estimation[J]. IEEE Transactions on Signal Processing, 2010, 58(3): 1427–1433. doi: 10.1109/TSP.2009.2036049
    [32]
    NOVEY M, OLLILA E, and ADALI T. On testing the extent of noncircularity[J]. IEEE Transactions on Signal Processing, 2011, 59(11): 5632–5637. doi: 10.1109/TSP.2011.2162951
    [33]
    SCHREIER P J and SCHARF L L. Statistical Signal Processing of Complex-valued Data: The Theory of Improper and Noncircular Signals[M]. Cambridge: Cambridge University Press, 2010.
    [34]
    WU Wenjin, GUO Huadong, LI Xinwu, et al. Urban land use information extraction using the ultrahigh-resolution Chinese airborne SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(10): 5583–5599. doi: 10.1109/TGRS.2015.2425658
    [35]
    WU Wenjin, LI Xinwu, GUO Huadong, et al. Noncircularity parameters and their potential applications in UHR MMW SAR data sets[J]. IEEE Geoscience and Remote Sensing Letters, 2016, 13(10): 1547–1551. doi: 10.1109/LGRS.2016.2595762
    [36]
    SOCCORSI M and DATCU M. Stochastic models of SLC HR SAR images[C]. 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 2007: 3887–3890.
    [37]
    SOCCORSI M, DATCU M, and GLEICH D. TerraSAR-X: Complex Image Inversion for Feature Extraction[C]. 2008 IEEE International Geoscience and Remote Sensing Symposium, Boston, USA, 2008: III-99–III-102.
    [38]
    LENG Xiangguang, JI Kefeng, and ZHOU Shilin. Research on azimuth ambiguity removal methods in SAR imagery[C]. The 5th China High Resolution Earth Observation Conference, Xi’an, China, 2018.
    [39]
    LENG Xiangguang, JI Kefeng, ZHOU Shilin, et al. Azimuth ambiguities removal in littoral zones based on multi-temporal SAR images[J]. Remote Sensing, 2017, 9(8): 866. doi: 10.3390/rs9080866
    [40]
    JAKEMAN E and PUSEY P. A model for non-Rayleigh sea echo[J]. IEEE Transactions on Antennas and Propagation, 1976, 24(6): 806–814. doi: 10.1109/TAP.1976.1141451
    [41]
    GOLDSTEIN G B. False-alarm regulation in log-normal and Weibull clutter[J]. IEEE Transactions on Aerospace and Electronic Systems, 1973, AES–9(1): 84–92.
    [42]
    TRUNK G V and GEORGE S F. Detection of targets in non-Gaussian sea clutter[J]. IEEE Transactions on Aerospace and Electronic Systems, 1970, AES–6(5): 620–628.
    [43]
    DANA R A and KNEPP D L. The impact of strong scintillation on space based radar design II: Noncoherent detection[J]. IEEE Transactions on Aerospace and Electronic Systems, 1986, AES–22(1): 34–46.
    [44]
    TISON C, NICOLAS J M, TUPIN F, et al. A new statistical model for Markovian classification of urban areas in high-resolution SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2004, 42(10): 2046–2057. doi: 10.1109/TGRS.2004.834630
    [45]
    KURUOGLU E E and ZERUBIA J. Modeling SAR images with a generalization of the Rayleigh distribution[J]. IEEE Transactions on Image Processing, 2004, 13(4): 527–533. doi: 10.1109/TIP.2003.818017
    [46]
    MIGLIACCIO M, FERRARA G, GAMBARDELLA A, et al. A physically consistent speckle model for marine SLC SAR images[J]. IEEE Journal of Oceanic Engineering, 2007, 32(4): 839–847. doi: 10.1109/JOE.2007.903985
    [47]
    LIAO Mingsheng, WANG Changcheng, WANG Yong, et al. Using SAR images to detect ships from sea clutter[J]. IEEE Geoscience and Remote Sensing Letters, 2008, 5(2): 194–198. doi: 10.1109/LGRS.2008.915593
    [48]
    FERRARA G, MIGLIACCIO M, NUNZIATA F, et al. Generalized-K (GK)-based observation of metallic objects at sea in full-resolution Synthetic Aperture Radar (SAR) data: A multipolarization study[J]. IEEE Journal of Oceanic Engineering, 2011, 36(2): 195–204. doi: 10.1109/JOE.2011.2109491
    [49]
    SAHED M, MEZACHE A, and LAROUSSI T. A novel [z log(z)]-based closed form approach to parameter estimation of K-distributed clutter plus noise for radar detection[J]. IEEE Transactions on Aerospace and Electronic Systems, 2015, 51(1): 492–505. doi: 10.1109/TAES.2014.140180
    [50]
    ROSENBERG L, WATTS S, and BOCQUET S. Application of the K+Rayleigh distribution to high grazing angle sea-clutter[C]. 2014 International Radar Conference, Lille, France, 2014: 1–6.
    [51]
    ROSENBERG L and BOCQUET S. Application of the Pareto plus noise distribution to medium grazing angle sea-clutter[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2015, 8(1): 255–261. doi: 10.1109/JSTARS.2014.2347957
    [52]
    MIDDLETON D. New physical-statistical methods and models for clutter and reverberation: The KA-distribution and related probability structures[J]. IEEE Journal of Oceanic Engineering, 1999, 24(3): 261–284. doi: 10.1109/48.775289
    [53]
    DONG Yunhan. Distribution of X-band high resolution and high grazing angle sea clutter[R]. DSTO-RR-0316, 2006.
    [54]
    ROSENBERG L, CRISP D J, and STACY N J. Analysis of the KK-distribution with medium grazing angle sea-clutter[J]. IET Radar, Sonar & Navigation, 2010, 4(2): 209–222.
    [55]
    FICHE A, ANGELLIAUME S, ROSENBERG L, et al. Analysis of X-band SAR sea-clutter distributions at different grazing angles[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(8): 4650–4660. doi: 10.1109/TGRS.2015.2405577
    [56]
    FICHE A, ANGELLIAUME S, ROSENBERG L, et al. Statistical analysis of low grazing angle high resolution X-band SAR sea clutter[C]. 2014 International Radar Conference, Lille, France, 2014: 1–6.
    [57]
    QIN Xianxiang. Research on statistical modeling of SAR images and its application based on generalized Gamma distribution[D]. [Ph. D. Dissertation], National University of Defense Technology, 2015.
    [58]
    ACHIM A, KURUOGLU E E, and ZERUBIA J. SAR image filtering based on the heavy-tailed Rayleigh model[J]. IEEE Transactions on Image Processing, 2006, 15(9): 2686–2693. doi: 10.1109/TIP.2006.877362
    [59]
    RIHACZEK A W and HERSHKOWITZ S J. Theory and Practice of Radar Target Identification[M]. Boston: Artech House, 2000.
    [60]
    RIHACZEK A W and HERSHKOWITZ S J. Radar Resolution and Complex-image Analysis[M]. Boston: Artech House, 1996.
    [61]
    JAO J K, LEE C E, and AYASLI S. Coherent spatial filtering for SAR detection of stationary targets[J]. IEEE Transactions on Aerospace and Electronic systems, 1999, 35(2): 614–626. doi: 10.1109/7.766942
    [62]
    DATCU M, SCHWARZ G, SOCCORSI M, et al. Phase information contained in meter-scale SAR images[C]. SPIE SAR Image Analysis, Modeling, and Techniques IX, Florence, Italy, 2007: 67460H.
    [63]
    [64]
    FISHER N I. Statistical Analysis of Circular Data[M]. Cambridge: Cambridge University Press, 1995.
    [65]
    MARDIA K V and JUPP P E. Directional Statistics[M]. Chichester: John Wiley & Sons, 2009.
    [66]
    EVANS M, HASTINGS N, and PEACOCK B. Statistical Distributions[M]. 3rd ed. New York: Wiley, 2000: 117–118.
    [67]
    EL-DARYMLI K, MCGUIRE P, POWER D, et al. Rethinking the phase in single-channel SAR imagery[C]. 2013 14th International Radar Symposium, Dresden, Germany, 2013: 429–436.
    [68]
    EL-DARYMLI K, MOLONEY C, GILL E, et al. On circularity/noncircularity in single-channel synthetic aperture radar imagery[C]. 2014 Oceans-St. John’s, St. John’s, Canada, 2014: 1–4.
    [69]
    EL-DARYMLI K, MCGUIRE P, GILL E W, et al. Holism-based features for target classification in focused and complex-valued synthetic aperture radar imagery[J]. IEEE Transactions on Aerospace and Electronic Systems, 2016, 52(2): 786–808. doi: 10.1109/TAES.2015.140757
    [70]
    LENG Xiangguang, JI Kefeng, ZHOU Shilin, et al. Fast shape parameter estimation of the complex generalized Gaussian distribution in SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2020, in press. doi: 10.1109/LGRS.2019.2960095
    [71]
    FANG Kaitai, KOTZ S, and NG K W. Symmetric Multivariate and Related Distributions[M]. London: Chapman and Hall, 1990.
    [72]
    LI Hualiang and ADALI T. A class of complex ICA algorithms based on the kurtosis cost function[J]. IEEE Transactions on Neural Networks, 2008, 19(3): 408–420. doi: 10.1109/TNN.2007.908636
    [73]
    DOUGLAS S C. Fixed-point algorithms for the blind separation of arbitrary complex-valued non-Gaussian signal mixtures[J]. EURASIP Journal on Advances in Signal Processing, 2007, 2007: 036525. doi: 10.1155/2007/36525
    [74]
    [75]
    LENG Xiangguang, JI Kefeng, and ZHOU Shilin. A novel ship segmentation method based on kurtosis test in complex-valued SAR imagery[C]. 2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing, Beijing, China, 2018: 1–4.
    [76]
    [77]
    Shanghai Jiaotong University. Opensar platform[EB/OL]. http://opensar.sjtu.edu.cn/, 2017.
    [78]
    HUANG Lanqing, LIU Bin, LI Boying, et al. OpenSARShip: A dataset dedicated to Sentinel-1 ship interpretation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(1): 195–208. doi: 10.1109/JSTARS.2017.2755672
    [79]
    SANTAMARIA C, ALVAREZ M, GREIDANUS H, et al. Mass processing of Sentinel-1 images for maritime surveillance[J]. Remote Sensing, 2017, 9(7): 678. doi: 10.3390/rs9070678
    [80]
    MANSOUR A and JUTTEN C. What should we say about the kurtosis?[J]. IEEE Signal Processing Letters, 1999, 6(12): 321–322. doi: 10.1109/97.803435
    [81]
    DUMITRU O C and DATCU M. Information content of very high resolution SAR images: Study of dependency of SAR image structure descriptors with incidence angle[J]. International Journal on Advances in Telecommunications, 2012, 5(3/4): 239–251.
