Intelligent Technology for Aircraft Detection and Recognition through SAR Imagery: Advancements and Prospects
-
Abstract: Synthetic Aperture Radar (SAR), with its coherent imaging mechanism, offers the unique advantage of all-day, all-weather imaging. As typical high-value targets, aircraft have made their detection and recognition a research hotspot in SAR image interpretation. With the introduction of deep learning, the performance of SAR-based aircraft detection and recognition has improved considerably. Drawing on our team's long-term work on the theory, algorithms, and applications of SAR target detection and recognition, particularly for aircraft, this paper presents a comprehensive review of deep-learning-powered aircraft detection and recognition in SAR imagery. The review analyzes in depth the characteristics of aircraft targets in SAR images and the difficulties of detecting and recognizing them, summarizes the latest research advances together with the characteristics and application scenarios of different methods, collates public datasets and commonly used performance evaluation metrics, and finally discusses the challenges facing this field and its development trends.
-
1. Background and Introduction
In 2019, after extensive deliberation, the National Natural Science Foundation of China established "Theoretical and Applied Foundations of Microwave-Vision Three-Dimensional SAR Imaging" as a major research program. A key original innovation of this program is the introduction of the concept of "microwave vision" into the SAR 3D imaging framework, with the aim of effectively handling layover in SAR images and recovering scatterer elevation while reducing the number of SAR observations. In the author's view, "microwave vision" is still a framework-level concept, and "microwave visual semantics" likewise has a rather broad connotation; both need to be discussed and made concrete. The author has long worked in computer vision and has some knowledge of biological vision, but only a shallow understanding of SAR imaging and processing. Taking the opportunity offered by this special issue, I present some preliminary views on these questions; criticism and corrections are welcome. In addition, this paper mainly presents the author's opinions rather than a systematic survey, so citations to the literature are kept to a minimum.
2. What Are Vision and Visual Semantics
Before discussing "microwave vision", let us first consider what "vision" is. Debate over this question has continued since Aristotle in ancient Greece. In the author's view, David Marr[1], the founder of computational vision, gave in his book Vision perhaps the most concise and refined description: "Vision is to know what is where by looking." Although many feel that vision goes far beyond Marr's "what" and "where", determining "what is where" is at least its basic function. Over-extending the notion of vision conflates it with general brain function, including abilities that even blind people possess, which seems inappropriate. The author believes that vision research should focus first on the functions of the visual cortex rather than on cortical areas involving multi-channel fusion; otherwise there would be little difference between "vision" and "brain science".
2.1 Object Vision and Spatial Vision
To cope with the drastic change in light intensity between day and night, the human visual system evolved two imaging subsystems, one for "day vision" and one for "night vision". Of the roughly 120 million photoreceptors in the human retina, about 110 million are rods and 6 to 7 million are cones. Rods are mainly responsible for night vision, while most cones can sense color and serve day vision. The retina is the imaging surface; after preliminary processing of the image, such as denoising and contrast enhancement, the signal is passed to the primary visual processing areas (V1, V2, etc.) of the occipital lobe (the green region in Fig. 1).
As shown in Fig. 1, after primary cortical processing (edge extraction, motion detection, disparity estimation, etc.), the signal splits into two main processing streams. One is the ventral pathway (dashed line from the green to the blue region), mainly responsible for object recognition, called "object vision". The other is the dorsal pathway (dashed line from the green to the red region), mainly serving the vision needed to manipulate objects; since manipulation necessarily involves spatial position and distance, it is called "spatial vision".
2.2 Depth Perception: Monocular and Binocular
Since this paper is mainly concerned with the "three-dimensional visual semantics of images", monocular and binocular depth perception are briefly introduced below.
The neural mechanism of binocular stereo vision is now relatively well understood. Monocular signals are first fused in cortical area V1, where absolute disparity is processed; subsequent cortical areas then refine absolute disparity and compute relative disparity. Both the ventral and dorsal pathways are involved in disparity processing, but so far no cortical area has been found to be dedicated exclusively to it. Two main computational models of binocular disparity exist: the disparity energy model proposed by Ohzawa et al.[2] in 1990, and the 2SU model, an extension of the disparity energy model proposed by Haefner and Cumming[3] in 2008. Because the interocular distance is small, the images of the environment on the two retinas differ essentially by a small translation, so the disparity energy model is in essence a model of how multiple neurons compute "image correlation".
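The correlation computation at the heart of the disparity energy model can be illustrated with a minimal sketch. The 1-D "retinal" signals and the exhaustive shift search below are illustrative assumptions; this demonstrates the image-correlation idea, not the neural model itself:

```python
import numpy as np

def best_disparity(left, right, max_d):
    """Return the shift d maximizing the mean correlation between the
    left signal and the right signal displaced by d, i.e. the 'image
    correlation' computation the disparity energy model is said to
    perform (illustrative sketch only)."""
    scores = []
    for d in range(max_d + 1):
        n = len(left) - d
        scores.append(float(np.dot(left[:n], right[d:d + n])) / n)
    return int(np.argmax(scores))

left = np.sin(np.linspace(0, 6 * np.pi, 200))
right = np.roll(left, 5)   # simulate a 5-sample horizontal disparity
d_hat = best_disparity(left, right, max_d=10)
```

The shift with the highest mean correlation recovers the simulated disparity of 5 samples.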
Clearly, a single eye can also perceive scene depth, only with lower precision than two eyes. To the author's knowledge, no neural mechanism for monocular depth perception has yet been reported. The available cues are mostly experimental conclusions from psychology[4], such as relative size (of two objects of the same physical size, the one that looks larger appears nearer), texture gradient (regions with larger gradient appear nearer), and linear perspective (projections of parallel lines become narrower with distance). These cues are difficult to elevate into "computational principles", because they are merely "sensations". Three-dimensional imaging reconstructs reality, whereas three-dimensional sensation can depart from reality. Much of today's virtual and augmented reality in fact gives people a "sense of reality" about the "non-real", and is essentially illusion.
2.3 Visual Illusion
The human visual system can both perceive three-dimensional structure from images containing no true 3D information and derive wrong percepts from images that do contain true 3D information. From the line drawing in Fig. 2, people perceive three-dimensional structure. In the Ames room illusion of Fig. 3, the two people are of nearly the same height, yet everyone perceives the person in front as tall and the person behind as short.
"Visual semantics" is the description of perceived scene information at the semantic, i.e. conceptual, level. Illusions therefore lead to "wrong visual semantics". People can have many kinds of impressions of a scene, but not all of them can be raised to the conceptual level. Moreover, perceived visual semantics do not necessarily describe the real scene; the colors we see, for instance, are essentially a semantic description of wavelength information. How to exploit 3D visual-semantic information in SAR 3D imaging to improve imaging quality still needs deep exploration at three levels: framework, theory, and algorithms.
3. What Are "Microwave Visual Semantics"
What are "microwave visual semantics"? In the author's view, they are the scene-semantic information that people perceive from microwave images, that is, the scene-semantic information people "see directly" in them. Although SAR is range imaging and exhibits phenomena such as layover that do not occur in optical imaging systems, people can indeed perceive some 3D scene structure directly from SAR images; from the SAR image in Fig. 4, for example, one can perceive some of the 3D structure of the ships and the bridge.
This paper discusses only visual semantics in SAR 3D imaging, not microwave visual semantics in a broader sense. "Vision" comprises "visual perception" and "visual cognition". Computer vision traditionally studies perception, while cognition covers much broader notions (recalling visual events, forming visual concepts, reasoning about visual events, etc.); moreover, the author feels that visual cognition does not differ essentially from cognition in other sensory channels (reasoning about visual events seems no different in essence from reasoning about auditory events). Just as over-extending the scope of "computer vision" blurs it with image understanding, image analysis, and video analysis, "microwave vision" should probably avoid the same problem; otherwise it may likewise become confused with microwave image understanding and microwave image analysis.
In the author's view, "visual semantics in SAR 3D imaging" refers to using semantic information perceived from SAR images to enhance the 3D imaging capability of SAR, that is, to improving the performance of traditional SAR 3D imaging by adding "visual semantic" constraints.
4. How Visual Semantics Can Improve SAR 3D Imaging: Processing Frameworks
As noted above, "3D imaging" reconstructs the real 3D scene, whereas "visual semantics" is a subjective impression of it, and subjective impressions can produce illusions. Fusing appropriate visual-semantic information into SAR 3D imaging therefore requires a computational framework, a computational theory, and computational methods. The discussion below centers on the tomographic SAR (TomoSAR) problem of this major program.
TomoSAR is an effective technique for recovering elevation information[5] (here mainly meaning position information). A main goal of the program is to fuse visual-semantic information so as to reduce the number of TomoSAR observations and achieve fast SAR 3D imaging, thereby addressing the long acquisition cycles and high cost that make traditional TomoSAR ill-suited to time-critical applications. Given that elevation recovery for individual pixels within the TomoSAR framework has been studied for more than twenty years, with spectral-analysis methods[6] and compressed-sensing methods[7] as the two representative families, the author believes that research on semantics-fused TomoSAR should first differ from traditional methods at the level of the processing framework: it should move from "per-pixel" to "image-region" processing, and from "feed-forward" to "feedback" processing. Region-based processing has already been reported, e.g. the spatial regularization approach of Rambour et al.[8]. "Feed-forward" here means estimating elevation accurately from SAR images in a single pass, which does not preclude iterative computation within that pass; "feedback" means feeding the initial rough elevation estimate back into the next estimation round, refining it iteration by iteration.
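The feedback mode described above can be sketched generically: a rough elevation estimate yields a regional prior, which is fed back into the next estimation pass. The neighborhood-median prior and the blending rule below are illustrative assumptions standing in for a real TomoSAR estimator, not an actual algorithm from the literature:

```python
import numpy as np

def feedback_estimate(noisy_obs, n_iters=5, alpha=0.5):
    """Feedback loop: each pass derives a coarse regional prior (here a
    simple neighborhood median) from the previous elevation estimate and
    feeds it back into the next estimate (illustrative sketch)."""
    est = noisy_obs.copy()
    for _ in range(n_iters):
        prior = np.array([np.median(est[max(0, i - 2):i + 3])
                          for i in range(len(est))])
        est = alpha * noisy_obs + (1 - alpha) * prior
    return est

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(20), np.full(20, 10.0)])  # a step "facade"
obs = truth + rng.normal(0, 1.0, truth.size)               # rough per-pixel estimate
refined = feedback_estimate(obs)
```

Because the regional prior is only partially reliable, each feedback round tightens the estimate without requiring an accurate single-pass solution.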
4.1 An Iterative Framework Based on Robust Statistics
Methods in the current literature essentially add various constraints, such as the sparsity constraint of Compressed Sensing (CS), to recover "all scatterer elevations within a single pixel accurately in one pass". This is a typical "feed-forward" mode. No image is randomly distributed, yet this pixel-wise processing considers neither neighborhood relations between pixels nor structural prior knowledge specific to the scene. TomoSAR aims to recover unknown 3D scene structure, so exploiting "structural priors specific to the scene being processed" poses a chicken-and-egg problem. In computation, such problems are resolved by "iteration", whose core assumption here is: without scene-structure priors, the elevations initially recovered by TomoSAR cannot be very accurate, but they possess "a certain degree of reliability". This partially reliable elevation information, especially the rough elevation of a "region", constitutes an effective prior for the next iteration of scene-structure recovery; fusing it into the next TomoSAR iteration can be expected to improve performance markedly. The well-known AdaBoost classifier[9] is a classic example of this principle: as long as each weak classifier is correct with probability greater than 0.5, a combination of weak classifiers forms a strong classifier with excellent performance. Under such an iterative framework, the scene's elevation is recovered more and more accurately as iterations proceed. The theoretical support for the soundness of this iterative estimation is robust statistics, e.g. the RANSAC method[10]. Fig. 5 shows one such iterative TomoSAR estimation framework.
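The robust-statistics idea that partially reliable hypotheses can be consolidated into an accurate estimate is exactly what RANSAC implements. A minimal line-fitting sketch (toy data, hypothetical noise and threshold values):

```python
import numpy as np

def ransac_line(x, y, n_iters=200, thresh=0.5, rng=None):
    """RANSAC: repeatedly fit a line to a random minimal sample and keep
    the hypothesis with the most inliers, then refit on the consensus set."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        inliers = np.abs(y - (slope * x + intercept)) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit on the inlier consensus set
    return np.polyfit(x[best_inliers], y[best_inliers], 1)

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)  # ground truth: y = 2x + 1
y[::10] += 20.0                              # gross outliers
slope, intercept = ransac_line(x, y)
```

Despite 10% gross outliers, the recovered slope and intercept stay close to the ground-truth line; each random hypothesis is rough, but the consensus is accurate.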
Methods that solve TomoSAR in this iterative framework are still rare. The REDRESS algorithm of Rambour et al.[11] exploits the characteristics of urban scenes: the elevation initially estimated in a CS framework is further optimized by graph-cut, and the scene information is then used to modify the sparsity penalty coefficients in CS. It is in essence a TomoSAR method within an iterative framework.
4.2 A Pseudo-Multiscale Framework
Multiscale methods are widely used in the information sciences[12], e.g. pyramid representations of images. The assumption of multiscale theory in feature extraction is that true features exist at all scales whereas spurious features appear only at a particular scale. Multiscale theory also indicates that a feature has an intrinsic scale, i.e. it is more reliably extracted at that scale; a highway, for instance, should not be extracted from centimeter-resolution imagery. TomoSAR can likewise be processed in a multiscale framework: first obtain a rough elevation estimate at low resolution with a spectral-analysis method, then use that rough estimate as a prior to refine the result in a compressed-sensing framework. This route combines the two major TomoSAR approaches, spectral analysis and compressed sensing, and can be further optimized in the "feedback" processing mode. Fig. 6 shows such a pseudo-multiscale framework; "pseudo" indicates that it is not a method truly conforming to multiscale theory.
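The coarse-then-refine idea can be sketched as a generic coarse-to-fine peak search: a cheap coarse scan first localizes the solution, then a fine scan refines only inside the coarse cell, analogous to a spectral-analysis estimate constraining a finer CS-style refinement. The objective function and grid sizes are illustrative assumptions:

```python
import numpy as np

def coarse_to_fine_peak(f, lo, hi, coarse_n=16, fine_n=200):
    """Locate the maximum of f on [lo, hi]: coarse scan first, then a
    fine scan restricted to the neighborhood of the coarse maximum."""
    xs = np.linspace(lo, hi, coarse_n)
    k = int(np.argmax([f(x) for x in xs]))      # coarse localization
    step = (hi - lo) / (coarse_n - 1)
    xs_fine = np.linspace(xs[k] - step, xs[k] + step, fine_n)
    return xs_fine[int(np.argmax([f(x) for x in xs_fine]))]

# toy "elevation response" peaking at h = 3.7
peak = coarse_to_fine_peak(lambda h: np.exp(-(h - 3.7) ** 2), 0.0, 10.0)
```

The fine scan evaluates only a narrow interval instead of the full search range, which is why coarse-to-fine processing can reduce, rather than increase, total computation.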
Do "iterative" and "multiscale" frameworks necessarily mean longer processing times? Numerous methods and applications in computer vision show[13,14] that multiscale and iterative processing can actually reduce computation time while improving estimation accuracy. In this respect, TomoSAR processing should not differ essentially from other image applications.
5. How Visual Semantics Can Improve SAR 3D Imaging: Technical Approaches and Algorithms
The current basic approach to fusing prior knowledge, including visual-semantic knowledge, into TomoSAR is shown in Eq. (1): a visual-semantic constraint is added to the traditional TomoSAR formulation to improve the quality of 3D imaging.
$$ \mathop{\min}\limits_{X}\ \left\|Y - AX\right\|_2^2 + \lambda\, f\left({X}_{M}\right) \tag{1}$$
where $Y$ denotes the stacked multi-baseline observations, $A$ the TomoSAR mapping (steering) matrix, $X$ the reflectivity along elevation, $\lambda$ a weighting coefficient, and $ f\left({X}_{M}\right) $ the added constraint term. This approach has, in principle, the potential and the flexibility to fuse all kinds of prior knowledge. In Eq. (1), $ f\left({X}_{M}\right) $ can encode constraints on either continuous or discrete variables; it can express deterministic regularization, statistical regularization, or more general semantic regularization (e.g. that $ {X}_{M} $ lies on the same horizontal plane); and it can constrain a single pixel or the relations among neighboring pixels. In the author's view, visual semantics in SAR 3D imaging can be described and realized within this semantic-regularization framework. Following the earlier discussion, the visual semantics of SAR 3D imaging should be semantic information reflecting scene structure, particularly the geometric primitives composing the scene, such as spatial line segments and surface patches, together with their positions and orientations and the object categories; such geometric primitives can be extracted by machine learning. It should be noted that, from the energy-model viewpoint, a "semantic constraint" is generally a high-order energy term. From the energy-optimization theory of Conditional Random Fields (CRFs)[15], apart from a few special high-order models, optimization problems with general high-order energy terms are NP-hard. When designing constraint terms in the above framework, the associated solution problem must therefore be considered; otherwise one obtains the now-common oddity of "designing a complex and elegant energy model, solving it with a simplified method, and getting results that bear little relation to the model designed".
Advances in deep learning have transformed computer vision. Can deep learning be used in TomoSAR? Reports so far are few: Costante et al.[16] infer DEMs directly from SAR images, Budillon et al.[17] invert TomoSAR directly with deep learning, and Wu et al.[18] use a DNN for elevation super-resolution on top of a coarse CS estimate. Can deep learning infer elevation directly from a single SAR image? Judging from the progress in inferring depth from a single optical image[19], there is no difficulty in principle. Inferring elevation from a single SAR image with deep learning is, in essence, also the construction of a mapping from SAR image features to elevation. Since deep networks can effectively approximate any functional mapping, this mapping can be approximated by a deep network even though the imaging mechanisms of SAR and optical images differ. The other two ingredients of successful monocular depth learning from optical images, multiscale feature representation and local consistency constraints on depth, also hold for SAR images in principle. Hence, for inferring elevation directly from a single SAR image in a deep learning framework, the author believes the core problem is the lack of large amounts of annotated data. Although "annotation scarcity" is a common problem in every field, it is more severe for SAR images than for optical images.
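Setting the semantic term aside for a moment, the data-plus-sparsity core of this kind of formulation can be solved with a standard proximal-gradient (ISTA) iteration. The toy operator $A$, problem sizes, and regularization weight below are illustrative assumptions:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iters=500):
    """ISTA: proximal-gradient solver for the l1-regularized inversion
    min_x 0.5 * ||y - A x||^2 + lam * ||x||_1, the CS-style core of the
    TomoSAR formulation (the semantic term f(X_M) is omitted here)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = x - (A.T @ (A @ x - y)) / L        # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 80)) / np.sqrt(30)    # toy sensing operator
x_true = np.zeros(80)
x_true[[5, 40]] = [3.0, -2.0]                  # two "scatterers" along elevation
x_hat = ista(A, A @ x_true, lam=0.05)
```

With only 30 measurements of an 80-bin elevation profile, the sparse solver recovers both spike positions; an extra semantic term would enter the iteration as an additional proximal or penalty step.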
The basic strategies in computer vision for coping with insufficient annotated data are: semi-supervised learning, iteratively enlarging the labeled set from a small amount of labeled data; weakly supervised learning, learning from data with low-quality (noisy) labels; active learning, in which humans label a small number of difficult samples during learning; and the use of simulated data. SAR images depend on wavelength and viewing angle and contain speckle noise, making data augmentation harder than for optical images, but the author believes this may merely be a matter of time: large annotated SAR datasets with elevation information will soon appear. In solving the scarcity of annotated SAR data, exploiting "simulated" and "synthetic" data should be an effective route. In addition, since remote sensing already possesses a large volume of optical imagery with elevation information, transferring this elevation information from optical imagery to SAR imagery is also a route worth exploring.
In short, although inferring elevation from SAR images with deep learning remains difficult and is still rarely reported, the author considers it a technical route worth exploring, with great potential and promise.
6. Conclusions
This paper has offered a preliminary discussion of microwave vision in SAR 3D imaging. Since the author's understanding of SAR image processing is limited, errors are inevitable, and readers' criticism and corrections are welcome. On how to use visual semantics in TomoSAR, the author's basic views are:
(1) Processing framework: it is worth exploring a "feedback" framework based on robust estimation theory, i.e. first quickly obtain rough elevation information about the scene, then form rough 3D semantic constraints from it and feed them back into the next round of elevation estimation. As iterations proceed, "elevation information" and "3D scene semantics" support and reinforce each other, making elevation estimates ever more accurate while scene semantics become ever finer and more reliable;
(2) Visual semantics: scene-structure primitives, such as line segments and surface patches with their positions and poses, together with object category information, are the most basic visual semantics and deserve priority. These structural scene semantics can be described and realized through "semantic regularization";
(3) Semantics extraction: scene-structure primitives, including 3D structural primitives, can be extracted by machine learning. Given that current machine learning lacks the "outlier rejection mechanism of image matching", how to extract scene-structure primitives from a single SAR image and infer scene geometry is a direction worth exploring.
Before closing, the author would like to add two remarks not closely related to "visual semantics in SAR 3D imaging":
(1) As SAR imaging technology advances, the range and azimuth resolution of SAR images will keep improving, so the probability of many layovers within a single pixel will decrease accordingly. When the number of layovers does not exceed 2, the author believes TomoSAR processing techniques may also change substantially. Clearly, when a pixel contains only one scatterer, its elevation is relatively easy to recover; it can be shown theoretically that the eigenvector associated with the largest eigenvalue of the pixel covariance matrix is that scatterer's projection vector. When a pixel contains two scatterers, the literature[20,21] shows that their projection vectors can be determined from the eigenvectors associated with the two largest eigenvalues of a kernel PCA decomposition. Such PCA-based methods, and spectral-decomposition methods in general, are computationally fast and seem to deserve attention in the future. Of course, estimating the pixel covariance matrix is itself a difficult problem.
(2) A main goal of TomoSAR is to recover scatterer elevation, yet the recovered elevation is usually an intermediate result for a concrete application, such as 3D building reconstruction. Given that pixels containing multiple scatterers generally account for only a small fraction of a whole SAR image, how much does the great effort and time spent on multi-scatterer pixels contribute to the final goal? To some extent, then, TomoSAR research also seems to need to be application-oriented. From an academic standpoint, of course, accurately recovering the precise positions and scattering properties of all scatterers will always remain a relentless pursuit of scientific research.
In summary, the high-resolution, all-weather imaging capability of SAR provides a transformative means of Earth observation, and TomoSAR offers an entirely new route to recovering elevation information from SAR. With artificial intelligence in full swing, the proposal of microwave vision and the fusion of visual semantics can also be expected to give a strong impetus to fast SAR 3D imaging.
-
Table 1. Public datasets for aircraft detection and recognition in SAR imagery
Target detection | SADD dataset (Zhang et al., 2022)[62] | Platform: TerraSAR-X (Germany)
● Imaged in X-band, HH mode, with resolutions from 0.5 m to 3.0 m.
● Complex backgrounds and diverse target scales, with many small targets; also contains negative samples (open ground and forests near airports, etc.).
● 2966 images in total, of which 884 contain aircraft, with 7835 aircraft altogether; image size 224×224 pixels.
Target detection | MSAR-1.0 dataset (Chen et al., 2022)[90] | Platforms: HISEA-1, Gaofen-3
● Diverse acquisition scenes, covering four target classes: aircraft, oil tanks, bridges, and ships.
● 28449 images in total, of which 108 contain aircraft, with 6368 aircraft altogether; image size 256×256 pixels.
Target recognition | Multi-angle SAR dataset (Wang et al., 2022)[83] | Platform: UAV-borne SAR
● Measured aircraft data acquired at 72 azimuth angles at 5° intervals.
● Two aircraft classes, the Kodiak 100 and the Air Tractor AT-504; 144 images in total, image size 128×128 pixels.
Target recognition | SAR-ACD dataset (Sun et al., 2022)[78] | Platform: Gaofen-3
● Six civil aircraft categories and 14 other aircraft categories, 4322 aircraft in total.
● The civil aircraft categories have been open-sourced, with 3032 images in total: 464, 512, 510, 514, 528, and 504 images for the six classes A220, A320/321, A330, ARJ21, Boeing737, and Boeing787, respectively.
● Provides a data benchmark for fine-grained aircraft recognition.
Table 2. Simulated SAR datasets of aircraft targets
SPGAN-SAR (Liu et al., 2018)[88] | Simulation platform: OpenSARSim[89]
● Three target classes (aircraft, ships, and vehicles), subdivided into 10 subclasses; each subclass contains 504 simulated images, image size 158×158 pixels.
IRIS-SAR dataset (Ahmadibeni et al., 2020)[95–97] | Simulation platform: IRIS[98]
● Six target classes: 48 civil aircraft, 58 small propeller aircraft, 82 jets, 29 civil and 54 non-civil helicopters, 24 civil and 28 non-civil vehicles, and 32 ships (355 CAD models in total).
● Multi-angle simulated SAR data generated from the 355 CAD models at 5 elevation angles (from 15° in 15° increments), 12 azimuth angles (from 0° in 30° increments), and 3 ranges (100 m, 200 m, 300 m).
● 63900 images in total, image size 512×512 pixels; usable for target classification and image despeckling research.
-
[1] MOREIRA A, PRATS-IRAOLA P, YOUNIS M, et al. A tutorial on synthetic aperture radar[J]. IEEE Geoscience and Remote Sensing Magazine, 2013, 1(1): 6–43. doi: 10.1109/MGRS.2013.2248301.
[2] CASTELLETTI D, FARQUHARSON G, STRINGHAM C, et al. Capella space first operational SAR satellite[C]. 2021 IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 1483–1486.
[3] PR Newswire. ICEYE expands world's largest SAR satellite constellation; launches first U.S. built spacecraft[EB/OL]. https://www.prnewswire.com/news-releases/iceye-expands-worlds-largest-sar-satellite-constellation-launches-first-us-built-spacecraft-301460822.html, 2022.
[4] XU Feng, WANG Haipeng, and JIN Yaqiu. Intelligent Interpretation of Synthetic Aperture Radar Images[M]. Beijing: Science Press, 2020: 1–463.
[5] ROSS T D, BRADLEY J J, HUDSON L J, et al. SAR ATR: So what's the problem? An MSTAR perspective[C]. SPIE 3721, Algorithms for Synthetic Aperture Radar Imagery VI, Orlando, United States, 1999: 662–672.
[6] GUO Qian, WANG Haipeng, and XU Feng. Research progress on aircraft detection and recognition in SAR imagery[J]. Journal of Radars, 2020, 9(3): 497–513. doi: 10.12000/JR20020.
[7] NOVAK L M, OWIRKA G J, and NETISHEN C M. Performance of a high-resolution polarimetric SAR automatic target recognition system[J]. The Lincoln Laboratory Journal, 1993, 6(1): 11–24.
[8] NOVAK L M, HALVERSEN S D, OWIRKA G J, et al. Effects of polarization and resolution on the performance of a SAR automatic target recognition system[J]. The Lincoln Laboratory Journal, 1995, 8(1): 49–68.
[9] KREITHEN D E, HALVERSEN S S, and OWIRKA G J. Discriminating targets from clutter[J]. The Lincoln Laboratory Journal, 1993, 6(1): 25–52.
[10] ZHU Xiaoxiang, MONTAZERI S, ALI M, et al. Deep learning meets SAR: Concepts, models, pitfalls, and perspectives[J]. IEEE Geoscience and Remote Sensing Magazine, 2021, 9(4): 143–172. doi: 10.1109/MGRS.2020.3046356.
[11] "Smart satellite" artificial intelligence challenge[EB/OL]. https://rsaicp.com, 2021.
[12] "GEOVIS CUP" Gaofen challenge on automated high-resolution earth observation image interpretation[EB/OL]. https://www.gaofen-challenge.com, 2021.
[13] HUANG Peikang, YIN Hongcheng, and XU Xiaojian. Radar Target Signature[M]. Beijing: Publishing House of Electronics Industry, 2005: 230–246.
[14] CUMMING I G and WONG F H. Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation[M]. Boston: Artech House, 2005: 169–211.
[15] CHEN Yujie, ZHAO Lingjun, and KUANG Gangyao. Feature extraction of aircraft targets in SAR image based on parametric geometric model[J]. Modern Radar, 2016, 38(10): 47–53. doi: 10.16592/j.cnki.1004-7859.2016.10.012.
[16] GAO Jun, GAO Xin, and SUN Xian. Geometrical features-based method for aircraft target interpretation in high-resolution SAR images[J]. Foreign Electronic Measurement Technology, 2015, 34(8): 21–28. doi: 10.3969/j.issn.1002-8978.2015.08.008.
[17] DOU Fangzheng, DIAO Wenhui, SUN Xian, et al. Aircraft reconstruction in high resolution SAR images using deep shape prior[J]. Journal of Radars, 2017, 6(5): 503–513. doi: 10.12000/JR17047.
[18] KUANG Gangyao, GAO Gui, JIANG Yongmei, et al. Synthetic Aperture Radar Target: Detection Theory, Algorithms and Applications[M]. Changsha: National University of Defense Technology Press, 2007: 133–165.
[19] HE Chu, TU Mingxia, LIU Xinlong, et al. Mixture statistical distribution based multiple component model for target detection in high resolution SAR imagery[J]. ISPRS International Journal of Geo-Information, 2017, 6(11): 336. doi: 10.3390/ijgi6110336.
[20] TAN Yihua, LI Qingyun, LI Yansheng, et al. Aircraft detection in high-resolution SAR images based on a gradient textural saliency map[J]. Sensors, 2015, 15(9): 23071–23094. doi: 10.3390/s150923071.
[21] DOU Fangzheng, DIAO Wenhui, SUN Xian, et al. Aircraft recognition in high resolution SAR images using saliency map and scattering structure features[C]. 2016 IEEE International Geoscience and Remote Sensing Symposium, Beijing, China, 2016: 1575–1578.
[22] CHEN Jiehong, ZHANG Bo, and WANG Chao. Backscattering feature analysis and recognition of civilian aircraft in TerraSAR-X images[J]. IEEE Geoscience and Remote Sensing Letters, 2015, 12(4): 796–800. doi: 10.1109/LGRS.2014.2362845.
[23] ZHANG Yueting, DING Chibiao, LEI Bin, et al. Feature modeling of SAR images for aircrafts based on typical structures[C]. 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018: 7007–7010.
[24] FU Kun, DOU Fangzheng, LI Hengchao, et al. Aircraft recognition in SAR images based on scattering structure feature and template matching[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(11): 4206–4217. doi: 10.1109/JSTARS.2018.2872018.
[25] ZOU Zhengxia, CHEN Keyan, SHI Zhenwei, et al. Object detection in 20 years: A survey[J]. Proceedings of the IEEE, 2023, 111(3): 257–276. doi: 10.1109/JPROC.2023.3238524.
[26] LIU Xiaobo, XIAO Xiao, WANG Ling, et al. Anchor-free based object detection methods and its application progress in complex scenes[J]. Acta Automatica Sinica, 2022, 48: 1–23. doi: 10.16383/j.aas.c220115.
[27] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014: 580–587.
[28] GIRSHICK R. Fast R-CNN[C]. 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1440–1448.
[29] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149. doi: 10.1109/TPAMI.2016.2577031.
[30] CAI Zhaowei and VASCONCELOS N. Cascade R-CNN: Delving into high quality object detection[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 6154–6162.
[31] TAN Mingxing, PANG Ruoming, and LE Q V. EfficientDet: Scalable and efficient object detection[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 10778–10787.
[32] LIU Wei, ANGUELOV D, ERHAN D, et al. SSD: Single shot MultiBox detector[C]. The 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 21–37.
[33] REDMON J and FARHADI A. YOLOv3: An incremental improvement[EB/OL]. https://arxiv.org/abs/1804.02767v1, 2018.
[34] GE Zheng, LIU Songtao, WANG Feng, et al. YOLOX: Exceeding YOLO series in 2021[EB/OL]. https://arxiv.org/abs/2107.08430v2, 2021.
[35] WANG C Y, BOCHKOVSKIY A, and LIAO H Y M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 2023: 7464–7475.
[36] TIAN Zhi, SHEN Chunhua, CHEN Hao, et al. FCOS: Fully convolutional one-stage object detection[C]. 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), 2019: 9626–9635.
[37] UZKENT B, YEH C, and ERMON S. Efficient object detection in large images using deep reinforcement learning[C]. 2020 IEEE Winter Conference on Applications of Computer Vision, Snowmass, USA, 2020: 1813–1822.
[38] ZHANG Linbin, LI Chuyin, ZHAO Lingjun, et al. A cascaded three-look network for aircraft detection in SAR images[J]. Remote Sensing Letters, 2020, 11(1): 57–65. doi: 10.1080/2150704X.2019.1681599.
[39] GUO Qian, WANG Haipeng, KANG Lihong, et al. Aircraft target detection from spaceborne SAR image[C]. 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 2019: 1168–1171.
[40] WANG Jielan, XIAO Hongguang, CHEN Lifu, et al. Integrating weighted feature fusion and the spatial attention module with convolutional neural networks for automatic aircraft detection from SAR images[J]. Remote Sensing, 2021, 13(5): 910. doi: 10.3390/rs13050910.
[41] XIAO Xiayang, YU Xueping, and WANG Haipeng. A high-efficiency aircraft detection approach utilizing auxiliary information in SAR images[C]. 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 2022: 1700–1703.
[42] GUO Qian, WANG Haipeng, and XU Feng. Aircraft target detection from spaceborne synthetic aperture radar image[J]. Aerospace Shanghai, 2018, 35(6): 57–64. doi: 10.19328/j.cnki.1006-1630.2018.06.010.
[43] ZHAO Yan, ZHAO Lingjun, and KUANG Gangyao. Fast detection of aircrafts in complex large-scene SAR images[J]. Chinese Journal of Radio Science, 2020, 35(4): 594–602. doi: 10.13443/j.cjors.2020040602.
[44] LI Chuyin, ZHAO Lingjun, and KUANG Gangyao. A two-stage airport detection model on large scale SAR images based on faster R-CNN[C]. SPIE 11179, Eleventh International Conference on Digital Image Processing, Guangzhou, China, 2019: 515–525.
[45] CHEN Lifu, TAN Siyu, PAN Zhouhao, et al. A new framework for automatic airports extraction from SAR images using multi-level dual attention mechanism[J]. Remote Sensing, 2020, 12(3): 560. doi: 10.3390/rs12030560.
[46] YIN Shoulin, LI Hang, and TENG Lin. Airport detection based on improved faster RCNN in large scale remote sensing images[J]. Sensing and Imaging, 2020, 21(1): 49. doi: 10.1007/s11220-020-00314-2.
[47] WANG Siyu, GAO Xin, SUN Hao, et al. An aircraft detection method based on convolutional neural networks in high-resolution SAR images[J]. Journal of Radars, 2017, 6(2): 195–203. doi: 10.12000/JR17009.
[48] DIAO Wenhui, SUN Xian, ZHENG Xinwei, et al. Efficient saliency-based object detection in remote sensing images using deep belief networks[J]. IEEE Geoscience and Remote Sensing Letters, 2016, 13(2): 137–141. doi: 10.1109/LGRS.2015.2498644.
[49] DIAO Wenhui, DOU Fangzheng, FU Kun, et al. Aircraft detection in SAR images using saliency based location regression network[C]. 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018: 2334–2337.
[50] LI Guangshuai, SU Juan, and LI Yihong. An aircraft detection algorithm in SAR image based on improved Faster R-CNN[J]. Journal of Beijing University of Aeronautics and Astronautics, 2021, 47(1): 159–168. doi: 10.13700/j.bh.1001-5965.2020.0004.
[51] AN Quanzhi, PAN Zongxu, LIU Lei, et al. DRBox-v2: An improved detector with rotatable boxes for target detection in SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(11): 8333–8349. doi: 10.1109/TGRS.2019.2920534.
[52] XIAO Xiayang, JIA Hecheng, XIAO Penghao, et al. Aircraft detection in SAR images based on peak feature fusion and adaptive deformable network[J]. Remote Sensing, 2022, 14(23): 6077. doi: 10.3390/rs14236077.
[53] CHEN Lifu, LUO Ru, XING Jin, et al. Geospatial transformer is what you need for aircraft detection in SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5225715. doi: 10.1109/TGRS.2022.3162235.
[54] WANG Zhen, XU Nan, GUO Jianxin, et al. SCFNet: Semantic condition constraint guided feature aware network for aircraft detection in SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5239420. doi: 10.1109/TGRS.2022.3224599.
[55] HAN Ping, LIAO Dayu, HAN Binbin, et al. SEAN: A simple and efficient attention network for aircraft detection in SAR images[J]. Remote Sensing, 2022, 14(18): 4669. doi: 10.3390/rs14184669.
[56] GUO Qian, WANG Haipeng, and XU Feng. Scattering enhanced attention pyramid network for aircraft detection in SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(9): 7570–7587. doi: 10.1109/TGRS.2020.3027762.
[57] LI Guangshuai, SU Juan, LI Yihong, et al. Aircraft detection in SAR images based on convolutional neural network and attention mechanism[J]. Systems Engineering and Electronics, 2021, 43(11): 3202–3210. doi: 10.12305/j.issn.1001-506X.2021.11.20.
[58] XIA Yifan, ZHAO Fengjun, WANG Yingjie, et al. Aircraft detection in SAR images based on attention and adaptive feature fusion[J/OL]. Telecommunication Engineering, https://doi.org/10.20079/j.issn.1001-893x.221014002, 2023.
[59] LI Jiaxin, ZHU Weigang, YANG Ying, et al. Aircraft targets in SAR images based on improved YOLOv5[J]. Electronics Optics & Control, 2023, 30(8): 61–67. doi: 10.39691/j.issn.1671-637X.2023.08.011.
[60] GUO Qian, WANG Haipeng, and XU Feng. Aircraft detection in high-resolution SAR images using scattering feature information[C]. The 6th Asia-Pacific Conference on Synthetic Aperture Radar, Xiamen, China, 2019: 1–5.
[61] KANG Yuzhuo, WANG Zhirui, FU Jiamei, et al. SFR-Net: Scattering feature relation network for aircraft detection in complex SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5218317. doi: 10.1109/TGRS.2021.3130899.
[62] ZHANG Peng, XU Hao, TIAN Tian, et al. SEFEPNet: Scale expansion and feature enhancement pyramid network for SAR aircraft detection with small sample dataset[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, 15: 3365–3375. doi: 10.1109/JSTARS.2022.3169339.
[63] GE Ji, WANG Chao, ZHANG Bo, et al. Azimuth-sensitive object detection of high-resolution SAR images in complex scenes by using a spatial orientation attention enhancement network[J]. Remote Sensing, 2022, 14(9): 2198. doi: 10.3390/rs14092198.
[64] ZHANG Peng, XU Hao, TIAN Tian, et al. SFRE-Net: Scattering feature relation enhancement network for aircraft detection in SAR images[J]. Remote Sensing, 2022, 14(9): 2076. doi: 10.3390/rs14092076.
[65] ZHAO Yan, ZHAO Lingjun, LI Chuyin, et al. Pyramid attention dilated network for aircraft detection in SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 18(4): 662–666. doi: 10.1109/LGRS.2020.2981255.
[66] ZHAO Yan, ZHAO Lingjun, and KUANG Gangyao. Attention feature fusion network for rapid aircraft detection in SAR images[J]. Acta Electronica Sinica, 2021, 49(9): 1665–1674. doi: 10.12263/DZXB.20200486.
[67] ZHAO Yan, ZHAO Lingjun, LIU Zhong, et al. Attentional feature refinement and alignment network for aircraft detection in SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5220616. doi: 10.1109/TGRS.2021.3139994.
[68] LUO Ru, CHEN Lifu, XING Jin, et al. A fast aircraft detection method for SAR images based on efficient bidirectional path aggregated attention network[J]. Remote Sensing, 2021, 13(15): 2940. doi: 10.3390/rs13152940.
[69] HE Chu, TU Mingxia, XIONG Dehui, et al. A component-based multi-layer parallel network for airplane detection in SAR imagery[J]. Remote Sensing, 2018, 10(7): 1016. doi: 10.3390/rs10071016.
[70] YAN Hua, ZHANG Lei, LU Jinwen, et al. Frequency-dependent factor expression of the GTD scattering center model for the arbitrary multiple scattering mechanism[J]. Journal of Radars, 2021, 10(3): 370–381. doi: 10.12000/JR21005.
[71] LI Mingwu, WEN Gongjian, HUANG Xiaohong, et al. A lightweight detection model for SAR aircraft in a complex environment[J]. Remote Sensing, 2021, 13(24): 5020. doi: 10.3390/rs13245020.
[72] LIN Sizhe, CHEN Ting, HUANG Xiaohong, et al. Synthetic aperture radar image aircraft detection based on target spatial imaging characteristics[J]. Journal of Electronic Imaging, 2022, 32(2): 021608. doi: 10.1117/1.JEI.32.2.021608.
[73] WOO S, PARK J, LEE J Y, et al. CBAM: Convolutional block attention module[C]. The 15th European Conference on Computer Vision, Munich, Germany, 2018: 3–19.
[74] HU Jie, SHEN Li, ALBANIE S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011–2023. doi: 10.1109/TPAMI.2019.2913372.
[75] WANG Qilong, WU Banggu, ZHU Pengfei, et al. ECA-Net: Efficient channel attention for deep convolutional neural networks[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 11531–11539.
[76] YAN Jiwei, LI Guangshuai, and SU Juan. SAR aircraft data sets augmentation based on multi-scale generative adversarial network[J]. Electronics Optics & Control, 2022, 29(7): 62–68.
[77] GAO Quanwei, FENG Zhixi, YANG Shuyuan, et al. Multi-path interactive network for aircraft identification with optical and SAR images[J]. Remote Sensing, 2022, 14(16): 3922. doi: 10.3390/rs14163922.
[78] SUN Xian, LV Yixuan, WANG Zhirui, et al. SCAN: Scattering characteristics analysis network for few-shot aircraft classification in high-resolution SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5226517. doi: 10.1109/TGRS.2022.3166174.
[79] ZHAO Danpei, CHEN Ziqiang, GAO Yue, et al. Classification matters more: Global instance contrast for fine-grained SAR aircraft detection[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5203815. doi: 10.1109/TGRS.2023.3250507.
[80] LYU Yixuan, WANG Zhirui, WANG Peijin, et al. Scattering information and meta-learning based SAR images interpretation for aircraft target recognition[J]. Journal of Radars, 2022, 11(4): 652–665. doi: 10.12000/JR22044.
[81] KANG Yuzhuo, WANG Zhirui, ZUO Haoyu, et al. ST-Net: Scattering topology network for aircraft classification in high-resolution SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5202117. doi: 10.1109/TGRS.2023.3236987.
[82] PAN Zongxu, QIU Xiaolan, HUANG Zhongling, et al. Airplane recognition in TerraSAR-X images via scatter cluster extraction and reweighted sparse representation[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(1): 112–116. doi: 10.1109/LGRS.2016.2628162.
[83] WANG Ruyi, ZHANG Hanqing, HAN Bing, et al. Multiangle SAR dataset construction of aircraft targets based on angle interpolation simulation[J]. Journal of Radars, 2022, 11(4): 637–651. doi: 10.12000/JR21193.
[84] AHMADIBENI A, JONES B, BOROOSHAK L, et al. Automatic target recognition of aerial vehicles based on synthetic SAR imagery using hybrid stacked denoising auto-encoders[C]. SPIE 11393, Algorithms for Synthetic Aperture Radar Imagery XXVII, 2020: 71–82.
[85] LIU Lei, PAN Zongxu, QIU Xiaolan, et al. SAR target classification with CycleGAN transferred simulated samples[C]. 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018: 4411–4414.
[86] AUER S, BAMLER R, and REINARTZ P. RaySAR-3D SAR simulator: Now open source[C]. 2016 IEEE International Geoscience and Remote Sensing Symposium, Beijing, China, 2016: 6730–6733.
[87] ZHU Junyan, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]. 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017: 2242–2251.
[88] LIU Wenlong, ZHAO Yuejin, LIU Ming, et al. Generating simulated SAR images using generative adversarial network[C]. SPIE 10752, Applications of Digital Image Processing XLI, San Diego, USA, 2018: 32–42.
[89] QI Bin. OpenSARSim[EB/OL]. https://sourceforge.net/projects/opensarsimongpu/, 2007.
[90] CHEN Jie, HUANG Zhixiang, and XIA Runfan. Large-scale multi-class SAR image target detection dataset-1.0[J/OL]. Journal of Radars, https://radars.ac.cn/web/data/getData?dataType=MSAR, 2022.
[91] AFRL and DARPA. Sensor data management system website, MSTAR overview[EB/OL]. https://www.sdms.afrl.af.mil/index.php?collection=mstar, 2022.
[92] LEWIS B, SCARNATI T, SUDKAMP E, et al. A SAR dataset for ATR development: The synthetic and measured paired labeled experiment (SAMPLE)[C]. SPIE 10987, Algorithms for Synthetic Aperture Radar Imagery XXVI, Baltimore, USA, 2019: 39–54.
[93] HAZLETT M, ANDERSH D J, LEE S W, et al. XPATCH: A high-frequency electromagnetic scattering prediction code using shooting and bouncing rays[C]. SPIE 2469, Targets and Backgrounds: Characterization and Representation, Orlando, USA, 1995: 266–275.
[94] ANDERSH D, MOORE J, KOSANOVICH S, et al. Xpatch 4: The next generation in high frequency electromagnetic modeling and simulation software[C]. Record of the IEEE 2000 International Radar Conference, Alexandria, USA, 2000: 844–849.
[95] AHMADIBENI A, BOROOSHAK L, JONES B, et al. Aerial and ground vehicles synthetic SAR dataset generation for automatic target recognition[C]. SPIE 11393, Algorithms for Synthetic Aperture Radar Imagery XXVII, California, United States, 2020: 96–107.
[96] AHMADIBENI A, JONES B, SMITH D, et al. Dynamic transfer learning from physics-based simulated SAR imagery for automatic target recognition[C]. 3rd International Conference on Dynamic Data Driven Application Systems, Boston, USA, 2020: 152–159.
[97] JONES B, AHMADIBENI A, and SHIRKHODAIE A. Physics-based simulated SAR imagery generation of vehicles for deep learning applications[C]. SPIE 11511, Applications of Machine Learning, California, United States, 2020: 162–173.
[98] SHIRKHODAIE A. IRIS-intelligent robotics interface systems[R]. Tennessee State University, Department of Mechanical and Manufacturing Engineering, 2006.
[99] SZEGEDY C, LIU Wei, JIA Yangqing, et al. Going deeper with convolutions[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 1–9.
[100] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. https://arxiv.org/abs/1409.1556v6, 2015.
[101] MA Ningning, ZHANG Xiangyu, ZHENG Haitao, et al. ShuffleNet V2: Practical guidelines for efficient CNN architecture design[C]. The 15th European Conference on Computer Vision, Munich, Germany, 2018: 116–131.
[102] TAN Mingxing and LE Q V. EfficientNet: Rethinking model scaling for convolutional neural networks[EB/OL]. https://arxiv.org/abs/1905.11946v5, 2020.
[103] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
[104] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84–90. doi: 10.1145/3065386.
[105] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 2818–2826.
[106] JIN Yaqiu. Multimode remote sensing intelligent information and target recognition: Physical intelligence of microwave vision[J]. Journal of Radars, 2019, 8(6): 710–716. doi: 10.12000/JR19083.
[107] QIU Xiaolan, JIAO Zekun, YANG Zhenli, et al. Key technology and preliminary progress of microwave vision 3D SAR experimental system[J]. Journal of Radars, 2022, 11(1): 1–19. doi: 10.12000/JR22027.
[108] YU Wenxian. Automatic target recognition from an engineering perspective[J]. Journal of Radars, 2022, 11(5): 737–752. doi: 10.12000/JR22178.
[109] XING Mengdao, XIE Yiyuan, GAO Yuexin, et al. Electromagnetic scattering characteristic extraction and imaging recognition algorithm: A review[J]. Journal of Radars, 2022, 11(6): 921–942. doi: 10.12000/JR22232.
[110] HUANG Zhongling, YAO Xiwen, and HAN Junwei. Progress and perspective on physically explainable deep learning for synthetic aperture radar image interpretation[J]. Journal of Radars, 2022, 11(1): 107–125. doi: 10.12000/JR21165.
[111] DATCU M, HUANG Zhongling, ANGHEL A, et al. Explainable, physics-aware, trustworthy artificial intelligence: A paradigm shift for synthetic aperture radar[J]. IEEE Geoscience and Remote Sensing Magazine, 2023, 11(1): 8–25. doi: 10.1109/MGRS.2023.3237465. [112] MISHRA P. Explainable AI Recipes: Implement Solutions to Model Explainability and Interpretability with Python[M]. Berkeley: Apress, 2023: 17–249. [113] HAQUE A K M B, ISLAM A K M N, and MIKALEF P. Explainable Artificial Intelligence (XAI) from a user perspective: A synthesis of prior literature and problematizing avenues for future research[J]. Technological Forecasting and Social Change, 2023, 186: 122120. doi: 10.1016/j.techfore.2022.122120. [114] ZHANG Quanshi, WANG Xin, WU Yingnian, et al. Interpretable CNNs for object classification[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(10): 3416–3431. doi: 10.1109/TPAMI.2020.2982882. [115] LUO Ru, XING Jin, CHEN Lifu, et al. Glassboxing deep learning to enhance aircraft detection from SAR imagery[J]. Remote Sensing, 2021, 13(18): 3650. doi: 10.3390/rs13183650. [116] KAWAUCHI H and FUSE T. SHAP-Based interpretable object detection method for satellite imagery[J]. Remote Sensing, 2022, 14(9): 1970. doi: 10.3390/rs14091970. [117] GUO Xianpeng, HOU Biao, REN Bo, et al. Network pruning for remote sensing images classification based on interpretable CNNs[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5605615. doi: 10.1109/TGRS.2021.3077062. [118] BELLONI C, BALLERI A, AOUF N, et al. Explainability of deep SAR ATR through feature analysis[J]. IEEE Transactions on Aerospace and Electronic Systems, 2021, 57(1): 659–673. doi: 10.1109/TAES.2020.3031435. [119] 吕小玲, 仇晓兰, 俞文明, 等. 基于无监督域适应的仿真辅助SAR目标分类方法及模型可解释性分析[J]. 雷达学报, 2022, 11(1): 168–182. doi: 10.12000/JR21179.LYU Xiaoling, QIU Xiaolan, YU Wenming, et al. 
Simulation-assisted SAR target classification based on unsupervised domain adaptation and model interpretability analysis[J]. Journal of Radars, 2022, 11(1): 168–182. doi: 10.12000/JR21179. [120] 郭炜炜, 张增辉, 郁文贤, 等. SAR图像目标识别的可解释性问题探讨[J]. 雷达学报, 2020, 9(3): 462–476. doi: 10.12000/JR20059.GUO Weiwei, ZHANG Zenghui, YU Wenxian, et al. Perspective on explainable SAR target recognition[J]. Journal of Radars, 2020, 9(3): 462–476. doi: 10.12000/JR20059. [121] HU Mingzhe, ZHANG Jiahan, MATKOVIC L, et al. Reinforcement learning in medical image analysis: Concepts, applications, challenges, and future directions[J]. Journal of Applied Clinical Medical Physics, 2023, 24(2): e13898. doi: 10.1002/acm2.13898. [122] 杜兰, 王梓霖, 郭昱辰, 等. 结合强化学习自适应候选框挑选的SAR目标检测方法[J]. 雷达学报, 2022, 11(5): 884–896. doi: 10.12000/JR22121.DU Lan, WANG Zilin, GUO Yuchen, et al. Adaptive region proposal selection for SAR target detection using reinforcement learning[J]. Journal of Radars, 2022, 11(5): 884–896. doi: 10.12000/JR22121. [123] LI Bin, CUI Zongyong, CAO Zongjie, et al. Incremental learning based on anchored class centers for SAR automatic target recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5235313. doi: 10.1109/TGRS.2022.3208346. [124] WANG Li, YANG Xinyao, TAN Haoyue, et al. Few-shot class-incremental SAR target recognition based on hierarchical embedding and incremental evolutionary network[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5204111. doi: 10.1109/TGRS.2023.3248040.
-