
面向SAR图像解译的物理可解释深度学习技术进展与探讨

黄钟泠 姚西文 韩军伟

杨小鹏, 马忠杰, 钟世超, 等. 基于遗传算法的无人机载穿墙三维SAR航迹规划方法[J]. 雷达学报(中英文), 2024, 13(4): 731–746. doi: 10.12000/JR24068
引用本文: 黄钟泠, 姚西文, 韩军伟. 面向SAR图像解译的物理可解释深度学习技术进展与探讨[J]. 雷达学报, 2022, 11(1): 107–125. doi: 10.12000/JR21165
YANG Xiaopeng, MA Zhongjie, ZHONG Shichao, et al. Trajectory planning method for UAV-through-the-wall 3D SAR based on a genetic algorithm[J]. Journal of Radars, 2024, 13(4): 731–746. doi: 10.12000/JR24068
Citation: HUANG Zhongling, YAO Xiwen, and HAN Junwei. Progress and perspective on physically explainable deep learning for synthetic aperture radar image interpretation[J]. Journal of Radars, 2022, 11(1): 107–125. doi: 10.12000/JR21165


DOI: 10.12000/JR21165
基金项目: 国家自然科学基金(62101459),中国博士后科学基金(BX2021248),中央高校基本科研业务费专项资金(G2021KY05104)
    作者简介:

    黄钟泠(1994–),女,重庆人,2020年获中国科学院大学博士学位,现为西北工业大学自动化学院准聘副教授,硕士生导师。主要研究方向为SAR图像解译、深度学习和可解释人工智能

    姚西文(1988–),男,山东人,2016年获西北工业大学博士学位,现为西北工业大学自动化学院副研究员,博士生导师。主要研究方向为计算机视觉、遥感图像处理、细粒度图像分类和目标识别

    韩军伟(1977–),男,陕西人,2003年获西北工业大学博士学位,现为西北工业大学自动化学院教授,博士生导师。主要研究方向为计算机视觉与脑成像分析

    通讯作者:

    黄钟泠 huangzhongling@nwpu.edu.cn

  • 责任主编:计科峰 Corresponding Editor: JI Kefeng
  • 中图分类号: TN957.51

Progress and Perspective on Physically Explainable Deep Learning for Synthetic Aperture Radar Image Interpretation

Funds: The National Natural Science Foundation of China (62101459), China Postdoctoral Science Foundation (BX2021248), Fundamental Research Funds for the Central Universities (G2021KY05104)
  • 摘要:

    深度学习技术近年来在合成孔径雷达(SAR)图像解译领域发展迅速,但当前基于数据驱动的方法通常忽视了SAR潜在的物理特性,预测结果高度依赖训练数据,甚至违背了物理认知。深层次地整合理论驱动和数据驱动的方法在 SAR 图像解译领域尤为重要,数据驱动的方法擅长从大规模数据中自动挖掘新模式,对物理过程能起到有效的补充;反之,在数据驱动方法中加入可解释的物理模型能提升深度学习算法的透明度,并降低模型对标记样本的依赖。该文提出在SAR图像解译应用领域发展物理可解释的深度学习技术,从SAR信号、特性理解到图像语义和应用场景等多个维度开展研究,并结合物理机器学习提出了几种在SAR解译中融合物理模型和深度学习模型的研究思路,逐步发展可学习且可解释的智能化SAR图像解译新范式。在此基础上,该文回顾了近两三年在SAR图像解译相关领域中整合数据驱动深度学习和理论驱动物理模型的相关工作,主要聚焦信号特性理解和图像语义理解两大方向,并结合研究现状和其他领域的相关研究探讨了目前面临的挑战和未来可能的发展方向。

     

  • 穿墙雷达是一种利用低频电磁波穿透特性探测建筑物结构和墙后目标的技术,可不受障碍物遮挡影响,对墙后遮蔽空间信息进行穿透感知[1-3]。目前穿墙雷达的工作体制主要包括多发多收(Multiple-Input Multiple-Output, MIMO)雷达和合成孔径雷达(Synthetic Aperture Radar, SAR)。MIMO穿墙雷达常用于室内目标的定位与跟踪,具有较高时效性;穿墙SAR具有观测范围广、分辨率高等优势,常用于建筑物结构布局重构和室内静目标成像[4]。穿墙SAR通常利用超宽带信号提供距离向高分辨率,利用水平方位的合成孔径提供方位向高分辨率[5]。三维穿墙SAR成像能直观反映出目标高度维信息,在灾害救援、城市反恐等领域具有重要应用,近年来受到广泛关注[6-9]。目前,穿墙雷达主要使用车载或手持平台。在城市复杂高层楼宇场景中,传统车载穿墙雷达通常在地面上运行,无法胜任不可达高楼层场景,例如高层火灾救援、高楼反恐作战等;手持固定平台MIMO穿墙雷达观测范围受限问题更为突出,无法对大规模场景进行探测与成像。相比之下,将无人机的灵活性与穿墙雷达的穿透能力相结合的无人机载穿墙雷达(如图1所示),可不受高度限制,对高层建筑进行穿透探测与成像,从而有效解决传统穿墙雷达探测高度受限的问题,但目前还鲜有利用无人机载穿墙雷达在城市建筑开展穿透探测的研究报道。

    图  1  无人机载穿墙雷达高层建筑探测场景示意图
    Figure  1.  Schematic diagram of detection scenario for UAV-TWR in high-rise buildings

    在传统直视场景三维层析SAR中,无人机通过水平方向飞行实现方位向合成孔径,不同高度多航过飞行实现高度向合成孔径。目前已有的无人机载三维层析SAR通常采用多基线飞行方式(Multi Baseline SAR, MB-SAR)[10],即以多个密集的水平航过扫描相同区域进行成像。这种飞行模式下,为避免高度向空域欠采样导致的栅瓣效应(或高程模糊),相邻基线间距需要小于半波长;由于电池容量限制,无人机飞行续航时间有限,无法满足城市楼宇大范围穿墙成像的需求[11]。而使用大间距基线扫描以满足更大探测区域时,成像质量会因栅瓣效应而严重恶化[12]。针对上述问题,可使用压缩感知技术替代传统的高度向脉冲压缩,在一定程度上缓解高度向栅瓣严重的问题,但面临稀疏恢复计算量大的难题[13]。针对MIMO雷达中稀疏阵列产生的栅瓣问题,通常的解决办法是使用阵列设计与优化方法,通过非均匀化阵元间距起到栅瓣抑制的效果[14]。Feng等人[15]通过复合算法迭代优化阵元位置,实现稀疏MIMO阵列低栅瓣高分辨率成像。因此,机载穿墙SAR可借鉴MIMO雷达天线阵列的设计思路,对无人机进行任意形态航迹规划,使其轨迹非均匀化,抑制高度向周期性栅瓣能量,从而达到最优的三维成像效果,克服成像质量与成像效率无法兼顾的问题[16]。

    对于视距场景无人机全局航迹规划问题而言,通常采取的方法是根据飞行距离[17]、成像质量[18]等设置代价函数,将航迹规划问题转化为代价函数最小的优化问题进行求解[19]。而在具体的求解方法上,大致可分为两类:传统优化算法和智能优化算法[20]。

    传统优化算法包括凸松弛、整数规划等,通常需要对问题进行近似或分割,以转化为凸优化问题进行求解。传统优化算法优点在于可解释性强,但存在局部最优和初始点敏感等问题。Lahmeri等人[21]使用混合整数非线性规划算法,并加以逐次凸近似算法进行优化问题求解,进行实时任务规划,在兼顾SAR成像质量条件下最大化无人机载雷达对地面的覆盖率,但该方法仍使用MB-SAR飞行方式,仅改变基线间距,依然存在飞行效率较低的问题。Drozdowicz等人[18]系统阐述了无人机载SAR三维成像的场景与原理,并基于Nelder-Mead优化算法,提出了一种降低三维SAR成像旁瓣能量的航迹规划方法,然而其算法核心是在“Z”字型轨迹上进行扭曲与优化,成像质量提升有限,受随机初始值影响较大,也未进行实测数据验证,在本文中记为“Z-SAR”。

    智能优化算法主要包括粒子群算法(Particle Swarm Optimization, PSO)、遗传算法(Genetic Algorithm, GA)和模拟退火算法(Simulated Annealing, SA)等,这类启发式智能算法通常是归纳自然界及动物活动规律所得,对非线性、非凸问题的适应性和鲁棒性更好,在搜索复杂问题最优解中有着广泛的应用,但也存在收敛速度慢、超参数选择困难等问题[22,23]。Brown等人[17]提出了一种基于多目标PSO算法的航迹规划方法,实现了最大化搜索区域和最小化燃料消耗之间的权衡,然而该研究的应用局限于海上监视场景,无法在城市楼宇穿墙成像场景中应用。王楚涵等人[24]结合使用PSO和GA,提出了一种机载分布式MIMO雷达的航迹规划算法,兼顾雷达监视区域大小和性能,该研究主要实现多无人机的平面布站,无人机抵达指定位置后便不再移动,不适用于单无人机的穿墙三维SAR场景。

    上述航迹规划的飞行模式较为单一,基本局限为平面直线轨迹,尚无针对无人机载穿墙雷达场景航迹规划方法研究。因此,本文针对无人机探测效率与成像质量之间的矛盾,创新性地将航迹规划应用于无人机载穿墙SAR成像领域,提出基于遗传算法的无人机载穿墙三维SAR航迹规划方法,实现无人机飞行效率与穿墙SAR成像质量的平衡。根据不同飞行模式设立相应基因型,经过遗传算法不断迭代,得到不同飞行模式下最优的航迹,并通过仿真与实测数据对所提方法进行验证。仿真与实验数据表明:在相同飞行距离的情况下,本航迹规划方法SAR成像质量明显优于传统多基线航迹穿墙SAR成像质量,其中斜线飞行模式可兼顾最佳的探测效率和最优的成像质量。

    当雷达空间阵列间距大于半波长时,空间欠采样将导致栅瓣效应,影响成像效果。假设如图2所示平面场景,为简化栅瓣推导,设置两个竖直排布、间距为$d$的雷达,目标与雷达的水平距离为$D$,目标与两雷达的竖直距离分别为$x_1$和$x_2$,栅瓣与目标的竖直距离为$\Delta x$。$\Delta R_1$表示“雷达1-目标”间距和“雷达1-栅瓣”间距的差值,$\Delta R_2$同理,如图2中粉色实线所示。

    图  2  栅瓣场景示意图
    Figure  2.  Schematic diagram of grating scene

    对于上述场景,SAR成像过程是两个雷达各自成像结果的相干叠加,故而栅瓣所处位置应满足两个雷达在该点的辐角相等,即满足式(1):

    $$ {\rm e}^{{\rm j}\frac{4\pi \Delta R_1}{\lambda}} = {\rm e}^{{\rm j}\frac{4\pi \Delta R_2}{\lambda}} \tag{1} $$

    设置参数αβ使其满足式(2):

    $$ \left. \begin{aligned} \alpha &= \frac{2\pi(\Delta R_1 + \Delta R_2)}{\lambda} \\ \beta &= \frac{2\pi(\Delta R_1 - \Delta R_2)}{\lambda} \end{aligned} \right\} \tag{2} $$

    将式(1)根据欧拉公式展开,按三角函数和差化积公式合并,并以式(2)化简,可得式(3):

    $$ \sin\beta \left[ {\rm j}\cos\alpha - \sin\alpha \right] = 0 \tag{3} $$

    其中,中括号部分恒不为0,因此当且仅当式(4)成立时,式(3)成立。

    $$ \sin\beta = 0 \tag{4} $$

    代入参数,可得式(5):

    $$ \Delta R_1 - \Delta R_2 = \sqrt{D^2 + x_1^2} - \sqrt{D^2 + x_2^2} + \sqrt{(D - \Delta D)^2 + (\Delta x - x_2)^2} - \sqrt{(D - \Delta D)^2 + (\Delta x - x_1)^2} \tag{5} $$

    当满足$D \gg x_1, x_2, \Delta D$时,式(5)可化简为

    $$ \Delta R_1 - \Delta R_2 = \frac{\Delta x \, d}{D} \tag{6} $$

    将式(6)代入式(4)中,化简得到栅瓣在竖直方向的位置:

    $$ \Delta x = \frac{k \lambda D}{2d}, \quad k = \pm 1, \pm 2, \cdots \tag{7} $$

    由式(7)可知,等间距空间采样时,若间距大于半波长,则会导致成像结果出现周期性的栅瓣,且雷达间距d越大,栅瓣间距就越小,栅瓣效应就越严重。因此,从阵列设计的角度而言,非等间距的随机采样能够在兼顾效率的同时抑制栅瓣的形成。在这种随机采样的模式下,雷达回波无法在非目标位置同相叠加,从而达到抑制栅瓣的目的。对于机载雷达而言,应该选取某种随机的飞行轨迹,以此同时满足高飞行效率和高成像质量。
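    式(7)的栅瓣位置与雷达间距的关系可用如下脚本直观验证(波长与距离为便于说明的假设值,非文中实测参数):

```python
import numpy as np

def grating_lobe_positions(wavelength, D, d, k_max=3):
    """式(7): Δx = kλD/(2d), k = ±1, ±2, ..."""
    ks = np.arange(1, k_max + 1)
    dx = ks * wavelength * D / (2 * d)
    return np.concatenate([-dx[::-1], dx])

wavelength = 0.1   # 约 3 GHz 对应的波长(m, 假设值)
D = 10.0           # 目标与雷达的水平距离(m, 假设值)
for d in (0.2, 0.625):
    spacing = wavelength * D / (2 * d)
    print(f"d = {d} m: 一阶栅瓣位于 ±{spacing:.2f} m")
```

    输出显示雷达间距$d$越大,栅瓣间距越小,与式(7)的结论一致。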

    无人机载穿墙雷达的飞行方式会显著影响目标区域的成像质量,需要在有限的飞行距离下对三维成像的质量进行定量评估,最终结合雷达成像质量与飞行距离构造代价函数。峰值栅瓣比(Peak-to-Grating Lobe Ratio, PGLR)定义为最大栅瓣幅值与主瓣幅值的比值[25],通常用dB表示,常用于反映成像结果的栅瓣强弱,方便起见在本文中记为$R_{\rm pgl}$,其数学形式如式(8)所示:

    $$ R_{\rm pgl} = 20 \lg \left( \frac{\max(P_{\rm gl})}{\max(P_{\rm ml})} \right) \tag{8} $$

    其中,$P_{\rm gl}$代表栅瓣的点集,$P_{\rm ml}$代表主瓣的点集,因此$R_{\rm pgl}$越小说明成像质量越高。SAR图像的三维分辨率虽然也常用于评估成像质量,但主要受方位向和高度向的孔径大小影响,而不同飞行方式下的方位和高度向范围大致相同,即不同飞行轨迹的分辨率近似一致,因此不将分辨率作为评判标准,重点考虑栅瓣对成像质量的影响。
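    式(8)的峰值栅瓣比计算可按如下方式实现。此处为一维幅度图的示意,主瓣/栅瓣点集的划分方式(以主瓣邻域掩码区分)为假设:

```python
import numpy as np

def pglr_db(image, mainlobe_mask):
    """式(8): image 为成像幅度; mainlobe_mask 标记主瓣点集, 其余视为栅瓣点集"""
    p_ml = image[mainlobe_mask].max()
    p_gl = image[~mainlobe_mask].max()
    return 20 * np.log10(p_gl / p_ml)

img = np.array([0.05, 0.1, 1.0, 0.1, 0.3, 0.05])  # 一维示意: 主瓣峰 1.0, 最强栅瓣 0.3
mask = np.zeros(img.size, dtype=bool)
mask[1:4] = True                                  # 主瓣邻域(划分方式为假设)
print(f"Rpgl = {pglr_db(img, mask):.2f} dB")      # 约 -10.46 dB
```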

    遗传算法对代价函数高度依赖,因此合理地构造飞行距离和成像质量的联合代价函数至关重要。为符合无人机实际情况,代价函数中飞行距离部分设计为分段函数形式,如式(9)所示:

    $$ C(L) = \begin{cases} w_1 L, & L < L_{\max} \\ w_1 L_{\max} + w_2 (L - L_{\max}), & L \ge L_{\max} \end{cases} \tag{9} $$

    其中,$C$为飞行距离的代价函数,$L$为无人机飞行距离,$L_{\max}$为期望的最大飞行距离,设置为80 m;权重值$w_1, w_2$满足$w_1 \ll w_2$。故总代价函数$J$包含两部分,分别为飞行距离和雷达成像结果$R_{\rm pgl}$,可表示为

    $$ J(L, R_{\rm pgl}) = C(L) + w_3 R_{\rm pgl} \tag{10} $$

    其中,$w_3$为评估$R_{\rm pgl}$的超参数权重,具体取值上应满足$w_3 > w_1$,即图像质量对总代价函数的影响占比最大。无人机飞行距离和机载穿墙雷达的成像结果并非相互独立,$R_{\rm pgl}$是依赖于无人机载雷达飞行轨迹$L$的隐式函数,因此无人机载穿墙SAR航迹规划问题可描述为最优化问题:

    $$ \min_{L} J(L, R_{\rm pgl}) \tag{11} $$

    由飞行轨迹获取图像的$R_{\rm pgl}$,并通过最小化联合总代价函数$J(L, R_{\rm pgl}) = C(L) + w_3 R_{\rm pgl}$,获取当前评估准则下的最优飞行路径。
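    以文中给出的权重$(w_1, w_2, w_3) = (0.02, 0.43, 0.55)$与$L_{\max} = 80$ m为例,式(9)、式(10)的代价函数可实现为:

```python
# 式(9): 飞行距离的分段代价(权重与 Lmax 取文中数值)
def distance_cost(L, L_max=80.0, w1=0.02, w2=0.43):
    if L < L_max:
        return w1 * L
    return w1 * L_max + w2 * (L - L_max)

# 式(10): 联合总代价 J = C(L) + w3 * Rpgl
def total_cost(L, r_pgl, w3=0.55):
    return distance_cost(L) + w3 * r_pgl

print(total_cost(80.0, -8.64))   # 约 -3.15
print(total_cost(56.5, -11.60))  # 约 -5.25
```

    代入表2中UMB-SAR($L = 80$ m, $R_{\rm pgl} = -8.64$ dB)与OF-SAR($L = 56.5$ m, $R_{\rm pgl} = -11.60$ dB)的数值,可复现表2中−3.15与−5.25的代价函数值。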

    针对上述无人机载穿墙雷达航迹规划问题,局部最优化算法和全局启发式搜索算法均可进行求解,但是局部最优化算法需要严格的数学理论推导,求解较为困难,同时容易陷入目标函数局部极值点,无法找到全局最优解。全局启发式优化算法理论较为简单,具备目标函数的全局搜索能力,本文以遗传算法为例,开展基于遗传算法的无人机载SAR航迹规划方法研究。遗传算法起源于自然界中种群的遗传与进化,是一种启发式的搜索算法,具有不易过早陷入局部极值的优点,在许多非凸优化问题的求解中有广泛应用。针对无人机载穿墙雷达的航迹规划问题,本节提出了3种可行的飞行模式,并结合遗传算法,对建立的目标函数进行优化,以获取在特定飞行模式下的最优飞行航迹,对穿墙雷达目标进行成像。

    3.2.1   无人机载穿墙雷达飞行模式基因编码

    (1) 非等间距水平多基线飞行模式

    对于传统机载多基线雷达而言,为了具有良好的栅瓣抑制效果,其相邻基线间距通常相等且小于雷达半波长。当均匀间距大于半波长时,会产生栅瓣,严重影响三维层析SAR的成像效果。针对上述问题,本节提出一种基于遗传算法优化的非等间距水平多基线SAR (Unequally-spaced Multi Baseline SAR, UMB-SAR)飞行模式,通过改变水平基线高度向位置实现栅瓣抑制。虽然这种飞行方式在水平方位向仍有较大的航迹冗余,但无人机飞行方式简单,易于操控实现。

    假设场景的高度向范围为0~5 m,将每个整体的航迹视为一个独立的个体,则个体的基因型可以表示为长度为500的0-1向量$\boldsymbol{G}$,即图3左侧绿色框所示的向量。其中,深绿色位置值为1,浅绿色位置值为0,500个位置均匀对应5 m的高度向范围,每个位置对应0.01 m的高度。将无人机经过的高度位置设为1,没有经过的位置设为0。因此,当基因型$\boldsymbol{G}$中第$i$个位置为深色时,代表该处取值为1,即高度$0.01i$ m处有一条水平的航过。

    图  3  UMB-SAR基因型
    Figure  3.  UMB-SAR genotype

    为了保证飞行轨迹具有足够高的效率,应确保基线最短间距大于半波长,即对于基因型的下标而言,任意两个“1”的下标差应大于固定值$c$,如式(12)所示:

    $$ \forall i, j \in [1, 500] \ {\text{where}} \ G_i = G_j = 1, \ {\text{then}} \ |i - j| > c \tag{12} $$
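    满足式(12)最小间距约束的基因型可通过“随机生成-拒绝”的方式得到,以下为一种示意实现(`min_gap` 取5对应0.01 m栅格下约半波长的间距,为假设取值):

```python
import random

def random_genotype(length=500, n_pass=8, min_gap=5, rng=None):
    """随机生成满足式(12)最小间距约束的 0-1 基因型(拒绝采样)"""
    rng = rng or random.Random(0)
    while True:
        idx = sorted(rng.sample(range(length), n_pass))
        if all(b - a > min_gap for a, b in zip(idx, idx[1:])):
            gene = [0] * length
            for i in idx:
                gene[i] = 1
            return gene

g = random_genotype()
print([i for i, v in enumerate(g) if v == 1])  # 8 条水平航过的高度下标
```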

    (2) 水平-竖直交叉飞行模式

    UMB-SAR仅改变了水平方位向飞行的基线间距,其本质上仍是水平多航过飞行,没有考虑竖直方向上的飞行。故在UMB-SAR的基础上加入竖直飞行,提出交叉飞行SAR (Cross Flight SAR, CF-SAR)模式,通过竖直方向的航过,以二维网格的形式进一步抑制高度向栅瓣。

    CF-SAR的基因型与UMB-SAR类似,同样设置为0-1向量形式。设高度向长度为5 m,方位向长度为10 m,为保证同等长度的基因对应同样的物理长度,故将基因型长度由500扩展为1500的0-1向量$\boldsymbol{G}$,其中前一部分(1~500)仍表征水平方向飞行,后一部分(501~1500)则表征竖直方向的飞行,如图4所示。其中绿色框图部分代表水平飞行的基因型,红色框图代表竖直飞行的基因型。当基因型$\boldsymbol{G}$中第$i$个位置为深色时,若$i$小于等于500,即图中深绿色,表示高度向$0.01i$ m处有一条水平的基线;若$i$大于500,即图中深红色,则表示$0.01(i-500)$ m处有一条竖直的基线。

    图  4  CF-SAR基因型
    Figure  4.  CF-SAR genotype

    同样地,为保证航过最短间距大于半波长,有类似式(12)的限制条件,如式(13)所示,其中$c_1$和$c_2$为固定值:

    $$ \left. \begin{aligned} &\forall i, j \in [1, 500] \ {\text{where}} \ G_i = G_j = 1, \ {\text{then}} \ |i - j| > c_1 \\ &\forall i, j \in [501, 1500] \ {\text{where}} \ G_i = G_j = 1, \ {\text{then}} \ |i - j| > c_2 \end{aligned} \right\} \tag{13} $$

    (3) 三维斜线飞行模式

    直线飞行模式虽然轨迹简单易于操控,但仍存在航迹冗余,若将成像模式调整为三维斜线飞行SAR (Oblique Flight SAR, OF-SAR)模式,相同飞行距离条件下,能达到更优的成像质量。

    若总航过数目为$n$,则其基因型是行数为$n+1$、列数为3的二维矩阵。基因矩阵每一行对应$n+1$个路径点之一的三维XYZ坐标,其航迹为这$n+1$个点依次连接而成,如图5所示。为避免相邻两点之间的基线太短,本研究对单个基线的长度添加限制,必须大于最短距离$l_{\min}$,如式(14)所示:

    图  5  OF-SAR基因型
    Figure  5.  OF-SAR genotype
    $$ \forall i, \ \left| \boldsymbol{P}_{i+1} - \boldsymbol{P}_i \right| > l_{\min} \tag{14} $$

    其中,$i$为途径点下标,$\boldsymbol{P}$为途径点的三维向量,$|\boldsymbol{A}|$代表向量$\boldsymbol{A}$的模。另外,为避免航迹中出现角度过大的转向而耗费时间,在航迹途径点中加入了角度限制,即相邻两条基线夹角应大于$\theta$,如式(15)所示:

    $$ \forall i = 1, 2, \cdots, s-1, \quad \arccos \left( \frac{(\boldsymbol{P}_{i+1} - \boldsymbol{P}_i) \cdot (\boldsymbol{P}_{i+2} - \boldsymbol{P}_{i+1})}{\left| \boldsymbol{P}_{i+1} - \boldsymbol{P}_i \right| \left| \boldsymbol{P}_{i+2} - \boldsymbol{P}_{i+1} \right|} \right) > \theta \tag{15} $$
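    式(14)与式(15)的途径点约束可按如下方式校验(按式(15)的原文形式实现;阈值 `l_min` 与 `theta` 为假设取值):

```python
import math

def valid_trajectory(points, l_min=1.0, theta=math.pi / 3):
    """校验途径点序列: 式(14)基线长度约束与式(15)相邻基线夹角约束"""
    segs = [tuple(b[k] - a[k] for k in range(3)) for a, b in zip(points, points[1:])]
    if any(math.hypot(*s) <= l_min for s in segs):      # 式(14): |P_{i+1}-P_i| > l_min
        return False
    for u, v in zip(segs, segs[1:]):
        dot = sum(x * y for x, y in zip(u, v))
        cos_a = dot / (math.hypot(*u) * math.hypot(*v))
        ang = math.acos(max(-1.0, min(1.0, cos_a)))
        if ang <= theta:                                # 式(15): 夹角须大于 θ
            return False
    return True

print(valid_trajectory([(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)]))  # True
print(valid_trajectory([(0, 0, 0), (2, 0, 0), (4, 0, 0)]))             # False
```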
    3.2.2   选择与杂交

    在选择与杂交前,需要对这一代所有个体计算代价函数,并按代价函数降序排列。若每一代有$M$个个体,则只允许其中$K$个个体参与繁殖($K < M$):只有引入优胜劣汰的过程,整体的基因型才会得到进化。筛选比例$K/M$的选择对遗传算法的收敛性影响较大:当$K/M$太大时,选择性会下降,优化速度与收敛速度也随之下降;当$K/M$太小时,种群基因型容易过于单一,提前收敛而陷入局部最优值,因此需要根据情况选择合理的筛选比例。本文取$M = 100$, $K = 10$。

    在具体从M个个体中选择K个个体时,本研究采用代价加权的随机选择方式,即每个个体都有概率被选择留下子代,而其代价函数越小,被选择的概率越高,如式(16)所示:

    $$ P_i = \frac{{\rm e}^{-\lambda J_i}}{\displaystyle\sum_{k=1}^{M} {\rm e}^{-\lambda J_k}} \tag{16} $$

    其中,$P_i$表示个体$i$被选择的概率,$J_i$是个体$i$的代价函数,$\lambda$为调节参数,取值为0.3,用于控制代价对选择概率的影响程度。与直接选择概率最高的前$K$个个体相比,该方式更符合自然界中的选择过程,能更大限度地避免基因型固化。
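    式(16)的代价加权随机选择可借助标准库实现如下。是否允许重复选择同一亲代文中未说明,此处按有放回抽样示意,代价数值亦为示例:

```python
import math
import random

def select_parents(costs, K=10, lam=0.3, rng=None):
    """式(16): P_i = e^{-λJ_i} / Σ e^{-λJ_k}, 按该概率抽取 K 个亲代下标"""
    rng = rng or random.Random(0)
    weights = [math.exp(-lam * J) for J in costs]  # 代价越小, e^{-λJ} 越大
    return rng.choices(range(len(costs)), weights=weights, k=K)

costs = [-3.1, -2.0, -2.8, -1.5]  # 4 个个体的代价函数(示例数值)
print(select_parents(costs, K=3))
```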

    选择之后是杂交步骤,用于生成子代。杂交函数是遗传算法能否学习到遗传特性、能否快速收敛的关键步骤。一般地,杂交函数的输入是从K个已选择亲代中随机抽取两个个体,杂交函数将其基因型重组后,输出一个或多个新个体,当新个体不满足式(14)或式(15)时,需要重新生成。

    (1) UMB-SAR

    采用一种“随机位点交叉互换”的方法,即先在500个下标中随机选取一个下标$r$作为交换位点,设亲代基因型为$\boldsymbol{G}_{\rm a}$和$\boldsymbol{G}_{\rm b}$,子代基因型为$\boldsymbol{G}$,则$\boldsymbol{G}$可由式(17)表示:

    $$ \left. \begin{aligned} \boldsymbol{G}[1\!:\!r] &= \boldsymbol{G}_{\rm a}[1\!:\!r] \\ \boldsymbol{G}[(r+1)\!:\!500] &= \boldsymbol{G}_{\rm b}[(r+1)\!:\!500] \end{aligned} \right\} \tag{17} $$

    式(17)表示子代的基因型$\boldsymbol{G}$在位点$r$及以前与亲代$\boldsymbol{G}_{\rm a}$的对应位点相同,在位点$r$以后与亲代$\boldsymbol{G}_{\rm b}$的对应位点相同。
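    式(17)的随机位点交叉互换可实现为如下示意(按文中要求,若子代违反最小间距等约束需重新生成,此处省略该检查;示例基因型为缩短后的玩具规模):

```python
import random

def crossover(Ga, Gb, rng=None):
    """式(17): 子代在位点 r 及以前取自 Ga, 其后取自 Gb"""
    rng = rng or random.Random(0)
    r = rng.randrange(1, len(Ga))   # 随机交换位点
    return Ga[:r] + Gb[r:]

Ga = [1, 1, 0, 0, 0]
Gb = [0, 0, 0, 1, 1]
print(crossover(Ga, Gb))
```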

    (2) CF-SAR

    CF-SAR的杂交方法与上述杂交方法相似,需要在1~500和501~1500中分别随机选取两个下标$r_1, r_2$作为交换位点,设亲代基因型为$\boldsymbol{G}_{\rm a}$和$\boldsymbol{G}_{\rm b}$,子代基因型为$\boldsymbol{G}$,则$\boldsymbol{G}$可由式(18)表示:

    $$ \left. \begin{aligned} \boldsymbol{G}[1\!:\!r_1] &= \boldsymbol{G}_{\rm a}[1\!:\!r_1] \\ \boldsymbol{G}[(r_1+1)\!:\!500] &= \boldsymbol{G}_{\rm b}[(r_1+1)\!:\!500] \\ \boldsymbol{G}[501\!:\!r_2] &= \boldsymbol{G}_{\rm a}[501\!:\!r_2] \\ \boldsymbol{G}[(r_2+1)\!:\!1500] &= \boldsymbol{G}_{\rm b}[(r_2+1)\!:\!1500] \end{aligned} \right\} \tag{18} $$

    (3) OF-SAR

    此模式下的杂交函数也是在$1 \sim (s+1)$范围内随机选取下标$r$。设亲代基因型为$\boldsymbol{G}_{\rm a}$和$\boldsymbol{G}_{\rm b}$,子代基因型为$\boldsymbol{G}$,则$\boldsymbol{G}$可由式(19)表示:

    $$ \left. \begin{aligned} \boldsymbol{G}[1\!:\!3, 1\!:\!r] &= \boldsymbol{G}_{\rm a}[1\!:\!3, 1\!:\!r] \\ \boldsymbol{G}[1\!:\!3, (r+1)\!:\!(s+1)] &= \boldsymbol{G}_{\rm b}[1\!:\!3, (r+1)\!:\!(s+1)] \end{aligned} \right\} \tag{19} $$
    3.2.3   基因突变

    若只依靠杂交函数,那么种群中将不会产生新位点上的基因,这将导致算法结果高度依赖于种群的初始随机生成,因此还需要在杂交函数之后引入基因突变,在具体数值上设置突变概率为0.2。

    (1) UMB-SAR与CF-SAR

    这两种飞行模式的基因型均为01向量,故基因突变的方式相似:每次突变随机选取其中一个“1”,将其置为“0”,在随机选择附近下标一个原本为“0”的点,将其置为“1”。突变结束后需要判断是否满足限制条件,若不满足同样需要重新生成。
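    上述“移动一个‘1’到附近‘0’位”的突变步骤可实现为如下示意(邻域半径 `radius` 与示例基因长度均为假设取值,约束检查按文中要求在外层完成,此处省略):

```python
import random

def mutate(gene, radius=20, rng=None):
    """随机选取一个"1", 将其移动到附近某个原本为"0"的位置"""
    rng = rng or random.Random(0)
    g = list(gene)
    ones = [i for i, v in enumerate(g) if v == 1]
    i = rng.choice(ones)
    nearby = [j for j in range(max(0, i - radius), min(len(g), i + radius + 1))
              if g[j] == 0]
    j = rng.choice(nearby)
    g[i], g[j] = 0, 1
    return g

gene = [0] * 50
for k in (5, 20, 40):
    gene[k] = 1
m = mutate(gene)
print([i for i, v in enumerate(m) if v == 1])
```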

    (2) OF-SAR

    此模式下基因型为数值类型,与之前两种不同,故OF-SAR的基因突变为:在$s+1$个坐标点中随机选取一个三维坐标,并在满足式(14)和式(15)约束的条件下,在±10%范围内改变其三维坐标的数值。

    本节将通过仿真数据,对比分析所提无人机载穿墙雷达航迹规划算法对成像质量的改善效果,并通过华诺星空CEM 200线性调频连续波机载穿墙雷达进行外场挂飞实验,不同飞行轨迹的穿墙雷达实测数据处理结果进一步验证了本文所提算法的有效性。

    为验证穿墙场景下所提方法的有效性,电磁波需要穿过较为复杂的墙体介质,使用解析公式获取穿墙场景的雷达回波存在一定困难,本研究使用开源时域有限差分数值仿真软件gprMax生成雷达回波数据[26],对上述所提UMB-SAR, CF-SAR和OF-SAR飞行模式航迹优化结果进行验证。在生成仿真雷达回波数据时,直接使用gprMax发射大脉宽调频连续波信号将导致仿真时间过长,故设置雷达天线发射信号为无载波的冲激脉冲波形,所得回波近似为系统冲激响应。仿真场景可视为一个线性时不变系统,将冲激响应与线性调频信号卷积,从而得到对应于输入线性调频信号的仿真穿墙场景回波信号。对该回波进行脉冲压缩,并经过距离插值、后向投影(Back Projection, BP)、相干累加等过程,最终得到三维后向投影成像结果。
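    文中“冲激响应与线性调频信号卷积”的回波生成与脉冲压缩流程,可用一维示例演示如下(采样率、带宽、散射体位置均为假设值,与gprMax的实际输出格式无关):

```python
import numpy as np

fs, B, T = 2e9, 440e6, 1e-6                   # 采样率、带宽、脉宽(假设值)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t ** 2) # 基带线性调频信号

h = np.zeros(4000, dtype=complex)             # 假设的系统冲激响应: 两个点散射体
h[1000], h[2600] = 1.0, 0.5

echo = np.convolve(h, chirp)                  # 线性时不变系统: 回波 = h 与 s 的卷积
pc = np.convolve(echo, np.conj(chirp[::-1]))  # 匹配滤波, 完成脉冲压缩
print(np.argmax(np.abs(pc)))                  # 主峰位置对应强散射体的时延
```

    匹配滤波后两个散射体分别压缩为窄脉冲,其峰值幅度之比与散射强度之比一致,说明该“冲激响应 ⊛ 发射信号”的等效方式保留了目标的时延与幅度信息。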

    上述遗传算法迭代过程中,设置单点目标坐标为(5.0 m, 5.0 m, 2.0 m),并对其Rpgl进行迭代优化。为衡量多点目标场景的算法效果,仿真场景在前期航迹优化处理基础上添加点目标1与点目标3,具体仿真参数如表1所示。

    表  1  穿墙场景数值仿真参数
    Table  1.  Simulation parameter settings
    参数 数值
    雷达载频 2.95 GHz
    雷达带宽 440 MHz
    场景方位向范围 0~10 m
    场景高度向范围 0~5 m
    场景距离向范围 0~10 m
    墙体厚度 0.2 m
    墙体相对介电常数 4.0
    点目标1坐标 (8.0 m, 7.0 m, 2.7 m)
    点目标2坐标 (5.0 m, 5.0 m, 2.0 m)
    点目标3坐标 (2.0 m, 6.0 m, 3.7 m)
    超参数权重(w1,w2,w3) (0.02, 0.43, 0.55)

    穿墙仿真的三维场景如图6所示,其中方位向范围为0~10 m,距离向范围为0~10 m,高度向范围为0~5 m。墙体位于距离向4 m处,无人机在墙体左侧空间内飞行,点目标位于墙体右侧,即图中小球所示。因墙体的反射回波能量显著强于目标散射回波,利用gprMax进行穿墙仿真后,通过墙体回波和目标回波在距离向的差异,将墙体回波从雷达数据中移除,以便后续三维SAR成像结果的比较。式(9)与式(10)中的超参数权重$(w_1, w_2, w_3)$分别设置为0.02, 0.43和0.55。$w_2$是飞行距离超出期望最大飞行距离时的权重,本研究为了避免超出最大飞行距离的情况,将$w_2$设置得较大;倘若对最大飞行距离没有严格的限制,可以适度调小$w_2$取值。$w_3$与$w_1$的比例应当选用合适的取值:经仿真测试,若$w_3/w_1$太小(小于10),遗传算法会过于重视飞行距离而忽视成像质量,导致最终成像质量较差;若$w_3/w_1$太大(大于50),则基本等同于对单变量$R_{\rm pgl}$的优化,不能起到兼顾成像质量的同时缩减飞行距离的作用。

    图  6  三维穿墙仿真场景
    Figure  6.  Through-the-wall 3D simulation scene
    4.1.1   MB-SAR

    为比较算法优劣,设置8条间距为0.625 m的MB-SAR成像结果作为对照,航迹如图7(a)所示,其中虚线代表无人机水平方位向的飞行航过,实心点代表每条飞行航过的起始与结束位置。由于墙后3个目标点的方位、距离与高度均不相同,无法采用一幅截面表示,故将3幅截面投影合一,以距离-高度向投影图和方位-高度向投影图反映成像质量。图7(b)和图7(c)中可观察到明显的高度向周期性栅瓣。

    图  7  MB-SAR航迹与成像结果
    Figure  7.  MB-SAR trajectory and imaging results
    4.1.2   UMB-SAR

    UMB-SAR飞行模式下,以不同数值作为随机数种子,经过10次遗传算法迭代过程得到如图8(a)所示代价函数下降图线,不同迭代过程用不同的颜色表示。因初始种群基因型随机生成,初始代价函数在−1.8~−2.3范围内随机分布。随着迭代次数增加,代价函数逐渐降低,当代价函数连续10次降低不超过$10^{-6}$时停止迭代,最终迭代约50次后收敛。在最优的情况(#9)中,代价函数从−2.12降低至−3.15。遗传算法迭代得到的最优航迹有8条长度为10 m的水平方位向航过,飞行距离为80 m,其水平航过间距各不相同,呈近似随机排布。

    图  8  UMB-SAR航迹优化与成像结果
    Figure  8.  UMB-SAR trajectory optimization and imaging results

    对比图7和图8可知,MB-SAR成像结果存在严重栅瓣效应,难以确定点目标高度位置。UMB-SAR高度向栅瓣能量更为分散,无周期性栅瓣强点,优于MB-SAR的对应结果,但栅瓣效应仍较为显著。

    4.1.3   Z-SAR

    为对比已有飞行模式与所提模式的优劣,对Drozdowicz等人[18]的Z-SAR进行遗传算法优化,因其基因型与UMB-SAR相似,故不过多赘述。算法迭代结果如图9(a)所示,种群初始代价函数大致位于−2至−3区间内,其中#6为最优情况,对应代价函数从−2.23降至−3.68,相较于UMB-SAR的最佳飞行结果提升16.8%。对应的航迹如图9(b)所示,共有8条锯齿状航过,总飞行距离为80.16 m。

    图  9  Z-SAR航迹优化与成像结果
    Figure  9.  Z-SAR trajectory optimization and imaging results

    与UMB-SAR相比,Z-SAR整体栅瓣能量进一步降低,距离-高度图中栅瓣覆盖范围明显缩小,表明以斜线代替直线航迹将提升栅瓣抑制效果。

    4.1.4   CF-SAR

    在CF-SAR模式下,遗传算法迭代结果如图10(a)所示,种群初始代价函数基本位于−2.7至−3.0区间内,在第40次迭代左右收敛,其中#5为最优情况,对应代价函数从−2.88降至−4.21,相较于UMB-SAR的最佳飞行结果提升33.7%。对应的航迹如图10(b)所示,共有12条航过,包含4条长度为10 m的水平方位向航过和8条长度为5 m的竖直航过,总飞行距离为80 m,与最大飞行距离相同。

    图  10  CF-SAR航迹优化与成像结果
    Figure  10.  CF-SAR trajectory optimization and imaging results

    CF-SAR中栅瓣情况较UMB-SAR进一步分散,整体栅瓣能量进一步降低,但图10(b)中高度向仍存在少量周期性栅瓣。

    4.1.5   OF-SAR

    在OF-SAR模式下,遗传算法迭代结果如图11(a)所示,在第50次迭代左右收敛,最低代价函数为−5.25,相较于UMB-SAR的最佳飞行结果提升约66.7%。该结果对应的航迹如图11(b)所示,总飞行距离为56.5 m,小于前述所有模式。

    图  11  OF-SAR航迹优化与成像结果
    Figure  11.  OF-SAR trajectory optimization and imaging results

    OF-SAR的栅瓣抑制情况较前两种飞行模式有明显提升,在飞行距离最短的情况下有着相对最佳的成像质量,说明所提方法在机载穿墙雷达成像中有较好的效果。

    4.1.6   对比与分析

    为对比传统MB-SAR算法、Z-SAR与本文所提UMB-SAR, CF-SAR和OF-SAR算法,选取经遗传算法优化的点目标$R_{\rm pgl}$衡量飞行模式的优劣,代价函数(遗传算法运行结果)与$R_{\rm pgl}$如表2所示。

    表  2  算法仿真结果对比
    Table  2.  Comparison of algorithm simulation results
    飞行模式    代价函数    Rpgl (dB)
    MB-SAR      \           −0.51
    UMB-SAR     −3.15       −8.64
    Z-SAR       −3.68       −9.73
    CF-SAR      −4.21       −10.56
    OF-SAR      −5.25       −11.60

    表2中的$R_{\rm pgl}$是不同算法成像结果的峰值栅瓣比,其数值越小说明成像效果越好。根据对比结果可知,在相同飞行距离限制条件下,本文所提UMB-SAR方法优于传统MB-SAR飞行方式,而本文所提CF-SAR与OF-SAR方法则优于Z-SAR算法,从理论层面验证了所提方法的有效性。

    MB-SAR因其在欠采样场景下航过间距相等,栅瓣效应最为明显;UMB-SAR在其基础上改变航过间距,故栅瓣效应有所抑制;Z-SAR可视为UMB-SAR的改进,航过由水平直线转变为锯齿状斜线,引入更多采样位置的非均匀性,故成像质量优于UMB-SAR;CF-SAR在UMB-SAR的基础上添加竖直方向的航过,进一步降低高度向栅瓣;OF-SAR使用三维斜线飞行,采样位置非均匀性最大,在飞行距离最短的情况下有着最佳成像效果。

    为进一步验证所提算法轨迹优化后的无人机载穿墙雷达成像性能,本研究开展了无人机载穿墙雷达的外场实验验证。所用的无人机为大疆M350型号四旋翼无人机(图12),该型号无人机可利用网络实时动态(Real-Time Kinematic, RTK)差分定位技术提供高精度的无人机位置信息,可为后续BP成像提供定位数据。将预先优化好的航迹存储到大疆飞控端,再通过飞控系统将航迹文件上传到无人机。通过预设航迹,无人机能够按照指定的航迹、速度与机头朝向进行规划飞行。

    图  12  CEM200型号无人机载穿墙雷达
    Figure  12.  UAV-TWR CEM200

    本文实验使用华诺星空CEM200型号无人机载穿墙雷达进行数据采集,实测数据参数如表3所示。为了保证天线结构的小型化,雷达射频信号频率为2.95~3.35 GHz,使用调频连续波信号,提供足够的平均发射功率,脉冲重复频率为1923 Hz,能够保证较大的多普勒带宽,中频信号的采样率为10 MHz。无人机的飞行速度可通过大疆的航迹规划软件进行设置,本文所有的实验均采用2 m/s的飞行速度(在控制点会降速,以保证无人机航迹切换的稳定性)。

    表  3  实测数据参数
    Table  3.  Actual measured data parameters
    实验参数 数值
    雷达载频 2.95 GHz
    脉冲重复频率 1923 Hz
    雷达带宽 440 MHz
    采样率 10 MHz
    飞行速度 2 m/s

    为验证算法结果在城市非视距场景的适用性,本研究在穿墙模式下开展了实验,飞行轨迹由提前上传的轨迹进行控制。其场景如图13(a)所示,角反射器坐标大约位于(5 m, 7 m, 2 m)处;图13(b)中的建筑物外墙高度约10 m,建筑物的宽度约8 m,建筑物外墙采用多孔红心砖墙修建,墙体厚度约为24 cm。无人机在距离墙面约4 m距离沿预设的4种轨迹进行自动巡航飞行,飞行轨迹通过图中的白色曲线线条进行示意。

    图  13  穿墙场景测试图
    Figure  13.  Measurement of through-the-wall scene
    4.2.1   无人机飞行轨迹与分析

    通过RTK记录的4种飞行模式航迹如图14所示,与理想轨迹基本一致,但受无人机飞行控制误差和近地表气流影响,不可避免地与理想轨迹存在一定偏差。为评估轨迹偏差对成像质量的影响,本文将实测轨迹作为仿真输入进行SAR成像,获取不同飞行模式下的$R_{\rm pgl}$,仿真参数与4.1节设置相同。实际轨迹与理想轨迹的对比结果如图15所示。

    图  14  实测穿墙场景下不同模式的无人机飞行航迹
    Figure  14.  Actual measurement of UAV flight trajectory in different modes in through-the-wall scenarios
    图  15  理想与实测航迹Rpgl仿真对比
    Figure  15.  Comparison of Rpgl between ideal and real trajectory in simulation

    由上述结果可知,除MB-SAR模式外,其余模式下实际轨迹均会导致成像质量略有下降,但下降幅度较小,总体趋势仍符合前述结论,无人机轨迹误差对成像质量的影响在合理范围内,可近似忽略。

    4.2.2   成像结果与分析

    因墙体回波和目标回波存在距离向上的差异,可使用时域选通将墙体回波去除,再通过三维成像获得目标位置。经过BP成像处理,MB-SAR, UMB-SAR, CF-SAR和OF-SAR成像结果的距离-高度向截面和方位-高度向截面图分别如图16所示,其中目标点由红色虚线圈出。

    图  16  穿墙场景下4种模式的成像情况(“Ⅰ”表示距离-高度向截面,“Ⅱ”表示方位-高度向截面)
    Figure  16.  Imaging situation of four modes in through-the-wall scenario (“Ⅰ” represents the range-height section, “Ⅱ” represents the azimuth-height section)

    由图16(a)和图16(b)可知,穿墙场景MB-SAR成像结果存在显著的周期性栅瓣,与仿真结果一致。图16(c)和图16(d)中UMB-SAR的成像结果栅瓣有一定改善,但仍存在较强栅瓣点。图16(e)和图16(f)中CF-SAR的栅瓣进一步降低,但目标点附近存在栅瓣聚集现象,方位向仍存在较强栅瓣。图16(g)和图16(h)中OF-SAR的效果最佳,整体栅瓣较为分散且能量较低,与仿真结果吻合,再次验证了算法与实验结果的准确性。4种算法的$R_{\rm pgl}$结果如表4所示。

    表  4  算法实测结果对比(dB)
    Table  4.  Comparison of algorithm measured results (dB)
    算法 Rpgl
    MB-SAR −1.91
    UMB-SAR −4.14
    CF-SAR −7.71
    OF-SAR −9.52

    针对无人机载穿墙雷达在高层建筑遮蔽空间穿透探测场景中,高度向欠采样导致成像结果包含大量栅瓣能量的问题,本文提出了一种基于遗传算法的无人机载穿墙雷达三维SAR成像的航迹规划方法。通过权衡无人机飞行距离和穿墙三维SAR的成像质量,构建了无人机航迹规划的代价函数,该最优化问题可从数学关系上描述无人机飞行轨迹与雷达成像质量的内在联系,具有较好的通用性。构建了3类无人机飞行模式,使用具有全局搜索能力的遗传算法,建立对应飞行轨迹的基因表达形式、种群的选择与杂交过程以及个体的变异特性。仿真和实测结果表明,本方法能够在较短飞行距离的情况下显著抑制栅瓣能量。在仿真数据中,本文所提3种飞行方式相较于传统等间距多基线飞行有显著的提升;在穿墙场景实验中,所提飞行方式实测成像结果与仿真结果均能相互印证,验证了算法的有效性。

    本文构建了统一的无人机载穿墙层析SAR航迹优化代价函数,但由于三维空域采样的复杂性,直接对该问题求解在计算量上存在显著困难。本研究所提方法基于不同飞行模式分别开展航迹规划,尚未建立统一的航迹规划求解框架,后续将针对这一问题深入研究,实现无需规定飞行模式即可自动规划探测区域内的最优航迹。

  • 图  1  Sentinel-1卫星在不同成像条件下拍摄的SAR图像[2]

    Figure  1.  The SAR images obtained by Sentinel-1 under different imaging conditions[2]

    图  2  物理可解释的深度学习 SAR 图像解译应从多个维度开展研究,充分结合数据驱动和知识驱动的模型,逐步发展可学习且可解释的智能化图像解译新范式

    Figure  2.  The PXDL for SAR image interpretation is supposed to be carried out from multiple aspects, that deeply integrates the data-driven and knowledge-driven models to develop the novel learnable and explainable intelligent paradigm

    图  3  SAR图像解译思路,①②③④⑤表示可以发展物理可解释深度学习方法的模块

    Figure  3.  The SAR image interpretation guideline, ①②③④⑤ are the potential modules to develop PXDL

    图  4  文献[50]给出的全极化SAR图像 H/α 平面,以及选取的部分地物样本在其中的分布

    Figure  4.  The H/α plane for full-polarized SAR data and the selected land-use and land-cover samples distributed in Ref. [50]

    图  5  基于时频分析和极化特征扩展时频分析模型的无监督学习方法在不同极化SAR图像上的结果比较[92]

    Figure  5.  The unsupervised learning results of different polarized SAR images based on TFA and pol-extended TFA models[92]

    图  6  物理引导与注入式学习

    Figure  6.  Physics guided and injected learning

    图  7  文献[11]所提的SAR图像分类框架Deep SAR-Net (DSN)

    Figure  7.  The SAR image classification framework Deep SAR-Net (DSN) in Ref. [11]

    图  8  无监督的物理引导学习与CNN监督分类学习在训练集与测试集数据上的特征可视化[100]

    Figure  8.  The feature visualization of the unsupervised physics guided learning and supervised CNN classification on training and test set[100]

    图  9  基于ASC模型初始化的复数卷积神经网络第一层卷积核幅度可视化[106]

    Figure  9.  The amplitude images of convolution kernels in the first layer of CV-CNN based on ASC model initialization[106]

    图  10  不同SAR图像建筑物分割数据集和算法示例[121,123]

    Figure  10.  The different SAR image building segmentation datasets and algorithms[121,123]


  • [1] CUMMING I G, WONG F H, 洪文, 胡东辉, 韩冰, 等译. 合成孔径雷达成像算法与实现[M]. 北京: 电子工业出版社, 2019, 93–100.

    CUMMING I G, WONG F H, HONG Wen, HU Donghui, HAN Bing, et al. translation. Digital Processing of Synthetic Aperture Radar Data Algorithms and Implementation[M]. Beijing: Publishing House of Electronics Industry, 2019, 93–100.
    [2] 黄钟泠. 面向合成孔径雷达图像分类的深度学习方法研究[D]. [博士论文], 中国科学院大学, 2020: 59.

    HUANG Zhongling. A study on synthetic aperture radar image classification with deep learning[D]. [Ph. D. dissertation], University of Chinese Academy of Sciences, 2020: 59.
    [3] 谷秀昌, 付琨, 仇晓兰. SAR图像判读解译基础[M]. 北京: 科学出版社, 2017.

    GU Xiuchang, FU Kun, and QIU Xiaolan. Fundamentals of SAR Image of SAR Image Interpretation[M]. Beijing: Science Press, 2017.
    [4] OLIVER C and QUEGAN S. Understanding Synthetic Aperture Radar Images[M]. London: SciTech Publishing, 2004.
    [5] GAO Gui, OUYANG Kewei, LUO Yongbo, et al. Scheme of parameter estimation for generalized gamma distribution and its application to ship detection in SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(3): 1812–1832. doi: 10.1109/TGRS.2016.2634862
    [6] LENG Xiangguang, JI Kefeng, ZHOU Shilin, et al. Ship detection based on complex signal kurtosis in single-channel SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(9): 6447–6461. doi: 10.1109/TGRS.2019.2906054
    [7] CHEN Sizhe, WANG Haipeng, XU Feng, et al. Target classification using the deep convolutional networks for SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(8): 4806–4817. doi: 10.1109/TGRS.2016.2551720
    [8] HUANG Zhongling, DUMITRU C O, PAN Zongxu, et al. Classification of large-scale high-resolution SAR images with deep transfer learning[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 18(1): 107–111. doi: 10.1109/LGRS.2020.2965558
    [9] HUANG Zhongling, PAN Zongxu, and LEI Bin. Transfer learning with deep convolutional neural network for SAR target classification with limited labeled data[J]. Remote Sensing, 2017, 9(9): 907. doi: 10.3390/rs9090907
    [10] HUANG Zhongling, PAN Zongxu, and LEI Bin. What, where, and how to transfer in SAR target recognition based on deep CNNs[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(4): 2324–2336. doi: 10.1109/TGRS.2019.2947634
    [11] HUANG Zhongling, DATCU M, PAN Zongxu, et al. Deep SAR-Net: Learning objects from signals[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 161: 179–193. doi: 10.1016/j.isprsjprs.2020.01.016
    [12] 金亚秋. 多模式遥感智能信息与目标识别: 微波视觉的物理智能[J]. 雷达学报, 2019, 8(6): 710–716. doi: 10.12000/JR19083

    JIN Yaqiu. Multimode remote sensing intelligent information and target recognition: Physical intelligence of microwave vision[J]. Journal of Radars, 2019, 8(6): 710–716. doi: 10.12000/JR19083
    [13] 张钹, 朱军, 苏航. 迈向第三代人工智能[J]. 中国科学:信息科学, 2020, 50(9): 1281–1302. doi: 10.1360/SSI-2020-0204

    ZHANG Bo, ZHU Jun, and SU Hang. Toward the third generation of artificial intelligence[J]. SCIENTIA SINICA Informationis, 2020, 50(9): 1281–1302. doi: 10.1360/SSI-2020-0204
    [14] DAS A and RAD P. Opportunities and challenges in explainable artificial intelligence (XAI): A survey[OL]. arXiv: 2006.11371, 2020.
    [15] BAI Xiao, WANG Xiang, LIU Xianglong, et al. Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments[J]. Pattern Recognition, 2021, 120: 108102. doi: 10.1016/j.patcog.2021.108102
    [16] ANGELOV P and SOARES E. Towards explainable deep neural networks (xDNN)[J]. Neural Networks, 2020, 130: 185–194. doi: 10.1016/j.neunet.2020.07.010
    [17] MOLNAR C. Interpretable machine learning: A guide for making black box models explainable[EB/OL]. https://christophm.github.io/interpretable-ml-book/, 2021.
    [18] CAMBURU O M. Explaining deep neural networks[D]. [Ph. D. dissertation], Oxford University, 2020.
    [19] 李玮杰, 杨威, 刘永祥, 等. 雷达图像深度学习模型的可解释性研究与探索[J]. 中国科学: 信息科学, 待出版. doi: 10.1360/SSI-2021-0102.

    LI Weijie, YANG Wei, LIU Yongxiang, et al. Research and exploration on interpretability of deep learning model in radar image[J]. SCIENTIA SINICA Informationis, in press. doi: 10.1360/SSI-2021-0102.
    [20] BELLONI C, BALLERI A, AOUF N, et al. Explainability of deep SAR ATR through feature analysis[J]. IEEE Transactions on Aerospace and Electronic Systems, 2021, 57(1): 659–673. doi: 10.1109/TAES.2020.3031435
    [21] 郭炜炜, 张增辉, 郁文贤, 等. SAR图像目标识别的可解释性问题探讨[J]. 雷达学报, 2020, 9(3): 462–476. doi: 10.12000/JR20059

    GUO Weiwei, ZHANG Zenghui, YU Wenxian, et al. Perspective on explainable SAR target recognition[J]. Journal of Radars, 2020, 9(3): 462–476. doi: 10.12000/JR20059
    [22] KARNIADAKIS G E, KEVREKIDIS I G, LU Lu, et al. Physics-informed machine learning[J]. Nature Reviews Physics, 2021, 3(6): 422–440. doi: 10.1038/s42254-021-00314-5
    [23] THUEREY N, HOLL P, MUELLER M, et al. Physics-based deep learning[OL]. arXiv: 2109.05237, 2021.
    [24] RAISSI M, PERDIKARIS P, and KARNIADAKIS G E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations[J]. Journal of Computational Physics, 2019, 378: 686–707. doi: 10.1016/j.jcp.2018.10.045
    [25] MENG Xuhui, LI Zhen, ZHANG Dongkun, et al. PPINN: Parareal physics-informed neural network for time-dependent PDEs[J]. Computer Methods in Applied Mechanics and Engineering, 2020, 370: 113250. doi: 10.1016/j.cma.2020.113250
    [26] GOSWAMI S, ANITESCU C, CHAKRABORTY S, et al. Transfer learning enhanced physics informed neural network for phase-field modeling of fracture[J]. Theoretical and Applied Fracture Mechanics, 2020, 106: 102447. doi: 10.1016/j.tafmec.2019.102447
    [27] KARPATNE A, EBERT-UPHOFF I, RAVELA S, et al. Machine learning for the geosciences: Challenges and opportunities[J]. IEEE Transactions on Knowledge and Data Engineering, 2019, 31(8): 1544–1554. doi: 10.1109/TKDE.2018.2861006
    [28] CAMPS-VALLS G, REICHSTEIN M, ZHU Xiaoxiang, et al. Advancing deep learning for earth sciences: From hybrid modeling to interpretability[C]. IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 3979–3982. doi: 10.1109/IGARSS39084.2020.9323558.
    [29] REICHSTEIN M, CAMPS-VALLS G, STEVENS B, et al. Deep learning and process understanding for data-driven Earth system science[J]. Nature, 2019, 566(7743): 195–204. doi: 10.1038/s41586-019-0912-1
    [30] CAMPS-VALLS G, SVENDSEN D H, CORTÉS-ANDRÉS J, et al. Physics-aware machine learning for geosciences and remote sensing[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 2086–2089. doi: 10.1109/IGARSS47720.2021.9554521.
    [31] JIA Xiaowei, WILLARD J, KARPATNE A, et al. Physics guided RNNs for modeling dynamical systems: A case study in simulating lake temperature profiles[C]. The 2019 SIAM International Conference on Data Mining, Calgary, Canada, 2019: 558–566. doi: 10.1137/1.9781611975673.63.
    [32] DAW A, KARPATNE A, WATKINS W, et al. Physics-guided neural networks (PGNN): An application in lake temperature modeling[OL]. arXiv: 1710.11431, 2021. doi: https://arxiv.org/abs/1710.11431.
    [33] BEUCLER T, PRITCHARD M, GENTINE P, et al. Towards physically-consistent, data-driven models of convection[C]. IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 3987–3990. doi: 10.1109/IGARSS39084.2020.9324569.
    [34] SHEN Huanfeng, JIANG Menghui, LI Jie, et al. Coupling model-driven and data-driven methods for remote sensing image restoration and fusion[OL]. arXiv: 2108.06073, 2021.
    [35] WANG Yuqing, WANG Qi, LU Wenkai, et al. Physics-constrained seismic impedance inversion based on deep learning[J]. IEEE Geoscience and Remote Sensing Letters, 2021: 1–5. doi: 10.1109/LGRS.2021.3072132
    [36] XIA Wenchao, ZHENG Gan, WONG K K, et al. Model-driven beamforming neural networks[J]. IEEE Wireless Communications, 2020, 27(1): 68–75. doi: 10.1109/MWC.001.1900239
    [37] ZHANG Juping, XIA Wenchao, YOU Minglei, et al. Deep learning enabled optimization of downlink beamforming under per-antenna power constraints: Algorithms and experimental demonstration[J]. IEEE Transactions on Wireless Communications, 2020, 19(6): 3738–3752. doi: 10.1109/TWC.2020.2977340
    [38] ZHU Xiaoxiang, MONTAZERI S, ALI M, et al. Deep learning meets SAR: Concepts, models, pitfalls, and perspectives[J]. IEEE Geoscience and Remote Sensing Magazine, in press. doi: 10.1109/MGRS.2020.3046356.
    [39] MALMGREN-HANSEN D, KUSK A, DALL J, et al. Improving SAR automatic target recognition models with transfer learning from simulated data[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(9): 1484–1488. doi: 10.1109/LGRS.2017.2717486
    [40] 文贡坚, 朱国强, 殷红成, 等. 基于三维电磁散射参数化模型的SAR目标识别方法[J]. 雷达学报, 2017, 6(2): 115–135. doi: 10.12000/JR17034

    WEN Gongjian, ZHU Guoqiang, YIN Hongcheng, et al. SAR ATR based on 3D parametric electromagnetic scattering model[J]. Journal of Radars, 2017, 6(2): 115–135. doi: 10.12000/JR17034
    [41] 罗迎, 倪嘉成, 张群. 基于“数据驱动+智能学习”的合成孔径雷达学习成像[J]. 雷达学报, 2020, 9(1): 107–122. doi: 10.12000/JR19103

    LUO Ying, NI Jiacheng, and ZHANG Qun. Synthetic aperture radar learning-imaging method based on data-driven technique and artificial intelligence[J]. Journal of Radars, 2020, 9(1): 107–122. doi: 10.12000/JR19103
    [42] CHAN T H, JIA Kui, GAO Shenghua, et al. PCANet: A simple deep learning baseline for image classification?[J]. IEEE Transactions on Image Processing, 2015, 24(12): 5017–5032. doi: 10.1109/TIP.2015.2475625
    [43] LI Mengke, LI Ming, ZHANG Peng, et al. SAR image change detection using PCANet guided by saliency detection[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(3): 402–406. doi: 10.1109/LGRS.2018.2876616
    [44] WANG Rongfang, ZHANG Jie, CHEN Jiawei, et al. Imbalanced learning-based automatic SAR images change detection by morphologically supervised PCA-net[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(4): 554–558. doi: 10.1109/LGRS.2018.2878420
    [45] CLOUDE S and POTTIER E. An entropy based classification scheme for land applications of polarimetric SAR[J]. IEEE Transactions on Geoscience and Remote Sensing, 1997, 35(1): 68–78. doi: 10.1109/36.551935
    [46] YAMAGUCHI Y, YAJIMA Y, and YAMADA H. A four-component decomposition of POLSAR images based on the coherency matrix[J]. IEEE Geoscience and Remote Sensing Letters, 2006, 3(3): 292–296. doi: 10.1109/LGRS.2006.869986
    [47] FERRO-FAMIL L, REIGBER A, and POTTIER E. Scene characterization using sub-aperture polarimetric interferometric SAR data[C]. IGARSS 2003-2003 IEEE International Geoscience and Remote Sensing Symposium, Toulouse, France, 2003: 702–704. doi: 10.1109/IGARSS.2003.1293889.
    [48] POTTER L C and MOSES R L. Attributed scattering centers for SAR ATR[J]. IEEE Transactions on Image Processing, 1997, 6(1): 79–91. doi: 10.1109/83.552098
    [49] JI Kefeng and WU Yonghui. Scattering mechanism extraction by a modified cloude-pottier decomposition for dual polarization SAR[J]. Remote Sensing, 2015, 7(6): 7447–7470. doi: 10.3390/rs70607447
    [50] YONEZAWA C, WATANABE M, and SAITO G. Polarimetric decomposition analysis of ALOS PALSAR observation data before and after a landslide event[J]. Remote Sensing, 2012, 4(8): 2314–2328. doi: 10.3390/rs4082314
    [51] NIU Shengren, QIU Xiaolan, LEI Bin, et al. Parameter extraction based on deep neural network for SAR target simulation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(7): 4901–4914. doi: 10.1109/TGRS.2020.2968493
    [52] NIU Shengren, QIU Xiaolan, LEI Bin, et al. A SAR target image simulation method with DNN embedded to calculate electromagnetic reflection[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 2593–2610. doi: 10.1109/JSTARS.2021.3056920
    [53] GUO Jiayi, LEI Bin, DING Chibiao, et al. Synthetic aperture radar image synthesis by using generative adversarial nets[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(7): 1111–1115. doi: 10.1109/LGRS.2017.2699196
    [54] OH J and KIM M. PeaceGAN: A GAN-based multi-task learning method for SAR target image generation with a pose estimator and an auxiliary classifier[J]. Remote Sensing, 2021, 13(19): 3939. doi: 10.3390/rs13193939
    [55] CUI Zongyong, ZHANG Mingrui, CAO Zongjie, et al. Image data augmentation for SAR sensor via generative adversarial nets[J]. IEEE Access, 2019, 7: 42255–42268. doi: 10.1109/ACCESS.2019.2907728
    [56] SONG Qian, XU Feng, and JIN Yaqiu. SAR image representation learning with adversarial autoencoder networks[C]. IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 2019: 9498–9501. doi: 10.1109/IGARSS.2019.8898922.
    [57] WANG Ke, ZHANG Gong, LENG Yang, et al. Synthetic aperture radar image generation with deep generative models[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(6): 912–916. doi: 10.1109/LGRS.2018.2884898
    [58] HU Xiaowei, FENG Weike, GUO Yiduo, et al. Feature learning for SAR target recognition with unknown classes by using CVAE-GAN[J]. Remote Sensing, 2021, 13(18): 3554. doi: 10.3390/rs13183554
    [59] XIE You, FRANZ E, CHU Mengyu, et al. TempoGAN: A temporally coherent, volumetric GAN for super-resolution fluid flow[J]. ACM Transactions on Graphics, 2018, 37(4): 95.
    [60] CHU Mengyu, THUEREY N, SEIDEL H P, et al. Learning meaningful controls for fluids[J]. ACM Transactions on Graphics, 2021, 40(4): 100. doi: 10.1145/3450626.3459845
    [61] QIAN Jiang, HUANG Shaoyin, WANG Lu, et al. Super-resolution ISAR imaging for maneuvering target based on deep-learning-assisted time-frequency analysis[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 60: 5201514. doi: 10.1109/TGRS.2021.3050189
    [62] LIANG Jiadian, WEI Shunjun, WANG Mou, et al. ISAR compressive sensing imaging using convolution neural network with interpretable optimization[C]. IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 2483–2486. doi: 10.1109/IGARSS39084.2020.9323601.
    [63] GREGOR K and LECUN Y. Learning fast approximations of sparse coding[C]. 27th International Conference on Machine Learning, Haifa, Israel, 2010: 399–406.
    [64] LIU Jialin, CHEN Xiaohan, WANG Zhangyang, et al. ALISTA: Analytic weights are as good as learned weights in LISTA[C]. The 7th International Conference on Learning Representations, New Orleans, USA, 2019, 1–33.
    [65] BEHRENS F, SAUDER J, and JUNG P. Neurally augmented ALISTA[C]. The 9th International Conference on Learning Representations, Virtual Event, Austria, 2021: 1–10.
    [66] YANG Yan, SUN Jian, LI Huibin, et al. Deep ADMM-Net for compressive sensing MRI[C]. The 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 2016: 10–18. doi: 10.5555/3157096.3157098.
    [67] YANG Yan, SUN Jian, LI Huibin, et al. ADMM-CSNet: A deep learning approach for image compressive sensing[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(3): 521–538. doi: 10.1109/TPAMI.2018.2883941
    [68] MASON E, YONEL B, and YAZICI B. Deep learning for SAR image formation[C]. SPIE 10201, Algorithms for Synthetic Aperture Radar Imagery XXIV, Anaheim, USA, 2017: 1020104. doi: 10.1117/12.2267831.
    [69] GAO Jingkun, DENG Biin, QIN Yuliang, et al. Enhanced radar imaging using a complex-valued convolutional neural network[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(1): 35–39. doi: 10.1109/LGRS.2018.2866567
    [70] HU Changyu, WANG Ling, LI Ze, et al. Inverse synthetic aperture radar imaging using a fully convolutional neural network[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(7): 1203–1207. doi: 10.1109/LGRS.2019.2943069
    [71] ALVER M B, SALEEM A, and ÇETIN M. Plug-and-play synthetic aperture radar image formation using deep priors[J]. IEEE Transactions on Computational Imaging, 2021, 7: 43–57. doi: 10.1109/TCI.2020.3047473
    [72] WANG Mou, WEI Shunjun, LIANG Jiadian, et al. TPSSI-Net: Fast and enhanced two-path iterative network for 3D SAR sparse imaging[J]. IEEE Transactions on Image Processing, 2021, 30: 7317–7332. doi: 10.1109/TIP.2021.3104168
    [73] HU Changyu, LI Ze, WANG Ling, et al. Inverse synthetic aperture radar imaging using a deep ADMM network[C]. 20th International Radar Symposium (IRS), Ulm, Germany, 2019: 1–9. doi: 10.23919/IRS.2019.8768138.
    [74] LI Xiaoyong, BAI Xueru, and ZHOU Feng. High-resolution ISAR imaging and autofocusing via 2d-ADMM-net[J]. Remote Sensing, 2021, 13(12): 2326. doi: 10.3390/rs13122326
    [75] LI Ruize, ZHANG Shuanghui, ZHANG Chi, et al. Deep learning approach for sparse aperture ISAR imaging and autofocusing based on complex-valued ADMM-net[J]. IEEE Sensors Journal, 2021, 21(3): 3437–3451. doi: 10.1109/JSEN.2020.3025053
    [76] HU Xiaowei, XU Feng, GUO Yiduo, et al. MDLI-Net: Model-driven learning imaging network for high-resolution microwave imaging with large rotating angle and sparse sampling[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–17. doi: 10.1109/TGRS.2021.3110579
    [77] RATHA D, GAMBA P, BHATTACHARYA A, et al. Novel techniques for built-up area extraction from polarimetric SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(1): 177–181. doi: 10.1109/LGRS.2019.2914913
    [78] AO Dongyang, DATCU M, SCHWARZ G, et al. Moving ship velocity estimation using TanDEM-X data based on subaperture decomposition[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(10): 1560–1564. doi: 10.1109/LGRS.2018.2846399
    [79] 廖明生, 王茹, 杨梦诗, 等. 城市目标动态监测中的时序InSAR分析方法及应用[J]. 雷达学报, 2020, 9(3): 409–424. doi: 10.12000/JR20022

    LIAO Mingsheng, WANG Ru, YANG Mengshi, et al. Techniques and applications of spaceborne time-series InSAR in urban dynamic monitoring[J]. Journal of Radars, 2020, 9(3): 409–424. doi: 10.12000/JR20022
    [80] SICA F, GOBBI G, RIZZOLI P, et al. Φ-Net: Deep residual learning for InSAR parameters estimation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(5): 3917–3941. doi: 10.1109/TGRS.2020.3020427
    [81] SONG Qian, XU Feng, and JIN Yaqiu. Radar image colorization: Converting single-polarization to fully polarimetric using deep neural networks[J]. IEEE Access, 2018, 6: 1647–1661. doi: 10.1109/ACCESS.2017.2779875
    [82] ZHAO Juanping, DATCU M, ZHANG Zenghai, et al. Contrastive-regulated CNN in the complex domain: A method to learn physical scattering signatures from flexible PolSAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(12): 10116–10135. doi: 10.1109/TGRS.2019.2931620
    [83] QU Junrong, QIU Xiaolan, and DING Chibiao. A study of recovering POLSAR information from single-polarized data using DNN[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 812–815. doi: 10.1109/IGARSS47720.2021.9554304.
    [84] CHENG Zezhou, YANG Qingxiong, and SHENG Bin. Deep colorization[C]. The IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 415–423. doi: 10.1109/ICCV.2015.55.
    [85] LUAN Fujun, PARIS S, SHECHTMAN E, et al. Deep photo style transfer[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 6997–7005. doi: 10.1109/CVPR.2017.740.
    [86] JI Guang, WANG Zhaohui, ZHOU Lifan, et al. SAR image colorization using multidomain cycle-consistency generative adversarial network[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 18(2): 296–300. doi: 10.1109/LGRS.2020.2969891
    [87] TUPIN F and TISON C. Sub-aperture decomposition for SAR urban area analysis[C]. European Conference on Synthetic Aperture Radar (EUSAR), Ulm, Germany, 2004: 431–434.
    [88] BOVENGA F, DERAUW D, RANA F M, et al. Multi-chromatic analysis of SAR images for coherent target detection[J]. Remote Sensing, 2014, 6(9): 8822–8843. doi: 10.3390/rs6098822
    [89] SPIGAI M, TISON C, and SOUYRIS J C. Time-frequency analysis in high-resolution SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(7): 2699–2711. doi: 10.1109/TGRS.2011.2107914
    [90] FERRO-FAMIL L, REIGBER A, POTTIER E, et al. Scene characterization using subaperture polarimetric SAR data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2003, 41(10): 2264–2276. doi: 10.1109/TGRS.2003.817188
    [91] HUANG Zongling, DATCU M, PAN Zongxu, et al. HDEC-TFA: An unsupervised learning approach for discovering physical scattering properties of single-polarized SAR image[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(4): 3054–3071. doi: 10.1109/TGRS.2020.3014335
    [92] HUANG Zhongling, DATCU M, PAN Zongxu, et al. A hybrid and explainable deep learning framework for SAR images[C]. IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 1727–1730. doi: 10.1109/IGARSS39084.2020.9323845.
    [93] DE S, CLANTON C, BICKERTON S, et al. Exploring the relationships between scattering physics and auto-encoder latent-space embedding[C]. IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, USA, 2020: 3501–3504. doi: 10.1109/IGARSS39084.2020.9323410.
    [94] HUANG Zhongling, YAO Xiwen, DUMITRU C O, et al. Physically explainable CNN for SAR image classification[OL]. arXiv: 2110.14144, 2021.
    [95] ZHANG Jinsong, XING Mengdao, and XIE Yiyuan. FEC: A feature fusion framework for SAR target recognition based on electromagnetic scattering features and deep CNN features[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(3): 2174–2187. doi: 10.1109/TGRS.2020.3003264
    [96] LEI Songlin, QIU Xiaolan, DING Chibiao, et al. A feature enhancement method based on the sub-aperture decomposition for rotating frame ship detection in SAR images[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 3573–3576. doi: 10.1109/IGARSS47720.2021.9553635.
    [97] THEAGARAJAN R, BHANU B, ERPEK T, et al. Integrating deep learning-based data driven and model-based approaches for inverse synthetic aperture radar target recognition[J]. Optical Engineering, 2020, 59(5): 051407. doi: 10.1117/1.OE.59.5.051407
    [98] HORI C, HORI T, LEE T Y, et al. Attention-based multimodal fusion for video description[C]. The IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017: 4203–4212. doi: 10.1109/ICCV.2017.450.
    [99] PORIA S, CAMBRIA E, BAJPAI R, et al. A review of affective computing: From unimodal analysis to multimodal fusion[J]. Information Fusion, 2017, 37: 98–125. doi: 10.1016/j.inffus.2017.02.003
    [100] HUANG Zhongling, DUMITRU C O, and REN Jun. Physics-aware feature learning of SAR images with deep neural networks: A case study[C]. IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 1264–1267. doi: 10.1109/IGARSS47720.2021.9554842.
    [101] LEE J S, GRUNES M R, AINSWORTH T L, et al. Unsupervised classification using polarimetric decomposition and the complex Wishart classifier[J]. IEEE Transactions on Geoscience and Remote Sensing, 1999, 37(5): 2249–2258. doi: 10.1109/36.789621
    [102] RATHA D, BHATTACHARYA A, and FRERY A C. Unsupervised classification of PolSAR data using a scattering similarity measure derived from a geodesic distance[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(1): 151–155. doi: 10.1109/LGRS.2017.2778749
    [103] LI Yi, DU Lan, and WEI Di. Multiscale CNN based on component analysis for SAR ATR[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–12. doi: 10.1109/TGRS.2021.3100137
    [104] FENG Sijia, JI Kefeng, ZHANG Linbin, et al. SAR target classification based on integration of ASC parts model and deep learning algorithm[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 10213–10225. doi: 10.1109/JSTARS.2021.3116979
    [105] LIU Qingshu and LANG Liang. MMFF: Multi-manifold feature fusion based neural networks for target recognition in complex-valued SAR imagery[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, 180: 151–162. doi: 10.1016/j.isprsjprs.2021.08.008
    [106] LIU Jiaming, XING Mengdao, YU Hanwen, et al. EFTL: Complex convolutional networks with electromagnetic feature transfer learning for SAR target recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–11. doi: 10.1109/TGRS.2021.3083261
    [107] CUI Yuanhao, LIU Fang, JIAO Licheng, et al. Polarimetric multipath convolutional neural network for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–18. doi: 10.1109/TGRS.2021.3071559
    [108] DAW A, THOMAS R Q, CAREY C C, et al. Physics-guided architecture (PGA) of neural networks for quantifying uncertainty in lake temperature modeling[C]. The 2020 SIAM International Conference on Data Mining (SDM), Cincinnati, USA, 2020: 532–540.
    [109] SUN Jian, NIU Zhan, INNANEN K A, et al. A theory-guided deep-learning formulation and optimization of seismic waveform inversion[J]. Geophysics, 2020, 85(2): R87–R99. doi: 10.1190/geo2019-0138.1
    [110] HE Qishan, ZHAO Lingjun, JI Kefeng, et al. SAR target recognition based on task-driven domain adaptation using simulated data[J]. IEEE Geoscience and Remote Sensing Letters, 2021: 1–5. doi: 10.1109/LGRS.2021.3116707
    [111] ZHANG Linbin, LENG Xiangguang, FENG Sijia, et al. Domain knowledge powered two-stream deep network for few-shot SAR vehicle recognition[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–15. doi: 10.1109/TGRS.2021.3116349
    [112] AGARWAL T, SUGAVANAM N, and ERTIN E. Sparse signal models for data augmentation in deep learning ATR[C]. IEEE Radar Conference, Florence, Italy, 2020: 1–6. doi: 10.1109/RadarConf2043947.2020.9266382.
    [113] DIEMUNSCH J R and WISSINGER J. Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: Search technology for a robust ATR[C]. Proceedings of SPIE 3370, Algorithms for synthetic aperture radar Imagery V, Orlando, USA, 1998: 481–492. doi: 10.1117/12.321851.
    [114] HUANG Lanqing, LIU Bin, LI Boying, et al. OpenSARShip: A dataset dedicated to sentinel-1 ship interpretation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(1): 195–208. doi: 10.1109/JSTARS.2017.2755672
    [115] 孙显, 王智睿, 孙元睿, 等. AIR-SARShip-1.0: 高分辨率SAR舰船检测数据集[J]. 雷达学报, 2019, 8(6): 852–862. doi: 10.12000/JR19097

    SUN Xian, WANG Zhirui, SUN Yuanrui, et al. AIR-SARSHIP-1.0: High-resolution SAR ship detection dataset[J]. Journal of Radars, 2019, 8(6): 852–862. doi: 10.12000/JR19097
    [116] 杜兰, 王兆成, 王燕, 等. 复杂场景下单通道SAR目标检测及鉴别研究进展综述[J]. 雷达学报, 2020, 9(1): 34–54. doi: 10.12000/JR19104

    DU Lan, WANG Zhaocheng, WANG Yan, et al. Survey of research progress on target detection and discrimination of single-channel SAR images for complex scenes[J]. Journal of Radars, 2020, 9(1): 34–54. doi: 10.12000/JR19104
    [117] CHEN Siwei and TAO Chensong. PolSAR image classification using polarimetric-feature-driven deep convolutional neural network[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(4): 627–631. doi: 10.1109/LGRS.2018.2799877
    [118] LIU Xu, JIAO Licheng, TANG Xu, et al. Polarimetric convolutional network for PoLSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(5): 3040–3054. doi: 10.1109/TGRS.2018.2879984
    [119] BI Haixia, SUN Jian, and XU Zongben. A graph-based semisupervised deep learning model for PoLSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(4): 2116–2132. doi: 10.1109/TGRS.2018.2871504
    [120] VINAYARAJ P, SUGIMOTO R, NAKAMURA R, et al. Transfer learning with CNNs for segmentation of PALSAR-2 power decomposition components[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, 13: 6352–6361. doi: 10.1109/JSTARS.2020.3031020
    [121] XIA Junshi, YOKOYA N, ADRIANO B, et al. A benchmark high-resolution GaoFen-3 SAR dataset for building semantic segmentation[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 5950–5963. doi: 10.1109/JSTARS.2021.3085122
    [122] WU Fan, WANG Chao, ZHANG Hong, et al. Built-up area mapping in China from GF-3 SAR imagery based on the framework of deep learning[J]. Remote Sensing of Environment, 2021, 262: 112515. doi: 10.1016/j.rse.2021.112515
    [123] CHEN Jiankun, QIU Xiaolan, DING Chibiao, et al. CVCMFF Net: Complex-valued convolutional and multifeature fusion network for building semantic segmentation of InSAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021: 1–14. doi: 10.1109/TGRS.2021.3068124
    [124] SHI Xianzheng, FU Shilei, CHEN Jin, et al. Object-level semantic segmentation on the high-resolution Gaofen-3 FUSAR-map dataset[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 3107–3119. doi: 10.1109/JSTARS.2021.3063797
    [125] 仇晓兰, 焦泽坤, 彭凌霄, 等. SARMV3D-1.0: SAR微波视觉三维成像数据集[J]. 雷达学报, 2021, 10(4): 485–498. doi: 10.12000/JR21112

    QIU Xiaolan, JIAO Zekun, PENG Lingxiao, et al. SARMV3D-1.0: Synthetic aperture radar microwave vision 3D imaging dataset[J]. Journal of Radars, 2021, 10(4): 485–498. doi: 10.12000/JR21112
  • 加载中
图(20)
计量
  • 文章访问数: 6620
  • HTML全文浏览量: 2910
  • PDF下载量: 877
  • 被引次数: 0
出版历程
  • 收稿日期:  2021-11-04
  • 修回日期:  2021-12-08
  • 网络出版日期:  2021-12-31
  • 刊出日期:  2022-02-28

目录

/

返回文章
返回