A Multi-aspect SAR Image Target Recognition Method Based on EfficientNet and BiGRU

ZHAO Pengfei, HUANG Lijia

Citation: ZHAO Pengfei and HUANG Lijia. Target recognition method for multi-aspect synthetic aperture radar images based on EfficientNet and BiGRU[J]. Journal of Radars, 2021, 10(6): 895–904. doi: 10.12000/JR20133


DOI: 10.12000/JR20133
    Biographies:

    ZHAO Pengfei (1996–), male, M.S. candidate; research interest: synthetic aperture radar image analysis

    HUANG Lijia (1984–), female, Ph.D., research fellow, M.S. supervisor; research interests: synthetic aperture radar signal processing and image analysis

    Corresponding author:

    HUANG Lijia, iecas8huanglijia@163.com

  • Corresponding Editor: LIN Yun
  • CLC classification: TP753

Target Recognition Method for Multi-aspect Synthetic Aperture Radar Images Based on EfficientNet and BiGRU

Funds: The National Natural Science Foundation of China (61991420, 62022082) and the Special Support of the Youth Innovation Promotion Association, Chinese Academy of Sciences
  • Abstract: Automatic Target Recognition (ATR) for Synthetic Aperture Radar (SAR) is now widely used in both military and civilian applications. SAR images are extremely sensitive to the imaging aspect angle: images of the same target differ considerably across aspect angles, while a multi-aspect SAR image sequence carries richer discriminative information than any single image. This paper therefore proposes a multi-aspect SAR target recognition model based on EfficientNet and BiGRU, trained with the island loss. The method reaches 100% accuracy on the ten-class recognition task of the MSTAR dataset, and achieves 99.68%, 99.95%, and 99.91% accuracy under three extended operating conditions: imaging at a large depression (grazing) angle, version variants, and configuration variants, respectively. The method also attains satisfactory accuracy on small-scale datasets. Experimental results show that it outperforms other multi-aspect SAR target recognition methods on most MSTAR datasets and exhibits a degree of robustness.

     

  • Figure  1.  Multi-aspect SAR ATR network framework

    Figure  2.  The structure of GRU

    Figure  3.  The structure of BiGRU
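The GRU and BiGRU structures in Figures 2 and 3 can be made concrete with a minimal NumPy sketch: one GRU step with update and reset gates (following the formulation of ref. [18]), and a bidirectional wrapper that concatenates the final forward and backward hidden states. The 1280-dimensional input (matching an EfficientNet-B0 feature vector), the 128-unit hidden size, and the random weights are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU step. W, U, b stack the update (z), reset (r),
    and candidate (n) transforms along their first axis."""
    Wz, Wr, Wn = W
    Uz, Ur, Un = U
    bz, br, bn = b
    z = sigmoid(x_t @ Wz + h_prev @ Uz + bz)        # update gate
    r = sigmoid(x_t @ Wr + h_prev @ Ur + br)        # reset gate
    n = np.tanh(x_t @ Wn + (r * h_prev) @ Un + bn)  # candidate state
    return (1 - z) * h_prev + z * n                 # new hidden state (ref. [18] convention)

def bigru(seq, params_f, params_b):
    """Run a forward and a backward GRU over seq and concatenate
    the two final hidden states (one common BiGRU readout)."""
    d_h = params_f[1][0].shape[0]
    h_f = np.zeros(d_h)
    for x_t in seq:                      # forward pass
        h_f = gru_step(x_t, h_f, *params_f)
    h_b = np.zeros(d_h)
    for x_t in reversed(seq):            # backward pass
        h_b = gru_step(x_t, h_b, *params_b)
    return np.concatenate([h_f, h_b])

def init_params(d_in, d_h, rng):
    W = rng.standard_normal((3, d_in, d_h)) * 0.1
    U = rng.standard_normal((3, d_h, d_h)) * 0.1
    b = np.zeros((3, d_h))
    return W, U, b

rng = np.random.default_rng(0)
d_in, d_h, L = 1280, 128, 4              # e.g. B0 features, L = 4 aspect images
seq = [rng.standard_normal(d_in) for _ in range(L)]
out = bigru(seq, init_params(d_in, d_h, rng), init_params(d_in, d_h, rng))
print(out.shape)                         # (256,): forward + backward states
```

The concatenated 256-dimensional vector would then feed a fully connected classification layer.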

    Figure  4.  SAR images of the same target at different azimuth angles

    Figure  5.  Construction of multi-aspect image sequences
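The construction in Figure 5 amounts to sorting one target's image chips by azimuth angle and sliding a length-L window over them. A minimal sketch follows; treating the azimuth ordering as circular (wrapping past the last chip) and using a stride of one are assumptions of this sketch, not details confirmed by the caption.

```python
# Sketch of multi-aspect sequence construction: sort one target's chips by
# azimuth, then slide a length-L window (stride 1, circular wrap assumed).
def build_sequences(samples, L):
    """samples: list of (azimuth_deg, image_id); returns list of id tuples."""
    ordered = [s for _, s in sorted(samples)]
    n = len(ordered)
    return [tuple(ordered[(i + k) % n] for k in range(L))
            for i in range(n)]

samples = [(10, "a"), (95, "b"), (33, "c"), (170, "d"), (250, "e")]
seqs = build_sequences(samples, L=4)
print(seqs[0])   # ('a', 'c', 'b', 'd') – the four lowest azimuths in order
```

With a stride of one, each of the n chips starts one sequence, which is how a few thousand images can yield roughly ten thousand sequences (cf. Table 3).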

    Table  1.   EfficientNet-B0 network structure

    | Stage | Operator               | Output size | Layers |
    |-------|------------------------|-------------|--------|
    | 1     | Conv3×3                | 16×32×32    | 1      |
    | 2     | MBConv1, k3×3          | 24×32×32    | 1      |
    | 3     | MBConv6, k3×3          | 40×16×16    | 2      |
    | 4     | MBConv6, k5×5          | 80×8×8      | 2      |
    | 5     | MBConv6, k3×3          | 112×8×8     | 3      |
    | 6     | MBConv6, k5×5          | 192×4×4     | 3      |
    | 7     | MBConv6, k5×5          | 320×2×2     | 4      |
    | 8     | MBConv6, k3×3          | 1280×2×2    | 1      |
    | 9     | Conv1×1 & Pooling & FC | k           | 1      |
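As a quick sanity check of the output sizes in Table 1: the table's 32×32 stage-1 output implies a 64×64 input chip (an assumption of this sketch) and a fixed per-stage stride pattern, which reproduces every spatial size in the table.

```python
# Per-stage strides implied by Table 1 for an assumed 64x64 input chip:
# the spatial size halves exactly where the table's output size drops.
strides = [2, 1, 2, 2, 1, 2, 2, 1]   # stages 1-8
size = 64
sizes = []
for s in strides:
    size //= s
    sizes.append(size)
print(sizes)   # [32, 32, 16, 8, 8, 4, 2, 2]
```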

    Table  2.   Comparison of EfficientNet-B0 and ResNet50 networks

    | Model           | Params (M) | FLOPS (B) | Top-1/Top-5 accuracy (%) |
    |-----------------|------------|-----------|--------------------------|
    | EfficientNet-B0 | 5.3        | 0.39      | 77.3/93.5                |
    | ResNet50        | 26.0       | 4.10      | 76.0/93.0                |

    Table  3.   SOC dataset size when L = 4

    | Target  | Training samples | Test samples |
    |---------|------------------|--------------|
    | 2S1     | 1162             | 1034         |
    | BMP2    | 883              | 634          |
    | BRDM_2  | 1158             | 1040         |
    | BTR70   | 889              | 649          |
    | BTR60   | 978              | 667          |
    | D7      | 1162             | 1037         |
    | T62     | 1162             | 1032         |
    | T72     | 874              | 642          |
    | ZIL131  | 1162             | 1034         |
    | ZSU_234 | 1162             | 1040         |
    | Total   | 10592            | 8809         |

    Table  4.   EOC-1 dataset size when L = 4

    | Target  | Training samples | Test samples |
    |---------|------------------|--------------|
    | 2S1     | 1166             | 1088         |
    | BRDM_2  | 1162             | 1084         |
    | T72     | 913              | 1088         |
    | ZSU_234 | 1166             | 1088         |
    | Total   | 4407             | 4348         |

    Table  5.   EOC-1, EOC-2 and EOC-3 dataset sizes

    | L | Dataset | Training total | Test total |
    |---|---------|----------------|------------|
    | 4 | EOC-1   | 4407           | 4384       |
    | 4 | EOC-2   | 4473           | 9996       |
    | 4 | EOC-3   | 4473           | 12969      |
    | 3 | EOC-1   | 3307           | 3310       |
    | 3 | EOC-2   | 2889           | 7773       |
    | 3 | EOC-3   | 2889           | 10199      |
    | 2 | EOC-1   | 2202           | 2312       |
    | 2 | EOC-2   | 1934           | 5258       |
    | 2 | EOC-3   | 1934           | 6911       |

    Table  6.   Sizes of the datasets after data augmentation (augmented subsets only)

    | L | Dataset     | Training total |
    |---|-------------|----------------|
    | 4 | EOC-1       | 17392          |
    | 3 | SOC         | 16032          |
    | 3 | EOC-1       | 13228          |
    | 3 | EOC-2&EOC-3 | 11544          |
    | 2 | SOC         | 16041          |
    | 2 | EOC-1       | 8808           |
    | 2 | EOC-2&EOC-3 | 7736           |

    Table  7.   Parameter settings in the SOC experiment

    | Name                     | Setting |
    |--------------------------|---------|
    | Batch size               | 32      |
    | Optimizer                | Adam    |
    | Adam learning rate       | 0.001   |
    | Island Loss optimizer    | SGD     |
    | SGD learning rate        | 0.5     |
    | Island Loss parameter λ  | 0.001   |
    | Island Loss parameter λ1 | 10      |
    | Epochs                   | 260     |
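The island loss used in training (ref. [19]) augments the center loss with a term that pushes the class centers apart by penalizing their pairwise cosine similarity. A minimal NumPy sketch of its auxiliary term follows, using the λ and λ1 values from Table 7; the feature dimension, the toy data, and the variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def island_loss_term(features, labels, centers, lam1):
    """Center-loss part plus pairwise (cos + 1) penalty between class
    centers, following the island loss formulation of ref. [19]."""
    # center loss: squared distance of each feature to its class center
    center = 0.5 * np.sum((features - centers[labels]) ** 2)
    # island term: cosine similarity (+1) summed over ordered center pairs
    normed = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    cos = normed @ normed.T
    k = len(centers)
    island = np.sum(cos + 1.0) - 2.0 * k   # drop the k diagonal (cos = 1) entries
    return center + lam1 * island

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))     # 8 samples, 16-D features (toy sizes)
labels = rng.integers(0, 4, size=8)      # 4 classes, as in EOC-1
centers = rng.standard_normal((4, 16))
lam, lam1 = 0.001, 10                    # values from Table 7
total_aux = lam * island_loss_term(feats, labels, centers, lam1)
print(total_aux)
```

This auxiliary term is added to the softmax cross-entropy loss; since each off-diagonal (cos + 1) lies in [0, 2] and the center part is a sum of squares, the term is always non-negative.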

    Table  8.   EOC-1 confusion matrix when L = 4

    | Class   | 2S1  | BRDM_2 | T72  | ZSU_234 | Acc (%) |
    |---------|------|--------|------|---------|---------|
    | 2S1     | 1076 | 2      | 10   | 0       | 98.90   |
    | BRDM_2  | 0    | 1084   | 0    | 0       | 100.00  |
    | T72     | 0    | 0      | 1088 | 0       | 100.00  |
    | ZSU_234 | 2    | 0      | 0    | 1086    | 99.82   |
    | Average |      |        |      |         | 99.68   |

    Table  9.   Comparison of recognition accuracy on the SOC and EOC-1 datasets when L = 4

    | No. | Method          | SOC Acc (%) | SOC images | SOC sequences | EOC-1 Acc (%) | EOC-1 images | EOC-1 sequences |
    |-----|-----------------|-------------|------------|---------------|---------------|--------------|-----------------|
    | 1   | MVDCNN[13]      | 98.52       | 6904       | 3533          | 94.61         | 2831         | 9705            |
    | 2   | MS-CNN[15]      | 99.92       | 2747       | 2747          | 98.61         | 1128         | 1128            |
    | 3   | ResNet-LSTM[16] | 100.00      | 2000       | 7720          | 98.97         | 928          | 3614            |
    | 4   | Proposed method | 100.00      | 2747       | 10592         | 99.08         | 1128         | 4407            |
    | 5   | Proposed method with image augmentation | – | – | –            | 99.68         | 1128         | 17628           |

    Table  10.   Comparison of test accuracy when L = 3 (%)

    | Method          | SOC accuracy | EOC-1 accuracy |
    |-----------------|--------------|----------------|
    | MVDCNN[13]      | 98.17        | 94.34          |
    | MS-CNN[15]      | 99.88        | 97.48          |
    | Proposed method | 99.94        | 98.58          |

    Table  11.   Comparison of test accuracy when L = 2 (%)

    | Method          | SOC accuracy | EOC-1 accuracy |
    |-----------------|--------------|----------------|
    | MVDCNN[13]      | 97.81        | 93.29          |
    | MS-CNN[15]      | 99.84        | 96.69          |
    | Proposed method | 99.87        | 97.60          |

    Table  12.   Comparison of accuracy on EOC-2 (%)

    | Method          | L = 4  | L = 3  | L = 2 |
    |-----------------|--------|--------|-------|
    | MVDCNN[13]      | 95.46  | 95.08  | 93.75 |
    | MS-CNN[15]      | 100.00 | 100.00 | 99.67 |
    | Proposed method | 99.95  | 99.82  | 99.39 |

    Table  13.   Comparison of accuracy on EOC-3 (%)

    | Method          | L = 4 | L = 3 | L = 2 |
    |-----------------|-------|-------|-------|
    | MVDCNN[13]      | 95.45 | 95.25 | 94.98 |
    | MS-CNN[15]      | 99.58 | 99.08 | 98.71 |
    | Proposed method | 99.91 | 99.57 | 99.13 |

    Table  14.   Recognition accuracy on the reduced datasets (%)

    | Dataset size    | 5%    | 15%   | 50%   |
    |-----------------|-------|-------|-------|
    | Proposed method | 95.98 | 99.72 | 99.93 |
    | ResNet-LSTM[16] | 93.97 | 99.37 | 99.58 |

    Table  15.   Results of ablation experiments

    | No. | Center Loss | Island Loss | EfficientNet | BiGRU | Accuracy (%) | Gain (%) |
    |-----|-------------|-------------|--------------|-------|--------------|----------|
    | 1   |             |             |              |       | 94.08        | –        |
    | 2   |             |             |              |       | 95.81        | 1.73     |
    | 3   |             |             |              |       | 97.03        | 1.22     |
    | 4   |             |             |              |       | 98.46        | 1.43     |
    | 5   |             |             |              |       | 99.08        | 0.62     |
  • [1] GAI Xugang, CHEN Jinwen, HAN Jun, et al. Development status and trend of synthetic aperture radar[J]. Aerodynamic Missile Journal, 2011(3): 82–86, 95.
    [2] ZHANG Hong, WANG Chao, ZHANG Bo, et al. Target Recognition in High Resolution SAR Images[M]. Beijing: Science Press, 2009.
    [3] MOREIRA A, PRATS-IRAOLA P, YOUNIS M, et al. A tutorial on synthetic aperture radar[J]. IEEE Geoscience and Remote Sensing Magazine, 2013, 1(1): 6–43. doi: 10.1109/MGRS.2013.2248301
    [4] WANG Ruixia, LIN Wei, and MAO Jun. Speckle suppression for SAR image based on wavelet transform and PCA[J]. Computer Engineering, 2008, 34(20): 235–237. doi: 10.3969/j.issn.1000-3428.2008.20.086
    [5] CHEN Sizhe and WANG Haipeng. SAR target recognition based on deep learning[C]. 2014 International Conference on Data Science and Advanced Analytics, Shanghai, China, 2015.
    [6] TIAN Zhuangzhuang, ZHAN Ronghui, HU Jiemin, et al. SAR ATR based on convolutional neural network[J]. Journal of Radars, 2016, 5(3): 320–325. doi: 10.12000/JR16037
    [7] CHEN Sizhe, WANG Haipeng, XU Feng, et al. Target classification using the deep convolutional networks for SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(8): 4806–4817. doi: 10.1109/TGRS.2016.2551720
    [8] FURUKAWA H. Deep learning for target classification from SAR imagery: Data augmentation and translation invariance[R]. SANE2017-30, 2017.
    [9] YUAN Yuan, YUAN Hao, LEI Ling, et al. An imaging method of GEO spaceborne-airborne bistatic SAR[J]. Radar Science and Technology, 2007, 5(2): 128–132. doi: 10.3969/j.issn.1672-2337.2007.02.011
    [10] SHI Hongyin, ZHOU Yinqing, and CHEN Jie. An algorithm of GEO spaceborne-airborne bistatic three-channel SAR ground moving target indication[J]. Journal of Electronics & Information Technology, 2009, 31(8): 1881–1885.
    [11] LI Zhuo, LI Chunsheng, YU Ze, et al. Back projection algorithm for high resolution GEO-SAR image formation[C]. 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, Canada, 2011: 336–339.
    [12] ZHANG Fan, HU Chen, YIN Qiang, et al. Multi-aspect-aware bidirectional LSTM networks for synthetic aperture radar target recognition[J]. IEEE Access, 2017, 5: 26880–26891. doi: 10.1109/ACCESS.2017.2773363
    [13] PEI Jifang, HUANG Yulin, HUO Weibo, et al. SAR automatic target recognition based on Multiview deep learning framework[J]. IEEE Transactions on Geoscience and Remote Sensing, 2018, 56(4): 2196–2210. doi: 10.1109/TGRS.2017.2776357
    [14] ZOU Hao, LIN Yun, and HONG Wen. Research on multi-aspect SAR images target recognition using deep learning[J]. Journal of Signal Processing, 2018, 34(5): 513–522. doi: 10.16798/j.issn.1003-0530.2018.05.002
    [15] ZHAO Pengfei, LIU Kai, ZOU Hao, et al. Multi-stream convolutional neural network for SAR automatic target recognition[J]. Remote Sensing, 2018, 10(9): 1473. doi: 10.3390/rs10091473
    [16] ZHANG Fan, FU Zhenzhen, ZHOU Yongsheng, et al. Multi-aspect SAR target recognition based on space-fixed and space-varying scattering feature joint learning[J]. Remote Sensing Letters, 2019, 10(10): 998–1007. doi: 10.1080/2150704X.2019.1635287
    [17] TAN Mingxing and LE Q V. EfficientNet: Rethinking model scaling for convolutional neural networks[J]. arXiv preprint arXiv:1905.11946, 2019.
    [18] CHO K, VAN MERRIENBOER B, GULCEHRE C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[J]. arXiv preprint arXiv:1406.1078, 2014.
    [19] CAI Jie, MENG Zibo, KHAN A S, et al. Island loss for learning discriminative features in facial expression recognition[C]. The 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, China, 2018: 302–309.
    [20] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016.
    [21] HOCHREITER S and SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735–1780. doi: 10.1162/neco.1997.9.8.1735
    [22] WEN Yandong, ZHANG Kaipeng, LI Zhifeng, et al. A discriminative feature learning approach for deep face recognition[C]. The 14th European Conference on Computer Vision – ECCV 2016, Amsterdam, The Netherlands, 2016.
Publication history
  • Received: 26 October 2020
  • Revised: 21 December 2020
  • Available online: 07 January 2021
  • Issue date: 28 December 2021
