FCOSR: An Anchor-free Method for Arbitrary-oriented Ship Detection in SAR Images

XU Changgui, ZHANG Bo, GAO Jianwei, WU Fan, ZHANG Hong, WANG Chao

Citation: XU Changgui, ZHANG Bo, GAO Jianwei, et al. FCOSR: An anchor-free method for arbitrary-oriented ship detection in SAR images[J]. Journal of Radars, 2022, 11(3): 335–346. doi: 10.12000/JR21204


DOI: 10.12000/JR21204
Funds: The National Natural Science Foundation of China (41930110, 41901292)
    Author biographies:

    XU Changgui (1997–), male, M.S. student at the University of Chinese Academy of Sciences. His research interests include target detection and recognition in SAR images.

    ZHANG Bo (1976–), male, associate professor and master's supervisor at the Aerospace Information Research Institute, Chinese Academy of Sciences. His research interests include SAR big-data processing, radar target characteristics, and target detection and recognition.

    GAO Jianwei (1987–), male, engineer at the General Department of Satellite Applications, China Academy of Space Technology. His research interests include remote sensing big-data processing, high-performance computing, and hyperspectral remote sensing.

    WU Fan (1976–), male, associate professor at the Aerospace Information Research Institute, Chinese Academy of Sciences. His research interests include SAR image processing and information extraction.

    ZHANG Hong (1972–), female, professor and Ph.D. supervisor at the Aerospace Information Research Institute, Chinese Academy of Sciences, vice chair of the IEEE GRSS Beijing Chapter, and a member of the Remote Sensing Image Committee of the China Society of Image and Graphics. Her research interests include intelligent SAR image processing, polarimetric SAR, and interferometric SAR.

    WANG Chao (1963–), male, professor and Ph.D. supervisor at the Aerospace Information Research Institute, Chinese Academy of Sciences, and professor at the University of Chinese Academy of Sciences. He is an executive council member of the China Society of Image and Graphics, a senior member of IEEE GRSS, an associate editor of Remote Sensing Technology and Application and of the Journal of Image and Graphics, and a former chair of the IEEE GRSS Beijing Chapter. His research interests include high-performance InSAR processing and intelligent SAR image processing and applications.

    Corresponding author:

    ZHANG Bo, zhangbo202140@aircas.ac.cn

  • Corresponding Editor: JI Kefeng
  • CLC number: TN957.52

  • Abstract: Anchor-free networks represented by FCOS avoid the hyper-parameter tuning introduced by preset anchor boxes; however, their horizontal-box outputs cannot indicate the precise boundaries and orientations of arbitrarily oriented ship targets in SAR images. To address this problem, this paper proposes a detection algorithm named FCOSR. First, an angle parameter is added to the FCOS regression branch so that it outputs rotated bounding boxes. Second, nine-point features based on deformable convolution are introduced into the prediction of ship confidence and bounding-box residuals, reducing false alarms on land and improving box regression accuracy. Finally, a rotated adaptive training sample selection (RATSS) strategy is used during training to assign suitable positive sample points to each ship instance, further improving detection accuracy. Compared with FCOS and the published anchor-based rotated detection networks, the proposed network achieves faster detection and higher accuracy on the SSDD+ and HRSID datasets, with mAP values of 91.7% and 84.3%, respectively, and an average detection time of only 33 ms per image chip.
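
As a rough illustration of the first step described in the abstract (adding an angle term to the FCOS regression branch so that the head outputs rotated boxes), the PyTorch sketch below shows one way such a regression head could be organized. The layer sizes, the number of tower convolutions, and the exp() mapping of the side distances are illustrative assumptions rather than the configuration used in the paper; the deformable-convolution nine-point refinement and the RATSS assignment are omitted here.

```python
import torch
import torch.nn as nn

class RotatedRegressionHead(nn.Module):
    """Sketch of an FCOS-style regression branch extended with an angle
    channel so the network outputs rotated boxes (hypothetical layout)."""

    def __init__(self, in_channels=256, num_convs=4):
        super().__init__()
        layers = []
        for _ in range(num_convs):
            layers += [
                nn.Conv2d(in_channels, in_channels, 3, padding=1),
                nn.GroupNorm(32, in_channels),
                nn.ReLU(inplace=True),
            ]
        self.tower = nn.Sequential(*layers)
        self.dist_pred = nn.Conv2d(in_channels, 4, 3, padding=1)   # (l, t, r, b) as in FCOS
        self.angle_pred = nn.Conv2d(in_channels, 1, 3, padding=1)  # added orientation term

    def forward(self, feature):
        x = self.tower(feature)
        dists = torch.exp(self.dist_pred(x))     # keep side distances positive
        angle = self.angle_pred(x)               # box orientation (e.g. in radians)
        return torch.cat([dists, angle], dim=1)  # (N, 5, H, W) rotated-box regression map
```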


  • Figure 1.  The basic framework of FCOS

    Figure 2.  The structure of the FCOS detection head network

    Figure 3.  Parameter representation of the horizontal and rotated bounding boxes

    Figure 4.  Comparison of positive/negative sample selection methods

    Figure 5.  The architecture of FCOSR

    Figure 6.  Coordinate transformation of the nine point locations

    Figure 7.  Selection criteria for training samples (blue: positive sample points; gray: negative sample points)

    Figure 8.  Different positive sample selection methods

    Figure 9.  Distribution ratio of positive samples across feature layers when using RATSS

    Figure 10.  Comparison of ablation experiment results

    Figure 11.  Detection results of FCOS and FCOSR (blue: ground truth; yellow: false alarm; green: missed ship; red: detection result)

    Figure 12.  Detection results for offshore ships (blue: ground truth; yellow: false alarm; green: missed ship; red: detection result)

    Figure 13.  Detection results for inshore ships (blue: ground truth; yellow: false alarm; green: missed ship; red: detection result)

    Figure 14.  Detection results for ships in complex river channels (blue: ground truth; yellow: false alarm; green: missed ship; red: detection result)

    Table 1.  COCO metrics

    Metric  Explanation
    mAP     mAP at IoU = 0.50:0.05:0.95
    mAP_50  mAP at IoU = 0.50
    mAP_S   mAP_50 for small ships: area < 32²
    mAP_M   mAP_50 for medium ships: 32² < area < 96²
    mAP_L   mAP_50 for large ships: area > 96²
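
To make the Table 1 definitions concrete, the sketch below wires the IoU thresholds and area bins from the table into a generic per-threshold AP routine. The function average_precision is a hypothetical stand-in for the actual COCO-style evaluation code; only the thresholds and size ranges come from the table.

```python
import numpy as np

IOU_THRESHOLDS = np.arange(0.50, 1.00, 0.05)   # 0.50, 0.55, ..., 0.95 (Table 1)
AREA_RANGES = {                                 # pixel-area bins from Table 1
    "S": (0, 32 ** 2),
    "M": (32 ** 2, 96 ** 2),
    "L": (96 ** 2, float("inf")),
}

def coco_style_metrics(detections, ground_truths, average_precision):
    """average_precision(dets, gts, iou_thr, area_range) is a hypothetical
    placeholder for the COCO evaluation routine."""
    metrics = {"mAP": float(np.mean([
        average_precision(detections, ground_truths, iou_thr=t, area_range=None)
        for t in IOU_THRESHOLDS
    ]))}
    metrics["mAP_50"] = average_precision(detections, ground_truths,
                                          iou_thr=0.50, area_range=None)
    for name, rng in AREA_RANGES.items():
        metrics[f"mAP_{name}"] = average_precision(detections, ground_truths,
                                                   iou_thr=0.50, area_range=rng)
    return metrics
```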

    Table 2.  Results of different sample selection methods

    Method                  k   mAP (%)  mAP_50 (%)  mAP_S (%)  mAP_M (%)  mAP_L (%)  Time (s/iter)
    Sampling method a       –   30.2     75.7        34.0       31.2       30.3       0.266
    Sampling method b       –   38.6     85.6        38.3       43.4       41.5       0.269
    Sampling method c       –   36.6     83.5        39.2       32.0       33.6       0.268
    RATSS (without step 4)  5   40.3     87.2        39.4       43.4       70.1       0.287
    RATSS                   3   40.2     91.8        38.7       44.9       55.1       0.295
                            5   42.2     91.7        40.5       46.2       64.7       0.297
                            7   41.8     92.2        40.4       45.7       50.9       0.300
                            9   40.6     90.3        39.7       42.9       57.3       0.302
                            11  41.8     91.0        41.3       43.9       52.9       0.302
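
In Table 2, k is the number of candidate locations taken per feature level before the adaptive threshold is applied. The sketch below shows the adaptive training sample selection rule of Zhang et al. [32] that RATSS builds on: for each ground-truth ship, the k locations closest to the box centre are collected on every FPN level, and those whose (rotated) IoU exceeds the mean plus standard deviation of the pooled candidate IoUs are kept as positives. The rotation-specific adjustments of RATSS, including the "step 4" ablated above and the final in-box check, are not reproduced here; this is an illustrative simplification, not the authors' exact procedure.

```python
import numpy as np

def atss_style_selection(center_dists, ious, k=5):
    """For one ground-truth ship: per FPN level, take the k locations closest
    to the box centre, pool their IoUs with the ground truth, and keep the
    candidates whose IoU is at least mean + std of the pool. Inputs are lists
    of 1-D arrays, one per feature level."""
    candidates = [np.argsort(d)[:k] for d in center_dists]       # k nearest per level
    pooled = np.concatenate([iou[idx] for iou, idx in zip(ious, candidates)])
    thr = pooled.mean() + pooled.std()                           # adaptive IoU threshold
    return [idx[iou[idx] >= thr] for iou, idx in zip(ious, candidates)]
```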

    Table 3.  Results of ablation experiments (%)

    Nine-point feature representation  Residual regression branch  mAP   mAP_50  mAP_S  mAP_M  mAP_L
    ×                                  ×                           34.8  85.4    35.8   33.7   30.6
    √                                  ×                           38.3  89.8    37.7   40.2   53.8
    √                                  √                           42.2  91.7    40.5   46.2   64.7

    Table 4.  mAP results for inshore and offshore ships (%)

    Scene     Nine-point feature representation  Residual regression branch  mAP_50  mAP_S  mAP_M  mAP_L
    Inshore   ×                                  ×                           61.7    24.0   21.1   37.5
              √                                  ×                           75.6    28.6   29.5   68.5
              √                                  √                           76.3    30.7   36.9   70.1
    Offshore  ×                                  ×                           95.1    40.0   42.9   26.7
              √                                  ×                           95.4    41.0   47.6   51.6
              √                                  √                           97.4    44.0   52.3   64.2

    Table 5.  Performance comparison of FCOS and FCOSR

    Method  Backbone  FPN output channels  SSDD+ Recall (%)  SSDD+ mAP (%)  SSDD+ mAP_50 (%)  HRSID Recall (%)  HRSID mAP (%)  HRSID mAP_50 (%)  FPS   Size (MB)  Time (s/iter)
    FCOS    ResNet50  256                  97.18             45.1           93.7              87.20             50.5           82.9              21.6  192.1      0.352
    FCOSR   ResNet50  256                  94.55             41.9           92.1              87.55             44.9           83.5              20.8  196.8      0.584
    FCOSR   ResNet34  128                  94.17             42.2           91.7              87.59             47.3           84.3              30.1  188.1      0.297

    Table 6.  Accuracy comparison of different detection networks

    Method        SSDD+ Recall (%)  SSDD+ mAP (%)  SSDD+ mAP_50 (%)  HRSID Recall (%)  HRSID mAP (%)  HRSID mAP_50 (%)  FPS   Size (MB)  Time (s/iter)
    ReDet         88.16             43.69          87.57             83.11             44.2           80.1              11.8  256.8      0.544
    R3Det         91.16             40.47          89.29             86.14             47.2           83.7              11.4  378.9      0.638
    R-RetinaNet   89.85             36.00          86.20             83.57             42.1           80.9              15.6  290.2      0.331
    FasterRCNN-O  91.16             42.15          90.12             85.56             46.1           82.8              13.2  441.5      0.496
    FCOSR         94.17             42.20          91.70             87.59             47.3           84.3              30.1  188.1      0.297
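
Evaluating the rotated detectors compared above requires an IoU between oriented boxes rather than the usual axis-aligned overlap. A simple polygon-based rotated IoU, written here with shapely purely as an illustration (not the evaluation code used by the authors), could look as follows:

```python
import numpy as np
from shapely.geometry import Polygon

def rotated_iou(box_a, box_b):
    """IoU of two rotated boxes given as (cx, cy, w, h, theta), theta in radians."""
    def to_polygon(box):
        cx, cy, w, h, theta = box
        dx = np.array([-w, w, w, -w]) / 2.0   # corner offsets before rotation
        dy = np.array([-h, -h, h, h]) / 2.0
        xs = cx + dx * np.cos(theta) - dy * np.sin(theta)
        ys = cy + dx * np.sin(theta) + dy * np.cos(theta)
        return Polygon(list(zip(xs, ys)))
    pa, pb = to_polygon(box_a), to_polygon(box_b)
    inter = pa.intersection(pb).area
    union = pa.area + pb.area - inter
    return inter / union if union > 0 else 0.0
```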
  • [1] GUO Qian, WANG Haipeng, and XU Feng. Research progress on aircraft detection and recognition in SAR imagery[J]. Journal of Radars, 2020, 9(3): 497–513. doi: 10.12000/JR20020
    [2] FAN Haiwei, SHI Shuang, LIN Qi, et al. Research on ship target detection algorithm in complex background SAR image[J]. Computer Technology and Development, 2021, 31(10): 49–55. doi: 10.3969/j.issn.1673-629X.2021.10.009
    [3] LI Jianwei, QU Changwen, and PENG Shujuan. A ship detection method based on cascade CNN in SAR images[J]. Control and Decision, 2019, 34(10): 2191–2197. doi: 10.13195/j.kzyjc.2018.0168
    [4] CUI Zongyong, LI Qi, CAO Zongjie, et al. Dense attention pyramid networks for multi-scale ship detection in SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(11): 8983–8997. doi: 10.1109/TGRS.2019.2923988
    [5] ZHANG Xiaoling, ZHANG Tianwen, SHI Jun, et al. High-speed and high-accurate SAR ship detection based on a depthwise separable convolution neural network[J]. Journal of Radars, 2019, 8(6): 841–851. doi: 10.12000/JR19111
    [6] CHEN Huiyuan, LIU Zeyu, GUO Weiwei, et al. Fast detection of ship targets for large-scale remote sensing image based on a cascade convolutional neural network[J]. Journal of Radars, 2019, 8(3): 413–424. doi: 10.12000/JR19041
    [7] WANG Zhen, WANG Buhong, and XU Nan. SAR ship detection in complex background based on multi-feature fusion and non-local channel attention mechanism[J]. International Journal of Remote Sensing, 2021, 42(19): 7519–7550. doi: 10.1080/01431161.2021.1963003
    [8] WANG Yuanyuan, WANG Chao, ZHANG Hong, et al. Automatic ship detection based on RetinaNet using multi-resolution Gaofen-3 imagery[J]. Remote Sensing, 2019, 11(5): 531. doi: 10.3390/rs11050531
    [9] ZHAO Yan, ZHAO Lingjun, XIONG Boli, et al. Attention receptive pyramid network for ship detection in SAR images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, 13: 2738–2756. doi: 10.1109/JSTARS.2020.2997081
    [10] ZHANG Tianwen, ZHANG Xiaoling, and KE Xiao. Quad-FPN: A novel quad feature pyramid network for SAR ship detection[J]. Remote Sensing, 2021, 13(14): 2771. doi: 10.3390/rs13142771
    [11] LAW H and DENG Jia. CornerNet: Detecting objects as paired keypoints[J]. International Journal of Computer Vision, 2020, 128(3): 642–656. doi: 10.1007/s11263-019-01204-1
    [12] ZHOU Xingyi, WANG Dequan, and KRÄHENBÜHL P. Objects as points[C]. arXiv preprint arXiv: 1904.07850, 2019.
    [13] TIAN Zhi, SHEN Chunhua, CHEN Hao, et al. FCOS: Fully convolutional one-stage object detection[C]. The 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), 2019: 9626–9635.
    [14] KONG Tao, SUN Fuchun, LIU Huaping, et al. FoveaBox: Beyound anchor-based object detection[J]. IEEE Transactions on Image Processing, 2020, 29: 7389–7398. doi: 10.1109/TIP.2020.3002345
    [15] CUI Zongyong, WANG Xiaoya, LIU Nengyuan, et al. Ship detection in large-scale SAR images via spatial shuffle-group enhance attention[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(1): 379–391. doi: 10.1109/TGRS.2020.2997200
    [16] GUO Haoyuan, YANG Xi, WANG Nannan, et al. A CenterNet++ model for ship detection in SAR images[J]. Pattern Recognition, 2021, 112: 107787. doi: 10.1016/j.patcog.2020.107787
    [17] SUN Zhongzhen, DAI Muchen, LENG Xiangguang, et al. An anchor-free detection method for ship targets in high-resolution SAR images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 7799–7816. doi: 10.1109/JSTARS.2021.3099483
    [18] FU Jiamei, SUN Xian, WANG Zhirui, et al. An anchor-free method based on feature balancing and refinement network for multiscale ship detection in SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(2): 1331–1344. doi: 10.1109/TGRS.2020.3005151
    [19] MA Jianqi, SHAO Weiyuan, YE Hao, et al. Arbitrary-oriented scene text detection via rotation proposals[J]. IEEE Transactions on Multimedia, 2018, 20(11): 3111–3122. doi: 10.1109/TMM.2018.2818020
    [20] JIANG Yingying, ZHU Xiangyu, WANG Xiaobing, et al. R2CNN: Rotational region CNN for orientation robust scene text detection[C]. arXiv preprint arXiv: 1706.09579, 2017.
    [21] YANG Xue, YANG Jirui, YAN Junchi, et al. SCRDet: Towards more robust detection for small, cluttered and rotated objects[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019: 8231–8240.
    [22] YANG Xue, LIU Qingqing, YAN Junchi, et al. R3Det: Refined single-stage detector with feature refinement for rotating object[C]. arXiv preprint arXiv: 1908.05612, 2019.
    [23] HAN Jiaming, DING Jian, XUE Nan, et al. ReDet: A rotation-equivariant detector for aerial object detection[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, USA, 2021: 2785–2794.
    [24] WANG Jizhou, LU Changhua, and JIANG Weiwei. Simultaneous ship detection and orientation estimation in SAR images based on attention module and angle regression[J]. Sensors, 2018, 18(9): 2851. doi: 10.3390/s18092851
    [25] LIU Lei, CHEN Guowei, PAN Zongxu, et al. Inshore ship detection in SAR images based on deep neural networks[C]. IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018: 25–28.
    [26] AN Quanzhi, PAN Zongxu, LIU Lei, et al. DRBox-v2: An improved detector with rotatable boxes for target detection in SAR images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(11): 8333–8349. doi: 10.1109/TGRS.2019.2920534
    [27] CHEN Chen, HE Chuan, HU Changhua, et al. MSARN: A deep neural network based on an adaptive recalibration mechanism for multiscale and arbitrary-oriented SAR ship detection[J]. IEEE Access, 2019, 7: 159262–159283. doi: 10.1109/ACCESS.2019.2951030
    [28] PAN Zhenru, YANG Rong, and ZHANG Zhimin. MSR2N: Multi-stage rotational region based network for arbitrary-oriented ship detection in SAR images[J]. Sensors, 2020, 20(8): 2340. doi: 10.3390/s20082340
    [29] CHEN Shiqi, ZHANG Jun, and ZHAN Ronghui. R2FA-Det: Delving into high-quality rotatable boxes for ship detection in SAR images[J]. Remote Sensing, 2020, 12(12): 2031. doi: 10.3390/rs12122031
    [30] YANG Rong, WANG Gui, PAN Zhenru, et al. A novel false alarm suppression method for CNN-based SAR ship detector[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 18(8): 1401–1405. doi: 10.1109/LGRS.2020.2999506
    [31] DAI Jifeng, QI Haozhi, XIONG Yuwen, et al. Deformable convolutional networks[C]. 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017: 764–773.
    [32] ZHANG Shifeng, CHI Cheng, YAO Yongqiang, et al. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 9756–9765.
    [33] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318–327. doi: 10.1109/TPAMI.2018.2858826
    [34] LI Jianwei, QU Changwen, and SHAO Jiaqi. Ship detection in SAR images based on an improved faster R-CNN[C]. 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China, 2017: 1–6.
    [35] WEI Shunjun, ZENG Xiangfeng, QU Qizhe, et al. HRSID: A high-resolution SAR images dataset for ship detection and instance segmentation[J]. IEEE Access, 2020, 8: 120234–120254. doi: 10.1109/ACCESS.2020.3005861
    [36] XIA Guisong, BAI Xiang, DING Jian, et al. DOTA: A large-scale dataset for object detection in aerial images[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 3974–3983.
Publication history
  • Received: 2021-12-16
  • Revised: 2022-02-22
  • Published online: 2022-03-24
  • Issue date: 2022-06-28
