SAR Ship Detection in Complex Scenes Based on Adaptive Anchor Assignment and IOU Supervision

XU Xiaowo, ZHANG Xiaoling, ZHANG Tianwen, SHAO Zikang, XU Yanqin, ZENG Tianjiao

Citation: XU Xiaowo, ZHANG Xiaoling, ZHANG Tianwen, et al. SAR ship detection in complex scenes based on adaptive anchor assignment and IOU supervision[J]. Journal of Radars, 2023, 12(5): 1097–1111. doi: 10.12000/JR23059

doi: 10.12000/JR23059
Funds: The National Natural Science Foundation of China (61571099)
    Biographies:

    XU Xiaowo: Ph.D. candidate. Research interests: intelligent interpretation of remote sensing images; radar target detection, recognition, and tracking.

    ZHANG Xiaoling: Ph.D., professor. Research interests: SAR imaging; radar detection; target scattering characteristic (RCS) inversion for 3D SAR imaging.

    ZHANG Tianwen: Ph.D. Research interests: radar-vision perception fusion; neural networks and machine learning; intelligent SAR interpretation.

    SHAO Zikang: Master's student. Research interests: intelligent interpretation of remote sensing images; SAR imaging.

    XU Yanqin: Ph.D. candidate. Research interests: radar signal processing; radar imaging.

    ZENG Tianjiao: Ph.D., lecturer. Research interests: computational imaging; image reconstruction; the intersection of deep learning and imaging.

    Corresponding author:

    ZENG Tianjiao, tzeng@uestc.edu.cn

  • Corresponding Editor: JI Kefeng
  • CLC classification: TN957.52

SAR Ship Detection in Complex Scenes Based on Adaptive Anchor Assignment and IOU Supervision

  • Abstract: To address the problems of unreasonable positive/negative sample assignment and poor localization quality in ship detection in complex scenes, this paper proposes a complex-scene Synthetic Aperture Radar (SAR) ship detection method based on adaptive anchor assignment and Intersection Over Union (IOU) supervision, called A3-IOUS-Net. First, A3-IOUS-Net proposes adaptive anchor assignment, which establishes a probability distribution model to adaptively assign anchor boxes as positive and negative samples, enhancing the ability to learn ship samples in complex scenes. Second, A3-IOUS-Net proposes IOU supervision, which adds an IOU prediction branch to the prediction head to supervise the localization quality of detection boxes, enabling the network to locate ship targets in complex scenes precisely. Furthermore, a coordinate attention module is introduced into the IOU prediction branch to suppress background clutter interference, further improving detection accuracy. Experimental results on the public SAR Ship Detection Dataset (SSDD) show that A3-IOUS-Net achieves an Average Precision (AP) of 82.04% in complex scenes, outperforming the other 15 comparison models.
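    At inference, the two predicted quantities are fused into a single ranking score (see Algorithm 2, step 3): each detection box is scored by the geometric mean of its classification score and its predicted IOU score,

    $s_{i}=\sqrt{\mathrm{cls}_{i}\cdot \mathrm{IOU}_{i}},\quad i=1,2,\cdots,N$

    so that a box must be both confidently classified and well localized to rank highly in NMS.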


  • Figure 1. Overall framework of A3-IOUS-Net

    Figure 2. Schematic diagram of IOU between the predicted box and the ground truth box

    Figure 3. Anchor box distribution in complex SAR ship scenes under the classical anchor assignment criterion

    Figure 4. Schematic diagram of adaptive anchor assignment

    Figure 5. Schematic diagram of the IOU prediction branch

    Figure 6. Schematic diagram of the coordinate attention module

    Figure 7. Schematic diagram of GIOU between the predicted box and the ground truth box

    Figure 8. Precision-recall curves of different methods in different scenes

    Figure 9. Detection performance comparison of A3-IOUS-Net and the second-best model Libra R-CNN in complex scenes

    Figure 10. Distribution of anchor score samples

    Figure 11. Ship detection results in large-scene SAR images

    Algorithm 1. Basic process of adaptive anchor assignment

     Input: a set of ground-truth boxes $\mathcal{G}$, a set of anchor boxes $\mathcal{A}$, the set of anchor boxes ${\mathcal{A}}_{i}$ from the $i$-th pyramid
        level, the number of pyramid levels $\mathcal{L}$, and the number of candidate anchors $\mathcal{K}$ per pyramid level
     Process:
     1: do adaptive anchor assignment
     2:  $\mathcal{P} \leftarrow \varnothing,\; \mathcal{N} \leftarrow \varnothing,\; \mathcal{I} \leftarrow \varnothing$
     3:  for $g \in \mathcal{G}$ do
     4:   $\mathcal{A}_{g} \leftarrow$ CollectAnchors$(\mathcal{A}, g, \mathcal{G})$
     5:   $\mathcal{C}_{g} \leftarrow \varnothing$
     6:   for $i=1$ to $\mathcal{L}$ do
     7:    $\mathcal{A}_{i}^{g} \leftarrow \mathcal{A}_{i} \cap \mathcal{A}_{g}$
     8:    $\mathcal{S}_{i} \leftarrow$ ComputeAnchorScores$\left(\mathcal{A}_{i}^{g}, g\right)$
     9:    $t_{i} \leftarrow$ score of the $\mathcal{K}$-th highest-scoring anchor$\left(\mathcal{S}_{i}, \mathcal{K}\right)$
     10:   $\mathcal{C}_{g}^{i} \leftarrow\left\{a_{j} \in \mathcal{A}_{i}^{g} \mid s_{j} \in \mathcal{S}_{i},\; s_{j} \ge t_{i}\right\}$
     11:   $\mathcal{C}_{g} \leftarrow \mathcal{C}_{g} \cup \mathcal{C}_{g}^{i}$
     12:  end
     13:  $\mathcal{B}, \mathcal{F} \leftarrow$ FitGaussianMixture$\left(\mathcal{C}_{g}, 2\right)$
     14:  $\mathcal{N}_{g}, \mathcal{P}_{g} \leftarrow$ AssignAnchors$\left(\mathcal{C}_{g}, \mathcal{B}, \mathcal{F}\right)$
     15:  $\mathcal{P} \leftarrow \mathcal{P} \cup \mathcal{P}_{g},\; \mathcal{N} \leftarrow \mathcal{N} \cup \mathcal{N}_{g},\; \mathcal{I} \leftarrow \mathcal{I}\cup\left(\mathcal{C}_{g} - \mathcal{P}_{g} - \mathcal{N}_{g}\right)$
     16: end
     17: $\mathcal{N} \leftarrow \mathcal{N}\cup(\mathcal{A}-\mathcal{P}-\mathcal{N}-\mathcal{I})$
     18: end
     Output: a set of positive samples $\mathcal{P}$, a set of negative samples $\mathcal{N}$, and a set of ignored samples $\mathcal{I}$
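    A minimal sketch of steps 13-14 above, assuming the candidate anchor scores for one ground-truth box are already computed: a two-component Gaussian mixture is fitted to the scores, and the component with the higher mean is treated as the positive set, in the spirit of the probabilistic anchor assignment of Ref. [19]. The score definition and the ignore-set rule of step 15 are simplified away here.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def assign_anchors(scores, anchor_ids):
        """Split the candidate anchors of one ground-truth box into positives
        and negatives by fitting a 2-component Gaussian mixture to their scores."""
        gmm = GaussianMixture(n_components=2, random_state=0)
        gmm.fit(scores.reshape(-1, 1))
        labels = gmm.predict(scores.reshape(-1, 1))
        pos_component = int(np.argmax(gmm.means_.ravel()))  # higher-mean component = positives
        positives = [a for a, l in zip(anchor_ids, labels) if l == pos_component]
        negatives = [a for a, l in zip(anchor_ids, labels) if l != pos_component]
        return positives, negatives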

    Algorithm 2. Basic process of NMS post-processing combined with IOU supervision

     Input: a set of initial detection boxes $\mathcal{B}=\left\{b_{1}, b_{2}, \cdots, b_{N}\right\}$, their classification scores
        ${\mathcal{S}}_{\mathrm{cls}}=\left\{\mathrm{cls}_{1}, \mathrm{cls}_{2}, \cdots, \mathrm{cls}_{N}\right\}$, their IOU scores ${\mathcal{S}}_{\mathrm{IOU}}=\left\{\mathrm{IOU}_{1}, \mathrm{IOU}_{2}, \cdots, \mathrm{IOU}_{N}\right\}$,
        and the IOU threshold $N_{{\rm{t}}}$
     Process:
     1: do combine the classification scores and IOU scores
     2:  $\mathcal{S} \leftarrow\{\;\}$
     3:  ${\mathcal{S}}={\mathcal{S}}_{{\rm{cls}}}^{1/2}\cdot {\mathcal{S}}_{{\rm{IOU}}}^{1/2}$  // IOU supervision
     4: end
     5: do NMS post-processing
     6:  $\mathcal{D} \leftarrow\{\;\}$
     7:  while $\mathcal{B} \neq \varnothing$ do
     8:   $m \leftarrow \text{arg\,max}\; {\mathcal{S}}$
     9:   $\mathcal{M} \leftarrow b_{m}$
     10:  $\mathcal{D} \leftarrow \mathcal{D} \cup \mathcal{M};\; \mathcal{B} \leftarrow \mathcal{B}-\mathcal{M}$
     11:  for $b_{i}\;\text{in}\;\mathcal{B}$ do
     12:   if ${{\rm{IOU}}}\left(\mathcal{M}, b_{i}\right) \ge N_{{\rm{t}}}$ then
     13:    $\mathcal{B} \leftarrow \mathcal{B}-b_{i};\; \mathcal{S} \leftarrow \mathcal{S}-s_{i}$
     14:   end
     15:  end
     16: end
     17: end
     Output: the set of detection boxes after NMS $\mathcal{D}$ and the set of detection scores after NMS $\mathcal{S}$
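    The whole procedure fits in a few lines of NumPy, as a rough sketch: step 3's joint score is the geometric mean of the two score sets, after which standard greedy NMS (steps 12-13 suppress overlapping boxes) runs unchanged.

    import numpy as np

    def nms_with_iou_supervision(boxes, cls_scores, iou_scores, iou_thresh=0.5):
        """Greedy NMS ranked by the joint score sqrt(cls * predicted IOU).
        boxes: [N, 4] arrays in (x1, y1, x2, y2); cls_scores, iou_scores: [N]."""
        scores = np.sqrt(cls_scores * iou_scores)  # step 3: IOU supervision
        order = scores.argsort()[::-1]
        keep = []
        while order.size > 0:
            m = order[0]
            keep.append(m)
            # IOU of the top-scoring box with every remaining box
            x1 = np.maximum(boxes[m, 0], boxes[order[1:], 0])
            y1 = np.maximum(boxes[m, 1], boxes[order[1:], 1])
            x2 = np.minimum(boxes[m, 2], boxes[order[1:], 2])
            y2 = np.minimum(boxes[m, 3], boxes[order[1:], 3])
            inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
            area_m = (boxes[m, 2] - boxes[m, 0]) * (boxes[m, 3] - boxes[m, 1])
            areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                    (boxes[order[1:], 3] - boxes[order[1:], 1])
            iou = inter / (area_m + areas - inter)
            order = order[1:][iou < iou_thresh]  # steps 12-13: suppress overlaps
        return keep, scores[keep]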

    Table 1. Information of SSDD

    Parameter            | Value
    SAR satellites       | RadarSat-2, TerraSAR-X, Sentinel-1
    Number of SAR images | 1160
    Average image size   | 500 pixels × 500 pixels
    Training:test ratio  | 8:2
    Polarization modes   | HH, VV, VH, HV
    Resolution           | 1~15 m
    Locations            | Yantai, China; Visakhapatnam, India
    Sea conditions       | Good, poor
    Scenes               | Complex inshore, simple offshore
    Number of ships      | 2587
    Smallest ship        | 66 pixels
    Largest ship         | 78597 pixels

    Table 2. Performance comparison of A3-IOUS-Net and other methods

    Method                 | All scenes (%)      | Complex inshore scenes (%) | Simple offshore scenes (%) | Params (M)
                           | R     P     AP      | R     P     AP             | R     P     AP             |
    RetinaNet [17]         | 83.15 89.37 82.23   | 58.14 76.92 56.15          | 94.65 93.65 94.01          | 30.94
    ATSS [28]              | 86.81 87.13 85.78   | 62.79 76.06 59.07          | 97.86 91.04 97.50          | 30.86
    DCN [29]               | 89.38 72.62 86.67   | 72.67 55.80 63.09          | 97.06 81.03 95.91          | 41.49
    PANet [30]             | 92.49 79.65 91.06   | 77.91 61.19 71.95          | 99.20 89.40 98.76          | 44.16
    Faster R-CNN [31]      | 86.81 81.87 85.15   | 71.51 63.73 65.29          | 93.85 90.93 93.16          | 40.61
    Cascade R-CNN [32]     | 87.18 89.14 86.40   | 69.19 73.01 65.36          | 95.45 96.23 95.29          | 68.42
    Dynamic R-CNN [33]     | 89.19 85.59 88.18   | 69.77 69.77 65.42          | 98.13 92.44 97.83          | 40.61
    Double-Head R-CNN [34] | 92.31 84.28 91.22   | 78.49 68.53 73.72          | 98.66 92.02 98.29          | 46.20
    Swin Transformer [35]  | 89.01 89.17 88.42   | 75.00 73.71 71.95          | 95.45 96.49 95.32          | 72.55
    Libra R-CNN [36]       | 93.22 83.99 91.59   | 80.81 65.26 73.66          | 98.93 94.15 98.43          | 40.88
    ARPN [37]              | 89.01 88.04 88.06   | 72.67 73.96 68.49          | 96.52 94.26 96.28          | 41.17
    Quad-FPN [38]          | 92.12 79.46 90.91   | 77.33 59.64 70.79          | 98.93 90.24 98.73          | 46.44
    HR-SDNet [39]          | 91.76 87.89 90.88   | 76.16 70.43 71.83          | 98.93 96.35 98.79          | 90.92
    GWFFE-Net [15]         | 92.86 71.81 91.34   | 81.98 53.61 75.71          | 97.86 82.62 97.45          | 61.48
    SER Faster R-CNN [40]  | 93.04 77.79 91.52   | 81.40 59.32 74.88          | 98.40 88.25 97.96          | 41.74
    A3-IOUS-Net            | 95.05 89.18 94.05   | 86.05 79.57 82.04          | 99.20 93.69 98.83          | 31.08

    Table 3. Effect of the adaptive anchor assignment mechanism on A3-IOUS-Net accuracy (%)

    Adaptive anchor assignment | All scenes          | Complex inshore scenes | Simple offshore scenes
                               | R     P     AP      | R     P     AP         | R     P     AP
    ×                          | 91.76 89.46 90.72   | 76.16 75.72 70.87      | 98.93 95.61 98.76
    ✓                          | 95.05 89.18 94.05   | 86.05 79.57 82.04      | 99.20 93.69 98.83

    Table 4. Effect of different probability distribution models in the adaptive anchor assignment mechanism (%)

    Probability distribution model | All scenes          | Complex inshore scenes | Simple offshore scenes
                                   | R     P     AP      | R     P     AP         | R     P     AP
    Dirichlet mixture              | 87.36 88.17 86.50   | 64.53 77.62 62.39      | 97.86 91.96 97.28
    Student's t mixture            | 89.01 92.57 88.17   | 69.77 83.33 66.82      | 97.86 96.06 97.46
    Beta mixture                   | 91.76 85.35 91.01   | 76.74 70.21 73.29      | 98.66 92.48 98.40
    Gaussian mixture               | 95.05 89.18 94.05   | 86.05 79.57 82.04      | 99.20 93.69 98.83

    Table 5. Effect of the IOU supervision mechanism on A3-IOUS-Net accuracy (%)

    IOU supervision | All scenes          | Complex inshore scenes | Simple offshore scenes
                    | R     P     AP      | R     P     AP         | R     P     AP
    ×               | 89.38 90.37 87.19   | 70.93 78.21 64.66      | 97.86 95.31 96.72
    ✓               | 95.05 89.18 94.05   | 86.05 79.57 82.04      | 99.20 93.69 98.83

    Table 6. Effect of different IOU prediction loss functions in the IOU supervision mechanism (%)

    Loss function | All scenes          | Complex inshore scenes | Simple offshore scenes
                  | R     P     AP      | R     P     AP         | R     P     AP
    IOU Loss      | 92.67 89.08 91.68   | 78.49 79.41 74.50      | 99.20 93.22 98.80
    GIOU Loss     | 95.05 89.18 94.05   | 86.05 79.57 82.04      | 99.20 93.69 98.83

    Table 7. Effect of the coordinate attention module in the IOU supervision mechanism (%)

    Coordinate attention | All scenes          | Complex inshore scenes | Simple offshore scenes
                         | R     P     AP      | R     P     AP         | R     P     AP
    ×                    | 93.59 86.32 92.74   | 80.81 70.20 76.64      | 99.47 94.42 99.24
    ✓                    | 95.05 89.18 94.05   | 86.05 79.57 82.04      | 99.20 93.69 98.83
  • [1] LIU Fangjian and LI Yuan. SAR remote sensing image ship detection method NanoDet based on visual saliency[J]. Journal of Radars, 2021, 10(6): 885–894. doi: 10.12000/JR21105
    [2] ZHANG Tianwen, ZHANG Xiaoling, KE Xiao, et al. HOG-ShipCLSNet: A novel deep learning network with hog feature fusion for SAR ship classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5210322. doi: 10.1109/TGRS.2021.3082759
    [3] ZHANG Tianwen and ZHANG Xiaoling. Injection of traditional hand-crafted features into modern CNN-based models for SAR ship classification: What, why, where, and how[J]. Remote Sensing, 2021, 13(11): 2091. doi: 10.3390/rs13112091
    [4] XU Xiaowo, ZHANG Xiaoling, ZHANG Tianwen, et al. Shadow-background-noise 3D spatial decomposition using sparse low-rank Gaussian properties for video-SAR moving target shadow enhancement[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 4516105. doi: 10.1109/LGRS.2022.3223514
    [5] ZHANG Tianwen and ZHANG Xiaoling. High-speed ship detection in SAR images based on a grid convolutional neural network[J]. Remote Sensing, 2019, 11(10): 1206. doi: 10.3390/rs11101206
    [6] ZHANG Tianwen, ZHANG Xiaoling, SHI Jun, et al. Balance scene learning mechanism for offshore and inshore ship detection in SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 4004905. doi: 10.1109/LGRS.2020.3033988
    [7] XU Cong’an, SU Hang, LI Jianwei, et al. RSDD-SAR: Rotated ship detection dataset in SAR images[J]. Journal of Radars, 2022, 11(4): 581–599. doi: 10.12000/JR22007
    [8] ZHANG Tianwen, ZHANG Xiaoling, SHI Jun, et al. Depthwise separable convolution neural network for high-speed SAR ship detection[J]. Remote Sensing, 2019, 11(21): 2483. doi: 10.3390/rs11212483
    [9] TANG Gang, ZHUGE Yichao, CLARAMUNT C, et al. N-YOLO: A SAR ship detection using noise-classifying and complete-target extraction[J]. Remote Sensing, 2021, 13(5): 871. doi: 10.3390/rs13050871
    [10] ZHANG Tianwen and ZHANG Xiaoling. HTC+ for SAR ship instance segmentation[J]. Remote Sensing, 2022, 14(10): 2395. doi: 10.3390/rs14102395
    [11] HE Bokun, ZHANG Qingyi, TONG Ming, et al. Oriented ship detector for remote sensing imagery based on pairwise branch detection head and SAR feature enhancement[J]. Remote Sensing, 2022, 14(9): 2177. doi: 10.3390/rs14092177
    [12] XU Xiaowo, ZHANG Xiaoling, and ZHANG Tianwen. Lite-YOLOv5: A lightweight deep learning detector for on-board ship detection in large-scene sentinel-1 SAR images[J]. Remote Sensing, 2022, 14(4): 1018. doi: 10.3390/rs14041018
    [13] ZHANG Tianwen, ZHANG Xiaoling, SHI Jun, et al. HyperLi-Net: A hyper-light deep learning network for high-accurate and high-speed ship detection from synthetic aperture radar imagery[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 167: 123–153. doi: 10.1016/j.isprsjprs.2020.05.016
    [14] ZHANG Tianwen and ZHANG Xiaoling. A mask attention interaction and scale enhancement network for SAR ship instance segmentation[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 4511005. doi: 10.1109/LGRS.2022.3189961
    [15] XU Xiaowo, ZHANG Xiaoling, SHAO Zikang, et al. A group-wise feature enhancement-and-fusion network with dual-polarization feature enrichment for SAR ship detection[J]. Remote Sensing, 2022, 14(20): 5276. doi: 10.3390/rs14205276
    [16] LI Jianwei, XU Cong’an, SU Hang, et al. Deep learning for SAR ship detection: Past, present and future[J]. Remote Sensing, 2022, 14(11): 2712. doi: 10.3390/rs14112712
    [17] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318–327. doi: 10.1109/TPAMI.2018.2858826
    [18] ZHANG Tianwen, ZHANG Xiaoling, KE Xiao, et al. LS-SSDD-v1.0: A deep learning dataset dedicated to small ship detection from large-scale Sentinel-1 SAR images[J]. Remote Sensing, 2020, 12(18): 2997. doi: 10.3390/rs12182997
    [19] KIM K and LEE H S. Probabilistic anchor assignment with IoU prediction for object detection[C]. 16th European Conference on Computer Vision, Glasgow, UK, 2020: 355–371.
    [20] REYNOLDS D. Gaussian Mixture Models[M]. LI S Z and JAIN A. Encyclopedia of Biometrics. Boston, USA: Springer, 2009: 659–663.
    [21] DEMPSTER A P, LAIRD N M, and RUBIN D B. Maximum likelihood from incomplete data via the EM algorithm[J]. Journal of the Royal Statistical Society:Series B (Methodological), 1977, 39(1): 1–22. doi: 10.1111/j.2517-6161.1977.tb01600.x
    [22] ZHANG Caiguang, XIONG Boli, LI Xiao, et al. TCD: Task-collaborated detector for oriented objects in remote sensing images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 4700714. doi: 10.1109/TGRS.2023.3244953
    [23] ZHANG Tianwen and ZHANG Xiaoling. Squeeze-and-excitation Laplacian pyramid network with dual-polarization feature fusion for ship classification in SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 4019905. doi: 10.1109/LGRS.2021.3119875
    [24] HOU Qibin, ZHOU Daquan, and FENG Jiashi. Coordinate attention for efficient mobile network design[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 13708–13717.
    [25] ZHANG Tianwen, ZHANG Xiaoling, LI Jianwei, et al. SAR ship detection dataset (SSDD): Official release and comprehensive data analysis[J]. Remote Sensing, 2021, 13(18): 3690. doi: 10.3390/rs13183690
    [26] KETKAR N. Introduction to PyTorch[M]. KETKAR N. Deep Learning with Python: A Hands-on Introduction. Berkeley, USA: Apress, 2017: 195–208.
    [27] CHEN Kai, WANG Jiaqi, PANG Jiangmiao, et al. MMDetection: Open MMLab detection toolbox and benchmark[J]. arXiv: 1906.07155, 2019.
    [28] ZHANG Shifeng, CHI Cheng, YAO Yongqiang, et al. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 9756–9765.
    [29] ZHU Xizhou, HU Han, LIN S, et al. Deformable ConvNets V2: More deformable, better results[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 9300–9308.
    [30] LIU Shu, QI Lu, QIN Haifeng, et al. Path aggregation network for instance segmentation[J]. arXiv: 1803.01534, 2018.
    [31] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149. doi: 10.1109/TPAMI.2016.2577031
    [32] CAI Zhaowei and VASCONCELOS N. Cascade R-CNN: Delving into high quality object detection[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 6154–6162.
    [33] ZHANG Hongkai, CHANG Hong, MA Bingpeng, et al. Dynamic R-CNN: Towards high quality object detection via dynamic training[C]. 16th European Conference on Computer Vision, Glasgow, UK, 2020: 260–275.
    [34] WU Yue, CHEN Yinpeng, YUAN Lu, et al. Rethinking classification and localization for object detection[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 10183–10192.
    [35] LIU Ze, LIN Yutong, CAO Yue, et al. Swin transformer: Hierarchical vision transformer using shifted windows[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 9992–10002.
    [36] PANG Jiangmiao, CHEN Kai, SHI Jianping, et al. Libra R-CNN: Towards balanced learning for object detection[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 821–830.
    [37] ZHAO Yan, ZHAO Lingjun, XIONG Boli, et al. Attention receptive pyramid network for ship detection in SAR images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, 13: 2738–2756. doi: 10.1109/JSTARS.2020.2997081
    [38] ZHANG Tianwen, ZHANG Xiaoling, and KE Xiao. Quad-FPN: A novel quad feature pyramid network for SAR ship detection[J]. Remote Sensing, 2021, 13(14): 2771. doi: 10.3390/rs13142771
    [39] WEI Shunjun, SU Hao, MING Jing, et al. Precise and robust ship detection for high-resolution SAR imagery based on HR-SDNet[J]. Remote Sensing, 2020, 12(1): 167. doi: 10.3390/rs12010167
    [40] LIN Zhao, JI Kefeng, LENG Xiangguang, et al. Squeeze and excitation rank faster R-CNN for ship detection in SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(5): 751–755. doi: 10.1109/LGRS.2018.2882551
    [41] VO X T and JO K H. A review on anchor assignment and sampling heuristics in deep learning-based object detection[J]. Neurocomputing, 2022, 506: 96–116. doi: 10.1016/j.neucom.2022.07.003
    [42] SUN Xian, WANG Zhirui, SUN Yuanrui, et al. AIR-SARShip-1.0: High-resolution SAR ship detection dataset[J]. Journal of Radars, 2019, 8(6): 852–862. doi: 10.12000/JR19097
Publication history
  • Received: 2023-04-27
  • Revised: 2023-05-26
  • Published online: 2023-06-21
  • Issue date: 2023-10-28
