AIR-PolSAR-Seg-2.0: Polarimetric SAR Ground Terrain Classification Dataset for Large-scale Complex Scenes

WANG Zhirui, ZHAO Liangjin, WANG Yuelei, ZENG Xuan, KANG Jian, YANG Jian, SUN Xian

Citation: WANG Zhirui, ZHAO Liangjin, WANG Yuelei, et al. AIR-PolSAR-Seg-2.0: Polarimetric SAR ground terrain classification dataset for large-scale complex scenes[J]. Journal of Radars, 2025, 14(2): 353–365. doi: 10.12000/JR24237


DOI: 10.12000/JR24237 CSTR: 32380.14.JR24237
Funds: The National Natural Science Foundation of China (62331027)
Author information:

    WANG Zhirui, Ph.D., associate researcher. Main research interest: intelligent interpretation of SAR images.

    ZHAO Liangjin, M.S., assistant researcher. Main research interests: model lightweighting and edge intelligence.

    WANG Yuelei, Ph.D. candidate. Main research interest: distributed collaborative perception.

    ZENG Xuan, Ph.D. candidate. Main research interest: PolSAR ground terrain classification.

    KANG Jian, Ph.D., associate professor. Main research interest: intelligent interpretation of remote sensing images.

    YANG Jian, Ph.D., professor. Main research interest: PolSAR image processing.

    SUN Xian, Ph.D., researcher. Main research interest: intelligent interpretation of multi-source remote sensing images.

    Corresponding author:

    ZHAO Liangjin, zhaolj004896@aircas.ac.cn

  • Corresponding Editor: CHEN Siwei
  • CLC number: TN957

  • Abstract: Polarimetric Synthetic Aperture Radar (PolSAR) ground terrain classification is one of the research hotspots in intelligent SAR image interpretation. To further advance research in this field, this paper organizes and releases AIR-PolSAR-Seg-2.0, a PolSAR ground terrain classification dataset for large-scale complex scenes. The dataset consists of three Gaofen-3 (GF-3) satellite L1A-level complex SAR images covering different regions, with a spatial resolution of 8 m and four polarizations (HH, HV, VH, and VV). It contains six typical terrain categories (water, vegetation, bare land, building, road, and mountain) and is characterized by large complex scenes, diverse strong and weak scattering, irregular boundary distributions, multi-scale categories, and imbalanced sample distributions. To facilitate experimental validation, the three full SAR scenes were cropped into 24,672 patches of 512×512 pixels, on which a series of general deep learning methods were benchmarked. The results show that the dual-attention-based DANet performs best, reaching a mean Intersection over Union (mIoU) of 85.96% on amplitude data and 87.03% on amplitude-phase fused data. The dataset and benchmark results should help other researchers carry out further studies on PolSAR ground terrain classification.
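The patch-generation step described in the abstract (cropping each full scene into 512×512 tiles) can be sketched as below. This is a minimal sketch assuming a non-overlapping stride and zero-padding of the scene border; the paper does not specify those details here, so they are assumptions rather than the dataset's actual scheme:

```python
import numpy as np

def tile_image(image, patch=512, stride=512):
    """Split a large scene into fixed-size patches.

    Border regions that do not fill a whole patch are zero-padded
    (an assumed convention, not stated by the dataset authors).
    Works for 2D (H, W) or 3D (H, W, C) arrays.
    """
    h, w = image.shape[:2]
    ph = int(np.ceil(h / stride)) * stride  # padded height
    pw = int(np.ceil(w / stride)) * stride  # padded width
    padded = np.zeros((ph, pw) + image.shape[2:], dtype=image.dtype)
    padded[:h, :w] = image
    patches = []
    for y in range(0, ph, stride):
        for x in range(0, pw, stride):
            patches.append(padded[y:y + patch, x:x + patch])
    return patches

# Example: a 5456 x 4708 scene (area 1 in Table 1) yields
# ceil(5456/512) * ceil(4708/512) = 11 * 10 = 110 non-overlapping tiles.
```

With overlapping strides or per-polarization tiling the total patch count grows accordingly, which is presumably how the three scenes produce the 24,672 samples reported.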


  • Figure 1. GF-3 PolSAR image of area 1

    Figure 2. GF-3 PolSAR image of area 2

    Figure 3. GF-3 PolSAR image of area 3

    Figure 4. Distribution of class samples in the AIR-PolSAR-Seg-2.0 dataset

    Figure 5. Process of AIR-PolSAR-Seg-2.0 dataset patch generation

    Figure 6. The confusion matrices for the methods

    Figure 7. Visualization results of different methods on the AIR-PolSAR-Seg-2.0 dataset

    1. Release webpage of AIR-PolSAR-Seg-2.0: Polarimetric SAR ground terrain classification dataset for large-scale complex scenes

    Table 1. Details of image data for the three regions in the AIR-PolSAR-Seg-2.0 dataset

    Region | Resolution (m) | Longitude | Latitude | Image size (pixels) | Date | Imaging mode | Polarization
    1 | 8 | 113°2' | 23°0' | 5456×4708 | Nov. 2016 | Quad-polarization strip I | HH, HV, VH, VV
    2 | 8 | 116°5' | 40°1' | 7820×6488 | Sep. 2016 | Quad-polarization strip I | HH, HV, VH, VV
    3 | 8 | 121°7' | 31°2' | 6014×4708 | Jan. 2019 | Quad-polarization strip I | HH, HV, VH, VV

    Table 2. The corresponding numbers and coding information of the ground terrain categories in the AIR-PolSAR-Seg-2.0 dataset

    Category | Area 1 image | Area 2 image | Area 3 image
    C1 | Water (0, 0, 255) | Water (0, 0, 255) | Water (0, 0, 255)
    C2 | Vegetation (0, 255, 0) | Vegetation (0, 255, 0) | Vegetation (0, 255, 0)
    C3 | Bare land (255, 0, 0) | Bare land (255, 0, 0) | Bare land (255, 0, 0)
    C4 | Road (0, 255, 255) | Road (0, 255, 255) | Road (0, 255, 255)
    C5 | Building (255, 255, 0) | Building (255, 255, 0) | Building (255, 255, 0)
    C6 | Mountain (255, 0, 255) | – | –
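A minimal sketch of decoding the RGB color-coded annotation masks in Table 2 into integer label maps, as a segmentation pipeline would need. Mapping unmatched colors to 0 ("unlabeled") is an assumption of this sketch, not something the dataset specifies:

```python
import numpy as np

# RGB palette from Table 2 (category C1..C6).
PALETTE = {
    (0, 0, 255): 1,    # water
    (0, 255, 0): 2,    # vegetation
    (255, 0, 0): 3,    # bare land
    (0, 255, 255): 4,  # road
    (255, 255, 0): 5,  # building
    (255, 0, 255): 6,  # mountain
}

def rgb_mask_to_labels(mask):
    """Convert an (H, W, 3) color-coded mask into an (H, W) index map.

    Pixels whose color is not in the palette stay 0 (assumed unlabeled).
    """
    labels = np.zeros(mask.shape[:2], dtype=np.uint8)
    for color, idx in PALETTE.items():
        hit = np.all(mask == np.array(color, dtype=mask.dtype), axis=-1)
        labels[hit] = idx
    return labels
```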

    Table 3. Comparison of the AIR-PolSAR-Seg-2.0 and AIR-PolSAR-Seg datasets

    Dataset | Data content | Resolution | Terrain categories | Image regions and sizes | Sample count and size
    AIR-PolSAR-Seg-2.0 | L1A-level complex SAR data (amplitude and phase images); polarizations HH, HV, VH, VV | 8 m | 6 classes: water, vegetation, bare land, building, road, mountain | Three scenes: 5456×4708, 7820×6488, and 6014×4708 pixels | 24,672 patches of 512×512 pixels
    AIR-PolSAR-Seg | L2-level SAR amplitude images; polarizations HH, HV, VH, VV | 8 m | 6 classes: housing area, industrial area, natural area, land-use area, water area, and other | One scene: 9082×9805 pixels | 2,000 patches of 512×512 pixels
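The key difference in Table 3 is that AIR-PolSAR-Seg-2.0 ships L1A-level complex data, from which the amplitude and phase channels used in the fusion experiments can be derived. A minimal sketch, assuming single-look complex (SLC) arrays per polarization and an arbitrary channel stacking order (neither is specified here):

```python
import numpy as np

def amp_phase_channels(slc_hh, slc_hv, slc_vh, slc_vv):
    """Build an 8-channel amplitude+phase array from four complex
    polarization channels. The stacking order (4 amplitudes, then
    4 phases) is an assumption of this sketch.
    """
    slcs = [slc_hh, slc_hv, slc_vh, slc_vv]
    amps = [np.abs(s) for s in slcs]       # amplitude per polarization
    phases = [np.angle(s) for s in slcs]   # phase in radians, [-pi, pi]
    return np.stack(amps + phases, axis=0)
```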

    Table 4. Comparative results of different methods in experiments based on amplitude data (%)

    Method | C0 (IoU/PA) | C1 (IoU/PA) | C2 (IoU/PA) | C3 (IoU/PA) | C4 (IoU/PA) | C5 (IoU/PA) | PA | mPA | mIoU | Kappa
    FCN | 89.33/93.44 | 88.34/94.03 | 75.58/79.25 | 62.07/75.31 | 88.58/95.03 | 90.53/92.15 | 92.46 | 88.20 | 82.40 | 89.13
    PSPNet | 89.78/93.30 | 88.22/94.05 | 73.93/78.82 | 60.00/70.24 | 88.56/95.88 | 92.55/93.19 | 92.41 | 87.58 | 82.17 | 89.02
    DeepLabV3+ | 90.44/93.11 | 89.54/95.18 | 69.66/73.98 | 63.15/76.61 | 89.93/95.44 | 95.98/98.17 | 93.12 | 88.75 | 83.12 | 90.08
    PointRend | 90.63/93.70 | 89.53/96.06 | 79.87/84.31 | 62.19/70.82 | 89.57/95.32 | 95.43/96.65 | 93.23 | 89.48 | 84.54 | 90.21
    DANet | 91.06/94.49 | 90.51/95.62 | 80.51/84.76 | 67.39/78.67 | 90.66/95.59 | 95.62/97.28 | 93.92 | 91.07 | 85.96 | 91.24
    Note: Bold indicates the best result.

    Table 5. Comparative results of different methods in experiments based on amplitude and phase fusion data (%)

    Method | C0 (IoU/PA) | C1 (IoU/PA) | C2 (IoU/PA) | C3 (IoU/PA) | C4 (IoU/PA) | C5 (IoU/PA) | PA | mPA | mIoU | Kappa
    FCN | 89.62/92.62 | 88.39/94.65 | 73.16/75.52 | 61.84/72.69 | 88.58/95.37 | 94.60/96.98 | 92.55 | 87.97 | 82.70 | 89.23
    PSPNet | 89.87/93.66 | 88.31/94.02 | 76.38/81.35 | 60.90/71.51 | 88.70/95.57 | 94.51/96.25 | 92.57 | 88.73 | 83.11 | 89.27
    DeepLabV3+ | 89.82/91.92 | 89.93/95.68 | 73.78/78.42 | 64.25/73.76 | 90.14/96.27 | 96.23/98.36 | 93.39 | 89.07 | 84.02 | 90.44
    PointRend | 91.25/94.98 | 90.03/94.49 | 79.49/82.06 | 66.19/75.38 | 90.06/96.60 | 95.23/97.40 | 93.66 | 90.15 | 85.38 | 90.84
    DANet | 91.73/94.32 | 91.17/96.19 | 82.38/85.63 | 70.86/81.35 | 91.48/95.91 | 94.54/97.52 | 94.46 | 91.82 | 87.03 | 92.03
    Note: Bold indicates the best result.
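The PA, mPA, mIoU, and Kappa columns in Tables 4 and 5 can all be computed from a single confusion matrix; the sketch below uses the standard definitions (rows = ground truth, columns = prediction), which the paper presumably follows but does not restate here:

```python
import numpy as np

def metrics_from_confusion(cm):
    """Compute PA, mPA, mIoU, and Cohen's Kappa from a confusion matrix.

    cm[i, j] = number of pixels of true class i predicted as class j.
    """
    cm = cm.astype(np.float64)
    tp = np.diag(cm)
    total = cm.sum()
    pa = tp.sum() / total                               # overall pixel accuracy
    per_class_pa = tp / cm.sum(axis=1)                  # per-class recall
    iou = tp / (cm.sum(axis=1) + cm.sum(axis=0) - tp)   # per-class IoU
    mpa, miou = per_class_pa.mean(), iou.mean()
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2  # chance agreement
    kappa = (pa - pe) / (1 - pe)
    return pa, mpa, miou, kappa
```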
  • [1] JACKSON C R and APEL J R. Synthetic Aperture Radar Marine User’s Manual[M]. Washington: National Oceanic and Atmospheric Administration, 2004.
    [2] FU Kun, FU Jiamei, WANG Zhirui, et al. Scattering-keypoint-guided network for oriented ship detection in high-resolution and large-scale SAR images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 11162–11178. doi: 10.1109/JSTARS.2021.3109469.
    [3] LEE J S and POTTIER E. Polarimetric Radar Imaging: From Basics to Applications[M]. Boca Raton: CRC Press, 2017: 1–10. doi: 10.1201/9781420054989.
    [4] LIU Xu, JIAO Licheng, TANG Xu, et al. Polarimetric convolutional network for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(5): 3040–3054. doi: 10.1109/TGRS.2018.2879984.
    [5] PARIKH H, PATEL S, and PATEL V. Classification of SAR and PolSAR images using deep learning: A review[J]. International Journal of Image and Data Fusion, 2020, 11(1): 1–32. doi: 10.1080/19479832.2019.1655489.
    [6] BI Haixia, SUN Jian, and XU Zongben. A graph-based semisupervised deep learning model for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(4): 2116–2132. doi: 10.1109/TGRS.2018.2871504.
    [7] CHEN Siwei and TAO Chensong. PolSAR image classification using polarimetric-feature-driven deep convolutional neural network[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(4): 627–631. doi: 10.1109/LGRS.2018.2799877.
    [8] LIU Tao, YANG Ziyuan, JIANG Yanni, et al. Review of ship detection in polarimetric synthetic aperture imagery[J]. Journal of Radars, 2021, 10(1): 1–19. doi: 10.12000/JR20155.
    [9] WU Wenjin, LI Hailei, LI Xinwu, et al. PolSAR image semantic segmentation based on deep transfer learning—realizing smooth classification with small training sets[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(6): 977–981. doi: 10.1109/LGRS.2018.2886559.
    [10] XIAO Daifeng, WANG Zhirui, WU Youming, et al. Terrain segmentation in polarimetric SAR images using dual-attention fusion network[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 4006005. doi: 10.1109/LGRS.2020.3038240.
    [11] FREEMAN A and DURDEN S L. A three-component scattering model for polarimetric SAR data[J]. IEEE Transactions on Geoscience and Remote Sensing, 1998, 36(3): 963–973. doi: 10.1109/36.673687.
    [12] XIAO Dongling and LIU Chang. PolSAR terrain classification based on fine-tuned dilated group-cross convolution neural network[J]. Journal of Radars, 2019, 8(4): 479–489. doi: 10.12000/JR19039.
    [13] QIN Xianxiang, YU Wangsheng, WANG Peng, et al. Weakly supervised classification of PolSAR images based on sample refinement with complex-valued convolutional neural network[J]. Journal of Radars, 2020, 9(3): 525–538. doi: 10.12000/JR20062.
    [14] ZOU Huanxin, LI Meilin, MA Qian, et al. An unsupervised PolSAR image classification algorithm based on tensor product graph diffusion[J]. Journal of Radars, 2019, 8(4): 436–447. doi: 10.12000/JR19057.
    [15] FANG Zheng, ZHANG Gong, DAI Qijun, et al. Hybrid attention-based encoder-decoder fully convolutional network for PolSAR image classification[J]. Remote Sensing, 2023, 15(2): 526. doi: 10.3390/rs15020526.
    [16] ZHANG Mengxuan, SHI Jingyuan, LIU Long, et al. Evolutionary complex-valued CNN for PolSAR image classification[C]. 2024 International Joint Conference on Neural Networks, Yokohama, Japan, 2024: 1–8. doi: 10.1109/IJCNN60899.2024.10650936.
    [17] SUN Xian, WANG Peijin, YAN Zhiyuan, et al. FAIR1M: A benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2022, 184: 116–130. doi: 10.1016/j.isprsjprs.2021.12.004.
    [18] ZAMIR W S, ARORA A, GUPTA A, et al. iSAID: A large-scale dataset for instance segmentation in aerial images[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Long Beach, USA, 2019: 28–37.
    [19] YANG Yi and NEWSAM S. Bag-of-visual-words and spatial extensions for land-use classification[C]. The 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, USA, 2010: 270–279. doi: 10.1145/1869790.1869829.
    [20] ROTTENSTEINER F, SOHN G, GERKE M, et al. ISPRS semantic labeling contest[C]. Photogrammetric Computer Vision, Zurich, Switzerland, 2014: 5–7.
    [21] CHENG Gong, HAN Junwei, and LU Xiaoqiang. Remote sensing image scene classification: Benchmark and state of the art[J]. Proceedings of the IEEE, 2017, 105(10): 1865–1883. doi: 10.1109/JPROC.2017.2675998.
    [22] SHENG Guofeng, YANG Wen, XU Tao, et al. High-resolution satellite scene classification using a sparse coding based multiple feature combination[J]. International Journal of Remote Sensing, 2012, 33(8): 2395–2412. doi: 10.1080/01431161.2011.608740.
    [23] LIU Xu, JIAO Licheng, LIU Fang, et al. PolSF: PolSAR image datasets on San Francisco[C]. The 5th IFIP TC 12 International Conference on Intelligence Science, Xi’an, China, 2022: 214–219. doi: 10.1007/978-3-031-14903-0_23.
    [24] WANG Zhirui, ZENG Xuan, YAN Zhiyuan, et al. AIR-PolSAR-Seg: A large-scale data set for terrain segmentation in complex-scene PolSAR images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, 15: 3830–3841. doi: 10.1109/JSTARS.2022.3170326.
    [25] HOCHSTUHL S, PFEFFER N, THIELE A, et al. Pol-InSAR-island—a benchmark dataset for multi-frequency pol-InSAR data land cover classification[J]. ISPRS Open Journal of Photogrammetry and Remote Sensing, 2023, 10: 100047. doi: 10.1016/j.ophoto.2023.100047.
    [26] WEST R D, HENRIKSEN A, STEINBACH E, et al. High-resolution fully-polarimetric synthetic aperture radar dataset[J]. Discover Geoscience, 2024, 2(1): 83. doi: 10.1007/s44288-024-00090-6.
    [27] LONG J, SHELHAMER E, and DARRELL T. Fully convolutional networks for semantic segmentation[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 3431–3440. doi: 10.1109/CVPR.2015.7298965.
    [28] ZHAO Hengshuang, SHI Jianping, QI Xiaojuan, et al. Pyramid scene parsing network[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 6230–6239. doi: 10.1109/CVPR.2017.660.
    [29] CHEN L C, ZHU Yukun, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]. The 15th European Conference on Computer Vision, Munich, Germany, 2018: 833–851. doi: 10.1007/978-3-030-01234-2_49.
    [30] KIRILLOV A, WU Yuxin, HE Kaiming, et al. PointRend: Image segmentation as rendering[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 9796–9805. doi: 10.1109/CVPR42600.2020.00982.
    [31] FU Jun, LIU Jing, TIAN Haijie, et al. Dual attention network for scene segmentation[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 3141–3149. doi: 10.1109/CVPR.2019.00326.
Publication history
  • Received: 2024-11-29
  • Revised: 2025-03-23
  • Published online: 2025-03-31
  • Issue published: 2025-04-28
