A Semi-supervised Emitter Identification Method for Imbalanced Category

TAN Kaiwen, ZHANG Limin, YAN Wenjun, XU Cong'an, LING Qing, LIU Hengyan

Citation: TAN Kaiwen, ZHANG Limin, YAN Wenjun, et al. A semi-supervised emitter identification method for imbalanced category[J]. Journal of Radars, 2022, 11(4): 713–727. doi: 10.12000/JR22043


DOI: 10.12000/JR22043
Funds: The National Natural Science Foundation of China (91538201)
    Author Information:

    TAN Kaiwen (1998–), male, master's student. His main research interests include specific emitter identification, generative adversarial networks, and spectrum sensing.

    ZHANG Limin (1966–), male, professor and doctoral supervisor. His main research interests include satellite signal processing and its applications.

    YAN Wenjun (1986–), male, Ph.D., associate professor. His main research interests include space-time block code detection and intelligent signal processing.

    XU Cong'an (1987–), male, Ph.D., associate professor. His main research interests include intelligent remote sensing image processing and multi-target tracking.

    LING Qing (1987–), female, Ph.D., associate professor. Her main research interest is intelligent processing of communication signals.

    LIU Hengyan (1994–), female, Ph.D. candidate. Her main research interest is LDPC decoding.

    Corresponding Authors:

    ZHANG Limin  iamzlm@163.com

    YAN Wenjun  wj_yan@foxmail.com

  • Corresponding Editor: ZHU Weigang
  • CLC number: TN911.7

  • Abstract: To address the decline in classification accuracy caused by incomplete sample labels and imbalanced class distributions in Specific Emitter Identification (SEI), this paper proposes a specific emitter classification method based on cost-sensitive learning and a semi-supervised Generative Adversarial Network (GAN). The method optimizes the network parameters of the generator and the discriminator through semi-supervised training, adds a Multi-scale Topology Block (MTB) to the residual network to fuse multi-resolution features of the time-domain signal, and assigns an additional label to generated samples so that classification is completed directly by the discriminator. A cost-sensitive loss is also designed to mitigate the gradient-propagation imbalance caused by majority-class samples and to improve the classifier's recognition performance on class-imbalanced datasets. Experimental results on four imbalanced simulated datasets show that, when 40% of the samples are unlabeled, the proposed method improves the average recognition rate for five emitters by 5.34% and 2.69% over the cross-entropy loss and the focal loss, respectively, offering a new approach to specific emitter identification when data labels are missing and class distributions are uneven.
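    The following Python sketch illustrates the kind of cost-sensitive, class-weighted cross-entropy that the abstract refers to, applied to the (K+1)-class semi-supervised discriminator (K real emitter classes plus one class for generated samples). It is a minimal illustration under assumed choices: the inverse-frequency weighting, the function names, and the tensor shapes are not taken from the paper, whose exact cost-sensitive loss (ICL) is defined in the full text.

    import torch
    import torch.nn.functional as F

    def make_class_weights(counts):
        """counts: per-class sample counts for the K real emitter classes."""
        counts = torch.tensor(counts, dtype=torch.float32)
        real = counts.sum() / (len(counts) * counts)   # inverse-frequency weights for real classes
        fake = torch.tensor([1.0])                     # weight for the "generated" class
        return torch.cat([real, fake])                 # length K + 1

    def cost_sensitive_loss(logits, targets, class_weights):
        """logits: (batch, K+1); targets: (batch,), where label K marks generated samples."""
        return F.cross_entropy(logits, targets, weight=class_weights)

    # Example with the counts of "training set 1" in Table 2 (10, 10, 100, 480, 480):
    weights = make_class_weights([10, 10, 100, 480, 480])
    logits = torch.randn(8, 6)               # 5 emitters + 1 generated class
    targets = torch.randint(0, 6, (8,))
    loss = cost_sensitive_loss(logits, targets, weights)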

     

  • Figure 1. The overall structure of IC-SGAN

    Figure 2. The discriminator based on MTB and residual units

    Figure 3. Five types of cost-sensitive loss curves

    Figure 4. The basic neuron structure

    Figure 5. Loss curves of the discriminator and the generator

    Figure 6. The influence of discriminator structure parameters on recognition accuracy

    Figure 7. Comparison of recognition accuracy between MTB-ResNet and ResNet

    Figure 8. Comparison of recognition performance under different numbers of training samples

    Figure 9. Confusion matrices under different labeled ratios

    Figure 10. Photograph of the emitter signal acquisition system

    Figure 11. Recognition results of IC-SGAN for different numbers of emitters

    Table 1. The structure of the generator

    Input layer: random noise, 1×100
    Fully connected layer + reshape: 256×64, reshaped to 1×256×64
    Conv layer 1 (with batch normalization, LeakyReLU, upsampling): filters = 128, kernel_size = 1×3, strides = 1, padding = same
    Conv layer 3: filters = 128
    Conv layer 4: filters = 64
    Conv layer 5: filters = 32
    Conv layer 6: filters = 1
    Flatten + fully connected layer: 1000
    Output layer: 1×1000 signal vector
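    As a rough reconstruction of Table 1, the tf.keras sketch below stacks the listed layers into a generator that maps 1×100 noise to a 1×1000 signal vector. The reshape orientation, the placement of the upsampling step, the absence of a "conv layer 2" entry, and the linear output activation are assumptions made for illustration; this is not the authors' released implementation.

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_generator(noise_dim=100, signal_len=1000):
        z = layers.Input(shape=(noise_dim,))              # random noise, 1×100
        x = layers.Dense(256 * 64)(z)                     # fully connected layer
        x = layers.Reshape((1, 256, 64))(x)               # dimension transform, 1×256×64
        for filters in (128, 128, 64, 32, 1):             # conv layers 1, 3, 4, 5, 6
            x = layers.Conv2D(filters, kernel_size=(1, 3),
                              strides=1, padding="same")(x)
            x = layers.BatchNormalization()(x)
            x = layers.LeakyReLU()(x)
        x = layers.Flatten()(x)
        out = layers.Dense(signal_len)(x)                 # 1×1000 signal vector
        return tf.keras.Model(z, out, name="generator")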

    Table 2. Training and test set settings for the simulated data

    Dataset                  E1    E2    E3    E4    E5
    Standard training set    480   480   480   480   480
    Standard test set        480   480   480   480   480
    Training set 1            10    10   100   480   480
    Training set 2            10    20   200   180   480
    Training set 3           480   480   480    20    80
    Training set 4            50    50   180   480   300
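    The imbalanced subsets in Table 2 (and Table 6 below) are drawn from a balanced pool of signals per emitter. A small, hypothetical NumPy helper such as the following could build them; the variable `pool` (class index → array of signals), the function name, and the fixed seed are illustrative assumptions rather than part of the paper.

    import numpy as np

    def make_imbalanced_split(pool, per_class_counts, seed=0):
        """Draw per_class_counts[c] signals for each class c from a balanced pool."""
        rng = np.random.default_rng(seed)
        xs, ys = [], []
        for cls, n in enumerate(per_class_counts):
            idx = rng.choice(len(pool[cls]), size=n, replace=False)
            xs.append(pool[cls][idx])
            ys.append(np.full(n, cls))
        return np.concatenate(xs), np.concatenate(ys)

    # Training set 1 of Table 2: 10, 10, 100, 480, 480 samples for E1–E5.
    # x_train, y_train = make_imbalanced_split(pool, [10, 10, 100, 480, 480])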

    Table 3. Performance evaluation of the loss functions under different SNRs (%)

    Training set            Loss   0 dB   2 dB   4 dB   6 dB   8 dB   10 dB  12 dB  14 dB  16 dB  18 dB  20 dB  22 dB  24 dB
    Standard training set   CE     49.33  51.17  53.36  58.64  65.47  72.44  85.16  90.88  94.64  96.48  99.23  99.76  99.80
                            FL     48.78  50.97  54.29  57.97  64.31  70.84  86.10  91.12  93.77  92.57  98.68  98.56  98.65
                            ICL    48.31  51.26  54.33  56.19  62.99  68.09  85.84  89.55  94.05  95.69  96.02  98.27  99.31
    Training set 1          CE     29.88  35.14  37.32  39.24  39.88  40.04  41.52  42.63  45.68  48.52  74.72  79.87  82.61
                            FL     30.13  33.16  36.08  38.84  40.40  42.32  41.96  43.08  45.12  48.60  74.16  80.77  84.16
                            ICL    30.65  36.47  35.04  38.39  41.26  43.77  44.27  45.46  49.00  54.24  74.84  83.13  85.36
    Training set 2          CE     19.77  24.13  25.80  29.48  30.56  39.72  42.76  54.52  55.20  75.96  79.56  80.15  81.36
                            FL     23.08  25.55  24.16  25.76  33.16  36.24  36.44  55.80  57.24  73.00  80.24  81.66  83.48
                            ICL    24.11  26.67  26.77  28.94  35.29  40.94  42.96  53.97  59.33  77.04  79.64  81.08  84.70
    Training set 3          CE     28.09  32.40  33.57  36.74  42.17  46.38  49.44  50.36  55.56  64.36  73.49  77.76  79.40
                            FL     33.68  33.19  38.45  38.77  42.54  45.29  48.37  51.94  52.79  66.81  74.31  77.56  78.04
                            ICL    32.41  36.97  40.60  40.88  44.28  46.00  49.64  50.96  56.44  65.52  75.88  79.54  80.92
    Training set 4          CE     30.11  31.79  35.88  36.08  43.12  45.00  48.36  60.08  62.03  64.28  70.80  72.61  78.54
                            FL     31.25  33.57  36.40  40.36  42.96  43.84  54.56  62.36  64.40  64.88  74.16  73.77  79.76
                            ICL    32.97  33.22  36.42  41.32  44.04  47.52  56.24  64.76  67.04  68.24  81.60  82.77  83.84

    Table 4. Comparison of recognition accuracy of different schemes (%)

    Algorithm               0 dB   4 dB   8 dB   12 dB  16 dB  20 dB  24 dB
    Proposed method + CE    30.11  35.88  43.12  48.36  62.03  70.80  78.54
    Proposed method + ICL   32.97  36.42  44.04  56.24  67.04  81.60  83.84
    Method 1 + CE           27.53  31.76  37.72  44.80  59.36  64.43  73.36
    Method 1 + ICL          28.12  32.64  39.53  49.08  63.96  72.12  75.24
    Method 2 + CE           26.46  33.18  41.07  46.08  60.47  68.00  77.36
    Method 2 + ICL          29.16  34.56  42.28  51.32  65.23  75.16  79.00
    Method 3                34.82  33.46  40.61  50.92  57.77  67.18  71.49
    Method 4                20.96  25.10  34.64  45.69  57.14  58.06  67.18

    Table 5. Network complexity comparison

    Network model   Space complexity N_O (M)   Average time per iteration t_train (s)   Average recognition time t_test (s)
    IC-SGAN         1.97 + 3.93                90.8                                      3.1
    RFFE-InfoGAN    2.52 + 19.57               207.0                                     4.6
    E3SGAN          4.25 + 4.61                110.5                                     2.8

    Table 6. Training and test set settings for the real data

    Dataset                  E1    E2    E3    E4    E5    E6    E7    E8    E9    E10
    Standard training set    480   480   480   480   480   480   480   480   480   480
    Standard test set        240   240   240   240   240   240   240   240   240   240
    Training set 1            48   480    48   480    48   480    48   480    48   480
    Training set 2           120   480   120   480   120   480   120   480   120   480
    Training set 3           480    48   480    48   480    48   480    48   480    48
    Training set 4           480   120   480   120   480   120   480   120   480   120

    Table 7. Recognition performance evaluation of the three loss functions (%)

    Training set            Loss   K = 5  K = 6  K = 7  K = 8  K = 9  K = 10  Average
    Standard training set   CE     93.52  89.78  87.27  85.34  85.47  86.80   88.03
                            FL     97.39  87.64  88.21  89.66  87.28  85.32   89.25
                            ICL    94.60  88.36  90.46  88.56  86.72  86.38   89.18
    Training set 1          CE     90.16  78.35  76.40  72.38  70.87  62.76   75.15
                            FL     91.68  79.76  79.35  74.61  74.98  65.18   77.59
                            ICL    93.48  84.07  82.37  77.56  72.65  68.92   79.84
    Training set 2          CE     86.88  80.17  78.46  70.02  69.42  71.43   76.06
                            FL     91.54  78.12  81.98  68.33  72.13  72.16   77.37
                            ICL    90.53  83.41  76.29  72.21  74.87  68.59   77.65
    Training set 3          CE     88.57  79.44  74.67  71.62  73.51  66.94   75.79
                            FL     91.05  82.56  76.26  69.50  76.38  69.48   77.53
                            ICL    93.37  84.50  79.18  75.43  78.86  72.86   80.70
    Training set 4          CE     86.72  79.98  83.43  74.34  73.29  65.74   77.25
                            FL     84.07  82.40  77.53  77.07  75.82  68.09   77.49
                            ICL    83.46  84.98  79.25  77.33  79.48  71.38   79.31
Publication history
  • Received: 2022-03-11
  • Revised: 2022-05-05
  • Available online: 2022-05-17
  • Issue date: 2022-08-28
