多角度探测模式下结合Hough变换与SVR的墙后目标定位算法

欧阳方平 曹家璇 丁一鹏

引用本文: 欧阳方平, 曹家璇, 丁一鹏. 多角度探测模式下结合Hough变换与SVR的墙后目标定位算法[J]. 雷达学报(中英文), 2024, 13(4): 838–851. doi: 10.12000/JR23236
Citation: OUYANG Fangping, CAO Jiaxuan, and DING Yipeng. A through-wall target location algorithm combining Hough transform and SVR in multi-view detection mode[J]. Journal of Radars, 2024, 13(4): 838–851. doi: 10.12000/JR23236

多角度探测模式下结合Hough变换与SVR的墙后目标定位算法

DOI: 10.12000/JR23236
基金项目: 湖南省自然科学基金(2022JJ30749),中南大学研究生自主探索创新项目(2023ZZTS0398),国家自然科学基金(52073308),湖南省创新省建设专项基金 (2020RC3004)
    作者简介:

    欧阳方平,教授,博士生导师,主要研究方向为低维量子材料与器件物理、计算凝聚态物理和纳米电子学

    曹家璇,硕士生,主要研究方向为墙体参数估计、时频分析技术和机器学习

    丁一鹏,博士,教授,博士生导师,主要研究方向为雷达信号处理

    通讯作者:

    丁一鹏 dingyipeng@sina.com

  • 责任主编:郭世盛 Corresponding Editor: GUO Shisheng
  • 中图分类号: TN957.52

A Through-wall Target Location Algorithm Combining Hough Transform and SVR in Multi-view Detection Mode

Funds: The Natural Science Foundation of Hunan Province (2022JJ30749), The Fundamental Research Funds for the Central Universities of Central South University (2023ZZTS0398), The National Natural Science Foundation of China (52073308) and the Special Foundation for Hunan Innovation Province Construction (2020RC3004)
  • 摘要: 多普勒穿墙雷达在定位墙后目标时,存在以下两个难点:(1)准确获取频率混叠区域目标瞬时频率;(2)通过获取精确的墙体参数来减小墙体对定位造成的影响。针对以上问题该文提出了一种结合Hough变换和支持向量回归-BP神经网络的目标定位算法。该文首先设计了一种多视角融合穿墙目标探测模型框架,通过获取不同视角下的目标位置来提供辅助估计墙体参数信息;其次,结合差分进化算法和切比雪夫插值多项式提出了一种目标瞬时频率曲线的高精度提取和估计算法;最后,利用估计的墙体参数信息,提出了一种基于BP神经网络的目标运动轨迹补偿算法,抑制了障碍物对目标定位结果的扭曲影响,实现了对墙后目标的精确定位。实验结果表明,相较于传统的短时傅里叶方法,该文所述方法可以准确提取时频混叠区域的目标瞬时频率曲线并减小墙体造成的影响,从而实现墙后多目标的准确定位,整体定位精度提升了约85%。
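
    As an illustration of the extraction step described in the abstract, the sketch below shows how a 4th-order Chebyshev interpolating polynomial can parameterize a candidate instantaneous-frequency curve and how differential evolution can search the node frequencies that accumulate the most spectrogram energy (a Hough-style vote). This is a minimal reconstruction under assumed parameters (STFT window length, a hypothetical 0–5 Hz Doppler search range), not the authors' released implementation.

```python
import numpy as np
from scipy.signal import stft
from scipy.optimize import differential_evolution

def extract_if_curve(x, fs, order=4, f_bounds=(0.0, 5.0)):
    """Hough-style IF extraction: search for Chebyshev-node frequencies whose
    interpolating polynomial accumulates the most time-frequency energy."""
    f, t, Z = stft(x, fs=fs, nperseg=256, noverlap=192)   # assumed window sizes
    P = np.abs(Z) ** 2                                    # spectrogram energy map

    # Chebyshev nodes of the interpolating polynomial over the observation time
    k = np.arange(order + 1)
    nodes_t = 0.5 * (t[0] + t[-1]) + 0.5 * (t[-1] - t[0]) * np.cos(
        (2 * k + 1) * np.pi / (2 * (order + 1)))

    def neg_accumulated_energy(node_freqs):
        # Polynomial through (nodes_t, node_freqs) defines one candidate IF curve
        coeffs = np.polyfit(nodes_t, node_freqs, order)
        f_curve = np.polyval(coeffs, t)
        # Hough-style vote: sum spectrogram energy along the candidate curve
        idx = np.clip(np.searchsorted(f, f_curve), 0, len(f) - 1)
        return -P[idx, np.arange(len(t))].sum()

    bounds = [f_bounds] * (order + 1)                     # assumed Doppler range
    result = differential_evolution(neg_accumulated_energy, bounds,
                                    seed=0, maxiter=80, tol=1e-6)
    coeffs = np.polyfit(nodes_t, result.x, order)
    return t, np.polyval(coeffs, t)                       # estimated IF over time
```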

     

  • Today, the research and application of Artificial Intelligence (AI) have become a major area of scientific and technological development. Developing AI is a major strategy for enhancing national core competitiveness and maintaining national security.

    The Massachusetts Institute of Technology (MIT) had not established a new college for decades. In October 2018, however, MIT announced a new college, the Schwarzman College of Computing[1], together with the construction of the Stata Science Center (see Fig. 1) for computer science, AI, data science, and related intersecting fields. Its purpose is to harness the powerful role of AI and big data computing in the science and technology of the future. As shown in Fig. 2, the SCR-615B radar built by MIT during World War II is on display in the Stata Science Center lobby. The MIT president also published an article in this year’s MIT newsletter[2] emphasizing the competition and challenges brought by AI.

    Figure  1.  MIT Stata Science Center
    Figure  2.  SCR-615B radar displayed in the hall

    In 2016, the United States (U.S.) White House released three important reports, Preparing for the Future of Artificial Intelligence, the National Artificial Intelligence Research and Development Strategic Plan, and Artificial Intelligence, Automation, and the Economy, which promoted the establishment of a Machine Learning and Artificial Intelligence (MLAI) subcommittee to actively plan for the future development of AI[3]. In January 2018, the U.S. Department of Defense released a new version of the National Defense Strategy report, stating that the development of advanced computing, big data analysis, and robotics are important factors affecting national security. In June 2018, the U.S. Defense Advanced Research Projects Agency (DARPA) discussed for the first time the preliminary details of the U.S. Electronics Resurgence Initiative, whose implementation will accelerate the development of AI hardware. In September of the same year, DARPA announced its commitment to building systems based on common sense, contextual awareness, and higher energy efficiency[4]. In February 2019, U.S. President Trump signed an executive order, Maintaining American Leadership in Artificial Intelligence, which aims to maintain U.S. global leadership in AI. On February 12, 2019, the U.S. Department of Defense website published the Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity, which clarified the U.S. military’s strategic initiatives and key areas for deploying AI[5]. The U.S. Department of Defense plans to use DARPA’s Next Generation Artificial Intelligence (AI Next) and Artificial Intelligence Exploration (AIE) programs as benchmarks for exploring and applying AI technologies to enhance military strength. The AI Next program, announced in September 2018, builds on the two generations of AI technology led by DARPA over the past 60 years and emphasizes the environmental adaptability of AI. Its main aims are to explore new technologies that automate key Department of Defense business processes, improve the robustness and reliability of AI systems, enhance the security and flexibility of machine learning and AI technologies, reduce power consumption and inefficiencies in data collection and performance, and create the next generation of AI algorithms and applications[6]. The AIE program will focus on "third wave" AI applications and theories and aims to adapt machines to changing conditions; it will also streamline proposal, contract, and funding processes. The goal is to accelerate the research and development of AI platforms to help the U.S. maintain its technical advantage in the field of AI.

    In March 2017, France released its Artificial Intelligence Strategy, built a new AI center, and developed data storage and processing platforms, automatic learning technology platforms, and network security platforms[7]. Germany’s brain science strategy focuses on robotics and digitization; in 2012, the Max Planck Society in Germany began cooperating with the U.S. on computational neuroscience[8]. Japan also attaches great importance to the development of AI technology. In 2017, the Japanese government issued the Next Generation Artificial Intelligence Promotion Strategy to clarify its focus on AI development and to promote the extension of AI technology toward strong AI and super AI levels[9].

    China released the New Generation Artificial Intelligence Development Plan in July 2017 and formulated a three-step goal for the national AI strategy. By 2030, China’s AI theory, technology, and applications are to reach world-leading levels overall, making China the world’s major AI innovation center[10]. Currently, China is mobilizing its scientific research community strongly for the research and application of AI. For example, in August 2017, the National Natural Science Foundation of China (NSFC) released the Guidelines for Emergency Management of Basic Research in Artificial Intelligence, which outlines plans to fund 25 research directions in three foundational aspects of the AI frontier, including intelligent autonomous moving bodies, intelligent decision-making theory, and key technologies of complex manufacturing processes[11]. We believe that, driven by innovation, China will achieve significant development in the research, application, and industrial fields of AI and AI technology, securing an important position in the global AI landscape.

    In this paper, we discuss the development of AI technology in the field of space remote sensing and target recognition. In 2017, we hosted the Institute of Electrical and Electronics Engineers (IEEE) Remote Sensing Intelligent Processing Conference[12] and published several papers in IEEE Transactions on Geoscience and Remote Sensing and IEEE Geoscience and Remote Sensing Letters[13-16]. We have also published several discussions in Science & Technology Review[17,18], highlighting concepts regarding physical intelligence and microwave vision. Here we focus on Synthetic Aperture Radar (SAR) target monitoring and information perception and discuss research on AI information technology against the physical background of the interaction between electromagnetic waves and targets, i.e., the use of this physical intelligence to develop microwave vision that can perceive target information in the electromagnetic spectrum that cannot be recognized by the human eye.

    In the 1950s, SAR images were only single-mode RCS grayscale images used for monitoring military targets. Later, in the 1970s, the development and application of this technology began to make great strides in civilian fields of study, such as ocean wind fields, terrestrial hydrology, vegetation, snow, precipitation, drought, the monitoring and evaluation of natural disasters, and the identification of surface changes, to name a few. Various applications have various needs, and the theoretical and technical issues associated with their different scientific connotations have strongly promoted the comprehensive development of SAR technology. Since the beginning of the 21st century, SAR satellite technologies have developed rapidly, realizing full polarization, interferometry, and high resolution to produce multisource, multimode, fully polarimetric, high-resolution SAR (hereinafter referred to as multimode SAR) information technology (see Fig. 3).

    Figure  3.  Overview of SAR development in various countries

    With the improvement of spatial resolution to the meter and decimeter level, the perception of multimode SAR remote sensing information has produced a field of science and technology of great significance for civilian and national defense technology. In the 21st century, SAR has promoted the research and application of Automatic Target Recognition (ATR), which has progressed from detecting the presence or absence of targets, through one- and two-dimensional target mapping, to three-dimensional target feature recognition and the identification of multi-dimensional target morphology.

    However, SAR information perception and target feature inversion and reconstruction are not accomplished by human vision. The interaction between electromagnetic waves and complex targets and their image-scattering mechanisms provide the physical basis for SAR imaging. We have studied the theoretical parameter modeling, numerical simulation, and physical and numerical characteristics in the frequency, spatial, time, and polarization domains, and have developed polarized SAR parametric simulation software, techniques for scattering and imaging calculations, and target classification, recognition, and feature reconstruction[19].

    Multimode SAR remote sensing produces many series of images with multiple temporal and physical characteristics, together with rich, complex data of many types. Driven by remote sensing big data, remote sensing application technology has progressed in a broad range of areas. However, most of this progress is limited to traditional data statistical analysis and image processing technologies, which cannot meet the needs of multimode SAR technology and applications. In particular, it is difficult to realize the automatic recognition of various types of targets in the sky, on land, and at sea, as well as the perception and inversion reconstruction of fine-scale multi-dimensional information.

    In recent years, AI technology has attracted considerable attention from science and industry. Based on the recognition of local structures, features, and whole targets along the eye-retina-brain V1–V4 pathway, simple perception rules were established to obtain visual perception ability. Using methods from computational neuroscience and driven by the fitting of big data, multi-layer convolutional networks are constructed from local structures and feature-vector spaces to perform large-scale network computations that perceive the underlying information; this is the basic idea of AI and deep learning.

    Similarly, we must determine how to develop a new, brain-like intelligent function suitable for perceiving SAR information from electromagnetic wave scattering images, which differs from computer vision processing that is usually based on optical vision. To do so, it is necessary to construct an intelligent information technology that can perceive SAR information across the microwave spectrum. We call this electromagnetic AI, a new scientific technology: moving from the human brain's optical vision to a humanoid-brain processing of electromagnetic waves, that is, microwave vision, driven by remote sensing big data under the guidance of the physical mechanisms of multi-source, multimode, fully polarimetric, high-resolution SAR.

    Fig. 4 and Fig. 5 illustrate the physical basis of multimode SAR as a forward problem of electromagnetic-wave-scattering modeling and simulation and an inverse problem of multi-dimensional information inversion and reconstruction. AI deep learning based on brain-like neural computing algorithms, driven by various types of big data and constrained by the physical background of multimode SAR remote sensing, is then used to perceive information for applications in various fields.

    Figure  4.  Research and application of multisource and multimode SAR remote sensing information perception for space-ground-sea targets
    Figure  5.  Physical intelligence to application of remotely sensed big data

    Based on the SAR image-scattering mechanism, we developed a brain-like intelligent function for processing this type of big data to perceive SAR information. This is like seeing microwaves, i.e., microwave vision. Eventually, this technology will be able to perform automatic interpretation online and produce easy-to-accept visual representations and visual semantics. Known as microwave consciousness, this capability plays an important role in the technical methods of visual semantics, reasoning, decision-making, interactive detection, identification, interference, confrontation, and attack in SAR scattered radiation fields.

    In Fig. 6, we propose combining the forward and inverse theory of electromagnetic wave scattering with brain-like AI research to generate new intelligent algorithms. This cross-disciplinary electromagnetic AI (EM AI) has important applications in Earth remote sensing, ATR, electronic countermeasures, and satellite navigation and communications. This proposal therefore represents remote sensing, communication, and navigation technology in electromagnetic space.

    Figure  6.  Artificial intelligence of space electromagnetics

    We have recently edited a book series titled Spaceborne Microwave Remote Sensing[20], in which 14 monographs will be published by Science Press over the next two years; eight of them deal with the acquisition of SAR information (Fig. 7). These include the monograph Intelligent Interpretation of Radar Image Information, written by our laboratory team[21]. Based on the background and research status of SAR image interpretation, this monograph summarizes our laboratory’s latest research progress using deep learning intelligent technology in SAR ATR and polarimetric SAR feature classification, and provides sample data and program code for the relevant chapters.

    Figure  7.  Spaceborne microwave remote sensing research and application series

    Some of the research conducted at our laboratory on intelligent information perception can be summarized as follows:

    • We proposed an intelligent recognition algorithm for SAR targets[15]. The fully convolutional network we proposed reduces the number of independent parameters by removing fully connected layers (a minimal sketch of this all-convolutional design appears after this list). It achieved a classification accuracy of 99% on the 10-class SAR target classification dataset MSTAR[22]. In addition, an end-to-end target detection, discrimination, and recognition method for SAR images was implemented. Furthermore, we proposed a fast detection algorithm for surface ship targets, established a SAR image ship target dataset, and performed a ship target classification experiment based on transfer learning.

    • We proposed a deep-learning training algorithm in the complex domain[16], whereby a Convolutional Neural Network (CNN) for polarimetric SAR surface classification can be trained directly on the complex-valued multi-dimensional images of the polarimetric coherence matrix (the complex-valued convolution it relies on is sketched after this list). This algorithm achieved state-of-the-art accuracy of 95% for a 15-class task on the Flevoland benchmark dataset[22].

    • We proposed a CNN using few samples for target ATR, which has good network generalization ability. We also studied the target recognition and classification ability of the CNN feature-vector distribution when no training samples of the target are available[14]. Zero-sample learning is important for SAR ATR because training samples are not always available for all targets and scenarios. In that work, we proposed a new generation-based deep neural network framework, the key component of which is a generative deconvolutional neural network, called the generator, that automatically constructs a continuous SAR target feature space composed of direction-invariant features and direction angles while learning the target's hierarchical representation. This framework is then used as a reference for designing and initializing an interpreter CNN that is antisymmetric to the generator network. The interpreter network is then trained to map any input SAR image to the target feature space.

    • We proposed a deep neural network structure for CNN processing to despeckle SAR-image noise[23]. This process uses a CNN to extract image features and reconstruct a discrete RCS Probability Density Function (PDF). The network is trained by a mixed loss function that measures the distance between the actual and estimated SAR image intensity PDFs, which is obtained by the convolution between the reconstructed RCS PDF and the prior speckled PDF. The network can be trained using either simulated or real SAR images. Experimental results on both simulated SAR images and real NASA/JPL AIRSAR images confirm the effectiveness of the proposed noise-despeckling deep neural network.

    • Lastly, we proposed a colorization CNN method that converts single-polarization SAR images into fully polarimetric SAR images for scene analysis and processing[24]. The proposed deep neural network has two parts: a feature extraction network and a feature translation network that matches spatial and polarization features. Using this method, the polarization covariance matrix of each pixel can be reconstructed. The resulting fully polarimetric SAR image is very close to the real fully polarimetric SAR image, not only visually but also in real PolSAR applications.
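
    Relating to the first bullet above, the following minimal PyTorch sketch illustrates the all-convolutional idea: fully connected layers are replaced by a 1x1 convolution plus global average pooling, which removes most of the free parameters. Layer sizes and input dimensions are hypothetical, not those of the network in Ref. [15].

```python
import torch
import torch.nn as nn

class AllConvSARClassifier(nn.Module):
    """Fully convolutional SAR target classifier (illustrative layer sizes)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_classes, kernel_size=1),  # 1x1 conv replaces the dense layer
        )

    def forward(self, x):                # x: (N, 1, H, W) SAR amplitude chips
        logits = self.features(x)        # (N, num_classes, H', W')
        return logits.mean(dim=(2, 3))   # global average pooling -> (N, num_classes)


# Usage example on a random batch of 128x128 chips
model = AllConvSARClassifier(num_classes=10)
scores = model(torch.randn(4, 1, 128, 128))   # -> shape (4, 10)
```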
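
    For the complex-domain training mentioned in the second bullet, one common construction (again a sketch, not necessarily the exact layer of Ref. [16]) implements a complex-valued convolution with two real convolutions, so that polarimetric coherence-matrix channels can be processed without discarding phase:

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution via two real convolutions:
    (a + jb) * (w + jv) = (a*w - b*v) + j(a*v + b*w)."""
    def __init__(self, in_ch, out_ch, kernel_size, **kwargs):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)

    def forward(self, x_r, x_i):
        # x_r, x_i: real and imaginary parts of the complex feature maps
        y_r = self.conv_r(x_r) - self.conv_i(x_i)
        y_i = self.conv_r(x_i) + self.conv_i(x_r)
        return y_r, y_i


# Example: 6 complex channels of a polarimetric coherence matrix, 3x3 kernels
layer = ComplexConv2d(6, 16, 3, padding=1)
re, im = layer(torch.randn(2, 6, 64, 64), torch.randn(2, 6, 64, 64))
```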

    In addition, part of our laboratory's work is SAR-AI-ATR identification of ground vehicles, airport aircraft, and sea-surface ships, based on domestic and foreign SAR data, including data from China's GF-3 SAR. We also proposed a CNN method for the inversion of forest tree heights from interferometric SAR (InSAR), and a method for the reciprocal generation of optical images and microwave radar images by the contrast training of optical and microwave images. The above work can be found in related monographs[21].

    Data is not synonymous with information. Big data is just material and a driver, and different data have different scientific connotations. Therefore, simple and direct statistical analysis of big data cannot realize the perception of connotative information, especially for the multi-dimensional, vectorized, complex imaging data of multimode microwave SAR, which is difficult for the human eye to perceive intuitively. In this paper, we proposed the use of AI driven by big data under the guidance of physics to retrieve information and develop new AI models and algorithms that meet the needs of SAR remote sensing physics and applications. Interdisciplinary AI research is very important. The realization of new EM AI technology will drive the development of multiple industries and applications.

    At present, research on multimode remote sensing intelligent information and target recognition is still in the exploratory stage, and further research is needed to continue to develop new theories, methods, and applications of microwave vision.

  • 图  1  传统多普勒雷达探测模型

    Figure  1.  Traditional Doppler radar target detection model

    图  2  多普勒穿墙雷达目标定位原理图

    Figure  2.  Doppler through-wall radar localization schematic

    图  3  多视角融合穿墙目标探测模型

    Figure  3.  Multi-view fusion through-wall target detection model

    图  4  BPNN结构图

    Figure  4.  BPNN structure diagram

    图  5  基于切比雪夫插值多项式的Hough变换瞬时频率估计算法与基于SVR-BPNN的墙体厚度估计与运动轨迹补偿算法流程图

    Figure  5.  Flowchart of the Hough transform instantaneous frequency estimation algorithm based on the Chebyshev interpolating polynomial and the SVR-BPNN based wall thickness estimation and motion trajectory compensation algorithm

    图  6  训练与测试所用的目标运动轨迹

    Figure  6.  The target motion trajectories used for training and testing

    图  7  实验设备与场景

    Figure  7.  The experimental equipment and scenarios

    图  8  模拟实验中的墙体厚度估计结果

    Figure  8.  The estimated results of wall thickness in the simulated experiments

    图  9  无墙场景下的目标频率估计与定位结果

    Figure  9.  Target frequency estimation and localization results in no wall scenes

    图  10  墙后目标定位结果

    Figure  10.  The localization results of targets behind the wall

    表  1  雷达系统参数设置

    Table  1.   Radar system parameter settings

    参数 数值
    载波频率 fc1, fc2 (GHz) 2.40, 2.39
    最大/最小发射功率Pmax, Pmin (dBm) 30, 15
    天线增益G (dBi) 3.5
    天线带宽B (MHz) 40
    天线间隔d (m) 0.06
    采样频率(Hz) 200
    最大方位角θm (°) 75

    表  2  STFT、二次贝塞尔模型、四阶切比雪夫插值多项式模型误差对比(无墙双目标场景)

    Table  2.   Error comparison of the STFT, quadratic Bezier model, and 4th-order Chebyshev interpolating polynomial model (dual-target scene without a wall)

    算法 | 目标1频率(Hz) | 目标1定位(m) | 目标2频率(Hz) | 目标2定位(m)
    STFT | 0.17 | 0.16 | 0.16 | 0.31
    基于二次贝塞尔模型的Hough变换 | 0.07 | 0.13 | 0.10 | 0.57
    基于四阶切比雪夫插值多项式的Hough变换 | 0.04 | 0.07 | 0.07 | 0.09

    表  3  STFT、二次贝塞尔模型、轨迹相交法、四阶切比雪夫插值多项式模型误差对比(墙后双目标场景)

    Table  3.   Error comparison of the STFT, quadratic Bezier model, trajectory intersection method, and 4th-order Chebyshev interpolating polynomial model (dual-target scene behind a wall)

    算法 | 砖墙场景 目标1定位(m) | 砖墙场景 目标2定位(m) | 混凝土墙场景 目标1定位(m) | 混凝土墙场景 目标2定位(m)
    STFT | 0.59 | 0.33 | 0.70 | 0.77
    基于二次贝塞尔模型的Hough变换 | 0.68 | 0.66 | 0.81 | 0.79
    轨迹相交法 | 0.27 | 0.22 | 0.29 | 0.25
    基于四阶切比雪夫插值多项式的Hough变换 | 0.10 | 0.14 | 0.09 | 0.13
  • [1] 刘振, 魏玺章, 黎湘. 一种新的随机PRI脉冲多普勒雷达无模糊MTD算法[J]. 雷达学报, 2012, 1(1): 28–35. doi: 10.3724/SP.J.1300.2012.10063.

    LIU Zhen, WEI Xizhang, and LI Xiang. Novel method of unambiguous moving target detection in pulse-Doppler radar with random pulse repetition interval[J]. Journal of Radars, 2012, 1(1): 28–35. doi: 10.3724/SP.J.1300.2012.10063.
    [2] 胡程, 廖鑫, 向寅, 等. 一种生命探测雷达微多普勒测量灵敏度分析新方法[J]. 雷达学报, 2016, 5(5): 455–461. doi: 10.12000/JR16090.

    HU Cheng, LIAO Xin, XIANG Yin, et al. Novel analytic method for determining micro-Doppler measurement sensitivity in life-detection radar[J]. Journal of Radars, 2016, 5(5): 455–461. doi: 10.12000/JR16090.
    [3] PENG Yiqun, DING Yipeng, ZHANG Jiawei, et al. Target trajectory estimation algorithm based on time-frequency enhancement[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 8500807. doi: 10.1109/TIM.2022.3227997.
    [4] DING Minhao, DING Yipeng, PENG Yiqun, et al. CNN-based time-frequency image enhancement algorithm for target tracking using Doppler through-wall radar[J]. IEEE Geoscience and Remote Sensing Letters, 2023, 20: 3505305. doi: 10.1109/LGRS.2023.3282700.
    [5] WANG Genyuan and AMIN M G. Imaging through unknown walls using different standoff distances[J]. IEEE Transactions on Signal Processing, 2006, 54(10): 4015–4025. doi: 10.1109/TSP.2006.879325.
    [6] 丁一鹏, 厍彦龙. 穿墙雷达人体动作识别技术的研究现状与展望[J]. 电子与信息学报, 2022, 44(4): 1156–1175. doi: 10.11999/JEIT211051.

    DING Yipeng and SHE Yanlong. Research status and prospect of human movement recognition technique using through-wall radar[J]. Journal of Electronics & Information Technology, 2022, 44(4): 1156–1175. doi: 10.11999/JEIT211051.
    [7] ABDOUSH Y, POJANI G, and CORAZZA G E. Adaptive instantaneous frequency estimation of multicomponent signals based on linear time-frequency transforms[J]. IEEE Transactions on Signal Processing, 2019, 67(12): 3100–3112. doi: 10.1109/TSP.2019.2912132.
    [8] HUANG N E, SHEN Zheng, LONG S R, et al. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis[J]. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 1998, 454(1971): 903–995. doi: 10.1098/rspa.1998.0193.
    [9] LI Po and ZHANG Qinghai. An improved Viterbi algorithm for IF extraction of multicomponent signals[J]. Signal, Image and Video Processing, 2018, 12(1): 171–179. doi: 10.1007/s11760-017-1143-2.
    [10] 金添, 宋勇平, 崔国龙, 等. 低频电磁波建筑物内部结构透视技术研究进展[J]. 雷达学报, 2021, 10(3): 342–359. doi: 10.12000/JR20119.

    JIN Tian, SONG Yongping, CUI Guolong, et al. Advances on penetrating imaging of building layout technique using low frequency radio waves[J]. Journal of Radars, 2021, 10(3): 342–359. doi: 10.12000/JR20119.
    [11] JIN Tian, CHEN Bo, and ZHOU Zhimin. Image-domain estimation of wall parameters for autofocusing of through-the-wall SAR imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2013, 51(3): 1836–1843. doi: 10.1109/TGRS.2012.2206395.
    [12] PROTIVA P, MRKVICA J, and MACHAC J. Estimation of wall parameters from time-delay-only through-wall radar measurements[J]. IEEE Transactions on Antennas and Propagation, 2011, 59(11): 4268–4278. doi: 10.1109/TAP.2011.2164206.
    [13] WANG Genyuan, AMIN M G, and ZHANG Yimin. New approach for target locations in the presence of wall ambiguities[J]. IEEE Transactions on Aerospace and Electronic Systems, 2006, 42(1): 301–315. doi: 10.1109/TAES.2006.1603424.
    [14] ZHANG Huamei, ZHANG Yerong, WANG Fangfang, et al. Application of support vector machines for estimating wall parameters in through-wall radar imaging[J]. International Journal of Antennas and Propagation, 2015, 2015: 456123. doi: 10.1155/2015/456123.
    [15] DING Yipeng, SUN Yinhua, HUANG Guowei, et al. Human target localization using Doppler through-wall radar based on micro-Doppler frequency estimation[J]. IEEE Sensors Journal, 2020, 20(15): 8778–8788. doi: 10.1109/JSEN.2020.2983104.
    [16] DING Yipeng, SUN Yinhua, YU Xiali, et al. Bezier-based Hough transforms for Doppler localization of human targets[J]. IEEE Antennas and Wireless Propagation Letters, 2020, 19(1): 173–177. doi: 10.1109/lawp.2019.2956842.
    [17] CHEN Gang, CHEN Jin, DONG Guangming, et al. An adaptive non-parametric short-time Fourier transform: Application to echolocation[J]. Applied Acoustics, 2015, 87: 131–141. doi: 10.1016/j.apacoust.2014.06.018.
    [18] DING Yipeng, YU Xiali, LEI Chengxi, et al. A novel real-time human heart rate estimation method for noncontact vital sign radar detection[J]. IEEE Access, 2020, 8: 88689–88699. doi: 10.1109/ACCESS.2020.2993503.
    [19] LIN Xiaoyi, DING Yipeng, XU Xuemei, et al. A multi-target detection algorithm using high-order differential equation[J]. IEEE Sensors Journal, 2019, 19(13): 5062–5069. doi: 10.1109/JSEN.2019.2901923.
    [20] ZHOU Can, YU Wentao, HUANG Keke, et al. A new model transfer strategy among spectrometers based on SVR parameter calibrating[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1010413. doi: 10.1109/TIM.2021.3119129.
    [21] XIE Yaqin, WANG Kailiang, and HUANG Hai. BPNN based indoor fingerprinting localization algorithm against environmental fluctuations[J]. IEEE Sensors Journal, 2022, 22(12): 12002–12016. doi: 10.1109/JSEN.2022.3172860.
    [22] BOULIC R, THALMANN N M, and THALMANN D. A global human walking model with real-time kinematic personification[J]. The Visual Computer, 1990, 6(6): 344–358. doi: 10.1007/BF01901021.
出版历程
  • 收稿日期:  2023-11-30
  • 修回日期:  2024-01-21
  • 网络出版日期:  2024-01-31
  • 刊出日期:  2024-08-28
