Accepted Article Preview
Accepted articles have been peer-reviewed and will be published once volume and issue numbers are finalized. The articles are citable by DOI (Digital Object Identifier).
The joint optimization of radar transmit power and dwell time for asynchronous multi-target tracking in heterogeneous multiple radar networks with imperfect detection is investigated. First, all asynchronous measurements from the different radar nodes in each fusion sampling interval are fused into a composite measurement, and the analytical Bayesian Cramér-Rao Lower Bound (BCRLB) of the asynchronous target tracking error under imperfect detection is derived as a function of parameters such as radar node selection, transmit power, and dwell time; this bound is used as the measure of asynchronous target tracking accuracy. On this basis, a joint optimization model of transmit power and dwell time for asynchronous multi-target tracking in heterogeneous multiple radar networks with imperfect detection is established, with the objective of minimizing the asynchronous multi-target tracking error subject to the given system transmit resource limitations. Parameters such as radar node selection, transmit power, and dwell time in the different radar networks are designed adaptively and optimally so as to improve the asynchronous multi-target tracking accuracy of the heterogeneous multiple radar network system. Finally, a four-step decomposition algorithm combining the Sequential Quadratic Programming (SQP) algorithm and the cyclic minimization method is used to solve the optimization problem. Simulation results demonstrate that the proposed method achieves higher asynchronous multi-target tracking accuracy in heterogeneous multiple radar networks than existing algorithms.
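As a rough, self-contained illustration of the solving step described above, the sketch below minimizes a hypothetical BCRLB-like surrogate objective over per-target powers and dwell times with SciPy's SLSQP solver (an SQP implementation). The surrogate c_q/(p_q·t_q), the constants, and the budgets are assumptions for illustration only, not the paper's derived bound.

```python
# Illustrative sketch only: a toy stand-in for BCRLB-based joint
# power/dwell-time optimization, solved with SQP (SciPy's SLSQP).
import numpy as np
from scipy.optimize import minimize

Q = 3                                  # number of targets (assumed)
c = np.array([1.0, 2.0, 1.5])          # hypothetical BCRLB scale factors
P_total, T_total = 1.0, 1.0            # normalized power / dwell budgets

def tracking_error(x):
    p, t = x[:Q], x[Q:]                # transmit powers and dwell times
    return np.sum(c / (p * t))         # surrogate of the tracking-error bound

cons = [{"type": "eq", "fun": lambda x: np.sum(x[:Q]) - P_total},
        {"type": "eq", "fun": lambda x: np.sum(x[Q:]) - T_total}]
bounds = [(0.05, 0.9)] * (2 * Q)       # keep each resource strictly positive

x0 = np.full(2 * Q, 1.0 / Q)           # start from equal allocation
res = minimize(tracking_error, x0, method="SLSQP",
               bounds=bounds, constraints=cons)
print(res.x[:Q], res.x[Q:])            # optimized powers and dwell times
```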
Super-resolution Direction of Arrival (DOA) estimation is a critical problem for vehicle-borne MMW radars that must be solved to realize accurate target positioning and tracking. Considering the conditions common to vehicle-borne scenarios, namely limited array aperture, low snapshot count, low signal-to-noise ratio, and coherent sources, a super-resolution DOA estimation method for moving targets with an MMW radar based on Range-Doppler Atomic Norm Minimization (RD-ANM) is proposed herein. First, an array receiving signal in the range-Doppler domain is constructed from the radar echo of the moving target. Then, a compensation vector for the Doppler coupling phase of the moving target is designed to reduce the influence of target motion on DOA estimation. Finally, a multitarget super-resolution DOA estimation method based on the atomic norm framework is proposed. Compared with existing DOA estimation algorithms, the proposed algorithm achieves higher angular resolution and estimation accuracy under low signal-to-noise ratio and single-snapshot processing conditions, as well as robust performance on coherent sources without sacrificing array aperture. The effectiveness of the proposed algorithm is demonstrated via theoretical analyses, numerical simulations, and experiments.
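A minimal sketch of the Doppler-coupling compensation idea follows (not the paper's exact RD-ANM pipeline): after a Doppler FFT produces range-Doppler domain data, the Doppler bin of a detected target gives the motion-induced phase increment accrued by sequentially sampled (e.g., TDM) channels, and a compensation vector removes this coupling before single-snapshot DOA estimation. All parameters below are illustrative assumptions.

```python
import numpy as np

n_ch, n_chirp = 8, 64                  # virtual channels, chirps per frame
Tc = 50e-6                             # channel sampling interval (assumed)

echo = (np.random.randn(n_ch, n_chirp)
        + 1j * np.random.randn(n_ch, n_chirp))          # one range bin
rd = np.fft.fftshift(np.fft.fft(echo, axis=1), axes=1)  # Doppler FFT

k_dop = np.argmax(np.abs(rd).sum(axis=0))               # strongest Doppler bin
f_d = (k_dop - n_chirp // 2) / (n_chirp * Tc)           # Doppler frequency

# Each successive channel is sampled Tc later, so it accrues an extra
# phase 2*pi*f_d*Tc; the compensation vector cancels that coupling term.
comp = np.exp(-1j * 2 * np.pi * f_d * Tc * np.arange(n_ch))
snapshot = rd[:, k_dop] * comp          # decoupled snapshot fed to ANM DOA
```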
Radar echo modeling based on dynamics and kinematics serves as the theoretical basis for micro-Doppler characteristic analysis and projectile parameter extraction. First, the initial disturbance of a projectile in the straight-line ballistic segment is analyzed. Based on the dynamic equation of the projectile, an angular motion model characterized by two circular motion modes is established, and the motion definitions of projectile spin, nutation, and precession are explained. Subsequently, the parameterized characterization of the micro-Doppler signal produced by the angular motion of the projectile is derived, and the mapping relationship between the angular motion and the radar echo is obtained at the signal level. Taking a high-speed spin projectile and a low-speed spin tail projectile as examples, with the angular motion of both targets affected by the initial disturbance, the radar echo signal models of the two targets are simulated and time-frequency analysis is carried out. The validity of the theoretical analysis and the model is verified by comparing the simulation results with measured projectile data. The micro-Doppler effect theory of projectiles is thus enriched and verified through theoretical analysis, simulation modeling, and experimental verification. This study provides theoretical and technical support for the identification and analysis of projectile motion characteristics.
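As a toy illustration in the spirit of this modeling chain, the sketch below simulates a single scatterer on a spinning, precessing body, whose radial micro-motion sinusoidally modulates the echo phase, and inspects the micro-Doppler signature with a short-time Fourier transform. All motion parameters are illustrative, not the paper's values.

```python
import numpy as np
from scipy.signal import stft

fs, T = 10e3, 1.0                      # sample rate, observation time
t = np.arange(0, T, 1 / fs)
lam = 0.03                             # radar wavelength in m (assumed)
r0, f_spin, f_prec = 0.05, 20.0, 4.0   # scatterer radius, spin/precession Hz

# Radial micro-motion: spin plus a slower precession (cone) component.
r_t = r0 * (np.sin(2 * np.pi * f_spin * t)
            + 0.3 * np.sin(2 * np.pi * f_prec * t))
echo = np.exp(1j * 4 * np.pi * r_t / lam)   # phase history of the scatterer

f, tau, Z = stft(echo, fs=fs, nperseg=256, return_onesided=False)
tf_map = np.abs(Z)                      # time-frequency micro-Doppler signature
```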
This study aims to address the unreasonable assignment of positive and negative samples and the poor localization quality encountered in ship detection in complex scenes. Therefore, a Synthetic Aperture Radar (SAR) ship detection network (A3-IOUS-Net) based on adaptive anchor assignment and Intersection over Union (IOU) supervision in complex scenes is proposed. First, an adaptive anchor assignment mechanism is proposed, where a probability distribution model is established to adaptively assign anchors as positive and negative samples, enhancing the network's ability to learn ship samples in complex scenes. Then, an IOU supervision mechanism is proposed, which adds an IOU prediction branch to the prediction head to supervise the localization quality of detection boxes, allowing the network to accurately locate SAR ship targets in complex scenes. Furthermore, a coordinate attention module is introduced into the prediction branch to suppress background clutter interference and improve SAR ship detection accuracy. Experimental results on the open SAR Ship Detection Dataset (SSDD) show that the Average Precision (AP) of A3-IOUS-Net in complex scenes is 82.04%, superior to the other 15 comparison models.
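A hedged sketch of the assignment idea follows: score each anchor by IoU against a ground-truth box, fit a simple one-dimensional Gaussian model to the scores, and split positives and negatives adaptively instead of using a fixed IoU cut. The scoring and threshold rule here are simplified assumptions; the paper's probability model and its IoU prediction branch are richer.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def adaptive_assign(anchors, gt_box):
    scores = np.array([iou(a, gt_box) for a in anchors])
    cand = scores[scores > 0]
    mu, sigma = cand.mean(), cand.std() + 1e-9   # Gaussian model of scores
    thr = mu + sigma                              # adaptive split point
    return scores >= thr                          # True = positive sample
```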
Feature-based detection methods are often employed to address the challenges related to small-target detection in sea clutter. These methods determine the presence or absence of a target based on whether the feature value falls within a certain judgment region. However, such methods often overlook the temporal information between features. In fact, the temporal correlation between historical and current frame data can provide valuable a priori information, thereby enabling the calculation of the feature value of the current frame. To this end, this paper proposes a novel method for time-series modeling and prediction of radar echoes using an Auto-Regressive (AR) model in the feature domain, leveraging a priori information from historical frame features. To verify the feasibility of AR modeling and prediction of feature sequences, the AR model was first employed in the modeling and 1-step prediction analysis of Average Amplitude (AA), Relative Doppler Peak Height (RDPH), and Frequency Peak-to-Average Ratio (FPAR) feature sequences. Next, a technique for extracting feature values by utilizing the temporal information of historical frame features as a priori information was proposed. Based on this approach, a small-target detection method predicated on three-feature prediction, which can effectively utilize the temporal information of historical frame features for AA, RDPH, and FPAR, was proposed. Finally, the validity of the proposed method was verified using a measured data set.
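The following is a minimal sketch of AR(p) modeling of a feature sequence (e.g., the AA feature) and its 1-step prediction via least squares. The mock feature series and the fixed order p = 4 are assumptions; a production implementation would select the order and validate the residuals.

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares AR(p) coefficients for series x."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def ar_predict(x, a):
    """1-step prediction from the last p samples."""
    p = len(a)
    return float(np.dot(a, x[-1:-p - 1:-1]))

aa_hist = np.cumsum(np.random.randn(200)) * 0.1 + 5.0   # mock AA sequence
coef = ar_fit(aa_hist, p=4)
aa_next = ar_predict(aa_hist, coef)    # predicted feature of the next frame
```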
Vehicle targets in urban scenes are randomly distributed and easily disturbed by environmental factors during detection. Given these issues, this paper proposes a detection method that utilizes multi-aspect Synthetic Aperture Radar (SAR) images for stationary vehicle target extraction. In the feature extraction stage, a novel feature extraction method called the multiscale rotational Gabor Odd Filter-based Ratio Operator (MR-GOFRO) is designed for vehicle targets in multi-aspect SAR images, in which the original GOFRO features are improved in four respects: filter form, feature scale, feature direction, and feature level. The improvement allows MR-GOFRO to adapt to possible variations in target direction, scale, morphology, and so on. In the image fusion stage, a Weighted Non-negative Matrix Factorization (W-NMF) method is developed to adjust the feature weights from various images according to feature quality, reducing the degradation of the fused features caused by mutual interference between different aspects. The proposed method is verified on various airborne multi-aspect image datasets. The experimental results reveal that the proposed feature extraction and feature fusion methods enhance detection accuracy by an average of 3.69% and 4.67%, respectively, compared with similar methods.
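Below is a sketch of weighted NMF in the spirit of W-NMF: a per-entry weight matrix (here a hypothetical per-aspect feature-quality score) downweights unreliable entries in the standard multiplicative updates. The weighting scheme and dimensions are assumptions; the paper's exact formulation is not reproduced.

```python
import numpy as np

def weighted_nmf(V, W, rank, n_iter=200, eps=1e-9):
    """Factorize V ~= B @ H under entrywise non-negative weights W."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    B = rng.random((m, rank)); H = rng.random((rank, n))
    for _ in range(n_iter):
        BH = B @ H
        B *= ((W * V) @ H.T) / ((W * BH) @ H.T + eps)
        BH = B @ H
        H *= (B.T @ (W * V)) / (B.T @ (W * BH) + eps)
    return B, H

V = np.abs(np.random.randn(64, 8))         # stacked multi-aspect features
Wq = np.random.uniform(0.5, 1.0, V.shape)  # assumed feature-quality weights
B, H = weighted_nmf(V, Wq, rank=4)
```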
In recent years, millimeter-wave radar has been widely used in security inspection, nondestructive testing of parts, and medical diagnosis because of its strong penetration ability, small size, and high detection accuracy. However, owing to the limitation of hardware transmission bandwidth, achieving ultra-high two-dimensional resolution with millimeter-wave radar is challenging. Two-dimensional high-resolution imaging in elevation and azimuth can be realized by scanning the radar platform to form a two-dimensional aperture. However, during the scanning process, the millimeter-wave radar produces sparse tracks in the height dimension, resulting in sparse sampling of the elevation echo and thus reduced imaging quality. In this paper, a high-resolution three-dimensional imaging algorithm for millimeter-wave radar based on Hankel-transform matrix filling is proposed to solve this problem. The matrix filling algorithm restores the sparsely sampled echo, which guarantees the imaging accuracy of the millimeter-wave radar in the scanning plane. First, the low-rank prior characteristics of the millimeter-wave radar's elevation-range section are analyzed. To handle whole rows and columns of data missing under sparse trajectory sampling, the echo data matrix is reconstructed using the Hankel transform, and the sparse and low-rank prior characteristics of the constructed matrix are analyzed. Furthermore, a matrix filling algorithm based on the truncated Schatten-p norm, combining low-rank and sparse priors, is proposed to fill and reconstruct the echoes, ensuring the three-dimensional resolution of the sparse-trajectory millimeter-wave radar. Finally, simulations and several sets of measured data show that the proposed method achieves high-resolution three-dimensional imaging even when only 20%–30% of the elevation echo is used.
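A simplified stand-in for this approach is sketched below: lift an undersampled elevation profile into a Hankel matrix, then run a basic low-rank completion by iterative singular-value truncation, a simpler cousin of the truncated Schatten-p scheme the paper proposes. Signal, rank, and window length are illustrative assumptions.

```python
import numpy as np

def hankel(x, L):
    return np.array([x[i:i + L] for i in range(len(x) - L + 1)])

def hankel_complete(y, mask, L, rank, n_iter=100):
    x = np.where(mask, y, 0.0).astype(complex)
    for _ in range(n_iter):
        H = hankel(x, L)
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        s[rank:] = 0.0                       # truncate to a low-rank model
        Hr = (U * s) @ Vh
        # De-hankelize by averaging each anti-diagonal back into a signal.
        xr = np.zeros(len(x), complex); cnt = np.zeros(len(x))
        for i in range(Hr.shape[0]):
            xr[i:i + L] += Hr[i]; cnt[i:i + L] += 1
        x = xr / cnt
        x[mask] = y[mask]                    # enforce the observed samples
    return x

y = np.sin(2 * np.pi * 0.11 * np.arange(64))   # mock elevation profile
mask = np.random.rand(64) < 0.3                # ~30% sparse sampling
x_rec = hankel_complete(y, mask, L=24, rank=4)
```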
Multistation cooperative radar target recognition aims to enhance recognition performance by exploiting the complementarity of multistation information. Conventional multistation cooperative target recognition methods do not explicitly consider interstation data differences and typically adopt relatively simple fusion strategies, making accurate and robust recognition difficult. In this study, we propose an angle-guided transformer fusion network for multistation radar High-Resolution Range Profile (HRRP) target recognition. Local and global features of the single-station HRRP are extracted by a feature extraction network whose main structure is a transformer. Furthermore, three new auxiliary modules are created to facilitate fusion learning: the angle-guided module, the prefeature interaction module, and the deep attention feature fusion module. First, the angle-guided module enhances the robustness and consistency of features by modeling data differences between multiple stations and reinforces individual features associated with the observation perspective. Second, the fusion approach is optimized, and multilevel hierarchical fusion of multistation features is achieved by combining the prefeature interaction module and the deep attention feature fusion module. Finally, experiments are conducted on simulated multistation scenarios with measured data, and the outcomes demonstrate that our approach can effectively enhance target recognition performance in multistation coordination.
Traditional networked radar power allocation is typically optimized for a given jamming model, while jammer resource allocation is optimized against a given radar power allocation method; such studies lack game-theoretic interaction between the two sides. Given the increasing severity of combat scenarios in which radars and jammers compete, this study formulates a deep game problem of networked radar power allocation under escort suppression jamming, in which an intelligent jammer is trained using Deep Reinforcement Learning (DRL). First, the jammer and the networked radar are mapped to two agents. Based on the jamming model and the radar detection model, the target detection model of the networked radar under suppression jamming and the optimization objective function for maximizing the target detection probability are established. For the networked radar agent, the radar power allocation vector is generated by a Proximal Policy Optimization (PPO) policy network. For the jammer agent, a hybrid policy network is designed to simultaneously generate beam selection and power allocation actions. Domain knowledge is introduced to construct more effective reward functions: three kinds of domain knowledge, namely the target detection model, the equal power allocation strategy, and the greedy jamming power allocation strategy, are employed to produce guided rewards for the networked radar agent and the jammer agent, respectively, improving the learning efficiency and performance of the agents. Lastly, alternating training is used to learn the policy network parameters of both agents. The experimental results show that when the jammer adopts the DRL-based resource allocation strategy, DRL-based networked radar power allocation is significantly better than particle swarm-based and artificial fish swarm-based power allocation in both target detection probability and run time.
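This is not the paper's full PPO training loop, only a sketch of the two action heads described for the jammer agent: one categorical head for beam (radar) selection and one softmax head that splits jamming power across radars. Network sizes and the state encoding are assumptions.

```python
import torch
import torch.nn as nn

class HybridJammerPolicy(nn.Module):
    def __init__(self, state_dim, n_radars, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh())
        self.beam_head = nn.Linear(hidden, n_radars)    # which radar to jam
        self.power_head = nn.Linear(hidden, n_radars)   # how much power each

    def forward(self, state):
        h = self.backbone(state)
        beam_logits = self.beam_head(h)                 # -> Categorical dist
        power_frac = torch.softmax(self.power_head(h), dim=-1)  # sums to 1
        return beam_logits, power_frac

policy = HybridJammerPolicy(state_dim=16, n_radars=4)
logits, power = policy(torch.randn(1, 16))
beam = torch.distributions.Categorical(logits=logits).sample()
```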
Most high-resolution Synthetic Aperture Radar (SAR) images of real-life scenes are complex owing to background clutter such as grass, trees, roads, and buildings. Traditional target detection algorithms for SAR images produce numerous false and missed alarms under such clutter, adversely affecting SAR image target detection performance. Herein, we propose a feature decomposition-based convolutional neural network for target detection in SAR images. The feature extraction module first extracts features from the input images, and these features are then decomposed into discriminative and interfering features by the feature decomposition module; only the discriminative features are input into the multiscale detection module for target detection. The interfering features removed after decomposition are the parts unfavorable to target detection, such as complex background clutter, whereas the retained discriminative features are the parts favorable to target detection, such as the targets of interest. Hence, the number of false and missed alarms is effectively reduced, and SAR target detection performance is improved. The F1-score values of the proposed method are 0.9357 and 0.9211 on the MiniSAR dataset and the SAR Aircraft Detection Dataset (SADD), respectively. Compared with the single shot multibox detector without the feature decomposition module, the F1-score values of the proposed method on the MiniSAR and SADD datasets improve by 0.0613 and 0.0639, respectively. The effectiveness of the proposed method for target detection in SAR images of complex scenes is thus demonstrated through experimental results on the measured datasets.
A utility maximization-based multiradar online task planning algorithm is proposed in this paper for the real-time multitask planning problem. Taking maximization of the task utility function as the objective, multiradar task planning is formulated as an integer programming-based mixed multivariable optimization problem. Then, two algorithms, namely a heuristic greedy search and a convex relaxation-based two-step decoupling method, are proposed to solve the resulting NP-hard optimization problem in polynomial time. Simulation experiments demonstrate that, compared with the optimal exhaustive search algorithm, the proposed algorithms can effectively reduce computing time and improve solution efficiency such that the real-time requirement of online task planning is satisfied.
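A toy version of the heuristic greedy search is sketched below: repeatedly assign the task/radar pair with the best utility-per-cost ratio until every radar's time budget is exhausted. The utilities, costs, and budgets are made-up placeholders; the paper's utility function and constraint set are richer.

```python
import numpy as np

util = np.array([[5.0, 4.0], [3.5, 4.5], [2.0, 2.5]])  # task x radar utility
cost = np.array([[1.0, 1.2], [0.8, 1.0], [0.5, 0.6]])  # task x radar time cost
budget = np.array([1.5, 1.6])                          # per-radar time budget

assigned, plan = set(), []
while True:
    ratio = util / cost
    best = None
    for q in range(util.shape[0]):                     # unassigned tasks only
        if q in assigned:
            continue
        for r in range(util.shape[1]):
            if cost[q, r] <= budget[r] and (best is None or
                                            ratio[q, r] > ratio[best]):
                best = (q, r)
    if best is None:
        break                                          # no feasible pair left
    q, r = best
    plan.append(best); assigned.add(q); budget[r] -= cost[q, r]
print(plan)                                            # greedy task-radar plan
```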
The realization of anti-jamming technologies via beamforming for Frequency Diverse Array Multiple-Input Multiple-Output (FDA-MIMO) radar is undergoing intensive research. However, because of limitations in hardware systems, such as component aging and storage device capacity, the signal covariance matrix data computed by the receiver system may be partially missing. To mitigate the impact of missing covariance matrix data on beamforming performance, we propose a covariance matrix data recovery method for FDA-MIMO radar based on deep learning and construct a two-stage framework of missing covariance matrix recovery followed by adaptive beamforming. On this basis, a learning framework built on a Generative Adversarial Network (GAN) is constructed, composed mainly of a discriminator (D) and a generator (G). G outputs complete matrix data, while D judges whether the data is real or filled in. Through the adversarial interplay between D and G, the network closes the gap between the samples generated by G and the distribution of the real data, thereby recovering the missing covariance matrix entries. In addition, because the covariance matrix data is complex-valued, two independent networks are constructed to train on the real and imaginary parts of the matrix data. Finally, numerical experiments reveal that the Root Mean Square Error (RMSE) between the real and recovered data is on the order of 0.01.
Compared with single-radar systems, spatially separated networked radar usually achieves better detection performance owing to its spatial and frequency diversity. Most current fusion detection methods for networked radar systems rely only on the echo amplitude information of the target, without considering the Doppler information that a coherent radar system can obtain to aid target detection. Intuitively, the spatial position and radial velocity of a target observed by the different radars in a networked system should satisfy certain physical constraints, under which true and false targets can be substantially distinguished. Based on this consideration, a Doppler-information-aided fusion detection algorithm for networked radar is proposed in this paper. First, a set of inequalities is constructed based on the coupling between the multiple radar stations' observations of the same target's azimuth and Doppler velocity. Then, the two-phase method, an algorithm from operations research, is used to judge whether the inequalities admit a feasible solution, based on which a judgment is made on whether the target exists. Finally, simulations show that the proposed algorithm can effectively improve the fusion detection performance of the networked radar system. Additionally, the influence of radar station placement and target position on the fusion detection performance of the proposed algorithm is analyzed.
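A sketch of the feasibility test at the core of the method follows: collect the linearized azimuth/Doppler consistency constraints as A·x ≤ b and ask an LP solver whether any x satisfies them (a phase-one feasibility problem, which is what the two-phase method's first phase decides). The constraint matrix here is a random placeholder; in the paper it couples multistation azimuth and radial-velocity observations.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2))        # placeholder coupling constraints
b = rng.standard_normal(6) + 2.0

# Zero objective: we only care whether the constraint set is non-empty.
res = linprog(c=np.zeros(2), A_ub=A, b_ub=b,
              bounds=[(None, None)] * 2, method="highs")
target_declared = res.status == 0      # 0 = feasible -> target hypothesis kept
```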
With the substantial improvement of Synthetic Aperture Radar (SAR) regarding swath width and spatial and temporal resolutions, a time series obtained by registering SAR images acquired at different times can provide more accurate information on the dynamic changes in the observed areas. However, inherent speckle noise and outliers along the temporal dimension in the time series pose serious challenges for subsequent interpretation tasks. Although existing state-of-the-art methods can effectively suppress the speckle noise in a SAR time series, outliers along the temporal dimension will interfere with the denoising results. To better solve this problem, this paper proposes an additive signal decomposition method in the logarithm domain that can suppress the speckle noise and separate stable data and outliers along the temporal dimension in a time series, thus eliminating the impact of outliers on the denoising results. When the simulated data are disturbed by outliers, the proposed method can achieve an approximately 3 dB improvement in the Peak Signal-to-Noise Ratio (PSNR) compared to the other state-of-the-art methods. On Sentinel-1 data, the proposed method robustly suppresses the speckle noise in a time series, and the obtained outliers along the temporal dimension provide reference data for subsequent interpretation tasks.
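A simplified illustration of the idea (not the paper's optimization) is given below: work in the log domain so multiplicative speckle becomes additive, model the stable part of each pixel's time series with a running median, and flag temporal outliers with a robust MAD test before denoising. The gamma-distributed mock stack and the thresholds are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

stack = np.random.gamma(4.0, 0.25, size=(30, 64, 64))  # mock intensity series
log_stack = np.log(stack + 1e-6)       # speckle becomes additive in log domain

stable = median_filter(log_stack, size=(7, 1, 1))      # temporal trend
resid = log_stack - stable
mad = 1.4826 * np.median(np.abs(resid), axis=0) + 1e-9 # robust scale per pixel
outliers = np.abs(resid) > 3.0 * mad                   # temporal outlier mask

# Denoise using only the stable component; the outliers are kept separately
# as change candidates for later interpretation.
despeckled = np.exp(stable)
```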
In radar-based road target recognition, increasing the target feature dimension is a common technique to improve recognition performance when targets are diverse but their characteristics are similar. However, the increase in feature dimension leads to feature redundancy and the curse of dimensionality. Therefore, the extracted high-dimensional feature set must be optimized. The Adaptive Genetic Algorithm (AGA), based on random search, is an effective feature optimization method. To improve the efficiency and accuracy of the AGA, existing improved AGA methods generally utilize the prior correlation between features and targets for pre-dimensionality reduction of high-dimensional feature sets. However, such algorithms only consider the correlation between a single feature and a target, neglecting the correlation between feature combinations and targets; the selected feature set may therefore not be the best recognition combination for the target. To address this issue, this study proposes an improved AGA with pre-dimensionality reduction based on Histogram Analysis (HA) of the correlation between different feature combinations and targets. The proposed method simultaneously improves the efficiency and accuracy of feature selection and the target recognition performance. Comparative experiments on a real millimeter-wave radar dataset showed that the average target recognition accuracy of the proposed HA-AGA method reaches 95.7%, which is 1.9%, 2.4%, and 10.1% higher than that of the IG-GA, ReliefF-IAGA, and improved RetinaNet methods, respectively. Comparative experiments on the CARRADA dataset showed that the average target recognition accuracy of the proposed HA-AGA method reaches 93.0%, which is 1.2% and 1.5% higher than that of the IG-GA and ReliefF-IAGA methods, respectively. These results verify the effectiveness and superiority of the proposed method over existing methods. In addition, the performance of different feature optimization methods coupled with the ensemble bagged tree, fine tree, and K-Nearest Neighbor (KNN) classifiers was compared. The experimental results showed that the proposed method exhibits clear advantages with different classifiers and has broad applicability.
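Two of the ingredients are sketched below under stated assumptions: (1) a histogram-based pre-screen that keeps features whose class-conditional histograms overlap least, a single-feature stand-in for the paper's combination-level analysis; and (2) the classic adaptive-GA rule that lowers crossover/mutation probabilities for above-average individuals. The mock data and the fitness values are placeholders.

```python
import numpy as np

def hist_separability(f, labels, bins=16):
    h0, edges = np.histogram(f[labels == 0], bins=bins, density=True)
    h1, _ = np.histogram(f[labels == 1], bins=edges, density=True)
    w = np.diff(edges)
    return 1.0 - np.sum(np.minimum(h0, h1) * w)   # 1 - histogram overlap

def adaptive_rates(fit, f_avg, f_max, k1=0.9, k2=0.4):
    """Classic AGA schedule: strong individuals get smaller pc/pm."""
    if fit < f_avg:
        return k1, k2
    span = max(f_max - f_avg, 1e-9)
    return k1 * (f_max - fit) / span, k2 * (f_max - fit) / span

X = np.random.randn(500, 40)                      # mock feature matrix
y = (np.random.rand(500) > 0.5).astype(int)
# Pre-dimensionality reduction: keep the 15 most separable features.
keep = np.argsort([-hist_separability(X[:, j], y) for j in range(40)])[:15]
pc, pm = adaptive_rates(fit=0.8, f_avg=0.7, f_max=0.95)
```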
As a biometric technology, gait recognition is usually treated as a retrieval task in real life. However, because of the small scale of existing radar gait recognition datasets, current studies mainly focus on classification tasks and only consider a single walking view and the same wearing condition, limiting the practical application of radar-based gait recognition. This paper provides a radar gait recognition dataset under multi-view and multi-wearing conditions; the dataset uses millimeter-wave radar as the sensor to collect time-frequency spectrogram data of 121 subjects walking at multiple views under multiple wearing conditions. Eight views were collected for each subject, and ten sets were collected for each view: in six of the ten sets the subject is dressed normally, in two the subject wears a coat, and in the last two the subject carries a bag. Meanwhile, this paper proposes a retrieval-based method for radar gait recognition. Experiments are conducted on this dataset, and the experimental results can serve as a benchmark to facilitate further research on this dataset.
For the Multi-Target Tracking (MTT) of distributed netted phased array radars, this paper proposes a joint beam and dwell time allocation algorithm driven by dynamic threats. First, a Bayesian Cramér-Rao Lower Bound (BCRLB) incorporating beam and dwell time allocation is derived. Then, a comprehensive threat evaluation scale is constructed based on the real-time motion state of the target, and a utility function based on the tracking accuracy reference threshold and contributed weights is designed for targets with different threat levels to measure the prioritization of resource allocation among multiple targets. Afterward, an optimal allocation model of the joint beam and dwell time driven by the dynamic target threat is established by combining the utility function with the resource constraints of the netted phased array radar system. Finally, the problem is solved using a reward-based iterative descent search algorithm, and the effectiveness of the algorithm is verified via simulation. The simulation results show that the proposed algorithm can determine the tracking accuracy requirements of different targets and allocate tracking resources based on the multi-target threat assessment results, thereby improving the comprehensive tracking accuracy of netted phased array radars.
An improved Synthetic Aperture Radar (SAR) imaging algorithm is proposed to address the issues of low azimuth resolution and noise interference under sparse sampling conditions. Based on existing L1/2 regularization theory and the iterative threshold algorithm, the gradient operator is modified, which improves the solution accuracy of the reconstructed image and reduces the computational load. Then, under full sampling and undersampling conditions, the original and improved L1/2 iterative threshold algorithms are combined with the approximate observation model to image SAR echo signals, and their imaging performance is compared. The experimental findings demonstrate that the improved algorithm raises the azimuth resolution of SAR images and has better convergence performance.
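For reference, the sketch below implements the standard half-thresholding iteration for L1/2-regularized reconstruction (Xu et al.'s closed-form operator), with a generic linear operator A standing in for the approximate-observation SAR model. The step size, regularization weight, and mock scene are illustrative; the paper's contribution additionally modifies the gradient operator, which is not reproduced here.

```python
import numpy as np

def half_threshold(z, lam):
    """Closed-form thresholding operator for the L1/2 penalty."""
    out = np.zeros_like(z)
    t = (54 ** (1 / 3) / 4) * lam ** (2 / 3)       # threshold level
    big = np.abs(z) > t
    phi = np.arccos((lam / 8) * (np.abs(z[big]) / 3) ** -1.5)
    out[big] = (2 / 3) * z[big] * (1 + np.cos(2 * np.pi / 3 - 2 * phi / 3))
    return out

def l_half_recon(A, y, lam=0.05, mu=0.1, n_iter=100):
    """Iterative half thresholding: gradient step + L1/2 shrinkage."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = half_threshold(x + mu * A.T @ (y - A @ x), lam * mu)
    return x

A = np.random.randn(60, 100) / np.sqrt(60)   # mock observation operator
x_true = np.zeros(100); x_true[[7, 42, 77]] = 1.0   # sparse scene
y = A @ x_true
x_hat = l_half_recon(A, y)
```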
In this study, a real-time dwell scheduling algorithm based on pulse interleaving is proposed for a distributed radar network system. A time pointer vector is introduced to indicate the moment at which the dwell task with the highest synthetic priority should be chosen. This task is then allocated to the radar node with the lowest interleaving time utilization ratio, effectively reducing the time gaps during scheduling. Meanwhile, pulse interleaving analysis determines whether the assigned dwell task can be scheduled successfully on the corresponding radar node. A time slot occupation matrix and an energy consumption matrix are introduced to represent the time and energy resource consumption of radar nodes, which not only simplifies the pulse interleaving analysis but also enables pulse interleaving among tasks with different pulse repetition intervals and pulse numbers. Furthermore, to improve the efficiency of dwell scheduling, a threshold on the interleaving time utilization ratio is set to adaptively choose the sliding step of the time pointer. The simulation results reveal that the proposed algorithm can execute real-time dwell scheduling for a distributed radar network system and achieves better scheduling performance than existing dwell scheduling algorithms.
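A toy check of the time-slot occupation idea follows: a boolean vector marks which transmit/receive slots of one radar node are busy, and a dwell task with a given pulse repetition interval and pulse count can be interleaved only if every slot it needs is still free. The slot granularity and task parameters are assumptions; the paper uses matrices over nodes and adds energy bookkeeping.

```python
import numpy as np

def can_interleave(occ, start, pri, n_pulse, width):
    """occ: 1-D boolean slot-occupation vector for one radar node."""
    for k in range(n_pulse):
        s = start + k * pri
        if s + width > len(occ) or occ[s:s + width].any():
            return False                    # pulse collides or runs off end
    return True

def schedule(occ, start, pri, n_pulse, width):
    for k in range(n_pulse):
        s = start + k * pri
        occ[s:s + width] = True             # mark the pulse slots busy

occ = np.zeros(200, dtype=bool)
if can_interleave(occ, start=3, pri=20, n_pulse=8, width=4):
    schedule(occ, 3, 20, 8, 4)              # gaps stay free for other tasks
```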
This study proposes a fast power allocation algorithm under a low-interception background for a collocated MIMO radar that simultaneously tracks multiple maneuvering targets. First, the target maneuver process is modeled with an Adaptive Current Statistical (ACS) model, and a particle filter is used to estimate the state of each target. Second, the Predicted Conditional Cramér-Rao Lower Bound (PC-CRLB) is derived, and a comprehensive target threat assessment model is constructed based on target motion and electromagnetic characteristics. Subsequently, an optimization model for transmit power is established, taking as the objective the weighted sum of the target tracking error index and the radar's non-interception probability. Thereafter, exploiting the monotonically decreasing property of the objective function, a solving algorithm based on sequential relaxation is proposed. Finally, simulations verify the effectiveness and timeliness of the proposed algorithm. The results indicate that the proposed algorithm effectively improves the target tracking accuracy and low-interception performance of the radar system; moreover, it runs nearly 50% faster than the interior point method.
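As a hedged illustration of the weighted-sum formulation, the Python sketch below allocates a total power budget across targets; tracking_error and intercept_risk are placeholder surrogates for the PC-CRLB term and the unintercepted-probability term, and SciPy's SLSQP solver stands in for the paper's sequence-relaxation algorithm.

import numpy as np
from scipy.optimize import minimize

def tracking_error(p):                 # surrogate: error shrinks as power grows
    return np.sum(1.0 / (1.0 + p))

def intercept_risk(p):                 # surrogate: interception risk grows with power
    return np.sum(1.0 - np.exp(-0.1 * p))

def allocate(n_targets, p_total, w=0.7):
    obj = lambda p: w * tracking_error(p) + (1 - w) * intercept_risk(p)
    cons = ({"type": "ineq", "fun": lambda p: p_total - p.sum()},)  # power budget
    p0 = np.full(n_targets, p_total / n_targets)
    res = minimize(obj, p0, method="SLSQP",
                   bounds=[(0.0, p_total)] * n_targets, constraints=cons)
    return res.x

print(allocate(4, 100.0))              # split trading accuracy against interception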
Most traditional multi-aircraft flight path optimization methods are oriented toward area coverage, use static optimization models, and suffer from model mismatch in complex dynamic environments. Therefore, this study proposes a flight path optimization method for dynamic area coverage based on multi-aircraft radars. First, we introduce an attenuation factor to characterize the actual coverage effect of airborne radar in a dynamic environment, and we take the area coverage rate under the dynamic-coverage background as the optimization function. After integrating the constraints on the multi-dimensional flight path control parameters to be optimized, we build a mathematical model for dynamic area coverage flight path optimization based on multi-aircraft radars. Then, a stochastic optimization method is used to solve the resulting flight path optimization problem. Finally, the simulation results show that the proposed method significantly improves dynamic coverage performance compared with a search mode using preset flight paths. Compared with traditional flight path optimization methods oriented to static environments, the dynamic coverage performance of our method improves by approximately 6% on average.
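The toy Python sketch below isolates the two ingredients of the model: a per-step attenuation factor that fades previously covered cells, and a stochastic (random-search) optimizer over heading sequences. The grid size, decay value, footprint radius, and aircraft dynamics are all illustrative assumptions, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.arange(50), np.arange(50))

def coverage_rate(headings, r=8.0, decay=0.95, speed=2.0):
    cov = np.zeros((50, 50))                       # per-cell coverage level
    pos = np.full((headings.shape[0], 2), 25.0)    # aircraft start at grid center
    for k in range(headings.shape[1]):
        cov *= decay                               # attenuation: old coverage fades
        for a in range(headings.shape[0]):
            pos[a] += speed * np.array([np.cos(headings[a, k]), np.sin(headings[a, k])])
            footprint = (xs - pos[a, 0])**2 + (ys - pos[a, 1])**2 <= r**2
            cov[footprint] = 1.0                   # radar refreshes covered cells
    return cov.mean()

best, best_h = -1.0, None
for _ in range(200):                               # stochastic search over heading plans
    h = rng.uniform(-np.pi, np.pi, size=(3, 20))   # 3 aircraft, 20 control steps
    c = coverage_rate(h)
    if c > best:
        best, best_h = c, h
print("best dynamic coverage rate:", best)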
For the resource allocation problem of multitarget tracking in a spectral coexistence environment, this study proposes a joint transmit power and dwell time allocation algorithm for radar networks. First, the predicted Bayesian Cramér-Rao Lower Bound (BCRLB), with radar node selection, transmit power, and dwell time as variables, is derived as the performance metric for multitarget tracking accuracy. On this basis, a joint optimization model of transmit power and dwell time allocation for multitarget tracking in radar networks under spectral coexistence is built to collaboratively optimize the radar node selection, transmit power, and dwell time. This model aims to minimize the multitarget tracking BCRLB while satisfying the given transmit resources of the radar networks and the predetermined maximum allowable interference energy threshold of the communication base station. Subsequently, a two-step decomposition method decomposes the optimization problem into multiple convex subproblems, which are solved by combining Semi-Definite Programming (SDP) and cyclic minimization algorithms. The simulation results show that, compared with existing algorithms, the proposed algorithm effectively improves the multitarget tracking accuracy of radar networks while ensuring that the communication base station operates properly.
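Schematically, the joint model described above can be written as follows, where the symbols are illustrative rather than the paper's notation: $u_{n,q}\in\{0,1\}$ selects radar node $n$ for target $q$, $P_{n,q}$ and $T_{n,q}$ are the transmit power and dwell time, $g_{n}$ is the propagation gain from node $n$ toward the base station, and $E_{\mathrm{int}}$ is the allowable interference energy threshold:

$$
\begin{aligned}
\min_{\mathbf{u},\,\mathbf{P},\,\mathbf{T}}\ & \max_{q}\ \operatorname{Tr}\!\left(\mathbf{C}^{\mathrm{BCRLB}}_{q}(\mathbf{u},\mathbf{P},\mathbf{T})\right)\\
\text{s.t.}\ & \textstyle\sum_{n} u_{n,q}\,P_{n,q}\,T_{n,q}\le E_{\max},\quad P_{\min}\le P_{n,q}\le P_{\max},\\
& \textstyle\sum_{n}\sum_{q} u_{n,q}\,g_{n}\,P_{n,q}\,T_{n,q}\le E_{\mathrm{int}},\quad u_{n,q}\in\{0,1\}.
\end{aligned}
$$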
This paper establishes a hybrid distributed Phased-Array Multiple-Input Multiple-Output (PA-MIMO) radar system model that combines coherent processing gain and spatial diversity gain to synergistically improve target detection performance. We derive a Likelihood Ratio Test (LRT) detector based on the Neyman-Pearson (NP) criterion for the hybrid distributed PA-MIMO radar system. The coherent processing gain and spatial diversity gain are jointly optimized by implementing subarray-level and array-element-level optimal configurations at the transmitter and receiver ends. Moreover, a Stochastic Rounding-based Quantum Particle Swarm Optimization (SR-QPSO) algorithm is proposed for the integer-programming configuration model; it obtains the optimal array-element configuration strategy in fewer iterations and achieves joint optimization at the subarray and array-element levels. Finally, simulations verify that the proposed optimal configuration offers substantial improvements in detection performance over other typical radar systems, achieving a detection probability of 0.98 and an effective range of 1166.3 km.
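The stochastic-rounding step can be isolated in a few lines of Python; the sketch below pairs a textbook QPSO position update with probabilistic rounding of particle coordinates to integer element counts. The update rule and parameters are generic QPSO, the fitness function is left abstract, and the example objective is invented; this is an assumption-laden sketch, not the paper's SR-QPSO.

import numpy as np

rng = np.random.default_rng(1)

def stochastic_round(x):
    # Round each entry down or up with probability equal to its fractional part.
    lo = np.floor(x)
    return (lo + (rng.random(x.shape) < (x - lo))).astype(int)

def sr_qpso(fitness, dim, n=30, iters=100, lb=0.0, ub=8.0):
    x = rng.uniform(lb, ub, size=(n, dim))       # continuous particle positions
    pbest = x.copy()
    pval = np.array([fitness(stochastic_round(p)) for p in pbest])
    for it in range(iters):
        gbest = pbest[pval.argmax()]
        mbest = pbest.mean(axis=0)               # QPSO mean-best attractor
        phi = rng.random((n, dim))
        attractor = phi * pbest + (1 - phi) * gbest
        beta = 1.0 - 0.5 * it / iters            # shrinking contraction factor
        u = 1.0 - rng.random((n, dim))           # in (0, 1], keeps log finite
        sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)
        x = np.clip(attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u), lb, ub)
        val = np.array([fitness(stochastic_round(p)) for p in x])
        improved = val > pval
        pbest[improved], pval[improved] = x[improved], val[improved]
    return stochastic_round(pbest[pval.argmax()])

# Example: pick 4 integer element counts whose total is as close to 12 as possible
print(sr_qpso(lambda c: -abs(int(c.sum()) - 12), dim=4))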
To reduce the probability of an Unmanned Aerial Vehicle (UAV) being destroyed during a reconnaissance mission, this study proposes an effective path planning algorithm that reduces the target threat. First, high-resolution airborne radar is used for robust tracking and estimation of multiple extended targets. Subsequently, the targets are classified based on the threat degree calculated via fuzzy TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution). Next, UAV path planning is performed with the joint optimization of multiple task decision-making (the joint evaluation of target threat degree and target tracking performance) as the evaluation criterion. The simulation results indicate that the fuzzy threat assessment method is effective in multiple extended target tracking and that the proposed UAV path planning algorithm is reasonable. Thus, the target threat is efficiently reduced without sacrificing tracking accuracy.
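For reference, a crisp TOPSIS ranking is only a few lines of Python; the paper's fuzzy variant replaces the crisp decision matrix with fuzzy numbers, and the criteria, weights, and example values below are purely illustrative.

import numpy as np

def topsis(D, w, benefit):
    # D: targets x criteria decision matrix; w: criterion weights;
    # benefit[j] is True if a larger value of criterion j means a greater threat.
    V = (D / np.linalg.norm(D, axis=0)) * w          # vector-normalize, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # closeness coefficient = threat degree

# Illustrative criteria: speed (benefit), approach angle (cost), distance (cost)
D = np.array([[300.0, 10.0, 50.0],
              [250.0, 40.0, 20.0],
              [400.0, 5.0, 80.0]])
scores = topsis(D, w=np.array([0.4, 0.3, 0.3]), benefit=np.array([True, False, False]))
print(np.argsort(-scores))                           # target indices, most threatening first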
This paper studies adaptive distributed target detection for Frequency Diverse Array Multiple-Input Multiple-Output (FDA-MIMO) radar, where the targets are embedded in Gaussian clutter with an unknown covariance matrix. Unlike classic detection models for MIMO and/or phased-array radars, which consider only point-like targets, the proposed FDA-MIMO detection model expresses the distributed targets as a summation. Next, a detector based on the Rao criterion, which requires no training data, is proposed. The proposed method and all theoretical analyses are verified by numerical results.
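For context, the generic Rao test statistic that such a detector instantiates is

$$
T_{\mathrm{Rao}}(\mathbf{x}) \;=\;
\left.\frac{\partial \ln f(\mathbf{x};\boldsymbol{\theta})}{\partial \boldsymbol{\theta}_{r}}\right|_{\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}_{0}}^{T}
\left[\mathbf{I}^{-1}(\hat{\boldsymbol{\theta}}_{0})\right]_{\boldsymbol{\theta}_{r},\boldsymbol{\theta}_{r}}
\left.\frac{\partial \ln f(\mathbf{x};\boldsymbol{\theta})}{\partial \boldsymbol{\theta}_{r}}\right|_{\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}_{0}}
\ \underset{H_{0}}{\overset{H_{1}}{\gtrless}}\ \gamma,
$$

where $\boldsymbol{\theta}_{r}$ collects the signal parameters, $\hat{\boldsymbol{\theta}}_{0}$ is the maximum-likelihood estimate under $H_{0}$, $\mathbf{I}(\cdot)$ is the Fisher information matrix, and $\gamma$ is the detection threshold. The paper specializes this template to distributed targets in FDA-MIMO radar embedded in Gaussian clutter with unknown covariance.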