Volume 4 Issue 5
Nov.  2015
Li Lian-lin, Zhou Xiao-yang, Cui Tie-jun. Perspectives on Theories and Methods of Structural Signal Processing[J]. Journal of Radars, 2015, 4(5): 491-502. doi: 10.12000/JR15111

Perspectives on Theories and Methods of Structural Signal Processing

doi: 10.12000/JR15111
Funds:

The National Natural Science Foundation of China (61471006)

  • Received Date: 2015-10-08
  • Revised Date: 2015-11-08
  • Publish Date: 2015-10-28
  • Over the past decade, structural signal processing has emerged as a field attracting intensive attention from researchers in applied mathematics, physics, information theory, signal processing, and related areas. It is a paradigm that fundamentally revises the theories and methods of traditional signal processing built on the well-known Nyquist-Shannon sampling theory, and it offers a new perspective on adaptive, task-driven data acquisition. Structural signal processing comprises four lines of research (MAMA): (a) Measures for structural signals; (b) Algorithms for reconstructing structural signals at low computational cost; (c) Methods for smart data acquisition with low hardware cost and system complexity; and (d) Applications of structural signal processing in applied fields. This paper reviews recent progress on the theory and algorithms of structural signal processing, in the hope of providing a useful guide for interested readers.
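The low-complexity reconstruction algorithms surveyed in this field (basis pursuit and its iterative solvers) can be illustrated with a minimal sketch of the iterative shrinkage-thresholding algorithm (ISTA) for the l1-regularized least-squares problem min_x (1/2)||Ax - b||^2 + λ||x||_1. This is an illustrative NumPy implementation, not code from the paper; the sensing matrix, sparsity level, and λ below are arbitrary choices made for the demonstration.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrinks each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=1000):
    # Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient descent.
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# Demo: recover a 5-sparse length-100 vector from 40 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
x_hat = ista(A, A @ x_true, lam=0.05)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Replacing the plain gradient step with Nesterov's accelerated variant yields FISTA; the same proximal structure carries over to the group, tree, and graph sparsity models discussed in the references, with the soft-threshold swapped for the corresponding structured proximal operator.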

     

  • [1]
    Li Lian-lin and Li Fang. Lectures on Sparse Signal Processing[M]. Internal lecture notes, Peking University, 2015 (in Chinese).
    [2]
    Elad M. Sparse and redundant representation modeling: what next?[J]. IEEE Signal Processing Letters, 2012, 19(12): 922-928.
    [3]
    Duarte M F and Eldar Y C. Structured compressed sensing: from theory to applications[J]. IEEE Transactions on Signal Processing, 2011, 59(9): 4053-4085.
    [4]
    Wu Yi-rong. Sparse microwave imaging: theories, methods, and systems[R]. Proposal to the National Basic Research Program of China (973 Program), Leading Scientist: Wu Yi-rong, Institute of Electronics, Chinese Academy of Sciences, 2010 (in Chinese).
    [5]
    Candes E J and Tao T. Near-optimal signal recovery from random projections: universal encoding strategies?[J]. IEEE Transactions on Information Theory, 2006, 52(12): 5406-5425.
    [6]
    Donoho D L. Neighborly polytopes and sparse solutions of underdetermined linear equations[J]. Preprint, 2005.
    [7]
    Donoho D L. Compressed sensing[J]. IEEE Transactions on Information Theory, 2006, 52(4): 1289-1306.
    [8]
    Donoho D L. For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution[J]. Communications on Pure and Applied Mathematics, 2006, 59(7): 797-829.
    [9]
    Donoho D L and Elad M. Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization[J]. Proceedings of the National Academy of Sciences, 2003, 100(5): 2197-2202.
    [10]
    Donoho D L and Elad M. On the stability of the basis pursuit in the presence of noise[J]. Signal Processing, 2006, 86(3): 511-532.
    [11]
    Donoho D L, Elad M, and Temlyakov V N. Stable recovery of sparse overcomplete representations in the presence of noise[J]. IEEE Transactions on Information Theory, 2006, 52(1): 6-18.
    [12]
    Donoho D L and Tanner J. Sparse nonnegative solution of underdetermined linear equations by linear programming[J]. Proceedings of the National Academy of Sciences, 2005, 102(27): 9446-9451.
    [13]
    Donoho D L and Tanner J. Precise undersampling theorems[J]. Proceedings of the IEEE, 2010, 98(6): 913-924.
    [14]
    Donoho D L and Tsaig Y. Fast solution of l1-norm minimization problems when the solution may be sparse[J]. IEEE Transactions on Information Theory, 2008, 54(11): 4789-4812.
    [15]
    Candes E J, Romberg J, and Tao T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information[J]. IEEE Transactions on Information Theory, 2006, 52(2): 489-509.
    [16]
    Candès E J, Romberg J K, and Tao T. Stable signal recovery from incomplete and inaccurate measurements[J]. Communications on Pure and Applied Mathematics, 2006, 59(8): 1207-1223.
    [17]
    Candès E J and Tao T. Decoding by linear programming[J]. IEEE Transactions on Information Theory, 2005, 51(12): 4203-4215.
    [18]
    Prony R. Essai expérimental et analytique sur les lois de la dilatabilité des fluides élastiques et sur celles de la force expansive de la vapeur de l'eau et de la vapeur de l'alkool, à différentes températures[J]. Journal de l'École Polytechnique, Floréal et Prairial III, 1795, 1(2): 24-76.
    [19]
    Tarantola A. Inverse Problem Theory and Methods for Model Parameter Estimation[M]. SIAM, 2005.
    [20]
    Jaynes E. Probability Theory: The Logic of Science[M]. Cambridge University Press, 2003.
    [21]
    Carathéodory C. Über den Variabilitätsbereich der Koeffizienten von Potenzreihen, die gegebene Werte nicht annehmen[J]. Mathematische Annalen, 1907, 64(1): 95-115.
    [22]
    Carathéodory C. Über den Variabilitätsbereich der Fourier'schen Konstanten von positiven harmonischen Funktionen[J]. Rendiconti del Circolo Matematico di Palermo, 1911, 32(1): 193-217.
    [23]
    Beurling A. Sur les intégrales de Fourier absolument convergentes et leur application à une transformation fonctionnelle[C]. Proc. Scandinavian Math. Congress, Helsinki, Finland, 1938.
    [24]
    Claerbout J F and Muir F. Robust modeling of erratic data[J]. Geophysics, 1973, 38(5): 826-844.
    [25]
    Taylor H, Banks S, and McCoy J. Deconvolution with the l1 norm[J]. Geophysics, 1979, 44(1): 39-52.
    [26]
    Santosa F and Symes W W. Linear inversion of bandlimited reflection seismograms[J]. SIAM Journal on Scientific and Statistical Computing, 1986, 7(4): 1307-1330.
    [27]
    Santosa F and Symes W. Inversion of impedance profile from band-limited data[C]. International Geoscience and Remote Sensing Symposium, 1983.
    [28]
    Tibshirani R. Regression shrinkage and selection via the Lasso[J]. Journal of the Royal Statistical Society, Series B, 1996, 58(1): 267-288.
    [29]
    Chen S, Donoho D, and Saunders M. Atomic decomposition by basis pursuit[J]. SIAM Journal on Scientific Computing, 1998, 20(1): 33-61.
    [30]
    Samadi S, Cetin M, and Masnadi-Shirazi M. Sparse representation-based synthetic aperture radar imaging[J]. IET Radar, Sonar and Navigation, 2011, 5(2): 182-193.
    [31]
    Soldovieri F, Solimene R, and Ahmad F. Sparse tomographic inverse scattering approach for through-the-wall radar imaging[J]. IEEE Transactions on Instrumentation and Measurement, 2012, 61(12): 3340-3350.
    [32]
    Oliveri G, Rocca P, and Massa A. A Bayesian-compressive-sampling-based inversion for imaging sparse scatterers[J]. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(10): 3993-4006.
    [33]
    Desmal A and Bagci H. Shrinkage-thresholding enhanced Born iterative method for solving 2D inverse electromagnetic scattering problem[J]. IEEE Transactions on Antennas and Propagation, 2014, 62(7): 3878-3884.
    [34]
    Solimene R, Ahmad F, and Soldovieri F. A novel CS-TSVD strategy to perform data reduction in linear inverse scattering problems[J]. IEEE Geoscience and Remote Sensing Letters, 2012, 9(5): 881-885.
    [35]
    Winters D W, Van Veen B D, and Hagness S C. A sparsity regularization approach to the electromagnetic inverse scattering problem[J]. IEEE Transactions on Antennas and Propagation, 2010, 58(1): 145-154.
    [36]
    Huang Q, Qu L, Wu B, et al.. UWB through-wall imaging based on compressive sensing[J]. IEEE Transactions on Geoscience and Remote Sensing, 2010, 48(3): 1408-1415.
    [37]
    Yoon Y S and Amin M G. Through-the-wall radar imaging using compressive sensing along temporal frequency domain[C]. 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2010: 2806-2809.
    [38]
    Leigsnering M, Ahmad F, Amin M G, et al.. Compressive sensing based specular multipath exploitation for through-the-wall radar imaging[C]. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013: 6004-6008.
    [39]
    Zhu X X and Bamler R. Super resolving SAR tomography for multidimensional imaging of urban areas: compressive sensing-based TomoSAR inversion[J]. IEEE Signal Processing Magazine, 2014, 31(4): 51-58.
    [40]
    Xiang Yin. Study of compressed sampling, sparse-constrained imaging, and phaseless inverse scattering with equivalent radiation sources[D]. [Ph.D. dissertation], University of Chinese Academy of Sciences, 2010 (in Chinese).
    [41]
    Zhang Wen-ji. Study of fast methods for electromagnetic inverse scattering and compressive electromagnetic imaging[D]. [Ph.D. dissertation], University of Chinese Academy of Sciences, 2009 (in Chinese).
    [42]
    Liu Yan-li. Study of parabolic-equation-based radio wave propagation and its applications in ionosphere tomography[D]. [Ph.D. dissertation], University of Chinese Academy of Sciences, 2009 (in Chinese).
    [43]
    Xu G, Xing M D, Xia X G, et al.. Sparse regularization of interferometric phase and amplitude for InSAR image formation based on Bayesian representation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(4): 2123-2136.
    [44]
    Zhang L, Qiao Z J, Xing M, et al.. High-resolution ISAR imaging with sparse stepped-frequency waveforms[J]. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(11): 4630-4651.
    [45]
    Shi Guang-ming, Liu Dan-hua, Gao Da-hua, et al.. Theory of compressive sensing and its recent progress[J]. Acta Electronica Sinica, 2009, 37(5): 1070-1078 (in Chinese).
    [46]
    Xu Z, Chang X, Xu F, et al.. L1/2 regularization: a thresholding representation theory and a fast solver[J]. IEEE Transactions on Neural Networks and Learning Systems, 2012, 23(7): 1013-1027.
    [47]
    Huang Xiao-tao, Yang Jun-gang, and Jin Tian. Compressed Sensing Radar Imaging[M]. Beijing: Science Press, 2014 (in Chinese).
    [48]
    Mallat S and Zhang Z. Matching pursuits with time-frequency dictionaries[J]. IEEE Transactions on Signal Processing, 1993, 41(12): 3397-3415.
    [49]
    Needell D and Tropp J A. CoSaMP: iterative signal recovery from incomplete and inaccurate samples[J]. Applied and Computational Harmonic Analysis, 2009, 26(3): 301-321.
    [50]
    Nesterov Y. Introductory Lectures on Convex Optimization: A Basic Course[M]. Kluwer Academic Publishers, 2004.
    [51]
    Wainwright M J. Structured regularizers for high-dimensional problems: statistical and computational issues[J]. Annual Review of Statistics and Its Application, 2014, 1: 233-253.
    [52]
    Donoho D and Tsaig Y. Fast solution of l1 norm minimization problems when the solution may be sparse[J]. IEEE Transactions on Information Theory, 2008, 54(11): 4789-4812.
    [53]
    Chen S S, Donoho D L, and Saunders M A. Atomic decomposition by basis pursuit[J]. SIAM Review, 2001, 43(1): 129-159.
    [54]
    Koh K, Kim S J, and Boyd S. An interior-point method for large-scale l1-regularized logistic regression[J]. Journal of Machine Learning Research, 2007, 8(3): 1519-1555.
    [55]
    Shevade S K and Keerthi S S. A simple and efficient algorithm for gene selection using sparse logistic regression[J]. Bioinformatics, 2003, 19(17): 2246-2253.
    [56]
    Zou H and Hastie T. Regularization and variable selection via the elastic net[J]. Journal of the Royal Statistical Society, Series B, 2005, 67(2): 301-320.
    [57]
    Candès E J, Wakin M B, and Boyd S P. Enhancing sparsity by reweighted l1 minimization[J]. Journal of Fourier Analysis and Applications, 2008, 14: 877-905.
    [58]
    Tošić I and Frossard P. Dictionary learning[J]. IEEE Signal Processing Magazine, 2011, 28(2): 27-38.
    [59]
    Aharon M, Elad M, and Bruckstein A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation[J]. IEEE Transactions on Signal Processing, 2006, 54(11): 4311-4322.
    [60]
    Van den Berg E, Schmidt M, Friedlander M P, et al.. Group sparsity via linear-time projections[R]. Technical Report TR-2008, University of British Columbia, 2008.
    [61]
    Friedman J, Hastie T, and Tibshirani R. A note on the group lasso and a sparse group lasso[J]. arXiv:1001.0736v1, 2010.
    [62]
    Huang J and Zhang T. The benefit of group sparsity[J]. Annals of Statistics, 2009, 38(4): 1978-2004.
    [63]
    Huang J, Zhang T, and Metaxas D. Learning with structured sparsity[J]. Journal of Machine Learning Research, 2011, 12(7): 3371-3412.
    [64]
    Jacob L and Obozinski G. Group lasso with overlaps and graph lasso[C]. Proceedings of the 26th International Conference on Machine Learning, 2009.
    [65]
    Jenatton R, Audibert J Y, and Bach F. Structured variable selection with sparsity-inducing norms[J]. Journal of Machine Learning Research, 2011, 12(10): 2777-2824.
    [66]
    Baraniuk R G, Cevher V, Duarte M F, et al.. Model-based compressive sensing[J]. IEEE Transactions on Information Theory, 2010, 56(4): 1982-2001.
    [67]
    He L and Carin L. Exploiting structure in wavelet-based Bayesian compressive sensing[J]. IEEE Transactions on Signal Processing, 2009, 57(9): 3488-3497.
    [68]
    He L, Chen H, and Carin L. Tree-structured compressive sensing with variational Bayesian analysis[J]. IEEE Signal Processing Letters, 2010, 17(3): 233-236.
    [69]
    Eldar Y C, Kuppinger P, and Bölcskei H. Block-sparse signals: uncertainty relations and efficient recovery[J]. IEEE Transactions on Signal Processing, 2010, 58(6): 3042-3054.
    [70]
    Yuan M and Lin Y. Model selection and estimation in regression with grouped variables[J]. Journal of the Royal Statistical Society, 2006, 68(1): 49-67.
    [71]
    Tibshirani R, Saunders M, Rosset S, et al.. Sparsity and smoothness via the fused lasso[J]. Journal of the Royal Statistical Society, Series B, 2005, 67(1): 91-108.
    [72]
    Amit Y, Fink M, Srebro N, et al.. Uncovering shared structures in multiclass classification[C]. Twenty-fourth International Conference on Machine Learning, 2007: 17-24.
    [73]
    Gilboa G and Osher S. Nonlocal operators with applications to image processing[J]. SIAM Multiscale Modeling and Simulation, 2008, 7(3): 1005-1028.
    [74]
    Bach F and Obozinski G. Structured sparsity through convex optimization[J]. Statistical Science, 2011, 27(4): 450-468.
    [75]
    Peleg T, Eldar Y C, and Elad M. Exploiting statistical dependencies in sparse representations for signal recovery[J]. IEEE Transactions on Signal Processing, 2012, 60(5): 2286-2303.
    [76]
    Drémeau A, Herzet C, and Daudet L. Boltzmann machine and mean-field approximation for structured sparse decompositions[J]. IEEE Transactions on Signal Processing, 2012, 60(7): 3425-3438.
    [77]
    Marlin B M and Murphy K P. Sparse Gaussian graphical models with unknown block structure[C]. ICML'09 Proceedings of the 26th International Conference on Machine Learning, 2009: 705-712.
    [78]
    Marlin B M, Schmidt M, and Murphy K P. Group sparse priors for covariance estimation[C]. Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 2009: 383-392.
    [79]
    Pournaghi R and Wu X. Coded acquisition of high frame rate video[J]. IEEE Transactions on Image Processing, 2013, 23(12): 5670-5682.
    [80]
    Donoho D and Tanner J. Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing[J]. Philosophical Transactions A: Mathematical, Physical and Engineering Sciences, 2009, 367(1906): 4273-4293.
    [81]
    Candès E J and Recht B. Exact matrix completion via convex optimization[J]. Foundations of Computational Mathematics, 2008, 9(6): 717-772.
    [82]
    Candes E J and Plan Y. Matrix completion with noise[J]. Proceedings of the IEEE, 2009, 98(6): 925-936.
    [83]
    Härdle W, Hall P, and Ichimura H. Optimal smoothing in single-index models[J]. The Annals of Statistics, 1993, 21(1): 157-178.
    [84]
    Hristache M and Spokoiny V. Direct estimation of the index coefficient in a single-index model[J]. The Annals of Statistics, 2001, 29(3): 595-623.
    [85]
    Gopi S, Netrapalli P, Jain P, et al.. One-bit compressed sensing: provable support and vector recovery[C]. In International Conference on Machine Learning, 2013.
    [86]
    Jacques L, Laska J, Boufounos P, et al.. Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors[J]. arXiv:1104.3160, 2011.
    [87]
    Plan Y and Vershynin R. One-bit compressed sensing by linear programming[J]. Communications on Pure and Applied Mathematics, 2013, 66(8): 1275-1297.
    [88]
    Plan Y and Vershynin R. Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach[J]. IEEE Transactions on Information Theory, 2013, 59(1): 482-494.
    [89]
    Bahmani S and Romberg J. Efficient compressive phase retrieval with constrained sensing vectors[J]. arXiv:1507.08254v1, 2015.
    [90]
    Cevher V, Becker S, and Schmidt M. Convex optimization for big data: Scalable, randomized, and Parallel algorithms for big data analytics[J]. IEEE Signal Processing Magazine, 2014, 31(5): 32-43.
    [91]
    Slavakis K, Giannakis G B, and Mateos G. Modeling and Optimization for Big Data Analytics: (Statistical) learning tools for our era of data deluge[J]. IEEE Signal Processing Magazine, 2014, 31(5): 18-31.
    [92]
    Yuan G X, Chang K W, Hsieh C J, et al.. A comparison of optimization methods and software for large-scale L1-regularized linear classification[J]. Journal of Machine Learning Research, 2010, 11(2): 3183-3234.
    [93]
    Bertsekas D P and Tsitsiklis J N. Parallel and Distributed Computation: Numerical Methods[M]. Prentice Hall, 1989.
    [94]
    Nemirovski A, Juditsky A, Lan G, et al.. Robust stochastic approximation approach to stochastic programming[J]. SIAM Journal on Optimization, 2009, 19(4): 1574-1609.
    [95]
    Kushner H and Yin G. Stochastic Approximation Algorithms and Applications[M]. New York: Springer, 1997.
    [96]
    Nesterov Y. Efficiency of coordinate descent methods on huge-scale optimization problems[J]. SIAM Journal on Optimization, 2012, 22(2): 341-362.
    [97]
    Richtárik P and Takáč M. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function[J]. Mathematical Programming, 2014, 144(1): 1-38.
    [98]
    Johnson R and Zhang T. Accelerating stochastic gradient descent using predictive variance reduction[C]. Advances in Neural Information Processing Systems, 2013, 26: 315-323.
    [99]
    Boyd S, Parikh N, Chu E, et al.. Distributed optimization and statistical learning via the alternating direction method of multipliers[J]. Foundations and Trends in Machine Learning, 2011, 3(1): 1-122.
