科学网 (ScienceNet)


Tag: 稀疏信号恢复 (sparse signal recovery)


Related blog posts

[Repost] The Nuit Blanche Chronicles 2012
oliviazhang · 2012-12-31 09:04
Nuit Blanche is the most influential blog in the compressed sensing field. Unfortunately, it cannot be accessed from within China (unless you bypass the firewall). Recently the blog's author, Igor, compiled all the posts published in 2012 into a single volume for convenient downloading. The details are reposted below:

The Nuit Blanche Chronicles 2012 are here. The e-book/pdf lists all the entries on Nuit Blanche from Dec 14, 2011 till Dec 21, 2012. The formatting is crap, it doesn't allow the links to show up (it's a real shame; if somebody has a better way, I am all ears), but it does two useful things: the blog can be read offline on a tablet, a nice thing when you want to unplug and put some perspective on all this, and the search feature within a pdf is likely much better than Google's.
[Repost] So what is missing in Compressive Imaging?
oliviazhang 2012-11-30 17:06
Reposted from Nuit Blanche: http://nuit-blanche.blogspot.com/2012/11/sunday-morning-insight-so-what-is.html . If you work on compressed sensing of images, the discussion linked from that post is also well worth reading.

Sunday Morning Insight: So what is missing in Compressive Imaging and Uncertainty Quantification?

Paul Graham, in a recent essay, mentioned the following about finding start-up ideas: "Once you're living in the future in some respect, the way to notice startup ideas is to look for things that seem to be missing. If you're really at the leading edge of a rapidly changing field, there will be things that are obviously missing. What won't be obvious is that they're startup ideas. So if you want to find startup ideas, don't merely turn on the filter 'What's missing?' Also turn off every other filter, particularly 'Could this be a big company?' There's plenty of time to apply that test later. But if you're thinking about that initially, it may not only filter out lots of good ideas, but also cause you to focus on bad ones."

If you read this blog, you are already at the edge of knowledge in a rapidly moving field, and to a certain extent you are already living in the future. So what is missing?

Compressive sensing is a way of gathering data efficiently while knowing full well that what you acquire generally follows a power law of some sort. With that in mind, people have pushed concentration-of-measure results further, to more complex objects like matrices, and will eventually cross over to tensors (especially if we can leverage the matrix approach featured in the QTT format). In a way, compressive sensing is successful because the framework has a wide appeal beyond signal processing. Indeed, if you read again what the analysis operator approach does, it is nothing more than solving a differential equation subject to some measurements. The co-sparsity parameter represents the sources and/or the inhomogeneous part of these partial differential equations, which are themselves constrained in the loss function.

All is well, but there are dark clouds. The last time I mentioned Uncertainty Quantification on the blog, it was to say that while a Polynomial Chaos approach could follow the traditional compressive sensing framework, in all likelihood, given the Donoho-Tanner phase transition, you probably had to go through the extra complexity of wavelet chaos in order to find a winner. If you have ever dealt with polynomial series expansions, you know that all kinds of problems come from the coefficient thresholding we expect in compressive sensing. Even if some expansion coefficients are small, they still matter... at least empirically. This is known as the Gibbs phenomenon.

Similarly, if you are part of the compressive sensing group on LinkedIn, you have seen that seemingly small questions lead to not-so-favorable answers for compressive imaging. In this instance, you realize that the only way to reconstruct a very nice image with some of the most advanced compressive sensing reconstruction algorithms is to... cheat. You first acquire an image, threshold its series expansion, and then take the compressive measurements from that thresholded version. When you do that, you indeed get near-perfect reconstructions, but it is doubtful there is a CS camera that can do that.

The underlying reason for this disillusion in both CS imaging and UQ is that while CS works fine for sparse signals, it is not all that great for unknowns that are merely compressible, i.e. not sparse. In short, if you hope to do well because the object of interest is only compressible in terms of some eigenfunction expansion, you might be making a mistake. In both UQ and imaging, what is missing is a result for compressible signals, and the one we have just does not look good enough. As Paul said earlier, "What is missing?" I think two ideas could take us out of this slump.

One: maybe our sampling scheme for the eigenfunction expansion is chosen too early, and we ought to rethink it in light of the work on infinite-dimensional compressive sensing (see the references below).

Two: the second idea revolves around learning the right analysis operator in both imaging applications and UQ. Unlike the traditional approach in compressive imaging, which relied on an eigenfunction expansion (and eventually led us to this slump), the analysis approach goes for what is obvious: the zeros are on one side of the homogeneous equation fulfilled by the field of interest. They are not epsilons, just zeros, the stuff that makes something sparse. In imaging applications, you traditionally acquire the eigenfunction expansion of a 2D projection of the plenoptic function. The TV norm, a special case of an analysis operator, is successful because the PSF of most cameras is degenerate. What if the PSF were not degenerate? What sort of analysis operator should be used then? In UQ, you try to estimate how changes in the coefficients of the transfer equation will affect the solution of that equation. The question being answered is: what part of the coefficient phase space is consistent with the measurements?

Both problems eventually converge in their need for an analysis approach of the following sort:

    min ||L(x) - c||  subject to  Ax = b

where L represents the discretization of the field transport operator (with unknown coefficients in UQ), c the boundary conditions or sources, A the hardware sensing mechanism, and b the measurements. (A small numerical sketch of this formulation follows the references below.) Thank you Leslie Smith for the fruitful discussion.

References:
- B. Adcock, A. C. Hansen, E. Herrholz, G. Teschke, "Generalized sampling, infinite-dimensional compressed sensing, and semi-random sampling for asymptotically incoherent dictionaries".
- B. Adcock, A. C. Hansen, "Generalized Sampling and Infinite Dimensional Compressed Sensing".
- Michael Elad, "K-SVD Dictionary-Learning for Analysis Sparse Models".
- Habib N. Najm, "Uncertainty Quantification and Chemical Model Reduction".
- Alireza Doostan, "Mini-tutorial on Uncertainty Quantification: Intrusive UQ Propagation Methods" (similar material can be found in "Compressive Sensing and UQ").
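For concreteness, here is a minimal numerical sketch of the formulation above (my addition, not part of the original post). It takes L to be a 1-D first-difference operator, the TV-norm special case mentioned earlier, and rewrites min ||Lx||_1 subject to Ax = b as a linear program. All sizes are illustrative, and the code assumes Matlab's Optimization Toolbox (linprog):

```matlab
% Minimal sketch of the analysis-operator formulation, with L a 1-D
% first-difference operator (the TV-norm special case). Illustrative sizes;
% requires the Optimization Toolbox (linprog).
rng(0);
n = 128; m = 64;
A = randn(m, n);                                      % hardware sensing mechanism
L = spdiags([-ones(n,1) ones(n,1)], [0 1], n-1, n);   % first differences
x_true = cumsum([0; (rand(n-1,1) < 0.05) .* randn(n-1,1)]);  % piecewise constant
b = A * x_true;                                       % measurements

% Solve  min ||L x||_1  s.t.  A x = b  as a linear program over [x; t]:
% minimize sum(t) with -t <= L x <= t and A x = b.
p = n - 1;
f     = [zeros(n,1); ones(p,1)];
Aineq = [ L, -speye(p);
         -L, -speye(p)];
bineq = zeros(2*p, 1);
Aeq   = [A, zeros(m, p)];
beq   = b;
z = linprog(f, Aineq, bineq, Aeq, beq);
x_hat = z(1:n);
fprintf('relative error: %.2e\n', norm(x_hat - x_true)/norm(x_true));
```

With only a handful of jumps and twice as many measurements, this typically recovers the piecewise-constant signal up to numerical tolerance, which is exactly the "zeros, not epsilons" behavior the analysis view exploits.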
Another BSBL algorithm derived
oliviazhang · 2012-11-26 13:15
Recently Benyuan Liu and I, together with other collaborators, wrote a paper that has been submitted to IEEE SPL. In it, yet another new BSBL algorithm is derived. Although its recovery performance is slightly weaker than that of BSBL-EM, BSBL-BO, etc., it is very fast, which makes it well suited to large-scale problems. The paper's information is as follows:

Fast Marginalized Block SBL Algorithm, by Benyuan Liu, Zhilin Zhang, Hongqi Fan, Zaiqi Lu, Qiang Fu
Download link: http://arxiv.org/abs/1211.4909

Abstract: The performance of sparse signal recovery can be improved if both the sparsity and the correlation structure of signals can be exploited. One typical correlation structure is intra-block correlation in block sparse signals. To exploit this structure, a framework called block sparse Bayesian learning (BSBL) has been proposed recently. Algorithms derived from this framework show promising performance, but their speed is not very fast, which limits their applications. This work derives an efficient algorithm from this framework, using a marginalized likelihood maximization method, so that it can exploit both the block sparsity and the intra-block correlation of signals. Compared to existing BSBL algorithms, it has comparable recovery performance but is much faster, making it more suitable for recovering large-scale datasets.
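To make the intra-block correlation structure concrete, here is a small illustrative sketch (my addition, not the paper's code) that generates the kind of signal these algorithms target: block sparse, with the entries inside each nonzero block correlated through an AR(1) covariance. All sizes and the correlation level are assumptions chosen for illustration:

```matlab
% Illustrative sketch (not the paper's code): generate a block-sparse signal
% whose few nonzero blocks carry AR(1) intra-block correlation.
rng(1);
N = 512; blkLen = 16; numBlocks = N / blkLen;
activeBlocks = randperm(numBlocks, 4);    % locations of the 4 nonzero blocks
r = 0.9;                                  % intra-block correlation level
B = toeplitz(r.^(0:blkLen-1));            % AR(1) covariance: B(i,j) = r^|i-j|
Bchol = chol(B, 'lower');
x = zeros(N, 1);
for k = activeBlocks
    idx = (k-1)*blkLen + (1:blkLen);
    x(idx) = Bchol * randn(blkLen, 1);    % correlated samples within the block
end
M = 128;
A = randn(M, N);                          % random Gaussian sensing matrix
y = A * x;                                % compressed measurements
```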
Our BSBL paper has been accepted by IEEE Trans. on Signal Processing
oliviazhang · 2012-11-17 15:01
Our BSBL paper has finally been accepted by IEEE Trans. on Signal Processing, many months after it was first submitted. The paper is:

Zhilin Zhang, Bhaskar D. Rao, "Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation", to appear in IEEE Trans. on Signal Processing.

The paper can be downloaded from arXiv: http://arxiv.org/abs/1201.0862
BSBL and the code for the related experiments can be downloaded here: http://dsp.ucsd.edu/~zhilin/BSBL.html

The abstract is as follows: We examine the recovery of block sparse signals and extend the framework in two important directions: one by exploiting the signals' intra-block correlation and the other by generalizing the signals' block structure. We propose two families of algorithms based on the framework of block sparse Bayesian learning (BSBL). One family, directly derived from the BSBL framework, requires knowledge of the block structure. The other family, derived from an expanded BSBL framework, is based on a weaker assumption about the block structure and can be used when the block structure is completely unknown. Using these algorithms, we show that exploiting intra-block correlation is very helpful for improving recovery performance. These algorithms also shed light on how to modify existing algorithms, or design new ones, to exploit such correlation and improve performance.

The following related application papers have all been published in, or accepted by, IEEE Trans. on Biomedical Engineering:

Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao, "Compressed Sensing for Energy-Efficient Wireless Telemonitoring of Non-Invasive Fetal ECG via Block Sparse Bayesian Learning", IEEE Trans. Biomedical Engineering, accepted.

Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao, "Compressed Sensing of EEG for Wireless Telemonitoring with Low Energy Consumption and Inexpensive Hardware", IEEE Trans. Biomedical Engineering, accepted.

With this, the BSBL work has reached a natural stopping point. From the beginning of the BSBL research until now, this framework has in total produced:
1. Four papers published in or accepted by IEEE journals, namely IEEE Journal of Selected Topics in Signal Processing, IEEE Trans. on Signal Processing, and IEEE Trans. on Biomedical Engineering;
2. Several conference papers (CVPR, MICCAI, ICASSP, etc.);
3. Four journal papers under review;
4. A five-figure (USD) cash award from industry and a job offer with a six-figure (USD) annual salary;
5. 300,000 RMB in funding, won by helping friends in China apply BSBL to a certain application area;
6. One US patent (pending).
Yes, let's Move From Sparsity to Structured Sparsity
oliviazhang 2012-10-2 19:31
Note: this post is reposted from my English blog: http://marchonscience.blogspot.com/

Igor wrote a cool post on Nuit Blanche yesterday, Pushing the Boundaries in Compressive Sensing, posing the following questions:

Given that compressive sensing is based on an argument of sparsity;
Given that sparsity is all around us, because most data seem to be sparse or compressible in a wavelet basis;
Given that wavelet decomposition shows not just simple compressibility but structured compressibility of most datasets;
Given that we now have some results showing better compressive sensing with a structured sparsity argument;
isn't it time we stopped talking about RIP? The Donoho-Tanner phase transition? L_1 recovery?

My answer is "YES"! There are a lot of practical scenarios where the signals (or the regression coefficients) have rich structure. Merely exploiting sparsity, without exploiting structure, is far from enough! Even in some cases where structure information is not obvious and traditional L1 recovery algorithms are generally used, we can still find a way to use structured-sparsity-based algorithms. In the following I'll give an example on the recovery of compressed audio signals, which is a typical application. I have seen a number of works that used traditional L1 algorithms for this job. Let's see how an algorithm that exploits block structure can improve the result.

The test signal was an audio recording of length 81920. In the compression stage it was evenly divided into T segments x_i (i = 1, 2, ..., T). We considered five cases, with T set to 160, 80, 40, 20, and 10; accordingly, the segments had lengths N = 512, 1024, 2048, 4096, and 8192. Each segment x_i was compressed into y_i. The sensing matrix A was a random Gaussian matrix of size N/2 by N. In the recovery stage, we first recovered the DCT coefficients of each segment and then recovered the original segment from them. This is the traditional method used with most L1 algorithms for this task, since audio signals are believed to be sparser in the DCT domain than in the time domain. We used six algorithms that do not consider any structure information: Smoothed L0 (SL0), EM-BG-AMP, SPGL-1, BCS, OMP, and SP. Their recovery performance was measured by total runtime and normalized MSE (in dB).

[Table: performance of all algorithms, measured in terms of normalized MSE in dB (and speed in seconds).]

From the table we can see that to obtain good quality we need to increase the segment length. The cost, however, is that the recovery time increases significantly. For example, to achieve a quality of MSE = -21 dB, SL0 needed to recover segments of length 8192, and the total time was 2725 seconds!

Now let's run the BSBL-BO algorithm, a sparse Bayesian learning algorithm exploiting block structure and intra-block correlation, on the same task. As I have said in many places, BSBL-BO asks the user to define a block partition, and the partition does not need to be consistent with the true block structure of the signal. So we defined the block partition as a uniform grid of starting indices (in Matlab notation, e.g. blkStartLoc = 1:32:N; the exact grid is not critical). The complete command for recovering the i-th segment was:

Result = BSBL_BO(A, y_i, blkStartLoc, 0, 'prune_gamma', -1, 'max_iters', 15);

The recovery result for the case N = 512 is given in the table above. Clearly, by exploiting the structure information, BSBL-BO achieved -23.3 dB at a cost of only 82 seconds! The quality was 2 dB higher than SL0's, at only about 3% of SL0's runtime. What does this result tell us?
First, exploiting both structure and sparsity is very important; merely exploiting sparsity is outdated in many practical applications. Second, conclusions about which algorithms are fast or slow mean little unless one states which practical task the algorithms are asked to finish; the answer can vary from task to task. For example, in the above experiment SL0 was much faster than BSBL-BO for the same segment length N. However, if the goal is to achieve good quality, it took much longer than BSBL-BO (and could not even reach BSBL-BO's quality, no matter how large N was).

Note that I was not trying to compare BSBL-BO against the other algorithms; such a comparison would not be meaningful, since BSBL-BO exploits structure while the other algorithms do not. The point is that even in traditional applications where L1 algorithms are often used, exploiting structure information can give you a better result!

PS: Several months ago we published a review paper on SBL algorithms exploiting structure, titled "From Sparsity to Structured Sparsity: Bayesian Perspective", in the Chinese journal Signal Processing. One can download the paper online.
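For readers who want to reproduce the setup, here is a minimal sketch of the N = 512 pipeline described above. It is my addition: the random stand-in signal, the block grid blkStartLoc, and the assumption that BSBL_BO (from http://dsp.ucsd.edu/~zhilin/BSBL.html) returns a struct whose field x holds the recovered segment are all illustrative. Substitute a real audio file for the random stand-in to get meaningful MSE numbers:

```matlab
% Minimal sketch of the audio experiment above (illustrative stand-in data;
% assumes the BSBL-BO code from http://dsp.ucsd.edu/~zhilin/BSBL.html is on
% the Matlab path and that BSBL_BO returns a struct with field .x).
rng(2);
N = 512;                            % segment length (the N = 512 case)
x = randn(20*N, 1);                 % stand-in for the real audio recording
T = length(x) / N;                  % number of segments
A = randn(N/2, N);                  % random Gaussian sensing matrix, N/2 x N
blkStartLoc = 1:32:N;               % user-defined block partition; it need
                                    % not match any true block structure
x_rec = zeros(size(x));
for i = 1:T
    x_i = x((i-1)*N + (1:N));
    y_i = A * x_i;                  % compression stage
    Result = BSBL_BO(A, y_i, blkStartLoc, 0, ...
                     'prune_gamma', -1, 'max_iters', 15);
    x_rec((i-1)*N + (1:N)) = Result.x;
end
fprintf('normalized MSE: %.1f dB\n', ...
        10*log10(norm(x_rec - x)^2 / norm(x)^2));
```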
[Repost] Nuit Blanche's advice for students preparing to enter compressed sensing
oliviazhang 2012-8-10 06:42
Igor, the author of Nuit Blanche, recently wrote a post offering several pieces of advice to students who plan to enter the field of compressed sensing. The full post is here: http://nuit-blanche.blogspot.com/2012/08/advice-for-undergraduate.html

Since the site is hard to reach from within China, I repost it below:

Advice for an Undergraduate, by Igor Carron

I have been asked by an undergraduate student to help him out on finding a PhD studentship in Compressive Sensing. I am not sure I am the right person to answer that question, but here is what I know:

- You need a LinkedIn page, preferably one that has an English version.
- You need a webpage where you can tell the world about even the most insignificant technical projects you have ever undertaken.
- Both your webpage and the LinkedIn page need to be cross-linked.
- You might even consider starting an account on GitHub.
- If you are able to do so time-wise, you might even consider blogging on technical issues. An impressive example of somebody who has done well in this regard is Danny Bickson. Danny was making his code available when he was a student; he then started a blog when GraphLab needed to be explained to newcomers, and he just organized a workshop with more than 300 people from Silicon Valley and other Fortune 500 companies who now know who he is and probably want to work with him.
- If English is not your first language and you have no way of getting corrections from a friend, I suggest you come back to your past blog entries or web pages often and correct them for style.

All these things are the seeds of a conversation, and only that. Getting that conversation started is a little bit like flirting: you need to interact directly with potentially interested PhD supervisors. Face-to-face is better than e-mail. A friend of mine got into MIT some years back by coming to the department he was interested in a year earlier on a tourist visa. He visited his future PhD advisor, who in turn invited him on an unpaid internship the following summer. When you are in a room and tell somebody you crossed an ocean for him or her, trust me, the conversation gets going.

For most people that may be too hard, so the next best thing is to start conversations online. How do you do that? Well, kindly interacting on the LinkedIn Compressive Sensing group (1683 members) or the Advanced Matrix Factorization group (388 members) would be a good start. Be gentle, people are busy; you cannot start a conversation with "I need to find all the references on compressive sensing and face detection" or "I want to do a PhD in compressive sensing in wheat genomics". You don't start a conversation like that when you are flirting, so why should it be the case here?

Typos: if you want to send the signal to a specific person that you really care about their work, read through the preprints they have listed on their website and find the typos. Send them an e-mail about it. It's a great conversation starter.

If you have already done something that is even remotely relevant to compressive sensing, like using an Arduino and a Kinect to produce a calibration mechanism for a random imaging system, implementing a simple algorithm in Octave, running different algorithms on a set of benchmarks, or producing a new set of CS examples on scikit-learn, then, as I have done in the past, I'll be the first to feature it on Nuit Blanche. To have an audience, trust me, the hardest part is the first million three hundred and thirty-three thousand page views. We are already past that here.
Approximate Message Passing vs. Sparse Bayesian Learning?
oliviazhang 2012-8-7 05:19
Last week Nuit Blanche ran a post, "More intra-block correlation and sparse sensing matrices", built around an e-mail sent to the blog's author by Phil of The Ohio State University, introducing the compressed sensing (CS) algorithms based on Approximate Message Passing (AMP) that Phil's group has recently submitted. The e-mail ends with: "we get performance that meets or exceeds that of Bayesian or variational techniques, but with orders-of-magnitude reduction in complexity"; in other words, AMP algorithms outperform sparse Bayesian learning (SBL) algorithms while being orders of magnitude faster. That sentence deserves scrutiny and is, as stated, not correct. I suspect what Phil meant is that, under the experimental conditions and application scenarios in their papers (e.g., video/image coding), AMP is both better and faster than current SBL algorithms. In fact, in many situations and practical applications, SBL performs far better than AMP.

The first situation is when the columns of the sensing matrix (or dictionary matrix) are highly correlated. In that case the vast majority of algorithms, AMP included, perform very poorly, while SBL algorithms hold up well. This situation is common in source localization and tracking (e.g., direction-of-arrival estimation, brain source localization, radar detection, ...), biomedical data analysis (e.g., feature extraction from gene expression data or from neuroimaging data), and computer vision (e.g., sparse-representation-based face recognition, expression recognition, visual tracking, and so on). In fact, the advantage of SBL in this situation has already been partially proved by David Wipf, a senior member of our lab: in his NIPS 2011 paper "Sparse Estimation with Structured Dictionaries", he showed mathematically why SBL outperforms Lasso when the sensing matrix (or dictionary matrix) is highly coherent.

To give readers a feel for how poorly the various compressed sensing algorithms do in this situation, I provide a Matlab simulation program for download: http://dsp.ucsd.edu/~zhilin/papers/Localization.zip . It simulates a simple multiple-snapshot EEG source localization experiment, in which the sensing matrix (called the lead-field matrix in the EEG field) is derived from a lead-field matrix constructed from a head model (usually about 100 x 10000; I shrank it to 50 x 400 so that results can be obtained in about one second). The correlation between columns of this sensing matrix can exceed 0.99. I compared three well-known algorithms from our lab: M-FOCUSS, M-SBL, and my T-MSBL. You can see that the SBL algorithms do far better than M-FOCUSS. You can also plug your favorite algorithms into the program and see how poorly other algorithms perform in this situation.

In addition, at http://dsp.ucsd.edu/~zhilin/papers/Experiment.rar I provide a single-snapshot source localization simulation that compares the performance of 14 algorithms. The SBL algorithms perform best, especially T-MSBL. A note accompanying this experiment can be downloaded here: http://dsp.ucsd.edu/~zhilin/papers/comparison.pdf

Beyond the situation above, SBL algorithms also show a clear advantage when the signal is not very sparse, or not sparse at all. In my paper on telemonitoring of physiological signals, "Low Energy Wireless Body-Area Networks for Fetal ECG Telemonitoring via the Framework of Block Sparse Bayesian Learning", extensive experiments show that BSBL can directly recover non-sparse signals with high quality (i.e., without resorting to other techniques). My paper "Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation" also shows that, in the noiseless case and with the block structure known, BSBL can exactly recover a length-N signal with M nonzero elements from only M measurements (N much larger than M, e.g., N = 10M, with the locations of the nonzero elements unknown). So extensive experiments show that SBL has an obvious advantage for signals that are non-sparse or not very sparse. Keep in mind that in the vast majority of practical problems, signals are not particularly sparse. In the compressed sensing field, the traditional route is to look for a dictionary matrix under which the signal has a sparse representation. But that direction is still developing, and for many non-sparse signals no effective sparse representation has been found, whereas SBL achieves a breakthrough from another direction. The fact that BSBL can exactly recover a length-N signal with M nonzero elements from only M measurements can actually be proved with Theorem 1 of my paper "Sparse Signal Recovery with Temporally Correlated Source Vectors Using Sparse Bayesian Learning", so I will not expand on it here.

Precisely because in practical problems the columns of the sensing matrix (or dictionary matrix) are often highly correlated and the signal is often not very sparse, most existing algorithms cannot reach a particularly satisfactory result (or leave considerable room for improvement), while SBL shows a large advantage. As a branch of compressed sensing / sparse signal processing, SBL is therefore being applied in more and more practical problems. Our lab, one of the main labs developing SBL algorithms, has started collaborations with a number of universities and research institutes in China, the US, and Europe. I believe a large batch of application papers will appear next year.

Admittedly, SBL is not fast, but that is changing. One reason is that SBL can be converted into an iteratively reweighted algorithm. For example, in my paper "Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation", the BSBL algorithm can be implemented by running a group Lasso algorithm for 3-4 iterations; since faster group Lasso algorithms are proposed every year, BSBL algorithms can be expected to become faster and faster as well. Moreover, quite a few fast SBL algorithms already exist, such as the BSBL-BO algorithm in the paper above, and the fast fixed-point algorithm in our recent CVPR 2012 paper "Sparse Bayesian Multi-Task Learning for Predicting Cognitive Outcomes from Neuroimaging Measures in Alzheimer's Disease". I have several even faster algorithms that should be submitted in about a month.

To sum up, prompted by Phil's potentially misleading sentence, I have briefly explained several advantages of SBL here. Interested readers can visit my pages http://dsp.ucsd.edu/~zhilin/ or https://sites.google.com/site/researchbyzhang/ to learn about SBL algorithms and their recent development in detail, and to download the papers (and code) mentioned in this post. In addition, Prof. Hong Sun and Lei Yu of Wuhan University and I have written an invited review in Chinese, "From Sparsity to Structured Sparsity: Bayesian Perspective" (the second half covers the series of SBL algorithms developed in our lab), published in a recent issue of the journal Signal Processing; it can be downloaded online.
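To give a feel for the coherence levels mentioned above, here is a small illustrative sketch (my addition). A real lead-field matrix comes from a head model; the sketch mimics its nearly parallel neighboring columns by smoothing a random matrix along the column index, and compares the resulting mutual coherence with that of a random Gaussian matrix of the same size. The smoothing width s is an assumption chosen for illustration:

```matlab
% Illustrative sketch: mutual coherence (largest absolute correlation between
% two distinct columns) of a random Gaussian matrix vs. a smoothed stand-in
% for a lead-field matrix (a real lead-field matrix comes from a head model).
rng(3);
m = 50; n = 400;                           % same size as the shrunken matrix
nrm = @(A) A ./ repmat(sqrt(sum(A.^2, 1)), size(A, 1), 1);
mu  = @(A) max(max(abs(nrm(A)' * nrm(A)) - eye(size(A, 2))));

G = randn(m, n);                           % random Gaussian matrix
fprintf('Gaussian matrix coherence:      %.3f\n', mu(G));

s = 10;                                    % smoothing width (assumed)
K = exp(-bsxfun(@minus, (1:n)', 1:n).^2 / (2*s^2));  % Gaussian smoothing kernel
Lf = randn(m, n) * K;                      % coherent stand-in: neighboring
                                           % columns are nearly parallel
fprintf('Lead-field stand-in coherence:  %.3f\n', mu(Lf));
% Typically prints roughly 0.6 for the Gaussian matrix and above 0.99 for the
% smoothed one, matching the >0.99 column correlation described above.
```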
