ScienceNet


Tag: coding

Related posts

Forum | Author | Replies/Views | Last post

No related content.

Related blog entries

[CODE] Online sparse coding and its code
Popularity 1 qianli8848 2012-6-8 08:42
An online sparse coding learning algorithm; many recent application papers use it. The SPArse Modelling Software (SPAMS) can be downloaded here. It includes fast implementations of LARS, OMP, a dictionary learning algorithm and its variants for NMF and sparse PCA, as well as efficient sparse solvers based on proximal methods. June 2011: SPAMS is now released under an open-source license, which should be good news for Mac, Windows, R, Python, and C++ users. The package contains scripts for compiling the library under Linux; if you manage to write a script for compiling it under Mac OS or Windows, I would be glad to include it in the release. The denoising code of my ICCV'09 paper can be found here; the package contains binary files for 64-bit Linux computers and an instruction file (academic use only). The demosaicking code of my ICCV'09 paper can be found here; it likewise contains 64-bit Linux binaries and an instruction file (academic use only). The K-SVD source code of my IEEE-TIP and SIAM-MMS papers from 2008 can be found here; that package is no longer maintained and I will not respond to questions about its source code. If you need Linux binaries for experiments, please contact me.
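As a rough illustration of what such an online dictionary-learning toolbox does, here is a minimal sketch using scikit-learn's MiniBatchDictionaryLearning (an implementation of the same family of online algorithms) rather than SPAMS itself; all sizes and parameters are illustrative:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))          # 500 signals of dimension 64

dico = MiniBatchDictionaryLearning(
    n_components=100,    # overcomplete dictionary: 100 atoms for 64 dims
    alpha=1.0,           # l1 sparsity penalty
    batch_size=10,       # online / mini-batch updates
    random_state=0,
)
codes = dico.fit(X).transform(X)            # sparse codes, shape (500, 100)
D = dico.components_                        # learned dictionary, (100, 64)

print(codes.shape, D.shape)
print("mean nonzeros per code:", np.mean(np.count_nonzero(codes, axis=1)))
```

Each signal ends up represented by a small number of active atoms, which is exactly the sparse-coding regime the post describes.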
Category: CODE | 12561 reads | 1 comment
Past International Symposia on Network Coding (NetCod)
Popularity 1 huangfuqiang 2012-3-26 12:07
2012 2011 2010 2009 2008 2007 2006 2005
Category: Computer network theory and engineering | 4455 reads | 2 comments
Basis Pursuit
Popularity 1 zhenliangli 2012-3-24 20:03
From my first blog article, An introduction to sparse representation, we have seen that sparse coding of the input data is very useful in signal processing and machine learning. The l0 "norm" gives the direct formulation of the sparse coding problem; however, D. L. Donoho proved that the l1 solution is also the sparsest solution, i.e. it coincides with the l0 solution [1].

Standard BP (the noiseless form):

  min_x ||x||_1  subject to  Ax = b.    (1)

Basis pursuit denoise. BP is relaxed to obtain the basis pursuit denoise (BPDN) problem:

  min_x ||x||_1  subject to  ||Ax - b||_2 <= sigma.    (2)

The parameter sigma estimates the noise level of the input data; problem (2) reduces to standard BP when sigma equals zero. E. van den Berg et al. proposed an efficient algorithm for this problem [3]. Introducing a Lagrange multiplier lambda turns problem (2) into the penalized form

  min_x (1/2)||Ax - b||_2^2 + lambda ||x||_1.    (3)

Form (1) was first proposed by Chen, Donoho, and Saunders [2]; the closely related Lasso formulation is due to R. Tibshirani [4]. For the case where an estimate of the noise level is known, Chen, Donoho, et al. argue that the corresponding choice of lambda has important optimality properties.

References
[1] D. L. Donoho, "For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution," Comm. Pure Appl. Math., 59 (2006), pp. 797-829.
[2] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Rev., 43 (2001), pp. 129-159.
[3] E. van den Berg and M. P. Friedlander, "Probing the Pareto frontier for basis pursuit solutions," SIAM J. Sci. Comput., 31 (2008), pp. 890-912.
[4] R. Tibshirani, "Regression shrinkage and selection via the Lasso," J. Roy. Statist. Soc. Ser. B, 58 (1996), pp. 267-288.
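The penalized form (3) is exactly a Lasso problem, so any Lasso solver applies. A minimal sketch using scikit-learn; note that sklearn's Lasso scales the data-fit term by 1/(2n), so its alpha plays the role of lambda/n, and the sizes and penalty here are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 50, 200, 5                      # 50 equations, 200 unknowns, 5-sparse
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true

# Problem (3): min_x (1/2)||Ax - b||_2^2 + lambda * ||x||_1
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000)
lasso.fit(A, b)
x_hat = lasso.coef_

print("nonzeros in recovered x:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```

With noiseless data and a small penalty, the recovered support concentrates on the few true nonzero entries.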
Category: Signal processing | 7687 reads | 1 comment
Sparse Representation (Cont.)
zhenliangli 2012-3-23 23:28
1. Uniqueness

Theorem 2.1 (Uncertainty Principle 1). For an arbitrary pair of orthogonal bases Psi, Phi forming A = [Psi, Phi] with mutual coherence mu(A), and for an arbitrary non-zero vector b in R^n with representations alpha and beta respectively, the following inequality holds:

  ||alpha||_0 + ||beta||_0 >= 2 / mu(A).

Theorem 2.2 (Uncertainty Principle 2). Any two distinct solutions x_1, x_2 of the linear system Ax = b cannot both be very sparse, governed by the uncertainty principle

  ||x_1||_0 + ||x_2||_0 >= spark(A).

We refer to this result as an uncertainty of redundant solutions, as we discuss here solutions to the underdetermined system. It yields two uniqueness conditions:
(1) via the spark: if a solution x satisfies ||x||_0 < spark(A)/2, it is the unique sparsest solution;
(2) via mutual coherence: if ||x||_0 < (1/2)(1 + 1/mu(A)), it is the unique sparsest solution.

2. Equivalence. Under the same sparsity condition, the l1-norm problem gives the same result as the l0-norm problem, and it is the sparsest solution.

3. Stability. If a k-sparse x_0 satisfies Ax_0 = b and we solve from noisy data with noise level epsilon, the solution x-hat is stable in the sense that ||x-hat - x_0||_2^2 <= 4 epsilon^2 / (1 - mu(A)(2k - 1)), provided mu(A)(2k - 1) < 1.

4. OMP (Orthogonal Matching Pursuit): a greedy algorithm that at each step selects the atom most correlated with the current residual and then re-fits the coefficients by least squares on the selected support.

Reference:
[1] A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34-81, 2009.
[2] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proc. 27th Asilomar Conference on Signals, Systems and Computers, vol. 1, 1993, pp. 40-44.
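The OMP step in section 4 can be sketched directly; this is a minimal textbook implementation (residual correlation plus least-squares refit), not the exact code from the Pati et al. paper:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of A to fit b.

    At each step, pick the column most correlated with the current
    residual, then re-fit b by least squares on the selected support.
    """
    m = A.shape[1]
    support = []
    residual = b.copy()
    x = np.zeros(m)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))    # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(m)
        x[support] = coef
        residual = b - A @ x                          # orthogonal to chosen atoms
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
A /= np.linalg.norm(A, axis=0)                        # unit-norm atoms
x_true = np.zeros(100)
x_true[[3, 41, 77]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = omp(A, b, k=3)
print("recovered support:", sorted(np.flatnonzero(np.abs(x_hat) > 1e-8)))
```

For random matrices of this size, the greedy selection typically identifies the true support in k steps.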
Category: Signal processing | 5267 reads | 0 comments
An introduction to Sparse Representation
Popularity 1 zhenliangli 2012-3-23 22:33
Preliminaries. A full-rank matrix A in R^{n x m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries. Can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution?

Sparse solutions of linear systems of equations. Consider a full-rank matrix A in R^{n x m} with n < m, and define the underdetermined linear system of equations Ax = b. This system has infinitely many solutions; if one desires to narrow the choice to one well-defined solution, additional criteria are needed. Here is a familiar way to do this. Introduce a function J(x) to evaluate the desirability of a would-be solution x, with smaller values being preferred, and define the general optimization problem (P_J):

  min_x J(x)  subject to  Ax = b.

Selecting a strictly convex function J(.) guarantees a unique solution. Taking the squared l2 norm J(x) = ||x||_2^2 as an example, the solution is given explicitly by x-hat = A^T (A A^T)^{-1} b.

Measurement. The vector x is sparse if there are few nonzeros among its m possible entries. It is convenient to introduce the l0 "norm" ||x||_0 = #{i : x_i != 0}; thus if ||x||_0 << m, x is sparse. Consider the problem (P_0) obtained from the general prescription (P_J) with the choice J(x) = J_0(x) = ||x||_0; explicitly,

  (P_0):  min_x ||x||_0  subject to  Ax = b.

This sparsity optimization looks superficially like the minimum-l2-norm problem (P_2), but the notational similarity masks some startling differences. The solution to (P_2) is always unique and is readily available through now-standard tools from computational linear algebra. (P_0) has probably been considered a possible goal from time to time for many years, but initially it seems to pose many conceptual challenges that have inhibited its widespread study and application. These are rooted in the discrete and discontinuous nature of the l0 norm; the standard convex-analysis ideas which underpin the analysis of (P_2) do not apply.
Many of the most basic questions about (P_0) seem immune to immediate insight:
- Can uniqueness of a solution be claimed? Under what conditions?
- If a candidate solution is available, can we perform a simple test to verify that the solution is actually the global minimizer of (P_0)?
Perhaps, in some instances, with very special matrices A and left-hand sides b, ways to answer such questions are apparent, but for general classes of problem instances (A, b), such insights initially seem unlikely.

Geometrically (in the figure above), only for the lp norms with p <= 1 do the level sets meet the constraint set at points on the coordinate axes, so the solution can be represented with a single nonzero; on the other hand, the norms with p >= 1 give a convex optimization problem. If the l0 solution is hard to obtain, we can use the l1 norm as an alternative; the proof of the equivalence of the l0 and l1 problems can be found in [1].

Two ways to get the sparse solution: (1) greedy algorithms, e.g. OMP [2]; (2) basis pursuit, via IRLS, LARS, or the feature-sign search algorithm [3].

Reference:
[1] A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34-81, 2009.
[2] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proc. 27th Asilomar Conference on Signals, Systems and Computers, vol. 1, 1993, pp. 40-44.
[3] H. Lee, A. Battle, R. Raina, and A. Y. Ng, "Efficient sparse coding algorithms," in NIPS, 2007.
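The contrast between (P_2) and the l1 relaxation can be checked numerically: the minimum l2-norm solution is dense, while the minimum l1-norm solution (basis pursuit, recast as a linear program over the stacked variable [x; t] with constraints -t <= x <= t and Ax = b) is sparse. A small sketch with SciPy; sizes are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 20, 60
A = rng.standard_normal((n, m))
x_true = np.zeros(m)
x_true[[5, 17, 42]] = [2.0, -1.5, 1.0]
b = A @ x_true

x_l2 = np.linalg.pinv(A) @ b                      # min ||x||_2 s.t. Ax = b

# Basis pursuit as an LP:  min 1^T t  s.t.  -t <= x <= t,  Ax = b
c = np.concatenate([np.zeros(m), np.ones(m)])     # minimize sum(t)
A_ub = np.block([[np.eye(m), -np.eye(m)],         #  x - t <= 0
                 [-np.eye(m), -np.eye(m)]])       # -x - t <= 0
b_ub = np.zeros(2 * m)
A_eq = np.hstack([A, np.zeros((n, m))])           # Ax = b (t unconstrained here)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * (2 * m))
x_l1 = res.x[:m]

print("l2 nonzeros:", np.count_nonzero(np.abs(x_l2) > 1e-6))
print("l1 nonzeros:", np.count_nonzero(np.abs(x_l1) > 1e-6))
```

The pseudoinverse spreads energy over essentially all coordinates, while the LP solution concentrates on a few, which is the whole point of the l1 relaxation.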
Category: Signal processing | 5417 reads | 2 comments
Pipeline Network Coding for Multicast Streams
hongyanee 2011-10-23 09:17
This paper studies the performance of pipeline network coding for multicast stream distribution in scenarios with high packet-loss rates. Previous network coding research focused on batch network coding, which must wait for a whole batch of packets to arrive before encoding them together. The idea of pipeline network coding is to encode on the fly as packets arrive. Its advantages are: reduced coding delay; further improved throughput; transparency to higher layers; no special hardware required; and easy deployment.

1. In wireless environments, MANETs are susceptible to channel errors, interference, and congestion. Two methods are commonly used for error recovery: Forward Error Correction and ARQ. ARQ is unsuitable for real-time multicast streams, while erasure coding can correct errors by introducing coding redundancy. Network coding likewise linearly encodes several packets into several linear combinations; the difference is that erasure coding performs the encoding at the source node, whereas network coding performs it at intermediate nodes.

2. Researchers therefore introduced batch network coding, i.e. random linear coding of a batch of packets at the source and relay nodes. This scheme has two drawbacks: (1) it introduces encoding and decoding delay, which grows with the batch size; (2) the batch can only be decoded once enough linearly independent packet combinations have arrived.

3. To solve the problems of batch coding, the paper proposes pipeline coding. Instead of waiting for the whole batch before encoding, a new linear combination is generated whenever a new packet arrives, with the coefficients of not-yet-arrived packets set to 0. Encoding and decoding can thus proceed as each new packet arrives. The benefits are: (1) very low encoding and decoding delay; (2) improved throughput; (3) transparency to higher layers; (4) no special hardware required.
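The pipeline encoding/decoding idea in point 3 can be sketched in a toy simulation over GF(2): every arriving packet immediately yields a new random combination whose coefficients for not-yet-arrived packets are 0, and the receiver decodes by Gaussian elimination. This is only an illustration of the principle, not the protocol from the paper; all parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
GEN = 4                                   # generation size (packets)
PKT = 8                                   # payload bits per packet
packets = rng.integers(0, 2, (GEN, PKT))  # source packets as bit vectors

coded = []                                # (coefficient vector, payload) pairs
for arrived in range(1, GEN + 1):
    # Pipeline step: combine only the `arrived` packets seen so far;
    # coefficients of packets that have not arrived yet stay 0.
    coeff = np.zeros(GEN, dtype=int)
    coeff[:arrived] = rng.integers(0, 2, arrived)
    coeff[arrived - 1] = 1                # always include the newest packet
    payload = coeff @ packets % 2
    coded.append((coeff, payload))

# Receiver: solve C x = y over GF(2) by Gaussian elimination.
C = np.array([c for c, _ in coded])
Y = np.array([y for _, y in coded])
M = np.hstack([C, Y]).astype(int)
row = 0
for col in range(GEN):
    pivot = next((r for r in range(row, GEN) if M[r, col]), None)
    if pivot is None:
        continue
    M[[row, pivot]] = M[[pivot, row]]     # move pivot row into place
    for r in range(GEN):
        if r != row and M[r, col]:
            M[r] = (M[r] + M[row]) % 2    # eliminate column in other rows
    row += 1
decoded = M[:, GEN:]
print("decoded correctly:", np.array_equal(decoded, packets))
```

Because each pipeline combination is guaranteed to include the newest packet, the coefficient matrix is unit lower triangular and therefore always decodable once the generation is complete; real schemes use a larger field (e.g. GF(2^8)) so that random combinations are independent with high probability.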
Category: Notes | 5733 reads | 0 comments
Testing and fabrication of the cubic phase plate, the key element in wavefront coding
xiebin 2010-5-25 21:07
Wavefront coding (WFC) is a new technology that has emerged in recent years. It can overcome problems such as defocus and dispersion, and can therefore greatly improve the performance of optical systems such as cameras and microscopes, with broad application prospects. For large-aperture, high-precision systems it can largely compensate for the defocus caused by ambient temperature and vibration, while also reducing the difficulty of alignment. Below are examples from the foreign literature. In short, the imaging of a resolution test target tells the whole story, as shown in the figure below. The key component is the cubic phase plate, which is in fact a freeform surface and is difficult to fabricate; for this reason, domestic research has so far been mainly theoretical simulation. The research team at the Changchun Institute of Optics, Fine Mechanics and Physics (CAS), with its background in high-precision aspheric surfaces and deep foundation in optical fabrication, is well placed to combine theoretical research, freeform-surface fabrication, and practical application. After some research, we have also fabricated a trial piece; the interferograms are shown below, both before and after removal of the tilt error.

Xie Bin
Email: thouandi@tom.com
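The defocus-insensitivity that wavefront coding provides can be illustrated numerically: adding a cubic pupil phase phi(x, y) = alpha(x^3 + y^3) keeps the point-spread function nearly unchanged under defocus, which is what lets digital post-processing restore the image. A rough sketch (units and the strength alpha are illustrative, not tied to any real system):

```python
import numpy as np

N = 128
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1).astype(float)       # circular aperture

def psf(defocus, alpha):
    # Defocus enters as a quadratic pupil phase (in waves),
    # the cubic phase plate as alpha * (x^3 + y^3).
    phase = defocus * (X**2 + Y**2) + alpha * (X**3 + Y**3)
    field = pupil * np.exp(1j * 2 * np.pi * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

def similarity(a, b):                           # normalized correlation of PSFs
    return np.vdot(a / np.linalg.norm(a), b / np.linalg.norm(b)).real

plain = similarity(psf(0.0, 0.0), psf(3.0, 0.0))    # conventional system
coded = similarity(psf(0.0, 20.0), psf(3.0, 20.0))  # with cubic phase plate
print(f"PSF similarity in vs. out of focus: plain={plain:.3f}, coded={coded:.3f}")
```

With a strong cubic term the in-focus and defocused PSFs stay highly correlated, while the conventional PSF changes drastically; this invariance is why a single deconvolution filter works over a large depth of focus.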
Category: Aspheric fabrication and testing techniques | 9684 reads | 0 comments


Powered by ScienceNet.cn

Copyright © 2007- China Science Daily
