An online sparse coding learning algorithm; many recent application papers use this algorithm. The SPArse Modelling Software (SPAMS) can be downloaded here. It includes fast implementations of LARS, OMP, a dictionary learning algorithm and its variants for NMF and sparse PCA, as well as efficient sparse solvers based on proximal methods. June, 2011: SPAMS is now released under an open-source licence. This should be good news for Mac, Windows, R, Python, and C++ users. The package contains scripts for compiling the library under Linux. If you manage to write a script for compiling it under Mac OS or Windows, I would be glad to include it in the release. The denoising code of my ICCV'09 paper can be found here. The package contains binary files for 64-bit Linux computers and an instruction file. Academic use only. The demosaicking code of my ICCV'09 paper can be found here. The package contains binary files for 64-bit Linux computers and an instruction file. Academic use only. The KSVD source code of my IEEE-TIP and SIAM-MMS papers from 2008 can be found here. This package is no longer maintained and I will not respond to questions about the source code. If you need Linux binaries to do experiments, please contact me.
From my first blog article, An Introduction to Sparse Representation, we have seen that sparse coding of input data sets is very useful for signal processing and machine learning. The zero norm gives the direct formulation of the sparse coding problem. However, D. L. Donoho proved that the 1-norm solution is also the sparsest solution, i.e., it equals the zero-norm solution.

Standard BP. The form without noise:

    min ||x||_1  subject to  Ax = b.    (1)

Basis pursuit denoise (three forms). BP is relaxed to obtain the basis pursuit denoise (BPDN) problem:

    min ||x||_1  subject to  ||Ax - b||_2 <= σ.    (2)

The parameter σ estimates the noise level of the input data sets; problem (2) becomes a standard BP problem when σ equals zero. E. van den Berg et al. have proposed an efficient algorithm for this problem.

Applying a Lagrange multiplier λ to problem (2) turns it into problem (3):

    min (1/2) ||Ax - b||_2^2 + λ ||x||_1.    (3)

Problem (1) was first proposed by Chen, Donoho, and Saunders. Problem (3) was proposed by R. Tibshirani as the Lasso. For the case where an estimate σ of the noise level is known, Chen, Donoho, et al. argue that the choice λ = σ sqrt(2 log m), with m the number of columns of A, has important optimality properties.

Reference
D. L. Donoho, For most large underdetermined systems of linear equations the minimal 1-norm solution is also the sparsest solution, Comm. Pure Appl. Math., 59 (2006), pp. 797–829.
S. S. Chen, D. L. Donoho, and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM Rev., 43 (2001), pp. 129–159.
E. van den Berg and M. P. Friedlander, Probing the Pareto frontier for basis pursuit solutions, SIAM J. Sci. Comput., 31 (2008), pp. 890–912.
R. Tibshirani, Regression shrinkage and selection via the Lasso, J. Roy. Statist. Soc. Ser. B, 58 (1996), pp. 267–288.
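Problem (3) is exactly the form handled by proximal methods. Below is a minimal NumPy sketch of ISTA (iterative shrinkage-thresholding), one such proximal method; the problem sizes, the value of λ, and the random test signal are illustrative assumptions for the demo, not part of the original post.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Solve min_x 0.5 * ||Ax - b||_2^2 + lam * ||x||_1 by iterative
    shrinkage-thresholding (a basic proximal-gradient method)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

# Demo (assumed setup): recover a 2-sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)             # unit-norm columns ("atoms")
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
print(np.flatnonzero(np.abs(x_hat) > 0.5))   # support of the estimate
```

With a small λ and noiseless data, the recovered support matches the true one; larger λ trades data fidelity for sparsity.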
1. Uniqueness

Theorem 2.1 (uncertainty principle 1). For an arbitrary pair of orthogonal bases Ψ, Φ with mutual coherence μ(A), where A = [Ψ, Φ], and for an arbitrary nonzero vector b ∈ IR^n with representations α and β respectively (b = Ψα = Φβ), the following inequality holds true:

    ||α||_0 + ||β||_0 >= 2 / μ(A).

Theorem 2.2 (uncertainty principle 2). Any two distinct solutions x_1, x_2 of the linear system Ax = b cannot both be very sparse, governed by the following uncertainty principle:

    ||x_1||_0 + ||x_2||_0 >= spark(A).

We refer to this result as an uncertainty principle for redundant solutions, as we discuss here solutions of the underdetermined system.

(1) Via the spark: if a solution x of Ax = b satisfies ||x||_0 < spark(A)/2, then x is the sparsest solution.
(2) Via mutual coherence: if a solution x satisfies ||x||_0 < (1 + 1/μ(A))/2, then x is the sparsest solution.

2. Equivalence

If a solution x of Ax = b satisfies ||x||_0 < (1 + 1/μ(A))/2, then the 1-norm problem yields the same result as the 0-norm problem, and it is the sparsest solution.

3. Stability

In the noisy setting ||Ax - b||_2 <= ε, one can prove that if x_0 satisfies ||x_0||_0 = k < (1 + 1/μ(A))/2, then any solution x of the noise-aware 0-norm problem satisfies

    ||x - x_0||_2^2 <= 4ε^2 / (1 - μ(A)(2k - 1)).

4. OMP (Orthogonal Matching Pursuit)

At each iteration, OMP selects the column of A most correlated with the current residual, adds it to the support, and re-estimates the solution by least squares on the selected columns.

Reference:
A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Vol. 1, 1993, pp. 40–44.
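The OMP algorithm can be sketched in a few lines of NumPy: pick the atom most correlated with the residual, then re-fit on the selected support by least squares. This is a minimal illustration, not a reference implementation; the matrix sizes and the random test signal are assumptions for the demo.

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal Matching Pursuit: greedily select the column of A most
    correlated with the current residual, then refit by least squares."""
    residual = b.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

# Demo (assumed setup): exact recovery of a 3-sparse vector, noiseless case.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(100)
x_true[[5, 40, 61]] = [2.0, -1.5, 3.0]
b = A @ x_true
x_hat = omp(A, b, k=3)
print(np.flatnonzero(x_hat))
```

Because the least-squares refit makes each new residual orthogonal to the selected atoms, OMP never picks the same atom twice.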
Preliminaries. A full-rank matrix A ∈ R^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries. Can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution?

Sparse solutions of linear systems of equations. Consider a full-rank matrix A ∈ R^{n×m} with n < m, and define the underdetermined linear system of equations Ax = b. This system has infinitely many solutions; if one desires to narrow the choice to one well-defined solution, additional criteria are needed. Here is a familiar way to do this. Introduce a function J(x) to evaluate the desirability of a would-be solution x, with smaller values being preferred. Define the general optimization problem (P_J):

    (P_J)   min J(x)  subject to  Ax = b.

Selecting a strictly convex function J(·) guarantees a unique solution. Taking the squared 2-norm J(x) = ||x||_2^2 as an example, the unique minimizer of (P_2) is given explicitly by

    x = A^T (A A^T)^{-1} b = A^+ b.

Measurement. The vector x is sparse if there are few nonzeros among the m possible entries. It will be convenient to introduce the 0 "norm"

    ||x||_0 = #{i : x_i ≠ 0}.

Thus if ||x||_0 << m, x is sparse. Consider the problem (P_0) obtained from the general prescription (P_J) with the choice J(x) = J_0(x) ≡ ||x||_0; explicitly,

    (P_0)   min ||x||_0  subject to  Ax = b.

The sparsity optimization (P_0) looks superficially like the minimum-2-norm problem (P_2), but the notational similarity masks some startling differences. The solution to (P_2) is always unique and is readily available through now-standard tools from computational linear algebra. (P_0) has probably been considered a possible goal from time to time for many years, but initially it seems to pose many conceptual challenges that have inhibited its widespread study and application. These are rooted in the discrete and discontinuous nature of the 0 norm; the standard convex analysis ideas which underpin the analysis of (P_2) do not apply.
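The closed-form (P_2) solution is easy to check numerically. The small NumPy demo below (sizes, seed, and test vector are arbitrary assumptions) shows that the pseudoinverse solution solves Ax = b with the smallest 2-norm, yet is fully dense, even when a 1-sparse solution of the same system exists.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 30))   # underdetermined: n = 10 < m = 30
x_sparse = np.zeros(30)
x_sparse[4] = 1.0                   # a 1-sparse solution of Ax = b
b = A @ x_sparse

# The (P_2) minimizer in closed form: x = A^T (A A^T)^{-1} b = A^+ b.
x_l2 = np.linalg.pinv(A) @ b

print(np.count_nonzero(np.abs(x_l2) > 1e-8))   # generically all 30 entries
print(np.linalg.norm(x_l2), np.linalg.norm(x_sparse))
```

The min-2-norm solution spreads energy over every coordinate; picking the sparse solution out of the infinite solution set requires (P_0), not (P_2).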
Many of the most basic questions about (P_0) seem immune to immediate insight:

• Can uniqueness of a solution be claimed? Under what conditions?
• If a candidate solution is available, can we perform a simple test to verify that the solution is actually the global minimizer of (P_0)?

Perhaps, in some instances, with very special matrices A and left-hand sides b, ways to answer such questions are apparent, but for general classes of problem instances (A, b), such insights initially seem unlikely.

The figure above illustrates the p-norm balls: only for p <= 1 does the ball touch the constraint set at a point on the axes, so the minimizer can be represented with only one nonzero. On the other hand, for p >= 1 the norm is convex, so the problem can be treated as a convex optimization problem. Since the 0-norm solution is hard to obtain, we can use the 1 norm as an alternative; the proof of the equivalence of the zero-norm and one-norm problems can be found in the references below.

Two ways to get the sparse solution:
(1) Greedy algorithms: OMP.
(2) Basis pursuit: IRLS, LARS, the feature-sign search algorithm.

Reference:
A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Vol. 1, 1993, pp. 40–44.
H. Lee, A. Battle, R. Raina, and A. Y. Ng, "Efficient sparse coding algorithms," in NIPS, 2007.
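Of the basis-pursuit solvers listed above, IRLS is the easiest to sketch: the 1-norm objective is replaced by a sequence of weighted 2-norm problems, each with a closed-form solution. The NumPy sketch below is a minimal illustration; the fixed smoothing constant eps, the iteration count, and the test signal are assumptions for the demo.

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-6):
    """Approximate min ||x||_1 s.t. Ax = b by iteratively reweighted least
    squares: each step solves a weighted minimum-2-norm problem in closed
    form, with weights taken from the current iterate's magnitudes."""
    x = np.linalg.pinv(A) @ b                 # start from the min-2-norm solution
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)          # small eps keeps the system solvable
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, b)
    return x

# Demo (assumed setup): the 1-norm solution coincides with the sparsest one.
rng = np.random.default_rng(3)
A = rng.standard_normal((25, 60))
x_true = np.zeros(60)
x_true[[7, 33]] = [1.0, -2.0]
b = A @ x_true
x_hat = irls_l1(A, b)
print(np.flatnonzero(np.abs(x_hat) > 1e-2))
```

Every IRLS iterate satisfies Ax = b exactly; the reweighting progressively suppresses coordinates with small magnitude, driving the iterate toward the 1-norm minimizer.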