Functional Donoho-Elad-Gribonval-Nielsen-Fuchs Sparsity Theorem

Submitted: 04 August 2025. Posted: 05 August 2025.
Abstract
The celebrated breakthrough sparsity theorem, obtained independently by Donoho and Elad \textit{[Proc. Natl. Acad. Sci. USA, 2003]}, Gribonval and Nielsen \textit{[IEEE Trans. Inform. Theory, 2003]}, and Fuchs \textit{[IEEE Trans. Inform. Theory, 2004]}, says that the unique sparse solution to the NP-hard $\ell_0$-minimization problem can be obtained from the unique solution to the P-type $\ell_1$-minimization problem. In this paper, we extend their result to abstract Banach spaces using 1-approximate Schauder frames. We observe that the `normalized' condition for Hilbert spaces can be generalized to a larger extent when we consider Banach spaces.

1. Introduction

Let $\mathcal{H}$ be a finite dimensional Hilbert space over $\mathbb{K}$ ($\mathbb{C}$ or $\mathbb{R}$). Recall [2,28] that a finite collection $\{\tau_j\}_{j=1}^n$ in $\mathcal{H}$ is said to be a frame (also known as a dictionary) for $\mathcal{H}$ if it spans $\mathcal{H}$. A frame $\{\tau_j\}_{j=1}^n$ for $\mathcal{H}$ is said to be normalized if $\|\tau_j\| = 1$ for all $1 \leq j \leq n$. Given a frame $\{\tau_j\}_{j=1}^n$ for $\mathcal{H}$, we define the analysis operator
$$\theta_\tau : \mathcal{H} \ni h \mapsto \theta_\tau h := (\langle h, \tau_j \rangle)_{j=1}^n \in \mathbb{K}^n.$$
The adjoint of the analysis operator is known as the synthesis operator, given by
$$\theta_\tau^* : \mathbb{K}^n \ni (a_j)_{j=1}^n \mapsto \theta_\tau^* (a_j)_{j=1}^n := \sum_{j=1}^n a_j \tau_j \in \mathcal{H}.$$
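To make these operators concrete, the following minimal numerical sketch (our illustration, not part of the original results) implements the analysis and synthesis operators for the so-called Mercedes-Benz frame of three unit vectors in $\mathbb{R}^2$:

```python
import numpy as np

# Illustrative normalized frame for R^2: the "Mercedes-Benz" frame of three
# unit vectors; the rows of tau are tau_1, tau_2, tau_3.
tau = np.array([[0.0, 1.0],
                [np.sqrt(3) / 2, -0.5],
                [-np.sqrt(3) / 2, -0.5]])

def analysis(h):
    # theta_tau : H -> K^n, h |-> (<h, tau_j>)_{j=1}^n
    return tau @ h

def synthesis(a):
    # theta_tau^* : K^n -> H, (a_j)_{j=1}^n |-> sum_j a_j tau_j
    return tau.T @ a

h = np.array([1.0, 2.0])
# This frame is tight with frame bound 3/2, so synthesis(analysis(h)) = (3/2) h.
print(np.allclose(synthesis(analysis(h)), 1.5 * h))  # True
```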
Given $d \in \mathbb{K}^n$, let $\|d\|_0$ denote the number of nonzero entries of $d$. A central problem arising in numerous applications is the following $\ell_0$-minimization problem:
Problem 1.1. Let $\{\tau_j\}_{j=1}^n$ be a frame for $\mathcal{H}$. Given $h \in \mathcal{H}$, solve
$$\operatorname*{minimize}_{d \in \mathbb{K}^n} \ \|d\|_0 \quad \text{subject to} \quad \theta_\tau^* d = h.$$
Recall that $c \in \mathbb{K}^n$ is said to be a unique solution to Problem 1.1 if it satisfies the following two conditions.
(i) $\theta_\tau^* c = h$.
(ii) If $d \in \mathbb{K}^n$, $d \neq c$, satisfies $\theta_\tau^* d = h$, then
$$\|d\|_0 > \|c\|_0.$$
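In principle, Problem 1.1 can be solved by exhaustive search over supports, as in the following sketch (our illustration; the synthesis operator is represented by a matrix $A$ whose columns are the frame vectors). Its exponential cost in $n$ foreshadows the hardness result recalled next.

```python
import numpy as np
from itertools import combinations

def l0_minimize(A, h, tol=1e-10):
    # Brute-force solver for Problem 1.1: try supports of increasing size
    # and accept the first one on which h is exactly representable.
    # The number of candidate supports grows exponentially in n.
    m, n = A.shape
    if np.linalg.norm(h) < tol:
        return np.zeros(n)
    for s in range(1, n + 1):
        for support in combinations(range(n), s):
            sub = A[:, list(support)]
            coef, *_ = np.linalg.lstsq(sub, h, rcond=None)
            if np.linalg.norm(sub @ coef - h) < tol:
                d = np.zeros(n)
                d[list(support)] = coef
                return d  # a solution of minimal ||.||_0
    return None  # unreachable when the columns of A span the space
```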
Unfortunately, in 1995 Natarajan showed that Problem 1.1 is NP-hard [23,33]. Therefore a solution to Problem 1.1 has to be obtained by other means. The entire body of work built around Problem 1.1 is known as sparseland (a term due to Elad [19]), compressive sensing, or compressed sensing [1,4,5,6,7,8,9,10,14,15,16,17,18,19,20,21,22,23,27,32,34,37,38]. We note that since the operator $\theta_\tau^*$ is surjective, for a given $h \in \mathcal{H}$ there is always a $d \in \mathbb{K}^n$ such that $\theta_\tau^* d = h$. Thus the central question is when the solution to Problem 1.1 is unique. It is well known (see [3,13,18]) that the following problem is the closest convex relaxation of Problem 1.1.
Problem 1.2. Let $\{\tau_j\}_{j=1}^n$ be a frame for $\mathcal{H}$. Given $h \in \mathcal{H}$, solve
$$\operatorname*{minimize}_{d \in \mathbb{K}^n} \ \|d\|_1 \quad \text{subject to} \quad \theta_\tau^* d = h.$$
Several linear programming formulations are available for solving Problem 1.2, which is of P-type [35,36,39].
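For instance, Problem 1.2 can be recast as a linear program via the standard variable splitting $d = u - v$ with $u, v \geq 0$, so that $\|d\|_1 = \sum_j (u_j + v_j)$. A minimal sketch using scipy (our illustration; $A$ again denotes the synthesis matrix):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, h):
    # Problem 1.2 as an LP: minimize sum(u + v) subject to A(u - v) = h,
    # u >= 0, v >= 0; then d = u - v attains the minimal l1 norm.
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=h, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v
```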
The most important result, showing that by solving Problem 1.2 we also obtain a solution to Problem 1.1, was derived independently by Donoho and Elad [17], Gribonval and Nielsen [27], and Fuchs [25,26]. It reads as follows.
Theorem 1.3. [17,19,25,26,27,31] (Donoho-Elad-Gribonval-Nielsen-Fuchs Sparsity Theorem) Let $\{\tau_j\}_{j=1}^n$ be a normalized frame for a Hilbert space $\mathcal{H}$. If $h \in \mathcal{H}$ can be written as $h = \theta_\tau^* c$ for some $c \in \mathbb{K}^n$ satisfying
$$\|c\|_0 < \frac{1}{2}\left(1 + \frac{1}{\max\limits_{1 \leq j, k \leq n, \, j \neq k} |\langle \tau_j, \tau_k \rangle|}\right),$$
then $c$ is the unique solution to both Problem 1.2 and Problem 1.1.
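The quantity $\max_{j \neq k} |\langle \tau_j, \tau_k \rangle|$ is the mutual coherence of the frame, and the right-hand side above is easily computed. A small sketch (ours) of the resulting sparsity threshold:

```python
import numpy as np

def sparsity_threshold(tau):
    # Rows of tau are the normalized frame vectors tau_j. Returns the bound
    # (1/2)(1 + 1/mu), where mu is the mutual coherence of the frame; any c
    # with ||c||_0 below this bound is covered by Theorem 1.3.
    G = np.abs(tau @ tau.conj().T)   # |<tau_j, tau_k>|
    np.fill_diagonal(G, 0.0)
    mu = G.max()                     # mutual coherence
    return 0.5 * (1.0 + 1.0 / mu)
```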
We naturally ask for a (both finite and infinite dimensional) Banach space version of Theorem 1.3. Beyond this natural question, many spaces occurring in functional analysis and in applications are Banach spaces carrying no Hilbert space structure. As frame theory for Hilbert spaces has been successfully extended to Banach spaces, where it has also found applications, we believe that a generalization of Theorem 1.3 will likewise have applications. It is interesting to note that a noncommutative version of Theorem 1.3 has recently been derived [29].

2. Functional Donoho-Elad-Gribonval-Nielsen-Fuchs Sparsity Theorem

In this paper, $\mathbb{K}$ denotes $\mathbb{C}$ or $\mathbb{R}$ and $\mathcal{X}$ denotes a Banach space (not necessarily finite dimensional) over $\mathbb{K}$. The dual of $\mathcal{X}$ is denoted by $\mathcal{X}^*$. We need the notion of 1-approximate Schauder frames for Banach spaces, which form a subclass of Schauder frames [11,12,24].
Definition 2.1. [30] Let $\mathcal{X}$ be a Banach space over $\mathbb{K}$. Let $\{f_n\}_{n=1}^\infty$ be a sequence in $\mathcal{X}^*$ and $\{\tau_n\}_{n=1}^\infty$ be a sequence in $\mathcal{X}$. The pair $(\{f_n\}_{n=1}^\infty, \{\tau_n\}_{n=1}^\infty)$ is said to be a 1-approximate Schauder frame (we write 1-ASF) for $\mathcal{X}$ if the following conditions are satisfied.
(i) The map (analysis operator)
$$\theta_f : \mathcal{X} \ni x \mapsto \theta_f x := \{f_n(x)\}_{n=1}^\infty \in \ell_1(\mathbb{N})$$
is a well-defined bounded linear operator.
(ii) The map (synthesis operator)
$$\theta_\tau : \ell_1(\mathbb{N}) \ni \{a_n\}_{n=1}^\infty \mapsto \theta_\tau \{a_n\}_{n=1}^\infty := \sum_{n=1}^\infty a_n \tau_n \in \mathcal{X}$$
is a well-defined bounded linear operator.
(iii) The map (frame operator)
$$S_{f,\tau} : \mathcal{X} \ni x \mapsto S_{f,\tau} x := \sum_{n=1}^\infty f_n(x) \tau_n \in \mathcal{X}$$
is a well-defined bounded invertible operator.
With the notion of a 1-ASF, we generalize Problems 1.1 and 1.2.
Problem 2.2. Let $(\{f_n\}_{n=1}^\infty, \{\tau_n\}_{n=1}^\infty)$ be a 1-ASF for $\mathcal{X}$. Given $x \in \mathcal{X}$, solve
$$\operatorname*{minimize}_{d \in \ell_1(\mathbb{N})} \ \|d\|_0 \quad \text{subject to} \quad \theta_\tau d = x.$$
Problem 2.3. Let $(\{f_n\}_{n=1}^\infty, \{\tau_n\}_{n=1}^\infty)$ be a 1-ASF for $\mathcal{X}$. Given $x \in \mathcal{X}$, solve
$$\operatorname*{minimize}_{d \in \ell_1(\mathbb{N})} \ \|d\|_1 \quad \text{subject to} \quad \theta_\tau d = x.$$
A very important property used in the proof of Theorem 1.3 is the null space property (see [14,31]). We now define the same property for Banach spaces, using the following notation. Let $\{e_n\}_{n=1}^\infty$ be the canonical Schauder basis for $\ell_1(\mathbb{N})$. Given $M \subseteq \mathbb{N}$ and $d = \{d_n\}_{n=1}^\infty \in \ell_1(\mathbb{N})$, we define
$$d_M := \sum_{n \in M} d_n e_n.$$
Definition 2.4. A 1-ASF $(\{f_n\}_{n=1}^\infty, \{\tau_n\}_{n=1}^\infty)$ for $\mathcal{X}$ is said to have the null space property (we write NSP) of order $k \in \mathbb{N}$ if for every $M \subseteq \mathbb{N}$ with $o(M) \leq k$ (where $o(M)$ denotes the cardinality of $M$), we have
$$\|d_M\|_1 < \frac{1}{2}\|d\|_1, \quad \forall d \in \ker(\theta_\tau), \, d \neq 0. \tag{1}$$
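Deciding the NSP is itself computationally hard in general [37]. For a finite synthesis matrix one can at least search for violations: the sketch below (ours; it truncates the $\ell_1(\mathbb{N})$ setting to finitely many coordinates) samples $\ker(\theta_\tau)$ at random and can thereby refute, but never certify, the NSP of order $k$.

```python
import numpy as np
from scipy.linalg import null_space

def nsp_violation_found(A, k, trials=10000, seed=0):
    # Samples d in ker(A) and checks whether the k largest |d_i| already
    # carry at least half of ||d||_1; such a d violates Inequality (1) for
    # the support M of those k entries. One-sided (necessary-condition) test.
    rng = np.random.default_rng(seed)
    N = null_space(A)           # orthonormal basis of ker(A)
    if N.shape[1] == 0:
        return False            # trivial kernel: nothing to violate
    for _ in range(trials):
        d = N @ rng.standard_normal(N.shape[1])
        mags = np.sort(np.abs(d))[::-1]
        if mags[:k].sum() >= 0.5 * mags.sum():
            return True
    return False
```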
The following characterization relates the NSP to Problem 2.3.
Theorem 2.5. Let $(\{f_n\}_{n=1}^\infty, \{\tau_n\}_{n=1}^\infty)$ be a 1-ASF for $\mathcal{X}$ and let $k \in \mathbb{N}$. The following are equivalent.
(i) If $x \in \mathcal{X}$ can be written as $x = \theta_\tau c$ for some $c \in \ell_1(\mathbb{N})$ satisfying $\|c\|_0 \leq k$, then $c$ is the unique solution to Problem 2.3.
(ii) $(\{f_n\}_{n=1}^\infty, \{\tau_n\}_{n=1}^\infty)$ satisfies the NSP of order $k$.
Proof. (i) ⇒ (ii) Let $M \subseteq \mathbb{N}$ with $o(M) \leq k$ and let $d \in \ker(\theta_\tau)$, $d \neq 0$. Then we have
$$0 = \theta_\tau d = \theta_\tau(d_M + d_{M^c}) = \theta_\tau(d_M) + \theta_\tau(d_{M^c}),$$
which gives
$$\theta_\tau(d_M) = -\theta_\tau(d_{M^c}).$$
Define $c := d_M \in \ell_1(\mathbb{N})$ and $x := \theta_\tau(d_M)$. Then we have $\|c\|_0 \leq o(M) \leq k$ and
$$x = \theta_\tau c = \theta_\tau(-d_{M^c}).$$
By assumption (i) (note that $-d_{M^c} \neq d_M$, for otherwise the disjointness of supports forces $d = 0$), we then have
$$\|c\|_1 = \|d_M\|_1 < \|{-d_{M^c}}\|_1 = \|d_{M^c}\|_1.$$
Rewriting the previous inequality gives
$$\|d_M\|_1 < \|d\|_1 - \|d_M\|_1, \quad \text{i.e.,} \quad \|d_M\|_1 < \frac{1}{2}\|d\|_1.$$
Hence $(\{f_n\}_{n=1}^\infty, \{\tau_n\}_{n=1}^\infty)$ satisfies the NSP of order $k$.
(ii) ⇒ (i) Let $x \in \mathcal{X}$ be written as $x = \theta_\tau c$ for some $c \in \ell_1(\mathbb{N})$ satisfying $\|c\|_0 \leq k$. Define $M := \operatorname{supp}(c)$. Then $o(M) = \|c\|_0 \leq k$. By assumption (ii), we then have Inequality (1):
$$\|d_M\|_1 < \frac{1}{2}\|d\|_1, \quad \forall d \in \ker(\theta_\tau), \, d \neq 0.$$
Let $b \in \ell_1(\mathbb{N})$ be such that $x = \theta_\tau b$ and $b \neq c$. Define $a := b - c \in \ell_1(\mathbb{N})$. Then $\theta_\tau a = \theta_\tau b - \theta_\tau c = x - x = 0$, and hence $a \in \ker(\theta_\tau)$, $a \neq 0$. Using Inequality (1), we get
$$\|a_M\|_1 < \frac{1}{2}\|a\|_1 = \frac{1}{2}\left(\|a_M\|_1 + \|a_{M^c}\|_1\right) \implies \|a_M\|_1 < \|a_{M^c}\|_1. \tag{2}$$
Using Inequality (2) and the fact that $c$ is supported on $M$, we get
$$\begin{aligned}
\|b\|_1 - \|c\|_1 &= \|b_M\|_1 + \|b_{M^c}\|_1 - \|c_M\|_1 - \|c_{M^c}\|_1 = \|b_M\|_1 + \|b_{M^c}\|_1 - \|c_M\|_1\\
&= \|b_M\|_1 + \|(b - c)_{M^c}\|_1 - \|c_M\|_1 = \|b_M\|_1 + \|a_{M^c}\|_1 - \|c_M\|_1\\
&> \|b_M\|_1 + \|a_M\|_1 - \|c_M\|_1 = \|b_M\|_1 + \|(b - c)_M\|_1 - \|c_M\|_1\\
&\geq \|b_M\|_1 + \|c_M\|_1 - \|b_M\|_1 - \|c_M\|_1 = 0.
\end{aligned}$$
Hence $c$ is the unique solution to Problem 2.3. □
Using Theorem 2.5 we obtain a Banach space version of Theorem 1.3. We do this by first relating Problem 2.3 to Theorem 2.5, and then relating Problem 2.2 to Problem 2.3.
Theorem 2.6. Let $(\{f_n\}_{n=1}^\infty, \{\tau_n\}_{n=1}^\infty)$ be a 1-ASF for $\mathcal{X}$ such that
$$|f_n(\tau_n)| \geq 1, \quad \forall n \in \mathbb{N}. \tag{3}$$
If $x \in \mathcal{X}$ can be written as $x = \theta_\tau c$ for some $c \in \ell_1(\mathbb{N})$ satisfying
$$\|c\|_0 < \frac{1}{2}\left(1 + \frac{1}{\sup\limits_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|}\right), \tag{4}$$
then $c$ is the unique solution to Problem 2.3.
Proof. We show that $(\{f_n\}_{n=1}^\infty, \{\tau_n\}_{n=1}^\infty)$ satisfies the NSP of order $k := \|c\|_0$; Theorem 2.5 then says that $c$ is the unique solution to Problem 2.3. Let $M \subseteq \mathbb{N}$ with $o(M) \leq k$ and let $d \in \ker(\theta_\tau)$, $d \neq 0$. Then we have
$$\theta_f \theta_\tau d = 0.$$
Writing $d = \{d_n\}_{n=1}^\infty \in \ell_1(\mathbb{N})$, the above equation gives
$$0 = \theta_f \theta_\tau \{d_m\}_{m=1}^\infty = \theta_f\left(\sum_{m=1}^\infty d_m \theta_\tau e_m\right) = \theta_f\left(\sum_{m=1}^\infty d_m \tau_m\right) = \sum_{m=1}^\infty d_m \theta_f(\tau_m) = \sum_{m=1}^\infty d_m \sum_{k=1}^\infty f_k(\tau_m) e_k.$$
Let $\{\zeta_n\}_{n=1}^\infty$ be the coordinate functionals associated with the canonical Schauder basis $\{e_n\}_{n=1}^\infty$ for $\ell_1(\mathbb{N})$. Let $n \in \mathbb{N}$. Evaluating the previous equation at $\zeta_n$, we get
$$0 = \zeta_n\left(\sum_{m=1}^\infty d_m \sum_{k=1}^\infty f_k(\tau_m) e_k\right) = \sum_{m=1}^\infty d_m \sum_{k=1}^\infty f_k(\tau_m) \zeta_n(e_k) = \sum_{m=1}^\infty d_m f_n(\tau_m) = d_n f_n(\tau_n) + \sum_{m=1, m \neq n}^\infty d_m f_n(\tau_m).$$
Therefore
$$d_n f_n(\tau_n) = -\sum_{m=1, m \neq n}^\infty d_m f_n(\tau_m), \quad \forall n \in \mathbb{N}.$$
Using Inequality (3),
$$\begin{aligned}
|d_n| \leq |d_n| |f_n(\tau_n)| &= \left|\sum_{m=1, m \neq n}^\infty d_m f_n(\tau_m)\right| \leq \sum_{m=1, m \neq n}^\infty |d_m f_n(\tau_m)| \leq \left(\sup_{m \in \mathbb{N}, \, m \neq n} |f_n(\tau_m)|\right) \sum_{m=1, m \neq n}^\infty |d_m|\\
&\leq \left(\sup_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|\right) \sum_{m=1, m \neq n}^\infty |d_m| = \left(\sup_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|\right)\left(\sum_{m=1}^\infty |d_m| - |d_n|\right)\\
&= \left(\sup_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|\right)\left(\|d\|_1 - |d_n|\right), \quad \forall n \in \mathbb{N}.
\end{aligned}$$
Rewriting the above inequality gives
$$\left(1 + \frac{1}{\sup\limits_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|}\right) |d_n| \leq \|d\|_1, \quad \forall n \in \mathbb{N}. \tag{5}$$
Summing Inequality (5) over $M$ leads to
$$\left(1 + \frac{1}{\sup\limits_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|}\right) \|d_M\|_1 = \left(1 + \frac{1}{\sup\limits_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|}\right) \sum_{n \in M} |d_n| \leq \|d\|_1 \sum_{n \in M} 1 = \|d\|_1 \, o(M).$$
Finally, using Inequality (4),
$$\|d_M\|_1 \leq \left(1 + \frac{1}{\sup\limits_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|}\right)^{-1} \|d\|_1 \, o(M) \leq \left(1 + \frac{1}{\sup\limits_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|}\right)^{-1} \|d\|_1 \, k = \left(1 + \frac{1}{\sup\limits_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|}\right)^{-1} \|d\|_1 \, \|c\|_0 < \frac{1}{2} \|d\|_1.$$
Hence $(\{f_n\}_{n=1}^\infty, \{\tau_n\}_{n=1}^\infty)$ satisfies the NSP of order $k$, which completes the proof. □
Theorem 2.7. (Functional Donoho-Elad-Gribonval-Nielsen-Fuchs Sparsity Theorem) Let $(\{f_n\}_{n=1}^\infty, \{\tau_n\}_{n=1}^\infty)$ be a 1-ASF for $\mathcal{X}$ such that
$$|f_n(\tau_n)| \geq 1, \quad \forall n \in \mathbb{N}.$$
If $x \in \mathcal{X}$ can be written as $x = \theta_\tau c$ for some $c \in \ell_1(\mathbb{N})$ satisfying
$$\|c\|_0 < \frac{1}{2}\left(1 + \frac{1}{\sup\limits_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|}\right),$$
then $c$ is the unique solution to Problem 2.2.
Proof. Theorem 2.6 says that $c$ is the unique solution to Problem 2.3. Let $d \in \ell_1(\mathbb{N})$, $d \neq c$, be such that $x = \theta_\tau d$. We claim that $\|d\|_0 > \|c\|_0$. If this fails, we must have $\|d\|_0 \leq \|c\|_0$. We then have
$$\|d\|_0 < \frac{1}{2}\left(1 + \frac{1}{\sup\limits_{n, m \in \mathbb{N}, \, n \neq m} |f_n(\tau_m)|}\right).$$
Theorem 2.6 then says that $d$ is also the unique solution to Problem 2.3. Therefore we must have both $\|c\|_1 < \|d\|_1$ and $\|c\|_1 > \|d\|_1$, which is a contradiction. Hence the claim holds and we have $\|d\|_0 > \|c\|_0$. □
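In finite dimensions the hypotheses of Theorems 2.6 and 2.7 are directly checkable. In the sketch below (ours, not from the paper), the functionals $f_n$ are represented as rows of a matrix $F$, so $f_n(\tau_m)$ is an entry of $F T^{\mathsf{T}}$, where the rows of $T$ are the vectors $\tau_m$:

```python
import numpy as np

def functional_sparsity_threshold(F, T):
    # C[n, m] = |f_n(tau_m)| for the pair ({f_n}, {tau_m}).
    C = np.abs(F @ T.T)
    # Hypothesis (3): |f_n(tau_n)| >= 1 for every n.
    assert np.all(np.diag(C) >= 1.0 - 1e-12), "need |f_n(tau_n)| >= 1"
    np.fill_diagonal(C, 0.0)
    mu = C.max()                 # sup_{n != m} |f_n(tau_m)|
    # Bound (4): any ||c||_0 below this is covered by Theorems 2.6 and 2.7.
    return 0.5 * (1.0 + 1.0 / mu)
```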
Corollary 2.8. 
Theorem 1.3 follows from Theorems 2.6 and 2.7.
Proof. Let $\{\tau_j\}_{j=1}^n$ be a normalized frame for a Hilbert space $\mathcal{H}$. For each $1 \leq j \leq n$, define
$$f_j : \mathcal{H} \ni h \mapsto f_j(h) := \langle h, \tau_j \rangle \in \mathbb{K}.$$
Then
$$|f_j(\tau_j)| = 1, \quad \forall 1 \leq j \leq n,$$
and
$$f_k(\tau_j) = \langle \tau_j, \tau_k \rangle, \quad \forall 1 \leq j, k \leq n.$$
Hence the pair $(\{f_j\}_{j=1}^n, \{\tau_j\}_{j=1}^n)$ satisfies Inequality (3), its synthesis operator is $\theta_\tau^*$, and the bound in Theorems 2.6 and 2.7 becomes the bound in Theorem 1.3. □
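As a numerical sanity check of this reduction (our sketch, with an arbitrary small normalized frame): taking $f_j = \langle \cdot, \tau_j \rangle$ makes the functional coherence $\sup_{j \neq k} |f_k(\tau_j)|$ coincide with the mutual coherence of the frame, so the bounds of Theorems 2.6/2.7 and Theorem 1.3 agree.

```python
import numpy as np

# Normalized frame for R^2: the standard basis plus one extra unit vector.
tau = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [np.sqrt(0.5), np.sqrt(0.5)]])
# With f_j = <., tau_j>, f_k(tau_j) = <tau_j, tau_k> is the Gram matrix.
G = np.abs(tau @ tau.T)
np.fill_diagonal(G, 0.0)
mu = G.max()                        # mutual coherence = 1/sqrt(2)
print(0.5 * (1.0 + 1.0 / mu))       # common bound, about 1.207
```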

References

1. Ben Adcock and Anders C. Hansen. Compressive imaging: structure, sampling, learning. Cambridge University Press, Cambridge, 2021.
2. John J. Benedetto and Matthew Fickus. Finite normalized tight frames. Adv. Comput. Math., 18(2-4):357–385, 2003.
3. Alfred M. Bruckstein, David L. Donoho, and Michael Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev., 51(1):34–81, 2009.
4. Emmanuel Candes and Terence Tao. The Dantzig selector: statistical estimation when $p$ is much larger than $n$. Ann. Statist., 35(6):2313–2351, 2007.
5. Emmanuel J. Candes. The restricted isometry property and its implications for compressed sensing. C. R. Math. Acad. Sci. Paris, 346(9-10):589–592, 2008.
6. Emmanuel J. Candès, Justin Romberg, and Terence Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, 2006.
7. Emmanuel J. Candès, Justin K. Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8):1207–1223, 2006.
8. Emmanuel J. Candes and Terence Tao. Decoding by linear programming. IEEE Trans. Inform. Theory, 51(12):4203–4215, 2005.
9. Emmanuel J. Candes and Terence Tao. Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inform. Theory, 52(12):5406–5425, 2006.
10. Emmanuel J. Candès and Terence Tao. The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inform. Theory, 56(5):2053–2080, 2010.
11. P. G. Casazza, S. J. Dilworth, E. Odell, Th. Schlumprecht, and A. Zsák. Coefficient quantization for frames in Banach spaces. J. Math. Anal. Appl., 348(1):66–86, 2008.
12. Peter G. Casazza, Deguang Han, and David R. Larson. Frames for Banach spaces. In The functional and harmonic analysis of wavelets and frames (San Antonio, TX, 1999), volume 247 of Contemp. Math., pages 149–182. Amer. Math. Soc., Providence, RI, 1999.
13. Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput., 20(1):33–61, 1998.
14. Albert Cohen, Wolfgang Dahmen, and Ronald DeVore. Compressed sensing and best $k$-term approximation. J. Amer. Math. Soc., 22(1):211–231, 2009.
15. Mark A. Davenport, Marco F. Duarte, Yonina C. Eldar, and Gitta Kutyniok. Introduction to compressed sensing. In Compressed sensing, pages 1–64. Cambridge Univ. Press, Cambridge, 2012.
16. David L. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289–1306, 2006.
17. David L. Donoho and Michael Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via $l_1$ minimization. Proc. Natl. Acad. Sci. USA, 100(5):2197–2202, 2003.
18. David L. Donoho and Xiaoming Huo. Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inform. Theory, 47(7):2845–2862, 2001.
19. Michael Elad. Sparse and redundant representations: from theory to applications in signal and image processing. Springer, New York, 2010.
20. Michael Elad and Alfred M. Bruckstein. A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans. Inform. Theory, 48(9):2558–2567, 2002.
21. Yonina C. Eldar. Sampling theory: beyond bandlimited systems. Cambridge University Press, Cambridge, 2014.
22. Arie Feuer and Arkadi Nemirovski. On sparse representation in pairs of bases. IEEE Trans. Inform. Theory, 49(6):1579–1581, 2003.
23. Simon Foucart and Holger Rauhut. A mathematical introduction to compressive sensing. Applied and Numerical Harmonic Analysis. Birkhäuser/Springer, New York, 2013.
24. D. Freeman, E. Odell, Th. Schlumprecht, and A. Zsák. Unconditional structures of translates for $L_p(\mathbb{R}^d)$. Israel J. Math., 203(1):189–209, 2014.
25. Jean-Jacques Fuchs. More on sparse representations in arbitrary bases. IFAC Proceedings Volumes, 36(16):1315–1320, 2003.
26. Jean-Jacques Fuchs. On sparse representations in arbitrary redundant bases. IEEE Trans. Inform. Theory, 50(6):1341–1344, 2004.
27. Rémi Gribonval and Morten Nielsen. Sparse representations in unions of bases. IEEE Trans. Inform. Theory, 49(12):3320–3325, 2003.
28. Deguang Han, Keri Kornelson, David Larson, and Eric Weber. Frames for undergraduates, volume 40 of Student Mathematical Library. American Mathematical Society, Providence, RI, 2007.
29. K. Mahesh Krishna. Noncommutative Donoho-Elad-Gribonval-Nielsen-Fuchs sparsity theorem. Math. Inequal. Appl., 28(3):531–539, 2025.
30. K. Mahesh Krishna and P. Sam Johnson. Towards characterizations of approximate Schauder frame and its duals for Banach spaces. J. Pseudo-Differ. Oper. Appl., 12(1):Paper No. 9, 13 pp., 2021.
31. Gitta Kutyniok. Data separation by sparse representations. In Compressed sensing, pages 485–514. Cambridge Univ. Press, Cambridge, 2012.
32. Gitta Kutyniok. Theory and applications of compressed sensing. GAMM-Mitt., 36(1):79–101, 2013.
33. B. K. Natarajan. Sparse approximate solutions to linear systems. SIAM J. Comput., 24(2):227–234, 1995.
34. Irina Rish and Genady Ya. Grabarnik. Sparse modeling: theory, algorithms, and applications. Chapman & Hall/CRC Machine Learning & Pattern Recognition Series. CRC Press, Boca Raton, FL, 2015.
35. T. Terlaky. On $l_p$ programming. European J. Oper. Res., 22(1):70–100, 1985.
36. A. M. Tillmann. Equivalence of linear programming and basis pursuit. Proc. Appl. Math. Mech., 15:735–738, 2015.
37. Andreas M. Tillmann and Marc E. Pfetsch. The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inform. Theory, 60(2):1248–1259, 2014.
38. M. Vidyasagar. An introduction to compressed sensing, volume 22 of Computational Science & Engineering. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2020.
39. Guoliang Xue and Yinyu Ye. An efficient algorithm for minimizing a sum of $p$-norms. SIAM J. Optim., 10(2):551–579, 2000.