Preprint
Short Note

This version is not peer-reviewed.

Continuous Donoho-Elad Spark Uncertainty Principle

Submitted: 10 July 2025
Posted: 11 July 2025

Abstract
Donoho and Elad \textit{[Proc. Natl. Acad. Sci. USA, 2003]} introduced the important notion of the spark of a frame, using which they derived a fundamental uncertainty principle. Based on spark, they also provided a necessary and sufficient condition for the uniqueness of sparse solutions to the NP-hard $\ell_0$-minimization problem. In this nano note, we show that the notion of spark can be extended to linear maps whose domains are measure spaces. Using this generalization, we derive an uncertainty principle and provide a sufficient condition for the existence of sparse solutions to linear systems on measure spaces.

1. Introduction

Let $\mathcal{H}$ be a finite dimensional Hilbert space over $\mathbb{K}$ ($\mathbb{C}$ or $\mathbb{R}$). Recall that [1] a collection of nonzero elements $\{\tau_j\}_{j=1}^n$ in $\mathcal{H}$ is said to be a frame (also known as a dictionary) for $\mathcal{H}$ if there are $r, s > 0$ such that
$$r\|h\|^2 \leq \sum_{j=1}^{n} |\langle h, \tau_j \rangle|^2 \leq s\|h\|^2, \quad \forall h \in \mathcal{H}.$$
It is well-known that a collection $\{\tau_j\}_{j=1}^n$ in $\mathcal{H}$ is a frame for $\mathcal{H}$ if and only if $\{\tau_j\}_{j=1}^n$ spans $\mathcal{H}$ [2]. A frame $\{\tau_j\}_{j=1}^n$ for $\mathcal{H}$ is said to be normalized if $\|\tau_j\| = 1$ for all $1 \leq j \leq n$. Note that any frame can be normalized by dividing each element by its norm. Given a frame $\{\tau_j\}_{j=1}^n$ for $\mathcal{H}$, we define the analysis operator
$$\theta_\tau : \mathcal{H} \ni h \mapsto \theta_\tau h \coloneqq (\langle h, \tau_j \rangle)_{j=1}^{n} \in \mathbb{K}^n.$$
The adjoint of the analysis operator is known as the synthesis operator, given by
$$\theta_\tau^* : \mathbb{K}^n \ni (a_j)_{j=1}^{n} \mapsto \theta_\tau^*(a_j)_{j=1}^{n} \coloneqq \sum_{j=1}^{n} a_j \tau_j \in \mathcal{H}.$$
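In finite dimensions, both operators can be realized concretely by stacking the frame vectors as the columns of a matrix $T$, so that $\theta_\tau^*$ is multiplication by $T$ and $\theta_\tau$ is multiplication by $T^{\mathsf{T}}$. A minimal sketch of our own (the frame below is an assumed example, not one from the text):

```python
import numpy as np

# A normalized frame for R^2, stored as the columns of T (so theta_tau^* = T
# and theta_tau = T^T): e_1, e_2, and (e_1 + e_2)/sqrt(2).
T = np.array([[1.0, 0.0, 1 / np.sqrt(2)],
              [0.0, 1.0, 1 / np.sqrt(2)]])

def analysis(h):
    """theta_tau h = (<h, tau_j>)_{j=1}^n, the vector of frame coefficients."""
    return T.T @ h

def synthesis(a):
    """theta_tau^* a = sum_j a_j tau_j, the adjoint of the analysis operator."""
    return T @ a

# The composition theta_tau^* theta_tau is the (invertible) frame operator.
h = np.array([1.0, 2.0])
print(synthesis(analysis(h)))  # [2.5 3.5]
```

Here surjectivity of $\theta_\tau^*$ is simply the statement that the columns of $T$ span $\mathbb{R}^2$.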
Given $d \in \mathbb{K}^n$, let $\|d\|_0$ be the number of nonzero entries in $d$. A central problem occurring in many situations is the following $\ell_0$-minimization problem:
Problem 1.
Let $\{\tau_j\}_{j=1}^n$ be a normalized frame for $\mathcal{H}$. Given $h \in \mathcal{H}$, solve
$$\underset{d \in \mathbb{K}^n}{\text{minimize}} \ \|d\|_0 \quad \text{subject to} \quad \theta_\tau^* d = h.$$
Recall that $c \in \mathbb{K}^n$ is said to be the unique solution to Problem 1 if it satisfies the following two conditions.
(i)
$\theta_\tau^* c = h$.
(ii)
If $d \in \mathbb{K}^n$, $d \neq c$, satisfies $\theta_\tau^* d = h$, then
$$\|d\|_0 > \|c\|_0.$$
In 1995, Natarajan showed that Problem 1 is NP-hard [3]. Therefore solutions to Problem 1 have to be obtained using other methods. The body of work built around Problem 1 is known as sparseland (a term due to Elad [4]), compressive sensing, or compressed sensing.
As the operator $\theta_\tau^*$ is surjective, for a given $h \in \mathcal{H}$ there is always a $d \in \mathbb{K}^n$ such that $\theta_\tau^* d = h$. Thus the central question is when the solution to Problem 1 is unique. One of the greatest results of Donoho and Elad [5] in this regard uses the notion of spark, defined as follows. In this note, given a subset $M \subseteq \mathbb{N}$, the cardinality of $M$ is denoted by $o(M)$.
Definition 1.
[5] Given a normalized frame $\{\tau_j\}_{j=1}^n$ for $\mathcal{H}$, the spark of $\{\tau_j\}_{j=1}^n$ is defined as
$$\operatorname{Spark}(\{\tau_j\}_{j=1}^n) \coloneqq \min\{o(M) : M \subseteq \{1, \dots, n\}, \ \{\tau_j\}_{j \in M} \text{ is linearly dependent}\} = \min\{\|d\|_0 : d \in \ker(\theta_\tau^*), \ d \neq 0\}.$$
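Since Definition 1 quantifies over all index subsets, the spark of a small frame can be computed by brute force. A sketch of our own (exponential in $n$, so only for tiny frames; the example frame is assumed for illustration):

```python
import itertools
import numpy as np

def spark(T, tol=1e-10):
    """Spark of the frame given by the columns of T (d x n): the size of the
    smallest linearly dependent subset of columns, per Definition 1."""
    d, n = T.shape
    for k in range(1, n + 1):
        for M in itertools.combinations(range(n), k):
            # {tau_j}_{j in M} is linearly dependent iff the submatrix has rank < o(M).
            if np.linalg.matrix_rank(T[:, list(M)], tol=tol) < k:
                return k
    return n + 1  # no dependent subset; cannot happen for a frame with n > dim H

# Normalized frame for R^2: e_1, e_2, (e_1 + e_2)/sqrt(2).
T = np.array([[1.0, 0.0, 1 / np.sqrt(2)],
              [0.0, 1.0, 1 / np.sqrt(2)]])
print(spark(T))  # 3: every pair of columns is independent, all three are dependent
```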
In 2003, Donoho and Elad derived the following breakthrough spark uncertainty principle [5].
Theorem 2.
[5] (Donoho-Elad Spark Uncertainty Principle) Let $\{\tau_j\}_{j=1}^n$ be a normalized frame for $\mathcal{H}$. If $a, b \in \mathbb{K}^n$ are distinct and $\theta_\tau^* a = \theta_\tau^* b$, then
$$\|a\|_0 + \|b\|_0 \geq \operatorname{Spark}(\{\tau_j\}_{j=1}^n).$$
In the same paper [5], Donoho and Elad also gave a characterization for the solution of Problem 1 using spark.
Theorem 3.
[5,6] (Donoho-Elad Spark Sparsity Theorem) Let $\{\tau_j\}_{j=1}^n$ be a normalized frame for $\mathcal{H}$.
(i)
For every $h \in \mathcal{H}$ and every $1 \leq k \leq n$, there exists at most one vector $c \in \mathbb{K}^n$ such that
$$h = \theta_\tau^* c \quad \text{satisfying} \quad \|c\|_0 \leq k$$
if and only if
$$\operatorname{Spark}(\{\tau_j\}_{j=1}^n) > 2k.$$
(ii)
If $h \in \mathcal{H}$ can be written as $h = \theta_\tau^* c$ for some $c \in \mathbb{K}^n$ satisfying
$$\|c\|_0 < \frac{1}{2} \operatorname{Spark}(\{\tau_j\}_{j=1}^n),$$
then $c$ is the unique solution to Problem 1.
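Theorem 3(ii) can be checked numerically on small examples by solving Problem 1 exhaustively. A sketch of our own (`l0_minimize` is a hypothetical helper, feasible only for tiny $n$; the frame is the same kind of toy example as above):

```python
import itertools
import numpy as np

def l0_minimize(T, h, tol=1e-9):
    """Brute-force Problem 1: among d with theta_tau^* d = T @ d = h, return one
    of minimal support, searching supports in order of increasing size."""
    _, n = T.shape
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            d = np.zeros(n)
            if k > 0:
                # Least-squares coefficients on the candidate support S.
                sol, *_ = np.linalg.lstsq(T[:, list(S)], h, rcond=None)
                d[list(S)] = sol
            if np.linalg.norm(T @ d - h) < tol:
                return d
    return None  # unreachable when the columns of T span the space

# Frame with spark 3: any 1-sparse c has ||c||_0 = 1 < Spark/2 = 1.5,
# so by Theorem 3(ii) it must be the unique l0-minimizer.
T = np.array([[1.0, 0.0, 1 / np.sqrt(2)],
              [0.0, 1.0, 1 / np.sqrt(2)]])
c = np.array([0.0, 2.0, 0.0])
print(np.allclose(l0_minimize(T, T @ c), c))  # True
```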
In this note, we show that Definition 1 can be extended considerably, to linear maps whose domains are subspaces of measurable functions on a measure space. Using this generalization, we show that Theorems 2 and 3 have continuous extensions.

2. Continuous Spark

Let $(\Omega, \mu)$ be a measure space and let
$$\mathcal{M}(\Omega, \mu) \coloneqq \{f : \Omega \to \mathbb{K} \mid f \text{ is measurable}\}.$$
Let $\mathcal{V}$ be a vector space over $\mathbb{K}$ and let $\mathcal{W}$ be a subspace of $\mathcal{M}(\Omega, \mu)$. Given a linear map $A : \mathcal{W} \to \mathcal{V}$, we define the spark of $A$ as
$$\operatorname{Spark}(A) \coloneqq \inf\{\mu(\operatorname{supp}(f)) : f \in \ker(A), \ f \neq 0\}.$$
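As a sanity check (our own observation, not drawn from [5]): taking $\Omega = \{1, \dots, n\}$ with counting measure $\mu$, $\mathcal{W} = \mathbb{K}^n$, and $A = \theta_\tau^*$, we have $\mu(\operatorname{supp}(d)) = \|d\|_0$, so the definition above recovers Definition 1:

```latex
% Counting measure on \Omega = \{1, \dots, n\}: \mu(\operatorname{supp}(d)) = \|d\|_0.
\operatorname{Spark}(\theta_\tau^*)
  = \inf\{\mu(\operatorname{supp}(d)) : d \in \ker(\theta_\tau^*),\ d \neq 0\}
  = \min\{\|d\|_0 : d \in \ker(\theta_\tau^*),\ d \neq 0\}
  = \operatorname{Spark}(\{\tau_j\}_{j=1}^{n}).
```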
We now have a continuous version of Theorem 2.
Theorem 4. (Continuous Donoho-Elad Spark Uncertainty Principle) Let $A : \mathcal{W} \to \mathcal{V}$ be a linear map. If $f, g \in \mathcal{W}$ are distinct and $A f = A g$, then
$$\mu(\operatorname{supp}(f)) + \mu(\operatorname{supp}(g)) \geq \operatorname{Spark}(A).$$
Proof. 
Since $f - g \neq 0$ and $f - g \in \ker(A)$, we have
$$\operatorname{Spark}(A) \leq \mu(\operatorname{supp}(f - g)) \leq \mu(\operatorname{supp}(f) \cup \operatorname{supp}(g)) \leq \mu(\operatorname{supp}(f)) + \mu(\operatorname{supp}(g)). \qquad \square$$
We set up the most general version of Problem 1 as follows.
Problem 5.
Let $A : \mathcal{W} \to \mathcal{V}$ be a linear map. Given $v \in \mathcal{V}$, solve
$$\underset{g \in \mathcal{W}}{\text{minimize}} \ \mu(\operatorname{supp}(g)) \quad \text{subject to} \quad A g = v.$$
The following is a continuous version of Theorem 3.
Theorem 6. (Continuous Donoho-Elad Spark Sparsity Theorem) Let $A : \mathcal{W} \to \mathcal{V}$ be a linear map.
(i)
Let $r \in [0, \infty)$. If
$$\operatorname{Spark}(A) > 2r,$$
then for every $v \in \mathcal{V}$, there exists at most one vector $f \in \mathcal{W}$ such that
$$v = A f \quad \text{satisfying} \quad \mu(\operatorname{supp}(f)) \leq r.$$
(ii)
If $v \in \mathcal{V}$ can be written as $v = A f$ for some $f \in \mathcal{W}$ satisfying
$$\mu(\operatorname{supp}(f)) < \frac{1}{2} \operatorname{Spark}(A),$$
then $f$ is the unique solution to Problem 5.
Proof.
(i)
Let $r \in [0, \infty)$ and $v \in \mathcal{V}$. Let $g, h \in \mathcal{W}$ satisfy $v = A g = A h$ with $\mu(\operatorname{supp}(g)) \leq r$ and $\mu(\operatorname{supp}(h)) \leq r$. We claim that $g = h$. If this is not true, then $g - h \neq 0$, so $g - h \in \ker(A)$ with $g - h \neq 0$. But then
$$2r < \operatorname{Spark}(A) \leq \mu(\operatorname{supp}(g - h)) \leq \mu(\operatorname{supp}(g) \cup \operatorname{supp}(h)) \leq \mu(\operatorname{supp}(g)) + \mu(\operatorname{supp}(h)) \leq r + r = 2r,$$
which is impossible. Hence the claim holds.
(ii)
Let $v \in \mathcal{V}$ and let $f \in \mathcal{W}$ satisfy
$$\mu(\operatorname{supp}(f)) < \frac{1}{2} \operatorname{Spark}(A).$$
Let $g \in \mathcal{W}$ be such that $v = A g$ and $g \neq f$. Then we have
$$A(f - g) = v - v = 0.$$
Hence $f - g \in \ker(A)$ and $f - g \neq 0$. The definition of spark then gives
$$\operatorname{Spark}(A) \leq \mu(\operatorname{supp}(f - g)) \leq \mu(\operatorname{supp}(f)) + \mu(\operatorname{supp}(g)) < \frac{1}{2} \operatorname{Spark}(A) + \mu(\operatorname{supp}(g)).$$
Therefore
$$\mu(\operatorname{supp}(f)) < \frac{1}{2} \operatorname{Spark}(A) < \mu(\operatorname{supp}(g)).$$
Hence $f$ is the unique solution to Problem 5. $\square$
In view of Theorem 3, we have the following problem: for which measure spaces does the converse of (i) in Theorem 6 hold? Note that the proof of the converse of (i) in Theorem 3 is based on the technique of writing a $2k$-sparse vector as a difference of two $k$-sparse vectors [6], which we are unable to do in the continuous setting.

References

  1. Benedetto, J.J.; Fickus, M. Finite normalized tight frames. Adv. Comput. Math. 2003, 18, 357–385.
  2. Han, D.; Kornelson, K.; Larson, D.; Weber, E. Frames for Undergraduates; Student Mathematical Library, Vol. 40; American Mathematical Society: Providence, RI, 2007.
  3. Natarajan, B.K. Sparse approximate solutions to linear systems. SIAM J. Comput. 1995, 24, 227–234.
  4. Elad, M. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing; Springer: New York, 2010.
  5. Donoho, D.L.; Elad, M. Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell_1$ minimization. Proc. Natl. Acad. Sci. USA 2003, 100, 2197–2202.
  6. Davenport, M.A.; Duarte, M.F.; Eldar, Y.C.; Kutyniok, G. Introduction to compressed sensing. In Compressed Sensing; Cambridge University Press: Cambridge, 2012; pp. 1–64.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.