1. Introduction
Let $\mathcal{H}$ be a finite dimensional Hilbert space over $\mathbb{K}$ ($\mathbb{R}$ or $\mathbb{C}$). Recall that [1] a collection of nonzero elements $\{\tau_j\}_{j=1}^n$ in $\mathcal{H}$ is said to be a frame (also known as a dictionary) if there are $a, b > 0$ such that
$$a\|h\|^2 \leq \sum_{j=1}^{n} |\langle h, \tau_j\rangle|^2 \leq b\|h\|^2, \quad \forall h \in \mathcal{H}.$$
It is well-known that a collection $\{\tau_j\}_{j=1}^n$ in $\mathcal{H}$ is a frame for $\mathcal{H}$ if and only if $\{\tau_j\}_{j=1}^n$ spans $\mathcal{H}$ [2]. A frame $\{\tau_j\}_{j=1}^n$ for $\mathcal{H}$ is said to be normalized if $\|\tau_j\| = 1$ for all $1 \leq j \leq n$. Note that any frame can be normalized by dividing each element by its norm. Given a frame $\{\tau_j\}_{j=1}^n$ for $\mathcal{H}$, we define the analysis operator
$$\theta_\tau : \mathcal{H} \ni h \mapsto \theta_\tau h := (\langle h, \tau_j\rangle)_{j=1}^n \in \mathbb{K}^n.$$
The adjoint of the analysis operator is known as the synthesis operator, given by
$$\theta_\tau^* : \mathbb{K}^n \ni (a_j)_{j=1}^n \mapsto \theta_\tau^* (a_j)_{j=1}^n := \sum_{j=1}^{n} a_j \tau_j \in \mathcal{H}.$$
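As a concrete illustration (our own minimal numerical sketch, not part of the paper; the frame choice and the function names are ours), the analysis and synthesis operators of a normalized frame for $\mathbb{R}^2$ can be realized as matrix-vector products:

```python
import numpy as np

# The Mercedes-Benz frame: three unit vectors in R^2 at angles 90, 210, 330
# degrees. It is a normalized tight frame with frame bounds a = b = 3/2.
tau = np.array([
    [0.0, 1.0],
    [-np.sqrt(3) / 2, -0.5],
    [np.sqrt(3) / 2, -0.5],
])

def analysis(h):
    """Analysis operator: h -> (<h, tau_j>)_{j=1}^3 in R^3."""
    return tau @ h

def synthesis(d):
    """Synthesis operator (adjoint of the analysis operator): d -> sum_j d_j tau_j."""
    return tau.T @ d

h = np.array([1.0, 2.0])
# Since this frame is tight with bound 3/2, synthesis(analysis(h)) = (3/2) h.
assert np.allclose(synthesis(analysis(h)), 1.5 * h)
```

Tightness here is special to this frame; for a general frame the composition $\theta_\tau^* \theta_\tau$ is only invertible (the frame operator), not a multiple of the identity.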
Given $d = (d_j)_{j=1}^n \in \mathbb{K}^n$, let $\|d\|_0$ be the number of nonzero entries in $d$. A central problem which occurs in many situations is the following $\ell_0$-minimization problem:
Problem 1.
Let $\{\tau_j\}_{j=1}^n$ be a normalized frame for $\mathcal{H}$. Given $h \in \mathcal{H}$, solve
$$\min\{\|d\|_0 : d \in \mathbb{K}^n, \ \theta_\tau^* d = h\}.$$
Recall that $c \in \mathbb{K}^n$ is said to be a unique solution to Problem 1 if it satisfies the following two conditions.
- (i) $\theta_\tau^* c = h$.
- (ii) If $d \in \mathbb{K}^n$, $d \neq c$, satisfies $\theta_\tau^* d = h$, then $\|c\|_0 < \|d\|_0$.
In 1995, Natarajan showed that Problem 1 is NP-hard [3]. Therefore a solution to Problem 1 has to be obtained using other methods. The body of work built around Problem 1 is known as sparseland (a term due to Elad [4]), compressive sensing, or compressed sensing.
As the operator $\theta_\tau^*$ is surjective, for a given $h \in \mathcal{H}$ there is always a $d \in \mathbb{K}^n$ such that $\theta_\tau^* d = h$. Thus the central question is when the solution to Problem 1 is unique. One of the greatest results of Donoho and Elad [5] in this regard uses the notion of spark, defined as follows. In this paper, given a subset $M \subseteq \{1, \dots, n\}$, the cardinality of $M$ is denoted by $|M|$.
Definition 1.
[5] Given a normalized frame $\{\tau_j\}_{j=1}^n$ for $\mathcal{H}$, the spark of $\{\tau_j\}_{j=1}^n$ is defined as
$$\operatorname{spark}(\{\tau_j\}_{j=1}^n) := \min\{|M| : M \subseteq \{1, \dots, n\}, \ \{\tau_j\}_{j \in M} \text{ is linearly dependent}\}.$$
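The definition can be checked directly on small examples. The following brute-force sketch (our own illustration, not an algorithm from [5]; computing the spark is itself intractable in general) examines subsets in order of increasing cardinality:

```python
from itertools import combinations

import numpy as np

def spark(tau, tol=1e-10):
    """Smallest cardinality of a linearly dependent subset of the rows of tau.

    tau: (n, d) array whose rows are the frame vectors.
    Brute force over all subsets, so only feasible for small n.
    """
    n = tau.shape[0]
    for size in range(1, n + 1):
        for idx in combinations(range(n), size):
            # The chosen rows are linearly dependent iff their rank < size.
            if np.linalg.matrix_rank(tau[list(idx)], tol=tol) < size:
                return size
    return np.inf  # all subsets independent (impossible when n > d)

# Standard basis of R^2 together with (1,1)/sqrt(2): every pair of these
# three unit vectors is independent, but the whole set is dependent.
frame = np.array([[1.0, 0.0], [0.0, 1.0], [1 / np.sqrt(2), 1 / np.sqrt(2)]])
assert spark(frame) == 3
```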
In 2003, Donoho and Elad derived the following breakthrough spark uncertainty principle [5].
Theorem 2.
[5] (Donoho-Elad Spark Uncertainty Principle) Let $\{\tau_j\}_{j=1}^n$ be a normalized frame for $\mathcal{H}$. If $c, d \in \mathbb{K}^n$ are distinct and $\theta_\tau^* c = \theta_\tau^* d$, then
$$\|c\|_0 + \|d\|_0 \geq \operatorname{spark}(\{\tau_j\}_{j=1}^n).$$
In the same paper [5], Donoho and Elad also gave a characterization of the solution of Problem 1 using the spark.
Theorem 3.
[5,6] (Donoho-Elad Spark Sparsity Theorem) Let $\{\tau_j\}_{j=1}^n$ be a normalized frame for $\mathcal{H}$ and let $k \in \mathbb{N}$.
- (i) For every $h \in \mathcal{H}$, there exists at most one vector $c \in \mathbb{K}^n$ such that $h = \theta_\tau^* c$ and $\|c\|_0 \leq k$ if and only if $\operatorname{spark}(\{\tau_j\}_{j=1}^n) > 2k$.
- (ii) If $h \in \mathcal{H}$ can be written as $h = \theta_\tau^* c$ for some $c \in \mathbb{K}^n$ satisfying
$$\|c\|_0 < \frac{\operatorname{spark}(\{\tau_j\}_{j=1}^n)}{2},$$
then $c$ is the unique solution to Problem 1.
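Theorem 3 (ii) can be verified numerically on a small frame (our own sketch; the helper names are ours, and the exhaustive search is feasible only for tiny examples):

```python
from itertools import combinations

import numpy as np

def spark(tau, tol=1e-10):
    """Brute-force spark of the frame whose vectors are the rows of tau."""
    n = tau.shape[0]
    for size in range(1, n + 1):
        for idx in combinations(range(n), size):
            if np.linalg.matrix_rank(tau[list(idx)], tol=tol) < size:
                return size
    return np.inf

def sparsest_support_size(tau, h, tol=1e-8):
    """Smallest k such that h = theta*(d) for some d with exactly k nonzero entries."""
    n = tau.shape[0]
    for size in range(0, n + 1):
        for idx in combinations(range(n), size):
            if size == 0:
                if np.allclose(h, 0.0, atol=tol):
                    return 0
                continue
            sub = tau[list(idx)]  # (size, dim) selected frame vectors
            # Solve sum_{j in idx} d_j tau_j = h in the least-squares sense
            # and accept this support size if the fit is exact.
            coeffs, *_ = np.linalg.lstsq(sub.T, h, rcond=None)
            if np.allclose(sub.T @ coeffs, h, atol=tol):
                return size
    return None

s = 1 / np.sqrt(3)
# Frame for R^3: the standard basis plus (1,1,1)/sqrt(3); its spark is 4.
tau = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [s, s, s]])
assert spark(tau) == 4

# h = tau_1 has the 1-sparse representation c = (1, 0, 0, 0), and
# ||c||_0 = 1 < spark/2 = 2, so Theorem 3 (ii) says this is the unique
# solution of Problem 1; the exhaustive search confirms nothing sparser exists.
assert sparsest_support_size(tau, np.array([1.0, 0.0, 0.0])) == 1
```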
In this note, we show that Definition 1 can be extended considerably. Using this, we show that Theorems 2 and 3 have continuous extensions.
2. Continuous Spark
Let $(\Omega, \mu)$ be a measure space and let
$$\|f\|_0 := \mu(\{\alpha \in \Omega : f(\alpha) \neq 0\}) \quad \text{for every measurable } f : \Omega \to \mathbb{K}.$$
Let $\mathcal{V}$ be a vector space over $\mathbb{K}$ and let $\mathcal{W}$ be a subspace of the space of measurable functions from $\Omega$ to $\mathbb{K}$. Given a linear map $A : \mathcal{W} \to \mathcal{V}$, we define the spark of $A$ as
$$\operatorname{spark}(A) := \inf\{\|f\|_0 : f \in \mathcal{W}, \ Af = 0, \ f \neq 0\}.$$
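For instance (a consistency check we add here; it is implicit in the claim that Definition 1 is being extended), taking $\Omega = \{1, \dots, n\}$ with the counting measure recovers Definition 1:

```latex
% Let \mu be the counting measure on \Omega = \{1, \dots, n\},
% \mathcal{W} = \mathbb{K}^n, and A = \theta_\tau^* the synthesis operator of a
% normalized frame \{\tau_j\}_{j=1}^n. Then \|d\|_0 = \mu(\{j : d_j \neq 0\})
% counts the nonzero entries of d, and
\operatorname{spark}(\theta_\tau^*)
  = \min\{\|d\|_0 : d \in \mathbb{K}^n,\ \theta_\tau^* d = 0,\ d \neq 0\}
  = \operatorname{spark}(\{\tau_j\}_{j=1}^n),
% since \{\tau_j\}_{j \in M} is linearly dependent precisely when some nonzero
% d supported on M satisfies \sum_{j \in M} d_j \tau_j = 0.
```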
We now have a continuous version of Theorem 2.
Theorem 4.
(Continuous Donoho-Elad Spark Uncertainty Principle) Let $A : \mathcal{W} \to \mathcal{V}$ be a linear map. If $f, g \in \mathcal{W}$ are distinct and $Af = Ag$, then
$$\|f\|_0 + \|g\|_0 \geq \operatorname{spark}(A).$$
Proof. Since $A(f - g) = 0$ and $f - g \neq 0$, the definition of spark and the inclusion $\operatorname{supp}(f - g) \subseteq \operatorname{supp}(f) \cup \operatorname{supp}(g)$ give
$$\operatorname{spark}(A) \leq \|f - g\|_0 \leq \|f\|_0 + \|g\|_0.$$
□
We set up the most general version of Problem 1 as follows.
Problem 5.
Let $A : \mathcal{W} \to \mathcal{V}$ be a linear map. Given $v \in \mathcal{V}$, solve
$$\min\{\|f\|_0 : f \in \mathcal{W}, \ Af = v\}.$$
The following is a continuous version of Theorem 3.
Theorem 6.
(Continuous Donoho-Elad Spark Sparsity Theorem) Let $A : \mathcal{W} \to \mathcal{V}$ be a linear map and let $k \geq 0$.
- (i) If $\operatorname{spark}(A) > 2k$, then for every $v \in \mathcal{V}$, there exists at most one vector $f \in \mathcal{W}$ such that $Af = v$ and $\|f\|_0 \leq k$.
- (ii) If $v \in \mathcal{V}$ can be written as $v = Af$ for some $f \in \mathcal{W}$ satisfying
$$\|f\|_0 < \frac{\operatorname{spark}(A)}{2},$$
then $f$ is the unique solution to Problem 5.
Proof.
- (i) Let $v \in \mathcal{V}$ and let $f, g \in \mathcal{W}$ satisfy $Af = v$, $Ag = v$ and $\|f\|_0 \leq k$, $\|g\|_0 \leq k$. We claim that $f = g$. If this is not true, then $f - g \neq 0$. Then $A(f - g) = 0$ with $f - g \neq 0$. But then the definition of spark gives
$$\operatorname{spark}(A) \leq \|f - g\|_0 \leq \|f\|_0 + \|g\|_0 \leq 2k < \operatorname{spark}(A),$$
which is impossible. Hence the claim holds.
- (ii) Let $v \in \mathcal{V}$ and let $f \in \mathcal{W}$ satisfy
$$v = Af, \quad \|f\|_0 < \frac{\operatorname{spark}(A)}{2}.$$
Let $g \in \mathcal{W}$ be such that $Ag = v$ and $g \neq f$. Then we have
$$A(f - g) = Af - Ag = v - v = 0.$$
Hence $f - g \neq 0$ and $A(f - g) = 0$. The definition of spark then gives
$$\operatorname{spark}(A) \leq \|f - g\|_0 \leq \|f\|_0 + \|g\|_0 < \frac{\operatorname{spark}(A)}{2} + \|g\|_0,$$
so $\|g\|_0 > \operatorname{spark}(A)/2 > \|f\|_0$. Hence $f$ is the unique solution to Problem 5. □
In view of Theorem 3, we have the following problem: for which measure spaces does the converse of (i) in Theorem 6 hold? Note that the proof of the converse of (i) in Theorem 3 is based on the technique of writing a $2k$-sparse vector as a difference of two $k$-sparse vectors [6], which we are unable to do in the continuous setting.
References
- Benedetto, J.J.; Fickus, M. Finite normalized tight frames. Adv. Comput. Math. 2003, 18, 357–385.
- Han, D.; Kornelson, K.; Larson, D.; Weber, E. Frames for Undergraduates; Student Mathematical Library, Vol. 40; American Mathematical Society: Providence, RI, 2007.
- Natarajan, B.K. Sparse approximate solutions to linear systems. SIAM J. Comput. 1995, 24, 227–234.
- Elad, M. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing; Springer: New York, 2010.
- Donoho, D.L.; Elad, M. Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell_1$ minimization. Proc. Natl. Acad. Sci. USA 2003, 100, 2197–2202.
- Davenport, M.A.; Duarte, M.F.; Eldar, Y.C.; Kutyniok, G. Introduction to compressed sensing. In Compressed Sensing; Cambridge University Press: Cambridge, 2012; pp. 1–64.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).