On Verifiable Sufficient Conditions for Sparse Signal Recovery via $\ell_1$ Minimization
Abstract
We discuss necessary and sufficient conditions for a sensing matrix to be "$s$-good", that is, to allow for exact $\ell_1$ recovery of sparse signals with $s$ nonzero entries when no measurement noise is present. We then express the error bounds for imperfect $\ell_1$ recovery (nonzero measurement noise, nearly $s$-sparse signal, near-optimal solution of the optimization problem yielding the $\ell_1$ recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$ recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-good. We also establish instructive links between our approach and the basic concepts of the Compressed Sensing theory, like the Restricted Isometry or Restricted Eigenvalue properties.
1 Introduction
In the existing literature on sparse signal recovery and Compressed Sensing (see [References] and references therein) the emphasis is on recovering a sparse signal $x \in \mathbb{R}^n$ from an observation $y \in \mathbb{R}^m$ (in this context $m \le n$):

(1.1) $y = Ax + u,$

where $A$ is a given $m \times n$ sensing matrix, $u$ is the observation error, $\|\cdot\|$ is a given norm on $\mathbb{R}^m$, and $\epsilon \ge 0$ is a given upper bound on the error magnitude $\|u\|$, measured in the norm $\|\cdot\|$. One of the most popular (computationally tractable) estimators which is well suited for recovering sparse signals is the $\ell_1$ recovery given by

(1.2) $\widehat{x} \in \operatorname{Argmin}_z \left\{ \|z\|_1 : \|Az - y\| \le \epsilon \right\}.$
The existing Compressed Sensing theory focuses on this estimator, and since our main motivation comes from Compressed Sensing, we will also concentrate on this particular recovery. It is worth mentioning that other closely related estimation techniques are used in the statistical community; the most renowned examples are the "Dantzig Selector" (cf. [5]), given by

(1.3) $\widehat{x} \in \operatorname{Argmin}_z \left\{ \|z\|_1 : \|A^T (Az - y)\|_\infty \le \delta \right\},$

and the Lasso estimator, see [21, 4], which under the sparsity scenario exhibits similar behavior.
The theory offers strong results which state, in particular, that if $x$ is $s$-sparse (i.e., has at most $s$ nonzero entries) and $A$ possesses a certain well-defined property, then the $\ell_1$ recovery $\widehat{x}$ is close to $x$, provided the observation error is small. For instance, necessary and sufficient conditions for exactness of $\ell_1$ recovery in the case of noiseless observation (when $u = 0$ and $\epsilon = 0$) have been established in [23, 16, 15]. Specifically, in [23] it is shown that $x$ is the unique solution of the noiseless $\ell_1$ recovery problem

(1.4) $\min_z \left\{ \|z\|_1 : Az = Ax \right\}$

for every $s$-sparse $x$ if and only if the kernel of the sensing matrix $A$ is strictly $s$-balanced, the latter meaning that for every nonzero $z \in \operatorname{Ker} A$ and any index set $I$ of cardinality at most $s$ it holds

(1.5) $\sum_{i \in I} |z_i| < \frac{1}{2} \|z\|_1$

(that the above condition is sufficient for the $\ell_1$ recovery to be exact in the noiseless case was already stated in [14]).
Some particularly impressive results make use of the Restricted Isometry property, which is as follows: a matrix $A$ is said to possess the Restricted Isometry property $\mathrm{RI}(\delta, k)$ with parameters $\delta \in (0, 1)$ and positive integer $k$, if for every $k$-sparse vector $z$

(1.6) $(1 - \delta) \|z\|_2^2 \le \|Az\|_2^2 \le (1 + \delta) \|z\|_2^2.$

For instance, the following result is well known ([10, Theorem 1.2] or [9, Theorem 4.1]): let $\|\cdot\|$ in (1.1) be the Euclidean norm $\|\cdot\|_2$, and let the sensing matrix $A$ satisfy the $\mathrm{RI}(\delta, 2s)$ property with $\delta < \sqrt{2} - 1$. Then

(1.7) $\|\widehat{x} - x\|_2 \le \frac{C_0}{\sqrt{s}} \|x - x^s\|_1 + C_1 \epsilon,$

where $x^s$ is the best $s$-sparse approximation of $x$ and the constants $C_0$, $C_1$ depend solely on $\delta$.
On the negative side, random matrices are the only known matrices which possess the $\mathrm{RI}(\delta, k)$ property for such large values of $k$ (namely, $k = O(m/\ln(n/m))$). For all known deterministic families of matrices provably possessing the RI property, one has $k = O(\sqrt{m})$ (see [13]), which is essentially worse than the bound promised by the RI-based theory. Moreover, the RI property itself is "intractable": the only currently available technique to verify the property for a matrix amounts to testing all submatrices composed of $k$ of its columns. In other words, given a large sensing matrix $A$, one can never be sure that it possesses the RI property with given parameters.
Certainly, the RI property is not the only property of a sensing matrix which allows one to obtain good error bounds for $\ell_1$ recovery of sparse signals. Two related characteristics are the Restricted Eigenvalue assumption introduced in [4] and the Restricted Correlation assumption of [3], among others. However, they share with the RI property not only nice consequences such as (1.7), but also the drawback of being computationally intractable. To summarize our very restricted and sloppy description of the existing results on $\ell_1$ recovery: neither strict balancedness, nor the Restricted Isometry, Restricted Eigenvalue, or Restricted Correlation assumptions and the like, allow one to answer affirmatively the question of whether, for a given sensing matrix $A$, accurate $\ell_1$ recovery of sparse signals with a given number $s$ of nonzero entries is possible.
Now, suppose we face the following problem: given a sensing matrix which we are allowed to modify in certain ways to obtain a new matrix, our objective is, depending on the problem's specifications, either the maximal improvement, or the minimal deterioration, of the sensing properties of the matrix with respect to sparse $\ell_1$ recovery. As a simple example, one can think, e.g., of a 2- or 3-dimensional grid of possible locations of signal sources and a grid of possible locations of sensors. A sensor at a given location measures a known linear form, depending on the location, of the signals emitted at the nodes of the source grid, and the goal is to place a given number $m$ of sensors at the nodes of the sensor grid in order to be able to recover, via $\ell_1$ recovery, all signals with a given number $s$ of nonzero entries. Formally speaking, we are given an $M \times n$ matrix $B$, and our goal is to extract from it an $m \times n$ submatrix $A$ which is $s$-good, i.e., such that whenever the true signal $x$ in (1.1) is $s$-sparse and there is no observation error ($u = 0$, $\epsilon = 0$), the $\ell_1$ recovery (1.2) recovers $x$ exactly. To the best of our knowledge, the only existing computationally tractable techniques which allow one to approach such a synthesis problem are those based on the mutual incoherence
(1.8) $\mu(A) = \max_{i \ne j} \frac{|A_i^T A_j|}{\|A_i\|_2 \|A_j\|_2}$

of a sensing matrix $A$ with columns $A_1, \ldots, A_n$ (assumed to be nonzero). Clearly, the mutual incoherence can be easily computed even for large matrices. Moreover, bounds of the same type as in (1.7) can be obtained for matrices with small mutual incoherence: a matrix with mutual incoherence $\mu = \mu(A)$ and columns of unit $\ell_2$ norm satisfies assumption (1.6) with $\delta = (k-1)\mu$. Unfortunately, the latter relation implies that $\mu$ should be very small (of order $1/k$) to certify the possibility of accurate recovery of nontrivial sparse signals, so that estimates of the "goodness" of sensing matrices for $\ell_1$ recovery based on mutual incoherence are very conservative.
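For concreteness, a minimal sketch of evaluating the mutual incoherence under the normalized definition above (the function name is ours):

```python
import numpy as np

def mutual_incoherence(A: np.ndarray) -> float:
    """Largest normalized inner product between two distinct columns of A."""
    G = A.T @ A                                  # Gram matrix of the columns
    norms = np.sqrt(np.diag(G))
    C = np.abs(G) / np.outer(norms, norms)       # column correlations
    np.fill_diagonal(C, 0.0)                     # ignore the case i == j
    return float(C.max())

# Orthonormal columns are perfectly incoherent; a repeated column
# drives the incoherence up to its maximal value 1.
I = np.eye(3)
assert mutual_incoherence(I) < 1e-12
B = np.column_stack([I[:, 0], I[:, 0], I[:, 1]])
assert abs(mutual_incoherence(B) - 1.0) < 1e-12
```

The $O(n^2)$ cost of forming the Gram matrix is exactly why this quantity, unlike the RI property, remains computable for large matrices.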
The goal of this paper is to provide new computationally tractable sufficient conditions for sparse recovery.
The overview of our main results is as follows.

Let $\|x\|_{s,1}$, for $s \le n$, stand for the sum of the $s$ maximal magnitudes of components of $x \in \mathbb{R}^n$. Set

$\gamma_s(A) = \max_v \left\{ \|v\|_{s,1} : Av = 0,\ \|v\|_1 \le 1 \right\}.$

Starting from optimality conditions for the problem (1.4) of noiseless $\ell_1$ recovery, we show that $A$ is $s$-good if and only if $\gamma_s(A) < 1/2$, thus recovering some of the results of [23]. While $\gamma_s(A)$ is fully responsible for ideal recovery of $s$-sparse signals under ideal circumstances, when there is no observation error in (1.1) and (1.2) is solved to precise optimality, in order to cope with the case of imperfect recovery (nonzero observation error, nearly $s$-sparse true signal, (1.2) not solved to exact optimality), we embed the characteristic $\gamma_s(A)$ into a single-parametric family of characteristics $\gamma_s(A, \beta)$, $\beta \in (0, \infty]$. Here

$\gamma_s(A, \beta) = \max_v \left\{ \|v\|_{s,1} - \beta \|Av\| : \|v\|_1 \le 1 \right\}$

(note that $\gamma_s(A, \beta)$ is nonincreasing in $\beta$ and is equal to $\gamma_s(A)$ for all large enough values of $\beta$). We then demonstrate (Section 3) that whenever $\beta$ is such that $\gamma_s(A, \beta) < 1/2$, the error of imperfect $\ell_1$ recovery admits an explicit upper bound, similar in structure to the RI-based bound (1.7), governed by $\beta$, $\gamma_s(A, \beta)$, the measurement error, and the inaccuracy to which (1.2) is solved.
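The norm $\|x\|_{s,1}$ central to this overview is trivial to evaluate; a one-line sketch (the function name is ours):

```python
import numpy as np

def s1_norm(x: np.ndarray, s: int) -> float:
    """||x||_{s,1}: sum of the s largest magnitudes of entries of x."""
    return float(np.sort(np.abs(x))[::-1][:s].sum())

x = np.array([3.0, -1.0, 4.0, 1.0, -5.0])
assert s1_norm(x, 2) == 9.0           # 5 + 4
assert s1_norm(x, x.size) == 14.0     # s = n recovers the l1 norm
```

Note the two extreme cases: $\|x\|_{n,1} = \|x\|_1$ and $\|x\|_{1,1} = \|x\|_\infty$, so the family interpolates between the two norms appearing throughout the paper.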

The characteristic $\gamma_s(A, \beta)$ is still difficult to compute. In Section 4, we develop efficiently computable lower and upper bounds on $\gamma_s(A, \beta)$. In particular, we show that a quantity $\alpha_s(A, \beta)$, given by an explicit convex optimization problem (here $\|\cdot\|_*$ is the norm conjugate to $\|\cdot\|$), is an upper bound on $\gamma_s(A, \beta)$.

This bound provides us with an efficiently verifiable (although perhaps conservative) sufficient condition for $s$-goodness of $A$, namely, $\alpha_s(A, \beta) < 1/2$. We demonstrate that our verifiable sufficient conditions for $s$-goodness are less restrictive than those based on mutual incoherence. On the other hand, the proposed lower bounds on $\gamma_s(A, \beta)$ allow us to bound from above the values of $s$ for which $A$ is $s$-good.
We also study limitations of our sufficient conditions for $s$-goodness: unfortunately, it turns out that these conditions, as applied to an $m \times n$ matrix $A$, cannot justify its $s$-goodness when $s$ is larger than $O(\sqrt{m})$, unless $A$ is "nearly square". While being much worse than the theoretically achievable, for appropriate $A$'s, level $s = O(m/\ln(n/m))$ for which $A$ may be $s$-good, this "limit of performance" of our machinery nearly coincides with the best known values of $s$ for which explicitly given individual $s$-good sensing matrices are known.

In Section 5, we investigate the implications of the RI property in our context. While these implications do not contribute to the "constructive" part of our results (since the RI property is difficult to verify), they certainly contribute to a better understanding of our approach and to integrating it into the existing Compressed Sensing theory. The most instructive result of this Section is as follows: whenever $A$ possesses the RI property with appropriate parameters, so that $A$ is in fact $s$-good, our verifiable sufficient conditions do certify that $A$ is $O(\sqrt{s})$-good; that is, they guarantee "at least the square root of the true level of goodness".

Section 6 presents some very preliminary numerical illustrations of our machinery. These illustrations, in particular, give experimental evidence of how significantly this machinery can outperform the one based on mutual incoherence, which is the only computationally tractable way to certify $s$-goodness existing in the literature, to the best of our knowledge.

When this paper was finished, we became aware of the preprint [12], which contains results closely related to some of those in our paper. The authors of [12] have "extracted" from [11] a sufficient condition for $s$-goodness and proposed an efficiently computable upper bound based on semidefinite relaxation. This bound is essentially different from ours, and it could be interesting to find out whether one of these bounds is "stronger" than the other.
2 Characterizing $s$-goodness

2.1 Characteristics $\gamma_s$ and $\widehat{\gamma}_s$: definition and basic properties
The "minimal" requirement on a sensing matrix $A$ to be suitable for recovering $s$-sparse signals (that is, those with at most $s$ nonzero entries) via $\ell_1$ minimization is as follows: whenever the observation $y$ in (1.2) is noiseless and comes from an $s$-sparse signal $x$, that is, $y = Ax$, $x$ should be the unique optimal solution of the optimization problem in (1.2) where $\epsilon$ is set to 0. This observation motivates the following
Definition 1
Let $A$ be an $m \times n$ matrix and $s$ be an integer, $0 \le s \le n$. We say that $A$ is $s$-good if for every $s$-sparse vector $x \in \mathbb{R}^n$, $x$ is the unique optimal solution to the optimization problem

(2.9) $\min_z \left\{ \|z\|_1 : Az = Ax \right\}.$

Let $s_*(A)$ be the largest $s$ for which $A$ is $s$-good; this is a well-defined integer, since for trivial reasons every matrix is $0$-good. It is immediately seen that $s_*(A) \le m$ for every $m \times n$ matrix $A$ with $n > m$.
From now on, $\|\cdot\|$ is the norm on $\mathbb{R}^m$ from (1.1), and $\|\cdot\|_*$ is its conjugate norm:

$\|u\|_* = \max_y \left\{ u^T y : \|y\| \le 1 \right\}.$
We are about to introduce two quantities which are "responsible" for $s$-goodness.
Definition 2
Let $A$ be an $m \times n$ matrix, $s \le n$ be a nonnegative integer, and $\beta \in (0, \infty]$. We define the quantities $\widehat{\gamma}_s(A, \beta)$, $\gamma_s(A, \beta)$ as follows:

(i) $\widehat{\gamma}_s(A, \beta)$ is the infimum of $\gamma \ge 0$ such that for every vector $z \in \mathbb{R}^n$ with $s$ nonzero entries, equal to $\pm 1$, there exists a vector $y \in \mathbb{R}^m$ such that

(2.10) $\|y\|_* \le \beta, \quad (A^T y)_i = z_i \ \text{for } i \in \operatorname{supp}(z), \quad |(A^T y)_i| \le \gamma \ \text{for } i \notin \operatorname{supp}(z).$

If for some $z$ as above there does not exist $y$ with $\|y\|_* \le \beta$ such that $A^T y$ coincides with $z$ on the support of $z$, we set $\widehat{\gamma}_s(A, \beta) = +\infty$.

(ii) $\gamma_s(A, \beta)$ is the infimum of $\gamma \ge 0$ such that for every vector $z \in \mathbb{R}^n$ with $s$ nonzero entries, equal to $\pm 1$, there exists a vector $y \in \mathbb{R}^m$ such that

(2.11) $\|y\|_* \le \beta, \quad \|A^T y - z\|_\infty \le \gamma.$

To save notation, we will skip indicating $\beta$ when $\beta = \infty$, thus writing $\gamma_s(A)$ instead of $\gamma_s(A, \infty)$, and similarly for $\widehat{\gamma}_s$.
Several immediate observations are in order:
A. It is easily seen that the sets of values of $\gamma$ participating in (i) and (ii) are closed, so that when $\widehat{\gamma}_s(A, \beta) < +\infty$, then for every vector $z$ with $s$ nonzero entries, equal to $\pm 1$, there exists $y$ such that

(2.12) $\|y\|_* \le \beta, \quad (A^T y)_i = z_i \ \text{for } i \in \operatorname{supp}(z), \quad |(A^T y)_i| \le \widehat{\gamma}_s(A, \beta) \ \text{for } i \notin \operatorname{supp}(z).$

Similarly, for every $z$ as above there exists $y$ such that

(2.13) $\|y\|_* \le \beta, \quad \|A^T y - z\|_\infty \le \gamma_s(A, \beta).$

B. The quantities $\gamma_s(A, \beta)$ and $\widehat{\gamma}_s(A, \beta)$ are convex nonincreasing functions of $\beta$, $0 < \beta \le \infty$. Moreover, from A it follows that for given $A$, $s$ and all large enough values of $\beta$ one has $\gamma_s(A, \beta) = \gamma_s(A)$ and $\widehat{\gamma}_s(A, \beta) = \widehat{\gamma}_s(A)$.

C. Taking into account that the set $\{A^T y : \|y\|_* \le \beta\}$ is convex, it follows that if $\widehat{\gamma}_s(A, \beta) < +\infty$, then the vectors $y$ satisfying (2.12) exist for every $s$-sparse vector $z$ with $\|z\|_\infty \le 1$, not only for vectors with exactly $s$ nonzero entries equal to $\pm 1$. Similarly, vectors $y$ satisfying (2.13) exist for all $s$-sparse $z$ with $\|z\|_\infty \le 1$. As a byproduct of these observations, we see that $\gamma_s(A, \beta)$ and $\widehat{\gamma}_s(A, \beta)$ are nondecreasing in $s$.
Our interest in the quantities $\gamma_s$ and $\widehat{\gamma}_s$ stems from the following
Theorem 1
Let $A$ be an $m \times n$ matrix and $s$ be a nonnegative integer.

(i) $A$ is $s$-good if and only if $\gamma_s(A) < 1/2$.

(ii) For every $\beta \in (0, \infty]$ one has
(2.14) 
Theorem 1 explains the importance of the characteristic $\widehat{\gamma}_s$ in the context of $\ell_1$ recovery. However, it is technically more convenient to deal with the quantity $\gamma_s$.
2.2 Equivalent representation of $\gamma_s$

According to Theorem 1 (ii), the quantities $\gamma_s$ and $\widehat{\gamma}_s$ are tightly related. In particular, the equivalent characterization of $s$-goodness in terms of $\widehat{\gamma}_s$ reads as follows: $A$ is $s$-good if and only if $\widehat{\gamma}_s(A) < 1$.

In the sequel, we shall heavily utilize an equivalent representation of $\gamma_s(A, \beta)$ which, as we shall see in Section 4, has important algorithmic consequences. The representation is as follows:
Theorem 2
Consider the polytope

$P = \left\{ v \in \mathbb{R}^n : \|v\|_1 \le 1 \right\}.$

One has

(2.15) $\gamma_s(A, \beta) = \max_v \left\{ \|v\|_{s,1} - \beta \|Av\| : v \in P \right\}.$

In particular,

(2.16) $\gamma_s(A) = \max_v \left\{ \|v\|_{s,1} : Av = 0,\ v \in P \right\}.$
Proof. By definition, $\gamma_s(A, \beta)$ is the smallest $\gamma \ge 0$ such that the closed convex set $C_\gamma = B + \gamma D$, where $B = \{A^T y : \|y\|_* \le \beta\}$ and $D = \{h : \|h\|_\infty \le 1\}$, contains all vectors $z$ with $s$ nonzero entries, equal to $\pm 1$. This is exactly the same as to say that $C_\gamma$ contains the convex hull $Z_s$ of these vectors; the latter is exactly $\{z : \|z\|_\infty \le 1, \|z\|_1 \le s\}$. Now, $C_\gamma$ satisfies the inclusion $Z_s \subseteq C_\gamma$ if and only if the support function of $Z_s$ is majorized by that of $C_\gamma$, namely, for every $v \in \mathbb{R}^n$ one has

(2.17) $\|v\|_{s,1} \le \beta \|Av\| + \gamma \|v\|_1,$

with the convention that when $\beta = \infty$, $\beta \|Av\|$ is $+\infty$ or $0$ depending on whether $\|Av\| > 0$ or $\|Av\| = 0$. That is, $Z_s \subseteq C_\gamma$ if and only if

$\max_v \left\{ \|v\|_{s,1} - \beta \|Av\| - \gamma \|v\|_1 \right\} \le 0.$

By homogeneity with respect to $v$, this is equivalent to

$\max_v \left\{ \|v\|_{s,1} - \beta \|Av\| : \|v\|_1 \le 1 \right\} \le \gamma.$

Thus, $\gamma_s(A, \beta)$ is the smallest $\gamma$ for which the concluding inequality takes place, and we arrive at (2.15), (2.16).
Recall that for $s \le n$, $\|v\|_{s,1}$ is the sum of the $s$ largest magnitudes of entries in $v$, or, equivalently,

$\|v\|_{s,1} = \max_z \left\{ v^T z : \|z\|_\infty \le 1,\ \|z\|_1 \le s \right\}.$
Corollary 1
For an $m \times n$ matrix $A$ one has

$\gamma_s(A) = \max_v \left\{ \|v\|_{s,1} : Av = 0,\ \|v\|_1 \le 1 \right\}.$

As a result, a matrix $A$ is $s$-good if and only if the maximum of the $\|\cdot\|_{s,1}$ norms of vectors $v \in \operatorname{Ker} A$ with $\|v\|_1 \le 1$ is $< 1/2$.
Note that (2.15) and (2.16) can be seen as an equivalent definition of $\gamma_s$, and one can easily prove Corollary 1 without any reference to Theorem 1, and thus without the necessity even to introduce the characteristic $\widehat{\gamma}_s$. However, we believe that from the methodological point of view the result of Theorem 1 is important, since it reveals the "true origin" of the quantities $\gamma_s$ and $\widehat{\gamma}_s$ as entities coming from the optimality conditions for the problem (2.9).
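The norm $\|\cdot\|_{s,1}$ figuring in Corollary 1 is LP-representable via the standard identity $\|v\|_{s,1} = \min_{t \ge 0} \{ s t + \sum_i \max(|v_i| - t, 0) \}$ (not stated explicitly in the text, but this is what makes the quantities of Section 4 computable by linear programming). A quick numerical check of the identity:

```python
import numpy as np

def s1_norm(v, s):
    """||v||_{s,1}: sum of the s largest magnitudes of entries of v."""
    return float(np.sort(np.abs(v))[::-1][:s].sum())

def s1_norm_variational(v, s):
    """min over t >= 0 of s*t + sum_i max(|v_i| - t, 0); the convex
    piecewise-linear objective attains its minimum at a breakpoint,
    namely at t = 0 or t = |v_i| for some i."""
    a = np.abs(v)
    candidates = np.concatenate([[0.0], a])
    return min(s * t + np.maximum(a - t, 0.0).sum() for t in candidates)

rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.normal(size=7)
    for s in range(1, 8):
        assert abs(s1_norm(v, s) - s1_norm_variational(v, s)) < 1e-9
```

The minimum is attained at $t$ equal to the $s$-th largest magnitude, which is why scanning the breakpoints suffices.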
3 Error bounds for imperfect $\ell_1$ recovery via $\gamma_s$
We have seen that the quantity $\gamma_s(A)$ (or, equivalently, $\widehat{\gamma}_s(A)$) is responsible for $s$-goodness of a sensing matrix $A$, that is, for the precise $\ell_1$ recovery of an $s$-sparse signal in the "ideal case" when there is no measurement error and the optimization problem (2.9) is solved to exact optimality. It appears that the same quantities control the error of $\ell_1$ recovery in the case when the vector $x$ is not $s$-sparse and the problem (2.9) is not solved to exact optimality. To see this, let $x^s$, $0 \le s \le n$, stand for the best, in terms of the $\ell_1$ norm, $s$-sparse approximation of $x$. In other words, $x^s$ is the vector obtained from $x$ by zeroing all coordinates except for the $s$ largest in magnitude.
Proposition 1
Proof. Let $v = \widehat{x} - x$, and let $I$ be the set of indices of the $s$ largest in magnitude entries of $x$ (i.e., the support of $x^s$). Denote by $v_I$ (resp., $x_I$) the vector obtained from $v$ (resp., $x$) by replacing by zero all coordinates with indices outside of $I$. As $Av = 0$, by Corollary 1,
On the other hand, $x$ is a feasible solution to (2.9), so $\|\widehat{x}\|_1 \le \|x\|_1$, whence
or, equivalently,
Thus,
and, as ,
We switch now to the properties of approximate solutions to the problem

(3.18) $\min_z \left\{ \|z\|_1 : \|Az - y\| \le \epsilon \right\},$

where $y = Ax + u$ with $\|u\| \le \epsilon$. We are about to show that in the "non-ideal case", when $x$ is "nearly $s$-sparse" and (3.18) is solved to near-optimality, the error of the $\ell_1$ recovery remains "under control": it admits an explicit upper bound governed by $\gamma_s(A, \beta)$ with a finite $\beta$. The corresponding result is as follows:
Theorem 3
Proof. Since $\|Ax - y\| = \|u\| \le \epsilon$, $x$ is a feasible solution to (3.18) and therefore its optimal value is at most $\|x\|_1$, whence, by (3.19),
(3.21) 
Let $I$ be the set of indices of the $s$ largest in magnitude entries in $x$. As in the proof of Proposition 1, we denote by $v = \widehat{x} - x$ the error of the recovery, and by $v_I$ (resp., $x_I$) the vector obtained from $v$ (resp., $x$) by replacing by zero all coordinates with indices outside of $I$. By (2.15) we have
(3.22) 
On the other hand, in exactly the same way as in the proof of Proposition 1 we conclude that
which combines with (3.22) to imply that
Since $\gamma_s(A, \beta) < 1/2$, this results in
which is (3.20).
The bound (3.20) can be easily rewritten in terms of $\widehat{\gamma}_s$ instead of $\gamma_s$.
The error bound (3.20) for imperfect $\ell_1$ recovery, while being in some respects weaker than the RI-based bound (1.7), is of the same structure as the latter bound: assuming $\beta < \infty$ and $\gamma_s(A, \beta) < 1/2$, the error of imperfect $\ell_1$ recovery can be bounded in terms of $\gamma_s(A, \beta)$, $\beta$, the measurement error $\epsilon$, the "tail" $\|x - x^s\|_1$ of the signal to be recovered, and the inaccuracy to which the estimate solves the program (3.18). The only flaw in this interpretation is that we need $\gamma_s(A, \beta) < 1/2$ with a finite $\beta$, while the "true" necessary and sufficient condition for $s$-goodness is $\gamma_s(A) < 1/2$, that is, $\gamma_s(A, \beta) < 1/2$ with $\beta = \infty$. As we know, $\gamma_s(A, \beta) = \gamma_s(A)$ for all finite "large enough" values of $\beta$, but we do not want the "large enough" values of $\beta$ to be really large, since the larger $\beta$ is, the worse the error bound (3.20) becomes. Thus, we arrive at the question of what is "large enough" in our context. Here are two simple results in this direction.
Proposition 2
Let be a sensing matrix of rank .
(i) Let . Then for every nonsingular submatrix of and every one has
(3.23) 
where is the minimal singular value of .
(ii) Let , and let for certain the image of the unit ball in under the mapping contain the ball . Then for every
(3.24) 
Proof. Given , let , so that for every vector with nonzero entries, equal to , there exists such that when and otherwise. All we need is to prove that in the situations of (i) and (ii) we have .
In the case of (i) we clearly have , whence , as claimed. In the case of (ii) we have , whence
where is due to the inclusion assumed in (ii). The resulting inequality implies that , as claimed.
4 Efficient bounding of $\gamma_s(A, \beta)$
In the previous section we have seen that the properties of a matrix $A$ relative to $\ell_1$ recovery are governed by the quantities $\gamma_s(A, \beta)$: the smaller they are, the better. While these quantities are difficult to compute, we are about to demonstrate, and this is the primary goal of our paper, that $\gamma_s(A, \beta)$ admits efficiently computable "nontrivial" upper and lower bounds.
4.1 Efficient lower bounding of $\gamma_s(A, \beta)$

Recall that $\gamma_s(A, \beta) \ge \gamma_s(A)$ for any $\beta$. Thus, in order to provide a lower bound for $\gamma_s(A, \beta)$ it suffices to supply such a bound for $\gamma_s(A)$. Theorem 2 suggests the following scheme for bounding $\gamma_s(A)$ from below. By (2.16) we have

$\gamma_s(A) = \max_v \left\{ f(v) := \|v\|_{s,1} : Av = 0,\ \|v\|_1 \le 1 \right\}.$

The function $f$ clearly is convex and efficiently computable: given $\bar{v}$ and solving the LP problem

$g(\bar{v}) \in \operatorname{Argmax}_g \left\{ \bar{v}^T g : \|g\|_\infty \le 1,\ \|g\|_1 \le s \right\},$

we get a linear form $g(\bar{v})^T v$ of $v$ which underestimates $f$ everywhere and coincides with $f$ when $v = \bar{v}$. Therefore the easily computable quantity $\max_v \{ g(\bar{v})^T v : Av = 0, \|v\|_1 \le 1 \}$ is a lower bound on $\gamma_s(A)$. We now can use the standard sequential convex approximation scheme for maximizing the convex function $f$ over the feasible set. Specifically, we run the recurrence

$v_{t+1} \in \operatorname{Argmax}_v \left\{ g(v_t)^T v : Av = 0,\ \|v\|_1 \le 1 \right\}, \quad t = 0, 1, \ldots,$

thus obtaining a nondecreasing sequence of lower bounds $f(v_t)$ on $\gamma_s(A)$. We can terminate this process when the improvement in the bounds falls below a given threshold, and we can make several runs starting from randomly chosen points $v_0$.
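The recurrence just described can be sketched as follows for $\beta = \infty$, so the feasible set is $\{v : Av = 0, \|v\|_1 \le 1\}$; the function names, the stopping rule, and the restart strategy are our illustrative choices, not the authors' code:

```python
import numpy as np
from scipy.optimize import linprog

def s1_norm(v, s):
    """||v||_{s,1}: sum of the s largest magnitudes of entries of v."""
    return float(np.sort(np.abs(v))[::-1][:s].sum())

def lp_oracle(A, g):
    """Solve max g^T v over {v : Av = 0, ||v||_1 <= 1}.
    Split v = p - q with p, q >= 0 so the l1 ball becomes linear."""
    m, n = A.shape
    c = np.concatenate([-g, g])                  # linprog minimizes
    res = linprog(c,
                  A_ub=np.ones((1, 2 * n)), b_ub=[1.0],
                  A_eq=np.hstack([A, -A]), b_eq=np.zeros(m),
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

def gamma_lower_bound(A, s, restarts=5, iters=20, seed=0):
    """Sequential convex approximation: a nondecreasing sequence of
    lower bounds on max ||v||_{s,1} over Ker A intersected with the
    unit l1 ball."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    best = 0.0
    for _ in range(restarts):
        # random s-sparse +-1 pattern as the initial linearization
        g = np.zeros(n)
        idx = rng.choice(n, size=s, replace=False)
        g[idx] = rng.choice([-1.0, 1.0], size=s)
        prev = -1.0
        for _ in range(iters):
            v = lp_oracle(A, g)
            val = s1_norm(v, s)
            if val <= prev + 1e-10:              # no more improvement
                break
            prev = val
            # subgradient of ||.||_{s,1} at v: signs of the s largest entries
            g = np.zeros(n)
            top = np.argsort(np.abs(v))[::-1][:s]
            g[top] = np.sign(v[top])
        best = max(best, prev)
    return best

# A matrix whose columns coincide is not even 1-good: the kernel
# vector (1/2, -1/2, 0) has l1 norm 1 and ||.||_{1,1} norm exactly 1/2.
A = np.array([[1.0, 1.0, 1.0]])
assert abs(gamma_lower_bound(A, 1) - 0.5) < 1e-6
```

Since each LP value $g(v_t)^T v_{t+1}$ is at least $g(v_t)^T v_t = f(v_t)$, the sequence of bounds is indeed nondecreasing, exactly as stated in the text.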
4.2 Efficient upper bounding of $\gamma_s(A)$

We have seen that the representation (2.16) suggests a computationally tractable scheme for bounding $\gamma_s(A)$ from below. In fact, the same representation allows for a tractable way to bound $\gamma_s(A)$ from above, which is as follows. Whatever be an $m \times n$ matrix $Y$, we clearly have

$\gamma_s(A) = \max_v \left\{ \|(I - Y^T A) v\|_{s,1} : Av = 0,\ \|v\|_1 \le 1 \right\},$

whence also

$\gamma_s(A) \le \max_v \left\{ \|(I - Y^T A) v\|_{s,1} : \|v\|_1 \le 1 \right\}.$

The right hand side in this relation is easily computable, since the objective in the right hand side problem is convex in $v$, and the domain of $v$ in this problem is the convex hull of just $2n$ points $\pm e_j$, $1 \le j \le n$, where $e_j$ are the basic orths, so that the maximum is attained at one of these points. Thus, for all $Y$,

$\gamma_s(A) \le \max_{1 \le j \le n} \|\mathrm{Col}_j(I - Y^T A)\|_{s,1},$

so that when setting

$\alpha_s(A) = \min_Y \max_{1 \le j \le n} \|\mathrm{Col}_j(I - Y^T A)\|_{s,1},$

we get

$\gamma_s(A) \le \alpha_s(A).$

Since $\max_j \|\mathrm{Col}_j(I - Y^T A)\|_{s,1}$ is an easy-to-compute convex function of $Y$, the quantity $\alpha_s(A)$ also is easy to compute (in fact, this is the optimal value in an explicit LP program with sizes polynomial in $m$, $n$).
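Under our reading of the construction above, the bound decomposes over columns: $\alpha_s(A) = \max_j \min_y \|e_j - A^T y\|_{s,1}$, each inner problem being an LP via the identity $\|u\|_{s,1} = \min_{t \ge 0} \{ s t + \sum_i \max(|u_i| - t, 0) \}$. A sketch under these assumptions (function names ours, SciPy assumed):

```python
import numpy as np
from scipy.optimize import linprog

def column_bound(A, s, j):
    """min over y in R^m of || e_j - A^T y ||_{s,1}, as an LP using
    ||u||_{s,1} = min_{t >= 0} [ s*t + sum_i max(|u_i| - t, 0) ]."""
    m, n = A.shape
    e = np.zeros(n)
    e[j] = 1.0
    # variables: y (m entries, free), t (1 entry, >= 0), w (n entries, >= 0)
    c = np.concatenate([np.zeros(m), [float(s)], np.ones(n)])
    ones = np.ones((n, 1))
    # w_i + t >= |(e_j - A^T y)_i|, written as two one-sided constraints
    A_ub = np.block([[-A.T, -ones, -np.eye(n)],
                     [ A.T, -ones, -np.eye(n)]])
    b_ub = np.concatenate([-e, e])
    bounds = [(None, None)] * m + [(0, None)] * (1 + n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return float(res.fun)

def alpha_upper_bound(A, s):
    """max_j min_y || e_j - A^T y ||_{s,1}: an efficiently computable
    upper bound, obtained column by column."""
    return max(column_bound(A, s, j) for j in range(A.shape[1]))

# For the identity matrix, y = e_j zeroes every column of I - Y^T A,
# so the bound is 0 and s-goodness is certified. A matrix with two
# identical columns yields exactly 1/2, so 1-goodness cannot be certified.
assert alpha_upper_bound(np.eye(3), 1) < 1e-7
assert abs(alpha_upper_bound(np.array([[1.0, 1.0]]), 1) - 0.5) < 1e-6
```

The column-wise decomposition is what keeps the computation cheap: the $n$ inner LPs are independent and can be solved in parallel.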
This approach can be easily modified to provide an upper bound for $\gamma_s(A, \beta)$ with finite $\beta$. Namely, given an $m \times n$ matrix $A$, $s \le n$, and $\beta \in (0, \infty]$, let us set
(4.25) 
As with $\gamma_s$, we shorten the notation $\alpha_s(A, \infty)$ to $\alpha_s(A)$.

It is easily seen that the optimization problem in (4.25) is solvable, and that $\alpha_s(A, \beta)$ is nondecreasing in $s$, convex and nonincreasing in $\beta$, and is such that $\alpha_s(A, \beta) = \alpha_s(A)$ for all large enough values of $\beta$ (cf. the similar properties of $\gamma_s$). The central observation in our context is that $\alpha_s(A, \beta)$ is an efficiently computable upper bound on $\gamma_s(A, \beta)$, provided that the norm $\|\cdot\|$ is efficiently computable. Indeed, the efficient computability of $\alpha_s(A, \beta)$ stems from the fact that it is the optimal value in an explicit convex optimization problem with efficiently computable objective and constraints. The fact that $\alpha_s(A, \beta)$ is an upper bound on $\gamma_s(A, \beta)$ is stated by the following
Theorem 4
One has $\gamma_s(A, \beta) \le \alpha_s(A, \beta)$.
Proof. Let be a subset of of cardinality , be a sparse vector with nonzero entries equal to , and let be the support of . Let be such that and the columns in are of the norm not exceeding . Setting , we have due to for all . Besides this,