
Optim Lett (2017) 11:471–482DOI 10.1007/s11590-016-1013-9

ORIGINAL PAPER

The sparsest solutions to Z-tensor complementarityproblems

Ziyan Luo1 · Liqun Qi2 · Naihua Xiu3

Received: 5 May 2015 / Accepted: 24 January 2016 / Published online: 11 February 2016© Springer-Verlag Berlin Heidelberg 2016

Abstract Finding the sparsest solutions to a tensor complementarity problem is generally NP-hard due to the nonconvexity and discontinuity of the involved ℓ0 norm. In this paper, a special type of tensor complementarity problem with Z-tensors is considered. Under some mild conditions, we show that pursuing the sparsest solutions is equivalent to solving a polynomial programming problem with a linear objective function. The involved conditions guarantee the desired exact relaxation and also allow us to achieve a global optimal solution to the relaxed nonconvex polynomial programming problem. In particular, in comparison with existing exact relaxation conditions, such as RIP-type ones, our proposed conditions are easy to verify.

This research was supported by the National Natural Science Foundation of China (11301022, 11431002), the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University (RCS2014ZT20, RCS2014ZZ01), and the Hong Kong Research Grant Council (Grant No. PolyU 502111, 501212, 501913 and 15302114).

✉ Ziyan Luo
[email protected]

Liqun Qi
[email protected]

Naihua Xiu
[email protected]

1 State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing 100044, People's Republic of China

2 Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, People's Republic of China

3 Department of Mathematics, School of Science, Beijing Jiaotong University, Beijing, People's Republic of China


Keywords Z-tensor · Tensor complementarity problem · Sparse solution · Exact relaxation · Polynomial programming

1 Introduction

The classical compressed sensing theory (see the pioneering work in [4,5,15]) has been gradually generalized, and nonlinear compressed sensing theory has attracted more and more attention, inspired by significant real-life applications such as sparse phase retrieval and sparse pulsation frequency detection in asteroseismology (see [2] and references therein). Among those nonlinear measurements, the polynomial structure has been employed in many applications, such as quadratic measurements in sparse signal recovery [27] and nonlinear compressed sensing with polynomial measurements [26]. Besides the nonlinearity carried by many real-life applications, a priori information such as nonnegativity is extensively encountered in communications, DNA microarrays, spectroscopy, tomography, network monitoring, and hidden Markov models [16,20,23,28,33]. In this regard, an optimality condition such as the KKT condition, which has been widely used in the optimization community, might be a good surrogate for the involved feasible set. Under some constraint qualifications, the original problem turns out to be that of finding the sparsest solutions to a special nonlinear complementarity problem with polynomial structure. Mathematically, it can be formulated as

(PCP)  min ‖x‖0  s.t.  F(x) ≥ 0, x ≥ 0, 〈x, F(x)〉 = 0,

where F = (f1, . . . , fn) and all the fi's are polynomial functions.

Apparently, when each fi in the aforementioned model (PCP) is reduced to be affine (i.e., a linear function plus some constant), (PCP) is exactly the problem of finding the sparsest solutions to a linear complementarity problem (LCP for short). The sparsest solution to an LCP has been studied in [10,32]. However, to the best of our knowledge, related work on this topic is very limited, partially but essentially due to the complexity caused by the involved nonconvex discontinuous objective function, and partially due to the nonlinearity generated by the underlying complementarity constraint. For the former difficulty, many relaxation strategies have been explored using different surrogates for the ℓ0 norm, such as the convex ℓ1 norm [4,5,15], the nonconvex ℓp norm [9,18,37], the reweighted ℓ1 norm [6,8], and so on. A natural but essential question arises: is it possible to get an exact solution of the original ℓ0 norm minimization problem from the relaxation counterpart? If so, what kind of properties should the involved data possess? For the linear measurement case, the well-known restricted isometry property (RIP for short) was introduced to guarantee the desired exactness, which explains the popularity of all sorts of random compressed sensing approaches [3,4]. There are some other exact relaxation conditions on the coefficient matrix for linear constraints, such as the null space property [40], the range space property [41], the s-goodness property [22], and so on. Most of these exact relaxation properties are somehow not easy to verify. In [29], a generalized Z-matrix


was introduced. Together with the nonnegativity of the right-hand side observation vector, it serves as an easy-to-check condition for the desired exact relaxation in linear sparse optimization. For nonlinear sparse optimization, such as the sparse LCP, the Z-matrix has been employed to guarantee the exact relaxation [32]. Then how about more general polynomial cases? Can we find some Z-type condition to ensure an exact solution from the relaxation problem for the original sparse polynomial complementarity problem? This is our primary goal in this paper.

Recently, hypermatrices, or tensors,¹ as a higher-order generalization of matrices, have been extensively studied [11,12,30,38]; a tensor is closely related to a polynomial through its coefficients. This allows us to write a polynomial equation system in a condensed form with tensors. In this regard, when the constraint function in the aforementioned problem (PCP) takes the form F(x) = H(x) − b with some homogeneous polynomial function H(x) and some vector b ∈ R^n, the feasible set can be reformulated as

(TCP)  x ≥ 0, Ax^{m−1} − b ≥ 0, 〈x, Ax^{m−1} − b〉 = 0,

where m − 1 is the degree of H(x), x^{m−1} is a rank-one tensor of order m − 1 and dimension n with its (i1, . . . , i_{m−1})th entry x_{i1} · · · x_{i_{m−1}}, and A is an mth-order n-dimensional tensor consisting of all the coefficients of H(x) by means of H(x) = Ax^{m−1}. Here the tensor product Ax^{m−1} is defined as (Ax^{m−1})_i = Σ_{i2,...,im=1}^n a_{i i2...im} x_{i2} · · · x_{im}, for all i = 1, . . . , n. Similarly, we can define Ax^{m−k} as

(Ax^{m−k})_{i1...ik} = Σ_{i_{k+1},...,im=1}^n a_{i1...ik i_{k+1}...im} x_{i_{k+1}} · · · x_{im},  ∀ i1, . . . , ik = 1, . . . , n.  (1.1)
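To make definition (1.1) concrete, the products Ax^{m−1} and, more generally, Ax^{m−k} can be computed by repeatedly contracting the last index of the coefficient array with x. A minimal NumPy sketch (storing the tensor as an m-way array is our choice of data layout, not part of the paper):

```python
import numpy as np

def tensor_vector_product(A, x, k):
    """Contract the last k indices of the m-way array A with the vector x,
    returning A x^k as an (m - k)-way array, following definition (1.1)."""
    out = A
    for _ in range(k):
        # sum over the current last index: out_{... j} * x_j
        out = np.tensordot(out, x, axes=([-1], [0]))
    return out

# sanity check with the identity tensor I of order 3: (I x^2)_i = x_i^2
I = np.zeros((2, 2, 2))
I[0, 0, 0] = I[1, 1, 1] = 1.0
x = np.array([2.0, 3.0])
print(tensor_vector_product(I, x, 2))   # -> [4. 9.]
```

With k = m − 1 this returns the vector Ax^{m−1} used throughout the paper; with k = m − 2 it returns the matrix Ax^{m−2}.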

The above (TCP) is the so-called tensor complementarity problem, which has been studied in [7,34,35]. In this paper, we will focus on finding the sparsest solutions to a tensor complementarity problem, which can be modeled as

(P0)  min ‖x‖0  s.t.  Ax^{m−1} − b ≥ 0, x ≥ 0, 〈x, Ax^{m−1} − b〉 = 0.

Mathematically, problem (P0) is generally NP-hard due to the objective function ‖x‖0. Inspired by the scheme of the most popular convex relaxation, we could get the linear surrogate e^T x resulting from the nonnegativity constraint on x. But the nonlinearity from the tensor complementarity constraints cannot be easily handled, and the existing

1 More precisely, here the tensor should be called a hypermatrix. In physics and mechanics, tensors are geometric objects equipped with a transformation law that details how the components of the tensor respond to a change of basis, while hypermatrices are simply multi-dimensional arrays. However, in most papers on tensor decomposition (e.g., the survey paper [25] in SIAM Review, and the numerical optimization methods in [24] in Math. Programming), and in tensor complementarity problems (e.g., [34]), the word "tensors" is used for multi-dimensional arrays. We will loosely use the word "tensors" throughout this paper.


exact relaxation conditions are not appropriate here, since most of them are customized for linear systems.

In this paper, by employing Z-tensors (see e.g., [13]) and the least element theory in nonlinear complementarity problems [36], we show that if the involved b is nonnegative and A is a Z-tensor, then a sparsest solution of the tensor complementarity problem can be achieved by solving the following polynomial programming problem:

(P1)  min e^T x  s.t.  Ax^{m−1} − b = 0, x ≥ 0,

where e is the all-ones vector. In comparison with existing exact relaxation conditions for general nonlinear sparse optimization problems [2], our conditions on the coefficients of the polynomial functions are easy to check. This is the main contribution of this paper.

The rest of the paper is organized as follows. The concepts of Z-tensor and Z-function are recalled and some useful properties are presented in Sect. 2. The Z-tensor complementarity problem is introduced and discussed in Sect. 3. The exact relaxation theorem is established in Sect. 4. Concluding remarks are drawn in Sect. 5.

For convenience of presentation, the following notation will be used throughout the paper. We use R^n and R^n_+ to denote the n-dimensional Euclidean space and its nonnegative orthant, respectively. R^{n×n} denotes the space of all real n × n matrices, and T_{m,n} stands for the set of all real tensors of order m and dimension n. Vectors are denoted by lowercase letters such as x, matrices by capital letters such as A, and tensors by calligraphic capital letters such as A.

2 Z-Tensors and Z-functions

As a nonlinear generalization of Z-matrices, whose off-diagonal elements are non-positive, the concept of off-diagonally antitone functions was first introduced by Rheinboldt in [31], which further leads to the definition of Z-functions as stated in [36]. In [21], Isac redefined Z-functions equivalently by means of an implication system, which has also been widely used in the complementarity problems community. In this section, the definition of Z-functions and some useful properties will be recalled. In particular, to explore the Z-property of homogeneous polynomial functions, the Z-tensor and the partially Z-tensor will be introduced and analyzed.

Definition 1 (Definition 3.2, [21]) A mapping F : R^n → R^n is said to be a Z-function if for every x, y, z ∈ R^n_+ such that 〈x, y − z〉 = 0, we have 〈x, F(y) − F(z)〉 ≤ 0.

Proposition 1 (Proposition 3.2, [21]) A Gâteaux continuously differentiable function F : R^n → R^n is a Z-function if and only if for any x ∈ R^n_+, ∇F(x) is a Z-matrix.

Lemma 1 ([21]) If F : R^n → R^n is a Z-function, then the following implication holds:

x ∈ R^n_+, y ∈ R^n_+, 〈x, y〉 = 0 ⇒ 〈x, F(y) − F(0)〉 ≤ 0.  (2.1)


Moreover, if F(x) = Ax is a linear function, then A is a Z-matrix, which is equivalent to the following implication:

x ∈ R^n_+, y ∈ R^n_+, 〈x, y〉 = 0 ⇒ 〈x, Ay〉 ≤ 0.  (2.2)

It is known that for any matrix A, Ax is a Z-function if and only if A is a Z-matrix, i.e., all of its off-diagonal entries are non-positive. This concept has been extended to higher-order tensors as stated below.

Definition 2 ([39]) Let A = (a_{i1...im}) ∈ T_{m,n}. A is called a Z-tensor if all its off-diagonal entries are non-positive, i.e., a_{i1...im} ≤ 0 whenever δ_{i1...im} = 0.

Another concept called the partially Z -tensor is introduced here.

Definition 3 Let A = (a_{i1...im}) ∈ T_{m,n}. We call A a partially Z-tensor if for any i1 ∈ [n], a_{i1 i2...im} ≤ 0 for all i2, . . . , im satisfying i1 ∉ {i2, . . . , im}.

Obviously, a Z-tensor is a partially Z-tensor, and both Z-tensors and partially Z-tensors of order m = 2 are exactly Z-matrices. Thus, both concepts can be regarded as extensions of the Z-matrix. Properties of these two types of tensors, which will play an essential role in the subsequent analysis, are discussed as follows.
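Definitions 2 and 3 can be checked entry-by-entry for small tensors; the following sketch (a brute-force enumeration, practical only for modest m and n, and not part of the paper) also illustrates that a partially Z-tensor need not be a Z-tensor:

```python
import itertools
import numpy as np

def is_z_tensor(A):
    """Definition 2: all off-diagonal entries (indices not all equal) are <= 0."""
    n, m = A.shape[0], A.ndim
    return all(A[idx] <= 0 for idx in itertools.product(range(n), repeat=m)
               if len(set(idx)) > 1)

def is_partially_z_tensor(A):
    """Definition 3: a_{i1 i2...im} <= 0 whenever i1 is absent from {i2, ..., im}."""
    n, m = A.shape[0], A.ndim
    return all(A[idx] <= 0 for idx in itertools.product(range(n), repeat=m)
               if idx[0] not in idx[1:])

A = -np.ones((2, 2, 2))
A[0, 0, 0] = A[1, 1, 1] = 1.0   # positive diagonal entries are allowed by both definitions
assert is_z_tensor(A) and is_partially_z_tensor(A)

A[0, 1, 0] = 5.0                # off-diagonal, but index 0 reappears among (1, 0)
assert not is_z_tensor(A) and is_partially_z_tensor(A)
```

The last two lines exhibit the strict inclusion: the entry a_{010} > 0 violates Definition 2 but is not constrained by Definition 3.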

Theorem 1 For any given A ∈ T_{m,n}, we have

(i) if A is a partially Z-tensor, then the implication (2.1) holds with F(x) = Ax^{m−1};
(ii) if A is a Z-tensor, then F(x) = Ax^{m−1} is a Z-function.

Proof (i) Suppose x, y ∈ R^n_+ with 〈x, y〉 = 0. Easily we can get

x_i ≥ 0, y_i ≥ 0, x_i y_i = 0, ∀ i ∈ [n].  (2.3)

Thus,

〈x, F(y) − F(0)〉 = 〈x, Ay^{m−1}〉 = Σ_{i=1}^n x_i (Ay^{m−1})_i
= Σ_{i=1}^n x_i Σ_{i2,...,im=1}^n a_{i i2...im} y_{i2} · · · y_{im}
= Σ_{i=1}^n ( Σ_{i2,...,im: i ∉ {i2,...,im}} a_{i i2...im} x_i y_{i2} · · · y_{im} + Σ_{i2,...,im: i ∈ {i2,...,im}} a_{i i2...im} x_i y_{i2} · · · y_{im} )
= Σ_{i=1}^n Σ_{i2,...,im: i ∉ {i2,...,im}} a_{i i2...im} x_i y_{i2} · · · y_{im}
≤ 0,


where the last equality follows from (2.3), since any product containing both x_i and some y_{ij} with i_j = i vanishes, and the last inequality follows from Definition 3.

(ii) Invoking Proposition 1, it suffices to show that for any x ∈ R^n_+, ∇_x(Ax^{m−1}) is a Z-matrix. Combining with Lemma 1, we only need to show that the implication (2.2) holds with A := ∇_x(Ax^{m−1}) for any given x ∈ R^n_+. Let y, z be any two nonnegative vectors with 〈y, z〉 = 0. It yields that

〈y, ∇_x(Ax^{m−1}) z〉 = Σ_{i=1}^n y_i Σ_{i2,...,im=1}^n (a_{i i2...im} + a_{i2 i...im} + · · · + a_{i2...im i}) x_{i2} · · · x_{i_{m−1}} z_{im}
= Σ_{i=1}^n Σ_{i2,...,im=1, im ≠ i}^n (a_{i i2...im} + a_{i2 i...im} + · · · + a_{i2...im i}) x_{i2} · · · x_{i_{m−1}} z_{im} y_i
≤ 0,

where the second equality follows from the complementarity of y and z, and the last inequality from the fact that A is a Z-tensor. Thus F(x) = Ax^{m−1} is a Z-function. □

3 Z-Tensor complementarity problems

It is known that a tensor complementarity problem always takes the form

(TCP(A, b))  x ≥ 0, Ax^{m−1} − b ≥ 0, 〈x, Ax^{m−1} − b〉 = 0,

which is actually a special nonlinear complementarity problem. When the involved tensor A is a Z-tensor, the corresponding (TCP(A, b)) is called a Z-tensor complementarity problem. In this section, we will concentrate on exploiting the properties of this special class of tensor complementarity problems. We start by recalling a nice property possessed by general nonlinear complementarity problems with Z-functions.

Theorem 2 (Ex. 3.7.21, [17]) Let F : R^n → R^n be a continuous Z-function. Suppose that the following nonlinear complementarity problem

(NCP(F))  x ≥ 0, F(x) ≥ 0, 〈x, F(x)〉 = 0

is feasible, i.e., F := {x ∈ R^n : x ≥ 0, F(x) ≥ 0} ≠ ∅. Then F has a unique least element x* which is also a solution to (NCP(F)).

Inspired by the relation between Z-tensors and Z-functions, we can easily obtain the following property for Z-tensor complementarity problems.

Corollary 1 Let A be a Z-tensor and b ∈ R^n. Suppose that the tensor complementarity problem (TCP(A, b)) is feasible, i.e., F := {x ∈ R^n : x ≥ 0, Ax^{m−1} − b ≥ 0} ≠ ∅. Then F has a unique least element x* which is also a solution to (TCP(A, b)).


Proof Theorem 1 tells us that Ax^{m−1} is a Z-function. Utilizing Proposition 1, it is easy to verify that Ax^{m−1} − b is also a Z-function for any b ∈ R^n. Thus, the desired result follows directly from Theorem 2. □

With a nonnegative b and a partially Z-tensor A in (TCP(A, b)), the tensor complementarity problem is equivalent to a multi-linear equation with nonnegativity constraints.

Proposition 2 Let A be a partially Z-tensor and b ∈ R^n_+. The following two systems are equivalent:

(i) x ∈ R^n_+, Ax^{m−1} − b ∈ R^n_+, 〈x, Ax^{m−1} − b〉 = 0;
(ii) x ∈ R^n_+, Ax^{m−1} − b = 0.

Proof Trivially, any solution to system (ii) is a solution to system (i). Let y be any solution to system (i). Since A is a partially Z-tensor and b ∈ R^n_+, invoking Theorem 1(i), it yields that

0 ≥ 〈Ay^{m−1} − b, Ay^{m−1}〉
= 〈Ay^{m−1} − b, Ay^{m−1} − b〉 + 〈Ay^{m−1} − b, b〉
≥ ‖Ay^{m−1} − b‖_2^2.

This indicates that Ay^{m−1} − b = 0, which implies that y is a solution to (ii). □

Note that Z-tensors are partially Z-tensors. Thus the results in the above proposition hold for Z-tensors.

Corollary 2 Let A be a Z-tensor and b ∈ R^n_+. The following two systems are equivalent:

(i) x ∈ R^n_+, Ax^{m−1} − b ∈ R^n_+, 〈x, Ax^{m−1} − b〉 = 0;
(ii) x ∈ R^n_+, Ax^{m−1} − b = 0.

Utilizing the aforementioned equivalence, we can characterize the feasibility of (TCP(A, b)) in terms of the consistency of the corresponding nonnegativity-constrained multi-linear equation. Before stating the feasibility result, we recall the definition of M-tensors, which form an important subclass of Z-tensors.

Definition 4 ([39]) Let A ∈ T_{m,n} be a Z-tensor with A = sI − B, where I is the identity tensor whose diagonal entries are 1 and all other entries 0, B is a nonnegative tensor, and s ∈ R_+. If s ≥ ρ(B), then A is called an M-tensor; if s > ρ(B), then A is called a strong M-tensor. Here ρ(B) stands for the spectral radius of B.

Proposition 3 If A is a strong M-tensor and b ∈ R^n_+, then (TCP(A, b)) is feasible.

Proof Let A = sI − B be a strong M-tensor with B ≥ 0 and s > ρ(B). Invoking the equivalence established in Corollary 2, it suffices to show that there exists some nonnegative x such that Ax^{m−1} = b. Let T_{s,B,b} : R^n_+ → R^n_+ be the mapping defined as follows:

T_{s,B,b}(x) := (s^{−1}Bx^{m−1} + s^{−1}b)^{[1/(m−1)]},  ∀ x ∈ R^n_+,


where x^{[1/(m−1)]} is the vector whose ith component is x_i^{1/(m−1)}, for all i = 1, . . . , n. Easily, we can find that the required nonnegative solution x is exactly a fixed point of this mapping T_{s,B,b}. Besides, since A is a strong M-tensor, applying Theorem 3 in [13], there always exists a positive z such that Az^{m−1} > 0. Denote

α := min_{i∈[n]} b_i / (Az^{m−1})_i,  and  β := max_{i∈[n]} b_i / (Az^{m−1})_i.

Obviously,

0 ≤ αAz^{m−1} ≤ b ≤ βAz^{m−1}.

Set v := A(α^{1/(m−1)} z)^{m−1} = αAz^{m−1} and w := A(β^{1/(m−1)} z)^{m−1} = βAz^{m−1}. Therefore,

α^{1/(m−1)} z = T_{s,B,v}(α^{1/(m−1)} z) ≤ T_{s,B,b}(α^{1/(m−1)} z),
β^{1/(m−1)} z = T_{s,B,w}(β^{1/(m−1)} z) ≥ T_{s,B,b}(β^{1/(m−1)} z).

Note that T_{s,B,b} is an increasing continuous mapping on R^n_+. By employing the fixed point theorem in [1] (see also Theorem 3.1 in [14]), there exists at least one fixed point x of T_{s,B,b} such that 0 ≤ α^{1/(m−1)} z ≤ x ≤ β^{1/(m−1)} z. This completes the proof. □
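The map T_{s,B,b} in the proof above is not only a theoretical device: iterating it gives a simple way to compute a nonnegative solution of Ax^{m−1} = b when A = sI − B is a strong M-tensor. A sketch (the tolerance, iteration cap, and the zero starting point are our choices, not the paper's):

```python
import numpy as np

def solve_strong_m_tensor(s, B, b, tol=1e-13, max_iter=100000):
    """Fixed-point iteration x <- ((B x^{m-1} + b) / s)^{1/(m-1)}, i.e. the map
    T_{s,B,b} from the proof of Proposition 3, started from the zero vector."""
    m = B.ndim
    x = np.zeros(B.shape[0])
    for _ in range(max_iter):
        y = B
        for _ in range(m - 1):
            y = np.tensordot(y, x, axes=([-1], [0]))   # y = B x^{m-1}
        x_new = ((y + b) / s) ** (1.0 / (m - 1))
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# strong M-tensor A = 10*I - B with B = 0.5 everywhere (max row sum 2 < 10)
B = 0.5 * np.ones((2, 2, 2))
x_true = np.array([1.0, 2.0])
b = 10.0 * x_true**2 - np.tensordot(np.tensordot(B, x_true, 1), x_true, 1)
x = solve_strong_m_tensor(10.0, B, b)
print(np.round(x, 6))   # -> [1. 2.]
```

Since T_{s,B,b} is increasing and T_{s,B,b}(0) ≥ 0, the iterates from the zero vector increase monotonically to the least nonnegative solution, matching the least-element viewpoint of Theorem 2.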

4 Exact relaxation

Now we are in a position to establish the exact relaxation theorem for the ℓ0 norm minimization problem (P0).

Theorem 3 Let A be a Z-tensor and b ∈ R^n_+. If the problem (P0) is feasible, then (P0) has a solution x* which is also the unique solution to the problem (P1).

Proof Invoking Theorem 1, we know that Ax^{m−1} is a Z-function. Together with Proposition 1, it is easy to verify that Ax^{m−1} − b is also a Z-function. Thus, Theorem 2 tells us that there exists a unique least element x* in F as defined in Corollary 1, which is also a solution of the tensor complementarity problem. The nonnegativity constraint directly yields that x* is one of the sparsest solutions of (P0). Utilizing the equivalence shown in Corollary 2, x* is definitely the unique solution of (P1), by the fact that it must be the least element in {x ∈ R^n : x ≥ 0, Ax^{m−1} = b}. This completes the proof. □

Corollary 3 Let A be a strong M-tensor and b ∈ R^n_+. Then problem (P1) is uniquely solvable and its unique solution is also an optimal solution to problem (P0).

Proof This follows directly from Proposition 3 and Theorem 3. □

Some simple examples are introduced to illustrate the exact relaxation theorem for pursuing the sparsest solution to strong M-tensor complementarity problems.


Example 1 (i) Let A = (a_{i1i2i3i4}) ∈ T_{4,2} with a_{1111} = a_{2222} = 1, a_{1112} = −2, and all other entries 0, and b = (0, 1)^T. It is known from [19] that A is a strong M-tensor and the solution set of the corresponding tensor complementarity problem (TCP(A, b)) is {(0, 1)^T, (2, 1)^T}. By direct calculation, (0, 1)^T is the unique common solution to both problems (P0) and (P1).

(ii) Let A = (a_{ijk}) ∈ T_{3,n} be a strictly diagonally dominant Z-tensor (and hence a strong M-tensor, as shown in [39]) with a_{ijj} = 0 for any i ≠ j, i, j ∈ {1, . . . , n}, and b ∈ R^n_+ with ‖b‖0 = 1. In this case, it is not hard to verify that x* = (0, . . . , 0, √(b_i/a_{iii}), 0, . . . , 0)^T is the unique solution to problem (P0), where i is the index such that b_i ≠ 0. This optimal solution is also the unique solution to problem (P1), by direct verification or by applying the Gauss–Seidel iteration method of [14] to the system Ax^2 = b with the zero vector as the initial point.
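Example 1(i) is easy to verify numerically; the sketch below checks that both points solve (TCP(A, b)) and that (0, 1)^T is the sparser one:

```python
import numpy as np

def tcp_residual(A, b, x):
    """Return F(x) = A x^{m-1} - b and the complementarity value <x, F(x)>."""
    y = A
    for _ in range(A.ndim - 1):
        y = np.tensordot(y, x, axes=([-1], [0]))
    F = y - b
    return F, float(x @ F)

# the tensor A in T_{4,2} of Example 1(i): a_1111 = a_2222 = 1, a_1112 = -2
A = np.zeros((2, 2, 2, 2))
A[0, 0, 0, 0] = A[1, 1, 1, 1] = 1.0
A[0, 0, 0, 1] = -2.0
b = np.array([0.0, 1.0])

for x in (np.array([0.0, 1.0]), np.array([2.0, 1.0])):
    F, comp = tcp_residual(A, b, x)
    # feasibility and complementarity of (TCP(A, b))
    assert np.all(x >= 0) and np.all(F >= -1e-12) and abs(comp) < 1e-12

# (0, 1)^T is the sparser of the two TCP solutions
assert np.count_nonzero([0.0, 1.0]) < np.count_nonzero([2.0, 1.0])
```

Both points satisfy Ax^3 = b exactly, illustrating the equivalence of Corollary 2; the ℓ0 objective then picks out (0, 1)^T.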

Remark 1 The above two cases are easy to handle. For general strong M-tensor complementarity problems with b ∈ R^n_+, the exact relaxation theorem in Corollary 3 always holds, which allows us to get the sparsest solution by solving the corresponding minimization problem with a linear objective function, homogeneous polynomial equality constraints, and a nonnegativity constraint. We now apply the aforementioned Gauss–Seidel iteration method of [14] to the multi-linear equation Ax^{m−1} = b with the zero vector as the initial point to get the minimal nonnegative solution x*, which is exactly the solution to problem (P1) and hence to problem (P0). Here we randomly generate 1000 instances of the strong M-tensor A ∈ T_{3,100} by A = ηI − B with B = (b_{ijk}) = rand(100, 100, 100), b_{ijj} = 0 for any i, j ∈ {1, . . . , 100}, η = 1.1 × max(sum(sum(B, 3), 2)), and b = 10 × rand(100, 1). Both B and b are treated with certain sparsity. The sparsity of the solution is listed in Table 1.

Here p(A) stands for the percentage of nonzero entries of A, which indicates the sparsity of A; x is the truncated vector of the optimal solution to problem (P1) with components larger than 1e-4; and s(x) is the average sparsity of x over all tested instances. Observe that the sparsity of the solution greatly relies on the sparsity of the input data A and b. For example, if b_i ≠ 0, then x_i ≠ 0 by the structure of A. That is, ‖x*‖0 ≥ ‖b‖0. The sparsity of A also greatly affects the sparsity of the solution, as Table 1 shows.
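The experiment in Remark 1 can be mirrored at a smaller scale. In the sketch below, n = 20 rather than 100, and the fixed-point map T_{s,B,b} from the proof of Proposition 3 stands in for the Gauss–Seidel method of [14], whose details we do not reproduce; the instance generation follows Remark 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 3                                 # smaller than the paper's n = 100

B = rng.random((n, n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            B[i, j, j] = 0.0                 # b_ijj = 0 for i != j, as in Remark 1
eta = 1.1 * B.sum(axis=(1, 2)).max()         # eta > max row sum >= rho(B): strong M-tensor
b = 10.0 * rng.random(n)

x = np.zeros(n)                              # zero start -> minimal nonnegative solution
for _ in range(100000):
    y = np.tensordot(np.tensordot(B, x, 1), x, 1)       # y = B x^2
    x_new = ((y + b) / eta) ** (1.0 / (m - 1))
    if np.max(np.abs(x_new - x)) < 1e-13:
        break
    x = x_new
x = x_new

# check the multi-linear system A x^2 = eta * x^{[2]} - B x^2 = b
residual = eta * x**2 - np.tensordot(np.tensordot(B, x, 1), x, 1) - b
assert np.max(np.abs(residual)) < 1e-6
```

With a dense b, the recovered x is dense as well, consistent with the observation ‖x*‖0 ≥ ‖b‖0; sparsifying B and b as in Table 1 would sparsify the solution.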

Some extended results on exact relaxation theory are discussed as follows. For any matrix P = (p_{ij}) ∈ R^{n×n}, we define a linear operator PA : T_{m,n} → T_{m,n} as follows:

(PA)_{i1i2...im} = Σ_{i=1}^n p_{i1 i} a_{i i2...im},  ∀ A = (a_{i1i2...im}) ∈ T_{m,n}.

Table 1 The Gauss–Seidel iteration method of [14] for problem (P1)

p(A)   90 %   20 %   90 %   90 %   90 %   20 %   10 %
‖b‖0   100    100    10     2      1      10     10
s(x)   100    100    95.2   95.1   1      45.9   28.1


This is also treated as a special tensor–matrix product in the tensor community (see e.g., [25]). Evidently, if P is an invertible matrix, then PT_{m,n} = T_{m,n}. However, this operator does not in general preserve the Z-property of Z-tensors. Note that the left-hand side of the multi-linear system Ax^{m−1} = b is actually a condensed form of n homogeneous polynomials of degree m − 1, with the row tensor A_i formed by the coefficients of the ith polynomial. If we change the order of these equations in the multi-linear system, the solution set will not be affected. This observation allows us to generalize the result in Theorem 3. For convenience, we use PZ_{m,n} to denote the set of all tensors of order m and dimension n which can be transformed into Z-tensors by some permutation matrix, i.e.,

PZ_{m,n} := {A ∈ T_{m,n} : PA is a Z-tensor for some permutation matrix P}.

Corollary 4 Suppose A ∈ PZ_{m,n} and b ∈ R^n_+. If the problem (P0) is feasible, then (P0) has a solution x* which is also the unique solution to the problem (P1).

5 Conclusions

Pursuing the sparsest solutions to a tensor complementarity problem can be formulated as an ℓ0 norm minimization with tensor complementarity constraints, which is generally NP-hard. Based on the properties of Z-tensors, we have shown that one of the sparsest solutions of a Z-tensor complementarity problem can be achieved by solving a polynomial programming problem with a linear objective function, for instance by the Gauss–Seidel iteration method proposed in [14]. The involved condition on the input data is indeed an exact relaxation condition for the original ℓ0 norm minimization. It is worth mentioning that, in comparison with other existing exact relaxation conditions in the sparse optimization and compressed sensing communities, our proposed condition is easy to verify. The topic of sparse optimization with general nonlinear complementarity constraints, which includes the sparse tensor complementarity problem as a special case, deserves further study.

Acknowledgements The authors would like to thank the two anonymous referees and the associate editor for their valuable comments, and Mr. Weiyang Ding and Prof. Wei for the proof of Proposition 3 and their MATLAB codes of the Gauss–Seidel iteration method.

References

1. Amann, H.: Fixed point equations and nonlinear eigenvalue problems in ordered Banach spaces. SIAM Rev. 18(4), 620–709 (1976)
2. Blumensath, T.: Compressed sensing with nonlinear observations and related nonlinear optimization problems. IEEE Trans. Inf. Theory 59(6), 3466–3474 (2013)
3. Candes, E.J.: The restricted isometry property and its implications for compressed sensing. Comptes Rendus Math. 346(9), 589–592 (2008)
4. Candes, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)
5. Candes, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)


6. Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. 14(5–6), 877–905 (2008)
7. Che, M., Qi, L., Wei, Y.: Positive definite tensors to nonlinear complementarity problems. J. Optim. Theory Appl. 168, 475–487 (2016)
8. Chen, X., Ge, D., Wang, Z., Ye, Y.: Complexity of unconstrained l2–lp minimization. Math. Program. 143(1–2), 371–383 (2014)
9. Chen, X., Ng, M.K., Zhang, C.: Non-Lipschitz ℓp-regularization and box constrained model for image restoration. IEEE Trans. Image Process. 21(12), 4709–4721 (2012)
10. Chen, X., Xiang, S.: Sparse solutions of linear complementarity problems. Preprint (2014)
11. Comon, P., Golub, G., Lim, L.H., Mourrain, B.: Symmetric tensors and symmetric tensor rank. SIAM J. Matrix Anal. Appl. 30(3), 1254–1279 (2008)
12. De Lathauwer, L., De Moor, B., Vandewalle, J.: On the best rank-1 and rank-(R1, R2, . . . , RN) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl. 21(4), 1324–1342 (2000)
13. Ding, W., Qi, L., Wei, Y.: M-tensors and nonsingular M-tensors. Linear Algebra Appl. 439(10), 3264–3278 (2013)
14. Ding, W., Wei, Y.: Solving multi-linear systems with M-tensors. Preprint (2015)
15. Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)
16. Donoho, D.L., Tanner, J.: Precise undersampling theorems. Proc. IEEE 98(6), 913–924 (2010)
17. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Series in Operations Research, vol. I. Springer-Verlag, New York (2003)
18. Foucart, S., Lai, M.J.: Sparsest solutions of underdetermined linear systems via lq-minimization for 0 < q ≤ 1. Appl. Comput. Harmon. Anal. 26(3), 395–407 (2009)
19. Gowda, M.S., Luo, Z., Qi, L., Xiu, N.: Z-tensors and complementarity problems. arXiv preprint arXiv:1510.07933 (2015)
20. Harmany, Z.T., Marcia, R.F., Willett, R.M.: SPIRAL out of convexity: sparsity-regularized algorithms for photon-limited imaging. In: IS&T/SPIE Electronic Imaging, pp. 75330R–75330R. International Society for Optics and Photonics (2010)
21. Isac, G.: Complementarity Problems. Springer-Verlag, Berlin (1992)
22. Juditsky, A., Nemirovski, A.: On verifiable sufficient conditions for sparse signal recovery via ℓ1 minimization. Math. Program. 127(1), 57–88 (2011)
23. Khajehnejad, M.A., Dimakis, A.G., Xu, W., Hassibi, B.: Sparse recovery of nonnegative signals with minimal expansion. IEEE Trans. Signal Process. 59(1), 196–208 (2011)
24. Kolda, T.G.: Numerical optimization for symmetric tensor decomposition. Math. Program. 151(1), 225–248 (2015)
25. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)
26. Lauer, F., Ohlsson, H.: Finding sparse solutions of systems of polynomial equations via group-sparsity optimization. J. Glob. Optim. 62(2), 319–349 (2015)
27. Li, X., Voroninski, V.: Sparse signal recovery from quadratic measurements via convex programming. SIAM J. Math. Anal. 45(5), 3019–3033 (2013)
28. Liu, Y.F., Dai, Y.H., Luo, Z.Q.: Joint power and admission control via linear programming deflation. IEEE Trans. Signal Process. 61(6), 1327–1338 (2013)
29. Luo, Z., Qin, L., Kong, L., Xiu, N.: The nonnegative zero-norm minimization under generalized Z-matrix measurement. J. Optim. Theory Appl. 160(3), 854–864 (2014)
30. Qi, L.: Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 40(6), 1302–1324 (2005)
31. Rheinboldt, W.C.: On M-functions and their application to nonlinear Gauss–Seidel iterations and to network flows. J. Math. Anal. Appl. 32, 274–307 (1970)
32. Shang, M., Zhang, C., Xiu, N.: Minimal zero norm solutions of linear complementarity problems. J. Optim. Theory Appl. 163(3), 795–814 (2014)
33. Sheikh, M.A., Sarvotham, S., Milenkovic, O., Baraniuk, R.G.: DNA array decoding from nonlinear measurements by belief propagation. In: IEEE/SP 14th Workshop on Statistical Signal Processing, 2007. SSP'07, pp. 215–219. IEEE (2007)
34. Song, Y., Qi, L.: Properties of some classes of structured tensors. J. Optim. Theory Appl. 165(3), 854–873 (2014)
35. Song, Y., Qi, L.: Properties of tensor complementarity problem and some classes of structured tensors. arXiv preprint arXiv:1412.0113 (2014)


36. Tamir, A.: Minimality and complementarity properties associated with Z-functions and M-functions. Math. Program. 7, 17–31 (1974)
37. Xu, Z., Chang, X., Xu, F., Zhang, H.: L1/2 regularization: a thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst. 23(7), 1013–1027 (2012)
38. Yang, Y., Yang, Q.: Further results for Perron–Frobenius theorem for nonnegative tensors. SIAM J. Matrix Anal. Appl. 31(5), 2517–2530 (2010)
39. Zhang, L., Qi, L., Zhou, G.: M-tensors and some applications. SIAM J. Matrix Anal. Appl. 35(2), 437–452 (2014)
40. Zhang, Y.: Theory of compressive sensing via ℓ1-minimization: a non-RIP analysis and extensions. J. Oper. Res. Soc. China 1(1), 79–105 (2013)
41. Zhao, Y.B.: RSP-based analysis for sparsest and least ℓ1-norm solutions to underdetermined linear systems. IEEE Trans. Signal Process. 61(22), 5777–5788 (2013)
