
Charles University in Prague

Faculty of Mathematics and Physics

HABILITATION THESIS

Milan Hladík

Interval linear algebra

Department of Applied Mathematics

Prague 2014


Contents

Preface

1 Interval linear algebra
1.1 Introduction
1.2 Systems of interval linear equations
1.2.1 Enclosure methods
1.2.2 Linear dependencies
1.2.3 AE solution set
1.3 The interval eigenvalue problem
1.3.1 The symmetric case
1.3.2 The general case
1.4 Further topics

Bibliography

Reprints of papers
M. Hladík. New operator and method for solving real preconditioned interval linear equations. SIAM J. Numer. Anal., 2014
M. Hladík. Solution set characterization of linear interval systems with a specific dependence structure. Reliab. Comput., 2007
M. Hladík. Description of symmetric and skew-symmetric solution set. SIAM J. Matrix Anal. Appl., 2008
M. Hladík. Enclosures for the solution set of parametric interval linear systems. Int. J. Appl. Math. Comput. Sci., 2012
E. D. Popova & M. Hladík. Outer enclosures to the parametric AE solution set. Soft Comput., 2013
M. Hladík, D. Daney & E. Tsigaridas. Bounds on real eigenvalues and singular values of interval matrices. SIAM J. Matrix Anal. Appl., 2010
M. Hladík, D. Daney & E. Tsigaridas. An algorithm for addressing the real interval eigenvalue problem. J. Comput. Appl. Math., 2011
M. Hladík, D. Daney & E. Tsigaridas. Characterizing and approximating eigenvalue sets of symmetric interval matrices. Comput. Math. Appl., 2011
M. Hladík, D. Daney & E. Tsigaridas. A filtering method for the interval eigenvalue problem. Appl. Math. Comput., 2011
M. Hladík. Bounds on eigenvalues of real and complex interval matrices. Appl. Math. Comput., 2013


Preface

This habilitation thesis consists of 10 journal papers, supplemented by a commentary. The papers are (co-)authored by M. Hladík, and their subject belongs to interval linear algebra.

The thesis is structured as follows. In the first part, we introduce the reader to the topic, with particular emphasis on the author's contribution. More specifically, we focus on interval linear systems of equations (Section 1.2) and the interval eigenvalue problem (Section 1.3). We also briefly mention further related areas the author works in (Section 1.4). Generally, most of the author's contribution is of an algorithmic nature – developing polynomial methods for approximating NP-hard problems. Some of his results are more theoretical – he extended the list of known NP-hard problems by posing several new ones, and also derived explicit descriptions of some complicated sets.

In the second part, we attach reprints of the 10 papers the thesis is based on. Below, we give a short summary of each, accompanied by a brief description. For interval linear systems of equations, we attach the papers:

• Milan Hladík. New operator and method for solving real preconditioned interval linear equations. SIAM J. Numer. Anal., 52(1):194–206, 2014.

In this paper, we develop a new method for solving interval linear equations. It outperforms some classical methods with respect to both time and sharpness of enclosures.

• Milan Hladík. Solution set characterization of linear interval systems with a specific dependence structure. Reliab. Comput., 13(4):361–374, 2007.

We extend the characterization of the solution set of interval linear equations to the case when there is a simple dependence structure between the interval coefficients.

• Milan Hladík. Description of symmetric and skew-symmetric solution set. SIAM J. Matrix Anal. Appl., 30(2):509–521, 2008.

We derive an explicit characterization of the solution set of interval linear equations under the restriction that the constraint matrix must be symmetric or skew-symmetric, respectively.

• Milan Hladík. Enclosures for the solution set of parametric interval linear systems. Int. J. Appl. Math. Comput. Sci., 22(3):561–574, 2012.

We generalize some enclosure methods for interval linear equations to the case when the matrix and the right-hand side entries depend linearly on interval parameters.

• Evgenija D. Popova and Milan Hladík. Outer enclosures to the parametric AE solution set. Soft Comput., 17(8):1403–1414, 2013.

We consider systems of linear equations where the elements of the matrix and of the right-hand side vector are linear functions of interval parameters. We study parametric AE solution sets, which are defined by ∀∃-quantification of the parameters. We generalize classical methods to obtain polynomially computable outer bounds for parametric AE solution sets.

The interval eigenvalue problem is accompanied by reprints of the following papers:

• Milan Hladík, David Daney, and Elias Tsigaridas. Bounds on real eigenvalues and singular values of interval matrices. SIAM J. Matrix Anal. Appl., 31(4):2116–2129, 2010.

We propose computationally cheap bounds on the eigenvalues of general and of symmetric interval matrices, and also on the singular values of (non-square) interval matrices.

• Milan Hladík, David Daney, and Elias P. Tsigaridas. An algorithm for addressing the real interval eigenvalue problem. J. Comput. Appl. Math., 235(8):2715–2730, 2011.

We develop an algorithm for approximating the real eigenvalues of interval matrices with a given precision.

• Milan Hladík, David Daney, and Elias P. Tsigaridas. Characterizing and approximating eigenvalue sets of symmetric interval matrices. Comput. Math. Appl., 62(8):3152–3163, 2011.

We present a characterization of some of the boundary eigenvalues of symmetric interval matrices. Based on this result, we introduce an inner approximation algorithm that in many cases finds exact bounds.

• Milan Hladík, David Daney, and Elias P. Tsigaridas. A filtering method for the interval eigenvalue problem. Appl. Math. Comput., 217(12):5236–5242, 2011.

We propose a filtering method that iteratively improves a given outer approximation of the eigenvalues of an interval matrix.

• Milan Hladík. Bounds on eigenvalues of real and complex interval matrices. Appl. Math. Comput., 219(10):5584–5591, 2013.

We present a computationally cheap and tight formula for bounding the real and imaginary parts of eigenvalues of real or complex interval matrices.


1. Interval linear algebra

Interval computation, roughly speaking, studies problems with interval input data. Intervals naturally appear in many situations:

• Rounding errors

A real number cannot, in general, be represented exactly in floating point arithmetic. In order to get reliable results, a natural approach is to enclose the result of each operation in an interval, as small as possible, covering the true value. For example,

1/3 ∈ [0.3333333333333333333, 0.3333333333333333334],

√2 ∈ [1.4142135623730950488, 1.4142135623730950489],

π ∈ [3.1415926535897932384, 3.1415926535897932385].

• Computer-assisted proofs

Some mathematical proofs, such as the Kepler conjecture or the double bubble problem, were carried out with the aid of computers. Employing interval methods there was necessary to obtain verified results.

• CSP and global optimization

In the constraint satisfaction problem or in global optimization, we have to find a solution (or the best solution) subject to some nonlinear constraints. Interval computation enables us to remove, in a numerically reliable way, parts of the initial domain that contain no solution, and by an extensive space search it encloses all solutions with arbitrary precision.

• Modelling uncertainty

Intervals are also used to model uncertain, inexact or incomplete data. Results of measurements are often expressed in the form v ± ∆v, meaning that the true value, which is not observable, lies in the interval [v − ∆v, v + ∆v]. Continuous variables are sometimes discretized into a finite set of values – typically, we split time into time slots (days, years, . . . ). Incomplete data may arise due to lack of knowledge or data protection – personal information (salary, age, . . . ) is often categorized into interval ranges.

Interval linear algebra provides the necessary mathematical background for interval computation. The role that standard linear algebra plays in modern science, interval linear algebra plays for studying interval problems. Fundamental problems of interval linear algebra are analogous to those of traditional linear algebra; examples include solving interval linear equations, testing nonsingularity, and computing eigenvalues of interval matrices. Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as interval mathematical programming, interval least squares, and statistics on interval data.


1.1 Introduction

Notation. We begin with the key notion of an interval matrix, which is the family of matrices

$\mathbf{A} := \{A \in \mathbb{R}^{m \times n};\ \underline{A} \le A \le \overline{A}\},$

where $\underline{A}, \overline{A} \in \mathbb{R}^{m \times n}$, $\underline{A} \le \overline{A}$, are given. The inequality ≤ for matrices and vectors is understood entrywise throughout the thesis. The center and radius matrices of A are defined as

$A^c := \tfrac{1}{2}(\overline{A} + \underline{A}), \qquad A^\Delta := \tfrac{1}{2}(\overline{A} - \underline{A}).$

The set of all interval matrices of size m×n is denoted by IRm×n. Interval vectorsand intervals are considered as special cases of interval matrices.

The magnitude of A ∈ IRm×n is defined as $\mathrm{mag}(\mathbf{A}) := \max\{|A|;\ A \in \mathbf{A}\} = \max(|\underline{A}|, |\overline{A}|)$, where the maximum and absolute value functions are understood entrywise.

Let a set S ⊂ Rn be given. The interval hull of S, denoted by □S, is the smallest interval vector containing S, that is,

$\square S := \bigcap_{\mathbf{v} \in \mathbb{IR}^n:\ S \subseteq \mathbf{v}} \mathbf{v}.$

As we will see later, determining the interval hull is a computationally hard problem in general, so we often weaken the goal and compute an enclosure instead. An enclosure of a set S ⊂ Rn is any interval vector v ∈ IRn such that S ⊆ v. Naturally, one seeks as small as possible enclosures. The computation of interval hulls or enclosures of various sets (described explicitly or implicitly) belongs to the basic problems studied in interval computation.

Interval arithmetic. Interval arithmetic is the basic tool for computing enclosures of the images of real functions defined by arithmetic expressions. Let ◦ be a basic operation – addition, subtraction, multiplication or division. For a, b ∈ IR we define

a ◦ b := {a ◦ b; a ∈ a, b ∈ b},

with 0 ∉ b in the case of division. It is not hard to see that for the particular operations the interval arithmetic reads

$\mathbf{a} + \mathbf{b} = [\underline{a} + \underline{b},\ \overline{a} + \overline{b}],$

$\mathbf{a} - \mathbf{b} = [\underline{a} - \overline{b},\ \overline{a} - \underline{b}],$

$\mathbf{a}\mathbf{b} = [\min(\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}),\ \max(\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b})],$

$\mathbf{a}/\mathbf{b} = [\min(\underline{a}/\underline{b}, \underline{a}/\overline{b}, \overline{a}/\underline{b}, \overline{a}/\overline{b}),\ \max(\underline{a}/\underline{b}, \underline{a}/\overline{b}, \overline{a}/\underline{b}, \overline{a}/\overline{b})].$

Looking at real numbers as zero-radius intervals, interval arithmetic generalizes the classical one, and we can use mixed expressions like 2[3, 4] + 5 with the meaning [2, 2][3, 4] + [5, 5].

Interval addition and multiplication are known to be commutative, associative and sub-distributive. More details on interval arithmetic and interval computation in general can be found in the books [1, 33, 34, 35].
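To make the four operations concrete, the following is a minimal Python sketch of this arithmetic on closed intervals. It uses ordinary floating point and ignores outward rounding, so unlike a verified implementation it does not produce rigorous enclosures; the class and helper names are ours.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        o = _as_interval(other)
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, other):
        o = _as_interval(other)
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, other):
        o = _as_interval(other)
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        o = _as_interval(other)
        if o.lo <= 0 <= o.hi:
            raise ZeroDivisionError("0 must not lie in the divisor")
        q = (self.lo / o.lo, self.lo / o.hi, self.hi / o.lo, self.hi / o.hi)
        return Interval(min(q), max(q))

    __radd__ = __add__
    __rmul__ = __mul__

def _as_interval(x):
    # view a real number as a zero-radius interval
    return x if isinstance(x, Interval) else Interval(x, x)

print(2 * Interval(3, 4) + 5)   # Interval(lo=11, hi=13), i.e. [2,2][3,4] + [5,5]
```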


Interval extensions of functions. One of the fundamental problems in interval analysis is the computation of the range of a function over intervals. Let f : Rn → R be a real function and x ∈ IRn. The range of f over x is defined as

f(x) := {f(x); x ∈ x}.

For continuous and monotone functions, the range is easy to determine. If f is non-decreasing on x ∈ IR, then $f(\mathbf{x}) = [f(\underline{x}), f(\overline{x})]$, and likewise for non-increasing functions. For some simple functions, such as the sine or the square, the range is easily determined, too.

In general, the range f(x) need not be an interval. In fact, it may be neither closed nor connected. Moreover, checking whether 0 ∈ f(x) is an undecidable problem [51]. Thus one aims at determining its interval hull □f(x) instead. Besides the numerical aspects of exact calculation, computing □f(x) is still a very difficult problem in general. That is why we usually cannot hope for exact computation of □f(x), and we have to rely on more-or-less tight enclosures. The bad news is that even calculating a sufficiently tight enclosure may still be a computationally hard problem.

Consider a mapping $\mathbf{f} : \mathbb{IR}^n \to \mathbb{IR}$, called an interval function. We say that it is inclusion isotonic if for every x, y ∈ IRn:

$\mathbf{x} \subseteq \mathbf{y} \ \Rightarrow\ \mathbf{f}(\mathbf{x}) \subseteq \mathbf{f}(\mathbf{y}).$

Further, the interval function $\mathbf{f} : \mathbb{IR}^n \to \mathbb{IR}$ is an interval extension of f : Rn → R if for every x ∈ Rn:

$\mathbf{f}([x, x]) = f(x).$

These two properties, inclusion isotonicity and interval extension, are enough to get a proper enclosure for the range of f over x. The following theorem goes back to Moore [33].

Theorem 1.1 (Fundamental theorem of interval analysis). If $\mathbf{f} : \mathbb{IR}^n \to \mathbb{IR}$ is inclusion isotonic and is an interval extension of f : Rn → R, then for every x ∈ IRn:

$f(\mathbf{x}) \subseteq \mathbf{f}(\mathbf{x}).$

Interval arithmetic is one example of an inclusion isotonic interval extension. Suppose that we have an arithmetic expression for f : Rn → R using only a finite number of arithmetic operations. The corresponding natural interval extension $\mathbf{f}$ of f is defined by that expression when replacing real arithmetic by the interval one.

Theorem 1.2. The natural interval extension of an arithmetic expression is both an interval extension and inclusion isotonic.

Dependency problem. When evaluating arithmetic expressions in interval arithmetic, we face the so-called dependency problem. As an example, consider f(x) = x² − 2x and x ∈ x = [1, 2]. Direct evaluation yields the enclosure f(x) ⊆ x² − 2x = [−3, 2], whereas the equivalent formulation f(x) = (x − 1)² − 1 results in the exact image f(x) = (x − 1)² − 1 = [−1, 0]. The reason is that in the latter expression the parameter appears only once, so the evaluation by interval arithmetic is exact, while in the former it appears twice. Unfortunately, not every function can be expressed with a single occurrence of each variable, and that is why direct evaluations can suffer from high overestimation of the true image.
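The example above can be replayed numerically; a small Python sketch with plain endpoint arithmetic (rounding ignored):

```python
lo, hi = 1.0, 2.0                      # x = [1, 2]

# direct evaluation of x^2 - 2x: the two occurrences of x vary independently
sq = (min(lo*lo, lo*hi, hi*hi), max(lo*lo, lo*hi, hi*hi))   # x^2 = [1, 4]
two_x = (2*lo, 2*hi)                                        # 2x  = [2, 4]
direct = (sq[0] - two_x[1], sq[1] - two_x[0])               # [-3, 2]

# (x - 1)^2 - 1: x occurs once; here x - 1 = [0, 1] >= 0, so squaring is monotone
t_lo, t_hi = lo - 1.0, hi - 1.0
exact = (t_lo**2 - 1.0, t_hi**2 - 1.0)                      # [-1, 0], the true range

print(direct, exact)                                        # (-3.0, 2.0) (-1.0, 0.0)
```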


1.2 Systems of interval linear equations

Let A ∈ IRm×n and b ∈ IRm. A system of interval linear equations is the family of linear systems

Ax = b, A ∈ A, b ∈ b.

We denote this family shortly as

Ax = b,

but the aim is not to find an (interval) vector x that satisfies these equations. A solution is defined as a solution of Ax = b for some A ∈ A and b ∈ b. The solution set is defined as the set of all solutions and is denoted

Σ := {x ∈ Rn; ∃A ∈ A, ∃b ∈ b : Ax = b}. (1.1)

Comprehensive works on interval systems include [1, 6, 33, 34, 35].

Characterization. The well-known characterization of Σ comes from [38].

Theorem 1.3 (Oettli–Prager, 1964). The solution set Σ is described by the inequality system

$|A^c x - b^c| \le A^\Delta |x| + b^\Delta.$

From this description we see that Σ is a non-convex polyhedral set, which is, however, convex in each orthant. The problem of checking Σ ≠ ∅ is NP-hard; see [32]. This is true even in the class of problems with m = n; cf. [6].
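Since the Oettli–Prager inequality is entrywise, membership of a candidate point x in Σ can be tested in a few lines; a NumPy sketch on hypothetical toy data (names ours, plain floating point instead of directed rounding):

```python
import numpy as np

# toy interval system [A_lo, A_hi] x = [b_lo, b_hi]
A_lo = np.array([[ 9., -1.], [-1.,  9.]]); A_hi = np.array([[11., 1.], [1., 11.]])
b_lo = np.array([ 9.,  9.]);               b_hi = np.array([11., 11.])

Ac, Ad = (A_hi + A_lo) / 2, (A_hi - A_lo) / 2   # center and radius
bc, bd = (b_hi + b_lo) / 2, (b_hi - b_lo) / 2

def in_solution_set(x):
    """Oettli-Prager: x in Sigma  iff  |Ac x - bc| <= Ad |x| + bd (entrywise)."""
    return bool(np.all(np.abs(Ac @ x - bc) <= Ad @ np.abs(x) + bd))

print(in_solution_set(np.array([1.0, 1.0])))   # True: solves the midpoint system
print(in_solution_set(np.array([5.0, 5.0])))   # False
```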

1.2.1 Enclosure methods

Here we restrict ourselves to the most typical case m = n, and we want to compute an enclosure of Σ. Many methods for computing enclosures of Σ are just extensions of classical methods for the real case, such as Gaussian elimination or the Jacobi and Gauss–Seidel iterations. There are also some specific algorithms developed directly for the interval case, for example the Krawczyk method (see [30, 34, 35]), the Hansen–Bliek–Rohn method (see [3, 6, 8, 36, 37, 43]) and the magnitude method [25].

Preconditioning. Most of the methods use preconditioning, which was first introduced for interval systems by Hansen [7]. Preconditioning by a matrix C ∈ Rn×n means that we multiply both sides of Ax = b by C in interval arithmetic to obtain a new interval system

(CA)x = Cb.

Due to the properties of interval arithmetic, the solution set of the new system contains the original one. Even though the new solution set is larger than the original one, the overestimation is usually small, and the main argument for preconditioning is that many methods perform better on preconditioned systems.


Mostly, we precondition by a numerically computed (Ac)−1, which has both good theoretical properties and good practical performance. If we precondition by (Ac)−1, then the interval hull can be computed in polynomial time using the Hansen–Bliek–Rohn method. However, other methods are useful as well, since they are faster and their overestimation is low. We briefly describe the Gauss–Seidel and magnitude methods.

Gauss–Seidel method. This is an iterative method that starts with an initial enclosure x of Σ, and then iteratively improves it by

$\mathbf{x}_i := \frac{1}{\mathbf{a}_{ii}}\Big(\mathbf{b}_i - \sum_{j \ne i} \mathbf{a}_{ij}\mathbf{x}_j\Big) \cap \mathbf{x}_i, \quad i = 1, \dots, n.$
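A Python sketch of one such sweep, on a lower/upper-bound representation of the intervals (plain floating point, no outward rounding; an empty intersection, which would prove that x contains no solution, is not handled here):

```python
import numpy as np

def interval_mul(p_lo, p_hi, q_lo, q_hi):
    c = (p_lo*q_lo, p_lo*q_hi, p_hi*q_lo, p_hi*q_hi)
    return min(c), max(c)

def gauss_seidel_sweep(A_lo, A_hi, b_lo, b_hi, x_lo, x_hi):
    """x_i := (1/a_ii)(b_i - sum_{j != i} a_ij x_j)  intersected with  x_i,
    assuming 0 is not contained in any diagonal entry a_ii."""
    n = len(b_lo)
    x_lo, x_hi = x_lo.copy(), x_hi.copy()
    for i in range(n):
        s_lo, s_hi = b_lo[i], b_hi[i]
        for j in range(n):
            if j != i:                     # subtract a_ij * x_j
                m_lo, m_hi = interval_mul(A_lo[i, j], A_hi[i, j], x_lo[j], x_hi[j])
                s_lo, s_hi = s_lo - m_hi, s_hi - m_lo
        # divide by a_ii: 1/[a_lo, a_hi] = [1/a_hi, 1/a_lo] when 0 is excluded
        q_lo, q_hi = interval_mul(s_lo, s_hi, 1/A_hi[i, i], 1/A_lo[i, i])
        x_lo[i], x_hi[i] = max(x_lo[i], q_lo), min(x_hi[i], q_hi)
    return x_lo, x_hi
```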

Magnitude method. This is a very recent method from Hladík [25], using a kind of generalization of the Gauss–Seidel method. We assume that Ax = b is already preconditioned. Next, we assume that Ac = In, which is easily achieved by a suitable inflation of the interval entries of A. Finally, we assume that the spectral radius of A∆, denoted by ρ(A∆), satisfies ρ(A∆) < 1; this is a necessary and sufficient condition for A to contain nonsingular matrices only. Now, the method works as follows:

1. Compute u, an enclosure of the solution of the real system (In − A∆)u = mag(b).

2. Calculate

$d_i := \overline{a}_{ii} / \big(1 - ((A^\Delta)^2)_{ii}\big), \quad i = 1, \dots, n.$

3. Evaluate

$\mathbf{x}^*_i := \frac{\mathbf{b}_i + \big(\sum_{j \ne i} a^\Delta_{ij} u_j - \gamma_i u_i\big)[-1, 1]}{\mathbf{a}_{ii} + \gamma_i [-1, 1]}, \quad i = 1, \dots, n,$

where $\gamma_i := \underline{a}_{ii} - 1/d_i$.

As a result, we have Σ ⊆ x*. The third step of the method is one Gauss–Seidel-like iteration applied to the enclosure [−u, u] ⊇ Σ. It was proved in [25] that the computed enclosure x* is always at least as tight as that calculated by the interval Gauss–Seidel method. Moreover, if the di are replaced by the exact values ((In − A∆)−1)ii, i = 1, . . . , n, then x* coincides with the interval hull □Σ, up to numerical precision.
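A compact Python sketch of the three steps under the stated assumptions (Ac = In, so A is given by its radius matrix A∆ alone; function name ours, plain floating point, whereas a verified implementation would round outward and guard the divisions):

```python
import numpy as np

def magnitude_method(A_rad, b_lo, b_hi):
    """Sketch of the magnitude method for A x = b with A^c = I_n,
    where A_rad = A^Delta and rho(A_rad) < 1 is assumed."""
    n = len(b_lo)
    mag_b = np.maximum(np.abs(b_lo), np.abs(b_hi))
    # step 1: u solves (I_n - A_rad) u = mag(b)
    u = np.linalg.solve(np.eye(n) - A_rad, mag_b)
    # step 2: d_i = hi(a_ii) / (1 - ((A_rad)^2)_ii)
    d = (1 + np.diag(A_rad)) / (1 - np.einsum('ij,ji->i', A_rad, A_rad))
    gamma = (1 - np.diag(A_rad)) - 1.0 / d        # gamma_i = lo(a_ii) - 1/d_i
    x_lo, x_hi = np.empty(n), np.empty(n)
    for i in range(n):
        # s = sum_{j != i} rad(a_ij) u_j - gamma_i u_i
        s = A_rad[i] @ u - A_rad[i, i] * u[i] - gamma[i] * u[i]
        num_lo, num_hi = b_lo[i] - s, b_hi[i] + s        # b_i + s [-1, 1]
        den_lo = 1 - A_rad[i, i] - gamma[i]              # a_ii + gamma_i [-1, 1]
        den_hi = 1 + A_rad[i, i] + gamma[i]
        x_lo[i] = num_lo / (den_hi if num_lo >= 0 else den_lo)
        x_hi[i] = num_hi / (den_lo if num_hi >= 0 else den_hi)
    return x_lo, x_hi
```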

1.2.2 Linear dependencies

So far, we have considered the case when the parameters can attain any value from their interval domains independently of each other. This assumption is very restrictive and hardly ever satisfied in practice. Mostly, there are some dependencies between the interval parameters. Thus, taking dependencies into account is an important and challenging problem.

As a rich class of systems with dependencies, consider problems with a linear parametric structure

A(p)x = b(p),

where $A(p) = \sum_{k=1}^{K} A_k p_k$ and $b(p) = \sum_{k=1}^{K} b_k p_k$ for a given interval vector p ∈ IRK, matrices A1, . . . , AK ∈ Rn×n and vectors b1, . . . , bK ∈ Rn, and where p ∈ p. This linear parametric case covers a wide class of interval systems with dependencies, for instance interval systems Ax = b where the constraint matrix is required to be symmetric, skew-symmetric, Toeplitz or Hankel.

The solution set. The solution set of a parametric interval system is defined as

Σp := {x ∈ Rn; A(p)x = b(p) for some p ∈ p}.

Many researchers have tried to generalize the Oettli–Prager Theorem 1.3 to the parametric case, but no simple characterization has appeared. Hladík [11] derived an Oettli–Prager-like characterization of interval linear equations and inequalities with a simple dependence structure given by multiple appearances of a submatrix in the constraints. This was utilized by Hladík [16] to characterize the solutions of a system of complex interval equations, where the complex intervals have a rectangular form. Fourier–Motzkin type elimination, characterizing the solution set by a system of nonlinear inequalities, was utilized in [2, 39], but the number of inequalities may be doubly exponential.

The symmetric solution set. Symmetry of the constraint matrix is a special type of the linear parametric form. Herein, the symmetric solution set reads

{x ∈ Rn; Ax = b for some symmetric A ∈ A}.

An explicit description of this set was derived in Hladík [12], and it uses "only" exponentially many inequalities:

$A^\Delta|x| + b^\Delta \ge |r|,$

$\sum_{i,j=1}^{n} a^\Delta_{ij}\,|x_i x_j (p_i - q_j)| + \sum_{i=1}^{n} b^\Delta_i\,|x_i(p_i + q_i)| \ge \Big|\sum_{i=1}^{n} r_i x_i (p_i - q_i)\Big|$

for all vectors p, q ∈ {0, 1}^n \ {0, 1} such that

$p \prec_{lex} q$ and $(p = 1 - q \ \vee\ \exists i : p_i = q_i = 0),$

where r := −A^c x + b^c and "≺lex" denotes the lexicographical ordering.

It remains open whether there is a polynomial characterization by means of nonlinear inequalities. In any case, checking whether x ∈ Σp for a given x ∈ Rn is polynomially decidable via linear programming, even for a general linear parametric solution set.

Enclosures. To find an enclosure of Σp, we can simply "forget" the dependencies and enclose the relaxed system Ax = b by standard methods, where A := A(p) and b := b(p) are evaluated in interval arithmetic. Utilizing the dependencies, however, leads to tighter enclosures. Various extensions of standard methods to the parametric case were presented in [42, 47, 48, 50], among others. In particular, Hladík [20] generalized the Bauer–Skeel and the Hansen–Bliek–Rohn bounds to this case and combined them together, yielding a more efficient algorithm.
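The relaxation itself amounts to evaluating A(p) and b(p) entrywise in interval arithmetic and forgetting where each coefficient came from; a NumPy sketch (names ours):

```python
import numpy as np

def relax_parametric(As, bs, p_lo, p_hi):
    """Relax A(p) x = b(p), with A(p) = sum_k A_k p_k and b(p) = sum_k b_k p_k,
    into an independent interval system [A_lo, A_hi] x = [b_lo, b_hi]."""
    n = As[0].shape[0]
    A_lo, A_hi = np.zeros((n, n)), np.zeros((n, n))
    b_lo, b_hi = np.zeros(n), np.zeros(n)
    for Ak, bk, pl, ph in zip(As, bs, p_lo, p_hi):
        # entrywise range of A_k * t and b_k * t over t in [pl, ph]
        A_lo += np.minimum(Ak * pl, Ak * ph); A_hi += np.maximum(Ak * pl, Ak * ph)
        b_lo += np.minimum(bk * pl, bk * ph); b_hi += np.maximum(bk * pl, bk * ph)
    return A_lo, A_hi, b_lo, b_hi
```

Any standard enclosure method applied to the relaxed system then yields a (generally overestimated) enclosure of Σp.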


1.2.3 AE solution set

In the definition (1.1) of the standard solution set, the interval parameters are associated with existential quantifiers. In some problems, universal quantifiers may appear, too. Thus, a natural generalization of the solution concept is to associate each parameter pk ∈ ∪i,j{aij, bi} with a quantification Qk ∈ {∀, ∃}. Now, a solution is any vector x ∈ Rn satisfying the quantified formula

$Q_1 p_1,\ Q_2 p_2,\ \dots,\ Q_{n^2+n}\, p_{n^2+n} :\ Ax = b.$

Treating such solutions is a tempting problem. The known results concern the special case of ∀∃-quantification, called AE quantification in the interval community (AE for "all–exists"). The interval quantities that are universally quantified are denoted by A∀, b∀, and the existentially quantified ones by A∃, b∃. Thus, the interval system Ax = b can be written as (A∀ + A∃)x = b∀ + b∃, and the so-called AE solution set is defined as

$\Sigma_{AE} := \{x \in \mathbb{R}^n;\ \forall A^\forall \in \mathbf{A}^\forall\ \forall b^\forall \in \mathbf{b}^\forall\ \exists A^\exists \in \mathbf{A}^\exists\ \exists b^\exists \in \mathbf{b}^\exists : (A^\forall + A^\exists)x = b^\forall + b^\exists\}.$

Special cases. There are some important special cases of the general AE-solution concept. For example, tolerable solutions are defined by the condition

∀A ∈ A,∃b ∈ b : Ax = b,

and controllable solutions are defined by

∀b ∈ b,∃A ∈ A : Ax = b.

Characterization. Surprisingly, the AE solution set can be described in much the same manner as in the Oettli–Prager theorem; see [49] and references therein.

Theorem 1.4 (Shary, 1995). We have

$\Sigma_{AE} = \{x \in \mathbb{R}^n;\ \mathbf{A}^\forall x - \mathbf{b}^\forall \subseteq \mathbf{b}^\exists - \mathbf{A}^\exists x\}.$

Theorem 1.5 (Rohn, 1996). We have

$\Sigma_{AE} = \{x \in \mathbb{R}^n;\ |A^c x - b^c| \le (\mathrm{rad}\,\mathbf{A}^\exists - \mathrm{rad}\,\mathbf{A}^\forall)|x| + \mathrm{rad}\,\mathbf{b}^\exists - \mathrm{rad}\,\mathbf{b}^\forall\}.$

Combining linear dependencies with AE solutions, we arrive at a general model, which is within the scope of current research. Popova [40] utilized Fourier–Motzkin type elimination to characterize the solution set by a system of nonlinear inequalities. Concerning enclosures of the corresponding solution set, Popova and Hladík [41] recently generalized the single-step Bauer–Skeel method, and for the tolerable solution set they proposed a linear programming based method that yields optimal enclosures under some assumptions.
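Rohn's characterization (Theorem 1.5) turns the AE membership question into a single entrywise inequality; a NumPy sketch, where Ac and bc are the midpoints of the whole system A = A∀ + A∃, b = b∀ + b∃ (function name ours):

```python
import numpy as np

def is_AE_solution(Ac, bc, radA_forall, radA_exists, radb_forall, radb_exists, x):
    """Theorem 1.5: x is an AE solution iff
    |Ac x - bc| <= (rad A_exists - rad A_forall)|x| + rad b_exists - rad b_forall."""
    lhs = np.abs(Ac @ x - bc)
    rhs = (radA_exists - radA_forall) @ np.abs(x) + radb_exists - radb_forall
    return bool(np.all(lhs <= rhs))
```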


1.3 The interval eigenvalue problem

1.3.1 The symmetric case

If A ∈ Rn×n is symmetric, then it has real eigenvalues, and thus we may suppose that they are sorted non-increasingly:

λ1(A) ≥ · · · ≥ λn(A).

Given an interval matrix A ∈ IRn×n with $\underline{A}$ and $\overline{A}$ symmetric, the corresponding symmetric interval matrix is defined as

AS := {A ∈ A; A = AT}.

Its eigenvalue sets are defined as

λi(AS) := {λi(A); A ∈ AS} , i = 1, . . . , n.

Each eigenvalue set $\lambda_i(\mathbf{A}^S)$ consists of the ith eigenvalues of all symmetric matrices in A. By the continuity of eigenvalues and the compactness and convexity of A^S, it is easy to see that $\lambda_i(\mathbf{A}^S)$, i = 1, . . . , n, are compact intervals. They may be disjoint or they may overlap, but one interval can never lie in the interior of another one.

Exact bounds. The largest and smallest eigenvalues of A^S can be calculated by an exponential-time formula due to Hertz [9]. These two extremal eigenvalues are attained at matrices of a special form.

Theorem 1.6 (Hertz, 1992). We have

$\overline{\lambda_1}(\mathbf{A}^S) = \max_{z \in \{\pm1\}^n} \lambda_1\big(A^c + \mathrm{diag}(z)\,A^\Delta\,\mathrm{diag}(z)\big),$

$\underline{\lambda_n}(\mathbf{A}^S) = \min_{z \in \{\pm1\}^n} \lambda_n\big(A^c - \mathrm{diag}(z)\,A^\Delta\,\mathrm{diag}(z)\big).$

The other boundary points of the eigenvalue sets need not be attained at these matrices; moreover, they need not be attained at vertex matrices (matrices with entries $a_{ij} \in \{\underline{a}_{ij}, \overline{a}_{ij}\}$ for all i, j). Hladík et al. [28] extended the Hertz theorem to the boundary eigenvalues of $\lambda_i(\mathbf{A}^S)$, i = 1, . . . , n, that lie in no other eigenvalue set. Based on these results, they also developed an algorithm (the so-called submatrix vertex enumeration) that computes exact bounds (up to numerical precision) under some additional assumptions. In the general case, it returns inner and outer approximations of the eigenvalue sets.

A complete characterization of the eigenvalue sets still remains a challenging open problem.

Enclosures. Calculation of the eigenvalue sets is computationally intractable. Even checking whether 0 ∈ $\lambda_i(\mathbf{A}^S)$ for some i = 1, . . . , n is NP-hard; for some more NP-hardness results see [31]. That is why we again turn our attention to tight enclosures of the eigenvalue sets.

A simple enclosure of the eigenvalue sets is obtained from Weyl's theorem; cf. [26, 46]. Recall that ρ(A) stands for the spectral radius of A.


Theorem 1.7. We have

$\lambda_i(\mathbf{A}^S) \subseteq \big[\lambda_i(A^c) - \rho(A^\Delta),\ \lambda_i(A^c) + \rho(A^\Delta)\big], \quad i = 1, \dots, n.$
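This bound is cheap: one eigenvalue decomposition of Ac plus the spectral radius of A∆. A NumPy sketch (function name ours):

```python
import numpy as np

def weyl_enclosures(A_lo, A_hi):
    """Enclosures [lambda_i(Ac) - rho(Ad), lambda_i(Ac) + rho(Ad)] of the
    eigenvalue sets of a symmetric interval matrix (Theorem 1.7)."""
    Ac, Ad = (A_hi + A_lo) / 2, (A_hi - A_lo) / 2
    lam = np.linalg.eigvalsh(Ac)[::-1]            # sorted non-increasingly
    rho = np.abs(np.linalg.eigvals(Ad)).max()     # spectral radius of A_Delta
    return [(l - rho, l + rho) for l in lam]
```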

Hladík et al. [26] also employed the Cauchy interlacing property in two different ways to enclose the eigenvalue sets by other means, and proposed several other computationally cheap bounds. Combining these approaches leads to an enclosing algorithm that is efficient with respect to both time and tightness.

Contracting method. Hladík et al. [29] developed an iterative contractor, which starts with an enclosure of the eigenvalue sets and iteratively makes it tighter. The method works only for disjoint enclosures and does not converge to the optimal bounds in general, but the formula is computationally cheap, converges in a low number of steps, and often reduces the overestimation significantly. The method is based on the following theorem: for a given λ0 that is an eigenvalue of no matrix in A^S, it determines a neighbourhood interval containing no eigenvalues either.

Theorem 1.8. Let $\lambda_0 \notin \bigcup_{i=1}^{n} \lambda_i(\mathbf{A}^S)$ and define M := A − λ0 I. Then $\lambda_0 + \lambda \notin \bigcup_{i=1}^{n} \lambda_i(\mathbf{A}^S)$ for all real λ satisfying

$|\lambda| < \frac{2 - \rho\big(|I - QM^c| + |I - QM^c|^T + |Q|M^\Delta + (M^\Delta)^T|Q|^T\big)}{\rho\big(|Q| + |Q|^T\big)}, \qquad (1.2)$

where Q ∈ Rn×n, Q ≠ 0, is an arbitrary matrix.

Even though the theorem is valid for any Q ≠ 0, an appropriate choice of Q influences the effectivity of formula (1.2). The authors recommended the choice Q = (Mc)−1, or its numerical approximation.
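A direct NumPy transcription of formula (1.2) with the recommended choice Q ≈ (Mc)−1 might look as follows (floating point only; a nonpositive numerator certifies nothing, in which case no exclusion interval is obtained):

```python
import numpy as np

def exclusion_radius(A_lo, A_hi, lam0):
    """Radius of an eigenvalue-free neighbourhood of lam0 by formula (1.2),
    for M = A - lam0*I and Q = inv(Mc); meaningful only if positive."""
    n = A_lo.shape[0]
    Mc = (A_hi + A_lo) / 2 - lam0 * np.eye(n)
    Md = (A_hi - A_lo) / 2
    Q = np.linalg.inv(Mc)
    rho = lambda X: np.abs(np.linalg.eigvals(X)).max()
    R = np.abs(np.eye(n) - Q @ Mc)
    num = 2 - rho(R + R.T + np.abs(Q) @ Md + Md.T @ np.abs(Q).T)
    return num / rho(np.abs(Q) + np.abs(Q).T)
```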

1.3.2 The general case

We have mentioned that the calculation of eigenvalue bounds for symmetric interval matrices is a computationally hard problem. Bounding the (complex) eigenvalues of a general interval matrix A ∈ IRn×n is even much more difficult.

The following enclosure for all eigenvalues of all A ∈ A is by Hladík [22]. Originally developed for eigenvalues of complex interval matrices, we state it in the real form for the sake of simplicity.

Theorem 1.9. For any A ∈ A and its eigenvalue ν = λ + iµ we have

$\underline{\lambda_n}\big(\tfrac{1}{2}(\mathbf{A} + \mathbf{A}^T)^S\big) \le \lambda \le \overline{\lambda_1}\big(\tfrac{1}{2}(\mathbf{A} + \mathbf{A}^T)^S\big),$

$\underline{\lambda_n}\begin{pmatrix} 0 & \tfrac{1}{2}(\mathbf{A} - \mathbf{A}^T) \\ \tfrac{1}{2}(\mathbf{A}^T - \mathbf{A}) & 0 \end{pmatrix}^S \le \mu \le \overline{\lambda_1}\begin{pmatrix} 0 & \tfrac{1}{2}(\mathbf{A} - \mathbf{A}^T) \\ \tfrac{1}{2}(\mathbf{A}^T - \mathbf{A}) & 0 \end{pmatrix}^S.$

The theorem generalizes and improves some older formulae [10, 45]. The main idea behind it is to reduce the general case to the symmetric one. In particular, the reduction uses the right endpoint of the largest eigenvalue set and the left endpoint of the smallest eigenvalue set. Thus, the more efficient the method employed for computing the extremal eigenvalues of symmetric interval matrices, the more efficient these formulae are.
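Combined with any bound for the symmetric case (here the cheap Weyl-type bound of Theorem 1.7), Theorem 1.9 is a few lines of NumPy; the function names are ours:

```python
import numpy as np

def eigenvalue_box(A_lo, A_hi):
    """Bounds on the real and imaginary parts of all eigenvalues of all
    A in [A_lo, A_hi] (Theorem 1.9), via Weyl-type bounds (Theorem 1.7)."""
    n = A_lo.shape[0]

    def sym_extremes(S_lo, S_hi):   # Weyl bounds for a symmetric interval matrix
        Sc, Sd = (S_hi + S_lo) / 2, (S_hi - S_lo) / 2
        lam = np.linalg.eigvalsh(Sc)
        rho = np.abs(np.linalg.eigvals(Sd)).max()
        return lam[0] - rho, lam[-1] + rho

    # real parts: the symmetric part (A + A^T)/2 as an interval matrix
    re = sym_extremes((A_lo + A_lo.T) / 2, (A_hi + A_hi.T) / 2)
    # imaginary parts: the 2n x 2n symmetric block matrix built from (A - A^T)/2
    K_lo, K_hi = (A_lo - A_hi.T) / 2, (A_hi - A_lo.T) / 2
    Z = np.zeros((n, n))
    B_lo = np.block([[Z, K_lo], [K_lo.T, Z]])
    B_hi = np.block([[Z, K_hi], [K_hi.T, Z]])
    im = sym_extremes(B_lo, B_hi)
    return re, im
```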


Enclosing the eigenvalues of A ∈ A by circles was presented by Hladík et al. [26]. They adapted the well-known Bauer–Fike theorem and its generalization by Chu [5] from the perturbation theory of eigenvalues of diagonalizable and not necessarily diagonalizable matrices, respectively.

Upper bounds on the maximal spectral radius,

max{ρ(A); A ∈ A},

were dealt with in Hladík [17]. He proposed two computationally cheap and tight formulae and adapted the contracting method mentioned above to refine the computed values.

Real eigenvalues. Now we focus on the real eigenvalues of general matrices A ∈ A. The set of real eigenvalues is defined as

Λ(A) := {λ ∈ R; Ax = λx, x 6= 0, A ∈ A}.

The iterative contractor from [29] discussed above is applicable in this situation, too, as are the circle enclosures. A more thorough investigation was presented by Hladík et al. [27]. Adapting sufficient or necessary conditions for regularity of interval matrices and enhancing various techniques from interval computation, they developed an algorithm approximating Λ(A) by means of inner and outer enclosures. Moreover, by using eigenvalue theorems from Rohn [44], they managed to achieve exact boundary points (up to numerical precision) of Λ(A) under some mild assumptions only.

1.4 Further topics

Interval linear algebra is a basis for solving many interval-valued problems.

Interval linear inequalities. The generalization of the Oettli–Prager Theorem 1.3 to interval mixed linear equations and inequalities is due to Hladík [23]. He also considered strong solvability (i.e., AE solutions with universal quantifiers only) for such systems.

Interval linear programming. Linear programming with coefficients varying in given intervals was surveyed in Hladík [21]. Hladík [13] proposed a general scheme for computing the range of optimal values subject to variations of the parameters in given intervals; the scheme involves not only the basic linear programming formulations using equations or inequalities, but it is also able to handle dependencies between the parameters. Further, Hladík [24] studied the conditions under which an optimal basis remains optimal under any perturbation of the parameters in the intervals, and proposed a sufficient condition for such basis stability. NP-hardness results in the area of multiobjective interval linear programming were presented by Hladík [19]. He showed, for instance, that checking whether a given solution remains Pareto optimal for any perturbation of the objective function coefficients in given intervals is a co-NP-complete problem.


Interval nonlinear programming. Hladík [18] proposed a general framework for determining bounds on the optimal values of nonlinear programming problems when the input data vary in given intervals. He applied the approach to two classes of optimization problems: convex quadratic programming and posynomial geometric programming. Hladík [14] considered a similar problem for interval-valued linear fractional programming problems.

Interval matrix games. Suppose that the payoffs of a bimatrix game are subject to interval uncertainty. Hladík [15] discussed the problem of the existence of an equilibrium common to all instances of the interval values. He also characterized the set of all possible equilibria by means of a linear mixed integer system.

Interval linear regression. Enclosing least squares solutions of interval-valued overdetermined linear systems, with straightforward applications in statistics, was considered in Černý, Antoch and Hladík [4].


Bibliography

[1] G. Alefeld and J. Herzberger. Introduction to Interval Computations. Computer Science and Applied Mathematics. Academic Press, New York, 1983.

[2] G. Alefeld, V. Kreinovich, and G. Mayer. On the solution sets of particular classes of linear interval systems. J. Comput. Appl. Math., 152(1-2):1–15, 2003.

[3] C. Bliek. Computer Methods for Design Automation. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, July 1992.

[4] M. Černý, J. Antoch, and M. Hladík. On the possibilistic approach to linear regression models involving uncertain, indeterminate or interval data. Inf. Sci., 244:26–47, 2013.

[5] K.-w. E. Chu. Generalization of the Bauer-Fike theorem. Numer. Math.,49(6):685–691, 1986.

[6] M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, and K. Zimmermann. Linear Optimization Problems with Inexact Data. Springer, New York, 2006.

[7] E. Hansen. Interval arithmetic in matrix computations, Part I. J. Soc. Ind.Appl. Math., Ser. B, Numer. Anal., 2(2):308–320, 1965.

[8] E. R. Hansen. Bounding the solution of interval linear equations. SIAM J.Numer. Anal., 29(5):1493–1503, 1992.

[9] D. Hertz. The extreme eigenvalues and stability of real symmetric interval matrices. IEEE Trans. Autom. Control, 37(4):532–535, 1992.

[10] D. Hertz. Interval analysis: Eigenvalue bounds of interval matrices. In C. A. Floudas and P. M. Pardalos, editors, Encyclopedia of Optimization, pages 1689–1696. Springer, New York, 2009.

[11] M. Hladík. Solution set characterization of linear interval systems with a specific dependence structure. Reliab. Comput., 13(4):361–374, 2007.

[12] M. Hladík. Description of symmetric and skew-symmetric solution set. SIAM J. Matrix Anal. Appl., 30(2):509–521, 2008.

[13] M. Hladík. Optimal value range in interval linear programming. Fuzzy Optim. Decis. Mak., 8(3):283–294, 2009.

[14] M. Hladík. Generalized linear fractional programming under interval uncertainty. Eur. J. Oper. Res., 205(1):42–46, 2010.

[15] M. Hladík. Interval valued bimatrix games. Kybernetika, 46(3):435–446, 2010.

[16] M. Hladík. Solution sets of complex linear interval systems of equations. Reliab. Comput., 14:78–87, 2010.


[17] M. Hladík. Error bounds on the spectral radius of uncertain matrices. In T. Simos, editor, Proceedings of the International Conference on Numerical Analysis and Applied Mathematics 2011 (ICNAAM-2011), G-Hotels, Halkidiki, Greece, 19–25 September, volume 1389 of AIP Conference Proceedings, pages 882–885, Melville, New York, 2011. American Institute of Physics (AIP).

[18] M. Hladík. Optimal value bounds in nonlinear programming with interval data. TOP, 19(1):93–106, 2011.

[19] M. Hladík. Complexity of necessary efficiency in interval linear programming and multiobjective linear programming. Optim. Lett., 6(5):893–899, 2012.

[20] M. Hladík. Enclosures for the solution set of parametric interval linear systems. Int. J. Appl. Math. Comput. Sci., 22(3):561–574, 2012.

[21] M. Hladík. Interval linear programming: A survey. In Z. A. Mann, editor, Linear Programming – New Frontiers in Theory and Applications, chapter 2, pages 85–120. Nova Science Publishers, New York, 2012.

[22] M. Hladík. Bounds on eigenvalues of real and complex interval matrices. Appl. Math. Comput., 219(10):5584–5591, 2013.

[23] M. Hladík. Weak and strong solvability of interval linear systems of equations and inequalities. Linear Algebra Appl., 438(11):4156–4165, 2013.

[24] M. Hladík. How to determine basis stability in interval linear programming. Optim. Lett., 8(1):375–389, 2014.

[25] M. Hladík. New operator and method for solving real preconditioned interval linear equations. SIAM J. Numer. Anal., 52(1):194–206, 2014.

[26] M. Hladík, D. Daney, and E. Tsigaridas. Bounds on real eigenvalues and singular values of interval matrices. SIAM J. Matrix Anal. Appl., 31(4):2116–2129, 2010.

[27] M. Hladík, D. Daney, and E. P. Tsigaridas. An algorithm for addressing the real interval eigenvalue problem. J. Comput. Appl. Math., 235(8):2715–2730, 2011.

[28] M. Hladík, D. Daney, and E. P. Tsigaridas. Characterizing and approximating eigenvalue sets of symmetric interval matrices. Comput. Math. Appl., 62(8):3152–3163, 2011.

[29] M. Hladík, D. Daney, and E. P. Tsigaridas. A filtering method for the interval eigenvalue problem. Appl. Math. Comput., 217(12):5236–5242, 2011.

[30] R. Krawczyk. Newton-Algorithmen zur Bestimmung von Nullstellen mit Fehlerschranken. Comput., 4:187–201, 1969.

[31] V. Kreinovich, A. Lakeyev, J. Rohn, and P. Kahl. Computational Complexity and Feasibility of Data Processing and Interval Computations. Kluwer, 1998.


[32] A. V. Lakeev and S. I. Noskov. On the solution set of a linear equation with the right-hand side and operator given by intervals. Sib. Math. J., 35(5):957–966, 1994.

[33] R. E. Moore. Interval Analysis. Prentice-Hall, Englewood Cliffs, NJ, 1966.

[34] R. E. Moore, R. B. Kearfott, and M. J. Cloud. Introduction to Interval Analysis. SIAM, Philadelphia, PA, 2009.

[35] A. Neumaier. Interval Methods for Systems of Equations. Cambridge University Press, Cambridge, 1990.

[36] A. Neumaier. A simple derivation of the Hansen-Bliek-Rohn-Ning-Kearfott enclosure for linear interval equations. Reliab. Comput., 5(2):131–136, 1999.

[37] S. Ning and R. B. Kearfott. A comparison of some methods for solving linear interval equations. SIAM J. Numer. Anal., 34(4):1289–1305, 1997.

[38] W. Oettli and W. Prager. Compatibility of approximate solution of linear equations with given error bounds for coefficients and right-hand sides. Numer. Math., 6:405–409, 1964.

[39] E. D. Popova. Explicit characterization of a class of parametric solution sets. Comptes Rendus de L'Academie Bulgare des Sciences, 62(10):1207–1216, 2009.

[40] E. D. Popova. Explicit description of AE solution sets for parametric linear systems. SIAM J. Matrix Anal. Appl., 33(4):1172–1189, 2012.

[41] E. D. Popova and M. Hladík. Outer enclosures to the parametric AE solution set. Soft Comput., 17(8):1403–1414, 2013.

[42] E. D. Popova and W. Krämer. Inner and outer bounds for the solution set of parametric linear systems. J. Comput. Appl. Math., 199(2):310–316, 2007.

[43] J. Rohn. Cheap and tight bounds: The recent result by E. Hansen can be made more efficient. Interval Comput., 1993(4):13–21, 1993.

[44] J. Rohn. Interval matrices: Singularity and real eigenvalues. SIAM J. Matrix Anal. Appl., 14(1):82–91, 1993.

[45] J. Rohn. Bounds on eigenvalues of interval matrices. ZAMM, Z. Angew. Math. Mech., 78(Suppl. 3):S1049–S1050, 1998.

[46] J. Rohn. A handbook of results on interval linear problems. Technical Report 1163, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, 2012.

[47] S. M. Rump. Verification methods for dense and sparse systems of equations. In J. Herzberger, editor, Topics in Validated Computations, Studies in Computational Mathematics, pages 63–136, Amsterdam, 1994. Elsevier. Proceedings of the IMACS-GAMM International Workshop on Validated Computations, University of Oldenburg.


[48] S. M. Rump. Verification methods: Rigorous results using floating-point arithmetic. Acta Numer., 19:287–449, 2010.

[49] S. P. Shary. A new technique in systems analysis under interval uncertainty and ambiguity. Reliab. Comput., 8(5):321–418, 2002.

[50] I. Skalna. A method for outer interval solution of systems of linear equations depending linearly on interval parameters. Reliab. Comput., 12(2):107–120, 2006.

[51] W. Zhu. Unsolvability of some optimization problems. Appl. Math. Comput., 174(2):921–926, 2006.


Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.

SIAM J. NUMER. ANAL. © 2014 Society for Industrial and Applied Mathematics
Vol. 52, No. 1, pp. 194–206

NEW OPERATOR AND METHOD FOR SOLVING REAL PRECONDITIONED INTERVAL LINEAR EQUATIONS∗

MILAN HLADÍK†

Abstract. We deal with real preconditioned interval linear systems of equations. We present a new operator, which generalizes the interval Gauss–Seidel operator. Also, based on the new operator and properties of well-known methods, we propose a new algorithm, called the magnitude method. We illustrate by numerical examples that our approach outperforms some classical methods with respect to both time and sharpness of enclosures.

Key words. linear interval systems, solution set, interval matrix

AMS subject classifications. 65G40, 15A06

DOI. 10.1137/130914358

1. Introduction. We consider a real system of linear equations with coefficients varying inside given intervals, and we want to find a guaranteed enclosure for all emerging solutions. Since determining the best enclosure to the solution set is an NP-hard problem [2], the approaches to calculating it may be computationally expensive [6, 15, 20] in the worst case. That is why the field was driven to develop cheap methods for enclosing the solution set, not necessarily optimally. There are many methods known; see, e.g., [1, 2, 3, 4, 8, 9, 10, 17, 19]. Extensions to parametric interval systems were studied in [5, 13, 19], among others, and quantified solutions were investigated, e.g., in [12, 13, 21].

We will use the following interval notation. An interval matrix A is defined as

$\mathbf{A} := [\underline{A}, \overline{A}] = \{A \in \mathbb{R}^{m \times n};\ \underline{A} \le A \le \overline{A}\},$

where $\underline{A}, \overline{A} \in \mathbb{R}^{m \times n}$ are given. The center and radius of A are, respectively, defined as

$A^c := \tfrac{1}{2}(\overline{A} + \underline{A}), \qquad A^\Delta := \tfrac{1}{2}(\overline{A} - \underline{A}).$

The set of all m-by-n interval matrices is denoted by IRm×n. Interval vectors and intervals can be regarded as special interval matrices of sizes m-by-1 and 1-by-1, respectively. For a definition of interval arithmetic, see, e.g., [8, 9]. Extended interval arithmetic with improper intervals of type [a, b], a > b, was discussed, e.g., in [7, 21]. We will use improper intervals only for the simplicity of exposition of interval expressions. For example, $\mathbf{a} + [b, -b]$, where b > 0, is a shorthand for the interval $[\underline{a} + b, \overline{a} - b]$.

The magnitude of an A ∈ IRm×n is defined as $\mathrm{mag}(\mathbf{A}) := \max(|\underline{A}|, |\overline{A}|)$, where max(·) is understood entrywise. The comparison matrix of A ∈ IRn×n is the matrix ⟨A⟩ ∈ Rn×n with entries

$\langle \mathbf{A} \rangle_{ii} := \min\{|a|;\ a \in \mathbf{a}_{ii}\}, \quad i = 1, \dots, n,$

$\langle \mathbf{A} \rangle_{ij} := -\,\mathrm{mag}(\mathbf{a}_{ij}), \quad i \ne j.$

∗Received by the editors March 25, 2013; accepted for publication (in revised form) October 23, 2013; published electronically January 28, 2014. This work was supported by CE-ITI (GAP202/12/G061) of the Czech Science Foundation.
http://www.siam.org/journals/sinum/52-1/91435.html
†Faculty of Mathematics and Physics, Department of Applied Mathematics, Charles University, 118 00 Prague, Czech Republic ([email protected]).



Let A ∈ IRn×n, b ∈ IRn, and consider a set of systems of linear equations

Ax = b, A ∈ A, b ∈ b,

commonly called a system of interval linear equations. The corresponding solution set is defined as

Σ := {x ∈ Rn; ∃A ∈ A ∃b ∈ b : Ax = b}.

The aim is to compute as tight as possible an enclosure of Σ by an interval vector x ∈ IRn, meaning that Σ ⊆ x. By $\mathbf{\Sigma} := \square\Sigma$ we denote the interval hull of Σ, i.e., the smallest interval enclosure of Σ with respect to inclusion. Thus, enclosing Σ or $\mathbf{\Sigma}$ by interval vectors is the same objective.

Throughout the paper, we assume that Ac = In; that is, the midpoint of A is the identity matrix. This assumption is not without loss of generality, but most of the solvers utilize a preconditioning that results in interval linear equations

A′x = b′, A′ ∈ RA, b′ ∈ Rb,

where R is the numerically computed inverse of Ac. Thus, the midpoint of RA is nearly the identity matrix. To be numerically safe, we then relax the interval system to

A′x = b′, A′ ∈ [In − mag(In − RA), In + mag(In − RA)], b′ ∈ Rb.

Even though preconditioning causes an enlargement of the solution set, it is easier to handle. Since we do not miss any old solution, any enclosure to the preconditioned system is a valid enclosure for the original one as well.

The assumption Ac = In has many consequences. The solution set of such an interval linear system is bounded (i.e., A contains no singular matrix) if and only if ρ(AΔ) < 1, where ρ(AΔ) stands for the spectral radius of AΔ; the sufficiency follows from [14] and the necessity from [9, Prop. 4.1.7]. So in the rest of the paper we assume that this is satisfied.

Another nice property of the interval system in question is that the interval hull of the solution set can be determined exactly (up to numerical accuracy) by calling the Hansen–Bliek–Rohn method [2, 16]. Ning and Kearfott [11] (see also [10]) proposed an alternative formula for computing $\mathbf{\Sigma}$. We state it below and use the following notation:

$u := \langle\mathbf{A}\rangle^{-1}\,\mathrm{mag}(\mathbf{b}),$

$d_i := (\langle\mathbf{A}\rangle^{-1})_{ii}, \quad i = 1, \dots, n,$

$\alpha_i := \langle\mathbf{a}_{ii}\rangle - 1/d_i, \quad i = 1, \dots, n.$

Notice also that the comparison matrix ⟨A⟩ can now be expressed as ⟨A⟩ = In − AΔ.

Theorem 1.1 (Ning–Kearfott, 1997). We have

$\mathbf{\Sigma}_i = \frac{\mathbf{b}_i + (u_i/d_i - \mathrm{mag}(\mathbf{b}_i))[-1, 1]}{\mathbf{a}_{ii} + \alpha_i[-1, 1]}, \quad i = 1, \dots, n. \qquad (1.1)$
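For illustration only (it is not part of the paper), formula (1.1) can be transcribed into a few lines of Python/NumPy; plain floating point is used here, whereas the paper's setting requires a verified inverse and outward rounding. The improper numerator arising when the bracketed term is negative is handled by the same endpoint rule, following the shorthand for improper intervals above:

```python
import numpy as np

def ning_kearfott_hull(A_rad, b_lo, b_hi):
    """Interval hull of Sigma by (1.1), for A^c = I_n and A_rad = A^Delta,
    assuming rho(A_rad) < 1; a floating point sketch, not verified."""
    n = len(b_lo)
    Cinv = np.linalg.inv(np.eye(n) - A_rad)      # <A>^{-1}
    mag_b = np.maximum(np.abs(b_lo), np.abs(b_hi))
    u, d = Cinv @ mag_b, np.diag(Cinv)
    alpha = (1 - np.diag(A_rad)) - 1.0 / d       # alpha_i = <a_ii> - 1/d_i
    hull_lo, hull_hi = np.empty(n), np.empty(n)
    for i in range(n):
        s = u[i] / d[i] - mag_b[i]
        num_lo, num_hi = b_lo[i] - s, b_hi[i] + s        # b_i + s [-1, 1]
        den_lo = 1 - A_rad[i, i] - alpha[i]              # a_ii + alpha_i [-1, 1]
        den_hi = 1 + A_rad[i, i] + alpha[i]
        hull_lo[i] = num_lo / (den_hi if num_lo >= 0 else den_lo)
        hull_hi[i] = num_hi / (den_lo if num_hi >= 0 else den_hi)
    return hull_lo, hull_hi
```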

The disadvantage of the Hansen–Bliek–Rohn method is that we have to safely compute the inverse of ⟨A⟩. There are other procedures to compute a verified enclosure of Σ; see [8, 9]. They are usually faster, on account of the tightness of the


resulting enclosures. We briefly recall three of them: the well-known interval Jacobi, Gauss–Seidel, and Krawczyk iteration methods. They must be initiated by a starting enclosure x0 ⊇ Σ. In our examples we employ the formula from [8], which in our case reads

$\Sigma \subseteq \frac{\|\mathrm{mag}(\mathbf{b})\|_\infty}{1 - \|A^\Delta\|_\infty}\,[-1, 1]^n, \quad \text{provided } \|A^\Delta\|_\infty < 1. \qquad (1.2)$

Iteration methods can usually be expressed by an operator P : IRn → IRn having the property

$(\mathbf{x} \cap \Sigma) \subseteq P(\mathbf{x}). \qquad (1.3)$

Thus, the operator removes no solution included in x. As a consequence, if P(x) = ∅, then x contains no solution. Basically, iterations then can have the plain form x ↦ P(x), or the form with intersections x ↦ P(x) ∩ x.

The Krawczyk method is based on the operator

$\mathbf{x} \mapsto \mathbf{b} + (I_n - \mathbf{A})\mathbf{x}.$

Denote by D the interval diagonal matrix whose diagonal is the same as that of A, and let A′ be the interval matrix A with zero diagonal. The interval Jacobi operator reads

$\mathbf{x} \mapsto \mathbf{D}^{-1}(\mathbf{b} - \mathbf{A}'\mathbf{x}),$

where D−1 is the diagonal matrix with entries 1/d11, . . . , 1/dnn. The interval Gauss–Seidel operator proceeds by evaluating the above expression row by row and using the already updated entries of x in the subsequent rows. That is, the interval Gauss–Seidel iteration reads

for i = 1 to n do: $\quad \mathbf{x}_i := \frac{1}{\mathbf{a}_{ii}}\Big(\mathbf{b}_i - \sum_{j \ne i} \mathbf{a}_{ij}\mathbf{x}_j\Big).$

In the remainder of the paper, we will be concerned with the Krawczyk method, and with the Gauss–Seidel iterations in particular. By xGS and xK we denote the limit enclosures computed by the interval Gauss–Seidel and Krawczyk methods, respectively. The theorem below is adapted from [9] and gives an explicit formula for the enclosures.

Theorem 1.2. We have

xGS = D−1(b + mag(A′)u[−1, 1]),

xK = b + AΔu[−1, 1].

Moreover,

$u = \mathrm{mag}(\Sigma) = \mathrm{mag}(\mathbf{x}^{GS}) = \mathrm{mag}(\mathbf{x}^{K}). \qquad (1.4)$

Property (1.4), not stressed enough in the literature, shows an interesting relation between the mentioned methods. In each coordinate, all the corresponding enclosures have one endpoint in common (the one with the larger absolute value). Thus, the enclosures differ from one side only (but the difference may be large).


2. New interval operator.

Theorem 2.1. Let Σ ⊆ x ∈ IRn. Then

$\Sigma_i \subseteq \frac{\mathbf{b}_i - \sum_{j \ne i}\mathbf{a}_{ij}\mathbf{x}_j + [\gamma_i, -\gamma_i]u_i}{\mathbf{a}_{ii} + \gamma_i[-1, 1]} \qquad (2.1)$

for every γi ∈ [0, αi] and i = 1, . . . , n.

Proof. Let i ∈ {1, . . . , n}. First, we prove the statement for γi = αi. By Theorem 1.1,

$\mathbf{\Sigma}_i = \frac{\mathbf{b}_i + (u_i/d_i - \mathrm{mag}(\mathbf{b}_i))[-1, 1]}{\mathbf{a}_{ii} + \alpha_i[-1, 1]}.$

The denominator is the same as in (2.1), and it is a positive interval. Thus, it is sufficient to compare the numerators only. We have

$\mathbf{b}_i + (u_i/d_i - \mathrm{mag}(\mathbf{b}_i))[-1,1] = \mathbf{b}_i + \big(u_i/d_i - (\langle\mathbf{A}\rangle u)_i\big)[-1,1]$

$= \mathbf{b}_i + \Big(\sum_{j\ne i} a^\Delta_{ij} u_j - (\langle\mathbf{a}_{ii}\rangle - 1/d_i)\,u_i\Big)[-1,1]$

$\subseteq \mathbf{b}_i + \Big(\sum_{j\ne i} a^\Delta_{ij}\,\mathrm{mag}(\mathbf{x}_j) - \gamma_i u_i\Big)[-1,1]$

$= \mathbf{b}_i - \sum_{j\ne i} \mathbf{a}_{ij}\mathbf{x}_j + [\gamma_i, -\gamma_i]\,u_i.$

For γi = 0, equation (2.1) reduces to the interval Gauss–Seidel operator. Now we suppose that 0 < γi < αi. Defining $\mathbf{v}_i := \mathbf{b}_i - \sum_{j \ne i} a^\Delta_{ij} u_j [-1, 1]$, we have to show the inclusion

$\frac{\mathbf{v}_i + \alpha_i u_i [1, -1]}{\mathbf{a}_{ii} + \alpha_i [-1, 1]} \subseteq \frac{\mathbf{v}_i + \gamma_i u_i [1, -1]}{\mathbf{a}_{ii} + \gamma_i [-1, 1]}.$

We show it by comparing the left endpoints only; the right endpoints are compared accordingly. We distinguish three cases:

(1) Let $\underline{v}_i + \gamma_i u_i \ge 0$. Then we want to show that

$\frac{\underline{v}_i + \gamma_i u_i}{\overline{a}_{ii} + \gamma_i} \le \frac{\underline{v}_i + \alpha_i u_i}{\overline{a}_{ii} + \alpha_i},$

which is simplified to

$\underline{v}_i(\alpha_i - \gamma_i) \le \overline{a}_{ii} u_i (\alpha_i - \gamma_i)$

or

$\underline{v}_i \le \overline{a}_{ii} u_i.$

This is always true since for any x ∈ Σ and the corresponding A ∈ A and b ∈ b we have

$0 = (Ax - b)_i = \sum_{j=1}^{n} a_{ij}x_j - b_i \le \sum_{j=1}^{n} \overline{a}_{ij} u_j - \underline{b}_i = \overline{a}_{ii} u_i - \underline{v}_i.$


(2) Let $\underline{v}_i + \gamma_i u_i < 0$ and $\underline{v}_i + \alpha_i u_i \ge 0$. Then the statement is obvious.

(3) Let $\underline{v}_i + \alpha_i u_i < 0$. Then we want to show that

$\frac{\underline{v}_i + \gamma_i u_i}{\underline{a}_{ii} - \gamma_i} \le \frac{\underline{v}_i + \alpha_i u_i}{\underline{a}_{ii} - \alpha_i}.$

Simplifying, we obtain

$-\underline{v}_i(\alpha_i - \gamma_i) \le \underline{a}_{ii} u_i (\alpha_i - \gamma_i)$

or

$-\underline{v}_i \le \underline{a}_{ii} u_i,$

which holds true.

The proposed operator is based on the inclusion (2.1) and reads

for i = 1 to n do: $\quad \mathbf{x}_i := \frac{\mathbf{b}_i - \sum_{j \ne i}\mathbf{a}_{ij}\mathbf{x}_j + [\gamma_i, -\gamma_i]u_i}{\mathbf{a}_{ii} + \gamma_i[-1, 1]}. \qquad (2.2)$

This is the sequential form as in the Gauss–Seidel iterations. We can also formulate a Jacobi-like form with independent evaluations for each coordinate. Denote by D′ the diagonal matrix with entries (a11 + γ1[−1, 1])−1, . . . , (ann + γn[−1, 1])−1, and by b′ the interval vector with entries b1 + [γ1, −γ1]u1, . . . , bn + [γn, −γn]un. Then the second version of the operator becomes

$\mathbf{x} \mapsto \mathbf{D}'(\mathbf{b}' - \mathbf{A}'\mathbf{x}),$

where A′ is the same as in the Jacobi operator. For simplicity of exposition, we will work with the former formulation of the operator.

Obviously, for γ = 0 we get the interval Gauss–Seidel operator, so our operator can be viewed as its generalization. The proof also shows that the best choice for γ is γ = α. In order to make the operator applicable, we have to compute u and d, or some lower bounds for them. Notice that replacing the exact values of u and d by lower bounds causes a slight overestimation, and one gets a superset on the right-hand sides of (2.1) and (2.2). Thus, the operator will still satisfy the fundamental property (1.3).

The tighter the bounds on u and d, the better; however, if we spend too much time calculating almost exact u and d, then it makes no sense to use the operator when we can call the Ning–Kearfott formula directly. So, it is preferable to derive cheap and possibly tight lower bounds on u and d. We suggest the following ones.

Proposition 2.2. We have

$u \ge \mathrm{mag}(\mathbf{b}) + A^\Delta\big(\mathrm{mag}(\mathbf{b}) + A^\Delta\,\mathrm{mag}(\mathbf{b})\big),$

$d_i \ge \underline{d}_i := \overline{a}_{ii}\,/\,\big(1 - ((A^\Delta)^2)_{ii}\big), \quad i = 1, \dots, n.$

Proof. The first part follows from

$u = \langle\mathbf{A}\rangle^{-1}\mathrm{mag}(\mathbf{b}) = (I_n - A^\Delta)^{-1}\mathrm{mag}(\mathbf{b}) = \Big(\sum_{k=0}^{\infty} (A^\Delta)^k\Big)\mathrm{mag}(\mathbf{b})$

$\ge \big(I_n + A^\Delta + (A^\Delta)^2\big)\,\mathrm{mag}(\mathbf{b}) = \mathrm{mag}(\mathbf{b}) + A^\Delta\big(\mathrm{mag}(\mathbf{b}) + A^\Delta\,\mathrm{mag}(\mathbf{b})\big).$


The second part follows from

$d = \mathrm{diag}\big(\langle\mathbf{A}\rangle^{-1}\big) = \mathrm{diag}\Big(\sum_{k=0}^{\infty}(A^\Delta)^k\Big),$

whence

$d_i = \sum_{k=0}^{\infty} \big((A^\Delta)^k\big)_{ii}$

$\ge \overline{a}_{ii} + ((A^\Delta)^2)_{ii}\big(1 + a^\Delta_{ii} + ((A^\Delta)^2)_{ii} + ((A^\Delta)^2)_{ii}\,a^\Delta_{ii} + ((A^\Delta)^2)_{ii}^2 + \cdots\big)$

$= \overline{a}_{ii} + (1 + a^\Delta_{ii})\,((A^\Delta)^2)_{ii}\big(1 + ((A^\Delta)^2)_{ii} + ((A^\Delta)^2)_{ii}^2 + \cdots\big)$

$= \overline{a}_{ii} + \overline{a}_{ii}\,((A^\Delta)^2)_{ii}\,\frac{1}{1 - ((A^\Delta)^2)_{ii}} = \frac{\overline{a}_{ii}}{1 - ((A^\Delta)^2)_{ii}}.$

Notice that both bounds require computational time O(n2). In particular, thediagonal of (AΔ)2 is computable in square time, but the exact diagonal of (AΔ)3

would be too costly. The following result shows that the above estimation of d is tightenough to ensure that γ ≥ 0. Notice that this would not be satisfied in general if weused the simpler estimation d ≥ diag(A + (AΔ)2).
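A minimal sketch of the two bounds of Proposition 2.2 in Python/numpy (assuming Ac = In and the spectral condition ((AΔ)²)ii < 1; plain floating point, so a rigorous code would round outward):

    import numpy as np

    def lower_bounds(Delta, bmag):
        # u >= mag(b) + A^Delta (mag(b) + A^Delta mag(b)): two matrix-vector products
        u_low = bmag + Delta @ (bmag + Delta @ bmag)
        diag2 = np.einsum('ij,ji->i', Delta, Delta)     # diag((A^Delta)^2) in O(n^2)
        d_low = (1.0 + np.diag(Delta)) / (1.0 - diag2)  # upper bound of a_ii is 1 + Delta_ii
        gamma = (1.0 - np.diag(Delta)) - 1.0 / d_low    # gamma_i = <a_ii> - 1/d_low_i >= 0
        return u_low, d_low, gamma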

Proposition 2.3. We have γi := 〈aii〉 − 1/d̲i ≥ 0, i = 1, …, n.

Proof. We can write

γi = 〈aii〉 − 1/d̲i = 〈aii〉 − (1 − ((AΔ)²)ii)/aii ≥ 〈aii〉 − (1 − (aΔii)²)/(1 + aΔii)
   = 1 − aΔii − (1 − aΔii) = 0.

2.1. Comparison to the interval Gauss–Seidel method. Since our operator is a generalization of the interval Gauss–Seidel iteration, it is natural to compare them. Let x be an enclosure of Σ, let i ∈ {1, …, n}, and denote by u̲ a lower bound estimation on u. We compare the results of our operator and the interval Gauss–Seidel operator, that is,

(bi − ∑_{j≠i} aijxj + [γi, −γi]u̲i)/(aii + γi[−1, 1])   and   (bi − ∑_{j≠i} aijxj)/aii.

If γi = 0, then both intervals coincide, so let us assume that γi > 0. Denote vi := bi − ∑_{j≠i} aijxj. We compare the left endpoints of the intervals

(vi + [γi, −γi]u̲i)/(aii + γi[−1, 1])   and   vi/aii;

the right endpoints are compared accordingly. We distinguish three cases:

(1) Let vi ≥ 0. Then we want to show that

vi/aii ≤ (vi + γiu̲i)/(aii + γi).

This is simplified to

viγi ≤ aiiu̲iγi


or

vi ≤ aiiu̲i.

If u̲i = ui, or u̲i is not far from ui, then the inequality holds true.

(2) Let vi < 0 and vi + γiu̲i ≥ 0. Then the inequality is obviously satisfied.

(3) Let vi + γiu̲i < 0. Then we want to show that

vi/aii ≤ (vi + γiu̲i)/(aii − γi).

This is simplified to

−viγi ≤ aiiu̲iγi

or

−vi ≤ aiiu̲i.

This is true provided that both u̲i and x are sufficiently good approximations of ui and Σ, respectively.

The above discussion indicates that our operator with γi > 0 is effective only if x is sufficiently tight, and then the reduction of the enclosure is valid from the smaller side (in the absolute value sense) only. Since aij, i ≠ j, are symmetric intervals, the reduction in the smaller sides of the xi's makes no improvement in the next iterations. The only influence is by the size of mag(x), since

∑_{j≠i} aijxj = ∑_{j≠i} aij mag(x)j.

Therefore, the following incorporation of our operator seems the most effective: Compute x ⊇ Σ by the interval Gauss–Seidel method, and then call one iteration of our operator.
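A possible driver for this scheme, reusing operator_sweep and lower_bounds from the sketches above (the function name and the stopping rule are our illustrative assumptions, not the paper's code):

    import numpy as np

    def gs_then_operator(Delta, blo, bhi, xlo, xhi, tol=1e-2):
        zero = np.zeros_like(blo)
        while True:                                   # plain interval Gauss-Seidel phase
            nlo, nhi = operator_sweep(Delta, blo, bhi, xlo, xhi, zero, zero)
            done = np.max(np.maximum(nlo - xlo, xhi - nhi)) < tol
            xlo, xhi = nlo, nhi
            if done:
                break
        bmag = np.maximum(np.abs(blo), np.abs(bhi))
        u_low, _, gamma = lower_bounds(Delta, bmag)   # Proposition 2.2
        return operator_sweep(Delta, blo, bhi, xlo, xhi, u_low, gamma)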

Example 2.4. Let

A = ( −[8, 10]   [3, 5]    [8, 10]
      −[5, 7]    [0, 2]   −[6, 8]
       [4, 6]    [7, 9]   −[5, 7] ),    b = ( [3, 5]
                                              [6, 8]
                                              [5, 7] ),

and consider the interval linear system Ax = b, A ∈ A, b ∈ b, preconditioned by the numerically computed inverse of Ac.

As in the subsequent examples, the computations were done in MATLAB. Interval arithmetics and some basic interval functions were provided by the interval toolbox INTLAB v6 [18]. The resulting intervals displayed below are exact within outward rounding to four digits.

We start with the initial enclosure computed by (1.2),

x0 = 1.8065 ([−1, 1], [−1, 1], [−1, 1])T.

The interval Gauss–Seidel method then is terminated after four iterations, yielding the enclosure

x1 = ([−1.2820, 0.0174], [0.1847, 1.5641], [−1.0822, 0.0889])T.


This is not yet equal to the limit enclosure

xGS = ([−1.2813, 0.0167], [0.1849, 1.5637], [−1.0821, 0.0887])T,

due to the finite number of iterations; we terminate the iterations when the absolute improvement in each coordinate, from both above and from below, is less than min_{i,j}(aΔij)/100 = 0.01. By Proposition 2.2, we obtain the following lower bounds:

u ≥ (1.1633, 1.4367, 0.9788)T,
d ≥ (1.2343, 1.2536, 1.2030)T,

whence we calculate

γ := (0.0387, 0.0396, 0.0366)T.

These values are quite conservative, since the optimal choice would be γ = α, where

α = (0.0632, 0.0643, 0.0604)T.

Nevertheless, the computed γ is sufficient to reduce the overestimation of x1. One iteration of our operator results in the tighter enclosure

x2 = ([−1.2820, −0.0258], [0.2261, 1.5641], [−1.0822, 0.0497])T.

For completeness, notice that the interval hull of the preconditioned system is

Σ = ([−1.2813, −0.0549], [0.2571, 1.5637], [−1.0821, 0.0144])T.

3. Magnitude method. Property (1.4) and the analysis at the end of section 2.1 motivate us to compute an enclosure of Σ along the following lines. First, we compute the magnitude of Σ, that is, u = 〈A〉⁻¹ mag(b), and then we apply one iteration of the presented operator on the initial box x = [−u, u], producing

(bi − ∑_{j≠i} aijuj + [γi, −γi]ui)/(aii + γi[−1, 1]),  i = 1, …, n.

Herein, the lower bound on d is computed by Proposition 2.2. In view of the proof of Theorem 2.1, we can express the result equivalently as (1.1), but in that formula an upper bound on d is required, so we do not consider it here. Instead, we reformulate it in the slightly simpler form omitting improper intervals:

(bi + (∑_{j≠i} aΔij uj − γiui)[−1, 1])/(aii + γi[−1, 1]),  i = 1, …, n.

Algorithm 3.1 gives a detailed and numerically reliable description of the method.

Algorithm 3.1.
1. Compute u, an enclosure for the solution of 〈A〉u = mag(b).
2. Calculate d̲, a lower bound on d, by Proposition 2.2.
3. Evaluate

x∗i := (bi + (∑_{j≠i} aΔij uj − γiui)[−1, 1])/(aii + γi[−1, 1]),  i = 1, …, n,

where γi := 〈aii〉 − 1/d̲i.
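The whole algorithm is easy to prototype; the sketch below follows steps 1–3 for a preconditioned system with Ac = In, except that step 1 solves 〈A〉u = mag(b) by plain floating-point elimination instead of a verified method, so the result is an approximation rather than a guaranteed enclosure:

    import numpy as np

    def magnitude_method(Delta, blo, bhi):
        n = len(blo)
        bmag = np.maximum(np.abs(blo), np.abs(bhi))
        u = np.linalg.solve(np.eye(n) - Delta, bmag)     # step 1: <A> u = mag(b)
        diag2 = np.einsum('ij,ji->i', Delta, Delta)
        d_low = (1.0 + np.diag(Delta)) / (1.0 - diag2)   # step 2: Proposition 2.2
        gamma = (1.0 - np.diag(Delta)) - 1.0 / d_low
        s = Delta @ u - np.diag(Delta) * u - gamma * u   # sum_{j!=i} Delta_ij u_j - gamma_i u_i
        dlo = 1.0 - np.diag(Delta) - gamma               # denominator a_ii + gamma_i [-1, 1]
        dhi = 1.0 + np.diag(Delta) + gamma
        nlo, nhi = blo - s, bhi + s                      # numerator of step 3
        xlo = np.where(nlo >= 0, nlo / dhi, nlo / dlo)
        xhi = np.where(nhi >= 0, nhi / dlo, nhi / dhi)
        return xlo, xhi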


3.1. Properties. First, notice that the computations of u and d̲ in steps 1 and 2 are independent, so they may be parallelized.

Now, let us compare the magnitude method with the Hansen–Bliek–Rohn method and the interval Gauss–Seidel method. The propositions below show that the magnitude method is superior to the interval Gauss–Seidel method, and that it gives the best possible enclosure as long as u and d are determined exactly. Since u is computed tightly, the possible deficiency is caused only by an underestimation of d.

Proposition 3.2. If u and d are calculated exactly, then x∗ = Σ.

Proof. This follows from the proof of Theorem 2.1.

Proposition 3.3. We have x∗ ⊆ xGS. If γ = 0, then equality holds.

Proof. Let i ∈ {1, …, n}, and without loss of generality assume that Σci ≥ 0. Then

x∗i = (bi − ∑_{j≠i} aijuj + [γi, −γi]ui)/(aii + γi[−1, 1]),
xGSi = (bi − (A′[−u, u])i)/aii = (bi − ∑_{j≠i} aijuj)/aii.

Denoting vi := bi − ∑_{j≠i} aijuj, we can rewrite the above expressions as

x∗i = (vi + [γi, −γi]ui)/(aii + γi[−1, 1]),
xGSi = vi/aii.

By the assumption, the right endpoints of both x∗i and xGSi are equal to ui, so we have to compare the left endpoints of x∗i and xGSi only. We distinguish three cases:

(1) Let vi ≥ 0. Then we want to show that

vi/aii ≤ (vi + γiui)/(aii + γi).

This is simplified to

viγi ≤ aiiuiγi.

If γi = 0, then the above inequality holds as an equation; otherwise, for any γi > 0, it is true as well.

(2) Let vi < 0 and vi + γiui ≥ 0. Then the statement is obvious.

(3) Let vi + γiui < 0. Then we want to show that

vi/aii ≤ (vi + γiui)/(aii − γi).

This is simplified to

−viγi ≤ aiiuiγi,

which is true for any γi ≥ 0, too.


Fig. 3.1. Example 3.4: The solution set in gray, the preconditioned system in light gray, and three enclosures for verifylss, the interval Gauss–Seidel method, and the magnitude method (from largest to smallest).

3.2. Numerical examples.

Example 3.4. Consider the interval linear system Ax = b, A ∈ A, b ∈ b, with

A = ( −[2, 4]   [8, 10]
       [2, 4]   [4, 6] ),    b = ( −[4, 6]
                                   −[8, 10] ).

Figure 3.1 depicts the solution set of Ax = b, A ∈ A, b ∈ b, in darker gray, and the system preconditioned by (Ac)⁻¹ in light gray. We compare three methods for enclosing the solution set. The function verifylss from the package INTLAB [18] yields the enclosure

x1 = ([−3.4985, 0.8318], [−1.9279, −0.0721])T.

The interval Gauss–Seidel method initiated with the enclosure x0 = ([−5, 5], [−5, 5])T gives in four iterations a tighter enclosure than verifylss,

x2 = ([−3.4555, −0.2722], [−1.9093, −0.3180])T,

but it requires almost double the computational time. In contrast, our magnitude method produces a slightly tighter enclosure,

x∗ = ([−3.4546, −0.3557], [−1.9091, −0.3741])T,

with less computational effort than the other methods. The enclosure is also very close to the optimal one (for the preconditioned system),

Σ = ([−3.4546, −0.3999], [−1.9091, −0.4117])T.

Enclosures x1, x2, x∗ are illustrated in Figure 3.1 in a nested way.

In the example below, we present a limited computational study.

Example 3.5. We considered randomly generated examples for various dimensions and interval radii. The entries of Ac and bc were generated randomly in [−10, 10] with uniform distribution. All radii of A and b were equal to the parameter δ > 0.


Table 3.1
Example 3.5: Computational time for randomly generated data.

  n     δ        verifylss   Gauss–Seidel   Magnitude   Magnitude (γ = 0)
  5     1         3.2903      0.10987       0.004466     0.003429
  5     0.1       0.004234    0.02937       0.004513     0.003502
  5     0.01      0.002342    0.02500       0.004473     0.003456
 10     0.1       0.018845    0.08370       0.004877     0.003777
 10     0.01      0.003161    0.05305       0.004821     0.003799
 15     0.1       0.246779    0.21868       0.005212     0.004162
 15     0.01      0.005403    0.09163       0.005260     0.004172
 20     0.1      16.9678      0.95238       0.005554     0.004251
 20     0.01      0.008950    0.15602       0.005736     0.004622
 30     0.01      0.019111    0.32294       0.006457     0.005289
 30     0.001     0.004488    0.19544       0.006460     0.005260
 50     0.01      0.210430    1.01155       0.008483     0.007062
 50     0.001     0.010190    0.54813       0.008343     0.006879
100     0.001     0.044463    2.42025       0.016706     0.014645
100     0.0001    0.013940    1.48693       0.017089     0.014847

Table 3.2
Example 3.5: Tightness of enclosures for randomly generated data.

  n     δ        verifylss   Gauss–Seidel   Magnitude   Magnitude (γ = 0)
  5     1         1.1520      1.1510        1.09548      1.1196
  5     0.1       1.08302     1.01645       1.00591      1.0164
  5     0.01      1.01755     1.00148       1.00037      1.00148
 10     0.1       1.07756     1.02495       1.01107      1.02474
 10     0.01      1.02362     1.00378       1.00132      1.00378
 15     0.1       1.06994     1.03121       1.01755      1.03074
 15     0.01      1.02125     1.00217       1.00047      1.00216
 20     0.1       1.05524     1.03076       1.02007      1.02989
 20     0.01      1.02643     1.00348       1.00097      1.00348
 30     0.01      1.02539     1.00402       1.00129      1.00401
 30     0.001     1.00574     1.00026       1.000039     1.000256
 50     0.01      1.02688     1.00533       1.00226      1.00531
 50     0.001     1.00902     1.00051       1.00011      1.00051
100     0.001     1.01303     1.00057       1.00013      1.00057
100     0.0001    1.0024988   1.0000274     1.0000022    1.0000274

The computations were carried out in MATLAB 7.11.0.584 (R2010b) on a six-processor machine AMD Phenom(tm) II X6 1090T Processor, CPU 800 MHz, with 15579 MB RAM.

We compared four methods with respect to computational time and tightness of the resulting enclosures, namely, the verifylss function from INTLAB, the interval Gauss–Seidel method, the proposed magnitude method (Algorithm 3.1), and eventually the magnitude method with γ = 0. The last one yields the limit Gauss–Seidel enclosure, and it is faster than the magnitude method since we need not compute a lower bound on d.

Table 3.1 shows the running times in seconds, and Table 3.2 shows the tightness for the same data. Each record is an average of 100 runs. The tightness was measured by the sum of the resulting interval radii with respect to the optimal interval hull Σ


computed by the Ning–Kearfott formula (1.1). Precisely, we display

(∑_{i=1}^n xΔi) / (∑_{i=1}^n ΣΔi),

where x is the calculated enclosure. Thus, the closer to 1, the sharper the enclosure.

The results of our experiments show that the magnitude method with γ = 0 saves some time (about 10% to 20%), but the loss in tightness may be larger. Compared to the interval Gauss–Seidel method, the magnitude method wins significantly both in time and in tightness. Compared to verifylss, our approach produces tighter enclosures. Provided the interval entries of the equation system are wide, the magnitude method is also cheaper; for narrow enough intervals, the situation changes and verifylss needs less computational effort.
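For reference, the tightness statistic of the table above is straightforward to reproduce once an enclosure and the hull are available (a sketch operating on numpy arrays; the hull would be obtained, e.g., by the Ning–Kearfott formula):

    def tightness(xlo, xhi, hull_lo, hull_hi):
        # ratio of summed radii of the enclosure to those of the hull;
        # values closer to 1 mean sharper enclosures (cf. Table 3.2)
        return (xhi - xlo).sum() / (hull_hi - hull_lo).sum()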

For both variants of the magnitude method, we used verifylss for computing a verified enclosure of u = 〈A〉⁻¹ mag(b) (step 1 of Algorithm 3.1). So it might seem curious that (for wide input intervals) verifylss beats itself.

4. Conclusion. We have proposed a new operator for tightening solution set enclosures of interval linear equations. Based on this operator and on a property of limit enclosures of classical methods, we came up with a new algorithm, called the magnitude method. It provably always outperforms the interval Gauss–Seidel method with respect to the quality of approximation. Numerical experiments indicate that it is efficient in both computational time and tightness of enclosures, particularly for wide interval entries.

In future research, we would like to extend our approach to parametric interval systems. Also, overcoming the assumption Ac = In and considering nonpreconditioned systems is a challenging problem. Very recently, a new version of INTLAB was released (unfortunately, no longer free of charge), so numerical studies utilizing enhanced INTLAB functions would be of interest, too.

REFERENCES

[1] O. Beaumont, Solving interval linear systems with linear programming techniques, Linear Algebra Appl., 281 (1998), pp. 293–309.
[2] M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, and K. Zimmermann, Linear Optimization Problems with Inexact Data, Springer, New York, 2006.
[3] E. Hansen and S. Sengupta, Bounding solutions of systems of equations using interval analysis, BIT, 21 (1981), pp. 203–211.
[4] E. R. Hansen, Bounding the solution of interval linear equations, SIAM J. Numer. Anal., 29 (1992), pp. 1493–1503.
[5] M. Hladík, Enclosures for the solution set of parametric interval linear systems, Int. J. Appl. Math. Comput. Sci., 22 (2012), pp. 561–574.
[6] C. Jansson, Calculation of exact bounds for the solution set of linear interval systems, Linear Algebra Appl., 251 (1997), pp. 321–340.
[7] E. Kaucher, Interval analysis in the extended interval space IR, in Fundamentals of Numerical Computation (Computer-Oriented Numerical Analysis), Comput. Suppl. 2, Springer, Vienna, 1980, pp. 33–49.
[8] R. E. Moore, R. B. Kearfott, and M. J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.
[9] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, UK, 1990.
[10] A. Neumaier, A simple derivation of the Hansen–Bliek–Rohn–Ning–Kearfott enclosure for linear interval equations, Reliab. Comput., 5 (1999), pp. 131–136.
[11] S. Ning and R. B. Kearfott, A comparison of some methods for solving linear interval equations, SIAM J. Numer. Anal., 34 (1997), pp. 1289–1305.


[12] E. D. Popova, Explicit description of AE solution sets for parametric linear systems, SIAM J. Matrix Anal. Appl., 33 (2012), pp. 1172–1189.
[13] E. D. Popova and M. Hladík, Outer enclosures to the parametric AE solution set, Soft Comput., 17 (2013), pp. 1403–1414.
[14] G. Rex and J. Rohn, Sufficient conditions for regularity and singularity of interval matrices, SIAM J. Matrix Anal. Appl., 20 (1998), pp. 437–445.
[15] J. Rohn, Systems of linear interval equations, Linear Algebra Appl., 126 (1989), pp. 39–78.
[16] J. Rohn, Cheap and tight bounds: The recent result by E. Hansen can be made more efficient, Interval Comput., 1993 (1993), pp. 13–21.
[17] J. Rohn and G. Rex, Enclosing solutions of linear equations, SIAM J. Numer. Anal., 35 (1998), pp. 524–539.
[18] S. M. Rump, INTLAB—INTerval LABoratory, in Developments in Reliable Computing, T. Csendes, ed., Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999, pp. 77–104.
[19] S. M. Rump, Verification methods: Rigorous results using floating-point arithmetic, Acta Numer., 19 (2010), pp. 287–449.
[20] S. P. Shary, On optimal solution of interval linear equations, SIAM J. Numer. Anal., 32 (1995), pp. 610–630.
[21] S. P. Shary, A new technique in systems analysis under interval uncertainty and ambiguity, Reliab. Comput., 8 (2002), pp. 321–418.


Solution Set Characterization of Linear Interval Systems with a Specific Dependence Structure

MILAN HLADIK
Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics, Malostranske nam. 25, 118 00, Prague, Czech Republic, e-mail: [email protected]

(Received: 29 June 2006; accepted: 11 December 2006)

Abstract. This is a contribution to the solvability of linear interval equations and inequalities. In interval analysis we usually suppose that values from different intervals are mutually independent. This assumption can sometimes be too restrictive. In this article we derive extensions of the Oettli–Prager theorem and the Gerlach theorem to the case where there is a simple dependence structure between the coefficients of an interval system. The dependence is given by equality of two submatrices of the constraint matrix.

1. Introduction

Coefficients and right-hand sides of systems of linear equalities and inequalities are rarely known exactly. In interval analysis we suppose that these values vary in some real intervals independently. But in practical applications (for instance, the electrical circuit problem [5], [7]) they are sometimes related. The general case of parametric dependencies has been considered, e.g., in [6], [9], where various algorithms for finding inner and outer solutions were proposed. Linear interval systems with more specific dependencies were studied, e.g., in [1], [2], where basic characteristics (shape, enclosures, etc.) were derived, especially for the cases where the constraint matrix is supposed to be symmetric or skew-symmetric. But no explicit condition such as the Oettli–Prager theorem [8] (for linear interval equations) or the Gerlach theorem [4] (for linear interval inequalities) has ever appeared.

In this paper we focus on weak solvability of linear interval systems with a simple dependence structure and derive explicit (generally nonlinear) conditions for such solvability.

Let us introduce some notation. The ith row of a matrix A is denoted by Ai,·, the jth column by A·,j. The vector e = (1, …, 1)T is the vector of all ones. An interval matrix is defined as

AI = [A, A] = {A ∈ Rm×n | A ≤ A ≤ A},

where A ≤ A are fixed matrices. By

Ac ≡ (1/2)(A + A),  AΔ ≡ (1/2)(A − A)

Reliable Computing (2007) 13:361–374, DOI: 10.1007/s11155-007-9033-x © Springer 2007


we denote the midpoint and the radius of AI, respectively. The interval matrix addition and subtraction are defined as follows:

AI + BI = [A + B, A + B],
AI − BI = [A − B, A − B].

A vector x ∈ Rn is called a weak solution of a linear interval system AIx = bI if Ax = b holds for some A ∈ AI, b ∈ bI. Analogously we define the term weak solution for other types of interval systems (cf. [3]).

2. Generalization of the Oettli–Prager Theorem

In this section we generalize the Oettli–Prager [8] characterization of weak solutions of linear interval equations to the case where there is a specific dependence between some coefficients of the constraint matrix.

LEMMA 2.1. Given s1, s2, pi, qi ∈ R, i = 1, …, n, let us denote the function ƒ(u1, u2) ≡ s1u1 + s2u2 + ∑_{i=1}^n |piu1 + qiu2|. Then the problem

min {ƒ(u1, u2); (u1, u2) ∈ R²} (2.1)

has an optimal solution (equal to zero) if and only if

∑_{i=1}^n |qi| ≥ |s2|,   ∑_{i=1}^n |qkpi − qipk| ≥ |qks1 − pks2| ∀k = 1, …, n

holds.

Proof. The objective function ƒ(u1, u2) is positive homogeneous, and hence the problem (2.1) has an optimal solution iff ƒ(u1, u2) ≥ 0 holds for u1 = ±1, u2 ∈ R and for u1 = 0, u2 = ±1. Let us consider the following cases:

(i) Let u1 = 1. Then the function ƒ(1, u2) = s1 + s2u2 + ∑_{i=1}^n |pi + qiu2| of one parameter represents a broken line. It is sufficient to check nonnegativity of this function in the breaks and nonnegativity of the limits in ±∞. The breaks are −pk/qk, qk ≠ 0, k = 1, …, n. Hence we derive

∀k = 1, …, n, qk ≠ 0 :  s1 − (pk/qk)s2 + ∑_{i=1}^n |pi − (pk/qk)qi| ≥ 0. (2.2)


For limu2→∞ ƒ(1, u2) ≥ 0 to hold, the inequality ∑_{i=1}^n |qi| ≥ −s2 must be satisfied, and for limu2→−∞ ƒ(1, u2) ≥ 0, we need ∑_{i=1}^n |qi| ≥ s2. We obtain the next condition

∑_{i=1}^n |qi| ≥ |s2|. (2.3)

(ii) Let u1 = −1. Then, analogously as in the first paragraph, we obtain for the function ƒ(−1, u2) = −s1 + s2u2 + ∑_{i=1}^n |−pi + qiu2| the conditions

∀k = 1, …, n, qk ≠ 0 :  −s1 + (pk/qk)s2 + ∑_{i=1}^n |−pi + (pk/qk)qi| ≥ 0, (2.4)

∑_{i=1}^n |qi| ≥ |s2|. (2.5)

All the conditions (2.2), (2.4) can be written as one:

∀k = 1, …, n :  ∑_{i=1}^n |piqk − pkqi| ≥ |s1qk − pks2|. (2.6)

The assumption qk ≠ 0 is not necessary, for in the case qk = 0 the inequality (2.6) is included in (2.3).

(iii) Let u1 = 0. Then the condition ƒ(0, ±1) ≥ 0 is included in the condition (2.3). ∎
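The equivalence of Lemma 2.1 can be cross-checked numerically; the sketch below (Python/numpy, names ours) compares the explicit conditions with a sampled nonnegativity test: by positive homogeneity, ƒ ≥ 0 on R² iff ƒ ≥ 0 on the unit circle, which we only sample, so the test is approximate near the boundary.

    import numpy as np

    def lemma_conditions(s1, s2, p, q):
        # the two families of inequalities from Lemma 2.1
        c1 = np.abs(q).sum() >= abs(s2)
        c2 = all(np.abs(q[k] * p - p[k] * q).sum() >= abs(q[k] * s1 - p[k] * s2)
                 for k in range(len(p)))
        return c1 and c2

    def f_nonnegative(s1, s2, p, q, samples=3600):
        t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
        u1, u2 = np.cos(t), np.sin(t)
        f = s1 * u1 + s2 * u2 + np.abs(np.outer(p, u1) + np.outer(q, u2)).sum(axis=0)
        return bool((f >= -1e-9).all())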

THEOREM 2.1. Let AI ⊂ Rm×n, BI, CI ⊂ Rm×h, bI, cI ⊂ Rm. Then for certain A ∈ AI, B ∈ BI, C ∈ CI, b ∈ bI, c ∈ cI, vectors x, y ∈ Rn, z ∈ Rh form a solution of the system

Ax + Bz = b, (2.7)
Ay + Cz = c (2.8)

if and only if they satisfy the following system of inequalities:

AΔ|x| + BΔ|z| + bΔ ≥ |r1|, (2.9)
AΔ|y| + CΔ|z| + cΔ ≥ |r2|, (2.10)
BΔ|z||y|T + CΔ|z||x|T + bΔ|y|T + cΔ|x|T + AΔ|xyT − yxT| ≥ |r1yT − r2xT|, (2.11)

where r1 ≡ −Acx − Bcz + bc, r2 ≡ −Acy − Ccz + cc.


Proof. Denote aI ≡ AIl,·, bI ≡ BIl,·, cI ≡ CIl,·, βI ≡ bIl, γI ≡ cIl. Consider the lth equations in the systems (2.7)–(2.8) and denote them by

ax + bz = β,  ay + cz = γ, (2.12)

where a ∈ aI, b ∈ bI, c ∈ cI, β ∈ βI, γ ∈ γI. Suppose that the vector a ∈ aI in demand has the ith component in the form ai ≡ aci + αiaΔi for αi ∈ 〈−1, 1〉. The condition (2.12) holds iff for a certain α ∈ 〈−1, 1〉n the relations

acx + ∑_{i=1}^n αiaΔi xi + bcz ∈ 〈βc − bΔ|z| − βΔ, βc + bΔ|z| + βΔ〉,
acy + ∑_{i=1}^n αiaΔi yi + ccz ∈ 〈γc − cΔ|z| − γΔ, γc + cΔ|z| + γΔ〉

hold. Equivalently, iff the following problem

max {0Tα;  −∑_{i=1}^n αiaΔi xi ≤ −r1 + β1,  ∑_{i=1}^n αiaΔi xi ≤ r1 + β1,
           −∑_{i=1}^n αiaΔi yi ≤ −r2 + β2,  ∑_{i=1}^n αiaΔi yi ≤ r2 + β2,  α ≤ e, −α ≤ e}

has an optimal solution for r1 ≡ −acx − bcz + βc, r2 ≡ −acy − ccz + γc, β1 ≡ βΔ + bΔ|z|, β2 ≡ γΔ + cΔ|z|. From duality theory in linear programming, this problem has an optimal solution iff the problem

min {(−r1 + β1)u1 + (r1 + β1)u2 + (−r2 + β2)u3 + (r2 + β2)u4 + ∑_{i=1}^n (vi + wi);
     −aΔi xiu1 + aΔi xiu2 − aΔi yiu3 + aΔi yiu4 + vi − wi = 0 ∀i = 1, …, n,
     u1, u2, u3, u4, vi, wi ≥ 0 ∀i = 1, …, n}

has an optimal solution. After the substitution u1′ ≡ u2 − u1, u3′ ≡ u4 − u3, we can rewrite this problem as

min {(r1 + β1)u1′ + 2β1u1 + (r2 + β2)u3′ + 2β2u3 + ∑_{i=1}^n (vi + wi);
     aΔi xiu1′ + aΔi yiu3′ + vi − wi = 0 ∀i = 1, …, n,
     u1 ≥ −u1′, u3 ≥ −u3′, u1, u3, vi, wi ≥ 0 ∀i = 1, …, n}.

For optimal vi, wi, u1, u3 we have u1 = (−u1′)+, u3 = (−u3′)+, and vi + wi = |aΔi xiu1′ + aΔi yiu3′| (since one of vi, wi is equal to zero). Hence the problem can be reformulated as

min {(r1 + β1)u1′ + 2β1(−u1′)+ + (r2 + β2)u3′ + 2β2(−u3′)+ + ∑_{i=1}^n |aΔi xiu1′ + aΔi yiu3′|;  u1′, u3′ ∈ R}.

The positive part of a real number p is equal to p+ = (p + |p|)/2, and the problem comes in the form

min {r1u1′ + β1|u1′| + r2u3′ + β2|u3′| + ∑_{i=1}^n |aΔi xiu1′ + aΔi yiu3′|;  u1′, u3′ ∈ R}. (2.13)

Now we use Lemma 2.1 with u1 replaced by u1′, u2 replaced by u3′, n by n + 2, and next s1 ≡ r1, s2 ≡ r2, pi ≡ aΔi xi (i = 1, …, n), pn+1 ≡ β1, pn+2 ≡ 0, qi ≡ aΔi yi (i = 1, …, n), qn+1 ≡ 0, qn+2 ≡ β2. Hence the problem (2.13) has an optimum iff

∑_{i=1}^n aΔi |xi| + β1 ≥ |r1|,
∑_{i=1}^n aΔi |yi| + β2 ≥ |r2|,
β1|yk| + β2|xk| + ∑_{i=1}^n aΔi |ykxi − xkyi| ≥ |ykr1 − xkr2| ∀k = 1, …, n

holds, or, equivalently, iff

aΔ|x| + bΔ|z| + βΔ ≥ |r1|,
aΔ|y| + cΔ|z| + γΔ ≥ |r2|,
bΔ|z||y|T + βΔ|y|T + cΔ|z||x|T + γΔ|x|T + aΔ|xyT − yxT| ≥ |r1yT − r2xT|

holds. These inequalities represent the lth inequalities from the systems (2.9)–(2.11), which proves the statement. ∎
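The characterization of Theorem 2.1 is straightforward to test for concrete data; a sketch in Python/numpy (midpoints carry the suffix c, radii the suffix d; all names are ours):

    import numpy as np

    def dependent_weak_solution(x, y, z, Ac, Ad, Bc, Bd, Cc, Cd, bc, bd, cc, cd):
        r1 = -Ac @ x - Bc @ z + bc
        r2 = -Ac @ y - Cc @ z + cc
        i1 = Ad @ np.abs(x) + Bd @ np.abs(z) + bd >= np.abs(r1)          # (2.9)
        i2 = Ad @ np.abs(y) + Cd @ np.abs(z) + cd >= np.abs(r2)          # (2.10)
        lhs = (np.outer(Bd @ np.abs(z) + bd, np.abs(y))                  # (2.11), left side
               + np.outer(Cd @ np.abs(z) + cd, np.abs(x))
               + Ad @ np.abs(np.outer(x, y) - np.outer(y, x)))
        rhs = np.abs(np.outer(r1, y) - np.outer(r2, x))                  # (2.11), right side
        return bool(i1.all() and i2.all() and (lhs >= rhs).all())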

In the case that x = y, we immediately have the following corollary.

COROLLARY 2.1. Let AI ⊂ Rm×n, BI, CI ⊂ Rm×h, bI, cI ⊂ Rm. Then for certain A ∈ AI, B ∈ BI, C ∈ CI, b ∈ bI, c ∈ cI, vectors x ∈ Rn, z ∈ Rh form a solution of the system

Ax + Bz = b, (2.14)
Ax + Cz = c (2.15)

if and only if they represent a weak solution of the linear interval system

AIx + BIz = bI, (2.16)
AIx + CIz = cI, (2.17)
BIz − CIz = bI − cI. (2.18)

3. Generalization of the Gerlach Theorem

Now we generalize the Gerlach [4] characterization of weak solutions of linear interval inequalities to the case where there is a specific dependence between some coefficients of the constraint matrix.

THEOREM 3.1. Let AI ⊂ Rm×n, BI, CI ⊂ Rm×h, bI, cI ⊂ Rm. Then for certain A ∈ AI, B ∈ BI, C ∈ CI, b ∈ bI, c ∈ cI, vectors x, y ∈ Rn, z ∈ Rh form a solution of the system

Ax + Bz ≤ b, (3.1)
Ay + Cz ≤ c (3.2)

if and only if they satisfy the system of inequalities

AΔ|x| + BΔ|z| + b ≥ Acx + Bcz, (3.3)
AΔ|y| + CΔ|z| + c ≥ Acy + Ccz, (3.4)
r1|yk| + r2|xk| + AΔ|ykx − xky| ≥ 0 ∀k = 1, …, n : xkyk < 0, (3.5)

where r1 ≡ b − Acx − Bcz + BΔ|z|, r2 ≡ c − Acy − Ccz + CΔ|z|.

Proof. Denote aI ≡ AIl,·, bI ≡ BIl,·, cI ≡ CIl,·, βI ≡ bIl, γI ≡ cIl. Let us consider the lth inequalities in the systems (3.1)–(3.2) and denote them by

ax + bz ≤ β,  ay + cz ≤ γ, (3.6)

where a ∈ aI, b ∈ bI, c ∈ cI, β ∈ βI, γ ∈ γI. Let us consider the vector in demand a ∈ aI in the form with the ith component ai ≡ aci + αiaΔi, αi ∈ 〈−1, 1〉. The condition (3.6) holds iff for a certain α ∈ 〈−1, 1〉n we have

acx + ∑_{i=1}^n αiaΔi xi + bcz ≤ β + bΔ|z|,
acy + ∑_{i=1}^n αiaΔi yi + ccz ≤ γ + cΔ|z|,

or, equivalently, iff the following problem

max {0Tα;  ∑_{i=1}^n αiaΔi xi ≤ r1,  ∑_{i=1}^n αiaΔi yi ≤ r2,  α ≤ e, −α ≤ e},


where r1 ≡ β − acx − bcz + bΔ|z|, r2 ≡ γ − acy − ccz + cΔ|z|, has an optimal solution. From the duality theory in linear programming, this problem has an optimal solution iff the same holds for the problem

min {r1u1 + r2u2 + ∑_{i=1}^n (vi + wi);  aΔi xiu1 + aΔi yiu2 + vi − wi = 0,  u1, u2, vi, wi ≥ 0 ∀i = 1, …, n}.

For an optimal solution vi, wi, the relation vi + wi = |aΔi xiu1 + aΔi yiu2| holds. Hence we can rewrite the linear programming problem as

min {r1u1 + r2u2 + ∑_{i=1}^n |aΔi xiu1 + aΔi yiu2|;  u1, u2 ≥ 0}.

Since the objective function ƒ(u1, u2) = r1u1 + r2u2 + ∑_{i=1}^n |aΔi xiu1 + aΔi yiu2| is positive homogeneous, it is sufficient (similarly as in the proof of Lemma 2.1) to check its nonnegativity only at special points:

(i) If u1 = 0, u2 = 1, then ƒ(0, 1) ≥ 0 equals r2 + aΔ|y| ≥ 0, which is the lth inequality from the system (3.4).

(ii) Let u1 = 1, u2 ≥ 0. The function ƒ(1, u2) represents a broken line with breaks in u2 = 0 and in u2 = −xk/yk ≥ 0, yk ≠ 0. For the first case, the condition ƒ(1, 0) ≥ 0 equals r1 + aΔ|x| ≥ 0, which is the lth inequality from the system (3.3). In the second case, for the breaks of the objective function ƒ(1, u2) we obtain the following inequality:

r1 + r2(−xk/yk) + ∑_{i=1}^n aΔi |xi − (xk/yk)yi| ≥ 0  ∀k = 1, …, n : xkyk ≤ 0, yk ≠ 0.

Since −xk/yk = |−xk/yk|, the inequality equals (w.l.o.g. assume xk, yk ≠ 0, for otherwise we get a redundant condition)

r1|yk| + r2|xk| + aΔ|ykx − xky| ≥ 0  ∀k = 1, …, n : xkyk < 0,

which is the lth inequality from the system (3.5). ∎

Note that the system (3.5) from Theorem 3.1 is empty if x = y, or if x, y ≥ 0. Hence two corollaries directly follow from Theorem 3.1.

COROLLARY 3.1. Let AI ⊂ Rm×n, BI, CI ⊂ Rm×h, bI, cI ⊂ Rm. Then for certain A ∈ AI, B ∈ BI, C ∈ CI, b ∈ bI, c ∈ cI, vectors x ∈ Rn, z ∈ Rh form a solution of the system

Ax + Bz ≤ b,
Ax + Cz ≤ c


if and only if they are a weak solution of the interval system

AIx + BIz ≤ bI,
AIx + CIz ≤ cI.

COROLLARY 3.2. Let AI ⊂ Rm×n, BI, CI ⊂ Rm×h, bI, cI ⊂ Rm. Then for certain A ∈ AI, B ∈ BI, C ∈ CI, b ∈ bI, c ∈ cI, vectors x, y ∈ Rn, z ∈ Rh form a nonnegative solution of the system

Ax + Bz ≤ b,
Ay + Cz ≤ c

if and only if they form a nonnegative solution of the real system

Ax + Bz ≤ b,
Ay + Cz ≤ c

with the constraint matrices A, B, C fixed at their lower endpoints and the right-hand sides b, c at their upper endpoints.

Remark 3.1. Contrary to the situation in common analysis, in interval analysis it is not generally possible to transform an interval system of equations AIx = bI to the interval system of inequalities AIx ≤ bI, −AIx ≤ −bI. However, if the interval system of inequalities is equipped with a certain dependence structure, such a transformation is possible. The interval system AIx = bI is weakly solvable iff there exist A ∈ AI, b ∈ bI such that the system Ax + bxn+1 ≤ 0, A(−x) + b(−xn+1) ≤ 0, xn+1 = −1 is solvable. From Theorem 3.1 (with the assignment y = −x, yn+1 = −xn+1) the solvability of the system

AΔ|x| + bΔ|xn+1| ≥ Acx + bcxn+1,
AΔ|−x| + bΔ|−xn+1| ≥ −Acx − bcxn+1,
(−Acx − bcxn+1)|−xk| + (Acx + bcxn+1)|xk| + AΔ|0| ≥ 0 ∀k = 1, …, n + 1,
xn+1 = −1

follows, equivalently,

AΔ|x| + bΔ ≥ |Acx − bc|, (3.7)

which is the statement of the Oettli–Prager theorem on solvability of AIx = bI.
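For comparison, the plain Oettli–Prager test (3.7) is a one-liner (a sketch in numpy; Ac, Ad denote the midpoint and radius of AI, bc, bd those of bI):

    import numpy as np

    def oettli_prager(x, Ac, Ad, bc, bd):
        # x is a weak solution of A^I x = b^I iff A^Delta|x| + b^Delta >= |A^c x - b^c|
        return bool((Ad @ np.abs(x) + bd >= np.abs(Ac @ x - bc)).all())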

4. Mixed Equalities and Inequalities

In the previous sections we studied the dependence structure for linear interval equations and inequalities, respectively. Now we turn our attention to mixed linear interval equations and inequalities with a dependence structure.


THEOREM 4.1. Let AI ⊂ Rm×n, BI, CI ⊂ Rm×h, bI, cI ⊂ Rm. Then for certain A ∈ AI, B ∈ BI, C ∈ CI, b ∈ bI, c ∈ cI, vectors x, y ∈ Rn, z ∈ Rh form a solution of the system

Ax + Bz = b, (4.1)
Ay + Cz ≤ c (4.2)

if and only if they satisfy the system of inequalities

AΔ|x| + BΔ|z| + bΔ ≥ |r1|, (4.3)
AΔ|y| + r2 ≥ 0, (4.4)
−r1yT diag(sgn x) + r2|x|T + (bΔ + BΔ|z|)|y|T + AΔ|xyT − yxT| ≥ 0, (4.5)

where r1 ≡ bc − Acx − Bcz, r2 ≡ c − Acy − Ccz + CΔ|z|.

Proof. Denote aI ≡ AIl,·, bI ≡ BIl,·, cI ≡ CIl,·, βI ≡ bIl, γI ≡ cIl. Let us consider the lth equality and inequality in the systems (4.1) and (4.2) and denote them by

ax + bz = β,  ay + cz ≤ γ, (4.6)

where a ∈ aI, b ∈ bI, c ∈ cI, β ∈ βI, γ ∈ γI. Let the vector a ∈ aI in demand have its ith component in the form ai ≡ aci + αiaΔi, where αi ∈ 〈−1, 1〉. The condition (4.6) holds iff for a certain α ∈ 〈−1, 1〉n we have

acx + ∑_{i=1}^n αiaΔi xi + bcz ∈ 〈βc − bΔ|z| − βΔ, βc + bΔ|z| + βΔ〉,
acy + ∑_{i=1}^n αiaΔi yi + ccz ≤ γ + cΔ|z|,

or, equivalently, iff the following problem

max {0Tα;  −∑_{i=1}^n αiaΔi xi ≤ −r1 + β1,  ∑_{i=1}^n αiaΔi xi ≤ r1 + β1,
           ∑_{i=1}^n αiaΔi yi ≤ r2,  α ≤ e, −α ≤ e}

has an optimal solution (for r1 ≡ βc − acx − bcz, β1 ≡ βΔ + bΔ|z|, r2 ≡ γ − acy − ccz + cΔ|z|). From the duality theory in linear programming, this problem has an optimal solution iff the problem

min {−(r1 − β1)u1 + (r1 + β1)u2 + r2u3 + ∑_{i=1}^n (vi + wi);
     −aΔi xiu1 + aΔi xiu2 + aΔi yiu3 + vi − wi = 0,
     u1, u2, u3, vi, wi ≥ 0 ∀i = 1, …, n}


has an optimal solution. After the substitution u1′ ≡ u2 − u1, we can rewrite this problem as

min {(r1 + β1)u1′ + 2β1u1 + r2u3 + ∑_{i=1}^n (vi + wi);
     aΔi xiu1′ + aΔi yiu3 + vi − wi = 0,
     u1 ≥ −u1′, u1, u3, vi, wi ≥ 0 ∀i = 1, …, n}.

For an optimal solution vi, wi, u1 it must hold that vi + wi = |aΔi xiu1′ + aΔi yiu3| and u1 = (−u1′)+. Hence the problem is simplified to

min {(r1 + β1)u1′ + 2β1(−u1′)+ + r2u3 + ∑_{i=1}^n |aΔi xiu1′ + aΔi yiu3|;  u1′ ∈ R, u3 ≥ 0}.

Since the positive part of a real number p is equal to p+ = (p + |p|)/2, the linear programming problem has the final form

min {r1u1′ + r2u3 + β1|u1′| + ∑_{i=1}^n |aΔi xiu1′ + aΔi yiu3|;  u1′ ∈ R, u3 ≥ 0}.

The objective function ƒ(u1′, u3) = r1u1′ + r2u3 + β1|u1′| + ∑_{i=1}^n |aΔi xiu1′ + aΔi yiu3| of this problem is positive homogeneous, thus it is sufficient (similarly as in the proof of Lemma 2.1) to check the nonnegativity only at some special points:

(i) If u1′ = ±1, u3 = 0, then ƒ(±1, 0) ≥ 0 equals ±r1 + β1 + aΔ|x| ≥ 0, or equivalently β1 + aΔ|x| ≥ |r1|, which is the lth inequality from the system (4.3).

(ii) Let u3 = 1. The function ƒ(u1′, 1) represents a broken line with breaks in u1′ = 0 and in u1′ = −yk/xk, xk ≠ 0. For the first case, the condition ƒ(0, 1) ≥ 0 equals r2 + aΔ|y| ≥ 0, which is the lth inequality from the system (4.4). In the second case, for each nonzero break of the objective function ƒ(u1′, 1) we obtain the inequality

r1(−yk/xk) + r2 + β1|−yk/xk| + ∑_{i=1}^n aΔi |−(yk/xk)xi + yi| ≥ 0.

This inequality can be expressed (since for xk = 0 we get a redundant condition) as

−r1yk sgn(xk) + r2|xk| + β1|yk| + aΔ|ykx − xky| ≥ 0  ∀k = 1, …, n,


or in the vector form

−r1yT diag(sgn x) + r2|x|T + β1|y|T + aΔ|xyT − yxT| ≥ 0,

which corresponds to the lth inequality from the system (4.5). ∎

Putting x = y, we immediately have the following corollary.

COROLLARY 4.1. Let AI ⊂ Rm×n, BI, CI ⊂ Rm×h, bI, cI ⊂ Rm. Then for certain A ∈ AI, B ∈ BI, C ∈ CI, b ∈ bI, c ∈ cI, vectors x ∈ Rn, z ∈ Rh form a solution of the system

Ax + Bz = b, (4.7)
Ax + Cz ≤ c (4.8)

if and only if they are a weak solution of the interval system

AIx + BIz = bI, (4.9)
AIx + CIz ≤ cI, (4.10)
CIz − BIz ≤ cI − bI. (4.11)

Proof. According to Theorem 4.1, vectors x ∈ Rn, z ∈ Rh form a solution of the system (4.7)–(4.8) iff they satisfy the system

AΔ|x| + BΔ|z| + bΔ ≥ |r1|,
AΔ|x| + r2 ≥ 0,
−r1xT diag(sgn x) + r2|x|T + (bΔ + BΔ|z|)|x|T + AΔ|xxT − xxT| ≥ 0.

From [3, Theorems 2.9 and 2.19] it follows that the first and second inequalities of this system are equivalent to (4.9) and (4.10), respectively. The third inequality can be rewritten as

−(bc − Acx − Bcz)|x|T + (c − Acx − Ccz + CΔ|z|)|x|T + (bΔ + BΔ|z|)|x|T ≥ 0. (4.12)

If x = 0, then the statement holds. Assume that x ≠ 0. Then the inequality (4.12) can be simplified to

−(bc − Acx − Bcz) + (c − Acx − Ccz + CΔ|z|) + (bΔ + BΔ|z|) ≥ 0,

and consequently to

CΔ|z| + BΔ|z| + c − b ≥ Ccz − Bcz.

According to [3, Theorem 2.19] this inequality is equivalent to (4.11). ∎

THEOREM 4.2. Let AI ⊂ Rm×n, BIi ⊂ Rm×h, CIj ⊂ Rm×h, bIi ⊂ Rm, cIj ⊂ Rm, i = 1, …, p, j = 1, …, q. Then for certain matrices A ∈ AI, Bi ∈ BIi, Cj ∈ CIj, bi ∈ bIi, cj ∈ cIj, i = 1, …, p, j = 1, …, q, vectors x ∈ Rn, z ∈ Rh form a solution of the system

Ax + Biz = bi  ∀i = 1, …, p, (4.13)
Ax + Cjz ≤ cj  ∀j = 1, …, q (4.14)

if and only if they are a weak solution of the interval system

AIx + BIiz = bIi  ∀i = 1, …, p, (4.15)
AIx + CIjz ≤ cIj  ∀j = 1, …, q, (4.16)
(BIi − BIk)z = bIi − bIk  ∀i, k : i < k, (4.17)
(CIj − BIi)z ≤ cIj − bIi  ∀i, j. (4.18)

Proof. One implication is obvious. If for certain A ∈ AI, Bi ∈ BIi, Cj ∈ CIj, bi ∈ bIi, cj ∈ cIj vectors x ∈ Rn, z ∈ Rh satisfy the system (4.13)–(4.14), then they represent a weak solution of the interval system (4.15)–(4.18) as well.

To prove the second implication, denote aI ≡ AIl,·, bIi ≡ (BIi)l,·, cIj ≡ (CIj)l,·, βIi ≡ (bIi)l, γIj ≡ (cIj)l, i = 1, …, p, j = 1, …, q. Let us consider the lth equations and inequalities in the systems (4.13)–(4.14) and denote them by

ax + biz = βi,  i = 1, …, p, (4.19)
ax + cjz ≤ γj,  j = 1, …, q, (4.20)

where a ∈ aI, bi ∈ bIi, cj ∈ cIj, βi ∈ βIi, γj ∈ γIj. Denote ri ≡ acx + bciz − βci, i = 1, …, p, and sj ≡ acx + ccjz − γcj, j = 1, …, q. Vectors x, z satisfy the system (4.19)–(4.20) iff there exists α ∈ 〈−aΔ|x|, aΔ|x|〉 for which we have

|ri + α| ≤ bΔi|z| + βΔi,  i = 1, …, p,
sj + α ≤ cΔj|z| + γΔj,  j = 1, …, q,

or, equivalently,

α ≤ aΔ|x|,
α ≥ −aΔ|x|,
α ≤ −ri + bΔi|z| + βΔi,  i = 1, …, p,
α ≥ −ri − bΔi|z| − βΔi,  i = 1, …, p,
α ≤ −sj + cΔj|z| + γΔj,  j = 1, …, q.

Such a number α exists iff the following four conditions hold.

(i) First condition:

−aΔ|x| ≤ −ri + bΔi|z| + βΔi,  i = 1, …, p,
aΔ|x| ≥ −ri − bΔi|z| − βΔi,  i = 1, …, p,

or, equivalently,

|ri| ≤ aΔ|x| + bΔi|z| + βΔi,  i = 1, …, p.

According to [3, Theorem 2.9], the first condition is equivalent to the condition that the vectors x, z represent a weak solution of the interval equation

aIx + bIiz = βIi,

which corresponds to the lth equation in the system (4.15).

(ii) Second condition:

−aΔ|x| ≤ −sj + cΔj|z| + γΔj,  j = 1, …, q.

According to [3, Theorem 2.19], the second condition is equivalent to the condition that the vectors x, z represent a weak solution of the interval inequality

aIx + cIjz ≤ γIj,

which corresponds to the lth inequality in the system (4.16).

(iii) Third condition:

−rk − bΔk|z| − βΔk ≤ −ri + bΔi|z| + βΔi,  i, k = 1, …, p, i < k,
−ri − bΔi|z| − βΔi ≤ −rk + bΔk|z| + βΔk,  i, k = 1, …, p, i < k,

or, equivalently,

|ri − rk| ≤ (bΔi + bΔk)|z| + βΔi + βΔk,  i, k = 1, …, p, i < k.

According to [3, Theorem 2.9], the third condition is equivalent to the condition that the vectors x, z represent a weak solution of the interval equation

(bIi − bIk)z = βIi − βIk,  i, k = 1, …, p, i < k,

which corresponds to the lth equation in the system (4.17).

(iv) Fourth condition:

−ri − bΔi|z| − βΔi ≤ −sj + cΔj|z| + γΔj,  i = 1, …, p, j = 1, …, q,

or, equivalently,

sj − ri ≤ (bΔi + cΔj)|z| + βΔi + γΔj,  i = 1, …, p, j = 1, …, q.

According to [3, Theorem 2.19], the fourth condition is equivalent to the condition that the vectors x, z represent a weak solution of the interval inequality

(cIj − bIi)z ≤ γIj − βIi,  i = 1, …, p, j = 1, …, q,

which corresponds to the lth inequality in the system (4.18). ∎
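Since conditions (4.15)–(4.18) are ordinary weak-solvability tests, they can be checked by the Oettli–Prager and Gerlach formulas; a sketch (Bc, Bd, bc, bd and Cc, Cd, cc, cd are lists of midpoints and radii over i = 1, …, p and j = 1, …, q; all names are ours):

    import numpy as np

    def weak_eq(u, Mc, Md, vc, vd):      # Oettli-Prager: u weakly solves M^I u = v^I
        return bool((Md @ np.abs(u) + vd >= np.abs(Mc @ u - vc)).all())

    def weak_ineq(u, Mc, Md, vc, vd):    # Gerlach: u weakly solves M^I u <= v^I
        return bool((Mc @ u - Md @ np.abs(u) <= vc + vd).all())

    def theorem_4_2(x, z, Ac, Ad, Bc, Bd, Cc, Cd, bc, bd, cc, cd):
        xz = np.concatenate([x, z])
        p, q = len(Bc), len(Cc)
        ok = all(weak_eq(xz, np.hstack([Ac, Bc[i]]), np.hstack([Ad, Bd[i]]),
                         bc[i], bd[i]) for i in range(p))                    # (4.15)
        ok = ok and all(weak_ineq(xz, np.hstack([Ac, Cc[j]]), np.hstack([Ad, Cd[j]]),
                                  cc[j], cd[j]) for j in range(q))           # (4.16)
        ok = ok and all(weak_eq(z, Bc[i] - Bc[k], Bd[i] + Bd[k],
                                bc[i] - bc[k], bd[i] + bd[k])
                        for i in range(p) for k in range(i + 1, p))          # (4.17)
        ok = ok and all(weak_ineq(z, Cc[j] - Bc[i], Cd[j] + Bd[i],
                                  cc[j] - bc[i], cd[j] + bd[i])
                        for i in range(p) for j in range(q))                 # (4.18)
        return ok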


References

1. Alefeld, G., Kreinovich, V., and Mayer, G.: On Symmetric Solution Sets, in: Herzberger, J. (ed.), Inclusion Methods for Nonlinear Problems. Proceedings of the International GAMM-Workshop, Munich and Oberschleissheim, 2000, Springer, Wien, Comput. Suppl. 16 (2003), pp. 1–22.
2. Alefeld, G., Kreinovich, V., and Mayer, G.: The Shape of the Solution Set for Systems of Interval Linear Equations with Dependent Coefficients, Math. Nachr. 192 (1998), pp. 23–36.
3. Fiedler, M., Nedoma, J., Ramik, J., Rohn, J., and Zimmermann, K.: Linear Optimization Problems with Inexact Data, Springer-Verlag, New York, 2006.
4. Gerlach, W.: Zur Losung linearer Ungleichungssysteme bei Storung der rechten Seite und der Koeffizientenmatrix, Math. Operationsforsch. Stat., Ser. Optimization 12 (1981), pp. 41–43.
5. Kolev, L. V.: Interval Methods for Circuit Analysis, World Scientific, Singapore, 1993.
6. Kolev, L. V.: Solving Linear Systems Whose Elements Are Nonlinear Functions of Interval Parameters, Numerical Algorithms 37 (2004), pp. 199–212.
7. Kolev, L. V. and Vladov, S. S.: Linear Circuit Tolerance Analysis via Systems of Linear Interval Equations, ISYNT'89 6th International Symposium on Networks, Systems and Signal Processing, June 28–July 1, Zagreb, Yugoslavia, 1989, pp. 57–69.
8. Oettli, W. and Prager, W.: Compatibility of Approximate Solution of Linear Equations with Given Error Bounds for Coefficients and Right-Hand Sides, Numer. Math. 6 (1964), pp. 405–409.
9. Popova, E.: On the Solution of Parameterised Linear Systems, in: Kraemer, W. and Wolff von Gudenberg, J. (eds), Scientific Computing, Validated Numerics, Interval Methods, Kluwer Academic Publishers, Boston, 2001, pp. 127–138.


SIAM J. MATRIX ANAL. APPL. © 2008 Society for Industrial and Applied Mathematics
Vol. 30, No. 2, pp. 509–521

DESCRIPTION OF SYMMETRIC AND SKEW-SYMMETRIC SOLUTION SET∗

MILAN HLADIK†

Abstract. We consider a linear system Ax = b, where A is varying inside a given interval matrix A, and b is varying inside a given interval vector b. The solution set of such a system is described by the well-known Oettli–Prager theorem. But if we restrict ourselves only to symmetric/skew-symmetric matrices A ∈ A, the problem is much more complicated. So far, the symmetric/skew-symmetric solution set description could be obtained only by a lengthy Fourier–Motzkin elimination applied on each orthant. We present an explicit necessary and sufficient characterization of the symmetric and skew-symmetric solution sets by means of nonlinear inequalities. The number of the inequalities is, however, still exponential w.r.t. the problem dimension.

Key words. linear interval systems, solution set, interval matrix, symmetric matrix

AMS subject classifications. 65G40, 15A06, 15A57

DOI. 10.1137/070680783

Notation
IRm×n    the set of all m-by-n interval matrices
IRn      the set of all n-dimensional interval vectors
□S       interval hull of a set S ⊂ Rn, i.e., the smallest box [a1, b1] × ··· × [an, bn] that contains all the elements of S
≺lex     strict lexicographic ordering of vectors, i.e., u ≺lex v if for some k we have ui = vi, i < k, and uk < vk
⪯lex     lexicographic ordering of vectors, i.e., u ⪯lex v if u ≺lex v or u = v
|v|      absolute value of a vector v, i.e., the vector with components |v|i = |vi|
Ai,.     the ith row of a matrix A
ek       the kth basis vector (of convenient dimension), i.e., the kth column of the identity matrix
r+       positive part of a real number r, i.e., r+ = max(0, r)

1. Introduction. Real-life problems are often subject to uncertainties in data measurements. Such uncertainties can be dealt with by methods of interval analysis [1]: instead of exact values we compute with compact real intervals. An interval matrix is defined as

A = [A, A] = {A ∈ Rm×n | A ≤ A ≤ A},

where A ≤ A are fixed matrices (n-dimensional interval vectors can be regarded as n-by-1 interval matrices). By

Ac ≡ (1/2)(A + A),  AΔ ≡ (1/2)(A − A)

we denote the midpoint and the radius of A, respectively.

∗Received by the editors January 23, 2007; accepted for publication (in revised form) by A. Frommer February 12, 2008; published electronically May 9, 2008. http://www.siam.org/journals/simax/30-2/68078.html

†Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University, Malostranske nam. 25, 118 00, Prague, Czech Republic ([email protected]).


Let us consider a system of linear interval equations

Ax = b.

The solution set

Σ ≡ {x ∈ Rn | Ax = b, A ∈ A, b ∈ b}

is described by the well-known Oettli–Prager condition [11]

x ∈ Σ ⇔ AΔ|x| + bΔ ≥ |Acx − bc|.

In interval analysis, we usually suppose that values vary in given intervals independently. But in some applications, dependencies can occur (cf. [5], [9]). In particular, we focus on some types of the matrix A. The symmetric solution set is defined as

Σsym ≡ {x ∈ Rn | Ax = b, A = AT, A ∈ A, b ∈ b},

and the skew-symmetric solution set as

Σskew ≡ {x ∈ Rn | Ax = b, A = −AT, A ∈ A, b ∈ b}.

These sets have been exhaustively studied in recent years (see [2], [3], [4], [5], [6], and [7]). Applications involve Markov chains [8] and truss mechanics [10], for instance. Descriptions of Σsym and Σskew can be obtained by a Fourier–Motzkin elimination applied on each of the 2ⁿ orthants. Contrary to Σ, the symmetric solution set Σsym is not polyhedral; its shape is described by quadrics (see [3], [4], [5], and [6]), and it is not convex in general, even if intersected with an orthant.
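A crude way to get a feel for Σsym against Σ is Monte Carlo sampling over symmetric members of A (a Python sketch with illustrative names; it assumes that Ac and AΔ are themselves symmetric, and it is in no way a verified description of the set):

    import numpy as np

    def sample_sym_solution_set(Ac, Ad, bc, bd, trials=10000, seed=0):
        rng = np.random.default_rng(seed)
        n = len(bc)
        pts = []
        for _ in range(trials):
            S = rng.uniform(-1.0, 1.0, (n, n))
            S = np.tril(S) + np.tril(S, -1).T        # random symmetric sign pattern
            A = Ac + S * Ad                          # a symmetric member of A
            b = bc + rng.uniform(-1.0, 1.0, n) * bd
            try:
                pts.append(np.linalg.solve(A, b))
            except np.linalg.LinAlgError:
                pass                                 # skip singular samples
        return np.array(pts)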

The paper is organized as follows. In section 2 we derive a solution set characterization for a system of linear interval equations where specific dependencies occur. As consequences, we obtain a description of the symmetric solution set Σsym (section 3) and a description of the skew-symmetric solution set Σskew (section 4). The basic properties of Σsym mentioned above simply follow from the proposed Theorem 3.1 in section 3 (illustrated by Figures 3.1 and 3.2).

2. Linear interval equations with particular dependences. This section provides a characterization of the linear interval system equipped with a certain dependency (Theorem 2.2); the matrix A occurs twice in the system, in (2.3) and transposed in (2.4). We will see later in sections 3 and 4 that the description of the symmetric/skew-symmetric solution set is a simple consequence of Theorem 2.2. Another reason for dealing with such a dependency is that similar relations (occurrence of a matrix and its transposition in a system) can appear in some applications, e.g., optimality conditions in linear programming.

First we state an auxiliary result.

Lemma 2.1. Let a1, b1, d1 ∈ Rm, a2, b2, d2 ∈ Rn, and C ∈ Rm×n. The function

f(u, v) ≡ (a1)Tu + (b1)T|u| + (a2)Tv + (b2)T|v| + ∑_{i=1}^m ∑_{j=1}^n cij|d2j ui + d1i vj| (2.1)

is nonnegative for all u ∈ Rm and v ∈ Rn iff it is nonnegative for all u, v satisfying at least one of the following conditions:
(i) ui ∈ {0, d1i} ∀i = 1, …, m, and vj ∈ {0, −d2j} ∀j = 1, …, n;
(ii) ui ∈ {0, −d1i} ∀i = 1, …, m, and vj ∈ {0, d2j} ∀j = 1, …, n;
(iii) (uT, vT)T = ±ek for some k ∈ {1, …, m + n}.


Proof. One implication is obvious: If f(u, v) is nonnegative for all u ∈ Rm and v ∈ Rn, then it is nonnegative in the particular points.

The converse will be proven by induction w.r.t. the dimension m + n.

If m + n = 1, then without loss of generality (w.l.o.g.) assume m = 1, n = 0. The function f(u) = a1u + b1|u| is nonnegative for all real u iff it is nonnegative for u = ±1, which is managed by the third condition of Lemma 2.1.

The induction step will be proven by contradiction. Let us assume that f(u, v) < 0 for some vectors u, v.

(a) Suppose there are some vectors u, v such that f(u, v) < 0 and ui = 0 for some index i. Delete the ith component in u and denote the resulting vector by ũ. Replace b2 by b̃2, where b̃2j = b2j + cij|d1i|, j = 1, …, n, and apply the induction hypothesis to ũ, v. Hence f(u′, v′) ≥ 0 for some vectors u′ ∈ Rm−1 and v′ ∈ Rn satisfying one of the conditions (i)–(iii). Canonical embedding of u′ into the space Rm yields the pair of vectors u′ ∈ Rm and v′ ∈ Rn such that f(u′, v′) ≥ 0, and one of the conditions (i)–(iii) is true. Thus, a contradiction.

(b) Suppose there are some vectors u, v such that f(u, v) < 0 and vj = 0 for some index j. Here the assertion follows analogously to case (a).

(c) Assume, as the remaining case, that no component of u, v is zero for all vectors u, v with f(u, v) < 0.

First we show that d1i ≠ 0 for every i = 1, …, m, and d2j ≠ 0 for every j = 1, …, n. If w.l.o.g. d1i = 0 for some i, then we have

f(u, v) = f(u1, …, ui−1, 0, ui+1, …, um, v) + f(0, …, 0, ui, 0, …, 0, 0) < 0.

That is, one of the two summands is negative, which contradicts our assumption.

Now, choose vectors u, v with f(u, v) < 0 such that the number of absolute values in (2.1) that are zero is maximal. Define the graph G = (V, E), where the vertex set V consists of ui, i = 1, …, m, and vj, j = 1, …, n. The edge set E contains those pairs {ui, vj} for which d2j ui + d1i vj = 0. We distinguish three cases and show that each of them contradicts some assumption.

1. The graph G is connected. Choose (ui∗, vj∗) ∈ E and define z∗ ≡ −vj∗/d2j∗ ≠ 0. Then ui∗ = d1i∗ z∗, and vj∗ = −d2j∗ z∗. Due to the connectivity of G, we can extend this property by induction to all i, j: If (ui, vj) ∈ E and ui = d1i z∗, then vj = −d2j z∗. If (ui, vj) ∈ E and vj = −d2j z∗, then ui = d1i z∗. Hence

ui = d1i z∗ ∀i = 1, …, m,  vj = −d2j z∗ ∀j = 1, …, n. (2.2)

Define u′ ≡ (1/|z∗|)u, v′ ≡ (1/|z∗|)v. The vectors u′, v′ satisfy the first or the second condition of Lemma 2.1 (depending on the sign of z∗), but f(u′, v′) = (1/|z∗|)f(u, v) < 0. Thus, a contradiction.

2. The graph G is not connected and E ≠ ∅. We will construct vectors u′, v′ with f(u′, v′) < 0 and at least one component of u′ or v′ equal to zero, which contradicts the assumption of case (c).

Take a connected component G′ = (V′, E′) of G such that E′ ≠ ∅. Then the property (2.2) holds when restricted to G′:

ui = d1i z∗ ∀i : ui ∈ V′,  vj = −d2j z∗ ∀j : vj ∈ V′.

Consider the function g(z) ≡ f(u(z), v(z)) as a function of the variable z, where

ui(z) = d1i z if ui ∈ V′, and ui(z) = ui otherwise;  vj(z) = −d2j z if vj ∈ V′, and vj(z) = vj otherwise.

Then g(z) is a piecewise linear function (a broken line) on R. Moreover, it is linear on a neighborhood N(z∗) of z∗, that is, g(z) = pz + q, z ∈ N(z∗), for some p, q ∈ R. W.l.o.g. assume that z∗ > 0 and consider two possibilities.

Let g(z) be nondecreasing on N(z∗). In this case g(z) is nondecreasing on the interval [0, z∗], since otherwise there is a break point in (0, z∗) contradicting our assumption on the maximal number of zero absolute values. From g(0) ≤ g(z∗) = f(u, v) < 0 we get g(0) = f(u(0), v(0)) < 0 with u(0)i = 0 for all indices i such that ui ∈ V′ (at least one exists due to E′ ≠ ∅). This contradicts the assumption of case (c).

Let g(z) be decreasing on N(z∗). Then g(z) is decreasing on [z∗, ∞) (otherwise we are in contradiction with our assumption on the maximal number of zero absolute values). Moreover, for sufficiently large z we have f(u′, v′) < 0, where

u′i = d1i z if ui ∈ V′, and u′i = 0 otherwise;  v′j = −d2j z if vj ∈ V′, and v′j = 0 otherwise.

As V′ ⊊ V, the vectors u′, v′ contradict the assumption of case (c).

3. The graph G is not connected and E = ∅. Define the function g(z) ≡ f(u1z, u2, …, um, v1, …, vn). This function is linear on a neighborhood N(z∗) of z∗ ≡ 1.

If g(z) is nondecreasing on N(z∗), then it is nondecreasing on [0, z∗] (otherwise we are in contradiction with our assumption on the maximal number of zero absolute values). From g(0) ≤ g(z∗) = f(u, v) < 0 we get g(0) = f(0, u2, …, um, v1, …, vn) < 0. This contradicts the assumption of case (c).

If g(z) is decreasing on N(z∗), then it is decreasing on [z∗, ∞) (otherwise we are in contradiction with our assumption on the maximal number of zero absolute values). Moreover, for sufficiently large z we have f(u1z, 0, …, 0, 0, …, 0) < 0. This also contradicts the assumption of case (c).
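Lemma 2.1 replaces a check over all of Rm × Rn by a finite (though exponential) candidate set; the sketch below enumerates it for small m, n (Python, names ours):

    import numpy as np
    from itertools import product

    def candidate_points(d1, d2):
        m, n = len(d1), len(d2)
        pts = []
        for su, sv in ((1.0, -1.0), (-1.0, 1.0)):          # conditions (i) and (ii)
            for uc in product((0.0, 1.0), repeat=m):
                for vc in product((0.0, 1.0), repeat=n):
                    pts.append((su * np.array(uc) * d1, sv * np.array(vc) * d2))
        for k in range(m + n):                             # condition (iii)
            e = np.zeros(m + n)
            e[k] = 1.0
            pts.append((e[:m], e[m:]))
            pts.append((-e[:m], -e[m:]))
        return pts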

Theorem 2.2. Let A ∈ IRn×n, b ∈ IRn, and d ∈ IRn. Then vectors x, y ∈ Rn

form a solution of the system

Ax = b,(2.3)

AT y = d(2.4)

for some A ∈ A, b ∈ b, and d ∈ d iff they satisfy the following system of inequalities:

AΔ|x| + bΔ ≥ |r1|,(2.5)

AΔ|y| + dΔ ≥ |r2|,(2.6)

n∑

i,j=1

aΔij |yixj(pi − qj)| +

n∑

i=1

(bΔi |yipi| + dΔ

i |xiqi|) ≥∣∣∣∣∣

n∑

i=1

(r1i yipi − r2

i xiqi)

∣∣∣∣∣(2.7)

∀ p, q ∈ {0, 1}n, where r1 ≡ −Acx + bc, r2 ≡ −(Ac)T y + dc.

Page 51: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.

DESCRIPTION OF SYMMETRIC SOLUTION SET 513

Proof. Let x, y ∈ Rn. Then x, y satisfy (2.3)–(2.4) iff for a certain α ∈ [−1, 1]n×n

the following relations hold:

Aci,.x +

n∑

k=1

αikaΔikxk ∈ [bc

i − bΔi , bc

i + bΔi ] ∀ i = 1, . . . , n,

(Ac.,j

)Ty +

n∑

k=1

αkjaΔkjyk ∈ [dc

j − dΔj , dc

j + dΔj ] ∀ j = 1, . . . , n.

Equivalently, iff the following linear programming problem

maxn∑

i,j=1

0 · αij

subject to

−n∑

k=1

αikaΔikxk ≤ −r1

i + bΔi ∀ i = 1, . . . , n,

n∑

k=1

αikaΔikxk ≤ r1

i + bΔi ∀ i = 1, . . . , n,

−n∑

k=1

αkjaΔkjyk ≤ −r2

j + dΔj ∀ j = 1, . . . , n,

n∑

k=1

αkjaΔkjyk ≤ r2

j + dΔj ∀ j = 1, . . . , n,

αij ≤ 1 ∀ i, j = 1, . . . , n,

−αij ≤ 1 ∀ i, j = 1, . . . , n

has an optimal solution.Recall duality in linear programming [12], [13]. The linear programs

max bT y subject to AT y ≤ c

and

min cT x subject to Ax = b, x ≥ 0

are dual to each other. Moreover, their optimal values are equal as long as at leastone of the problems is feasible (i.e., the constraints are satisfiable).

Thus our linear programming problem has an optimal solution iff the dual problem

min{

(−r1 + bΔ)T w1 + (r1 + bΔ)T w2 + (−r2 + dΔ)T w3

+ (r2 + dΔ)T w4 +n∑

i,j=1

(w5ij + w6

ij)}

subject to

−aΔijxjw

1i + aΔ

ijxjw2i − aΔ

ijyiw3j + aΔ

ijyiw4j + w5

ij − w6ij = 0 ∀ i, j = 1, . . . , n,

w1, w2, w3, w4, w5, w6 ≥ 0

has an optimal solution. The dual problem is feasible as its constraints are fulfilledwhen all the variables are equal to zero, for instance. After substitution u ≡ w2 −w1,

Page 52: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.

514 MILAN HLADIK

v ≡ w4 − w3 we can rewrite this problem:

min (r1 + bΔ)T u + 2(bΔ)T w1 + (r2 + dΔ)T v + 2(dΔ)T w3 +n∑

i,j=1

(w5ij + w6

ij)

subject to

aΔijxjui + aΔ

ijyivj + w5ij − w6

ij = 0 ∀ i, j = 1, . . . , n,

w1 ≥ −u,

w3 ≥ −v,

w1, w3, w5, w6 ≥ 0.

For w1, w3, w5, and w6 some necessary optimality conditions can be given.For each i, j at least one of w5

ij ,w6ij is zero (otherwise subtract from them a

sufficiently small ε > 0 and obtain a better solution). If w5ij = 0, then w6

ij = aΔijxjui +

aΔijyivj ≥ 0, and hence w5

ij + w6ij = |aΔ

ijxjui + aΔijyivj |. Similarly, if w6

ij = 0, then

w5ij = −(aΔ

ijxjui + aΔijyivj) ≥ 0, and hence w5

ij + w6ij = |aΔ

ijxjui + aΔijyivj |. Therefore

w5ij + w6

ij = |aΔijxjui + aΔ

ijyivj |

holds in any case.Next, the only constraints involving the variable w1

i , i ∈ {1, . . . , n} are w1i ≥ −ui

and w1i ≥ 0. Since the objective function coefficient by w1

i is nonnegative, the optimalw1

i should be as small as possible. That is, w1i = max (−ui, 0) = (−ui)

+. Hence wehave w1 = (−u)+, and the equation w3 = (−v)+ follows analogously.

Using these necessary optimality conditions, the optimization problem can bereformulated as an unconstrained optimization problem:

minu,v∈Rn

{(r1 + bΔ)T u + 2(bΔ)T (−u)+ + (r2 + dΔ)T v

+ 2(dΔ)T (−v)+ +n∑

i,j=1

aΔij |xjui + yivj |

}.

The positive part of a real number p is equal to p+ = 12 (p + |p|), and the problem

comes in the form

minu,v∈Rn

(r1)T u + (bΔ)T |u| + (r2)T v + (dΔ)T |v| +n∑

i,j=1

|aΔijxjui + aΔ

ijyivj |.

As aΔij is nonnegative (because it is the radius of an interval), the objective function

can be written

f(u, v) ≡ (r1)T u + (bΔ)T |u| + (r2)T v + (dΔ)T |v| +n∑

i,j=1

aΔij |xjui + yivj |.(2.8)

Note that, it is positive homogeneous, that is,

f(λu, λv) = λf(u, v) ∀ λ ≥ 0.

If f(u, v) < 0 for some vectors u, v ∈ Rn, then f(λu, λv) tends to −∞ for λ → ∞,and the problem does not attain an optimum. On the other hand, if f(u, v) ≥ 0 for

Page 53: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.

DESCRIPTION OF SYMMETRIC SOLUTION SET 515

all u, v ∈ Rn, then the optimal solution is u = v = 0. Thus the optimization problemhas an optimal solution iff the objective function is nonnegative for all u, v ∈ Rn.

We now use Lemma 2.1 with a1 ≡ r1, a2 ≡ r2, b1 ≡ bΔ, b2 ≡ dΔ, C ≡ AΔ,d1 ≡ y, d2 ≡ x, and m = n. It follows that it is sufficient to test nonnegativity off(u, v) for three cases:

1. ui ∈ {0, yi} ∀ i = 1, . . . , n, and vj ∈ {0,−xj} ∀ j = 1, . . . , n;2. ui ∈ {0,−yi} ∀ i = 1, . . . , n, and vj ∈ {0, xj} ∀ j = 1, . . . , n;3. (uT , vT )T = ±ek for some k ∈ {1, . . . , 2n}.

The first and second cases yield

±n∑

i=1

r1i yipi +

n∑

i=1

bΔi |yipi| ∓

n∑

i=1

r2i xiqi +

n∑

i=1

dΔi |xiqi| +

n∑

i,j=1

aΔij |yixj(pi − qj)| ≥ 0

orn∑

i,j=1

aΔij |yixj(pi − qj)| +

n∑

i=1

(bΔi |yipi| + dΔ

i |xiqi|) ≥∣∣∣∣∣

n∑

i=1

(r1i yipi − r2

i xiqi)

∣∣∣∣∣ ,

where p, q ∈ {0, 1}n. In the third case when u = ±ek and v = 0, we get

±r1k + bΔ

k +n∑

j=1

aΔkj |xj | ≥ 0,

which is the kth Oettli–Prager inequality in (2.5). Likewise u = 0, v = ±ek yields thekth Oettli–Prager inequality in (2.6).

3. Symmetric solution set. In this section, we suppose w.l.o.g. that A = AT ,i.e., matrices Ac, AΔ are symmetric. Otherwise we restrict our considerations on theinterval matrix (aij ∩ aji)

ni,j=1.

Theorem 3.1, which is a simple corollary of Theorem 2.2, enables us to obtainan explicit description of the symmetric solution set Σsym. Nevertheless, the numberof inequalities in the description is still exponential. Therefore when checking x ∈Σsym for only one vector x, it is better from the theoretical viewpoint to use thelinear programming problem (from the proof of Theorem 2.2), which is polynomiallysolvable [13]. The question whether Σsym can be described by a polynomial numberof inequalities is still open.

Theorem 3.1. Let r ≡ −Acx+bc. The symmetric solution set Σsym is describedby the following system of inequalities:

AΔ|x| + bΔ ≥ |r|,(3.1)

n∑

i,j=1

aΔij |xixj(pi − qj)| +

n∑

i=1

bΔi |xi(pi + qi)| ≥

∣∣∣∣∣n∑

i=1

rixi(pi − qi)

∣∣∣∣∣(3.2)

for all vectors p, q ∈ {0, 1}n \ {0, 1} such that

p ≺lex q and (p = 1 − q ∨ ∃ i : pi = qi = 0).(3.3)

Proof. For every A ∈ A, the matrix 12 (A + AT ) ∈ A is symmetric, and for every

b1, b2 ∈ b we have 12 (b1 + b2) ∈ b. Thus, Σsym can be equivalently described as the

set of all x ∈ Rn satisfying

Ax = b1,(3.4)

AT x = b2(3.5)

Page 54: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.

516 MILAN HLADIK

for some A ∈ A, b1, b2 ∈ b. Put y ≡ x, d ≡ b and apply Theorem 2.2 on system (3.4)–(3.5). We obtain that Σsym is described by (3.1)–(3.2) for all p, q ∈ {0, 1}n. To reducethe number of inequalities in (3.2), it is sufficient due to symmetry to consider onlyvectors p, q ∈ {0, 1}n for which p �lex q. Obviously, the case p = q is also redundant.

The inequality (3.2) corresponding to p = 0 and any q ∈ {0, 1}n can be omittedfor the following reason. Multiplying the Oettli–Prager system (3.1) by the vector(|x1q1|, . . . , |xnqn|) we obtain

n∑

i,j=1

aΔij |xixjqi| +

n∑

i=1

bΔi |xiqi| ≥

n∑

i=1

|rixiqi| ≥∣∣∣∣∣

n∑

i=1

rixiqi

∣∣∣∣∣ .

Due to the symmetry of AΔ the first sum is equal to∑n

i,j=1 aΔij |xixjqj |, and hence

the inequality

n∑

i,j=1

aΔij |xixjqj | +

n∑

i=1

bΔi |xiqi| ≥

∣∣∣∣∣n∑

i=1

rixiqi

∣∣∣∣∣

is a consequence of the Oettli–Prager system.The inequality (3.2) corresponding to any p ∈ {0, 1}n and q = 1 is redundant as

it is a consequence of the inequality (3.2) with p′ ≡ 1 − p, q′ ≡ 0 (which is redundantfor the same reason as before); the right-hand sides of the inequalities are the same,and the left-hand side of the former inequality includes all of the left-hand side termsof the latter inequality and possibly some more positive terms.

Finally, we prove redundancy for all inequalities (3.2) with p, q ∈ {0, 1}n \ {0, 1},p ≺lex q, and

p �= 1 − q and ∀ i : (pi = 1 ∨ qi = 1).(3.6)

Clearly, (3.6) is equivalent to

∀ i : (pi = 1 ∨ qi = 1) and ∃ i : pi = qi = 1.(3.7)

Such an inequality is a consequence of the inequality (3.2) with p′ ≡ 1− q, q′ ≡ 1− p.The vectors p′, q′ satisfy the condition (3.3).

We compute the number of inequalities for system (3.2).Proposition 3.2. The system (3.2) consists of 1

2 (4n−3n−2·2n+3) inequalities.Proof. There are (2n − 2)2 pairs of vectors p, q satisfying p, q ∈ {0, 1}n \ {0, 1}.

Since for each pair p, q just one of the conditions p ≺lex q, p = q, or q ≺lex p is true,the number of the vectors p, q satisfying p, q ∈ {0, 1}n \ {0, 1}, p ≺lex q, is equal to12

((2n − 2)2 − (2n − 2)

)= 1

2 (2n − 2)(2n − 3).Now we focus on condition (3.7) which determines the “bad” cases. For every p, q

define

Ip,q ≡ {i = 1, . . . , n | pi = qi = 1}, Jp,q ≡ {i = 1, . . . , n | pi + qi = 1}.

Vectors p, q ∈ {0, 1}n satisfy (3.7) iff |Ip,q| ≥ 1 and |Ip,q| + |Jp,q| = n. The value(nk

)2n−k identifies the number of p, q ∈ {0, 1}n for which |Ip,q| = k and |Jp,q| = n − k.

Summing up for all k = 1, . . . , n and using binomial expansion of (1 + 2)n we obtainthe number of pairs p, q ∈ {0, 1}n with property (3.7) is equal to

(n

1

)2n−1 +

(n

2

)2n−2 + · · · +

(n

n

)20 = 3n − 2n.

Page 55: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.

DESCRIPTION OF SYMMETRIC SOLUTION SET 517

From this amount we have to exclude the cases when p = 1 or q = 1:

3n − 2n − 2 · (2n − 1) + 1.

Exactly half of them satisfy p ≺lex q. Eventually, we obtain the number in question:

1

2(2n − 2)(2n − 3) − 1

2

(3n − 2n − 2 · (2n − 1) + 1

)=

1

2(4n − 3n − 2 · 2n + 3).

The number of inequalities in (3.2) is exponential, but not as tremendous as byusing Fourier–Motzkin elimination (no better upper bound is known than the doubleexponential one). Moreover, system (3.2) is characterized explicitly and is much moreeasy to handle.

Concretely, for n = 2 we have only one additional inequality (in comparison totwo inequalities obtained by Fourier–Motzkin elimination [4]), for n = 3 this numberrises up to 12 (cf. [3], [4], [6]; Fourier–Motzkin elimination leads to 44 inequalities).

Example 3.3. For the two-dimensional case, the symmetric solution set is de-scribed by the system consisting of the Oettli–Prager inequalities (3.1)

aΔ11|x1| + aΔ

12|x2| + bΔ1 ≥ |−ac

11x1 − ac12x2 + bc

1|aΔ21|x1| + aΔ

22|x2| + bΔ2 ≥ |−ac

21x1 − ac22x2 + bc

2|

supplemented by only one inequality (3.2)

aΔ11x

21 + aΔ

22x22 + bΔ

1 |x1| + bΔ2 |x2| ≥ | − ac

11x21 + ac

22x22 + bc

1x1 − bc2x2|.

In the list below we mention some particular examples. Figures 3.1 and 3.2 illustratea solution set (light gray color) and a symmetric solution set (gray color):

1. (Figure 3.1) A =( [1,2] [0,a]

[0,a] −1

), b =

(22

); here the interval hull �Σ can be

arbitrarily larger than �Σsym, depending on the real parameter a > 0.

2. (Figure 3.2) A =( −1 [−5,5]

[−5,5] 1

), b =

(1

[1,3]

); here Σ is unbounded, but Σsym

is bounded.

Fig. 3.1. Solution set arbitrarily larger than symmetric solution set, a = 4.

Page 56: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.

518 MILAN HLADIK

Fig. 3.2. Unbounded solution set and bounded symmetric solution set.

3. For A =( [0,1] [1,2]

[1,2] [−1,0]

), b =

( [−1,1][−1,1]

)we have Σ = Σsym and both are bounded.

4. For A =( [−1,1] [0,2]

[0,2] [−1,1]

), b =

( [0,1][0,1]

)we have Σ = Σsym and both are un-

bounded.

4. Skew–symmetric solution set. In this section, let us suppose w.l.o.g. thatA = −AT and the diagonal of A is zero. Therefore Ac is skew–symmetric and AΔ isa symmetric matrix. The description of the skew–symmetric solution set Σskew is aconsequence of Theorem 2.2.

Proposition 4.1. Let r ≡ −Acx + bc. The skew–symmetric solution set Σskew

is described by the following system of inequalities:

AΔ|x| + bΔ ≥ |r|,(4.1)

n∑

i,j=1

aΔij |xixj(pi − qj)| +

n∑

i=1

bΔi |xi(pi + qi)| ≥

∣∣∣∣∣n∑

i=1

rixi(pi + qi)

∣∣∣∣∣

∀ p, q ∈{0, 1}n \ {0}, p �lex q.(4.2)

Proof. For all A ∈ A and b1, b2 ∈ b we have that 12 (A − AT ) ∈ A is a skew–

symmetric matrix and 12 (b1 + b2) ∈ b. Thus, Σskew can be equivalently described as

the set of all x ∈ Rn satisfying

Ax = b1,(4.3)

AT (−x) = b2(4.4)

for some A ∈ A, b1, b2 ∈ b. Put y ≡ −x, d ≡ b. Then

r1 = −Acx + bc = −(−Ac)(−x) + bc = −(Ac)T y + dc = r2 ≡ r.

Page 57: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.

DESCRIPTION OF SYMMETRIC SOLUTION SET 519

Apply Theorem 2.2 on system (4.3)–(4.4). We obtain that Σskew is described by (4.1)–(4.2). To reduce the number of inequalities in (4.2), it is sufficient due to symmetryto consider only vectors p, q ∈ {0, 1}n \ {0} for which p �lex q.

The number of inequalities in (4.2) is 2n−1(2n − 1) and can be furthermore de-creased to the number 2n −n− 1; see Theorem 4.2 where we claim that it is sufficientto consider only such inequalities for which p = q, the others being redundant.

Theorem 4.2. Let r ≡ −Acx + bc. The skew–symmetric solution set Σskew isdescribed by the following system of inequalities:

AΔ|x| + bΔ ≥ |r|,(4.5)

i<j

aΔij |xixj(pi − pj)| +

n∑

i=1

bΔi |xipi| ≥

∣∣∣∣∣n∑

i=1

rixipi

∣∣∣∣∣ ∀ p ∈ {0, 1}n \ {0}, p �= ek.

(4.6)

Proof. For given vectors p, q ∈ {0, 1}n denote the inequality (4.2) correspondingto p, q by Ineq(p, q). Let p, q be fixed, and define vectors s, t ∈ {0, 1}n componentwiseby

si =

{1 pi = qi = 1,

0 otherwise,ti =

{1 (pi = 1) ∨ (qi = 1),

0 otherwise.

We prove that Ineq(p, q) is a consequence of the inequality

1

2

(Ineq(s, s) + Ineq(t, t)

)(4.7)

and hence can be omitted.The right-hand side of the inequality Ineq(p, q) is |∑pi=1 rixi +

∑qi=1 rixi| =

|∑si=1 rixi +∑

ti=1 rixi|, which is not greater than |∑si=1 rixi| + |∑ti=1 rixi|, the

right-hand side of (4.7). The second sum in Ineq(p, q) is equal to∑

pi=1 bΔi |xi| +∑

qi=1 bΔi |xi|, which is equal to

∑si=1 bΔ

i |xi| +∑

ti=1 bΔi |xi|, the second sum in (4.7).

To prove the similar relations for the corresponding first sums let us note that diagonalterms (i.e., when i = j) in Ineq(p, q) are nonnegative, while diagonal terms are zeroin (4.7). We gather the remaining terms into symmetric pairs and show that for eachi < j one has

aΔij |xixj(pi − qj)| + aΔ

ij |xjxi(pj − qi)| ≥ 1

2

(aΔ

ij |xixj(si − sj)| + aΔij |xixj(ti − tj)|

)

+1

2

(aΔ

ij |xjxi(sj − si)| + aΔij |xjxi(tj − ti)|

)

= aΔij |xixj(si − sj)| + aΔ

ij |xixj(ti − tj)|.

In fact, we prove a stronger inequality

|pi − qj | + |pj − qi| ≥ |si − sj | + |ti − tj |.

Page 58: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.

520 MILAN HLADIK

This can be shown simply by the enumeration of all possible values of pi, pj , qi, andqj , which is done in the following:

pi pj qi qj |pi − qj | + |pj − qi| si sj ti tj |si − sj | + |ti − tj |0 0 0 0 0 0 0 0 0 01 0 0 0 1 0 0 1 0 10 1 0 0 1 0 0 0 1 11 1 0 0 2 0 0 1 1 00 0 1 0 1 0 0 1 0 11 0 1 0 2 1 0 1 0 20 1 1 0 0 0 0 1 1 01 1 1 0 1 1 0 1 1 10 0 0 1 1 0 0 0 1 11 0 0 1 0 0 0 1 1 00 1 0 1 2 0 1 0 1 21 1 0 1 1 0 1 1 1 10 0 1 1 2 0 0 1 1 01 0 1 1 1 1 0 1 1 10 1 1 1 1 0 1 1 1 11 1 1 1 0 1 1 1 1 0

Now we have that the right-hand side of Ineq(p, q) is less or equal to the right-hand side of (4.7), and the left-hand side of Ineq(p, q) is greater or equal to theleft-hand side of (4.7). Therefore, Ineq(p, q) is redundant, and (4.2) can be replacedby the system

i<j

aΔij |xixj(pi − pj)| +

n∑

i=1

bΔi |xipi| ≥

∣∣∣∣∣n∑

i=1

rixipi

∣∣∣∣∣ ∀ p ∈ {0, 1}n \ {0}.(4.8)

The last reduction follows from the fact that for each unit vector p ≡ ek the cor-responding inequality in (4.8) represents an |xk|-multiple of the kth Oettli–Pragerinequality (4.5).

The resulting number of inequalities in the description is again exponential. But

in comparison with the upper bound 8(

32

)2κ+1

, κ = 12n(n + 1), for the final number

of inequalities obtained by Fourier–Motzkin elimination (see [4]), the improvement issignificant. For n = 2, system (4.6) comprises one inequality, and for n = 3 we getfour inequalities. In these cases, Fourier–Motzkin elimination yields two and eightinequalities, respectively.

Example 4.3. For n = 2, system (4.6) is composed of only one inequality

bΔ1 |x1| + bΔ

2 |x2| ≥ |bc1x1 + bc

2x2|.

In this two-dimensional case the set Σskew represents a polyhedral set, which is convexin each orthant (cf. [4]). The following are some particular examples:

1. For A =( 0 [1,2]

[−2,−1] 0

), b =

( [0,2][−2,2]

)we have Σ = Σskew and both are

bounded.2. For A =

( 0 [−1,1][−1,1] 0

), b =

( [0,2][−2,2]

)we have Σ = Σskew and both are

unbounded.

Page 59: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.

DESCRIPTION OF SYMMETRIC SOLUTION SET 521

REFERENCES

[1] G. Alefeld and J. Herzberger, Introduction to Interval Computations, Academic Press,London, 1983.

[2] G. Alefeld and G. Mayer, On the symmetric and unsymmetric solution set of intervalsystems, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 1223–1240.

[3] G. Alefeld, V. Kreinovich, and G. Mayer, The shape of the symmetric solution set, inProceedings of the International Workshop on Applications of Interval Computations, ElPaso, 1995, B. Kearfott and V. Kreinovich, eds., Kluwer Academic Publishers, Dordrecht,1996.

[4] G. Alefeld, V. Kreinovich, and G. Mayer, On the shape of the symmetric, persymmetric,and skew-symmetric solution set, SIAM J. Matrix Anal. Appl., 18 (1997), pp. 693–705.

[5] G. Alefeld, V. Kreinovich, and G. Mayer, The shape of the solution set for systems ofinterval linear equations with dependent coefficients, Math. Nachr., 192 (1998), pp. 23–36.

[6] G. Alefeld, V. Kreinovich, and G. Mayer, On symmetric solution sets, in Inclusion Meth-ods for Nonlinear Problems: With Applications in Engineering, Economics and Physics.Proceedings of the International GAMM-Workshop, Munich and Oberschleissheim, 2000,Comput. Suppl. 16, J. Herzberger, ed., Springer, Wien, 2003, pp. 1–22.

[7] G. Alefeld, V. Kreinovich, and G. Mayer, On the solution sets of particular classes oflinear interval systems, J. Comput. Appl. Math., 152 (2003), pp. 1–15.

[8] R. Araiza, G. Xiang, O. Kosheleva, and D. Skulj, Under interval and fuzzy uncer-tainty, symmetric Markov chains are more difficult to predict, in Proceedings of the 26thInternational Conference of the North American Fuzzy Information Processing SocietyNAFIPS’2007, M. Reformat and M. R. Berthold, eds., San Diego, CA, 2007, pp. 526–531.

[9] M. Hladik, Solution set characterization of linear interval systems with a specific dependencestructure, Reliab. Comput., 13 (2007), pp. 361–374.

[10] Z. Kulpa, A. Pownuk, and I. Skalna, Analysis of linear mechanical structures with uncer-tainties by means of interval methods, Comput. Assist. Mech. Eng. Sci., 5 (1998), pp. 443–477.

[11] W. Oettli and W. Prager, Compatibility of approximate solution of linear equations withgiven error bounds for coefficients and right-hand sides, Numer. Math., 6 (1964), pp. 405–409.

[12] M. Padberg, Linear Optimization and Extension, Springer, Berlin, 1999.[13] A. Schrijver, Theory of Linear and Integer Programming, John Wiley & Sons Ltd., Chich-

ester, UK, 1998.

Page 60: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Int. J. Appl. Math. Comput. Sci., 2012, Vol. 22, No. 3, 561–574DOI: 10.2478/v10006-012-0043-4

ENCLOSURES FOR THE SOLUTION SET OF PARAMETRIC INTERVALLINEAR SYSTEMS

MILAN HLADIK

Department of Applied Mathematics, Faculty of Mathematics and PhysicsCharles University, Malostranske nam. 25, 118 00, Prague, Czech Republic

e-mail: [email protected]

Faculty of Informatics and StatisticsUniversity of Economics, nam. W. Churchilla 4, 13067, Prague, Czech Republic

We investigate parametric interval linear systems of equations. The main result is a generalization of the Bauer–Skeeland the Hansen–Bliek–Rohn bounds for this case, comparing and refinement of both. We show that the latter bounds arenot provable better, and that they are also sometimes too pessimistic. The presented form of both methods is suitablefor combining them into one to get a more efficient algorithm. Some numerical experiments are carried out to illustrateperformances of the methods.

Keywords: linear interval systems, solution set, interval matrix.

1. Introduction

Solving systems of interval linear equations is a fun-damental problem in interval computing (Fiedler et al.,2006; Neumaier, 1990). Therein, one assumes that thematrix entries and the right-hand side components perturbindependently and simultaneously within given intervals.However, this assumption is hardly true in practical prob-lems. Very often various correlations between input quan-tities appear, e.g., in robotics (Merlet, 2009) or in dynamicsystems (Busłowicz, 2010).

Linear dependences were investigated by several au-thors. The first paper on parametric interval systems (witha special structure) is that by Jansson (1991). For a spe-cial class of parametric systems, Neumaier and Pownuk(2007) proposed an effective method. The general prob-lem of interval parameter dependent linear systems wasfirst treated by Rump (1994).

Theoretical papers involve, e.g., characterization ofthe boundary of the solution set (Popova and Kramer,2008), the quality of the solution set (Popova, 2002), oran explicit characterization of a class of parametric inter-val systems (Hladık, 2008; Popova, 2009). Shapes of theparticular solution sets were first analyzed by Alefeld etal. (1997; 2003).

Kolev (2006) proposed a direct method and an itera-

tive one (Kolev, 2004) for computing an enclosure of thesolution set. Parametrized Gauss–Seidel iteration was em-ployed by Popova (2001). A direct method was given bySkalna (2006), and a monotonicity approach by Popova(2006a), Rohn (2004), and Skalna (2008). Inner and outerapproximations by a fixed-point method were developedby Rump (1994; 2010), and implemented by Popova andKramer (2007). A Mathematica package for solving para-metric interval systems is introduced by Popova (2004a).

Let

p := [p, p] = {p ∈ RK | p ≤ p ≤ p}

be an interval vector. By pc := 12 (p + p) and pΔ :=

12 (p − p) we denote the corresponding center and the ra-dius vector. Analogous notation is used for interval matri-ces. We suppose that the reader is familiar with the basicinterval arithmetic.

In this paper, we consider a general parametric sys-tem of interval linear equations in the form

A(p)x = b(p), p ∈ p, (1)

where

A(p) =

K∑

k=1

pkAk, b(p) =

K∑

k=1

pkbk.

Page 61: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

562 M. Hladık

Herein, p is the interval vector representing K interval pa-rameters, and Ak ∈ Rn×n and bk ∈ Rn, k = 1, . . . , K,are given matrices and vectors. Notice that this linearparametric form comprises affine linear parametric matri-ces and vectors,

A0 +

K∑

k=1

pkAk, b0 +

K∑

k=1

pkbk,

since one can simply introduce an idle parameter p0 :=[1, 1]. In our approach, no better results are obtained ex-plicitly for the affine linear parametric structure.

The solution set is defined as

Σ := {x ∈ Rn | A(p)x = b(p), p ∈ p}.

We use the following notation: ρ(A) stands for thespectral radius of a matrix A, Ai. for the i-th row of A, Ifor the identity matrix and ei for its ith column. The diag-onal matrix with entries z1, . . . , zn is denoted by diag(z),and A(p) is a short form for a family A(p), p ∈ p. Wewrite interval quantities in boldface.

The paper is structured as follows. In Section 2 wediscuss the regularity of a parametric interval matrix, andin Section 3 enclosures of a parametric interval linear sys-tem. We generalize the Bauer–Skeel and the Hansen–Bliek–Rohn bounds, which were developed for a standardinterval linear system; for the reader’s convenience, werecall the original formulae in Appendix. Moreover, wepropose efficient refinements of both methods.

2. Regularity of parametric intervalmatrices

In order to develop an enclosure for the parametric intervalsystem we have to discuss the regularity of the parametricinterval matrix A(p) first. The parametric interval matrixis called regular if A(p) is nonsingular for every p ∈ p.

Preconditioning and relaxing the parametric intervalmatrix, we obtain an interval matrix

A =

K∑

k=1

pk(RAk),

i.e.,

Aij =

[ K∑

k=1

min(p

k(RAk)ij , pk(RAk)ij

),

K∑

k=1

max(p

k(RAk)ij , pk(RAk)ij

)].

Clearly, if A is regular, then so is A(p). Thus we can em-ploy the well-known Beeck–Rump sufficient condition forthe regularity of interval matrices (Beeck, 1975; Rump,1983; Rex and Rohn, 1998).

Theorem 1. Let R ∈ Rn×n be such that

ρ

(|I − RA(pc)| +

K∑

k=1

pΔk |RAk|

)< 1. (2)

Then A(p) is regular.

Usually, the best choice for the matrix R is the nu-merically computed inverse of A(pc). In the following, weconsider the case R = A(pc)−1. For this special case, thesufficient condition was already stated by Popova (2004b).

How strong is the sufficient condition presented inTheorem 1? The following result shows a class ofproblems where the condition is not only sufficient, butalso necessary. It is a generalization of Rohn’s result(Rohn, 1989, Corollary 5.1.(ii)).

Proposition 1. Suppose that A(pc) is nonsingular andthere are z ∈ {±1}n and y ∈ {±1}K such that for everyk ∈ {1, . . . , K} we have

yk diag(z)A(pc)−1Ak diag(z) ≥ 0.

Then A(p) is regular if and only if

ρ

(K∑

k=1

pΔk |A(pc)−1Ak|

)< 1.

Proof. One implication is obvious in view of Theorem 1.We prove the converse by contradiction. Denote

A∗ :=

K∑

k=1

pΔk |A(pc)−1Ak|

=K∑

k=1

pΔk yk diag(z)A(pc)−1Ak diag(z),

and suppose for contradiction that ρ(A∗) ≥ 1. Since A∗

is non-negative, according to the Perron–Frobenius theo-rem (Horn and Johnson, 1985; Meyer, 2000) there is somenon-zero vector x such that

A∗x = ρ(A∗)x

or, equivalently,(

I − 1

ρ(A∗)A∗)

x = 0.

Premultiplying by A(pc) diag(z), we get(

A(pc) diag(z)− 1

ρ(A∗)A(pc) diag(z)A∗

)x = 0

or(

K∑

k=1

(pc

k − yk

ρ(A∗)pΔ

k

)Ak

)(diag(z)x) = 0.

Page 62: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Enclosures for the solution set of parametric interval linear systems 563

The vector diag(z)x is non-zero, and the constraint ma-trix belongs to A(p) since

pck − yk

ρ(A∗)pΔ

k ∈ pk, k = 1, . . . , K.

Thus we found a singular matrix in A(p), which is a con-tradiction. �

3. Enclosures for parametric interval linearsystems

The main problem studied within this paper is to find atight enclosure for the solution set Σ, where an enclosureis any interval vector containing Σ. A simple enclosurecan be acquired by relaxing the system (1) to an intervallinear system Ax = b, where (by using interval arith-metic)

A :=

K∑

k=1

pkAk, b :=

K∑

k=1

pkbk.

Since many efficient solvers of interval linear systems usepreconditioning, we should note that instead of precondi-tioning the system Ax = b by a matrix R it is better toprecondition the original data. That is, consider A′x = b′,where

A′ :=K∑

k=1

pk(RAk), b′ :=K∑

k=1

pk(Rbk). (3)

Proposition 2. We have A′ ⊆ RA and b′ ⊆ Rb.

Proof. Let i, j ∈ {1, . . . , n}. Due to the sub-distributivityof interval arithmetic, we can write

A′ij =

K∑

k=1

pk(RAk)ij =K∑

k=1

pk

(n∑

l=1

RilAklj

)

⊆K∑

k=1

n∑

l=1

pkRilAklj =

n∑

l=1

K∑

k=1

Ril(pkAklj)

=

n∑

l=1

Ril

(K∑

k=1

pkAklj

)=

n∑

l=1

RilAlj = (RA)ij .

We proceed similarly for b′ ⊆ Rb. �To obtain tighter enclosures, we have to inspect para-

metric systems more carefully. Recently, Popova (2009)proved that the inequality system given below in (4) is anexplicit description of a parametric interval linear systemof the so-called zero or first class; in this class, for eachk = 1, . . . , K , the nonzero entries of (Ak | bk) are situ-ated in one row only. First we show this is a necessary (butnot sufficient in general) characterization for any paramet-ric interval linear system.

Theorem 2. If x ∈ Rn solves (1) for some p ∈ p, then itsolves

|A(pc)x − b(pc)| ≤K∑

k=1

pΔk |Akx − bk|. (4)

Proof. Let x ∈ Rn be a solution to A(p)x = b(p) forsome p ∈ p. Then, in a similar way as for the well knownOettli–Prager theorem, we derive

|A(pc)x − b(pc)|

=

∣∣∣∣K∑

k=1

pck(Akx − bk)

∣∣∣∣

=

∣∣∣∣K∑

k=1

pck(Akx − bk) −

K∑

k=1

pk(Akx − bk)

∣∣∣∣

=

∣∣∣∣K∑

k=1

(pck − pk)(Akx − bk)

∣∣∣∣

≤K∑

k=1

|pck − pk||Akx − bk|

≤K∑

k=1

pΔk |Akx − bk|.

�A sufficient and necessary characterization of Σ is

given below in terms of infinite systems of inequalities.From another viewpoint, the system is composed of aunion of systems (4) over all possible preconditioningsof (1). An open question arises whether or not particularextremal points of Σ can be achieved by an appropriatepreconditioning of (1).

Theorem 3. We have that x ∈ Σ if and only if it solves

yT (A(pc)x − b(pc)) ≤K∑

k=1

pΔk |yT (Akx − bk)| (5)

for every y ∈ Rn.

Proof. Let x ∈ Rn. Then x ∈ Σ if and only if there is avector q ∈ [−1, 1]K such that

A(pc)x − b(pc) =

K∑

k=1

qkpΔk (Akx − bk).

Set d := A(pc)x − b(pc), and let D ∈ Rn×K bea matrix whose k-th column is equal to pΔ

k (Akx − bk),k = 1, . . . , K . Then x ∈ Σ if and only if there is anoptimal solution to the linear system

Dq = d, −1 ≤ q ≤ 1,

Page 63: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

564 M. Hladık

or, in other words, if and only if the linear program

max 0T q subject to Dq = d, −1 ≤ q ≤ 1

has an optimal solution. Consider the corresponding dualproblem

min dT y + 1T (u + v)

subject to

DT y + u − v = 0, u, v ≥ 0,

which is always feasible. According to the theory ofduality in linear programming (Padberg, 1999; Schri-jver, 1998), the existence of an optimal solution to oneproblem implies the same for the second one and the opti-mal values are equal.

For an optimal solution of the dual problem and everyi ∈ {1, . . . , K} either ui = 0 or vi = 0. Otherwise, wecan subtract a small positive amount from both ui and vi

and decrease the optimal value. If ui = 0, then (u+v)i =vi = (DT y)i ≥ 0. Similarly, vi = 0 implies (u + v)i =ui = −(DT y)i ≥ 0. Hence we can derive u+v = |DT y|,and the dual problem takes the form

min dT y + 1T |DT y| subject to y ∈ Rn.

Since the objective function is positive homogeneous, theproblem has an optimal solution (equal to zero) if and onlyif the objective function is non-negative, i.e.,

dT y + 1T |DT y| ≥ 0, ∀y ∈ Rn

or, substituting y := −y,

yT d ≤ |yT D|1, ∀y ∈ Rn.

In the setting of D and d, we get (5). �Based on Theorem 2 we develop a generalization of

the Bauer–Skeel bounds (Rohn, 2010; Stewart, 1998) toparametric interval systems. Note that the generalizedBauer–Skeel bounds yield the same enclosure as the directmethod by Skalna (2006). However, the following form ismore convenient for combining it with the Hansen–Bliek–Rohn method and for refinements.

Theorem 4. Suppose that A(pc) is nonsingular. Write

M :=

K∑

k=1

pΔk |A(pc)−1Ak|,

x∗ := A(pc)−1b(pc).

If ρ(M) < 1, then[x∗ − (I − M)−1

K∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|,

x∗ + (I − M)−1K∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|

]

is an interval enclosure to Σ.

Proof. Preconditioning the system A(p)x = b(p)by the matrix A(pc)−1, we obtain an equivalent systemA(pc)−1A(p)x = A(pc)−1b(p), or

K∑

k=1

pkA(pc)−1Akx =

K∑

k=1

pkA(pc)−1bk, p ∈ p.

According to Theorem 2 each solution to this system sat-isfies

|A(pc)−1A(pc)x − A(pc)−1b(pc)|

≤K∑

k=1

pΔk |A(pc)−1(Akx − bk)|.

Rearranging the system, we get

|x − x∗| ≤K∑

k=1

pΔk |A(pc)−1(Akx − bk)| (6)

=K∑

k=1

pΔk |A(pc)−1(Ak(x − x∗ + x∗) − bk)|

≤K∑

k=1

pΔk |A(pc)−1Ak(x − x∗)|

+

K∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|

≤K∑

k=1

pΔk |A(pc)−1Ak||x − x∗|

+

K∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|.

Equivalently,

(I −

K∑

k=1

pΔk |A(pc)−1Ak|

)|x − x∗|

≤K∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|.

From ρ(M) < 1, it follows (Fiedler et al., 2006;Meyer, 2000, Theorem 1.31) that

(I − M)−1 =

∞∑

j=0

M j .

Since the matrix M is non-negative, so is (I − M)−1.Thus we may multiply the system by (I −M)−1 to obtain

|x − x∗| ≤ (I − M)−1K∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|.

Page 64: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Enclosures for the solution set of parametric interval linear systems 565

This means, that

x ≥ x∗ − (I − M)−1K∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|,

x ≤ x∗ + (I − M)−1K∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|.

�The Hansen–Bliek–Rohn method (Fiedler et al.,

2006; Rohn, 1993, Theorem 2.39) gives an enclosure forthe solution set of an interval linear system. The fol-lowing is a generalization to parametric interval linearsystems; however, the result is the same as the Hansen–Bliek–Rohn bounds applied on the preconditioned system(3) by R := A(pc)−1. For the reader’s convenience, wepresent a detailed proof, which will be followed up in thenext section for a refinement. Note that an alternative formof the enclosure was developed by Neumaier (1999) aswell as Ning and Kearfott (1997).

Theorem 5. Suppose that A(pc) is nonsingular. Using thenotation from Theorem 4, write

M∗ := (I − M)−1,

x0 := M∗|x∗| +

K∑

k=1

pΔk M∗|A(pc)−1bk|.

If ρ(M) < 1, then any solution x to (1) satisfies

xi ≤ max

{x0

i + (x∗i − |x∗

i |)m∗ii,

1

2m∗ii − 1

(x0

i + (x∗i − |x∗

i |)m∗ii

)},

and

xi ≥ min

{− x0

i + (x∗i + |x∗

i |)m∗ii,

1

2m∗ii − 1

(− x0

i + (x∗i + |x∗

i |)m∗ii

)}.

Proof. From the proof of Theorem 4 we know that eachsolution to (1) satisfies

|x − x∗| ≤K∑

k=1

pΔk |A(pc)−1(Akx − bk)| (7)

≤K∑

k=1

pΔk |A(pc)−1Ak||x| +

K∑

k=1

pΔk |A(pc)−1bk|.

This inequality system implies

x − x∗

≤K∑

k=1

pΔk |A(pc)−1Ak||x| +

K∑

k=1

pΔk |A(pc)−1bk|, (8)

and

|x| − |x∗|

≤K∑

k=1

pΔk |A(pc)−1Ak||x| +

K∑

k=1

pΔk |A(pc)−1bk|. (9)

Let i ∈ {1, . . . , n}. Consider the system (9) in whichthe i-th inequality is replaced by the i-th inequality from(8),

|x| − |x∗| + (xi − x∗i − |x|i + |x∗

i |)ei

≤K∑

k=1

pΔk |A(pc)−1Ak||x| +

K∑

k=1

pΔk |A(pc)−1bk|.

This can be rewritten as

(I −

K∑

k=1

pΔk |A(pc)−1Ak|

)|x| + (xi − |x|i)ei

≤ |x∗| + (x∗i − |x∗

i |)ei +

K∑

k=1

pΔk |A(pc)−1bk|.

From ρ(M) < 1, it follows (Fiedler et al., 2006;Meyer, 2000, Theorem 1.31) that

(I − M)−1 =

∞∑

j=0

M j .

Since the matrix M is non-negative, M∗ = (I −M)−1 ≥I . Thus we may multiply the system by M∗ ≥ 0 to obtain

|x| + (xi − |x|i)M∗ei

≤ M∗|x∗| + (x∗i − |x∗

i |)M∗ei

+

K∑

k=1

pΔk M∗|A(pc)−1bk|.

Setting

x0 = M∗|x∗| +K∑

k=1

pΔk M∗|A(pc)−1bk|,

the system reads

|x| + (xi − |x|i)M∗ei ≤ x0 + (x∗i − |x∗

i |)M∗ei.

The i-th inequality becomes

|xi| + (xi − |x|i)m∗ii ≤ x0

i + (x∗i − |x∗

i |)m∗ii.

Distinguish two cases. If xi ≥ 0, then

xi ≤ x0i + (x∗

i − |x∗i |)m∗

ii.

Page 65: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

566 M. Hladık

If xi < 0, then

−xi + 2xim∗ii ≤ x0

i + (x∗i − |x∗

i |)m∗ii.

Using the fact that M∗ ≥ I , we get that 2m∗ii ≥ 2 > 1

and

xi ≤ 1

2m∗ii − 1

(x0

i + (x∗i − |x∗

i |)m∗ii

).

Summing up, we have an upper bound on xi as follows:

xi ≤ max{x0

i + (x∗i − |x∗

i |)m∗ii,

1

2m∗ii − 1

(x0

i + (x∗i − |x∗

i |)m∗ii

)}.

To obtain a lower bound on xi, we realize that Ax =b if and only if A(−x) = −b. Thus, we apply the previousresult to the parametric interval system

A(p)(−x) = −b(p).

That is, the sign of bc and x∗ will be changed and

−xi ≤ max{x0

i + (−x∗i − |x∗

i |)m∗ii,

1

2m∗ii − 1

(x0

i + (−x∗i − |x∗

i |)m∗ii

)},

or,

xi ≥ min{

− x0i + (x∗

i + |x∗i |)m∗

ii,

1

2m∗ii − 1

(− x0

i + (x∗i + |x∗

i |)m∗ii

)}.

Remark 1. The Bauer–Skeel and Hansen–Bliek–Rohnmethods are similar to each other since they are derivedfrom the same basis. Nevertheless, as we will see in Sec-tion 6, both methods are incomparable, that is, sometimesthe former is better and sometimes the latter. Thus, to ob-tain enclosure as tight as possible we propose to computeboth and take their intersection. The overall computa-tional cost is low since we calculate the inverses A(pc)−1,M∗ = (I − M)−1 and other intermediate expressionsonly once. Using notations of Theorems 4 and 5, we com-pute the upper endpoints of the resulting enclosure as theminima of

x∗i + M∗

i.K∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|,

and

max{x0

i + (x∗i − |x∗

i |)m∗ii,

1

2m∗ii − 1

(x0

i + (x∗i − |x∗

i |)m∗ii

)},

i = 1, . . . , n. We proceed similarly for the lower end-points.

4. Refinement of enclosures

Now we show that the enclosures discussed in the previ-ous section can be made tighter. The idea is to use thoseenclosures to check some sign invariances, and if theyhold true, then the process of deriving the enclosures canbe refined. Note that the proposed refinements run alwaysin polynomial time.

Let x be the enclosure obtained by Theorems 4 or 5or by any other method, and let k ∈ {1, . . . , K}. Writeak := A(pc)−1(Akx − bk). We will employ notationsfrom Theorems 4 and 5, too. For the refinements, we as-sume ρ(M) < 1.

4.1. Refinement of the Bauer–Skeel bounds. First,we consider the Bauer–Skeel bounds. If ak ≥ 0, then forevery x ∈ Σ one has

|A(pc)−1(Akx − bk)|= A(pc)−1Ak(x − x∗) + A(pc)−1(Akx∗ − bk). (10)

Otherwise, if ak ≤ 0, then

|A(pc)−1(Akx − bk)|= −A(pc)−1Ak(x − x∗) − A(pc)−1(Akx∗ − bk).

(11)

Otherwise, we estimate the term from above as in theproof

|A(pc)−1(Akx − bk)|≤ |A(pc)−1Ak||x − x∗| + |A(pc)−1(Akx∗ − bk)|.

(12)

Anyway, the inequality (6) can be written as

|x − x∗| ≤K∑

k=1

pΔk |A(pc)−1(Akx − bk)|

≤ Y (x − x∗) + y + Z|x − x∗| + z

for some Y, Z ∈ Rn×n, y, z ∈ Rn, and Z ≥ 0. Here, Yand y are summed up from (10) and (11), whereas Z andz come from (12). Now, we proceed as follows:

|x − x∗| ≤ |Y ||x − x∗| + y + Z|x − x∗| + z,

whence

(I − |Y | − Z)|x − x∗| ≤ y + z,

and

x ≤ x∗ + (I − |Y | − Z)−1(y + z),

x ≥ x∗ − (I − |Y | − Z)−1(y + z).

Page 66: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Enclosures for the solution set of parametric interval linear systems 567

Algorithm 1 (Refinement of the Bauer–Skeel method)1: Y := 0; y := 0; Z := 0; z := 0;2: x∗ := A(pc)−1b(pc);3: Let x be an initial enclosure to Σ;4: for k = 1, . . . , K do5: ak := A(pc)−1(Akx − bk);6: for j = 1, . . . , n do7: if ak

j ≥ 0 then8: Yj. := Yj. + pΔ

k A(pc)−1j. Ak; yj := yj + pΔ

k A(pc)−1j. (Akx∗ − bk);

9: else if akj ≤ 0 then

10: Yj. := Yj. − pΔk A(pc)−1

j. Ak; yj := yj − pΔk A(pc)−1

j. (Akx∗ − bk);11: else12: Zj. := Zj. + pΔ

k |A(pc)−1j. Ak|; zj := zj + pΔ

k |A(pc)−1j. (Akx∗ − bk)|;

13: end if14: end for15: end for16: return

[x∗ − (I − |Y | − Z)−1(y + z), x∗ + (I − |Y | − Z)−1(y + z)

], an enclosure to Σ.

Since |Y |+Z is non-negative and |Y |+Z ≤ M , theinverse matrix (I −|Y |−Z)−1 exists and is non-negative.

Notice that even tighter bounds can be calculated bysplitting the terms of (6) componentwise. That is, wecheck the signs of ak

i and aki for every i = 1, . . . , n, and

use the i-th estimate either in (10), (11) or (12) accord-ingly. The method is described in Algorithm 1.

In the following we claim that the resulting enclosureis always as good as the initial Bauer–Skeel bounds.

Proposition 3. Let x be the enclosure obtained by The-orem 4, and x′ the enclosure obtained by Algorithm 1.Then x′ ⊆ x.

Proof. Recall that

x = x∗ + (I − M)−1K∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|

= x∗ +

∞∑

j=1

M jK∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|,

and

x ′ = x∗ + (I − |Y | − Z)−1(y + z)

= x∗ +

∞∑

j=1

(|Y | + Z)j(y + z).

From

y + z ≤K∑

k=1

pΔk |A(pc)−1(Akx∗ − bk)|

and 0 ≤ |Y | + Z ≤ M,we obtain x ′ ≤ x. We proceedSimilarly for x′ ≥ x. �

4.2. Refinement of the Hansen–Bliek–Rohn bounds.We will refine the Hansen–Bliek–Rohn bounds in thesame manner as the Bauer–Skeel ones. If ak ≥ 0, then

|A(pc)−1(Akx − bk)|= A(pc)−1Akx − A(pc)−1bk. (13)

Otherwise, if ak ≤ 0, then

|A(pc)−1(Akx − bk)|= −A(pc)−1Akx + A(pc)−1bk. (14)

Otherwise, we use the standard estimation for theHansen–Bliek–Rohn method,

|A(pc)−1(Akx − bk)|≤ |A(pc)−1Ak||x| + |A(pc)−1bk|. (15)

Thus (7) takes the form of

|x − x∗| ≤K∑

k=1

pΔk |A(pc)−1(Akx − bk)|

≤ Y x − y + Z|x| + z

≤ (|Y | + Z)|x| − y + z,

where Y, Z ∈ Rn×n, y, z ∈ Rn, and Z ≥ 0. Next, weproceed as in the proof of Theorem 5. The method is sum-marized in Algorithm 2.

We show that the refinement of the Hansen–Bliek–Rohn method is in each component at least as tight as theoriginal Hansen–Bliek–Rohn bounds.

Proposition 4. Let x be the enclosure obtained by The-orem 5, and x′ the enclosure obtained by Algorithm 2.Then x′ ⊆ x.

Page 67: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

568 M. Hladık

Algorithm 2 (Refinement of the Hansen–Bliek–Rohn method)1: Y := 0; y := 0; Z := 0; z := 0;2: x∗ := A(pc)−1b(pc);3: Let x be an initial an enclosure to Σ;4: for k = 1, . . . , K do5: ak := A(pc)−1(Akx − bk);6: for j = 1, . . . , n do7: if ak

j ≥ 0 then8: Yj. := Yj. + pΔ

k A(pc)−1j. Ak; yj := yj + pΔ

k A(pc)−1j. bk;

9: else if akj ≤ 0 then

10: Yj. := Yj. − pΔk A(pc)−1

j. Ak; yj := yj − pΔk A(pc)−1

j. bk;11: else12: Zj. := Zj. + pΔ

k |A(pc)−1j. Ak|; zj := zj + pΔ

k |A(pc)−1j. bk|;

13: end if14: end for15: end for16: M∗ := (I − |Y | − Z)−1; x0 := M∗(|x∗| − y + z);17: for i = 1, . . . , n do

18: x ′i := max

{x0

i + (x∗i − |x∗

i |)m∗ii,

12m∗

ii−1

(x0

i + (x∗i − |x∗

i |)m∗ii

)};

19: x ′i := min

{−x0

i + (x∗i + |x∗

i |)m∗ii,

12m∗

ii−1

(− x0

i + (x∗i + |x∗

i |)m∗ii

)};

20: end for21: return x ′, an enclosure to Σ.

Proof. Let i ∈ {1, . . . , n}. We prove x ′i ≤ xi. The lower

case is done accordingly. Write

M ′∗ := (I − |Y | − Z)−1,

x′0 := M∗(|x∗| − y + z).

Clearly, M ′∗ ≤ M∗ and x′0 ≤ x0. Thus

−x′0i + (x∗

i + |x∗i |)m′∗

ii ≤ −x0i + (x∗

i + |x∗i |)m∗

ii.

Since m′∗ii ≥ 1, we have

1

2m′∗ii − 1

≤ 1,

and the term

1

2m∗ii − 1

(x0

i + (x∗i − |x∗

i |)m∗ii

)

is the maximizer in Step 18 of Algorithm 2 if and only ifit is non-positive. In this case,

1

2m′∗ii − 1

(x′0

i + (x∗i − |x∗

i |)m′∗ii

)

≤ 1

2m∗ii − 1

(x0

i + (x∗i − |x∗

i |)m∗ii

),

which completes the proof. �

5. Time complexity

Let us analyse the theoretical time complexity of the pro-posed methods. Both Bauer–Skeel and Hansen–Bliek–Rohn methods have the same asymptotic time complex-ities. The most computationally expensive is to calcu-late the matrix M . It costs O(n3K) operations by us-ing a naive implementation. However, the matrices Ak,k = 1, . . . , K , are usually sparse, in which case the com-plexity is lower.

Denote by P the maximum number of non-zero en-tries in some Ak, k = 1, . . . , K , that is, the maximumnumber of appearances of some parameter pk. Then, com-putation of M can be implemented in O(nK(n+P )), thematrix inverse is in O(n3) and the remaining calculationis negligible with respect to the worst case time complex-ity. Thus the algorithms are in O(n3 + n2K + nPK).

For instance, for symmetric interval systems, wehave P = 2, K = 1

2n(n − 1), so the total cost is O(n4).For Toeplitz systems we have P = O(n), K = O(n), sothe time complexity is O(n3).

Concerning the refinements discussed in Section 4 itturns out that their asymptotic time complexity is the sameas that of the original methods, that is, O(n3 + n2K +nPK). Of course, the multiplicative terms are greater,which causes the higher computational time presented inSection 6.

The iterative methods by Rump or Popova andKramer require O(n3+n2KI) operations, where I standsfor the number of iterations. Thus our approach is not

Page 68: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Enclosures for the solution set of parametric interval linear systems 569

asymptotically worse provided that P = O(nI).

6. Examples and numerical experiments

In his paper, Rohn (2010) claims that for the standard sys-tem of interval linear equations the Hansen–Bliek–Rohnbounds are never worse than the Bauer–Skeel ones. Inthe following examples we show that this is not the casefor (more general) parametric systems. Surprisingly, theBauer–Skeel bounds are sometimes notably better (Exam-ple 2).

Example 1. Consider Okumura’s problem of a linear re-sistive network (Popova and Kramer, 2008, Example 5.3.).It obeys the form of (1) with

A(p) =

⎛⎜⎜⎜⎜⎝

p1 + p6 −p6 0−p6 p2 + p6 + p7 −p7

0 −p7 p3 + p7 + p8

0 0 −p8

0 0 0

0 00 0

−p8 0p4 + p8 + p9 −p9

−p9 p5 + p9

⎞⎟⎟⎟⎟⎠

,

b(p) = (10, 0, 10, 0, 0)T , and pi ∈ [0.99, 1.01], i =1, . . . , 9. The Bauer–Skeel bounds computed accordingto Theorem 4 are

([7.0148, 7.1671], [4.1173, 4.2463], [5.3933, 5.5158],

[2.1377, 2.2260], [1.0601, 1.1217])T ,

and the refinement by Algorithm 1 yields

([7.0151, 7.1667], [4.1180, 4.2456], [5.3938, 5.5153],

[2.1382, 2.2255], [1.0605, 1.1213])T .

The Hansen–Bliek–Rohn method (Theorem 5) re-sults in a not-as-tight enclosure,

([6.9693, 7.2150], [4.0689, 4.2971], [5.3501, 5.5612],

[2.1083, 2.2568], [1.0397, 1.1431])T .

The refinement by Algorithm 2 gives

([6.9925, 7.1913], [4.1134, 4.2504], [5.3799, 5.5307],

[2.1324, 2.2317], [1.0576, 1.1244])T .

Notice that for this example the exact interval hull of thesolution set Σ is known (Popova and Kramer, 2008),

([7.0170, 7.1663], [4.1193, 4.2454], [5.3952, 5.5150],

[2.1392, 2.2253], [1.0614, 1.1211])T .

Example 2. From Example 3.4 of Popova and Kramer(2008) we have

(p1 p2 − 1p2 p1

)x =

(−p2 + 1

3p2

),

where p1 ∈ [−2, −1] and p2 ∈ [3, 5]. Here, the Bauer–Skeel enclosure gives

([0.1282, 1.2052], [−1.4103, −0.3675])T ,

whereas the Hansen–Bliek–Rohn method yields

([−0.4359, 3.7693], [−4.8718, −0.0923])T .

No refinement for this very low dimensional example wassuccessful. �

Example 3. The last example is devoted to numericalexperiments with randomly generated data. Even thoughthe real-life data are not random, such experiments revealsomething on the performance of algorithms. The compu-tations were carried out in MATLAB 7.7.0.471 (R2008b)on a machine with an AMD Athlon 64 X2 DualCore Processor 4400+, 2.2 GHz, CPU with 1004MB RAM. Interval arithmetics and some basic intervalfunctions were provided by the interval toolbox INTLABv5.3 (Rump, 2006). We used just a simple implemen-tation of the methods presented. Notice, for large-scaleproblems in particular, that a more subtle implementationutilizing the sparsity of matrices Ak, k = 1, . . . , K , couldbe used.

First, we consider systems with symmetric matri-ces that were generated in the following way. First,entries of Ac were chosen randomly and independentlyin [−10, 10] with uniform distribution, and then we setAc := Ac + (Ac)T + 10nI . The entries of the radiusmatrix AΔ are equal to R, where R > 0 is a parameter.The right-hand side interval vector was chosen to be de-generate (zero width) with entries chosen randomly from[−10, 10].

In diverse settings of the dimension n and the radiusR we carried out sequences of 10 runs. The results aresummarized in Table 1. We compare the resulting enclo-sures by relative sums of radii with respect to the Bauer–Skeel bounds. That is, for a given enclosure w and theBauer–Skeel enclosure v, we display

n∑i=1

wΔi

n∑i=1

vΔi

.

On the average, the Bauer–Skeel (BS) method givestighter enclosures than the Hansen–Bliek–Rohn (HBR)one. The refinement is more conclusive for the latter thanfor the former.

Page 69: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

570 M. Hladık

Table 1. Symmetric systems with random data.

n R relative sum of radii average execution time (in sec.)BS refined BS HBR refined HBR BS refined BS HBR refined HBR

5 0.05 1 0.9994 1.06 1.003 0.01537 0.0893 0.01421 0.08865 0.1 1 0.9988 1.058 1.001 0.0155 0.09176 0.0148 0.088985 0.5 1 0.9947 1.044 0.9931 0.01553 0.09054 0.01441 0.09115 1 1 0.9907 1.026 0.9795 0.01412 0.08438 0.01366 0.08453

10 0.05 1 0.999 1.1 1.001 0.04731 0.5632 0.0456 0.557910 0.1 1 0.9981 1.099 1.001 0.04588 0.5559 0.04498 0.549410 0.5 1 0.9912 1.092 1.001 0.04839 0.5813 0.04604 0.567910 1 1 0.984 1.082 1 0.04638 0.5461 0.04401 0.544915 0.05 1 0.999 1.104 1 0.1017 1.802 0.1 1.77815 0.1 1 0.9979 1.103 1 0.09783 1.719 0.09587 1.69515 0.5 1 0.9903 1.099 1.001 0.09836 1.759 0.09593 1.72415 1 1 0.9825 1.092 1.004 0.09666 1.733 0.0956 1.73120 0.05 1 0.999 1.104 1.001 0.1758 3.979 0.1695 3.95620 0.1 1 0.998 1.103 1.001 0.1721 3.937 0.1697 3.92820 0.5 1 0.9906 1.1 1.003 0.1726 3.961 0.1671 3.97620 1 1 0.9831 1.095 1.008 0.1699 4.01 0.169 3.99625 0.05 1 0.999 1.097 1 0.2774 7.591 0.2647 7.52425 0.1 1 0.9981 1.096 1 0.283 7.712 0.2775 7.64425 0.5 1 0.9909 1.094 1.003 0.2726 7.599 0.2669 7.49325 1 1 0.9837 1.09 1.008 0.2767 7.723 0.2704 7.7750 0.05 1 0.999 1.099 1 6.399 57.36 6.327 56.7250 0.1 1 0.9981 1.099 1.001 6.505 57.13 6.31 56.6350 0.5 1 0.9911 1.097 1.006 6.395 57.64 6.341 57.1150 1 1 0.984 1.095 1.013 6.371 57.78 6.317 57.36

100 0.05 1 0.999 1.095 1 90.71 511.9 90.07 488.2100 0.1 1 0.9981 1.095 1.001 91.75 527 88.77 489.9100 0.5 1 0.991 1.095 1.006 92.68 526.7 89.01 489.1100 1 1 0.9838 1.094 1.014 90.5 522.4 89.23 498.7

Second, we consider Toeplitz systems, i.e, systemswith matrices A satisfying ai,j = ai+1,j+1, i, j =1, . . . , n − 1. Herein, Ac

i,1 and Ac1,i, i = 2, . . . , n, were

chosen randomly in [−10, 10], whereas Ac1,1 in [−10 +

10n, 10 + 10n]. The entries of AΔ are equal to R. Theright-hand side vector was again degenerate with entriesselected randomly in [−10, 10]. The results are displayedin Table 2.

Third, we consider symmetric systems again gener-ated in the same way as above. We compare the combi-nation of the Bauer–Skeel and Hansen–Bliek–Rohn meth-ods (Remark 1) with the interval Cholesky method (Ale-feld and Mayer, 1993; 2008). We implemented the ba-sic version of the interval Cholesky method since themore sophisticated algorithm based on pivot tightening(Garloff, 2010) is intractable, having the exponential com-plexity. Table 3 demonstrates that the proposed method ismuch more efficient than the interval Cholesky one. Eventhough the computing time is slightly better for the latter,the former yields a significantly tighter enclosure.

Finally, we did some comparisons with the paramet-ric solver by Popova (2004a; 2006b); see Table 4. Again,

we considered symmetric interval systems. On the aver-age, our approach is slightly better, and the refinement ismore significantly better.

7. Concluding remarks

Numerical experiments revealed that a generalization ofthe Bauer–Skeel method is a competitive alternative to theHansen–Bliek–Rohn method. It is best to use a combina-tion of both to obtain a tight enclosure. As observed in thenumerical experiments, the resulting (direct) algorithm isa competitive alternative to existing direct or iterative al-gorithms. Moreover, efficient refinements of both meth-ods were proposed in order to compute tighter enclosures.

As pointed out by one referee, the performance ofthis centered form approach is limited (cf. Neumaier andPownuk, 2007). A non-centered form approach may leadto further improvements.

Acknowledgment

This paper was partially supported by the Czech ScienceFoundation Grant P403/12/1947.

Page 70: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Enclosures for the solution set of parametric interval linear systems 571

Table 2. Toeplitz systems with random data.

n R relative sum of radii average execution time (in sec.)BS refined BS HBR refined HBR BS refined BS HBR refined HBR

5 0.05 1 0.9985 1.217 1.01 0.008363 0.05246 0.008132 0.051395 0.1 1 0.997 1.215 1.008 0.0102 0.05672 0.009942 0.056845 0.5 1 0.9859 1.2 1.01 0.01007 0.05786 0.009805 0.057065 1 1 0.976 1.179 1.014 0.01092 0.05661 0.009788 0.05831

10 0.05 1 0.9979 1.317 1.002 0.01879 0.2069 0.01853 0.2110 0.1 1 0.9959 1.316 1.001 0.01832 0.2046 0.0172 0.200110 0.5 1 0.9792 1.307 0.9978 0.018 0.2023 0.01751 0.196310 1 1 0.9588 1.295 0.9967 0.01894 0.2084 0.01806 0.209115 0.05 1 0.9979 1.363 1.005 0.02552 0.4194 0.02465 0.412715 0.1 1 0.9958 1.362 1.005 0.02636 0.4542 0.03089 0.45515 0.5 1 0.9794 1.356 1.012 0.02747 0.4478 0.02632 0.444315 1 1 0.9605 1.349 1.03 0.02684 0.4291 0.02627 0.432420 0.05 1 0.9978 1.389 1.008 0.03518 0.7715 0.03548 0.749220 0.1 1 0.9956 1.388 1.01 0.03628 0.7911 0.03554 0.779720 0.5 1 0.9786 1.384 1.024 0.03488 0.757 0.03406 0.762720 1 1 0.9585 1.378 1.038 0.03498 0.769 0.03447 0.77625 0.05 1 0.9977 1.421 1.007 0.04478 1.157 0.04359 1.15625 0.1 1 0.9954 1.42 1.009 0.04647 1.195 0.04478 1.19225 0.5 1 0.9779 1.417 1.031 0.0467 1.198 0.04404 1.19425 1 1 0.9582 1.412 1.061 0.04455 1.166 0.04308 1.16550 0.05 1 0.9978 1.418 1.005 0.5689 4.549 0.5276 4.4850 0.1 1 0.9956 1.418 1.009 0.528 4.519 0.526 4.50950 0.5 1 0.9787 1.416 1.035 0.5322 4.719 0.535 4.62950 1 1 0.9599 1.414 1.068 0.5278 4.634 0.531 4.616

100 0.05 1 0.9976 1.452 1.004 3.704 20.19 3.694 19.7100 0.1 1 0.9953 1.452 1.008 3.717 20.13 3.91 19.84100 0.5 1 0.9776 1.451 1.043 3.719 20.22 3.705 20.11100 1 1 0.9582 1.45 1.087 3.678 20.41 3.663 20.26

References

Alefeld, G., Kreinovich, V. and Mayer, G. (1997). On the shape of the symmetric, persymmetric, and skew-symmetric solution set, SIAM Journal on Matrix Analysis and Applications 18(3): 693–705.

Alefeld, G., Kreinovich, V. and Mayer, G. (2003). On the solution sets of particular classes of linear interval systems, Journal of Computational and Applied Mathematics 152(1–2): 1–15.

Alefeld, G. and Mayer, G. (1993). The Cholesky method for interval data, Linear Algebra and Its Applications 194: 161–182.

Alefeld, G. and Mayer, G. (2008). New criteria for the feasibility of the Cholesky method with interval data, SIAM Journal on Matrix Analysis and Applications 30(4): 1392–1405.

Beeck, H. (1975). Zur Problematik der Hüllenbestimmung von Intervallgleichungssystemen, in K. Nickel (Ed.), Interval Mathematics: Proceedings of the International Symposium on Interval Mathematics, Lecture Notes in Computer Science, Vol. 29, Springer, Berlin, pp. 150–159.

Busłowicz, M. (2010). Robust stability of positive continuous-time linear systems with delays, International Journal of Applied Mathematics and Computer Science 20(4): 665–670, DOI: 10.2478/v10006-010-0049-8.

Fiedler, M., Nedoma, J., Ramík, J., Rohn, J. and Zimmermann, K. (2006). Linear Optimization Problems with Inexact Data, Springer, New York, NY.

Garloff, J. (2010). Pivot tightening for the interval Cholesky method, Proceedings in Applied Mathematics and Mechanics 10(1): 549–550.

Hladík, M. (2008). Description of symmetric and skew-symmetric solution set, SIAM Journal on Matrix Analysis and Applications 30(2): 509–521.

Horn, R.A. and Johnson, C.R. (1985). Matrix Analysis, Cambridge University Press, Cambridge.

Jansson, C. (1991). Interval linear systems with symmetric matrices, skew-symmetric matrices and dependencies in the right hand side, Computing 46(3): 265–274.

Kolev, L.V. (2004). A method for outer interval solution of linear parametric systems, Reliable Computing 10(3): 227–239.

Kolev, L.V. (2006). Improvement of a direct method for outer solution of linear parametric systems, Reliable Computing 12(3): 193–202.


Table 3. Comparison with the interval Cholesky method.

  n    R     relative sum of radii        average exec. time (in sec.)
             HBR–BS    Cholesky           HBR–BS    Cholesky
  5   0.05   1         187.0              0.01639   0.009779
  5   0.1    1          92.69             0.01649   0.009768
  5   0.5    1          17.29             0.01637   0.009774
  5   1      1           7.891            0.01643   0.009801
 10   0.05   1         200.2              0.05505   0.009791
 10   0.1    1          99.19             0.05527   0.009797
 10   0.5    1          18.4              0.05724   0.009793
 10   1      1           8.298            0.05544   0.009818
 15   0.05   1         238.7              0.1195    0.00984
 15   0.1    1         118.3              0.1199    0.009813
 15   0.5    1          21.94             0.1202    0.009844
 15   1      1           9.9              0.1202    0.009806
 20   0.05   1         232.6              0.2118    0.01084
 20   0.1    1         115.3              0.2111    0.0102
 20   0.5    1          21.42             0.2105    0.009848
 20   1      1           9.685            0.2112    0.00987
 25   0.05   1         235.9              0.3365    0.009854
 25   0.1    1         116.9              0.3361    0.009834
 25   0.5    1          21.76             0.3365    0.009831
 25   1      1           9.866            0.3361    0.009834

Merlet, J.-P. (2009). Interval analysis for certified numerical solution of problems in robotics, International Journal of Applied Mathematics and Computer Science 19(3): 399–412, DOI: 10.2478/v10006-009-0033-3.

Meyer, C.D. (2000). Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, PA.

Neumaier, A. (1990). Interval Methods for Systems of Equations, Cambridge University Press, Cambridge.

Neumaier, A. (1999). A simple derivation of the Hansen–Bliek–Rohn–Ning–Kearfott enclosure for linear interval equations, Reliable Computing 5(2): 131–136.

Neumaier, A. and Pownuk, A. (2007). Linear systems with large uncertainties, with applications to truss structures, Reliable Computing 13(2): 149–172.

Ning, S. and Kearfott, R.B. (1997). A comparison of some methods for solving linear interval equations, SIAM Journal on Numerical Analysis 34(4): 1289–1305.

Padberg, M. (1999). Linear Optimization and Extensions, 2nd Edn., Springer, Berlin.

Popova, E. (2002). Quality of the solution sets of parameter-dependent interval linear systems, Zeitschrift für Angewandte Mathematik und Mechanik 82(10): 723–727.

Popova, E.D. (2001). On the solution of parametrised linear systems, in W. Kramer and J.W. von Gudenberg (Eds.), Scientific Computing, Validated Numerics, Interval Methods, Kluwer, London, pp. 127–138.

Popova, E.D. (2004a). Parametric interval linear solver, Numerical Algorithms 37(1–4): 345–356.

Popova, E.D. (2004b). Strong regularity of parametric interval matrices, in I. Dimovski (Ed.), Mathematics and Education in Mathematics, Proceedings of the 33rd Spring Conference of the Union of Bulgarian Mathematicians, Borovets, Bulgaria, BAS, Sofia, pp. 446–451.

Popova, E.D. (2006a). Computer-assisted proofs in solving linear parametric problems, 12th GAMM/IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, SCAN 2006, Duisburg, Germany, p. 35.

Popova, E.D. (2006b). Webcomputing service framework, International Journal Information Theories & Applications 13(3): 246–254.

Popova, E.D. (2009). Explicit characterization of a class of parametric solution sets, Comptes Rendus de L'Academie Bulgare des Sciences 62(10): 1207–1216.

Popova, E.D. and Kramer, W. (2007). Inner and outer bounds for the solution set of parametric linear systems, Journal of Computational and Applied Mathematics 199(2): 310–316.

Popova, E.D. and Kramer, W. (2008). Visualizing parametric solution sets, BIT Numerical Mathematics 48(1): 95–115.

Rex, G. and Rohn, J. (1998). Sufficient conditions for regularity and singularity of interval matrices, SIAM Journal on Matrix Analysis and Applications 20(2): 437–445.

Rohn, J. (1989). Systems of linear interval equations, Linear Algebra and Its Applications 126(C): 39–78.

Rohn, J. (1993). Cheap and tight bounds: The recent result by E. Hansen can be made more efficient, Interval Computations (4): 13–21.

Rohn, J. (2004). A method for handling dependent data in interval linear systems, Technical Report 911, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, http://uivtx.cs.cas.cz/~rohn/publist/rp911.ps.


Table 4. Comparison with the parametric solver by Popova.

  n    R     relative sum of radii
             HBR–BS    refined HBR–BS    parametric solver
  5   0.1    1         0.9945            0.9991
  5   0.5    1         0.9773            1.0046
  5   1      1         0.9674            1.0191
 10   0.1    1         0.9970            1.0001
 10   0.5    1         0.9879            0.9996
 10   1      1         0.9799            1.0000
 15   0.1    1         0.9971            0.9977
 15   0.5    1         0.9883            1.0002
 15   1      1         0.9814            1.0097
 20   0.1    1         0.9971            1.0015
 20   0.5    1         0.9881            1.0002
 20   1      1         0.9783            1.0000
 25   0.1    1         0.9975            1.0018
 25   0.5    1         0.9874            1.0002
 25   1      1         0.9792            1.0005

Rohn, J. (2010). An improvement of the Bauer–Skeel bounds, Technical Report V-1065, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, http://uivtx.cs.cas.cz/~rohn/publist/bauerskeel.pdf.

Rump, S.M. (1983). Solving algebraic problems with high accuracy, in U. Kulisch and W. Miranker (Eds.), A New Approach to Scientific Computation, Academic Press, New York, NY, pp. 51–120.

Rump, S.M. (1994). Verification methods for dense and sparse systems of equations, in J. Herzberger (Ed.), Topics in Validated Computations, Studies in Computational Mathematics, Elsevier, Amsterdam, pp. 63–136.

Rump, S.M. (2006). INTLAB: Interval Laboratory, the Matlab toolbox for verified computations, Version 5.3, http://www.ti3.tu-harburg.de/rump/intlab/.

Rump, S.M. (2010). Verification methods: Rigorous results using floating-point arithmetic, Acta Numerica 19: 287–449.

Schrijver, A. (1998). Theory of Linear and Integer Programming, Reprint Edn., Wiley, Chichester.

Skalna, I. (2006). A method for outer interval solution of systems of linear equations depending linearly on interval parameters, Reliable Computing 12(2): 107–120.

Skalna, I. (2008). On checking the monotonicity of parametric interval solution of linear structural systems, in R. Wyrzykowski, J. Dongarra, K. Karczewski and J. Wasniewski (Eds.), Parallel Processing and Applied Mathematics, Lecture Notes in Computer Science, Vol. 4967, Springer-Verlag, Berlin/Heidelberg, pp. 1400–1409.

Stewart, G.W. (1998). Matrix Algorithms, Vol. 1: Basic Decompositions, SIAM, Philadelphia, PA.

Milan Hladík received his Ph.D. degree in operations research from Charles University in Prague in 2006. In 2008, he worked in the Coprin team at INRIA, Sophia Antipolis, France, as a postdoctoral research fellow. Since 2009, he has been an assistant professor at the Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University in Prague. His research interests are interval analysis, parametric programming and uncertain optimization.

Appendix

Consider a system of interval linear equations $\mathbf{A}x = \mathbf{b}$, which is a special case of (1), and the solution set $\Sigma := \{x \in \mathbb{R}^n \mid Ax = b,\ A \in \mathbf{A},\ b \in \mathbf{b}\}$. The Bauer–Skeel bounds (Rohn, 2010; Stewart, 1998) and the Hansen–Bliek–Rohn bounds (Fiedler et al., 2006; Rohn, 1993, Theorem 2.39) on $\Sigma$ are given below.

Theorem 6. (Bauer–Skeel) Let $A_c$ be nonsingular and $\rho(|A_c^{-1}|A_\Delta) < 1$. Write $x^* := A_c^{-1}b_c$, $M := |A_c^{-1}|A_\Delta$ and $M^* := (I - M)^{-1}$. Then for each $x \in \Sigma$ we have
$$x \ge x^* - M^*|A_c^{-1}|(A_\Delta|x^*| + b_\Delta),$$
$$x \le x^* + M^*|A_c^{-1}|(A_\Delta|x^*| + b_\Delta).$$

Theorem 7. (Hansen–Bliek–Rohn) Under the same assumption and notation as in the previous theorem, we have
$$x_i \le \max\left\{ x^0_i + (x^*_i - |x^*_i|)\,m^*_{ii},\ \frac{1}{2m^*_{ii} - 1}\big(x^0_i + (x^*_i - |x^*_i|)\,m^*_{ii}\big) \right\}$$
and
$$x_i \ge \min\left\{ -x^0_i + (x^*_i + |x^*_i|)\,m^*_{ii},\ \frac{1}{2m^*_{ii} - 1}\big({-x^0_i} + (x^*_i + |x^*_i|)\,m^*_{ii}\big) \right\},$$
where $x^0 := M^*(|x^*| + |A_c^{-1}|b_\Delta)$ and $m^*_{ii}$ denotes the $i$th diagonal entry of $M^*$.
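Both theorems are directly computable from the midpoint-radius data. The following Python sketch (illustrative helper names, not from the paper; plain floating point without directed rounding, so unlike the verified computations discussed above it gives only indicative bounds) evaluates both enclosures:

```python
import numpy as np

def _prepare(Ac, Ad, bc, bd):
    Aci = np.linalg.inv(Ac)
    M = np.abs(Aci) @ Ad
    if max(abs(np.linalg.eigvals(M))) >= 1:
        raise ValueError("rho(|Ac^-1| Ad) >= 1: not strongly regular")
    Ms = np.linalg.inv(np.eye(len(bc)) - M)    # M* = (I - M)^-1
    return Aci, Ms, Aci @ bc                    # ..., x* = Ac^-1 bc

def bauer_skeel(Ac, Ad, bc, bd):
    # Componentwise bounds on Sigma by Theorem 6.
    Aci, Ms, xs = _prepare(Ac, Ad, bc, bd)
    r = Ms @ (np.abs(Aci) @ (Ad @ np.abs(xs) + bd))
    return xs - r, xs + r

def hansen_bliek_rohn(Ac, Ad, bc, bd):
    # Componentwise bounds on Sigma by Theorem 7.
    Aci, Ms, xs = _prepare(Ac, Ad, bc, bd)
    x0 = Ms @ (np.abs(xs) + np.abs(Aci) @ bd)
    d = np.diag(Ms)                             # the entries m*_ii >= 1
    up = x0 + (xs - np.abs(xs)) * d
    lo = -x0 + (xs + np.abs(xs)) * d
    return np.minimum(lo, lo / (2*d - 1)), np.maximum(up, up / (2*d - 1))
```

Intersecting the two resulting boxes componentwise reproduces the combined HBR–BS enclosure used in the tables above.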

Received: 11 July 2011. Revised: 24 November 2011.


FOCUS

Outer enclosures to the parametric AE solution set

Evgenija D. Popova · Milan Hladík

Published online: 19 February 2013
© European Union 2013

Abstract  We consider systems of linear equations, where the elements of the matrix and of the right-hand side vector are linear functions of interval parameters. We study parametric AE solution sets, which are defined by universally and existentially quantified parameters, where the former precede the latter. Based on a recently obtained explicit description of such solution sets, we present three approaches for obtaining outer estimations of parametric AE solution sets. The first approach intersects inclusions of parametric united solution sets for all combinations of the end-points of the universally quantified parameters. Polynomially computable outer bounds for parametric AE solution sets are obtained by a parametric AE generalization of a single-step Bauer–Skeel method. In the special case of parametric tolerable solution sets, we derive an enclosure based on a linear programming approach; this enclosure is optimal under some assumption. The application of these approaches to parametric tolerable and controllable solution sets is discussed. Numerical examples accompanied by graphic representations illustrate the solution sets and properties of the methods.

Keywords  Linear systems · Dependent data · AE solution set · Tolerable solution set · Controllable solution set

1 Introduction

Consider a system of linear algebraic equations

$$A(p)x = b(p) \tag{1}$$

which has a linear uncertainty structure

$$A(p) = A_0 + \sum_{k=1}^{K} A_k p_k, \qquad b(p) = b_0 + \sum_{k=1}^{K} b_k p_k, \tag{2}$$

where $A_k \in \mathbb{R}^{n\times n}$, $b_k \in \mathbb{R}^n$, $k = 0, 1, \ldots, K$, and $p = (p_1, \ldots, p_K)$. The parameters $p_k$, $k \in \mathcal{K} = \{1, \ldots, K\}$, are considered as uncertain and varying within given intervals $\mathbf{p}_k = [\underline{p}_k, \overline{p}_k]$. Such systems are common in many engineering analysis or design problems (see Elishakoff and Ohsaki (2010) and the references therein), control engineering (Matcovschi and Pastravanu 2007; Sokolova and Kuzmina 2008; Busłowicz 2010), robust Monte Carlo simulations (Lagoa and Barmish 2002), etc. Usually, the set of solutions to (1)–(2) which is sought for is the so-called parametric united solution set

$$\Sigma^p_{uni} = \Sigma_{uni}(A(p), b(p), \mathbf{p}) := \{x \in \mathbb{R}^n \mid (\exists p \in \mathbf{p})(A(p)x = b(p))\}.$$

The united parametric solution set generalizes the united non-parametric solution set to an interval linear system $\mathbf{A}x = \mathbf{b}$, which is defined as

$$\Sigma_{uni} = \Sigma_{uni}(\mathbf{A}, \mathbf{b}) := \{x \in \mathbb{R}^n \mid (\exists A \in \mathbf{A})(\exists b \in \mathbf{b})(Ax = b)\}.$$

Communicated by V. Kreinovich.

E. D. Popova (corresponding author), Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. G. Bonchev Str., Bl. 8, 1113 Sofia, Bulgaria. e-mail: [email protected]

M. Hladík, Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University, Malostranske nam. 25, 11800 Prague, Czech Republic. e-mail: [email protected]

Soft Comput (2013) 17:1403–1414, DOI 10.1007/s00500-013-1011-0


However, the solutions of many practical problems involving uncertain (interval) data have a quantified formulation involving the universal quantifier (∀) besides the existential quantifier (∃). We consider quantified solution sets where all universally quantified parameters precede all existentially quantified ones. Such solution sets are called AE solution sets, after Shary (2002). Examples of several mathematical and engineering problems formulated in terms of quantified solution sets can be found, for example, in Shary (2002); Pivkina and Kreinovich (2006); Wang (2008). AE solution sets are of particular interest also for interval-valued fuzzy relational equations, see Wang et al. (2003), where the concepts of so-called tolerable and controllable solution sets of interval-valued fuzzy relational equations are introduced and their structure and relations are discussed. The literature on control engineering contains many papers that explore problems related to linear dynamical systems with uncertainties bounded by interval matrices, see, e.g., Sokolova and Kuzmina (2008) and the references in Matcovschi and Pastravanu (2007); Busłowicz (2010). The tolerable solution set is utilized in Sokolova and Kuzmina (2008) for parameter identification problems and in controllability analysis. As in the other problem domains, the uncertain data involve more parameter dependencies than in an interval matrix with independent elements. So, the more realistic approaches consider linear dynamical systems with linear dependencies between state parameters as in Matcovschi and Pastravanu (2007), or structural perturbations of state matrices as in Busłowicz (2010).

Although the non-parametric AE solution sets are studied, e.g., in Shary (1995, 1997, 2002); Goldsztejn (2005); Goldsztejn and Chabert (2006); Pivkina and Kreinovich (2006), there are a few results on the more general case of linear parameter dependency. A special case of parametric tolerable solution sets is dealt with in Sharaya and Shary (2011). A characterization of the general parametric solution set is given in Popova and Kramer (2011), and a Fourier–Motzkin type elimination of parameters is applied in Popova (2012) to derive explicit descriptions of the parametric AE solution sets.

In this paper we are interested in obtaining outer bounds for the parametric AE solution sets. To our knowledge, this is the first systematic approach to outer estimations of parametric AE solution sets in their general form. In Sect. 3 we prove that (inner or outer) estimations of parametric AE solution sets can be obtained by using only some corresponding estimations of parametric united solution sets. In Sect. 4 we generalize the Bauer–Skeel method (see Rohn (2010) and the references therein), applied so far for bounding (parametric) united solution sets. The method is derived in a form which leads to the same conclusion, proven in Sect. 3, and requires intersecting the bounds of parametric united solution sets for all combinations of the end-points of the universally quantified parameters. Another, single-step single-application parametric Bauer–Skeel AE method is derived in Sect. 5, and both approaches are compared on several numerical examples. The derivation of both forms of the Bauer–Skeel parametric AE method is self-contained and no knowledge of the original method is required. The special cases of parametric tolerable and controllable solution sets are discussed. In the tolerable case, an enclosure based on a linear programming approach is derived in Sect. 6. Numerical examples accompanied by graphic representations illustrate the solution sets and properties of the methods.

2 Notations

Denote by $\mathbb{R}^n$, $\mathbb{R}^{n\times m}$ the set of real vectors with $n$ components and the set of real $n\times m$ matrices, respectively. A real compact interval is

$$\mathbf{a} = [\underline{a}, \overline{a}] := \{a \in \mathbb{R} \mid \underline{a} \le a \le \overline{a}\}.$$

As a generalization of real compact intervals, an interval matrix $\mathbf{A}$ with independent components is defined as a family

$$\mathbf{A} = [\underline{A}, \overline{A}] := \{A \in \mathbb{R}^{n\times m} \mid \underline{A} \le A \le \overline{A}\},$$

where $\underline{A}, \overline{A} \in \mathbb{R}^{n\times m}$, $\underline{A} \le \overline{A}$, are given matrices. Similarly we define interval vectors. By $\mathbb{IR}^n$, $\mathbb{IR}^{n\times m}$ we denote the sets of interval $n$-vectors and interval $n\times m$ matrices, respectively.

For $\mathbf{a} \in \mathbb{IR}$, define the mid-point $a^c := (\underline{a} + \overline{a})/2$ and the radius $a^\Delta := (\overline{a} - \underline{a})/2$. These functions are applied to interval vectors and matrices componentwise. Without loss of generality, and in order to have a unique representation (2), we assume that $p^\Delta_k > 0$ for all $k \in \mathcal{K}$. The spectral radius of a matrix $M$ is denoted by $\rho(M)$. The identity matrix of dimension $n$ is denoted by $I$. For a given index set $I = \{i_1, \ldots, i_k\}$ denote $p_I := (p_{i_1}, \ldots, p_{i_k})$. Next, Card($S$) denotes the cardinality of a set $S$. The following definitions are recalled from Popova and Kramer (2011).

Definition 1  A parameter $p_k$, $1 \le k \le K$, is of 1st class if it occurs in only one equation of the system (1).

Definition 2  A parameter $p_k$, $k \in \mathcal{K}$, is of 2nd class if it is involved in more than one equation of the system (1).

Let $\mathcal{E}$ and $\mathcal{A}$ be two disjoint sets such that $\mathcal{E} \cup \mathcal{A} = \mathcal{K}$. The parametric AE solution set is defined as

$$\Sigma^p_{AE} = \Sigma_{AE}(A(p), b(p), \mathbf{p}) := \{x \in \mathbb{R}^n \mid (\forall p_{\mathcal{A}} \in \mathbf{p}_{\mathcal{A}})(\exists p_{\mathcal{E}} \in \mathbf{p}_{\mathcal{E}})(A(p)x = b(p))\}.$$

Beside the parametric united solution set, there are several other special cases of AE solutions:

– A parametric tolerable solution set is such that universal quantifiers concern only the constraint matrix and existential quantifiers only the right-hand side. That is, $A_k = 0$ for every $k \in \mathcal{E}$ and $b_k = 0$ for every $k \in \mathcal{A}$. The parametric tolerable solution set is

$$\Sigma^p_{tol} = \Sigma_{AE}(A(p_{\mathcal{A}}), b(p_{\mathcal{E}}), \mathbf{p}) := \{x \in \mathbb{R}^n \mid (\forall p_{\mathcal{A}} \in \mathbf{p}_{\mathcal{A}})(\exists p_{\mathcal{E}} \in \mathbf{p}_{\mathcal{E}})(A(p_{\mathcal{A}})x = b(p_{\mathcal{E}}))\}.$$

– In contrast to the tolerable solutions, a parametric controllable solution set is such that existential quantifiers concern only the constraint matrix and universal quantifiers only the right-hand side. Thus, $A_k = 0$ for every $k \in \mathcal{A}$ and $b_k = 0$ for every $k \in \mathcal{E}$. The parametric controllable solution set is denoted shortly by $\Sigma^p_{cont}$:

$$\Sigma^p_{cont} = \Sigma_{AE}(A(p_{\mathcal{E}}), b(p_{\mathcal{A}}), \mathbf{p}) := \{x \in \mathbb{R}^n \mid (\forall p_{\mathcal{A}} \in \mathbf{p}_{\mathcal{A}})(\exists p_{\mathcal{E}} \in \mathbf{p}_{\mathcal{E}})(A(p_{\mathcal{E}})x = b(p_{\mathcal{A}}))\}.$$

For a given parametric system and index sets $\mathcal{A}$, $\mathcal{E}$, there is a unique non-parametric system, resp. non-parametric AE solution set $\Sigma(A(\mathbf{p}_{\mathcal{A}}, \mathbf{p}_{\mathcal{E}}), b(\mathbf{p}_{\mathcal{A}}, \mathbf{p}_{\mathcal{E}}))$, where

$$A(\mathbf{p}_{\mathcal{A}}, \mathbf{p}_{\mathcal{E}}) := A_0 + \sum_{k\in\mathcal{A}} A_k \mathbf{p}_k + \sum_{k\in\mathcal{E}} A_k \mathbf{p}_k, \qquad b(\mathbf{p}_{\mathcal{A}}, \mathbf{p}_{\mathcal{E}}) := b_0 + \sum_{k\in\mathcal{A}} b_k \mathbf{p}_k + \sum_{k\in\mathcal{E}} b_k \mathbf{p}_k.$$

On the other hand, every non-parametric system, resp. non-parametric AE solution set, can be considered as a parametric system, resp. parametric AE solution set, involving $n^2 + n$ quantified parameters. Thus, every non-parametric AE solution set presents a special case of a parametric AE solution set involving $n^2 + n$ quantified parameters.

For a nonempty and bounded set $S \subseteq \mathbb{R}^n$, define its interval hull by $\square S := \bigcap\{y \in \mathbb{IR}^n \mid S \subseteq y\}$. For two intervals $\mathbf{u}, \mathbf{v} \in \mathbb{IR}$, $\mathbf{u} \subseteq \mathbf{v}$, the percentage by which $\mathbf{v}$ overestimates $\mathbf{u}$ is defined by

$$O(\mathbf{u}, \mathbf{v}) := 100\,(1 - u^\Delta / v^\Delta).$$
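These two definitions are handy to have in executable form when comparing methods. A small Python sketch (illustrative helper names, not from the paper; intervals stored as (lower, upper) NumPy arrays):

```python
import numpy as np

def interval_hull(points):
    """Interval hull of a finite point set: the componentwise box [min S, max S]."""
    P = np.asarray(points)
    return P.min(axis=0), P.max(axis=0)

def overestimation(u, v):
    """O(u, v) = 100 (1 - u_radius / v_radius) for intervals u contained in v."""
    u_rad = (u[1] - u[0]) / 2
    v_rad = (v[1] - v[0]) / 2
    return 100 * (1 - u_rad / v_rad)
```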

In Popova and Kramer (2011), it was shown that every $x \in \Sigma^p_{AE}$ satisfies the following inequality:

$$|A(p^c)x - b(p^c)| \le \sum_{k\in\mathcal{E}} |A_k x - b_k|\, p^\Delta_k - \sum_{k\in\mathcal{A}} |A_k x - b_k|\, p^\Delta_k. \tag{3}$$

Moreover, for parametric systems involving only 1st class existentially quantified parameters, this system of nonlinear inequalities describes exactly the set $\Sigma^p_{AE}$.
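Inequality (3) is cheap to evaluate pointwise, which suggests a simple numerical membership test. The sketch below uses a hypothetical data layout (not fixed by the paper): the parametric system as arrays A of shape (K+1, n, n) and b of shape (K+1, n), index 0 holding the constant terms, with p_lo, p_hi the parameter bounds. It checks the necessary condition (3) for a candidate point x; for systems whose existential parameters are all of 1st class the test is also sufficient:

```python
import numpy as np

def satisfies_ae_characterization(A, b, p_lo, p_hi, E, x):
    # E: set of indices (1..K) of existentially quantified parameters;
    # the remaining indices in 1..K are universally quantified.
    K = len(p_lo)
    pc = (p_lo + p_hi) / 2                    # parameter midpoints p^c
    pd = (p_hi - p_lo) / 2                    # parameter radii p^Delta
    Apc = A[0] + sum(A[k] * pc[k-1] for k in range(1, K + 1))
    bpc = b[0] + sum(b[k] * pc[k-1] for k in range(1, K + 1))
    lhs = np.abs(Apc @ x - bpc)
    rhs = np.zeros_like(lhs)
    for k in range(1, K + 1):
        term = np.abs(A[k] @ x - b[k]) * pd[k-1]
        rhs += term if k in E else -term      # + for E-terms, - for A-terms
    return bool(np.all(lhs <= rhs))
```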

3 End-point bounds for $\Sigma^p_{AE}$

It follows from the explicit representation of the parametric AE solution sets (Popova 2012) that the interval hull of $\Sigma^p_{AE}$ is attained at particular end-points of the intervals for the 1st class existentially quantified parameters and the universally quantified parameters. Here we exploit this property to develop a methodology for obtaining outer bounds of the parametric AE solution set using only solvers for bounding parametric united solution sets.

For a given index set $I = \{i_1, \ldots, i_k\}$, define

$$B_I := \{(p^c_{i_1} + \delta_{i_1} p^\Delta_{i_1}, \ldots, p^c_{i_k} + \delta_{i_k} p^\Delta_{i_k}) \mid \delta_{i_1}, \ldots, \delta_{i_k} \in \{\pm 1\}\}.$$

Theorem 1  We have

$$\Sigma^p_{AE} = \bigcap_{\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}} \Sigma(A(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), b(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), \mathbf{p}_{\mathcal{E}}). \tag{4}$$

Proof  It follows from the set-theoretic representation of $\Sigma^p_{AE}$ (see Popova and Kramer 2011, Theorem 3.1) that

$$\Sigma^p_{AE} = \bigcap_{\tilde{p}_{\mathcal{A}} \in \mathbf{p}_{\mathcal{A}}} \Sigma(A(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), b(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), \mathbf{p}_{\mathcal{E}}).$$

Then the assertion of the theorem follows from the relation

$$(\forall p \in [\mathbf{p}]:\ b_1 \le f(p) \le b_2) \iff \Big(b_1 \le \min_{p\in[\mathbf{p}]} f(p)\Big) \wedge \Big(\max_{p\in[\mathbf{p}]} f(p) \le b_2\Big) \tag{5}$$

and because the polynomials involved in the explicit description of $\Sigma(A(\mathbf{p}_{\mathcal{A}}, p_{\mathcal{E}}), b(\mathbf{p}_{\mathcal{A}}, p_{\mathcal{E}}), \mathbf{p}_{\mathcal{E}})$ are linear with respect to all ∀-parameters. □

The next theorem gives a sufficient condition for a non-empty parametric AE solution set to be bounded.

Theorem 2  Let $\Sigma^p_{AE}$ be non-empty and for some $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$ let the matrix $A(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}})$ be regular for all $p_{\mathcal{E}} \in \mathbf{p}_{\mathcal{E}}$. Then $\Sigma^p_{AE}$ is bounded.

Proof  $\Sigma^p_{AE}$ is not empty iff the intersection in (4) is not empty. If for some $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$ the matrix $A(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}})$ is regular for all $p_{\mathcal{E}} \in \mathbf{p}_{\mathcal{E}}$, then $\Sigma(A(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), b(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), \mathbf{p}_{\mathcal{E}})$ is bounded, and its (non-empty) intersection with the bounded or unbounded solution sets for the remaining $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$ will be bounded. □
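Computationally, Theorem 1 and the corollary below amount to enumerating end-point combinations of the ∀-parameters and intersecting enclosures of the resulting united solution sets. A minimal Python sketch (same hypothetical (K+1, n, n)/(K+1, n) layout as before; enclose_united is any routine bounding a parametric united solution set, for instance the parametric Bauer–Skeel enclosure sketched after Corollary 2 below):

```python
import itertools
import numpy as np

def enclose_ae_by_endpoints(A, b, p_lo, p_hi, A_idx, enclose_united):
    # A_idx: indices (1..K) of the universally quantified parameters.
    n = A.shape[1]
    lo, hi = np.full(n, -np.inf), np.full(n, np.inf)
    for choice in itertools.product((0, 1), repeat=len(A_idx)):
        ql, qh = p_lo.copy(), p_hi.copy()
        for bit, k in zip(choice, A_idx):
            ql[k-1] = qh[k-1] = (p_lo[k-1], p_hi[k-1])[bit]  # pin to an end-point
        l, h = enclose_united(A, b, ql, qh)
        lo, hi = np.maximum(lo, l), np.minimum(hi, h)        # intersect boxes
    return lo, hi
```

Pinning a ∀-parameter makes its radius zero, so a Bauer–Skeel call on the shrunken parameter box coincides with the enclosure of Theorem 3 below; this is the computational content of Corollary 4.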

By Theorem 1, one can obtain (inner or outer) estimations of a bounded parametric AE solution set by intersecting at most Card($B_{\mathcal{A}}$) corresponding estimations of the united parametric solution sets

$$\Sigma(A(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), b(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}})), \qquad \tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}.$$

In particular, we have

Corollary 1  For a bounded parametric AE solution set $\Sigma^p_{AE} \neq \emptyset$ and a set $B'_{\mathcal{A}}$ such that $B'_{\mathcal{A}} \subseteq B_{\mathcal{A}}$ and $\Sigma(A(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), b(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), \mathbf{p}_{\mathcal{E}})$ is bounded for all $\tilde{p}_{\mathcal{A}} \in B'_{\mathcal{A}}$, we have

$$\square \Sigma^p_{AE} \subseteq \bigcap_{\tilde{p}_{\mathcal{A}} \in B'_{\mathcal{A}}} \square \Sigma(A(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), b(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), \mathbf{p}_{\mathcal{E}}).$$

If the parametric system involves some 1st class ∃-parameters $p_k$, $k \in \mathcal{E}$, we can further sharpen the estimation of the parametric AE solution set. Denote by $\mathcal{E}_1$, $\mathcal{E}_1 \subseteq \mathcal{E}$, the set of indices of all ∃-parameters which occur in only one equation of the system. Since inf/sup of $\Sigma(A(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), b(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}}), \mathbf{p}_{\mathcal{E}})$ is attained at particular end-points of $\mathbf{p}_{\mathcal{E}_1}$, we have

$$\Sigma^p_{AE} = \bigcap_{\tilde{p}_{\mathcal{A}} \in B'_{\mathcal{A}}} \bigcup_{\tilde{p}_{\mathcal{E}_1} \in B_{\mathcal{E}_1}} \Sigma_{\mathcal{A},\mathcal{E},\mathcal{E}_1}$$

and

$$\square \Sigma^p_{AE} \subseteq \bigcap_{\tilde{p}_{\mathcal{A}} \in B'_{\mathcal{A}}} \square \Big( \bigcup_{\tilde{p}_{\mathcal{E}_1} \in B_{\mathcal{E}_1}} \square \Sigma_{\mathcal{A},\mathcal{E},\mathcal{E}_1} \Big), \tag{6}$$

where

$$\Sigma_{\mathcal{A},\mathcal{E},\mathcal{E}_1} := \Sigma(A(\tilde{p}_{\mathcal{A}}, \tilde{p}_{\mathcal{E}_1}, p_{\mathcal{E}\setminus\mathcal{E}_1}), b(\tilde{p}_{\mathcal{A}}, \tilde{p}_{\mathcal{E}_1}, p_{\mathcal{E}\setminus\mathcal{E}_1}), \mathbf{p}_{\mathcal{E}\setminus\mathcal{E}_1}).$$

By a methodology based on solving derivative systems with respect to every parameter (Popova 2006) one can prove that the interval hull of a united parametric solution set can be attained at particular end-points of parameters which are not only of 1st class. The parameters for which we can prove this property can be joined to the set $\mathcal{E}_1$ in relation (6).

4 Bauer–Skeel method for parametric AE solution sets

Bauer–Skeel bounds were used to enclose bounded and connected non-parametric united solution sets (Stewart 1998; Rohn 2010) and later bounded and connected parametric united solution sets (Skalna 2006; Hladík 2012). In this section, we extend the Bauer–Skeel method to the case of non-empty, bounded and connected parametric AE solution sets. Since the following is a generalization of the Bauer–Skeel theorem, we do not state the original one explicitly.

Theorem 3  For a fixed $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$ of the form $\tilde{p}_k = p^c_k + \tilde{\delta}_k p^\Delta_k$, $\tilde{\delta}_k \in \{\pm 1\}$, $k \in \mathcal{A}$, suppose that $A(\tilde{p}_{\mathcal{A}}, p^c_{\mathcal{E}})$ is regular and define

$$C := \Big(A(p^c) + \sum_{k\in\mathcal{A}} \tilde{\delta}_k A_k p^\Delta_k\Big)^{-1} = A^{-1}(\tilde{p}_{\mathcal{A}}, p^c_{\mathcal{E}}),$$
$$x^* := C\Big(b(p^c) + \sum_{k\in\mathcal{A}} \tilde{\delta}_k b_k p^\Delta_k\Big) = C b(\tilde{p}_{\mathcal{A}}, p^c_{\mathcal{E}}),$$
$$M := \sum_{k\in\mathcal{E}} |C A_k|\, p^\Delta_k.$$

If $\rho(M) < 1$, then every $x \in \Sigma^p_{AE}$ satisfies

$$|x - x^*| \le (I - M)^{-1} \sum_{k\in\mathcal{E}} |C(A_k x^* - b_k)|\, p^\Delta_k.$$

Proof  We precondition (1) by $C$, so (3) reads

$$|CA(p^c)x - Cb(p^c)| \le \sum_{k\in\mathcal{E}} |C(A_k x - b_k)|\, p^\Delta_k - \sum_{k\in\mathcal{A}} |C(A_k x - b_k)|\, p^\Delta_k, \tag{7}$$

that is,

$$|CA(p^c)x - Cb(p^c)| + \sum_{k\in\mathcal{A}} |C(A_k x - b_k)|\, p^\Delta_k \le \sum_{k\in\mathcal{E}} |C(A_k x - b_k)|\, p^\Delta_k. \tag{8}$$

Since $|u| + |v| \ge |u \pm v|$, we have

$$\Big|CA(p^c)x - Cb(p^c) + \sum_{k\in\mathcal{A}} \tilde{\delta}_k C(A_k x - b_k)\, p^\Delta_k\Big| \le \sum_{k\in\mathcal{E}} |C(A_k x - b_k)|\, p^\Delta_k.$$

Rearranging, we get

$$\Big|C\Big(A(p^c) + \sum_{k\in\mathcal{A}} \tilde{\delta}_k A_k p^\Delta_k\Big)x - C\Big(b(p^c) + \sum_{k\in\mathcal{A}} \tilde{\delta}_k b_k p^\Delta_k\Big)\Big| \le \sum_{k\in\mathcal{E}} |C(A_k x - b_k)|\, p^\Delta_k,$$

or,

$$|x - x^*| \le \sum_{k\in\mathcal{E}} |C(A_k x - b_k)|\, p^\Delta_k.$$

Now, we approximate the right-hand side from above:

$$|x - x^*| \le \sum_{k\in\mathcal{E}} |C(A_k x - b_k)|\, p^\Delta_k \le \sum_{k\in\mathcal{E}} |CA_k(x - x^*)|\, p^\Delta_k + \sum_{k\in\mathcal{E}} |C(A_k x^* - b_k)|\, p^\Delta_k$$
$$\le \sum_{k\in\mathcal{E}} |CA_k|\,|x - x^*|\, p^\Delta_k + \sum_{k\in\mathcal{E}} |C(A_k x^* - b_k)|\, p^\Delta_k = M|x - x^*| + \sum_{k\in\mathcal{E}} |C(A_k x^* - b_k)|\, p^\Delta_k.$$

Hence

$$(I - M)|x - x^*| \le \sum_{k\in\mathcal{E}} |C(A_k x^* - b_k)|\, p^\Delta_k.$$

Since $M \ge 0$ and $\rho(M) < 1$, we have $(I - M)^{-1} \ge 0$ and

$$|x - x^*| \le (I - M)^{-1} \sum_{k\in\mathcal{E}} |C(A_k x^* - b_k)|\, p^\Delta_k. \qquad \square$$

The application of Theorem 3 to the special case of the parametric united solution set has the following form, which is identical with the Bauer–Skeel method generalized to parametric united solution sets in Skalna (2006); Hladík (2012).

Corollary 2  Let $A(p^c)$ be regular and define

$$C := A^{-1}(p^c), \qquad x^* := Cb(p^c), \qquad M := \sum_{k\in\mathcal{E}} |CA_k|\, p^\Delta_k.$$

If $\rho(M) < 1$, then every $x \in \Sigma^p_{uni}$ satisfies

$$|x - x^*| \le (I - M)^{-1} \sum_{k\in\mathcal{E}} |C(A_k x^* - b_k)|\, p^\Delta_k.$$
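A floating-point sketch of Corollary 2 follows (same hypothetical data layout as in the earlier snippets; as Remark 1 below stresses, without rational arithmetic or directed rounding the result is indicative, not verified). It can serve as the enclose_united routine assumed in the sketch after Theorem 2:

```python
import numpy as np

def enclose_united(A, b, p_lo, p_hi):
    # Parametric Bauer-Skeel enclosure of the united solution set (Corollary 2).
    # A: (K+1, n, n), b: (K+1, n); index 0 holds the constant terms A_0, b_0.
    K, n = len(p_lo), A.shape[1]
    pc, pd = (p_lo + p_hi) / 2, (p_hi - p_lo) / 2
    C = np.linalg.inv(A[0] + np.tensordot(pc, A[1:], axes=1))  # A(p^c)^-1
    xs = C @ (b[0] + pc @ b[1:])                               # x* = C b(p^c)
    M = sum(np.abs(C @ A[k+1]) * pd[k] for k in range(K))
    if max(abs(np.linalg.eigvals(M))) >= 1:
        raise ValueError("parametric matrix is not strongly regular")
    r = np.linalg.inv(np.eye(n) - M) @ sum(
        np.abs(C @ (A[k+1] @ xs - b[k+1])) * pd[k] for k in range(K))
    return xs - r, xs + r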

In the special case of the parametric tolerable solution set we have the following.

Corollary 3  For a fixed $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$ let $A(\tilde{p}_{\mathcal{A}})$ be regular and define

$$C := \Big(A(p^c) + \sum_{k\in\mathcal{A}} \tilde{\delta}_k A_k p^\Delta_k\Big)^{-1} = A^{-1}(\tilde{p}_{\mathcal{A}}), \qquad x^* := Cb(p^c) = Cb(p^c_{\mathcal{E}}).$$

Then every $x \in \Sigma^p_{tol}$ satisfies

$$|x - x^*| \le \sum_{k\in\mathcal{E}} |Cb_k|\, p^\Delta_k.$$

The special case of the parametric controllable solution set is discussed thoroughly in Sect. 5.

Corollary 4  The intersection of the solution enclosures obtained by Theorem 3 (respectively, Corollary 3) for all $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$ is equal to the intersection of the solution enclosures obtained by Corollary 2 for all $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$.

Proof  The proof follows immediately from the equivalent representation of $C$ and $x^*$ presented in the formulation of Theorem 3. □

Thus, the derivation of the parametric AE version of the Bauer–Skeel method confirms Corollary 1.

Corollary 1, used with Bauer–Skeel enclosures for the particular united solution sets, gives the same result as the intersection of all enclosures by Theorem 3. However, Corollary 1 with some other methods for enclosing parametric united solution sets may give better bounds.

Example 1  Let us consider the example from Popova and Kramer (2011):

$$\begin{pmatrix} p_1 & p_1 + 1 \\ p_2 + 1 & -2p_4 \end{pmatrix} x = \begin{pmatrix} p_3 \\ -3p_2 + 1 \end{pmatrix},$$

where $p_1, p_2 \in [0, 1]$ and $p_3, p_4 \in [-1, 1]$. For the sake of simplicity, $\Sigma_{\forall p_4 \exists p_{123}}$ denotes the parametric AE solution set where the universal quantifier is applied to $p_4$ and the existential one elsewhere. Similar notation is used for other combinations of quantifiers.

In the case of $\Sigma_{\forall p_1 \exists p_{234}}$, see Fig. 1,

$$\square\big(\Sigma_{\exists p_{234}}(A(\overline{p}_1)) \cap \Sigma_{\exists p_{234}}(A(\underline{p}_1))\big) = \square \Sigma_{\exists p_{234}}(A(\underline{p}_1)).$$

That is why, by enclosing sharply

$$\square \Sigma_{\exists p_{234}}(A(\underline{p}_1)) = ([-2, 3], [-1, 1])^\top,$$

we enclose $\square \Sigma_{\forall p_1 \exists p_{234}}$ in a best way, although the set $\Sigma_{\exists p_{234}}(A(\overline{p}_1))$ is unbounded. The parametric Bauer–Skeel method for $\Sigma_{\exists p_{234}}(A(\underline{p}_1), \mathbf{p}_{234})$ gives the enclosure $([-11/3, 3], [-1, 1])^\top$, and the 25% overestimation of $x_1$ occurs because the method cannot account well for the row dependencies in $p_2$. Therefore, applying (6), we compute

$$\square \bigcup_{\tilde{p}_2 \in \{0, 1\}} \square\Sigma_{\exists p_{34}}(A(\underline{p}_1, \tilde{p}_2)) = ([-2, 3], [-1, 1])^\top.$$

For $\Sigma_{\forall p_2 \exists p_{134}}$ we cannot obtain an enclosure, since the assumption $\rho(M) < 1$ is not fulfilled.

In the case of $\Sigma_{\forall p_4 \exists p_{123}}$, see Fig. 2,

$$\Sigma_{\exists p_{123}}(A(\underline{p}_4)) \cap \Sigma_{\exists p_{123}}(A(\overline{p}_4)) \subseteq \square \Sigma_{\exists p_{123}}(A(\overline{p}_4)).$$

Since $\Sigma_{\exists p_{123}}(A(\underline{p}_4))$ is unbounded, we cannot find $\square\Sigma_{\forall p_4 \exists p_{123}}$ in this way, and we approximate the latter outwardly by $\square\Sigma_{\exists p_{123}}(A(\overline{p}_4))$.

Applying the Bauer–Skeel method for parametric united solution sets, we obtain

$$\square\Sigma_{\exists p_{123}}(A(\overline{p}_4), \mathbf{p}_{123}) = ([-4.9161, 4.4546], [-2.7203, 2.8742])^\top.$$

The overestimation is due to the row dependencies in $p_1$ and $p_2$. Therefore, applying (6), we compute

$$\square \bigcup_{\tilde{p}_1, \tilde{p}_2 \in \{0, 1\}} \square\Sigma_{\exists p_3}(A(\tilde{p}_1, \tilde{p}_2, p_3, \overline{p}_4)) = ([-2, 3], [-1, 1])^\top,$$

which is the interval hull of $\Sigma_{\exists p_{123}}(A(\overline{p}_4))$.

Remark 1  The formulation of the Bauer–Skeel method is in real arithmetic, therefore its implementation in floating-point arithmetic will not provide a guaranteed enclosure, especially for intervals with very small radii or ill-conditioned problems. All computations below based on the Bauer–Skeel method were done in rational arithmetic to avoid uncontrolled round-off errors. Instead of the Bauer–Skeel method for bounding a parametric united solution set, one can use the parametric fixed-point iteration, see Popova and Kramer (2007), which provides guaranteed enclosures of comparable quality under the same requirement for strong regularity of the parametric matrix. In fact, most of the general-purpose methods for bounding a parametric united solution set require strong regularity of the parametric matrix.

5 Another form of the Bauer–Skeel method

Below, we derive another form of the parametric Bauer–Skeel method under stronger assumptions.

Theorem 4  Let $A(p^c)$ be regular and define

$$C := A^{-1}(p^c), \qquad x^* := Cb(p^c), \qquad M := \sum_{k\in\mathcal{K}} |CA_k|\, p^\Delta_k.$$

If $\rho(M) < 1$, then every $x \in \Sigma^p_{AE}$ satisfies

$$|x - x^*| \le (I - M)^{-1}\Big(\sum_{k\in\mathcal{E}} |C(A_k x^* - b_k)|\, p^\Delta_k - \sum_{k\in\mathcal{A}} |C(A_k x^* - b_k)|\, p^\Delta_k\Big).$$

Proof  Consider the preconditioned parametric system $C\,A(p)x = C\,b(p)$. The characterization (3) for the preconditioned system reads (note that $CA(p^c) = I$ and $Cb(p^c) = x^*$)

$$|x - x^*| = |CA(p^c)x - Cb(p^c)| \le \sum_{k\in\mathcal{E}} |C(A_k x - b_k)|\, p^\Delta_k - \sum_{k\in\mathcal{A}} |C(A_k x - b_k)|\, p^\Delta_k.$$

For the right-hand side of the above inequality, due to

$$|u| - |v| \le |u + v| \le |u| + |v|,$$

we have

$$\sum_{k\in\mathcal{E}} |C(A_k x - A_k x^* + A_k x^* - b_k)|\, p^\Delta_k - \sum_{k\in\mathcal{A}} |C(A_k x - A_k x^* + A_k x^* - b_k)|\, p^\Delta_k$$
$$\le \sum_{k\in\mathcal{E}} |CA_k|\,|x - x^*|\, p^\Delta_k + \sum_{k\in\mathcal{E}} |C(A_k x^* - b_k)|\, p^\Delta_k + \sum_{k\in\mathcal{A}} |CA_k|\,|x - x^*|\, p^\Delta_k - \sum_{k\in\mathcal{A}} |C(A_k x^* - b_k)|\, p^\Delta_k,$$

which implies

$$\Big(I - \sum_{k\in\mathcal{K}} |CA_k|\, p^\Delta_k\Big)|x - x^*| \le \sum_{k\in\mathcal{E}} |C(A_k x^* - b_k)|\, p^\Delta_k - \sum_{k\in\mathcal{A}} |C(A_k x^* - b_k)|\, p^\Delta_k.$$

Since $M \ge 0$ and $\rho(M) < 1$, we have $(I - M)^{-1} \ge 0$ and thus the statement of the theorem. □

[Fig. 1  Solution sets $\Sigma_{\forall p_1 \exists p_{234}}$ of the linear system from Example 1 for $p_1 \in \{1, 0\}$ (blue, green) and their intersection (yellow) (color figure online)]

[Fig. 2  Solution sets $\Sigma_{\forall p_4 \exists p_{123}}$ of the linear system from Example 1 for $p_4 \in \{-1, 1\}$ (green, blue) and their intersection (red) (color figure online)]
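In code, Theorem 4 is a single inversion plus one signed residual sum. A sketch under the same hypothetical layout as the earlier snippets (again floating point, hence indicative only); the final sign check is a sound emptiness test, cf. Example 5 below:

```python
import numpy as np

def enclose_ae_single_step(A, b, p_lo, p_hi, E):
    # Single-step parametric Bauer-Skeel AE enclosure (Theorem 4).
    # E: indices (1..K) of the existentially quantified parameters.
    K, n = len(p_lo), A.shape[1]
    pc, pd = (p_lo + p_hi) / 2, (p_hi - p_lo) / 2
    C = np.linalg.inv(A[0] + np.tensordot(pc, A[1:], axes=1))
    xs = C @ (b[0] + pc @ b[1:])
    M = sum(np.abs(C @ A[k]) * pd[k-1] for k in range(1, K + 1))
    if max(abs(np.linalg.eigvals(M))) >= 1:
        raise ValueError("A(p) is not strongly regular over the whole domain")
    rhs = sum((1 if k in E else -1) * np.abs(C @ (A[k] @ xs - b[k])) * pd[k-1]
              for k in range(1, K + 1))
    r = np.linalg.inv(np.eye(n) - M) @ rhs
    if np.any(r < 0):
        return None   # no x can satisfy the bound: the AE solution set is empty
    return xs - r, xs + r
```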

In the special case of a united parametric solution set, Theorem 4 has the same form as Corollary 2. In the special case of a parametric tolerable solution set, Theorem 4 reads as follows.

Corollary 5  Let $A(p^c) = A(p^c_{\mathcal{A}})$ be regular and define

$$C := A^{-1}(p^c_{\mathcal{A}}), \qquad x^* := Cb(p^c) = Cb(p^c_{\mathcal{E}}), \qquad M := \sum_{k\in\mathcal{A}} |CA_k|\, p^\Delta_k.$$

If $\rho(M) < 1$, then every $x \in \Sigma^p_{tol}$ satisfies

$$|x - x^*| \le (I - M)^{-1}\Big(\sum_{k\in\mathcal{E}} |Cb_k|\, p^\Delta_k - \sum_{k\in\mathcal{A}} |CA_k x^*|\, p^\Delta_k\Big).$$

In the special case of parametric controllable solution sets, Theorem 4 reads as follows.

Corollary 6  Let $A(p^c) = A(p^c_{\mathcal{E}})$ be regular and define

$$C := A^{-1}(p^c_{\mathcal{E}}), \qquad x^* := Cb(p^c) = Cb(p^c_{\mathcal{A}}), \qquad M := \sum_{k\in\mathcal{E}} |CA_k|\, p^\Delta_k.$$

If $\rho(M) < 1$, then every $x \in \Sigma^p_{cont}$ satisfies

$$|x - x^*| \le (I - M)^{-1}\Big(\sum_{k\in\mathcal{E}} |CA_k x^*|\, p^\Delta_k - \sum_{k\in\mathcal{A}} |Cb_k|\, p^\Delta_k\Big).$$

The application of Corollary 4 requires strong regularity of the parametric matrix $A(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}})$ on the domain $\mathbf{p}_{\mathcal{E}}$ for some $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$. Theorem 4 has a stronger requirement: strong regularity of $A(\tilde{p}_{\mathcal{A}}, p_{\mathcal{E}})$ on $\mathbf{p}_{\mathcal{E}}$ for all $\tilde{p}_{\mathcal{A}} \in \mathbf{p}_{\mathcal{A}}$, resp. for all $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$. Therefore Corollary 1, resp. Corollary 4, has a larger scope of applicability (and a bigger computational complexity) than Theorem 4. Let us compare the two approaches for bounding parametric tolerable and controllable solution sets.

Example 2  Obtain outer enclosures of the parametric tolerable solution set for

$$A(p) = \begin{pmatrix} p_1 & p_2 + \tfrac12 \\ -2p_2 & p_1 + 1 \end{pmatrix}, \qquad b(q) = \begin{pmatrix} q_1 \\ q_1 - q_2 \end{pmatrix}$$

and $p_1 \in [0, 1]$, $p_2 \in [\tfrac13, 1]$, $q_1, q_2 \in [-1, 2]$. The exact interval hull of the parametric tolerable solution set is $([-\tfrac25, \tfrac45], [-\tfrac23, \tfrac43])^\top$. Applying Corollary 5 we obtain the enclosure

$$([-36.904, 37.555], [-24.80, 25.38])^\top,$$

which overestimates the hull by more than 95%. The application of Corollary 4 yields the interval hull. The conservative enclosure of the tolerable solution set produced by Corollary 5 is natural. Since every parametric tolerable solution set is a convex polyhedron (Popova 2012), its interval hull is attained at particular end-points of the parameters, which is the approach exploited by Corollary 4. Indeed, shrinking the interval for $p_2$ to $[\tfrac{999}{1000}, 1]$, the overestimation produced by Theorem 4 is reduced to 45%, resp. 35%. On the contrary, when we enlarge the interval for $p_2$, the parametric matrix is no more strongly regular.

While the application of Theorem 4 is not suitable for bounding parametric tolerable solution sets, this theorem gives a better enclosure for a parametric controllable solution set than the enclosure obtained by Corollary 4 (the intersection of the solution enclosures obtained by Theorem 3 for all $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$).

Proposition 1  Under the same assumptions, the enclosure of the parametric controllable solution set computed by Corollary 6 is a subset of the enclosure computed by Corollary 4.

Proof  For a fixed end-point of a fixed solution component, the intersection of the solution enclosures obtained by Theorem 3 for all $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$ is attained at a particular $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$. Let us consider an upper bound attained at a particular $\tilde{p}_{\mathcal{A}} \in B_{\mathcal{A}}$. With the notations from Corollary 6, that particular right end-point of the Bauer–Skeel enclosure by Theorem 3 is

$$x^* + C\sum_{j\in\mathcal{A}} \tilde{\delta}_j b_j p^\Delta_j + (I - M)^{-1} \sum_{k\in\mathcal{E}} \Big|CA_k\Big(x^* + C\sum_{j\in\mathcal{A}} \tilde{\delta}_j b_j p^\Delta_j\Big)\Big|\, p^\Delta_k.$$

We estimate the right end-point from below as

$$x^* - \sum_{j\in\mathcal{A}} |Cb_j|\, p^\Delta_j + (I - M)^{-1} \sum_{k\in\mathcal{E}} |CA_k x^*|\, p^\Delta_k - (I - M)^{-1} \sum_{k\in\mathcal{E}} |CA_k|\, p^\Delta_k \sum_{j\in\mathcal{A}} |Cb_j|\, p^\Delta_j$$
$$= x^* + (I - M)^{-1} \sum_{k\in\mathcal{E}} |CA_k x^*|\, p^\Delta_k - \big(I + (I - M)^{-1} M\big) \sum_{j\in\mathcal{A}} |Cb_j|\, p^\Delta_j.$$

Using $(I - M)^{-1} = I + (I - M)^{-1}M$, we obtain

$$x^* + (I - M)^{-1}\Big(\sum_{k\in\mathcal{E}} |CA_k x^*|\, p^\Delta_k - \sum_{j\in\mathcal{A}} |Cb_j|\, p^\Delta_j\Big),$$

which is the right end-point of the enclosure by Corollary 6. Similarly we prove the corresponding relation between the left end-points of the enclosures. □

Example 3  Consider a parametric linear system where

$$A(p) = \begin{pmatrix} p_1 & -p_2 \\ p_2 & p_1 \end{pmatrix}, \qquad b(q) = \begin{pmatrix} 2q \\ 2q \end{pmatrix}$$

and $p_1 \in [0, \tfrac12]$, $p_2 \in [1, \tfrac32]$, $q \in [1, \tfrac32]$. The exact interval hull of the parametric controllable solution set is $([2, 12/5], [-2, -6/5])^\top$; see Fig. 3. Applying Corollary 4 we obtain the enclosure

$$\Sigma^p_{cont} \subseteq ([1.186, 2.902], [-2.286, -0.263])^\top,$$

overestimating the components of the interval hull by more than 76%, resp. 60%. However, by Theorem 4 (Corollary 6), we obtain the enclosure

$$\Sigma^p_{cont} \subseteq ([1.7802, 2.8352], [-2.2198, -0.8571])^\top,$$

and the overestimation is 62%, resp. 41%.

The Bauer–Skeel method, in any of its forms, requires strong regularity of the parametric matrix. Strong regularity (in the present formulation, $\rho(M) < 1$ or $(I - M)^{-1} \ge 0$) must be checked when implementing the method. Since it is a sufficient condition for a parametric matrix to be regular, the Bauer–Skeel method may fail for some regular matrices which are not strongly regular; see the next example.

Example 4  Consider the parametric system from Example 3 with other domains for the parameters: $p_1 \in [\tfrac12, \tfrac32]$, $p_2 \in [0, 1]$ and $q \in [1, 2]$. The parametric matrix is regular but not strongly regular. Therefore, by Theorem 4 (resp. Corollary 6), we cannot find outer bounds for the parametric controllable solution set, which is connected and bounded, see Fig. 4, and has interval hull $([8/3,\ 2(1+\sqrt{2})], [0, 4])^\top$.

Example 5  We look for the controllable solution set of the parametric system from Example 3, enlarging the domain for $q$ to $q \in [1, \tfrac52]$. Although the parametric matrix is strongly regular on the domain for $p_1$, $p_2$, the inequality

$$\sum_{k\in\mathcal{E}} |C(A_k x^* - b_k)|\, p^\Delta_k \ge \sum_{k\in\mathcal{A}} |C(A_k x^* - b_k)|\, p^\Delta_k$$

does not hold, which means that $\Sigma^p_{cont} = \emptyset$. Thus, by Theorem 4 we not only compute enclosures of the controllable solution set, but also can sometimes detect emptiness.

6 LP enclosure for the parametric tolerable solution set

Besides the united solution set, tolerable solutions are the most studied AE solutions to interval linear systems. In the non-parametric case, there are plenty of results, see Shary (1995); Beaumont and Philippe (2001); Shary (2002); Pivkina and Kreinovich (2006); Rohn (2006); Wang (2008), among others. The only generalization to a special class of parametric tolerable solution sets is found in Sharaya and Shary (2011).

[Fig. 3  The controllable solution set for the linear system from Example 3, represented as the intersection of the united solution sets for $q = 1$ (light gray) and $q = 3/2$ (dark gray), together with its interval hull and its enclosures obtained by Corollary 5 and Corollary 6]

[Fig. 4  The parametric controllable solution set for the linear system from Example 4]

Corollary 4 provides an enclosure to the tolerable solution set $\Sigma^p_{tol}$ which is much sharper than the enclosure provided by Theorem 4. By a careful inspection of the characterization (3) we can derive a polyhedral approximation of $\Sigma^p_{tol}$.

Proposition 2  For every $x \in \Sigma^p_{tol}$ there are $y^k \in \mathbb{R}^n$, $k \in \mathcal{A}$, such that

$$A(p^c)x + \sum_{k\in\mathcal{A}} p^\Delta_k y^k \le \sum_{k\in\mathcal{E}} |b_k|\, p^\Delta_k + b(p^c), \tag{9a}$$
$$-A(p^c)x + \sum_{k\in\mathcal{A}} p^\Delta_k y^k \le \sum_{k\in\mathcal{E}} |b_k|\, p^\Delta_k - b(p^c), \tag{9b}$$
$$A_k x \le y^k, \quad -A_k x \le y^k, \quad \forall k \in \mathcal{A}. \tag{9c}$$

Moreover, for parametric systems involving only 1st class existentially quantified parameters, the $x$ solutions to (9) form $\Sigma^p_{tol}$.

Proof  By (3), each $x \in \Sigma^p_{tol}$ satisfies

$$|A(p^c)x - b(p^c)| + \sum_{k\in\mathcal{A}} |A_k x|\, p^\Delta_k \le \sum_{k\in\mathcal{E}} |b_k|\, p^\Delta_k,$$

or,

$$A(p^c)x + \sum_{k\in\mathcal{A}} |A_k x|\, p^\Delta_k \le \sum_{k\in\mathcal{E}} |b_k|\, p^\Delta_k + b(p^c),$$
$$-A(p^c)x + \sum_{k\in\mathcal{A}} |A_k x|\, p^\Delta_k \le \sum_{k\in\mathcal{E}} |b_k|\, p^\Delta_k - b(p^c).$$

Substituting $y^k := |A_k x|$ we get (9). □

The system (9) consists of linear inequalities with respect to $x$ and the $y^k$s, so we can employ linear programming techniques to obtain lower and upper bounds for the components of $x$.

Proposition 2 also shows that the parametric tolerable solution set is a convex polyhedron for parametric linear systems involving only 1st class parameters. This is in accordance with the results from Sharaya and Shary (2011); Popova (2012).

Linear programming (LP) techniques are well studied for bounding non-parametric AE solution sets, see Beaumont and Philippe (2001). Proposition 2 generalizes the LP approach to parametric tolerable solution sets and provides exact bounds when the involved ∃-parameters are only of 1st class. Recall that a parametric matrix $A(p)$ is row-independent if for every $k = 1, \ldots, K$ and every $i = 1, \ldots, n$ the following set has cardinality at most one:

$$\{j \in \{1, \ldots, n\} \mid (A_k)_{ij} \neq 0\}.$$

Due to the equality relation in (Popova 2012, eq. (5.3)), inner and outer inclusions of a tolerable solution set, where the matrix involves only row-independent parameters and the right-hand side vector involves only 1st class parameters, can be computed by methods for the non-parametric case. Therefore, Proposition 2 is particularly useful for linear systems involving row-dependent parameters in the matrix and a right-hand side vector with independent components.

By using a standard linear programming technique to calculate lower and upper bounds on the $x$ solutions of (9), we have to solve $2n$ linear programs, each of them with $n(1 + \mathrm{Card}(\mathcal{A}))$ variables and $2n(1 + \mathrm{Card}(\mathcal{A}))$ constraints. For a non-parametric tolerable system $\mathbf{A}x = \mathbf{b}$, this number is too conservative. The system (9) may be further reduced (Fiedler et al. 2006; Rohn 1986) and the interval hull of the tolerable solution set is determined by solving $2n$ linear programs, each of them with only $2n$ variables and $4n$ constraints. If we call Corollary 1 to compute an enclosure and linear programming to calculate the subordinate interval hulls, then we have to solve $2n \cdot 2^{\mathrm{Card}(\mathcal{A})}$ linear programs, each with $n$ variables and $2n$ constraints.
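To exercise Proposition 2 numerically, one can pass (9) directly to an LP solver. A sketch using scipy.optimize.linprog (same hypothetical (K+1, n, n)/(K+1, n) layout as in the earlier snippets; the variable vector stacks x with the auxiliary vectors y^k, k in A):

```python
import numpy as np
from scipy.optimize import linprog

def lp_tolerable_bounds(A, b, p_lo, p_hi, A_idx):
    # Bound the parametric tolerable solution set via the linear system (9).
    # A_idx: indices (1..K) of the universally quantified parameters.
    n, m = A.shape[1], len(A_idx)
    pc, pd = (p_lo + p_hi) / 2, (p_hi - p_lo) / 2
    E_idx = [k for k in range(1, len(p_lo) + 1) if k not in A_idx]
    Apc = A[0] + np.tensordot(pc, A[1:], axes=1)
    bpc = b[0] + pc @ b[1:]
    s = sum(np.abs(b[k]) * pd[k-1] for k in E_idx)        # sum_E |b_k| p_k^D
    Y = (np.hstack([pd[k-1] * np.eye(n) for k in A_idx])
         if m else np.zeros((n, 0)))
    G = [np.hstack([Apc, Y]), np.hstack([-Apc, Y])]       # (9a), (9b)
    h = [s + bpc, s - bpc]
    for j, k in enumerate(A_idx):                          # (9c)
        sel = np.zeros((n, n * m)); sel[:, j*n:(j+1)*n] = -np.eye(n)
        G += [np.hstack([A[k], sel]), np.hstack([-A[k], sel])]
        h += [np.zeros(n), np.zeros(n)]
    G, h = np.vstack(G), np.hstack(h)
    lo, hi = np.empty(n), np.empty(n)
    for i in range(n):                                     # 2n linear programs
        c = np.zeros(n * (1 + m)); c[i] = 1
        lo[i] = linprog(c, A_ub=G, b_ub=h, bounds=(None, None)).fun
        hi[i] = -linprog(-c, A_ub=G, b_ub=h, bounds=(None, None)).fun
    return lo, hi
```

Each call minimizes or maximizes one coordinate of x over the polyhedron (9), so the returned box is the interval hull of the polyhedral relaxation; it coincides with the hull of the tolerable set exactly in the 1st class ∃-parameter case described above.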

Example 6  Motivated by Example 5.2 in Popova (2012), let

$$A^{(1)}(p) = \begin{pmatrix} p_1 & p_2 \\ p_3 & p_1 + 1 \end{pmatrix}, \qquad A^{(2)}(r) = \begin{pmatrix} r & r + \tfrac12 \\ -2r & r + 1 \end{pmatrix},$$
$$A^{(3)}(s) = \begin{pmatrix} s_1 & s_1 + \tfrac12 \\ -2s_2 & s_2 + 1 \end{pmatrix}, \qquad b(q) = \begin{pmatrix} q_1 \\ q_1 - q_2 \end{pmatrix},$$

where $p_1, r, s_1, s_2 \in [0, 1]$, $p_2 \in [\tfrac12, \tfrac32]$, $p_3 \in [-2, 0]$ and $q_1, q_2 \in [-1, 2]$. Relaxing the parametric dependencies in the interval systems $A^{(1)}(p)x = b(q)$, $A^{(2)}(r)x = b(q)$, and $A^{(3)}(s)x = b(q)$, we get a standard interval system $\mathbf{A}x = \mathbf{b}$ reading

$$\begin{pmatrix} [0, 1] & [\tfrac12, \tfrac32] \\ [-2, 0] & [1, 2] \end{pmatrix} x = \begin{pmatrix} [-1, 2] \\ [-3, 3] \end{pmatrix}.$$

Consider first the interval systems $A^{(1)}(p)x = b(q)$ and $A^{(1)}(p)x = \mathbf{b}$. Applying Corollary 4 we obtain

$$\Sigma_{tol}(A^{(1)}(p), b(q), \mathbf{p}, \mathbf{q}) \subseteq ([-\tfrac25, \tfrac45], [-\tfrac23, \tfrac43])^\top,$$
$$\Sigma_{tol}(A^{(1)}(p), \mathbf{b}, \mathbf{p}) \subseteq ([-1.167, 1.7], [-0.667, 1.334])^\top.$$

The two parametric AE solution sets and the corresponding enclosing boxes are presented in Fig. 5. Theorem 4 cannot be applied since the parametric matrix is not strongly regular. A linear programming approach based on Proposition 2 gives

$$\Sigma_{tol}(A^{(1)}(p), b(q), \mathbf{p}, \mathbf{q}) \subseteq ([-0.4, 0.8], [-1, 1.286])^\top,$$
$$\Sigma_{tol}(A^{(1)}(p), \mathbf{b}, \mathbf{p}) \subseteq ([-1.3, 1.7], [-1, 1.4])^\top,$$
$$\Sigma_{tol}(\mathbf{A}, \mathbf{b}) \subseteq ([-1.167, 1.625], [-0.667, 1.334])^\top.$$

The LP enclosures to $\Sigma_{tol}(A^{(1)}(p), b(q), \mathbf{p}, \mathbf{q})$ and $\Sigma_{tol}(A^{(1)}(p), \mathbf{b}, \mathbf{p})$ are not optimal, since the system involves a 2nd class existentially quantified parameter $p_1$. Since the matrix $A^{(1)}(p)$ involves only row-independent parameters,

$$\Sigma_{tol}(A^{(1)}(p), \mathbf{b}, \mathbf{p}) = \Sigma_{tol}(\mathbf{A}, \mathbf{b}) \subseteq ([-1.167, 1.625], [-0.667, 1.334])^\top,$$

which is the interval hull.

Now, we consider the systems $A^{(2)}(r)x = b(q)$ and $A^{(2)}(r)x = \mathbf{b}$. For these systems Corollary 4 gives the exact interval hulls

$$\Sigma_{tol}(A^{(2)}(r), \mathbf{b}, \mathbf{r}) = ([-1.3, 1.7], [-1, 1.4])^\top,$$
$$\Sigma_{tol}(A^{(2)}(r), b(q), \mathbf{r}, \mathbf{q}) = ([-0.4, 0.8], [-1, 1.4])^\top.$$

Proposition 2 gives the same enclosures.

For the parametric interval system $A^{(3)}(s)x = \mathbf{b}$, Corollary 4 yields the exact hull

$$\Sigma_{tol}(A^{(3)}(s), \mathbf{b}, \mathbf{s}) = ([-1.3, 1.7], [-1, 1.4])^\top.$$

Since all parameters are of 1st class only, Proposition 2 gives the same result.

7 Conclusion

This paper presents a first attempt to propose and investigate methods providing outer bounds for parametric AE solution sets. The methods are general ones: they are applicable to linear systems involving arbitrary linear dependencies between interval parameters, and the parametric AE solution sets may be defined so that ∀- and ∃-parameters are mixed in both sides of the equations. Being the most general, these methods are applicable to the special cases of non-parametric AE solution sets, in particular non-parametric tolerable or controllable solution sets.

From a methodological point of view, the methods we consider are based on a simple (though not always complete) Oettli–Prager-type description (3) of the parametric AE solution sets. This allows us to obtain bounds for the parametric AE solution sets either by bounding only parametric united solution sets or by using only real arithmetic and the properties of classical interval arithmetic. This makes the main methodological and computational difference between the methodology employed in this paper and the methodology used so far for estimating non-parametric AE solution sets (Shary 1995, 1997, 2002; Goldsztejn 2005; Goldsztejn and Chabert 2006), based on the arithmetic of proper and improper intervals (called Kaucher interval arithmetic).

The methods we present here provide outer bounds for non-empty, connected and bounded parametric AE solution sets. The first approach intersects inclusions of parametric united solution sets for all combinations of the end-points of the ∀-parameters. This approach has exponential computational complexity, but it provides very sharp estimations of the AE solution sets, especially for tolerable solution sets and for general parametric AE solution sets when combined with sharp bounds for the linear ∃-parameters. The second method we discuss is a parametric AE generalization of the single-step Bauer–Skeel method used so far for bounding parametric united solution sets. In the special cases of non-parametric (tolerable, controllable) AE solution sets, this new method expands the range of available methods for outer enclosures. However, while most of the known methods for enclosing non-parametric AE solution sets are based on Kaucher interval arithmetic, the present method is based on classical interval arithmetic. Also, it is a direct method and therefore a fast one. Finally, for parametric tolerable solution sets, we proposed a linear programming based method, which utilizes a polyhedral approximation of the set. When each existentially quantified parameter is involved in only one equation of the system, this method yields the interval hull of the parametric AE solution set.

We demonstrated that the approach intersecting enclosures of parametric united solution sets for all combinations of the end-points of ∀-parameters is applicable to a larger class of parametric AE solution sets compared to the parametric Bauer–Skeel AE method. Despite its computational complexity, the first approach may be more suitable for bounding the tolerable solution set of large-scale parametric systems if one exploits distributed computations and modern methods for solving large-scale point systems which do not invert the matrix. On the other hand, the parametric Bauer–Skeel AE method provides better bounds for the parametric controllable solution sets. This method implies a simple necessary (and sometimes sufficient) condition for any parametric AE solution set to be non-empty.

[Fig. 5  The tolerable solution sets for the linear systems $A^{(1)}(p)x = b(q)$ (dark gray) and $A^{(1)}(p)x = \mathbf{b}$ (light gray) from Example 6, together with the enclosing boxes obtained by Corollary 4]

The present formulation of the parametric Bauer–Skeel AE method is in real arithmetic, therefore its implementation in floating-point arithmetic will not provide a guaranteed enclosure, unless combined with suitably chosen directed rounding. A self-verified method, which corresponds to the present parametric Bauer–Skeel AE method and provides guaranteed outer bounds for nonempty, connected and bounded parametric AE solution sets, will be presented in a separate paper.

The parametric Bauer–Skeel AE method and the intersection of enclosures obtained by a self-verified method can be used for bounding only connected and bounded solution sets. However, the interval Gauss–Seidel method, where the interval division is extended to allow division by an interval containing zero (Goldsztejn and Chabert 2006), can be used to enclose bounded disconnected solution sets. So, a parametric generalization of the Gauss–Seidel method may be helpful sometimes.

The parametric Bauer–Skeel AE method and most of the general-purpose parametric self-verified methods do not provide sharp enclosures of the parametric united solution set when the system involves row-dependent parameters. A parametric generalization of the right preconditioning process, considered in Goldsztejn (2005) for non-parametric AE systems, may also be helpful.

When searching for best estimations of parametric AE solution sets, one has to take into account the inclusion relations between such solution sets (Popova 2012) and the properties of the methods.

Acknowledgments  This work was inspired by the discussions held during the Dagstuhl seminar 11371 in Dagstuhl, Germany, September 2011. E. Popova was partially supported by the Bulgarian National Science Fund under grant No. DO 02-359/2008. M. Hladík was partially supported by the Czech Science Foundation Grant P403/12/1947. The authors thank the anonymous reviewers for the numerous remarks improving readability of the paper.

References

Beaumont O, Philippe B (2001) Linear interval tolerance problem and linear programming techniques. Reliab Comput 7(6):433–447

Busłowicz M (2010) Robust stability of positive continuous-time linear systems with delays. Int J Appl Math Comput Sci 20(4):665–670

Elishakoff I, Ohsaki M (2010) Optimization and anti-optimization of structures under uncertainty. Imperial College Press, London, p 424

Fiedler M, Nedoma J, Ramík J, Rohn J, Zimmermann K (2006) Linear optimization problems with inexact data. Springer, New York

Goldsztejn A (2005) A right-preconditioning process for the formal-algebraic approach to inner and outer estimation of AE-solution sets. Reliab Comput 11(6):443–478

Goldsztejn A, Chabert G (2006) On the approximation of linear AE-solution sets. In: Post-proceedings of the 12th GAMM–IMACS International Symposium on scientific computing, computer arithmetic and validated numerics, IEEE Computer Society Press, Duisburg

Hladík M (2012) Enclosures for the solution set of parametric interval linear systems. Int J Appl Math Comput Sci 22(3):561–574

Lagoa C, Barmish B (2002) Distributionally robust Monte Carlo simulation: a tutorial survey. In: Proceedings of the 15th IFAC World Congress, IFAC, pp 1327–1338

Matcovschi M, Pastravanu O (2007) Box-constrained stabilization for parametric uncertain systems. In: Petre E et al (eds) Proceedings of SINTES 13, International Symposium on system theory, automation, robotics, computers, informatics, electronics and instrumentation, Craiova, pp 140–145

Pivkina I, Kreinovich V (2006) Finding least expensive tolerance solutions and least expensive tolerance revisions: algorithms and computational complexity. Departmental technical reports (CS) 207, University of Texas at El Paso, http://digitalcommons.utep.edu/cs_techrep/207

Popova ED (2006) Computer-assisted proofs in solving linear parametric problems. In: Post-proceedings of the 12th GAMM–IMACS International Symposium on scientific computing, computer arithmetic and validated numerics, IEEE Computer Society Press, Duisburg

Popova ED (2012) Explicit description of AE solution sets for parametric linear systems. SIAM J Matrix Anal Appl 33(4):1172–1189

Popova ED, Kramer W (2007) Inner and outer bounds for the solution set of parametric linear systems. J Comput Appl Math 199(2):310–316

Popova ED, Kramer W (2011) Characterization of AE solution sets to a class of parametric linear systems. Comptes Rendus de L'Academie Bulgare des Sciences 64(3):325–332

Rohn J (1986) Inner solutions of linear interval systems. In: Nickel K (ed) Proceedings of the International Symposium on Interval Mathematics 1985, LNCS, vol 212, Springer, Berlin, pp 157–158

Rohn J (2006) Solvability of systems of interval linear equations and inequalities. In: Fiedler M et al (eds) Linear optimization problems with inexact data, chapter 2, Springer, New York, pp 35–77

Rohn J (2010) An improvement of the Bauer–Skeel bounds. Technical report V-1065, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, http://uivtx.cs.cas.cz/~rohn/publist/bauerskeel.pdf

Sharaya IA, Shary SP (2011) Tolerable solution set for interval linear systems with constraints on coefficients. Reliab Comput 15(4):345–357

Shary SP (1995) Solving the linear interval tolerance problem. Math Comput Simulat 39:53–85

Shary SP (1997) Controllable solution set to interval static systems. Appl Math Comput 86(2–3):185–196

Shary SP (2002) A new technique in systems analysis under interval uncertainty and ambiguity. Reliab Comput 8(5):321–418

Skalna I (2006) A method for outer interval solution of systems of linear equations depending linearly on interval parameters. Reliab Comput 12(2):107–120

Sokolova S, Kuzmina E (2008) Dynamic properties of interval systems. In: SPIIRAS Proceedings, Nauka, 7, pp 215–221 (in Russian)

Stewart GW (1998) Matrix algorithms, Vol. 1: Basic decompositions. SIAM, Philadelphia

Wang Y (2008) Interpretable interval constraint solvers in semantic tolerance analysis. Comput-Aided Des Appl 5(5):654–666

Wang S et al (2003) Solution sets of interval-valued fuzzy relational equations. Fuzzy Optimization and Decision Making 2:41–60


SIAM J. MATRIX ANAL. APPL., Vol. 31, No. 4, pp. 2116–2129. © 2010 Society for Industrial and Applied Mathematics

BOUNDS ON REAL EIGENVALUES AND SINGULAR VALUES OF INTERVAL MATRICES∗

MILAN HLADÍK†, DAVID DANEY‡, AND ELIAS TSIGARIDAS‡

Abstract. We study bounds on real eigenvalues of interval matrices, and our aim is to develop fast computable formulae that produce as-sharp-as-possible bounds. We consider two cases: general and symmetric interval matrices. We focus on the latter case, since on the one hand such interval matrices have many applications in mechanics and engineering, and on the other hand many results from classical matrix analysis could be applied to them. We also provide bounds for the singular values of (generally nonsquare) interval matrices. Finally, we illustrate and compare the various approaches by a series of examples.

Key words. interval matrix, interval analysis, real eigenvalue, eigenvalue bounds, singular value

AMS subject classifications. 65G40, 65F15, 15A18

DOI. 10.1137/090753991

1. Introduction. Many real-life problems suffer from diverse uncertainties, forexample, due to data measurement errors. Considering intervals instead of fixed realnumbers is one possible way to tackle such uncertainties. In this paper, we study realeigenvalues of matrices, the entries of which vary simultaneously and independentlyinside some given intervals. The set of all possible eigenvalues forms a finite unionof several compact real intervals (see Proposition 2.1), and our aim is to computeas-sharp-as-possible bounds for these intervals.

The problem of computing lower and upper bounds for the eigenvalue set is wellstudied; see, e.g., [3, 10, 17, 27, 28, 29, 30, 32]. In recent years some effort wasmade in developing and extending diverse inclusion sets for eigenvalues [8, 22] suchas Gerschgorin discs or Cassini ovals. Even though such inclusion sets are more orless easy to compute and can be extended to interval matrices, the intervals that theyproduce are big overestimations of the actual ones.

The interval eigenvalue problem has a lot of applications in the field of mechanicsand engineering. Let us mention for instance automobile suspension systems [27], massstructures [26], vibrating systems [11], principal component analysis [12], and robotics[5]. In many cases, the properties of a system are given by the eigenvalues (or singularvalues) of a Jacobian matrix. A modern approach is to consider that the parametersof this matrix vary in a set of continuous states; hence it is useful to consider thismatrix as an interval matrix. The propagation of an interval representation of theparameters in the matrix allows us to bound the properties of the system over all itsstates. This is useful for designing a system, as well as to certify its performance.

Our goal is to revise and improve the existing formulae for bounding eigenvalues of interval matrices. We focus on algorithms that are useful from a practical point

∗Received by the editors March 26, 2009; accepted for publication (in revised form) by A. Frommer March 29, 2010; published electronically June 11, 2010.

http://www.siam.org/journals/simax/31-4/75399.html
†Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University, Malostranske nam. 25, 11800, Prague, Czech Republic ([email protected]), and INRIA Sophia-Antipolis Mediterranee, 2004 route des Lucioles, BP 93, 06902 Sophia-Antipolis Cedex, France ([email protected]).
‡INRIA Sophia-Antipolis Mediterranee, 2004 route des Lucioles, BP 93, 06902 Sophia-Antipolis Cedex, France ([email protected], [email protected]).


of view, meaning that sometimes we sacrifice the accuracy of the results for speed. Nevertheless, the bounds that we derive are sharp enough for almost all practical purposes and are excellent candidates for initial estimates for various iterative algorithms [17].

We assume that the reader is familiar with the basics of interval arithmetic; otherwise we refer the reader to [2, 14, 24]. An interval matrix is defined as

$\mathbf{A} := [\underline{A}, \overline{A}] = \{A \in \mathbb{R}^{m\times n};\ \underline{A} \le A \le \overline{A}\},$

where $\underline{A}, \overline{A} \in \mathbb{R}^{m\times n}$, $\underline{A} \le \overline{A}$, are given matrices. By

$A_c := \frac{1}{2}(\overline{A} + \underline{A}), \qquad A_\Delta := \frac{1}{2}(\overline{A} - \underline{A}),$

we denote the midpoint and the radius of $\mathbf{A}$, respectively. A symmetric interval matrix is defined as

$\mathbf{A}^S := \{A \in \mathbf{A} \mid A = A^T\}.$

By an inner approximation of a set $S$ we mean any subset of $S$, and by an outer approximation of $S$ we mean a set containing $S$ as a subset. Our aim is to develop formulae for calculating an outer approximation of the eigenvalue set of a (general or symmetric) interval matrix. Moreover, the following notation will be used throughout the paper:

$|v| = \max\{-\underline{v}, \overline{v}\}$   magnitude (absolute value) of an interval $v$
$|\mathbf{A}|$   magnitude (absolute value) of an interval matrix $\mathbf{A}$, i.e., $|\mathbf{A}|_{ij} = |\mathbf{A}_{ij}|$
$\mathrm{diag}(v)$   a diagonal matrix with entries $v_1, \dots, v_n$
$\|A\|_p = \max_{x \neq 0} \|Ax\|_p / \|x\|_p$   matrix $p$-norm
$\kappa_p(A) = \|A\|_p \|A^{-1}\|_p$   condition number (in $p$-norm)
$\sigma_{\max}(A)$   maximal singular value of a matrix $A$
$\rho(A)$   spectral radius of a matrix $A$
$\lambda^{\mathrm{Re}}(A)$   real part of an eigenvalue of a matrix $A$
$\lambda^{\mathrm{Im}}(A)$   imaginary part of an eigenvalue of a matrix $A$

The paper consists of two parts: the first is devoted to general interval matrices, and the second to symmetric interval matrices. Symmetry causes dependency between interval quantities, but, on the other hand, stronger theorems are applicable. Moreover, bounds on singular values of interval matrices could be obtained as corollaries.

2. General interval matrix. Let A be a square interval matrix, and let

$\Lambda := \{\lambda \in \mathbb{R};\ Ax = \lambda x,\ x \neq 0,\ A \in \mathbf{A}\}$

be the set of all real eigenvalues of matrices in $\mathbf{A}$.

Proposition 2.1. The set $\Lambda$ is a finite union of compact real intervals.

Proof. Suppose $\Lambda \neq \emptyset$; otherwise we are done. $\Lambda$ is bounded since $|\lambda| \le \max\{\|A\|_2;\ A \in \mathbf{A}\}$ for all $\lambda \in \Lambda$. To show the closedness, consider a sequence $\lambda_i \in \Lambda$, $i = 1, \dots$, converging to $\lambda \in \mathbb{C}$. For every $i$ there is a matrix $A_i \in \mathbf{A}$ and a vector $x_i$ with $\|x_i\|_2 = 1$ such that $A_ix_i = \lambda_ix_i$. Choose a subsequence $\{i_\nu\}$, $\nu = 1, \dots$, such


that $A_{i_\nu}$ converge to $A \in \mathbf{A}$ and $x_{i_\nu}$ converge to $x$ with Euclidean norm 1. Going to the limit $\nu \to \infty$ we get $Ax = \lambda x$, showing $\lambda \in \Lambda$.

The finiteness follows from a result of Rohn [29, Theorem 3.4]. It states that every boundary eigenvalue $\lambda \in \partial\Lambda$ is attained for a matrix $A \in \mathbf{A}$ of the form $A = A_c - \mathrm{diag}(y)\,A_\Delta\,\mathrm{diag}(z)$, where $y, z \in \{\pm 1\}^n$. Therefore there are finitely many boundary eigenvalues in $\Lambda$, and hence also finitely many intervals.

Computation of the real eigenvalue set is considered a very difficult task. Even checking whether $0 \in \Lambda$ is an NP-hard problem, since it is equivalent to checking regularity of the interval matrix $\mathbf{A}$, which is known to be NP-hard [25]. Therefore, we focus on a fast computation of an initial (hopefully sharp enough) outer approximation of $\Lambda$.

For other approaches that estimate $\Lambda$, we refer the reader to [10, 27, 32]. Some methods do not calculate bounds for the real eigenvalues of $\mathbf{A}$; instead they compute bounds for the real parts of the complex eigenvalues. Denote the set of all possible real parts by

$\Lambda_r := \{\lambda^{\mathrm{Re}} \in \mathbb{R};\ Ax = \lambda x,\ x \neq 0,\ A \in \mathbf{A}\}.$

As $\Lambda \subseteq \Lambda_r$, any outer approximation of $\Lambda_r$ works for $\Lambda$ as well. Let us recall a method proposed by Rohn [30, Theorem 2] that we will improve in what follows.

Theorem 2.2 (see [30]). Let

$S_c := \frac{1}{2}\left(A_c + A_c^T\right), \qquad S_\Delta := \frac{1}{2}\left(A_\Delta + A_\Delta^T\right).$

Then $\Lambda_r \subseteq \boldsymbol{\lambda}^0 := [\underline{\lambda}^0, \overline{\lambda}^0]$, where

$\underline{\lambda}^0 = \lambda_{\min}(S_c) - \rho(S_\Delta), \qquad \overline{\lambda}^0 = \lambda_{\max}(S_c) + \rho(S_\Delta),$

and $\lambda_{\min}(S_c)$, $\lambda_{\max}(S_c)$ denote the minimal and maximal eigenvalue of $S_c$, respectively.
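For illustration, Theorem 2.2 can be prototyped in a few lines of NumPy. The following sketch is illustrative only (the function name rohn_bounds is ours, and plain floating-point arithmetic is used rather than verified interval arithmetic):

import numpy as np

def rohn_bounds(A_lo, A_up):
    # Midpoint and radius of the interval matrix A = [A_lo, A_up].
    Ac, Ad = (A_lo + A_up) / 2, (A_up - A_lo) / 2
    # Symmetrized midpoint and radius, as in Theorem 2.2.
    Sc, Sd = (Ac + Ac.T) / 2, (Ad + Ad.T) / 2
    eig = np.linalg.eigvalsh(Sc)       # eigenvalues of the symmetric Sc, ascending
    rho = np.linalg.eigvalsh(Sd)[-1]   # rho(S_Delta) = lambda_max, since S_Delta is
                                       # symmetric and entrywise nonnegative
    return eig[0] - rho, eig[-1] + rho # enclosure [lambda^0_lower, lambda^0_upper]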

In most of the cases, the previous theorem provides a good estimation of the eigenvalue set $\Lambda$ (cf. [17]). However, its main disadvantage is the fact that it produces nonempty estimations even in the case where the eigenvalue set is empty. To overcome this drawback we propose an alternative approach that utilizes the Bauer–Fike theorem [13, 18, 33].

Theorem 2.3 (see Bauer and Fike, 1960). Let $A, B \in \mathbb{R}^{n\times n}$ and suppose that $A$ is diagonalizable, that is, $V^{-1}AV = \mathrm{diag}(\mu_1, \dots, \mu_n)$ for some $V \in \mathbb{C}^{n\times n}$ and $\mu_1, \dots, \mu_n \in \mathbb{C}$. For every (complex) eigenvalue $\lambda$ of $A + B$, there exists an index $i \in \{1, \dots, n\}$ such that

$|\lambda - \mu_i| \le \kappa_p(V)\cdot\|B\|_p.$

For almost all practical cases the 2-norm seems to be the most suitable choice. In what follows we will use the previous theorem with $p = 2$.

Proposition 2.4. Let $A_c$ be diagonalizable, i.e., $V^{-1}A_cV$ is diagonal for some $V \in \mathbb{C}^{n\times n}$. Then $\Lambda_r \subseteq \bigcup_{i=1}^n \boldsymbol{\lambda}_i$, where for each $i = 1, \dots, n$,

$\underline{\lambda}_i = \lambda_i^{\mathrm{Re}}(A_c) - \sqrt{\left(\kappa_2(V)\cdot\sigma_{\max}(A_\Delta)\right)^2 - \lambda_i^{\mathrm{Im}}(A_c)^2}, \qquad (2.1)$

$\overline{\lambda}_i = \lambda_i^{\mathrm{Re}}(A_c) + \sqrt{\left(\kappa_2(V)\cdot\sigma_{\max}(A_\Delta)\right)^2 - \lambda_i^{\mathrm{Im}}(A_c)^2}, \qquad (2.2)$


provided that $\left(\kappa_2(V)\cdot\sigma_{\max}(A_\Delta)\right)^2 \ge \lambda_i^{\mathrm{Im}}(A_c)^2$; otherwise $\boldsymbol{\lambda}_i = \emptyset$ for $i = 1, \dots, n$.

Proof. Every $A \in \mathbf{A}$ can be written as $A = A_c + A'$, where $|A'| \le A_\Delta$ (where the inequality applies elementwise). The Bauer–Fike theorem with the 2-norm implies that for each complex eigenvalue $\lambda(A)$ there is some complex eigenvalue $\lambda_i(A_c)$ such that

|λ(A) − λi(Ac)| ≤ κ2(V ) · ‖A′‖2 = κ2(V ) · σmax(A′).

As |A′| ≤ AΔ, we have σmax(A′) ≤ σmax(AΔ). Hence

|λ(A) − λi(Ac)| ≤ κ2(V ) · σmax(AΔ).

Thus all complex eigenvalues of all matrices $A \in \mathbf{A}$ lie in the circles with centers in the $\lambda_i(A_c)$'s and with corresponding radii $\kappa_2(V)\cdot\sigma_{\max}(A_\Delta)$. The formulae (2.1)–(2.2) represent an intersection of these circles with the real axis.

Notice that a pair of complex conjugate eigenvalues $\lambda_i(A_c)$ and $\lambda_j(A_c)$ yields the same interval $\boldsymbol{\lambda}_i = \boldsymbol{\lambda}_j$, so it suffices to consider only one of them.
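The formulae (2.1)–(2.2) are likewise easy to prototype. The sketch below (our naming; plain floating point; it assumes the midpoint matrix is diagonalizable) returns the list of nonempty intervals:

import numpy as np

def bauer_fike_intervals(Ac, Ad):
    # Proposition 2.4: discs of radius kappa_2(V) * sigma_max(A_Delta) around
    # the eigenvalues of Ac, intersected with the real axis.
    mu, V = np.linalg.eig(Ac)                       # assumes Ac is diagonalizable
    r = np.linalg.cond(V) * np.linalg.norm(Ad, 2)   # cond defaults to the 2-norm
    out = []
    for m in mu:
        d2 = r**2 - m.imag**2
        if d2 >= 0:                                 # otherwise the disc misses the real axis
            out.append((m.real - np.sqrt(d2), m.real + np.sqrt(d2)))
    return out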

Proposition 2.4 is a very useful tool for estimating $\Lambda$ in the case where the "large" complex eigenvalues of $A_c$ also have large imaginary parts. It is neither provably better nor provably worse than Rohn's theorem; see Example 2.8. Therefore it is advisable, in practice, to use both of them.

Proposition 2.4 can be applied only if $A_c$ is diagonalizable. For the case where $A_c$ is defective we can build upon a generalization of the Bauer–Fike theorem due to Chu [6, 7]. We present its special form.

Theorem 2.5 (see [6]). Let $A, B \in \mathbb{R}^{n\times n}$ and let $V^{-1}AV = J$ be the Jordan canonical form of $A$. Denote by $p$ the maximal dimension of the Jordan blocks in $J$. Then for every (complex) eigenvalue $\lambda$ of $A + B$, there is $i \in \{1, \dots, n\}$ such that

$|\lambda - \lambda_i(A)| \le \max\{\Theta_2,\ \Theta_2^{1/p}\},$

where

$\Theta_2 = \sqrt{\frac{p(p+1)}{2}}\cdot\kappa_2(V)\cdot\|B\|_2.$

Proceeding in a manner similar to that in the proof of Proposition 2.4, we obtain the following general result for interval matrices.

Proposition 2.6. Let $V^{-1}A_cV = J$ be the Jordan canonical form of $A_c$, and let $p$ be the maximal dimension of the Jordan blocks in $J$. Denote

$\Theta_2 = \sqrt{\frac{p(p+1)}{2}}\cdot\kappa_2(V)\cdot\sigma_{\max}(A_\Delta), \qquad \Theta = \max\{\Theta_2,\ \Theta_2^{1/p}\}.$

Then $\Lambda \subseteq \bigcup_{i=1}^n \boldsymbol{\lambda}_i$, where for each $i = 1, \dots, n$,

$\underline{\lambda}_i = \lambda_i^{\mathrm{Re}}(A_c) - \sqrt{\Theta^2 - \lambda_i^{\mathrm{Im}}(A_c)^2},$

$\overline{\lambda}_i = \lambda_i^{\mathrm{Re}}(A_c) + \sqrt{\Theta^2 - \lambda_i^{\mathrm{Im}}(A_c)^2},$

provided that $\Theta^2 \ge \lambda_i^{\mathrm{Im}}(A_c)^2$; otherwise $\boldsymbol{\lambda}_i = \emptyset$.

This result is applicable to any interval matrix $\mathbf{A}$. In our experience, Rohn's bounds are usually narrower when the input intervals of $\mathbf{A}$ are wide; on the other hand, this formula is better when the input intervals are narrow; cf. Example 2.9.


We present one more improvement for computing bounds of $\Lambda$, which is based on a theorem by Horn and Johnson [19].

Theorem 2.7. Let $A \in \mathbb{R}^{n\times n}$. Then

$\lambda_{\min}\left(\frac{A + A^T}{2}\right) \le \lambda^{\mathrm{Re}}(A) \le \lambda_{\max}\left(\frac{A + A^T}{2}\right)$

for every (complex) eigenvalue $\lambda(A)$ of the matrix $A$.

The theorem says that any upper or lower bound of the eigenvalue set of the symmetric interval matrix $\frac{1}{2}(\mathbf{A} + \mathbf{A}^T)^S$ is also a bound of $\Lambda_r$. Symmetric interval matrices are studied in detail in section 3, and the results obtained there can be used here to bound $\Lambda$ via Theorem 2.7. Note that in this way, Rohn's bounds from Theorem 3.1 yield the same bounds as those from Theorem 2.2. Note also that if the interval matrix $\mathbf{A}$ is pointed (i.e., $\underline{A} = \overline{A}$), then Theorems 2.2 and 2.7 yield the same range.

In what follows we present two examples that utilize the bounds of the previous propositions. We should mention that the purpose of all the examples in the present paper is to illustrate the proposed bounds; hence no verified computations were carried out here, although they should always be used in real-life applications.

Example 2.8. Let

$\mathbf{A} = \begin{pmatrix} [-5,-4] & [-9,-8] & [14,15] & [4.6,5] & [-1.2,-1] \\ [17,18] & [17,18] & [1,2] & [4,5] & [10,11] \\ [17,17.2] & [-3.5,-2.7] & [1.9,2.1] & [-13,-12] & [6,6.4] \\ [18,19] & [2,3] & [18,19] & [5,6] & [6,7] \\ [13,14] & [18,19] & [9,10] & [-18,-17] & [10,11] \end{pmatrix}.$

Rohn's theorem provides the outer approximation $\Lambda \subseteq [-22.104, 35.4999]$. Now we utilize Proposition 2.4. The eigenvalues of $A_c$ are

−15.8973, −4.0671, 15.1215 + 15.9556 i, 15.1215 − 15.9556 i, and 20.7214,

while κ2(V ) · σmax(AΔ) = 8.5887. Hence

$\boldsymbol{\lambda}_1 = [-24.486, -7.30853]$, $\boldsymbol{\lambda}_2 = [-12.6559, 4.5216]$, $\boldsymbol{\lambda}_3 = \boldsymbol{\lambda}_4 = \emptyset$, $\boldsymbol{\lambda}_5 = [12.1327, 29.3101]$.

The resulting outer approximation of $\Lambda$ is a union of two intervals, i.e.,

$[-24.486, 4.5216] \cup [12.1327, 29.3101].$

Proposition 2.6 yields the same result, since the eigenvalues of $A_c$ are mutually different.

If we take into account the results of all the methods, and we consider the intersection of the corresponding intervals, we obtain a sharper result, i.e., $[-22.104, 4.5216] \cup [12.1327, 29.3101]$.

To estimate the quality of the aforementioned results, it is worth noticing that the exact description of the real eigenvalue set of $\mathbf{A}$ could be obtained using the algorithm in [17],

$\Lambda = [-17.5116, -13.7578] \cup [-6.7033, -1.4582] \cup [16.7804, 23.6143].$


Example 2.9. Let $\mathbf{A} = [A_c - A_\Delta, A_c + A_\Delta]$, where

$A_c = \begin{pmatrix} 4 & 6 & 13 & 1 \\ -4 & -5 & -16 & -4 \\ 1 & 2 & 6 & 1 \\ 0 & -2 & -10 & -1 \end{pmatrix},$

and all entries of $A_\Delta$ equal $\varepsilon$. The eigenvalues of $A_c$ are $1 \pm 2i$ (both are double). Let $\varepsilon = 0.01$. Rohn's theorem leads to the outer approximation $[-11.9445, 13.8445]$.

Proposition 2.4 is not applicable, as $A_c$ is defective. Using Proposition 2.6 we calculate $p = 2$ and $\Theta = 1.0612$ and conclude that $\Lambda = \emptyset$, i.e., no matrix $A \in \mathbf{A}$ has any real eigenvalue.

For $\varepsilon = 1$, Rohn's outer approximation is $[-15.9045, 17.8045]$, but Proposition 2.6 results in $[-105.102, 107.102]$.

3. Symmetric interval matrix. Let $A \in \mathbb{R}^{n\times n}$ be a real symmetric matrix. It has $n$ real eigenvalues, which we write in decreasing order (including multiplicities):

λ1(A) ≥ λ2(A) ≥ · · · ≥ λn(A).

Let $\mathbf{A}^S$ be a symmetric interval matrix and denote by

$\lambda_i(\mathbf{A}^S) := \{\lambda_i(A) \mid A \in \mathbf{A}^S\}$

the set of the $i$th eigenvalues. Each of these sets is a compact real interval; this is due to the continuity of the eigenvalue function and the compactness and convexity of $\mathbf{A}^S$ [16]. It can happen that the sets $\lambda_i(\mathbf{A}^S)$ and $\lambda_j(\mathbf{A}^S)$, where $i \neq j$, overlap.

Our aim is to compute as-sharp-as-possible bounds of the eigenvalue sets. The upper bound $\lambda_i^u(\mathbf{A}^S)$, $i \in \{1, \dots, n\}$, is any real number satisfying $\lambda_i^u(\mathbf{A}^S) \ge \overline{\lambda_i(\mathbf{A}^S)}$. Lower bounds $\lambda_i^l(\mathbf{A}^S)$ for $\lambda_i(\mathbf{A}^S)$, $i \in \{1, \dots, n\}$, can be computed as upper bounds of $-\mathbf{A}^S$, so we omit their treatment.

The symmetric case is very important for real-life applications, as symmetric matrices appear very often in engineering problems. Under the concept of interval computations, symmetry induces dependencies between the matrix elements, which are hard to deal with in general. The straightforward approach would be to "forget" the dependencies and apply the methods from the previous section to obtain bounds on eigenvalues. Unfortunately, these bounds are far from sharp, since the loss of dependency implies a big overestimation on the computed intervals.

We should mention that there are very few theoretical results concerning symmetric interval matrices. In particular, no method is known for computing all the exact boundary points of the eigenvalue sets. Such a result could be of extreme practical importance, since it could be used for testing the accuracy of existing approximation algorithms. In this line of research, let us mention the work of Deif [10] and Hertz [15, 16]. The former provides an exact description of the eigenvalue set, but it works only under some not easily verified assumptions on sign pattern invariance of eigenvectors; the latter (see also [31]) proposes a formula for computing the exact extremal values $\overline{\lambda_1}(\mathbf{A}^S)$, $\underline{\lambda_1}(\mathbf{A}^S)$, $\overline{\lambda_n}(\mathbf{A}^S)$, and $\underline{\lambda_n}(\mathbf{A}^S)$, which consists of $2^{n-1}$ iterations. Theoretical results can also be found in the work of Qiu and Wang [28]; however, some of their results turned out to be incorrect [34].

Since the exact problem of computing the eigenvalue set(s) is a difficult one, several approximation algorithms were developed in recent years. An evolution strategy method by Yuan, He, and Leng [34] yields an inner approximation of the eigenvalue


set. By means of matrix perturbation theory, Qiu, Chen, and Elishakoff [26] proposed an algorithm for approximate bounds, and Leng and He [21] one for outer approximation. Outer approximation was also presented by Beaumont [4]; he used a polyhedral approximation of eigenpairs and an iterative improvement. Kolev [20] developed an outer approximation algorithm for the general case with nonlinear dependencies.

3.1. Basic bounds. The following theorem (without proof) appeared in [31]; to ensure that this paper is self-contained, we present its proof.

Theorem 3.1. It holds that

λi(AS) ⊆ [λi(Ac) − ρ(AΔ), λi(Ac) + ρ(AΔ)].

Proof. By Weyl's theorem [13, 18, 23, 33], for any symmetric matrices $B, C \in \mathbb{R}^{n\times n}$ it holds that

λi(B) + λn(C) ≤ λi(B + C) ≤ λi(B) + λ1(C) ∀i = 1, . . . , n.

Particularly, for every A ∈ A in the form of A = Ac + A′, A′ ∈ [−AΔ, AΔ], we have

λi(A) = λi(Ac + A′) ≤ λi(Ac) + λ1(A′) ≤ λi(Ac) + ρ(A′) ∀i = 1, . . . , n.

As |A′| ≤ AΔ, we get ρ(A′) ≤ ρ(AΔ), whence

λi(A) ≤ λi(Ac) + ρ(AΔ).

Working similarly, we can prove that λi(A) ≥ λi(Ac) − ρ(AΔ).

The bounds obtained by the previous theorem are usually quite sharp. However, the main drawback is that all the produced intervals for $\lambda_i(\mathbf{A}^S)$, $1 \le i \le n$, have the same width.

The following proposition provides an upper bound for the largest eigenvalue of $\mathbf{A}^S$, i.e., an upper bound for the right endpoint of $\lambda_1(\mathbf{A}^S)$. Even though the formula is very simple and the bound is not very sharp, there are cases where it yields a better bound than the one obtained by Rohn's theorem. In particular, it provides better bounds for nonnegative interval matrices and for interval matrices of the form $[-A_\Delta, A_\Delta]$, such as the ones we consider in subsection 3.3.

Proposition 3.2. It holds that

$\overline{\lambda_1}(\mathbf{A}^S) \le \lambda_1(|\mathbf{A}|).$

Proof. Using the well-known Courant–Fischer theorem [13, 18, 23, 33], we have for every $A \in \mathbf{A}$

$\lambda_1(A) = \max_{x^Tx=1} x^TAx \le \max_{x^Tx=1} |x^TAx| \le \max_{x^Tx=1} |x|^T|A|\,|x| \le \max_{x^Tx=1} |x|^T|\mathbf{A}|\,|x| = \max_{x^Tx=1} x^T|\mathbf{A}|x = \lambda_1(|\mathbf{A}|).$

In the same way we can compute a lower bound for the eigenvalue set of $\mathbf{A}$: $\underline{\lambda_n}(\mathbf{A}^S) \ge -\lambda_1(|\mathbf{A}|)$. However, this inequality is not so useful in practice.
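Both basic bounds for the symmetric case are cheap to evaluate. The following sketch (our code; plain floating point, not a verified implementation) combines Theorem 3.1 with the Proposition 3.2 bound on the largest eigenvalue:

import numpy as np

def basic_symmetric_bounds(Ac, Ad):
    # Theorem 3.1: lambda_i(A^S) lies in [lambda_i(Ac) - rho(A_Delta),
    #                                     lambda_i(Ac) + rho(A_Delta)].
    lam = np.linalg.eigvalsh(Ac)[::-1]   # eigenvalues of Ac in decreasing order
    rho = np.linalg.eigvalsh(Ad)[-1]     # rho(A_Delta): A_Delta symmetric, nonnegative
    lo, up = lam - rho, lam + rho
    # Proposition 3.2: the largest eigenvalue is also at most lambda_1(|A|),
    # where |A|_ij = max(|lower_ij|, |upper_ij|) = max(|Ac - Ad|, |Ac + Ad|)_ij.
    mag = np.maximum(np.abs(Ac - Ad), np.abs(Ac + Ad))
    up[0] = min(up[0], np.linalg.eigvalsh(mag)[-1])
    return lo, up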


3.2. Interlacing approach, direct version. The approach that we propose in this section is based on Cauchy's interlacing property for eigenvalues of a symmetric matrix [13, 18, 23, 33].

Theorem 3.3 (interlacing property; see Cauchy, 1829). Let $A \in \mathbb{R}^{n\times n}$ be a symmetric matrix, and let $A_i$ be a matrix obtained from $A$ by removing the $i$th row and column. Then

λ1(A) ≥ λ1(Ai) ≥ λ2(A) ≥ λ2(Ai) ≥ · · · ≥ λn−1(Ai) ≥ λn(A).

We develop two methods based on the interlacing property, the direct and the indirect one. These methods are useful as long as the intervals $\lambda_i(\mathbf{A}^S)$, $i = 1, \dots, n$, do overlap, or as long as there is only a narrow gap between them. Overlapping happens, for example, when there are multiple eigenvalues in $\mathbf{A}^S$. If none of the previous cases occur, then the bounds are not so sharp; see Example 3.6.

The first method uses the interlacing property directly. Bounds on the eigenvalues of the principal minor $\mathbf{A}_i^S$ are also bounds on the eigenvalues of matrices in $\mathbf{A}^S$ (except for $\lambda_1(\mathbf{A}^S)$ and $\lambda_n(\mathbf{A}^S)$). The basic idea is to compute the bounds recursively. However, such a recursive algorithm would be of exponential complexity. Therefore, we propose a simple local search approach that requires only a linear number of iterations and the results of which are quite satisfactory. It consists of selecting the most promising principal minor $\mathbf{A}_i$ and recursively using only this one. To obtain even better results in practice, we apply this procedure in the reverse order as well. (That is, we begin with some diagonal element $a_{ii}$ of $\mathbf{A}^S$, which is a one-by-one matrix, and iteratively increase its dimension until we obtain $\mathbf{A}^S$.)

The algorithmic scheme is presented in Algorithm 1. We often need to compute an upper bound $\lambda_1^u(\mathbf{B}^S)$ for the maximal eigenvalue of any matrix in $\mathbf{B}^S$ (steps 3 and 12). For this purpose we can call Theorem 3.1 or Proposition 3.2, or, to obtain the best results, we choose the minimum of the two. Notice that the algorithm computes only upper bounds for $\lambda_i(\mathbf{A}^S)$, $i = 1, \dots, n$. Lower bounds for $\lambda_i(\mathbf{A}^S)$, $i = 1, \dots, n$, can be obtained by calling the algorithm with $-\mathbf{A}^S$ as the input matrix.

Algorithm 1. (interlacing approach for upper bounds, direct version)

1: $\mathbf{B}^S := \mathbf{A}^S$;
2: for $k = 1, \dots, n$ do
3:   compute $\lambda_1^u(\mathbf{B}^S)$;
4:   $\lambda_k^u(\mathbf{A}^S) := \lambda_1^u(\mathbf{B}^S)$;
5:   select the most promising index $i \in \{1, \dots, n - k + 1\}$;
6:   remove the $i$th row and the $i$th column from $\mathbf{B}^S$;
7: end for
8: put $I := \emptyset$;
9: for $k = 1, \dots, n$ do
10:   select the most promising index $i \in \{1, \dots, n\} \setminus I$, and put $I := I \cup \{i\}$;
11:   let $\mathbf{B}^S$ be the submatrix of $\mathbf{A}^S$ restricted to the rows and columns indexed by $I$;
12:   compute $\lambda_1^u(\mathbf{B}^S)$;
13:   $\lambda_{n-k+1}^u(\mathbf{A}^S) := \min\{\lambda_{n-k+1}^u(\mathbf{A}^S),\ \lambda_1^u(\mathbf{B}^S)\}$;
14: end for
15: return $\lambda_k^u(\mathbf{A}^S)$, $k = 1, \dots, n$.

An important ingredient of the algorithm is the selection of the index $i$ in steps 5 and 10. We describe the selection for step 5; for step 10 we work similarly. In essence,


there are two basic choices:

$(3.1)\quad i := \arg\min_{j=1,\dots,n-k+1} \lambda_1^u(\mathbf{B}^S_j),$

and

$(3.2)\quad i := \arg\min_{j=1,\dots,n-k+1} \sum_{r,s\neq j} |\mathbf{B}_{r,s}|^2.$

In both cases we select an index $i$ in order to minimize $\lambda_1(\mathbf{B}^S_i)$.

The first formula requires more computation than the second, but yields the optimal index in more cases. The latter formula is based on the well-known result [18, 33] that the square of the Frobenius norm of a normal matrix (i.e., the sum of squares of its entries) equals the sum of squares of its eigenvalues. Therefore, the most promising index is the one that maximizes the sum of squares of the absolute values (magnitudes) of the removed components.

The selection rule (3.1) causes a quadratic time complexity of Algorithm 1 with respect to the number of calculations of spectral radii or eigenvalues. Using the selection rule (3.2) results in only a linear number of such calculations.
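To make the scheme concrete, here is a sketch of the forward pass of Algorithm 1 (steps 1–7) with selection rule (3.2). All names are ours; upper_lambda1 stands for any upper bound $\lambda_1^u(\mathbf{B}^S)$, here the minimum of the Theorem 3.1 and Proposition 3.2 bounds:

import numpy as np

def upper_lambda1(Bc, Bd):
    # min of the Theorem 3.1 and Proposition 3.2 upper bounds for lambda_1(B^S).
    t31 = np.linalg.eigvalsh(Bc)[-1] + np.linalg.eigvalsh(Bd)[-1]
    mag = np.maximum(np.abs(Bc - Bd), np.abs(Bc + Bd))
    return min(t31, np.linalg.eigvalsh(mag)[-1])

def direct_upper_bounds(Ac, Ad):
    n = Ac.shape[0]
    ub = np.empty(n)
    Bc, Bd = Ac.copy(), Ad.copy()
    for k in range(n):
        ub[k] = upper_lambda1(Bc, Bd)        # steps 3-4
        if Bc.shape[0] > 1:
            # Rule (3.2): drop the index j that minimizes the sum of squared
            # magnitudes over the remaining entries (r, s != j).
            m2 = np.maximum(np.abs(Bc - Bd), np.abs(Bc + Bd))**2
            rem = [m2.sum() - m2[j, :].sum() - m2[:, j].sum() + m2[j, j]
                   for j in range(Bc.shape[0])]
            j = int(np.argmin(rem))
            keep = [t for t in range(Bc.shape[0]) if t != j]
            Bc, Bd = Bc[np.ix_(keep, keep)], Bd[np.ix_(keep, keep)]
    return ub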

3.3. Interlacing approach, indirect version. The second method also uses the interlacing property and is based on the following idea. Every matrix $A \in \mathbf{A}^S$ can be written as $A = A_c + A_\delta$ with $A_\delta \in [-A_\Delta, A_\Delta]^S$. We compute the eigenvalues of the real matrix $A_c$ and bounds on eigenvalues of matrices in $[-A_\Delta, A_\Delta]^S$, and we "merge" them to obtain bounds on eigenvalues of matrices in $\mathbf{A}^S$. For the "merging" step we use a theorem for perturbed eigenvalues.

The algorithm is presented in Algorithm 2. It returns only upper bounds $\lambda_i^u(\mathbf{A}^S)$, $i = 1, \dots, n$, for $\lambda_i(\mathbf{A}^S)$, $i = 1, \dots, n$, since lower bounds are likewise computable. The bounds required in step 2 are computed using Algorithm 1.

The following theorem due to Weyl [18, 33] gives very nice formulae for the eigenvalues of a matrix sum, which we use in step 4 of Algorithm 2.

Theorem 3.4 (Weyl, 1912). Let $A, B \in \mathbb{R}^{n\times n}$ be symmetric matrices. Then

$\lambda_{r+s-1}(A + B) \le \lambda_r(A) + \lambda_s(B) \quad \forall r, s \in \{1, \dots, n\},\ r + s \le n + 1,$

$\lambda_{r+s-n}(A + B) \ge \lambda_r(A) + \lambda_s(B) \quad \forall r, s \in \{1, \dots, n\},\ r + s \ge n + 1.$

Algorithm 2. (interlacing approach for upper bounds, indirect version)

1: compute eigenvalues $\lambda_1(A_c) \ge \dots \ge \lambda_n(A_c)$;
2: compute bounds $\lambda_1^u([-A_\Delta, A_\Delta]^S), \dots, \lambda_n^u([-A_\Delta, A_\Delta]^S)$;
3: for $k = 1, \dots, n$ do
4:   $\lambda_k^u(\mathbf{A}^S) := \min_{i=1,\dots,k} \{\lambda_i(A_c) + \lambda_{k-i+1}^u([-A_\Delta, A_\Delta]^S)\}$;
5: end for
6: return $\lambda_k^u(\mathbf{A}^S)$, $k = 1, \dots, n$.
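Step 4 is a direct application of Weyl's first inequality with $r + s - 1 = k$. A sketch of the merging step (our code), given the eigenvalues of $A_c$ and precomputed upper bounds for the centered matrix, could look as follows; ub_centered might come, e.g., from the direct method applied to $[-A_\Delta, A_\Delta]^S$:

import numpy as np

def indirect_upper_bounds(Ac, ub_centered):
    # ub_centered[i] is an upper bound for lambda_{i+1}([-A_Delta, A_Delta]^S).
    lam = np.linalg.eigvalsh(Ac)[::-1]     # eigenvalues of Ac, decreasing order
    n = len(lam)
    return np.array([min(lam[i] + ub_centered[k - i] for i in range(k + 1))
                     for k in range(n)])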

3.4. Diagonal maximization. In this subsection we show that the largest eigenvalues are attained when the diagonal entries of $A \in \mathbf{A}^S$ are set to their maximal values. Therefore, we can fix them and consider only a subset of $\mathbf{A}^S$. Similar results can be obtained for the smallest eigenvalues.


Lemma 3.5. Let $i \in \{1, \dots, n\}$. Then there is some matrix $A \in \mathbf{A}^S$ with diagonal entries $A_{j,j} = \overline{A}_{j,j}$ such that $\lambda_i(A) = \overline{\lambda_i(\mathbf{A}^S)}$.

Proof. Let $A' \in \mathbf{A}^S$ be such that $\lambda_i(A') = \overline{\lambda_i(\mathbf{A}^S)}$. Such a matrix always exists, since $\overline{\lambda_i(\mathbf{A}^S)}$ is defined as the maximum of a continuous function on a compact set. We define $A \in \mathbf{A}^S$ as follows: $A_{ij} := A'_{ij}$ if $i \neq j$, and $A_{ij} := \overline{A}_{ij}$ if $i = j$. By the Courant–Fischer theorem [13, 18, 23, 33], we have

$\lambda_i(A') = \max_{V \subseteq \mathbb{R}^n;\ \dim V = i}\ \min_{x \in V;\ x^Tx = 1} x^TA'x \le \max_{V \subseteq \mathbb{R}^n;\ \dim V = i}\ \min_{x \in V;\ x^Tx = 1} x^TAx = \lambda_i(A).$

Hence $\lambda_i(A) = \lambda_i(A') = \overline{\lambda_i(\mathbf{A}^S)}$.

This lemma implies that for computing upper bounds $\lambda_i^u(\mathbf{A}^S)$ of $\lambda_i(\mathbf{A}^S)$, $i = 1, \dots, n$, it suffices to consider only the symmetric interval matrix $\mathbf{A}_r^S \subseteq \mathbf{A}^S$ defined as

$\mathbf{A}_r^S := \{A \in \mathbf{A}^S \mid A_{j,j} = \overline{A}_{j,j}\ \forall j = 1, \dots, n\}.$

To this interval matrix we can apply all the algorithms developed in the previous subsections. The resulting bounds are sometimes sharper and sometimes less sharp; see Examples 3.6–3.7. So the best possible results are obtained by using all the methods together.

3.5. Singular values. Let $A \in \mathbb{R}^{m\times n}$ and denote $q := \min\{m, n\}$. By $\sigma_1(A) \ge \dots \ge \sigma_q(A)$ we denote the singular values of $A$. It is well known [13, 18, 23] that the singular values of $A$ are identical with the $q$ largest eigenvalues of the Jordan–Wielandt matrix

$\begin{pmatrix} 0 & A^T \\ A & 0 \end{pmatrix},$

which is symmetric. Consider an interval matrix $\mathbf{A} \subset \mathbb{R}^{m\times n}$. By $\sigma_i(\mathbf{A}) := \{\sigma_i(A) \mid A \in \mathbf{A}\}$, $i = 1, \dots, q$, we denote the singular value sets of $\mathbf{A}$. The problem of approximating the singular value sets was considered, e.g., in [1, 9]. Deif's method [9] produces exact singular value sets, but only under some assumptions that are generally difficult to verify. Ahn and Chen [1] presented a method for calculating the largest possible singular value $\overline{\sigma_1}(\mathbf{A})$. It is a slight modification of [15] and its time complexity is exponential ($2^{m+n-1}$ iterations). They also proposed a lower bound calculation for the smallest possible singular value $\underline{\sigma_q}(\mathbf{A})$ by means of interval matrix inversion.

To get an outer approximation of the singular value set of $\mathbf{A}$ we can employ the methods proposed in the previous subsections and apply them to the eigenvalue set of the symmetric interval matrix

$(3.3)\quad \begin{pmatrix} 0 & \mathbf{A}^T \\ \mathbf{A} & 0 \end{pmatrix}^S.$

Diagonal maximization (see subsection 3.4) has no effect, since the diagonal of the symmetric interval matrix (3.3) consists of zeros only. The other methods work well. Even though they run very fast, they can be accelerated a bit, as some of them can be slightly modified and used directly on $\mathbf{A}$ instead of (3.3). Particularly, Proposition 3.2 is easy to modify for singular values ($\overline{\sigma_1}(\mathbf{A}) \le \sigma_1(|\mathbf{A}|)$), and the interlacing property can be applied directly to $\mathbf{A}$; cf. [13, 18, 19].
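A sketch of the reduction via (3.3), in our naming and with plain floating point: embed $\mathbf{A}$ into the Jordan–Wielandt interval matrix and apply, say, the Theorem 3.1 bound to its $q$ largest eigenvalues:

import numpy as np

def singular_value_upper_bounds(A_lo, A_up):
    m, n = A_lo.shape
    q = min(m, n)
    # Assemble the (n + m) x (n + m) interval matrix (3.3): ((0, A^T), (A, 0)).
    J_lo = np.zeros((n + m, n + m)); J_up = np.zeros((n + m, n + m))
    J_lo[:n, n:], J_lo[n:, :n] = A_lo.T, A_lo
    J_up[:n, n:], J_up[n:, :n] = A_up.T, A_up
    Jc, Jd = (J_lo + J_up) / 2, (J_up - J_lo) / 2
    # Theorem 3.1 on the symmetric embedding; its q largest eigenvalues
    # coincide with the singular values of A.
    lam = np.linalg.eigvalsh(Jc)[::-1][:q]
    return lam + np.linalg.eigvalsh(Jd)[-1]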


3.6. Case study. The aim of the following examples is to show that no single presented method is better than all the others. In different situations, different variants are the best.

Example 3.6. Consider the example given by Qiu, Chen, and Elishakoff [26] (see also [34]):

$\mathbf{A}^S = \begin{pmatrix} [2975, 3025] & [-2015, -1985] & 0 & 0 \\ [-2015, -1985] & [4965, 5035] & [-3020, -2980] & 0 \\ 0 & [-3020, -2980] & [6955, 7045] & [-4025, -3975] \\ 0 & 0 & [-4025, -3975] & [8945, 9055] \end{pmatrix}^S.$

Proposition 3.2 yields the upper bound $\lambda_1^u(\mathbf{A}^S) = 12720.2273$, which is, by chance, the optimal value. The other outer approximations of the eigenvalue sets $\lambda_i(\mathbf{A}^S)$, $i = 1, \dots, n$, are listed below. The corresponding items are as follows:

(R) bounds computed by Rohn's theorem (Theorem 3.1)
(D1) bounds computed by Algorithm 1 with index selection rule (3.1)
(D2) bounds computed by Algorithm 1 with index selection rule (3.2)
(I1) bounds computed by Algorithm 2 with index selection rule (3.1)
(I2) bounds computed by Algorithm 2 with index selection rule (3.2)
(DD1) bounds computed by diagonal maximization using Algorithm 1 and index selection rule (3.1)
(DI1) bounds computed by diagonal maximization using Algorithm 2 and index selection rule (3.1)
(B) bounds obtained by using Theorem 3.1 and Algorithms 1 and 2, and then choosing the best ones; the index selection rule is (3.1)
(O) optimal bounds; they are known provided that an inner and outer approximation (calculated or known from references) coincide; some of them are determined according to Hertz [15, 16]

Table 3.1. Results for Example 3.6.

        [λ_1^l(A^S), λ_1^u(A^S)]   [λ_2^l(A^S), λ_2^u(A^S)]   [λ_3^l(A^S), λ_3^u(A^S)]   [λ_4^l(A^S), λ_4^u(A^S)]
(R)     [12560.6296, 12720.4331]   [6984.5571, 7144.3606]     [3309.9466, 3469.7501]     [825.2597, 985.0632]
(D1)    [8945.0000, 12720.2273]    [4945.0000, 9055.0000]     [2924.5049, 6281.7216]     [825.2597, 3025.0000]
(D2)    [8945.0000, 12720.2273]    [2945.0000, 9453.4449]     [1708.9320, 6281.7216]     [825.2597, 3025.0000]
(I1)    [12560.6296, 12720.4331]   [6984.5571, 7144.3606]     [3309.9466, 3469.7501]     [825.2597, 985.0632]
(I2)    [12560.6296, 12720.4331]   [6984.5571, 7144.3606]     [3309.9466, 3469.7501]     [825.2597, 985.0632]
(DD1)   [8945.0000, 12720.2273]    [4965.0000, 9055.0000]     [2950.0000, 6281.7216]     [837.0637, 3025.0000]
(DI1)   [12557.7243, 12723.3526]   [6990.7616, 7138.1800]     [3320.2863, 3459.4322]     [837.0637, 973.1993]
(B)     [12560.6296, 12720.2273]   [6990.7616, 7138.1800]     [3320.2863, 3459.4322]     [837.0637, 973.1993]
(O)     [12560.8377, 12720.2273]   [7002.2828, 7126.8283]     [3337.0785, 3443.3127]     [842.9251, 967.1082]

Table 3.1 shows that the direct interlacing methods (D1), (D2), and (DD1) are not effective; the gaps between the eigenvalue sets $\lambda_i(\mathbf{A}^S)$, $i = 1, \dots, n$, are too wide. The indirect interlacing methods (I1) and (I2) yield the same intervals as Rohn's method (R). The indirect interlacing method using diagonal maximization (DI1) is in some cases better (e.g., for $\lambda_4^l(\mathbf{A}^S)$, $\lambda_4^u(\mathbf{A}^S)$) and in some cases worse (e.g., for $\lambda_1^l(\mathbf{A}^S)$, $\lambda_1^u(\mathbf{A}^S)$) than (R). The combination (B) of all the methods produces a good outer approximation of the eigenvalue set, particularly for that of $\lambda_1(\mathbf{A}^S)$.


For this example, Qiu, Chen, and Elishakoff [26] obtained the approximate values

$\underline{\lambda_1}(\mathbf{A}^S) \approx 12588.29$, $\overline{\lambda_1}(\mathbf{A}^S) \approx 12692.77$, $\underline{\lambda_2}(\mathbf{A}^S) \approx 7000.195$, $\overline{\lambda_2}(\mathbf{A}^S) \approx 7128.723$,
$\underline{\lambda_3}(\mathbf{A}^S) \approx 3331.162$, $\overline{\lambda_3}(\mathbf{A}^S) \approx 3448.535$, $\underline{\lambda_4}(\mathbf{A}^S) \approx 826.7372$, $\overline{\lambda_4}(\mathbf{A}^S) \approx 983.5858$.

However, these values form neither an inner nor an outer approximation of the eigenvalue set. The method of Leng and He [21], based on matrix perturbation theory, results in the following bounds:

$\lambda_1^l(\mathbf{A}^S) = 12550.53$, $\lambda_1^u(\mathbf{A}^S) = 12730.53$, $\lambda_2^l(\mathbf{A}^S) = 6974.459$, $\lambda_2^u(\mathbf{A}^S) = 7154.459$,
$\lambda_3^l(\mathbf{A}^S) = 3299.848$, $\lambda_3^u(\mathbf{A}^S) = 3479.848$, $\lambda_4^l(\mathbf{A}^S) = 815.1615$, $\lambda_4^u(\mathbf{A}^S) = 995.1615$.

In comparison to (B), they are not so sharp. The evolution strategy method proposed by Yuan, He, and Leng [34] returns an inner approximation of the eigenvalue set, which in this example is equal to the optimal result (see (O) in Table 3.1).

Example 3.7. Consider the symmetric interval matrix

$\mathbf{A}^S = \begin{pmatrix} [0, 2] & [-7, 3] & [-2, 2] \\ [-7, 3] & [4, 8] & [-3, 5] \\ [-2, 2] & [-3, 5] & [1, 5] \end{pmatrix}^S.$

Following the notation used in Example 3.6, we display in Table 3.2 the results obtained by the presented methods.

Table 3.2. Results for Example 3.7.

        [λ_1^l(A^S), λ_1^u(A^S)]   [λ_2^l(A^S), λ_2^u(A^S)]   [λ_3^l(A^S), λ_3^u(A^S)]
(R)     [−2.2298, 16.0881]         [−6.3445, 11.9734]         [−8.9026, 9.4154]
(D1)    [4.0000, 15.3275]          [−2.5616, 6.0000]          [−8.9026, 2.0000]
(D2)    [4.0000, 15.3275]          [−2.5616, 6.0000]          [−8.9026, 2.0000]
(I1)    [−0.7436, 16.0881]         [−3.3052, 10.4907]         [−8.9026, 6.3760]
(I2)    [−0.7436, 16.0881]         [−3.3052, 10.4907]         [−8.9026, 6.3760]
(DD1)   [4.0000, 15.3275]          [−2.0000, 6.0000]          [−8.3759, 2.0000]
(DI1)   [−0.9115, 16.3089]         [−2.9115, 10.8445]         [−8.3759, 6.7850]
(B)     [4.0000, 15.3275]          [−2.0000, 6.0000]          [−8.3759, 2.0000]
(O)     [6.3209, 15.3275]          [?, ?]                     [−7.8184, 0.7522]

This example illustrates the case when the direct interlacing methods (D1)–(D2) yield better results than the indirect ones (I1)–(I2). The same is true for the diagonal maximization variants (DD1) and (DI1). Rohn's method (R) is not very effective here. Optimal bounds are known only for $\lambda_1^u(\mathbf{A}^S)$ and $\lambda_3^l(\mathbf{A}^S)$.

Example 3.8. Herein, we consider an example by Deif [9] on the singular value sets of

$\mathbf{A} = \begin{pmatrix} [2, 3] & [1, 1] \\ [0, 2] & [0, 1] \\ [0, 1] & [2, 3] \end{pmatrix}.$

Deif’s method yields the following estimation of the singular value sets:

σ1(A) ≈ [2.5616, 4.5431], σ2(A) ≈ [1.3134, 2.8541].


Ahn and Chen [1] confirmed that $\overline{\sigma_1}(\mathbf{A}) = 4.5431$, but the real value of $\underline{\sigma_2}(\mathbf{A})$ must be smaller. Namely, it is less than or equal to one, since $\sigma_2(A) = 1$ for $A^T = \begin{pmatrix} 2 & 0 & 1 \\ 1 & 0 & 2 \end{pmatrix}$. Our approach, using a combination of all the presented methods, results in the outer approximation

$\sigma_1(\mathbf{A}) \subseteq [2.0489, 4.5431], \qquad \sigma_2(\mathbf{A}) \subseteq [0.4239, 3.1817].$

4. Conclusion and future work. In this paper we considered outer approximations of the eigenvalue sets of general interval matrices and symmetric interval matrices. For both cases, we presented several improvements. Computing sharp outer approximations of the eigenvalue set of a general interval matrix is a difficult problem. The proposed methods provide quite satisfactory results, as indicated by Examples 2.8–2.9. Examples 3.6–3.8 demonstrate that we are able to bound quite sharply the eigenvalues of symmetric interval matrices and the singular values of interval matrices. Our bounds are quite close to the optimal ones. Nevertheless, as suggested by one of the referees, it is worth exploring the possibility of using a more numerically stable decomposition than the Jordan canonical form in Proposition 2.6.

Currently, there is no algorithm that computes the best bounds in all cases. Since the computational cost of the presented algorithms is rather low, it is advisable to use all of them in practice and select the best one depending on the particular instance.

Acknowledgments. The authors thank Andreas Frommer and the anonymous referees for their valuable comments.

REFERENCES

[1] H.-S. Ahn and Y. Q. Chen, Exact maximum singular value calculation of an interval matrix, IEEE Trans. Automat. Control, 52 (2007), pp. 510–514.
[2] G. Alefeld and J. Herzberger, Introduction to Interval Computations, Academic Press, New York, 1983.
[3] G. Alefeld and G. Mayer, Interval analysis: Theory and applications, J. Comput. Appl. Math., 121 (2000), pp. 421–464.
[4] O. Beaumont, An Algorithm for Symmetric Interval Eigenvalue Problem, Technical report IRISA-PI-00-1314, Institut de recherche en informatique et systèmes aléatoires, Rennes, France, 2000.
[5] D. Chablat, Ph. Wenger, F. Majou, and J.-P. Merlet, An interval analysis based study for the design and the comparison of three-degrees-of-freedom parallel kinematic machines, Int. J. Robot. Res., 23 (2004), pp. 615–624.
[6] K.-w. E. Chu, Generalization of the Bauer–Fike theorem, Numer. Math., 49 (1986), pp. 685–691.
[7] K.-w. E. Chu, Perturbation theory and derivatives of matrix eigensystems, Appl. Math. Lett., 1 (1988), pp. 343–346.
[8] L. Cvetkovic, V. Kostic, and R. S. Varga, A new Geršgorin-type eigenvalue inclusion set, Electron. Trans. Numer. Anal., 18 (2004), pp. 73–80.
[9] A. S. Deif, Singular values of an interval matrix, Linear Algebra Appl., 151 (1991), pp. 125–133.
[10] A. S. Deif, The interval eigenvalue problem, Z. Angew. Math. Mech., 71 (1991), pp. 61–64.
[11] A. D. Dimarogonas, Interval analysis of vibrating systems, J. Sound Vibration, 183 (1995), pp. 739–749.
[12] F. Gioia and C. N. Lauro, Principal component analysis on interval data, Comput. Statist., 21 (2006), pp. 343–363.
[13] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore, MD, 1996.
[14] E. Hansen and G. W. Walster, Global Optimization Using Interval Analysis, 2nd ed., Marcel Dekker, New York, 2004.
[15] D. Hertz, The extreme eigenvalues and stability of real symmetric interval matrices, IEEE Trans. Automat. Control, 37 (1992), pp. 532–535.
[16] D. Hertz, Interval analysis: Eigenvalue bounds of interval matrices, in Encyclopedia of Optimization, C. A. Floudas and P. M. Pardalos, eds., Springer, New York, 2009, pp. 1689–1696.
[17] M. Hladík, D. Daney, and E. Tsigaridas, An Algorithm for the Real Interval Eigenvalue Problem, Research report RR-6680, INRIA, Sophia-Antipolis, France, 2008, http://hal.inria.fr/inria-00329714/en/, submitted to J. Comput. Appl. Math.
[18] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.
[19] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1994.
[20] L. V. Kolev, Outer interval solution of the eigenvalue problem under general form parametric dependencies, Reliab. Comput., 12 (2006), pp. 121–140.
[21] H. Leng and Z. He, Computing eigenvalue bounds of structures with uncertain-but-non-random parameters by a method based on perturbation theory, Comm. Numer. Methods Engrg., 23 (2007), pp. 973–982.
[22] H.-B. Li, T.-Z. Huang, and H. Li, Inclusion sets for singular values, Linear Algebra Appl., 428 (2008), pp. 2220–2235.
[23] C. D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, 2000.
[24] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, UK, 1990.
[25] S. Poljak and J. Rohn, Checking robust nonsingularity is NP-hard, Math. Control Signals Systems, 6 (1993), pp. 1–9.
[26] Z. Qiu, S. Chen, and I. Elishakoff, Bounds of eigenvalues for structures with an interval description of uncertain-but-non-random parameters, Chaos Solitons Fractals, 7 (1996), pp. 425–434.
[27] Z. Qiu, P. C. Müller, and A. Frommer, An approximate method for the standard interval eigenvalue problem of real non-symmetric interval matrices, Comm. Numer. Methods Engrg., 17 (2001), pp. 239–251.
[28] Z. Qiu and X. Wang, Solution theorems for the standard eigenvalue problem of structures with uncertain-but-bounded parameters, J. Sound Vibration, 282 (2005), pp. 381–399.
[29] J. Rohn, Interval matrices: Singularity and real eigenvalues, SIAM J. Matrix Anal. Appl., 14 (1993), pp. 82–91.
[30] J. Rohn, Bounds on eigenvalues of interval matrices, ZAMM Z. Angew. Math. Mech., 78 (1998), pp. S1049–S1050.
[31] J. Rohn, A Handbook of Results on Interval Linear Problems, http://www.cs.cas.cz/rohn/handbook (2005).
[32] J. Rohn and A. Deif, On the range of eigenvalues of an interval matrix, Computing, 47 (1992), pp. 373–377.
[33] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford University Press, New York, 1988.
[34] Q. Yuan, Z. He, and H. Leng, An evolution strategy method for computing eigenvalue bounds of interval matrices, Appl. Math. Comput., 196 (2008), pp. 257–265.


Journal of Computational and Applied Mathematics 235 (2011) 2715–2730


An algorithm for addressing the real interval eigenvalue problem

Milan Hladík a,b,∗, David Daney b, Elias P. Tsigaridas b

a Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics, Malostranské nám. 25, 118 00, Prague, Czech Republic
b INRIA Sophia-Antipolis Méditerranée, 2004 route des Lucioles, BP 93, 06902 Sophia-Antipolis Cedex, France

Article info
Article history: Received 20 October 2008; Received in revised form 28 June 2010
MSC: 65G40, 65F15, 15A18
Keywords: Interval matrix, Real eigenvalue, Eigenvalue bounds, Regularity, Interval analysis

Abstract
In this paper we present an algorithm for approximating the range of the real eigenvalues of interval matrices. Such matrices could be used to model real-life problems, where data sets suffer from bounded variations such as uncertainties (e.g. tolerances on parameters, measurement errors), or to study problems for given states.
The algorithm that we propose is a subdivision algorithm that exploits sophisticated techniques from interval analysis. The quality of the computed approximation and the running time of the algorithm depend on a given input accuracy. We also present an efficient C++ implementation and illustrate its efficiency on various data sets. In most of the cases we manage to compute efficiently the exact boundary points (limited by floating point representation).
© 2010 Elsevier B.V. All rights reserved.

1. Introduction

Computation of real eigenvalues is a ubiquitous operation in applied mathematics, not only because it is an important mathematical problem, but also due to the fact that such computations lie at the core of almost all engineering problems. However, in these problems, which are real-life problems, precise data are very rare, since the input data are influenced by diverse uncertainties.

We study these problems through models that reflect the real-life situations as well as possible. A modern approach is to consider that the parameters to be defined are not exact values, but a set of possible values. The nature of these variations is not physically homogeneous, mainly due to measurement uncertainties, or due to tolerances that come from fabrication and identification, or simply because we want to study the system in a set of continuous states.

Contrary to adopting a statistical approach, which, we have to note, is not always possible, it may be simpler or more realistic to bound the variations of the parameters by intervals. Interval analysis turns out to be a very powerful technique for studying the variations of a system and for understanding its properties. One of the most important properties of this approach is the fact that it is possible to certify the results over all the states of a system.

Such an approach motivates us to look for an algorithm that computes rigorous bounds on eigenvalues of an interval matrix. Interval-based problems have been studied intensively in the past few decades, for example in control, in order to analyse the stability of interval matrices [1]. The interval eigenvalue problem, in particular, also has a variety of applications throughout diverse fields of science. Let us mention automobile suspension systems [2], vibrating systems [3], principal component analysis [4], and robotics [5], for instance.

∗ Corresponding author at: Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics, Malostranské nám. 25, 118 00, Prague, Czech Republic.
E-mail addresses: [email protected] (M. Hladík), [email protected] (D. Daney), [email protected] (E.P. Tsigaridas).

doi:10.1016/j.cam.2010.11.022


1.1. Motivation

As a motivating example, let us mention the following problem from robotics, which usually appears in experimental planning, e.g. [6]. We consider the following simple robotic mechanism. Let $X = (x, y)$ be a point in the plane, which is linked to two points, $M = (a, b)$ and $N = (c, d)$, using two prismatic joints, $r_1$ and $r_2$ respectively. In this case, the end-effector $X$ has two degrees of freedom for moving in the plane. The joints $r_1$ and $r_2$ are measured in a range $[\min\{r_k\}, \max\{r_k\}]$, where $k = 1, 2$. The range of the joints is obtained due to mechanical constraints and describes the workspace of the mechanism.

If we are given $r_1$ and $r_2$ and we want to estimate the coordinates of $X$, then we solve the polynomial system $F_1 = F_2 = 0$, where $F_1 = |X - M|^2 - r_1^2$ and $F_2 = |X - N|^2 - r_2^2$, which describes the kinematics problem. For the calibration problem things are quite different [7,8]. In this case we want to compute, or estimate, the coordinates of $M$ and $N$ as a function of several measurements of $X$, that is $X_1 = (x_1, y_1), X_2 = (x_2, y_2), X_3 = (x_3, y_3), \dots$. This is so because $M$ and $N$ are not known exactly, due to manufacturing tolerances. We have four unknowns, $a$, $b$, $c$ and $d$, expressed as a function of the measurements $X_i$, where $1 \le i \le n$. If $n \ge 2$, then we can compute $a$, $b$, $c$, and $d$ using the classical approach of the least squares method. However, we have to take into account the noise in the measurements $l_{1,i}$ and $l_{2,i}$. To get a robust solution, we choose the position of the measurements by also selecting the values of $l_{1,i}, l_{2,i}$ in $[\min\{l_k\}, \max\{l_k\}]$, where $k = 1, 2$.

We estimate the several selection criteria using the eigenvalues of the observability matrix [8], that is, the eigenvalues of $J^TJ$, where the elements of $J$ are the partial derivatives of $F_k$ with respect to the kinematic parameters. Such an approach requires bounds on the eigenvalues of the observability matrix, which is what we propose in this paper. We present a detailed example in Example 5.

Further motivation comes from polynomial system real solving. Consider a system of polynomials in $\mathbb{R}[x_1, \dots, x_n]$ and let $I$ be the ideal that they define. The coordinates of the solutions of the system can be obtained as eigenvalues of the so-called multiplication tables, e.g. [9]. That is, for each variable $x_i$ we can construct (using Gröbner basis or normal form algorithms) a matrix $M_{x_i}$ that corresponds to the operator of multiplication by $x_i$ in the quotient algebra $\mathbb{R}[x_1, \dots, x_n]/I$. The eigenvalues of these matrices are the coordinates of the solutions; thus the real eigenvalues are the coordinates of the real solutions. If the coefficients of the polynomials are not known exactly, then we can consider the multiplication tables as interval matrices. For an algorithm for solving bivariate polynomial systems that is based on the eigenvalues and eigenvectors of the Bézoutian matrix, the reader may refer to [10]. For the great importance of eigenvalue computations in polynomial system solving with inexact coefficients we refer the reader to [11].

1.2. Notation and preliminaries

In what follows we will use the following notation:

sgn(r)    the sign of a real number $r$, i.e., $\mathrm{sgn}(r) = 1$ if $r \ge 0$, and $\mathrm{sgn}(r) = -1$ if $r < 0$
sgn(z)    the sign of a vector $z$, i.e., $\mathrm{sgn}(z) = (\mathrm{sgn}(z_1), \dots, \mathrm{sgn}(z_n))^T$
e         a vector of all ones (of convenient dimension)
diag(z)   the diagonal matrix with entries $z_1, \dots, z_n$
ρ(A)      the spectral radius of a matrix $A$
A_{•,i}   the $i$th column of a matrix $A$
∂S        the boundary of a set $S$
|S|       the cardinality of a set $S$

For basic interval arithmetic the reader may refer to e.g. [12–14]. A square interval matrix is defined as

$\mathbf{A} := [\underline{A}, \overline{A}] = \{A \in \mathbb{R}^{n\times n};\ \underline{A} \le A \le \overline{A}\},$

where $\underline{A}, \overline{A} \in \mathbb{R}^{n\times n}$ and $\underline{A} \le \overline{A}$ are given matrices and the inequalities are considered elementwise. By

$A_c \equiv \frac{1}{2}(\underline{A} + \overline{A}), \qquad A_\Delta \equiv \frac{1}{2}(\overline{A} - \underline{A}),$

we denote the midpoint and radius of $\mathbf{A}$, respectively. We use analogous notation for interval vectors. An interval linear system of equations

$\mathbf{A}x = \mathbf{b}$

is a short form for the set of systems

$Ax = b, \quad A \in \mathbf{A},\ b \in \mathbf{b}.$

The set of all real eigenvalues of $\mathbf{A}$ is defined as

$\Lambda := \{\lambda \in \mathbb{R};\ Ax = \lambda x,\ x \neq 0,\ A \in \mathbf{A}\},$

and is a compact set. It seems that $\Lambda$ is always composed of at most $n$ compact real intervals, but this conjecture has not been proven yet and is proposed as an open problem.

In general, computing $\Lambda$ is a difficult problem. Even checking whether $0 \in \Lambda$ is an NP-hard problem, since this problem is equivalent to checking regularity of the interval matrix $\mathbf{A}$, which is known to be NP-hard [15]. An inner approximation of $\Lambda$ is any subset of $\Lambda$, and an outer approximation of $\Lambda$ is a set containing $\Lambda$ as a subset.


1.3. Previous work and our contribution

The problem of computing (the intervals of) the eigenvalues of interval matrices has been studied since the nineties. The first results were due to Deif [16] and Rohn and Deif [17]. They proposed formulae for calculating exact bounds; the former case bounds the real and imaginary parts of complex eigenvalues, while the latter case bounds the real eigenvalues. However, these results apply only under certain assumptions on the sign pattern invariance of the corresponding eigenvectors; such assumptions are not easy to verify (cf. [18]). Other works by Rohn concern theorems for the real eigenvalues [19] and bounds of the eigenvalue set $\Lambda$ [20]. An approximate method was given in [2]. The related topic of finding verified intervals of eigenvalues for real matrices is studied in [21].

If $\mathbf{A}$ has a special structure, then it is possible to develop stronger results, that is, to compute tighter intervals for the eigenvalue set. This is particularly true when $\mathbf{A}$ is symmetric; we postpone this discussion to a forthcoming communication. Our aim is to consider the general case, and to propose an algorithm for the eigenvalue problem, when the input is a generic interval matrix, without any special property.

Several methods are known for computing the eigenvalues of scalar (non-interval) matrices. It is not possible to directly apply them to interval matrices, since this causes enormous overestimation of the computed eigenvalue intervals. For the same reason, algorithms that are based on the characteristic polynomial of $\mathbf{A}$ are rarely, if at all, used. Even though interval-valued polynomials can be handled efficiently [22], this approach cannot yield sharp bounds, due to the overestimation of the intervals that correspond to the coefficients of the characteristic polynomial.

A natural way of computing the set of the eigenvalue intervals $\Lambda$ is to try to solve directly the interval nonlinear system

$Ax = \lambda x, \quad \|x\| = 1, \quad A \in \mathbf{A},\ \lambda \in \boldsymbol{\lambda}^0, \qquad (1)$

where $\boldsymbol{\lambda}^0 \supseteq \Lambda$ is some initial outer estimation of the eigenvalue set, and $\|\cdot\|$ is any vector norm. Interval analysis techniques for solving nonlinear systems of equations with interval parameters are very developed nowadays [23,14]. Using filtering, diverse consistency checking, and sophisticated box splitting they achieve excellent results. However, the curse of dimensionality implies that these techniques are applicable only to problems of relatively small size. Recall that the curse of dimensionality refers to the exponential increase of the volume when additional dimensions are added to a problem. For the eigenvalue problem (1), this is particularly the case (cf. Section 4).

We present an efficient algorithm for approximating the set of intervals of the real eigenvalues of a (generic) interval matrix, $\Lambda$, within a given accuracy. Our approach is based on a branch and prune scheme. We use several interval analysis techniques to provide efficient tests for inner and outer approximations of the intervals in $\Lambda$.

The rest of the paper is structured as follows. In Section 2 we present the main algorithm, the performance of which depends on checking intervals for being outer (containing no eigenvalue) or inner (containing only eigenvalues). These tests are discussed in Sections 2.3 and 2.4, respectively. Using some known theoretical assertions we can achieve in most cases the exact bounds of the eigenvalue set. This is considered in Section 3. In Section 4 we present an efficient implementation of the algorithm and experiments on various data sets.

2. The general algorithm

The algorithm that we present is a subdivision algorithm, based on a branch and prune method [23]. The pseudo-code of the algorithm is presented in Algorithm 1. The input consists of an interval matrix $\mathbf{A}$ and a precision rate $\varepsilon > 0$. Notice that $\varepsilon$ is not a direct measure of the approximation accuracy.

The output of the algorithm consists of two lists of intervals: $L_{inn}$, which comprises intervals lying inside $\Lambda$, and $L_{unc}$, which consists of intervals where we cannot decide whether they are contained in $\Lambda$ or not, with the given required precision $\varepsilon$. The union of these two lists is an outer approximation of $\Lambda$.

The idea behind our approach is to subdivide a given interval that initially contains $\Lambda$ until either we can certify that an interval is an inner or an outer one, or its length is smaller than the input precision $\varepsilon$. In the latter case, the interval is placed in the list $L_{unc}$.

The (practical) performance of the algorithm depends on the efficiency of its subroutines, and more specifically on the subroutines that implement the inner and outer tests. This is discussed in detail in Sections 2.3 and 2.4.

2.1. Branching in detail

We may consider the process of Algorithm 1 as a binary tree in which the root corresponds to the initial interval that contains $\Lambda$. At each step of the algorithm the inner and outer tests are applied to the tested interval. If both are unsuccessful and the length of the interval is greater than $\varepsilon$, then we split the interval into two equal intervals and the algorithm is applied to each of them.

There are two basic ways to traverse this binary tree, either depth-first or breadth-first. Even though from a theoretical point of view the two ways are equivalent, this is not the case from a practical point of view. The actual running time of an implementation of Algorithm 1 depends closely on the way that we traverse the binary tree. This is of no surprise. Exactly the same behavior is noticed in the problem of real root isolation of integer polynomials [24–26].


Algorithm 1 (Approximation of Λ)

1: compute initial bounds $\underline{\lambda}^0, \overline{\lambda}^0$ such that $\Lambda \subseteq \boldsymbol{\lambda}^0 := [\underline{\lambda}^0, \overline{\lambda}^0]$;
2: $L := \{\boldsymbol{\lambda}^0\}$, $L_{inn} := \emptyset$, $L_{unc} := \emptyset$;
3: while $L \neq \emptyset$ do
4:   choose and remove some $\boldsymbol{\lambda}$ from $L$;
5:   if $\boldsymbol{\lambda} \cap \Lambda = \emptyset$ then
6:     {nothing};
7:   else if $\boldsymbol{\lambda} \subseteq \Lambda$ then
8:     $L_{inn} := L_{inn} \cup \{\boldsymbol{\lambda}\}$;
9:   else if $\lambda_\Delta < \varepsilon$ then
10:     $L_{unc} := L_{unc} \cup \{\boldsymbol{\lambda}\}$;
11:   else
12:     $\boldsymbol{\lambda}^1 := [\underline{\lambda}, \lambda_c]$, $\boldsymbol{\lambda}^2 := [\lambda_c, \overline{\lambda}]$, $L := L \cup \{\boldsymbol{\lambda}^1, \boldsymbol{\lambda}^2\}$;
13:   end if
14: end while
15: return $L_{inn}$ and $L_{unc}$;

A closely related issue is the data structure that we use to implement the various lists of the algorithm, and in particular $L$. Our experience suggests that we should implement $L$ as a stack, so that the last inserted element is chosen at step 4 as the next candidate interval $\boldsymbol{\lambda}$. Hereby, at step 12 we insert $\boldsymbol{\lambda}^2$ first, and $\boldsymbol{\lambda}^1$ afterwards.

Note that, in essence, the stack implementation of $L$ closely relates to the depth-first search algorithm for traversing a binary tree. In this case, nodes correspond to the intervals handled. Each node is a leaf if it is recognized as an outer or inner interval, or if it is small enough. Otherwise, it has two descendants: the left one is for the left part of the interval and the right one is for the right part.

The main advantage of the depth-first exploration of the tree, and consequently of the choice to use a stack to implement $L$, is that it allows us to exploit some useful properties of the tested intervals. For example, if a tested interval $\boldsymbol{\lambda}$ is an inner interval, then the next interval in the stack, which is adjacent to it, cannot be an outer interval. Thus, for this interval we can omit steps 5–6 of the algorithm. Similarly, when a tested interval is an outer interval, then the next one in the stack cannot be inner. These kinds of properties allow us to avoid many needless computations in a lot of cases, and they turn out to be very efficient in practice.

Another important consequence of the choice of traversing the tree depth-first is that it allows us to improve the time complexity of the inner tests. This is discussed in Section 2.4.

2.2. Initial bounds

During the first step of Algorithm 1 we compute an initial outer approximation of the eigenvalue set Λ, i.e. an interval that is guaranteed to contain the eigenvalue set. For this computation we use a method proposed in [20, Theorem 2]:

Theorem 1. Let

Sc := 1/2 (Ac + Ac^T), S∆ := 1/2 (A∆ + A∆^T).

Then Λ ⊆ λ0 := [λ̲0, λ̄0], where

λ̲0 = λmin(Sc) − ρ(S∆), λ̄0 = λmax(Sc) + ρ(S∆),

and λmin(Sc), λmax(Sc) denote the minimal and maximal eigenvalue of Sc, respectively.

The aforementioned bounds are usually very tight, especially for symmetric interval matrices. Moreover, it turns out, as we will discuss in Section 4, that λ0 is an excellent starting point for our subdivision algorithm. Other bounds can be developed using Gerschgorin discs or Cassini ovals. None of these bounds, however, provides in practice approximations as sharp as the ones of Theorem 1.
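As a sketch of how Theorem 1 translates to code (numpy assumed; plain floating point, so the result is indicative rather than a verified enclosure):

```python
import numpy as np

def initial_bounds(A_c, A_Delta):
    """Initial enclosure [lam_lo, lam_hi] of the real eigenvalue set Λ
    following Theorem 1: Λ ⊆ [λmin(Sc) − ρ(S∆), λmax(Sc) + ρ(S∆)]."""
    S_c = 0.5 * (A_c + A_c.T)              # symmetric part of the midpoint
    S_Delta = 0.5 * (A_Delta + A_Delta.T)  # symmetric part of the radius
    eigs = np.linalg.eigvalsh(S_c)         # S_c is symmetric: real spectrum
    # S_Delta is symmetric and entrywise nonnegative, so by Perron-Frobenius
    # its spectral radius equals its largest eigenvalue
    rho = np.linalg.eigvalsh(S_Delta)[-1]
    return eigs[0] - rho, eigs[-1] + rho
```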

2.3. The outer test

In this section, we propose several outer tests, which can be used in step 5 of Algorithm 1. Even though their theoretical (worst-case) complexities are the same, their performances in practice differ substantially.

Consider an interval matrix A and a real closed interval λ. We want to decide whether λ ∩ Λ = ∅, that is, whether there is no matrix A ∈ A that has a real eigenvalue inside λ. In this case, we say that λ is an outer interval.


The natural idea is to transform the problem to that of checking regularity of interval matrices. An interval matrix M is regular if every M ∈ M is nonsingular.

Proposition 1. If the interval matrix A − λI is regular, then λ is an outer interval.

Proof. Let λ ∈ λ and A ∈ A. The real number λ is not an eigenvalue of A if and only if the matrix A − λI is nonsingular. Thus, if A − λI is regular, then for every λ ∈ λ and A ∈ A we have that A − λI is nonsingular (though not conversely), and hence λ is not an eigenvalue. �

In general, Proposition 1 gives a sufficient but not necessary condition for checking the outer property (due to the dependencies caused by multiple appearances of λ). Nevertheless, the smaller the radius of λ, the stronger the condition.

We now review some of the applicable conditions and methods. Recall that testing regularity of an interval matrix is an NP-hard problem [15]; therefore we exhibit a sufficient condition as well.

2.3.1. A sufficient regularity condition

There are diverse sufficient conditions for an interval matrix to be regular [27]. A very strong one, which turned out to be very useful (cf. Section 4), is formulated below.

Proposition 2. An interval matrix M is regular if Mc is nonsingular and ρ(|Mc^{−1}|M∆) < 1.

Proof. See e.g. [27, Corollary 3.2]. �
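In code, this condition takes only a few lines. The sketch below uses plain floating point (numpy assumed), so a positive answer is only indicative; a verified implementation would use interval or directed-rounding arithmetic.

```python
import numpy as np

def sufficient_regularity(M_c, M_Delta):
    """Proposition 2: M is regular if M_c is nonsingular and
    rho(|M_c^{-1}| M_Delta) < 1. Returns False when the test is inconclusive."""
    try:
        M_c_inv = np.linalg.inv(M_c)
    except np.linalg.LinAlgError:
        return False                        # singular midpoint: test fails
    T = np.abs(M_c_inv) @ M_Delta           # nonnegative matrix
    return np.max(np.abs(np.linalg.eigvals(T))) < 1.0
```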

2.3.2. The Jansson and Rohn method

Herein we recall the Jansson and Rohn method [28] for testing regularity of an interval matrix M. Its great benefit is that the time complexity is not a priori exponential. Its modification is also very useful for the inner test (Section 2.4). That is why we describe the method here in more detail.

Choose an arbitrary vector b ∈ R^n and consider the interval system of equations Mx = b. The solution set

X = {x ∈ R^n; Mx = b, M ∈ M}

is described by

|Mc x − b| ≤ M∆|x|.

This solution set is formed by a union of convex polyhedra, since the restriction of X to an orthant is characterized by a linear system of inequalities

(Mc − M∆diag(z))x ≤ b, (Mc + M∆diag(z))x ≥ b, diag(z)x ≥ 0,   (2)

where z ∈ {±1}^n is a vector of signs corresponding to the orthant. Regularity of M closely relates to unboundedness of X. Indeed, Jansson and Rohn [28] obtained the following result.

Theorem 2. Let C be a component (maximal connected set) of X. Then M is regular if and only if C is bounded.

The algorithm starts by selecting an appropriate vector b. The component C is chosen so that it includes the point Mc^{−1}b.

We check the unboundedness of C by checking the unboundedness of (2) for each orthant that C intersects. The list L comprises the sign vectors (orthants) to be inspected, and V consists of the already visited orthants.

To speed up the process, we notice that there is no need to inspect all the neighboring orthants. It suffices to inspect only those orthants possibly connected to the current one; thus we can skip the ones that are certainly disconnected. Jansson and Rohn proposed an improvement in this direction; we refer the reader to [28] for more details.

The performance of Algorithm 2 depends strongly on the choice of b. It is convenient to select b so that the solution set X intersects a (possibly) small number of orthants. The selection procedure for b, proposed by Jansson and Rohn, consists of a local search.
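The basic building block of the method is testing the linear program (3) of Algorithm 2 for unboundedness on one orthant. A minimal sketch with scipy (an assumption of this sketch; the paper's implementation uses GLPK) could look as follows:

```python
import numpy as np
from scipy.optimize import linprog

def orthant_lp_unbounded(M_c, M_Delta, b, z):
    """Check the LP (3) for the orthant with sign vector z:
    max z^T x s.t. (M_c - M_Delta diag(z)) x <= b,
                   (M_c + M_Delta diag(z)) x >= b, diag(z) x >= 0.
    linprog minimizes, so we minimize -z^T x; HiGHS reports status 3
    when the problem is unbounded."""
    D = np.diag(z)
    A_ub = np.vstack([M_c - M_Delta @ D,     # first block, <= b
                      -(M_c + M_Delta @ D),  # second block, >= b flipped
                      -D])                   # diag(z) x >= 0 flipped
    b_ub = np.concatenate([b, -b, np.zeros(len(b))])
    res = linprog(-np.asarray(z, dtype=float), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * len(b), method="highs")
    return res.status == 3
```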

2.3.3. The ILS method

The ILS (interval linear system) method is a simple but efficient approach for testing regularity of an interval matrix M. It is based on transforming the problem to solving an interval linear system and using an ILS solver. The more effective the ILS solver is, the more effective the ILS method.

Proposition 3. The interval matrix M is regular if and only if the interval linear system

Mx = 0, x ≠ 0   (5)

has no solution.


Algorithm 2 (Jansson and Rohn method checking regularity of M)
1: if Mc is singular then
2:   return ‘‘M is not regular’’;
3: end if
4: select b;
5: z := sgn(Mc^{−1}b);
6: L := {z}, V := ∅;
7: while L ≠ ∅ do
8:   choose and remove some z from L;
9:   V := V ∪ {z};
10:  solve the linear program
       max{z^T x; (Mc − M∆diag(z))x ≤ b, (Mc + M∆diag(z))x ≥ b, diag(z)x ≥ 0};   (3)
11:  if (3) is unbounded then
12:    return ‘‘M is not regular’’;
13:  else if (3) is feasible then
14:    L := L ∪ (N(z) \ (L ∪ V)), where
       N(z) := {(z1, . . . , zi−1, −zi, zi+1, . . . , zn)^T; 1 ≤ i ≤ n};   (4)
15:  end if
16: end while
17: return ‘‘M is regular’’;

Algorithm 3 (ILS method)
1: for i = 1, . . . , n do
2:   b := M•,i {the ith column of M};
3:   M′ := (M•,1, . . . , M•,i−1, M•,i+1, . . . , M•,n) {the matrix M without the ith column};
4:   solve (approximately) the interval linear system
       M′x′ = −b, −e ≤ x′ ≤ e;   (7)
5:   if (7) possibly has a solution then
6:     return ‘‘λ needn't be outer’’;
7:   end if
8: end for
9: return ‘‘λ is an outer interval’’;

As x can be normalized, we replace the condition x ≠ 0 by ‖x‖∞ = 1, where the maximum norm is defined as ‖x‖∞ := max{|xi|; i = 1, . . . , n}. Moreover, since both x and −x satisfy (5), we derive the following equivalent formulation of (5):

Mx = 0, ‖x‖∞ = 1, xi = 1 for some i ∈ {1, . . . , n},   (6)

the solvability of which can be tested using Algorithm 3.

The efficiency of the ILS method depends greatly on the selection of an appropriate ILS solver. It is not necessary to solve (7) exactly, as this is time-consuming; in fact, checking solvability is known to be NP-hard [29]. It suffices to exploit a (preferably fast) algorithm producing an outer approximation of (6), that is, an approximation that contains the whole solution set. Experience shows that the algorithm proposed in [30], modified so as to work for overconstrained linear systems, is a preferable choice. It is sufficiently fast and produces a good approximation of the solution set of (7).

2.3.4. Direct enumeration

The ILS method benefits us even when M is not recognized as a regular matrix. In this case, we have an approximation of the solution set of (7) at each iteration of step 4. By embedding these into the n-dimensional space and joining them together, we get an outer approximation of the solution set of (6). This is widely usable; see also Section 3. We will present some more details of this procedure.

Let v ⊆ R^n be an interval vector. We consider the sign vector set sgn(v), that is, the set of vectors z ∈ {±1}^n with components

zi = +1 if v̲i ≥ 0;  zi = −1 if v̲i < 0, v̄i ≤ 0;  zi = ±1 otherwise (v̲i < 0 < v̄i).   (8)


Clearly, the cardinality of sgn(v) is always a power of 2. Notice that this set does not always consist of the sign vectors of all v ∈ v; the difference is caused when v̲i < 0, v̄i = 0 holds for some i = 1, . . . , n.
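A direct Python sketch of (8), enumerating the sign vector set of an interval vector given by its endpoint arrays:

```python
from itertools import product

def sign_vectors(v_lo, v_hi):
    """Enumerate sgn(v) following (8): +1 where v_lo >= 0; -1 where
    v_lo < 0 and v_hi <= 0; both signs where v_lo < 0 < v_hi.
    The cardinality is 2 to the number of sign-indefinite entries."""
    choices = []
    for lo, hi in zip(v_lo, v_hi):
        if lo >= 0:
            choices.append((+1,))
        elif hi <= 0:
            choices.append((-1,))
        else:
            choices.append((-1, +1))      # zero strictly inside: both signs
    return [list(z) for z in product(*choices)]
```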

Let x be an outer approximation of the solution set of (6), and let Z := sgn(x). As long as Z has reasonably small cardinality, we can check the regularity of M by inspecting all the corresponding orthants and solving the linear programs (3) with b = 0. There is no need to check the other orthants, since x is a solution of (6) if and only if it is a feasible solution of (3) with b = 0, z = sgn(x) and z^T x > 0. Algorithm 4 gives a formal description of this procedure.

Algorithm 4 (Direct enumeration via Z)
1: for all z ∈ Z do
2:   solve the linear program (3) with b = 0;
3:   if (3) is unbounded then
4:     return ‘‘M is not regular’’;
5:   end if
6: end for
7: return ‘‘M is regular’’;

2.3.5. Practical implementation

Our implementation combines all the methods mentioned in this section. We propose the following procedure (Algorithm 5) for the outer test of Algorithm 1:

Algorithm 5 (Outer test)
1: M := A − λI;
2: if Mc is singular then
3:   return ‘‘λ is not an outer interval’’;
4: end if
5: if ρ(|Mc^{−1}|M∆) < 1 then
6:   return ‘‘λ is an outer interval’’;
7: end if
8: call Algorithm 2 (Jansson and Rohn) with the number of iterations limited by a constant K3;
9: if the number of iterations does not exceed K3 then
10:   return its output;
11: end if
12: call Algorithm 3 (ILS method);
13: if Algorithm 3 recognizes λ as an outer interval then
14:   return ‘‘λ is an outer interval’’;
15: end if
16: use the obtained approximation x to define Z;
17: if |Z| < K4 then
18:   call Algorithm 4;
19:   return its output;
20: end if
21: return ‘‘no decision on λ’’;

The Jansson and Rohn method is very fast as long as the radii of M are small and λ is not close to the boundary of Λ. If this is not the case, then it can be time-consuming. We limit the number of iterations of this procedure to K3, where K3 := n^3. If after this number of iterations the result is not conclusive, then we call the ILS method. Finally, if the ILS method does not succeed, then we compute Z, and if its cardinality is less than K4, then we call Algorithm 4. Otherwise, we cannot reach a conclusion about λ. We empirically choose K4 := 2^α with α := 2 log(K3 + 200) − 8.

Notice that in step 2 of Algorithm 5 we obtain a little more information: not only is λ not an outer interval, but also its half-intervals [λ̲, λc] and [λc, λ̄] cannot be outer.

Remark 1. The interval Newton method [13,14] applied to the nonlinear system

Ax = λx, ‖x‖2 = 1

did not turn out to be efficient. Using the maximum norm was more promising; however, at each iteration the interval Newton method solves an interval linear system that is a consequence of (6), and therefore it cannot yield better results than the ILS method.


2.4. The inner test

This section is devoted to the inner test (step 7 in Algorithm 1). We are given a real closed interval λ and an interval matrix A. The question is whether every λ ∈ λ represents an eigenvalue of some A ∈ A. If so, then λ is called an inner interval.

Using inner testing in interval-valued problems is not a common procedure. It depends greatly on the problem under consideration, since interval parameters are usually correlated, and such correlations are, in general, hard to deal with. However, utilization of inner testing provides two great benefits: it decreases the running time and it allows us to measure the sharpness of the approximation.

Our approach is a modification of the Jansson and Rohn method.

Proposition 4. We have that λ ∈ R is an eigenvalue of some A ∈ A if the linear programming problem

max{z^T x; (Ac − λI − A∆diag(z))x ≤ b, (Ac − λI + A∆diag(z))x ≥ b, diag(z)x ≥ 0}   (9)

is unbounded for some z ∈ {±1}^n.

Proof. It follows from [28, Theorems 5.3 and 5.4]. �

Proposition 5. We have that λ is an inner interval if the linear programming problem

max{z^T x1 − z^T x2; (Ac − A∆diag(z))(x1 − x2) − λ̲x1 + λ̄x2 ≤ b, (Ac + A∆diag(z))(x1 − x2) − λ̄x1 + λ̲x2 ≥ b, diag(z)(x1 − x2) ≥ 0, x1, x2 ≥ 0}   (10)

is unbounded for some z ∈ {±1}^n.

Proof. Let z ∈ {±1}^n and let (10) be unbounded. That is, there exists a sequence of feasible points (x1k, x2k), k = 1, 2, . . ., such that limk→∞(z^T x1k − z^T x2k) = ∞. We show that (9) is unbounded for every λ ∈ λ, and thereby λ is an inner interval.

Let λ ∈ λ be arbitrary. Define a sequence of points xk := x1k − x2k, k = 1, 2, . . .. Every xk is a feasible solution to (9), since

(Ac − λI − A∆diag(z))xk = (Ac − λI − A∆diag(z))(x1k − x2k) ≤ (Ac − A∆diag(z))(x1k − x2k) − λ̲x1k + λ̄x2k ≤ b,

and

(Ac − λI + A∆diag(z))xk = (Ac − λI + A∆diag(z))(x1k − x2k) ≥ (Ac + A∆diag(z))(x1k − x2k) − λ̄x1k + λ̲x2k ≥ b,

and

diag(z)xk = diag(z)(x1k − x2k) ≥ 0.

Next, limk→∞ z^T xk = limk→∞ z^T(x1k − x2k) = ∞. Therefore the linear program (9) is unbounded. �

Proposition 5 gives us a sufficient condition for checking whether λ is an inner interval. The condition becomes stronger as λ becomes narrower. The natural question is how to search for a sign vector z that guarantees the unboundedness of (10). We can modify the Jansson and Rohn method and inspect all orthants intersecting a given component. In our experience, slightly better results are obtained by the variation described by Algorithm 6.

This approach has several advantages. First, it solves the linear program (10), which has twice as many variables as (3), at most (n + 1) times. Next, we can accelerate Algorithm 1 by means of the following properties:

• Algorithm 6 returns that λ is not an inner interval only if λc is not an eigenvalue. In this case, neither of the half-intervals [λ̲, λc] and [λc, λ̄] can be inner.

• If (10) is unbounded for some sign vector z, then it is quite likely that the interval adjacent to λ is also inner and that the corresponding linear program (10) is unbounded for the same sign vector z. Therefore, the sign vector z is worth remembering for the subsequent iterations of step 3 in Algorithm 6. This is particularly valuable when the list L is implemented as a stack; see the discussion in Section 2. A sketch of how (10) can be assembled follows this list.
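For concreteness, the following sketch (scipy assumed, as before) builds the LP (10) for a given sign vector z and interval [λ̲, λ̄], stacking the variables as (x1, x2):

```python
import numpy as np
from scipy.optimize import linprog

def inner_lp_unbounded(A_c, A_Delta, b, z, lam_lo, lam_hi):
    """LP (10) for sign vector z; unboundedness (HiGHS status 3) certifies
    [lam_lo, lam_hi] as an inner interval by Proposition 5."""
    n = len(b)
    D, I = np.diag(z), np.eye(n)
    P = A_c - A_Delta @ D                 # matrix of the '<= b' block
    Q = A_c + A_Delta @ D                 # matrix of the '>= b' block
    A_ub = np.vstack([
        np.hstack([P - lam_lo * I, -P + lam_hi * I]),   # <= b
        np.hstack([-Q + lam_hi * I, Q - lam_lo * I]),   # >= b, flipped
        np.hstack([-D, D]),                             # diag(z)(x1-x2) >= 0
    ])
    b_ub = np.concatenate([b, -b, np.zeros(n)])
    c = np.concatenate([-np.asarray(z, float), np.asarray(z, float)])
    res = linprog(c,                      # maximize z^T x1 - z^T x2
                  A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.status == 3
```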

2.5. Complexity

In this section we discuss the complexity of Algorithm 1. Recall that even testing regularity of an interval matrix is an NP-hard problem; thus we cannot expect a polynomial algorithm. By LP(m, n) we denote the (bit) complexity of solving a linear program with O(m) inequalities and O(n) unknowns.

Our algorithm is a subdivision algorithm. Its complexity is the number of tests it performs, times the complexity of each step. At each step we perform, in the worst case, an outer and an inner test.


Algorithm 6 (Inner test)
1: call Algorithm 2 (Jansson and Rohn) with M := A − λc I;
2: if M is regular then
3:   return ‘‘λ is not an inner interval’’;
4: end if
5: let z be a sign vector for which (3) is unbounded;
6: solve (10);
7: if (10) is unbounded then
8:   return ‘‘λ is an inner interval’’;
9: else if (10) is infeasible then
10:   return ‘‘λ is possibly not inner’’;
11: end if
12: for all y ∈ N(z) do
13:   solve (10) with y as a sign vector;
14:   if (10) is unbounded then
15:     return ‘‘λ is an inner interval’’;
16:   end if
17: end for
18: return ‘‘λ is possibly not inner’’;

Let us first compute the number of steps of the algorithm. Let max{maxij |A̲ij|, maxij |Āij|} ≤ 2^τ, i.e. we consider a bound on the absolute value of the numbers used to represent the interval matrix. From Section 2.2 we deduce that the real eigenvalue set of A is contained in an interval centered at zero whose radius is bounded by the sum of the spectral radii of Sc and S∆. Evidently, the bound 2^τ holds for the elements of these matrices as well. Since for an n × n matrix M the absolute value of its (possibly complex) eigenvalues is bounded by n maxij |Mij|, we deduce that the spectral radii of Sc and S∆ are bounded by n2^τ, and thus the real eigenvalues of A are in the interval [−n2^{τ+1}, n2^{τ+1}]. Let ε = 2^{−k} be the input accuracy. In this case the total number of intervals that we need to test, or in other words the total number of steps that the algorithm performs, is n2^{τ+1}/2^{−k} = n2^{τ+k+1}.

these tests we should solve, in the worst case, 2O(n) linear programs that consist of O(n) variables and inequalities. Theexponential number of linear programs is a consequence of the fact that we should enumerate all the vertices of a hypercubein n dimensions (refer to Algorithm 4).

Thus the total complexity of the algorithm is O(n 2^{k+τ+1} 2^n LP(n, n)).

2.6. The interval hull

We can slightly modify Algorithm 1 to approximate the interval hull of Λ, i.e., the smallest interval containing Λ. Let λL (resp. λU) be the lower (resp. upper) boundary of Λ, i.e.,

λL := inf{λ; λ ∈ Λ} and λU := sup{λ; λ ∈ Λ}.

In order to compute λL, we consider the following modification of Algorithm 1. We remove all the steps that refer to the list Linn, and we change step 8 to

8: return Lunc;

The modified algorithm returns Lunc as output. The union of all the intervals in Lunc is an approximation of the lower boundary point λL. If the list Lunc is empty, then the eigenvalue set Λ is empty, too.

An approximation of the upper boundary point λU can be computed as the negative of the lower eigenvalue boundary point of the interval matrix −A.

3. Exact bounds

Algorithm 1 yields an outer and an inner approximation of the set of eigenvalues Λ. In this section we show how to use it for computing the exact boundary points of Λ. This exactness is limited by the use of floating point arithmetic. Rigorous results would be obtained by using interval arithmetic, but this is a direct modification of the proposed algorithm and we do not discuss it in detail.

As long as the interval radii of A are small enough, we are able, in most cases, to determine the exact bounds in a reasonable time. Surprisingly, computing exact bounds is sometimes faster than computing a high-precision approximation.


We build on [19, Theorem 3.4]:

Theorem 3. Let λ ∈ ∂Λ. Then there exist nonzero vectors x, p ∈ R^n and vectors y, z ∈ {±1}^n such that

(Ac − diag(y)A∆diag(z))x = λx,   (11)
(Ac^T − diag(z)A∆^T diag(y))p = λp,
diag(z)x ≥ 0,
diag(y)p ≥ 0.

Theorem 3 asserts that the boundary eigenvalues are produced by special matrices Ay,z ∈ A of the form Ay,z := Ac − diag(y)A∆diag(z). Here, z is the sign vector of the right eigenvector x, and y is the sign vector of the left eigenvector p. Recall that a right eigenvector is a nonzero vector x satisfying Ax = λx, and a left eigenvector is a nonzero vector p satisfying A^T p = λp.

In our approach, we are given an interval λ and we try to find outer approximations of the corresponding left and right eigenvectors, i.e. p and x, respectively. If no component of p or x contains zero, then the sign vectors y := sgn(p) and z := sgn(x) are uniquely determined. In this case we enumerate all the eigenvalues of Ay,z. If only one of them belongs to λ, then we have succeeded.

If the eigenvectors in p and x are normalized according to (5), then we must inspect not only Ay,z but also A−y,z (the others, Ay,−z and A−y,−z, are not needed due to symmetry).

The formal description is given in Algorithm 7.

Algorithm 7 (Exact bound)
1: M := A − λI;
2: call Algorithm 3 (ILS method) with the input matrix M^T to obtain an outer approximation p of the corresponding solutions;
3: if p̲i ≤ 0 ≤ p̄i for some i = 1, . . . , n then
4:   return ‘‘bound is possibly not unique’’;
5: end if
6: y := sgn(p);
7: call Algorithm 3 (ILS method) with the input matrix M to obtain an outer approximation x of the corresponding solutions;
8: if x̲i ≤ 0 ≤ x̄i for some i = 1, . . . , n then
9:   return ‘‘bound is possibly not unique’’;
10: end if
11: z := sgn(x);
12: let L be the set of all eigenvalues of Ay,z and A−y,z;
13: if L ∩ λ = ∅ then
14:   return ‘‘no boundary point in λ’’;
15: else if L ∩ λ = {λ∗} then
16:   return ‘‘λ∗ is a boundary point candidate’’;
17: else
18:   return ‘‘bound is possibly not unique’’;
19: end if

We now describe how to integrate this procedure into our main algorithm. Suppose that at some iteration of Algorithm 1 we have an interval λ1 recognized as outer. Suppose next that the following current interval λ2 is adjacent to λ1 (i.e., λ̄1 = λ̲2); it is not recognized as outer and it fulfills the precision test (step 9). According to the result of Algorithm 7 we distinguish three possibilities:

• If L ∩ λ2 = ∅, then there cannot be any eigenvalue boundary point in λ2, and therefore it is an outer interval.
• If L ∩ λ2 = {λ∗}, then λ∗ is the exact boundary point required, and moreover [λ∗, λ̄2] ⊆ Λ.
• If |L ∩ λ2| > 1, then the exact boundary point is λ∗ := min{λ; λ ∈ L ∩ λ2}. However, we cannot say anything about the remaining interval [λ∗, λ̄2].

A similar procedure is applied when λ1 is inner and λ2 is adjacent and narrow enough.

We can simply extend Algorithm 7 to the case where there are some zeros in the components of p and x. In this case, the sign vectors y and z are not determined uniquely. Thus, we have to take into account the sets of all possible sign vectors. Let v be an interval vector and let sgn′(v) be the sign vector set consisting of all sign vectors z ∈ {±1}^n satisfying

zi = +1 if v̲i > 0;  zi = −1 if v̄i < 0;  zi = ±1 otherwise (v̲i ≤ 0 ≤ v̄i).


The definition of sgn′(v) slightly differs from that of sgn(v) in (8). Herein, we must take into account both signs of zi whenever vi contains zero (even on its boundary). Assume

Y := sgn′(p), Z := sgn′(x).

Instead of two matrices, Ay,z and A−y,z, we must inspect all possible combinations with y ∈ Y and z ∈ Z. In this way, step 12 of Algorithm 7 will be replaced by

12′: L := {λ; λ is an eigenvalue of Ay,z or of A−y,z, y ∈ Y, z ∈ Z};

The cardinality of Y is a power of 2, and so is the cardinality of Z. Since we have to enumerate the eigenvalues of |Y| · |Z| matrices, step 12′ is tractable only for reasonably small sets Y and Z.

4. Numerical experiments

In this section we present the results of some numerical experiments, which confirm the quality of the presented algorithm. We are able to determine the eigenvalue set exactly, or at least very sharply, for dimensions up to about 30. The running time depends heavily not only on the dimension, but also on the widths of the matrix intervals.

We also compared our implementation with other techniques that solve the interval nonlinear system (1) directly. It turned out that such techniques are comparable only for very small dimensions, i.e. ∼5. The results of our numerical experiments are displayed in the tables that follow and can be interpreted using the following notation:

n   Problem dimension
ε   Precision
R   Maximal radius
‘‘Exactness’’   Indication of whether exact bounds of Λ were achieved; if not, we display the number of uncertain intervals
‘‘Time’’   Computing time in hours, minutes and seconds
‘‘Hull time’’   Computing time of the interval hull of Λ; see Section 2.6

Note that ε refers to the precision used in step 9 of Algorithm 1. For the additional computation of exact boundary points we use a precision of 10^{−4}ε.

Generally, better results were obtained for smaller R, as both the Jansson and Rohn method and the sufficient regularity condition are more efficient for smaller radii of the matrix intervals.

The experiments were carried out on an Intel Pentium(R) 4, CPU 3.4 GHz, with 2 GB RAM; the source code was written in C++ using GLPK v.4.23 [31] for solving linear programs, CLAPACK v.3.1.1 for its linear algebraic routines, and PROFIL/BIAS v.2.0.4 [32] for interval arithmetic. Notice, however, that the routines of GLPK and CLAPACK [33] do not produce verified solutions, and for real-life problems verified software or interval arithmetic should preferably be used.

Example 1 (Random Matrices). The entries of the midpoint matrix Ac are chosen randomly with uniform distribution in [−20, 20]. The entries of the radius matrix A∆ are chosen randomly with uniform distribution in [0, R], where R is a positive real number. The results are displayed in Table 1.

Example 2 (Random Symmetric Matrices). The entries of Ac and A∆ are chosen randomly in the same manner as in Example 1; the only difference is that both of these matrices are composed to be symmetric. See Table 2 for the results.

Example 3 (Random A^T A Matrices). The entries of Ac and A∆ are chosen randomly as in Example 1, and our algorithm is applied to the matrix generated by the product A^T A. In this case, the maximal radius value R is a bit misleading, since it refers to the original matrix A instead of the product used. The results are displayed in Table 3.

Example 4 (Random Nonnegative Matrices). The entries of Ac and A∆ are chosen randomly as in Example 1, and the eigenvalue problem is solved for the absolute value

|A| := {|A|; A ∈ A}.

The absolute value of an interval matrix is again an interval matrix, with entries

|A|ij = Aij if A̲ij ≥ 0;  |A|ij = −Aij if Āij ≤ 0;  |A|ij = [0, max(−A̲ij, Āij)] otherwise.

See Table 4 for the results.
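A sketch of this entrywise absolute value (numpy assumed), with the interval matrix represented by its endpoint matrices:

```python
import numpy as np

def interval_abs(A_lo, A_hi):
    """Entrywise |A| of an interval matrix [A_lo, A_hi]: the interval itself
    where A_lo >= 0, its mirrored negation where A_hi <= 0, and
    [0, max(-A_lo, A_hi)] where the entry straddles zero."""
    lo = np.where(A_lo >= 0, A_lo, np.where(A_hi <= 0, -A_hi, 0.0))
    hi = np.maximum(np.abs(A_lo), np.abs(A_hi))
    return lo, hi
```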


Table 1. Random matrices.

n    ε     R     Exactness  Time          Hull time
5    0.1   1     Exact      2 s           1 s
10   0.1   0.1   Exact      7 s           2 s
10   0.1   0.5   Exact      9 s           4 s
10   0.1   1     Exact      16 s          1 s
10   0.1   5     Exact      1 min 12 s    1 min 11 s
15   0.1   0.1   Exact      37 s          5 s
15   0.1   0.5   Exact      10 min 29 s   6 s
15   0.1   0.5   Exact      20 min 54 s   35 s
15   0.1   1     Exact      7 min 59 s    1 min 12 s
20   0.1   0.1   Exact      2 min 16 s    10 s
20   0.1   0.1   Exact      7 min 27 s    39 s
20   0.1   0.5   Exact      21 min 6 s    46 s
25   0.1   0.01  Exact      5 min 46 s    23 s
25   0.1   0.05  Exact      10 min 39 s   1 min 34 s
30   0.01  0.01  Exact      14 min 37 s   54 s
30   0.01  0.1   Exact      48 min 31 s   29 s
40   0.01  0.01  Exact      –             2 min 20 s
40   0.01  0.05  Exact      –             1 h 42 min 36 s
40   0.01  0.1   Exact      –             1 h 52 min 15 s
50   0.01  0.01  Exact      –             9 min 25 s
50   0.01  0.1   2          –             21 min 34 s

Table 2. Random symmetric matrices.

n    ε     R     Exactness  Time             Hull time
5    0.1   1     Exact      3 s              1 s
10   0.1   0.1   Exact      11 s             1 s
10   0.1   0.5   Exact      17 s             1 s
10   0.1   1     2          2 min 18 s       1 s
10   0.1   5     2          11 s             10 s
15   0.1   0.1   Exact      3 min 51 s       1 s
15   0.1   0.5   6          31 min 43 s      4 s
20   0.1   0.01  Exact      2 min 25 s       3 s
20   0.1   0.05  Exact      6 min 39 s       4 s
20   0.1   0.1   Exact      27 min 48 s      8 s
20   0.1   0.1   10         40 min 19 s      8 s
25   0.1   0.01  Exact      7 min 51 s       12 s
25   0.1   0.05  Exact      1 h 59 min 11 s  11 s
30   0.01  0.1   Exact      –                29 s
40   0.01  0.1   Exact      –                6 min 15 s
50   0.01  0.01  Exact      –                1 min 23 s
50   0.01  0.1   Exact      –                1 h 2 min 43 s
100  0.01  0.01  Exact      –                34 min 5 s

Table 3. Random A^T A matrices.

n    ε     R      Exactness  Time         Hull time
5    0.1   0.1    Exact      5 s          1 s
10   0.1   0.1    Exact      37 s         3 s
10   0.1   0.1    Exact      4 min 0 s    1 s
10   0.1   0.5    Exact      1 min 35 s   7 s
10   0.1   1      Exact      1 min 3 s    56 s
15   0.1   0.001  Exact      1 min 1 s    3 s
15   0.1   0.002  Exact      40 s         2 s
15   0.1   0.01   3          3 min 38 s   17 s
15   0.1   0.02   1          1 min 58 s   13 s
15   0.1   0.1    Exact      39 min 27 s  4 min 48 s
20   0.01  0.1    Exact      –            1 h 18 min 16 s

Figs. 1–4 present some examples of the eigenvalue set Λ. Intervals of Λ are colored red, while the outer intervals are yellow and green; yellow is for the intervals recognized by the sufficient regularity condition (step 5 of Algorithm 5), and green is for the remainder.


Table 4. Random nonnegative matrices.

n    ε     R     Exactness  Time         Hull time
10   0.01  0.1   Exact      13 s         1 s
10   0.01  1     Exact      8 s          1 s
15   0.01  0.1   Exact      2 min 22 s   6 s
15   0.01  0.1   Exact      47 s         4 s
15   0.01  0.5   Exact      1 min 53 s   27 s
15   0.01  1     Exact      57 s         37 s
15   0.01  5     Exact      –            1 h 8 min 49 s
20   0.01  0.1   Exact      3 min 55 s   9 s
20   0.01  0.5   Exact      8 min 36 s   1 min 19 s
25   0.01  0.1   Exact      51 min 58 s  12 s
30   0.01  0.01  Exact      19 min 47 s  49 s
30   0.01  0.1   Exact      –            37 min 44 s
40   0.01  0.01  Exact      –            2 min 41 s
40   0.01  0.05  Exact      –            15 min 57 s
50   0.01  0.1   Exact      –            2 h 2 min 22 s

Fig. 1. Random matrix, n = 30, R = 0.1, computing time 48 min 31 s, initial approximation [−86.888, 86.896].

Fig. 2. Random symmetric matrix, n = 15, R = 0.5, computing time 13 min 48 s, initial approximation [−60.614, 58.086].

Fig. 3. Random A^T A matrix, n = 15, R = 0.02, computing time 1 min 58 s, initial approximation [−39.620, 5679.196].

Fig. 4. Random nonnegative matrix, n = 15, R = 0.2, computing time 2 min 22 s, initial approximation [−27.548, 144.164].

Example 5 (Interval Matrices in Robotics). The following problem usually appears in experimental planning, e.g. [6]. We consider a PRRRP planar mechanism, where P stands for a prismatic and R for a rotoid joint; for further details we refer the reader to [34].

We consider the mechanism of Fig. 5. Let X = (x, y) be a point in the plane, which is linked to two points, M and N, using two fixed-length bars, so that ‖X − M‖ = r1, ‖X − N‖ = r2. We can move M (respectively N) between the fixed points A = (0, 0) and B = (L, 0) (respectively C = (L + r3, 0) and D = (2L + r3, 0)) using two prismatic joints, so ‖A − M‖ = l1, ‖C − N‖ = l2. In the example that we consider (Fig. 5), the points A, B, C, D are aligned.

If we control the lengths l1 and l2 by using two linear actuators, we allow the displacement of the end-effector X to have two degrees of freedom in a planar workspace that is limited by the articular constraints l1, l2 ∈ [0, L]. The two equations F1(X, l1) ≡ ‖M − X‖^2 − r1^2 = 0 and F2(X, l2) ≡ ‖N − X‖^2 − r2^2 = 0 link the generalized coordinates (x, y) and the articular coordinates l1, l2.

The calibration of such a mechanism is not an easy problem due to assembly and manufacturing errors, and because the kinematic parameters, that is the lengths r1, r2 and r3, are not well known. The aim is to estimate them using several measurements, k = 1, . . . , n, of the end-effector Xk and the corresponding measurements of the articular coordinates lk,1, lk,2.

The identification procedure of r1, r2, and r3 is based on a classical least squares approach for the (redundant) system

F ≡ [F1,1(X1, l1,1), F1,2(X1, l1,2), . . . , Fn,1(Xn, ln,1), Fn,2(Xn, ln,2)]^T = 0.

That is, we want to compute r1, r2, and r3 that minimize the quantity F^T F.

A natural question is how to choose the measurement positions inside the workspace [7] so as to improve the robustness of the numerical solution of the least squares method. For this we can use the observability index [8], which is the square root of the smallest eigenvalue of J^T J, where J is the identification Jacobian. It is defined by the derivatives of Fk,1


Fig. 5. PRRRP planar parallel robot.

and Fk,2 with respect to the kinematic parameters r1, r2, and r3, that is

J = ( ∂F1,1(X1, l1,1)/∂r1   ∂F1,1(X1, l1,1)/∂r2   ∂F1,1(X1, l1,1)/∂r3
      ∂F1,2(X1, l1,2)/∂r1   ∂F1,2(X1, l1,2)/∂r2   ∂F1,2(X1, l1,2)/∂r3
      ...                   ...                   ...
      ∂Fn,1(Xn, ln,1)/∂r1   ∂Fn,1(Xn, ln,1)/∂r2   ∂Fn,1(Xn, ln,1)/∂r3
      ∂Fn,2(Xn, ln,2)/∂r1   ∂Fn,2(Xn, ln,2)/∂r2   ∂Fn,2(Xn, ln,2)/∂r3 ).

The observability index can be equivalently defined as the third-largest eigenvalue of the matrix

( 0     J
  J^T   0 ).   (12)

We employ this approach since it gives rise to more accurate estimates.

Recall that, due to measurement errors, it is not possible to obtain the actual values of the kinematic parameters. However, if the set of measurements is chosen so as to maximize this index, the error of the end-effector positions after calibration is minimized.

We demonstrate our approach by setting n = 2. Let r1 = r2 = 15, r3 = 5, and L = 10. If l1,1 ∈ [0, 5], l1,2 ∈ [5, 10], l2,1 ∈ [5, 10] and l2,2 ∈ [0, 5], then

J = ( [−30, −30]   0            0
      0            [−30, −30]   [−30, −30]
      0            [15.2, 25]   [5, 14.8] ),

and the third-largest eigenvalue λ3 of (12) lies in the interval [0.25, 12.53].

Similarly, if l1,1 ∈ [0, 2], l1,2 ∈ [8, 10], l2,1 ∈ [8, 10], and l2,2 ∈ [0, 2] (this is workspace 1, ws1, in Fig. 6), then λ3 = [7.56, 12.53], where

J = ( [−30, −30]   0            0
      0            [−30, −30]   [−30, −30]
      0            [21, 25]     [5, 9] ).

If l1,1 ∈ [4, 7], l1,2 ∈ [9, 10], l2,1 ∈ [9, 10], and l2,2 ∈ [4, 7] (this is workspace 2, ws2, in Fig. 6), then λ3 = [2.52, 7.56], where

J = ( [−30, −30]   0            0
      0            [−30, −30]   [−30, −30]
      0            [17, 21]     [9, 13] ).

As regards the last two examples, it is always better to choose the measurement poses in ws1 (in red in Fig. 6) than the ones in ws2 (in blue in Fig. 6).


Fig. 6. Workspaces ws1 in red and ws2 in blue.

5. Conclusion

In this paper we considered the problem of computing the real eigenvalues of matrices with interval entries. Sharp approximation of the set of the (real) eigenvalues is an important subroutine in various engineering applications. We proposed an algorithm based on a branch and prune scheme, splitting only along one dimension (the real axis), to compute the intervals of the real eigenvalues. The algorithm approximates the real eigenvalues with an accuracy depending on a given positive parameter ε.

Numerical experiments demonstrate that the algorithm is applicable in high dimensions. Exact bounds can be achieved in reasonable time up to dimension about 30, but more or less sharp approximations can be produced in any dimension. To the best of our knowledge there is no comparable method for dimensions greater than 5.

Our algorithm could also be seen as a first step towards an algorithm that produces intervals (in the complex plane) containing all the eigenvalues of a given interval matrix. This is work in progress.

References

[1] W. Karl, J. Greschak, G. Verghese, Comments on 'A necessary and sufficient condition for the stability of interval matrices', Internat. J. Control 39 (4) (1984) 849–851.
[2] Z. Qiu, P.C. Müller, A. Frommer, An approximate method for the standard interval eigenvalue problem of real non-symmetric interval matrices, Comm. Numer. Methods Engrg. 17 (4) (2001) 239–251.
[3] A.D. Dimarogonas, Interval analysis of vibrating systems, J. Sound Vibration 183 (4) (1995) 739–749.
[4] F. Gioia, C.N. Lauro, Principal component analysis on interval data, Comput. Statist. 21 (2) (2006) 343–363.
[5] D. Chablat, P. Wenger, F. Majou, J. Merlet, An interval analysis based study for the design and the comparison of 3-DOF parallel kinematic machines, Int. J. Robot. Res. 23 (6) (2004) 615–624.
[6] E. Walter, L. Pronzato, Identification of Parametric Models, Springer, Heidelberg, 1997.
[7] D. Daney, Y. Papegay, B. Madeline, Choosing measurement poses for robot calibration with the local convergence method and Tabu search, Int. J. Robot. Res. 24 (6) (2005) 501.
[8] A. Nahvi, J. Hollerbach, The noise amplification index for optimal pose selection in robot calibration, in: IEEE International Conference on Robotics and Automation, Citeseer, 1996, pp. 647–654.
[9] D. Cox, J. Little, D. O'Shea, Ideals, Varieties, and Algorithms, 2nd ed., in: Undergraduate Texts in Mathematics, Springer-Verlag, New York, 1997.
[10] L. Busé, H. Khalil, B. Mourrain, Resultant-based methods for plane curves intersection problems, in: V. Ganzha, E. Mayr, E. Vorozhtsov (Eds.), Proc. 8th Int. Workshop Computer Algebra in Scientific Computing, in: LNCS, vol. 3718, Springer, 2005, pp. 75–92.
[11] H. Stetter, Numerical Polynomial Algebra, Society for Industrial Mathematics, 2004.
[12] G. Alefeld, J. Herzberger, Introduction to Interval Computations, Academic Press, London, 1983.
[13] E. Hansen, G.W. Walster, Global Optimization Using Interval Analysis, 2nd ed., Marcel Dekker, New York, 2004, revised and expanded.
[14] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, 1990.
[15] S. Poljak, J. Rohn, Checking robust nonsingularity is NP-hard, Math. Control Signals Systems 6 (1) (1993) 1–9.
[16] A.S. Deif, The interval eigenvalue problem, Z. Angew. Math. Mech. 71 (1) (1991) 61–64.
[17] J. Rohn, A. Deif, On the range of eigenvalues of an interval matrix, Computing 47 (3–4) (1992) 373–377.
[18] A. Deif, J. Rohn, On the invariance of the sign pattern of matrix eigenvectors under perturbation, Linear Algebra Appl. 196 (1994) 63–70.
[19] J. Rohn, Interval matrices: singularity and real eigenvalues, SIAM J. Matrix Anal. Appl. 14 (1) (1993) 82–91.
[20] J. Rohn, Bounds on eigenvalues of interval matrices, ZAMM Z. Angew. Math. Mech. 78 (Suppl. 3) (1998) S1049–S1050.
[21] G. Alefeld, G. Mayer, Interval analysis: theory and applications, J. Comput. Appl. Math. 121 (1–2) (2000) 421–464.
[22] E.R. Hansen, G.W. Walster, Sharp bounds on interval polynomial roots, Reliab. Comput. 8 (2) (2002) 115–122.
[23] L. Jaulin, M. Kieffer, O. Didrit, É. Walter, Applied Interval Analysis. With Examples in Parameter and State Estimation, Robust Control and Robotics, Springer, London, 2001.
[24] F. Rouillier, P. Zimmermann, Efficient isolation of polynomial's real roots, J. Comput. Appl. Math. 162 (1) (2004) 33–50.
[25] I.Z. Emiris, B. Mourrain, E.P. Tsigaridas, Real algebraic numbers: complexity analysis and experimentation, in: P. Hertling, C. Hoffmann, W. Luther, N. Revol (Eds.), Reliable Implementations of Real Number Algorithms: Theory and Practice, in: LNCS, vol. 5045, Springer-Verlag, 2008, pp. 57–82. Also available in: www.inria.fr/rrrt/rr-5897.html.
[26] W. Krandick, Isolierung reeller Nullstellen von Polynomen, in: J. Herzberger (Ed.), Wissenschaftliches Rechnen, Akademie-Verlag, Berlin, 1995, pp. 105–154.
[27] G. Rex, J. Rohn, Sufficient conditions for regularity and singularity of interval matrices, SIAM J. Matrix Anal. Appl. 20 (2) (1998) 437–445.
[28] C. Jansson, J. Rohn, An algorithm for checking regularity of interval matrices, SIAM J. Matrix Anal. Appl. 20 (3) (1999) 756–776.
[29] J. Rohn, Solvability of systems of interval linear equations and inequalities, in: M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, K. Zimmermann (Eds.), Linear Optimization Problems with Inexact Data, Springer, New York, 2006, pp. 35–77 (Chapter 2).
[30] O. Beaumont, Algorithmique pour les intervalles: comment obtenir un résultat sûr quand les données sont incertaines, Ph.D. Thesis, Université de Rennes 1, Rennes, 1999.
[31] A. Makhorin, GLPK—GNU linear programming kit. http://www.gnu.org/software/glpk/.
[32] O. Knüppel, D. Husung, C. Keil, PROFIL/BIAS—a C++ class library. http://www.ti3.tu-harburg.de/Software/PROFILEnglisch.html.
[33] CLAPACK—linear algebra package written for C. http://www.netlib.org/clapack/.
[34] X.-J. Liu, J. Wang, G. Pritschow, On the optimal kinematic design of the PRRRP 2-DOF parallel mechanism, Mech. Mach. Theory 41 (9) (2006) 1111–1130.


Computers and Mathematics with Applications 62 (2011) 3152–3163


Characterizing and approximating eigenvalue sets of symmetric interval matrices

Milan Hladík a,b, David Daney b, Elias Tsigaridas c,∗

a Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics, Malostranské nám. 25, 11800, Prague, Czech Republic
b INRIA Sophia-Antipolis Méditerranée, 2004 route des Lucioles, BP 93, 06902 Sophia-Antipolis Cedex, France
c Computer Science Department, Aarhus University, Denmark

Article history: received 23 March 2011; received in revised form 26 June 2011; accepted 8 August 2011.

Keywords: interval matrix; symmetric matrix; interval analysis; eigenvalue; eigenvalue bounds

Abstract. We consider the eigenvalue problem for the case where the input matrix is symmetric and its entries are perturbed, with perturbations belonging to some given intervals. We present a characterization of some of the exact boundary points, which allows us to introduce an inner approximation algorithm that in many cases estimates exact bounds. To our knowledge, this is the first algorithm that is able to guarantee exactness. We illustrate our approach by several examples and numerical experiments.

1. Introduction

Computing eigenvalues of a matrix is a basic linear algebraic task used throughout mathematics, physics and computer science. Real life makes this problem more complicated by imposing uncertainties and measurement errors on the matrix entries. We suppose we are given some compact intervals in which the matrix entries can vary. The set of all possible real eigenvalues forms a compact set, and the question that we deal with in this paper is how to characterize and compute it.

The interval eigenvalue problem has its own history. The first results are probably due to Deif [1] and Rohn & Deif [2]: bounds for the real and imaginary parts of complex eigenvalues were studied by Deif [1], while Rohn & Deif [2] considered real eigenvalues. Their theorems are applicable only under an assumption on sign pattern invariancy of eigenvectors, which is not easy to verify (cf. [3]). A boundary point characterization of the eigenvalue set was given by Rohn [4], and it was used by Hladík et al. [5] to develop a branch & prune algorithm producing an arbitrarily tight approximation of the eigenvalue set. Another approximate method was given by Qiu et al. [6]. The related topic of finding verified intervals of eigenvalues for real matrices was studied in, e.g., [7–9].

In this paper we consider the case of the symmetric eigenvalue problem. Symmetric matrices naturally appear in many practical problems, but symmetric interval matrices are hard to deal with. This is so mainly due to the so-called dependencies, that is, correlations between the matrix components. If we ‘‘forget’’ these dependencies and solve the problem by reducing it to the previous case, then the results will in general be greatly overestimated (but not the extremal points; see Theorem 2). From now on we consider only the symmetric case.

Due to the dependencies just mentioned, the theoretical background for the eigenvalue problem of symmetric interval matrices is not well established and there are few practical methods. The known results are by Deif [1] and Hertz [10].



Deif [1] gives an exact description of the eigenvalue set, though under restrictive assumptions. Hertz [10] (cf. [11]) proposed a formula for computing two extremal points of the eigenvalue set: the largest and the smallest ones. As the problem itself is very hard, it is not surprising that conjectures on the problem [12] turned out to be wrong [13].

In recent years, several approximation algorithms have been developed. By means of matrix perturbation theory, Qiu et al. [14] proposed an algorithm for approximate bounds, and Leng & He [15] one for an outer estimation. An outer estimation was also considered by Kolev [16], but for the general case with nonlinear dependencies. Some initial bounds that are easy and quick to compute were discussed by Hladík et al. [17], and an iterative refinement in [18]. An iterative algorithm for outer estimation was given by Beaumont [19].

In this paper we focus more on inner approximations (subsets) of the eigenvalue sets. There are far fewer papers devoted to inner approximation. Let us mention an evolution strategy method by Yuan et al. [13] or a general method for nonlinear systems [9].

The interval eigenvalue problem has a lot of applications in the fields of mechanics and engineering. Let us mention for instance automobile suspension systems [6], mass structures [14], vibrating systems [20], principal component analysis [21], and robotics [22]. Other applications arise from the engineering areas concerning singular values and condition numbers. Using the well-known Jordan–Wielandt transformation [23, Section 8.6], [24, Section 7.5] we can simply reduce a singular value calculation to a symmetric eigenvalue one.

This paper is organized as follows. In Section 2 we introduce the notation that we use throughout the paper. In Section 3 we present our main theoretical result, which enables us to determine some of the eigenvalue set exactly. It is the basis for the algorithms that we present in Section 4. The algorithms calculate inner approximations of the eigenvalue sets. Even though outer approximation is usually considered in the literature, inner approximation is of interest too. Moreover, due to the main theorem, we can obtain exact eigenvalue bounds in some cases. Finally, in Section 5 we demonstrate our approach by a number of examples and numerical experiments.

2. Basic definitions and theoretical background

Let us introduce some notions first. An interval matrix is denoted by boldface and defined as

A := [A̲, Ā] = {A ∈ R^{m×n}; A̲ ≤ A ≤ Ā},

where A̲, Ā ∈ R^{m×n}, A̲ ≤ Ā, are given matrices. By

Ac := 1/2 (A̲ + Ā), A∆ := 1/2 (Ā − A̲)

we denote the midpoint and the radius of A, respectively.

By an interval linear system of equations Ax = b we mean a family of systems Ax = b such that A ∈ A, b ∈ b. In a similar way we introduce interval linear systems of inequalities and mixed systems of equations and inequalities. A vector x is a solution of Ax = b if it is a solution of Ax = b for some A ∈ A and b ∈ b. We assume that the reader is familiar with the basics of interval arithmetic; for further details we refer to e.g. [25–27].

Let F be a family of n × n matrices. We denote the eigenvalue set of the family F by

Λ(F) := {λ ∈ R; ∃A ∈ F ∃x ≠ 0 : Ax = λx}.

A symmetric interval matrix is defined as

A^S := {A ∈ A | A = A^T}.

It is usually a proper subset of A. The eigenvalue set Λ(A) generally represents an overestimation of Λ(A^S). That is why we focus directly on the eigenvalue set of the symmetric portion, even though we must take into account the dependencies between the elements in the definition of A^S.

A real symmetric matrix A ∈ R^{n×n} always has n real eigenvalues; let us sort them in non-increasing order:

λ1(A) ≥ λ2(A) ≥ · · · ≥ λn(A).

We extend this notation to symmetric interval matrices:

λi(A^S) := {λi(A) | A ∈ A^S}.

These sets represent n compact intervals λi(A^S) = [λ̲i(A^S), λ̄i(A^S)], i = 1, . . . , n; cf. [17]. The intervals can be disjoint, can overlap, or some of them can be identical. However, what cannot happen is that one interval is a proper subset of another interval. The union of these intervals produces Λ(A^S). For instance, consider the interval matrix

A^S = ( [2, 3]   0
        0        [1, 4] ).   (1)

Then λ1(A^S) = [2, 4], λ2(A^S) = [1, 3] and Λ(A^S) = [1, 4].


Throughout the paper we use the following notation:

λi(A)   the ith eigenvalue of a symmetric matrix A (in non-increasing order)
σi(A)   the ith singular value of a matrix A (in non-increasing order)
vi(A)   the eigenvector associated to the ith eigenvalue of a symmetric matrix A
ρ(A)   the spectral radius of a matrix A
∂S   the boundary of a set S
conv S   the convex hull of a set S
diag(y)   the diagonal matrix with entries y1, . . . , yn
sgn(x)   the sign vector of a vector x, i.e., sgn(x) = (sgn(x1), . . . , sgn(xn))^T
‖x‖2   the Euclidean vector norm, i.e., ‖x‖2 = √(x^T x)
‖x‖∞   the Chebyshev (maximum) vector norm, i.e., ‖x‖∞ = max{|xi|; i = 1, . . . , n}
x ≤ y, A ≤ B   vector and matrix relations are understood component-wise

3. Main theorem

The following theorem is the main theoretical result of the present paper. We remind the reader that a principal m × m submatrix of a given n × n matrix is any submatrix obtained by eliminating any n − m rows and the corresponding n − m columns.

Theorem 1. Let λ ∈ ∂Λ(A^S). Then there is k ∈ {1, . . . , n} and a principal submatrix Ã^S ⊂ R^{k×k} of A^S such that:

• If λ = λ̄j(A^S) for some j ∈ {1, . . . , n}, then

λ ∈ {λi(Ãc + diag(z)Ã∆diag(z)); z ∈ {±1}^k, i = 1, . . . , k}.   (2)

• If λ = λ̲j(A^S) for some j ∈ {1, . . . , n}, then

λ ∈ {λi(Ãc − diag(z)Ã∆diag(z)); z ∈ {±1}^k, i = 1, . . . , k}.   (3)

Proof. Let λ ∈ ∂Λ(A^S). Then either λ = λ̄j(A^S) or λ = λ̲j(A^S) for some j ∈ {1, . . . , n}. We assume the former case; the latter can be proved similarly.

The eigenvalue λ corresponds to a matrix A ∈ A^S. Without loss of generality we assume that the corresponding eigenvector x, ‖x‖2 = 1, is of the form x = (0^T, y^T)^T, where y ∈ R^k and yi ≠ 0 for all 1 ≤ i ≤ k, for some k ∈ {1, . . . , n}. The symmetric interval matrix A^S can be written as

A^S = ( B^S   C
        C^T   D^S ),

where B^S ⊂ R^{(n−k)×(n−k)}, C ⊂ R^{(n−k)×k}, D^S ⊂ R^{k×k}. This can be achieved by a suitable permutation P^T A^S P, where P is a permutation matrix. Notice that P^T A^S P remains symmetric with the same eigenvalues and eigenvectors, and no overestimation occurs since P^T A^S P has the same entries as A^S, only at different positions.

From the basic equality Ax = λx it follows that

Cy = 0 for some C ∈ C,   (4)

and

Dy = λy for some D ∈ D^S.   (5)

We focus on the latter relation; it says that λ is an eigenvalue of D. We will show that D^S is the required principal submatrix Ã^S, thanks to the proposed permutation, and that D can be written as in (2).

From (5) we have that λ = y^T Dy, and hence the partial derivatives are

∂λ/∂dij = yiyj ≠ 0, i, j = 1, . . . , k.

This relation strongly influences the structure of D. If yiyj > 0, then dij = d̄ij. This is so because otherwise, by increasing dij, we would also increase the value of λ, which contradicts our assumption that λ lies on the upper boundary of Λ(A^S). Likewise, yiyj < 0 implies dij = d̲ij. This allows us to write D in the following more compact form:

D = Dc + diag(z)D∆diag(z),   (6)

where z = sgn(y) ∈ {±1}^k. Therefore, λ belongs to a set of the form presented on the right-hand side of (2), which completes the proof. �


Note that not every λ̄j(A^S) or λ̲j(A^S) is a boundary point of Λ(A^S); see (1). Theorem 1 is also true for such λ̄j(A^S) or λ̲j(A^S) that are non-boundary but represent no multiple eigenvalue (since the corresponding eigenvector is uniquely determined). However, the correctness of Theorem 1 for all λ̄j(A^S) and λ̲j(A^S), j = 1, . . . , n, is still an open question. Moreover, a full characterization of all λ̄j(A^S) and λ̲j(A^S), j = 1, . . . , n, is lacking too.

As we have already mentioned, in general, the eigenvalue set of an interval matrix is larger than the eigenvalue set of its symmetric portion. This is true even if both the midpoint and radius matrices are symmetric (see Example 1). The following theorem says that the overestimation caused by the additional matrices is somehow limited by the convex hull area. An illustration will be given in Example 1, where

Λ(A^S) = [3.7321, 6.7843] ∪ [0.0000, 0.3230] ∪ [−4.1072, −1.0000],
Λ(A) = [3.7321, 6.7843] ∪ [−0.6458, 0.3230] ∪ [−4.1072, −1.0000].

The lower bounds and the upper bounds of Λ(A^S) and Λ(A) are always the same, but the other boundary points may differ.

Theorem 2. Let Ac, A∆ ∈ R^{n×n} be symmetric matrices. Then

conv Λ(A^S) = conv Λ(A).

Proof. The inclusion conv Λ(A^S) ⊆ conv Λ(A) follows from the definition of the convex hull.

Let A ∈ A be arbitrary, λ one of its real eigenvalues, and x the corresponding eigenvector with ‖x‖2 = 1. Let B := 1/2 (A + A^T) ∈ A^S; then the following holds:

λ = x^T Ax ≤ max_{‖y‖2=1} y^T Ay = max_{‖y‖2=1} y^T By = λ1(B) ∈ conv Λ(A^S).

Similarly,

λ = x^T Ax ≥ min_{‖y‖2=1} y^T Ay = min_{‖y‖2=1} y^T By = λn(B) ∈ conv Λ(A^S).

Therefore λ ∈ conv Λ(A^S), and so conv Λ(A) ⊆ conv Λ(A^S), which completes the proof. �

4. Inner approximation algorithms

Theorem 1 naturally yields an algorithm to compute a very sharp inner approximation of Λ(A^S), which could even be exact in some cases. We will present this algorithm in Section 4.3. First, we define some notions and propose two simple but useful methods for inner approximations.

Any subset of a set S is called an inner approximation of S. Similarly, any set that contains S is called an outer approximation. In our case, an inner approximation of the eigenvalue set λi(A^S) is denoted by µi(A^S) = [µ̲i(A^S), µ̄i(A^S)] ⊆ λi(A^S), and an outer approximation is denoted by ωi(A^S) = [ω̲i(A^S), ω̄i(A^S)] ⊇ λi(A^S), where 1 ≤ i ≤ n.

From a practical point of view, an outer approximation is usually more useful. However, an inner approximation is also important in some applications. For example, it can be used to measure the quality (sharpness) of an outer approximation, or to prove the (Hurwitz or Schur) instability of certain interval matrices, cf. [28].

We introduce three inner approximation algorithms. The first one, local improvement, is an efficient algorithm, but it needn't be very accurate. On the contrary, vertex enumeration gives more accurate results (two of the bounds are exact), but it is more costly. Eventually, submatrix vertex enumeration yields the tightest inner approximation, but at the expense of higher time complexity.

4.1. Local improvement

The first algorithm that we present is based on a local improvement search technique. A similar method, but for interval matrices A with Ac and A∆ symmetric, was proposed by Rohn [28]. The basic idea of the algorithm is to start with an eigenvalue λi(Ac) and the corresponding eigenvector vi(Ac) of the midpoint matrix Ac, and then to move to an extremal matrix in A^S according to the sign pattern of the eigenvector. The procedure is repeated until no improvement is possible.

Algorithm 1 outputs the upper boundaries µ̄i(A^S) of the inner approximations [µ̲i(A^S), µ̄i(A^S)], where 1 ≤ i ≤ n. The lower boundaries µ̲i(A^S) can be obtained similarly. The validity of the procedure follows from the fact that every considered matrix A belongs to A^S.

Algorithm 1 (Local improvement for µ̄i(A^S), i = 1, . . . , n)
1: for i = 1, . . . , n do
2:   µ̄i(A^S) := −∞;
3:   A := Ac;
4:   while λi(A) > µ̄i(A^S) do
5:     µ̄i(A^S) := λi(A);
6:     D := diag(sgn(vi(A)));
7:     A := Ac + D A∆ D;
8:   end while
9: end for
10: return µ̄i(A^S), i = 1, . . . , n.

The algorithm terminates after at most 2^{n−1} + 1 iterations, since we can normalize vi(A) so that its first entry is non-negative. In practice, however, the number of iterations is usually much smaller, which makes the algorithm attractive for applications. Our numerical experiments (Section 5) indicate that the number of iterations is rarely greater than two, even for matrices of dimension 20. Moreover, the resulting inner approximation is quite sharp, depending on the width of the intervals in A^S. This is not surprising, since whenever the input intervals are narrow enough the algorithm produces exact bounds, sometimes even after the first iteration; see [1]. This is due to the sign invariance of the eigenvectors, which makes it possible to set up an optimal scenario in steps 6 and 7. Even when the entries of the eigenvectors do not have invariant signs, the local improvement can still attain the optimal bound.

We refer the reader to Section 5 for a more detailed presentation of the experiments.
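As an illustration, here is a minimal non-verified NumPy sketch of Algorithm 1 (plain floating point rather than the verified interval arithmetic discussed in Section 5; the function name and the midpoint/radius calling convention are ours):

```python
import numpy as np

def local_improvement_upper(Ac, Adelta, max_iter=100):
    """Inner estimates of the upper bounds of the eigenvalue sets lambda_i(A^S)."""
    n = Ac.shape[0]
    mu_upper = np.full(n, -np.inf)
    for i in range(n):
        A = Ac.copy()
        for _ in range(max_iter):
            w, V = np.linalg.eigh(A)                 # ascending eigenvalues
            lam, v = w[n - 1 - i], V[:, n - 1 - i]   # i-th largest eigenpair
            if lam <= mu_upper[i]:
                break                                # no further improvement
            mu_upper[i] = lam
            s = np.where(v >= 0, 1.0, -1.0)          # sign pattern of v_i(A); zeros mapped to +1
            A = Ac + np.outer(s, s) * Adelta         # A := Ac + D*Adelta*D elementwise
    return mu_upper

# Midpoint and radius of the matrix A^S from Example 1 below.
Ac = np.array([[1., 2., 3.], [2., 1., 1.], [3., 1., 1.]])
Ad = np.array([[0., 0., 2.], [0., 0., 0.], [2., 0., 0.]])
print(local_improvement_upper(Ac, Ad))  # expected approx. [6.7843, 0.3230, -1.0000]
```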

4.2. Vertex enumeration

The second method that we present is based on the enumeration of some special boundary matrices of A. It consists of inspecting all matrices

$$A_z := A_c + \mathrm{diag}(z)\, A_\Delta\, \mathrm{diag}(z), \quad z \in \{\pm 1\}^n,\ z_1 = 1, \qquad (7)$$

and continuously improving the inner approximation µ̄i(A^S) whenever λi(Az) > µ̄i(A^S), where 1 ≤ i ≤ n. The lower bounds µ̲i(A^S) can be obtained in a similar way using the matrices Ac − diag(z) A∆ diag(z), where z ∈ {±1}^n and z1 = 1.

The condition z1 = 1 follows from the fact that diag(z) A∆ diag(z) = diag(−z) A∆ diag(−z), which gives us the freedom to fix one component of z. The number of steps that the algorithm performs is 2^{n−1}. Therefore, this method is suitable only for matrices of moderate dimensions.

The main advantages of the vertex enumeration approach are the following. First, it provides a sharper inner approximation of the eigenvalue sets than the local improvement, since the local improvement inspects only some of the matrices in (7). Second, two of the computed bounds are exact; by Hertz [10] (cf. [11]) and Hertz [29] we have that µ̄1(A^S) = λ̄1(A^S) and µ̲n(A^S) = λ̲n(A^S). Concerning the other bounds calculated by vertex enumeration, even though they were conjectured to be exact [12], it turned out that they are not exact in general [13]. The assertion by Hertz [29, Theorem 1] that µ̲1(A^S) = λ̲1(A^S) and µ̄n(A^S) = λ̄n(A^S) is wrong, too; see Example 3. Nevertheless, Theorem 1 and its proof indicate a sufficient condition: if no eigenvector corresponding to an eigenvalue of A^S has a zero component, then the vertex enumeration yields exactly the eigenvalue sets λi(A^S), i = 1, . . . , n. This is easy to see from the proof of Theorem 1; the submatrix in question is only the matrix A itself, and the values (2)–(3) correspond to matrices that are processed by vertex enumeration.

The efficient implementation of this approach is quite challenging. In order to overcome the exponential complexity of the algorithm in practice, we implemented a branch & bound algorithm, in accordance with the suggestions of Rohn [28]. However, the adopted bounds are not that tight, and the actual running times are usually worse than for the direct vertex enumeration, probably because the pruning part of the exhaustive search is weak, so one has to traverse almost the whole search tree. That is why we do not consider this variant further. The direct vertex enumeration scheme for computing the upper bounds µ̄i(A^S) is presented in Algorithm 2.

Algorithm 2 (Vertex enumeration for µ̄i(A^S), i = 1, . . . , n)
1: for i = 1, . . . , n do
2:   µ̄i(A^S) := λi(Ac);
3: end for
4: for all z ∈ {±1}^n, z1 = 1, do
5:   A := Ac + diag(z) A∆ diag(z);
6:   for i = 1, . . . , n do
7:     if λi(A) > µ̄i(A^S) then
8:       µ̄i(A^S) := λi(A);
9:     end if
10:  end for
11: end for
12: return µ̄i(A^S), i = 1, . . . , n.
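A compact NumPy sketch of Algorithm 2 in the same style (our naming; non-verified floating point) follows:

```python
import itertools
import numpy as np

def vertex_enumeration_upper(Ac, Adelta):
    """Upper inner bounds from all vertex matrices Az = Ac + D*Adelta*D."""
    n = Ac.shape[0]
    mu_upper = np.sort(np.linalg.eigvalsh(Ac))[::-1]  # start from lambda_i(Ac)
    for tail in itertools.product((1.0, -1.0), repeat=n - 1):
        z = np.array((1.0,) + tail)                   # fix z_1 = 1 (symmetry)
        Az = Ac + np.outer(z, z) * Adelta             # vertex matrix (7)
        w = np.sort(np.linalg.eigvalsh(Az))[::-1]     # descending eigenvalues
        mu_upper = np.maximum(mu_upper, w)            # improve each bound
    return mu_upper
```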


4.3. Submatrix vertex enumeration

In this section we present an algorithm that is based on Theorem 1; it usually produces very tight inner approximations, and even exact ones in some cases. The basic idea underlying the algorithm is to enumerate all the vertices of all the principal submatrices of A^S, including A^S itself. Thus we go through more matrices than vertex enumeration, and the method yields a more accurate approximation, but with higher time complexity. The number of steps performed with this approach is

$$2^{n-1} + n\,2^{n-2} + \binom{n}{2}\,2^{n-3} + \cdots + n\,2^0 = \frac{1}{2}(3^n - 1).$$
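This count is easy to check by brute force: a principal submatrix of size k can be chosen in C(n, k) ways, and each contributes 2^{k−1} vertex matrices. A one-line Python sanity check (for a sample n):

```python
from math import comb

n = 6  # any sample dimension
assert sum(comb(n, k) * 2**(k - 1) for k in range(1, n + 1)) == (3**n - 1) // 2
```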

To overcome the obstacle of the exponential number of iterations, at least in practice, we notice that not all eigenvalues of the principal submatrices of the matrices in A^S belong to some of the eigenvalue sets λi(A^S), where 1 ≤ i ≤ n. For this we will introduce a condition for checking such an inclusion.

Assume that we are given an inner approximation µi(A^S) and an outer approximation ωi(A^S) of the eigenvalue sets λi(A^S); that is, µi(A^S) ⊆ λi(A^S) ⊆ ωi(A^S), where 1 ≤ i ≤ n. As we will see in the sequel, the quality of the output of our methods naturally depends on the sharpness of the outer approximation used.

Let D^S ⊆ IR^{k×k} be a principal submatrix of A^S and, without loss of generality, assume that it is situated in the bottom-right corner, i.e.,

$$\mathbf{A}^S = \begin{pmatrix} \mathbf{B}^S & \mathbf{C} \\ \mathbf{C}^T & \mathbf{D}^S \end{pmatrix},$$

where B^S ⊆ IR^{(n−k)×(n−k)} and C ⊆ IR^{(n−k)×k}. This can be achieved by an appropriate permutation P^T A^S P, where P is a permutation matrix as in the proof of Theorem 1.

Let λ be an eigenvalue of some vertex matrix D ∈ D^S, which is of the form (6), and let y be the corresponding eigenvector. If the eigenvector is not unique, then λ is a multiple eigenvalue, and therefore it is a simple eigenvalue of some principal submatrix of D^S due to Cauchy's interlacing property for eigenvalues [23, Theorem 8.1.7], [24, Example 7.5.3]; in this case we restrict our consideration to this submatrix.

Let p ∈ {1, . . . , n} be fixed. We want to determine whether λ is equal to λ̄p(A^S) ∈ Λ(A^S), or, if this is not possible, to improve the upper bound µ̄p(A^S); the lower bound can be handled accordingly. In view of (4), Cy = 0 must hold for some C ∈ C, whence

0 ∈ Cy.

If so, λ is an eigenvalue of some matrix in A^S. Now we are sure that λ ∈ Λ(A^S), and it remains to determine whether λ also belongs to λp(A^S).

If λ ≤ µ̄p(A^S), then it is useless to consider λ further, since it would not improve the inner approximation of the pth eigenvalue set. Suppose λ > µ̄p(A^S). If p = 1 or λ < ω̲p−1(A^S), then λ must belong to λp(A^S), and we can improve the inner bound µ̄p(A^S) := λ. In this case the algorithm terminates early, and that is the reason we need ωi(A^S), 1 ≤ i ≤ n, to be as tight as possible.

If p > 1 and λ ≥ ω̲p−1(A^S), we proceed as follows. We pick an arbitrary C ∈ C such that Cy = 0; we refer to, e.g., [30] for details on the selection process. Next, we select an arbitrary B ∈ B^S and let

$$A := \begin{pmatrix} B & C \\ C^T & D \end{pmatrix}. \qquad (8)$$

We compute the eigenvalues of A, and if µ̄p(A^S) < λp(A), then we set µ̄p(A^S) := λp(A); otherwise we do nothing.

However, it can happen that λ = λ̄i(A^S) and we do not identify it, and hence we do not enlarge the inner estimation µp(A^S). Nevertheless, if we apply the method for all p = 1, . . . , n and all principal submatrices of A^S, then we touch all the boundary points of Λ(A^S). If λ ∈ ∂Λ(A^S), then λ is covered by the resulting inner approximation: in the case when λ is an upper boundary point, we consider the maximal i ∈ {1, . . . , n} such that λ = λ̄i(A^S), and then the ith eigenvalue of the matrix (8) must be equal to λ. Similar tests are valid for a lower boundary point.

Now we have all the ingredients at hand for the direct version of the submatrix vertex enumeration approach, presented in Algorithm 3, which improves the upper bound µ̄p(A^S) of an inner approximation, where the index p is still fixed. Let us also mention that in step 4 of Algorithm 3, the decomposition of A^S according to the index set J means that D^S is the restriction of A^S to the rows and columns indexed by J, B^S is the restriction of A^S to the rows and columns indexed by {1, . . . , n} \ J, and C is the restriction of A^S to the rows indexed by {1, . . . , n} \ J and the columns indexed by J; a small sketch of this decomposition follows.
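As an illustration, a small NumPy helper (hypothetical naming; midpoint/radius representation assumed) that slices out the blocks described above:

```python
import numpy as np

def decompose(Ac, Adelta, J):
    """Blocks B, C, D of the interval matrix (Ac, Adelta) for index set J."""
    n = Ac.shape[0]
    J = np.asarray(sorted(J))
    K = np.setdiff1d(np.arange(n), J)          # complement {1,...,n} \ J
    blocks = {}
    for name, M in (("c", Ac), ("delta", Adelta)):
        blocks["B" + name] = M[np.ix_(K, K)]   # rows/cols outside J
        blocks["C" + name] = M[np.ix_(K, J)]   # rows outside J, cols in J
        blocks["D" + name] = M[np.ix_(J, J)]   # rows/cols in J
    return blocks
```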

Algorithm 3 (Direct submatrix vertex enumeration for µ̄p(A^S))
1: compute an outer approximation ωi(A^S), i = 1, . . . , n;
2: call Algorithm 1 to get an inner approximation µi(A^S), i = 1, . . . , n;
3: for all J ⊆ {1, . . . , n}, J ≠ ∅, do
4:   decompose A^S = (B^S, C; C^T, D^S) according to J;
5:   for all z ∈ {±1}^{|J|}, z1 = 1, do
6:     D := Dc + diag(z) D∆ diag(z);
7:     for i = 1, . . . , |J| do
8:       λ := λi(D);
9:       y := vi(D);
10:      if λ > µ̄p(A^S) and λ ≤ ω̄p(A^S) and 0 ∈ Cy then
11:        if p = 1 or λ < ω̲p−1(A^S) then
12:          µ̄p(A^S) := λ;
13:        else
14:          find C ∈ C such that Cy = 0;
15:          A := (Bc, C; C^T, D);
16:          if λp(A) > µ̄p(A^S) then
17:            µ̄p(A^S) := λp(A);
18:          end if
19:        end if
20:      end if
21:    end for
22:  end for
23: end for
24: return µ̄p(A^S).

4.3.1. Branch & bound improvement

In order to tackle the exponential worst-case complexity of Algorithm 3, we propose the following modification. Instead of inspecting all non-empty subsets of {1, . . . , n} in step 3, we exploit a branch & bound method, which may skip some useless subsets. Let a non-empty J ⊆ {1, . . . , n} be given. The new, possibly improved, eigenvalue λ must lie in the interval λ := [µ̄p(A^S), ω̄p(A^S)]. If this is the case, then the interval matrix A^S − λI must be irregular, i.e., it must contain a singular matrix.

Moreover, the interval system

(A^S − λI)x = 0, ‖x‖∞ = 1,

has a solution x with xi = 0 for all i ∉ J. We decompose A^S − λI according to J and, without loss of generality, we may assume that J = {n − |J| + 1, . . . , n}; then

$$\mathbf{A}^S - \boldsymbol{\lambda} I = \begin{pmatrix} \mathbf{B}^S - \boldsymbol{\lambda} I & \mathbf{C} \\ \mathbf{C}^T & \mathbf{D}^S - \boldsymbol{\lambda} I \end{pmatrix}.$$

The interval system becomes

Cy = 0, (D^S − λI)y = 0, ‖y‖∞ = 1, (9)

where we considered x = (0^T, y^T)^T. This is a very useful necessary condition. If (9) has no solution, then we cannot improve the current inner approximation. We can also prune the whole branch with J as a root; that is, we will inspect no index sets J′ ⊆ J. The strength of this condition follows from the fact that the system (9) is overconstrained: it has more equations than variables. Therefore, with high probability, it has no solution, even for larger J.

Let us make two comments about the interval system (9). First, this system contains a lot of dependencies. They are caused by the multiple occurrences of λ and by the symmetry of D^S. If no solver for interval systems that can handle dependencies is available, then we can solve (9) as an ordinary interval system, ''forgetting'' the dependencies. The necessary condition will be weaker, but still valid. This is what we did in our implementation.

The second comment addresses the expression ‖y‖∞ = 1. We have chosen the maximum norm in order that the interval system be linear. The expression can be rewritten as −1 ≤ y ≤ 1 (for checking the solvability of (9) we can use either the normalization ‖y‖∞ = 1 or ‖y‖∞ ≤ 1). Another possibility is to write

−1 ≤ y ≤ 1, yi = 1 for some i ∈ {1, . . . , |J|}.

This indicates that we can split the problem into solving |J| interval systems

Cy = 0, (D^S − λI)y = 0, −1 ≤ y ≤ 1, yi = 1,

where i runs, sequentially, through all the values {1, . . . , |J|}; cf. the ILS method proposed in [5]. The advantage of this approach is that each of the overconstrained interval systems has one more equation than the original overconstrained system, and hence the resulting necessary condition could be stronger. Our numerical results discussed in Section 5 concern this variant. As a solver for interval systems we utilize the convex approximation approach by Beaumont [31]; it is sufficiently fast and produces narrow enough approximations of the solution set.

4.3.2. How to conclude exact bounds?

Let us summarize the properties of the submatrix vertex enumeration method. On the one hand, the worst-case complexity of the algorithm is rather prohibitive, O(3^n); on the other hand, we obtain better inner approximations, and sometimes we get exact bounds of the eigenvalue sets. Theorem 1 and the discussion in the previous section allow us to recognize exact bounds. Namely, for any i ∈ {2, . . . , n}, we have that if λ̄i(A^S) < λ̲i−1(A^S), then µ̄i(A^S) = λ̄i(A^S); a similar inequality holds for the lower bound. This is a rather theoretical recipe, because we may not know a priori whether the assumption is satisfied. However, we can propose a sufficient condition: if ω̄i(A^S) < ω̲i−1(A^S), then two successive eigenvalue sets do not overlap and the assumption is obviously true. In this case we conclude µ̄i(A^S) = λ̄i(A^S); otherwise we cannot conclude.

This sufficient condition is another reason why we need a sharp outer approximation. The sharper it is, the more often we are able to conclude that the exact bound is achieved.

Exploiting the condition, we can also decrease the running time of submatrix vertex enumeration. We call Algorithm 3 only for p ∈ {1, . . . , n} such that p = 1 or ω̄p(A^S) < ω̲p−1(A^S). The resulting inner approximation may be a bit less tight, but the number of exact boundary points of Λ(A^S) that we can identify remains the same. The snippet below sketches this exactness test in code.
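A toy sketch of the exactness test (our naming), assuming the inner and outer approximations are given as lists of (lower, upper) pairs ordered from the largest eigenvalue set to the smallest:

```python
def exact_upper_bounds(mu, omega):
    """Return flags: True if the upper bound of mu[i] is provably exact."""
    flags = []
    for i in range(len(mu)):
        # For i = 0 the upper bound is exact by Hertz's theorem; otherwise
        # require the outer enclosures of sets i and i-1 to be disjoint.
        flags.append(i == 0 or omega[i][1] < omega[i - 1][0])
    return flags

# Data from Example 1 below: all three outer intervals are disjoint.
mu    = [(3.7321, 6.7843), (0.0000, 0.3230), (-4.1072, -1.0000)]
omega = [(3.5230, 6.7843), (0.0000, 1.0519), (-4.1214, -0.2019)]
print(exact_upper_bounds(mu, omega))   # [True, True, True]
```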

Notice that there is still plenty of room for developing better conditions. For instance, we do not know whether µ̄i(A^S) < µ̲i−1(A^S) (computed by submatrix vertex enumeration) can also serve as a sufficient condition for the purpose of determining exact bounds.

5. Numerical experiments

In this section we present some examples and numerical results illustrating the properties of the proposed algorithms. We performed the experiments on a PC Intel(R) Core 2, CPU 3 GHz, 2 GB RAM, and the source code was written in C++. We use GLPK v.4.23 [32] for solving linear programs, CLAPACK v.3.1.1 for its linear algebraic routines, and PROFIL/BIAS v.2.0.4 [33] for interval arithmetic and basic operations. Notice, however, that the routines of GLPK and CLAPACK [34] do not produce verified solutions; for real-life problems this may not be acceptable.

Example 1. Consider the following symmetric interval matrix:

$$\mathbf{A}^S = \begin{pmatrix} 1 & 2 & [1,5] \\ 2 & 1 & 1 \\ [1,5] & 1 & 1 \end{pmatrix}^S.$$

Local improvement (Algorithm 1) yields the inner approximation

µ1(A^S) = [3.7321, 6.7843],
µ2(A^S) = [0.0888, 0.3230],
µ3(A^S) = [−4.1072, −1.0000].

The same result is obtained by the vertex enumeration (Algorithm 2). Therefore, µ̄1(A^S) = λ̄1(A^S) and µ̲3(A^S) = λ̲3(A^S). An outer approximation, needed by the submatrix vertex enumeration (Algorithm 3), is computed using the methods of Hladík et al. [17,18]. It is

ω1(A^S) = [3.5230, 6.7843],
ω2(A^S) = [0.0000, 1.0519],
ω3(A^S) = [−4.1214, −0.2019].

Now, the submatrix vertex enumeration algorithm yields the inner approximation

µ′1(A^S) = [3.7321, 6.7843],
µ′2(A^S) = [0.0000, 0.3230],
µ′3(A^S) = [−4.1072, −1.0000].

Since the outer approximation intervals do not overlap, we can conclude that this approximation is exact, that is, λi(A^S) = µ′i(A^S), i = 1, 2, 3.

This example shows two important aspects of the interval eigenvalue problem. First, it demonstrates that the vertex enumeration does not produce exact bounds in general. Second, the symmetric eigenvalue set can be a proper subset of the unsymmetric one, i.e., Λ(A^S) ⊊ Λ(A). This can easily be seen from the matrix

$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 1 & 1 \\ 5 & 1 & 1 \end{pmatrix}.$$

It has three real eigenvalues 4.6458, −0.6458 and −1.0000, but the second one does not belong to Λ(A^S). Indeed, using the method by Hladík et al. [5] we obtain

Λ(A) = [3.7321, 6.7843] ∪ [−0.6458, 0.3230] ∪ [−4.1072, −1.0000].

Example 2. Consider the example given by Qiu et al. [14] (see also [17,13]):

$$\mathbf{A}^S = \begin{pmatrix} [2975, 3025] & [-2015, -1985] & 0 & 0 \\ [-2015, -1985] & [4965, 5035] & [-3020, -2980] & 0 \\ 0 & [-3020, -2980] & [6955, 7045] & [-4025, -3975] \\ 0 & 0 & [-4025, -3975] & [8945, 9055] \end{pmatrix}^S.$$

The local improvement (Algorithm 1) yields the inner approximation

µ1(A^S) = [12560.8377, 12720.2273], µ2(A^S) = [7002.2828, 7126.8283],
µ3(A^S) = [3337.0785, 3443.3127], µ4(A^S) = [842.9251, 967.1082].

The vertex enumeration (Algorithm 2) produces the same result. Hence we can state that µ̄1(A^S) and µ̲4(A^S) are optimal.

To call the last method, submatrix vertex enumeration (Algorithm 3), we need an outer approximation. We use the following one from [17]:

ω1(A^S) = [12560.6296, 12720.2273], ω2(A^S) = [6990.7616, 7138.1800],
ω3(A^S) = [3320.2863, 3459.4322], ω4(A^S) = [837.0637, 973.1993].

Now, submatrix vertex enumeration yields the same inner approximation as the previous methods. However, now we have more information: since the outer approximation intervals are mutually disjoint, the obtained results are the best possible. Therefore, µi(A^S) = λi(A^S), where i = 1, . . . , 4.

Example 3. Herein, we present two examples of approximating the singular values of an interval matrix. Let A ∈ R^{m×n} and q := min{m, n}. By the Jordan–Wielandt theorem [23, Section 8.6], [24, Section 7.5], the singular values σ1(A) ≥ · · · ≥ σq(A) of A are identical to the q largest eigenvalues of the symmetric matrix

$$\begin{pmatrix} 0 & A^T \\ A & 0 \end{pmatrix}.$$

Thus, if we consider the singular value sets σ1(A), . . . , σq(A) of some interval matrix A ∈ IR^{m×n}, we can identify them with the q largest eigenvalue sets of the symmetric interval matrix

$$\mathbf{M} := \begin{pmatrix} 0 & \mathbf{A}^T \\ \mathbf{A} & 0 \end{pmatrix}^S.$$
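A small sketch of this embedding (our naming; midpoint/radius representation, non-verified arithmetic):

```python
import numpy as np

def jordan_wielandt(Ac, Ad):
    """Midpoint and radius of M = [[0, A^T], [A, 0]]^S for an m x n interval matrix."""
    m, n = Ac.shape
    Mc = np.zeros((m + n, m + n))
    Md = np.zeros((m + n, m + n))
    Mc[:n, n:], Mc[n:, :n] = Ac.T, Ac
    Md[:n, n:], Md[n:, :n] = Ad.T, Ad
    return Mc, Md

# For a point matrix, the q = min(m, n) largest eigenvalues of M are the
# singular values of A (the remaining ones are their negatives and zeros).
A = np.array([[2.5, 1.0], [1.0, 0.5], [0.5, 2.5]])   # midpoint of Example 3(1)
Mc, _ = jordan_wielandt(A, np.zeros_like(A))
print(np.sort(np.linalg.eigvalsh(Mc))[::-1][:2])      # matches np.linalg.svd(A)
```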

(1) Consider the following interval matrix from [35]:

$$\mathbf{A} = \begin{pmatrix} [2,3] & [1,1] \\ [0,2] & [0,1] \\ [0,1] & [2,3] \end{pmatrix}.$$

Both the local improvement and the vertex enumeration result in the same inner approximation, i.e.,

µ1(M) = [2.5616, 4.5431], µ2(M) = [1.2120, 2.8541].

Thus, σ̄1(A) = 4.5431. Additionally, consider the following outer approximation from [17]:

ω1(M) = [2.0489, 4.5431], ω2(M) = [0.4239, 3.1817].

Using Algorithm 3, we obtain

µ′1(M) = [2.5616, 4.5431], µ′2(M) = [1.0000, 2.8541].

Now we can claim that σ̲2(A) = 1, since ω̲2(M) > 0. Unfortunately, we cannot conclude about the exact values of the remaining quantities, since the two outer approximation intervals overlap. We only know that σ̲1(A) ∈ [2.0489, 2.5616] and σ̄2(A) ∈ [2.8541, 3.1817].


(2) The second example comes from Ahn & Chen [36]. Let A be the following interval matrix:

$$\mathbf{A} = \begin{pmatrix} [0.75, 2.25] & [-0.015, -0.005] & [1.7, 5.1] \\ [3.55, 10.65] & [-5.1, -1.7] & [-1.95, -0.65] \\ [1.05, 3.15] & [0.005, 0.015] & [-10.5, -3.5] \end{pmatrix}.$$

Both local improvement and vertex enumeration yield the same result, i.e.,

µ1(M) = [4.6611, 13.9371], µ2(M) = [2.2140, 11.5077], µ3(M) = [0.1296, 2.9117].

Hence, σ̄1(A) = 13.9371. As an outer approximation we use the following intervals, calculated by a method from [17]:

ω1(M) = [4.3308, 14.0115], ω2(M) = [1.9305, 11.6111], ω3(M) = [0.0000, 5.1000].

Running the submatrix vertex enumeration, we get the inner approximation

µ′1(M) = [4.5548, 13.9371], µ′2(M) = [2.2140, 11.5077], µ′3(M) = [0.1296, 2.9517].

We cannot conclude that σ̲3(A) = µ̲′3(M) = 0.1296, because ω3(M) has a nonempty intersection with the fourth largest eigenvalue set, which is equal to zero. The other singular value sets also remain uncertain, but they lie within the computed inner and outer approximations.

Notice that µ̲′1(M) < µ̲1(M), whence µ̲1(M) > λ̲1(M) = σ̲1(A), disproving Hertz's Theorem 1 from [29], which states that the lower and upper limits of λ1(M) and λn(M) are computable by the vertex enumeration method. It is true only for λ̄1(M) and λ̲n(M).

Example 4. In this example we present some randomly generated examples of large dimensions. The entries of the midpoint matrix Ac are taken randomly in [−20, 20] using the uniform distribution. The entries of the radius matrix A∆ are taken randomly, using the uniform distribution, in [0, R], where R is a positive real number. We applied our algorithm to the interval matrix M := AᵀA, because it has a convenient distribution of eigenvalue sets: some are overlapping and some are not. The sharpness of the results is measured using the quantity

$$1 - \frac{e^T \mu_\Delta(\mathbf{M}^S)}{e^T \omega_\Delta(\mathbf{M}^S)},$$

where e = (1, . . . , 1)ᵀ. This quantity always lies within the interval [0, 1]; the closer to zero it is, the tighter the approximation. In addition, if it is zero, then we achieved exact bounds for every eigenvalue set λi(M^S), 1 ≤ i ≤ n. The initial outer approximation ωi(M^S), 1 ≤ i ≤ n, was computed using the method of Hladík et al. [17] and filtered by the method proposed by Hladík et al. in [18]. Finally, it was refined according to the comment in Section 4.3.2. For the submatrix vertex enumeration algorithm we implemented the branch & bound improvement described in Sections 4.3.1 and 4.3.2.
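For completeness, a small sketch (our naming) of this sharpness measure, with mu and omega given as (n, 2) arrays of inner/outer interval endpoints:

```python
import numpy as np

def sharpness(mu, omega):
    """1 - (sum of inner radii) / (sum of outer radii); 0 means exact bounds."""
    mu_rad = (mu[:, 1] - mu[:, 0]) / 2.0        # radii of inner intervals
    om_rad = (omega[:, 1] - omega[:, 0]) / 2.0  # radii of outer intervals
    return 1.0 - mu_rad.sum() / om_rad.sum()
```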

The results are displayed in Table 1; the values are appropriately rounded. We see that local improvement yields almost as tight an inner approximation as vertex enumeration, but with much lower effort. Submatrix vertex enumeration is even more costly, but it can sometimes conclude that bounds are exact, so the approximation is more accurate, particularly for narrow input intervals.

Example 5. In this example we present some numerical results on approximating singular value sets, as introduced in Example 3. The input consists of an interval (rectangular) matrix A ∈ IR^{m×n}, selected randomly as in the previous example.

Table 2 presents our experiments. The time in the table corresponds to the computation of the approximation of only the q largest eigenvalue sets of the Jordan–Wielandt matrix. The behavior of the three algorithms is similar to that in Example 4.

Table 1
Eigenvalues of random interval symmetric matrices AᵀA of dimension n × n; sharpness and running time of Algorithm 1 (local improvement), Algorithm 2 (vertex enumeration) and Algorithm 3 (submatrix vertex enumeration).

 n   R      Alg. 1 sharp.  Alg. 1 time  Alg. 2 sharp.  Alg. 2 time     Alg. 3 sharp.  Alg. 3 time
 5   0.001  0.05817        0.00 s       0.05041        0.00 s          0.00000        0.04 s
 5   0.01   0.07020        0.00 s       0.05163        0.00 s          0.00000        0.03 s
 5   0.1    0.26273        0.00 s       0.23389        0.00 s          0.17332        0.04 s
 5   1      0.25112        0.00 s       0.23644        0.00 s          0.20884        0.01 s
 10  0.001  0.08077        0.00 s       0.07412        0.09 s          0.00000        1.15 s
 10  0.01   0.13011        0.01 s       0.11982        0.08 s          0.04269        1.29 s
 10  0.1    0.27378        0.01 s       0.25213        0.09 s          0.12756        3.17 s
 10  1      0.56360        0.01 s       0.52330        0.09 s          0.52256        2.58 s
 15  0.001  0.07991        0.02 s       0.07557        7.3 s           0.00000        16.47 s
 15  0.01   0.21317        0.02 s       0.19625        6.5 s           0.11341        2 min 29 s
 15  0.1    0.36410        0.02 s       0.34898        7.0 s           0.34869        4 min 58 s
 15  1      0.76036        0.02 s       0.73182        7.2 s           0.73182        7.5 s
 20  0.001  0.09399        0.06 s       0.09080        7 min 21 s      0.00000        13 min 46 s
 20  0.01   0.24293        0.06 s       0.22976        7 min 6 s       0.12574        1 h 14 min 55 s
 20  0.1    0.43199        0.06 s       0.40857        7 min 14 s      0.22360        1 h 15 min 41 s
 20  1      0.82044        0.06 s       0.79967        7 min 33 s      0.79967        7 min 39 s
 25  0.001  0.14173        0.13 s       0.13397        6 h 53 min 0 s  0.02871        9 h 32 min 54 s

Table 2
Singular values of random interval matrices of dimension m × n; columns as in Table 1.

 m   n   R     Alg. 1 sharp.  Alg. 1 time  Alg. 2 sharp.  Alg. 2 time      Alg. 3 sharp.  Alg. 3 time
 5   5   0.01  0.08945        0.00 s       0.07716        0.10 s           0.00000        0.53 s
 5   5   0.1   0.09876        0.01 s       0.09270        0.08 s           0.00000        0.73 s
 5   5   1     0.43560        0.01 s       0.31419        0.10 s           0.26795        4.34 s
 5   10  0.01  0.11320        0.02 s       0.10337        5.79 s           0.00000        7.91 s
 5   10  0.1   0.13032        0.02 s       0.12321        5.98 s           0.00000        8.40 s
 5   10  1     0.35359        0.02 s       0.33176        5.52 s           0.22848        21.53 s
 5   15  0.01  0.10603        0.05 s       0.09424        5 min 31 s       0.00000        5 min 36 s
 5   15  0.1   0.17303        0.04 s       0.16758        5 min 33 s       0.00000        7 min 58 s
 5   15  1     0.46064        0.05 s       0.39708        5 min 32 s       0.31847        15 min 47 s
 10  10  0.01  0.10211        0.06 s       0.09652        8 min 3 s        0.00000        8 min 19 s
 10  10  0.1   0.13712        0.07 s       0.13387        8 min 10 s       0.00000        14 min 12 s
 10  10  1     0.39807        0.07 s       0.35580        7 min 52 s       0.30279        26 h 48 min 38 s
 10  15  0.01  0.09561        0.12 s       0.09116        5 h 51 min 53 s  0.00000        5 h 54 min 56 s

6. Conclusion and future directions

We proposed a new solution theorem for the symmetric interval eigenvalue problem, which describes some of the boundary points of the eigenvalue set. Unfortunately, the complete characterization is still a challenging open problem.

We developed an inner approximation algorithm (submatrix vertex enumeration) which, in the case where the eigenvalue sets are disjoint and the intermediate gaps are wide enough, outputs exact results. To our knowledge, even under this assumption, this is the first algorithm that can guarantee exact bounds. Thus, it can be used in combination with outer approximation methods to produce exact eigenvalue sets.

We carried out comparisons with other inner approximation methods: local improvement and vertex enumeration. The local improvement method is very efficient and yields sufficiently tight bounds. The vertex enumeration is more time-consuming, with slightly more accurate bounds, two of which are exact. Our numerical experiments suggest that the local search algorithm is superior to the other methods when the input matrices have higher dimension. However, for small-dimensional problems with possibly narrow input intervals, the submatrix vertex enumeration approach gives very accurate bounds in reasonable time. Thus, local improvement is suitable for high-dimensional problems or for problems where computing time is important. In contrast, submatrix vertex enumeration is a good choice when accuracy is the main objective.

Acknowledgments

The authors thank the reviewers for their detailed and helpful comments.

ET is partially supported by an individual postdoctoral grant from the Danish Agency for Science, Technology and Innovation, and also acknowledges support from the Danish National Research Foundation and the National Science Foundation of China (under grant 61061130540) for the Sino-Danish Center for the Theory of Interactive Computation, within which part of this work was performed.

References

[1] A.S. Deif, The interval eigenvalue problem, Z. Angew. Math. Mech. 71 (1) (1991) 61–64.
[2] J. Rohn, A. Deif, On the range of eigenvalues of an interval matrix, Computing 47 (3–4) (1992) 373–377.
[3] A. Deif, J. Rohn, On the invariance of the sign pattern of matrix eigenvectors under perturbation, Linear Algebr. Appl. 196 (1994) 63–70.
[4] J. Rohn, Interval matrices: singularity and real eigenvalues, SIAM J. Matrix Anal. Appl. 14 (1) (1993) 82–91.
[5] M. Hladík, D. Daney, E.P. Tsigaridas, An algorithm for addressing the real interval eigenvalue problem, J. Comput. Appl. Math. 235 (8) (2011) 2715–2730.
[6] Z. Qiu, P.C. Müller, A. Frommer, An approximate method for the standard interval eigenvalue problem of real non-symmetric interval matrices, Comm. Numer. Methods Engrg. 17 (4) (2001) 239–251.
[7] G. Alefeld, G. Mayer, Interval analysis: theory and applications, J. Comput. Appl. Math. 121 (1–2) (2000) 421–464.
[8] S. Miyajima, T. Ogita, S. Rump, S. Oishi, Fast verification for all eigenpairs in symmetric positive definite generalized eigenvalue problems, Reliab. Comput. 14 (2010) 24–45.
[9] S.M. Rump, Verification methods: rigorous results using floating-point arithmetic, Acta Numer. 19 (2010) 287–449.
[10] D. Hertz, The extreme eigenvalues and stability of real symmetric interval matrices, IEEE Trans. Automat. Control 37 (4) (1992) 532–535.
[11] J. Rohn, A handbook of results on interval linear problems, 2005. http://www.cs.cas.cz/rohn/handbook.
[12] Z. Qiu, X. Wang, Solution theorems for the standard eigenvalue problem of structures with uncertain-but-bounded parameters, J. Sound Vib. 282 (1–2) (2005) 381–399.
[13] Q. Yuan, Z. He, H. Leng, An evolution strategy method for computing eigenvalue bounds of interval matrices, Appl. Math. Comput. 196 (1) (2008) 257–265.
[14] Z. Qiu, S. Chen, I. Elishakoff, Bounds of eigenvalues for structures with an interval description of uncertain-but-non-random parameters, Chaos Solitons Fractals 7 (3) (1996) 425–434.
[15] H. Leng, Z. He, Computing eigenvalue bounds of structures with uncertain-but-non-random parameters by a method based on perturbation theory, Comm. Numer. Methods Engrg. 23 (11) (2007) 973–982.
[16] L.V. Kolev, Outer interval solution of the eigenvalue problem under general form parametric dependencies, Reliab. Comput. 12 (2) (2006) 121–140.
[17] M. Hladík, D. Daney, E.P. Tsigaridas, Bounds on real eigenvalues and singular values of interval matrices, SIAM J. Matrix Anal. Appl. 31 (4) (2010) 2116–2129.
[18] M. Hladík, D. Daney, E.P. Tsigaridas, A filtering method for the interval eigenvalue problem, Appl. Math. Comput. 217 (12) (2011) 5236–5242.
[19] O. Beaumont, An algorithm for symmetric interval eigenvalue problem, Tech. Rep. IRISA-PI-00-1314, Institut de recherche en informatique et systèmes aléatoires, Rennes, France, 2000.
[20] A.D. Dimarogonas, Interval analysis of vibrating systems, J. Sound Vib. 183 (4) (1995) 739–749.
[21] F. Gioia, C.N. Lauro, Principal component analysis on interval data, Comput. Statist. 21 (2) (2006) 343–363.
[22] D. Chablat, P. Wenger, F. Majou, J. Merlet, An interval analysis based study for the design and the comparison of 3-dof parallel kinematic machines, Int. J. Robot. Res. 23 (6) (2004) 615–624.
[23] G.H. Golub, C.F. Van Loan, Matrix Computations, 3rd ed., Johns Hopkins University Press, 1996.
[24] C.D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, 2000.
[25] G. Alefeld, J. Herzberger, Introduction to Interval Computations, Academic Press, London, 1983.
[26] E. Hansen, G.W. Walster, Global Optimization Using Interval Analysis, 2nd ed. (revised and expanded), Marcel Dekker, New York, 2004.
[27] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, 1990.
[28] J. Rohn, An algorithm for checking stability of symmetric interval matrices, IEEE Trans. Automat. Control 41 (1) (1996) 133–136.
[29] D. Hertz, Interval analysis: eigenvalue bounds of interval matrices, in: C.A. Floudas, P.M. Pardalos (Eds.), Encyclopedia of Optimization, Springer, New York, 2009, pp. 1689–1696.
[30] J. Rohn, Solvability of systems of interval linear equations and inequalities, in: M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, K. Zimmermann (Eds.), Linear Optimization Problems with Inexact Data, Springer, New York, 2006, pp. 35–77 (Chapter 2).
[31] O. Beaumont, Solving interval linear systems with linear programming techniques, Linear Algebr. Appl. 281 (1–3) (1998) 293–309.
[32] A. Makhorin, GLPK – GNU Linear Programming Kit. http://www.gnu.org/software/glpk/.
[33] O. Knüppel, D. Husung, C. Keil, PROFIL/BIAS – a C++ class library. http://www.ti3.tu-harburg.de/Software/PROFILEnglisch.html.
[34] CLAPACK – Linear Algebra PACKage written for C. http://www.netlib.org/clapack/.
[35] A.S. Deif, Singular values of an interval matrix, Linear Algebr. Appl. 151 (1991) 125–133.
[36] H.-S. Ahn, Y.Q. Chen, Exact maximum singular value calculation of an interval matrix, IEEE Trans. Automat. Control 52 (3) (2007) 510–514.


A filtering method for the interval eigenvalue problem

Milan Hladík^{a,b,*}, David Daney^{b}, Elias Tsigaridas^{c}

^a Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics, Malostranské nám. 25, 11800 Prague, Czech Republic
^b INRIA Sophia-Antipolis Méditerranée, 2004 route des Lucioles, BP 93, 06902 Sophia-Antipolis Cedex, France
^c Department of Computer Science, Aarhus University, IT-parken, Aabogade 34, DK 8200 Aarhus N, Denmark

Article info

Keywords: Interval matrix; Symmetric matrix; Interval analysis; Eigenvalue; Eigenvalue bounds

Abstract

We consider the general problem of computing intervals that contain the real eigenvalues of interval matrices. Given an outer approximation (superset) of the real eigenvalue set of an interval matrix, we propose a filtering method that iteratively improves the approximation. Even though our method is based on a sufficient regularity condition, it is very efficient in practice, and our experimental results suggest that it improves, in general, significantly on the initial outer approximation. The proposed method works for general as well as for symmetric interval matrices.

© 2010 Elsevier Inc. All rights reserved.

1. Introduction

In order to model real-life problems and perform computations we must deal with inaccuracy and inexactness; these are due to measurement errors, to simplifying assumptions on physical models, to variations of the parameters of the system, and finally to computational errors. Interval analysis is an efficient and reliable tool that allows us to handle the aforementioned problems, even in the worst case where all of them are encountered simultaneously. The input quantities are given with some interval estimation, and the algorithms output verified intervals as results that (even though they usually have the drawback of overestimation) cover all the possibilities for the input quantities.

We are interested in the interval real eigenvalue problem. Given a matrix whose elements are real intervals, also called an interval matrix, the task is to compute real intervals that contain all possible eigenvalues. For formal definitions we refer the reader to the next section.

Moreover, there is a need to distinguish general interval matrices from symmetric ones. Applications arise mostly in the fields of mechanics and engineering. We name, for instance, automobile suspension systems [1], mass structures [2], vibrating systems [3], robotics [4], and even principal component analysis [5] and independent component analysis [6], which can be considered statistics-oriented applications. Using the well-known Jordan–Wielandt transformation [7,8], given a solution of the interval real eigenvalue problem, we can provide an approximation of the singular values and the condition number; both quantities have numerous applications.

The first general results for the interval real eigenvalue problem were produced by Deif [9], and Deif and Rohn [10]. However, their solutions depend on theorems that have very strong assumptions. Later, Rohn [11] introduced a boundary point characterization of the eigenvalue set. Approximation methods were addressed by Qiu et al. [1], Leng et al. [12], and Hladík et al. [13]. The works [12,13] are based on a branch and prune approach and yield results of a given, arbitrarily high accuracy.


The symmetric eigenvalue problem is very important in practice. However, in its interval form it is hard to handle, since the correlations between the entries of the matrices force interval analysis algorithms to overestimate the results considerably. The symmetric case was pioneered by Deif [9]. Another theoretical result, determining two extremal points of the eigenvalue set, is attributed to Hertz [14], see also [15]. Diverse approximation algorithms have also been developed. An evolution strategy method by Yuan et al. [16] gives an inner approximation (subset) of the eigenvalue set, and under some conditions it converges to the exact bounds. Matrix perturbation theory was used by Qiu et al. [2], who proposed an algorithm for approximating the bounds, and by Leng and He [17] for an outer approximation (superset) of the eigenvalue set. Outer bounds that are easy and fast to compute were presented by Hladík et al. [18], and the general parametric case was considered by Kolev [19].

In this paper, we propose a filtering method to reduce the overestimation produced by various methods. Generally, filtering methods start with an initial outer approximation and iteratively make it tighter. Even though filtering is a commonly used approach in constraint programming, it is not widely used for the interval eigenvalue problem. We can, of course, apply any filtering to the interval nonlinear system of equations arising from the eigenvalue definition, but no such approach has been successful yet; cf. [13]. To the best of our knowledge, the only related work is by Beaumont [20], where an iterative algorithm is presented, based on a convex approximation of eigenpairs. The new filtering method that we propose is simpler and applicable to both the symmetric and unsymmetric cases. Since we do not take eigenvectors into account, it is much more efficient, too.

2. Basic definitions and main theorem

An interval matrix is defined as a family of matrices

$$\mathbf{A} := [\underline{A}, \overline{A}] = \{A \in \mathbb{R}^{m \times n};\ \underline{A} \le A \le \overline{A}\},$$

where A̲, Ā ∈ R^{m×n}, A̲ ≤ Ā, are given matrices, and the inequality is understood element-wise. By Ac := ½(A̲ + Ā) and A∆ := ½(Ā − A̲) we denote the midpoint and the radius of A, respectively.

Let A ⊆ R^{n×n} be a square interval matrix. Its eigenvalue set is defined as

$$\Lambda(\mathbf{A}) := \{\lambda \in \mathbb{R};\ Ax = \lambda x,\ x \ne 0,\ A \in \mathbf{A}\}.$$

That is, matrices in A may have both real and complex eigenvalues, but we focus on the real ones only. An outer approximation of Λ(A) is any set having Λ(A) as a subset. An important class of matrices is that of symmetric ones. Its generalization to interval matrices is as follows. A symmetric interval matrix is defined as

$$\mathbf{A}^S := \{A \in \mathbf{A} \mid A = A^T\}.$$

Without loss of generality we assume that A^S is non-empty, which is easy to check, and that A∆ and Ac are symmetric. Its eigenvalue set is denoted similarly to the generic case, that is,

$$\Lambda(\mathbf{A}^S) := \{\lambda \in \mathbb{R};\ Ax = \lambda x,\ x \ne 0,\ A \in \mathbf{A}^S\}.$$

Since A^S is a proper subset of A, its eigenvalue set Λ(A^S) is in general a subset of Λ(A).

Since a real symmetric matrix A ∈ R^{n×n} always has n real eigenvalues, we can sort them in non-increasing order:

$$\lambda_1(A) \ge \lambda_2(A) \ge \cdots \ge \lambda_n(A).$$

We extend this notation to symmetric interval matrices, that is, λi(A^S) := {λi(A) | A ∈ A^S}. These sets form n convex compact intervals, which can be disjoint or may overlap; see for example [18]. The union of these intervals is Λ(A^S). We denote their outer approximations by ωi(A^S) ⊇ λi(A^S), where i = 1, . . . , n.

Let ρ(·) denote the spectral radius, and |·| the matrix absolute value, understood componentwise. We now present our main theoretical result, which employs the sufficient regularity conditions of Beeck [21] and Rump [22] (compare Rex and Rohn [23]), and which allows us to present the filtering method.
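A trivial NumPy helper mirroring the midpoint/radius definitions above (our naming):

```python
import numpy as np

def mid_rad(A_lo, A_hi):
    """Midpoint Ac and radius A_Delta of the interval matrix [A_lo, A_hi]."""
    Ac = (A_lo + A_hi) / 2.0
    Ad = (A_hi - A_lo) / 2.0
    return Ac, Ad
```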

Theorem 1. Let λ0 ∉ Λ(A) and define M := A − λ0I. Then (λ0 + λ) ∉ Λ(A) for all real λ satisfying

$$|\lambda| < \frac{1 - \tfrac{1}{2}\,\rho\!\left(|I - QM_c| + |I - QM_c|^T + |Q|M_\Delta + M_\Delta^T |Q|^T\right)}{\tfrac{1}{2}\,\rho\!\left(|Q| + |Q|^T\right)}, \qquad (1)$$

where Q ∈ R^{n×n}, Q ≠ 0, is an arbitrary matrix.

Proof. It is sufficient to prove that for every λ satisfying (1), the interval matrix M − λI = A − (λ0 + λ)I is regular, i.e., consists of nonsingular matrices only.


It is known [23] that an interval matrix B is regular if for some matrix Q one has

$$\rho\left(|I - QB_c| + |Q|B_\Delta\right) < 1.$$

Substituting B := M − λI, we obtain a sufficient condition for λ not to be an eigenvalue:

$$\rho\left(|I - Q(M_c - \lambda I)| + |Q|M_\Delta\right) < 1.$$

By the theory of non-negative matrices [8], ρ(A) ≤ ρ(B) provided 0 ≤ A ≤ B. In our case, we have

$$\rho\left(|I - Q(M_c - \lambda I)| + |Q|M_\Delta\right) \le \rho\left(|I - QM_c| + |\lambda||Q| + |Q|M_\Delta\right).$$

It holds [24] that ρ(B) ≤ ½ρ(B + Bᵀ) for any B ≥ 0. Thus, we obtain

$$\rho\left(|I - QM_c| + |\lambda||Q| + |Q|M_\Delta\right) \le \tfrac{1}{2}\,\rho\left(|I - QM_c| + |I - QM_c|^T + |\lambda|(|Q| + |Q|^T) + |Q|M_\Delta + M_\Delta^T|Q|^T\right).$$

The resulting matrix on the right-hand side is symmetric, so we can exploit the well-known Weyl theorem [7,8] on the spectral radius of a sum of two symmetric matrices: for A, B symmetric, ρ(A + B) ≤ ρ(A) + ρ(B). Thus

$$\tfrac{1}{2}\,\rho\left(|I - QM_c| + |I - QM_c|^T + |\lambda|(|Q| + |Q|^T) + |Q|M_\Delta + M_\Delta^T|Q|^T\right) \le \tfrac{1}{2}|\lambda|\,\rho\left(|Q| + |Q|^T\right) + \tfrac{1}{2}\,\rho\left(|I - QM_c| + |I - QM_c|^T + |Q|M_\Delta + M_\Delta^T|Q|^T\right).$$

Now, the sufficient condition reads

$$\tfrac{1}{2}|\lambda|\,\rho\left(|Q| + |Q|^T\right) + \tfrac{1}{2}\,\rho\left(|I - QM_c| + |I - QM_c|^T + |Q|M_\Delta + M_\Delta^T|Q|^T\right) < 1.$$

Solving for |λ| we get (1). Note that the denominator is zero iff Q = 0. □

If we set Q := Mc^{−1}, then we obtain the most convenient and simple form of (1), which we state below. However, when floating point arithmetic is used, these conditions may be violated. Thus, from the numerical stability point of view, it is recommended to use the original form, with Q an approximate inverse of Mc.

Corollary 1. Let λ0 ∉ Λ(A) and define M := A − λ0I. Then (λ0 + λ) ∉ Λ(A) for all real λ satisfying

$$|\lambda| < \frac{1 - \tfrac{1}{2}\,\rho\!\left(|M_c^{-1}|M_\Delta + M_\Delta^T |M_c^{-1}|^T\right)}{\tfrac{1}{2}\,\rho\!\left(|M_c^{-1}| + |M_c^{-1}|^T\right)}. \qquad (2)$$

Another consequence follows for the case of a symmetric interval matrix. Notice that for A and A^S the filtering results are the same as long as A∆ and Ac are symmetric.

Corollary 2. Let λ0 ∉ Λ(A^S) and define M^S := A^S − λ0I. Then (λ0 + λ) ∉ Λ(A^S) for all real λ satisfying

$$|\lambda| < \frac{1 - \tfrac{1}{2}\,\rho\!\left(|I - QM_c| + |I - M_cQ| + |Q|M_\Delta + M_\Delta|Q|\right)}{\rho(|Q|)}, \qquad (3)$$

where Q ∈ R^{n×n}, Q ≠ 0, is an arbitrary symmetric matrix.

These results allow us to propose an efficient filtering method for reducing outer approximations of the eigenvalue set. The algorithm is presented in the next section.

3. Algorithm

In this section we propose a filtering algorithm based on Theorem 1. Let an interval a = [a̲, ā] be given. A filtering method is a method which iteratively cuts off parts (margins) of a that do not include any eigenvalue. Finally, we obtain an interval b ⊆ a such that (a\b) ∩ Λ(A) = ∅.

To avoid infinitely many iterations we limit their number by a constant T. To skip steps that cut off only very narrow pieces, we repeat the main loop only while the reduction is significant; that is, while we prune away at least an ε·a∆ part of the interval, where ε ∈ (0,1) is a given accuracy. The pseudo-code of Algorithm 1 presents our filtering method, which ''filters'' the input interval from above. Filtering from below is analogous.

Algorithm 1 (Filtering a from above)
1: b := a;
2: t := 0;
3: λ := ε b∆ + 1;
4: while λ > ε b∆ and t < T do
5:   t := t + 1;
6:   M := A − b̄ I;
7:   compute Q := Mc^{−1};
8:   λ := (2 − ρ(|I − QMc| + |I − QMc|ᵀ + |Q|M∆ + M∆ᵀ|Q|ᵀ)) / ρ(|Q| + |Q|ᵀ);
9:   if λ > 0 then
10:    b̄ := b̄ − λ;
11:  end if
12:  if b̄ < b̲ then
13:    return b := ∅;
14:  end if
15: end while
16: return b.

The filtering method is quite straightforward. The input interval a can be any initial outer approximation of Λ(A), or we can split such an outer approximation into several pieces and call the filtering algorithm on each of them. The former approach does not detect gaps which are inside the non-convex set Λ(A), while the latter is able to identify them, provided some genericity condition for the splitting holds.

Algorithm 1 is also applicable to the symmetric eigenvalue problem, but filtering Λ(A^S) yields the same result as in the generic case Λ(A). The advantage of the symmetric case is that Λ(A^S) consists of a union of convex sets λi(A^S), i = 1, . . . , n, and convex sets do not have gaps. Thus we can employ the methods of [15] for computing outer approximations of the eigenvalue sets λi(A^S), i = 1, . . . , n, and filter these particular sets separately. Note that the filtering is applicable only to non-overlapping parts; if they overlap, then we cut off nothing.

As we will see in Section 4, the filtering runs very fast and the reduction is significant. However, it does not converge to the optimal boundaries in general, because it is based on a sufficient condition for interval matrix regularity. A sketch of the core filtering step in code is given below.
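To make step 8 of the algorithm concrete, here is a minimal non-verified NumPy sketch of the filtering step based on Corollary 1 (our naming; plain floating point, unlike the verified libraries used in Section 4):

```python
import numpy as np

def filter_from_above(Ac, Adelta, a_lo, a_hi, eps=0.01, T=100):
    """Shrink the upper endpoint of [a_lo, a_hi] while no eigenvalue is cut.

    Assumes the current endpoint a_hi lies outside the eigenvalue set.
    """
    n = Ac.shape[0]
    rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
    for _ in range(T):
        Mc = Ac - a_hi * np.eye(n)            # midpoint of M = A - a_hi*I
        Q = np.linalg.inv(Mc)                 # Q := Mc^{-1}
        R = np.abs(Q) @ Adelta                # |Q| M_Delta
        num = 2.0 - rho(R + R.T)              # 2 - rho(|Q|Md + Md^T|Q|^T)
        den = rho(np.abs(Q) + np.abs(Q).T)
        lam = num / den                       # cut-off width from (2)
        if lam <= eps * (a_hi - a_lo) / 2.0:  # insignificant (or no) reduction
            break
        a_hi -= lam
        if a_hi < a_lo:
            return None                       # whole interval filtered out
    return a_lo, a_hi
```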

4. Numerical results

In this section we present three examples and numerical results illustrating the properties of the proposed filtering method. In all the examples, we call Algorithm 1 with accuracy coefficient ε := 0.01 and maximum number of iterations T := 100. The experiments were carried out on an Intel Pentium(R) 4, CPU 3.4 GHz, with 2 GB RAM, and the program was written in C++. We use GLPK v.4.23 [25] for solving linear programming problems, CLAPACK v.3.1.1 [26] for its linear algebraic routines, and PROFIL/BIAS v.2.0.4 [27] for interval arithmetic and basic operations. We have to note, however, that the routines of GLPK and CLAPACK do not produce verified solutions; for real-life problems, verified software or interval arithmetic should preferably be used.

Example 1. Let us adopt an example from Hladík et al. [18]:

$$\mathbf{A} = \begin{pmatrix}
[-5,-4] & [-9,-8] & [14,15] & [4.6,5] & [-1.2,-1] \\
[17,18] & [17,18] & [1,2] & [4,5] & [10,11] \\
[17,17.2] & [-3.5,-2.7] & [1.9,2.1] & [-13,-12] & [6,6.4] \\
[18,19] & [2,3] & [18,19] & [5,6] & [6,7] \\
[13,14] & [18,19] & [9,10] & [-18,-17] & [10,11]
\end{pmatrix}.$$

Rohn's outer approximation [18,28] of Λ(A) is [−22.1040, 35.4999]. Calling Algorithm 1 we obtain the following sequences of improvements:

• from above: 35.4999 → 28.0615 → 25.6193 → 24.7389 → 24.4086;
• from below: −22.1040 → −18.4018 → −17.8239 → −17.7346.

So we need only seven iterations to achieve the much tighter outer approximation [−17.7346, 24.4086].


Using Proposition 2 of [18] we have the outer approximation [−24.4860, 4.5216] ∪ [12.1327, 29.3101]. We filter both intervals. In the former case we obtain the following approximations:

• from above: 4.5216 → 2.4758 → 0.8342 → −0.0951 → −0.5335 → −0.7149;
• from below: −24.4860 → −18.8351 → −17.8926 → −17.7438;

and in the latter one we obtain:

• from above: 29.3101 → 26.0645 → 24.9010 → 24.4704 → 24.3053 → 24.2412;
• from below: 12.1327 → 13.4809 → 14.4703 → 15.1443 → 15.5761 → 15.8462 → 16.0127 → 16.1143 → 16.1760.

Thus, after 21 iterations, we have the filtered outer approximation [−17.7438, −0.7149] ∪ [16.1760, 24.2412]. We can compare this result with the exact solution Λ(A) = [−17.5116, −13.7578] ∪ [−6.7033, −1.4582] ∪ [16.7804, 23.6143], which was obtained by the algorithm of Hladík et al. [13].

We notice that the filtered approximation is very tight. One gap remains which we cannot detect unless we divide the initial approximation into sub-intervals.

Example 2. Consider the example given by Qiu et al. [2] (see also [18,16]):

$$\mathbf{A}^S = \begin{pmatrix}
[2975, 3025] & [-2015, -1985] & 0 & 0 \\
[-2015, -1985] & [4965, 5035] & [-3020, -2980] & 0 \\
0 & [-3020, -2980] & [6955, 7045] & [-4025, -3975] \\
0 & 0 & [-4025, -3975] & [8945, 9055]
\end{pmatrix}^S.$$

To call the filtering method we need some initial outer approximation of the eigenvalue sets. We use the following one from [18]:

ω1(A^S) = [12560.6296, 12720.2273], ω2(A^S) = [6990.7616, 7138.1800],
ω3(A^S) = [3320.2863, 3459.4322], ω4(A^S) = [837.0637, 973.1993].

Even though this approximation is quite tight, the filtering makes it tighter still. Calling Algorithm 1 we get, after only 10 iterations:

ω^f_1(A^S) = [12560.8129, 12720.2273], ω^f_2(A^S) = [6999.7862, 7129.2716],
ω^f_3(A^S) = [3332.7164, 3447.4625], ω^f_4(A^S) = [841.5328, 968.5845].

What if we start with another initial outer approximation? We use the one produced by the method of Leng and He [17]:

ω̃1(A^S) = [12550.53, 12730.53], ω̃2(A^S) = [6974.459, 7154.459],
ω̃3(A^S) = [3299.848, 3479.848], ω̃4(A^S) = [815.1615, 995.1615].

Even though this initial estimation is not so tight, our filtering method improves it substantially, after only 13 iterations, to

ω̃^f_1(A^S) = [12560.8129, 12720.2472], ω̃^f_2(A^S) = [6999.8026, 7129.2716],
ω̃^f_3(A^S) = [3332.7944, 3447.4628], ω̃^f_4(A^S) = [841.5328, 968.5505].

We can compare this outer approximation with the exact description from [12,18]:

λ1(A^S) = [12560.8377, 12720.2273], λ2(A^S) = [7002.2828, 7126.8283],
λ3(A^S) = [3337.0785, 3443.3127], λ4(A^S) = [842.9251, 967.1082].

As in the previous example, the filtering method converges quickly toward the solution. Moreover, it does not seem to be sensitive to the initial approximation.

Example 3. To be fully convinced of the quality of the filtering method, we carried out a number of randomly generated examples. The components of the midpoint matrix Ac are taken randomly with uniform distribution in [−20, 20]. The components of the radius matrix A∆ are taken randomly with uniform distribution in [0, R], where R is a given positive real number. We applied our algorithm to the interval matrix M := AᵀA, since such kinds of symmetric interval matrices often appear in practice. The filtering method was called for all the eigenvalue sets λ(M^S) = (λ1(M^S), . . . , λn(M^S))ᵀ.


The results are displayed in Table 1. Each row shows the results of a series of 100 tests carried out for a given dimension n and parameter R. We provide the average cut-off and its standard deviation, the average number of iterations (for all parts of an outer approximation together), and the average running time. The cut-off provides information about the filtering efficiency. It is measured by the ratio

$$1 - \frac{e^T \omega^f_\Delta(\mathbf{M}^S)}{e^T \omega_\Delta(\mathbf{M}^S)},$$

where e = (1, . . . , 1)ᵀ denotes the vector of all ones, ω(M^S) the initial outer approximation of λ(M^S), and ω^f(M^S) the result of the filtering procedure. This quotient says how much we cut off from the whole outer approximation of the eigenvalue sets; the real efficiency (how much of the overestimation we eliminate) is even better.

Table 1
Filtering procedure for outer estimates of eigenvalue sets of random interval symmetric matrices AᵀA.

 n   R      Cut-off avg.  Cut-off std. dev.  Iterations  Time (s)
 5   0.001  0.186185      0.034635           15.25       0.0012
 5   0.01   0.182395      0.042879           16.23       0.0010
 5   0.1    0.142715      0.041260           16.51       0.0011
 5   1      0.015686      0.011977           3.99        0.0004
 10  0.001  0.233480      0.022064           33.41       0.0062
 10  0.01   0.207711      0.032813           37.63       0.0068
 10  0.1    0.074682      0.024832           22.13       0.0031
 10  1      0.003388      0.002995           1.42        0.0005
 15  0.001  0.242457      0.021210           54.13       0.0239
 15  0.01   0.181559      0.022123           56.44       0.0250
 15  0.1    0.025517      0.010055           13.48       0.0082
 15  1      0.001410      0.001704           0.94        0.0011
 20  0.001  0.243601      0.017704           76.69       0.0650
 20  0.01   0.154207      0.021279           67.92       0.0595
 20  0.1    0.009844      0.006030           7.42        0.0114
 20  1      0.000493      0.000890           0.60        0.0012
 25  0.001  0.238694      0.016706           97.15       0.1373
 25  0.01   0.122852      0.017385           73.67       0.1122
 25  0.1    0.004785      0.003815           3.84        0.0123
 25  1      0.000117      0.000369           0.33        0.0033
 30  0.001  0.232266      0.015133           117.90      0.2679
 30  0.01   0.093589      0.015072           74.07       0.1812
 30  0.1    0.002288      0.001849           2.78        0.0121
 30  1      0.000031      0.000150           0.09        0.0036
 50  0.001  0.194401      0.011126           184.75      1.5238
 50  0.01   0.028148      0.006065           48.19       0.5169
 50  0.1    0.000428      0.000545           1.05        0.0225
 50  1      0.000000      0.000000           0.00        0.0166

The results show that the proposed filtering method is not only very fast but also efficient, eliminating quite large parts of a given outer approximation of the eigenvalue sets. This is particularly true when the input intervals of A are narrow. If they are wide, then the filtering method is not so successful, partly because some of the eigenvalue sets overlap.

5. Conclusion

We propose a filtering method to improve an outer approximation of the eigenvalue set of an interval matrix. Our method is applicable to both generic and symmetric matrices. Even though the proposed algorithm does not always converge to the optimal bounds, our numerical experiments show that it computes very fast and, in general, quite accurate results. The algorithm performs well even when the initial (input) outer approximation is not very tight, so it is not sensitive with respect to the input estimation. A drawback of our approach is that it cannot detect possible gaps inside the initial outer approximation. Such cases should be handled by splitting into smaller sub-intervals or by using another kind of initial approximation. This is a problem that requires further research.

References

[1] Z. Qiu, P.C. Müller, A. Frommer, An approximate method for the standard interval eigenvalue problem of real non-symmetric interval matrices, Commun. Numer. Methods Eng. 17 (2001) 239–251.
[2] Z. Qiu, S. Chen, I. Elishakoff, Bounds of eigenvalues for structures with an interval description of uncertain-but-non-random parameters, Chaos Soliton. Fract. 7 (1996) 425–434.
[3] A.D. Dimarogonas, Interval analysis of vibrating systems, J. Sound Vib. 183 (1995) 739–749.
[4] D. Chablat, P. Wenger, F. Majou, J. Merlet, An interval analysis based study for the design and the comparison of 3-dof parallel kinematic machines, Int. J. Robot. Res. 23 (2004) 615–624.
[5] F. Gioia, C.N. Lauro, Principal component analysis on interval data, Comput. Stat. 21 (2006) 343–363.
[6] P. Comon et al., Independent component analysis, a new concept?, Signal Process. 36 (1994) 287–314.
[7] G.H. Golub, C.F. Van Loan, Matrix Computations, Johns Hopkins University Press, 1996.
[8] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, 1985.
[9] A.S. Deif, The interval eigenvalue problem, Z. Angew. Math. Mech. 71 (1991) 61–64.
[10] J. Rohn, A. Deif, On the range of eigenvalues of an interval matrix, Computing 47 (1992) 373–377.
[11] J. Rohn, Interval matrices: singularity and real eigenvalues, SIAM J. Matrix Anal. Appl. 14 (1993) 82–91.
[12] H. Leng, Z. He, Q. Yuan, Computing bounds to real eigenvalues of real-interval matrices, Int. J. Numer. Methods Eng. 74 (2008) 523–530.
[13] M. Hladík, D. Daney, E. Tsigaridas, An algorithm for addressing the real interval eigenvalue problem, J. Comput. Appl. Math. (2011), doi:10.1016/j.cam.2010.11.022.
[14] D. Hertz, The extreme eigenvalues and stability of real symmetric interval matrices, IEEE Trans. Autom. Control 37 (1992) 532–535.
[15] J. Rohn, A handbook of results on interval linear problems, 2005. http://www.cs.cas.cz/rohn/handbook.
[16] Q. Yuan, Z. He, H. Leng, An evolution strategy method for computing eigenvalue bounds of interval matrices, Appl. Math. Comput. 196 (2008) 257–265.
[17] H. Leng, Z. He, Computing eigenvalue bounds of structures with uncertain-but-non-random parameters by a method based on perturbation theory, Commun. Numer. Methods Eng. 23 (2007) 973–982.
[18] M. Hladík, D. Daney, E. Tsigaridas, Bounds on real eigenvalues and singular values of interval matrices, SIAM J. Matrix Anal. Appl. 31 (2010) 2116–2129.
[19] L.V. Kolev, Outer interval solution of the eigenvalue problem under general form parametric dependencies, Reliab. Comput. 12 (2006) 121–140.
[20] O. Beaumont, An algorithm for symmetric interval eigenvalue problem, Technical Report IRISA-PI-00-1314, INRIA, Rennes, France, 2000.
[21] H. Beeck, Zur Problematik der Hüllenbestimmung von Intervallgleichungssystemen, in: K. Nickel (Ed.), Proc. Int. Symp. Interval Math., Vol. 29, Springer, Berlin, 1975, pp. 150–159.
[22] S.M. Rump, Solving algebraic problems with high accuracy, in: U. Kulisch, W. Miranker (Eds.), A New Approach to Scientific Computation, Proc. Symp., Academic Press, New York, 1983, pp. 51–120.
[23] G. Rex, J. Rohn, Sufficient conditions for regularity and singularity of interval matrices, SIAM J. Matrix Anal. Appl. 20 (1998) 437–445.
[24] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, 1994.
[25] A. Makhorin, GLPK – GNU Linear Programming Kit, 2008. http://www.gnu.org/software/glpk/.
[26] CLAPACK – Linear Algebra PACKage written for C, 2007. http://www.netlib.org/clapack/.
[27] O. Knüppel, D. Husung, C. Keil, PROFIL/BIAS – a C++ class library, 2006. http://www.ti3.tu-harburg.de/Software/PROFILEnglisch.html.
[28] J. Rohn, Bounds on eigenvalues of interval matrices, ZAMM, Z. Angew. Math. Mech. 78 (1998) S1049–S1050.

5242 M. Hladík et al. / Applied Mathematics and Computation 217 (2011) 5236–5242

Page 135: Interval linear algebra...Thus, interval linear algebra is a basis for solving more complex interval-valued problems, such as in-terval mathematical programming, interval least squares

Bounds on eigenvalues of real and complex interval matrices

Milan Hladík
Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics, Malostranské nám. 25, 11800 Prague, Czech Republic
University of Economics, Faculty of Informatics and Statistics, nám. W. Churchilla 4, 13067 Prague, Czech Republic

Keywords: Interval matrix, interval analysis, eigenvalue bounds

Abstract

We present a cheap and tight formula for bounding the real and imaginary parts of eigenvalues of real or complex interval matrices. It outperforms the classical formulae not only in the complex case but also in the real case. In particular, it generalizes and improves the results of Rohn (1998) [5] and Hertz (2009) [19]. The main idea is to reduce the problem to enclosing the eigenvalues of symmetric interval matrices, for which diverse methods can be utilized.

The result helps in analysing the stability of uncertain dynamical systems, since the formula gives sufficient conditions for testing Schur and Hurwitz stability of interval matrices. It may also serve as a starting point for some iteration methods.


1. Introduction

The stability of dynamical systems has attracted the attention of the control community for several decades. Discrete dynamical systems lead to Schur stability, and continuous systems lead to Hurwitz stability of the system matrices. A matrix A is Schur stable if its spectral radius is less than 1, and A is Hurwitz stable if the real parts of all its eigenvalues are negative.

There are intrinsic uncertainties when solving practical problems. Uncertainties are modeled in diverse ways, but interval analysis naturally handles the best and worst cases of continuous parameter domains. The aim of this paper is to derive cheap but sharp bounds on the eigenvalues of complex interval matrices. This gives an efficient tool for stability checking, among other applications, since we obtain a strong sufficient condition for stability. Thus, in many cases we avoid exhaustive and expensive enumerative or branch & bound methods.

An interval matrix is defined as a family of matrices
$$\mathbf{A} := [\underline{A}, \overline{A}] = \{ A \in \mathbb{R}^{n \times n} \mid \underline{A} \le A \le \overline{A} \},$$

where $\underline{A}, \overline{A} \in \mathbb{R}^{n \times n}$, $\underline{A} \le \overline{A}$, are given matrices, and the inequality is understood entrywise. The midpoint and the radius of $\mathbf{A}$ are denoted respectively by
$$A_c := \tfrac{1}{2}(\underline{A} + \overline{A}), \qquad A_\Delta := \tfrac{1}{2}(\overline{A} - \underline{A}).$$

The set of all $n \times n$ interval matrices is denoted by $\mathbb{IR}^{n \times n}$. A complex interval matrix is defined as a family $\mathbf{A} + i\mathbf{B}$, where $\mathbf{A}$ and $\mathbf{B}$ are interval matrices of order $n$. The eigenvalue set $\Lambda(\mathbf{A} + i\mathbf{B})$ corresponding to $\mathbf{A} + i\mathbf{B}$ is defined as the set of all eigenvalues over all $A + iB \in \mathbf{A} + i\mathbf{B}$, that is,


$$\Lambda(\mathbf{A} + i\mathbf{B}) = \{ \lambda + i\mu \mid \exists A \in \mathbf{A}\ \exists B \in \mathbf{B}\ \exists x + iy \ne 0 : (A + iB)(x + iy) = (\lambda + i\mu)(x + iy) \}.$$

In what follows, $\rho(A)$ denotes the spectral radius of $A$, and the real and imaginary parts of a complex number $z$ are denoted by $\mathrm{Re}(z)$ and $\mathrm{Im}(z)$, respectively.

Deif [1] presents a description of the eigenvalue sets and their exact bounds; however, it is valid only under some assumptions on the sign pattern invariance of the eigenvectors, which are not easy to verify. In practice, it mostly suffices to calculate fast computable enclosures (supersets) of the eigenvalue sets. Kolev and Petrakieva [2] develop an enclosure for the real parts of the eigenvalues by solving a nonlinear system of equations; exact bounds can be achieved under some monotonicity assumptions. Kolev [3] extends this to the class of interval parametric matrices. Mayer [4] proposes an enclosure method for eigenvalues of real and complex interval matrices based on Taylor expansion. A cheap formula for an enclosure is given in Rohn [5]. An estimation of eigenvalues based on perturbation theory appears in Ahn et al. [6].

Even though complex matrices are less common in practice, there are still applications and techniques using them, which motivates our research. The stability of systems with complex matrices is studied, e.g., in [7–9]. Ahn et al. [10] reduced the problem of checking robust stability of a fractional-order linear time-invariant uncertain interval system to finding the maximal eigenvalues of a Hermitian complex interval matrix.

For Hurwitz stability checking, Franzè et al. [11] present a sufficient condition using a Gershgorin-type theorem. Xiao and Unbehauen [12] show that Schur/Hurwitz stability checking can be reduced to checking only the exposed faces of an interval matrix. Further, Rohn [13] proved that Hurwitz stability can be reduced to inspecting $2^{2n-1}$ special vertex matrices, provided that each matrix in $\mathbf{A}$ has real eigenvalues only. Stability analysis based on the Lyapunov equation was studied in [10,14,15].

A special subclass of interval matrices are symmetric interval matrices. For an interval matrix $\mathbf{A}$, the corresponding symmetric interval matrix $\mathbf{A}^S$ is defined as the family of all symmetric matrices in $\mathbf{A}$, that is,
$$\mathbf{A}^S = \{ A \in \mathbf{A} \mid A = A^T \}.$$

A real symmetric matrix $A \in \mathbb{R}^{n \times n}$ has $n$ real eigenvalues; we assume that they are sorted in non-increasing order,
$$\lambda_1(A) \ge \lambda_2(A) \ge \cdots \ge \lambda_n(A).$$

Extending the notation to symmetric interval matrices, we introduce the eigenvalue sets
$$\lambda_i(\mathbf{A}^S) = [\underline{\lambda_i}(\mathbf{A}^S), \overline{\lambda_i}(\mathbf{A}^S)] := \{ \lambda_i(A) \mid A \in \mathbf{A}^S \}, \qquad i = 1, \dots, n.$$
The eigenvalue sets are $n$ compact intervals, which may be disjoint or may overlap [16].

Even though the main focus of this paper is on bounding eigenvalues of general (complex) interval matrices, we review some methods for symmetric interval matrices, since our main result reduces the computation of the general case to the symmetric one. Various bounding methods are discussed in Hladík et al. [16]. For checking Schur/Hurwitz stability of a symmetric interval matrix, [17] proposed a branch & bound algorithm. Other related results can be found in [18–21].

The following simple but efficient bounds are due to Rohn [21].

Theorem 1 (Rohn, 2005). For each $i \in \{1, \dots, n\}$ one has
$$\lambda_i(\mathbf{A}^S) \subseteq [\lambda_i(A_c) - \rho(A_\Delta),\ \lambda_i(A_c) + \rho(A_\Delta)].$$

As shown by Hertz [18,19], the extremal values $\overline{\lambda_1}(\mathbf{A}^S)$ and $\underline{\lambda_n}(\mathbf{A}^S)$ can be computed exactly by inspecting $2^{n-1}$ special vertex matrices.
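The Theorem 1 enclosure takes only one symmetric eigenvalue decomposition and one spectral radius. The following minimal NumPy sketch (the function name and interface are our own, assuming the symmetric interval matrix is given by its lower and upper bound arrays) computes all $n$ enclosures:

```python
import numpy as np

def rohn_enclosures(A_lo, A_up):
    """Enclosures [lambda_i(Ac) - rho(AD), lambda_i(Ac) + rho(AD)], i = 1..n,
    of the eigenvalue sets of a symmetric interval matrix (Theorem 1)."""
    Ac = (A_lo + A_up) / 2                       # midpoint matrix (symmetric)
    AD = (A_up - A_lo) / 2                       # radius matrix (nonnegative)
    lam = np.linalg.eigvalsh(Ac)[::-1]           # eigenvalues of Ac, non-increasing
    rho = np.max(np.abs(np.linalg.eigvals(AD)))  # spectral radius of AD
    return [(l - rho, l + rho) for l in lam]
```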

Theorem 2 (Hertz, 1992). Define $Z := \{1\} \times \{\pm 1\}^{n-1} = \{(1, \pm 1, \dots, \pm 1)\}$, and for $z \in Z$ define $A_z, A'_z \in \mathbf{A}^S$ in this way:
$$(a_z)_{ij} = \begin{cases} \overline{a}_{ij} & \text{if } z_i = z_j, \\ \underline{a}_{ij} & \text{if } z_i \ne z_j, \end{cases} \qquad (a'_z)_{ij} = \begin{cases} \underline{a}_{ij} & \text{if } z_i = z_j, \\ \overline{a}_{ij} & \text{if } z_i \ne z_j. \end{cases}$$
Then
$$\overline{\lambda_1}(\mathbf{A}^S) = \max_{z \in Z} \lambda_1(A_z), \qquad \underline{\lambda_n}(\mathbf{A}^S) = \min_{z \in Z} \lambda_n(A'_z).$$
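A direct implementation of Theorem 2 might look like the following sketch (again under our own naming; it enumerates all $2^{n-1}$ vertex matrices, so it is practical only for modest $n$):

```python
import itertools
import numpy as np

def hertz_extremes(A_lo, A_up):
    """Exact extremal bounds: max of lambda_1(A_z) and min of lambda_n(A'_z)
    over the 2^(n-1) Hertz vertex matrices (Theorem 2)."""
    n = A_lo.shape[0]
    lam1, lamn = -np.inf, np.inf
    for tail in itertools.product((1, -1), repeat=n - 1):
        z = np.array((1,) + tail)
        same = np.equal.outer(z, z)          # True where z_i == z_j
        Az  = np.where(same, A_up, A_lo)     # vertex matrix A_z
        Az_ = np.where(same, A_lo, A_up)     # vertex matrix A'_z
        lam1 = max(lam1, np.linalg.eigvalsh(Az)[-1])   # largest eigenvalue
        lamn = min(lamn, np.linalg.eigvalsh(Az_)[0])   # smallest eigenvalue
    return lam1, lamn
```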

The main focus of this paper is on bounding complex eigenvalues of interval matrices. One of the basic bounds is the following formula by Rohn [5].

Theorem 3 (Rohn, 1998). Let $\mathbf{A} \in \mathbb{IR}^{n \times n}$. Then for each eigenvalue $\lambda + i\mu \in \Lambda(\mathbf{A})$ we have
$$\lambda \le \lambda_1\left(\tfrac{1}{2}(A_c + A_c^T)\right) + \rho\left(\tfrac{1}{2}(A_\Delta + A_\Delta^T)\right),$$
$$\lambda \ge \lambda_n\left(\tfrac{1}{2}(A_c + A_c^T)\right) - \rho\left(\tfrac{1}{2}(A_\Delta + A_\Delta^T)\right),$$
$$\mu \le \lambda_1\begin{pmatrix} 0 & \frac{1}{2}(A_c - A_c^T) \\ \frac{1}{2}(A_c^T - A_c) & 0 \end{pmatrix} + \rho\begin{pmatrix} 0 & \frac{1}{2}(A_\Delta + A_\Delta^T) \\ \frac{1}{2}(A_\Delta^T + A_\Delta) & 0 \end{pmatrix},$$
$$\mu \ge \lambda_n\begin{pmatrix} 0 & \frac{1}{2}(A_c - A_c^T) \\ \frac{1}{2}(A_c^T - A_c) & 0 \end{pmatrix} - \rho\begin{pmatrix} 0 & \frac{1}{2}(A_\Delta + A_\Delta^T) \\ \frac{1}{2}(A_\Delta^T + A_\Delta) & 0 \end{pmatrix}.$$

Rohn's result was generalized to complex interval matrices by Hertz [19].

Theorem 4 (Hertz, 2009). Let $\mathbf{A}, \mathbf{B} \in \mathbb{IR}^{n \times n}$. Then for each $\lambda + i\mu \in \Lambda(\mathbf{A} + i\mathbf{B})$ we have
$$\lambda \le \lambda_1\left(\tfrac{1}{2}(A_c + A_c^T)\right) + \rho\left(\tfrac{1}{2}(A_\Delta + A_\Delta^T)\right) + \lambda_1\begin{pmatrix} 0 & \frac{1}{2}(B_c^T - B_c) \\ \frac{1}{2}(B_c - B_c^T) & 0 \end{pmatrix} + \rho\begin{pmatrix} 0 & \frac{1}{2}(B_\Delta^T + B_\Delta) \\ \frac{1}{2}(B_\Delta + B_\Delta^T) & 0 \end{pmatrix},$$
$$\lambda \ge \lambda_n\left(\tfrac{1}{2}(A_c + A_c^T)\right) - \rho\left(\tfrac{1}{2}(A_\Delta + A_\Delta^T)\right) + \lambda_n\begin{pmatrix} 0 & \frac{1}{2}(B_c^T - B_c) \\ \frac{1}{2}(B_c - B_c^T) & 0 \end{pmatrix} - \rho\begin{pmatrix} 0 & \frac{1}{2}(B_\Delta^T + B_\Delta) \\ \frac{1}{2}(B_\Delta + B_\Delta^T) & 0 \end{pmatrix},$$
and
$$\mu \le \lambda_1\left(\tfrac{1}{2}(B_c + B_c^T)\right) + \rho\left(\tfrac{1}{2}(B_\Delta + B_\Delta^T)\right) + \lambda_1\begin{pmatrix} 0 & \frac{1}{2}(A_c - A_c^T) \\ \frac{1}{2}(A_c^T - A_c) & 0 \end{pmatrix} + \rho\begin{pmatrix} 0 & \frac{1}{2}(A_\Delta + A_\Delta^T) \\ \frac{1}{2}(A_\Delta^T + A_\Delta) & 0 \end{pmatrix},$$
$$\mu \ge \lambda_n\left(\tfrac{1}{2}(B_c + B_c^T)\right) - \rho\left(\tfrac{1}{2}(B_\Delta + B_\Delta^T)\right) + \lambda_n\begin{pmatrix} 0 & \frac{1}{2}(A_c - A_c^T) \\ \frac{1}{2}(A_c^T - A_c) & 0 \end{pmatrix} - \rho\begin{pmatrix} 0 & \frac{1}{2}(A_\Delta + A_\Delta^T) \\ \frac{1}{2}(A_\Delta^T + A_\Delta) & 0 \end{pmatrix}.$$

2. Main result

Now we present our main result. It generalizes and outperforms the results of Rohn and Hertz, and enables us to derive sharper bounds.

Theorem 5. Let $\mathbf{A}, \mathbf{B} \in \mathbb{IR}^{n \times n}$. Then for each eigenvalue $\lambda + i\mu \in \Lambda(\mathbf{A} + i\mathbf{B})$ we have
$$\underline{\lambda_n}\begin{pmatrix} \frac{1}{2}(\mathbf{A} + \mathbf{A}^T) & \frac{1}{2}(\mathbf{B}^T - \mathbf{B}) \\ \frac{1}{2}(\mathbf{B} - \mathbf{B}^T) & \frac{1}{2}(\mathbf{A} + \mathbf{A}^T) \end{pmatrix}^S \le \lambda \le \overline{\lambda_1}\begin{pmatrix} \frac{1}{2}(\mathbf{A} + \mathbf{A}^T) & \frac{1}{2}(\mathbf{B}^T - \mathbf{B}) \\ \frac{1}{2}(\mathbf{B} - \mathbf{B}^T) & \frac{1}{2}(\mathbf{A} + \mathbf{A}^T) \end{pmatrix}^S,$$
$$\underline{\lambda_n}\begin{pmatrix} \frac{1}{2}(\mathbf{B} + \mathbf{B}^T) & \frac{1}{2}(\mathbf{A} - \mathbf{A}^T) \\ \frac{1}{2}(\mathbf{A}^T - \mathbf{A}) & \frac{1}{2}(\mathbf{B} + \mathbf{B}^T) \end{pmatrix}^S \le \mu \le \overline{\lambda_1}\begin{pmatrix} \frac{1}{2}(\mathbf{B} + \mathbf{B}^T) & \frac{1}{2}(\mathbf{A} - \mathbf{A}^T) \\ \frac{1}{2}(\mathbf{A}^T - \mathbf{A}) & \frac{1}{2}(\mathbf{B} + \mathbf{B}^T) \end{pmatrix}^S.$$

Proof. Let $A \in \mathbf{A}$, $B \in \mathbf{B}$, let $\lambda + i\mu$ be an eigenvalue of $A + iB$, and let $x + iy$ be a corresponding eigenvector, normalized so that $1 = \|x + iy\|^2 = x^T x + y^T y$. Then $(A + iB)(x + iy) = (\lambda + i\mu)(x + iy)$, or, comparing the real and the imaginary parts separately,
$$Ax - By = \lambda x - \mu y,$$
$$Bx + Ay = \lambda y + \mu x.$$
Multiplying the first equation by $x^T$ and the second by $y^T$, and summing up, we obtain
$$x^T A x - x^T B y + y^T B x + y^T A y = \lambda(x^T x + y^T y) = \lambda.$$

Now, from the Courant–Fischer theorem [22–24] it follows that
$$\overline{\lambda_1}\begin{pmatrix} \frac{1}{2}(\mathbf{A} + \mathbf{A}^T) & \frac{1}{2}(\mathbf{B}^T - \mathbf{B}) \\ \frac{1}{2}(\mathbf{B} - \mathbf{B}^T) & \frac{1}{2}(\mathbf{A} + \mathbf{A}^T) \end{pmatrix}^S \ge \lambda_1\begin{pmatrix} \frac{1}{2}(A + A^T) & \frac{1}{2}(B^T - B) \\ \frac{1}{2}(B - B^T) & \frac{1}{2}(A + A^T) \end{pmatrix}$$
$$= \max_{\|(u,v)\| = 1} (u^T, v^T) \begin{pmatrix} \frac{1}{2}(A + A^T) & \frac{1}{2}(B^T - B) \\ \frac{1}{2}(B - B^T) & \frac{1}{2}(A + A^T) \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} \ge (x^T, y^T) \begin{pmatrix} \frac{1}{2}(A + A^T) & \frac{1}{2}(B^T - B) \\ \frac{1}{2}(B - B^T) & \frac{1}{2}(A + A^T) \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$$
$$= x^T \tfrac{1}{2}(A + A^T) x + x^T \tfrac{1}{2}(B^T - B) y + y^T \tfrac{1}{2}(B - B^T) x + y^T \tfrac{1}{2}(A + A^T) y = x^T A x + y^T A y + x^T (B^T - B) y = \lambda.$$

Similarly, we obtain the lower bound on $\lambda$. To show the lower and upper bounds on $\mu$, we can proceed analogously, or we can observe that the imaginary part of an eigenvalue of a complex interval matrix $\mathbf{A} + i\mathbf{B}$ is equal to the real part of the corresponding eigenvalue of $-i(\mathbf{A} + i\mathbf{B}) = \mathbf{B} - i\mathbf{A}$. $\square$

Theorem 5 reduces the problem of enclosing the complex eigenvalues of complex interval matrices to bounding the maximal and minimal eigenvalues of symmetric interval matrices. Thus, any bound on the extremal eigenvalues of symmetric interval matrices can be utilized. A discussion of various cheap and tight enclosing formulae is given in Hladík et al. [16]. Hladík et al. [20] developed an iterative method, the so-called filtering method, to sequentially contract an eigenvalue enclosure; even though it does not converge to the optimal interval hull in general, it usually shrinks the initial enclosure significantly in only several iterations. Employing the exponential Hertz formula (Theorem 2), we obtain the tightest bounds that Theorem 5 can provide. Applying the basic Rohn bounds from Theorem 1, we get the following corollary.

Corollary 1. Let $\mathbf{A}, \mathbf{B} \in \mathbb{IR}^{n \times n}$. Then for each $\lambda + i\mu \in \Lambda(\mathbf{A} + i\mathbf{B})$ we have
$$\lambda \le \lambda_1\begin{pmatrix} \frac{1}{2}(A_c + A_c^T) & \frac{1}{2}(B_c^T - B_c) \\ \frac{1}{2}(B_c - B_c^T) & \frac{1}{2}(A_c + A_c^T) \end{pmatrix} + \rho\begin{pmatrix} \frac{1}{2}(A_\Delta + A_\Delta^T) & \frac{1}{2}(B_\Delta^T + B_\Delta) \\ \frac{1}{2}(B_\Delta + B_\Delta^T) & \frac{1}{2}(A_\Delta + A_\Delta^T) \end{pmatrix}, \qquad (1)$$
$$\lambda \ge \lambda_n\begin{pmatrix} \frac{1}{2}(A_c + A_c^T) & \frac{1}{2}(B_c^T - B_c) \\ \frac{1}{2}(B_c - B_c^T) & \frac{1}{2}(A_c + A_c^T) \end{pmatrix} - \rho\begin{pmatrix} \frac{1}{2}(A_\Delta + A_\Delta^T) & \frac{1}{2}(B_\Delta^T + B_\Delta) \\ \frac{1}{2}(B_\Delta + B_\Delta^T) & \frac{1}{2}(A_\Delta + A_\Delta^T) \end{pmatrix},$$
$$\mu \le \lambda_1\begin{pmatrix} \frac{1}{2}(B_c + B_c^T) & \frac{1}{2}(A_c - A_c^T) \\ \frac{1}{2}(A_c^T - A_c) & \frac{1}{2}(B_c + B_c^T) \end{pmatrix} + \rho\begin{pmatrix} \frac{1}{2}(B_\Delta + B_\Delta^T) & \frac{1}{2}(A_\Delta + A_\Delta^T) \\ \frac{1}{2}(A_\Delta^T + A_\Delta) & \frac{1}{2}(B_\Delta + B_\Delta^T) \end{pmatrix},$$
$$\mu \ge \lambda_n\begin{pmatrix} \frac{1}{2}(B_c + B_c^T) & \frac{1}{2}(A_c - A_c^T) \\ \frac{1}{2}(A_c^T - A_c) & \frac{1}{2}(B_c + B_c^T) \end{pmatrix} - \rho\begin{pmatrix} \frac{1}{2}(B_\Delta + B_\Delta^T) & \frac{1}{2}(A_\Delta + A_\Delta^T) \\ \frac{1}{2}(A_\Delta^T + A_\Delta) & \frac{1}{2}(B_\Delta + B_\Delta^T) \end{pmatrix}.$$
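In code, Corollary 1 reduces to assembling the four symmetric $2n \times 2n$ block matrices and applying the Rohn-type bounds. A sketch under our own naming conventions, assuming the interval matrices are given by their lower and upper bound arrays:

```python
import numpy as np

def _rohn_block_bounds(mid, rad):
    """lambda_n(mid) - rho(rad) and lambda_1(mid) + rho(rad) for a symmetric
    midpoint block matrix and a nonnegative radius block matrix (Theorem 1)."""
    lam = np.linalg.eigvalsh(mid)                 # ascending eigenvalues
    rho = np.max(np.abs(np.linalg.eigvals(rad)))  # spectral radius
    return lam[0] - rho, lam[-1] + rho

def complex_eigenvalue_rectangle(A_lo, A_up, B_lo, B_up):
    """Rectangle enclosing Lambda(A + iB) according to Corollary 1."""
    Ac, AD = (A_lo + A_up) / 2, (A_up - A_lo) / 2
    Bc, BD = (B_lo + B_up) / 2, (B_up - B_lo) / 2
    SA, SB = (Ac + Ac.T) / 2, (Bc + Bc.T) / 2     # symmetric parts of midpoints
    KA, KB = (Ac - Ac.T) / 2, (Bc - Bc.T) / 2     # skew-symmetric parts
    RA, RB = (AD + AD.T) / 2, (BD + BD.T) / 2     # symmetrized radii
    lam = _rohn_block_bounds(np.block([[SA, -KB], [KB, SA]]),
                             np.block([[RA, RB], [RB, RA]]))
    mu = _rohn_block_bounds(np.block([[SB, KA], [-KA, SB]]),
                            np.block([[RB, RA], [RA, RB]]))
    return lam, mu    # (lambda_lo, lambda_up), (mu_lo, mu_up)
```

Tighter rectangles are obtained by replacing the Rohn-type helper with the filtering method or the Hertz enumeration, exactly as discussed above.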

For a real interval matrix $\mathbf{A}$ we obtain the same result as in Theorem 3. This is easy to see, since by putting $B_c = B_\Delta = 0$, e.g., in (1) we get

$$\lambda_1\begin{pmatrix} \frac{1}{2}(A_c + A_c^T) & \frac{1}{2}(B_c^T - B_c) \\ \frac{1}{2}(B_c - B_c^T) & \frac{1}{2}(A_c + A_c^T) \end{pmatrix} + \rho\begin{pmatrix} \frac{1}{2}(A_\Delta + A_\Delta^T) & \frac{1}{2}(B_\Delta^T + B_\Delta) \\ \frac{1}{2}(B_\Delta + B_\Delta^T) & \frac{1}{2}(A_\Delta + A_\Delta^T) \end{pmatrix}$$
$$= \lambda_1\begin{pmatrix} \frac{1}{2}(A_c + A_c^T) & 0 \\ 0 & \frac{1}{2}(A_c + A_c^T) \end{pmatrix} + \rho\begin{pmatrix} \frac{1}{2}(A_\Delta + A_\Delta^T) & 0 \\ 0 & \frac{1}{2}(A_\Delta + A_\Delta^T) \end{pmatrix} = \lambda_1\left(\tfrac{1}{2}(A_c + A_c^T)\right) + \rho\left(\tfrac{1}{2}(A_\Delta + A_\Delta^T)\right).$$

Thus, using tighter bounds for the eigenvalues of symmetric interval matrices than Rohn's, we obtain tighter bounds on the desired complex eigenvalues.

For a complex interval matrix $\mathbf{A} + i\mathbf{B}$ we obtain a stronger result than the bounds by Hertz (Theorem 4). This is because

$$\lambda_1\begin{pmatrix} \frac{1}{2}(A_c + A_c^T) & \frac{1}{2}(B_c^T - B_c) \\ \frac{1}{2}(B_c - B_c^T) & \frac{1}{2}(A_c + A_c^T) \end{pmatrix} \le \lambda_1\begin{pmatrix} \frac{1}{2}(A_c + A_c^T) & 0 \\ 0 & \frac{1}{2}(A_c + A_c^T) \end{pmatrix} + \lambda_1\begin{pmatrix} 0 & \frac{1}{2}(B_c^T - B_c) \\ \frac{1}{2}(B_c - B_c^T) & 0 \end{pmatrix}$$
$$= \lambda_1\left(\tfrac{1}{2}(A_c + A_c^T)\right) + \lambda_1\begin{pmatrix} 0 & \frac{1}{2}(B_c^T - B_c) \\ \frac{1}{2}(B_c - B_c^T) & 0 \end{pmatrix},$$

and similarly for the other terms.

Another consequence is obtained for Hermitian matrices. Hermitian matrices are of the form $A + iB$, where $A$ is symmetric and $B$ is skew-symmetric, and they always have real eigenvalues. Consider an interval Hermitian matrix $\mathbf{A} + i\mathbf{B}$, where $\mathbf{A} = \mathbf{A}^T$ and $\mathbf{B} = -\mathbf{B}^T$. Even though the family of matrices $\mathbf{A} + i\mathbf{B}$ contains not only Hermitian matrices, we can bound any eigenvalue $\lambda$ of a Hermitian matrix from $\mathbf{A} + i\mathbf{B}$ according to Theorem 5 as

$$\underline{\lambda_n}\begin{pmatrix} \mathbf{A} & \mathbf{B}^T \\ \mathbf{B} & \mathbf{A} \end{pmatrix}^S \le \lambda \le \overline{\lambda_1}\begin{pmatrix} \mathbf{A} & \mathbf{B}^T \\ \mathbf{B} & \mathbf{A} \end{pmatrix}^S.$$
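In code, this Hermitian special case amounts to forming the $2n \times 2n$ interval block matrix and reusing any symmetric-case solver, e.g. the Theorem 1 sketch above (again an illustration under our own naming; it assumes $\mathbf{A}$ is a symmetric and $\mathbf{B}$ a skew-symmetric interval matrix given by bound arrays):

```python
import numpy as np

def hermitian_eigenvalue_bounds(A_lo, A_up, B_lo, B_up):
    """Bounds on the real eigenvalues of Hermitian members of A + iB via the
    block interval matrix [[A, B^T], [B, A]] of Theorem 5."""
    M_lo = np.block([[A_lo, B_lo.T], [B_lo, A_lo]])   # entrywise lower bound
    M_up = np.block([[A_up, B_up.T], [B_up, A_up]])   # entrywise upper bound
    encl = rohn_enclosures(M_lo, M_up)   # the Theorem 1 sketch given earlier
    return encl[-1][0], encl[0][1]       # lower bound on lambda_2n, upper on lambda_1
```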

3. Examples

The following examples are illustrated by Figs. 1–4, where an approximation of the eigenvalue set is depicted using the Monte Carlo method. Different colors distinguish the eigenvalues into several groups by their continuity; however, since the eigenvalues overlap, the coloring may be less informative. In each of the figures, the four calculated enclosures are represented by rectangles. The largest one, the basic extent, corresponds to the Hertz method (Theorem 4), and the others to enclosures computed by our approach. Namely, the largest of them is obtained by utilizing the Rohn bounds (Theorem 1) and is identical with the previous one in the real case. The smaller one is the improvement by the filtering method, and the smallest one is determined by the exponential Hertz formula (Theorem 2).

[Fig. 1. Eigenvalue set and enclosures to Example 1.]
[Fig. 2. Eigenvalue set and enclosures to Example 2.]
[Fig. 3. Eigenvalue set and enclosures to Example 3.]
[Fig. 4. Eigenvalue set and enclosures to Example 4.]
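The Monte Carlo approximation used in the figures can be sketched as follows; note that sampling yields only an inner approximation of the eigenvalue set (the function name and sample count are our choices):

```python
import numpy as np

def sample_eigenvalue_set(A_lo, A_up, B_lo, B_up, samples=10000, seed=0):
    """Inner Monte Carlo approximation of Lambda(A + iB): eigenvalues of
    randomly sampled members of the interval family (cf. Figs. 1-4)."""
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(samples):
        A = rng.uniform(A_lo, A_up)   # elementwise uniform sample of A
        B = rng.uniform(B_lo, B_up)   # elementwise uniform sample of B
        pts.extend(np.linalg.eigvals(A + 1j * B))
    return np.array(pts)              # complex points; plot as (Re, Im)
```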

Example 1. Inspired by Seyranian et al. [25], consider a matrix of the form $A(s) = (I - ss^T)B(s)$, where

$$B(s) = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 4 \end{pmatrix} + i\begin{pmatrix} 5 & 0 & 4 \\ 0 & 5 & 2 \\ 4 & 2 & 0 \end{pmatrix} + 4i\begin{pmatrix} 0 & -s_1 - i s_2 & i s_3 \\ s_1 + i s_2 & 0 & -s_3 \\ -i s_3 & s_3 & 0 \end{pmatrix}.$$

Let the vector of parameters $s$ vary within $\mathbf{s} = ([0, 0.2], [0.9797, 1], 0)$.
The enclosure computed by the Hertz method (Theorem 4) is
$$\lambda \in [-5.6732,\ 8.5134], \qquad \mu \in [-7.4311,\ 11.8843].$$
Our approach depends on the solver used for the symmetric interval eigenvalue problem. Employing the Rohn bounds (Theorem 1) we obtain
$$\lambda \in [-4.6546,\ 7.6146], \qquad \mu \in [-5.7632,\ 10.3387],$$
which can be improved by the filtering method to
$$\lambda \in [-4.6546,\ 6.7421], \qquad \mu \in [-5.4017,\ 9.8787].$$
The tightest enclosure is obtained by the exponential Hertz formula (Theorem 2):
$$\lambda \in [-4.5180,\ 6.3031], \qquad \mu \in [-4.8237,\ 9.6227].$$
The results are illustrated in Fig. 1.

In a similar manner as in Example 1, we have also considered some real interval matrices. Since the first two enclosures are equal in the real case, we display just the results of Theorem 5 using the Rohn bounds, the filtering method and the Hertz formula.

Example 2 (Fig. 2). Consider the interval matrix [14, Example 4.3]

$$\mathbf{A} = \begin{pmatrix} 0 & -1 & -1 \\ 2 & [-1.399, -0.001] & 0 \\ 1 & 0.5 & -1 \end{pmatrix}.$$
The enclosures obtained by using the Rohn bounds, the filtering method and the Hertz formula are, respectively,
$$\lambda \in [-1.9068,\ 0.9702], \qquad \mu \in [-2.5191,\ 2.5191],$$
$$\lambda \in [-1.6474,\ 0.5205], \qquad \mu \in [-2.1934,\ 2.1934],$$
$$\lambda \in [-1.6474,\ 0.5205], \qquad \mu \in [-2.1112,\ 2.1112].$$


Example 3 (Fig. 3). Consider the interval matrix [12, Example 2]

$$\mathbf{A} = \begin{pmatrix} -1 & 0 & [-1, 1] \\ 0 & -1 & [-1, 1] \\ [-1, 1] & [-1, 1] & 0.1 \end{pmatrix}.$$
The enclosures calculated are, respectively,
$$\lambda \in [-2.4143,\ 1.5143], \qquad \mu \in [-1.4143,\ 1.4143],$$
$$\lambda \in [-2.0532,\ 1.1532], \qquad \mu \in [-1.4143,\ 1.4143],$$
$$\lambda \in [-1.9674,\ 1.0674], \qquad \mu \in [-1.4143,\ 1.4143].$$

Example 4 (Fig. 4). Consider the interval matrix [15, Example 1]

$$\mathbf{A} = \begin{pmatrix} [-3, -2] & [4, 5] & [4, 6] & [-1, 1.5] \\ [-4, -3] & [-4, -3] & [-4, -3] & [1, 2] \\ [-5, -4] & [2, 3] & [-5, -4] & [-1, 0] \\ [-1, 0.1] & [0, 1] & [1, 2] & [-4, 2.5] \end{pmatrix}.$$
We get the following enclosures:
$$\lambda \in [-8.8221,\ 3.4408], \qquad \mu \in [-10.7497,\ 10.7497],$$
$$\lambda \in [-7.4848,\ 3.3184], \qquad \mu \in [-9.4224,\ 9.4224],$$
$$\lambda \in [-7.3691,\ 3.2742], \qquad \mu \in [-8.7948,\ 8.7948].$$

Example 5. In [10], the authors proposed a new robust stability test for fractional-order linear time-invariant uncertain systems. The corresponding stability condition is
$$\max_i \mathrm{Re}\,\lambda_i(A) < 0, \quad \text{and} \quad \max_i \left| \arctan\frac{\mathrm{Im}\,\lambda_i(A)}{\mathrm{Re}\,\lambda_i(A)} \right| < \frac{\pi}{2}(2 - a),$$
where $a$, $1 < a < 2$, is the fractional commensurate order, and $A \in \mathbf{A}$. That is, the stable area of eigenvalues in the complex plane is the sector determined by the angle $\pi(2 - a)$ and the vertex at the origin. For $a = 1$, it would coincide with standard Hurwitz stability.
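Since any rectangular enclosure of the eigenvalue set is a superset, a sufficient test of this sector condition can be read off the rectangle's corners: if $\overline{\lambda} < 0$, the worst angle over the rectangle is attained at the smallest $|\mathrm{Re}|$ and the largest $|\mathrm{Im}|$. The following sketch of this reasoning is our own construction, not code from [10]:

```python
import numpy as np

def sector_stable_sufficient(lam_up, mu_lo, mu_up, alpha):
    """Sufficient fractional-order stability test (1 < alpha < 2) from a
    rectangular eigenvalue enclosure: every point (Re, Im) of the rectangle
    satisfies Re < 0 and |arctan(Im/Re)| < pi/2 * (2 - alpha).
    Conservative: may return False even for a stable system."""
    if lam_up >= 0:
        return False                     # rectangle touches the right half-plane
    mu_max = max(abs(mu_lo), abs(mu_up))
    # |arctan(t)| < phi  <=>  |t| < tan(phi)  for phi in (0, pi/2)
    return mu_max / abs(lam_up) < np.tan(np.pi / 2 * (2 - alpha))
```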

Therein, the authors transformed the problem to checking negativity of the maximal eigenvalue of a certain Hermitiancomplex interval matrix. For concreteness, consider the interval matrix [10, Example 1]

$$\mathbf{A} = \begin{pmatrix} [-1.8, -1.2] & [0.4, 0.6] & [0.8, 1.2] \\ [-1.2, -0.8] & [-3.6, -2.4] & [0.8, 1.2] \\ [-0.6, -0.4] & [-1.8, -1.2] & [-3, -2] \end{pmatrix}.$$

First, the complex interval matrix $\mathbf{A} + \mathbf{A}^T + i(\mathbf{A} - \mathbf{A}^T)$ is created. Next, its largest eigenvalue $\overline{\lambda} = -0.4$ is calculated by exhaustive inspection of all vertex matrices of $\mathbf{A}$.

In contrast, our approach using Theorem 5 yields $\overline{\lambda} \le 0.2042$ by employing Theorem 1 and $\overline{\lambda} \le -0.1437$ by employing Theorem 2. Even though the former simple formula is not able to conclude stability, the latter does.

If we slightly change the elements of $\mathbf{A}$, then cheaper sufficient conditions for stability come into play. Let us change $\mathbf{A}_{33}$ to $[-3, -2.01]$ instead of $[-3, -2]$. Then the Rohn bounds used to estimate the largest eigenvalue in Theorem 5 result in the upper bound $\overline{\lambda} \le 0.198$. Now, just one iteration of the efficient filtering method [20] decreases the bound to $\overline{\lambda} \le -0.0013$. This suffices to prove stability, avoiding the time-consuming enumeration of vertex matrices.

4. Conclusion

We presented a method for computing bounds on the eigenvalues of complex interval matrices. It works well not only in the complex case, but also for real interval matrices. The basic idea is to reduce the problem to the symmetric case and to utilize the various methods available for that case. Thus, by using the Rohn bounds or the filtering method, we obtain very cheap and efficient formulae outperforming the Hertz method in the complex case. By incorporating the symmetric Hertz method we get the tightest enclosure, but at the expense of exponential computing time; moreover, its superiority over the filtering method is not significant.

Overall, we obtain a method that gives quite tight bounds at very low computational cost. Thus, we may solve problems in much higher dimensions than many branch & bound algorithms can. Even though the bounds need not be exact, they are sufficiently accurate for many situations. However, when more accurate bounds are needed, one can switch to more exhaustive and expensive methods, and consider our approach as a sufficient condition or as initial bounds.

Acknowledgment

The author was partially supported by the Czech Science Foundation Grant P403/12/1947.

References

[1] A.S. Deif, The interval eigenvalue problem, Z. Angew. Math. Mech. 71 (1) (1991) 61–64.
[2] L. Kolev, S. Petrakieva, Assessing the stability of linear time-invariant continuous interval dynamic systems, IEEE Trans. Autom. Control 50 (3) (2005) 393–397.
[3] L. Kolev, Eigenvalue range determination for interval and parametric matrices, Int. J. Circ. Theory Appl. 38 (10) (2010) 1027–1061.
[4] G. Mayer, A unified approach to enclosure methods for eigenpairs, Z. Angew. Math. Mech. 74 (2) (1994) 115–128.
[5] J. Rohn, Bounds on eigenvalues of interval matrices, ZAMM Z. Angew. Math. Mech. 78 (suppl. 3) (1998) S1049–S1050.
[6] H.-S. Ahn, K.L. Moore, Y. Chen, Monotonic convergent iterative learning controller design based on interval model conversion, IEEE Trans. Autom. Control 51 (2) (2006) 366–371.
[7] R. Brayton, C. Tong, Stability of dynamical systems: a constructive approach, IEEE Trans. Circ. Syst. 26 (4) (1979) 224–234.
[8] M. Rotea, M. Corless, D. Da, I. Petersen, Systems with structured uncertainty: relations between quadratic and robust stability, IEEE Trans. Autom. Control 38 (5) (1993) 799–803.
[9] W. Zhang, S.Q. Shen, Z.Z. Han, Sufficient conditions for Hurwitz stability of matrices, Latin Am. Appl. Res. 38 (2008) 253–258.
[10] H.-S. Ahn, Y. Chen, I. Podlubny, Robust stability test of a class of linear time-invariant interval fractional-order system using Lyapunov inequality, Appl. Math. Comput. 187 (1) (2007) 27–34.
[11] G. Franzè, L. Carotenuto, A. Balestrino, New inclusion criterion for the stability of interval matrices, IEE Proc. Control Theory Appl. 153 (4) (2006) 478–482.
[12] Y. Xiao, R. Unbehauen, Robust Hurwitz and Schur stability test for interval matrices, in: Proceedings of the 39th IEEE Conference on Decision and Control, vol. 5, Sydney, Australia, 2000, pp. 4209–4214.
[13] J. Rohn, Stability of interval matrices: the real eigenvalue case, IEEE Trans. Autom. Control 37 (10) (1992) 1604–1605.
[14] D. Petkovski, Reduced conservatism in stability analysis of interval matrices, J. Optim. Theory Appl. 70 (3) (1991) 597–606.
[15] K. Wang, A.N. Michel, D. Liu, Necessary and sufficient conditions for the Hurwitz and Schur stability of interval matrices, IEEE Trans. Autom. Control 39 (6) (1994) 1251–1255.
[16] M. Hladík, D. Daney, E. Tsigaridas, Bounds on real eigenvalues and singular values of interval matrices, SIAM J. Matrix Anal. Appl. 31 (4) (2010) 2116–2129.
[17] J. Rohn, An algorithm for checking stability of symmetric interval matrices, IEEE Trans. Autom. Control 41 (1) (1996) 133–136.
[18] D. Hertz, The extreme eigenvalues and stability of real symmetric interval matrices, IEEE Trans. Autom. Control 37 (4) (1992) 532–535.
[19] D. Hertz, Interval analysis: eigenvalue bounds of interval matrices, in: C.A. Floudas, P.M. Pardalos (Eds.), Encyclopedia of Optimization, Springer, New York, 2009, p. 1689.
[20] M. Hladík, D. Daney, E.P. Tsigaridas, A filtering method for the interval eigenvalue problem, Appl. Math. Comput. 217 (12) (2011) 5236–5242.
[21] J. Rohn, A handbook of results on interval linear problems, 2005. <http://www.cs.cas.cz/rohn/handbook>.
[22] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., Johns Hopkins University Press, 1996.
[23] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[24] C.D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, 2000.
[25] A.P. Seyranian, O.N. Kirillov, A.A. Mailybaev, Coupling of eigenvalues of complex matrices at diabolic and exceptional points, J. Phys. A Math. Gen. 38 (8) (2005) 1723–1740.
