Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks

Springer Monographs in Mathematics

Leonid Bunimovich · Benjamin Webb

Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


More information about this series at http://www.springer.com/series/3733


Leonid Bunimovich, School of Mathematics, Georgia Institute of Technology, Atlanta, USA

Benjamin Webb, Department of Mathematics, Brigham Young University, Provo, UT, USA

ISSN 1439-7382; ISSN 2196-9922 (electronic)
ISBN 978-1-4939-1374-9; ISBN 978-1-4939-1375-6 (eBook)
DOI 10.1007/978-1-4939-1375-6
Springer New York Heidelberg Dordrecht London

Library of Congress Control Number: 2014944753

Mathematics Subject Classification: 05C82, 37N99, 65F30, 15A18, 34D20

© Springer Science+Business Media New York 2014

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


To Larissa and Rebekah


Foreword

This book provides a new approach to the analysis of networks and, more generally, to those multidimensional dynamical systems that have an irregular structure of interactions. Here, the term irregular structure means that the system's variables depend on each other in dissimilar ways. For instance, all-to-all or nearest-neighbor interactions have a regular structure, because each variable depends on the others in a similar manner.

In practice, this structure of interactions is represented by a graph, called the network's graph of interactions or the network's topology. Depending on the particular network, this graph may be directed or undirected, weighted or unweighted, with or without loops, with or without parallel edges, etc. In each case, the techniques provided in this book can be directly applied to these networks.

It is worth mentioning that although these methods are fairly new, they have already proven to be an efficient tool in some classical and more recent problems in applied mathematics. Here, these techniques are presented as a way in which to view and analyze real-world networks.

One of the major goals of this book is to make these methods and techniques accessible to researchers who deal with such networks. With this goal in mind, we note that the computations required to implement these techniques are remarkably straightforward. In fact, they can be carried out using any existing software sophisticated enough to perform elementary linear algebra.

In terms of the book's content, we note that each of the results is given with a mathematical proof. However, with the hope that this book will be read and used as well by nonmathematicians, the text is written so that those interested in applications can safely ignore these arguments and use the stated formulas and techniques directly. Still, we stress the fact that only a basic understanding of linear algebra is needed to understand the proofs.

The definitions and results we present are motivated and accompanied by many examples, which both nonmathematicians and mathematicians should appreciate.



The book also contains a large number of examples and figures depicting the graphs associated with particular networks as well as their various transformations.

Almost all of these examples deal with directed graphs. The reason is that directed graphs are more general objects than undirected graphs. However, the theory developed here works just as well for undirected graphs. This is important, for instance, in the study of real networks, since a large number of those networks have an undirected graph structure (topology).

Because real-life networks are often large and have a complicated structure, it is tempting to find ways of simplifying them in terms of both their size and complexity. What is important, though, is that some basic or fundamental property of the network be preserved in this process. Yet such an attempt seems doomed to failure. These are real networks, so we do not know much, if anything, about them, including which characteristic(s) we should retain. Moreover, there are potentially many ways in which a network could be reduced. Hence, there is first the problem of choosing which way the network should be reduced and second of determining what the reduced network tells us. Thus, many objections are immediately raised if one wants to reduce the size of a network.

From this point of view, our goal of reducing a network may seem overly ambitious. In fact, one could ask how it is possible even to represent an arbitrary network. The universally accepted answer is that this can be done by drawing a graph whose vertices (nodes) correspond to the network elements and whose edges (links) correspond to the directed interactions between these elements.

Equivalently, one can represent a network by a matrix A with entries Aij. In this representation, Aij is the strength or weight of the directed interaction between the ith and jth network elements, where Aij = 0 if these elements do not interact. Such a matrix is called the weighted adjacency matrix of a network. If the network's interaction strengths are not known, the nonzero entries of the matrix are set equal to 1, and A is called the (unweighted) adjacency matrix of the network. In practice, knowledge of a network's adjacency matrix is often the most one can hope to have.
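As a concrete illustration of this representation, the following sketch builds the weighted and unweighted adjacency matrices of a small hypothetical network. The edge list and weights are our own illustrative choices, not an example from the book, and we adopt the common convention that A[i, j] is the weight of the interaction from element i to element j.

```python
import numpy as np

# Hypothetical 3-element network: (i, j, weight) means element i
# influences element j with the given interaction strength.
edges = [(0, 1, 2.5), (1, 2, 1.0), (2, 0, 0.5), (0, 2, 3.0)]

n = 3
A = np.zeros((n, n))          # weighted adjacency matrix; A[i, j] = 0 means no interaction
for i, j, w in edges:
    A[i, j] = w

# If interaction strengths are unknown, fall back to the unweighted
# adjacency matrix: nonzero entries are set equal to 1.
A_unweighted = (A != 0).astype(int)
```

Here `A_unweighted` records only the network's topology, which, as discussed above, is often the most one can hope to know in practice.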

It is well known that a very basic characteristic of a matrix is its spectrum, i.e., its collection of eigenvalues including multiplicities. One of the main questions we address in this book is whether it is possible to reduce a network to some smaller network while preserving the network's spectral properties. Phrased another way, this question could be stated as whether it is possible to reduce the size of a network's adjacency matrix while maintaining the network's eigenvalues.

The immediate answer to this question is, of course, no. In fact, while presenting these results, we have had audience members protest that what we hope to do is impossible. Indeed, as everyone knows, the fundamental theorem of algebra states that an n × n matrix has n eigenvalues, while a smaller matrix has fewer.

However, our claim is that it is possible to reduce a matrix and preserve the matrix's spectral properties. In this book, we show that the answer to our question becomes yes if one considers a larger class of matrices, namely matrices with entries that are rational functions of a spectral parameter λ. That is, it is possible to reduce an n × n matrix with scalar entries to a smaller m × m matrix with functions as entries and maintain the matrix's spectrum. We refer to this process as isospectral matrix reduction.
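The reduction just described can be sketched symbolically. The block formula below, which folds the complement of an index set S into rational-function entries of the spectral parameter, is the standard form of isospectral matrix reduction; the 3 × 3 matrix is our own illustrative choice, not an example from the book.

```python
import sympy as sp

lam = sp.symbols('lam')

# Illustrative 3x3 adjacency matrix (a small star graph).
A = sp.Matrix([[0, 1, 1],
               [1, 0, 0],
               [1, 0, 0]])

# Reduce over the index set S = {0, 1}; its complement is Sbar = {2}.
S, Sbar = [0, 1], [2]

# Block formula for isospectral reduction:
#   R(lam) = A_SS + A_SSbar * (lam*I - A_SbarSbar)^(-1) * A_SbarS,
# whose entries are rational functions of lam.
R = A[S, S] + A[S, Sbar] * (lam * sp.eye(len(Sbar)) - A[Sbar, Sbar]).inv() * A[Sbar, S]

# Eigenvalues of the reduced matrix: solutions of det(R(lam) - lam*I) = 0.
p = sp.cancel((R - lam * sp.eye(len(S))).det())
reduced_eigs = set(sp.solve(sp.fraction(p)[0], lam))

original_eigs = set(sp.solve((A - lam * sp.eye(3)).det(), lam))
```

In this example the reduced spectrum is {±√2}, while the original spectrum is {0, ±√2}; the eigenvalue 0 that drops out is also an eigenvalue of the removed 1 × 1 block, which is the only way eigenvalues can be lost in such a reduction.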


At this point, the reader may think that by isospectrally reducing a network's adjacency matrix we are, in fact, shifting the complexity of a network's structure (topology) to the complexity of its edge weights. We pause to reassure our readers that we have considered this idea and that many facts and results in this book demonstrate that such is not the case. However, before moving on, we stress just one fundamental fact regarding isospectral reductions.

The structure (topology) of an isospectrally reduced network does not depend on the strengths (weights) of the initial unreduced network. It depends only on the network's structure. The structure of the reduced network will be the same regardless of the strengths of interactions in the initial network. Therefore, the isospectral reductions we consider really capture some hidden but intrinsic information regarding the structure of a network.

This approach to analyzing networks is based on ideas and methods from the theory of dynamical networks, which is a part of the modern theory of dynamical systems. The first dynamical networks addressed in this theory were the so-called coupled map lattices (CMLs). CMLs were introduced in the mid-1980s, almost simultaneously, by four physicists in four countries. The mathematical theory of CMLs was begun in [7], in which the first precise definitions of space-time chaos and a coherent structure were given. Nowadays, the theory of lattice dynamical systems is a respected part of contemporary dynamical systems theory (see, e.g., [13]).
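For readers unfamiliar with CMLs, the following is a minimal sketch of one: a diffusively coupled lattice of logistic maps with periodic boundary conditions. This is a standard textbook example of a CML, not one of the specific models cited above, and the parameter values are arbitrary illustrative choices.

```python
import numpy as np

def cml_step(x, eps=0.3, r=3.9):
    """One step of a diffusively coupled logistic lattice with periodic
    boundary conditions:
        x_i <- (1 - eps) * f(x_i) + (eps / 2) * (f(x_{i-1}) + f(x_{i+1})),
    where f(x) = r * x * (1 - x) is the local (site) dynamics."""
    f = r * x * (1 - x)
    return (1 - eps) * f + (eps / 2) * (np.roll(f, 1) + np.roll(f, -1))

# Random initial state on a lattice of 50 sites, iterated 100 steps.
rng = np.random.default_rng(0)
x = rng.random(50)
for _ in range(100):
    x = cml_step(x)
```

The nearest-neighbor coupling here is exactly the kind of regular interaction structure mentioned at the start of this foreword; the networks studied in this book replace it with an arbitrary, irregular graph of interactions.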

A number of remarkable findings of the late 1990s demonstrated that real networks have very complicated topologies [3, 19, 20, 23, 24, 30]. The first thought was that the ideas of dynamical systems theory and of statistical mechanics could be applied to such systems, as had been done in [7, 12] for CMLs. However, infinite lattices have a group translation property, which is missing if a graph of interactions has an irregular structure. An approach to dealing with this irregular structure was eventually developed in [1, 4], in which the following was observed.

Every dynamical network has three features: (i) the individual dynamics of the network elements, e.g., a single isolated neuron in a neural network; (ii) the interactions between the elements of a network; and (iii) the structure (or topology) of a network. In this framework, we assume that a network's structure does not change over time, so that it has a fixed structure of interactions. However, as we later point out, the transformations considered in this book could be useful for studying networks that do have a structure that evolves over time.

Observe that features (i) and (ii) of a network are dynamical systems. Thus, it is customary to deal with such systems by analyzing the combined influence of (i) and (ii), as is done in other spatially extended systems. Perhaps the most popular example of these is reaction–diffusion systems, in which nonlinear reactions push the system towards chaotic behavior while diffusion has a stabilizing effect. However, the question is what to do with (iii), which is clearly a static characteristic of the network.

As is shown in [1], the topology of a network can also be treated as a dynamical system generated by considering all infinite paths on the network's graph of interactions. This, together with the ideas from the theory of spatially extended systems, forms the basis of our approach.


This approach, as demonstrated in this book, has allowed for a variety of new results regarding dynamical systems, matrices, polynomials, and in particular, the stability of dynamical networks [8–10]. Most importantly, these techniques provide a reliable yet flexible tool for the analysis of real networks or, conversely, the design of networks with specific properties. We also demonstrate, via the proofs we give, that this approach is internally consistent, which makes this an especially promising tool for analyzing real networks.

In this regard, consider, for instance, a network with the set of elements (vertices) V. If the network is reduced to a smaller network with elements A ⊂ V, then this reduced network can generally be reduced to a network with an even smaller set of elements B ⊂ A ⊂ V. However, one could imagine that the initial network could be reduced over another set of elements C ⊂ V and that the resulting network could be reduced over the same final set of elements B ⊂ C ⊂ V.

The question is, would these two isospectral reductions of the initial network onto a network with the set of elements B be identical? This, of course, is a serious consideration if our approach is to be useful. If the two results are different, then the utility of our approach would be questionable, to say the least.

In fact, one of our results demonstrates that these two isospectral reductions are the same. Moreover, the graph that results from any sequence of reductions depends only on the last (minimal) set of elements over which the network is reduced. This is another indication that our approach is internally consistent and that it identifies some intrinsic characteristic of a network's topology.

Additionally, it suggests at least two immediate possibilities regarding the analysis of real networks. First, it is possible to reduce a network isospectrally over any collection of elements of the initial network. The major question then is, who should choose this subset of elements, i.e., which elements of a network are the most important? Naturally, the expert in the field, e.g., a biologist, medical doctor, or engineer, is the most logical candidate for this task. An expert can both choose the elements over which to reduce a network and interpret the meaning of the reduced network.

Another potential use of these techniques is the following. There exist roughly 10 to 15 characteristics of a network's elements (vertices) and interactions (edges) that are routinely used in the analysis of real networks. These include centrality, in- and out-degree, betweenness, etc. Our theory allows anyone to determine the core of a network, i.e., a collection of elements that are the most important from the point of view of any such characteristic. We show that for any characteristic that uniquely describes a subset of elements (or edges), there is a unique reduced network with these and only these elements (or edges).

Such core subnetworks can, for instance, be complete graphs or graphs with nearest-neighbor connections, that is, graphs in which all vertices (or edges) are similar with respect to a particular characteristic. One can then get an expert's interpretation of the meaning of this core and compare it with other cores obtained using different characteristics.

Another general finding is that a rule that uniquely selects a collection of network elements (or edges) defines a partition of the set of all networks into classes of spectrally equivalent networks. The idea is that spectrally equivalent networks have similar dynamics. This equivalence means that if the networks L and M and the networks M and N are spectrally equivalent with respect to a given rule, then the networks L and N are also spectrally equivalent with respect to that rule.

It is important to mention here that we require such rules to define a subset of elements (or interactions) uniquely. For instance, the rule "remove one element of the network" does not uniquely specify which element should be removed. However, the rule "remove all elements (vertices) with minimal centrality" uniquely defines the set of elements to be removed. Therefore, this second rule partitions the set of all networks into spectrally equivalent classes, while the first does not.

It is worth mentioning that an isospectral reduction with respect to a network's interactions (edges) is entirely analogous to an isospectral reduction with respect to its elements (vertices). One need only consider the dual of the initial network's graph of interactions, in which all vertices become edges and edges become vertices.

Having mentioned some of the applications of our approach, we would like to explain here what is principally new in our approach. In this respect, we have already mentioned that this approach requires the use of matrices with entries that are rational functions. But there is also something new from the point of view of graph theory, which is the theory so often used in the modern theory of networks.

We introduce the idea that there are special subsets of a graph's (network's) vertices (elements) over which the graph can be reduced. We call these special subsets the graph's structural sets. A collection of vertices is a structural set if its complement does not contain any cycles of the graph apart from loops, which are cycles of a single vertex. Every graph has at least one structural set. Moreover, in the case of an undirected graph, such sets have a specific form as a result of the graph's symmetries.

For different applications, we will sometimes use modifications or simplifications of this notion, but structural sets are at the heart of our theory. Once a structural set has been chosen, the entire graph (network) can be decomposed into a number of branches. These branches are paths between two vertices of the structural set that do not contain any other vertices of that set. Our procedure of isospectral reduction is, in a nutshell, the removal of all vertices that do not belong to the structural set. This is done by substituting each branch by a single edge and calculating the weights of these new edges.
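The structural-set condition described above (the complement may contain loops but no longer cycles) can be checked mechanically. The sketch below uses networkx; the function name and the small directed graph are our own illustrative choices, not examples from the book.

```python
import networkx as nx

def is_structural_set(G, S):
    """Check whether S is a structural set of the directed graph G:
    the subgraph induced on the complement of S must contain no cycles
    of G other than loops (cycles of a single vertex)."""
    complement = set(G.nodes) - set(S)
    H = G.subgraph(complement).copy()
    # Loops are explicitly allowed, so remove them before the cycle check.
    H.remove_edges_from(list(nx.selfloop_edges(H)))
    return nx.is_directed_acyclic_graph(H)

# Hypothetical graph: a 3-cycle 1 -> 2 -> 3 -> 1 plus a loop at vertex 4.
G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (4, 4)])
```

For this graph, `is_structural_set(G, {1})` is True, since the complement {2, 3, 4} contains only the loop at vertex 4, while `is_structural_set(G, {4})` is False, since the 3-cycle survives in the complement.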

It is important to note that in this procedure, we do not simply erase all vertices that are not in a structural set. We also add some new edges that were not present in the initial (larger) graph. Therefore, an isospectrally reduced graph is not a subgraph of the initial graph but a graph with a smaller number of vertices. Again, the same operation can be applied to the edges of our graph instead of its vertices if we wish to reduce the graph isospectrally to a graph with fewer edges.

Having described, to some extent, what is new in our approach, we now give a detailed description of the book's content. The first chapter deals with isospectral matrix reductions. Formally, one of the major goals of the book is to develop a set of tools that will allow one to compress a network in a way that preserves all information relative to its spectrum. In this regard, we are technically dealing with an isospectral reduction of a network's (weighted) adjacency matrix. Thus, in Chap. 1 we describe how a matrix can be reduced.

Chapter 2 is the most fundamental chapter of the book. It describes the procedure of isospectral network reduction first introduced in [9]. The fundamental concepts of a structural set, branches, and branch weights are introduced there. This chapter also develops the operations of branch expansion, branch merging, and branch reweighting and demonstrates how these operations are useful for the analysis of dynamical networks.

In particular, the operations of expanding, reweighting, and merging a network's branches allow one to transform a network isospectrally while keeping the network's edge weights restricted to some set. Mathematically, such sets must be unital subrings of the ring of rational functions. For example, the weights 0 and 1 form such a ring, as do the positive real numbers. That is, it is not necessary for isospectral transformations, including those that reduce the size of a network, to result in a network that has edges with weights as fancy as rational functions.

Chapter 3 deals with the global stability of nondelayed and time-delayed dynamical networks. By global stability we mean the existence of a globally attracting fixed point in a network's phase space. In this particular context, we are interested not in the entire spectrum of a dynamical network but rather in its spectral radius, i.e., the maximum absolute value of all eigenvalues. Therefore, we introduce other transformations that are simpler than the isospectral transformations of Chap. 2. These transformations either preserve the spectral radius of a network or modify it in a specific way [10].
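For reference, the spectral radius mentioned here is straightforward to compute numerically; a minimal sketch with an arbitrary illustrative matrix (not one from the book):

```python
import numpy as np

# Weighted adjacency matrix of a 3-cycle with all weights 0.5.
A = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5],
              [0.5, 0.0, 0.0]])

# Spectral radius: the maximum absolute value over all eigenvalues.
rho = max(abs(np.linalg.eigvals(A)))
```

For this matrix the eigenvalues are 0.5 times the cube roots of unity, so the spectral radius is exactly 0.5.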

The first of these transformations, which preserves the network's spectral radius, is called an isoradial transformation. The second is referred to as a bounded radial transformation, which allows one to transform the network in such a way that its spectral radius remains below a certain value. By making use of these transformations and some structural features of networks, it is possible to obtain stronger sufficient conditions for a dynamical network's global stability than those obtained in [1]. These improved estimates come from taking into account the local structure of a network's topology, particularly its branch structure.

One intrinsic feature of a real network is that its dynamics are time-delayed. Such delays are typically caused by the "physical" distance between network elements as well as by the time required for a network element to process incoming information before sending a signal to another element of the network.

Time delays can be of two types. If the current state of a network element depends on the state of another element at only a single arbitrary point in time, we say that this interaction has a single delay. If this interaction depends on a number of previous points in time, we refer to it as a multiple delay.

It is well known that by adding or removing time delays, one can destabilize dynamics, i.e., a globally stable dynamical network can become unstable. It is proved in Chap. 3 that the situation changes if one considers a slightly stronger property, which we call intrinsic global stability (see Definition 3.2).

A major and unexpected finding in this chapter is that if a network is intrinsically globally stable, then the addition and removal of single delays to the network's dynamics will not destabilize the network. Moreover, the network remains stable if we remove any of its multiple time delays. This result, in the case of single delays, allows one to analyze the stability of a time-delayed network in terms of a nondelayed network, which is usually much simpler, especially from a computational point of view.

Additionally, using this theory of time delays, we show that even if a network's dynamics are not explicitly time-delayed, the network still has what we refer to as implicit time delays. By removing a network's implicit time delays, we are able to construct a lower-dimensional dynamical network. This network, which we refer to as a network restriction, is similar in many ways to the graph reductions introduced in Chap. 2 and can be used to obtain improved estimates of a network's global stability.

In Chap. 4, we begin by considering the fundamental theorem known as Abel's impossibility theorem (or the Abel–Ruffini theorem). This theorem states that there is no general solution in radicals for polynomial equations of degree five or higher, i.e., there is no algebraic formula that represents the roots of a general polynomial of degree n > 4. Therefore, there is no algebraic formula for the eigenvalues of n × n matrices for n > 4.

However, interest in practical problems, such as the stability of wave motion, has stimulated development in a branch of linear algebra that deals with estimating the spectra of matrices. This is built on the fundamental theorem of algebra, which states that an n × n matrix with complex entries has exactly n eigenvalues, including multiplicities, in the complex plane. The goal in this particular theory is to find regions in the complex plane that contain the spectrum of a given matrix.

In Chap. 4, we demonstrate that by combining our method of isospectral reduction with any of the classical methods of eigenvalue estimation, we obtain better estimates than can be obtained using a particular classical method by itself [8]. In this context, better estimates mean that the regions in the complex plane achieved with the help of isospectral transformations are smaller than those obtained without the use of these reductions.
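The flavor of the classical regions being improved here can be seen in Gershgorin's theorem: every eigenvalue lies in the union of disks centered at the diagonal entries, with radii given by the off-diagonal absolute row sums. A minimal numerical sketch, with an arbitrary illustrative matrix of our own choosing:

```python
import numpy as np

# Arbitrary illustrative matrix (not an example from the book).
M = np.array([[4.0, 1.0, 0.0],
              [0.5, -2.0, 0.5],
              [0.0, 1.0, 1.0]])

# Gershgorin disks: center M[i, i], radius = sum of |off-diagonal| entries in row i.
centers = np.diag(M)
radii = np.sum(np.abs(M), axis=1) - np.abs(centers)

# By Gershgorin's theorem, every eigenvalue lies in the union of the disks.
eigs = np.linalg.eigvals(M)
covered = [any(abs(lam - c) <= r for c, r in zip(centers, radii)) for lam in eigs]
```

The isospectral reductions of Chap. 4 shrink such regions: applying a classical estimate to the reduced, rational-function-valued matrix yields smaller inclusion sets than applying it to the original matrix.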

One reason for this improvement is that our method uses more information about the structure of the corresponding matrix. Analogous results are obtained for estimating the spectra of the combinatorial and normalized graph Laplacians. We also show that the estimates of a network's spectra can be improved if we have some specific information regarding the network's structure (topology), e.g., some local information about a large graph.

Chapter 5 deals with the pseudospectra and inverse pseudospectra of matrices with complex entries. The pseudospectrum of a complex-valued matrix is a collection of numbers that are sufficiently close (within a tolerance ε > 0) to the eigenvalues of the matrix. In this chapter, we first extend the definition of pseudospectra to matrices with entries that are rational functions. Since this type of matrix also has a well-defined inverse spectrum, we also introduce the idea of an inverse pseudospectrum for these matrices and study the properties of these sets.

In particular, we show that the pseudospectrum of the reduced matrix is less susceptible to perturbations than the pseudospectrum of the original matrix [28].


Linear mass–spring networks are a major example in this chapter of applying the techniques of pseudospectra and inverse pseudospectra estimates. These networks also allow us to give a physical interpretation of pseudospectra and isospectral reductions.

The final chapter, Chap. 6, deals with yet another application of isospectral transformations. Here, we consider open chaotic dynamical systems that have a finite Markov partition. In these systems, we suppose that some element (or union of elements) of the Markov partition acts as a "hole." In this case, an orbit that hits the hole stays there forever.

A major characteristic of the dynamics in such systems is the system's survival probability. An open system's survival probability is the probability that an orbit avoids falling into the hole until some fixed point in time. We show that by transforming the underlying open dynamical system using our theory of isospectral transformations, we can obtain improved estimates of a system's survival probability [11].
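Survival probabilities of this kind can be estimated by direct simulation. As a hedged illustration, the sketch below uses the doubling map with the Markov hole [0, 1/4), a standard open-system example of our own choosing rather than one from the book:

```python
import random

def survival_probability(n_steps, n_orbits=100_000, seed=0):
    """Estimate the probability that an orbit of the doubling map
    x -> 2x mod 1 avoids the hole [0, 1/4) for n_steps iterations,
    starting from a uniformly random initial condition."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_orbits):
        x = rng.random()
        for _ in range(n_steps):
            if x < 0.25:      # the orbit falls into the hole and stays there
                break
            x = (2 * x) % 1
        else:
            survived += 1
    return survived / n_orbits
```

For this map the estimated survival probability decays roughly geometrically in the number of steps, and it is exactly this decay that the isospectral estimates of Chap. 6 sharpen.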

Overall, the current state of our theory of isospectral transformations is that in each setting in which it has been applied, this theory has led to either improved or entirely new results. Therefore, we are quite optimistic about its future applications. In particular, as this book demonstrates, the theory of isospectral network reductions is ripe for applications to real-life networks, and we hope that this book will help to stimulate such investigations.


Contents

1 Isospectral Matrix Reductions
   1.1 Matrices with Rational Function Entries
   1.2 Isospectral Matrix Reductions
   1.3 Sequential Reductions
   1.4 Spectral Inverse

2 Dynamical Networks and Isospectral Graph Reductions
   2.1 Dynamical Networks as Graphs
   2.2 Isospectral Graph Reductions
   2.3 Sequential Graph Reductions
   2.4 Equivalence Relations
   2.5 Weight-Preserving Isospectral Transformations
       2.5.1 Branch Expansions
   2.6 Isospectral Graph Transformations over Modified Weight Sets
       2.6.1 Branch Reweighting
       2.6.2 Branch Merging

3 Stability of Dynamical Networks
   3.1 Networks as Dynamical Systems
   3.2 Time-Delayed Dynamical Networks
   3.3 Graph Structure of a Dynamical Network
   3.4 Implicit Delays and Restrictions of Dynamical Networks

4 Improved Eigenvalue Estimates
   4.1 Gershgorin-Type Regions
   4.2 Brauer-Type Regions
   4.3 Brualdi-Type Regions
   4.4 Some Applications

5 Pseudospectra and Inverse Pseudospectra
   5.1 Pseudospectra
   5.2 Pseudospectra Under Isospectral Reduction
   5.3 Inverse Pseudoeigenvalues
   5.4 Eigenvalue Inclusions and Equivalence of Definitions

6 Improved Estimates of Survival Probabilities
   6.1 Open Dynamical Systems
   6.2 Piecewise Linear Functions
   6.3 Nonlinear Estimates
   6.4 Improved Escape Estimates

References

Index


Chapter 1
Isospectral Matrix Reductions

The main object of study in this chapter is not networks, which are the main focus of this book, but matrices. The reason is that although a network has more structure than its adjacency matrix, the adjacency matrix of a network is a very convenient and compact way of storing this structural information. We therefore postpone our analysis of a network's graph structure until the next chapter.

In the preface, we stated that one of the major goals of this book is to develop a set of mathematical tools that will allow us to reduce the size of a network while preserving all information relative to its spectrum. Since the spectrum of a network is the set of eigenvalues of its adjacency matrix, we are, in fact, looking for a way to reduce the size of a matrix while maintaining the matrix's set of eigenvalues.

This may be a bit surprising, especially if we consider the fundamental theorem of algebra, which states that an $n \times n$ matrix $A$ with complex entries has exactly $n$ eigenvalues (with multiplicities). Consequently, every matrix smaller than $A$ must have fewer than $n$ eigenvalues.

In this chapter, we will develop the basic tools that will allow us to fulfill this seemingly impossible task of reducing a matrix while maintaining its spectrum. The key is to consider matrices with entries that are not complex numbers but rather functions of a spectral parameter. For our purposes, the class of functions we will use is that of rational functions.

One of our main results in this chapter is to show that a matrix can be reduced in size to a smaller matrix with rational function entries in a way that essentially preserves its eigenvalues. This procedure, called isospectral matrix reduction, reduces a matrix over one of its principal submatrices. We show that a matrix can be reduced over any of its principal submatrices and that it is possible to give a formula for the reduced matrix.

Once we have reduced a matrix, a natural question is whether we can take the reduced matrix and reduce it a second time. The second half of this chapter deals with the question of whether a matrix can be sequentially reduced. Here, we prove a

© Springer Science+Business Media New York 2014
L. Bunimovich, B. Webb, Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks, Springer Monographs in Mathematics, DOI 10.1007/978-1-4939-1375-6_1


crucial result, that a matrix can be sequentially reduced and that the matrix resulting from a sequence of reductions does not depend on the particular sequence but only on the final submatrix over which it is reduced.

Aside from the basic theory of isospectral matrix reductions, we also introduce some concepts in this chapter that will be of use later in the book. As is true for the entire book, the practically oriented reader can skip the proofs and simply apply the formulas that are given to analyze a matrix or set of matrices of interest in any area of application. In this regard, we note that the computations required to find an isospectral matrix reduction are simple enough that they can be carried out with the aid of any standard software.

1.1 Matrices with Rational Function Entries

As previously stated, one of our major goals is to be able to reduce the size of a matrix in such a way that we preserve the matrix's spectrum, including the multiplicity of each eigenvalue. However, this presupposes that such reductions are possible.

If the matrix $A$ is in $\mathbb{C}^{n \times n}$, then the eigenvalues of $A$ are defined as the solutions of its characteristic equation; that is,

$$\sigma(A) = \{\lambda \in \mathbb{C} : \det(A - \lambda I) = 0\}.$$

Since the characteristic polynomial $\det(A - \lambda I)$ of $A$ has degree $n$, the fundamental theorem of algebra states that $A$ has exactly $n$ eigenvalues. Hence, if $B \in \mathbb{C}^{m \times m}$, where $m < n$, then $A$ and $B$ cannot have the same spectrum. This alone seems to imply that it is impossible to reduce the size of a matrix while preserving its eigenvalues. This conclusion is certainly true if we limit ourselves to matrices with scalar entries. The only remaining possibility, then, is to consider matrices whose entries are not complex numbers but some other mathematical object, presumably one that carries more information.

For the purposes of reducing a matrix, the class of entries we consider is that of rational functions of $\lambda$. Specifically, let $\mathbb{C}[\lambda]$ be the set of polynomials in the complex variable $\lambda$ with complex coefficients. We denote by $\mathbb{W}$ the set of rational functions of the form

$$\omega(\lambda) = p(\lambda)/q(\lambda),$$

where $p(\lambda), q(\lambda) \in \mathbb{C}[\lambda]$ are polynomials having no common linear factors, i.e., no common roots, and where $q(\lambda)$ is not identically zero.

Each rational function $\omega(\lambda) \in \mathbb{W}$ can be expressed in the form

$$\omega(\lambda) = \frac{a_i \lambda^i + a_{i-1}\lambda^{i-1} + \cdots + a_0}{b_j \lambda^j + b_{j-1}\lambda^{j-1} + \cdots + b_0},$$

where, without loss of generality, we take $b_j = 1$. The domain of $\omega(\lambda)$ consists of all but the finite number of complex numbers for which the polynomial $q(\lambda) = b_j\lambda^j + b_{j-1}\lambda^{j-1} + \cdots + b_0$ is zero.

Addition and multiplication on the set $\mathbb{W}$ are defined as follows. For $p(\lambda)/q(\lambda)$ and $r(\lambda)/s(\lambda)$ in $\mathbb{W}$, let

$$\Big(\frac{p}{q} + \frac{r}{s}\Big)(\lambda) = \frac{p(\lambda)s(\lambda) + q(\lambda)r(\lambda)}{q(\lambda)s(\lambda)}; \quad\text{and} \tag{1.1}$$

$$\Big(\frac{p}{q} \cdot \frac{r}{s}\Big)(\lambda) = \frac{p(\lambda)r(\lambda)}{q(\lambda)s(\lambda)}, \tag{1.2}$$

where the common linear factors on the right-hand sides of (1.1) and (1.2) are canceled. The set $\mathbb{W}$ is then a field under these operations of addition and multiplication.
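Concretely, the operations (1.1) and (1.2) are ordinary rational-function arithmetic followed by cancellation of common linear factors. A minimal sketch in Python using SymPy (our choice of tool; the book itself prescribes no software, and the variable names here are ours):

```python
import sympy as sp

lam = sp.symbols('lambda')

# Two elements of W, written as p/q and r/s.
w1 = (lam + 1)/(lam - 1)
w2 = (lam - 1)/lam

# Addition and multiplication as in (1.1) and (1.2): form the raw
# quotient, then cancel common linear factors with sp.cancel.
w_sum = sp.cancel(w1 + w2)
w_prod = sp.cancel(w1 * w2)   # the common factor lambda - 1 cancels
```

Here `w_prod` reduces to $(\lambda+1)/\lambda$, illustrating why the cancellation step in (1.1)–(1.2) matters: without it, products and sums would accumulate spurious common roots of numerator and denominator.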

Because we are primarily concerned with the eigenvalues of a matrix, which is a set that includes multiplicities, the following will be important. The element $\alpha$ of a set $A$ that includes multiplicities has multiplicity $m$ if there are $m$ elements of $A$ equal to $\alpha$. If $\alpha \in A$ with multiplicity $m$ and $\alpha \in B$ with multiplicity $n$, then

(i) the union $A \cup B$ is a set in which $\alpha$ has multiplicity $m + n$; and
(ii) the difference $A - B$ is a set in which $\alpha$ has multiplicity $m - n$ if $m - n > 0$, and $\alpha \notin A - B$ otherwise.

Definition 1.1. Let $\mathbb{W}^{n\times n}$ denote the set of $n \times n$ matrices with entries in $\mathbb{W}$. For a matrix $M(\lambda) \in \mathbb{W}^{n\times n}$, the determinant $\det\big(M(\lambda) - \lambda I\big)$ is given by

$$\det\big(M(\lambda) - \lambda I\big) = p(\lambda)/q(\lambda) \tag{1.3}$$

for some $p(\lambda)/q(\lambda) \in \mathbb{W}$. The spectrum (or set of eigenvalues) of $M(\lambda)$ is the set

$$\sigma(M) = \{\lambda \in \mathbb{C} : p(\lambda) = 0\}.$$

The inverse spectrum (or set of inverse eigenvalues) of $M(\lambda)$ is the set

$$\sigma^{-1}(M) = \{\lambda \in \mathbb{C} : q(\lambda) = 0\}.$$

Both $\sigma(M)$ and $\sigma^{-1}(M)$ are understood to be sets that include multiplicities. For example, if the polynomial $p(\lambda) \in \mathbb{C}[\lambda]$ in (1.3) factors as

$$p(\lambda) = \prod_{i=1}^{m} (\lambda - \alpha_i)^{n_i} \quad\text{for } \alpha_i \in \mathbb{C} \text{ and } n_i \in \mathbb{N},$$

then $\{\lambda \in \mathbb{C} : p(\lambda) = 0\}$ is the set in which each $\alpha_i$ has multiplicity $n_i$.


Example 1.1. Consider the matrix $M(\lambda) \in \mathbb{W}^{4\times 4}$ given by

$$M(\lambda) = \begin{bmatrix} 2\lambda + 2 & \frac{1}{\lambda} & 0 & \frac{1}{\lambda} \\ \frac{1}{\lambda} & 2\lambda + 2 & \frac{1}{\lambda} & 0 \\ 0 & \frac{1}{\lambda} & 2\lambda + 2 & \frac{1}{\lambda} \\ \frac{1}{\lambda} & 0 & \frac{1}{\lambda} & 2\lambda + 2 \end{bmatrix}. \tag{1.4}$$

As one can compute,

$$\det\big(M(\lambda) - \lambda I\big) = \frac{(\lambda + 2)^2\big(\lambda^2(\lambda + 2)^2 - 4\big)}{\lambda^2},$$

implying $\sigma(M) = \{-2, -2, -1 \pm i, -1 \pm \sqrt{3}\}$. Here we note that although $M$ is a $4 \times 4$ matrix, it has six eigenvalues including multiplicities. This gives us our first example of a matrix with more eigenvalues than either rows or columns.

Example 1.1 suggests that it may be possible to reduce the size of a network (or matrix) while at the same time preserving its spectrum. For instance, the matrix

$$A = \begin{bmatrix} -2 & 1 & 1 & 0 & 0 & 0 \\ 0 & -1+i & 0 & 1 & 0 & 0 \\ 0 & 0 & -1-i & 0 & 1 & 0 \\ 0 & 0 & 0 & -1-\sqrt{3} & 0 & 1 \\ 0 & 0 & 0 & 0 & -1+\sqrt{3} & 1 \\ 0 & 0 & 0 & 0 & 0 & -2 \end{bmatrix} \tag{1.5}$$

and the matrix $M(\lambda)$ in Example 1.1 have the same spectrum. However, it is not at all obvious whether some procedure exists that would allow us to reduce the matrix $A$ to the smaller matrix $M(\lambda)$, or what such a procedure might be.
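Claims like those of Example 1.1 are easy to check with a computer algebra system. The following SymPy sketch (our code; variable names are ours) computes $\det(M(\lambda) - \lambda I)$ as a reduced rational function, reads off $\sigma(M)$ as the root multiset of its numerator, and confirms that the upper triangular matrix $A$ of (1.5) has the same six eigenvalues, namely its diagonal entries:

```python
import sympy as sp

lam = sp.symbols('lambda')

# M(lambda) from Example 1.1 (Eq. 1.4).
M = sp.Matrix([
    [2*lam + 2, 1/lam,     0,         1/lam],
    [1/lam,     2*lam + 2, 1/lam,     0],
    [0,         1/lam,     2*lam + 2, 1/lam],
    [1/lam,     0,         1/lam,     2*lam + 2],
])

# det(M - lambda*I) as a reduced rational function p/q.
det = sp.cancel((M - lam * sp.eye(4)).det())
p, q = sp.fraction(det)

# sigma(M): root multiset of the numerator p -- six eigenvalues,
# although M is only 4 x 4.
spectrum = sp.roots(sp.Poly(p, lam))

# The upper triangular matrix A of (1.5); its eigenvalues are its
# diagonal entries.
A = sp.Matrix([
    [-2, 1, 1, 0, 0, 0],
    [0, -1 + sp.I, 0, 1, 0, 0],
    [0, 0, -1 - sp.I, 0, 1, 0],
    [0, 0, 0, -1 - sp.sqrt(3), 0, 1],
    [0, 0, 0, 0, -1 + sp.sqrt(3), 1],
    [0, 0, 0, 0, 0, -2],
])
```

One finds the root multiset $\{-2, -2, -1 \pm i, -1 \pm \sqrt{3}\}$, and every diagonal entry of $A$ appears among these roots.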

Since $\mathbb{C} \subset \mathbb{W}$, we note that Definition 1.1 is an extension of the standard definition of the eigenvalues of a matrix to the matrices $\mathbb{W}^{n\times n}$. In particular, if the matrix $M(\lambda)$ is in $\mathbb{C}^{n\times n}$, then $\sigma(M)$ is the standard set of eigenvalues of $M$.

Because our motivation is to extend the spectral theory of matrices, our standard practice will be to take concepts used in the context of scalar-valued matrices and demonstrate that they can be applied to $\mathbb{W}^{n\times n}$.

For instance, the matrix $A(\lambda) \in \mathbb{W}^{n\times n}$ is said to be invertible if there is a matrix $B(\lambda) \in \mathbb{W}^{n\times n}$ such that $A(\lambda)B(\lambda) = I$, the $n \times n$ identity matrix. If $A(\lambda)$ is invertible, we use the standard notation $A(\lambda)^{-1}$ to denote its inverse. As an example, if

$$M(\lambda) = \begin{bmatrix} \frac{1}{\lambda} & 1 \\ 0 & \lambda \end{bmatrix}, \quad\text{then}\quad M(\lambda)^{-1} = \begin{bmatrix} \lambda & -1 \\ 0 & \frac{1}{\lambda} \end{bmatrix}.$$


In what follows, we may, for convenience, suppress the dependence of the matrix $M(\lambda) \in \mathbb{W}^{n\times n}$ on $\lambda$ and simply write $M$. One reason for this is that for much of the theory developed here, we do not evaluate $M(\lambda)$ at any particular point $\lambda \in \mathbb{C}$. Rather, we consider $M$ formally as a matrix with rational function entries, and not as a function of the spectral parameter $\lambda$.

However, when we do consider the matrix $M(\lambda) \in \mathbb{W}^{n\times n}$ to be a function of $\lambda$, we mean that $M$ is the function

$$M : \operatorname{dom}(M) \to \mathbb{C}^{n\times n},$$

where $\operatorname{dom}(M)$, the set of points at which each entry of $M(\lambda)$ is defined, consists of all but a finite number of complex numbers. Surprisingly, it may be the case that $\sigma(M)$ is not a subset of $\operatorname{dom}(M)$, as the following example shows.

Example 1.2. Consider the matrix $M(\lambda) \in \mathbb{W}^{2\times 2}$ given by

$$M(\lambda) = \begin{bmatrix} \lambda_0 & (\lambda - \lambda_0)^{-1} \\ 0 & \lambda_0 \end{bmatrix},$$

where $\lambda_0 \in \mathbb{C}$. As one can compute, $\det(M(\lambda) - \lambda I) = (\lambda - \lambda_0)^2$, implying that $\sigma(M) = \{\lambda_0, \lambda_0\}$. Therefore, $\sigma(M)$ is not a subset of $\operatorname{dom}(M) = \mathbb{C} - \{\lambda_0\}$.

1.2 Isospectral Matrix Reductions

Having introduced the notions of the spectrum and inverse spectrum of a matrix with rational function entries, we can now describe an isospectral reduction of a matrix $M \in \mathbb{W}^{n\times n}$. The major goals of this section are, first, to describe the process of isospectral reduction and, second, to compare the spectrum of a reduced matrix with that of the original unreduced matrix.

For $M \in \mathbb{W}^{n\times n}$, let $N = \{1, \dots, n\}$. If the sets $R, C \subseteq N$ are nonempty, we denote by $M_{RC}$ the $|R| \times |C|$ submatrix of $M$ with rows indexed by $R$ and columns by $C$. Suppose that the set $S \subseteq N$ and its complement $\bar{S} = N - S$ are both nonempty. The Schur complement $M/M_{\bar{S}\bar{S}} \in \mathbb{W}^{|S|\times|S|}$ of $M_{\bar{S}\bar{S}}$ in $M$ is the matrix

$$M/M_{\bar{S}\bar{S}} = M_{SS} - M_{S\bar{S}}\, M_{\bar{S}\bar{S}}^{-1} M_{\bar{S}S}, \tag{1.6}$$

assuming that the submatrix $M_{\bar{S}\bar{S}}$ is invertible.

The Schur complement arises in many applications (see, e.g., [18]). For our purposes, the Schur complement allows us to define the reduction of a matrix $M \in \mathbb{W}^{n\times n}$.


Definition 1.2. For $M(\lambda) \in \mathbb{W}^{n\times n}$, let $S$ and $\bar{S}$ form a nonempty partition of $N$. The isospectral reduction of $M$ over the set $S$ is the matrix

$$\mathcal{R}_\lambda(M; S) = M_{SS} - M_{S\bar{S}}\,(M_{\bar{S}\bar{S}} - \lambda I)^{-1} M_{\bar{S}S} \in \mathbb{W}^{|S|\times|S|} \tag{1.7}$$

if the matrix $M_{\bar{S}\bar{S}} - \lambda I$ is invertible.

As can be seen from Definition 1.2, the reduced matrix $\mathcal{R}_\lambda(M; S)$ exists if and only if the matrix $M_{\bar{S}\bar{S}} - \lambda I$ is invertible. Moreover, $\mathcal{R}_\lambda(M; S)$ is a Schur complement plus a multiple of the identity:

$$\mathcal{R}_\lambda(M; S) = (M - \lambda I)/(M_{\bar{S}\bar{S}} - \lambda I) + \lambda I. \tag{1.8}$$

In what follows, we will more often than not suppress the dependence of the reduced matrix $\mathcal{R}_\lambda(M; S)$ on $\lambda$ and instead write it as $\mathcal{R}(M; S)$.
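In practice, Definition 1.2 amounts to a few lines in any computer algebra system. The following SymPy sketch (the function name and 0-based indexing are our conventions, not the book's) implements Eq. (1.7) and applies it to the matrix of Example 1.3 below:

```python
import sympy as sp

lam = sp.symbols('lambda')

def isospectral_reduction(M, S):
    """Isospectral reduction R_lambda(M; S) of Eq. (1.7).

    M : square sympy Matrix whose entries are rational functions of lam
    S : 0-based indices of the rows/columns that are kept
    """
    n = M.shape[0]
    Sbar = [i for i in range(n) if i not in S]
    MSS = M.extract(S, S)
    MSb = M.extract(S, Sbar)
    MbS = M.extract(Sbar, S)
    Mbb = M.extract(Sbar, Sbar)
    R = MSS - MSb * (Mbb - lam * sp.eye(len(Sbar))).inv() * MbS
    # Cancel common linear factors so each entry is in lowest terms.
    return R.applyfunc(sp.cancel)

# The 6 x 6 matrix of Example 1.3, reduced over S = {1, 2}
# (0-based indices [0, 1]).
M = sp.Matrix([
    [0, 0, 1, 1, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
])
R = isospectral_reduction(M, [0, 1])
```

The result agrees with the $2 \times 2$ matrix computed by hand in Example 1.3.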

Example 1.3. Consider the matrix $M \in \mathbb{W}^{6\times 6}$ with $(0,1)$-entries given by

$$M = \begin{bmatrix} 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix}.$$

For $S = \{1, 2\}$ and $\bar{S} = \{3, 4, 5, 6\}$, one can compute that

$$(M_{\bar{S}\bar{S}} - \lambda I)^{-1} = \begin{bmatrix} \frac{1}{1-\lambda} & 0 & 0 & 0 \\ 0 & \frac{1}{1-\lambda} & 0 & 0 \\ 0 & 0 & -\frac{1}{\lambda} & 0 \\ 0 & 0 & 0 & -\frac{1}{\lambda} \end{bmatrix}.$$

The isospectral reduction of $M$ over $S = \{1, 2\}$ is then

$$\mathcal{R}(M; S) = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} \frac{1}{1-\lambda} & 0 & 0 & 0 \\ 0 & \frac{1}{1-\lambda} & 0 & 0 \\ 0 & 0 & -\frac{1}{\lambda} & 0 \\ 0 & 0 & 0 & -\frac{1}{\lambda} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{\lambda-1} & \frac{1}{\lambda-1} \\[2pt] \frac{1}{\lambda} & \frac{\lambda+1}{\lambda} \end{bmatrix} \in \mathbb{W}^{2\times 2}.$$

If a matrix has an isospectral reduction, the spectrum and inverse spectrum of the isospectral reduction and the original matrix are related in the following way.


Theorem 1.1 (Spectrum and Inverse Spectrum of Isospectral Reductions). For $M(\lambda) \in \mathbb{W}^{n\times n}$, let $S$ and $\bar{S}$ form a nonempty partition of $N = \{1, \dots, n\}$. If $\mathcal{R}_\lambda(M; S)$ exists, then its spectrum and inverse spectrum are given by

$$\sigma\big(\mathcal{R}(M; S)\big) = \big(\sigma(M) \cup \sigma^{-1}(M_{\bar{S}\bar{S}})\big) - \big(\sigma(M_{\bar{S}\bar{S}}) \cup \sigma^{-1}(M)\big); \quad\text{and}$$

$$\sigma^{-1}\big(\mathcal{R}(M; S)\big) = \big(\sigma(M_{\bar{S}\bar{S}}) \cup \sigma^{-1}(M)\big) - \big(\sigma(M) \cup \sigma^{-1}(M_{\bar{S}\bar{S}})\big).$$

Proof. For $M \in \mathbb{W}^{n\times n}$, we may assume, without loss of generality, that $M$ has the block matrix form

$$M = \begin{bmatrix} M_{\bar{S}\bar{S}} & M_{\bar{S}S} \\ M_{S\bar{S}} & M_{SS} \end{bmatrix}, \tag{1.9}$$

where $M_{\bar{S}\bar{S}} - \lambda I$ is invertible.

Note that the determinant of a matrix and that of its Schur complement are related by the identity

$$\det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(A) \cdot \det(D - CA^{-1}B), \tag{1.10}$$

provided that the submatrix $A$ is invertible. Using this identity on the matrix $M - \lambda I$ yields

$$\det(M - \lambda I) = \det(M_{\bar{S}\bar{S}} - \lambda I) \cdot \det\big((M_{SS} - \lambda I) - M_{S\bar{S}}(M_{\bar{S}\bar{S}} - \lambda I)^{-1} M_{\bar{S}S}\big).$$
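The block determinant identity (1.10) is easy to sanity-check on a concrete example; the $4 \times 4$ matrix below is our own, chosen only so that the upper left block is invertible:

```python
import sympy as sp

# Check det([[A, B], [C, D]]) = det(A) * det(D - C A^{-1} B)
# on a concrete 4 x 4 matrix split into 2 x 2 blocks.
M = sp.Matrix([
    [2, 1, 0, 3],
    [1, 3, 1, 0],
    [0, 2, 1, 1],
    [1, 0, 2, 2],
])
A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]

lhs = M.det()
rhs = A.det() * (D - C * A.inv() * B).det()
# Both sides equal -24 for this matrix.
```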

Therefore,

$$\det\big(\mathcal{R}(M; S) - \lambda I\big) = \frac{\det(M - \lambda I)}{\det(M_{\bar{S}\bar{S}} - \lambda I)}. \tag{1.11}$$

To compare the eigenvalues of $\mathcal{R}(M; S)$, $M$, and $M_{\bar{S}\bar{S}}$, write

$$\det(M - \lambda I) = \frac{p(\lambda)}{q(\lambda)} \quad\text{and}\quad \det(M_{\bar{S}\bar{S}} - \lambda I) = \frac{t(\lambda)}{u(\lambda)}$$

for some $p(\lambda)/q(\lambda),\, t(\lambda)/u(\lambda) \in \mathbb{W}$. Hence

$$\det\big(\mathcal{R}(M; S) - \lambda I\big) = \frac{p(\lambda)u(\lambda)}{q(\lambda)t(\lambda)}.$$

Let $P = \{\lambda \in \mathbb{C} : p(\lambda) = 0\}$, $Q = \{\lambda \in \mathbb{C} : q(\lambda) = 0\}$, $T = \{\lambda \in \mathbb{C} : t(\lambda) = 0\}$, and $U = \{\lambda \in \mathbb{C} : u(\lambda) = 0\}$ be the corresponding sets that include multiplicities. By canceling common linear factors, Definition 1.1 implies


$$\sigma\big(\mathcal{R}(M; S)\big) = \{\lambda \in \mathbb{C} : p(\lambda)u(\lambda) = 0\} - \{\lambda \in \mathbb{C} : q(\lambda)t(\lambda) = 0\} = (P \cup U) - (Q \cup T); \quad\text{and}$$

$$\sigma^{-1}\big(\mathcal{R}(M; S)\big) = \{\lambda \in \mathbb{C} : q(\lambda)t(\lambda) = 0\} - \{\lambda \in \mathbb{C} : p(\lambda)u(\lambda) = 0\} = (Q \cup T) - (P \cup U).$$

Since $P = \sigma(M)$, $Q = \sigma^{-1}(M)$, $T = \sigma(M_{\bar{S}\bar{S}})$, and $U = \sigma^{-1}(M_{\bar{S}\bar{S}})$, the result follows. $\square$

Since a matrix $M \in \mathbb{C}^{n\times n}$ has no inverse spectrum (i.e., $\sigma^{-1}(M) = \emptyset$), Theorem 1.1 applied to a complex-valued matrix has the following corollary.

Corollary 1.1. For $M \in \mathbb{C}^{n\times n}$, let $S$ and $\bar{S}$ form a nonempty partition of $N$. Then

(i) $\sigma\big(\mathcal{R}(M; S)\big) = \sigma(M) - \sigma(M_{\bar{S}\bar{S}})$; and
(ii) $\sigma^{-1}\big(\mathcal{R}(M; S)\big) = \sigma(M_{\bar{S}\bar{S}}) - \sigma(M)$.

Example 1.4. Let $M$ and $S$ be as in Example 1.3. As one can compute, the spectrum of $M$ and the spectrum of the submatrix $M_{\bar{S}\bar{S}}$ are given by $\sigma(M) = \{2, -1, 1, 1, 0, 0\}$ and $\sigma(M_{\bar{S}\bar{S}}) = \{1, 1, 0, 0\}$. From Corollary 1.1, we have

$$\sigma\big(\mathcal{R}(M; S)\big) = \{2, -1, 1, 1, 0, 0\} - \{1, 1, 0, 0\} = \{2, -1\}; \quad\text{and}$$

$$\sigma^{-1}\big(\mathcal{R}(M; S)\big) = \{1, 1, 0, 0\} - \{2, -1, 1, 1, 0, 0\} = \emptyset.$$

Observe that by reducing $M$ over $S$, we lose all eigenvalues in the spectrum of the submatrix, $\sigma(M_{\bar{S}\bar{S}}) = \{1, 1, 0, 0\}$.
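The bookkeeping of Example 1.4 can be reproduced mechanically. A SymPy sketch (our code, reusing the reduced matrix computed in Example 1.3):

```python
import sympy as sp

lam = sp.symbols('lambda')

# M and S = {1, 2} from Examples 1.3 and 1.4 (0-based indices [0, 1]).
M = sp.Matrix([
    [0, 0, 1, 1, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
])
sigma_M = M.eigenvals()                                      # sigma(M)
sigma_sub = M.extract([2, 3, 4, 5], [2, 3, 4, 5]).eigenvals()

# The reduced matrix computed in Example 1.3.
R = sp.Matrix([[1/(lam - 1), 1/(lam - 1)],
               [1/lam, (lam + 1)/lam]])
p, q = sp.fraction(sp.cancel((R - lam * sp.eye(2)).det()))
sigma_R = sp.roots(sp.Poly(p, lam))
```

One finds $\sigma(M) = \{2, -1, 1, 1, 0, 0\}$, $\sigma(M_{\bar{S}\bar{S}}) = \{1, 1, 0, 0\}$, and $\sigma(\mathcal{R}(M;S)) = \{2, -1\}$; moreover the denominator $q$ is constant, so $\sigma^{-1}(\mathcal{R}(M;S)) = \emptyset$, exactly as Corollary 1.1 predicts.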

Theorem 1.1 describes exactly which eigenvalues we gain from an isospectral reduction and which we may lose. Specifically, by isospectrally reducing the matrix $M$ over $S$, we always gain the inverse eigenvalues $\sigma^{-1}(M_{\bar{S}\bar{S}})$ and lose all eigenvalues in the set $\sigma(M_{\bar{S}\bar{S}}) \cup \sigma^{-1}(M)$.

In this sense, an isospectral reduction of a matrix preserves the spectral information of the original matrix. However, it may not always be possible to reduce a matrix $M \in \mathbb{W}^{n\times n}$ over a particular set $S \subseteq N$.

Example 1.5. Consider the matrix $M \in \mathbb{W}^{2\times 2}$ given by

$$M = \begin{bmatrix} 1 & 1 \\ 1 & \lambda \end{bmatrix}. \tag{1.12}$$

For $S = \{1\}$ and $\bar{S} = \{2\}$, note that $M_{\bar{S}\bar{S}} - \lambda I = [0]$, which is not invertible. Therefore, $M$ cannot be isospectrally reduced over $S$.

In general, there is no way to know beforehand whether the isospectral reduction $\mathcal{R}(M; S)$ exists without attempting to compute $(M_{\bar{S}\bar{S}} - \lambda I)^{-1}$. However, matrices in the following subset of $\mathbb{W}^{n\times n}$ can always be reduced over every nonempty subset $S \subset N$.


For $p(\lambda) \in \mathbb{C}[\lambda]$, let $\deg(p)$ denote the degree of the polynomial $p(\lambda)$. If the rational function $w(\lambda)$ is equal to $p(\lambda)/q(\lambda)$, where $p(\lambda), q(\lambda) \in \mathbb{C}[\lambda]$ and $p(\lambda) \neq 0$, we define the degree of the rational function $w(\lambda)$ by

$$\pi(w) = \deg(p) - \deg(q).$$

When $p(\lambda) = 0$, we let $\pi(w) = 0$.
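The degree $\pi$ is straightforward to compute with a computer algebra system; a small SymPy helper (our code) makes the convention explicit:

```python
import sympy as sp

lam = sp.symbols('lambda')

def pi_degree(w):
    """The degree pi(w) = deg(p) - deg(q) of w = p/q in W, with pi(0) = 0."""
    p, q = sp.fraction(sp.cancel(w))
    if p == 0:
        return 0
    return sp.degree(p, lam) - sp.degree(q, lam)
```

For instance, $\pi\big((\lambda+1)/(\lambda^2+1)\big) = -1$, so $(\lambda+1)/(\lambda^2+1)$ lies in the set $\mathbb{W}_\pi$ defined next, while $\pi(\lambda^3) = 3$, so $\lambda^3$ does not.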

Definition 1.3. Let $\mathbb{W}_\pi$ be the set of rational functions

$$\mathbb{W}_\pi = \{w(\lambda) \in \mathbb{W} : \pi(w) \leq 0\},$$

and let $\mathbb{W}_\pi^{n\times n}$ be the set of $n \times n$ matrices with entries in $\mathbb{W}_\pi$.

The set $\mathbb{W}_\pi \subset \mathbb{W}$ consists of the rational functions for which the degree of the numerator is less than or equal to the degree of the denominator. To describe why this collection of rational functions is so useful in the theory of isospectral reductions, we begin by proving the following statement.

Lemma 1.1. Suppose $\omega_i(\lambda) = p_i(\lambda)/q_i(\lambda)$, where $p_i(\lambda), q_i(\lambda) \in \mathbb{C}[\lambda]$ and $q_i(\lambda)$ is nonzero for $1 \leq i \leq n$. For $1 \leq i, j \leq n$, the following properties hold:

$$\pi\Big(\sum_{i=1}^{n} \omega_i\Big) \leq \max_{1 \leq i \leq n}\big\{\pi(\omega_i) : \omega_i \neq 0\big\}; \tag{1.13}$$

$$\pi\Big(\prod_{i=1}^{n} \omega_i\Big) = \begin{cases} \sum_{i=1}^{n} \pi(\omega_i) & \text{if } \omega_i \neq 0 \text{ for all } i \in \{1, \dots, n\}, \\ 0 & \text{otherwise}; \end{cases} \tag{1.14}$$

$$\pi(\omega_i/\omega_j) = \begin{cases} \pi(\omega_i) - \pi(\omega_j) & \text{if } \omega_i \neq 0, \\ 0 & \text{otherwise}, \end{cases} \quad\text{for } \omega_j \neq 0; \text{ and} \tag{1.15}$$

$$\pi(\omega_i - \lambda) = 1 \quad\text{for } \omega_i(\lambda) \in \mathbb{W}_\pi. \tag{1.16}$$

In (1.13), equality holds whenever the maximum is attained by exactly one of the $\omega_i$, since the leading terms can then not cancel.

Proof. We first prove equations (1.13) and (1.14) for $n = 2$; the general statements follow by induction. For $\omega_i(\lambda) = p_i(\lambda)/q_i(\lambda)$ and $i = 1, 2$, we have

$$\pi(\omega_1 + \omega_2) = \pi\Big(\frac{p_1 q_2 + p_2 q_1}{q_1 q_2}\Big) \leq \max\{\deg(p_1 q_2), \deg(p_2 q_1)\} - \deg(q_1) - \deg(q_2) = \max\{\pi(\omega_1), \pi(\omega_2)\},$$

where the inequality holds because the degree of a sum of polynomials is at most the maximum of the summands' degrees.


Similarly, the product $\omega_1(\lambda)\omega_2(\lambda)$ has degree

$$\pi(\omega_1 \omega_2) = \pi\Big(\frac{p_1 p_2}{q_1 q_2}\Big) = \deg(p_1) + \deg(p_2) - \deg(q_1) - \deg(q_2) = \pi(\omega_1) + \pi(\omega_2).$$

To prove equation (1.15), we observe that for $\omega_i(\lambda), \omega_j(\lambda) \neq 0$, we have

$$\pi(\omega_i/\omega_j) = \pi\Big(\frac{p_i q_j}{q_i p_j}\Big) = \deg(p_i) + \deg(q_j) - \deg(q_i) - \deg(p_j) = \pi(\omega_i) - \pi(\omega_j).$$

If $\omega_i(\lambda) = 0$, then $\omega_i/\omega_j = 0$, implying $\pi(\omega_i/\omega_j) = 0$.

For equation (1.16), we have

$$\pi(\omega_i - \lambda) = \pi\Big(\frac{p_i - \lambda q_i}{q_i}\Big) = \max\{\deg(p_i), \deg(q_i) + 1\} - \deg(q_i) = 1,$$

because $\deg(p_i) \leq \deg(q_i)$, since $\omega_i(\lambda) \in \mathbb{W}_\pi$. $\square$

Equations (1.13) and (1.14) directly imply that $\mathbb{W}_\pi$ is closed under addition and multiplication. However, $\mathbb{W}_\pi$ is not a field, since most of its elements do not have a multiplicative inverse within $\mathbb{W}_\pi$.

Theorem 1.2 (Existence of Isospectral Reductions). Let $M(\lambda) \in \mathbb{W}_\pi^{n\times n}$. If $S$ and $\bar{S}$ form a nontrivial partition of $N$, then $\mathcal{R}(M; S)$ exists and is in $\mathbb{W}_\pi^{|S|\times|S|}$.

Proof. Let $M \in \mathbb{W}_\pi^{n\times n}$. The inverse of the matrix $M - \lambda I$ can be written as

$$(M - \lambda I)^{-1} = \frac{1}{\det(M - \lambda I)} \operatorname{adj}(M - \lambda I), \tag{1.17}$$

where $\operatorname{adj}(M - \lambda I)$ is the adjugate matrix of $M - \lambda I$, i.e., the matrix with entries

$$\operatorname{adj}(M - \lambda I)_{ij} = (-1)^{i+j} \det(M_{ji}), \quad 1 \leq i, j \leq n, \tag{1.18}$$

where $M_{ij} \in \mathbb{W}^{(n-1)\times(n-1)}$ is obtained by deleting the $i$th row and $j$th column of $M - \lambda I$.


Additionally, the determinant of $M - \lambda I$ can be written as

$$\det(M - \lambda I) = \sum_{\tau \in P_n} \operatorname{sgn}(\tau) \prod_{i=1}^{n} (M - \lambda I)_{i,\tau(i)}, \tag{1.19}$$

where the sum is taken over the set $P_n$ of permutations of $N$. The sign $\operatorname{sgn}(\tau)$ of the permutation $\tau \in P_n$ is $1$ (respectively $-1$) if $\tau$ is the composition of an even (respectively odd) number of permutations of two elements.

Using (1.14) and (1.16), we see that the term in (1.19) corresponding to the identity permutation $\tau = \mathrm{id} \in P_n$ has degree $n$, while for $\tau \neq \mathrm{id}$, the other terms have degree strictly smaller than $n$. Since the maximum degree is attained by exactly one term, its leading term cannot cancel, and equation (1.13) implies that

$$\pi\big(\det(M - \lambda I)\big) = n. \tag{1.20}$$

Therefore, $\det(M - \lambda I)$ is not identically zero, implying via equation (1.17) that the inverse $(M - \lambda I)^{-1}$ exists. Similarly, for $i \in N$, the matrix $M_{ii}$ is equal to $\tilde{M} - \lambda I$ for some $\tilde{M} \in \mathbb{W}_\pi^{(n-1)\times(n-1)}$. Hence,

$$\pi\big(\det(M_{ii})\big) = n - 1 \quad\text{for } i \in N. \tag{1.21}$$

For $i \neq j$, the matrix $M_{ij} \in \mathbb{W}^{(n-1)\times(n-1)}$ contains $n - 2$ entries of the form $M_{k\ell} - \lambda$, while all other entries of $M_{ij}$ belong to the set $\mathbb{W}_\pi$. Hence, equations (1.13), (1.14), and (1.16) imply that

$$\pi\big(\det(M_{ij})\big) \leq n - 2 \quad\text{for } i \neq j, \tag{1.22}$$

since for $\tau \in P_{n-1}$, at most $n - 2$ terms in the product $\prod_{k=1}^{n-1}(M_{ij})_{k,\tau(k)}$ have the form $M_{k\ell} - \lambda$.

Given that $\det(M_{ij})$ in (1.22) may be identically zero (in which case its degree is zero by convention), equations (1.20)–(1.22) together with (1.15) imply that $\pi\big(((M - \lambda I)^{-1})_{ij}\big) \leq 0$ for all $1 \leq i, j \leq n$. Therefore, for every $M \in \mathbb{W}_\pi^{n\times n}$, the matrix $M - \lambda I$ is invertible, and $(M - \lambda I)^{-1} \in \mathbb{W}_\pi^{n\times n}$.

Now suppose that $S$ and $\bar{S}$ form a nontrivial partition of $N$. Since $M_{\bar{S}\bar{S}} \in \mathbb{W}_\pi^{|\bar{S}|\times|\bar{S}|}$, it follows that $(M_{\bar{S}\bar{S}} - \lambda I)^{-1} \in \mathbb{W}_\pi^{|\bar{S}|\times|\bar{S}|}$. Definition 1.2 along with (1.13) and (1.14) then implies that $\mathcal{R}(M; S)$ exists and has entries in $\mathbb{W}_\pi$. $\square$

Note that Theorem 1.2 implies that the reduction $\mathcal{R}(M; S)$ exists for every matrix $M \in \mathbb{W}_\pi^{n\times n}$ and every nonempty $S \subset N$. The reason, then, that we were unable to reduce the matrix $M$ in Example 1.5 is that $M$ does not belong to $\mathbb{W}_\pi^{2\times 2}$, so Theorem 1.2 does not apply.

Although Theorem 1.2 does not apply to every possible matrix, it does apply to those we consider most often, namely, those matrices that have complex-valued entries. This we summarize in the following remark.


Remark 1.1. If $c \in \mathbb{C}$, then $c = c/1$ has degree $\pi(c) = 0$. As a consequence, the complex-valued matrices $\mathbb{C}^{n\times n}$ are contained in $\mathbb{W}_\pi^{n\times n}$. Theorem 1.2 therefore implies that every complex-valued matrix $A \in \mathbb{C}^{n\times n}$ can be reduced over every nonempty set $S \subset N$.

1.3 Sequential Reductions

In the previous section, we observed that the isospectral reduction $\mathcal{R}(M; S)$ of a matrix $M \in \mathbb{W}_\pi^{n\times n}$ is again a matrix in $\mathbb{W}_\pi^{m\times m}$, where $m = |S|$. According to Theorem 1.2, it is therefore possible to reduce the matrix $\mathcal{R}(M; S)$ over some subset of $S$. That is, we may sequentially reduce every matrix $M \in \mathbb{W}_\pi^{n\times n}$.

A natural question that arises is this: to what extent does a sequentially reduced matrix depend on the particular sequence of index sets over which it has been reduced? As it turns out, if a matrix has been reduced over the index set $S_1$, then over $S_2$, and so on up to the index set $S_m$, then the resulting matrix depends only on the index set $S_m$.

To formalize this, let $M \in \mathbb{W}_\pi^{n\times n}$, and suppose there are nonempty sets $S_1, \dots, S_m$ such that $N \supseteq S_1 \supseteq \cdots \supseteq S_m$. Then $M$ can be sequentially reduced over the sets $S_1, \dots, S_m$, where we write

$$\mathcal{R}(M; S_1, \dots, S_m) = \mathcal{R}\big(\dots \mathcal{R}(\mathcal{R}(M; S_1); S_2) \dots; S_m\big)$$

to indicate this sequence of reductions. If $M$ is sequentially reduced over the index sets $S_1, \dots, S_m$, we call $S_m$ the final index set of this sequence of reductions.

Theorem 1.3 (Uniqueness of Sequential Reductions). For $M(\lambda) \in \mathbb{W}_\pi^{n\times n}$, suppose $N \supseteq S_1 \supseteq \cdots \supseteq S_m$, where $S_m$ is nonempty. Then

$$\mathcal{R}(M; S_1, \dots, S_m) = \mathcal{R}(M; S_m).$$

That is, in a sequence of reductions, the resulting matrix is completely specified by the final index set.

To prove Theorem 1.3, we require the following lemma.

Lemma 1.2. Let the nonempty sets $S$, $T$, and $\overline{S \cup T}$ partition $N$. If $M(\lambda) \in \mathbb{W}_\pi^{n\times n}$, then $\mathcal{R}(M; S \cup T, S) = \mathcal{R}(M; S)$.

Proof. Let the nonempty sets $S$, $T$, and $U = \overline{S \cup T}$ partition $N$. We assume, without loss of generality, that $M \in \mathbb{W}_\pi^{n\times n}$ can be written as

$$M = \begin{bmatrix} M_{SS} & M_{ST} & M_{SU} \\ M_{TS} & M_{TT} & M_{TU} \\ M_{US} & M_{UT} & M_{UU} \end{bmatrix}.$$


Using the definition of isospectral reduction, we have

$$\mathcal{R}(M; S) = M_{SS} - \begin{bmatrix} M_{ST} & M_{SU} \end{bmatrix} \begin{bmatrix} M_{TT} - \lambda I & M_{TU} \\ M_{UT} & M_{UU} - \lambda I \end{bmatrix}^{-1} \begin{bmatrix} M_{TS} \\ M_{US} \end{bmatrix} \quad\text{and} \tag{1.23}$$

$$\mathcal{R}(M; S \cup T) = \begin{bmatrix} M_{SS} & M_{ST} \\ M_{TS} & M_{TT} \end{bmatrix} - \begin{bmatrix} M_{SU} \\ M_{TU} \end{bmatrix} (M_{UU} - \lambda I)^{-1} \begin{bmatrix} M_{US} & M_{UT} \end{bmatrix}. \tag{1.24}$$

Taking the isospectral reduction of $\mathcal{R}(M; S \cup T)$ over $S$ in (1.24), we obtain

$$\begin{aligned} \mathcal{R}(M; S \cup T, S) ={}& M_{SS} - M_{SU} K(\lambda)^{-1} M_{US} \\ &- \big(M_{ST} - M_{SU} K(\lambda)^{-1} M_{UT}\big) L(\lambda)^{-1} \big(M_{TS} - M_{TU} K(\lambda)^{-1} M_{US}\big), \end{aligned} \tag{1.25}$$

where $K(\lambda) = M_{UU} - \lambda I$ and $L(\lambda) = M_{TT} - \lambda I - M_{TU} K(\lambda)^{-1} M_{UT}$. Note that both $K(\lambda)^{-1}$ and $L(\lambda)^{-1}$ exist, as can be seen from the proof of Theorem 1.2. To obtain the desired result, we need to verify that expressions (1.23) and (1.25) are equal.

Recall the following identity for the inverse of an invertible square matrix $M$ with $2 \times 2$ blocks:

$$M^{-1} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} E^{-1} & -E^{-1}BD^{-1} \\ -D^{-1}CE^{-1} & D^{-1} + D^{-1}CE^{-1}BD^{-1} \end{bmatrix}, \tag{1.26}$$

where $D$ is an invertible square matrix and $E = A - BD^{-1}C$ is the Schur complement of $D$ in $M$. We note that if $M$ and $D$ are invertible in (1.26), then the same is true of the matrix $E$.

From the proof of Theorem 1.2, it follows that the $2 \times 2$ block matrix appearing in (1.23) is invertible, as is the submatrix $M_{UU} - \lambda I$. Using (1.26) to find the inverse of this $2 \times 2$ block matrix, we get

$$\begin{bmatrix} M_{TT} - \lambda I & M_{TU} \\ M_{UT} & M_{UU} - \lambda I \end{bmatrix}^{-1} = \begin{bmatrix} L(\lambda)^{-1} & -L(\lambda)^{-1} M_{TU} K(\lambda)^{-1} \\ -K(\lambda)^{-1} M_{UT} L(\lambda)^{-1} & K(\lambda)^{-1} + K(\lambda)^{-1} M_{UT} L(\lambda)^{-1} M_{TU} K(\lambda)^{-1} \end{bmatrix}. \tag{1.27}$$

Using (1.27) in (1.23), we get (1.25), completing the proof. $\square$

We now give a proof of Theorem 1.3.


Proof. For $M \in \mathbb{W}_\pi^{n\times n}$, suppose $N \supseteq S_1 \supseteq \cdots \supseteq S_m$, where $S_m \neq \emptyset$. If $m = 2$, then Lemma 1.2 directly implies that $\mathcal{R}(M; S_1, S_2) = \mathcal{R}(M; S_2)$.

For $2 \leq k < m$, suppose $\mathcal{R}(M; S_1, \dots, S_k) = \mathcal{R}(M; S_k)$. Then

$$\mathcal{R}(M; S_1, \dots, S_k, S_{k+1}) = \mathcal{R}(M; S_k, S_{k+1}) = \mathcal{R}(M; S_{k+1}),$$

where the second equality follows from Lemma 1.2. By induction, it then follows that $\mathcal{R}(M; S_1, \dots, S_m) = \mathcal{R}(M; S_m)$. $\square$

Example 1.6. Let $M \in \mathbb{C}^{4\times 4}$ be the matrix with $(0,1)$-entries given by

$$M = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \end{bmatrix},$$

and let $S = \{1, 2\}$. Our goal in this example is to illustrate that

$$\mathcal{R}(M; S) = \mathcal{R}(M; S \cup \{3\}, S) = \mathcal{R}(M; S \cup \{4\}, S).$$

One can compute

$$\mathcal{R}(M; S \cup \{3\}) = \begin{bmatrix} 1 & 0 & 1 \\ \frac{1}{\lambda-1} & 1 & \frac{1}{\lambda-1} \\ \frac{1}{\lambda-1} & 1 & \frac{\lambda}{\lambda-1} \end{bmatrix} \quad\text{and}\quad \mathcal{R}(M; S \cup \{4\}) = \begin{bmatrix} 1 & \frac{1}{\lambda-1} & \frac{1}{\lambda-1} \\ 0 & 1 & 1 \\ 1 & \frac{1}{\lambda-1} & \frac{\lambda}{\lambda-1} \end{bmatrix}.$$

Although $\mathcal{R}(M; S \cup \{3\}) \neq \mathcal{R}(M; S \cup \{4\})$, note that by reducing both of these matrices over $S = \{1, 2\}$, one has

$$\mathcal{R}(M; S) = \mathcal{R}(M; S \cup \{3\}, S) = \mathcal{R}(M; S \cup \{4\}, S) = \begin{bmatrix} \frac{\lambda^2 - 2\lambda + 1}{\lambda^2 - 2\lambda} & \frac{\lambda - 1}{\lambda^2 - 2\lambda} \\[2pt] \frac{\lambda - 1}{\lambda^2 - 2\lambda} & \frac{\lambda^2 - 2\lambda + 1}{\lambda^2 - 2\lambda} \end{bmatrix}.$$

Additionally, $\sigma(M) = \big\{\tfrac{1}{2}(3 \pm \sqrt{5}),\, \tfrac{1}{2}(1 \pm \sqrt{-3})\big\}$ and $\sigma(M_{\bar{S}\bar{S}}) = \{0, 2\}$. Since these two sets are disjoint, the matrix $M$ and the reduced matrix $\mathcal{R}(M; S)$ have the same eigenvalues by Corollary 1.1.
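Theorem 1.3 can be exercised directly on Example 1.6. In the SymPy sketch below (our code; indices are 0-based), the two-step reductions and the one-step reduction all produce the same matrix:

```python
import sympy as sp

lam = sp.symbols('lambda')

def reduce_over(M, S):
    """Isospectral reduction of M over the 0-based index list S (Eq. 1.7)."""
    n = M.shape[0]
    Sbar = [i for i in range(n) if i not in S]
    R = (M.extract(S, S)
         - M.extract(S, Sbar)
         * (M.extract(Sbar, Sbar) - lam * sp.eye(len(Sbar))).inv()
         * M.extract(Sbar, S))
    return R.applyfunc(sp.cancel)

# The matrix of Example 1.6; S = {1, 2} is [0, 1] in 0-based indexing.
M = sp.Matrix([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
])
direct = reduce_over(M, [0, 1])
via_3 = reduce_over(reduce_over(M, [0, 1, 2]), [0, 1])
via_4 = reduce_over(reduce_over(M, [0, 1, 3]), [0, 1])
```

All three equal the $2 \times 2$ matrix displayed in Example 1.6, as Theorem 1.3 guarantees.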

Example 1.6 illustrates the fact that an isospectral reduction need not have any effect on the spectrum of a matrix.


1.4 Spectral Inverse

In this section, we introduce a matrix transformation that exchanges the spectrum and inverse spectrum of a matrix $M \in \mathbb{W}^{n\times n}$. This transformation will, in fact, be useful in the following chapters, in which we will use it to investigate and define concepts related to the inverse spectrum of a matrix. We introduce this notion here because it is first and foremost a matrix operation.

Definition 1.4. For $M(\lambda) \in \mathbb{W}^{n\times n}$, let $\mathcal{S}^{-1}_\lambda(M) \in \mathbb{W}^{n\times n}$ be the matrix

$$\mathcal{S}^{-1}_\lambda(M) = \big(M(\lambda) - \lambda I\big)^{-1} + \lambda I \in \mathbb{W}^{n\times n}$$

if the inverse $\big(M(\lambda) - \lambda I\big)^{-1}$ exists. The matrix $\mathcal{S}^{-1}_\lambda(M)$ is called the spectral inverse of the matrix $M(\lambda)$.

We will typically write the spectral inverse of $M \in \mathbb{W}^{n\times n}$ as $\mathcal{S}^{-1}(M)$ unless otherwise noted.

We note that a necessary and sufficient condition for $\mathcal{S}^{-1}(M)$ to exist is that the matrix $M(\lambda) - \lambda I$ be invertible. For instance, the matrix

$$M = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \in \mathbb{W}^{2\times 2}$$

cannot be spectrally inverted. However, if $M$ has a spectral inverse, then the following holds.

Theorem 1.4. Suppose $M(\lambda) \in \mathbb{W}^{n\times n}$ has a spectral inverse $\mathcal{S}^{-1}(M)$. Then

$$\sigma\big(\mathcal{S}^{-1}(M)\big) = \sigma^{-1}(M) \quad\text{and}\quad \sigma^{-1}\big(\mathcal{S}^{-1}(M)\big) = \sigma(M).$$

Proof. Let $M(\lambda) \in \mathbb{W}^{n\times n}$ have the spectral inverse $\mathcal{S}^{-1}(M)$. Note that

$$\det\big((\mathcal{S}^{-1}(M) - \lambda I)(M - \lambda I)\big) = \det\big((M - \lambda I)^{-1}(M - \lambda I)\big) = \det(I) = 1.$$

Since the determinant is multiplicative, it follows that

$$\det\big(\mathcal{S}^{-1}(M) - \lambda I\big) = \det(M - \lambda I)^{-1},$$

and the result follows. $\square$

As noted, a matrix $M \in \mathbb{W}^{n\times n}$ may or may not have a spectral inverse. However, if $M \in \mathbb{W}_\pi^{n\times n}$, then the proof of Theorem 1.2 implies that $M - \lambda I$ is invertible, and therefore $\mathcal{S}^{-1}(M)$ exists. This result is stated as the following lemma.

Lemma 1.3. If $M(\lambda) \in \mathbb{W}_\pi^{n\times n}$, then $M(\lambda)$ has a spectral inverse.


Example 1.7. Let $M \in \mathbb{W}_\pi^{4\times 4}$ be the matrix given by

$$M = \begin{bmatrix} \frac{1}{\lambda} & \frac{1}{\lambda} & 0 & 0 \\ 0 & \frac{1}{\lambda} & 1 & 0 \\ 0 & 0 & \frac{1}{\lambda} & 1 \\ 0 & 0 & 0 & \frac{1}{\lambda} \end{bmatrix},$$

for which

$$\det\big(M(\lambda) - \lambda I\big) = \frac{\lambda^8 - 4\lambda^6 + 6\lambda^4 - 4\lambda^2 + 1}{\lambda^4}.$$

As one can calculate, the spectral inverse $\mathcal{S}^{-1}(M)$ is the matrix

$$\mathcal{S}^{-1}(M) = \begin{bmatrix} \frac{-\lambda}{\lambda^2-1} & \frac{-\lambda}{(\lambda^2-1)^2} & \frac{-\lambda^2}{(\lambda^2-1)^3} & \frac{-\lambda^3}{(\lambda^2-1)^4} \\ 0 & \frac{-\lambda}{\lambda^2-1} & \frac{-\lambda^2}{(\lambda^2-1)^2} & \frac{-\lambda^3}{(\lambda^2-1)^3} \\ 0 & 0 & \frac{-\lambda}{\lambda^2-1} & \frac{-\lambda^2}{(\lambda^2-1)^2} \\ 0 & 0 & 0 & \frac{-\lambda}{\lambda^2-1} \end{bmatrix} + \lambda I.$$

Taking the determinant of $\mathcal{S}^{-1}(M) - \lambda I$, one has

$$\det\big(\mathcal{S}^{-1}(M) - \lambda I\big) = \frac{\lambda^4}{\lambda^8 - 4\lambda^6 + 6\lambda^4 - 4\lambda^2 + 1}.$$

That is, $\det\big(\mathcal{S}^{-1}(M) - \lambda I\big) = \det\big(M(\lambda) - \lambda I\big)^{-1}$.
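Both Definition 1.4 and the determinant identity behind Theorem 1.4 are easy to verify for Example 1.7 with SymPy (our code):

```python
import sympy as sp

lam = sp.symbols('lambda')

def spectral_inverse(M):
    """Spectral inverse S^{-1}(M) = (M - lambda*I)^{-1} + lambda*I."""
    n = M.shape[0]
    return ((M - lam * sp.eye(n)).inv() + lam * sp.eye(n)).applyfunc(sp.cancel)

# The matrix of Example 1.7.
M = sp.Matrix([
    [1/lam, 1/lam, 0,     0],
    [0,     1/lam, 1,     0],
    [0,     0,     1/lam, 1],
    [0,     0,     0,     1/lam],
])
Sinv = spectral_inverse(M)

# Theorem 1.4: det(S^{-1}(M) - lambda*I) * det(M - lambda*I) = 1.
d1 = sp.cancel((M - lam * sp.eye(4)).det())
d2 = sp.cancel((Sinv - lam * sp.eye(4)).det())
```

Here `d1 * d2` simplifies to 1, confirming that the determinants are reciprocals of one another.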

Observe that for every $M \in \mathbb{W}_\pi^{n\times n}$, the spectral inverse $\mathcal{S}^{-1}(M)$ does not belong to $\mathbb{W}_\pi^{n\times n}$. Therefore, we have no guarantee via Theorem 1.2 that $\mathcal{S}^{-1}(M)$ can be isospectrally reduced. As it turns out, though, the following holds.

Theorem 1.5 (Reductions of the Spectral Inverse). For $M(\lambda) \in \mathbb{W}_\pi^{n\times n}$, suppose that $N \supseteq S_1 \supseteq \cdots \supseteq S_m$, where $S_m = S$ is nonempty. Then

(i) $\mathcal{R}\big(\mathcal{S}^{-1}(M); S\big)$ exists;
(ii) $\mathcal{R}\big(\mathcal{S}^{-1}(M); S_1, \dots, S_m\big) = \mathcal{R}\big(\mathcal{S}^{-1}(M); S_m\big)$; and
(iii) $\mathcal{R}\big(\mathcal{S}^{-1}(M); S\big) = (M - \lambda I)^{-1}/\big((M - \lambda I)^{-1}\big)_{\bar{S}\bar{S}} + \lambda I$.

Proof. For $M \in \mathbb{W}_\pi^{n\times n}$, suppose $S$ and $\bar{S}$ form a nonempty partition of $N$. By Lemma 1.3, the matrix $\mathcal{S}^{-1}(M)$ exists, and

$$\mathcal{S}^{-1}(M) - \lambda I = (M - \lambda I)^{-1} \in \mathbb{W}_\pi^{n\times n}.$$

By considering the submatrices of the matrix in the previous equation, we find that $[\mathcal{S}^{-1}(M)]_{SS} - \lambda I$, $[\mathcal{S}^{-1}(M)]_{S\bar{S}}$, $[\mathcal{S}^{-1}(M)]_{\bar{S}S}$, and $[\mathcal{S}^{-1}(M)]_{\bar{S}\bar{S}} - \lambda I$ all have


entries in $\mathbb{W}_\pi$. Moreover, $\det\big([\mathcal{S}^{-1}(M)]_{\bar{S}\bar{S}} - \lambda I\big)$ is not identically zero, so the inverse of this submatrix exists. We deduce that the reduction of $\mathcal{S}^{-1}(M)$ over $S$ exists and is given by

$$\mathcal{R}\big(\mathcal{S}^{-1}(M); S\big) - \lambda I = \big([\mathcal{S}^{-1}(M)]_{SS} - \lambda I\big) - [\mathcal{S}^{-1}(M)]_{S\bar{S}} \big([\mathcal{S}^{-1}(M)]_{\bar{S}\bar{S}} - \lambda I\big)^{-1} [\mathcal{S}^{-1}(M)]_{\bar{S}S}.$$

Moreover, $\mathcal{R}\big(\mathcal{S}^{-1}(M); S\big) - \lambda I \in \mathbb{W}_\pi^{|S|\times|S|}$.

To prove (iii), notice that we have

$$[\mathcal{S}^{-1}(M)]_{SS} - \lambda I = \big((M - \lambda I)^{-1}\big)_{SS},$$
$$[\mathcal{S}^{-1}(M)]_{\bar{S}S} = [\mathcal{S}^{-1}(M) - \lambda I]_{\bar{S}S} = \big((M - \lambda I)^{-1}\big)_{\bar{S}S},$$
$$[\mathcal{S}^{-1}(M)]_{S\bar{S}} = \big((M - \lambda I)^{-1}\big)_{S\bar{S}}, \quad\text{and}$$
$$[\mathcal{S}^{-1}(M)]_{\bar{S}\bar{S}} - \lambda I = \big((M - \lambda I)^{-1}\big)_{\bar{S}\bar{S}}.$$

These relations imply (iii).

Substituting each submatrix $M_{RC}$ in the proof of Lemma 1.2 by the matrix

$$\mathcal{S}^{-1}(M)_{RC} = \begin{cases} \big((M - \lambda I)^{-1}\big)_{RC} + \lambda I & \text{if } R = C, \\ \big((M - \lambda I)^{-1}\big)_{RC} & \text{otherwise}, \end{cases}$$

and then following the proof of Theorem 1.3 using $\mathcal{S}^{-1}(M)$ instead of $M$ yields a proof of part (ii). $\square$

In summary, if M ∈ W_λ^{n×n}, then it is possible to reduce both M and its spectral inverse S^{-1}(M) over every nonempty index set S ⊆ N. Moreover, the eigenvalues and inverse eigenvalues of these reduced matrices can be found using Theorem 2.1.


Chapter 2
Dynamical Networks and Isospectral Graph Reductions

This is a fundamental chapter of the book. It deals with networks, which are here considered as graphs, and is built on the theory of matrices developed in the previous chapter.

Although the dynamical networks described in Chap. 3 are richer objects than their weighted adjacency matrices, the latter still carry the most important information about a dynamical network. Indeed, from a theoretical point of view, a network's weighted adjacency matrix describes a linearization of the network's dynamics, which in applications is often the only network information available. In fact, it is not uncommon to have only the unweighted adjacency matrix of a network.

In this chapter, we introduce the notion of an isospectral graph reduction, which is based on the essential idea of the branch structure of a graph (network). A graph's collection of branches, which are related to the more familiar concepts of paths and cycles, forms the foundation of the isospectral transformations considered in this book, both in this chapter and later in applications of this theory.

The isospectral graph reductions that we study in this chapter allow us to reduce the size of a graph (number of vertices) while maintaining the graph's spectrum, up to a known set. Besides these reductions, we also introduce and analyze a number of other graph transformations that affect the graph's spectrum in a specific way. These include the operations of branch expansions, reweightings, and mergings.

These graph transformations can be used to simplify the structure of a graph (network) while preserving the graph's (network's) spectrum, up to a known set, as well as its collection of edge weights. For example, it is possible to reduce a network in which all edges have weight 1 to a smaller network with the same property. That is, these isospectral transformations do not somehow shift the complexity of the original network to the edge weights of the reduced network.

We begin this chapter by introducing the fundamental operation of isospectral graph reduction. After analyzing its properties, we show that a sequence of isospectral reductions of the same graph results in the same reduced network,



regardless of the order in which the vertices were removed, i.e., the result depends only on the final collection of vertices.

It is worth mentioning that the procedure of isospectral graph reduction can be applied not only to the vertices of graphs (elements of networks) but also to the graph's edges. To do this, one need only consider the line (dual) graph, in which all edges of the initial graph become vertices, and apply the same procedure to it. Because all the results and operations are the same, we will deal only with the removal of vertices.

We also introduce a new equivalence relation on the collection of graphs (networks) that we consider. Namely, two graphs (networks) are said to be spectrally equivalent if they can be isospectrally reduced to one and the same graph (network). The basic idea here is that two graphs may look very different but be spectrally equivalent, suggesting that the corresponding networks have similar dynamics.

Although each of these results is rigorously proved, we have attempted to make the exposition as visual and accessible as possible. With this goal in mind, the definitions and results of this chapter are illustrated by numerous examples and figures.

2.1 Dynamical Networks as Graphs

To begin, we note that to each dynamical network there is an associated weighted directed graph G, which we call the network's graph of interactions. As the name suggests, this graph describes the various interactions among the network's elements. In network theory, the unweighted version of this graph is often called the network's topology. Here, we do not use this term, since it might be confused with the unrelated branch of mathematics called topology.

The graph G = (V, E, ω) is composed of a vertex set V, a set of directed edges E, and a function ω that gives a weight to each edge in E. The vertex set V represents the elements of the network, and the edge set E represents the interactions among these elements. Because it is assumed that the graph G corresponds to a network, we consider only finite graphs, i.e., those graphs in which V and E are finite and nonempty.

For V = {v_1, …, v_n}, we let e_ij denote the edge from vertex v_i to vertex v_j. The edge e_ij is an element of E if the ith network element interacts with (or directly influences) the jth network element. The function ω gives the edge weights of G, where ω(e_ij), the weight of the edge e_ij, corresponds to the strength of the interaction between the ith and jth elements of the network. We adopt the standard convention that each edge of G has a nonzero weight. More formally, ω(e_ij) = 0 if and only if e_ij ∉ E, i.e., the ith element of the network does not directly influence the jth network element.

Suppose the graph G = (V, E, ω) has the vertex set V = {v_1, …, v_n}. We define the n × n matrix M(G) by

M(G)_ij = ω(e_ij).


The matrix M(G) is called the weighted adjacency matrix of G. If G is the graph of interactions of a dynamical network, we say that the eigenvalues of the matrix M(G) make up the spectrum of this dynamical network. In later chapters, we will connect the spectrum of a network with its dynamics. For now, we simply assume that to each dynamical network there is an associated graph G with adjacency matrix M(G).

To compute the spectrum of a network, we need to know the weights of its graph of interactions G = (V, E, ω). At this point, we have yet to formally define the weight set ω(E) = {ω(e) : e ∈ E} that will be considered here. In practice, one often chooses ω(E) to be some subset of the real numbers. For example, if we consider the graph G to be an unweighted graph, then M(G) is the matrix with 0–1 entries given by

M(G)_ij = 1 if e_ij ∈ E, and M(G)_ij = 0 otherwise.

However, the set of weights we will use is not a subset of the real or even the complex numbers but the set of rational functions W defined in Chap. 1. The class of graphs we consider is defined as follows.

Definition 2.1. Let G be the collection of weighted directed graphs given by

G = {G = (V, E, ω) : ω : E → W}.

The spectrum, or set of eigenvalues, of a graph G ∈ G is the set σ(G) = σ(M(G)). The inverse spectrum of G is the set σ^{-1}(G) = σ^{-1}(M(G)).

Equivalently, one could define G as the set of all graphs G = (V, E, ω) for which M(G) ∈ W^{n×n} for some n ∈ N. In either case, Definition 1.1 gives the spectrum and inverse spectrum of the graph G ∈ G.

To stress the generality of considering the set G, we note that graphs that are either undirected or have parallel edges can be considered graphs in G. In particular, an undirected graph can be made into a directed graph by orienting each of its edges in both directions. If a graph G has multiple edges between two vertices, a single edge can be put in their place whose weight is the sum of the weights of the edges it replaces.

Example 2.1. Consider the graphs G and H shown in Fig. 2.1. The weighted adjacency matrices M(G) ∈ W^{4×4} and M(H) ∈ C^{6×6} are given in (1.4) and (1.5), respectively. Hence,

σ(G) = σ(H) = {−2, −2, −1 ± i, −1 ± √3}.

That is, G and H have the same spectrum, although H has more vertices than G.


Fig. 2.1 The graph G (left) given in Example 2.1, with M(G) ∈ W^{4×4}, and the graph H (right), with M(H) ∈ C^{6×6}, where σ(G) = σ(H)

Again, our main question is whether a graph can be reduced to a smaller graph in such a way that its spectrum is preserved. Equivalently, we ask whether there is a way to reduce a graph H to a graph G such that M(H) and M(G) have the same eigenvalues.

The answer from Chap. 1 is that this is indeed possible. The goal of this chapter is to describe, first, what an isospectral graph reduction means in terms of the original graph (network) and, second, how to use this understanding to develop other isospectral transformations of a graph (network).

One point we hope to make is that there are a number of useful isospectral transformations that can be developed using the theory of isospectral graph reductions as a starting point. In this chapter, we introduce a handful of such transformations that will be useful later for different purposes. The major point is that one can develop the most relevant transformation for a given problem with respect to both analytic simplicity and effectiveness as a computational tool.

2.2 Isospectral Graph Reductions

Because of the structural complexity and considerable size of many networks, the corresponding graph of interactions G = (V, E, ω) may have an extremely large vertex set V as well as a large and irregular set of edges. To reduce this complexity while maintaining the network's spectral properties, we introduce the concept of isospectral graph reductions, which is related to the isospectral matrix reductions studied in the previous chapter.

The spectrum of a graph is intimately related to its structure. Specifically, knowing the graph's path and cycle structure along with its weights gives us enough information to compute the graph's spectrum. Simply put, a path is a sequence of distinct vertices that can be traversed by moving along edges of the graph. A cycle is a path that begins and ends at the same vertex.

More formally, a path P in the graph G = (V, E, ω) is an ordered sequence of distinct vertices P = v_1, …, v_m ∈ V such that e_{i,i+1} ∈ E for 1 ≤ i ≤ m − 1. We call the vertices v_2, …, v_{m−1} of P the interior vertices of P. If the vertices v_1 and v_m are the same, then P is a cycle. If a cycle contains a single vertex, then we call this cycle a loop. In addition, since v_i lies on a loop of G if and only if e_ii ∈ E, we may also refer to the edge e_ii as a loop.

Fig. 2.2 The graph G with adjacency matrix M(G) considered in Example 1.5

The main idea behind the isospectral reduction of a graph G = (V, E, ω) is that we reduce G to a smaller graph on some subset S ⊆ V. Equating a graph G with its adjacency matrix M(G), we note that formally, an isospectral reduction of G could be defined as the graph with the "reduced" adjacency matrix R(M(G); S). Indeed, this will be the case. However, to investigate how the structure of a graph is affected by an isospectral reduction, we deliberately limit the type of vertex sets over which we can reduce a graph. Such sets, called structural sets, are defined as follows.

Definition 2.2. Let G = (V, E, ω) ∈ G. A nonempty vertex set S ⊆ V is a structural set of G if

(i) each cycle of G that is not a loop contains a vertex in S; and
(ii) ω(e_ii) ≠ λ for each v_i ∈ S̄ = V − S.

For G = (V, E, ω), suppose S is a subset of the vertex set V. Then the graph G|_S is called the subgraph of G induced over the vertex set S and is given by

G|_S = (S, ℰ, μ), where ℰ = {e_ij ∈ E : v_i, v_j ∈ S} and μ = ω|_ℰ.

If S is a structural set of G, part (i) of Definition 2.2 says that the subgraph G|_S̄ has no cycles except possibly loops. Part (ii) of Definition 2.2 is the formal assumption that the loops of the vertices in S̄, i.e., the complement of S, do not have weight equal to λ ∈ W. That is, these loops are not weighted by the rational function λ/1 ∈ W.

Consider the graph G ∈ G with adjacency matrix

M(G) = [ 1  1 ]
       [ 1  λ ],

as in Example 1.5. The graph G is shown in Fig. 2.2. Notice that ω(e_22) = λ. Hence, the set S = {v_1} is not a structural set of G, since the loop at v_2 ∈ S̄ has weight λ.

For G ∈ G, we let st(G) denote the set of all structural sets of the graph G. The idea behind the notion of a structural set S ∈ st(G) is the following. Every random walk along edges of G that begins at a vertex in S eventually finds its way to another vertex of S, if we ignore loops. Therefore, a structural set allows us essentially to partition a random walk on G into finite paths and cycles that begin and end with vertices of S. We give these paths and cycles the following name.
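Condition (i) of Definition 2.2 is equivalent to requiring that the subgraph induced on S̄, with loops ignored, be acyclic. The following sketch (Python with SymPy; `is_structural` is our own hypothetical helper, not notation from the book) tests both conditions for a graph given as an edge set with symbolic weights:

```python
import sympy as sp

lam = sp.symbols('lambda')

def is_structural(edges, weights, vertices, S):
    """Test Definition 2.2: a nonempty S is structural iff (i) the subgraph
    induced on the complement of S has no cycles other than loops, and
    (ii) no loop in that complement carries the weight lambda."""
    comp = set(vertices) - set(S)
    # Condition (ii): loop weights in the complement must differ from lambda.
    if any((v, v) in edges and sp.simplify(weights[(v, v)] - lam) == 0
           for v in comp):
        return False
    # Condition (i): depth-first search for a non-loop cycle in the complement.
    succ = {v: [b for (a, b) in edges if a == v and b != v and b in comp]
            for v in comp}
    color = {v: 0 for v in comp}   # 0 = unvisited, 1 = on stack, 2 = finished
    def has_cycle(v):
        color[v] = 1
        for u in succ[v]:
            if color[u] == 1 or (color[u] == 0 and has_cycle(u)):
                return True        # back edge: non-loop cycle found
        color[v] = 2
        return False
    return not any(color[v] == 0 and has_cycle(v) for v in comp)

# The two-vertex graph of Fig. 2.2: unit weights except the loop at v2.
E = {(1, 1), (1, 2), (2, 1), (2, 2)}
W = {(1, 1): 1, (1, 2): 1, (2, 1): 1, (2, 2): lam}
assert is_structural(E, W, [1, 2], [2])        # complement {v1}: both hold
assert not is_structural(E, W, [1, 2], [1])    # loop at v2 has weight lambda
```

For the graph of Fig. 2.2, any candidate set whose complement contains v_2 fails condition (ii), exactly as discussed above.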


Definition 2.3. Suppose G = (V, E, ω) has the structural set S = {v_1, …, v_m}. Let B_ij(G; S) be the set of paths from v_i to v_j, or cycles if i = j, with no interior vertices in S. We call a path or cycle β ∈ B_ij(G; S) a branch of G with respect to S. We let

B_S(G) = ⋃_{1 ≤ i,j ≤ m} B_ij(G; S)

denote the branch set of all branches of G with respect to S.

If β = v_1, …, v_m is a branch of G with respect to S and m > 2, we define

P_ω(β) = ω(e_12) ∏_{i=2}^{m−1} ω(e_{i,i+1})/(λ − ω(e_ii)).   (2.1)

For m = 1, 2, we let P_ω(β) = ω(e_1m). We call P_ω(β) the branch product of β.

Notice that assumption (ii) in Definition 2.2 implies that the branch product of every β ∈ B_S(G) is always defined and is a rational function in W. In fact, the reason we require that part (ii) of Definition 2.2 hold is to ensure that the branch product of each branch in B_S(G) exists.

To isospectrally reduce a graph over the set S ∈ st(G), we replace each branch set B_ij(G; S) with a single edge e_ij ∈ ℰ. The following definition specifies the weights of these edges.

Definition 2.4. Let G = (V, E, ω) have the structural set S = {v_1, …, v_m}. Define the edge weights

μ(e_ij) = Σ_{β ∈ B_ij(G;S)} P_ω(β) if B_ij(G; S) ≠ ∅, and μ(e_ij) = 0 otherwise, for 1 ≤ i, j ≤ m.   (2.2)

The graph R_S(G) = (S, ℰ, μ), in which e_ij ∈ ℰ if μ(e_ij) ≠ 0, is the isospectral reduction of G over S.

Observe that μ(e_ij) in Definition 2.4 is the weight of the edge e_ij in R_S(G). Moreover, since W is closed under both addition and multiplication, it follows that the edge weights μ(e_ij) of R_S(G) are also in the set W. Hence, the isospectral reduction R_S(G) is again a graph in G.

Example 2.2. Consider the graph G = (V, E, ω) given in Fig. 2.3 (left), where G is an unweighted graph, i.e., each edge of G is given unit weight. Note that the vertex set S = {v_1, v_3} ⊆ V is a structural set of G, since

(i) the three nonloop cycles of G, namely v_1, v_2, v_3, v_4, v_1; v_1, v_5, v_1; and v_3, v_6, v_3, each contain a vertex in S; and


Fig. 2.3 Reduction of G over S = {v_1, v_3} from Example 2.2, where each edge in G has unit weight and each edge of R_S(G) has weight 1/(λ − 1)

(ii) the loop weights of the vertices in S̄ = {v_2, v_4, v_5, v_6} are ω(e_22) = ω(e_44) = ω(e_55) = ω(e_66) = 1. Hence, ω(e_ii) = 1 ∈ W is not equal to the rational function λ/1 ∈ W for each v_i ∈ S̄.

In contrast, the vertex set T = {v_1, v_2, v_5} is not a structural set of G, since the (nonloop) cycle v_3, v_6, v_3 does not contain a vertex of T. Phrased another way, the random walk v_2, v_3, v_6, v_3, v_6, … cannot be partitioned into finite paths and cycles that begin and end with vertices in T.

Returning to the structural set S, we see that the branches in B_S(G) are respectively given by B_11(G; S) = {v_1, v_5, v_1}, B_13(G; S) = {v_1, v_2, v_3}, B_31(G; S) = {v_3, v_4, v_1}, and B_33(G; S) = {v_3, v_6, v_3}. Using (2.1), we conclude that the branch product of each branch is given by

P_ω(v_1, v_5, v_1) = P_ω(v_1, v_2, v_3) = P_ω(v_3, v_4, v_1) = P_ω(v_3, v_6, v_3) = 1/(λ − 1).

Equation (2.2) then gives each edge of R_S(G) = (S, ℰ, μ) the weight

μ(e_11) = μ(e_13) = μ(e_31) = μ(e_33) = 1/(λ − 1).

Since each edge weight is nonzero, the edge set ℰ of R_S(G) is ℰ = {e_11, e_13, e_31, e_33}. In particular, an edge of ℰ need not be an edge of E. The graph R_S(G) is shown in Fig. 2.3 (right).
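Definition 2.4 can be carried out directly by enumerating branches. Below is a sketch for the graph of Example 2.2 (Python with SymPy; the edge set is read off from the branches listed above, and the helper names are our own):

```python
import sympy as sp

lam = sp.symbols('lambda')

# Edges of the graph G in Example 2.2; every edge, including the four
# loops at v2, v4, v5, v6, has weight 1.
w = {(1, 2): 1, (2, 3): 1, (3, 4): 1, (4, 1): 1, (1, 5): 1, (5, 1): 1,
     (3, 6): 1, (6, 3): 1, (2, 2): 1, (4, 4): 1, (5, 5): 1, (6, 6): 1}
S = [1, 3]

def branches(i, j):
    """Paths/cycles from i to j with no interior vertex in S (Definition 2.3).
    A loop at a vertex of S would also count as a branch; there are none here."""
    out, stack = [], [[i]]
    while stack:
        path = stack.pop()
        for (a, b) in w:
            if a != path[-1] or a == b:          # follow non-loop edges only
                continue
            if b == j:
                out.append(path + [b])
            elif b not in S and b not in path:   # interior vertices avoid S
                stack.append(path + [b])
    return out

def branch_product(beta):
    """Equation (2.1): each interior vertex contributes a factor
    w(e_{k,k+1}) / (lam - w(e_kk))."""
    prod = sp.Integer(w[(beta[0], beta[1])])
    for k in range(1, len(beta) - 1):
        prod *= w[(beta[k], beta[k + 1])] / (lam - w.get((beta[k], beta[k]), 0))
    return prod

# Equation (2.2): the reduced edge weights over S = {v1, v3}.
mu = {(i, j): sum(branch_product(b) for b in branches(i, j))
      for i in S for j in S}
assert all(sp.simplify(v - 1/(lam - 1)) == 0 for v in mu.values())
```

Each of the four branch sets here contains a single branch with one interior vertex of loop weight 1, so every reduced weight is 1/(λ − 1), matching the computation above.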

To relate an isospectral graph reduction to an isospectral matrix reduction, we need a way of connecting a structural set with a submatrix. For the graph G = (V, E, ω), suppose that S, T ⊆ V are nonempty. By a slight abuse of notation, let M(G)_{ST} denote the submatrix of the graph's weighted adjacency matrix with rows indexed by the vertices in S and columns indexed by the vertices in T. This allows us to state the following fundamental relation between isospectral graph and matrix reductions.


Theorem 2.1. Let S be a structural set of the graph G ∈ G. Then

M(R_S(G)) = R(M(G); S).

Before proving Theorem 2.1, we note the following. A matrix P ∈ C^{n×n} is a permutation matrix if each row and column of P has exactly one entry equal to 1, with all other entries equal to 0. We will use the fact that every permutation matrix is invertible. Moreover, for G = (V, E, ω) ∈ G, the matrix PM(G)P^{-1} is the adjacency matrix of G in which the vertices V have been relabeled by the permutation P.

Proof. Let G = (V, E, ω), where V = {v_1, …, v_n}. Without loss of generality, suppose S = {v_1, …, v_m} is a structural set of the graph G. Since the subgraph G|_S̄ has no cycles except loops, the vertices of G|_S̄ can be relabeled so that the matrix M(G|_S̄) is upper triangular. By assumption (ii) of Definition 2.2, the diagonal entries M(G|_S̄)_{ii} are not equal to λ, implying

(M(G|_S̄) − λI)_{ii} ≠ 0 for all m < i ≤ n.

Hence, M(G)_{S̄S̄} − λI is an upper triangular matrix with nonzero diagonal entries and is therefore invertible.

Letting M = M(G), the upper triangular matrix M_{S̄S̄} − λI can be written as

M_{S̄S̄} − λI = D(I + N),

where D = diag[M_{m+1,m+1} − λ, …, M_{n,n} − λ] and N ∈ W^{(n−m)×(n−m)} is the nilpotent matrix given by

N_{ij} = M_{i+m,j+m}/(M_{i+m,i+m} − λ) for i < j, and N_{ij} = 0 otherwise.   (2.3)

The inverse of M_{S̄S̄} − λI is then

(M_{S̄S̄} − λI)^{-1} = (I + N)^{-1} D^{-1} = (I − N + N^2 − ⋯ + (−1)^{n−m−1} N^{n−m−1}) D^{-1},   (2.4)

where D^{-1} = diag[1/(M_{m+1,m+1} − λ), …, 1/(M_{n,n} − λ)].

For 1 ≤ ℓ ≤ n − m − 1, the matrix N^ℓ is given by

(N^ℓ)_{ij} = Σ ∏_{k=1}^{ℓ} N_{i_{k−1},i_k} for i < j, and (N^ℓ)_{ij} = 0 otherwise,

where the sum is taken over all strictly increasing (ℓ+1)-tuples i = i_0 < ⋯ < i_ℓ = j starting at i and ending at j. Hence, for 1 ≤ i, j ≤ m, we have


M_{ij} − (M_{SS̄}(M_{S̄S̄} − λI)^{-1} M_{S̄S})_{ij}
  = M_{ij} − (M_{SS̄}(I − N + N^2 − ⋯ + (−1)^{n−m−1} N^{n−m−1}) D^{-1} M_{S̄S})_{ij}
  = M_{ij} − Σ_{p=m+1}^{n} Σ_{q=m+1}^{n} M_{ip} (Σ_{ℓ=0}^{n−m−1} (−1)^ℓ N^ℓ D^{-1})_{pq} M_{qj}.

Observe that for i ≤ j, the entries of the matrix (−1)^ℓ N^ℓ D^{-1} are

(−1)^ℓ (N^ℓ D^{-1})_{ij} = −Σ (∏_{k=1}^{ℓ} M_{i_{k−1}+m,i_k+m}/(λ − M_{i_{k−1}+m,i_{k−1}+m})) · 1/(λ − M_{j+m,j+m}),

and these entries are 0 otherwise,

where as before, the sum is taken over all increasing (ℓ+1)-tuples i = i_0 < ⋯ < i_ℓ = j starting at i and ending at j. Therefore,

M_{ij} − (M_{SS̄}(M_{S̄S̄} − λI)^{-1} M_{S̄S})_{ij} = M_{ij} + Σ (Σ_{ℓ=0}^{n−m−1} M_{ip}/(λ − M_{pp}) ∏_{k=1}^{ℓ} M_{i_{k−1},i_k}/(λ − M_{i_k,i_k}) · M_{qj}),

where the sum is taken over all increasing (ℓ+1)-tuples p = i_0 < i_1 < ⋯ < i_ℓ = q for m < p, q ≤ n.

Under the assumption that M(G)_{S̄S̄} − λI is upper triangular, the graph G has the following property: if β = v_i, v_{i_2}, …, v_{i_{ℓ−1}}, v_j ∈ B_S(G), then i_2 < ⋯ < i_{ℓ−1}, where i, j ≤ m and i_2, …, i_{ℓ−1} > m. Thus, the branch product of β can be written as

P_ω(β) = M_{i,i_2}/(λ − M_{i_2,i_2}) ∏_{k=3}^{ℓ−1} M_{i_{k−1},i_k}/(λ − M_{i_k,i_k}) · M_{i_{ℓ−1},j}.

Summing over all branches of B_S(G) from v_i to v_j, we arrive at

M(R_S(G))_{ij} = M_{ij} + Σ (Σ_{ℓ=0}^{n−m−1} M_{ip}/(λ − M_{pp}) ∏_{k=1}^{ℓ} M_{i_{k−1},i_k}/(λ − M_{i_k,i_k}) · M_{qj}).

Therefore, the weighted adjacency matrix of R_S(G) is

M(R_S(G)) = M(G)_{SS} − M(G)_{SS̄}(M(G)_{S̄S̄} − λI)^{-1} M(G)_{S̄S} = R(M(G); S),

which completes the proof. ∎

The reason we consider both graph and matrix reductions is that both points of view have their advantages. For one, graphs allow one to analyze network structure visually, while matrices allow for easy storage and manipulation of this network


information. More to the point, equation (2.2) indicates how the spectrum of a graph is preserved in terms of the graph's branch structure. This observation will allow us, in Sects. 2.5 and 2.6 of this chapter, to develop other types of isospectral transformations. These transformations will be based on manipulating a graph's path and cycle structure in ways that modify the graph's spectrum in a predictable way.
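Theorem 2.1 makes the reduction a one-line matrix computation. The sketch below (Python with SymPy; names our own) reduces the graph of Example 2.2 by the submatrix formula and recovers the weights 1/(λ − 1) found there:

```python
import sympy as sp

lam = sp.symbols('lambda')

# Weighted adjacency matrix of the 6-vertex graph G of Example 2.2
# (every listed edge, including the four loops, has weight 1).
M = sp.zeros(6, 6)
for (i, j) in [(1, 2), (2, 3), (3, 4), (4, 1), (1, 5), (5, 1), (3, 6), (6, 3),
               (2, 2), (4, 4), (5, 5), (6, 6)]:
    M[i - 1, j - 1] = 1

S, Sc = [0, 2], [1, 3, 4, 5]   # S = {v1, v3}; complement {v2, v4, v5, v6}

# R(M(G); S) = M_SS - M_SSc (M_ScSc - lam*I)^{-1} M_ScS
R = sp.simplify(M[S, S] - M[S, Sc] * (M[Sc, Sc] - lam*sp.eye(4)).inv() * M[Sc, S])
assert sp.simplify(R - sp.ones(2, 2) / (lam - 1)) == sp.zeros(2, 2)

# The determinant of R - lam*I vanishes exactly at lam = 2, -1, 0.
d = sp.simplify((R - lam*sp.eye(2)).det())
assert all(d.subs(lam, r) == 0 for r in (2, -1, 0))
```

Note that the 2 × 2 reduced matrix carries three eigenvalues, 2, −1, and 0, an effect of the λ-dependent entries.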

For now, we note that if S is a structural set of the graph G ∈ G, then the isospectral reduction R_S(G) is also a graph in G. Hence, both G and R_S(G) have well-defined spectra. The relation between σ(G) and σ(R_S(G)) is given in the following corollary of Theorems 1.1 and 2.1.

Corollary 2.1. Let S be a structural set of the graph G ∈ G. Then

σ(R_S(G)) = (σ(G) ∪ σ^{-1}(G|_S̄)) − (σ(G|_S̄) ∪ σ^{-1}(G)); and
σ^{-1}(R_S(G)) = (σ(G|_S̄) ∪ σ^{-1}(G)) − (σ(G) ∪ σ^{-1}(G|_S̄)).

Suppose that G = (V, E, ω) has the structural set S. Because the graph G|_S̄ has a particular form, it is possible to compute both σ(G|_S̄) and σ^{-1}(G|_S̄) quickly. That is, since M(G|_S̄) − λI is similar to an upper triangular matrix via some permutation, it follows that

det(M(G|_S̄) − λI) = ∏_{v_i ∈ S̄} (ω(e_ii) − λ).   (2.5)

Since the product ∏_{v_i ∈ S̄} (ω(e_ii) − λ) equals p(λ)/q(λ) for some p(λ)/q(λ) ∈ W, we have

σ(G|_S̄) = {λ ∈ C : p(λ) = 0} and σ^{-1}(G|_S̄) = {λ ∈ C : q(λ) = 0}.   (2.6)

For complex-valued matrices, Corollary 2.1 together with (2.5) and (2.6) implies the following corollary.

Corollary 2.2. Let S be a structural set of the graph G ∈ G. If M(G) ∈ C^{n×n}, then

(i) σ^{-1}(G) = ∅ and σ^{-1}(G|_S̄) = ∅;
(ii) σ(G|_S̄) = {ω(e_ii) : v_i ∈ S̄};
(iii) σ(R_S(G)) = σ(G) − σ(G|_S̄); and
(iv) σ^{-1}(R_S(G)) = σ(G|_S̄) − σ(G).

In many applications, the graphs (matrices) that are used have real or positive weights (entries). If G = (V, E, ω) has complex-valued weights and S ∈ st(G), then part (iii) of Corollary 2.2 states that the spectra of R_S(G) and G differ at most by the spectrum of G|_S̄. Moreover, part (ii) of Corollary 2.2 states that the spectrum σ(G|_S̄) consists of the weights of the loops e_ii for v_i ∈ S̄ and is therefore easily identified.


Fig. 2.4 The restriction of the graph G in Example 2.2 to S̄ = {v_2, v_4, v_5, v_6}, where each edge in each of G and G|_S̄ has unit weight

Similar to Theorem 1.1 for matrices, Corollary 2.1 describes exactly which eigenvalues we may gain from an isospectral reduction and which we may lose. In this way, an isospectral reduction of a graph preserves the spectral information of the original graph. This will be important, for instance, in Sect. 2.6, where we consider isospectral reductions that do not affect the nonzero eigenvalues of a graph.

Example 2.3. Let G be the graph considered in Example 2.2. As previously shown, the vertex set S = {v_1, v_3} is a structural set of G. Moreover, M(G) ∈ C^{6×6}. Hence, Corollary 2.2 allows us to compute the eigenvalues of the reduced graph R_S(G) quickly once the eigenvalues of G are known.

As one can compute, the eigenvalues of the graph G are σ(G) = {2, −1, 1, 1, 1, 0}. The restricted graph G|_S̄, shown in Fig. 2.4 (right), has loop weights ω(e_22) = ω(e_44) = ω(e_55) = ω(e_66) = 1. Corollary 2.2 then implies that σ(G|_S̄) = {1, 1, 1, 1}. Additionally, since σ(R_S(G)) = σ(G) − σ(G|_S̄), the spectrum of the reduced graph is σ(R_S(G)) = {2, −1, 0}. Since the graph R_S(G) has two vertices, the matrix M(R_S(G)) belongs to W_λ^{2×2}.

However, notice that

det(M(R_S(G)) − λI) = (λ^3 − λ^2 − 2λ)/(λ − 1),

which is zero for λ = 2, −1, 0. Similar to Example 1.1, this is an explicit demonstration of the fact that an n × n matrix in W_λ^{n×n} may have more than n eigenvalues.

Therefore, the effect of reducing G over S is that we lose the eigenvalues {1, 1, 1, 1}. However, even if σ(G) is unknown, we still know the following: the set of eigenvalues σ(G|_S̄) = {1, 1, 1, 1} is the most by which σ(R_S(G)) and σ(G) can differ.

We note that this example is equivalent to Example 1.3 in Chap. 1. The difference is that this example is taken from the point of view of a graph reduction. This is done to emphasize the differences and similarities between isospectral reductions of matrices and graphs.


We note that for every G ∈ G, both σ(G|_S̄) and σ^{-1}(G|_S̄) are easily calculated via equation (2.5). Therefore, Corollary 2.1 offers a quick way of computing the eigenvalues of a reduced graph if the spectrum of the original unreduced graph is known.

Before moving on to the following section, on sequential graph reductions, we note that undirected graphs are often studied in the theory of networks. This is the case whenever the interaction between two network elements has the same effect on both of them, i.e., when the interaction between elements is symmetric.

Undirected graphs are particular types of directed graphs and as such can be reduced using Definition 2.4. For the moment, we consider the theory of isospectral graph reductions restricted to the class of undirected graphs.

Definition 2.5. A graph G = (V, E, ω) in G is an undirected graph if whenever the edge e_ij is in E, then e_ji ∈ E and ω(e_ij) = ω(e_ji).

A consequence of Definition 2.5 is that a graph G ∈ G is undirected if and only if its adjacency matrix M(G) is symmetric. Typically, an undirected graph is assumed to have no loops. The reason we allow undirected graphs possibly to have loops is that it allows us to state the following results more concisely.

Theorem 2.2. Suppose S is a structural set of the undirected graph G ∈ G. Then the reduced graph R_S(G) is an undirected graph.

Proof. If G ∈ G is undirected, then its adjacency matrix M = M(G) is symmetric. For S ∈ st(G), we have

(M_{SS̄}(M − λI)^{-1}_{S̄S̄} M_{S̄S})^T = M^T_{S̄S}((M − λI)^{-1}_{S̄S̄})^T M^T_{SS̄} = M_{SS̄}(M − λI)^{-1}_{S̄S̄} M_{S̄S},

where the second equality follows from the facts that M^T_{S̄S} = M_{SS̄}, that M^T_{SS̄} = M_{S̄S}, and that the inverse of the symmetric matrix (M − λI)_{S̄S̄} is also symmetric. Hence, the matrix M_{SS̄}(M − λI)^{-1}_{S̄S̄} M_{S̄S} is symmetric.

It then follows that the reduced matrix

R(M; S) = M_{SS} − M_{SS̄}(M − λI)^{-1}_{S̄S̄} M_{S̄S}

is symmetric, implying that R_S(G) is an undirected graph. ∎

Example 2.4. Let G be the undirected graph shown in Fig. 2.5 (left). The edges in this figure are undirected, but each can be thought of as two directed edges oriented in opposite directions with equal (unit) weights.

As one can check, the set S = {v_1, v_3, v_4, v_6} is a structural set of G. Theorem 2.2 then implies that the reduced graph R_S(G) is undirected, as can be seen in Fig. 2.5 (right).
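Theorem 2.2 can be illustrated on a small example of our own (an undirected 4-cycle with unit weights, not the graph of Fig. 2.5): reducing a symmetric adjacency matrix by the formula of Theorem 2.1 again yields a symmetric matrix.

```python
import sympy as sp

lam = sp.symbols('lambda')

# Undirected 4-cycle v1-v2-v3-v4-v1 with unit weights: M is symmetric.
M = sp.Matrix([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
S, Sc = [0, 2], [1, 3]   # S = {v1, v3}; v2 and v4 are not adjacent

R = sp.simplify(M[S, S] - M[S, Sc] * (M[Sc, Sc] - lam*sp.eye(2)).inv() * M[Sc, S])

assert R == R.T                      # the reduction is again undirected
assert sp.simplify(R[0, 0] - 2/lam) == 0
```

Here every reduced entry equals 2/λ: each of the two branches between a pair of S-vertices passes through one loopless interior vertex, contributing 1/(λ − 0).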

Given a graph G = (V, E, ω), recall that two vertices v_i, v_j ∈ V are adjacent if e_ij ∈ E. Since two adjacent vertices form a cycle in an undirected graph, this naturally restricts which sets can be structural sets of such a graph. In fact, this observation allows us to characterize which sets can and cannot be structural sets of an undirected graph.

Fig. 2.5 The undirected graph G with unit edge weights and its reduction R_S(G), where the structural set S is given by S = {v_1, v_3, v_4, v_6}

Theorem 2.3. Let G = (V, E, ω) be an undirected graph. Then S ∈ st(G) if and only if no two vertices v_i, v_j ∈ S̄ are adjacent and ω(e_kk) ≠ λ for all v_k ∈ S̄.

Proof. For S ∈ st(G), suppose v_i, v_j ∈ S̄, where i ≠ j. Then v_i and v_j are not adjacent, since otherwise, v_i, v_j would form a nonloop cycle containing no vertex of S. Moreover, ω(e_kk) ≠ λ for all v_k ∈ S̄ by Definition 2.2.

Conversely, suppose that no two vertices v_i, v_j ∈ S̄ are adjacent. If v_1, …, v_m with m ≥ 2 is a cycle of G and v_1 ∈ S̄, then v_2 ∈ S, since v_1 and v_2 are adjacent. Hence, whether v_1 ∈ S̄ or v_1 ∈ S, the cycle v_1, …, v_m contains a vertex of S. If, additionally, ω(e_kk) ≠ λ for all v_k ∈ S̄, then S is a structural set of G. ∎

Based on Theorem 2.3, the set S = {v_1, v_3, v_4, v_6} of the graph G in Fig. 2.5 (left) is a structural set, since G has no loops and v_2, v_5 are not adjacent.
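Theorem 2.3 gives a test that needs no cycle search: S is structural exactly when S̄ is an independent set whose loops avoid the weight λ. A sketch (Python with SymPy; the helper name is our own):

```python
import sympy as sp

lam = sp.symbols('lambda')

def is_structural_undirected(w, vertices, S):
    """Theorem 2.3: for an undirected graph with symmetric weight map w,
    S is structural iff no two vertices outside S are adjacent and no loop
    outside S carries the weight lambda."""
    comp = [v for v in vertices if v not in S]
    if any((u, v) in w for u in comp for v in comp if u != v):
        return False   # two complement vertices are adjacent
    return all(sp.simplify(w.get((v, v), 0) - lam) != 0 for v in comp)

# Undirected 4-cycle with unit weights and no loops.
w4 = {(1, 2): 1, (2, 1): 1, (2, 3): 1, (3, 2): 1,
      (3, 4): 1, (4, 3): 1, (4, 1): 1, (1, 4): 1}
assert is_structural_undirected(w4, [1, 2, 3, 4], [1, 3])       # {v2, v4} independent
assert not is_structural_undirected(w4, [1, 2, 3, 4], [1, 2])   # v3, v4 adjacent
```

This matches the remark above: for the loopless graph of Fig. 2.5, checking that v_2 and v_5 are not adjacent suffices.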

2.3 Sequential Graph Reductions

Observe that if G ∈ G, then every reduction R_S(G) is again a graph in G. Since R_S(G) may itself have a structural set T, it may be possible to perform a sequence of reductions on the graph G ∈ G. However, to do so formally requires that we first extend our notation to sequences of isospectral graph reductions.

For G = (V, E, ω), suppose S_m ⊆ S_{m−1} ⊆ ⋯ ⊆ S_1 ⊆ V are such that S_1 ∈ st(G), R_1(G) = R_{S_1}(G), and

S_{i+1} ∈ st(R_i(G)), where R_{S_{i+1}}(R_i(G)) = R_{i+1}(G), for 1 ≤ i ≤ m − 1.

If this is the case, we say that S_1, …, S_m induce a sequence of reductions on G with final vertex set S_m. By way of notation, we write R_m(G) = R(G; S_1, …, S_m),

Fig. 2.6 The graph G and its reductions considered in Example 2.5

where R(G; S_1, …, S_m) denotes the graph G reduced over the vertex set S_1, then over S_2, and so on until G is reduced over the final vertex set S_m.

Because an isospectral graph reduction is a special type of isospectral matrix reduction, Theorem 2.1 has the following corollary.

Corollary 2.3. If S_1, …, S_m induce a sequence of reductions on G ∈ G, then

M(R(G; S_1, …, S_m)) = R(M(G); S_m).

If S ∉ st(G), it is natural to ask whether there exists a sequence of vertex sets S ⊆ S_{m−1} ⊆ ⋯ ⊆ S_1 ⊆ V that induces a sequence of reductions on G, and if such a sequence exists, whether it is the only such sequence.

To address these questions, we consider the following example.

Example 2.5. Consider the graph G = (V, E, ω) ∈ G shown in Fig. 2.6 (left) with adjacency matrix

M(G) = [ 0  λ^2  1  0 ]
       [ 1  0    0  0 ]
       [ 0  0    0  1 ]
       [ 0  1    0  0 ].

Here, our goal is to remove the vertices v_1 and v_2 from G. However, since the set {v_3, v_4} is not in st(G), our only option is to remove these vertices sequentially, one at a time.

Removing v_1 and then v_2 amounts to finding the isospectral reduction R(G; S, T), where S = {v_2, v_3, v_4} and T = {v_3, v_4}. Since the set S = {v_2, v_3, v_4} is a structural set of G, it is possible to find the reduction R(G; S), which is shown in Fig. 2.6 (center). However, R(G; S) cannot be reduced over the vertex set T, since the loop weight of v_2 in this graph is equal to λ.

Suppose we attempt to reverse the order in which we reduce G by first removing v_2 and then v_1. To do so, we let S̃ = {v_1, v_3, v_4} and attempt to compute the reduction R(G; S̃, T). However, the reduction R(G; S̃) shown in Fig. 2.6 (right) has the same problem as before. Namely, the vertex set T is not a structural set of

Page 49: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks

2.3 Sequential Graph Reductions 33

R.GI QS/. Since we cannot remove v1 and v2 together or sequentially from G, weare forced to conclude that it is not possible to reduce G isospectrally to a graphwith vertex set T .
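The obstruction in Example 2.5 can be checked symbolically. The sketch below is hypothetical code, not from the text, assuming the standard matrix form of the reduction from Chapter 1, R_S(M) = M_SS + M_SS̄ (λI − M_S̄S̄)^{-1} M_S̄S; it reduces M(G) over S = {v2, v3, v4} and confirms that v2 acquires a loop of weight λ, which blocks any further reduction over T = {v3, v4}.

```python
import sympy as sp

lam = sp.symbols('lam')

def reduce_over(M, S):
    """Isospectral reduction of M over the index set S (assumed formula):
    R_S(M) = M_SS + M_SSbar (lam*I - M_SbarSbar)^(-1) M_SbarS."""
    Sbar = [i for i in range(M.shape[0]) if i not in S]
    inv = (lam * sp.eye(len(Sbar)) - M[Sbar, Sbar]).inv()
    return sp.simplify(M[S, S] + M[S, Sbar] * inv * M[Sbar, S])

# Adjacency matrix of G from Example 2.5 (vertices v1..v4)
M = sp.Matrix([[0, lam**2, 1, 0],
               [1, 0,      0, 0],
               [0, 0,      0, 1],
               [0, 1,      0, 0]])

# Reduce over S = {v2, v3, v4} (0-based indices 1, 2, 3)
R = reduce_over(M, [1, 2, 3])
print(R[0, 0])   # the loop weight at v2 is lam, so {v3, v4} is not structural
```

The same helper applied to S̃ = {v1, v3, v4} exhibits the analogous failure in the reversed order.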

To determine when it is possible to reduce a graph G = (V, E, ω) and when it is not, we note the following. If the weight ω(e_ii) is not equal to λ for some v_i ∈ V, then the vertex set S = V − {v_i} is a structural set of G. This follows from the fact that S̄ = {v_i}. Hence, the graph G|S̄ is the graph restricted to the single vertex v_i, and every cycle of G|S̄ is a loop. Therefore, G ∈ 𝔾 can be reduced over the structural set S = V − {v_i} whenever it is known that ω(e_ii) ≠ λ.

Another way to state this is that it is possible to remove the vertex v_i from G via an isospectral graph reduction if ω(e_ii) ≠ λ, even when nothing is known about the graph structure of G.

This has the following important implication. Suppose it is known that no loop of G and no loop of any sequential reduction of G has weight λ. In this case, it is possible to remove any sequence of single vertices from G via some sequence of isospectral reductions. Consequently, the graph G can be reduced to a graph on any subset of its vertex set by sequentially removing any "unwanted" vertices.

In general, though, there is no way to know beforehand whether a set of vertices can be removed from a graph without actually attempting to remove those vertices. However, as with matrices, there is a way to overcome this problem.

Definition 2.6. Let 𝔾_π be the set of graphs given by

𝔾_π = {G ∈ 𝔾 : M(G) ∈ 𝕎_π^{n×n} for some n ∈ ℕ}.

Lemma 2.1. If G ∈ 𝔾_π and S ∈ st(G), then R_S(G) ∈ 𝔾_π. In particular, no loop of G and no loop of any reduction of G can have weight λ.

Proof. Suppose G = (V, E, ω) ∈ 𝔾_π has the structural set S. Hence, M(G) ∈ 𝕎_π^{n×n} for some n ∈ ℕ. Using Theorems 2.1 and 1.2, it follows that

M(R_S(G)) = R(M(G); S) ∈ 𝕎_π^{|S|×|S|},

implying R_S(G) ∈ 𝔾_π. Since each edge weight of G and of R_S(G) belongs to 𝕎_π, it follows that no edge of these graphs can have the weight λ = λ/1 ∉ 𝕎_π. This completes the proof. □

A direct consequence of Lemma 2.1 is that if G ∈ 𝔾_π, then G can be (sequentially) reduced to a graph on any subset of its vertex set. This result is stated in the following theorem.

Theorem 2.4 (Existence of Sequential Graph Reductions). Suppose G = (V, E, ω) is in 𝔾_π. If S ⊆ V is nonempty, then there are vertex sets S ⊂ S_{m−1} ⊂ ··· ⊂ S_1 ⊂ V such that S_1, …, S_{m−1}, S induces a sequence of reductions on G.


Proof. Let G = (V, E, ω) be in 𝔾_π. If S ⊆ V is nonempty, suppose without loss of generality that S = {v_1, …, v_{n−m}}, where |V| = n. Letting S_i = {v_1, …, v_{n−i}} for 1 ≤ i ≤ m, the claim is that S_1 ∈ st(G) and S_{i+1} ∈ st(R_{S_i}(G)) for each 1 ≤ i < m.

Since consecutive sets in the chain S ⊂ S_{m−1} ⊂ ··· ⊂ S_1 ⊂ V differ by a single vertex, each of these memberships holds as long as the weight of the loop at the removed vertex is not equal to λ ∈ 𝕎. Under the assumption that G ∈ 𝔾_π, repeated use of Lemma 2.1 implies that no such loop has weight λ, verifying the claim. Therefore, the sets S_1, …, S_{m−1}, S induce a sequence of reductions on G. □

It is therefore possible to reduce a graph G ∈ 𝔾_π to a graph on any nonempty subset of its vertex set via some sequence of isospectral reductions. Observe that Theorem 2.4 does not apply to the graph in Example 2.5, since that graph does not have all of its weights in the set 𝕎_π.

Theorem 2.4 can be thought of as an existence result for sequences of isospectral graph reductions. That is, for any nonempty vertex subset S of G ∈ 𝔾_π, there is always a sequence of reductions on G with final vertex set S. Sequences of reductions also have the following uniqueness property.

Theorem 2.5 (Uniqueness of Sequential Graph Reductions). Suppose the graph G = (V, E, ω) is in 𝔾_π. If S ⊆ V is nonempty and both S_1, …, S_{m−1}, S and T_1, …, T_{n−1}, S induce a sequence of reductions on G, then

R(G; S_1, …, S_{m−1}, S) = R(G; T_1, …, T_{n−1}, S).

Proof. Suppose G = (V, E, ω) ∈ 𝔾_π. If both S_1, …, S_{m−1}, S and T_1, …, T_{n−1}, S induce a sequence of reductions on G, then

M(R(G; S_1, …, S_{m−1}, S)) = R(M(G); S) = M(R(G; T_1, …, T_{n−1}, S)),

using Corollary 2.3. Since these isospectral graph reductions have the same adjacency matrix, they are the same graph. □
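Uniqueness is easy to observe numerically. The hypothetical sketch below, again assuming the reduction formula R_S(M) = M_SS + M_SS̄ (λI − M_S̄S̄)^{-1} M_S̄S, takes a graph with constant complex weights (so it lies in 𝔾_π by the remark following Definition 2.7), reduces it to the final vertex set {v1, v4} along the two possible removal orders, and confirms that the results agree.

```python
import sympy as sp

lam = sp.symbols('lam')

def reduce_over(M, S):
    # Assumed formula: R_S(M) = M_SS + M_SSbar (lam*I - M_SbarSbar)^(-1) M_SbarS
    Sbar = [i for i in range(M.shape[0]) if i not in S]
    inv = (lam * sp.eye(len(Sbar)) - M[Sbar, Sbar]).inv()
    return sp.simplify(M[S, S] + M[S, Sbar] * inv * M[Sbar, S])

# A made-up graph on v1..v4 with constant weights
M = sp.Matrix([[0, 1, 0, 2],
               [1, 0, 3, 0],
               [0, 1, 0, 1],
               [2, 0, 1, 0]])

# Order 1: remove v3 first, then v2; Order 2: remove v2 first, then v3.
# (0-based indices: final vertex set {0, 3}.)
A = reduce_over(reduce_over(M, [0, 1, 3]), [0, 2])
B = reduce_over(reduce_over(M, [0, 2, 3]), [0, 2])
assert sp.simplify(A - B) == sp.zeros(2, 2)   # the two orders coincide
```

Both orders yield the same 2×2 matrix of rational functions, as Theorem 2.5 guarantees.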

Suppose G = (V, E, ω) is a graph in 𝔾_π and S ⊆ V is nonempty. Combining the existence and uniqueness results of Theorems 2.4 and 2.5, it follows that exactly one graph results from sequentially reducing G by any sequence of reductions with final vertex set S. This fact allows us to introduce the following definition.

Definition 2.7. Let G = (V, E, ω) be a graph in 𝔾_π. If S ⊆ V is nonempty, define

R_S[G] = R(G; S_1, …, S_{m−1}, S),

where S_1, …, S_{m−1}, S is any sequence that induces a sequence of reductions on G with final vertex set S.


Fig. 2.7 Distinct sequences of isospectral reductions with the same final vertex set and outcome

The graph R_S[G] is well defined as a result of Theorems 2.4 and 2.5. The notation R_S[G] given in Definition 2.7 is intended to emphasize the fact that S need not be a structural set of G.

Similar to Remark 1.1, we note that if the adjacency matrix M(G) is in ℂ^{n×n}, then G ∈ 𝔾_π. Therefore, every graph with complex weights can be uniquely reduced over every nonempty subset of its vertex set.

Example 2.6. Let G = (V, E, ω) be the graph shown in Fig. 2.7. Our goal is to reduce G over the vertex set {v1, v4} ⊂ V. Since G ∈ 𝔾_π, Theorem 2.4 guarantees that there is at least one sequence of reductions that reduces G to the graph R_{{v1,v4}}[G].

In fact, there are exactly two. This follows from the fact that {v1, v4} ∉ st(G). Hence, G cannot be reduced over {v1, v4} with a single reduction. However, every (nontrivial) reduction of G removes at least one vertex from G. Therefore, the two possible ways of reducing G over the vertex set {v1, v4} are

R_{{v1,v4}}[G] = R(G; {v1, v2, v4}, {v1, v4}) and   (2.7)
R_{{v1,v4}}[G] = R(G; {v1, v3, v4}, {v1, v4}).      (2.8)

Both of the reductions given in (2.7) and (2.8) are shown in Fig. 2.7. The dashed arrows labeled R_T in this figure represent the reduction of a graph over some structural set T ⊂ V. This notation is meant to emphasize that the diagram commutes. That is,

R_{{v1,v4}}(R_{{v1,v2,v4}}(G)) = R_{{v1,v4}}(R_{{v1,v3,v4}}(G)),

as guaranteed by Theorem 2.5.


Using Definition 2.7, we state the following general result for isospectral graph reductions, which follows from Theorems 1.1, 2.4, and 2.5.

Theorem 2.6. If G = (V, E, ω) ∈ 𝔾_π and S ⊆ V is nonempty, then

σ(R_S[G]) = (σ(G) ∪ σ^{-1}(G|S̄)) − (σ(G|S̄) ∪ σ^{-1}(G)), and
σ^{-1}(R_S[G]) = (σ(G|S̄) ∪ σ^{-1}(G)) − (σ(G) ∪ σ^{-1}(G|S̄)).

Theorem 2.6 can be thought of as a generalization of Corollary 2.1 from structural sets to arbitrary nonempty vertex subsets of a graph. However, Theorem 2.6 holds only for graphs in 𝔾_π, whereas Corollary 2.1 holds for every graph G ∈ 𝔾.

In the following section, the results established here will be used to give rules for devising equivalence relations on the set of graphs 𝔾_π.

2.4 Equivalence Relations

Theorems 2.4 and 2.5 of the previous section assert that a graph G ∈ 𝔾_π has a unique reduction to any nonempty subset of its vertex set via some sequence of isospectral reductions. In this section, this fact will allow us to define equivalence relations on the collection of graphs in 𝔾_π. To define these equivalence relations, we first need to introduce the notion of isomorphic graphs.

Two weighted digraphs G1 = (V1, E1, ω1) and G2 = (V2, E2, ω2) are isomorphic if there is a bijection b : V1 → V2 such that there is an edge e_ij in G1 from v_i to v_j if and only if there is an edge ẽ_ij between b(v_i) and b(v_j) in G2 with ω2(ẽ_ij) = ω1(e_ij). If the map b exists, it is called an isomorphism, and we write G1 ≃ G2.

An isomorphism is essentially a relabeling of the vertices of a graph. Therefore, if two graphs are isomorphic, then their spectra are identical. The equivalent notion for matrices is that A(λ) ∈ 𝕎^{n×n} is similar to B(λ) ∈ 𝕎^{n×n} via some permutation matrix P, i.e., A(λ) = PB(λ)P^{−1}. This notion of being isomorphic, together with the existence and uniqueness of sequential graph reductions, allows us to define the following equivalence relations on the graphs 𝔾_π.

Theorem 2.7 (Spectral Equivalence). Suppose that for each graph G = (V, E, ω) in 𝔾_π, τ is a rule that selects a unique nonempty subset τ(G) ⊆ V. Then τ induces an equivalence relation ∼ on the set 𝔾_π, where G ∼ H if R_{τ(G)}[G] ≃ R_{τ(H)}[H].

Proof. If G ∈ 𝔾_π, then since the set τ(G) ⊆ V is unique and nonempty, the graph R_{τ(G)}[G] is uniquely determined by the rule τ. Hence, τ induces a unique reduction on each G ∈ 𝔾_π.


Fig. 2.8 The unweighted undirected graph G is equivalent to the unweighted directed graph H under the relation induced by the rule τ given in Example 2.7. Here, τ(G) = τ(H) = {v1, v2}

The claim, then, is that the relation G ∼ H if R_{τ(G)}[G] ≃ R_{τ(H)}[H] is an equivalence relation on 𝔾_π. This follows from the fact that the relation of being isomorphic is reflexive, symmetric, and transitive, which completes the proof. □

Two graphs may look very different but be spectrally equivalent, suggesting that the corresponding networks have similar dynamics. The major idea in this section is that by choosing an appropriate rule τ, one can discover this similarity.

Example 2.7. Suppose G = (V, E, ω). For v ∈ V, let d_in(v) be the in-degree of v, which is the number of incoming edges incident to v, excluding loops. In an undirected graph, the in-degree of a vertex v is then the same as the number of nonloop edges incident to v.

If Δ(G) = max_{v∈V} d_in(v) denotes the maximum in-degree of G, let τ be the rule

τ(G) := {v ∈ V : d_in(v) > Δ(G)/2}.

Observe that for each graph G ∈ 𝔾_π, the set τ(G) both exists and is unique. Thus, the relation of having an isomorphic reduction with respect to this rule induces an equivalence relation on 𝔾_π.
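A rule like τ is straightforward to compute from a graph's edge list. The short sketch below (hypothetical helper names, not from the text) counts nonloop in-degrees and returns the selected vertex set.

```python
def tau(vertices, edges):
    """Select the vertices whose nonloop in-degree exceeds half the maximum.

    `edges` is a set of directed pairs (u, v); loops (v, v) are ignored,
    matching the in-degree convention of Example 2.7."""
    d_in = {v: 0 for v in vertices}
    for u, v in edges:
        if u != v:                 # exclude loops
            d_in[v] += 1
    delta = max(d_in.values())     # the maximum in-degree Delta(G)
    return {v for v in vertices if d_in[v] > delta / 2}

# A small made-up directed graph in which v1 and v2 receive most edges
V = ['v1', 'v2', 'v3', 'v4']
E = {('v3', 'v1'), ('v4', 'v1'), ('v2', 'v1'),
     ('v1', 'v2'), ('v3', 'v2'), ('v4', 'v3'), ('v1', 'v1')}
print(tau(V, E))   # the set {'v1', 'v2'}
```

Here Δ(G) = 3 (attained at v1), so τ selects the vertices with in-degree greater than 3/2, namely v1 and v2.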

Here, the rule τ selects all those vertices whose in-degree is greater than Δ(G)/2. In Fig. 2.8, the graphs G and H have the vertex set τ(G) = {v1, v2} = τ(H). Moreover, as shown in the figure, the graph R_{τ(G)}[G] is isomorphic to R_{τ(H)}[H], implying G ∼ H under the relation ∼ induced by the rule τ. What is interesting here is that although these graphs do not appear to have a similar structure, they are the same when reduced with respect to the rule τ.

Fig. 2.9 We have G = R_S[H] and R_T[H] = K for S = {v1, v2} and T = {v3, v4}, but the graphs G and K do not have isomorphic reductions

We note that not every rule one could propose selects a unique vertex set of a graph. The simplest example is the rule that randomly selects a single vertex of a graph. This selection is, of course, nonunique. The reason it does not lead to an equivalence relation is that without a unique vertex set, the relation cannot be reflexive.

Importantly, choosing a rule that selects a unique vertex set of each graph allows one to study the graphs in 𝔾_π modulo some particular graph feature. In Example 2.7, this feature is the set of vertices with in-degree less than or equal to Δ(G)/2, which is removed from every graph G.

From a practical point of view, this gives those studying a particular class of networks a way of comparing the reduced topology of these networks. Of course, the particular reduction rule τ should be designed by the biologist, chemist, physicist, etc., to have some significance with respect to the networks under consideration.

Before continuing to the next section, we note that the relation of simply having isomorphic reductions is not transitive. That is, if both R_S[G] ≃ R_T[H] and R_U[H] ≃ R_V[K], it is not necessarily the case that there are sets X and Y, subsets of the vertex sets of G and K respectively, such that R_X[G] ≃ R_Y[K].

As an example, in Fig. 2.9 we have both R_S[G] ≃ R_S[H] and R_T[H] ≃ R_T[K], where S = {v1, v2} and T = {v3, v4}. However, one can quickly check that for no subsets X ⊆ S and Y ⊆ T do we have R_X[G] ≃ R_Y[K]. That is, the relation of having isomorphic reductions is not an equivalence relation on 𝔾, since it is not transitive. Overcoming this intransitivity requires some rule τ that selects a unique set of vertices from each graph in 𝔾 (as in Theorem 2.7).


2.5 Weight-Preserving Isospectral Transformations

Suppose G = (V, E, ω) is a graph in 𝔾. Recall that the weight set of G, that is, the collection of edge weights of G, is the set

ω(E) = {ω(e) : e ∈ E}.

The isospectral graph reductions introduced in Sect. 2.2 modify not only the graph structure but also the weight set of a graph. That is, if R_S(G) = (S, ℰ, μ) is a reduction of G = (V, E, ω), then typically ω(E) ≠ μ(ℰ).

This may lead one to assume that the procedure of reducing a graph simply shifts the complexity of the graph's structure to its set of edge weights. While the edge weights can become more complicated under an isospectral reduction, this is not the case for the transformations we consider in this section.

In this section, we introduce graph transformations that modify the structure of a graph but preserve the weights of the graph's edges. As before, this procedure preserves the spectrum of the graph up to a known set of eigenvalues. The idea behind an isospectral graph transformation that preserves a graph's edge weights is the following: if two graphs G, H ∈ 𝔾 have the same branch structure (including weights), then they should have similar spectra.

To make this precise, suppose G = (V, E, ω) and S ∈ st(G). For the branch β = v_1, …, v_m ∈ B_S(G), we define Ω_G(β) to be the ordered sequence of weights

Ω_G(β) = ω(e_{12}), …, ω(e_{i−1,i}), ω(e_{ii}), ω(e_{i,i+1}), …, ω(e_{m−1,m})

for m > 1, and Ω_G(β) = ω(e_{ii}) if m = 1.

Let G, H ∈ 𝔾, and suppose S = {v_1, …, v_m} is a structural set of both G and H. We say that the branch set B_ij(G; S) is isomorphic to B_ij(H; S) if there is a bijection

b : B_ij(G; S) → B_ij(H; S)

such that Ω_G(β) = Ω_H(b(β)) for each β ∈ B_ij(G; S). If such a map exists, we write B_ij(G; S) ≃ B_ij(H; S). If

B_ij(G; S) ≃ B_ij(H; S) for all 1 ≤ i, j ≤ m,

we say that B_S(G) is isomorphic to B_S(H) and write B_S(G) ≃ B_S(H).

Definition 2.8. For G, H ∈ 𝔾, suppose S is a structural set of both G and H. If

(i) B_S(G) ≃ B_S(H), and
(ii) each vertex of G and of H belongs to a branch of B_S(G) and B_S(H), respectively,

then we call H a weight-preserving isospectral transformation (wpit) of G over S.


Fig. 2.10 The graph H is a weight-preserving isospectral transformation of the graph G with respect to the structural set S = {v1, v3, v5}

Notice that if B_S(G) ≃ B_S(H) and each vertex of G and H belongs to some branch, then G and H have the same weight set. This justifies the term "weight-preserving" in Definition 2.8.

More formally, suppose H = (𝒱, ℰ, μ) is a wpit of G = (V, E, ω). By assumption, each vertex of G and H belongs to a branch of B_S(G) and B_S(H), respectively. Suppose that e_ij ∈ E. If i = j, then since the vertex v_i belongs to some branch β ∈ B_S(G), the weight ω(e_ij) belongs to the weight sequence Ω_G(β). If i ≠ j, then there are branches v_1, …, v_i, …, v_s and u_1, …, u_j, …, u_t in B_S(G) containing v_i and v_j = u_j, respectively. Hence, β = v_1, …, v_i, v_j, …, u_t ∈ B_S(G), and therefore ω(e_ij) belongs to the weight sequence Ω_G(β).

Thus, each edge weight of G belongs to the weight sequence of some β ∈ B_S(G). By similar reasoning, each edge weight of H belongs to the weight sequence of some β ∈ B_S(H). Since B_S(G) ≃ B_S(H), it follows that ω(e_ij) is an edge weight of G if and only if it is an edge weight of H. Therefore, G and H have the same set of edge weights.

Example 2.8. Suppose G = (V, E, ω) and H = (𝒱, ℰ, μ) are the graphs shown in Fig. 2.10. We note that the vertex set S = {v1, v3, v5} is a structural set of both G and H. Moreover,

B11(G; S) = {v1, v2, v1},  B11(H; S) = {v1, v6, v1};
B13(G; S) = {v1, v2, v3},  B13(H; S) = {v1, v2, v3};
B15(G; S) = {v1, v5},      B15(H; S) = {v1, v5};
B31(G; S) = {v3, v4, v1},  B31(H; S) = {v3, v4, v1};
B33(G; S) = {v3, v4, v3},  B33(H; S) = {v3, v7, v3};
B35(G; S) = {v3, v5},      B35(H; S) = {v3, v5};
B51(G; S) = ∅,             B51(H; S) = ∅;
B53(G; S) = ∅,             B53(H; S) = ∅;
B55(G; S) = ∅,             B55(H; S) = ∅.

Hence, there is a bijection b : B_ij(G; S) → B_ij(H; S) for all i, j ∈ {1, 3, 5}.

To determine whether the branch sets B_S(G) and B_S(H) are isomorphic, we need to check whether each branch β ∈ B_S(G) and the corresponding branch b(β) ∈ B_S(H) have the same sequence of weights. Observe that the branch β = v1, v2, v1 in B11(G; S) has the weight sequence Ω_G(β) = 1, 0, 2. Similarly, the branch b(β) = v1, v6, v1 in B11(H; S) has the weight sequence Ω_H(b(β)) = 1, 0, 2. Hence, B11(G; S) ≃ B11(H; S). Continuing in this manner, one can check that B_ij(G; S) is isomorphic to B_ij(H; S) for all i, j ∈ {1, 3, 5}, so that B_S(G) ≃ B_S(H). Since every vertex of G and H belongs to a branch of B_S(G) or B_S(H), respectively, H is a wpit of G over S.

Observe that the set S is not the only structural set common to both G and H. Another structural set of both graphs is T = {v1, v3}. For this set, one can again show that B_T(G) ≃ B_T(H). However, since the vertex v5 does not belong to any branch in either B_T(G) or B_T(H), the graph H is not a wpit of G over T.

The reason we do not consider H to be a wpit of G with respect to the set T is that some edge weights of these graphs are unaccounted for by the branch sets B_T(G) and B_T(H). For instance, if ω(e_15) = λ/2 rather than ω(e_15) = 4, as shown in Fig. 2.10, then ω(E) ≠ μ(ℰ), although the branch sets B_T(G) and B_T(H) would still be isomorphic. That is, part (ii) of Definition 2.8 is a necessary condition for ensuring that a wpit of a graph has the same weights as the original untransformed graph.

Suppose that two graphs G, H ∈ 𝔾 both have the structural set S. If B_S(G) ≃ B_S(H), then R_S(G) = R_S(H). This leads to the following result, which is a corollary of Theorem 2.1.

Corollary 2.4. If the graph G ∈ 𝔾 is a weight-preserving isospectral transformation of H over S, then

σ(R_S(G)) = G_∗ − G^∗ = H_∗ − H^∗ and σ^{-1}(R_S(G)) = G^∗ − G_∗ = H^∗ − H_∗,

where G_∗ = σ(G) ∪ σ^{-1}(G|S̄), G^∗ = σ(G|S̄) ∪ σ^{-1}(G), H_∗ = σ(H) ∪ σ^{-1}(H|S̄), and H^∗ = σ(H|S̄) ∪ σ^{-1}(H).

Example 2.9. As a demonstration of Corollary 2.4, consider the graphs G and H in Fig. 2.10, which are wpits of each other over the set S = {v1, v3, v5}. Since M(G) and M(H) have complex-valued entries, each of σ^{-1}(G), σ^{-1}(G|S̄), σ^{-1}(H), and σ^{-1}(H|S̄) is the empty set. Hence, Corollary 2.4 implies that

σ(R_S(G)) = σ(G) − σ(G|S̄) = σ(H) − σ(H|S̄).


Indeed, as one can compute, σ(G) = {±(−5)^{1/4}, ±(−5)^{1/4} i, 0}, σ(G|S̄) = {0, 0}, σ(H) = {±(−5)^{1/4}, ±(−5)^{1/4} i, 0, 0, 0}, and σ(H|S̄) = {0, 0, 0, 0}, verifying this fact. Additionally, this implies that

σ^{-1}(R_S(G)) = σ(G|S̄) − σ(G) = σ(H|S̄) − σ(H),

demonstrating the second half of Corollary 2.4.

2.5.1 Branch Expansions

Typically, a graph G ∈ 𝔾 has many weight-preserving isospectral transformations over a given S ∈ st(G). Because G and each of these transformations have the same branch structure, there is a largest isospectral transformation of G: the transformation H in which the branches of B_S(H) overlap the least. We call such a wpit the isospectral expansion of G.

More precisely, suppose α = v_1, …, v_m and β = u_1, …, u_n are branches in B_S(G). These branches are said to be independent if

{v_2, …, v_{m−1}} ∩ {u_2, …, u_{n−1}} = ∅.

That is, α and β are independent if they share no interior vertices.

Definition 2.9. Let G, H ∈ 𝔾 and S ∈ st(G) ∩ st(H). Suppose that

(i) H is a weight-preserving isospectral transformation of G over S, and
(ii) the branches of B_S(H) are independent.

Then we call H an isospectral expansion of G with respect to S.

Isospectral expansions are particular types of weight-preserving isospectral transformations and, moreover, are unique up to a labeling of vertices. That is, any two isospectral expansions of G with respect to S are isomorphic. By a slight abuse of terminology, we let X_S(G) be any representative from the set of isospectral expansions and call X_S(G) the isospectral expansion of G with respect to S.

Since an isospectral expansion transforms a graph in a very specific way, it is possible to relate the spectrum of a graph to that of its expansion. This relation is given in the following theorem.

Theorem 2.8. Let G = (V, E, ω) be a graph with structural set S. Then

det(M(X_S(G)) − λI) = det(M(G) − λI) · ∏_{v_i ∈ V−S} (ω(e_ii) − λ)^{n_i − 1},

where n_i is the number of branches in B_S(G) containing v_i.


Proof. Let X_S(G) = (𝒱, ℰ, μ) be an isospectral expansion of the graph G = (V, E, ω). Suppose β = v_1, …, v_ℓ ∈ B_S(X_S(G)). Since B_S(G) ≃ B_S(X_S(G)), there is a bijection

b : B_S(G) → B_S(X_S(G))

such that Ω_{X_S(G)}(β) = Ω_G(b^{-1}(β)).

If β(i) = v ∉ S, define

b̃(v_i) = (b^{-1}(β))(i) for 1 < i < ℓ.

Since |β| = |b^{-1}(β)|, the map b̃ sends interior vertices of β to interior vertices of b^{-1}(β). Since each vertex of 𝒱 − S and of V − S belongs to some branch of B_S(X_S(G)) and B_S(G), respectively, it follows that b̃ : 𝒱 − S → V − S is onto.

Let

𝒱_i = {v ∈ 𝒱 − S : b̃(v) = v_i ∈ V}.

Note that |𝒱_i| = n_i is then the number of branches in B_S(G) containing v_i. Moreover, 𝒱 − S is the disjoint union

𝒱 − S = ⋃_{v_i ∈ V−S} 𝒱_i.

Since b : B_S(G) → B_S(X_S(G)) preserves weight sequences, we have μ(e_jj) = ω(e_ii) for each v_j ∈ 𝒱_i. Hence,

∏_{v_j ∈ 𝒱−S} (μ(e_jj) − λ) = ∏_{v_i ∈ V−S} ( ∏_{v_j ∈ 𝒱_i} (μ(e_jj) − λ) ) = ∏_{v_i ∈ V−S} (ω(e_ii) − λ)^{n_i}.   (2.9)

If S = {v_1, …, v_m}, then by assumption,

B_ij(G; S) ≃ B_ij(X_S(G); S) for 1 ≤ i, j ≤ m.

Equation (2.2) then implies that the edges e_ij in both R_S(G) and R_S(X_S(G)) have the same weight. Therefore, R_S(G) = R_S(X_S(G)). Since

det(M(R_S(G)) − λI) = det(M(R_S(X_S(G))) − λI),

equation (1.11), used in the context of the adjacency matrices of G and X_S(G), implies

det(M(G) − λI) / det(M(G|S̄) − λI) = det(M(X_S(G)) − λI) / det(M(X_S(G)|S̄) − λI).

Equation (2.5) then implies that

det(M(X_S(G)) − λI) = det(M(G) − λI) · [∏_{v_j ∈ 𝒱−S} (μ(e_jj) − λ)] / [∏_{v_i ∈ V−S} (ω(e_ii) − λ)].

From (2.9), we then have

det(M(X_S(G)) − λI) = det(M(G) − λI) · ∏_{v_i ∈ V−S} (ω(e_ii) − λ)^{n_i − 1},

completing the proof. □

Example 2.10. Consider the graphs G = (V, E, ω) and H = (𝒱, ℰ, μ) in Fig. 2.10. As demonstrated in Example 2.8, if S = {v1, v3, v5}, then the graph H is a wpit of G. Moreover, it can be seen from Fig. 2.10 that the six branches of B_S(H) share no interior vertices; that is, the branches of B_S(H) are pairwise independent. Therefore, the graph H is an isospectral expansion of G with respect to S, i.e., H = X_S(G). Importantly, note that the edge weights of G and its expansion H are identical: as sets, ω(E) = μ(ℰ) = {1, 2, 3, 4, i, 2i, 3i, 4i}.

Moreover, since V − S = {v2, v4}, observe that the vertex v2 of G is an interior vertex of the branches v1, v2, v1 and v1, v2, v3 in B_S(G). Similarly, the vertex v4 of G is an interior vertex of the branches v3, v4, v3 and v3, v4, v1 in B_S(G). Since ω(e22) = 0 and ω(e44) = 0, Theorem 2.8 implies that σ(X_S(G)) = σ(G) ∪ {0, 0}.

The principal idea behind an isospectral expansion is the following. If G ∈ 𝔾 and S ∈ st(G), then the set of branches B_S(G) is uniquely defined. However, there are typically many other graphs H with the same branch structure as G, i.e., with S ∈ st(H) and B_S(H) ≃ B_S(G).

An isospectral expansion of G over S is then a graph H = X_S(G) with identical branch structure but with the following restriction: the branches of B_S(H) are pairwise independent, and every vertex of H belongs to a branch in B_S(H). That is, every vertex of S̄ in H is part of exactly one branch in B_S(H).

Hence, given a graph G and a structural set S, we can algorithmically construct the expansion X_S(G) as follows. Start with the vertices of S. If β ∈ B_ij(G; S), then both v_i and v_j are in S. Construct a path (or cycle if i = j) from v_i to v_j with the weight sequence Ω_G(β), using "new" interior vertices, that is, vertices that do not already appear in the graph being constructed. Repeat this for each β ∈ B_ij(G; S) and each pair i, j. The resulting graph is the isospectral expansion X_S(G).
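This construction is mechanical enough to sketch in code. In the hypothetical helper below, a branch is represented (a simplifying assumption, valid when S is complete so interior loop weights are zero) by its endpoints in S together with the list of edge weights along the path; each branch then contributes fresh interior vertices of its own.

```python
from itertools import count

def expand(S, branches):
    """Sketch of the expansion construction described in the text.

    S        -- list of structural-set vertices
    branches -- list of (i, j, weights), one entry per branch from i to j,
                where `weights` lists the edge weights along the branch
                (interior loop weights assumed zero, i.e., S complete).
    Returns the expansion as a dict {(u, v): weight}."""
    fresh = count(1)                      # labels for new interior vertices
    edges = {}
    for i, j, weights in branches:
        # one fresh interior vertex per edge except the last
        path = [i] + [('u', next(fresh)) for _ in weights[:-1]] + [j]
        for (u, v), w in zip(zip(path, path[1:]), weights):
            edges[(u, v)] = w
    return edges

# Two branches from v1 to v3 that share an interior vertex in G
# become independent branches in the expansion.
X = expand(['v1', 'v3'],
           [('v1', 'v3', [1, 2]),
            ('v1', 'v3', [3, 4])])
print(len(X))   # 4 edges: two independent two-edge paths
```

Each branch receives its own interior vertices, so the branches of the resulting graph are pairwise independent by construction.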

Importantly, an isospectral expansion is not the only example of an isospectral graph transformation that preserves the weight set of a graph. Many other weight-preserving isospectral transformations are possible.


2.6 Isospectral Graph Transformations over Modified Weight Sets

The isospectral expansions in Sect. 2.5 are a type of graph transformation that separates the various branches of a graph. In this section, we consider the reversal of this process. Specifically, we introduce a method of transforming a graph in a way that merges branches. This type of isospectral transformation has the additional property that it keeps the graph's weight set within a fixed subset U ⊆ 𝕎. The particular subsets for which this holds are the semirings of 𝕎.

The set U is a semiring of 𝕎 if it has the following properties: both 0 and 1 belong to U, and for every u_1, u_2 ∈ U, both the product u_1 u_2 and the sum u_1 + u_2 are in U. However, the additive and multiplicative inverses of u_1 and u_2 need not be in U. Examples of such semirings include ℂ[λ], ℝ, and ℤ⁺ = {0, 1, 2, …}.

In order to merge the branches of a graph G, we first need to choose a structural set S ∈ st(G), so that these branches are defined. The particular structural sets we consider in this section are defined as follows.

Definition 2.10. For G = (V, E, ω), let S ∈ st(G). The set S is a complete structural set of G if each cycle of G, including loops, contains a vertex in S.

The difference between a structural set and a complete structural set of a graph G is the following. If S is a complete structural set of G, then every cycle of G contains a vertex in S. If S is simply a structural set of G, then the loops of G need not contain a vertex of S.

For G ∈ 𝔾, let st₀(G) denote the set of all complete structural sets of G. Additionally, let σ₀(G) be the set of nonzero elements of σ(G), including multiplicities, which we refer to as the nonzero spectrum of G. Moreover, we denote by ρ(G) the spectral radius of G. That is,

ρ(G) = max_{ℓ ∈ σ(G)} |ℓ|.

Analogously, if M ∈ 𝕎^{n×n}, we say that S ⊆ N is a complete structural set of M if there is a permutation matrix P ∈ ℂ^{|S̄|×|S̄|} such that the matrix P M_{S̄S̄} P^{−1} is upper triangular and each of its diagonal entries is zero. Let st₀(M) be the set of all complete structural sets of M, σ₀(M) the nonzero eigenvalues of M, and ρ(M) the spectral radius of M.

Note that if S is a complete structural set of a graph G, then G|S̄ has no cycles. Hence, one can show that

det(M(G|S̄) − λI) = (−λ)^{|S̄|}.

The following is then a corollary of Theorems 2.1 and 2.8.

Corollary 2.5. Let G = (V, E, ω) and suppose S ∈ st₀(G). Then

σ₀(R_S(G)) = σ₀(G) = σ₀(X_S(G)) and ρ(R_S(G)) = ρ(G) = ρ(X_S(G)).


Proof. Suppose G = (V, E, ω) and S ∈ st₀(G). Since no vertex in S̄ can have a loop, it follows that ω(e_ii) = 0 for each v_i ∈ S̄. Equation (2.5) then implies that

det(M(G|S̄) − λI) = ∏_{v_i ∈ S̄} (−λ) = (−λ)^{|S̄|}.

Hence σ^{-1}(G|S̄) = ∅ and σ(G|S̄) = {0, …, 0}, in which 0 has multiplicity |S̄|. The corollary then follows from Theorem 2.1. □

To phrase Corollary 2.5 in terms of matrices, suppose M ∈ 𝕎^{n×n} and S ∈ st₀(M). Then σ₀(R_S(M)) = σ₀(M) = σ₀(X_S(M)) and ρ(R_S(M)) = ρ(M) = ρ(X_S(M)).
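The matrix version can be checked directly via the determinant identity det(M − λI) = det(R_S(M)(λ) − λI) · det(M_S̄S̄ − λI), a Schur-complement identity consistent with equation (1.11), assuming the standard reduction formula. When S is complete, det(M_S̄S̄ − λI) = (−λ)^{|S̄|}, so only zero eigenvalues can be lost. A hypothetical sympy sketch:

```python
import sympy as sp

lam = sp.symbols('lam')

def reduce_over(M, S):
    # Assumed formula: R_S(M) = M_SS + M_SSbar (lam*I - M_SbarSbar)^(-1) M_SbarS
    Sbar = [i for i in range(M.shape[0]) if i not in S]
    inv = (lam * sp.eye(len(Sbar)) - M[Sbar, Sbar]).inv()
    return sp.simplify(M[S, S] + M[S, Sbar] * inv * M[Sbar, S])

# A made-up M for which S = {0, 1} is complete: the removed block
# M_SbarSbar is strictly upper triangular (no cycles among removed vertices).
M = sp.Matrix([[1, 0, 2, 0],
               [3, 0, 0, 1],
               [0, 1, 0, 5],
               [1, 2, 0, 0]])
S, Sbar = [0, 1], [2, 3]
R = reduce_over(M, S)

# Schur-complement identity underlying Corollary 2.5
lhs = (M - lam * sp.eye(4)).det()
rhs = (R - lam * sp.eye(2)).det() * (M[Sbar, Sbar] - lam * sp.eye(2)).det()
assert sp.simplify(lhs - rhs) == 0
```

Since det(M_S̄S̄ − λI) = (−λ)² here, the nonzero roots of det(R − λI), and hence the spectral radius, are exactly those of M.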

In what follows, we demonstrate how the branches B_S(G) of a graph can be merged when S ∈ st₀(G). To simplify the discussion, we say that e_{j,j+1} is the jth edge belonging to the branch β = v_1, …, v_m for each 1 ≤ j ≤ m − 1. This allows us to state the following result.

Lemma 2.2. Let G = (V, E, ω) and S ∈ st₀(G). Then X_S(G) = (𝒱, ℰ, μ) has the following properties:

(i) If e_ij ∈ ℰ, then e_ij belongs to exactly one branch of B_S(X_S(G)).
(ii) If β = v_1, …, v_m is a branch in B_S(X_S(G)), then

Ω_{X_S(G)}(β) = μ(e_{12}), 0, μ(e_{23}), 0, …, 0, μ(e_{m−1,m}).   (2.10)

Proof. Since each vertex of X_S(G) belongs to at least one branch of B_S(X_S(G)), the same holds for every e_ij ∈ ℰ. On the other hand, suppose e_ij belongs to both β_1 and β_2 in B_S(X_S(G)). Then neither v_i nor v_j can be an interior vertex of β_1 or β_2, since these branches, if distinct, are independent. Hence either β_1 = β_2 = v_i, v_j, or e_ij belongs to at most one branch of B_S(X_S(G)). This verifies property (i).

Since S is a complete structural set of G, it follows that ω(e_ii) = 0 for each v_i ∈ V − S. This implies that for every β = v_1, …, v_m ∈ B_S(G), the weight sequence Ω_G(β) has the form given in equation (2.10). Since B_S(G) ≃ B_S(X_S(G)), property (ii) holds. □

2.6.1 Branch Reweighting

Given an isospectral expansion XS .G/, our goal is to construct a new graph YS .G/

by reweighting the branches of XS .G/. The idea behind this construction is toreweight the branches BS .XS .G// in such a way that it preserves their branchproducts.

Suppose the expansion $X_S(G) = (\mathcal{V}, \mathcal{E}, \mu)$, where $S \in st_0(G)$. Let the graph $Y_S(G) = (\mathcal{V}, \mathcal{E}, \nu)$. That is, $Y_S(G)$ has the same vertex and edge set as $X_S(G)$ but possibly different edge weights. This implies that $S$ is a complete structural set of $Y_S(G)$ and, moreover, that the branch set $B_S(Y_S(G))$ is identical to $B_S(X_S(G))$.

For the branch $\beta = v_1, \dots, v_m \in B_S(Y_S(G))$, let $\beta$ have the weight sequence

$$\Omega_{Y_S(G)}(\beta) = \prod_{k=1}^{m-1} \mu(e_{k,k+1}),\ 0, 1, 0, 1, \dots, 0, 1 \qquad (2.11)$$

if $m > 1$. If $m = 1$, let $\Omega_{Y_S(G)}(\beta) = \mu(e_{11})$. Since $X_S(G)$ and $Y_S(G)$ have the same vertex and edge sets, Lemma 2.2 implies that each edge of $Y_S(G)$ belongs to exactly one branch of $B_S(Y_S(G))$. Therefore, equation (2.11) completely specifies the edge weights of the graph $Y_S(G)$.

Observe that the edge $e_{ij} \in \mathcal{E}$ in $Y_S(G)$ has weight $\nu(e_{ij}) = 1$ unless $e_{ij}$ is the first edge of a branch in $B_S(Y_S(G))$. If $e_{ij}$ happens to be the first edge of the branch $\beta \in B_S(Y_S(G))$, then its weight is the product of the nonzero entries of $\Omega_{X_S(G)}(\beta)$.

In effect, the branch reweighting process simply transfers the entire weight of the branch to the branch's first edge, leaving all other edges with unit weight. Hence, the branch product remains constant, but the entire weight of the branch is concentrated on its first edge. (Figure 2.11 gives an example of this branch reweighting.)
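The reweighting step can be sketched in a few lines. The sample weights 3, 4, 8 below are our own assumption, chosen only to be consistent with the branch product 96 of $\beta_4$ in Example 2.11; the figure fixes the actual placement.

```python
# Branch reweighting (Sect. 2.6.1): a branch's weight sequence alternates
# edge weights with interior-loop weights, which are all 0 over a complete
# structural set.  Reweighting moves the product of the edge weights onto
# the first edge and sets the remaining edge weights to 1, so the branch
# product is unchanged.
from math import prod

def reweight(weight_seq):
    """weight_seq = [w(e_12), 0, w(e_23), 0, ..., w(e_{m-1,m})]."""
    edges = weight_seq[0::2]                     # the edge weights
    out = list(weight_seq)
    out[0::2] = [prod(edges)] + [1] * (len(edges) - 1)
    return out

beta = [3, 0, 4, 0, 8]                           # illustrative edge weights
print(reweight(beta))                            # [96, 0, 1, 0, 1]
assert prod(reweight(beta)[0::2]) == prod(beta[0::2])   # branch product preserved
```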

2.6.2 Branch Merging

From $Y_S(G)$, we construct the graph $Z_S(G)$. The main idea behind this construction is that the branches $B_S(Y_S(G))$ of the graph $Y_S(G)$ can be merged together in a way that maintains the weight set of the graph as well as its nonzero spectrum. To define the graph $Z_S(G)$, we will need the following terminology.

For $G = (V, E, \omega)$, suppose $S = \{v_1, \dots, v_m\}$ is a structural set of $G$. Let

$$B_j(G; S) = \bigcup_{1 \le i \le m} B_{ij}(G; S).$$

That is, $B_j(G; S)$ is the set of branches in $B_S(G)$ terminating at the same vertex $v_j \in V$. For every $\beta = u_1, \dots, u_k$ in $B_S(G)$, let $|\beta| = k$, i.e., the number of vertices in $\beta$. Moreover, let $\{\beta\}_{int}$ denote the set of interior vertices of $\beta$.

To construct the graph $Z_S(G)$, we first suppose that the graph $Y_S(G) = (\mathcal{V}, \mathcal{E}, \nu)$ and that $S = \{v_1, \dots, v_m\} \in st_0(G)$. For each $v_j \in S$, we select a branch $\beta_j \in B_j(Y_S(G))$ with the property that $|\beta_j| \ge |\beta|$ for all branches $\beta \in B_j(Y_S(G))$. That is, the branch $\beta_j$ is a "longest" branch that terminates at $v_j$. If $B_j(Y_S(G)) = \emptyset$, we set $\beta_j = \emptyset$.



Define $B = B_S(Y_S(G)) - \{\beta_1, \dots, \beta_m\}$, and let

$$U = \bigcup_{\beta \in B} \{\beta\}_{int}. \qquad (2.12)$$

That is, $U$ is the set of interior vertices of the branches $\beta \notin \{\beta_1, \dots, \beta_m\}$ in $B_S(Y_S(G))$.

Set

$$Z'_S(G) = Y_S(G)|_{\mathcal{V} - U}.$$

Recall that the branches of $B_S(Y_S(G))$ are mutually independent. Therefore, the graph $Z'_S(G) = (\mathcal{V} - U, \mathcal{E}', \nu')$, where $e \in \mathcal{E}'$ if and only if $e$ belongs to some $\beta_j$. Furthermore, the edge weights of $Z'_S(G)$ are given by the restriction $\nu' = \nu|_{\mathcal{E}'}$.

If $e$ is an edge from the vertex $a$ to the vertex $b$, we will denote this by $e = (a, b)$. Suppose the branch $\beta_j = v^j_1, \dots, v^j_k$. For each

$$\beta \in B_{ij}(Y_S(G); S) - \beta_j,$$

we add an edge $(v_i, v^j_{k - |\beta| + 2})$ to the graph $Z'_S(G)$. The edge $(v_i, v^j_{k - |\beta| + 2})$ is given the weight of the first edge belonging to $\beta$ in $Y_S(G)$. If this is done over all $1 \le i, j \le m$, we call the resulting graph $Z''_S(G)$.

It is important to note that the graph $Z''_S(G)$ may have parallel edges under this construction. By parallel edges, we mean that there may be multiple edges in the edge set of $Z''_S(G)$ of the form $(a, b)$. In particular, if there are two branches $\beta_1, \beta_2 \in B_{ij}(Y_S(G); S)$ that have the same length, i.e., $|\beta_1| = |\beta_2| = \ell$, then there are (at least) two edges in $Z''_S(G)$ of the form $(v_i, v^j_{k - \ell + 2})$.

If the graph $Z''_S(G)$ has parallel edges $e_1, \dots, e_N$ of the form $(v_i, v_j)$ with weights $w_1, \dots, w_N$, we replace the edges $e_1, \dots, e_N$ in $Z''_S(G)$ with the single edge $e_{ij}$ having weight $w_1 + \dots + w_N$. If this is done for each set of parallel edges in $Z''_S(G)$, we denote the resulting graph by $Z_S(G)$.

Note that our construction of $Z_S(G)$ depends on the initial choice of each $\beta_j$. We therefore write $Z_S(G) = Z_S(G; \beta_1, \dots, \beta_m)$.

Definition 2.11. If $S \in st_0(G)$, then we call the graph $Z_S(G)$ the merged graph of $G$ over $S$.

Because each operation of expanding, reweighting, and merging does not modify the underlying branch structure of a graph or any of its branch products, the following holds.


Fig. 2.11 The isospectral transformation $Z_S(G)$ of $G$ over $S = \{v_1, v_6, v_7\}$ (panels, left to right: $G$, $X_S(G)$, $Y_S(G)$, $Z_S(G)$)

Theorem 2.9. For $G \in \mathbb{G}$ and $S \in st_0(G)$, suppose the edge weights of $G$ are in the semiring $U \subseteq \mathbb{W}$. Then

(i) $Z_S(G)$ has edge weights in the set $U$;
(ii) the nonzero eigenvalues $\sigma_0(Z_S(G))$ are equal to $\sigma_0(G)$; and
(iii) the spectral radius $\rho(Z_S(G))$ is equal to $\rho(G)$.

Proof. The fact that $Z_S(G)$ and $G$ have weights in the semiring $U$ follows from the fact that the weights of $Z_S(G)$ are sums and products of the weights of $G$. The equality of the nonzero spectrum and spectral radius of $Z_S(G)$ and $G$ follows from Corollary 2.4, since by construction, $R_S(G) = R_S(Z_S(G))$. □

Example 2.11. Let $G = (V, E, \omega)$ be the graph shown in Fig. 2.11 (far left). Note that the vertex set $S = \{v_1, v_6, v_7\}$ is a complete structural set of $G$, since $G$ has no cycles (including loops). The branches of $G$ with respect to $S$ are then $\beta_1 = v_1, v_3, v_6$; $\beta_2 = v_1, v_4, v_6$; $\beta_3 = v_1, v_4, v_7$; and $\beta_4 = v_1, v_2, v_5, v_7$. Observe that only $\beta_2$ and $\beta_3$ share an interior vertex.

The isospectral expansion $X_S(G)$, shown in Fig. 2.11 (middle left), is then the graph in which the branches $\beta_2$ and $\beta_3$ have been replaced with the independent branches $\tilde\beta_2 = v_1, u_1, v_6$ and $\tilde\beta_3 = v_1, u_2, v_7$, respectively. Thus, the expansion $X_S(G)$ has the vertex set $\mathcal{V} = \{v_1, v_2, v_3, v_5, v_6, v_7, u_1, u_2\}$.

Note that the products of the weights along the branches $\beta_1$, $\tilde\beta_2$, $\tilde\beta_3$, $\beta_4$ are $5, 12, 14, 96$, respectively. By construction, these are the first edge weights of each of the branches $\beta_1$, $\tilde\beta_2$, $\tilde\beta_3$, $\beta_4$ in $Y_S(G)$, respectively (see Fig. 2.11, middle right). Every other edge of $Y_S(G)$ has unit weight.

To construct $Z'_S(G)$, we first select the branches $\beta_j$ for $v_j \in S$: since no branch terminates at $v_1$, the selected branch for $v_1$ is empty; $\beta_7 = \beta_4$; and $\beta_6$ is either $\beta_1$ or $\tilde\beta_2$. Here we make the arbitrary choice of letting $\beta_6 = \beta_1$. Since $\{\tilde\beta_2\}_{int} = \{u_1\}$ and $\{\tilde\beta_3\}_{int} = \{u_2\}$, we have that $Z'_S(G)$ has vertex set $\mathcal{V} - \{u_1, u_2\} = \{v_1, v_2, v_3, v_5, v_6, v_7\}$.

Given that the branch set $B = B_S(Y_S(G)) - \{\beta_6, \beta_7\} = \{\tilde\beta_2, \tilde\beta_3\}$, constructing $Z''_S(G)$ amounts to adding two edges to $Z'_S(G)$, one for $\tilde\beta_2$ and one for $\tilde\beta_3$.

Observe that $\tilde\beta_2 \in B_{16}(Y_S(G); S)$ with first edge weight 12 and $|\tilde\beta_2| = 3$. Since $|\beta_6| = 3$, we add to the graph $Z'_S(G)$ a parallel edge from $v_1$ to $v_3$ with weight 12. For the branch $\tilde\beta_3 \in B_{17}(Y_S(G); S)$, note that it has first edge weight 14 and $|\tilde\beta_3| = 3$. Since $|\beta_7| = 4$, we add to the graph $Z'_S(G)$ an edge from $v_1$ to $v_5$ with weight 14. The result is the graph $Z''_S(G)$.

Note that the two parallel edges from $v_1$ to $v_3$ in $Z''_S(G)$ have weights 5 and 12, respectively. Hence, in $Z_S(G) = Z_S(G; \beta_1, \beta_4)$, shown in Fig. 2.11 (far right), the edge $e_{13}$ has weight $5 + 12 = 17$.

Since the edge weights of $G$ are $\omega(E) = \{1, \dots, 8\}$, these weights belong to the semiring of nonnegative integers $\mathbb{Z}^+ = \{0, 1, 2, \dots\}$. As guaranteed by Theorem 2.9, the graph $Z_S(G)$ also has edge weights in $\mathbb{Z}^+$. Moreover, one can also compute that $\sigma_0(Z_S(G)) = \sigma_0(G)$ and $\rho(Z_S(G)) = \rho(G)$.
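The spectral claim at the end of Example 2.11 can be checked directly: since neither $G$ nor $Z_S(G)$ contains a cycle, both weighted adjacency matrices are nilpotent, so both graphs have $\sigma_0 = \emptyset$ and $\rho = 0$. In the sketch below, the placement of the weights $1, \dots, 8$ on $G$ is our own assumption, chosen to match the stated branch products $5, 12, 14, 96$ (the figure fixes the actual placement); $Z_S(G)$ is read off from the merged graph described above.

```python
# Nilpotency check for the graphs of Example 2.11 (rows = source vertex,
# columns = target vertex).  An n x n adjacency matrix of an acyclic graph
# satisfies A^n = 0, so every eigenvalue is 0.

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_nilpotent(A):
    n = len(A)
    P = A
    for _ in range(n - 1):
        P = mat_mult(P, A)            # P = A^n after the loop
    return all(x == 0 for row in P for x in row)

# G on v1..v7; assumed weights: v1->v3 (1), v3->v6 (5), v1->v4 (2),
# v4->v6 (6), v4->v7 (7), v1->v2 (3), v2->v5 (4), v5->v7 (8).
G = [[0, 3, 1, 2, 0, 0, 0],
     [0, 0, 0, 0, 4, 0, 0],
     [0, 0, 0, 0, 0, 5, 0],
     [0, 0, 0, 0, 0, 6, 7],
     [0, 0, 0, 0, 0, 0, 8],
     [0] * 7,
     [0] * 7]

# Z_S(G) on v1, v2, v3, v5, v6, v7: v1->v2 (96), v1->v3 (17),
# v1->v5 (14), v2->v5 (1), v3->v6 (1), v5->v7 (1).
Z = [[0, 96, 17, 14, 0, 0],
     [0, 0, 0, 1, 0, 0],
     [0, 0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0, 1],
     [0] * 6,
     [0] * 6]

print(is_nilpotent(G), is_nilpotent(Z))   # True True
```

Both matrices power to zero, so the two nonzero spectra agree (both are empty) and $\rho(Z_S(G)) = \rho(G) = 0$, as Theorem 2.9 guarantees.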

By first expanding a graph and then merging its branches, we will typically change the size of the graph. To determine the size of the resulting graph, we will use the following notation. For a graph $G = (V, E, \omega)$, let $|G| = |V|$, i.e., the number of vertices in $V$.

Proposition 2.1. For $G = (V, E, \omega)$, let $S = \{v_1, \dots, v_m\} \in st_0(G)$. Then

$$|Z_S(G)| = |S| + \sum_{j=1}^{m} \big(|\beta_j| - 2\big),$$

where $|\beta_j| = 2$ if $\beta_j = \emptyset$.

Proof. Let $Y_S(G) = (\mathcal{V}, \mathcal{E}, \nu)$, where we suppose $\alpha, \beta \in B_S(Y_S(G))$ and $\alpha \neq \beta$. Note that the branches of $B_S(Y_S(G))$ are pairwise independent and each vertex of $\mathcal{V}$ belongs to a branch of $B_S(Y_S(G))$. Hence, $\{\alpha\}_{int} \cap \{\beta\}_{int} = \emptyset$ and $\{\beta\}_{int} \cap S = \emptyset$. Therefore, $\mathcal{V}$ is the disjoint union

$$\mathcal{V} = S \cup \Bigg( \bigcup_{\beta \in B_S(Y_S(G))} \{\beta\}_{int} \Bigg).$$

Recall that the set $U$, given by (2.12), is the set of interior vertices of the branches $B_S(Y_S(G)) - \{\beta_1, \dots, \beta_m\}$. Thus, the vertex set $\mathcal{V} - U$ is given by

$$\mathcal{V} - U = S \cup \Bigg( \bigcup_{j=1}^{m} \{\beta_j\}_{int} \Bigg),$$


Fig. 2.12 The graphs $X_S(G)$ and $Z_S(G)$ for $S = \{v_1, v_3\}$, both of which are larger than $G$

where $\{\beta_i\}_{int} \cap \{\beta_j\}_{int} = \emptyset$ for $i \neq j$ and each $\{\beta_j\}_{int} \cap S = \emptyset$. Therefore,

$$|\mathcal{V} - U| = |S| + \sum_{j=1}^{m} \big(|\beta_j| - 2\big),$$

since each branch $\beta_j$ has $|\beta_j| - 2$ interior vertices. If $\beta_j = \emptyset$, then this branch has no interior vertices, which is accounted for by the convention $|\beta_j| = 2$.

Note that $|Z'_S(G)| = |Y_S(G)|_{\mathcal{V}-U}| = |\mathcal{V} - U|$. Since constructing $Z''_S(G)$ from $Z'_S(G)$ involves only the addition of edges, and constructing $Z_S(G)$ from $Z''_S(G)$ involves only the removal of parallel edges, it follows that $|Z_S(G)| = |\mathcal{V} - U|$. This completes the proof. □

In Example 2.11, we considered the construction of the graph $Z_S(G)$ in Fig. 2.11 over the complete structural set $S = \{v_1, v_6, v_7\}$. From Proposition 2.1, it follows that $|Z_S(G)| = |S| + (|\beta_1| - 2) + (|\beta_6| - 2) + (|\beta_7| - 2)$. Since the selected branch for $v_1$ is empty, $\beta_6 = v_1, v_3, v_6$, and $\beta_7 = v_1, v_2, v_5, v_7$, we have $|Z_S(G)| = 3 + 0 + 1 + 2 = 6$.
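The count above is mechanical; a minimal sketch of Proposition 2.1's formula, using the branch data of Example 2.11 and the convention that an empty branch counts as having two vertices:

```python
# Size of the merged graph via Proposition 2.1:
# |Z_S(G)| = |S| + sum_j (|beta_j| - 2), with |beta_j| = 2 when beta_j is empty.

def merged_size(S, selected_branches):
    # selected_branches: one branch (tuple of vertices, () if empty) per vertex of S
    return len(S) + sum(max(len(b), 2) - 2 for b in selected_branches)

S = ("v1", "v6", "v7")
beta_1 = ()                             # no branch terminates at v1
beta_6 = ("v1", "v3", "v6")
beta_7 = ("v1", "v2", "v5", "v7")
print(merged_size(S, [beta_1, beta_6, beta_7]))   # 6
```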

Note that in this example, $|G| > |Z_S(G)|$. Hence $Z_S(G)$ can be considered a reduction of $G$ over the weight set $\mathbb{Z}^+$ having the same nonzero spectrum. However, we note that it may happen that $|Z_S(G)| > |G|$ for a particular graph $G$, as shown in the following example.

Example 2.12. Consider the unweighted graph $G$ in Fig. 2.12 (left) with the complete structural set $S = \{v_1, v_3\}$. The graph $Z_S(G)$ shown in Fig. 2.12 (right) is strictly larger than the original graph $G$, i.e., $Z_S(G)$ has more vertices than $G$. Therefore, even though the branches of the expansion $X_S(G)$, shown in Fig. 2.12 (center), have been merged to form $Z_S(G)$, this resulting graph is still larger than the original graph $G$. At this point, it is yet unknown whether there is a general construction (algorithm) that allows one to reduce an arbitrary graph $G \in \mathbb{G}$ in size while preserving the semiring of its edge weights and (nonzero) spectrum.

As a final remark for this chapter, we note that it is possible to define a merged graph $Z_S(G)$ over a general structural set $S$ rather than a complete structural set. This more general construction is analogous to the procedure described in this


section in that $Z_S(G)$ is still constructed by expanding, reweighting, and merging the branches of $G$. The major difference, though, is that if $S$ is a structural set, then these branches may have loops.

The reason we consider merged graphs over complete structural sets is for the sake of simplicity. We leave it as an exercise to the reader to apply these techniques over structural sets that are not complete.


Chapter 3
Stability of Dynamical Networks

In this chapter, we consider networks from a dynamical systems point of view. When we speak of a system as being "dynamic," what we mean is that the system has a number of possible states and that at each moment of time, the system is in one of those states. The way in which the system's state changes from one moment in time to the next is referred to as the system's dynamics.

If a network is dynamic, as most real networks are, the network's state at a particular point in time is the collection of each of the states of its elements. That is, the dynamics of a network is the collective dynamics of the network's individual elements.

Network dynamics are a combination of the following two factors. First, every network element may have its own intrinsic dynamics, so that it behaves, in isolation, as its own dynamical system. The second dynamic component of a network comes from the interactions among the various network elements.

Dynamical networks are therefore networks of interacting dynamical systems, which are composed of (i) local dynamical systems, (ii) interactions among these local systems, and (iii) the network's graph of interactions, often called the network topology (see Sect. 2.1). The network's local systems describe the dynamics of the network elements in isolation. The interactions and graph of interactions describe, respectively, the dynamics and structure of the interactions among these elements.

Our major focus in this chapter is the stability of a dynamical network. The particular type of stability we consider relates to whether a network has a globally attracting fixed point.

We analyze the stability of a network in two different settings. In the first, we consider general networks of interacting dynamical systems in which the interactions do not involve time delays. In the second, we allow the interactions to have delays.

We also distinguish two types of delays: those that are single and those that are of a multiple-delay type. By single, we mean that the delayed interactions between any two network elements happen on a single time scale. Otherwise, the network has a multiple-type delay.

© Springer Science+Business Media New York 2014
L. Bunimovich, B. Webb, Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks, Springer Monographs in Mathematics, DOI 10.1007/978-1-4939-1375-6__3

One of the main results of this chapter is that despite the wide variety and potential complexity of such systems, it is possible to give a criterion for the global stability of a general dynamical network. In fact, using this criterion, we prove that a network's stability is invariant with respect to the removal of time delays and the addition of single-type time delays. This leads us to introduce the notion of intrinsic stability, which is a stronger form of global stability.

To carry out this analysis, we introduce a new family of graph transformations based on the isospectral graph transformations from Chap. 1. These transformations, which are called bounded radial transformations and isoradial expansions, are used, respectively, to modify and maintain the spectral radius of a graph.

We then introduce the notion of an implicit delay in an undelayed dynamical network and show that by removing a network's implicit delays, the result is a lower-dimensional system called a network restriction. We then show that such restrictions can be used to obtain improved estimates of a network's global stability.

The reason for this improvement is that network restrictions concentrate the information stored in a network in a smaller set of interactions and elements in the "restricted" network. This consolidation of information leads to improved local estimates and ultimately to an improved estimate of the network's stability.

This idea of consolidating network information to gain improved estimates is an idea used throughout this book. In Chap. 4, this strategy is used to find improved eigenvalue estimates. In Chap. 6, this technique is used to gain improved escape rate estimates in open dynamical systems or, alternatively, rates of information absorption in networks.

The ideas developed in this chapter are illustrated by various examples of Cohen–Grossberg neural networks. We note that this approach to analyzing the global stability of networks is not limited in any way to the study of interacting dynamical systems but can be applied to any time-delayed or undelayed dynamical system.

In particular, this approach is applicable to every multidimensional dynamical system described by a system of simultaneous differential or difference equations. Naturally, it makes sense to apply these techniques only to those systems whose variables depend on one another in some irregular way, e.g., not all variables depend on all other variables.

3.1 Networks as Dynamical Systems

As mentioned in the introduction, dynamical networks are composed of (i) local dynamical systems, (ii) interactions between these local systems, and (iii) the network's graph of interactions (see [1, 9, 10]).

To give ourselves a mathematical framework in which to study dynamical networks, we let $\varphi_i : X_i \to X_i$ be maps on the complete metric spaces $(X_i, d)$ for $i \in I = \{1, \dots, n\}$, where

$$L_i = \sup_{x_i \neq y_i \in X_i} \frac{d(\varphi_i(x_i), \varphi_i(y_i))}{d(x_i, y_i)} < \infty. \qquad (3.1)$$

From the individual spaces $(X_i, d)$, we define $(X, d_{\max})$ to be the metric space on $X = \bigoplus_{i=1}^{n} X_i$ with distance given by

$$d_{\max}(x, y) = \max_{i \in I} \{ d(x_i, y_i) \}, \qquad x, y \in X.$$

We let $(\varphi, X)$ be the direct product of the systems $(\varphi_i, X_i)$ over $i \in I$, where we refer to $(\varphi, X)$ as a collection of local systems.

Definition 3.1. A map $F : X \to X$ is called an interaction if for every $j \in I$, there exist a nonempty collection of indices $I_j \subseteq I$ and a continuous function

$$F_j : \bigoplus_{i \in I_j} X_i \to X_j,$$

where $F_j(x) = F_j(x|_{I_j})$ for $j \in I$ and $x \in X$. The superposition of the interaction and local systems

$$\mathcal{F}(x) = (F \circ \varphi)(x) \quad \text{for } x \in X$$

generates the dynamical system $(\mathcal{F}, X)$, which is a dynamical network.
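A toy instance of Definition 3.1 can be written in a few lines: two scalar local maps $\varphi_i$ are composed with an interaction $F$ whose component $F_j$ depends only on the coordinates in $I_j$. All maps and constants below are our own illustrative choices, not from the text.

```python
# A dynamical network F = F ∘ φ on X = R^2, per Definition 3.1.
import math

def phi(x):                      # local dynamics, applied coordinatewise
    return [0.5 * x[0], 0.5 * x[1]]

def F(x):                        # interaction: each F_j depends on x|_{I_j}
    return [0.3 * math.tanh(x[1]) + 1.0,   # I_1 = {2}
            0.3 * math.tanh(x[0]) - 1.0]   # I_2 = {1}

def network(x):                  # the superposition F(x) = (F ∘ φ)(x)
    return F(phi(x))

x = [2.0, -3.0]
for _ in range(60):              # iterate the dynamical network (F, X)
    x = network(x)
print([round(v, 6) for v in x])  # the orbit settles onto a fixed point
```

Here each coordinate map is a contraction, so the orbit converges; stability of such networks in general is exactly the subject of Theorem 3.1 below... or rather, of the stability theory developed in this section.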

A given dynamical network $(\mathcal{F}, X)$ may depend on a number of parameters $p = (p_1, \dots, p_m)$. If this is the case, we write $\mathcal{F}(x) = \mathcal{F}(x; p)$, although we may at times suppress this dependence.

The purpose of this section, beyond introducing the class of dynamical networks we wish to study, is to study the stability of this class of systems. Recall that the dynamical system $(f, M)$ on a metric space $M$ has a globally attracting fixed point $\tilde x \in M$ if

$$\lim_{k \to \infty} f^k(x) = \tilde x \quad \text{for any } x \in M.$$

If a system has a globally attracting fixed point, we call it globally stable or simply stable.

One of the main questions we wish to address in this chapter is whether for a given collection of local systems $(\varphi, X)$, a particular interaction leads to a stable dynamical network $(\mathcal{F}, X)$. To consider this, suppose $F$ satisfies the following Lipschitz condition for the finite constants $\Lambda_{ij} \ge 0$:

$$d\big(F_j(x), F_j(y)\big) \le \sum_{i \in I_j} \Lambda_{ij}\, d(x_i, y_i) \qquad (3.2)$$


for all $x, y \in X$ and $i, j \in I$. The Lipschitz constants $\Lambda_{ij}$ in equation (3.2) form the nonnegative matrix $\Lambda \in \mathbb{R}^{n \times n}$, where $\Lambda_{ij} = 0$ if $i \notin I_j$.

Using the matrix $\Lambda$ and the constants $L_i$ given in equation (3.1), we define the matrix

$$\Lambda^T \cdot \operatorname{diag}[L_1, \dots, L_n] = \begin{pmatrix} \Lambda_{11} L_1 & \cdots & \Lambda_{n1} L_n \\ \vdots & \ddots & \vdots \\ \Lambda_{1n} L_1 & \cdots & \Lambda_{nn} L_n \end{pmatrix} \in \mathbb{R}^{n \times n},$$

which we refer to as a stability matrix of $(\mathcal{F}, X)$. This notion of a stability matrix is used in the following theorem to give a sufficient condition for the stability of the dynamical network $(\mathcal{F}, X)$.

Theorem 3.1. Suppose $A$ is a stability matrix of the dynamical network $(\mathcal{F}, X)$. If the spectral radius $\rho(A)$ is less than 1, then the dynamical network $(\mathcal{F}, X)$ is stable.

Before proving Theorem 3.1, we observe that if $v \in \mathbb{C}^{n \times 1}$, then its $\ell^\infty$ norm is $\|v\|_\infty = \max_i |v_i|$. Moreover, the $\ell^\infty$ norm of a matrix $A \in \mathbb{C}^{n \times n}$ is

$$|||A|||_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|,$$

which is the maximum absolute row sum of $A$. The $\ell^\infty$ matrix norm has the additional property that it is submultiplicative, i.e., $|||AB|||_\infty \le |||A|||_\infty\, |||B|||_\infty$ for every $A, B \in \mathbb{C}^{n \times n}$. With this in place, we give a proof of Theorem 3.1.

Proof. For $x, y \in X$ and $1 \le j \le n$,

$$d\big(\mathcal{F}(x)_j, \mathcal{F}(y)_j\big) = d\big(F_j(\varphi(x)), F_j(\varphi(y))\big) \le \sum_{i \in I} \Lambda_{ij}\, d\big(\varphi_i(x_i), \varphi_i(y_i)\big) \le \sum_{i \in I} \Lambda_{ij} L_i\, d(x_i, y_i).$$

Therefore, entrywise,

$$\begin{bmatrix} d\big(\mathcal{F}_1(x), \mathcal{F}_1(y)\big) \\ \vdots \\ d\big(\mathcal{F}_n(x), \mathcal{F}_n(y)\big) \end{bmatrix} \le A \begin{bmatrix} d(x_1, y_1) \\ \vdots \\ d(x_n, y_n) \end{bmatrix}.$$

Since

$$\begin{bmatrix} d\big(\mathcal{F}^2_1(x), \mathcal{F}^2_1(y)\big) \\ \vdots \\ d\big(\mathcal{F}^2_n(x), \mathcal{F}^2_n(y)\big) \end{bmatrix} \le A \begin{bmatrix} d\big(\mathcal{F}_1(x), \mathcal{F}_1(y)\big) \\ \vdots \\ d\big(\mathcal{F}_n(x), \mathcal{F}_n(y)\big) \end{bmatrix} \le A^2 \begin{bmatrix} d(x_1, y_1) \\ \vdots \\ d(x_n, y_n) \end{bmatrix}$$

by the same reasoning, it follows inductively that

$$d_{\max}\big(\mathcal{F}^k(x), \mathcal{F}^k(y)\big) \le \left\| A^k \begin{bmatrix} d(x_1, y_1) \\ \vdots \\ d(x_n, y_n) \end{bmatrix} \right\|_\infty \qquad (3.3)$$

for all $k > 0$.

By the Jordan canonical form theorem, there exist a nonsingular matrix $B \in \mathbb{C}^{n \times n}$ and a block-diagonal matrix $J \in \mathbb{C}^{n \times n}$ such that $A = BJB^{-1}$. In particular,

$$J = \begin{bmatrix} J_{m_1}(\lambda_1) & 0 & \cdots & 0 \\ 0 & J_{m_2}(\lambda_2) & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & J_{m_t}(\lambda_t) \end{bmatrix},$$

where

$$J_{m_i}(\lambda_i) = \begin{bmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & & \vdots \\ \vdots & & \ddots & \ddots & \\ & & & \lambda_i & 1 \\ 0 & 0 & \cdots & 0 & \lambda_i \end{bmatrix} \in \mathbb{C}^{m_i \times m_i}$$

and $\lambda_i \in \sigma(A)$ for $1 \le i \le t$.

Moreover, since $J$ is block diagonal, it follows that for $k \ge 0$,

$$J^k = \begin{bmatrix} J^k_{m_1}(\lambda_1) & 0 & \cdots & 0 \\ 0 & J^k_{m_2}(\lambda_2) & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & J^k_{m_t}(\lambda_t) \end{bmatrix}.$$

Since the $\ell^\infty$ norm of a matrix is its maximum absolute row sum, we have

$$|||J^k|||_\infty = \max_{1 \le i \le t} |||J^k_{m_i}(\lambda_i)|||_\infty.$$


The $k$th power of a Jordan block $J_{m_i}(\lambda_i)$ can be computed to be

$$J^k_{m_i}(\lambda_i) = \begin{bmatrix} \binom{k}{0}\lambda_i^{k} & \binom{k}{1}\lambda_i^{k-1} & \binom{k}{2}\lambda_i^{k-2} & \cdots & \binom{k}{m_i-1}\lambda_i^{k-m_i+1} \\ 0 & \binom{k}{0}\lambda_i^{k} & \binom{k}{1}\lambda_i^{k-1} & \cdots & \binom{k}{m_i-2}\lambda_i^{k-m_i+2} \\ \vdots & & \ddots & & \vdots \\ 0 & & & \binom{k}{0}\lambda_i^{k} & \binom{k}{1}\lambda_i^{k-1} \\ 0 & 0 & \cdots & 0 & \binom{k}{0}\lambda_i^{k} \end{bmatrix}$$

for every $k \ge m_i - 1$. Hence for $k \ge m_i - 1$, the matrix norm is given by

$$|||J^k_{m_i}(\lambda_i)|||_\infty = \sum_{j=0}^{m_i-1} \left| \binom{k}{j} \lambda_i^{k-j} \right| = \left[ \left| \binom{k}{0}\lambda_i^{m_i-1} \right| + \left| \binom{k}{1}\lambda_i^{m_i-2} \right| + \cdots + \left| \binom{k}{m_i-1} \right| \right] \left| \lambda_i^{k-m_i+1} \right| \le C_{m_i}\, \rho(A)^{k-m_i+1},$$

where $C_{m_i} = \sum_{j=0}^{m_i-1} \left| \binom{k}{j} \lambda_i^{m_i-j-1} \right|$.

Suppose $\rho(A) < 1$. Letting $m = \max_{1 \le i \le t} m_i$ and $C = \max_{1 \le i \le t} C_{m_i}$ yields

$$|||J^k|||_\infty \le C\, \rho(A)^{k-m+1} \quad \text{for all } k \ge m - 1.$$
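The binomial formula for $J^k_{m_i}(\lambda_i)$ above is easy to spot-check with exact arithmetic. The block size, eigenvalue, and power below are illustrative choices.

```python
# Verify (J^k)_{r, r+j} = C(k, j) * lam^(k - j) for a single Jordan block,
# using exact rational arithmetic.
from fractions import Fraction
from math import comb

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

lam = Fraction(1, 2)                 # an eigenvalue with |lam| < 1
m = 3                                # block size m_i
J = [[lam if i == j else (1 if j == i + 1 else 0) for j in range(m)]
     for i in range(m)]

k = 7                                # any k >= m - 1
P = J
for _ in range(k - 1):
    P = mat_mult(P, J)               # P = J^k

for i in range(m):
    for j in range(m):
        expected = comb(k, j - i) * lam ** (k - (j - i)) if j >= i else 0
        assert P[i][j] == expected   # entries match the binomial formula
print("J^k matches the binomial formula for k =", k)
```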

Via equation (3.3), we obtain

$$\sum_{k=m}^{\infty} d_{\max}\big(\mathcal{F}^k(x), \mathcal{F}^k(y)\big) \le \sum_{k=m}^{\infty} \left\| A^k \begin{bmatrix} d(x_1, y_1) \\ \vdots \\ d(x_n, y_n) \end{bmatrix} \right\|_\infty \le \sum_{k=m}^{\infty} |||B J^k B^{-1}|||_\infty \left\| \begin{bmatrix} d(x_1, y_1) \\ \vdots \\ d(x_n, y_n) \end{bmatrix} \right\|_\infty$$

$$\le |||B|||_\infty\, |||B^{-1}|||_\infty \left\| \begin{bmatrix} d(x_1, y_1) \\ \vdots \\ d(x_n, y_n) \end{bmatrix} \right\|_\infty \sum_{k=m}^{\infty} |||J^k|||_\infty \le |||B|||_\infty\, |||B^{-1}|||_\infty \left\| \begin{bmatrix} d(x_1, y_1) \\ \vdots \\ d(x_n, y_n) \end{bmatrix} \right\|_\infty \frac{C\rho(A)}{1 - \rho(A)} < \infty.$$

Letting $y = \mathcal{F}(x)$, it then follows that $\sum_{k=m}^{\infty} d_{\max}\big(\mathcal{F}^k(x), \mathcal{F}^{k+1}(x)\big) < \infty$.


Hence, for $\epsilon > 0$, there exists $N$ such that $\sum_{k=N}^{\infty} d_{\max}\big(\mathcal{F}^k(x), \mathcal{F}^{k+1}(x)\big) < \epsilon$. Therefore,

$$d_{\max}\big(\mathcal{F}^k(x), \mathcal{F}^{\ell}(x)\big) < \epsilon$$

for every $k, \ell > N$, implying that the sequence $\{\mathcal{F}^k(x)\}_{k \ge 1}$ is Cauchy. Since $X$ is complete, this sequence converges.

If $\mathcal{F}^k(x) \to \tilde x$, then since $\mathcal{F}$ is continuous, it follows that $\mathcal{F}(\tilde x) = \tilde x$. Moreover, since $\sum_{k=1}^{\infty} d_{\max}\big(\mathcal{F}^k(\tilde x), \mathcal{F}^k(y)\big) < \infty$ for every $y \in X$, it follows that $\mathcal{F}^k(y) \to \tilde x$, which completes the proof. □

We note that the condition for stability given in Theorem 3.1 is a sufficient condition. Indeed, it is possible to find systems that are stable but do not have a stability matrix $A$ such that $\rho(A) < 1$. A natural question, then, is what it means for a dynamical network to be stable in the sense of Theorem 3.1.

As we will demonstrate, the stability condition in Theorem 3.1 guarantees more than that a dynamical network is stable. For this reason, we give this type of stability the following name.

Definition 3.2. If $\rho(A) < 1$, where $A$ is a stability matrix of $(\mathcal{F}, X)$, then we say that this network is intrinsically stable.

For now, Theorem 3.1 implies that an intrinsically stable network is globally stable. Later, in Sect. 3.2, it will be shown that if a network is intrinsically stable, then it remains stable even if certain types of time delays are added to its dynamics (see Theorem 3.4). Since the addition of time delays can destabilize a network, intrinsic stability is in this sense a stronger form of global stability.

Before continuing, we note that a dynamical network $(\mathcal{F}, X)$ need not be differentiable to have a stability matrix. However, if a network is differentiable, then not only is it computationally easier to compute a stability matrix of the network, it is possible to compute the network's optimal stability matrix.

To demonstrate this, suppose the maps $\varphi : X \to X$ and $F : X \to X$ are continuously differentiable and each $X_i \subseteq \mathbb{R}$ is a closed but possibly unbounded interval. If the constants

$$L_i = \max_{x \in X} |\varphi'_i(x_i)| < \infty, \qquad (3.4)$$

$$\Lambda_{ij} = \max_{x \in X} |(DF)_{ji}(x)| < \infty, \qquad (3.5)$$

where $DF$ is the matrix of first partial derivatives of $F$, then we write $F \in C^{1,\infty}(X)$. For $F \in C^{1,\infty}(X)$, the matrix

$$A_{\mathcal{F}} = \Lambda^T \cdot \operatorname{diag}[L_1, \dots, L_n]$$

given by (3.4) and (3.5) can be shown to be a stability matrix of $(\mathcal{F}, X)$.


The matrix $A_{\mathcal{F}}$ can be shown to be optimal for determining whether $(\mathcal{F}, X)$ is intrinsically stable. However, to demonstrate this, we let $\rho(\mathcal{F}) = \rho(A_{\mathcal{F}})$ denote the spectral radius of $(\mathcal{F}, X)$ and note the following. For matrices $A, B \in \mathbb{R}^{n \times n}$, we write $A \le B$ if $A_{ij} \le B_{ij}$ for each $1 \le i, j \le n$. If it is known that $0 \le A \le B$, where $0$ is the zero matrix, then $\rho(A) \le \rho(B)$ (see, for instance, [22]). This is used to prove the following.

Proposition 3.1. Suppose $F \in C^{1,\infty}(X)$. If $B$ is any stability matrix of $(\mathcal{F}, X)$, then $\rho(\mathcal{F}) \le \rho(B)$.

Proof. Suppose $F \in C^{1,\infty}(X)$ and that $B = \tilde\Lambda^T \cdot \operatorname{diag}[\tilde L_1, \dots, \tilde L_n]$ is a stability matrix of $(\mathcal{F}, X)$. For $e_i$, the $i$th standard basis vector of $\mathbb{R}^n$, and $x \in X$, let $h \neq 0$ be such that $y = x + h e_i \in X$. Then by (3.2),

$$d\big(F_j(x), F_j(y)\big) \le \tilde\Lambda_{ij}\, d(x_i, x_i + h) = \tilde\Lambda_{ij} |h|.$$

Hence $|F_j(x) - F_j(y)| / |h| \le \tilde\Lambda_{ij}$. Taking the limit as $h \to 0$, we have

$$|(DF)_{ji}(x)| \le \tilde\Lambda_{ij} \quad \text{for all } x \in X.$$

By setting $\Lambda_{ij} = \max_{x \in X} |(DF)_{ji}(x)|$, it follows that $0 \le \Lambda \le \tilde\Lambda$. Similarly, one can show that $L_i = \max_{x \in X} |\varphi'_i(x_i)| \le \tilde L_i$. Hence the matrix $A_{\mathcal{F}}$ is less than or equal to $B$, implying $\rho(\mathcal{F}) \le \rho(B)$. □

For $F \in C^{1,\infty}(X)$, Proposition 3.1 implies that $\rho(\mathcal{F})$ is optimal for directly determining, via Theorem 3.1, whether $(\mathcal{F}, X)$ is intrinsically stable.

Remark 3.1. In what follows, we formally consider those dynamical networks $(\mathcal{F}, X)$ for which $F \in C^{1,\infty}(X)$. We note that the results in this chapter can be shown to hold for every network satisfying (3.1) and (3.2) by slight modifications of the proofs we give. The reason we restrict our discussion to the class $C^{1,\infty}(X)$ is to simplify the exposition.

As an example of the usefulness of Theorem 3.1, let $(\varphi_i, \mathbb{R})$ be the local systems

$$\varphi_i(x_i) = (1 - \epsilon)x_i + c_i, \qquad (3.6)$$

where $c_i, \epsilon \in \mathbb{R}$ and $1 \le i \le n$. We note that these systems are stable if and only if $|1 - \epsilon| < 1$. We then ask what kind of interaction between these local systems will lead to a stable dynamical network.

For the local systems given by (3.6), we are specifically interested in interactions of the form

$$C_j(x) = x_j + \sum_{i=1}^{n} W_{ij}\, \gamma_i\!\left( \frac{x_i - c_i}{1 - \epsilon} \right), \qquad (3.7)$$


where $\epsilon \neq 1$, $W \in \mathbb{R}^{n \times n}$, and $\gamma_i : \mathbb{R} \to \mathbb{R}$ is any smooth sigmoidal function with Lipschitz constant $L_\gamma \ge 0$. That is, $\gamma_i$ is a bounded differentiable function such that $\gamma'_i(x) > 0$ for all $x \in \mathbb{R}$.

The reason we consider this particular interaction is that the dynamical network $(\mathcal{C}, \mathbb{R}^n)$ with $\mathcal{C} = C \circ \varphi$ is then given by

$$\mathcal{C}_j(x) = (1 - \epsilon)x_j + \sum_{i=1}^{n} W_{ij}\, \gamma_i(x_i) + c_j, \qquad (3.8)$$

which is a special case of a Cohen–Grossberg neural network in discrete time [17]. Because of the large number of parameters involved, we suppress the dependence of the function $\mathcal{C}(x)$ on $c_1, \dots, c_n$, $\epsilon$, and $W$ in those systems that resemble Cohen–Grossberg neural networks.

For such neural networks, the variable $x_i$ represents the activation of the $i$th neuron population. Here, the function $\gamma_i$ describes the $i$th neuron population's response to inputs. The matrix $W$ gives the interaction strengths between each of the $i$th and $j$th neuron populations, which describe how the neurons are connected within the network. The constants $c_i$ indicate constant inputs from outside the system.

To determine a stability criterion for the dynamical network $(\mathcal{C}, \mathbb{R}^n)$, we let $|W|$ denote the matrix with entries $|W|_{ij} = |W_{ij}|$. The following result gives a general stability condition for this class of Cohen–Grossberg neural networks.

Theorem 3.2 (Stability of Cohen–Grossberg Neural Networks). Let $(\mathcal{C}, \mathbb{R}^n)$ be the Cohen–Grossberg network given by (3.8), where each $\gamma_i$ has Lipschitz constant $L_\gamma$. If

$$|1 - \epsilon| + L_\gamma\, \rho(|W|) < 1,$$

then $(\mathcal{C}, \mathbb{R}^n)$ is intrinsically stable.

Proof. The claim is that for the local systems and interaction given by (3.6) and (3.7), the matrix $A = \tilde\Lambda^T \cdot \operatorname{diag}[L_1, \dots, L_n]$ with

$$\max_{x \in X} |\varphi'_i(x_i)| = L_i = |1 - \epsilon|, \quad \text{and}$$

$$\max_{x \in X} |(DC)_{ji}(x)| \le \tilde\Lambda_{ij} = \begin{cases} 1 + \left| \dfrac{W_{ji} L_\gamma}{1 - \epsilon} \right| & \text{for } i = j, \\[6pt] \left| \dfrac{W_{ji} L_\gamma}{1 - \epsilon} \right| & \text{for } i \neq j \end{cases}$$

is a stability matrix of $(\mathcal{C}, \mathbb{R}^n)$.

To see this, note that the constants

$$\Lambda_{ij} = \max_{x \in X} |(DC)_{ji}(x)|$$

satisfy (3.2). Since $\Lambda_{ij} \le \tilde\Lambda_{ij}$, the constants $\tilde\Lambda_{ij}$ must also satisfy (3.2), which verifies the claim.

Since the matrix $A$ has the form

$$A = \begin{bmatrix} |1-\epsilon| + |W_{11}L_\gamma| & |W_{12}L_\gamma| & \cdots & |W_{1n}L_\gamma| \\ |W_{21}L_\gamma| & |1-\epsilon| + |W_{22}L_\gamma| & \cdots & |W_{2n}L_\gamma| \\ \vdots & \vdots & \ddots & \vdots \\ |W_{n1}L_\gamma| & |W_{n2}L_\gamma| & \cdots & |1-\epsilon| + |W_{nn}L_\gamma| \end{bmatrix},$$

the spectral radius of the matrix $A$ is then

$$\rho(A) = \rho\big(|1-\epsilon| I + L_\gamma |W|\big) = |1-\epsilon| + L_\gamma\, \rho(|W|).$$

Therefore, if $|1-\epsilon| + L_\gamma\, \rho(|W|) < 1$, then $(\mathcal{C}, \mathbb{R}^n)$ is intrinsically stable. □

One of the major goals of this chapter is to extend results, such as Theorems 3.1 and 3.2, to the case in which the network's interactions include time delays. However, this requires that we first consider the case in which a dynamical network has trivial local dynamics.
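The criterion of Theorem 3.2 is easy to check numerically. The sketch below takes $\gamma_i = \tanh$ (Lipschitz constant $L_\gamma = 1$) with an illustrative weight matrix $W$, parameter $\epsilon$, and inputs $c$ of our own choosing; it verifies $|1-\epsilon| + L_\gamma\,\rho(|W|) < 1$ and then watches the network (3.8) pull two different initial states onto the same fixed point.

```python
# Intrinsic stability check for a discrete-time Cohen-Grossberg network
# C_j(x) = (1 - eps) x_j + sum_i W[i][j] tanh(x_i) + c_j   (cf. (3.8)).
import math

eps = 0.8                                  # so |1 - eps| = 0.2
W = [[0.1, 0.3],
     [0.2, 0.1]]
c = [1.0, -0.5]
L_gamma = 1.0                              # Lipschitz constant of tanh

# rho(|W|) for a 2x2 nonnegative matrix via the quadratic formula
tr = W[0][0] + W[1][1]
det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
rho_W = (tr + math.sqrt(tr * tr - 4 * det)) / 2
assert abs(1 - eps) + L_gamma * rho_W < 1  # Theorem 3.2's criterion holds

def C(x):
    return [(1 - eps) * x[j]
            + sum(W[i][j] * math.tanh(x[i]) for i in range(2)) + c[j]
            for j in range(2)]

x, y = [5.0, -5.0], [-2.0, 3.0]
for _ in range(200):
    x, y = C(x), C(y)
print(max(abs(x[j] - y[j]) for j in range(2)) < 1e-9)   # True: both orbits converge
```

Since the criterion is satisfied, Theorem 3.2 guarantees a globally attracting fixed point, and the two orbits indeed become indistinguishable.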

For a given dynamical network, it is always formally possible to absorb thedynamics of the network’s local systems into its interaction. This is done byconsidering the dynamical network .F ; X/ to have the interaction F ı ' and nolocal dynamics, i.e., local dynamics given by the identity map Id W X ! X .

In this sense, the theory we present here can be used to deal with generaldynamical systems, which typically do not have local dynamics. That is, in manysystems, the dynamics of each component in the absence of the others cannot beexplicitly defined.

If we have both a collection of local systems .'; X/ and an interaction .F; X/,the network .F ; X/ can always be considered to be a network with no localdynamics. If only the local systems .'; X/ are given, the question is then what kindof interaction induces a specific type of network dynamics. For instance, what typeof interaction induces stability?

The difference between these points of view is that the first focuses on the net-work interactions, while the second emphasizes local dynamics. Both are relevant,but for the moment, we consider the case in which we have a fixed interaction, whichwill be important in the following sections.

By absorbing the network's local dynamics into its interaction, we can consider (F, X) to be the dynamical network with interaction F ∘ φ and no local dynamics, i.e., local dynamics given by Id: X → X. By way of notation, we let (F̂, X) denote the network (F, X) considered as a network without local dynamics.

Proposition 3.2. Assuming F ∈ C¹(X), then ρ(F̂) ≤ ρ(F), where A_F̂ is given by

  (A_F̂)_ij = max_{x∈X} |DF̂_ji(x)|.   (3.9)


Proof. Since (F̂, X) has no local dynamics and the continuously differentiable interaction F̂ is equal to F ∘ φ, it follows that L_i = 1 satisfies (3.4) and that max_{x∈X} |DF̂_ji(x)| satisfies (3.5). Since diag[L_1, …, L_n] = I, the n × n identity matrix, it follows that equation (3.9) holds.

To show that ρ(F̂) ≤ ρ(F), note that

  (A_F̂)_ij = max_{x∈X} |DF̂_ji(x)| = max_{x∈X} |∂F_j/∂x_i(x) · φ′_i(x_i)| ≤ max_{x∈X} |∂F_j/∂x_i(x)| · max_{x_i∈X_i} |φ′_i(x_i)| = Λ_ij L_i.

Hence A_F̂ ≤ Λ^T · diag[L_1, …, L_n], implying ρ(A_F̂) ≤ ρ(Λ^T · diag[L_1, …, L_n]), and the result follows. ∎

Proposition 3.2 has the following consequence. Since ρ(F̂) ≤ ρ(F), by considering (F̂, X) rather than (F, X) we obtain an improved estimate of the network's stability, since these systems have equivalent dynamics. The reason for this improved estimate is that in (F̂, X) we have integrated different parts of the network's dynamics; that is, the local dynamics and interaction are combined.

This technique of compressing network information is the basic method used later, in Sect. 3.4, to obtain improved stability estimates of (F, X) by removing the network's implicit time delays. However, before considering such improvements, we first turn our attention to analyzing the stability of time-delayed networks.

3.2 Time-Delayed Dynamical Networks

In most real networks, the network elements are physically separated by some distance. Additionally, these elements are often used to process incoming information. The time required to send signals over a distance and to process that information inevitably leads to time delays in the network's interaction. These time delays are important to the network's dynamics, since they are often a source of instability and a cause of poor performance.

Our first task in this section is to extend our mathematical framework to include dynamical networks with time delays. This requires that we allow the state F^{k+1}(x) of the dynamical network (F, X) to depend not only on F^k(x) but on some subset of the previous T states F^k(x), F^{k−1}(x), …, F^{k−(T−1)}(x) of the system.

Given the fixed integer T ≥ 1, let 𝒯 = {0, −1, …, −T + 1}. We define the product space

  X_T = ∏_{τ∈𝒯} X,


where a point x = (x^0, …, x^{−T+1}) ∈ X_T has coordinates (x)^τ = x^τ ∈ X and where (x)^τ_i = x^τ_i ∈ X_i for (i, τ) ∈ I × 𝒯. We say that x^τ is x at time τ and that x^τ_i is the ith component of x^τ. Given the local systems (φ, X), we define the function φ̄: X_T → X_T as follows. For x = (x^0, x^{−1}, …, x^{−T+1}) ∈ X_T, let

  φ̄(x^0, x^{−1}, …, x^{−T+1}) = (φ(x^0), x^{−1}, …, x^{−T+1}),

so that only the present (τ = 0) time step of x is affected by φ.

Definition 3.3. A map H: X_T → X is called a time-delayed interaction if for all j ∈ I, there exist a nonempty set I_j ⊆ I × 𝒯 and a function

  H_j: ∏_{(i,τ)∈I_j} X_i → X_j.

The map H is defined by H_j(x) = H_j(x|_{I_j}) for j ∈ I and x ∈ X_T. The superposition of the local systems and the time-delayed interaction,

  ℋ(x) = (H ∘ φ̄)(x) for x ∈ X_T,

generates the time-delayed dynamical network (ℋ, X_T). The orbit of the point (x^0, x^{−1}, …, x^{−T+1}) ∈ X_T under ℋ is the sequence {x^k}_{k>−T}, where

  x^{k+1} = ℋ(x^k, x^{k−1}, …, x^{k−T+1}).

Here we assume that both φ and H are continuously differentiable, so that by a slight abuse of our notation, ℋ ∈ C¹(X_T). As in Sect. 3.1, this assumption is for convenience. We note that the results in this section can be extended to those time-delayed dynamical networks (ℋ, X_T) that are Lipschitz continuous on the complete metric space X_T (see Remark 3.1).

Remark 3.2. For the sake of generality, we could assume that the local systems (φ, X) are also time delayed. However, if such is the case, we can absorb these local time delays into the delayed interaction. Thus, the mathematical framework we present here can be used to investigate the more general case of interacting systems that have time delays in both the local systems and the interaction.

In this section, we will again concentrate on finding a sufficient condition under which a time-delayed dynamical network has a globally attracting fixed point. A fixed point of (ℋ, X_T) is an x̃ ∈ X such that x̃ = ℋ(x̃, …, x̃). The fixed point x̃ ∈ X is a global attractor of (ℋ, X_T) if for every initial condition (x^0, x^{−1}, …, x^{−T+1}) ∈ X_T, we have the limit

  lim_{k→∞} x^k = x̃.

As before, we call the time-delayed dynamical network (ℋ, X_T) stable if it has a globally attracting fixed point.


Since time-delayed networks are similar in many ways to undelayed networks, it is tempting to use the results given in Sect. 3.1 to determine whether (ℋ, X_T) is stable. However, in order to do so, we must first modify the function ℋ.

Definition 3.4. For the time-delayed network (ℋ, X_T), let ℋ̄: X_T → X_T be the map

  ℋ̄(x) = (ℋ(x), x^0, x^{−1}, …, x^{−T+2})

for x = (x^0, x^{−1}, …, x^{−T+1}) ∈ X_T. We call (ℋ̄, X_T) the dynamical network associated with (ℋ, X_T).

We assume that the dynamical network (ℋ̄, X_T) has no local dynamics. That is, we let ℋ̄(x) = (ℋ̄ ∘ Id)(x), so that the network has the interaction ℋ̄: X_T → X_T and local dynamics Id: X_T → X_T.

Remark 3.3. If the local systems (φ, X) are invertible, then it is possible to define (ℋ̄, X_T) as a dynamical network with the nontrivial local systems (φ̄, X_T) and interaction

  H̄(x) = (H(x), φ^{−1}(x^0), x^{−1}, …, x^{−T+2}).

The dynamical network (ℋ̄, X_T) is assumed, for the sake of generality, to have the interaction ℋ̄ and no local dynamics.

The dynamical network (ℋ̄, X_T) has components

  ℋ̄^τ_i(x) = { ℋ_i(x)     if τ = 0,
               x^{τ+1}_i   if −T < τ < 0,        for i ∈ I.   (3.10)

This, together with (3.9), implies that the stability matrix A_ℋ̄ has the entries

  (A_ℋ̄)_ij = max_{x∈X_T} |(Dℋ̄)_ji(x)|   (3.11)

for indices in I × 𝒯. Additionally, since (ℋ̄^k(x))^0 = x^k for every initial condition x = (x^0, x^{−1}, …, x^{−T+1}) ∈ X_T and k ≥ 0, the following holds.

Lemma 3.1. The time-delayed network (ℋ, X_T) is stable if and only if its associated dynamical network (ℋ̄, X_T) is stable.

In light of Lemma 3.1, we let ρ(ℋ) = ρ(ℋ̄) denote the spectral radius of (ℋ, X_T) and say that (ℋ, X_T) is intrinsically stable if ρ(ℋ) < 1. By combining this result with Theorem 3.1, it is possible to investigate the dynamic stability of (ℋ, X_T) via the dynamical network (ℋ̄, X_T).

Corollary 3.1. If ρ(ℋ) < 1, then the time-delayed dynamical network (ℋ, X_T) is stable.


By associating a time-delayed network with a network that formally does not have delays, it is possible to use the theory developed in Sect. 3.1 to study the stability of time-delayed dynamical networks. This approach is illustrated in the following example.

Example 3.1. Consider the time-delayed dynamical network (ℋ, X_T) given by

  ℋ(x^k, x^{k−1}, x^{k−2}, x^{k−3}) =
  [ (1 − ε)x_1^{k−1} + 2a tanh(b x_2^{k−3}) + c_1
    (1 − ε)x_2^{k−1} + 2a tanh(b x_1^{k−3}) + c_2 ]   (3.12)

with local systems φ_i(x_i) = (1 − ε)x_i + c_i for i = 1, 2 and interaction

  H(x^k, x^{k−1}, x^{k−2}, x^{k−3}) =
  [ x_1^{k−1} + 2a tanh(b(x_2^{k−3} − c_2)/(1 − ε))
    x_2^{k−1} + 2a tanh(b(x_1^{k−3} − c_1)/(1 − ε)) ],   (3.13)

where a, b, c_i ∈ ℝ, X = ℝ², T = 4, and ε ≠ 1.

Note that this system has the form of a Cohen–Grossberg network, considered in Sect. 3.1, but with time delays. The function σ(x_i) = tanh(L_σ x_i) is a standard sigmoidal function considered in the theory of Cohen–Grossberg neural networks (see, for example, [29]).

The dynamical network (ℋ̄, X_T) associated with this system has components

  ℋ̄(x) =
  [ ℋ^0_1(x_1^{−1}, x_2^{−3})        [ (1 − ε)x_1^{−1} + 2a tanh(b x_2^{−3}) + c_1
    ℋ̄^{−1}_1(x_1^0)                   x_1^0
    ℋ̄^{−2}_1(x_1^{−1})                x_1^{−1}
    ℋ̄^{−3}_1(x_1^{−2})          =     x_1^{−2}
    ℋ^0_2(x_1^{−3}, x_2^{−1})         (1 − ε)x_2^{−1} + 2a tanh(b x_1^{−3}) + c_2
    ℋ̄^{−1}_2(x_2^0)                   x_2^0
    ℋ̄^{−2}_2(x_2^{−1})                x_2^{−1}
    ℋ̄^{−3}_2(x_2^{−2}) ]              x_2^{−2} ].

Using equation (3.11), we compute the stability matrix A_ℋ̄ to be

  A_ℋ̄ =
  [   0      1    0    0      0      0    0    0
    |1−ε|    0    1    0      0      0    0    0
      0      0    0    1      0      0    0    0
      0      0    0    0    |2ab|    0    0    0
      0      0    0    0      0      1    0    0
      0      0    0    0    |1−ε|    0    1    0
      0      0    0    0      0      0    0    1
    |2ab|    0    0    0      0      0    0    0 ].


Taking the spectral radius of this matrix, we find that

  ρ(A_ℋ̄) = √[ (|1 − ε| + √(|1 − ε|² + 8|ab|)) / 2 ].   (3.14)

From this it follows that ρ(ℋ) < 1, i.e., (ℋ̄, X_T) is intrinsically stable, if and only if |1 − ε| + 2|ab| < 1. By Corollary 3.1, the time-delayed network (ℋ, X_T) is then stable if this condition holds.
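The computation in this example can be checked numerically. The sketch below is our own illustration (the parameter values ε = 0.8, a = 0.1, b = 1 are arbitrary): it assembles the 8 × 8 stability matrix of Example 3.1, compares its spectral radius with the closed form (3.14), and confirms the equivalence with |1 − ε| + 2|ab| < 1.

```python
import numpy as np

def delayed_stability_matrix(eps, a, b):
    # The 8x8 stability matrix of Example 3.1, with coordinates ordered
    # (1,0), (1,-1), (1,-2), (1,-3), (2,0), (2,-1), (2,-2), (2,-3).
    r, q = abs(1 - eps), abs(2 * a * b)
    A = np.zeros((8, 8))
    A[0, 1] = A[1, 2] = A[2, 3] = 1.0   # unit-weight information-passing entries
    A[4, 5] = A[5, 6] = A[6, 7] = 1.0
    A[1, 0] = A[5, 4] = r               # one-step-delayed local terms
    A[3, 4] = A[7, 0] = q               # three-step-delayed cross-coupling
    return A

eps, a, b = 0.8, 0.1, 1.0
A = delayed_stability_matrix(eps, a, b)
rho = max(abs(np.linalg.eigvals(A)))
r, q = abs(1 - eps), abs(2 * a * b)
closed_form = np.sqrt((r + np.sqrt(r**2 + 8 * abs(a * b))) / 2)  # equation (3.14)
print(np.isclose(rho, closed_form), (rho < 1) == (r + q < 1))
```

For these values the numerically computed spectral radius agrees with (3.14), and ρ(A_ℋ̄) < 1 exactly when |1 − ε| + 2|ab| < 1.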

We now turn our attention to determining how modifications of a network's time delays affect the network's stability.

Our first result in this direction states that if a time-delayed dynamical network is known to be intrinsically stable, then removing the network's time delays will not affect its stability. This result is notable because a time-delayed network (or, more generally, a delayed dynamical system) may become unstable when its delays are removed (see Example 3.2). Hence, an intrinsically stable network has a stronger form of stability, since it remains stable even when its delays are removed.

Following this, we show that it is possible to destabilize a dynamical network (F, X) by adding delays to the system, even if ρ(F) < 1. Since the delays that destabilize an intrinsically stable network have a specific form, this will enable us to split network delays into two categories. The first class, called multiple-type delays, comprises those that can have a destabilizing effect on the network. The second class, called single-type delays, consists of those that do not have such an effect.

Before continuing, we formalize the notion of modifying a system's time delays. Our first objective is to describe the removal of delays from a time-delayed dynamical network.

Definition 3.5. Let (ℋ, X_T) be a time-delayed dynamical network. Then the map 𝒰: X → X given by

  𝒰(x) = ℋ(x, …, x) for x ∈ X

generates the undelayed dynamical network (𝒰, X), with local systems (Id, X) and interaction 𝒰: X → X.

Remark 3.4. If the local systems (φ, X) are invertible, we can define (𝒰, X) to be the dynamical network with local systems (φ, X) and interaction

  U(x) = H(x, φ^{−1}(x), φ^{−1}(x), …, φ^{−1}(x))

for x ∈ X. For the sake of generality, though, we define (𝒰, X) to have the interaction 𝒰: X → X and no local dynamics, similar to our definition of (ℋ̄, X_T).


The dynamical network (𝒰, X) is called the undelayed version of (ℋ, X_T). The following result relates the stability of the undelayed network (𝒰, X) to the stability of (ℋ, X_T).

Theorem 3.3. If ρ(ℋ) < 1, then the undelayed version (𝒰, X) of this network is intrinsically stable.

An important consequence of Theorem 3.3 is that if a time-delayed network is intrinsically stable, then the network's stability is invariant under the removal of its time delays. We save the proof of Theorem 3.3 for Sect. 3.3 but illustrate its use with the following example.

Example 3.2. Consider the time-delayed dynamical network (ℋ, X_T) given by

  ℋ(x^k, x^{k−1}) = (α + γ)x^k + αx^{k−1},

with local system φ(x) = (α + γ)x and interaction H(x^k, x^{k−1}) = x^k + αx^{k−1}, where X_T = ℝ² and α, γ ∈ ℝ.

This system has the associated dynamical network

  ℋ̄(x; α, γ) = [ ℋ̄^0_1(x_1^0, x_1^{−1}); ℋ̄^{−1}_1(x_1^0) ] = [ (α + γ)x_1^0 + αx_1^{−1}; x_1^0 ],

which is linear and therefore stable if the matrix of first partial derivatives

  Dℋ̄(0) = [ α + γ   α
               1     0 ]

has a spectral radius strictly within the unit circle. In contrast, the stability matrix of (ℋ, X_T) is given by

  A_ℋ̄ = [ |α + γ|   |α|
              1       0 ].

Let Ω_1 = {(α, γ) ∈ ℝ²: ρ(Dℋ̄(0)) < 1} and Ω_2 = {(α, γ) ∈ ℝ²: ρ(A_ℋ̄) < 1}. Then we have the strict inclusion Ω_2 ⊂ Ω_1, which is shown in Fig. 3.1 (left). Consequently, there are parameter values (α, γ) ∈ ℝ² for which (ℋ, X_T) is stable but ρ(ℋ) ≥ 1.

Recall that Theorem 3.1 together with Lemma 3.1 gives a sufficient condition: if ρ(A_ℋ̄) < 1, then (ℋ, X_T) is stable. However, Theorem 3.3 additionally guarantees that if this condition holds, the undelayed version of this network is also stable. In this example, the undelayed system (𝒰, X) is given by

  𝒰(x; α, γ) = (2α + γ)x for x, α, γ ∈ ℝ,

which has a globally attracting fixed point if and only if |2α + γ| < 1.
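The three parameter regions of this example are easy to probe numerically. The following sketch is our own illustration (the two sample points are arbitrary): membership in Ω_1, Ω_2, and Ω_3 is decided from ρ(Dℋ̄(0)), ρ(A_ℋ̄), and |2α + γ|, respectively.

```python
import numpy as np

def region_memberships(alpha, gamma):
    # Returns (in Omega_1, in Omega_2, in Omega_3) for the system of Example 3.2.
    DH = np.array([[alpha + gamma, alpha],
                   [1.0, 0.0]])                   # linearization of the delayed system
    A = np.abs(DH)                                # stability matrix of the delayed network
    in_O1 = max(abs(np.linalg.eigvals(DH))) < 1   # delayed system stable
    in_O2 = max(abs(np.linalg.eigvals(A))) < 1    # intrinsically stable
    in_O3 = abs(2 * alpha + gamma) < 1            # undelayed system stable
    return bool(in_O1), bool(in_O2), bool(in_O3)

print(region_memberships(0.2, 0.1))   # a point in Omega_2: (True, True, True)
print(region_memberships(1.0, -2.0))  # undelayed stable, delayed unstable: (False, False, True)
```

Sampling the (α, γ)-plane this way reproduces the inclusions pictured in Fig. 3.1: every point of Ω_2 also lies in Ω_1 and Ω_3.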


Fig. 3.1 The parameter regions Ω_1 (blue triangle), Ω_2 (red quadrilateral), and Ω_3 (yellow strip) from Example 3.2

With Ω_3 = {(α, γ) ∈ ℝ²: |2α + γ| < 1}, Fig. 3.1 (right) shows the inclusion Ω_2 ⊂ (Ω_1 ∩ Ω_3). Hence, for every (α, γ) ∈ Ω_2, the time-delayed network (ℋ, X_T) and its undelayed version (𝒰, X) are both stable, as guaranteed by Theorem 3.3 and Corollary 3.1.

Example 3.2 also points out the following. Since Ω_1 ⊄ Ω_3, it is possible to destabilize a network, or more generally a dynamical system, by removing its time delays. Conversely, since Ω_3 ⊄ Ω_1, it is also possible to destabilize a system by introducing time delays. Therefore, the converse of Theorem 3.3 does not hold in general. However, this theorem does have a partial converse if we restrict ourselves to specific types of time-delayed interactions.

To demonstrate this, recall from Definition 3.3 that a time-delayed interaction is a function H: X_T → X whose components are given by

  H_j: ∏_{(i,τ)∈I_j} X_i → X_j,

where I_j ⊆ I × 𝒯.

Definition 3.6. The time-delayed network (ℋ, X_T) is said to have single-type delays if for all j ∈ I, the set I_j ⊆ I × 𝒯 has the property

  (i, τ) ∈ I_j ⟹ (i, σ) ∉ I_j for all σ ≠ τ.

If (ℋ, X_T) does not have this property, we say that it has a multiple-type delay.


Theorem 3.4. Suppose (ℋ, X_T) has single-type delays. Then ρ(ℋ) < 1 if and only if ρ(𝒰) < 1.

As with the other theorems in this section, we prove Theorem 3.4 in Sect. 3.3. Combining Theorems 3.1 and 3.4, we have the following corollary.

Corollary 3.2. Suppose (ℋ, X_T) has single-type delays. If either ρ(ℋ) < 1 or ρ(𝒰) < 1, then both (ℋ, X_T) and (𝒰, X) are stable.

Phrased another way, Corollary 3.2 states that if a network is intrinsically stable, then its stability is invariant under the removal or addition of single-type delays.

Example 3.3. Consider the time-delayed dynamical network (ℋ, X_T) given by equation (3.12) in Example 3.1. The undelayed version of (ℋ, X_T) is the network (𝒰, X) given by

  𝒰(x) = [ (1 − ε)x_1 + 2a tanh(b x_2) + c_1
           (1 − ε)x_2 + 2a tanh(b x_1) + c_2 ]

for a, b, c_i ∈ ℝ, X = ℝ², and ε ≠ 1.

In computing the undelayed network's stability matrix, we find that

  A_𝒰 = [ |1 − ε|   2|ab|
           2|ab|   |1 − ε| ],

from which it follows that

  ρ(𝒰) = |1 − ε| + 2|ab|.   (3.15)

As can be seen from equation (3.13), the network (ℋ, X_T) has single-type time delays. Corollary 3.2 then implies that (ℋ, X_T) is stable if |1 − ε| + 2|ab| < 1.

Observe that this is the same stability condition for (ℋ, X_T) that was formulated in Example 3.1. However, this equivalence of conditions does not imply that the number of computations required to compute ρ(ℋ) is the same as the number needed to compute ρ(𝒰), as can be seen by comparing equations (3.14) and (3.15). In fact, from a computational point of view, it is always easier to analyze the stability of an undelayed dynamical network (𝒰, X) than that of the associated time-delayed network (ℋ, X_T).

If a time-delayed dynamical network (ℋ, X_T) has a multiple-type delay, then the conclusion of Theorem 3.4 does not necessarily hold. For example, the dynamical network (𝒰, X) in Example 3.2 has spectral radius ρ(𝒰) = 0 < 1 for (α, γ) = (1, −2), although the associated time-delayed network does not have a globally attracting fixed point at these parameter values. The reason is that the time-delayed network has a multiple-type delay.
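This destabilization can be seen directly by iterating the delayed recursion of Example 3.2. A minimal sketch (our own; the initial history values are arbitrary): at (α, γ) = (1, −2), the delayed orbit x^{k+1} = (α + γ)x^k + αx^{k−1} = −x^k + x^{k−1} grows without bound, while the undelayed multiplier 2α + γ vanishes.

```python
def delayed_orbit(alpha, gamma, x_prev, x_curr, steps):
    # iterate x^{k+1} = (alpha + gamma) * x^k + alpha * x^{k-1}
    xs = [x_prev, x_curr]
    for _ in range(steps):
        xs.append((alpha + gamma) * xs[-1] + alpha * xs[-2])
    return xs

orbit = delayed_orbit(1.0, -2.0, 1.0, 0.0, 30)
print(abs(orbit[-1]) > 1e3)   # the delayed orbit diverges (Fibonacci-like growth)
print(2 * 1.0 + (-2.0))       # undelayed multiplier 2*alpha + gamma = 0.0
```

The same orbit computed with any (α, γ) in Ω_2 instead converges, consistent with Corollary 3.1.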

Combining Theorems 3.3 and 3.4, we see that a network that is intrinsically stable remains stable under the removal of time delays and the introduction of single-type delays. To apply these results to the class of Cohen–Grossberg networks introduced in Sect. 3.1, suppose (𝒞, X_T) is the time-delayed dynamical network given by

  𝒞_j(x^k, …, x^{k−(T−1)}) = (1 − ε)x_j^{k−τ_jj} + Σ_{i=1}^n W_ij σ(x_i^{k−τ_ij}) + c_j   (3.16)

with local systems φ_i(x_i) = (1 − ε)x_i + c_i and time-delayed interaction

  C_j(x^k, …, x^{k−(T−1)}) = x_j^{k−τ_jj} + Σ_{i=1}^n W_ij σ((x_i^{k−τ_ij} − c_i)/(1 − ε)).   (3.17)

Here, 1 ≤ i, j ≤ n, 0 ≤ τ_ij ≤ T − 1, and ε ≠ 1. As before, W ∈ ℝ^{n×n}, X = ℝ^n, and σ: ℝ → ℝ is a smooth function with Lipschitz constant L_σ.

Equations (3.16) and (3.17) describe a class of time-delayed Cohen–Grossberg networks whose stability analysis has attracted a good deal of interest [14, 25, 29]. Since (𝒞, ℝ^{nT}) has single-type delays, the following result is a consequence of the proofs of Theorems 3.2 and 3.4.

Theorem 3.5 (Stability of Time-Delayed Cohen–Grossberg Networks). Suppose (𝒞, ℝ^{nT}) is the time-delayed Cohen–Grossberg network given by (3.16), where σ has Lipschitz constant L_σ. If

  |1 − ε| + L_σ ρ(|W|) < 1,

then (𝒞, ℝ^{nT}) is intrinsically stable.
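As a numerical sanity check of Theorem 3.5 (our own sketch; the network size, weights, and delay pattern below are arbitrary choices satisfying the hypothesis), one can iterate the delayed Cohen–Grossberg update (3.16) with randomly chosen single-type delays and watch the orbit settle onto a fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 5
eps = 0.7
W = 0.1 * np.ones((n, n))             # rho(|W|) = 0.3, so |1-eps| + L_sigma*rho(|W|) = 0.6 < 1
c = rng.standard_normal(n)
sigma = np.tanh                       # Lipschitz constant L_sigma = 1
tau = rng.integers(0, T, size=(n, n)) # one fixed delay tau_ij per interaction (single-type)

# iterate the delayed update (3.16) from a random initial history
hist = list(rng.standard_normal((T, n)))
for _ in range(500):
    x_new = np.array([(1 - eps) * hist[-1 - tau[j, j]][j]
                      + sum(W[i, j] * sigma(hist[-1 - tau[i, j]][i]) for i in range(n))
                      + c[j]
                      for j in range(n)])
    hist.append(x_new)
print(np.linalg.norm(hist[-1] - hist[-2]) < 1e-10)  # successive states agree: converged
```

Rerunning with different delay matrices tau leaves the convergence unchanged, which is the robustness that intrinsic stability provides.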

Before proving the results in this section, we note that the notion of intrinsic stability of a dynamical network (or, more generally, of a dynamical system) is potentially important in the construction of real networks, since such networks are inherently time-delayed. Specifically, if a network is designed to be intrinsically stable, then its stability is much more robust to changes affecting its delays than if it were designed to be only stable.

3.3 Graph Structure of a Dynamical Network

To understand how time delays affect the stability of a dynamical network, we first consider how modifying a network's delays affects the network's graph structure. This will, in turn, lead us to the notion of an isoradial graph (matrix) transformation, which is a transformation that preserves the spectral radius of a graph (matrix). This concept will then allow us to prove Theorems 3.3 and 3.4, found in Sect. 3.2.

Fig. 3.2 The graphs of interactions Γ_ℋ and Γ_𝒰 of the time-delayed network and its undelayed version from Examples 3.1 and 3.3, respectively

To each dynamical network there is an associated weighted directed graph called the network's graph of interactions.

Definition 3.7. The graph Γ_F = (V, E, ω) with vertex set V = {v_1, …, v_n}, edges E = {e_ij: i ∈ I_j, j ∈ I}, and edge weights ω(e_ij) = (A_F)_ij is the graph of interactions of the dynamical network (F, X).

The vertex v_i ∈ V of Γ_F corresponds to the ith element of the dynamical network (F, X). Moreover, there is an edge e_ij ∈ E if and only if the jth component of the interaction F(x) depends on the ith coordinate of x. The weight ω(e_ij) measures the strength of the interaction between the ith and jth network elements.

As (ℋ̄, X_T) is a dynamical network, it also has a graph of interactions Γ_ℋ̄. For convenience, we let this be the graph of interactions of the time-delayed dynamical network (ℋ, X_T), so that Γ_ℋ = Γ_ℋ̄.

In this graph, the vertex v_i^τ of Γ_ℋ corresponds to the component ℋ̄_i^τ of ℋ̄ and represents the ith network element τ time steps in the past. Each edge from the vertex v_i^{τ+1} to the vertex v_i^τ has unit weight, since ℋ̄_i^τ(x_i^{τ+1}) = x_i^{τ+1}. This corresponds to the fact that these edges simply pass along information until it is used to update the state of some network element. The graph of interactions of the time-delayed network (ℋ, X_T) from Example 3.1 and its undelayed version (𝒰, X) considered in Example 3.3 are shown in Fig. 3.2.

Single- and multiple-type delays of (ℋ, X_T) can be characterized in terms of the path and cycle structure of Γ_ℋ. A time-delayed network (ℋ, X_T) has single-type delays if for all i, j ∈ I, there is at most one path of the form v_i^0, v_i^{−1}, …, v_i^{−τ+1}, v_j^0 from v_i^0 to v_j^0 in Γ_ℋ. If there are two or more such paths, then the network has a multiple-type delay. Using this criterion, the graph structure of Γ_ℋ in Fig. 3.2 (left) indicates that the network (ℋ, X_T) has single-type delays.

We are now in a position to introduce a type of transformation that allows us to modify the structure and spectral radius of a graph in a specific way.


Fig. 3.3 Replacing the edge e_ij of G (shown left) by the subgraph shown right yields the graph G_θ

Before proceeding, we note that a graph G is a nonnegative graph if its adjacency matrix M(G) ∈ ℝ^{n×n} is nonnegative, i.e., M(G)_ij ≥ 0 for all 1 ≤ i, j ≤ n.

Definition 3.8. For the nonnegative graph G = (V, E, ω), let G_θ be the graph G in which e_ij ∈ E is replaced as in Fig. 3.3, where ω_1, ω_2 ≥ 0, ω_1 + ω_2 = ω(e_ij), and θ > 0. We call G_θ a bounded radial transformation of G.

Recall that for every graph G there is a corresponding matrix M(G), and every matrix is the adjacency matrix of some graph. Therefore, the graph transformation described in Definition 3.8 can equivalently be viewed as a matrix transformation. Before using this transformation, we first consider the following special case.

Lemma 3.2. Let G_θ be a bounded radial transformation of G. If θ = ρ(G), then ρ(G_θ) = ρ(G).

Lemma 3.2 states that G_θ is an isoradial transformation of G if θ = ρ(G), i.e., a transformation that preserves the spectral radius of G. In order to prove Lemma 3.2, we use the Perron–Frobenius theorem and the following standard terminology.

A directed graph G is called strongly connected if there is a path from each vertex to every other vertex in G, or if G consists of a single vertex. The strongly connected components of G are its maximal strongly connected subgraphs. If M ∈ ℝ^{n×n}, then M is said to be irreducible if the graph G with adjacency matrix M is strongly connected.

Theorem 3.6 (Perron–Frobenius). Let M ∈ ℝ^{n×n} and suppose that M is irreducible and nonnegative. Then

(a) ρ(M) > 0;
(b) ρ(M) is an eigenvalue of M;
(c) ρ(M) is an algebraically simple eigenvalue of M; and
(d) the left and right eigenvectors x and y associated with ρ(M) have strictly positive entries.

If a graph G is not strongly connected, then it has strongly connected components S_1(G), …, S_N(G). For M = M(G), let M_j be the adjacency matrix of the graph S_j(G). Then the eigenvalues of M are

  σ(M) = ⋃_{j=1}^N σ(M_j),   (3.18)

from which it follows that

  ρ(M) = max_{1≤j≤N} ρ(M_j).   (3.19)

We call a strongly connected component S_j(G) trivial if it consists of a single vertex without a loop, in which case σ(S_j(G)) = {0}. We now give a proof of Lemma 3.2.

Proof. Let 𝒢 = G_θ be a bounded radial transformation of G = (V, E, ω) in which θ = ρ(G), V = {v_2, …, v_n}, and the edge e_23 ∈ E is replaced as in Fig. 3.3, where v_k = v_1. Let M(𝒢) − θI have the block form

  M(𝒢) − θI = [ A  B
                C  D ],

where A is a 1 × 1 matrix. Assuming that G is strongly connected, A = [−θ] is invertible by part (a) of the Perron–Frobenius theorem. Using the identity

  det [ A  B
        C  D ] = det(A) · det(D − CA^{−1}B),

we have det(M(𝒢) − θI) = −θ det(D + θ^{−1}CB), where

  (D + θ^{−1}CB)_ij = { ω_1 + ω_2   for i = 2, j = 3,
                        D_ij        otherwise.

Since ω_1 + ω_2 = ω(e_23), it follows that D + θ^{−1}CB = M(G) − θI, so that

  det(M(𝒢) − θI) = −θ det(M(G) − θI) = 0,

which implies θ ∈ σ(𝒢).

With 𝒢 written as a function of ω_1, the claim is that ρ(𝒢(ω_1)) = θ for ω_1 ∈ [0, ω(e_ij)]. To verify this, we note that 𝒢(0) has the strongly connected components G and v_1, where v_1 is trivial. Hence, equation (3.19) implies ρ(𝒢(0)) = θ. Since θ ∈ σ(𝒢(ω_1)), and σ(𝒢(ω_1)) is continuous with respect to ω_1 for ω_1 ∈ [0, ω(e_ij)], parts (b) and (c) of the Perron–Frobenius theorem imply ρ(𝒢(ω_1)) = θ. This verifies the claim and completes the proof. ∎
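Lemma 3.2 is easy to check numerically under one concrete reading of the replacement in Fig. 3.3 (whose picture is not reproduced in this transcript): the edge e_ij is rerouted through a new vertex k carrying weights ω_1 (on i → k) and θ (on k → j), while the direct edge keeps the remaining weight ω_2 = ω(e_ij) − ω_1. The following sketch is our own illustration under that assumption.

```python
import numpy as np

def bounded_radial_transform(M, i, j, theta, w1):
    # Split edge (i -> j): keep a direct edge of weight w2 = M[i, j] - w1 and
    # route the rest through a new vertex k with weights w1 (i -> k) and theta (k -> j).
    n = M.shape[0]
    Mt = np.zeros((n + 1, n + 1))
    Mt[:n, :n] = M
    Mt[i, j] = M[i, j] - w1   # remaining direct weight w2
    Mt[i, n] = w1             # new edge i -> k
    Mt[n, j] = theta          # new edge k -> j
    return Mt

rho = lambda M: max(abs(np.linalg.eigvals(M)))
M = np.array([[0.0, 1.0],
              [0.5, 0.0]])
theta = rho(M)                                    # isoradial case of Lemma 3.2
Mt = bounded_radial_transform(M, 0, 1, theta, w1=0.4)
print(np.isclose(rho(Mt), rho(M)))                # spectral radius is preserved

theta2 = 1.5                                      # theta > rho(M): the bounded regime
Mt2 = bounded_radial_transform(M, 0, 1, theta2, w1=0.4)
print(bool(rho(M) <= rho(Mt2) < theta2))
```

At the eigenvalue λ = θ, a traversal of the new two-edge path contributes ω_1·θ/θ = ω_1, exactly the weight removed from the direct edge, which is the mechanism behind the proof above.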

The graph G_θ is called a bounded radial transformation in Definition 3.8 because of the following result.

Lemma 3.3. If G is nonnegative, then ρ(G) ≤ ρ(G_θ) < θ if and only if θ > ρ(G).

That is, the spectral radius of the graph G_θ is bounded by θ as long as θ > ρ(G). To prove Lemma 3.3, we require the following result, which can be found, for instance, in [22].


Proposition 3.3. Let A, B ∈ ℝ^{n×n} be nonnegative matrices and suppose that A is irreducible. Then ρ(A + B) > ρ(A) if B ≠ 0.

That is, the spectral radius of a nonnegative irreducible matrix is strictly monotone in each of its entries. This allows us to give the following proof of Lemma 3.3.

Proof. Let M_θ = M(G_θ) and M = M(G). Supposing G is nonnegative and strongly connected, M_{ρ(M)} is nonnegative and irreducible. Moreover, M_{ρ(M)} is the adjacency matrix of the isoradial transformation of G, so Lemma 3.2 implies that ρ(M_{ρ(M)}) = ρ(M). This, together with Proposition 3.3, implies that for β > 1,

  ρ(M_{ρ(M)}) < ρ(M_{βρ(M)}) < ρ(βM_{ρ(M)}).

Since ρ(βM_{ρ(M)}) = βρ(M), letting θ = βρ(M), we have

  ρ(M) < ρ(M_θ) < θ if and only if θ > ρ(M),

since ρ(M) > 0. This implies the result when G is strongly connected.

Suppose, then, that G is nonnegative but not strongly connected. In this case, G can be decomposed into its strongly connected components S_1(G), …, S_N(G). By way of notation, we let M_m denote the adjacency matrix of the strongly connected component S_m(G) for 1 ≤ m ≤ N. Note that since G is nonnegative, each M_m is nonnegative.

If e_ij is an edge of S_ℓ(G), then we consider two cases. First, suppose S_ℓ(G) is nontrivial. Then the graph G_θ with adjacency matrix M_θ has the strongly connected components

  S_1(G), …, S_ℓ^θ(G), …, S_N(G),

where we let (M_θ)_ℓ denote the nonnegative adjacency matrix of S_ℓ^θ(G). Since the adjacency matrix of a nontrivial strongly connected component is irreducible, it follows that ρ((M_θ)_ℓ) < θ if and only if ρ(M_ℓ) < θ.

From equation (3.19), we have

  ρ(M_θ) = max{ρ(M_1), …, ρ((M_θ)_ℓ), …, ρ(M_N)} = max{ρ(M), ρ((M_θ)_ℓ)}.

Hence, if ρ(M) < θ, then ρ((M_θ)_ℓ) < θ, implying ρ(M_θ) < θ. Conversely, if ρ(M_θ) < θ, then ρ(M) < θ, and the result holds in this case.

If S_ℓ(G) is trivial, then ω_1 = ω_2 = 0, and M_θ has the strongly connected components

  S_1(G), …, S_N(G), S_{N+1}(G),


Fig. 3.4 A transformation Γ_1 of the graph Γ_𝒰 from Example 3.3, obtained via a sequence of bounded radial transformations

where S_{N+1}(G) is trivial with the single vertex v_k. Hence σ(M_{N+1}) = {0}, implying ρ(M_θ) = ρ(M) by equation (3.18). Hence the result follows for the case in which the edge e_ij is an edge of the component S_ℓ(G).

If v_i ∈ S_ℓ(G) and v_j ∈ S_m(G) for ℓ ≠ m, then the graphs associated with M and M_θ have the same nontrivial strongly connected components. Equation (3.19) then implies that ρ(M_θ) = ρ(M), and the result follows for the case in which e_ij does not belong to a strongly connected component. Since this exhausts all cases, this completes the proof. ∎

Example 3.4. Consider the graph of interactions Γ_𝒰, shown in Fig. 3.4 (left), of the undelayed network (𝒰, X) from Example 3.3. By a series of bounded radial transformations, it is possible to transform Γ_𝒰 into the graph Γ_1 shown on the right. This is done at each step by letting θ = 1 and ω_2 = 0, so that the edge being modified is bisected, with one edge retaining the original edge weight and the other receiving the weight 1. Importantly, we note that Lemma 3.3 implies that ρ(𝒰) < 1 if and only if ρ(Γ_1) < 1.

Although the graph Γ_1 in Fig. 3.4 is not the graph of interactions Γ_ℋ of the network (ℋ, X_T) shown in Fig. 3.2 (left), the two have a similar structure. To quantify the sense in which two graphs are similar, we introduce the following.

Definition 3.9. For the graph G = (V, E, ω), suppose S ∈ st₀(G) is such that each vertex of V belongs to a branch of B_S(G). Then we call S a complete branching set of G.

For a graph G ∈ 𝔾, we let br₀(G) denote the collection of complete branching sets of the graph G. The single difference between a complete structural set S and a complete branching set T is that the branching set B_S(G) may not contain every vertex of G, whereas B_T(G) does.

This can be seen, for instance, in Fig. 3.5. Here, the graph G has the complete structural set S = {v_2, v_3}. The set S, however, is not a complete branching set,

Fig. 3.5 The graph G has the complete structural set S = {v_2, v_3} and the complete branching set T = {v_1, v_2, v_3, v_4}

since no branch of B_S(G) = {v_2, v_3; v_3, v_2} contains either v_1 or v_4. In fact, the only complete branching set of G is the set T = {v_1, v_2, v_3, v_4}.

Definition 3.10. For the graph G = (V, E, ω), suppose S ∈ br₀(G). Then the graph X_S(G) is called an isoradial expansion of G with respect to S.

Recall from Sect. 2.5 that for some S ∈ st(G), an isospectral graph expansion X_S(G) is the graph G in which the branches of B_S(G) have been made independent. An isoradial expansion of a graph G is then an isospectral expansion with respect to a special type of structural set S, namely, one that is both a complete structural set and has the property that each vertex of G belongs to some branch of B_S(G).

As a consequence of Corollary 2.5, we have the following result.

Corollary 3.3. Let G = (V, E, ω) and S ∈ br₀(G). Then ρ(X_S(G)) = ρ(G).

Proof. Suppose G D .V; E; !/ and S 2 br0.G/. Since each cycle of G includingloops must contain a vertex in S , it follows that !.eii / D 0 for each i … IS .

Via Theorem 2.8, it then follows that

    det(M(X_S(G)) − λI) = det(M(G) − λI) · ∏_{v_i ∈ V∖S} λ^(n_i − 1),

where n_i is the number of branches in B_S(G) containing v_i ∈ V. Hence σ(G) and σ(X_S(G)) differ by at most some number of zeros, implying that the two graphs have the same spectral radius. ∎

Example 3.5. Consider the graph Γ_H shown in Fig. 3.6 (left), which is the graph of interactions of the time-delayed network (H, X^T) from Example 3.1. Since each cycle of Γ_H passes through a vertex of S = {v_1^0, v_2^0} and each vertex of this graph belongs to a cycle, we have S ∈ br_0(Γ_H). The isoradial expansion X_S(Γ_H) is shown in Fig. 3.6 (right), which has spectral radius ρ(X_S(Γ_H)) = ρ(Γ_H) by Corollary 3.3.

Note that the graph X_S(Γ_H) is identical to the graph Γ_1 from Example 3.4, which was shown to have spectral radius ρ(Γ_1) < 1 if and only if ρ(U) < 1. Hence ρ(U) < 1 if and only if ρ(H) < 1, where (U, X) is the undelayed version of the time-delayed network (H, X^T). That is, using isoradial expansions together with bounded radial transformations, it is possible to show that the stability of a delayed network implies the stability of the network without delays, and vice versa.
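This delayed/undelayed stability equivalence is easy to see in miniature outside the book's formalism. The sketch below (a generic linear example; the coefficients a and b are illustrative choices, not taken from the text) rewrites the scalar delayed system x^{k+1} = a·x^k + b·x^{k−1} as a two-dimensional undelayed system and checks that orbits decay exactly when the spectral radius of the undelayed matrix is below 1:

```python
import math

# Scalar linear system with one delay: x^{k+1} = a*x^k + b*x^{k-1}.
# Its undelayed version tracks the state (x^k, x^{k-1}) with the matrix
# U = [[a, b], [1, 0]]; the delayed system is stable exactly when the
# spectral radius of U is below 1.
def spectral_radius_2x2(a, b):
    # eigenvalues of [[a, b], [1, 0]] solve t^2 - a*t - b = 0
    disc = a * a + 4 * b
    if disc >= 0:
        r = math.sqrt(disc)
        return max(abs((a + r) / 2), abs((a - r) / 2))
    return math.sqrt(-b)  # complex pair: |t|^2 = |product of roots| = |-b|

def delayed_orbit_decays(a, b, steps=400):
    x_prev, x = 1.0, 1.0  # arbitrary nonzero initial history
    for _ in range(steps):
        x_prev, x = x, a * x + b * x_prev
    return abs(x) < 1e-6

stable = spectral_radius_2x2(0.5, 0.3) < 1      # rho ≈ 0.85
unstable = spectral_radius_2x2(0.9, 0.4) >= 1   # rho ≈ 1.23
```

Orbits of the delayed system track the spectral radius of the undelayed matrix, as the equivalence predicts.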

Page 94: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks

78 3 Stability of Dynamical Networks

Fig. 3.6 The graph Γ_H from Fig. 3.2 and its branch expansion X_S(Γ_H) over the complete branching set S = {v_1^0, v_2^0} (edge weights 1, |1 − e|, and |2ab|)

Fig. 3.7 The branch α in X_S(Γ_H) (left) and its replacement by a single edge in Γ_α (right)

Our goal in the remainder of this section is to use isoradial expansions and bounded radial transformations to prove Theorems 3.3 and 3.4.

A proof of Theorem 3.4 is the following.

Proof. Let (H, X^T) be a time-delayed network. The claim is that S = {v_i^0 : i ∈ I} is a complete branching set of Γ_H. To see this, note that if v_1, …, v_m is a path or cycle in Γ_H containing no vertices of S, then (3.10) implies that there is some i ∈ I such that v_ℓ = v_i^{−ℓ} for 1 ≤ ℓ ≤ m. Since each v_ℓ is distinct, then in order to form a cycle of Γ_H, we need an element of S. Moreover, each vertex of Γ_H belongs to a path (cycle) of the form v_i^0, v_i^{−1}, …, v_i^{−p}, v_j^0 (where i = j). Hence, S ∈ br_0(Γ_H).

Suppose (H, X^T) has single-type delays. Then for i, j ∈ I, there is at most one branch in B_S(Γ_H) from v_i^0 to v_j^0, which implies that the same is true of the isoradial expansion X_S(Γ_H). If α = v_i^0, v̄_i^{−1}, …, v̄_i^{−p}, v_j^0 is a branch in B_S(X_S(Γ_H)), then it must have the form shown in Fig. 3.7 (left), where î = (i, −p) and ĵ = (j, 0) are in I × T.

Via a sequence of bounded radial transformations with ω_1 = (A_H̄)_{î ĵ}, ω_2 = 0, and ρ = 1, the single edge of Γ_α shown in Fig. 3.7 (right) can be replaced by the branch α. Lemma 3.3 then implies that ρ(Γ_α) < 1 if and only if ρ(X_S(Γ_H)) < 1. For the same reason, replacing each β = v_ℓ^0, v_ℓ^{−1}, …, v_ℓ^{−q}, v_m^0 ∈ B_S(X_S(Γ_H)) by a single edge from v_ℓ^0 to v_m^0 with weight (A_H̄)_{ℓ̂ m̂} will result in the graph Γ with the property

    ρ(Γ) < 1 if and only if ρ(X_S(Γ_H)) < 1.    (3.20)

Page 95: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


For i, j ∈ I, the adjacency matrix M(Γ) of Γ has entries

    M(Γ)_ij = (A_H̄)_{î ĵ} = max_{x∈X} |(DH̄)_{ĵ î}(x, …, x)| = max_{x∈X} |(DU)_{ji}(x)| = (A_U)_ij,    (3.21)

where the second-to-last equality follows from the fact that (H, X^T) has only single-type delays. Thus, ρ(Γ) = ρ(U). The reason that we have

    (A_H̄)_{î ĵ} = max_{x∈X} |(DH̄)_{ĵ î}(x, …, x)|  and  max_{x∈X} |(DU)_{ji}(x)| = (A_U)_ij

is that both (H̄, X^T) and (U, X) are assumed to have no local dynamics. The fact that ρ(H) = ρ(X_S(Γ_H)), via Corollary 3.3, together with (3.20) implies that ρ(U) < 1 if and only if ρ(H) < 1. This completes the proof. ∎

We now give a proof of Theorem 3.3.

Proof. For the time-delayed network (H, X^T), it follows from the proof of Theorem 3.4 that S = {v_i^0 : i ∈ I} is a complete branching set of Γ_H. Suppose, then, that there is a single branch α = v_i^0, v̄_i^{−1}, …, v̄_i^{−p}, v_j^0 in B_S(X_S(Γ_H)) from v_i^0 to v_j^0, as shown in Fig. 3.7 (left), with î = (i, −p) and ĵ = (j, 0) in I × T. Following the proof of Theorem 3.4, the branch α can be replaced with a single edge, as in Fig. 3.7 (right), such that

    ρ(Γ_α) < 1 if and only if ρ(X_S(Γ_H)) < 1.

Suppose, then, that there are two branches

    α = v_i^0, v̄_i^{−1}, …, v̄_i^{−p}, v_j^0  and  β = v_i^0, ṽ_i^{−1}, …, ṽ_i^{−q}, v_j^0

in B_S(X_S(Γ_H)) from v_i^0 to v_j^0, as shown in Fig. 3.8 (left), where ī = (i, −p) and ĩ = (i, −q) are in I × T.

By repeated use of Lemma 3.3 with ω_2 = 0 and ρ = 1, the branch α can be replaced with a single edge and β by two edges, as in the intermediate graph Γ_int in Fig. 3.8 (center). By another use of Lemma 3.3 with ω_1 = (A_H̄)_{ĩ ĵ}, ω_2 = (A_H̄)_{ī ĵ}, and ρ = 1, it follows that the graph Γ_{α/β} shown in Fig. 3.8 (right) has the property

    ρ(Γ_{α/β}) < 1 if and only if ρ(X_S(Γ_H)) < 1.

Continuing in this manner, suppose there are branches α_1, …, α_m ∈ B_S(X_S(Γ_H)) from v_i^0 to v_j^0. Then, through a sequence of bounded radial transformations, these branches can be replaced by a single edge with weight

    ω_ij = Σ_{ℓ=1}^m (A_H̄)_{i_ℓ ĵ},  where i_ℓ = (i, |α_ℓ| + 1) ∈ I × T.

Page 96: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


Fig. 3.8 The branches α and β in X_S(Γ_H) (left) and their replacements in Γ_int (center) and Γ_{α/β} (right)

The claim is that (A_U)_ij ≤ ω_ij, where (U, X) is the undelayed version of (H, X^T). Indeed, since α_1, …, α_m ∈ B_S(X_S(Γ_H)) are the branches of X_S(Γ_H) from v_i^0 to v_j^0, the function H_j : X^T → X depends on the variables indexed by i of the form

    x_i^{k−|α_ℓ|+1} for 1 ≤ ℓ ≤ m.

Therefore, (A_U)_ij ≤ ω_ij, since

    max_{x∈X} |(DU)_{ji}(x)| = max_{x∈X} |(DH)_{j,i_ℓ}(x, …, x)| ≤ max_{x∈X^T} Σ_{ℓ=1}^m |(DH̄)_{ĵ,i_ℓ}(x)| = ω_ij.    (3.22)

Let Γ be the graph in which each set of branches in X_S(Γ_H) from v_i^0 to v_j^0 is replaced by the edge with weight ω_ij for all i, j ∈ I. Then equation (3.22) implies that ρ(U) ≤ ρ(Γ).

Since Γ can be transformed via a sequence of bounded radial transformations with ρ = 1 into X_S(Γ_H), Lemma 3.3 implies that ρ(Γ) < 1 if and only if ρ(X_S(Γ_H)) < 1. The fact that ρ(X_S(Γ_H)) = ρ(Γ_H) implies that if ρ(H) < 1, then ρ(U) < 1, completing the proof. ∎

3.4 Implicit Delays and Restrictions of Dynamical Networks

One of the major obstacles to understanding the dynamics of a network (high-dimensional dynamical system) is that the information needed to do so is spread throughout the various network elements and interactions (system components). In this section, we consider whether it is possible to consolidate this information to gain improved estimates of a network's stability.

Suppose we consider two different elements of a dynamical network. Even if there is no direct interaction between the two, one may influence the dynamics of

Page 97: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


the other after some number of time steps > 1. This amounts to an implicit time-delayed interaction between these elements.

In terms of the graph structure of (F, X), there is an implicit time-delayed interaction from the i-th to the j-th network element if there is a path in Γ_F from v_i to v_j. By choosing a particular subset of network elements, the claim is that it is possible to associate a time-delayed network with (F, X) whose delays represent the implicit delays among this collection of network elements.

To construct this time-delayed network, let x^{k+1} = F(x^k), and suppose that the set S is in br_0(Γ_F). For j ∈ I_S, let F_{j,0}(x^k) = F_j(x^k). For ℓ ≥ 1, we recursively define F_{j,ℓ} = F_{j,ℓ}(x^k, …, x^{k−ℓ}) to be the function

    F_{j,ℓ−1} = F_{j,ℓ−1}(x^k, …, x^{k−ℓ+1}),

in which the variable x_i^{k−ℓ+1} is replaced by the function F_i(x^{k−ℓ}) for all i ∉ I_S.

If for some m ≥ 0, each variable of F_{j,m}(x^k, …, x^{k−m}) is indexed by an element of I_S, then we let

    H_S F_j(x_S^k, …, x_S^{k−m}) = F_{j,m}(x_S^k, …, x_S^{k−m}),    (3.23)

where each argument x_S^{k−ℓ} belongs to X_S = X|_{I_S}.

By choosing S to be a complete branching set, we are guaranteed that the function H_S F_j exists, i.e., that there is some m < ∞ such that each variable of F_{j,m} is indexed by an element of I_S. The reason for this is that otherwise, there would be a cycle of Γ_F containing no element of the structural set S.

Definition 3.11. For S ∈ br_0(Γ_F), let (H_S F, X_S^T) be the dynamical network with components defined by equation (3.23) for j ∈ I_S. This network is called the time-delayed version of (F, X) with respect to S.

The time-delayed network (H_S F, X_S^T) is considered to be a network with no local dynamics, since the compositions needed to construct this system do not, in general, allow for a decomposition into an interaction and local systems.

Example 3.6. Consider the dynamical network (F, X) given by

    F(x; c) = ( tanh(x_6) + c,
                tanh(x_1) + c,
                tanh(x_2) + tanh(x_5) + c,
                tanh(x_3) + c,
                tanh(x_4) + c,
                tanh(x_2) + tanh(x_5) + c ),    x ∈ ℝ^6,

Page 98: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


Fig. 3.9 The graph Γ_F of the network in Example 3.6 (edge weights 1) and the graph of its restriction Γ_{R_S F} to S = {v_1, v_2, v_4, v_5} (edge weights 1 and sech²(c − 2))

with local systems φ_i(x_i) = tanh(x_i) and interaction

    F_i(x) = x_{i−1} + c for i = 1, 2, 4, 5,  and  F_i(x) = x_2 + x_5 + c for i = 3, 6,

where the indices are taken modulo 6, X = ℝ^6, and c ∈ ℝ. The network (F, X) can be thought of as an undelayed Cohen–Grossberg network, where we allow T = 1. Formally, (F, X) can be written as the time-delayed network x^{k+1} = F(x^k) with

    x^{k+1} = ( tanh(x_6^k) + c,
                tanh(x_1^k) + c,
                tanh(x_2^k) + tanh(x_5^k) + c,
                tanh(x_3^k) + c,
                tanh(x_4^k) + c,
                tanh(x_2^k) + tanh(x_5^k) + c ),    x^k ∈ ℝ^6.

As can be seen in Fig. 3.9, the vertex set S = {v_1, v_2, v_4, v_5} is a complete branching set of Γ_F. To construct (H_S F, X_S^T), we let F_{j,0}(x^k) = F_j(x^k) for each j ∈ I_S and note that H_S F_j(x^k) = F_{j,0}(x^k) for j = 2, 5, since each variable of these two functions is indexed by an element of I_S. Replacing the variables x_6^k and x_3^k by F_6(x^{k−1}) and F_3(x^{k−1}), respectively, in F_{j,0}(x^k) for j = 1, 4 yields the function

    F_{j,1}(x^k, x^{k−1}) = tanh(tanh(x_2^{k−1}) + tanh(x_5^{k−1}) + c) + c.

Since each of this function's variables is indexed by an element of I_S, the function H_S F_j(x^k, x^{k−1}) is equal to F_{j,1}(x^k, x^{k−1}) for j = 1, 4.

Page 99: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


The time-delayed network (H_S F, X_S^T) associated with (F, X) is then given by

    x^{k+1} = ( H_S F_1(x_2^{k−1}, x_5^{k−1}),
                H_S F_2(x_1^k),
                H_S F_4(x_2^{k−1}, x_5^{k−1}),
                H_S F_5(x_4^k) )
            = ( tanh(tanh(x_2^{k−1}) + tanh(x_5^{k−1}) + c) + c,
                tanh(x_1^k) + c,
                tanh(tanh(x_2^{k−1}) + tanh(x_5^{k−1}) + c) + c,
                tanh(x_4^k) + c ),    (3.24)

where T = 2. The system (H_S F, X_S^T) is the network (F, X), in which the implicit delays between the elements indexed by S are made explicit.

When (F, X) is written in the form of a delayed network x^{k+1} = F(x^k), the initial condition x^0 ∈ X has the orbit {x^k}_{k≥0} under the action of F. For x^0 ∈ X, it follows from this construction that

    F_j(x^k) = H_S F_j(x_S^k, …, x_S^{k−T+1})

for each j ∈ I_S and k ≥ T − 1. That is, (H_S F, X_S^T) and (F, X) have the same dynamics when restricted to S. Since every vertex of Γ_F belongs to a branch of B_S(Γ_F), this implies the following result.
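This agreement can be checked numerically for Example 3.6. The sketch below simulates the six-dimensional network and the four-dimensional time-delayed version (3.24) side by side (the parameter value c = 0.7 and the initial condition are illustrative choices) and compares the coordinates indexed by S = {v_1, v_2, v_4, v_5}:

```python
import math

C = 0.7  # illustrative parameter value

def F(x):  # the 6-dimensional network of Example 3.6
    t = [math.tanh(v) for v in x]
    return [t[5] + C, t[0] + C, t[1] + t[4] + C,
            t[2] + C, t[3] + C, t[1] + t[4] + C]

def HS_F(cur, prev):  # delayed 4-dim system (3.24); coordinates (x1, x2, x4, x5)
    inner = math.tanh(math.tanh(prev[1]) + math.tanh(prev[3]) + C) + C
    return [inner, math.tanh(cur[0]) + C, inner, math.tanh(cur[2]) + C]

x = [0.3, -0.2, 0.5, 0.1, -0.4, 0.25]  # arbitrary initial condition
S = [0, 1, 3, 4]                        # 0-based indices of v1, v2, v4, v5

# advance the full network two steps to build a consistent delayed history
hist = [[x[i] for i in S]]
for _ in range(2):
    x = F(x)
    hist.append([x[i] for i in S])

# from k >= T - 1 on, both systems produce identical S-coordinates
max_err = 0.0
for _ in range(20):
    delayed = HS_F(hist[-1], hist[-2])
    x = F(x)
    hist.append([x[i] for i in S])
    max_err = max(max_err, max(abs(a - b) for a, b in zip(delayed, hist[-1])))
```

The two orbits coincide on S to machine precision, as the identity F_j(x^k) = H_S F_j(x_S^k, …, x_S^{k−T+1}) predicts.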

Lemma 3.4. For S ∈ br_0(Γ_F), the time-delayed network (H_S F, X_S^T) is stable if and only if (F, X) is stable.

By constructing the time-delayed dynamical network (H_S F, X_S^T), we absorb the dynamics of the elements of (F, X) that are not indexed by I_S into the time-delayed interaction of (H_S F, X_S^T). In this sense, (H_S F, X_S^T) is a compressed version of the original undelayed network (F, X). In this section, we show that this type of network compression can be used to gain improved stability estimates of (F, X). However, our main tool for gaining these improved estimates is not the time-delayed network (H_S F, X_S^T) itself, but the undelayed version of that network.

Definition 3.12. For S ∈ br_0(Γ_F), let (R_S F, X_S) be the undelayed version of the time-delayed network (H_S F, X_S^T). We call this network the restriction of (F, X) to S.

Although Lemma 3.4 states that the stability of (F, X) is equivalent to that of (H_S F, X_S^T), there is no guarantee that the stability of (R_S F, X_S) implies the stability of (H_S F, X_S^T). To ensure that the stability of a restriction implies the stability of the original network, we define the following.

Definition 3.13. Let S be a complete branching set of the graph G. Then S is called a basic structural set of G if

    |B_ij(G; S)| ≤ 1 for all i, j ∈ I_S.

Page 100: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


If S is a basic structural set of G, then we write S ∈ st_B(G). The notion of a basic structural set allows for the following theorem.

Theorem 3.7. For the network (F, X), suppose S ∈ st_B(Γ_F). If ρ(R_S F) < 1, then (F, X) is stable.

Proof. Given the dynamical network (F, X), suppose that S ∈ st_B(Γ_F). Under this assumption, the claim is that the time-delayed system (H_S F, X_S^T) has single-type delays. To see this, suppose v_1, v_2, …, v_j is a branch of B_S(Γ_F). Then in particular, the function F_{j,0}(x^k) = F_j(x^k) depends on the variable x_{j−1}^k, and F_{j−1}(x^k) on the variable x_{j−2}^k, implying that F_{j,1}(x^k, x^{k−1}) is a function of x_{j−2}^{k−1}. Continuing in this manner, we conclude that

    F_{j,j−2} = F_{j,j−2}(x^k, …, x^{k−j+2})

is a function of x_1^{k−j+2}.

Since the index 1 is in I_S, the component H_S F_j must depend on the variable x_1^{k−j+2}. Therefore, H_S F_j depends on the variable x_i^{k−ℓ+2} if there is a branch v_i, …, v_j ∈ B_S(Γ_F) on ℓ vertices. Conversely, if H_S F_j depends on the variable x_i^{k−ℓ+2}, then there must be a branch in B_S(Γ_F) from v_i to v_j. Since S is a basic structural set, the time-delayed network (H_S F, X_S^T) has single-type delays, verifying the claim.

Assuming ρ(R_S F) < 1, Theorem 3.4 implies that ρ(H_S F) < 1. Hence, the network (H_S F, X_S^T) is stable by Theorem 3.1, and Lemma 3.4 then implies that (F, X) is stable. ∎

The procedure of restricting (F, X) can be thought of as removing the implicit delays between elements of the network indexed by elements of S. The result of this process is the lower-dimensional system (R_S F, X_S). Theorem 3.7 states that if the restricted network is intrinsically stable, then the original "unrestricted" network is also stable. In this sense, the stability of (F, X) can be investigated by studying the stability of any one of its restrictions.

This procedure is illustrated in the following example.

Example 3.7. Let (F, X) be the dynamical network considered in Example 3.6, which has the stability matrix

    A_F = ( 0 1 0 0 0 0
            0 0 1 0 0 1
            0 0 0 1 0 0
            0 0 0 0 1 0
            0 0 1 0 0 1
            1 0 0 0 0 0 ).

From this, one can compute ρ(F) = 2^{1/3}, implying ρ(F) > 1. That is, Theorem 3.1 cannot be used to determine whether the dynamical network (F, X) is stable, at least not directly.
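The value ρ(F) = 2^{1/3} can be certified with integer arithmetic. Below, A_F is written with the dependency convention that entry (i, j) equals 1 when F_j depends on x_i, rows and columns ordered v_1, …, v_6 (this is the transcript's reconstruction of the printed matrix). The Cayley–Hamilton identity A^6 = 2A^3 confirms the characteristic polynomial λ^6 − 2λ^3, whose nonzero roots are the cube roots of 2:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# dependency matrix of Example 3.6: entry (i, j) = 1 when F_j depends on x_i
A = [[0, 1, 0, 0, 0, 0],
     [0, 0, 1, 0, 0, 1],
     [0, 0, 0, 1, 0, 0],
     [0, 0, 0, 0, 1, 0],
     [0, 0, 1, 0, 0, 1],
     [1, 0, 0, 0, 0, 0]]

A3 = matmul(matmul(A, A), A)
A6 = matmul(A3, A3)

# Cayley-Hamilton for the characteristic polynomial t^6 - 2 t^3: A^6 = 2 A^3,
# so every eigenvalue satisfies t^3 in {0, 2}; since trace(A^3) = 6 != 0
# (each vertex lies on exactly one 3-cycle), some eigenvalue has t^3 = 2,
# giving spectral radius 2**(1/3) > 1.
cayley_hamilton = all(A6[i][j] == 2 * A3[i][j] for i in range(6) for j in range(6))
trace_A3 = sum(A3[i][i] for i in range(6))
```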

Page 101: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


However, by removing the delays from (H_S F, X_S^T), found in (3.24), the restriction R_S F(x; c) is given by

    R_S F(x; c) = ( R_S F_1(x_2, x_5),
                    R_S F_2(x_1),
                    R_S F_4(x_2, x_5),
                    R_S F_5(x_4) )
                = ( tanh(tanh(x_2) + tanh(x_5) + c) + c,
                    tanh(x_1) + c,
                    tanh(tanh(x_2) + tanh(x_5) + c) + c,
                    tanh(x_4) + c ).

Since the branches of B_S(Γ_F) are given by

    B_12(Γ_F; S) = v_1, v_2;  B_21(Γ_F; S) = v_2, v_6, v_1;  B_24(Γ_F; S) = v_2, v_3, v_4;

and

    B_45(Γ_F; S) = v_4, v_5;  B_54(Γ_F; S) = v_5, v_3, v_4;  B_51(Γ_F; S) = v_5, v_6, v_1;

it follows that S ∈ st_B(Γ_F). To compute a stability matrix of the restriction (R_S F, X_S), we note that

    (DR_S F)_ij = sech²(x_j) sech²[tanh(x_2) + tanh(x_5) + c] ≤ sech²[tanh(x_2) + tanh(x_5) + c] ≤ sech²(c − 2)

for i = 1, 4, j = 2, 5, and c ≥ 2. Similarly, (DR_S F)_ij ≤ sech²(c + 2) for c ≤ −2. Considering the case c ≥ 2, we have

    A_{R_S F} ≤ ( 0            1  0            0
                  sech²(c−2)   0  sech²(c−2)   0
                  0            0  0            1
                  sech²(c−2)   0  sech²(c−2)   0 ),    (3.25)

from which it follows that

    ρ(R_S F) ≤ 2√2 e² / (e^{4−c} + e^c).

Thus, if c > 2 + ln(1 + √2) ≈ 2.88, then ρ(R_S F) < 1. Similarly, using the inequality (DR_S F)_ij ≤ sech²(c + 2) for c ≤ −2, we have ρ(R_S F) < 1 if c < −2 − ln(1 + √2). Therefore, Theorem 3.7 implies that (F, X) is stable for |c| > 2.88.
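The bound ρ(R_S F) ≤ √2·sech(c − 2) implicit in (3.25) and the threshold 2 + ln(1 + √2) can be checked numerically. The sketch below builds the bounding matrix of (3.25), verifies that √(2·sech²(c − 2)) is a root of its characteristic polynomial, and tests the threshold (the sample values of c are illustrative):

```python
import math

def det4(M):
    # Laplace expansion along the first row (fine for a 4x4 matrix)
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    total, sign = 0.0, 1
    for j in range(4):
        minor = [[M[i][k] for k in range(4) if k != j] for i in range(1, 4)]
        total += sign * M[0][j] * det3(minor)
        sign = -sign
    return total

def rho_bound(c):
    # spectral radius of the bounding matrix in (3.25): sqrt(2) * sech(c - 2)
    s = 1.0 / math.cosh(c - 2.0) ** 2
    M = [[0, 1, 0, 0], [s, 0, s, 0], [0, 0, 0, 1], [s, 0, s, 0]]
    r = math.sqrt(2.0 * s)
    residual = det4([[M[i][j] - (r if i == j else 0.0) for j in range(4)]
                     for i in range(4)])
    return r, abs(residual)

threshold = 2.0 + math.log(1.0 + math.sqrt(2.0))  # ≈ 2.8814
```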

By restricting the network (F, X) to the set S, we not only change the structure of the network but also the type of weights that appear in the network's graph of interactions. This can be seen in Fig. 3.9, where Γ_F has edges with unit weight but Γ_{R_S F} has weights that involve the parameter c. Since ρ(F) > 1 and ρ(R_S F) < 1

Page 102: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


for |c| > 2.88, we stress the fact that only by use of a network reduction was it possible to deduce the stability of (F, X) for these parameter values.

Remark 3.5. We note that a restriction (R_S F, X_S) can be constructed without first constructing the time-delayed network (H_S F, X_S^T). This can be done by letting F_{j,ℓ}(x) be the function F_{j,ℓ−1}(x) in which each variable x_i ∈ X is replaced by F_i(x) if i ∉ I_S. If each variable of F_{j,m}(x) is indexed by an element of I_S for some m ≥ 0, then the function F_{j,m}(x_S) is equal to R_S F_j(x_S) for j ∈ I_S.
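This substitution scheme can be sketched as a short recursion: to evaluate R_S F_j at x_S, evaluate F_j on a vector whose coordinates outside S are themselves obtained by recursively evaluating the corresponding F_i; the recursion terminates because no cycle of Γ_F avoids a complete branching set. The sketch below applies this to the network of Example 3.6 (the parameter value and test points are illustrative choices):

```python
import math

C = 1.0  # illustrative parameter value

# dependency lists for Example 3.6: F_i depends on these x-indices (0-based)
DEPS = [[5], [0], [1, 4], [2], [3], [1, 4]]

def F_i(i, vals):
    # vals holds the values of the dependencies, in DEPS[i] order
    if i in (2, 5):
        return math.tanh(vals[0]) + math.tanh(vals[1]) + C
    return math.tanh(vals[0]) + C

def restrict(j, x, S):
    """Evaluate R_S F_j(x_S): variables outside S are recursively replaced
    by the corresponding component functions (the scheme of Remark 3.5)."""
    def value(i):
        return x[i] if i in S else F_i(i, [value(m) for m in DEPS[i]])
    return F_i(j, [value(m) for m in DEPS[j]])

# restriction to {v1, v4}: nested substitution through v6, v2, v5
x1, x4 = 0.3, -0.7
lhs = restrict(0, [x1, 0, 0, x4, 0, 0], {0, 3})
rhs = math.tanh(math.tanh(math.tanh(x1) + C)
                + math.tanh(math.tanh(x4) + C) + C) + C

# restriction to {v1, v2, v4, v5}: one substitution step through v6
lhs2 = restrict(0, [0, 0.2, 0, 0, -0.1, 0], {0, 1, 3, 4})
rhs2 = math.tanh(math.tanh(0.2) + math.tanh(-0.1) + C) + C
```

Both restrictions reproduce the expected nested-tanh closed forms.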

In certain cases, it may be possible to further restrict a dynamical network and thereby obtain an improved estimate of the original network's stability.

Example 3.8. In Example 3.7, the dynamical network (F, X) is restricted to the basic structural set S = {v_1, v_2, v_4, v_5}. Using the restricted network (R_S F, X_S), we were able to show that (F, X) is stable if |c| > 2.88. However, S is not the only basic structural set of this dynamical network.

In particular, it is possible to restrict (F, X) to the set S̃ = {v_1, v_4} ⊂ S. Here, the result is the restriction (R_S̃ F, X_S̃) given by

    R_S̃ F(x; c) = ( R_S̃ F_1(x_1, x_4),
                    R_S̃ F_4(x_1, x_4) )
                = ( tanh[tanh(tanh(x_1) + c) + tanh(tanh(x_4) + c) + c] + c,
                    tanh[tanh(tanh(x_1) + c) + tanh(tanh(x_4) + c) + c] + c ).

For c = 1, one can show that the stability matrix of (R_S̃ F, X_S̃) is the matrix

    A_{R_S̃ F} = ( 0.482 0.482
                  0.482 0.482 ).

Since ρ(R_S̃ F) = 0.964, Theorem 3.7 implies that (F, X) is stable at this value of c.

If one computes the stability matrix of (R_S F, X_S) for c = 1, the result is

    A_{R_S F} = ( 0 1 0 0
                  1 0 1 0
                  0 0 0 1
                  1 0 1 0 ),    (3.26)

for which ρ(R_S F) = √2 > 1. For this parameter value, the restriction of (F, X) to the structural set S cannot be used to establish the stability of the original unreduced network. It is only by restricting (F, X) to the smaller structural set S̃ ⊂ S that we are able to show that this network is stable at the parameter value c = 1.

Page 103: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


We note that by combining Theorems 3.4 and 3.7, it is possible to use network restrictions to get improved stability estimates of time-delayed networks.

Corollary 3.4. Suppose (H, X^T) has single-type delays. If (U, X) has the restriction (R_S U, X_S) with S ∈ st_B(Γ_U) and spectral radius ρ(R_S U) < 1, then (H, X^T) is stable.

At this point, we have described how to take a dynamical network (F, X), find its time-delayed version (H_S F, X_S^T) with respect to some S ∈ br_0(Γ_F), and then remove the system's delays to create the restriction (R_S F, X_S). This last step of removing a system's time delays has the advantage that it simplifies the dynamical network both in its dynamics and in how it is formally represented.

However, by removing the system's delays, we end up modifying both the system's dynamics and typically its entire spectrum. If we want to preserve the network's dynamics and spectrum to a large degree, then there is another option. Rather than removing the delays of (H_S F, X_S^T), we simply consider the undelayed version of this system (see Definition 3.4).

Definition 3.14. For the dynamical network (F, X), let S ∈ br_0(Γ_F). The dynamical network (X_S F, X_S^T), the undelayed version of (H_S F, X_S^T), is called the dynamical network expansion of (F, X) with respect to S.

By combining Lemmas 3.1 and 3.4 with Theorem 3.1, we have the following theorem, relating the stability of a dynamical network and its expansions.

Theorem 3.8. Let (X_S F, X_S^T) be a dynamical network expansion of (F, X). Then (F, X) is stable if ρ(X_S F) < 1.

Example 3.9. Consider the dynamical network (F, X), similar to that in Example 3.6, given by

    F(x; c) = ( F_1(x_1, x_2, x_4),
                F_2(x_1),
                F_3(x_2, x_3, x_4),
                F_4(x_3) )
            = ( (1/2) tanh(x_1) + tanh(x_2) + tanh(x_4) + c,
                tanh(x_1) + c,
                tanh(x_2) + (1/2) tanh(x_3) + tanh(x_4) + c,
                tanh(x_3) + c ),

where x ∈ ℝ^4 and c ∈ ℝ. Here, one can compute that the dynamical network has the stability matrix

    A_F = ( 1/2 1 0   1
            1   0 0   0
            0   1 1/2 1
            0   0 1   0 )  with spectral radius ρ(A_F) = (1 + √33)/4.    (3.27)

Since ρ(A_F) > 1, we are again in the situation in which the stability of the dynamical network cannot be determined by direct use of Theorem 3.1. Additionally, as can be seen from Fig. 3.10, the graph of interactions Γ_F has no nontrivial basic

Page 104: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


Fig. 3.10 The graph of interactions Γ_F of the network in Example 3.9 (edge weights 1 and 1/2) and the graph of its expansion Γ_{X_S F} over the complete branching set S = {v_1, v_3} (edge weights 1, 1/2, and sech²(c − 1))

structural sets, so there is no way to use a network restriction to determine the stability of (F, X).

The set of vertices S = {v_1, v_3}, however, is a complete branching set of Γ_F. Therefore, we can construct the time-delayed version (H_S F, X_S^T) of this dynamical network, which is

    x^{k+1} = ( (1/2) tanh(x_1^k) + tanh(tanh(x_1^{k−1}) + c) + tanh(tanh(x_3^{k−1}) + c) + c,
                tanh(tanh(x_1^{k−1}) + c) + (1/2) tanh(x_3^k) + tanh(tanh(x_3^{k−1}) + c) + c ),

where T = 2. The undelayed version of (H_S F, X_S^T), which is the dynamical network expansion (X_S F, X_S^T), is given by

    X_S F(x; c) = ( X_S F_1^0(x_1^0, x_1^{−1}, x_3^{−1}),
                    X_S F_1^{−1}(x_1^0),
                    X_S F_3^0(x_1^{−1}, x_3^0, x_3^{−1}),
                    X_S F_3^{−1}(x_3^0) )
                = ( (1/2) tanh(x_1^0) + tanh(tanh(x_1^{−1}) + c) + tanh(tanh(x_3^{−1}) + c) + c,
                    x_1^0,
                    tanh(tanh(x_1^{−1}) + c) + (1/2) tanh(x_3^0) + tanh(tanh(x_3^{−1}) + c) + c,
                    x_3^0 ).

Using the same type of argument as in Example 3.8 to bound the entries of A_{X_S F}, we find that

    A_{X_S F} ≤ ( 1/2 sech²(c−1) 0   sech²(c−1)
                  1   0          0   0
                  0   sech²(c−1) 1/2 sech²(c−1)
                  0   0          1   0 )    (3.28)

Page 105: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


for c > 1. From this, it follows that

    ρ(X_S F) ≤ [e² + e^{2c} + e^c √(130e² + e^{4−2c} + e^{2c})] / [4(e² + e^{2c})],

so that if c > log[(2 + √3)e] ≈ 2.31, then ρ(X_S F) < 1. Similarly, one can show that if c < −log[(2 + √3)e] ≈ −2.31, then ρ(X_S F) < 1, so that Theorem 3.8 implies that (F, X) is globally stable if |c| > 2.31.
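This bound can be cross-checked numerically: by the symmetry of (3.28) under swapping the v_1- and v_3-blocks, the dominant eigenvalue of the bounding matrix is t = (1 + √(1 + 32s))/4 with s = sech²(c − 1), which is equivalent to the displayed expression. The sketch below verifies the eigenvector relation and the threshold 1 + ln(2 + √3) = log[(2 + √3)e] (the sample values of c are illustrative):

```python
import math

def rho_expansion_bound(c):
    """Dominant eigenvalue of the bounding matrix in (3.28) for c > 1.

    (t, 1, t, 1) is a positive eigenvector of a nonnegative matrix, so by
    Perron-Frobenius t is the spectral radius; err is the residual."""
    s = 1.0 / math.cosh(c - 1.0) ** 2
    t = (1.0 + math.sqrt(1.0 + 32.0 * s)) / 4.0
    M = [[0.5, s, 0.0, s], [1.0, 0.0, 0.0, 0.0],
         [0.0, s, 0.5, s], [0.0, 0.0, 1.0, 0.0]]
    u = [t, 1.0, t, 1.0]
    Mu = [sum(M[i][j] * u[j] for j in range(4)) for i in range(4)]
    err = max(abs(Mu[i] - t * u[i]) for i in range(4))
    return t, err

threshold = 1.0 + math.log(2.0 + math.sqrt(3.0))  # ≈ 2.317
```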

Since every basic structural set is a complete branching set, dynamical network expansions are a more general tool than network restrictions for determining the stability of a given dynamical network. The main reason we consider both is that network restrictions are much easier to construct and analyze than dynamical network expansions. In fact, if S is a basic structural set of Γ_F, then (R_S F, X_S) is always of lower dimension than (X_S F, X_S^T).

Dynamical network expansions and restrictions are similar, though, in the sense that they are built around the same idea. If we are going to use a stability matrix, i.e., a global linearization, to evaluate the stability of a dynamical network, then the more we can compress the network around a specific set of network elements, the better our estimates become. These specific elements are, of course, the structural sets over which we choose to restrict or expand our network.

We note that if we have a linear network, i.e., a network (F, X), where F : X → X is described by a matrix, then the processes of restricting and expanding a network do not change the network's spectral radius. In a nonlinear network, or more generally a dynamical system, the operations of restricting and expanding the system lead to an averaging of the network's nonlinear processes, which allows for better estimates of the network's overall behavior.

Page 106: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks

Chapter 4
Improved Eigenvalue Estimates

The classical Abel–Ruffini theorem, often called Abel's impossibility theorem, states that there is no algebraic formula for the roots of the general polynomial of degree five or higher. Consequently, one cannot, in general, compute the spectrum of a matrix A ∈ ℂ^{n×n} for n ≥ 5.

Beginning in the mid-nineteenth century, a number of methods were developed to approximate the eigenvalues of complex-valued matrices. The main idea in each of these methods is to associate to each A ∈ ℂ^{n×n} a bounded region of the complex plane that contains σ(A). This bounded region is the approximation of the eigenvalues of A.

In this chapter, we investigate how isospectral matrix reductions affect these approximation methods. Our main result is that isospectral reductions can be used in conjunction with each of these methods to gain improved eigenvalue estimates of any matrix A ∈ ℂ^{n×n}. Specifically, we show that under certain conditions, the eigenvalue region associated with a reduced matrix is contained in the eigenvalue region associated with the original unreduced matrix.

Informally, the way this works is the following. The classical methods we consider associate to each A ∈ ℂ^{n×n} an eigenvalue region in the complex plane, which is, in fact, the union of a number of subregions. Each of these subregions is computed based on a limited number of rows of A. In this sense, these subregions, and therefore the eigenvalue approximations, use only "local information" from the matrix to approximate its eigenvalues.

By isospectrally reducing a matrix, we are in a sense compressing the information contained in the matrix. Hence, the local information in the smaller reduced matrix comes from information in the larger matrix that is less local. This increase in local information in the reduced matrix allows us, via these classical methods, to gain improved eigenvalue estimates.

We show, in two of the three classical methods we consider, that isospectral matrix reductions always lead to improved eigenvalue estimates. For the third


Page 107: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


method, we give a sufficient condition under which an isospectral reduction leads toimproved estimates.

We apply these techniques in a variety of settings, including estimating the spectra of combinatorial and normalized graph Laplacians. Additionally, we demonstrate how this approach can be used to estimate the eigenvalues of large matrices or graphs in which only limited or local information is known. We also apply these techniques to the problem of estimating the spectral radius of a graph (network), which has implications for the stability of dynamical networks.

4.1 Gershgorin-Type Regions

A theorem of Gershgorin’s, originating from [21], gives a simple method forbounding the eigenvalues of a square matrix with complex-valued entries. Thisresult is found in Theorem 4.1, which we formulate after introducing some notation.

If A 2 Cn�n, let

ri .A/ DnX

j D1; j ¤i

jAij j; 1 � i � n; (4.1)

be the i th absolute row sum of A.

Theorem 4.1 (Gershgorin [21]). Let A ∈ ℂ^{n×n}. Then all eigenvalues of A are contained in the set

    Γ(A) = ⋃_{i=1}^n {λ ∈ ℂ : |λ − A_ii| ≤ r_i(A)}.

Geometrically, Gershgorin’s theorem states that the eigenvalues of A 2 Cn�n

are contained in the union of n circles in the complex plane. The i th circle,corresponding to the i th row of A, is centered at the diagonal entry Aii 2 C withradius ri .A/.

Example 4.1. Let A ∈ ℂ^{3×3} be the matrix with entries

    A = ( −3 0  1
           1 1 −1
           1 0  5 ),

which has the eigenvalues σ(A) = {1, 1 ± √17}. The region Γ(A) is made up of the three circles shown in Fig. 4.1 (left). The blue, red, and tan circles correspond to the first, second, and third rows of A, respectively. The spectrum of A is also indicated.
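Gershgorin's theorem is straightforward to apply computationally. The sketch below checks that each eigenvalue of the matrix A from Example 4.1 lies in at least one of the three circles:

```python
import math

A = [[-3, 0, 1], [1, 1, -1], [1, 0, 5]]

def row_radius(A, i):
    # r_i(A): the i-th absolute row sum, omitting the diagonal entry (4.1)
    return sum(abs(A[i][j]) for j in range(len(A)) if j != i)

def in_gershgorin_region(A, z):
    return any(abs(z - A[i][i]) <= row_radius(A, i) for i in range(len(A)))

# sigma(A) = {1, 1 + sqrt(17), 1 - sqrt(17)}: expanding det(A - tI) along
# the second column gives (1 - t)(t^2 - 2t - 16)
eigs = [1.0, 1.0 + math.sqrt(17.0), 1.0 - math.sqrt(17.0)]
all_captured = all(in_gershgorin_region(A, z) for z in eigs)
```

Here 1 lies in the second circle and 1 ± √17 lie in the third and first circles, respectively.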

Recall from Chap. 1 that if a matrix M(λ) is in 𝕎^{n×n}, then this matrix has a well-defined spectrum. A natural question is whether Gershgorin's theorem can be

Page 108: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


Fig. 4.1 The Gershgorin regions for the matrices A ∈ ℂ^{3×3} (left) and B(λ) ∈ 𝕎^{2×2} (right) in Examples 4.1 and 4.2, respectively

directly applied to M(λ) to estimate its spectrum as if it were a matrix with complex entries. This we consider in the following example.

Example 4.2. Consider the matrix B(λ) ∈ 𝕎^{2×2} given by

    B(λ) = ( −1/λ 1/λ
             −1/λ 1/λ ).

In principle, it is possible to use Gershgorin's theorem formally on the matrix B(λ). The resulting region

    Γ(B) = {λ ∈ ℂ : |λ + 1/λ| ≤ |1/λ|} ∪ {λ ∈ ℂ : |λ − 1/λ| ≤ |1/λ|}

is shown in Fig. 4.1 (right). As one can compute, det(B(λ) − λI) = λ², implying σ(B) = {0, 0}. However, the region Γ(B) does not include the eigenvalue(s) 0 ∈ σ(B). This is indicated by the open circle in the figure.

Based on Example 4.2, we conclude that Gershgorin's theorem does not directly apply to matrices with rational-function entries. However, if we take a closer look at Example 4.2, we note that the reason we have σ(B) ⊄ Γ(B) is that σ(B) ⊄ dom(B).

Since the Gershgorin region Γ(M) of every M ∈ 𝕎^{n×n} is contained in dom(M), only the eigenvalues of M in its domain can be in its Gershgorin region. To adapt Gershgorin's theorem to the matrices 𝕎^{n×n}, we need a way of including

Page 109: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


those eigenvalues of M that are not in its domain. With this in mind, we define the notion of a polynomial extension of a matrix M ∈ 𝕎^{n×n}.

Definition 4.1. If M(λ) ∈ 𝕎^{n×n} and M_ij = p_ij/q_ij, where p_ij(λ), q_ij(λ) ∈ ℂ[λ], let L_i(M) = ∏_{j=1}^n q_ij(λ) for 1 ≤ i ≤ n. We call the matrix M̄ given by

    M̄_ij = L_i(M) M_ij for i ≠ j,  and  M̄_ii = L_i(M)(M_ii − λ) + λ,  1 ≤ i, j ≤ n,

the polynomial extension of M.

To justify this name, note that each entry M̄_ij is an element of ℂ[λ], i.e., M̄ has entries that are complex polynomials. The reason we consider the polynomial extension of a matrix M ∈ 𝕎^{n×n} is that dom(M̄) = ℂ, so that M̄(λ) is defined everywhere. Moreover, the spectrum of M is contained in the spectrum of M̄.

Lemma 4.1. Let M(λ) ∈ 𝕎^{n×n}. Then σ(M) ⊆ σ(M̄).

Proof. For M ∈ 𝕎^{n×n}, note that the matrix M̄ − λI is given by

    (M̄ − λI)_ij = L_i(M) M_ij for i ≠ j,  and  (M̄ − λI)_ii = L_i(M)(M_ii − λ),  1 ≤ i, j ≤ n.

The matrix M̄ − λI is then the matrix M − λI whose i-th row has been multiplied by L_i. Therefore,

    det(M̄ − λI) = (∏_{i=1}^n L_i(M)) det(M − λI),

implying σ(M) ⊆ σ(M̄). ∎

In order to extend Theorem 4.1 to the matrices 𝕎^{n×n}, we use the following adaptation of the notation given by (4.1). For M ∈ 𝕎^{n×n}, let

    r_i(M) = Σ_{j=1, j≠i}^n |M_ij| for 1 ≤ i ≤ n

be the i-th absolute row sum of M. Note that since M̄ ∈ ℂ[λ]^{n×n} for every M ∈ 𝕎^{n×n}, we can view M̄(λ) as a function M̄(λ) : ℂ → ℂ^{n×n} and M̄(λ)_ij : ℂ → ℂ. Likewise, we can consider r_i(M̄) = r_i(M̄, λ) to be the function r_i(M̄, λ) : ℂ → ℂ. However, we will typically suppress the dependence of M̄ and r_i(M̄) on λ, for ease of notation.

Theorem 4.2. Let $M(\lambda) \in \mathbb{W}^{n\times n}$. Then $\sigma(M)$ is contained in the set

$$\Gamma_{\mathbb{W}}(M) = \bigcup_{i=1}^{n} \{\lambda \in \mathbb{C} : |\lambda - \bar{M}_{ii}| \le r_i(\bar{M})\}.$$


4.1 Gershgorin-Type Regions 95

Fig. 4.2 For $M$ in Example 4.3, the graph $G$, where $M(G) = M$, is shown (left), and the region $\Gamma_{\mathbb{W}}(M)$ is shown (right), where $\sigma(M) = \{-1, -1, 2, -i, i\}$ is indicated

Proof. First note that for $\alpha \in \sigma(M)$, the matrix $\bar{M}(\alpha)$ is in $\mathbb{C}^{n\times n}$. Since Lemma 4.1 implies that $\alpha$ is an eigenvalue of $\bar{M}(\alpha)$, then by an application of Gershgorin's theorem, the inequality $|\alpha - \bar{M}(\alpha)_{ii}| \le r_i(\bar{M}, \alpha)$ holds for some $1 \le i \le n$. Hence, $\alpha \in \Gamma_{\mathbb{W}}(M)$. ∎

Because it will be useful later in comparing different regions in the complex plane, for $M \in \mathbb{W}^{n\times n}$ we let

$$\Gamma_{\mathbb{W}}(M)_i = \{\lambda \in \mathbb{C} : |\lambda - \bar{M}_{ii}| \le r_i(\bar{M})\}, \qquad 1 \le i \le n,$$

and call this the $i$th Gershgorin-type region of $M$. Similarly, we call the union $\Gamma_{\mathbb{W}}(M)$ of these $n$ sets the Gershgorin-type region of the matrix $M$.

As an illustration of Theorem 4.2, consider the following example.

Example 4.3. Let $M \in \mathbb{W}^{3\times 3}$ be the matrix given by

$$M = \begin{bmatrix} \frac{\lambda+1}{\lambda^2} & \frac{1}{\lambda} & \frac{\lambda+1}{\lambda} \\[4pt] \frac{2\lambda+1}{\lambda^2} & \frac{1}{\lambda} & \frac{1}{\lambda} \\[4pt] 0 & 1 & 0 \end{bmatrix}. \qquad (4.2)$$

Since $\det(M(\lambda) - \lambda I) = (-\lambda^5 + 2\lambda^3 + 2\lambda^2 + 3\lambda + 2)/\lambda^2$, one can compute that $\sigma(M) = \{-1, -1, i, -i, 2\}$. The corresponding Gershgorin-type region $\Gamma_{\mathbb{W}}(M)$ is shown in Fig. 4.2 (right), where

$$\bar{M} = \begin{bmatrix} -\lambda^5 + \lambda^3 + \lambda^2 + \lambda & \lambda^3 & \lambda^4 + \lambda^3 \\ 2\lambda^3 + \lambda^2 & -\lambda^5 + \lambda^3 + \lambda & \lambda^3 \\ 0 & 1 & 0 \end{bmatrix}.$$


The set $\Gamma_{\mathbb{W}}(M)$ is the union of the three regions $\Gamma_{\mathbb{W}}(M)_1$, $\Gamma_{\mathbb{W}}(M)_2$, and $\Gamma_{\mathbb{W}}(M)_3$, whose boundaries are shown in blue, red, and tan. The interior colors of these regions reflect their intersections, and the eigenvalues $\sigma(M)$ are indicated as points. We also include the graph $G$ with adjacency matrix $M$, shown in Fig. 4.2 (left). This is to illustrate that each graph $G \in \mathbb{G}$ has an associated Gershgorin-type region.
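Definition 4.1 is mechanical enough to compute symbolically. A minimal sketch (the helper names `row_L` and `poly_extension` are ours, not the book's notation) that rebuilds $\bar{M}$ for the matrix of Example 4.3 and checks the determinant identity from the proof of Lemma 4.1:

```python
import sympy as sp

lam = sp.symbols('lambda')

# M from Example 4.3 / equation (4.2)
M = sp.Matrix([[(lam + 1)/lam**2, 1/lam, (lam + 1)/lam],
               [(2*lam + 1)/lam**2, 1/lam, 1/lam],
               [0, 1, 0]])

def row_L(M, i):
    """L_i(M): the product of the denominators q_ij in row i."""
    L = sp.Integer(1)
    for j in range(M.cols):
        _, q = sp.fraction(sp.together(M[i, j]))
        L *= q
    return L

def poly_extension(M):
    """Mbar_ij = L_i*M_ij off the diagonal and L_i*(M_ii - lam) + lam on it."""
    n = M.rows
    Mbar = sp.zeros(n, n)
    for i in range(n):
        L = row_L(M, i)
        for j in range(n):
            if i == j:
                Mbar[i, j] = sp.expand(L*(M[i, j] - lam) + lam)
            else:
                Mbar[i, j] = sp.expand(L*M[i, j])
    return Mbar

Mbar = poly_extension(M)
# Mbar[0, 0] reproduces the displayed (1,1) entry
# -lambda**5 + lambda**3 + lambda**2 + lambda, and
# det(Mbar - lam*I) = L_1*L_2*L_3*det(M - lam*I), as in the proof of Lemma 4.1
```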

Since every matrix $M \in \mathbb{W}^{n\times n}_\pi$ has an associated Gershgorin-type region $\Gamma_{\mathbb{W}}(M)$, a natural question is how this region changes as the matrix $M$ is reduced. As it turns out, a reduced matrix (equivalently, reduced graph) has a smaller Gershgorin-type region than the associated unreduced matrix (graph). Because isospectral reductions preserve the eigenvalues of a matrix, up to a known set, an immediate consequence of this is that isospectral reductions can be used to improve the eigenvalue estimates given in Theorems 4.1 and 4.2.

Theorem 4.3 (Improved Gershgorin Regions). For $M(\lambda) \in \mathbb{W}^{n\times n}_\pi$, suppose that the set $S \subseteq N$ is nonempty. Then $\Gamma_{\mathbb{W}}(R(M; S)) \subseteq \Gamma_{\mathbb{W}}(M)$.

In terms of graphs, Gershgorin's original theorem can be thought of as estimating the spectrum of a graph $G$ with $M(G) \in \mathbb{C}^{n\times n}$ by considering the paths of length 1 starting at each graph vertex. Isospectral graph reductions allow for better eigenvalue estimates by considering longer paths in the graph through those vertices that have been removed. That is, an isospectral graph reduction collapses the information stored in a branch of $G$ to a single edge of $\mathcal{R}_S(G)$. The entire branch, and therefore more information, is then considered in calculating $\Gamma_{\mathbb{W}}(R(M; S))$, which leads to better eigenvalue estimates.

Theorem 4.3 together with Theorem 1.1 has the following corollary.

Corollary 4.1. For $M(\lambda) \in \mathbb{W}^{n\times n}_\pi$, suppose that the set $S \subseteq N$ is nonempty. Then

$$\sigma(M) \subseteq \Gamma_{\mathbb{W}}(R(M; S)) \cup \sigma(M|_{\bar{S}\bar{S}}).$$

Proof. Since $\Gamma_{\mathbb{W}}(R(M; S)) \subseteq \Gamma_{\mathbb{W}}(M)$, it follows from Theorem 1.1 that

$$\sigma(M) \subseteq \Gamma_{\mathbb{W}}(R(M; S)) \cup \sigma(M|_{\bar{S}\bar{S}}) \cup \sigma^{-1}(M).$$

Since $\det(M(\lambda) - \lambda I) = p(\lambda)/q(\lambda)$ for some $p(\lambda), q(\lambda) \in \mathbb{C}[\lambda]$ with no common factors, we have $\sigma(M) \cap \sigma^{-1}(M) = \emptyset$, and the result follows. ∎

In order to prove Theorem 4.3, we will need to be able to evaluate functions in $\mathbb{W}$ at some fixed value $\lambda \in \mathbb{C}$. In each case, we consider such functions first as elements of $\mathbb{W}$ with common factors removed, which we then evaluate at $\lambda$. In fact, most of these functions, once common factors are removed, will be polynomials in $\mathbb{C}[\lambda]$.

To simplify notation, we will use the following. For $M(\lambda) \in \mathbb{W}^{n\times n}$, where $n \ge 2$ and $S = \{2, \dots, n\}$, we let $R(M; S) = R_1$, $L_k(M) = L_k$, $L_k(R_1) = L^1_k$, $M_{k\ell} = \omega_{k\ell}$, and $\lambda - \omega_{kk} = \lambda_{kk}$. Also, we let $\omega_{k\ell} = p_{k\ell}/q_{k\ell}$ for $p_{k\ell}, q_{k\ell} \in \mathbb{C}[\lambda]$, where we assume that $q_{k\ell} = 1$ if $\omega_{k\ell} = 0$. Lastly, we set $R_k(M) = \sum_{\ell=1,\,\ell\ne k}^{n} |\omega_{k\ell} L_k|$.


Before proceeding, we state the following lemma.

Lemma 4.2. If $M(\lambda) \in \mathbb{W}^{n\times n}_\pi$ for $n \ge 2$, then $q_{11} q_{i1} L^1_i = \big[q_{i1}(q_{11}\lambda - p_{11})\big]^{n-1} L_1 L_i$.

Proof. First, note that

$$(R_1)_{ij} = \frac{p_{i1}\, p_{1j}\, q_{ij}\, q_{11} + q_{i1}\, q_{1j}\, p_{ij}\,(q_{11}\lambda - p_{11})}{q_{i1}\, q_{1j}\, q_{ij}\,(q_{11}\lambda - p_{11})} \qquad \text{for each } 2 \le i, j \le n,$$

from which it follows that $L^1_i = \prod_{j=2}^{n} q_{i1}\, q_{1j}\, q_{ij}\,(q_{11}\lambda - p_{11})$. Therefore,

$$L^1_i = \big[q_{i1}(q_{11}\lambda - p_{11})\big]^{n-1} \prod_{j=2}^{n} q_{1j} \prod_{j=2}^{n} q_{ij}. \qquad (4.3)$$

Since $L_k = \prod_{j=1}^{n} q_{kj}$ for $1 \le k \le n$, the result follows by multiplication of $q_{11} q_{i1}$ on both sides of this equation. ∎

A proof of Theorem 4.3 is the following.

Proof. Suppose that $\lambda \in \Gamma_{\mathbb{W}}(R_1)_i$ for fixed $\lambda \in \mathbb{C}$ and $2 \le i \le n$. Since each $(R_1)_{ij}$ is equal to $\omega_{ij} + \omega_{i1}\omega_{1j}/\lambda_{11}$ for $2 \le j \le n$, we have

$$\Big|\Big(\lambda_{ii} - \frac{\omega_{i1}\omega_{1i}}{\lambda_{11}}\Big) L^1_i\Big| \le \sum_{j=2,\, j\ne i}^{n} \Big|\Big(\omega_{ij} + \frac{\omega_{i1}\omega_{1j}}{\lambda_{11}}\Big) L^1_i\Big|.$$

Multiplying both sides of this inequality by $|\lambda_{11} q_{11} q_{i1}|$ implies, via Lemma 4.2, that

$$Q_i(M)\,|\lambda_{11} L_1 \lambda_{ii} L_i - \omega_{i1}\omega_{1i} L_1 L_i| \le Q_i(M) \sum_{j=2,\, j\ne i}^{n} |(\omega_{ij}\lambda_{11} + \omega_{i1}\omega_{1j})\, L_1 L_i|,$$

where $Q_i(M) = \big|q_{i1}(q_{11}\lambda - p_{11})\big|^{n-1}$. If $Q_i(M) \ne 0$, then by the triangle inequality,

$$|\lambda_{11} L_1 \lambda_{ii} L_i| - |\omega_{i1}\omega_{1i} L_1 L_i| \le \sum_{j=2,\, j\ne i}^{n} |\lambda_{11} L_1 \omega_{ij} L_i| + \sum_{j=2,\, j\ne i}^{n} |\omega_{i1} L_i\, \omega_{1j} L_1|.$$


Therefore,

$$|\lambda_{11} L_1 \lambda_{ii} L_i| \le \sum_{j=1,\, j\ne i}^{n} |\lambda_{11} L_1 \omega_{ij} L_i| + \sum_{j=2}^{n} |\omega_{i1}\omega_{1j} L_1 L_i| - |\omega_{i1} L_i\, \lambda_{11} L_1|.$$

By factoring, we have

$$|\lambda_{11} L_1|\big(|\lambda_{ii} L_i| - R_i(M)\big) \le |\omega_{i1} L_i|\big(R_1(M) - |\lambda_{11} L_1|\big). \qquad (4.4)$$

If we assume $\lambda \notin \Gamma_{\mathbb{W}}(M)_i \cup \Gamma_{\mathbb{W}}(M)_1$, then we have both

$$|\lambda_{ii} L_i| - R_i(M) > 0 \quad \text{and} \quad R_1(M) - |\lambda_{11} L_1| < 0.$$

These inequalities together with (4.4) imply that $\lambda_{11} L_1 = 0$. However, this, in turn, implies that $\lambda \in \Gamma_{\mathbb{W}}(M)_1$, which is impossible.

Hence, $\lambda \in \Gamma_{\mathbb{W}}(M)_i \cup \Gamma_{\mathbb{W}}(M)_1$ unless $Q_i(M) = 0$. Supposing, then, that this is the case, note that if $L_{ij} = \prod_{\ell=1,\,\ell\ne j}^{n} q_{i\ell}$ for $1 \le i, j \le n$, then

$$\Gamma_{\mathbb{W}}(M)_k = \Big\{\lambda \in \mathbb{C} : |L_{kk}(q_{kk}\lambda - p_{kk})| \le \sum_{j=1,\, j\ne k}^{n} |p_{kj} L_{kj}|\Big\} \qquad \text{for } 1 \le k \le n. \qquad (4.5)$$

Under the assumption $Q_i(M) = \big|q_{i1}(q_{11}\lambda - p_{11})\big|^{n-1} = 0$, note that if $q_{i1} = 0$, then $L_{ii} = 0$, implying $\lambda \in \Gamma_{\mathbb{W}}(M)_i$. If $q_{11}\lambda - p_{11} = 0$, then $\lambda \in \Gamma_{\mathbb{W}}(M)_1$ by (4.5). It then follows that

$$\Gamma_{\mathbb{W}}(R_1)_i \subseteq \Gamma_{\mathbb{W}}(M)_1 \cup \Gamma_{\mathbb{W}}(M)_i, \qquad (4.6)$$

implying $\Gamma_{\mathbb{W}}(R_1) \subseteq \Gamma_{\mathbb{W}}(M)$. The theorem follows by repeated use of Theorem 1.3, since it is always possible to reduce the matrix $M$ sequentially over an arbitrary index set $S$ by removing a single index at each step. ∎

The reason Gershgorin-type estimates improve under reduction can be found in equation (4.6). From the point of view of graph reduction, the inclusion given in (4.6) states that if we remove the vertex $v_j$ from a graph, then the $i$th Gershgorin region associated with the reduced graph is contained in the union of the $i$th and $j$th Gershgorin regions of the unreduced graph.

In Fig. 4.3 (left), we show the inclusion

$$\Gamma_{\mathbb{W}}(R(M; S))_2 \subseteq \Gamma_{\mathbb{W}}(M)_1 \cup \Gamma_{\mathbb{W}}(M)_2$$


Fig. 4.3 Left: the regions $\Gamma_{\mathbb{W}}(M)_1$ (blue), $\Gamma_{\mathbb{W}}(M)_2$ (red), and $\Gamma_{\mathbb{W}}(R(M; S))_2$ (tan, with dashed boundary) are shown. Right: the regions $K_{\mathbb{W}}(M)_{12}$ (blue), $K_{\mathbb{W}}(M)_{13}$ (red), $K_{\mathbb{W}}(M)_{23}$ (green), and $K_{\mathbb{W}}(R(M; S))_{23}$ (tan, with dashed boundary) are shown

for the matrix $M \in \mathbb{C}^{3\times 3}$ given by

$$M = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 2 \end{bmatrix}, \qquad (4.7)$$

where the index set $S$ is equal to $\{2, 3\}$.

To understand in which situations $\Gamma_{\mathbb{W}}(R(M; S))$ is strictly contained in $\Gamma_{\mathbb{W}}(M)$, we consider the following. For $M \in \mathbb{W}^{n\times n}_\pi$, let

$$\partial\Gamma_{\mathbb{W}}(M)_i = \{\lambda \in \mathbb{C} : |\lambda - \bar{M}_{ii}| = r_i(\bar{M})\} \qquad \text{for } 1 \le i \le n.$$

We note here that the boundary of the region $\Gamma_{\mathbb{W}}(M)_i$ in the complex plane is contained in the set $\partial\Gamma_{\mathbb{W}}(M)_i$ for each $1 \le i \le n$. This follows from the continuity of $|\lambda - \bar{M}_{ii}| - r_i(\bar{M})$ in the variable $\lambda$. However, if $\lambda \in \partial\Gamma_{\mathbb{W}}(M)_i$, it may be the case that $\lambda$ is contained in a neighborhood lying entirely within $\Gamma_{\mathbb{W}}(M)_i$, in which case $\lambda$ is not on the boundary of $\Gamma_{\mathbb{W}}(M)_i$. That is, the boundary of the set $\Gamma_{\mathbb{W}}(M)_i$ is contained in what we have defined as $\partial\Gamma_{\mathbb{W}}(M)_i$, but this containment may be proper.

Theorem 4.4. Let $M(\lambda) \in \mathbb{W}^{n\times n}_\pi$. Suppose the subset

$$\partial\Gamma_{\mathbb{W}}(M)_i \setminus \bigcup_{j=1,\, j\ne i}^{n} \Gamma_{\mathbb{W}}(M)_j$$

is an infinite set of points. Then $\Gamma_{\mathbb{W}}(R(M; S)) \subsetneq \Gamma_{\mathbb{W}}(M)$ for every nonempty $S \subseteq N$ with $i \notin S$.


Proof. Let $\lambda \in \mathbb{C}$ be fixed such that

$$\lambda \in \partial\Gamma_{\mathbb{W}}(M)_1 \setminus \bigcup_{j=2}^{n} \Gamma_{\mathbb{W}}(M)_j. \qquad (4.8)$$

Then we have both

$$|\lambda_{11} L_1| = R_1(M), \qquad (4.9)$$

and

$$|\lambda_{ii} L_i| > R_i(M) \qquad (4.10)$$

for all $1 < i \le n$. Supposing $\lambda \in \Gamma_{\mathbb{W}}(R_1)_i$ for some fixed $1 < i \le n$ and that $Q_i(M) \ne 0$, then (4.4) holds. Combining (4.4) with (4.9), we then have that

$$|\lambda_{11} L_1|\big(|\lambda_{ii} L_i| - R_i(M)\big) \le 0.$$

Moreover, since $|\lambda_{ii} L_i| > R_i(M)$ from equation (4.10), this together with the previous inequality implies that $\lambda_{11} L_1$ must be zero. However, given that $\lambda_{11} L_1$ is a nonzero polynomial, this happens at at most finitely many values of $\lambda \in \mathbb{C}$. Similarly, the polynomial $Q_i(M)$ equals 0 on only a finite subset of $\mathbb{C}$; hence the assumption that

$$\partial\Gamma_{\mathbb{W}}(M)_1 \setminus \bigcup_{j=2}^{n} \Gamma_{\mathbb{W}}(M)_j$$

is an infinite set in the complex plane yields a contradiction to assumption (4.8) for infinitely many points in this set. Hence, the result follows in the case that $S = \{2, \dots, n\}$. By sequentially removing single indices from $N$, the result follows by repeated use of Theorem 1.3. ∎

For a matrix $M \in \mathbb{W}^{n\times n}_\pi$, there is typically some region $\Gamma_{\mathbb{W}}(M)_i$ whose boundary is not contained in the union of the other $j$th Gershgorin regions. If this boundary is not a finite set of isolated points, then Theorem 4.4 guarantees that reducing $M$ over $S = N - \{i\}$ strictly improves one's Gershgorin-type estimate of $\sigma(M)$. This is demonstrated in the following example.

Example 4.4. Consider the matrix $M_0 \in \mathbb{W}^{5\times 5}_\pi$ given by

$$M_0 = \begin{bmatrix} 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 \end{bmatrix}.$$


Fig. 4.4 Left: $\Gamma_{\mathbb{W}}(M_0)$. Middle: $\Gamma_{\mathbb{W}}(M_1)$. Right: $\Gamma_{\mathbb{W}}(M_2)$, where in each, the spectrum $\sigma(M_0) = \{-1, -1, -i, i, 2\}$ is indicated

Let $S_1 = \{1, 2, 3\}$ and $S_2 = \{1, 2\}$. If $M_1 = R(M_0; S_1)$ and $M_2 = R(M_0; S_2)$, one can compute that

$$M_1 = \begin{bmatrix} \frac{\lambda+1}{\lambda^2} & \frac{1}{\lambda} & \frac{\lambda+1}{\lambda} \\[4pt] \frac{2\lambda+1}{\lambda^2} & \frac{1}{\lambda} & \frac{1}{\lambda} \\[4pt] 0 & 1 & 0 \end{bmatrix} \quad \text{and} \quad M_2 = \begin{bmatrix} \frac{\lambda+1}{\lambda^2} & \frac{2\lambda+1}{\lambda^2} \\[4pt] \frac{2\lambda+1}{\lambda^2} & \frac{\lambda+1}{\lambda^2} \end{bmatrix}. \qquad (4.11)$$

The Gershgorin regions of $M_0$, $M_1$, and $M_2$ are shown in Fig. 4.4. Since

$$\partial\Gamma_{\mathbb{W}}(M_0)_5 \setminus \bigcup_{j=1}^{4} \Gamma_{\mathbb{W}}(M_0)_j \quad \text{and} \quad \partial\Gamma_{\mathbb{W}}(M_1)_3 \setminus \bigcup_{j=1}^{2} \Gamma_{\mathbb{W}}(M_1)_j$$

consist of curves in $\mathbb{C}$, Theorem 4.4 implies the strict inclusions

$$\Gamma_{\mathbb{W}}(M_2) \subsetneq \Gamma_{\mathbb{W}}(M_1) \subsetneq \Gamma_{\mathbb{W}}(M_0),$$

which can be seen in the figure. In addition, since

$$(M_0)|_{\bar{S}_1\bar{S}_1} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \quad \text{and} \quad (M_0)|_{\bar{S}_2\bar{S}_2} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 1 & 0 \end{bmatrix},$$

it follows that $\sigma((M_0)|_{\bar{S}_1\bar{S}_1}) = \sigma((M_0)|_{\bar{S}_2\bar{S}_2}) = \{0\}$, not including multiplicities. Since the point 0 is in both $\Gamma_{\mathbb{W}}(M_1)$ and $\Gamma_{\mathbb{W}}(M_2)$, both $\Gamma_{\mathbb{W}}(M_1)$ and $\Gamma_{\mathbb{W}}(M_2)$ contain $\sigma(M_0)$. (Note that the matrix $M_1$ has been previously considered in Example 4.3.)

An important consequence of Theorem 4.3 is that a sequence of isospectral reductions of a matrix $M \in \mathbb{W}^{n\times n}_\pi$ can be used to obtain a sequence of estimates of $\sigma(M)$, each of which is better than the last. This can be seen, for instance, in Example 4.4. The extent to which a matrix $M$ is reduced therefore determines the extent to which we have improved our estimate of $\sigma(M)$.


With this in mind, we note that the most that a matrix $M \in \mathbb{W}^{n\times n}_\pi$ can be reduced is over a single-element set $\{i\}$, for some $i \in N$. In this case, the Gershgorin-type region of the fully reduced matrix $R(M; \{i\})$ is equal to

$$\Gamma_{\mathbb{W}}(R(M; \{i\})) = \{\lambda \in \mathbb{C} : p(\lambda) = 0\}$$

for some polynomial $p(\lambda) \in \mathbb{C}[\lambda]$. Hence, the Gershgorin region of this matrix is a finite set of points in the complex plane.

Additionally, if the matrix $M$ is in $\mathbb{C}^{n\times n}$, then $\Gamma_{\mathbb{W}}(R(M; \{i\})) \subseteq \sigma(M)$. That is, the Gershgorin-type region of a fully reduced complex-valued matrix is a finite set of points that is contained in the matrix's spectrum.

As an example of a fully reduced matrix, i.e., a matrix reduced to a single index, let $M_3 = R(M_2; \{1\})$, where $M_2$ is the $2\times 2$ matrix considered in the previous example. For this matrix, we find that $\Gamma_{\mathbb{W}}(M_3) = \{-1, -1, -i, i, 2\}$, which is, in fact, equal to the spectrum of the original unreduced matrix $M_0$ in this example.
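The claim about $M_3$ can be checked the same way (again assuming the reduction formula from Chapter 1, $R(M;S) = M_{SS} + M_{S\bar{S}}(\lambda I - M_{\bar{S}\bar{S}})^{-1}M_{\bar{S}S}$): reducing the $2\times 2$ matrix $M_2$ of (4.11) over the single index set $\{1\}$ leaves a $1\times 1$ matrix, and the resulting Gershgorin-type region is exactly a finite root set.

```python
import sympy as sp

lam = sp.symbols('lambda')

# M2 from (4.11)
M2 = sp.Matrix([[(lam + 1)/lam**2, (2*lam + 1)/lam**2],
                [(2*lam + 1)/lam**2, (lam + 1)/lam**2]])

# One-step reduction over {1}: R = M2_11 + M2_12*(lam - M2_22)^(-1)*M2_21
R = sp.cancel(M2[0, 0] + M2[0, 1]*M2[1, 0]/(lam - M2[1, 1]))

# The fully reduced Gershgorin-type region is the zero set of lam - R(lam):
# take the numerator after cancellation and read off its roots
num = sp.fraction(sp.cancel(lam - R))[0]
region = sp.roots(sp.Poly(num, lam))
print(region)  # roots -1 (twice), 2, i, -i, i.e. sigma(M0) with multiplicity
```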

Having considered how isospectral reductions can be used to improve eigenvalue estimates, we now turn our attention to how these techniques can be used to estimate the inverse spectrum of a matrix. Our main tool in this regard will be the spectral inverse operator introduced in Chap. 1.

If $M(\lambda) \in \mathbb{W}^{n\times n}$, then its inverse spectrum $\sigma^{-1}(M)$ comprises the complex numbers at which the determinant $\det(M - \lambda I)$ is undefined. Since the determinant of a matrix is composed of various products and sums of its entries, equations (1.1) and (1.2) imply the following proposition, in which $\overline{\operatorname{dom}}(M)$ denotes the complement of $\operatorname{dom}(M)$.

Proposition 4.1. If $M(\lambda) \in \mathbb{W}^{n\times n}$, then $\sigma^{-1}(M) \subseteq \overline{\operatorname{dom}}(M)$.

Phrased another way, the inverse eigenvalues of a matrix $M \in \mathbb{W}^{n\times n}$ are complex numbers at which the matrix $M$ is undefined, i.e., are contained in the complement of $\operatorname{dom}(M)$. However, it is not always the case that an element of $\overline{\operatorname{dom}}(M)$ is an inverse eigenvalue of $M$, as the following example shows.

Example 4.5. Consider the matrix $M \in \mathbb{W}^{2\times 2}$ given by

$$M(\lambda) = \begin{bmatrix} \frac{1}{\lambda - 1} & \frac{1}{\lambda - 1} \\[4pt] \frac{1}{\lambda} & \frac{\lambda + 1}{\lambda} \end{bmatrix}.$$

As can be computed, $\sigma^{-1}(M) = \emptyset$, yet $\overline{\operatorname{dom}}(M) = \{0, 1\}$. That is, neither of the values $0, 1 \in \overline{\operatorname{dom}}(M)$ is an inverse eigenvalue of $M$, although $M$ is not defined at these points.
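That $\sigma^{-1}(M)$ is empty here can be confirmed directly: after cancellation, $\det(M - \lambda I)$ has no denominator left, so there is no $\lambda$ at which it is undefined. A quick sketch:

```python
import sympy as sp

lam = sp.symbols('lambda')

# M(lambda) from Example 4.5, undefined at lambda = 0 and lambda = 1
M = sp.Matrix([[1/(lam - 1), 1/(lam - 1)],
               [1/lam, (lam + 1)/lam]])

# The poles at 0 and 1 cancel out of the determinant, leaving a polynomial,
# so det(M - lam*I) is defined everywhere and sigma^{-1}(M) is empty
d = sp.cancel((M - lam*sp.eye(2)).det())
print(d)  # -> lambda**2 - lambda - 2
```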

Proposition 4.1 gives us a way of estimating the inverse spectrum of a matrix $M \in \mathbb{W}^{n\times n}$. Alternatively, we can use Theorem 4.2 along with the spectral inverse $\mathcal{S}^{-1}(M)$ to generate a Gershgorin-type region that contains $\sigma^{-1}(M)$ if $M \in \mathbb{W}^{n\times n}_\pi$.


Fig. 4.5 Left: $\Gamma_{\mathbb{W}}(\mathcal{S}^{-1}(M))$. Right: $\Gamma_{\mathbb{W}}(R(\mathcal{S}^{-1}(M); S))$, where the inverse spectrum $\sigma^{-1}(M) = \{0, 0, 0, 0\}$ is indicated

Corollary 4.2. Let $M(\lambda) \in \mathbb{W}^{n\times n}_\pi$. Then $\sigma^{-1}(M)$ is contained in the set

$$\Gamma_{\mathbb{W}}\big(\mathcal{S}^{-1}(M)\big) = \bigcup_{i=1}^{n} \big\{\lambda \in \mathbb{C} : |\lambda - \overline{\mathcal{S}^{-1}(M)}_{ii}| \le r_i(\overline{\mathcal{S}^{-1}(M)})\big\}.$$

Example 4.6. Let $M \in \mathbb{W}^{4\times 4}_\pi$ be the matrix given by

$$M(\lambda) = \begin{bmatrix} \frac{1}{\lambda} & \frac{1}{\lambda} & 0 & 0 \\[2pt] 0 & \frac{1}{\lambda} & 1 & 0 \\[2pt] 0 & 0 & \frac{1}{\lambda} & 1 \\[2pt] 0 & 0 & 0 & \frac{1}{\lambda} \end{bmatrix}.$$

The polynomial extension of its spectral inverse is the matrix

$$\overline{\mathcal{S}^{-1}(M)} = \begin{bmatrix} -\lambda(\lambda^2-1)^9 & -\lambda(\lambda^2-1)^8 & -\lambda^2(\lambda^2-1)^7 & -\lambda^3(\lambda^2-1)^6 \\ 0 & -\lambda(\lambda^2-1)^5 & -\lambda^2(\lambda^2-1)^4 & -\lambda^3(\lambda^2-1)^3 \\ 0 & 0 & -\lambda(\lambda^2-1)^2 & -\lambda^2(\lambda^2-1) \\ 0 & 0 & 0 & -\lambda \end{bmatrix} + \lambda I,$$

which can be used to find the region $\Gamma_{\mathbb{W}}\big(\mathcal{S}^{-1}(M)\big)$. This set is shown in Fig. 4.5 (left).


We note that the Gershgorin-type region $\Gamma_{\mathbb{W}}(\mathcal{S}^{-1}(M))$ is the union of the sets

$$\Gamma_{\mathbb{W}}(\mathcal{S}^{-1}(M))_i = \Big\{\lambda \in \mathbb{C} : |\lambda - \overline{\mathcal{S}^{-1}(M)}_{ii}| \le \sum_{j=1,\, j\ne i}^{n} |\overline{\mathcal{S}^{-1}(M)}_{ij}|\Big\},$$

for $i = 1, 2, 3, 4$. The regions $\Gamma_{\mathbb{W}}(\mathcal{S}^{-1}(M))_1$, $\Gamma_{\mathbb{W}}(\mathcal{S}^{-1}(M))_2$, and $\Gamma_{\mathbb{W}}(\mathcal{S}^{-1}(M))_3$ in Fig. 4.5 are shown in red, blue, and tan, respectively. The set $\Gamma_{\mathbb{W}}(\mathcal{S}^{-1}(M))_4 = \{0\}$ is contained in the inverse spectrum $\sigma^{-1}(M) = \{0, 0, 0, 0\}$, which is indicated in the figure.

As with the eigenvalues of a matrix, an isospectral matrix reduction can be used to gain improved estimates of the inverse spectrum of a matrix $M \in \mathbb{W}^{n\times n}_\pi$.

Theorem 4.5 (Improved Inverse Eigenvalue Estimates). Let $M(\lambda) \in \mathbb{W}^{n\times n}_\pi$, where $S \subseteq N$ is nonempty. Then $\Gamma_{\mathbb{W}}\big(R(\mathcal{S}^{-1}(M); S)\big) \subseteq \Gamma_{\mathbb{W}}\big(\mathcal{S}^{-1}(M)\big)$.

A proof of Theorem 4.5 can be obtained by following the proof of Theorem 4.3, using the fact that the spectral inverse $\mathcal{S}^{-1}(M)$ can be sequentially reduced to a unique matrix via Theorem 1.5.

Example 4.7. Let $M \in \mathbb{W}^{4\times 4}$ be the matrix given in Example 4.6. For the index set $S = \{1, 2, 3\}$, the reduction of the spectral inverse of $M$ is

$$R(\mathcal{S}^{-1}(M); S) = \begin{bmatrix} \frac{-\lambda}{\lambda^2-1} & \frac{-\lambda}{(\lambda^2-1)^2} & \frac{-\lambda^2}{(\lambda^2-1)^3} \\[4pt] 0 & \frac{-\lambda}{\lambda^2-1} & \frac{-\lambda^2}{(\lambda^2-1)^2} \\[4pt] 0 & 0 & \frac{-\lambda}{\lambda^2-1} \end{bmatrix} + \lambda I.$$

Its polynomial extension is given by

$$\overline{R(\mathcal{S}^{-1}(M); S)} = \begin{bmatrix} -\lambda(\lambda^2-1)^9 & -\lambda(\lambda^2-1)^8 & -\lambda^2(\lambda^2-1)^7 \\ 0 & -\lambda(\lambda^2-1)^5 & -\lambda^2(\lambda^2-1)^4 \\ 0 & 0 & -\lambda(\lambda^2-1)^2 \end{bmatrix} + \lambda I.$$

The Gershgorin-type region of the reduced matrix $R(\mathcal{S}^{-1}(M); S)$ is shown in Fig. 4.5 (right), where one can see that $\sigma^{-1}(M) \subseteq \Gamma_{\mathbb{W}}\big(R(\mathcal{S}^{-1}(M); S)\big) \subseteq \Gamma_{\mathbb{W}}\big(\mathcal{S}^{-1}(M)\big)$. The regions $\Gamma_{\mathbb{W}}(R(\mathcal{S}^{-1}(M); S))_1$ and $\Gamma_{\mathbb{W}}(R(\mathcal{S}^{-1}(M); S))_2$ are shown in red and blue, respectively.

4.2 Brauer-Type Regions

By relating each row of a matrix $A \in \mathbb{C}^{n\times n}$ to a circle in the complex plane, Gershgorin's theorem allows for an algorithmically simple method for estimating the spectrum of a complex-valued matrix. One of the first successful attempts to improve on this result was given by Brauer.


Similar to that of Gershgorin, Brauer's result associates the rows of a complex-valued matrix with a number of regions in the complex plane, the union of which contains the matrix's eigenvalues. The difference is that instead of each row being associated with a circle, as in Gershgorin's theorem, Brauer's regions are determined by pairs of rows and are oval in shape.

Theorem 4.6 (Brauer [27]). Let $A \in \mathbb{C}^{n\times n}$, where $n \ge 2$. Then all eigenvalues of $A$ are located in the set

$$K(A) = \bigcup_{\substack{1 \le i, j \le n \\ i \ne j}} \{\lambda \in \mathbb{C} : |\lambda - A_{ii}|\,|\lambda - A_{jj}| \le r_i(A)\,r_j(A)\}. \qquad (4.12)$$

Also, $K(A) \subseteq \Gamma(A)$.

The individual regions, given by $\{\lambda \in \mathbb{C} : |\lambda - A_{ii}|\,|\lambda - A_{jj}| \le r_i(A)\,r_j(A)\}$ in equation (4.12), are known as Cassini ovals and may consist of one or two distinct components. Moreover, there are $\binom{n}{2}$ such regions for every $n \times n$ matrix with complex entries. As with Gershgorin's theorem, we prove an extension of Brauer's theorem for matrices in $\mathbb{W}^{n\times n}$.
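For a complex-valued matrix, both regions reduce to simple pointwise tests, so the inclusion $K(A) \subseteq \Gamma(A)$ can be probed numerically. A sketch using the matrix from (4.7) (membership tests only; the helper names are ours):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 1, 1],
              [1, 1, 2]], dtype=complex)
n = A.shape[0]

def r(i):
    """i-th absolute row sum of A with the diagonal entry removed."""
    return sum(abs(A[i, j]) for j in range(n) if j != i)

def in_gershgorin(z, tol=1e-9):
    return any(abs(z - A[i, i]) <= r(i) + tol for i in range(n))

def in_brauer(z):
    """Union of the Cassini ovals over all pairs of distinct rows."""
    return any(abs(z - A[i, i])*abs(z - A[j, j]) <= r(i)*r(j)
               for i in range(n) for j in range(n) if i != j)

# Every eigenvalue lies in K(A), and sampled points of K(A) lie in Gamma(A)
eigs = np.linalg.eigvals(A)
eigs_covered = all(in_brauer(z) for z in eigs)
grid = [x + 1j*y for x in np.linspace(-4, 5, 46) for y in np.linspace(-4, 4, 41)]
subset_holds = all(in_gershgorin(z) for z in grid if in_brauer(z))
```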

Theorem 4.7. Let $M(\lambda) \in \mathbb{W}^{n\times n}_\pi$, where $n \ge 2$. Then $\sigma(M)$ is contained in the set

$$K_{\mathbb{W}}(M) = \bigcup_{\substack{1 \le i, j \le n \\ i \ne j}} \{\lambda \in \mathbb{C} : |\lambda - \bar{M}_{ii}|\,|\lambda - \bar{M}_{jj}| \le r_i(\bar{M})\,r_j(\bar{M})\}.$$

Also, $K_{\mathbb{W}}(M) \subseteq \Gamma_{\mathbb{W}}(M)$.

Proof. As in the proof of Theorem 4.2, if $\alpha \in \sigma(M)$, then $\alpha \in \sigma(\bar{M})$, and the matrix $\bar{M}(\alpha)$ is in $\mathbb{C}^{n\times n}$. Brauer's theorem therefore implies that

$$|\alpha - \bar{M}(\alpha)_{ii}|\,|\alpha - \bar{M}(\alpha)_{jj}| \le r_i(\bar{M}, \alpha)\,r_j(\bar{M}, \alpha)$$

for some pair of distinct integers $i$ and $j$. It then follows that $\alpha \in K_{\mathbb{W}}(M)$, i.e., $\sigma(M) \subseteq K_{\mathbb{W}}(M)$.

To prove the assertion that $K_{\mathbb{W}}(M) \subseteq \Gamma_{\mathbb{W}}(M)$, let

$$K_{\mathbb{W}}(M)_{ij} = \{\lambda \in \mathbb{C} : |\lambda - \bar{M}(\lambda)_{ii}|\,|\lambda - \bar{M}(\lambda)_{jj}| \le r_i(\bar{M}, \lambda)\,r_j(\bar{M}, \lambda)\} \qquad (4.13)$$

for distinct $i$ and $j$. The claim then is that $K_{\mathbb{W}}(M)_{ij} \subseteq \Gamma_{\mathbb{W}}(M)_i \cup \Gamma_{\mathbb{W}}(M)_j$. To see this, assume for a fixed $\lambda \in \mathbb{C}$ that $\lambda \in K_{\mathbb{W}}(M)_{ij}$, i.e.,

$$|\lambda - \bar{M}(\lambda)_{ii}|\,|\lambda - \bar{M}(\lambda)_{jj}| \le r_i(\bar{M}, \lambda)\,r_j(\bar{M}, \lambda).$$


Fig. 4.6 Left: the Brauer region $K_{\mathbb{W}}(M)$ for $M$ from Example 4.3. Right: $K_{\mathbb{W}}(M) \subseteq \Gamma_{\mathbb{W}}(M)$

If $r_i(\bar{M}, \lambda)\,r_j(\bar{M}, \lambda) = 0$, then either $\lambda - \bar{M}(\lambda)_{ii} = 0$ or $\lambda - \bar{M}(\lambda)_{jj} = 0$. Since $\lambda = \bar{M}(\lambda)_{ii}$ implies $\lambda \in \Gamma_{\mathbb{W}}(M)_i$ and $\lambda = \bar{M}(\lambda)_{jj}$ implies $\lambda \in \Gamma_{\mathbb{W}}(M)_j$, we have $\lambda \in \Gamma_{\mathbb{W}}(M)_i \cup \Gamma_{\mathbb{W}}(M)_j$.

If $r_i(\bar{M}, \lambda)\,r_j(\bar{M}, \lambda) > 0$, then it follows that

$$\frac{|\lambda - \bar{M}(\lambda)_{ii}|}{r_i(\bar{M}, \lambda)} \cdot \frac{|\lambda - \bar{M}(\lambda)_{jj}|}{r_j(\bar{M}, \lambda)} \le 1.$$

Since at least one of the two quotients on the left must be less than or equal to 1, we must have $\lambda \in \Gamma_{\mathbb{W}}(M)_i \cup \Gamma_{\mathbb{W}}(M)_j$, which verifies the claim, and the result follows. ∎

We call the region $K_{\mathbb{W}}(M)$ the Brauer-type region of the matrix $M$, and the region $K_{\mathbb{W}}(M)_{ij}$ given in (4.13), the $ij$th Brauer-type region of $M$. Applying Theorem 4.7 to the matrix $M$ given in Example 4.3, we have the Brauer-type region shown on the left-hand side of Fig. 4.6. On the right is a comparison between $K_{\mathbb{W}}(M)$ and $\Gamma_{\mathbb{W}}(M)$, where the inclusion $K_{\mathbb{W}}(M) \subseteq \Gamma_{\mathbb{W}}(M)$ is demonstrated. Here, $K_{\mathbb{W}}(M)$ is shown in blue, and $\Gamma_{\mathbb{W}}(M)$ in red.

As with Gershgorin’s theorem, it is possible to improve a Brauer-type estimateof a matrix’s eigenvalues by reducing the matrix.

Theorem 4.8 (Improved Brauer Regions). Suppose $M(\lambda) \in \mathbb{W}^{n\times n}_\pi$. If $S \subseteq N$, where $|S| \ge 2$, then $K_{\mathbb{W}}(R(M; S)) \subseteq K_{\mathbb{W}}(M)$.

Theorem 4.8 has the following corollary.

Corollary 4.3. If $M(\lambda) \in \mathbb{W}^{n\times n}_\pi$, where $S \subseteq N$ and $|S| \ge 2$, then

$$\sigma(M) \subseteq K_{\mathbb{W}}(R(M; S)) \cup \sigma(M|_{\bar{S}\bar{S}}).$$


Fig. 4.7 Left: $K_{\mathbb{W}}(M_0)$. Middle: $K_{\mathbb{W}}(M_1)$. Right: $K_{\mathbb{W}}(M_2)$, where in each the spectrum $\sigma(M_0) = \{-1, -1, -i, i, 2\}$ is indicated

Proof. Since the region $K_{\mathbb{W}}(R(M; S))$ is a subset of $K_{\mathbb{W}}(M)$, it follows from Theorem 1.1 that

$$\sigma(M) \subseteq K_{\mathbb{W}}(R(M; S)) \cup \sigma(M|_{\bar{S}\bar{S}}) \cup \sigma^{-1}(M).$$

Since the determinant $\det(M(\lambda) - \lambda I)$ is equal to $p(\lambda)/q(\lambda)$, where $p(\lambda), q(\lambda) \in \mathbb{C}[\lambda]$ have no common factors, it follows that $\sigma(M) \cap \sigma^{-1}(M) = \emptyset$, implying the result. ∎

Continuing Example 4.4, the Brauer-type regions of $M_0$, $M_1$, and $M_2$ are shown in Fig. 4.7, where by Theorem 4.8, $K_{\mathbb{W}}(M_2) \subseteq K_{\mathbb{W}}(M_1) \subseteq K_{\mathbb{W}}(M_0)$. Moreover, Theorem 4.7 implies $K_{\mathbb{W}}(M_i) \subseteq \Gamma_{\mathbb{W}}(M_i)$ for $i = 0, 1, 2$.

We note that if $M \in \mathbb{W}^{n\times n}_\pi$ is reduced over the index set $S$, where $|S| = m$, then there are $\binom{n}{2} - \binom{m}{2}$ fewer $ij$th Brauer-type regions to calculate in the reduced matrix $R(M; S)$ than in $M$. Hence the number of regions quickly decreases as a matrix is reduced.

We now give a proof of Theorem 4.8.

Proof. Let $M \in \mathbb{W}^{n\times n}_\pi$ and $n \ge 3$. The claim is that

$$K_{\mathbb{W}}(R_1)_{ij} \subseteq K_{\mathbb{W}}(M)_{1i} \cup K_{\mathbb{W}}(M)_{1j} \cup K_{\mathbb{W}}(M)_{ij} \qquad (4.14)$$

for every pair $2 \le i, j \le n$, where $i \ne j$. To see this, let $\lambda \in K_{\mathbb{W}}(R_1)_{ij}$ for fixed $i$ and $j$, from which it follows that

$$\Big|\Big(\lambda_{ii} - \frac{\omega_{i1}\omega_{1i}}{\lambda_{11}}\Big) L^1_i\Big|\,\Big|\Big(\lambda_{jj} - \frac{\omega_{j1}\omega_{1j}}{\lambda_{11}}\Big) L^1_j\Big| \le \Bigg(\sum_{\substack{\ell=2 \\ \ell\ne i}}^{n} \Big|\Big(\omega_{i\ell} + \frac{\omega_{i1}\omega_{1\ell}}{\lambda_{11}}\Big) L^1_i\Big|\Bigg) \Bigg(\sum_{\substack{\ell=2 \\ \ell\ne j}}^{n} \Big|\Big(\omega_{j\ell} + \frac{\omega_{j1}\omega_{1\ell}}{\lambda_{11}}\Big) L^1_j\Big|\Bigg). \qquad (4.15)$$


If we multiply both sides of (4.15) by $|\lambda_{11} q_{11} q_{i1}|$ and $|\lambda_{11} q_{11} q_{j1}|$, then Lemma 4.2 implies

$$\prod_{k=i,j} Q_k(M)\,|\lambda_{kk}\lambda_{11} L_1 L_k - \omega_{k1}\omega_{1k} L_1 L_k| \le \prod_{k=i,j} Q_k(M) \Bigg(\sum_{\substack{\ell=2 \\ \ell\ne k}}^{n} |(\omega_{k\ell}\lambda_{11} + \omega_{k1}\omega_{1\ell})\, L_1 L_k|\Bigg).$$

Assuming for now that $Q_i(M)\,Q_j(M) \ne 0$, then by the triangle inequality,

$$\prod_{k=i,j} \big(|\lambda_{11} L_1 \lambda_{kk} L_k| - |\omega_{1k} L_1 \omega_{k1} L_k|\big) \le \prod_{k=i,j} \Bigg(\sum_{\substack{\ell=2 \\ \ell\ne k}}^{n} |\lambda_{11} L_1 \omega_{k\ell} L_k| + \sum_{\substack{\ell=2 \\ \ell\ne k}}^{n} |\omega_{1\ell} L_1 \omega_{k1} L_k|\Bigg). \qquad (4.16)$$

Suppose $\lambda \notin K_{\mathbb{W}}(M)_{1i} \cup K_{\mathbb{W}}(M)_{1j}$. Then $|\lambda_{11} L_1|\,|\lambda_{kk} L_k| > R_1(M)\,R_k(M)$ for $k = i, j$. Moreover, if $|\lambda_{11} L_1| \le R_1(M)$, then from (4.16), we obtain

$$\prod_{k=i,j} \big(R_1(M) R_k(M) - |\omega_{1k} L_1 \omega_{k1} L_k|\big) < \prod_{k=i,j} \Bigg(R_1(M) \sum_{\substack{\ell=2 \\ \ell\ne k}}^{n} |\omega_{k\ell} L_k| + \sum_{\substack{\ell=2 \\ \ell\ne k}}^{n} |\omega_{k1} L_1 \omega_{1\ell} L_k|\Bigg). \qquad (4.17)$$

Then, from the fact that

$$R_1(M) R_k(M) - |\omega_{k1} L_1 \omega_{1k} L_k| = R_1(M) \sum_{\substack{\ell=2 \\ \ell\ne k}}^{n} |\omega_{k\ell} L_k| + \sum_{\substack{\ell=2 \\ \ell\ne k}}^{n} |\omega_{k1} L_1 \omega_{1\ell} L_k|, \qquad (4.18)$$

it follows that (4.17) cannot hold. Therefore, if $\lambda \in K_{\mathbb{W}}(R_1)_{ij}$, $Q_i(M) Q_j(M) \ne 0$, and $\lambda \notin K_{\mathbb{W}}(M)_{1i} \cup K_{\mathbb{W}}(M)_{1j}$, then $|\lambda_{11} L_1| > R_1(M)$.

Proceeding as before, we assume that $\lambda \in K_{\mathbb{W}}(R_1)_{ij}$, so that in particular, (4.15) holds. Note that if $\lambda_{11} = 0$, then $\lambda \in K_{\mathbb{W}}(M)_{1i} \cup K_{\mathbb{W}}(M)_{1j}$, and the claim given in (4.14) holds. In what follows, we assume, then, that $\lambda_{11} \ne 0$. Moreover, if $Q_i(M) Q_j(M) \ne 0$, then multiplying both sides of (4.15) by $|\lambda_{11} q_{11} q_{i1}|$ and $|\lambda_{ii} L_i q_{11} q_{j1}|$ yields


$$\big(|\lambda_{11} L_1 \lambda_{ii} L_i| - |\omega_{1i} L_1 \omega_{i1} L_i|\big)\Big(|\lambda_{ii} L_i \lambda_{jj} L_j L_1| - \Big|\omega_{1j} L_1 \omega_{j1} L_j \tfrac{\lambda_{ii} L_i}{\lambda_{11}}\Big|\Big) \le \Bigg(\sum_{\substack{\ell=2 \\ \ell\ne i}}^{n} |\lambda_{11} L_1 \omega_{i\ell} L_i| + \sum_{\substack{\ell=2 \\ \ell\ne i}}^{n} |\omega_{1\ell} L_1 \omega_{i1} L_i|\Bigg) \Bigg(\sum_{\substack{\ell=2 \\ \ell\ne j}}^{n} |\lambda_{ii} L_i \omega_{j\ell} L_j L_1| + \sum_{\substack{\ell=2 \\ \ell\ne j}}^{n} \Big|\omega_{1\ell} L_1 \omega_{j1} L_j \tfrac{\lambda_{ii} L_i}{\lambda_{11}}\Big|\Bigg), \qquad (4.19)$$

by use of the triangle inequality.

by use of the triangle inequality.Supposing that � … KW.M/1i [ KW.M/ij , then we have both R1.M/Ri .M/ <

j�11L1�ii Li j and Ri .M/Rj .M/ < j�ii Li �jj Lj j. This together with (4.19) implies

R1.M/Ri .M/ � j!1i L1!i1Li j

Ri .M/Rj .M/L1 � j!1j L1!j1Lj

�ii Li

�11

j

< nX

`D2`¤i

j�11L1!i`Li j CnX

`D2`¤i

j!1`L1!i1Li j

�j�ii Li L1j�Rj .M/ � j!j1Lj j�C j!j1Lj

�ii Li

�11

j�R1.M/ � j!1j L1j�:

If $|\lambda_{ii} L_i| \le R_i(M)$, then

$$\big(R_1(M) R_i(M) - |\omega_{1i} L_1 \omega_{i1} L_i|\big)\Big(R_i(M) R_j(M)|L_1| - \Big|\omega_{1j} L_1 \omega_{j1} L_j \tfrac{\lambda_{ii} L_i}{\lambda_{11}}\Big|\Big) < \Bigg(\sum_{\substack{\ell=2 \\ \ell\ne i}}^{n} |\lambda_{11} L_1 \omega_{i\ell} L_i| + \sum_{\substack{\ell=2 \\ \ell\ne i}}^{n} |\omega_{1\ell} L_1 \omega_{i1} L_i|\Bigg) \Big(R_i(M)|L_1|\big(R_j(M) - |\omega_{j1} L_j|\big) + \Big|\omega_{j1} L_j \tfrac{\lambda_{ii} L_i}{\lambda_{11}}\Big|\big(R_1(M) - |\omega_{1j} L_1|\big)\Big). \qquad (4.20)$$

The claim then is that if $\lambda \notin K_{\mathbb{W}}(M)_{1i} \cup K_{\mathbb{W}}(M)_{1j}$, which implies $|\lambda_{11} L_1| > R_1(M)$ by the above, then the second term in each product of (4.20) has the relation

$$R_i(M) R_j(M)|L_1| - \Big|\omega_{1j} L_1 \omega_{j1} L_j \tfrac{\lambda_{ii} L_i}{\lambda_{11}}\Big| \ge R_i(M)|L_1|\big(R_j(M) - |\omega_{j1} L_j|\big) + \Big|\omega_{j1} L_j \tfrac{\lambda_{ii} L_i}{\lambda_{11}}\Big|\big(R_1(M) - |\omega_{1j} L_1|\big). \qquad (4.21)$$


To see this, note that this is true if and only if

$$R_i(M)\,|\omega_{j1} L_j L_1| \ge |\omega_{j1} L_j \lambda_{ii} L_i|\,\frac{R_1(M)}{|\lambda_{11}|}.$$

Since this is true if and only if $|\lambda_{11} L_1|\,R_i(M) \ge R_1(M)\,|\lambda_{ii} L_i|$, this verifies that (4.21) holds, since we have both $R_i(M) \ge |\lambda_{ii} L_i|$ and $|\lambda_{11} L_1| > R_1(M)$. Therefore, equations (4.20) and (4.21) together imply that

$$R_1(M) R_i(M) - |\omega_{1i} L_1 \omega_{i1} L_i| < \sum_{\substack{\ell=2 \\ \ell\ne i}}^{n} |\lambda_{11} L_1 \omega_{i\ell} L_i| + \sum_{\substack{\ell=2 \\ \ell\ne i}}^{n} |\omega_{1\ell} L_1 \omega_{i1} L_i|. \qquad (4.22)$$

Rewriting the right-hand side of this inequality in terms of $R_k(M)$ (for $k = 1, i$) yields

$$R_1(M) R_i(M) < |\lambda_{11} L_1| R_i(M) - |\lambda_{11} L_1 \omega_{i1} L_i| + |\omega_{i1} L_i| R_1(M).$$

This, in turn, implies that $R_i(M)\big(R_1(M) - |\lambda_{11} L_1|\big) < |\omega_{i1} L_i|\big(R_1(M) - |\lambda_{11} L_1|\big)$. However, it then follows that

$$R_i(M) = \sum_{\ell=1,\,\ell\ne i}^{n} |\omega_{i\ell} L_i| < |\omega_{i1} L_i|,$$

which is impossible.

which is impossible.Therefore, if both Qi .M/Qj .M/ ¤ 0 and � … KW.M/1i [ KW.M/1j [

KW.M/ij , then j�ii Li j > Ri .M/. Moreover, since this argument is symmetricin the indices i and j , it can be modified to show that if both

Qi .M/Qj .M/ ¤ 0 and � … KW.M/1i [ KW.M/1j [ KW.M/ij ;

then j�jj Lj j > Rj .M/.With this in mind, by multiplying (4.15) by jq11qi1j and jq11qi1j and assuming

once again that Qi .M/Qj .M/ ¤ 0, we see that the triangle inequality implies

YkDi;j

j�kkLk jjL1j � j!k1!1k

�11

L1Lk j

�Y

kDi;j

nX`D1`¤k

j!k`Lk jjL1j � j!k1LkL1j CnX

`D2

j!k1!1`

�11

LkL1j � j!k1!1k

�11

LkL1j:

(4.23)

Hence, if $\lambda \notin K_{\mathbb{W}}(M)_{1i} \cup K_{\mathbb{W}}(M)_{1j} \cup K_{\mathbb{W}}(M)_{ij}$, then from the previous calculations, $R_k(M) < |\lambda_{kk} L_k|$ for $k = 1, i, j$, which together with (4.23) implies that

$$\prod_{k=i,j} \Big(R_k(M)|L_1| - \Big|\frac{\omega_{k1}\omega_{1k}}{\lambda_{11}} L_1 L_k\Big|\Big) < \prod_{k=i,j} \Bigg(R_k(M)|L_1| - |\omega_{k1} L_k L_1| + |\omega_{k1} L_k|\frac{R_1(M)}{|\lambda_{11}|} - \Big|\frac{\omega_{k1}\omega_{1k}}{\lambda_{11}} L_k L_1\Big|\Bigg).$$

Hence, for either $k = i$ or $k = j$, it follows that

$$-|\omega_{k1} L_k L_1| + |\omega_{k1} L_k|\frac{R_1(M)}{|\lambda_{11}|} > 0.$$

Therefore, $R_1(M) > |\lambda_{11} L_1|$, which is impossible. Since this implies that the value $\lambda$ lies in $K_{\mathbb{W}}(M)_{1i} \cup K_{\mathbb{W}}(M)_{1j} \cup K_{\mathbb{W}}(M)_{ij}$ unless $Q_i(M) Q_j(M) = 0$, suppose that this product is, in fact, equal to zero.

In that case, note that by modifying equation (4.5), we have

$$K_{\mathbb{W}}(M)_{ij} = \Big\{\lambda \in \mathbb{C} : \prod_{k=i,j} |L_{kk}(q_{kk}\lambda - p_{kk})| \le \prod_{k=i,j} \sum_{\substack{\ell=1 \\ \ell\ne k}}^{n} |p_{k\ell} L_{k\ell}|\Big\}$$

for distinct $1 \le i, j \le n$. Hence, if $Q_k(M) = 0$ for either $k = i$ or $k = j$, then by calculations analogous to those given in the proof of Theorem 4.3, it follows that $\lambda \in K_{\mathbb{W}}(M)_{ik}$. This verifies the claim given in (4.14). Hence Theorem 4.8 holds for $S = N - \{1\}$.

As in the previous proofs, Theorem 1.3 can be invoked to generalize this result to the reduction over any index set $S$, as long as $|S| \ge 2$. ∎

The following is the reason Brauer-type estimates improve under reduction. If we reduce $M \in \mathbb{W}^{n\times n}_\pi$ over the set $S = N - \{k\}$, then the $ij$th Brauer region of the reduced matrix is contained in the union of the $ij$th, $ik$th, and $jk$th Brauer regions of the unreduced matrix. As an example of this, the inclusion

$$K_{\mathbb{W}}(R(M; S))_{23} \subseteq K_{\mathbb{W}}(M)_{12} \cup K_{\mathbb{W}}(M)_{13} \cup K_{\mathbb{W}}(M)_{23}$$

is shown in Fig. 4.3 (right) for the matrix $M$ given in (4.7) and index set $S = \{2, 3\}$.

4.3 Brualdi-Type Regions

The eigenvalue estimates of both Gershgorin and Brauer use a technique of associating regions in the complex plane with the rows and pairs of rows of a matrix $M \in \mathbb{C}^{n\times n}$. Brualdi [5] was able to improve on these results using combinations of the matrix rows corresponding to the cycle structure of the graph $G$ associated with


the matrix $M$. This result was later extended by Varga [27], who likewise used the cycles of $G$ to estimate the spectrum of $M$.

In this section, we show that the results of Brualdi and Varga can be extended to matrices in $\mathbb{W}^{n\times n}$. Because of the graph-theoretic nature of these results, we take the point of view in this section that we are considering the spectrum of a graph $G \in \mathbb{G}$ instead of a matrix $M \in \mathbb{W}^{n\times n}$. This shift in perspective from Sections 4.1 and 4.2 will require some changes in our notation. However, we will endeavor to keep this as uniform as possible.

For notational convenience, we let $\mathbb{G}_n$ be the set of graphs

$$\mathbb{G}_n = \{G = (V, E, \omega) : M(G) \in \mathbb{W}^{n\times n}\},$$

i.e., the graphs with $n$ vertices for some $n \in \mathbb{N}$. Hence $\mathbb{G} = \bigcup_{n\ge 1} \mathbb{G}_n$. We note that there is a one-to-one correspondence between the graphs in $\mathbb{G}_n$ and the matrices $\mathbb{W}^{n\times n}$. Therefore, we may talk of a graph $G \in \mathbb{G}_n$ associated with a matrix $M = M(G)$ in $\mathbb{W}^{n\times n}$, and conversely, without ambiguity.

Let $G = (V, E, \omega)$ be a graph in $\mathbb{G}_n$. A strong cycle of $G$ is a cycle $v_1, \dots, v_m$ such that $m \ge 2$. Furthermore, if $v_i \in V$ has no strong cycle passing through it, then we define its associated weak cycle as $v_i$, regardless of whether $e_{ii} \in E$. For $G \in \mathbb{G}$, we let $C_s(G)$ and $C_w(G)$ denote the sets of strong and weak cycles of $G$, respectively, and let $C(G) = C_s(G) \cup C_w(G)$ denote the cycle set of $G$.

Recall from Chap. 3 that a directed graph $G$ is strongly connected if there is a path from each vertex of the graph to every other vertex. The strongly connected components of $G = (V, E, \omega)$ are its maximal strongly connected subgraphs. Moreover, the vertex set $V = \{v_1, \dots, v_n\}$ of $G$ can always be labeled in such a way that $M(G)$ has the triangular block structure

$$M(G) = \begin{bmatrix} M(S_1(G)) & 0 & \cdots & 0 \\ * & M(S_2(G)) & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ * & \cdots & * & M(S_m(G)) \end{bmatrix},$$

where each $S_i(G)$ is a strongly connected component of $G$, and the blocks below the diagonal (marked $*$) are block matrices with possibly nonzero entries (see [6, 22, 27] for more details).

Since the strongly connected components of a graph are unique, then for $G \in \mathbb{G}_n$, we define

$$\tilde{r}_i(G) = \sum_{j \in N_\ell,\, j\ne i} |M(S_\ell(G))_{ij}| \qquad \text{for } 1 \le i \le n,$$

where $i \in N_\ell$, and $N_\ell$ is the set of indices indexing the vertices in $S_\ell(G)$. That is, $\tilde{r}_i(G)$ is the $i$th absolute row sum of $M(G)$ restricted to the strongly connected component containing $v_i$. We let $\bar{G}$ be the graph with adjacency matrix $\overline{M(G)}$. Furthermore, we let $\tilde{r}_i(\bar{G}) = \tilde{r}_i(\bar{G}, \lambda)$, where we consider $\tilde{r}_i(\bar{G}, \lambda) : \mathbb{C} \to \mathbb{C}$.
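The restricted row sums $\tilde{r}_i$ depend only on the strongly connected component containing $v_i$, so any SCC algorithm yields them directly. Below is a small self-contained sketch (Kosaraju's two-pass method on a hypothetical 4-vertex adjacency matrix of our own choosing, not one from the text); for a matrix over $\mathbb{W}$, one would apply the same partition to $\bar{M}$:

```python
# Hypothetical example: vertices {0,1} and {2,3} are the two strongly connected
# components, with a single linking edge 2 -> 0 between them
A = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [1, 0, 0, 1],
     [0, 0, 1, 0]]
n = len(A)

def finish_order(adj):
    """Vertices ordered by DFS finishing time (iterative first pass)."""
    seen, order = set(), []
    for s in range(n):
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(range(n)))]
        while stack:
            u, it = stack[-1]
            v = next((w for w in it if adj[u][w] and w not in seen), None)
            if v is None:
                order.append(u)
                stack.pop()
            else:
                seen.add(v)
                stack.append((v, iter(range(n))))
    return order

# Second pass: sweep the transposed graph in reverse finishing order
AT = [[A[j][i] for j in range(n)] for i in range(n)]
comp, sccs, seen = {}, [], set()
for s in reversed(finish_order(A)):
    if s in seen:
        continue
    scc, stack = [], [s]
    seen.add(s)
    while stack:
        u = stack.pop()
        scc.append(u)
        comp[u] = len(sccs)
        for v in range(n):
            if AT[u][v] and v not in seen:
                seen.add(v)
                stack.append(v)
    sccs.append(sorted(scc))

# r~_i: absolute row sum restricted to i's strongly connected component
r_tilde = [sum(abs(A[i][j]) for j in range(n)
               if j != i and comp[j] == comp[i]) for i in range(n)]
print(sccs, r_tilde)  # -> [[2, 3], [0, 1]] [1, 1, 1, 1]
```

The cross-component entry $A_{20}$ is excluded from $\tilde{r}_2$, which is exactly what makes the Brualdi-type radii below smaller than the plain Gershgorin radii.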

If A 2 Cn�n, then we write Qri .G; �/ D Qri .A/, where A D M.G/. Moreover, we

let C.A/ D Cs.A/ [ Cw.A/. This allows us to state the following theorem of Varga.

Theorem 4.9 (Varga [27]). Let $A \in \mathbb{C}^{n\times n}$. Then the eigenvalues of $A$ are contained in the set

$$B(A) = \bigcup_{\gamma \in C(A)} \Big\{ \lambda \in \mathbb{C} : \prod_{v_i \in \gamma} |\lambda - A_{ii}| \le \prod_{v_i \in \gamma} \tilde r_i(A) \Big\}.$$

Also, $B(A) \subseteq K(A)$.
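The cycle-product bound of Theorem 4.9 is easy to check numerically for a small example. The sketch below is not from the text: the $2\times 2$ matrix is a hypothetical example whose only strong cycle is $v_1, v_2$ and which forms a single strongly connected component, so $\tilde r_i$ reduces to the usual off-diagonal absolute row sum.

```python
import numpy as np

# Numerical check of Varga's cycle-based bound for a hypothetical
# 2x2 matrix. Its only strong cycle is v1, v2, and the whole matrix
# is one strongly connected component, so r~_i is simply the
# off-diagonal absolute row sum.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
r = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))

# Every eigenvalue must satisfy the product inequality over the cycle.
for lam in np.linalg.eigvals(A):
    lhs = abs(lam - A[0, 0]) * abs(lam - A[1, 1])
    rhs = r[0] * r[1]
    assert lhs <= rhs + 1e-12
```

Here both eigenvalues, $1$ and $3$, satisfy $|\lambda - 2|^2 \le 1$, so each lies in the single Brualdi set associated with the cycle $\gamma = \{v_1, v_2\}$.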

As with the theorems of Gershgorin and Brauer, this result can be extended to matrices in $\mathbb{W}^{n\times n}$.

Theorem 4.10. Let $G \in \mathbb{G}$. Then $\sigma(G)$ is contained in the set

$$B_{\mathbb W}(G) = \bigcup_{\gamma \in C(\bar G)} \Big\{ \lambda \in \mathbb{C} : \prod_{v_i \in \gamma} |\lambda - M(\bar G)_{ii}| \le \prod_{v_i \in \gamma} \tilde r_i(\bar G) \Big\}. \tag{4.24}$$

Also, $B_{\mathbb W}(G) \subseteq K_{\mathbb W}(G)$.

We call $B_{\mathbb W}(G)$ the Brualdi-type region of the graph $G$, and the set

$$B_{\mathbb W}(G)_\gamma = \Big\{ \lambda \in \mathbb{C} : \prod_{v_i \in \gamma} |\lambda - M(\bar G)_{ii}| \le \prod_{v_i \in \gamma} \tilde r_i(\bar G) \Big\}$$

the Brualdi-type region associated with the cycle $\gamma \in C(\bar G)$.

Proof. For $G \in \mathbb{G}^n$, let $\bar G = \bar G(\lambda)$, where for fixed $\alpha \in \mathbb{C}$, $\bar G(\alpha)$ is the graph with adjacency matrix $M(\bar G, \alpha) \in \mathbb{C}^{n\times n}$. Moreover, for every $\gamma = \{v_1,\dots,v_m\}$ in $C(\bar G)$ and fixed $\alpha \in \mathbb{C}$, let $\gamma(\alpha)$ be the set of vertices $\{v_1,\dots,v_m\}$ in the graph $\bar G(\alpha)$. Using this notation, if $\alpha \in \sigma(G)$, then Lemma 4.1 and Theorem 4.9 imply that there exists $\gamma' \in C(\bar G(\alpha))$ such that

$$\prod_{v_i \in \gamma'} |\alpha - M(\bar G, \alpha)_{ii}| \le \prod_{v_i \in \gamma'} \tilde r_i(\bar G, \alpha). \tag{4.25}$$

There are then two possibilities: either $\gamma' \in C(\bar G)$ or $\gamma' \notin C(\bar G)$. If $\gamma' \in C(\bar G)$, then the set of vertices $\gamma'(\alpha)$ is also a cycle in $\bar G$, in which case relations (4.24) and (4.25) imply $\alpha \in B_{\mathbb W}(G)$. Suppose, then, that $\gamma' \notin C(\bar G)$.

Note that if $\gamma' \in C_s(\bar G(\alpha))$, then since $M(\bar G, \alpha)_{ij} \neq 0$ implies $M(\bar G, \lambda)_{ij} \neq 0$ for $i \neq j$, it follows that $\gamma' \in C_s(\bar G)$. Since this is impossible, $\gamma' \in C_w(\bar G(\alpha))$; that is, $\gamma'$ must be a loop of some vertex $v_j$, where the graph induced by $\{v_j\}$ in $\bar G(\alpha)$ is a strongly connected component of $\bar G(\alpha)$. Thus, equation (4.25) is equivalent to $|\alpha - M(\bar G, \alpha)_{jj}| \le 0$, implying $\alpha = M(\bar G, \alpha)_{jj}$.

Since some cycle $\gamma \in C(\bar G)$ contains the vertex $v_j$, it follows that $\alpha$ is contained in the set

$$\Big\{ \lambda \in \mathbb{C} : \prod_{v_i \in \gamma} |\lambda - M(\bar G, \lambda)_{ii}| \le \prod_{v_i \in \gamma} \tilde r_i(\bar G, \lambda) \Big\},$$

implying that $\alpha \in B_{\mathbb W}(G)$.

To show now that $B_{\mathbb W}(G) \subseteq K_{\mathbb W}(G)$, let $\gamma \in C(\bar G)$. Supposing $\gamma \in C_w(\bar G)$, then $\gamma = \{v_i\}$ for some vertex $v_i$ of $G$ and

$$B_{\mathbb W}(G)_\gamma = \{ \lambda \in \mathbb{C} : |\lambda - M(\bar G, \lambda)_{ii}| = 0 \},$$

since $v_i$ is the vertex set of some strongly connected component of $\bar G$. It follows from (4.13) that $B_{\mathbb W}(G)_\gamma \subseteq K_{\mathbb W}(G)_{ij}$ for every $1 \le j \le n$ with $i \neq j$. In particular, note that if $\tilde r_i(\bar G, \lambda) = 0$, then $\lambda \in K_{\mathbb W}(G)_{ij}$ for every $1 \le j \le n$ with $i \neq j$.

If, on the other hand, $\gamma \in C_s(\bar G)$, then for convenience let $\gamma = \{v_1,\dots,v_p\}$, where $p > 1$, and note that

$$B_{\mathbb W}(G)_\gamma = \Big\{ \lambda \in \mathbb{C} : \prod_{i=1}^p |\lambda - M(\bar G, \lambda)_{ii}| \le \prod_{i=1}^p \tilde r_i(\bar G, \lambda) \Big\}. \tag{4.26}$$

Assuming $0 < \tilde r_i(\bar G, \lambda)$ for all $1 \le i \le p$, then for fixed $\lambda \in B_{\mathbb W}(G)_\gamma$, it follows by raising both sides of the inequality in (4.26) to the $(p-1)$st power that

$$\prod_{\substack{1 \le i,j \le p \\ i \neq j}} \frac{|\lambda - M(\bar G, \lambda)_{ii}|\,|\lambda - M(\bar G, \lambda)_{jj}|}{\tilde r_i(\bar G, \lambda)\,\tilde r_j(\bar G, \lambda)} \le 1. \tag{4.27}$$

Since not all the terms of the product in (4.27) can exceed unity, it follows that for some pair of indices $\ell$ and $k$, where $1 \le \ell, k \le p$ and $\ell \neq k$, we must have

$$|\lambda - M(\bar G, \lambda)_{kk}|\,|\lambda - M(\bar G, \lambda)_{\ell\ell}| \le \tilde r_k(\bar G, \lambda)\,\tilde r_\ell(\bar G, \lambda). \tag{4.28}$$

Using the fact that $\tilde r_i(\bar G, \lambda) \le r_i(\bar G, \lambda)$ for all $1 \le i \le n$, we conclude that $\lambda \in K_{\mathbb W}(G)_{k\ell}$, completing the proof. □

The Brualdi-type region for the graph $G$ with adjacency matrix $M = M(G)$, given by (4.2), is shown in Fig. 4.8. We note that $B_{\mathbb W}(G) = K_{\mathbb W}(M)$ in this particular case.

We now consider Brualdi's original result, which can be stated as follows.

Fig. 4.8 The Brualdi-type region $B_{\mathbb W}(G)$ for $G$ in Fig. 4.2 (left)

Theorem 4.11 (Brualdi [5]). Let $A \in \mathbb{C}^{n\times n}$, where $C_w(A) = \emptyset$. Then the eigenvalues of $A$ are contained in the set

$$br(A) = \bigcup_{\gamma \in C(A)} \Big\{ \lambda \in \mathbb{C} : \prod_{v_i \in \gamma} |\lambda - A_{ii}| \le \prod_{v_i \in \gamma} r_i(A) \Big\}.$$

Also, $B(A) \subseteq br(A) \subseteq K(A)$.

As with the theorems of Gershgorin, Brauer, and Varga, this result generalizes to matrices with entries in $\mathbb{W}$ as follows.

Theorem 4.12. Let $G \in \mathbb{G}$, where $C_w(G) = \emptyset$. Then $\sigma(G)$ is contained in the set

$$br_{\mathbb W}(G) = \bigcup_{\gamma \in C(\bar G)} \Big\{ \lambda \in \mathbb{C} : \prod_{v_i \in \gamma} |\lambda - M(\bar G)_{ii}| \le \prod_{v_i \in \gamma} r_i(\bar G) \Big\}. \tag{4.29}$$

Also, $B_{\mathbb W}(G) \subseteq br_{\mathbb W}(G) \subseteq K_{\mathbb W}(G)$.

Proof. Note that for every graph $G \in \mathbb{G}$, we have $\tilde r_i(\bar G) \le r_i(\bar G)$ for all $\lambda \in \mathbb{C}$. Hence

$$B_{\mathbb W}(G) \subseteq \bigcup_{\gamma \in C(\bar G)} \Big\{ \lambda \in \mathbb{C} : \prod_{v_i \in \gamma} |\lambda - M(\bar G)_{ii}| \le \prod_{v_i \in \gamma} r_i(\bar G) \Big\}.$$

Theorem 4.10 then implies that $\sigma(G)$ is contained in the set $br_{\mathbb W}(G)$. Furthermore, if $\tilde r_i(G)$ is replaced by $r_i(G)$ in the proof of Theorem 4.10, then (4.28) implies that $br_{\mathbb W}(G) \subseteq K_{\mathbb W}(G)$, completing the proof. □

We will refer to the region $br_{\mathbb W}(G)$, given in (4.29), as the original Brualdi-type region of $G$.

Fig. 4.9 Left: $B_{\mathbb W}(G_0)$. Middle: $B_{\mathbb W}(G_1)$. Right: $B_{\mathbb W}(G_2)$, where in each, the spectrum $\sigma(G_0) = \{-1, -1, -i, i, 2\}$ is indicated

As demonstrated in Sections 4.1 and 4.2, the eigenvalue regions associated with both Gershgorin and Brauer shrink under isospectral reduction. However, for both Brualdi-type and original Brualdi-type regions, the situation is more complicated. For certain reductions, the Brualdi-type (original Brualdi-type) region of a graph may decrease in size, similarly to Gershgorin-type and Brauer-type regions. In other cases, the Brualdi-type (original Brualdi-type) region of a graph may increase in size as the graph is reduced.

Example 4.8. Suppose $G_i$ is the graph with adjacency matrix $M(G_i) = M_i$ considered in Example 4.4 for $i = 0, 1, 2$. Here, the graph $G_{i+1}$ is the reduction of the graph $G_i$ for $i = 0, 1$. In this particular case, we have the inclusions

$$B_{\mathbb W}(G_2) \subseteq B_{\mathbb W}(G_1) \subseteq B_{\mathbb W}(G_0),$$

which can be seen in Fig. 4.9. That is, the Brualdi-type regions of the graph $G_0$ shrink under this sequence of isospectral reductions.

In the following, we give an example of a graph whose Brualdi-type regions do not have this property.

Example 4.9. Consider the graph $H \in \mathbb{G}_\pi$ given in Fig. 4.10. If $H$ is reduced over the sets $S = \bar v_1 = \{v_2, v_3, v_4\}$ and $T = \bar v_4 = \{v_1, v_2, v_3\}$, then

$$M(R_S(H)) = \begin{bmatrix} \frac{1}{\lambda} & \frac{1}{10} & 0 \\[2pt] \frac{1}{10\lambda} & 0 & 1 \\[2pt] 0 & 1 & 0 \end{bmatrix} \quad\text{and}\quad M(R_T(H)) = \begin{bmatrix} \frac{1}{\lambda} & \frac{1}{\lambda} & 0 \\[2pt] 1 & 0 & 1 \\[2pt] 0 & 1 & 0 \end{bmatrix}.$$

In this case, we have the strict inclusions

$$B_{\mathbb W}(R_T(H)) \subset B_{\mathbb W}(H) \subset B_{\mathbb W}(R_S(H)),$$

Fig. 4.10 Top Left: $B_{\mathbb W}(H)$. Top Middle: $B_{\mathbb W}(R_S(H))$. Top Right: $B_{\mathbb W}(R_T(H))$, where $S = \bar v_1$ and $T = \bar v_4$. The eigenvalues $\sigma(H)$ are indicated

shown in Fig. 4.10. In particular, since $B_{\mathbb W}(H) \subset B_{\mathbb W}(R_S(H))$, reducing the graph $H$ over $S$ increases the size of the graph's Brualdi-type region. Therefore, isospectral graph reductions do not always improve Brualdi-type estimates.

The fact that isospectral graph reductions do not always improve Brualdi-type regions should not be too surprising. The major reason, as we will see, is that isospectral graph reductions do not preserve the cycle structure of a graph and may both create and destroy cycles. Since Brualdi's eigenvalue estimates rely on this structure, his technique for locating the eigenvalues of a matrix is very different from the methods of Gershgorin and Brauer, which do not take any particular structure into account.

To give a sufficient condition under which a graph's Brualdi-type region shrinks as the graph is reduced, we require the following definitions. First, let $G = (V,E,\omega)$, where $G \in \mathbb{G}^n_\pi$ for some $n \ge 1$, and let $G$ have the strongly connected components $S_1(G),\dots,S_m(G)$. Define

$$E_{scc} = \{ e \in E : e \in S_i(G),\ 1 \le i \le m \}.$$

The cycle $\gamma \in C(G)$ is said to be adjacent to $v_i \in V$ if $v_i \notin \gamma$ and there is some vertex $v_j \in \gamma$ such that $e_{ji} \in E_{scc}$.

Second, for every $v_i \in V$, we define the set of cycles

$$\mathrm{Adj}(v_i, G) = \{ \gamma \in C(G) : \gamma \text{ is adjacent to } v_i \}.$$

Moreover, if $C(v_i, G) = \{ \gamma \in C(G) : v_i \in \gamma \}$, then let $S(v_i, G) \subseteq C(v_i, G)$ be the set defined as follows.

For a fixed $i \in N$, let $\gamma = v_{\alpha_1},\dots,v_{\alpha_m}$ be a cycle in $C(v_i, G)$, where $n \ge m \ge 1$ and $v_i = v_{\alpha_1}$. If $m = 1$, that is, $\gamma = \{v_i\}$, then $\gamma \in S(v_i, G)$. Otherwise, supposing $1 < m \le n$, relabel the vertices of $G$ such that $v_{\alpha_j}$ is $v_j$ for $1 \le j \le m$, and denote this relabeled graph by $G_r = (V_r, E_r, \omega_r)$. Then $\gamma \in S(v_1, G)$ if $e_{j1} \notin E_r$ for $1 < j < m$ and $e_{mk} \notin E_r^{scc}$ for $m < k \le n$.

Since it will be needed later, we furthermore define the set $S_{br}(v_i, G)$ to be the set of cycles in $S(v_i, G)$ such that $\gamma \in S_{br}(v_1, G)$ if $e_{j1} \notin E_r$ for $1 < j < m$ and $e_{mk} \notin E_r$ for $m < k \le n$. Moreover, if $v \in V$, we let $\bar v = V \setminus \{v\}$. With this in place, we state the following theorem.

Theorem 4.13 (Improved Brualdi-Type Regions). Let $G = (V,E,\omega)$, where $G \in \mathbb{G}_\pi$ and $|V| \ge 2$. If $v \in V$ is such that both $\mathrm{Adj}(v, G) = \emptyset$ and $C(v, G) = S(v, G)$, then $B_{\mathbb W}(R_{\bar v}(G)) \subseteq B_{\mathbb W}(G)$.

Theorem 4.13 states that if the vertex $v$ is adjacent to no cycle in $C(G)$, and each cycle passing through $v$ is in $S(v, G)$, then removing this vertex from the graph will improve its Brualdi-type region.

We note that for the graph $H$ in Fig. 4.10, the vertex $v_1$ has the property that $\mathrm{Adj}(v_1, H) = \{v_2, v_3\} \neq \emptyset$. Hence, Theorem 4.13 does not apply to the reduction of $H$ over $S = \bar v_1$. On the other hand, the vertex $v_4$ of $H$ has the property that $\mathrm{Adj}(v_4, H) = \emptyset$ and $S(v_4, H) = C(v_4, H)$. Therefore, reducing $H$ over the vertex set $T = \bar v_4$ improves the Brualdi-type region of this graph, which can be seen by comparing the upper left-hand and upper right-hand sides of Fig. 4.10.

Example 4.10. To see why the condition $C(v, G) = S(v, G)$ is necessary in Theorem 4.13, we consider the following. Let $J, R_S(J) \in \mathbb{G}$ be the graphs with adjacency matrices

$$M(J) = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{bmatrix} \quad\text{and}\quad M(R_S(J)) = \begin{bmatrix} 0 & 1 & 0 \\[2pt] \frac{1}{\lambda} & 0 & 1 \\[2pt] \frac{1}{\lambda} & 0 & 0 \end{bmatrix},$$

where $S = \bar v_1 = \{v_2, v_3, v_4\}$. In this case, the region $B_{\mathbb W}(R_S(J))$ is not a subset of $B_{\mathbb W}(J)$. We note that $\mathrm{Adj}(v_1, J) = \emptyset$, but $S(v_1, J)$ consists of the cycle $v_1, v_2, v_3$, whereas the cycle set $C(v_1, J)$ is equal to $\{v_1, v_2, v_3;\ v_1, v_2, v_3, v_4\}$. That is, $C(v_1, J) \neq S(v_1, J)$.

Observe that graph reductions can increase, decrease, or maintain the number of cycles in a graph's cycle set. For instance, the graph $G_0$ in Example 4.8 has 12 cycles in its cycle set, whereas $G_1$ has 3 and $G_2$ has 1 (see Fig. 4.9). In contrast, let $P, R_U(P) \in \mathbb{G}$ be the graphs with adjacency matrices

$$M(P) = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 \end{bmatrix} \quad\text{and}\quad M(R_U(P)) = \begin{bmatrix} 0 & 1 & 0 & 0 \\[2pt] \frac{1}{\lambda} & 0 & \frac{1}{\lambda} & 0 \\[2pt] 0 & 0 & 0 & 1 \\[2pt] \frac{1}{\lambda} & 0 & \frac{1}{\lambda} & 0 \end{bmatrix},$$

where $U = \{v_1, v_2, v_3, v_4\}$. Here, the cycle set $C(P)$ is equal to $\{v_1, v_2, v_5;\ v_3, v_4, v_5\}$, whereas $C(R_U(P)) = \{v_1, v_2;\ v_3, v_4;\ v_1, v_2, v_3, v_4\}$. That is, reducing $P$ over $U$ increases the number of cycles needed to compute the associated Brualdi-type region from 2 to 3. Hence, in this case the number of subregions that make up the Brualdi-type region increases as the graph is reduced. This is in contrast to Gershgorin- and Brauer-type regions, in which the number of subregions always decreases as the associated graph is reduced.
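These cycle counts can be confirmed by direct enumeration. The following brute-force sketch is not from the text; it lists the simple directed cycles of the 0/1 nonzero patterns of $M(P)$ and $M(R_U(P))$ (vertices are 0-indexed):

```python
def simple_cycles(adj):
    """Enumerate simple directed cycles of a small 0/1 adjacency
    matrix by depth-first search; each cycle is reported once,
    rooted at its smallest vertex. Brute force, fine for tiny graphs."""
    n = len(adj)
    cycles = []
    def dfs(root, v, path):
        for w in range(n):
            if not adj[v][w]:
                continue
            if w == root:
                cycles.append(tuple(path))
            elif w > root and w not in path:
                dfs(root, w, path + [w])
    for root in range(n):
        dfs(root, root, [root])
    return cycles

# Nonzero patterns of M(P) and of its reduction M(R_U(P)).
P  = [[0,1,0,0,0],[0,0,0,0,1],[0,0,0,1,0],[0,0,0,0,1],[1,0,1,0,0]]
RP = [[0,1,0,0],[1,0,1,0],[0,0,0,1],[1,0,1,0]]

assert sorted(simple_cycles(P))  == [(0,1,4), (2,3,4)]         # 2 cycles
assert sorted(simple_cycles(RP)) == [(0,1), (0,1,2,3), (2,3)]  # 3 cycles
```

The enumeration recovers exactly the cycle sets stated above: two cycles for $P$ and three for $R_U(P)$.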

Before continuing on to Brualdi's original result, we note that as with matrices, it is possible to fully reduce a graph $G \in \mathbb{G}_\pi$ to a single vertex. That is, if $S = \{v_i\}$, then the graph $R_S(G)$ consists of the single vertex $v_i$ and has the Brualdi-type region

$$B_{\mathbb W}(R_S(G)) = \{ \lambda \in \mathbb{C} : p(\lambda) = 0 \}$$

for some polynomial $p(\lambda) \in \mathbb{C}[\lambda]$. Using equation (1.11), one can show that

$$B_{\mathbb W}(R_S(G)) \subseteq \sigma(G) \cup \sigma^{-1}(G|_{\bar S}),$$

and so in the case of a fully reduced graph, the associated Brualdi-type region is a finite collection of points in the complex plane.

Moreover, if the graph $G$ has complex-valued weights, then $\sigma^{-1}(G) = \emptyset$. This together with Theorem 4.10 implies that $B_{\mathbb W}(R_S(G)) \subseteq B_{\mathbb W}(G)$. This fact leads to the following remark.

Remark 4.1. If $G = (V,E,\omega)$ is a graph with complex-valued weights, then there is always some $S \subseteq V$ such that $B_{\mathbb W}(R_S(G)) \subseteq B_{\mathbb W}(G)$. That is, it is always possible to improve the Brualdi-type region of $G$ by reducing $G$ over some subset of its vertices.

In the case of Brualdi’s original result (Theorem 4.12), we must deal with thefollowing complications. First, for a given graph G 2 G� , where Cw.G/ D ;, itmay not be the case that Cw.R Nv.G// D ;. Furthermore, since the edges betweenstrongly connected components play a role in the associated eigenvalue inclusionregion (see (4.29)), this also complicates whether estimates given by the originalBrualdi-type region improve as the graph is reduced. However, it is possible to givesufficient conditions under which this is the case.

Theorem 4.14 (Improved Original Brualdi-Type Regions). Let G D .V; E; !/

be in G� and v 2 V . If Adj.v; G/ D ;, C.v; G/ D Sbr .v; G/, and both of the setsCw.G/ and Cw.R Nv.G// are empty, then brW.R Nv.G// � brW.G/.

We finish this section by giving a proof of Theorem 4.13, followed by a proof of Theorem 4.14. In order to prove Theorem 4.13, we first give the following lemma.

Lemma 4.3. Let $G \in \mathbb{G}^n_\pi$ for $n \ge 2$, and suppose that $\mathrm{Adj}(v_1, G) = \emptyset$ and $C(v_1, G) = S(v_1, G)$. Moreover, let $\gamma = \{v_1,\dots,v_m\}$ and $\gamma' = \{v_2,\dots,v_m\}$ for $m \ge 2$. If $\gamma \in C(G)$ and $\gamma' \in C(R_1(G))$, then $B_{\mathbb W}(R_1(G))_{\gamma'} \subseteq B_{\mathbb W}(G)$.

Proof. Suppose first that the hypotheses of Lemma 4.3 hold. We then make the observation that the edges $e \notin E_{scc}$ are not used to calculate $B_{\mathbb W}(G)$. Furthermore, every cycle of $G$ is contained in exactly one strongly connected component of this graph. This implies that the Brualdi-type region of the graph is the union of the Brualdi-type regions of its strongly connected components. Therefore, we may, without loss of generality, assume that $G$ consists of a single strongly connected component.

Suppose that both $\gamma = v_1,\dots,v_m$ and $\delta = v_1, v_m$ are cycles in $C(v_1, G)$ for some $1 < m \le n$. Note that $\gamma \in C(v_1, G)$ implies that $\gamma' = v_2,\dots,v_m$ is a cycle in $C(R_1(G))$, where we let $R_1(G) = R_{\bar v_1}(G)$.

From the assumption that $v_1$ has no adjacent cycles, it follows that $\omega_{mi} = 0$ for $1 < i \le m$, since otherwise $\{v_i, v_{i+1},\dots,v_m\} \in \mathrm{Adj}(v_1, G)$. Also, since $\gamma \in C(v_1, G) = S(v_1, G)$, we must have $\omega_{i1} = 0$ for $1 < i < m$ as well as $\omega_{mi} = 0$ for $m < i \le n$, since $G$ is assumed to have one strongly connected component. Therefore,

$$B_{\mathbb W}(G)_\gamma = \Big\{ \lambda \in \mathbb{C} : \prod_{i=1}^m |\lambda_{ii} L_i| \le |\omega_{m1} L_m| \prod_{i=1}^{m-1} R_i(G) \Big\}, \tag{4.30}$$

$$B_{\mathbb W}(G)_\delta = \{ \lambda \in \mathbb{C} : |\lambda_{11} L_1|\,|\lambda_{mm} L_m| \le |\omega_{m1} L_m|\, R_1(G) \}, \tag{4.31}$$

where $R_i(G) = R_i(M(G))$. Suppose, then, that $\lambda \in B_{\mathbb W}(R_1(G))_{\gamma'}$. Then

$$\Big| \Big( \lambda_{mm} - \frac{\omega_{m1}\omega_{1m}}{\lambda_{11}} \Big) L^1_m \Big| \prod_{i=2}^{m-1} |\lambda_{ii} L_i| \le \sum_{i=2}^{m-1} \Big| \frac{\omega_{m1}\omega_{1i}}{\lambda_{11}}\, L^1_m \Big| \prod_{i=2}^{m-1} R_i(G). \tag{4.32}$$

Here, $L^1_i = L_i$ for $1 < i < m$, since for each such $i$, the edge $e_{i1}$ is not in $E$.

By multiplying both sides of (4.32) by $|q_{11} q_{1m} \lambda_{11}|$, it follows from the triangle inequality that

$$\big( Q_m(G)\,|\lambda_{11} L_1 \lambda_{mm} L_m| - |\omega_{1m} L_1\, \omega_{m1} L_m| \big) \prod_{i=2}^{m-1} |\lambda_{ii} L_i| \le \big( Q_m(G)\,|\omega_{m1} L_m|\, R_1(G) - |\omega_{1m} L_1\, \omega_{m1} L_m| \big) \prod_{i=2}^{m-1} R_i(G), \tag{4.33}$$

where $Q_m(G) = Q_m(M(G))$. By use of equation (4.5), we then have

$$B_{\mathbb W}(G)_\delta = \Big\{ \lambda \in \mathbb{C} : \prod_{k=1,m} |L_{kk}(q_{kk}\lambda - p_{kk})| \le \prod_{k=1,m}\ \sum_{j=1,\ j \neq k}^{n} |p_{kj} L_{kj}| \Big\}.$$

Hence, if $Q_m(G) = 0$, then by calculations analogous to those given in the proof of Theorem 4.3, it follows that $\lambda \in B_{\mathbb W}(G)_\delta$.

Let us now assume that $Q_m(G) \neq 0$. In this case, if $\prod_{i=2}^{m-1} R_i(G) = 0$, it then follows from (4.33) that either $\prod_{i=2}^{m-1} |\lambda_{ii} L_i| = 0$ or $|\lambda_{11} L_1 \lambda_{mm} L_m| - |\omega_{m1} L_1 \omega_{1m} L_m| \le 0$. If the first is the case, then $\lambda \in B_{\mathbb W}(G)_\gamma$. If the latter is the case, then $\lambda \in B_{\mathbb W}(G)_\delta$, since $|\omega_{1m} L_1| \le R_1(G)$.

If both $\prod_{i=2}^{m-1} R_i(G) \neq 0$ and $|\lambda_{11} L_1 \lambda_{mm} L_m| - |\omega_{m1} L_1 \omega_{1m} L_m| \neq 0$, then (4.33) implies

$$\frac{\prod_{i=2}^{m-1} |\lambda_{ii} L_i|}{\prod_{i=2}^{m-1} R_i(G)} \le \frac{|\omega_{m1} L_m|\, R_1(G) - |\omega_{1m} L_1 \omega_{m1} L_m|}{|\lambda_{11} L_1 \lambda_{mm} L_m| - |\omega_{m1} L_1 \omega_{1m} L_m|}. \tag{4.34}$$

Here, we note that if

$$\frac{|\omega_{m1} L_m|\, R_1(G) - |\omega_{1m} L_1 \omega_{m1} L_m|}{|\lambda_{11} L_1 \lambda_{mm} L_m| - |\omega_{m1} L_1 \omega_{1m} L_m|} \le \frac{|\omega_{m1} L_m|\, R_1(G)}{|\lambda_{11} L_1 \lambda_{mm} L_m|},$$

then it follows from (4.34) together with (4.30) that $\lambda \in B_{\mathbb W}(G)_\gamma$. On the other hand, if this inequality does not hold, then $|\lambda_{11} L_1|\,|\lambda_{mm} L_m| < |\omega_{m1} L_m|\, R_1(G)$, implying $\lambda \in B_{\mathbb W}(G)_\delta$. Therefore, $B_{\mathbb W}(R_1(G))_{\gamma'} \subseteq B_{\mathbb W}(G)_\gamma \cup B_{\mathbb W}(G)_\delta \subseteq B_{\mathbb W}(G)$.

Conversely, if $\delta \notin C(G)$, then $\omega_{1m} L_1 = 0$. In this case, equation (4.33) together with (4.30) implies that $B_{\mathbb W}(R_1(G))_{\gamma'} \subseteq B_{\mathbb W}(G)_\gamma$. Hence, $B_{\mathbb W}(R_1(G))_{\gamma'} \subseteq B_{\mathbb W}(G)$, completing the proof. □

We now give a proof of Theorem 4.13.

Proof. First, as in the previous proof, suppose $G$ consists of a single strongly connected component. Moreover, for the vertex $v_1 \in V$, suppose $\mathrm{Adj}(v_1, G) = \emptyset$ and $C(v_1, G) = S(v_1, G)$. Also, let $\gamma' = v_2,\dots,v_m$ be a cycle in $C(R_1(G))$ for some $1 < m \le n$.

Since $\mathrm{Adj}(v_1, G) = \emptyset$, if $\gamma' \in C(G)$, then $M(G, \lambda)_{ij} = M(R_1(G), \lambda)_{ij}$ for $2 \le i \le m$ and $1 \le j \le n$, since $\gamma'$ would otherwise be adjacent to $v_1$. From this, it follows that $B_{\mathbb W}(R_1(G))_{\gamma'} = B_{\mathbb W}(G)_{\gamma'} \subseteq B_{\mathbb W}(G)$.

On the other hand, if $\gamma' \notin C(G)$, then at least one edge of the form $e_{i-1,i}$ for $3 \le i \le m$, or the edge $e_{m2}$, is not in $E$. If this is the case, then without loss of generality, assume for notational simplicity that $e_{m2} \notin E$. Furthermore, let

$$\mathrm{Ind}(E) = \{ i : e_{i-1,i} \notin E,\ 3 \le i \le m \} \cup \{2\}.$$

We give the set $\mathrm{Ind}(E)$ the ordering $\mathrm{Ind}(E) = \{i_1,\dots,i_\ell\}$ such that $i_j < i_k$ if and only if $j < k$. Hence, for each $1 \le j \le \ell$, the ordered sets

$$\gamma_j = \{ v_1, v_{i_j}, v_{i_j+1},\dots,v_{j_\alpha} \} \tag{4.35}$$

are cycles in $C(v_1, G)$, where $j_\alpha = i_{j+1} - 1$ and $\ell_\alpha = m$. By removing the vertex $v_1$ from $G$, it follows from (4.35) that each of the ordered sets

$$\gamma'_j = \{ v_{i_j}, v_{i_j+1},\dots,v_{j_\alpha} \}$$

is a cycle in $C(R_1(G))$. Since both $\mathrm{Adj}(v_1, G) = \emptyset$ and $C(v_1, G) = S(v_1, G)$, Lemma 4.3 implies that

$$\bigcup_{j=1}^{\ell} B_{\mathbb W}(R_1(G))_{\gamma'_j} \subseteq B_{\mathbb W}(G).$$

The claim, then, is that

$$B_{\mathbb W}(R_1(G))_{\gamma'} \subseteq \bigcup_{j=1}^{\ell} B_{\mathbb W}(R_1(G))_{\gamma'_j}. \tag{4.36}$$

To see this, let

$$\lambda^1_{ii} = \Big( \lambda - \omega_{ii} - \frac{\omega_{i1}\omega_{1i}}{\lambda_{11}} \Big) L^1_i \quad\text{and}\quad R^1_i = \sum_{j=2,\ j \neq i}^{n} |M(\bar R_1, \lambda)_{ij}|.$$

Then

$$B_{\mathbb W}(R_1(G))_{\gamma'} = \Big\{ \lambda \in \mathbb{C} : \prod_{i=2}^{m} |\lambda^1_{ii}| \le \prod_{i=2}^{m} R^1_i \Big\} \quad\text{and} \tag{4.37}$$

$$B_{\mathbb W}(R_1(G))_{\gamma'_j} = \Big\{ \lambda \in \mathbb{C} : \prod_{i \in \gamma_j} |\lambda^1_{ii}| \le \prod_{i \in \gamma_j} R^1_i \Big\} \quad\text{for } 1 \le j \le \ell. \tag{4.38}$$

Since the vertex set $\gamma'$ is the disjoint union of the vertex sets of the cycles $\gamma'_j$, the assumption that $\lambda \notin B_{\mathbb W}(R_1(G))_{\gamma'_j}$ for each $1 \le j \le \ell$ implies $\lambda \notin B_{\mathbb W}(R_1(G))_{\gamma'}$, as can be seen by comparing the product of (4.38) over all $1 \le j \le \ell$ to (4.37). This verifies the claim given in (4.36), which implies that $B_{\mathbb W}(R_1(G))_{\gamma'} \subseteq B_{\mathbb W}(G)$.

Since $\gamma'$ was an arbitrary cycle in $C(R_1(G))$, it follows that $B_{\mathbb W}(R_1(G)) \subseteq B_{\mathbb W}(G)$, completing the proof. □

A proof of Theorem 4.14 is the following.

Proof. If the conditions given in Theorem 4.14 hold for $v = v_1$, then both $br_{\mathbb W}(G)$ and $br_{\mathbb W}(R_1(G))$ exist, since it is assumed that $C_w(G) = \emptyset$ and $C_w(R_1(G)) = \emptyset$. Moreover, if $S(v_1, G)$ is replaced by $S_{br}(v_1, G)$, and $B_{\mathbb W}(\cdot)$ by $br_{\mathbb W}(\cdot)$, the conclusions of Lemma 4.3 hold by the same argument, with the exception that $G$ is not assumed to have a single strongly connected component. Since the same holds for the proof of Theorem 4.13, the result follows. □

4.4 Some Applications

In this section, we discuss some natural applications of isospectral reductions. Our first application deals with estimating the spectrum of the Laplacian matrix of a given graph. Following this, we give a method for estimating the spectral radius of a graph using specific types of isospectral reductions. Last, we use the structure of a given graph to identify particular types of structural sets that can be used to improve our eigenvalue estimates.

To begin, we note that it is not only possible to reduce a graph $G$, but it is also possible to reduce both the combinatorial Laplacian matrix and the normalized Laplacian matrix of $G$. Such matrices are typically defined for undirected graphs without loops or weights, but this definition can be extended to graphs in $\mathbb{G}$ (see Remark 4.2 below). However, here we give the standard definitions of these matrices, since they are of interest in their own right (see [15, 16]).

Let $G = (V, E)$ be an unweighted undirected graph without loops, i.e., a simple graph. If $G$ has vertex set $V = \{v_1,\dots,v_n\}$, and $d(v_i)$ is the degree of vertex $v_i$, then the combinatorial Laplacian matrix $M_L(G)$ of $G$ is given by

$$M_L(G)_{ij} = \begin{cases} d(v_i) & \text{if } i = j, \\ -1 & \text{if } i \neq j \text{ and } v_i \text{ is adjacent to } v_j, \\ 0 & \text{otherwise.} \end{cases}$$

On the other hand, the normalized Laplacian matrix $M_{\mathcal L}(G)$ of $G$ is defined as

$$M_{\mathcal L}(G)_{ij} = \begin{cases} 1 & \text{if } i = j \text{ and } d(v_j) \neq 0, \\ \dfrac{-1}{\sqrt{d(v_i)\, d(v_j)}} & \text{if } v_i \text{ is adjacent to } v_j, \\ 0 & \text{otherwise.} \end{cases}$$

The interest in the eigenvalues of $M_L(G)$ is that $\sigma(M_L(G))$ gives us structural information regarding the graph $G$ (see, for instance, [15]). Additionally, knowing the eigenvalues $\sigma(M_{\mathcal L}(G))$ is useful in determining the behavior of a number of algorithms on the graph $G$ (see [16]).
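Both definitions translate directly into code. The sketch below is not from the text; the $4$-cycle adjacency matrix is a hypothetical example, and the normalized Laplacian line assumes no isolated vertices so that $d(v_i) \neq 0$.

```python
import numpy as np

# Combinatorial and normalized Laplacians of a simple graph, built
# from its 0/1 adjacency matrix. The graph here is a hypothetical
# 4-cycle; no vertex is isolated, so every degree is nonzero.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)                         # vertex degrees
M_L = np.diag(d) - A                      # combinatorial Laplacian
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
M_cal_L = np.eye(len(d)) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized

# Both Laplacians of an undirected graph are symmetric, and for a
# connected graph their smallest eigenvalue is 0.
assert np.isclose(np.linalg.eigvalsh(M_L)[0], 0.0)
assert np.isclose(np.linalg.eigvalsh(M_cal_L)[0], 0.0)
```

For this 4-cycle, $\sigma(M_L) = \{0, 2, 2, 4\}$ and $\sigma(M_{\mathcal L}) = \{0, 1, 1, 2\}$, real spectra as expected for symmetric matrices.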

Fig. 4.11 Left: $\Gamma_{\mathbb W}(M_L(H))$. Right: $\Gamma_{\mathbb W}(R(M_L(H); S))$, where in each, $\sigma(M_L(H)) = \{0, 1, 2, 4, 5\}$ is indicated

Since $M_L(G)$ and $M_{\mathcal L}(G)$ are real matrices, either may be reduced over any subset of their index sets. For example, if $H \in \mathbb{G}_\pi$ is the graph with adjacency matrix

$$M(H) = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 & 0 \end{bmatrix}, \quad\text{then}\quad M_L(H) = \begin{bmatrix} 1 & 0 & 0 & 0 & -1 \\ 0 & 2 & 0 & -1 & -1 \\ 0 & 0 & 2 & -1 & -1 \\ 0 & -1 & -1 & 3 & -1 \\ -1 & -1 & -1 & -1 & 4 \end{bmatrix}.$$

Reducing $M_L(H)$ over the set $S = \{v_1, v_2, v_3, v_4\}$ yields the matrix

$$R(M_L(H); S) = \frac{1}{\lambda - 4} \begin{bmatrix} \lambda - 3 & 1 & 1 & 1 \\ 1 & 2\lambda - 7 & 1 & -\lambda + 5 \\ 1 & 1 & 2\lambda - 7 & -\lambda + 5 \\ 1 & -\lambda + 5 & -\lambda + 5 & 3\lambda - 11 \end{bmatrix}.$$

Figure 4.11 shows the Gershgorin-type regions of both $M_L(H)$ and $R(M_L(H); S)$. Recall that the adjacency matrix of an undirected real graph is symmetric, so its eigenvalues must be real numbers. With this in mind, we note that the Gershgorin-type region associated with an undirected real graph, and with any of its reductions, can be reduced to intervals of the real line.

Remark 4.2. It is possible to generalize the combinatorial Laplacian matrix $M_L(G)$ to any $G \in \mathbb{G}^n$ if $G$ has no loops by setting $M_L(G)_{ij} = -M(G)_{ij}$ for $i \neq j$ and $M_L(G)_{ii} = \sum_{j=1,\ j \neq i}^{n} M(G)_{ij}$. This generalization is consistent, for example, with what is done for weighted digraphs in [31].

Fig. 4.12 Top Left: the region $\Gamma_{\mathbb W}(K)$, from which $\rho(K) \le 3$. Top Right: the region $\Gamma_{\mathbb W}(R_{\{v_1, v_3, v_5\}}(K))$, from which $\rho(K) \le 2$
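A minimal sketch of the generalized Laplacian of Remark 4.2 for a loopless weighted digraph; the weight matrix below is a hypothetical example, not one from the text.

```python
import numpy as np

# Generalized combinatorial Laplacian of a loopless weighted digraph
# (Remark 4.2): off the diagonal M_L(G)_ij = -M(G)_ij, and the i-th
# diagonal entry is the i-th off-diagonal row sum of M(G).
M = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0],
              [0.0, 4.0, 0.0]])        # hypothetical weights, no loops
M_L = np.diag(M.sum(axis=1)) - M

# Every row of M_L sums to zero, so the all-ones vector lies in its
# kernel, just as for the unweighted combinatorial Laplacian.
assert np.allclose(M_L @ np.ones(3), 0.0)
```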

We now turn our attention to estimating the spectral radius of a graph or, equivalently, of a matrix. Recall that for a graph $G \in \mathbb{G}_\pi$, the spectral radius of $G$, denoted by $\rho(G)$, is the maximum among the absolute values of the elements in $\sigma(G)$, i.e.,

$$\rho(G) = \max_{\lambda \in \sigma(G)} |\lambda|.$$

If a graph $G \in \mathbb{G}_\pi$ has a complete structural set $S$, then by Corollary 2.5, the spectral radius $\rho(G)$ is equal to $\rho(R_S(G))$. For example, in the graph $K$ shown in Fig. 4.12, the vertices $v_2, v_4, v_6$ are the vertices of $K$ without loops. Since $\{v_1, v_3, v_5\} \in st(K)$, these vertices form a complete structural set of $K$, from which it follows that $\rho(K) = \rho(R_{\{v_1, v_3, v_5\}}(K))$.

By calculating the region $\Gamma_{\mathbb W}(K)$, which is shown in Fig. 4.12 (top left), we can estimate that $\rho(K) \le 3$. However, using the Gershgorin-type region of the reduced graph, $\Gamma_{\mathbb W}(R_{\{v_1, v_3, v_5\}}(K))$, shown in Fig. 4.12 (top right), our estimate of the graph's spectral radius improves to $\rho(K) \le 2$.

Fig. 4.13 The graph $G$ (left) and its associated Gershgorin-type region $\Gamma_{\mathbb W}(G)$ (right) considered in Example 4.11

It should be noted that a graph $G \in \mathbb{G}_\pi$ may have more than one complete structural set. In this case, there may be many ways to reduce $G$ while maintaining its spectral radius.

As a final application of the theory developed in this chapter, we considerreducing graphs over specific structural sets. Our goal is to improve our eigenvalueestimates when some structural feature of the graph is known.

Suppose $G = (V, E, \omega)$ is a graph in $\mathbb{G}^n_\pi$. If the sets $\Gamma_{\mathbb W}(G)_i$ for $1 \le i \le n$ are known or can be estimated by some structural knowledge of $G$, then it is possible to decide over which structural sets to reduce. That is, it may be possible to identify structural sets $U \subseteq V$ such that $v_i \notin U$ and

$$\partial \Gamma_{\mathbb W}(G)_i \nsubseteq \bigcup_{j \neq i} \Gamma_{\mathbb W}(G)_j.$$

If this can be done, Theorem 4.4 implies that a strictly better estimate of $\sigma(G)$ can be achieved by reducing over $U$.

Example 4.11. Consider the graph $G = (V, E, \omega)$ in Fig. 4.13 (left), where $G \in \mathbb{G}^n$ for some $n \ge 5$. Suppose it is known that $G$ is an unweighted undirected graph in which $d(v_1) = 4$, $d(v_2) = d(v_3) = d(v_4) = d(v_5) = 3$, and $d(v_i) \in \{0, 1, 2, 3\}$ for all $6 \le i \le n$. Then the sets $\Gamma_{\mathbb W}(G)_i$ are each disks centered at the origin of radius $r \in \{0, 1, 2, 3, 4\}$, shown in Fig. 4.13 (right).

Since $\partial \Gamma_{\mathbb W}(M(G))_1 = \{\lambda \in \mathbb{C} : |\lambda| = 4\}$ is an infinite set of points and

$$\partial \Gamma_{\mathbb W}(M(G))_1 \nsubseteq \bigcup_{i=2}^{n} \Gamma_{\mathbb W}(M(G))_i,$$

Fig. 4.14 The reduced graph $R_S(G)$ (left) and its associated Gershgorin-type region $\Gamma_{\mathbb W}(R_S(G))$ (right) considered in Example 4.11

then Theorem 4.4 implies the following. If $S = V \setminus \{v_1\}$, the graph $R_S(G)$ has a strictly smaller Gershgorin-type region than $G$, as can be seen in Fig. 4.14.

Considering the fact that $n$ may be quite large, this example is intended to illustrate that eigenvalue estimates can be improved with a minimal amount of effort if some simple structural features of the graph are known.

Observe that as a matrix $M \in \mathbb{C}^{n\times n}$ is reduced, its entries may contain increasingly large powers of $\lambda$. Therefore, the more $M$ is reduced, the more complicated it can become to compute its Gershgorin-, Brauer-, or Brualdi-type region.

Fortunately, there is a fairly simple bound on how large these powers of $\lambda$ can become. If $S \subseteq N$, let $R(M; S)_{ij} = p_{ij}/q_{ij}$, where $p_{ij}, q_{ij} \in \mathbb{C}[\lambda]$. Then Lemma 1.1 can be used to show that

$$\deg(p_{ij}) \le \deg(q_{ij}) \le |\bar S| < n.$$

For instance, in Example 4.4, the matrix $M_0 \in \mathbb{C}^{5\times 5}$ is reduced over $S_1 = \{1, 2, 3\}$. The result is the matrix

$$M_1 = \begin{bmatrix} \dfrac{\lambda+1}{\lambda^2} & \dfrac{1}{\lambda} & \dfrac{\lambda+1}{\lambda} \\[6pt] \dfrac{2\lambda+1}{\lambda^2} & \dfrac{1}{\lambda} & \dfrac{1}{\lambda} \\[6pt] 0 & 1 & 0 \end{bmatrix}. \tag{4.39}$$

As can be seen, the largest power of $\lambda$ in any entry of $M_1$ is $2 = |\bar S_1|$. Therefore, the eigenvalue regions associated with a matrix may become harder to compute as the matrix is reduced, but only marginally so. For both Gershgorin- and Brauer-type regions, this is offset by the fact that there are fewer individual regions to compute once the matrix has been reduced.

Chapter 5
Pseudospectra and Inverse Pseudospectra

The pseudospectrum of a complex-valued matrix $A \in \mathbb{C}^{n\times n}$ is the collection of scalars that behave, to within a given tolerance, like an eigenvalue of $A$. In this chapter, we extend the definition of the pseudospectrum to matrices with entries that are rational functions and introduce the notion of a matrix's inverse pseudospectrum.

One of the main results we prove is that for a given tolerance, the pseudospectrum of a reduced matrix is always contained in the pseudospectrum of the original matrix. As a consequence, the eigenvalues of a reduced matrix are less susceptible to perturbations than those corresponding to the original matrix. This is important, for instance, in systems that correspond to isospectrally reduced matrices. Our prime example of such systems is that of linear mass–spring networks to which there is limited access.

The reason we study mass–spring networks is that such systems allow us to give a physical interpretation of the concepts of pseudospectra and inverse pseudospectra introduced in this chapter. The major result, that the pseudospectrum of a network shrinks under reduction, also has a physical meaning in this context.

5.1 Pseudospectra

We begin by giving the definition of the pseudospectra of a matrix $A \in \mathbb{C}^{n\times n}$. In fact, we give three equivalent definitions. To do so, we first define the notion of a compatible matrix and vector norm.

A matrix norm $\|\cdot\|$ on $\mathbb{C}^{n\times n}$ is compatible with a vector norm $\|\cdot\|'$ on $\mathbb{C}^n$ if

$$\|Av\|' \le \|A\|\,\|v\|' \quad \text{for all } A \in \mathbb{C}^{n\times n} \text{ and } v \in \mathbb{C}^n.$$

We now define the pseudospectra of a matrix $A \in \mathbb{C}^{n\times n}$.


Definition 5.1. Let $\epsilon > 0$. The $\epsilon$-pseudospectrum of $A \in \mathbb{C}^{n\times n}$ is defined equivalently by the following:

(a) Eigenvalue perturbation:

$$\sigma_\epsilon(A) = \{ \lambda \in \mathbb{C} : \|(A - \lambda I)v\| < \epsilon \text{ for some } v \in \mathbb{C}^n \text{ with } \|v\| = 1 \};$$

(b) The resolvent:

$$\sigma_\epsilon(A) = \{ \lambda \in \mathbb{C} : \|(A - \lambda I)^{-1}\| > \epsilon^{-1} \} \cup \sigma(A); \text{ and}$$

(c) Perturbation of the matrix:

$$\sigma_\epsilon(A) = \{ \lambda \in \mathbb{C} : \lambda \in \sigma(A + E) \text{ for some } E \in \mathbb{C}^{n\times n} \text{ with } \|E\| < \epsilon \}.$$

The matrix and vector norms in (a)–(c) are assumed to be compatible.
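For the spectral norm (matrix 2-norm), condition (b) is equivalent to $\sigma_{\min}(A - \lambda I) < \epsilon$, which gives a standard grid-based way to compute pseudospectra. A minimal sketch follows; the nonnormal $2\times 2$ matrix is a hypothetical example, not one from the text.

```python
import numpy as np

# Grid sketch of the 2-norm eps-pseudospectrum: a point z belongs to
# sigma_eps(A) iff the smallest singular value of A - z I is < eps.
A = np.array([[1.0, 10.0],
              [0.0, 1.0]])            # nonnormal: large pseudospectrum
eps = 0.5

def in_pseudospectrum(z, A=A, eps=eps):
    smin = np.linalg.svd(A - z * np.eye(len(A)), compute_uv=False)[-1]
    return smin < eps

# The eigenvalue lambda = 1 is always inside, and points far from the
# spectrum are outside.
assert in_pseudospectrum(1.0)
assert not in_pseudospectrum(100.0)
grid = [complex(x, y) for x in np.linspace(-3, 5, 41)
                      for y in np.linspace(-4, 4, 41)]
region = [z for z in grid if in_pseudospectrum(z)]
```

Because this $A$ is nonnormal, the region computed above is noticeably larger than a disk of radius $\epsilon$ around the double eigenvalue $1$, illustrating why pseudospectra carry more information than spectra alone.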

A proof that (a)–(c) of Definition 5.1 define the same region can be found in [26]. We note that the assumption $\|v\| = 1$ in part (a) of Definition 5.1 is necessary, since otherwise any given $\lambda \in \mathbb{C}$ could be placed in $\sigma_\epsilon(A)$ by choosing $\|v\|$ small enough. Also, having $\sigma(A)$ on the right-hand side of part (b) is necessary, since the pseudospectra $\sigma_\epsilon(A)$ in parts (a) and (c) contain $\sigma(A)$, but the matrix $A - \lambda I$ in part (b) is noninvertible for every $\lambda \in \sigma(A)$.

Example 5.1. Consider the matrix $A \in \mathbb{C}^{6\times 6}$ with $(0,1)$-entries given by

$$A = \begin{bmatrix} 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix},$$

considered in Example 1.3. The pseudospectra $\sigma_\epsilon(A)$, for $\epsilon = 1$, $1/2$, and $1/4$, are shown in Fig. 5.1 (left). The eigenvalues $\sigma(A) = \{2, -1, 1, 1, 0, 0\}$ are also indicated.

For $A \in \mathbb{C}^{n\times n}$, suppose that for a given tolerance $\epsilon > 0$, there exist a scalar $\lambda \in \mathbb{C}$ and a unit vector $v \in \mathbb{C}^n$ for which $\|(A - \lambda I)v\| < \epsilon$. If this is the case, then the vector $v$ is said to be an $\epsilon$-pseudoeigenvector of the matrix $A$ corresponding to the $\epsilon$-pseudoeigenvalue $\lambda$. Definition 5.1(a) states that the $\epsilon$-pseudospectrum of $A$ is the set of all such $\lambda$ that have an associated $\epsilon$-pseudoeigenvector.

To extend the notion of an $\epsilon$-pseudospectrum to a matrix $M(\lambda) \in \mathbb{W}^{n\times n}_\pi$, we first need to have an idea of what it means to be an eigenvector of $M(\lambda)$. If $\lambda \in \sigma(A)$, where $A \in \mathbb{C}^{n\times n}$, then there is always at least one eigenvector $v \in \mathbb{C}^n$ of $A$ associated with $\lambda$. However, recall from Chap. 1 that a matrix $M(\lambda) \in \mathbb{W}^{n\times n}_\pi$

Fig. 5.1 Pseudospectra of the matrices given in Example 5.3 for $\epsilon = 1$ (blue), $\epsilon = 1/2$ (red), and $\epsilon = 1/4$ (tan), obtained using the matrix 1-norm. Left: $\sigma_\epsilon(A)$. Right: $\sigma_\epsilon(R(A; S))$. The spectra $\sigma(A) = \{2, -1, 1, 1, 0, 0\}$ and $\sigma(R(A; S)) = \{2, -1\}$ are shown as black dots

may have an eigenvalue $\lambda_0$ at which the matrix $M(\lambda_0)$ is undefined. This may seem problematic, especially if we wish to associate an eigenvector with $\lambda_0 \in \sigma(M)$. As it turns out, we can always associate an eigenvector $v \in \mathbb{C}^n$ with each eigenvalue of $M(\lambda)$.

Suppose $\lambda_0$ is a solution of the equation $\det(M(\lambda) - \lambda I) = 0$. Then the standard theory of linear algebra implies that there is a vector $v$ such that $(M(\lambda) - \lambda I)v = 0$ when this product is evaluated at $\lambda = \lambda_0$. Keeping in mind this order of operations, first multiplying and then evaluating, we define the product of a matrix and a vector as follows. For every $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$, $v \in \mathbb{C}^n$, and $\lambda \in \mathbb{C}$, we let the matrix/vector product be given by

$$(M(\lambda) - \lambda I)v := \big[(M(s) - sI)v\big]\big|_{s=\lambda}. \tag{5.1}$$

This definition allows us to associate an eigenvector to each eigenvalue of a matrix $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$. To give an explicit demonstration of this procedure, we consider the following example.

Example 5.2. Consider the matrix $M(\lambda) \in \mathbb{W}^{2\times 2}_{\pi}$ given by

$$M(\lambda) = \begin{bmatrix} 1 & \frac{1}{\lambda-1} \\ 0 & 1 \end{bmatrix}.$$


Here, one can readily see that $\sigma(M) = \{1, 1\}$ but $1 \notin \mathrm{dom}(M)$. Although $M(1)$ is undefined, the vector $v = [1\ 0]^T$ has the property

$$(M(1) - 1\cdot I)v = \begin{bmatrix} 1-s & \frac{1}{s-1} \\ 0 & 1-s \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \bigg|_{s=1} = \begin{bmatrix} 1-s \\ 0 \end{bmatrix} \bigg|_{s=1} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

Therefore, the vector $v$ is an eigenvector associated with the eigenvalue $1 \in \mathbb{C}$ despite the fact that $M(\lambda)$ is not defined at $\lambda = 1$. Moreover, we can observe that for a given vector norm $\|\cdot\|$, we have

$$\|(M(\lambda) - \lambda I)v\| = \left\| \begin{bmatrix} 1-\lambda \\ 0 \end{bmatrix} \right\|.$$

Hence, the size of $(M(\lambda) - \lambda I)v$ varies continuously with respect to $\lambda$ even where $M(\lambda)$ is undefined. This type of continuity can be shown to hold in general for each matrix $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$ near $\sigma(M)$, and it is essential in describing the pseudospectra of this class of matrices.
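The multiply-first-then-evaluate convention of (5.1) is easy to check symbolically. A small sketch for the matrix of Example 5.2, assuming SymPy is available:

```python
import sympy as sp

s = sp.symbols('s')

# M(s) from Example 5.2; the entry 1/(s - 1) is undefined at s = 1.
M = sp.Matrix([[1, 1 / (s - 1)],
               [0, 1]])
v = sp.Matrix([1, 0])

# Multiply first, as in (5.1): (M(s) - s*I)v simplifies to [1 - s, 0]^T,
# in which the singular entry 1/(s - 1) no longer appears ...
product = sp.simplify((M - s * sp.eye(2)) * v)

# ... and only then evaluate at s = 1, giving the zero vector.
at_one = product.subs(s, 1)
```

Direct substitution of $s = 1$ into $M(s)$ would raise a division-by-zero, which is exactly why the order of operations in (5.1) matters.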

We are now in a position to define the $\epsilon$-pseudospectrum of a matrix $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$. To do this, we let $\mathrm{cl}(\Omega)$ denote the closure of $\Omega$ in $\mathbb{C}$ for a set $\Omega \subseteq \mathbb{C}$.

Definition 5.2. Let $\epsilon > 0$. The $\epsilon$-pseudospectrum of $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$ is defined equivalently by the following conditions:

(a) Eigenvalue perturbation:
$$\sigma_\epsilon(M) = \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \|(M(\lambda) - \lambda I)v\| < \epsilon \text{ for some } v \in \mathbb{C}^n \text{ with } \|v\| = 1\}\big).$$
(b) The resolvent:
$$\sigma_\epsilon(M) = \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \|(M(\lambda) - \lambda I)^{-1}\| > \epsilon^{-1}\}\big).$$
(c) Perturbation of the matrix:
$$\sigma_\epsilon(M) = \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \lambda \in \sigma(M(\lambda) + E) \text{ for some } E \in \mathbb{C}^{n\times n} \text{ with } \|E\| < \epsilon\}\big).$$

We assume that the matrix and vector norms in (a)–(c) are compatible.

An immediate consequence of Definition 5.2 is that the eigenvalues of a matrix $M \in \mathbb{W}^{n\times n}_{\pi}$ belong to its pseudospectra:

$$\sigma(M) \subseteq \sigma_\epsilon(M) \quad \text{for each } \epsilon > 0.$$

The proof that Definitions 5.2(a)–(c) are equivalent relies on the fact that the corresponding definitions are equivalent for complex-valued matrices. For completeness,


Fig. 5.2 The mass–spring network of Example 5.4, with nodes $x_1, x_2, x_3, x_4$, boundary nodes $S = \{1, 4\}$, and interior nodes $\bar{S} = \{2, 3\}$

the proof that Definitions 5.2(a)–(c) are equivalent is included at the end of the chapter, in Sect. 5.4.

Remark 5.1. In Definition 5.2(c), we could have defined

$$\sigma_\epsilon(M) = \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \lambda \in \sigma(M(\lambda) + E(\lambda)) \text{ for some } E \in \mathbb{W}^{n\times n}_{\pi} \text{ with } \|E(\lambda)\| < \epsilon\}\big),$$

so that $M(\lambda)$ is perturbed by a matrix $E(\lambda)$ of rational functions. However, for a fixed $\lambda \in \mathrm{dom}(E)$, the matrix $E(\lambda)$ is in $\mathbb{C}^{n\times n}$, so this alternative definition is equivalent to the simpler one given above.

To give an example of the $\epsilon$-pseudospectrum of a matrix of rational functions, we consider the following.

Example 5.3. Let $A$ be the matrix given in Example 5.1 and $S = \{1, 2\}$. Then

$$R(A;S) = \begin{bmatrix} \frac{1}{\lambda-1} & \frac{1}{\lambda-1} \\[4pt] \frac{1}{\lambda} & \frac{\lambda+1}{\lambda} \end{bmatrix} \in \mathbb{W}^{2\times 2}_{\pi}.$$

The pseudospectra of both $A$ and $R(A;S)$ are displayed in Fig. 5.1 for $\epsilon = 1$, $1/2$, $1/4$ using the matrix 1-norm. Notice that although $0, 1 \in \sigma(A)$, these values do not belong to $\sigma(R(A;S))$, because of cancellations resulting from the matrix reduction; i.e., $\sigma(A_{\bar{S}\bar{S}}) = \{0, 0, 1, 1\}$. However, for the $\epsilon$ we consider, $0, 1 \in \sigma_\epsilon(R(A;S))$, meaning that these eigenvalues remain as pseudoeigenvalues of the reduced matrix.
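The reduction displayed above, and the fact that only the eigenvalues $2$ and $-1$ survive it, can be verified symbolically using the reduction formula $R(A;S) = A_{SS} + A_{S\bar{S}}(\lambda I - A_{\bar{S}\bar{S}})^{-1}A_{\bar{S}S}$ from Chap. 1. A sketch, assuming SymPy is available:

```python
import sympy as sp

lam = sp.symbols('lambda')

A = sp.Matrix([
    [0, 0, 1, 1, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
])

S = [0, 1]            # nodes {1, 2}, zero-indexed
Sbar = [2, 3, 4, 5]

# Isospectral reduction R(A; S) = A_SS + A_SSbar (lam I - A_SbarSbar)^{-1} A_SbarS.
R = sp.simplify(A.extract(S, S)
                + A.extract(S, Sbar)
                * (lam * sp.eye(4) - A.extract(Sbar, Sbar)).inv()
                * A.extract(Sbar, S))

# det(R - lam I) = 0 recovers exactly the eigenvalues of A not cancelled
# by sigma(A_SbarSbar) = {0, 0, 1, 1}, namely {2, -1}.
spectrum = sp.solve(sp.det(R - lam * sp.eye(2)), lam)
```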

In the previous chapters we described a number of ways of using isospectral matrix and graph reductions. In Chap. 2, graph reductions were used to simplify the structure of a network while maintaining its spectral properties. In Chap. 4, matrix reductions were used to obtain improved eigenvalue estimates. Here, we use isospectral reductions to study the dynamics of mass–spring networks to which access is limited.

Example 5.4. Consider the mass–spring network illustrated in Fig. 5.2, with nodes at locations $x_i$, $i = 1, 2, 3, 4$, lying on a line and springs linking nearest neighbors. For simplicity, we assume that all the springs have the same spring constant $k = 1$ and that all the nodes have unit mass. (The precise position of the nodes on the line does not matter for this discussion.)

Suppose each node $x_i$ is subject to a time-harmonic displacement $u_i(\omega)e^{j\omega t}$ with frequency $\omega$ in the direction of the line, where $j = \sqrt{-1}$. Then the resulting force at node $x_i$ is also time-harmonic in the direction of the line and is of the form $f_i(\omega)e^{j\omega t}$. Writing the balance of forces acting on each node with the laws


of motion, one can show that the vector of forces $\mathbf{f}(\omega) = [f_1(\omega), \ldots, f_4(\omega)]^T$ is linearly related to the vector of displacements $\mathbf{u}(\omega) = [u_1(\omega), \ldots, u_4(\omega)]^T$ by the equation

$$\mathbf{f}(\omega) = (K - \omega^2 I)\mathbf{u}(\omega). \tag{5.2}$$

Here the matrix $K$ is the stiffness matrix

$$K = \begin{bmatrix} 1 & -1 & 0 & 0 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & -1 & 1 \end{bmatrix}.$$

If we let $\lambda = \omega^2$, we see that the eigenmodes $\sigma(K) = \{2 \pm \sqrt{2}, 2, 0\}$ of the stiffness matrix $K$ correspond to nonzero displacements that do not generate forces on the network nodes. For instance, the eigenmode corresponding to the zero frequency is $\mathbf{u} = [1, 1, 1, 1]^T$; i.e., by displacing all nodes by the same amount, there are no net forces at the nodes.

Since the eigenvalues of $K$ correspond to frequencies for which there exists a nonzero displacement that generates no forces on these nodes, the pseudoeigenvalues of this system have a similar physical interpretation. Namely, the pseudospectra indicate the frequencies for which there is a displacement that generates "small" forces relative to the (norm of the) displacement.

For example, the frequency $\omega^2 = 2.1$ in Fig. 5.3 (left) is within the tan tolerance region for $\epsilon = 1/4$. Hence, there is a nonzero vector of displacements such that the forces generated by this displacement have norm less than $\epsilon = 1/4$ times the norm of the displacement vector.

Now suppose that we have access only to certain boundary nodes of this network, say $S = \{1, 4\}$. Then we can write the equilibrium of forces at the interior nodes $\bar{S} = \{2, 3\}$ and conclude that the net forces $\mathbf{f}_S$ at the boundary nodes $S$ depend linearly on the displacements $\mathbf{u}_S$ at these nodes according to the equation

$$\mathbf{f}_S(\omega) = (R_{\omega^2}(K;S) - \omega^2 I)\mathbf{u}_S(\omega). \tag{5.3}$$

The spectrum and inverse spectrum of the network response are then

$$\sigma(R_{\omega^2}(K;S)) = \{2 \pm \sqrt{2}, 2, 0\} \quad \text{and} \quad \sigma^{-1}(R_{\omega^2}(K;S)) = \{3, 1\}.$$

The eigenvalues of $R_{\omega^2}(K;S)$ correspond to frequencies for which there is a displacement of the boundary nodes $S$ that generates no forces at these nodes. Conversely, the inverse eigenvalues of $R_{\omega^2}(K;S)$ correspond to frequencies at which there is a displacement of the boundary nodes for which the resulting forces are infinitely large.
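Both claims, that the reduction over $S = \{1, 4\}$ keeps the full spectrum $\{2 \pm \sqrt{2}, 2, 0\}$ and that its resonances (poles) are $\{1, 3\}$, can be checked symbolically with the reduction formula $R_\lambda(K;S) = K_{SS} + K_{S\bar{S}}(\lambda I - K_{\bar{S}\bar{S}})^{-1}K_{\bar{S}S}$ from Chap. 1. A sketch, assuming SymPy is available:

```python
import sympy as sp

lam = sp.symbols('lambda')

K = sp.Matrix([
    [ 1, -1,  0,  0],
    [-1,  2, -1,  0],
    [ 0, -1,  2, -1],
    [ 0,  0, -1,  1],
])

S, Sbar = [0, 3], [1, 2]   # boundary nodes {1, 4}, interior nodes {2, 3}

R = sp.simplify(K.extract(S, S)
                + K.extract(S, Sbar)
                * (lam * sp.eye(2) - K.extract(Sbar, Sbar)).inv()
                * K.extract(Sbar, S))

# Eigenvalues of the reduced response: the spectrum of K survives.
spectrum = sp.solve(sp.det(R - lam * sp.eye(2)), lam)

# Resonances: the values of lam where R(lam) blows up, here {1, 3}.
poles = sp.solve(sp.denom(sp.together(R[0, 0])), lam)
```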


Fig. 5.3 Pseudospectra of the stiffness matrix $K$ for the mass–spring system and of its reduction $R_\lambda(K;S)$ from Example 5.4. The latter corresponds to the effective stiffness of the mass–spring system when we have access only to the nodes $S = \{1, 4\}$. The tolerances shown are $\epsilon = 1$ (blue), $\epsilon = 1/2$ (red), and $\epsilon = 1/4$ (tan), using the matrix 1-norm. The spectra of the respective matrices are indicated

That is, if we have access only to the boundary nodes $S = \{1, 4\}$, then the pseudoeigenvalues of $R_{\omega^2}(K;S)$ correspond to frequencies for which there is a displacement at the boundary nodes $S$ that generates very small forces on these nodes. The pseudospectral regions of $R_\lambda(K;S)$ are shown in Fig. 5.3 (right) for $\epsilon = 1$, $1/2$, $1/4$.

Observe that the pseudospectra of $R_\lambda(K;S)$ are included in the pseudospectra of $K$ for a given tolerance $\epsilon$. This implies that less access to the network's nodes leads to fewer frequencies for which displacements generate relatively small forces. Phrased less formally, the more a network is reduced, the less susceptible its eigenvalues are to perturbation.

In the standard theory of pseudospectra, if $A \in \mathbb{C}^{n\times n}$ has complex entries, then its pseudospectrum $\sigma_\epsilon(A)$ can have at most $n$ connected components, each corresponding to at least one eigenvalue of $A$. In contrast, a matrix $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$ can have more than $n$ eigenvalues. Hence, $\sigma_\epsilon(M)$ can have more than $n$ connected components. This is why the region $\sigma_{1/4}(R(K;\{1,4\}))$, shown in Fig. 5.3 (right), can have four connected components, although $R(K;\{1,4\})$ is a $2\times 2$ matrix.

Before ending this section, we note that in Examples 5.3 and 5.4, we have both $\sigma(A) \subseteq \sigma_\epsilon(R(A;S))$ and $\sigma(K) \subseteq \sigma_\epsilon(R(K;S))$ for the $\epsilon$ we consider. Based on these examples, it might seem that even under reduction, the $\epsilon$-pseudospectrum of the reduced matrix "remembers" where the eigenvalues of the original matrix are. However, this is not always the case, as the following example shows.


Example 5.5. Consider the matrix $M \in \mathbb{C}^{3\times 3}$ given by

$$M = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix},$$

with $\sigma(M) = \{0, \pm 1\}$. By reducing $M$ over the set $S = \{1\}$, we obtain the matrix $R(M;S) = [1/\lambda] \in \mathbb{W}^{1\times 1}_{\pi}$, for which

$$\|(R(M;S) - \lambda I)^{-1}\| = \left| \frac{\lambda}{1 - \lambda^2} \right|.$$

Hence, $0 \notin \sigma_\epsilon(R(M;S))$ for every $\epsilon$. Moreover, since $\sigma(M_{\bar{S}\bar{S}}) = \{0, 0\}$ for $\bar{S} = \{2, 3\}$, it is not always the case that either $\sigma(M)$ or $\sigma(M_{\bar{S}\bar{S}})$ is contained in $\sigma_\epsilon(R(M;S))$. Therefore, a matrix does not necessarily even approximately remember its eigenvalues under reduction.
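The claim that reducing this $M$ over $S = \{1\}$ yields the $1\times 1$ matrix $[1/\lambda]$, with resolvent $\lambda/(1-\lambda^2)$ vanishing at $\lambda = 0$, can be verified directly with the reduction formula from Chap. 1. A sketch, assuming SymPy is available:

```python
import sympy as sp

lam = sp.symbols('lambda')

M = sp.Matrix([
    [0, 1, 0],
    [1, 0, 0],
    [0, 1, 0],
])

S, Sbar = [0], [1, 2]
# Reduction R(M; S) = M_SS + M_SSbar (lam I - M_SbarSbar)^{-1} M_SbarS.
R = sp.simplify(M.extract(S, S)
                + M.extract(S, Sbar)
                * (lam * sp.eye(2) - M.extract(Sbar, Sbar)).inv()
                * M.extract(Sbar, S))

# The 1x1 resolvent (R - lam I)^{-1} is lambda/(1 - lambda^2) ...
resolvent = sp.simplify((R[0, 0] - lam) ** -1)

# ... which equals 0 at lambda = 0: the eigenvalue 0 of M leaves no
# trace in any pseudospectrum of the reduction.
value_at_zero = resolvent.subs(lam, 0)
```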

5.2 Pseudospectra Under Isospectral Reduction

In this section, we investigate how the pseudospectra of a matrix $M \in \mathbb{W}^{n\times n}_{\pi}$ are affected by an isospectral reduction. In order to study this change in pseudospectra, we need to consider two vector norms: one norm $\|\cdot\|$ defined on $\mathbb{C}^n$ for the pseudospectrum of $M$, and another norm $\|\cdot\|'$ defined on $\mathbb{C}^{|S|}$ ($0 < |S| < n$) for the pseudospectrum of $R(M;S)$. Our comparison of the pseudospectra of the original and reduced matrices assumes that for each vector $v = (v_S^T, v_{\bar{S}}^T)^T \in \mathbb{C}^n$, these two norms are related by

$$\|v\| = \left\| \begin{bmatrix} v_S \\ v_{\bar{S}} \end{bmatrix} \right\| \geq \left\| \begin{bmatrix} v_S \\ 0 \end{bmatrix} \right\| = \|v_S\|'. \tag{5.4}$$

Examples of norms satisfying property (5.4) are the $p$-norms for $1 \leq p \leq \infty$. For the sake of simplicity, we use the same notation for both the $\mathbb{C}^n$ and $\mathbb{C}^{|S|}$ norms.

The following theorem describes how the $\epsilon$-pseudospectrum of a matrix $M(\lambda)$ is related to the $\epsilon$-pseudospectrum of the isospectral reduction $R_\lambda(M;S)$. It says that the $\epsilon$-pseudospectrum of the reduced matrix is contained in that of the original matrix, for each $\epsilon > 0$.

Theorem 5.1. For $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$, let $S \subseteq N$. Then $\sigma_\epsilon(R(M;S)) \subseteq \sigma_\epsilon(M)$ for every $\epsilon > 0$, provided the $\mathbb{C}^n$ and $\mathbb{C}^{|S|}$ norms in the pseudospectrum definitions satisfy (5.4).
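Before turning to the proof, the norm hypothesis (5.4) is easy to sanity-check numerically for the $p$-norms: padding $v_S$ with zeros never increases the norm. A minimal sketch (NumPy; the sample vector and its split into blocks are our own arbitrary choices):

```python
import numpy as np

# A vector split into its S-block and S-bar-block.
v_S = np.array([1.0, -2.0])
v_Sbar = np.array([0.5, 3.0])

v = np.concatenate([v_S, v_Sbar])                        # full vector in C^n
v_padded = np.concatenate([v_S, np.zeros_like(v_Sbar)])  # (v_S, 0)

# Property (5.4): ||v|| >= ||(v_S, 0)|| = ||v_S||' for p = 1, 2, infinity.
checks = [(np.linalg.norm(v, p), np.linalg.norm(v_padded, p))
          for p in (1, 2, np.inf)]
```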

Proof. For $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$, let $S$ and $\bar{S}$ form a nonempty partition of $N$. We assume, without loss of generality, that the vector $v \in \mathbb{C}^n$ has the form $v = (v_S^T, v_{\bar{S}}^T)^T$.


Moreover, for $\tilde{\lambda}_0 \in \mathbb{C}$ and $\epsilon > 0$, suppose $v_S \in \mathbb{C}^{|S|}$ is a unit vector such that

$$\|(R(M;S) - \tilde{\lambda}_0 I)v_S\| < \epsilon. \tag{5.5}$$

Since $\sigma(M_{\bar{S}\bar{S}})$ and $\mathbb{C} - \mathrm{dom}(M)$ are finite sets, by continuity there is a neighborhood $U$ of $\tilde{\lambda}_0$ such that

(i) $M(\lambda) \in \mathbb{C}^{n\times n}$ for $\lambda \in U - \{\tilde{\lambda}_0\}$;
(ii) $\sigma(M_{\bar{S}\bar{S}}) \cap (U - \{\tilde{\lambda}_0\}) = \emptyset$; and
(iii) $\|(R(M;S) - \lambda I)v_S\| < \epsilon$ for $\lambda \in U - \{\tilde{\lambda}_0\}$.

Observe that for each $\lambda_0 \in U - \{\tilde{\lambda}_0\}$, the vector

$$v_{\bar{S}} = -(M(\lambda_0)_{\bar{S}\bar{S}} - \lambda_0 I)^{-1} M(\lambda_0)_{\bar{S}S} v_S$$

is defined. Since $v = (v_S^T, v_{\bar{S}}^T)^T$, we have

$$(M(\lambda_0) - \lambda_0 I)v = \begin{bmatrix} (M - \lambda I)_{SS} v_S + (M - \lambda I)_{S\bar{S}} v_{\bar{S}} \\ (M - \lambda I)_{\bar{S}S} v_S + (M - \lambda I)_{\bar{S}\bar{S}} v_{\bar{S}} \end{bmatrix} \bigg|_{\lambda = \lambda_0}$$
$$= \begin{bmatrix} (M_{SS} - \lambda I)v_S - M_{S\bar{S}}(M_{\bar{S}\bar{S}} - \lambda I)^{-1} M_{\bar{S}S} v_S \\ M_{\bar{S}S} v_S - (M_{\bar{S}\bar{S}} - \lambda I)(M_{\bar{S}\bar{S}} - \lambda I)^{-1} M_{\bar{S}S} v_S \end{bmatrix} \bigg|_{\lambda = \lambda_0} = \begin{bmatrix} (R(M;S) - \lambda I)v_S \\ 0 \end{bmatrix} \bigg|_{\lambda = \lambda_0}.$$

By property (5.4), relating the norms on $\mathbb{C}^n$ and $\mathbb{C}^{|S|}$, we have

$$\|(M(\lambda_0) - \lambda_0 I)v\| = \|(R(M(\lambda_0);S) - \lambda_0 I)v_S\| < \epsilon. \tag{5.6}$$

Since $v_S \neq 0$, consider the unit vector $u = v/\|v\| \in \mathbb{C}^n$. Again by (5.4), we have $\|v\| \geq \|v_S\| = 1$. Therefore, we obtain the bound

$$\|(M(\lambda_0) - \lambda_0 I)u\| = \frac{\|(M(\lambda_0) - \lambda_0 I)v\|}{\|v\|} \leq \|(M(\lambda_0) - \lambda_0 I)v\| < \epsilon,$$

where the last inequality comes from (5.6). This implies $\lambda_0 \in \sigma_\epsilon(M)$.

Since this holds for every $\lambda_0 \in U - \{\tilde{\lambda}_0\}$, we must have $\tilde{\lambda}_0 \in \mathrm{cl}(\sigma_\epsilon(M))$. Since $\sigma_\epsilon(M)$ is a closed set, it follows that $\tilde{\lambda}_0 \in \sigma_\epsilon(M)$. Since $\tilde{\lambda}_0$ is an arbitrary point satisfying inequality (5.5), the result follows. $\square$

Example 5.6. For the mass–spring network introduced in Example 5.4, we consider four nested sets of boundary nodes $\{1,2,3,4\} \supseteq \{1,2,4\} \supseteq \{1,4\} \supseteq \{1\}$. Note that Theorem 5.1 implies that the corresponding pseudospectra for a given $\epsilon$ obey the same inclusions. This is shown in Fig. 5.4 for $\epsilon = 1$, $1/2$, $1/4$.


Fig. 5.4 For $\epsilon = 1, 1/2, 1/4$, the pseudospectra of the matrix $K$ from the mass–spring system in Example 5.4 are shown (blue) together with the pseudospectra of the reduced matrices with terminal nodes $S = \{1, 2, 4\}$ (red), $S = \{1, 4\}$ (tan), and $S = \{1\}$ (green). Again the matrix 1-norm is used, and the eigenvalues $\sigma(K) = \{0, 2, 2 \pm \sqrt{2}\}$ are indicated. Note how the pseudospectra shrink as the number of boundary nodes decreases

In terms of the mass–spring network, this means that as we increase the number of internal degrees of freedom (or decrease the number of boundary nodes), it becomes harder to find frequencies for which there is a displacement that generates forces of magnitude below a certain fixed level. Hence, the fewer boundary nodes we have, the more robust are the frequencies that generate small forces.

Notice that the inclusion given in Theorem 5.1 is not, in general, a strict inclusion. In fact, it may be the case that a matrix $M$ and its reduction $R(M;S)$ have the same pseudospectra, as the following example demonstrates.

Example 5.7. Consider the matrix $M \in \mathbb{C}^{4\times 4}$ given by

$$M = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix} \quad \text{and its reduction} \quad R(M;S) = \begin{bmatrix} \frac{\lambda}{\lambda-1} & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \end{bmatrix},$$

where $S = \{2, 3, 4\}$. Computing the 2-norm of the respective resolvents, we get

$$\|(M - \lambda I)^{-1}\| = \max(|\lambda|^{-1}, |\lambda - 2|^{-1}) \quad \text{and}$$
$$\|(R(M;S) - \lambda I)^{-1}\| = \max(|\lambda|^{-1}, |\lambda - 2|^{-1}, |\lambda - 1|\,|\lambda|^{-1}|\lambda - 2|^{-1}).$$

To show that the pseudospectra of $M$ and $R(M;S)$ are the same, we have only to demonstrate that the norms above are equal. We can do this by proving the inequality

$$|\lambda - 1|\,|\lambda|^{-1}|\lambda - 2|^{-1} \leq \max(|\lambda|^{-1}, |\lambda - 2|^{-1}). \tag{5.7}$$


Notice that the triangle inequality implies

$$|\lambda - 1| \leq \tfrac{1}{2}|\lambda - 2| + \tfrac{1}{2}|\lambda| \leq \max(|\lambda|, |\lambda - 2|). \tag{5.8}$$

Inequality (5.7) follows for $\lambda \notin \{0, 2\}$ once we divide (5.8) by $|\lambda|\,|\lambda - 2|$. Since $\{0, 2\} \subseteq \sigma(M), \sigma(R(M;S))$, both 0 and 2 are included in the pseudospectra of these matrices. It then follows that $\sigma_\epsilon(M) = \sigma_\epsilon(R(M;S))$ for all $\epsilon > 0$.
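The equality of the two resolvent norms can also be spot-checked numerically: build both resolvents explicitly at a sample point $\lambda$ and compare their 2-norms with the closed-form expressions above. A sketch (NumPy; the test point is an arbitrary choice of ours, away from $\{0, 1, 2\}$):

```python
import numpy as np

lam = 0.5 + 0.5j   # arbitrary test point

M = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=complex)

# R(M; S) for S = {2, 3, 4}, with the rational entry lambda/(lambda - 1).
R = np.array([
    [lam / (lam - 1), 0, 0],
    [0, 1, 1],
    [0, 1, 1],
], dtype=complex)

norm_M = np.linalg.norm(np.linalg.inv(M - lam * np.eye(4)), 2)
norm_R = np.linalg.norm(np.linalg.inv(R - lam * np.eye(3)), 2)

# Closed forms from the text.
formula_M = max(1 / abs(lam), 1 / abs(lam - 2))
formula_R = max(formula_M, abs(lam - 1) / (abs(lam) * abs(lam - 2)))
```

At this sample point the rational term $|\lambda-1|\,|\lambda|^{-1}|\lambda-2|^{-1}$ is dominated by the other two, in agreement with inequality (5.7).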

5.3 Inverse Pseudoeigenvalues

Recall that the inverse eigenvalues of a matrix $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$ are the eigenvalues of its spectral inverse $S^{-1}(M)$ (see Theorem 1.4 in Chap. 1). Thus, we may think of the "almost inverse eigenvalues," or inverse pseudoeigenvalues, of $M(\lambda)$ as pseudoeigenvalues of $S^{-1}(M)$. The precise definition is given below, together with other equivalent definitions, analogous to Definitions 5.2(a)–(c) of pseudospectra.

Definition 5.3. Let $\epsilon > 0$. The set of $\epsilon$-pseudoresonances of a matrix $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$ is defined equivalently by the following conditions:

(a) Resonance perturbation:
$$\sigma^{-1}_\epsilon(M) = \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \|(M(\lambda) - \lambda I)^{-1}v\| < \epsilon \text{ for some } v \in \mathbb{C}^n \text{ with } \|v\| = 1\}\big).$$
(b) The inverse resolvent:
$$\sigma^{-1}_\epsilon(M) = \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \|M(\lambda) - \lambda I\| > \epsilon^{-1}\}\big).$$
(c) Perturbation of the spectral inverse:
$$\sigma^{-1}_\epsilon(M) = \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \lambda \in \sigma(S^{-1}(M) + E) \text{ for some } E \in \mathbb{C}^{n\times n} \text{ with } \|E\| < \epsilon\}\big).$$

We assume that the matrix and vector norms in (a)–(c) are compatible.

Note that Definition 5.3 is simply Definition 5.2 with $M(\lambda)$ replaced by the matrix $S^{-1}(M)$ on the right-hand side of parts (a)–(c). Hence, the equivalence of Definitions 5.3(a)–(c) follows from arguments similar to those in Sect. 5.4. Moreover, since $\sigma^{-1}(M) = \sigma(S^{-1}(M))$ for every $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$, using the matrix $E \equiv 0$ in Definition 5.3(c) yields

$$\sigma^{-1}(M) \subseteq \sigma^{-1}_\epsilon(M) \quad \text{for each } \epsilon > 0.$$

Additionally, observe that if $w(\lambda) = p(\lambda)/q(\lambda) \in \mathbb{W}_{\pi}$, then by definition $\deg(p) \leq \deg(q)$. Hence we have the limit


Fig. 5.5 Pseudoresonance regions of the matrix $R(M;S)$ given in Example 5.3 for $\epsilon = 1/3$ (blue), $\epsilon = 1/4$ (red), and $\epsilon = 1/5$ (tan), using the matrix 1-norm. The eigenvalues $\sigma(M_{\bar{S}\bar{S}}) = \{0, 1\}$ are indicated

$$\lim_{|\lambda| \to \infty} |w(\lambda)| = c$$

for some constant $c \geq 0$. Therefore, for each $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$, the norm $\|M(\lambda) - \lambda I\|$ grows like $|\lambda|$ for large $|\lambda|$. This leads to the following remark.

Remark 5.2. If $M \in \mathbb{W}^{n\times n}_{\pi}$, then the value $\lambda = \infty$ is always an inverse pseudoeigenvalue of $M$. This means that for each $\epsilon > 0$, the set $\sigma^{-1}_\epsilon(M)$ contains the complement of a ball centered at the origin of sufficiently large radius. (See Fig. 5.5, for example.)

Example 5.8. In Fig. 5.5, we show the pseudoresonance regions of the matrix $R(M;S)$ from Example 5.3 for $\epsilon = 1/3$, $1/4$, $1/5$. As can be computed, the inverse spectrum of $R(M;S)$ is empty. However, the inverse $\epsilon$-pseudoeigenvalue regions reveal that the eigenvalues $\sigma(M_{\bar{S}\bar{S}}) = \{0, 1\}$ act like inverse eigenvalues. Specifically, $\sigma(M_{\bar{S}\bar{S}}) \subseteq \sigma^{-1}_\epsilon(R(M;S))$ for each $\epsilon$ that we consider.

As it turns out, the situation in Example 5.8 does not hold for every matrix reduction. Similar to Example 5.5, if

$$M = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix},$$

and we consider the sets $S = \{1\}$ and $\bar{S} = \{2\}$, then one can show that $\sigma(M_{\bar{S}\bar{S}}) = \{0\}$ is not contained in $\sigma^{-1}_\epsilon(R(M;S))$ for small $\epsilon > 0$. That is, the eigenvalues $\sigma(M_{\bar{S}\bar{S}})$ do not always act like inverse eigenvalues of the matrix $R(M;S)$.

As with the pseudospectra studied in Sect. 5.1 of this chapter, we give a physical interpretation of inverse pseudospectra using a mass–spring system.


Fig. 5.6 Pseudoresonance regions of the matrix $R_\lambda(K;S)$ given in Example 5.4, with $S = \{1, 4\}$, for $\epsilon = 1/2$ (blue), $\epsilon = 1/3$ (red), and $\epsilon = 1/4$ (tan). The resonances $\sigma^{-1}(R_\lambda(K;S)) = \{1, 3\}$ are indicated

Example 5.9. The mass–spring system considered in Example 5.4 has inverse eigenvalues when restricted to the set of boundary nodes $S = \{1, 4\}$. The inverse pseudoeigenvalues of the reduced system correspond to frequencies for which there is a displacement on the boundary that generates relatively "large" forces at these nodes. For this reason, we may think of inverse eigenvalues as resonances of the system, and of inverse pseudoeigenvalues as pseudoresonances. In Fig. 5.6, we display some pseudoresonance regions of the mass–spring system restricted to the set $S = \{1, 4\}$.

Since we allow $\epsilon$ to be any positive value, there is nothing preventing an eigenvalue of a matrix $M$ from also being an $\epsilon$-pseudoresonance of $M$ (or a resonance from being an $\epsilon$-pseudoeigenvalue). In other words, we could have an $\epsilon > 0$ for which

$$\sigma^{-1}(M) \cap \sigma_\epsilon(M) \neq \emptyset \quad \text{or} \quad \sigma(M) \cap \sigma^{-1}_\epsilon(M) \neq \emptyset,$$

as the following example shows.

Example 5.10. Consider the matrix $M(\lambda) \in \mathbb{W}^{2\times 2}_{\pi}$ given by

$$M(\lambda) = \begin{bmatrix} \frac{1}{\lambda-1} & 0 \\ 0 & 0 \end{bmatrix}.$$


The spectrum and inverse spectrum of $M(\lambda)$ are, respectively,

$$\sigma(M) = \{0, (1 \pm \sqrt{5})/2\} \quad \text{and} \quad \sigma^{-1}(M) = \{1\}.$$

Now notice that for $0 \in \sigma(M)$, we have

$$\|M(0) - 0 \cdot I\| = 1,$$

which implies that $0 \in \sigma^{-1}_\epsilon(M)$ for all $\epsilon \geq 1$. The resolvent of $M$ is

$$(M(\lambda) - \lambda I)^{-1} = \begin{bmatrix} \frac{\lambda-1}{-\lambda^2+\lambda+1} & 0 \\ 0 & -\frac{1}{\lambda} \end{bmatrix}.$$

Hence for $\lambda = 1$, we have

$$\|(M(1) - I)^{-1}\| = 1,$$

which means that $1 \in \sigma_\epsilon(M)$ for all $\epsilon \geq 1$.
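Both membership claims can be checked numerically from the explicit entries, evaluating the resolvent in its simplified form (in the spirit of (5.1)), since $M(\lambda)$ itself is undefined at $\lambda = 1$. A sketch (NumPy):

```python
import numpy as np

# M(0) = [[1/(0-1), 0], [0, 0]]; its distance from 0*I in the 2-norm is 1,
# so 0 is an eps-pseudoresonance for every eps >= 1.
M0 = np.array([[-1.0, 0.0],
               [0.0, 0.0]])
norm_at_zero = np.linalg.norm(M0, 2)

def resolvent_norm(lam):
    """2-norm of (M(lam) - lam I)^{-1}, using the simplified entries,
    which remain defined at lam = 1."""
    r11 = (lam - 1) / (-lam**2 + lam + 1)
    r22 = -1 / lam
    return max(abs(r11), abs(r22))   # the resolvent is diagonal

norm_at_one = resolvent_norm(1.0)    # equals 1, so 1 is in sigma_eps(M)
                                     # for every eps >= 1
```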

Because the inverse pseudoeigenvalues of a matrix $M \in \mathbb{W}^{n\times n}_{\pi}$ can be defined in terms of the pseudoeigenvalues of the spectral inverse $S^{-1}(M)$, we can generalize Theorem 1.4 as follows.

Theorem 5.2. Suppose $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$ and $\epsilon > 0$. Then

$$\sigma^{-1}_\epsilon(M) = \sigma_\epsilon(S^{-1}(M)) \quad \text{and} \quad \sigma_\epsilon(M) = \sigma^{-1}_\epsilon(S^{-1}(M)).$$

Proof. Let $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$ and $\epsilon > 0$. Observe that

$$\sigma_\epsilon(M) = \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \|(M(\lambda) - \lambda I)^{-1}\| > \epsilon^{-1}\}\big) \quad \text{and}$$
$$\sigma^{-1}_\epsilon(S^{-1}(M)) = \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \|S^{-1}(M) - \lambda I\| > \epsilon^{-1}\}\big),$$

using Definitions 5.2(b) and 5.3(b), respectively. Since $S^{-1}(M) - \lambda I = (M(\lambda) - \lambda I)^{-1}$, it follows that $\sigma_\epsilon(M) = \sigma^{-1}_\epsilon(S^{-1}(M))$. The equality $\sigma^{-1}_\epsilon(M) = \sigma_\epsilon(S^{-1}(M))$ follows similarly. $\square$

Because of the seemingly invertible relationship between pseudospectra and inverse pseudospectra in Theorem 5.2, it is tempting to think of the $\epsilon$-pseudoresonances of a matrix as the complement of its $1/\epsilon$-pseudoeigenvalues. However, the two are not always equal, as can be seen in the following theorem.

Theorem 5.3. For $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$, let $\epsilon > 0$. Then $\mathrm{cl}\big(\mathbb{C} - \sigma_{1/\epsilon}(M)\big) \subseteq \sigma^{-1}_\epsilon(M)$. However, the reverse inclusion does not hold in general.


This theorem means that, in general, there is not enough information in the pseudospectra of a matrix to reconstruct its pseudoresonances. We now proceed with the proof of Theorem 5.3.

Proof. For $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$ and a matrix norm $\|\cdot\|$, the inequality

$$\|M(\lambda) - \lambda I\|^{-1} \leq \|(M(\lambda) - \lambda I)^{-1}\| \tag{5.9}$$

holds for every $\lambda \in \mathrm{dom}(M) - \sigma(M)$. Let $\mathrm{int}(\Omega)$ denote the interior of the set $\Omega \subseteq \mathbb{C}$, i.e., the largest open subset of $\Omega$. For $\epsilon > 0$, using Definition 5.2(b), we obtain

$$\mathrm{cl}\big(\mathbb{C} - \sigma_{1/\epsilon}(M)\big) = \mathrm{cl}\Big(\mathbb{C} - \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \|(M(\lambda) - \lambda I)^{-1}\| > \epsilon\}\big)\Big)$$
$$= \mathrm{cl}\Big(\mathrm{int}\big(\{\lambda \in \mathbb{C} : \|(M(\lambda) - \lambda I)^{-1}\| \leq \epsilon\}\big)\Big) \subseteq \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \|(M(\lambda) - \lambda I)^{-1}\| \leq \epsilon\}\big).$$

Similarly, it follows from Definition 5.3(b) that

$$\sigma^{-1}_\epsilon(M) = \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \|M(\lambda) - \lambda I\| > \epsilon^{-1}\}\big) = \mathrm{cl}\big(\{\lambda \in \mathbb{C} : \|M(\lambda) - \lambda I\|^{-1} \leq \epsilon\}\big).$$

By inequality (5.9), we have

$$\{\lambda \in \mathbb{C} : \|(M(\lambda) - \lambda I)^{-1}\| \leq \epsilon\} \subseteq \{\lambda \in \mathbb{C} : \|M(\lambda) - \lambda I\|^{-1} \leq \epsilon\},$$

implying the first half of the result.

To show that the reverse inclusion does not hold in general, take, for instance, the matrix $M(\lambda)$ from Example 5.10. It is easy to compute $\|M(2) - 2I\| = 2$ and $\|(M(2) - 2I)^{-1}\| = 1$. Taking $\epsilon = 2/3$, we have $2 \in \sigma^{-1}_{2/3}(M) \cap \sigma_{3/2}(M)$. $\square$

Before finishing this section, we note that Theorem 5.1 shows that under reduction the $\epsilon$-pseudospectrum of a matrix can only shrink: the pseudospectrum of the reduced matrix is a subset of that of the original. For the $\epsilon$-pseudoresonances of a matrix, however, there is no such inclusion result, as is demonstrated in the following example.

Example 5.11. Consider the matrix $M \in \mathbb{C}^{4\times 4}$ and its reduction $R(M;S)$ given by

$$M = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \quad \text{and} \quad R(M;S) = \begin{bmatrix} \frac{1}{\lambda} & 1 \\ 1 & \frac{1}{\lambda} \end{bmatrix},$$


Fig. 5.7 The pseudoresonance regions $\sigma^{-1}_\epsilon(M)$ (blue) and $\sigma^{-1}_\epsilon(R(M;S))$ (red) are shown for $\epsilon = 1/3$, where $M$ and $R(M;S)$ are the matrices considered in Example 5.11

where $S = \{1, 4\}$. The $\epsilon$-pseudoresonances $\sigma^{-1}_\epsilon(M)$ and $\sigma^{-1}_\epsilon(R(M;S))$ are shown in Fig. 5.7 for $\epsilon = 1/3$ in blue and red, respectively. As can be seen,

$$\sigma^{-1}_\epsilon(M) \not\subseteq \sigma^{-1}_\epsilon(R(M;S)) \quad \text{and} \quad \sigma^{-1}_\epsilon(R(M;S)) \not\subseteq \sigma^{-1}_\epsilon(M).$$

Hence, the inclusion result for pseudospectra given in Theorem 5.1 does not hold for pseudoresonances.

5.4 Eigenvalue Inclusions and Equivalence of Definitions

In this section, we show that the three pseudoeigenvalue regions given in Definition 5.2(a)–(c) coincide and include the eigenvalues of the matrix. To do so, we define the following regions. For $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$ and $\epsilon > 0$, let

$$\sigma_{\epsilon,a}(M) = \{\lambda \in \mathbb{C} : \|(M(\lambda) - \lambda I)v\| < \epsilon \text{ for some } v \in \mathbb{C}^n \text{ with } \|v\| = 1\} \tag{5.10}$$
$$\sigma_{\epsilon,b}(M) = \{\lambda \in \mathbb{C} : \|(M(\lambda) - \lambda I)^{-1}\| > \epsilon^{-1}\} \cup \sigma(M) \tag{5.11}$$
$$\sigma_{\epsilon,c}(M) = \{\lambda \in \mathbb{C} : \lambda \in \sigma(M(\lambda) + E) \text{ for some } E \in \mathbb{C}^{n\times n} \text{ with } \|E\| < \epsilon\}. \tag{5.12}$$

Note that the regions $\sigma_{\epsilon,a}(M)$, $\sigma_{\epsilon,b}(M)$, and $\sigma_{\epsilon,c}(M)$ are the regions given in Definition 5.2(a)–(c) without taking the closure. Before proving the main result of this section, we require the following result.


Lemma 5.1. Let $\epsilon > 0$. If $M(\lambda) \equiv M$ is a complex-valued matrix, then the regions $\sigma_{\epsilon,a}(M)$, $\sigma_{\epsilon,b}(M)$, and $\sigma_{\epsilon,c}(M)$ coincide.

The reason Lemma 5.1 holds if $M \in \mathbb{C}^{n\times n}$ is that in this case, $\sigma_{\epsilon,a}(M)$, $\sigma_{\epsilon,b}(M)$, and $\sigma_{\epsilon,c}(M)$ are the same as the set(s) $\sigma_\epsilon(M)$ given in Definition 5.1(a), (b), and (c), respectively. With this in place, we are now in a position to prove the following theorem.

Theorem 5.4. Let $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$ and $\epsilon > 0$. Then the regions given in Definition 5.2(a)–(c) are equivalent. Moreover, $\sigma(M) \subseteq \sigma_\epsilon(M)$.

Proof. Suppose $M(\lambda) \in \mathbb{W}^{n\times n}_{\pi}$ and $\epsilon > 0$. If $\lambda_0 \in \sigma(M)$, then there is a unit vector $v \in \mathbb{C}^n$ such that

$$(M(\lambda) - \lambda I)v = w(\lambda) \in \mathbb{W}^{n}_{\pi},$$

where $w(\lambda_0) = 0$. Since $\sigma(M)$ and $\mathbb{C} - \mathrm{dom}(M)$ are finite, there is a neighborhood $U \ni \lambda_0$ such that for $\tilde{U} = U - \{\lambda_0\}$, the following hold:

(i) $\tilde{U} \subseteq \mathrm{dom}(M)$;
(ii) $\|w(\lambda)\| < \epsilon$ for $\lambda \in \tilde{U}$; and
(iii) $(\sigma(M) - \{\lambda_0\}) \cap \tilde{U} = \emptyset$.

In particular, (ii) implies the set inclusion $\tilde{U} \subseteq \sigma_{\epsilon,a}(M)$.

For each $\lambda \in \mathrm{dom}(M)$, observe that the matrix $M(\lambda)$ is in $\mathbb{C}^{n\times n}$. Since the regions given in (5.10)–(5.12) are equivalent for every complex-valued matrix, Lemma 5.1 implies

$$\tilde{U} \subseteq \sigma_{\epsilon,a}(M) - \{\lambda_0\},\quad \sigma_{\epsilon,b}(M) - \{\lambda_0\},\quad \sigma_{\epsilon,c}(M) - \{\lambda_0\}.$$

This, in turn, implies

$$\sigma(M) \subseteq \mathrm{cl}\big(\sigma_{\epsilon,a}(M)\big),\quad \mathrm{cl}\big(\sigma_{\epsilon,b}(M) - \sigma(M)\big),\quad \mathrm{cl}\big(\sigma_{\epsilon,c}(M)\big). \tag{5.13}$$

In particular, if $\sigma_{\epsilon,b}(M) - \sigma(M)$ is open, then $\sigma_{\epsilon,b}(M)$ is open.

Note that the norm of a vector or matrix is continuous with respect to its entries. Also, the eigenvalues of a matrix depend continuously on the matrix entries. Thus, the sets $\sigma_{\epsilon,a}(M)$, $\sigma_{\epsilon,b}(M) - \sigma(M)$, and $\sigma_{\epsilon,c}(M)$ are open. Therefore, the set $\sigma_{\epsilon,b}(M)$ is also open.

Since the sets given in (5.10)–(5.12) are equivalent on $\mathrm{dom}(M)$, and $\mathbb{C} - \mathrm{dom}(M)$ is a finite set, it follows that

$$\sigma_{\epsilon,a}(M) \cap \mathrm{dom}(M) = \sigma_{\epsilon,b}(M) \cap \mathrm{dom}(M) = \sigma_{\epsilon,c}(M) \cap \mathrm{dom}(M)$$


is an open set. Taking the closure, it follows that

$$\mathrm{cl}\big(\sigma_{\epsilon,a}(M)\big) = \mathrm{cl}\big(\sigma_{\epsilon,b}(M) - \sigma(M)\big) = \mathrm{cl}\big(\sigma_{\epsilon,c}(M)\big),$$

implying that Definitions 5.2(a)–(c) are equivalent. Moreover, equation (5.13) implies $\sigma(M) \subseteq \sigma_\epsilon(M)$. This completes the proof. $\square$

The proof that Definitions 5.3(a)–(c) are equivalent is very similar to the proof of Theorem 5.4 and is therefore omitted.


Chapter 6
Improved Estimates of Survival Probabilities

In the theory of dynamical systems, the main object of study is the orbits of the elements of a set under a fixed rule. Most often, this means that for a given function $f: M \to M$, we would like to understand the behavior of the sequence $\{f^k(x) : k \geq 0\}$ for each $x \in M$. However, not all systems have this form.

If $f: \tilde{M} \to M$, where $\tilde{M} \subseteq M$, then it is possible that a point does not have an orbit under this rule. That is, it may happen that at some moment in time $k < \infty$, the iterate $f^k(x)$ is in $M - \tilde{M}$, so that $f^{k+1}(x)$ is undefined. This is the situation in the theory of open dynamical systems.

In this theory, the map $f: \tilde{M} \to M$ induces an open dynamical system in which the point $x \in \tilde{M}$ is said to escape at time $k$ if $f^k(x) \in M - \tilde{M}$. Here, the set $M - \tilde{M}$ is referred to as a hole, since it is the set through which the point escapes the system.

If the point $x \in \tilde{M}$ escapes the system, then we no longer consider what happens to it beyond that point in time. On the other hand, if $f^k(x) \in \tilde{M}$, then we say that the point $x$ has survived up to time $k$. If the point $x \in \tilde{M}$ survives for all $k > 0$, then we say that it survives for all time.

For an open dynamical system $f: \tilde{M} \to M$, one of the most basic questions that can be asked is this: what is the probability of a point surviving in the system for all time? More precisely, if $\mu$ is some probability measure on $M$, then what is the measure of the set of points that never escape the system? A natural extension of this question is: what is the probability of a point surviving in the system until some time $k < \infty$?

The first question addresses the asymptotic chances of surviving, while the second considers the finite-time probability that a typical point will remain in the system until some time $k < \infty$. The reason we consider these questions is that it is possible to associate the dynamics of an open dynamical system with the dynamics of a related dynamical network. This relationship between an open dynamical system and a dynamical network will allow us to estimate, and in some cases give



precisely, the asymptotic and finite-time survival probabilities of a general class of open systems.

Importantly, because of the interplay between the theory of open dynamical systems and dynamical networks, it is possible to use techniques similar to those introduced in Chap. 3 to analyze these survival probabilities. Our main result in this direction is that there are a number of transformations that can be used to sharpen the survival probability estimates of the open systems we consider here.

The type of systems we consider is that of one-dimensional maps that are piecewise smooth and have a finite Markov partition. The holes in these systems consist of a collection of these partition elements and therefore have a size that is both finite and fixed.

The reason we consider only one-dimensional systems in this chapter is simply for the sake of clarity. The results that we give here can be extended, with only slight modifications, to higher-dimensional systems.

6.1 Open Dynamical Systems

In this section, we define the basic concepts that we will use throughout the chapter. We begin by introducing the type of systems we will use to generate the open dynamical systems that we will consider.

Let $f: I \to I$, where $I = [0, 1]$. For $0 = q_0 < q_1 < \cdots < q_{m-1} < q_m = 1$, we let $\Delta_i = (q_{i-1}, q_i]$ for each $i \in M = \{1, \ldots, m\}$ and assume that the following hold. First, the function $f|_{\Delta_i}$ is differentiable for each $1 \leq i \leq m$. Second, the sets $\Delta_i = (q_{i-1}, q_i]$ form a Markov partition $\Delta = \{\Delta_i\}_{i=1}^m$ of $f$. That is, for each $i \in M$, the closure $\mathrm{cl}(f(\Delta_i))$ is the interval $[q_j, q_{j+k}]$ for some $k \geq 1$ and $j$ that depend on $i$.

Given this setup, we consider the situation in which orbits of $f: I \to I$ escape through an element of the Markov partition $\Delta = \{\Delta_i\}_{i=1}^m$ or, more generally, through some union $H$ of these partition elements. Equivalently, we can modify the function $f$ so that orbits cannot leave the set $H$ once they have entered it. With this approach, orbits that enter $H$ are considered to have escaped from the system. This latter approach, of modifying $f|_H$, turns out to be more convenient for our discussion and will be used to formally define the open systems that we consider.

Definition 6.1. Let f W I ! I have the Markov partition � D f�i gmiD1. If H DS

i2I �i , where I � M , then we let fH W I ! I be the map defined by

fH .x/ D(

f .x/ if x … H;

x otherwise:

We call the set H a hole and the function fH W I ! I the open dynamical system.fH ; I / generated by the (closed) dynamical system .f; I / over H .


Note that the partition $\Delta$ is still a Markov partition of the open dynamical system $(f_H, I)$, but the dynamics of the original system $(f, I)$ have been modified so that each point in $H$ is now a fixed point of the open system $(f_H, I)$.

For $n \ge 0$, let

$$E^n(f_H) = \{x \in I : f^n(x) \in H,\ f^k(x) \notin H,\ 0 \le k < n\} = \{x \in I : f_H^n(x) \in H,\ f_H^k(x) \notin H,\ 0 \le k < n\}; \text{ and}$$

$$T^n(f_H) = \{x \in I : f^k(x) \in H \text{ for some } k,\ 0 \le k \le n\} = \{x \in I : f_H^k(x) \in H \text{ for some } k,\ 0 \le k \le n\}.$$

The set $E^n(f_H)$ consists of those points that escape through the hole $H$ at time $n$, while $T^n(f_H)$ comprises those points that escape through $H$ before time $n+1$.

For the moment, suppose $\mu$ is a probability measure on $I$, i.e., $\mu(I) = 1$. Then $\mu(E^n(f_H))$ can be treated as the probability that an orbit of $f$ enters $H$ for the first time at time $n$, and $\mu(T^n(f_H))$ as the probability that an orbit of $f$ enters $H$ before time $n+1$. In this regard,

$$P^n(f_H) = 1 - \mu(T^n(f_H))$$

represents the probability that a typical point of $I$ does not fall into the hole $H$ by time $n$. For this reason, the quantity $P^n(f_H)$ is called the survival probability, at time $n$, of the dynamical system $f_H$ for the measure $\mu$. Thus, the probability

$$P(f_H) = \lim_{n \to \infty} P^n(f_H)$$

is the probability that a point survives in the open system $(f_H, I)$ for all time.

One of the fundamental problems in the theory of open systems is finding ways of approximating $\mu(E^n(f_H))$ and $\mu(T^n(f_H))$ for finite $n \ge 0$. In the following section, we give exact formulas for these quantities in the case that $f_H$ is a piecewise linear function with nonzero slope and $\mu$ is Lebesgue measure. In Sect. 6.3, we remove the assumption that $f_H$ is piecewise linear and present a method for estimating $\mu(E^n(f_H))$ and $\mu(T^n(f_H))$ for functions that are nonlinear.

6.2 Piecewise Linear Functions

As a first step in investigating the survival probabilities of an open dynamical system, we consider those systems $(f_H, I)$ that are linear when restricted to the elements of the partition $\Delta$. More formally, suppose $H = \bigcup_{i \in \mathcal I} \Delta_i$ for some $\mathcal I \subseteq M$. Let $\mathscr L$ be the set of all open systems $(f_H, I)$ such that

$$|f_H'(x)| = c_i > 0 \quad \text{for } x \in \Delta_i \text{ and } i \notin \mathcal I,$$


where each $c_i$ is in $\mathbb R$. The set $\mathscr L$ is then the collection of all open systems that have a nonzero constant slope when restricted to any $\Delta_i \not\subseteq H$.

To each $f_H \in \mathscr L$ there is an associated matrix, which can be used to compute both $\mu(E^n(f_H))$ and $\mu(T^n(f_H))$. To define this matrix, let

$$\Delta_{ij} = \Delta_i \cap f^{-1}(\Delta_j) \quad \text{for } 1 \le i, j \le m. \tag{6.1}$$

Definition 6.2. Suppose $f_H \in \mathscr L$, where $H = \bigcup_{i \in \mathcal I} \Delta_i$ for some $\mathcal I \subseteq M$. The matrix $A_H \in \mathbb R^{m \times m}$ given by

$$(A_H)_{ij} = \begin{cases} |f'(x)|^{-1} & \text{for } x \in \Delta_{ij} \ne \emptyset,\ i \notin \mathcal I, \\ 0 & \text{otherwise,} \end{cases} \qquad 1 \le i, j \le m,$$

is called the weighted transition matrix of $f_H$.

Associated with the open system $(f_H, I)$ there is also an unweighted directed graph $\Gamma_H = (V, E_H)$ with vertices $V$ and edges $E_H$. As before, if $V = \{v_1, \dots, v_m\}$, then we let $e_{ij}$ denote the edge from vertex $v_i$ to $v_j$.

Definition 6.3. Let $f_H : I \to I$, where $H = \bigcup_{i \in \mathcal I} \Delta_i$ for some $\mathcal I \subseteq M$. We define $\Gamma_H = (V, E_H)$ to be the graph with

(a) vertices $V = \{v_1, \dots, v_m\}$;
(b) edges $E_H = \{e_{ij} : \mathrm{cl}(\Delta_j) \subseteq \mathrm{cl}(f(\Delta_i)),\ i \notin \mathcal I\}$.

The graph $\Gamma_H$ is called the transition graph of $f_H$.

The vertex set $V = \{v_1, \dots, v_m\}$ of $\Gamma_H$ represents the elements of the Markov partition $\Delta = \{\Delta_i\}_{i=1}^m$, and the edge set $E_H$ the possible transitions between elements of $\Delta$. Hence $e_{ij} \in E_H$ only if there is an $x \in \Delta_i \not\subseteq H$ such that $f_H(x) \in \Delta_j$, i.e., only if it is possible to transition from $\Delta_i$ to $\Delta_j$. We note that since $H = \emptyset$ is a possible hole, the original (closed) system $(f, I)$ has a well-defined transition graph, which we denote by $\Gamma$.

Note that the graph $\Gamma_H$ does not carry the same amount of information as the matrix $A_H$. The reason is that $\Gamma_H$ designates only how orbits can transition between elements of $\Delta$, whereas $A_H$ additionally gives each of these transitions a weight. That is, the adjacency matrix $M(\Gamma_H)$ is not equal to $A_H$.

However, the graph $\Gamma_H$ does give us a way of visualizing how orbits escape from the system. This will be useful in Sect. 6.4, where we use the system's graph structure to improve our estimates of the system's survival probabilities.

The transition graph of an open system is given in the following example.

Example 6.1. Let the function $f : I \to I$ be the tent map

$$f(x) = \begin{cases} 2x, & 0 \le x \le 1/2, \\ 2 - 2x, & 1/2 < x \le 1, \end{cases}$$


Fig. 6.1 The open system $(f_H, I)$ (left) and its transition graph $\Gamma_H$ (right) considered in Example 6.1

with Markov partition $\Delta = \{(0, 1/4], (1/4, 1/2], (1/2, 3/4], (3/4, 1]\}$. Here, we let $H$ be the hole $H = (0, 1/4]$. We then have the open system $f_H : I \to I$ shown in Fig. 6.1 (left) with its graph of transitions $\Gamma_H$ (right).

We emphasize the fact that $H = \Delta_1$ is the hole in the open system $f_H$ by drawing the vertex $v_1$ as an open circle in $\Gamma_H$. Note that the only difference between the transition graph $\Gamma$ of $f : I \to I$ and $\Gamma_H$ is that there are no edges originating from $v_1$ in $\Gamma_H$. In this sense, a hole $H$ is an absorbing state, since nothing leaves $H$ once it enters (Fig. 6.1).

We now consider how to compute the measures $\mu(E^n(f_H))$ and $\mu(T^n(f_H))$ for an open system $f_H \in \mathscr L$. To do this, we let $\mathbf 1 = [1, \dots, 1]$ be the $1 \times m$ vector of ones, and $e_H$ the $m \times 1$ vector given by

$$(e_H)_i = \begin{cases} \mu(\Delta_i) & \text{if } i \in \mathcal I, \\ 0 & \text{otherwise.} \end{cases}$$

For the sake of simplicity, we let $\mu$ denote Lebesgue measure for the remainder of this chapter. With this in place, we have the following theorem.

Theorem 6.1. If $f_H \in \mathscr L$ and $n \ge 0$, then

$$\mu(E^n(f_H)) = \mathbf 1 A_H^n e_H; \quad \text{and} \quad \mu(T^n(f_H)) = \mathbf 1 \left( \sum_{i=0}^n A_H^i \right) e_H.$$
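To make the theorem concrete, here is a minimal pure-Python sketch (our own illustration, not code from the book) that evaluates both formulas by repeatedly applying $A_H$ to a vector; the matrix and hole below are those of the tent map with $H = (0, 1/4]$ treated later in Example 6.2.

```python
def mat_vec(A, v):
    """Multiply an m x m matrix (a list of rows) by a column vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def escape_measures(A_H, e_H, n):
    """Return (mu(E^n(f_H)), mu(T^n(f_H))) via Theorem 6.1,
    iterating v -> A_H v so that v = A_H^k e_H after k steps."""
    v = list(e_H)              # A_H^0 e_H
    total = sum(v)             # running value of 1 (sum_{i<=k} A_H^i) e_H
    for _ in range(n):
        v = mat_vec(A_H, v)
        total += sum(v)
    return sum(v), total

# Weighted transition matrix of the tent map with hole H = Delta_1 = (0, 1/4]
A_H = [[0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.5, 0.5],
       [0.0, 0.0, 0.5, 0.5],
       [0.5, 0.5, 0.0, 0.0]]
e_H = [0.25, 0.0, 0.0, 0.0]    # (e_H)_1 = mu(Delta_1)

E5, T5 = escape_measures(A_H, e_H, 5)   # mu(E^5) = 0.0625, mu(T^5) = 0.734375
```

Because the row of $A_H$ belonging to the hole is zero, mass that enters $H$ is never re-emitted, which is exactly the "absorbing state" picture of the transition graph.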

Proof. For the open system $f_H : I \to I$ with Markov partition $\Delta = \{\Delta_i\}_{i=1}^m$ and hole $H = \bigcup_{i \in \mathcal I} \Delta_i$ for $\mathcal I \subseteq M$, let

$$\Delta_{i_0 i_1 \dots i_n} = \{x \in I : f^k(x) \in \Delta_{i_k} \text{ for } 0 \le k \le n\}.$$

Then for $n > 0$, the set $E^n(f_H)$ has the measure given by

$$\mu(E^n(f_H)) = \sum_\omega \mu(\Delta_\omega), \tag{6.2}$$

where the sum is taken over all sequences $\omega = i_0 i_1 \dots i_n$ with $\Delta_{i_k} \not\subseteq H$ for $0 \le k < n$, $\Delta_{i_n} \subseteq H$, and $\Delta_\omega \ne \emptyset$.

Let $z_i = |f'(x)|^{-1}$, where $x \in \Delta_i$, for each $i \in M$. Since $f_H$ is piecewise linear on each element of $\Delta$, it follows that $\mu(\Delta_{i_0 i_1}) = z_{i_0} \mu(\Delta_{i_1})$ if $\Delta_{i_0 i_1} \ne \emptyset$, and $\mu(\Delta_{i_0 i_1}) = 0$ otherwise. By the same reasoning, $\mu(\Delta_{i_0 i_1 i_2}) = z_{i_0} z_{i_1} \mu(\Delta_{i_2})$ if $\Delta_{i_0 i_1 i_2} \ne \emptyset$, and $\mu(\Delta_{i_0 i_1 i_2}) = 0$ otherwise. Continuing in this manner, it follows that

$$\mu(\Delta_{i_0 i_1 \dots i_n}) = \begin{cases} \left( \prod_{k=0}^{n-1} z_{i_k} \right) \mu(\Delta_{i_n}) & \text{if } \Delta_{i_0 i_1 \dots i_n} \ne \emptyset, \\ 0 & \text{otherwise.} \end{cases}$$

Combining this with equation (6.2), we have

$$\mu(E^n(f_H)) = \sum_{\ell=1}^m \left( \sum_{\omega_\ell} \left( \prod_{k=0}^{n-1} z_{i_k} \right) \mu(\Delta_{i_n}) \right), \tag{6.3}$$

where the second sum is taken over all $\omega_\ell = i_0 i_1 \dots i_n$ with $i_0 = \ell$, $\Delta_{i_k} \not\subseteq H$ for all $0 \le k < n$, $\Delta_{\omega_\ell} \ne \emptyset$, and $\Delta_{i_n} \subseteq H$.

For $1 \le \ell \le m$, let $(A_H^n e_H)_\ell$ denote the $\ell$th component of $A_H^n e_H \in \mathbb R^m$. From the definition of matrix multiplication, it follows that

$$(A_H^n e_H)_\ell = \sum_{\omega_\ell} \left( \prod_{k=0}^{n-1} (A_H)_{i_k i_{k+1}} \right) (e_H)_{i_n}, \tag{6.4}$$

where the sum is taken over all $\omega_\ell = i_0 i_1 \dots i_n$ with $i_0 = \ell$.

Note that $(A_H)_{i_k i_{k+1}} = 0$ if $\Delta_{i_k} \subseteq H$ or $\Delta_{i_k i_{k+1}} = \emptyset$. Also, $(e_H)_{i_n} = 0$ if $\Delta_{i_n} \not\subseteq H$. Hence, the sum in equation (6.4) can be taken over all $\omega_\ell = i_0 i_1 \dots i_n$ in which $i_0 = \ell$, $\Delta_{i_k} \not\subseteq H$ for $0 \le k < n$, $\Delta_{\omega_\ell} \ne \emptyset$, and $\Delta_{i_n} \subseteq H$. Since $(A_H)_{i_k i_{k+1}} = z_{i_k}$ for each $0 \le k < n$ under the assumption that $\Delta_{\omega_\ell} \ne \emptyset$, then, from (6.3) and (6.4), it follows that

$$\mu(E^n(f_H)) = \sum_{\ell=1}^m \left( \sum_{\omega_\ell} \left( \prod_{k=0}^{n-1} (A_H)_{i_k i_{k+1}} \right) \mu(\Delta_{i_n}) \right) = \sum_{\ell=1}^m (A_H^n e_H)_\ell = \mathbf 1 A_H^n e_H.$$


For the case $n = 0$, we have $\mu(E^0(f_H)) = \mu(H)$. Since

$$\mathbf 1 A_H^0 e_H = \mathbf 1 e_H = \mu(H),$$

then $\mu(E^n(f_H)) = \mathbf 1 A_H^n e_H$ for all $n \ge 0$. Using this equality, we have

$$\mu(T^n(f_H)) = \sum_{i=0}^n \mu(E^i(f_H)) = \sum_{i=0}^n \mathbf 1 A_H^i e_H = \mathbf 1 \left( \sum_{i=0}^n A_H^i \right) e_H$$

by the linearity of matrix multiplication. This verifies the result. □

Before moving on, we mention that in the theory of open dynamical systems, the function $f_H : I \to I$ is typically assumed to have the property $P(f_H) = 0$ for a given hole $H$, so that almost every point escapes the system as $n \to \infty$. This is guaranteed, for instance, by assuming that $f$ is an expanding map with $|f'| > 1$.

This is assumed because otherwise there is a set $I_* \subseteq I$ of positive measure that never escapes through $H$. If this were the case, then rather than investigating the system $(f_H, I)$, we could consider the open system $(\tilde f_H, \tilde I)$, where $\tilde I = I - I_*$, in which $P(\tilde f_H) = 0$.

However, a nice feature of Theorem 6.1 is that it can be used whether or not $P(f_H) = 0$. In fact, if it is the case that $P(f_H) = p < 1$, then we can simply consider the new measure $\nu = \mu/(1 - p)$ on the system $f_H$, so that using this scaled version of Lebesgue measure, we have

$$P_\nu(f_H) = 1 - \lim_{n \to \infty} \nu(T^n(f_H)) = 0.$$

In this sense, Theorem 6.1 gives us two pieces of useful information: the Lebesgue measure of those points that escape through $H$ at any moment in time and, ultimately, the Lebesgue measure of those points that survive for all time.

Recall that the eigenvalues and spectral radius of a matrix $M \in \mathbb C^{n \times n}$ are denoted by $\sigma(M)$ and $\rho(M)$, respectively. The following are corollaries of Theorem 6.1.

Corollary 6.1. Suppose $f_H \in \mathscr L$, $n \ge 0$, and $\rho(A_H) < 1$. Then

$$\mu(T^n(f_H)) = \mathbf 1 (I - A_H)^{-1} (I - A_H^{n+1}) e_H; \quad \text{and} \quad \lim_{n \to \infty} \mu(T^n(f_H)) = \mathbf 1 (I - A_H)^{-1} e_H,$$

where $I$ is the identity matrix.

Proof. If $\lambda \in \sigma(A_H)$, then $A_H v = \lambda v$ for some $v \in \mathbb C^m$. Hence $(I - A_H) v = (1 - \lambda) v$, implying $\sigma(I - A_H) = \{1 - \lambda : \lambda \in \sigma(A_H)\}$. If $\rho(A_H) < 1$, then it follows that $0 \notin \sigma(I - A_H)$, so that $I - A_H$ is invertible.


The equalities in the corollary follow from those in Theorem 6.1 together with the identity

$$\sum_{i=0}^n A_H^i = (I - A_H)^{-1} (I - A_H^{n+1}),$$

since for every matrix $M \in \mathbb C^{m \times m}$, we have $\lim_{n \to \infty} M^n = 0$ if and only if $\rho(M) < 1$. □

Corollary 6.2. Suppose $f_H \in \mathscr L$ with $H = \bigcup_{i \in \mathcal I} \Delta_i$. If $\rho(A_H) = 0$, then $\mu(T^n(f_H)) = 1$ for some $n < \infty$.

Proof. If $\rho(A_H) = 0$, then the graph $\Gamma_H$ has no cycles. Therefore, if $x \in \Delta_i \not\subseteq H$, then $f^k(x) \notin \Delta_i$ for all $k > 0$. Since $\Delta = \{\Delta_i\}_{i=1}^m$ is a finite Markov partition, $f^m(x) \in H$ for all $x \in I$. □

The matrix $M \in \mathbb C^{m \times m}$ is called defective if it does not have an eigenbasis, i.e., if there are not enough linearly independent eigenvectors of $M$ to form a basis of $\mathbb C^m$. A matrix with an eigenbasis is called nondefective.

Corollary 6.3. Let $f_H \in \mathscr L$ and suppose the matrix $A_H$ is nondefective with eigenpairs $\{(\lambda_1, v_1), \dots, (\lambda_m, v_m)\}$, with no eigenvalue equal to 1. Then $e_H = \sum_{i=1}^m c_i v_i$ for some $c_1, \dots, c_m \in \mathbb C$, and

$$\mu(E^n(f_H)) = \sum_{i=1}^m c_i s_i \lambda_i^n \tag{6.5}$$

$$\mu(T^n(f_H)) = \sum_{i=1}^m c_i s_i \left( \frac{1 - \lambda_i^{n+1}}{1 - \lambda_i} \right), \tag{6.6}$$

where $s_i = \mathbf 1 v_i$.

Proof. Equations (6.5) and (6.6) follow from Theorem 6.1 together with the identity $\sum_{k=0}^n \lambda_i^k = (1 - \lambda_i^{n+1})/(1 - \lambda_i)$, since it is assumed that $\lambda_i \ne 1$ for each $1 \le i \le m$. □

Example 6.2. Let the function $f : I \to I$ be the tent map considered in Example 6.1, and let $H = (0, 1/4]$. Since $f_H \in \mathscr L$, one can calculate that $f_H$ has the weighted transition matrix

$$A_H = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 1/2 & 1/2 \\ 1/2 & 1/2 & 0 & 0 \end{bmatrix}.$$


The matrix $A_H$ is nondefective, since its eigenvalues $\sigma(A_H) = \left\{ \frac{1 + \sqrt 5}{4}, \frac{1 - \sqrt 5}{4}, 0, 0 \right\}$ correspond to the linearly independent eigenvectors

$$v_1 = \begin{bmatrix} 0 \\ \frac{1 + \sqrt 5}{2} \\ \frac{1 + \sqrt 5}{2} \\ 1 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 0 \\ \frac{1 - \sqrt 5}{2} \\ \frac{1 - \sqrt 5}{2} \\ 1 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 0 \\ 0 \\ -1 \\ 1 \end{bmatrix}, \quad v_4 = \begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix},$$

respectively. Since the vector $e_H = [1/4, 0, 0, 0]^T$ can be written as

$$e_H = \frac{5 - \sqrt 5}{20 + 20\sqrt 5}\, v_1 - \frac{3 + \sqrt 5}{8 \sqrt 5}\, v_2 + \frac 14 v_3 - \frac 14 v_4,$$

equations (6.5) and (6.6) in Corollary 6.3 imply

$$\mu(E^n(f_H)) = \frac{1}{40}(5 + \sqrt 5)\, \lambda_1^n + \frac{1}{40}(5 - \sqrt 5)\, \lambda_2^n; \quad \text{and}$$

$$\mu(T^n(f_H)) = \frac{1}{40}(5 + \sqrt 5)\, \frac{1 - \lambda_1^{n+1}}{1 - \lambda_1} + \frac{1}{40}(5 - \sqrt 5)\, \frac{1 - \lambda_2^{n+1}}{1 - \lambda_2} = 1 - \left( \frac 12 + \frac{1}{\sqrt 5} \right) \lambda_1^{n+1} - \left( \frac 12 - \frac{1}{\sqrt 5} \right) \lambda_2^{n+1}.$$

Since $\rho(A_H) < 1$, it follows, by computing the matrix $(I - A_H)^{-1}$, that

$$\mathbf 1 (I - A_H)^{-1} e_H = \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 2 & 2 & 2 \\ 1 & 1 & 3 & 2 \\ 1 & 1 & 1 & 2 \end{bmatrix} \begin{bmatrix} 1/4 \\ 0 \\ 0 \\ 0 \end{bmatrix} = 1.$$

Corollary 6.1 then implies that $\lim_{n \to \infty} \mu(T^n(f_H)) = 1$. Hence, the probability $P(f_H)$ of surviving indefinitely in this system, for a typical $x \in I$, is in fact zero. This can be seen in Fig. 6.2, where both $\mu(E^n(f_H))$ and $\mu(T^n(f_H))$ are plotted.
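The numbers in this example can be checked numerically; the sketch below (ours, using no external libraries) compares the eigenvalue closed form for $\mu(T^n(f_H))$ against direct summation of $\mathbf 1 A_H^k e_H$ and confirms that the total escaped measure tends to 1.

```python
S5 = 5 ** 0.5
L1 = (1 + S5) / 4    # lambda_1
L2 = (1 - S5) / 4    # lambda_2

def T_closed(n):
    """Closed form from Example 6.2:
    mu(T^n) = 1 - (1/2 + 1/sqrt5) L1^{n+1} - (1/2 - 1/sqrt5) L2^{n+1}."""
    return 1 - (0.5 + 1 / S5) * L1 ** (n + 1) - (0.5 - 1 / S5) * L2 ** (n + 1)

def T_matrix(n):
    """mu(T^n) computed directly from Theorem 6.1."""
    A = [[0, 0, 0, 0], [0, 0, .5, .5], [0, 0, .5, .5], [.5, .5, 0, 0]]
    v, total = [0.25, 0.0, 0.0, 0.0], 0.25
    for _ in range(n):
        v = [sum(a * x for a, x in zip(row, v)) for row in A]
        total += sum(v)
    return total
```

Since $|\lambda_1|, |\lambda_2| < 1$, `T_closed(n)` approaches 1 geometrically, matching the limit $\mathbf 1 (I - A_H)^{-1} e_H = 1$ computed above.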

For a fixed map $f : I \to I$, it is possible to consider the two open systems $(f_J, I)$ and $(f_K, I)$, where $J$ and $K$ are two different holes. One of the questions Theorem 6.1 allows us to answer is whether $\mu(E^n(f_J)) \le \mu(E^n(f_K))$ or $\mu(T^n(f_J)) \le \mu(T^n(f_K))$ for every $n \ge 0$. Stated less formally, we can determine which hole is leaking the most at any moment in time. This particular topic is considered in more detail in [2].

If the holes $J$ and $K$ are disjoint, another question we can address is whether more phase space is escaping from $J$ than from $K$ in the open system $f_{J \cup K}$. This is quite different from asking whether more phase space escapes from $f_J$ at time $n$ than from $f_K$. The reason is that if we simultaneously have the holes $J$ and $K$,


Fig. 6.2 Plots of $\mu(E^n(f_H))$ and $\mu(T^n(f_H))$ for the open system $(f_H, I)$ in Example 6.2

then it may happen that the phase space that would have escaped through $J$ at time $n$ in the absence of $K$ could already have escaped through $K$ at some time $k < n$. In this way, $K$ may “block” escape through $J$, and $J$ may block escape through $K$. Answering the question of through which hole more phase space escapes involves taking this blocking process into account.

Suppose $f : I \to I$ has the Markov partition $\Delta = \{\Delta_i\}_{i=1}^m$ with holes $H_1, \dots, H_\ell$. Assuming $H = \bigcup_{i=1}^\ell H_i$, where the holes $H_1, \dots, H_\ell$ are disjoint, we define

$$E_i^n(f_H) = \{x \in I : f_H^n(x) \in H_i,\ f_H^k(x) \notin H_i,\ 0 \le k < n\}; \text{ and}$$

$$T_i^n(f_H) = \{x \in I : f_H^k(x) \in H_i \text{ for some } k,\ 0 \le k \le n\}$$

for each $1 \le i \le \ell$ and $n \ge 0$. The set $E_i^n(f_H)$ consists of those points that escape through $H_i$ at time $n$, while $T_i^n(f_H)$ is the set of points that escape through $H_i$ before time $n+1$ in the open system $f_H : I \to I$. Hence,

$$\bigcup_{i=1}^\ell E_i^n(f_H) = E^n(f_H) \quad \text{and} \quad \bigcup_{i=1}^\ell T_i^n(f_H) = T^n(f_H).$$

If $H_i = \bigcup_{j \in \mathcal I_i} \Delta_j$, then we let the vector $e_{H_i}$ be given by

$$(e_{H_i})_j = \begin{cases} \mu(\Delta_j) & \text{if } j \in \mathcal I_i, \\ 0 & \text{otherwise.} \end{cases}$$

The following result is a direct consequence of the proof of Theorem 6.1.


Fig. 6.3 The transition graph $\Gamma$ (right) of the closed system $f : I \to I$ (left) in Example 6.3

Corollary 6.4. Let $f_H \in \mathscr L$, where $H = \bigcup_{i=1}^\ell H_i$ and $H_1, \dots, H_\ell$ are disjoint. Then

$$\mu(E_i^n(f_H)) = \mathbf 1 A_H^n e_{H_i}; \quad \text{and} \quad \mu(T_i^n(f_H)) = \mathbf 1 \left( \sum_{j=0}^n A_H^j \right) e_{H_i}$$

for each $1 \le i \le \ell$ and $n \ge 0$.

Example 6.3. Let the function $f : I \to I$ be the map

$$f(x) = \begin{cases} 2x + 3/4, & 0 \le x \le 1/8, \\ -6x + 7/4, & 1/8 < x \le 1/4, \\ -(1/2)x + 3/8, & 1/4 < x \le 3/4, \\ (1/2)x - 3/8, & 3/4 < x \le 1, \end{cases}$$

with Markov partition $\Delta = \{(0, 1/8], (1/8, 1/4], (1/4, 3/4], (3/4, 1]\}$. This map is shown in Fig. 6.3 (left) along with its graph of transitions $\Gamma$ (right). Here, we consider the holes $H_1 = (0, 1/8]$, $H_2 = (3/4, 1]$, and $H = H_1 \cup H_2$. We note that $f_{H_1}, f_{H_2}, f_H \in \mathscr L$, so that each open system has a graph of transitions. These are shown in Fig. 6.4 (bottom), at the left, right, and center, respectively.

Beginning with the holes $H_1$ and $H_2$, the open systems $(f_{H_1}, I)$ and $(f_{H_2}, I)$ have the weighted transition matrices given by

$$A_{H_1} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1/6 & 1/6 \\ 2 & 2 & 0 & 0 \\ 2 & 0 & 0 & 0 \end{bmatrix} \quad \text{and} \quad A_{H_2} = \begin{bmatrix} 0 & 0 & 0 & 1/2 \\ 0 & 0 & 1/6 & 1/6 \\ 2 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$


Fig. 6.4 The measure of the phase space that has escaped in $f_{H_1}$ (blue) and $f_{H_2}$ (red) is shown (top left). The measure of the phase space that escapes through the holes $H_1$, $H_2$, and $H$ in $f_H$ is shown (top right). The transition graphs $\Gamma_{H_1}$, $\Gamma_{H_2}$, and $\Gamma_H$ are shown below

respectively. Using Theorem 6.1, both $\mu(T^n(f_{H_1}))$ and $\mu(T^n(f_{H_2}))$ can be found for all $n \ge 0$. These are shown in Fig. 6.4 (top left) in blue and red, respectively. We note that in both cases,

$$\lim_{n \to \infty} P^n(f_{H_1}) = 0 \quad \text{and} \quad \lim_{n \to \infty} P^n(f_{H_2}) = 0,$$

so that almost every point eventually escapes from both of these systems.

The system $f_H$, with both holes $H_1$ and $H_2$, has the weighted transition matrix

$$A_H = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1/6 & 1/6 \\ 2 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$

Using Theorem 6.1 and Corollary 6.4, it is possible to compute each of $\mu(T_1^n(f_H))$, $\mu(T_2^n(f_H))$, and $\mu(T^n(f_H))$. These are shown in Fig. 6.4 (top right), where we note that $\mu(T^n(f_H)) = \mu(T_1^n(f_H)) + \mu(T_2^n(f_H))$ for $n \ge 0$. However, the measure of points that escape through $H_1$ by time $n$ is larger than the measure of those points that escape through $H_2$, for all $n > 0$.

This is interesting for the following reason. In the systems $f_{H_1}$ and $f_{H_2}$, almost every point escapes through these holes separately at nearly the same rate. However,


in the open system $f_H$, different amounts of phase space escape the system through $H_1$ and $H_2$, respectively. In particular,

$$\lim_{n \to \infty} \mu\left( T_1^n(f_H) \right) = 0.5625 \quad \text{and} \quad \lim_{n \to \infty} \mu\left( T_2^n(f_H) \right) = 0.4375.$$

One way to state this is to say that the hole $H_1$ does a better job of blocking phase space from entering $H_2$ than $H_2$ does of keeping points from escaping through $H_1$. This may seem surprising when one considers the fact that $H_1$ has a smaller Lebesgue measure than $H_2$.
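These two limits are easy to reproduce: by Corollaries 6.1 and 6.4, $\lim_n \mu(T_i^n(f_H)) = \mathbf 1 (I - A_H)^{-1} e_{H_i}$, so it suffices to solve $(I - A_H)\, y = e_{H_i}$ and sum the entries of $y$. The sketch below is our own; the Gaussian-elimination helper is a hypothetical stand-in for any linear solver.

```python
def solve(M, b):
    """Solve M y = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

# Weighted transition matrix of Example 6.3 with both holes open
A_H = [[0, 0, 0, 0], [0, 0, 1/6, 1/6], [2, 2, 0, 0], [0, 0, 0, 0]]
I_minus_A = [[(i == j) - A_H[i][j] for j in range(4)] for i in range(4)]

e_H1 = [1/8, 0, 0, 0]   # mu(H_1) = mu(Delta_1) = 1/8
e_H2 = [0, 0, 0, 1/4]   # mu(H_2) = mu(Delta_4) = 1/4

lim_T1 = sum(solve(I_minus_A, e_H1))   # 0.5625 = 9/16
lim_T2 = sum(solve(I_minus_A, e_H2))   # 0.4375 = 7/16
```

The two limits add up to 1, consistent with the fact that almost every point eventually escapes through one of the two holes.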

We note that this idea of simultaneously open holes, where $H = \bigcup_{i=1}^\ell H_i$, is quite natural, since every hole $H$ in our setup is composed of a number of Markov partition elements, $H = \bigcup_{i \in \mathcal I} \Delta_i$. Since each partition element $\Delta_i$, or collection of such elements, can be thought of as a hole, Corollary 6.4 allows us to determine which part of the hole $H$ is leaking the most.

6.3 Nonlinear Estimates

We now consider open systems $f_H : I \to I$ in which $f$ is allowed to be a nonlinear but differentiable function when restricted to the elements of $\Delta$. The formulas we derive in this section allow us to give upper and lower bounds on $\mu(E^n(f_H))$ and $\mu(T^n(f_H))$ for every finite time $n \ge 0$. In particular, the finite-time estimates of $\mu(T^n(f_H))$ can be used to bound the asymptotic survival probability $P(f_H)$.

Suppose $H = \bigcup_{i \in \mathcal I} \Delta_i$ for some $\mathcal I \subseteq M$. Let $\mathscr N$ be the set of open systems $f_H : I \to I$ that have the property that

$$\inf_{x \in \Delta_i} |f_H'(x)| > 0 \quad \text{for } i \notin \mathcal I; \quad \text{and} \quad \sup_{x \in \Delta_i} |f_H'(x)| < \infty \quad \text{for } i \notin \mathcal I.$$

To each $f_H \in \mathscr N$ there are two associated matrices, similar to the weighted transition matrix $A_H$ defined for each open system in $\mathscr L$.

Definition 6.4. Suppose $f_H \in \mathscr N$, where $H = \bigcup_{i \in \mathcal I} \Delta_i$ for some $\mathcal I \subseteq M$. The matrix $\underline A_H \in \mathbb R^{m \times m}$ is defined by

$$(\underline A_H)_{ij} = \begin{cases} \inf_{x \in \Delta_{ij}} |f'(x)|^{-1} & \text{for } \Delta_{ij} \ne \emptyset,\ i \notin \mathcal I, \\ 0 & \text{otherwise,} \end{cases} \qquad 1 \le i, j \le m.$$

Similarly, the matrix $\overline A_H \in \mathbb R^{m \times m}$ is defined by

$$(\overline A_H)_{ij} = \begin{cases} \sup_{x \in \Delta_{ij}} |f'(x)|^{-1} & \text{for } \Delta_{ij} \ne \emptyset,\ i \notin \mathcal I, \\ 0 & \text{otherwise,} \end{cases} \qquad 1 \le i, j \le m.$$

For $f_H \in \mathscr N$ and $n \ge 0$, let

$$\underline E^n(f_H) = \mathbf 1\, \underline A_H^n e_H \quad \text{and} \quad \overline E^n(f_H) = \mathbf 1\, \overline A_H^n e_H;$$

$$\underline T^n(f_H) = \mathbf 1 \left( \sum_{i=0}^n \underline A_H^i \right) e_H \quad \text{and} \quad \overline T^n(f_H) = \mathbf 1 \left( \sum_{i=0}^n \overline A_H^i \right) e_H.$$

Since the function $f_H : I \to I$ may be nonlinear on the elements of $\Delta$, it is not possible to give an exact formula for either $\mu(E^n(f_H))$ or $\mu(T^n(f_H))$, as in the previous section. Instead, here we give bounds on both of these quantities.

Theorem 6.2. If $f_H \in \mathscr N$ and $n \ge 0$, then

$$\underline E^n(f_H) \le \mu(E^n(f_H)) \le \overline E^n(f_H); \text{ and} \tag{6.7}$$

$$\underline T^n(f_H) \le \mu(T^n(f_H)) \le \overline T^n(f_H). \tag{6.8}$$
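As an illustration (our own sketch, not the book's code), the bounds of Theorem 6.2 can be evaluated numerically for the nonlinear map $g$ of Example 6.4 below: we approximate the infima and suprema of $|g'(x)|^{-1}$ on each nonempty $\Delta_{ij}$ by sampling on a fine grid (a numerical stand-in for the exact values), build $\underline A_H$ and $\overline A_H$, and compare $\mathbf 1\, \underline A_H^n e_H$ with $\mathbf 1\, \overline A_H^n e_H$.

```python
import math

def poly(t):                 # g on [0, 1/2]; g(x) = poly(1 - x) on (1/2, 1]
    return 5.5 * t - 21 * t * t + 28 * t ** 3

def dpoly(t):                # derivative of poly; positive on [0, 1/2]
    return 5.5 - 42 * t + 84 * t * t

def g(x):
    return poly(x) if x <= 0.5 else poly(1 - x)

def abs_dg(x):
    return dpoly(x) if x <= 0.5 else dpoly(1 - x)

def bin_index(y):
    """Index in {0,..,3} of the quarter-interval (j/4, (j+1)/4] containing y."""
    return min(3, max(0, math.ceil(4 * y) - 1))

K = 2000                     # samples per partition element
INF = float("inf")
A_hi = [[0.0] * 4 for _ in range(4)]
lo = [[INF] * 4 for _ in range(4)]
for i in range(1, 4):        # row 0 (the hole Delta_1) stays zero
    for k in range(K):
        x = (i + (k + 0.5) / K) / 4           # interior sample of Delta_{i+1}
        j, w = bin_index(g(x)), 1 / abs_dg(x)
        lo[i][j] = min(lo[i][j], w)
        A_hi[i][j] = max(A_hi[i][j], w)
A_lo = [[0.0 if lo[i][j] == INF else lo[i][j] for j in range(4)] for i in range(4)]

def bound(A, n):
    """1 A^n e_H with e_H = [1/4, 0, 0, 0]."""
    v = [0.25, 0.0, 0.0, 0.0]
    for _ in range(n):
        v = [sum(a * x for a, x in zip(row, v)) for row in A]
    return sum(v)
```

With these approximate matrices, `bound(A_lo, n)` and `bound(A_hi, n)` bracket $\mu(E^n(g_H))$ in the spirit of (6.7); the grid entries come out close to the values $2/11 \approx 0.18$, $0.29$, and $4$ quoted in Example 6.4.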

Proof. Similar to the proof of Theorem 6.1, let

$$\Delta_{i_0 i_1 \dots i_n} = \{x \in I : f^k(x) \in \Delta_{i_k} \text{ for } 0 \le k \le n\},$$

where $f_H : I \to I$ has the Markov partition $\Delta = \{\Delta_i\}_{i=1}^m$ and $H = \bigcup_{i \in \mathcal I} \Delta_i$ for some $\mathcal I \subseteq M$. Additionally, let $\underline z_{ij} = \inf_{x \in \Delta_{ij}} |f'(x)|^{-1}$ whenever $\Delta_{ij} \ne \emptyset$. Assuming $\Delta_{i_0 i_1} \ne \emptyset$, then $\mu(\Delta_{i_0 i_1}) \ge \underline z_{i_0 i_1} \mu(\Delta_{i_1})$. Continuing inductively, it follows that if $\Delta_{i_0 i_1 \dots i_n} \ne \emptyset$, then

$$\mu(\Delta_{i_0 i_1 \dots i_n}) \ge \left( \prod_{k=0}^{n-1} \underline z_{i_k i_{k+1}} \right) \mu(\Delta_{i_n}). \tag{6.9}$$

By definition, the measure of the subset of $I$ that escapes the system at time $n$ is

$$\mu(E^n(f_H)) = \sum_\omega \mu(\Delta_\omega) = \sum_{\ell=1}^m \left( \sum_{\omega_\ell} \mu(\Delta_{\omega_\ell}) \right), \tag{6.10}$$

where the last sum is taken over all sequences $\omega_\ell = i_0 i_1 \dots i_n$ in which $i_0 = \ell$, $\Delta_{i_k} \not\subseteq H$ for $0 \le k < n$, $\Delta_{i_n} \subseteq H$, and $\Delta_{\omega_\ell} \ne \emptyset$.

Following the proof of Theorem 6.1, we have

$$(\underline A_H^n e_H)_\ell = \sum_{\omega_\ell} \left( \prod_{k=0}^{n-1} (\underline A_H)_{i_k i_{k+1}} \right) (e_H)_{i_n}, \tag{6.11}$$

where the sum is taken over all $\omega_\ell = i_0 i_1 \dots i_n$. Here, similar to (6.10), the index $i_0$ is equal to $\ell$, $\Delta_{i_k} \not\subseteq H$ for $0 \le k < n$, $\Delta_{i_n} \subseteq H$, and $\Delta_{\omega_\ell} \ne \emptyset$.

Since $(\underline A_H)_{i_k i_{k+1}} = \underline z_{i_k i_{k+1}}$ for each $0 \le k < n$, under the assumption that $\Delta_{\omega_\ell} \ne \emptyset$, it follows from (6.9) and (6.11) that

$$(\underline A_H^n e_H)_\ell = \sum_{\omega_\ell} \left( \prod_{k=0}^{n-1} \underline z_{i_k i_{k+1}} \right) \mu(\Delta_{i_n}) \le \sum_{\omega_\ell} \mu(\Delta_{\omega_\ell}).$$

By use of (6.10), we have

$$\mathbf 1\, \underline A_H^n e_H = \sum_{\ell=1}^m \left( \sum_{\omega_\ell} \left( \prod_{k=0}^{n-1} \underline z_{i_k i_{k+1}} \right) \mu(\Delta_{i_n}) \right) \le \sum_{\ell=1}^m \left( \sum_{\omega_\ell} \mu(\Delta_{\omega_\ell}) \right) = \mu(E^n(f_H)).$$

Similarly, one can show that $\mathbf 1\, \overline A_H^n e_H \ge \mu(E^n(f_H))$. It then follows from the inequalities $\underline E^n(f_H) \le \mu(E^n(f_H)) \le \overline E^n(f_H)$ that $\underline T^n(f_H) \le \mu(T^n(f_H)) \le \overline T^n(f_H)$. This completes the proof. □

Theorem 6.2 allows us to bound the measure of those points that escape through $H$ at time $n$ and before time $n + 1$ in the open system $f_H : I \to I$. If the matrices $\underline A_H$ and $\overline A_H$ are nondefective, then we have the following result, similar to Corollary 6.3.

Corollary 6.5. Let $f_H \in \mathscr N$ and suppose both $\underline A_H$ and $\overline A_H$ are nondefective with eigenpairs $\{(\underline\lambda_1, \underline v_1), \dots, (\underline\lambda_m, \underline v_m)\}$ and $\{(\overline\lambda_1, \overline v_1), \dots, (\overline\lambda_m, \overline v_m)\}$, respectively, where no eigenvalue is equal to 1. Then for each $n \ge 0$,

$$\sum_{i=1}^m \underline c_i \underline s_i\, \underline\lambda_i^{\,n} \le \mu(E^n(f_H)) \le \sum_{i=1}^m \overline c_i \overline s_i\, \overline\lambda_i^{\,n}; \text{ and} \tag{6.12}$$

$$\sum_{i=1}^m \underline c_i \underline s_i \left( \frac{1 - \underline\lambda_i^{\,n+1}}{1 - \underline\lambda_i} \right) \le \mu(T^n(f_H)) \le \sum_{i=1}^m \overline c_i \overline s_i \left( \frac{1 - \overline\lambda_i^{\,n+1}}{1 - \overline\lambda_i} \right), \tag{6.13}$$

where $\underline s_i = \mathbf 1\, \underline v_i$, $\overline s_i = \mathbf 1\, \overline v_i$, $e_H = \sum_{i=1}^m \underline c_i \underline v_i$, and $e_H = \sum_{i=1}^m \overline c_i \overline v_i$.

The proof of Corollary 6.5 is analogous to the proof of Corollary 6.3. Also, the upper and lower bounds given in (6.12) are the quantities $\overline E^n(f_H)$ and $\underline E^n(f_H)$, respectively. Moreover, the upper and lower bounds given in (6.13) are $\overline T^n(f_H)$ and $\underline T^n(f_H)$, respectively.

Fig. 6.5 The transition graph $\Gamma_H$ (right) of the open system $g_H : I \to I$ (left) in Example 6.4

Example 6.4. Consider the function $g : I \to I$ given by

$$g(x) = \begin{cases} \frac{11}{2} x - 21 x^2 + 28 x^3, & 0 \le x \le 1/2, \\ \frac{11}{2}(1 - x) - 21(1 - x)^2 + 28(1 - x)^3, & 1/2 < x \le 1, \end{cases}$$

with Markov partition $\Delta = \{(0, 1/4], (1/4, 1/2], (1/2, 3/4], (3/4, 1]\}$ and $H = (0, 1/4]$. The function $g : I \to I$ can be considered a nonlinear version of the tent map $f : I \to I$ in Example 6.1.

For the open system $(g_H, I)$, one can compute that $\Delta_{23} = (1/4, 0.44]$, $\Delta_{24} = (0.44, 1/2]$, $\Delta_{34} = (1/2, 0.55]$, $\Delta_{33} = (0.55, 3/4]$, $\Delta_{42} = (3/4, 0.94]$, and $\Delta_{41} = (0.94, 1]$. From this, we find that

$$\underline A_H = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0.29 & 2/11 \\ 0 & 0 & 0.29 & 2/11 \\ 2/11 & 0.29 & 0 & 0 \end{bmatrix} \quad \text{and} \quad \overline A_H = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 4 & 0.29 \\ 0 & 0 & 4 & 0.29 \\ 0.29 & 4 & 0 & 0 \end{bmatrix}.$$

Since $\underline A_H$ and $\overline A_H$ are nondefective, one can compute using Corollary 6.5 that

$$0.15\, \underline\lambda_1^{\,n} + 0.15\, \underline\lambda_2^{\,n} - 0.6 \le \mu(E^n(g_H)) \le -0.02\, \overline\lambda_1^{\,n} + 0.02\, \overline\lambda_2^{\,n} + 0.25, \tag{6.14}$$

where $\underline\lambda_1 = 0.41$, $\underline\lambda_2 = -0.12$, $\overline\lambda_1 = 4.27$, and $\overline\lambda_2 = -0.27$. By plotting the inequalities in (6.14), we have the graphs shown in Fig. 6.6. Here the shaded area indicates the region in which $\mu(E^n(g_H))$ must lie.


Fig. 6.6 The upper bounds $\overline E^n(g_H)$ and lower bounds $\underline E^n(g_H)$ of $\mu(E^n(g_H))$ are shown for the open system $g_H : I \to I$ considered in Example 6.4

Suppose $f_H \in \mathscr N$, in which $H$ is made up of a number of disjoint holes $H_1, \dots, H_\ell$. We can again ask through which one of these holes escape is fastest. To answer this, we let

$$\underline E_i^n(f_H) = \mathbf 1\, \underline A_H^n e_{H_i} \quad \text{and} \quad \overline E_i^n(f_H) = \mathbf 1\, \overline A_H^n e_{H_i};$$

$$\underline T_i^n(f_H) = \mathbf 1 \left( \sum_{j=0}^n \underline A_H^j \right) e_{H_i} \quad \text{and} \quad \overline T_i^n(f_H) = \mathbf 1 \left( \sum_{j=0}^n \overline A_H^j \right) e_{H_i}.$$

As a direct consequence of the proof of Theorem 6.2, we have the following result.

Corollary 6.6. Let $f_H \in \mathscr N$, where $H = \bigcup_{i=1}^\ell H_i$ and $H_1, \dots, H_\ell$ are disjoint. Then

$$\underline E_i^n(f_H) \le \mu(E_i^n(f_H)) \le \overline E_i^n(f_H); \text{ and}$$

$$\underline T_i^n(f_H) \le \mu(T_i^n(f_H)) \le \overline T_i^n(f_H)$$

for each $1 \le i \le \ell$ and $n \ge 0$.

Similarly to Corollary 6.4, Corollary 6.6 allows us to bound the Lebesgue measure of the set that escapes through a specific part of $H$.

We now turn our attention to finding ways of improving the estimates of $\mu(E^n(f_H))$ and $\mu(T^n(f_H))$ found in Theorem 6.2.

6.4 Improved Escape Estimates

In this section, we define a delayed first return map of an open system $f_H \in \mathscr N$, which we will use to improve the escape estimates given in Theorem 6.2. A key step in this procedure is to choose a particular vertex set of $\Gamma_H$ over which this map will be defined.


Definition 6.5. Let $H = \bigcup_{i \in \mathcal I} \Delta_i$ for some $\mathcal I \subseteq M$ and let $\Gamma_H = (V, E_H)$. The set $S \subseteq V = \{v_1, \dots, v_m\}$ is an open structural set of $\Gamma_H$ if $v_i \in S$ for $i \in \mathcal I$ and the graph $\Gamma_H|_{\bar S}$ has no cycles.

Open structural sets differ from the complete structural sets introduced in Chap. 2 in two ways. The first is that each vertex $v_i$ that corresponds to a partition element $\Delta_i \subseteq H$ must belong to this set. The second is that an open structural set does not depend on the edge weights of $\Gamma_H$ but only on its graph structure.

For the open system $f_H : I \to I$, we let $\mathrm{st}(\Gamma_H)$ denote the set of all open structural sets of $\Gamma_H$. If $S \in \mathrm{st}(\Gamma_H)$, we let $\mathcal I_S = \{i \in M : v_i \in S\}$ be the index set of $S$ and $\Delta_S = \bigcup_{i \in \mathcal I_S} \Delta_i$.
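Checking whether a given vertex set is an open structural set is a small graph computation: it must contain every hole vertex, and the subgraph induced on its complement must be acyclic. The sketch below (our own helper names, not from the book) verifies this with a depth-first search, using the transition graph $\Gamma_H$ of the tent map of Example 6.1.

```python
def has_cycle(vertices, edges):
    """Detect a directed cycle in the subgraph induced on `vertices`."""
    adj = {v: [] for v in vertices}
    for u, w in edges:
        if u in adj and w in adj:
            adj[u].append(w)
    state = {v: 0 for v in vertices}   # 0 = new, 1 = on DFS stack, 2 = done
    def dfs(v):
        state[v] = 1
        for w in adj[v]:
            if state[w] == 1 or (state[w] == 0 and dfs(w)):
                return True
        state[v] = 2
        return False
    return any(state[v] == 0 and dfs(v) for v in vertices)

def is_open_structural(S, holes, V, E):
    """S is open structural if it contains the hole vertices and
    Gamma_H restricted to the complement of S has no cycles."""
    return holes <= S and not has_cycle(V - S, E)

# Gamma_H of the tent map of Example 6.1 (hole vertex v1; no edges leave v1)
V = {1, 2, 3, 4}
E = [(2, 3), (2, 4), (3, 3), (3, 4), (4, 1), (4, 2)]

ok1 = is_open_structural({1, 3, 4}, {1}, V, E)   # True: complement {v2} is acyclic
ok2 = is_open_structural({1, 3}, {1}, V, E)      # False: v2 -> v4 -> v2 is a cycle
```

For instance, $S = \{v_1, v_3, v_4\}$ is an open structural set, while $S = \{v_1, v_3\}$ is not, since $v_2 \to v_4 \to v_2$ is a cycle in the complement.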

Definition 6.6. Let $S \in \mathrm{st}(\Gamma_H)$. For $x \in I$, we let $\sigma(x) = i_0 i_1 \dots i_t$, where $i_j = k$ if $f_H^j(x) \in \Delta_k$ and $t$ is the smallest integer $t > 0$ such that $f_H^t(x) \in \Delta_S$. The set

$$\Omega_S = \{\sigma : \sigma = \sigma(x) \text{ for some } x \in I \setminus H\}$$

consists of the admissible sequences of $f_H$ with respect to $S$.

For $x \in I$, we say that $\sigma(x) = i_0 i_1 \dots i_t$ has length $|\sigma(x)| = t$. The reason that we have $|\sigma(x)| < \infty$ is that the graph $\Gamma_H|_{\bar S}$ has no cycles. Hence, after a finite number of steps, $f_H^t(x)$ must enter $\Delta_S$.

Definition 6.7. Let $S \in \mathrm{st}(\Gamma_H)$. For $x_0 \in I$ and $k \ge 0$, we inductively define the function $x_{k+1} = Rf_S(x_k, \dots, x_0)$, where

$$x_{k+1} = \begin{cases} f_H^{|\sigma(x_k)|}(x_k) & \text{if } x_{k-i} = x_k \text{ for each } 0 \le i \le |\sigma(x_k)| - 1, \\ x_k & \text{otherwise.} \end{cases}$$

The function $Rf_S : I^{k+1} \to I$ is called the delayed first return map of $f_H$ with respect to $S$. The sequence $x_0, x_1, x_2, \dots$ is the orbit of $x_0$ under $Rf_S$.
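To make the delay mechanism concrete, the following sketch (again ours, with hypothetical helper names) simulates an orbit of $Rf_S$ for the tent map with hole $H = (0, 1/4]$ and the open structural set $S = \{v_1, v_3, v_4\}$: the orbit waits at a point $x$ for $|\sigma(x)| - 1$ steps and then jumps to $f_H^{|\sigma(x)|}(x)$.

```python
import math

def f_H(x):
    """Open tent map: points in the hole H = (0, 1/4] are fixed."""
    if x <= 0.25:
        return x
    return 2 * x if x <= 0.5 else 2 - 2 * x

def part_index(x):
    """Index in {1, 2, 3, 4} of the partition element containing x in (0, 1]."""
    return max(1, math.ceil(4 * x))

def sigma_length(x, S):
    """|sigma(x)|: smallest t > 0 with f_H^t(x) in Delta_S (finite because
    Gamma_H restricted to the complement of S has no cycles)."""
    t, y = 1, f_H(x)
    while part_index(y) not in S:
        t, y = t + 1, f_H(y)
    return t

def iterate(x, t):
    for _ in range(t):
        x = f_H(x)
    return x

def delayed_orbit(x0, S, n):
    """First n + 1 points x_0, ..., x_n of the orbit under Rf_S."""
    orbit = [x0]
    while len(orbit) <= n:
        x = orbit[-1]
        t = sigma_length(x, S)
        orbit.extend([x] * (t - 1))      # the delay: repeat x for t - 1 steps
        orbit.append(iterate(x, t))      # then return to Delta_S
    return orbit[: n + 1]

S = {1, 3, 4}
orbit = delayed_orbit(0.3, S, 6)   # 0.3 -> 0.6 -> 0.8 -> (wait) -> 0.8 -> ...
```

A point of $\Delta_4$ whose image lies in $\Delta_2 \notin S$ waits one step before jumping two $f_H$-steps at once; a point that falls into the hole stays there, reflecting that escape times under $f_H$ and $Rf_S$ agree (Lemma 6.1 below).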

If $T = \max_{x \in I} |\sigma(x)|$, then, strictly speaking, the map is $x_{k+1} = Rf_S(x_k, \dots, x_{k-\tau})$ for some $\tau < T$. As can be seen, the map $Rf_S$ acts almost like a first return map of $f_H$ to the set $\Delta_S$. The difference is that a return to $\Delta_S$ does not happen instantaneously (as would happen in the case of a first return map) but is delayed, so that the trajectory of a point under $f_H$ coincides with the trajectory under $Rf_S$ after a return to $\Delta_S$.

We say that the map $Rf_S : I^{k+1} \to I$ generates the open dynamical system $(Rf_S, I)$. For $n \ge 0$, we let $Rf_S^n(x_0) = x_n$ and define the sets

$$E^n(Rf_S) = \{x \in I : Rf_S^n(x) \in H,\ Rf_S^k(x) \notin H,\ 0 \le k < n\}; \text{ and}$$

$$T^n(Rf_S) = \{x \in I : Rf_S^k(x) \in H \text{ for some } k,\ 0 \le k \le n\}.$$

Lemma 6.1. If $S \in \mathrm{st}(\Gamma_H)$ and $n \ge 0$, then $E^n(f_H) = E^n(Rf_S)$.


Proof. For $x_0 \in I$, let $\tilde\sigma(x_0) = i_0 i_1 \dots$, where $i_j = k$ if $f_H^j(x_0) \in \Delta_k$. Choosing $S \in \mathrm{st}(\Gamma_H)$, let $\tilde\sigma_S(x_0) = \ell_0 \ell_1 \dots$, where $\ell_j = k$ if $Rf_S^j(x_0) \in \Delta_k$. Let $t > 0$ be the smallest number such that $i_t \in \mathcal I_S$. Then $\sigma(x_0) = i_0 i_1 \dots i_t$, and Definition 6.7 implies that $Rf_S^t(x_0) = f_H^t(x_0)$. Therefore, $i_t = \ell_t$, where $i_t \in \mathcal I_S$.

Continuing in this manner, it follows that $i_j = \ell_j$ for each $j$ with $i_j \in \mathcal I_S$. Since $H \subseteq \Delta_S$, the point $x_0$, if it escapes, will escape under both $f_H$ and $Rf_S$ at exactly the same time. This completes the proof. □

As a consequence of Lemma 6.1, one can use $(Rf_S, I)$ to study the escape in the open system $(f_H, I)$. However, because of the time delays involved, the weighted transition matrix of $(Rf_S, I)$ cannot be defined in the same way that we have defined $\underline A_H$ and $\overline A_H$.

To define a transition matrix of $(Rf_S, I)$, we need the following. For $S \in \mathrm{st}(\Gamma_H)$, let

$$M_S = M \cup \{\sigma; i : \sigma \in \Omega_S,\ 0 < i < |\sigma|\}.$$

If $\sigma = i_0 \dots i_t$, we identify the index $\sigma; 0$ with $i_0$ and the index $\sigma; t$ with $i_t$. We also let $\Delta_\sigma = \{x \in I : \sigma(x) = \sigma\}$ for each admissible sequence $\sigma \in \Omega_S$, which simply extends the notation given by (6.1) in Sect. 6.2.

Definition 6.8. For $S \in \mathrm{st}(\Gamma_H)$, let $\underline A_S$ be the matrix with rows and columns indexed by the elements of $M_S$, where

$$(\underline A_S)_{ij} = \begin{cases} \inf_{x \in \Delta_\sigma} |(f^{|\sigma|})'(x)|^{-1} & \text{if } i = \sigma; |\sigma| - 1,\ j = \sigma; |\sigma|, \text{ for some } \sigma \in \Omega_S, \\ 1 & \text{if } i = \sigma; k - 1,\ j = \sigma; k,\ k \ne |\sigma|, \text{ for some } \sigma \in \Omega_S, \\ 0 & \text{otherwise.} \end{cases} \tag{6.15}$$

We call $\underline A_S$ the lower transition matrix of $Rf_S$. The matrix $\overline A_S$, defined by replacing the infimum in (6.15) by the supremum, is the upper transition matrix of $Rf_S$.

Let $\mathbf 1_S$ be the $1 \times |M_S|$ vector given by

$$(\mathbf 1_S)_i = \begin{cases} 1 & \text{if } i \in M, \\ 0 & \text{otherwise.} \end{cases}$$

Let $e_S$ be the $|M_S| \times 1$ vector given by

$$(e_S)_i = \begin{cases} \mu(\Delta_i) & \text{if } i \in \mathcal I, \\ 0 & \text{otherwise.} \end{cases}$$


Lastly, for $n \ge 0$, let

$$\underline E^n(Rf_S) = \mathbf 1_S\, \underline A_S^n e_S \quad \text{and} \quad \overline E^n(Rf_S) = \mathbf 1_S\, \overline A_S^n e_S;$$

$$\underline T^n(Rf_S) = \mathbf 1_S \left( \sum_{i=0}^n \underline A_S^i \right) e_S \quad \text{and} \quad \overline T^n(Rf_S) = \mathbf 1_S \left( \sum_{i=0}^n \overline A_S^i \right) e_S.$$

Using these quantities, we give the following improved escape estimates.

Theorem 6.3. Let $f_H \in \mathscr N$ and suppose $S \in \mathrm{st}(\Gamma_H)$. If $n \ge 0$, then

$$\underline E^n(f_H) \le \underline E^n(Rf_S) \le \mu(E^n(f_H)) \le \overline E^n(Rf_S) \le \overline E^n(f_H); \text{ and}$$

$$\underline T^n(f_H) \le \underline T^n(Rf_S) \le \mu(T^n(f_H)) \le \overline T^n(Rf_S) \le \overline T^n(f_H).$$

Theorem 6.3 together with Lemma 6.1 implies that the escape of $f_H$ through $H$ is better approximated by considering any of its delayed first return maps $Rf_S$ than by considering $f_H$ itself. We now give a proof of Theorem 6.3.

Proof. For $S \in st(\Gamma_H)$, suppose $i \in M \setminus \mathcal{I}$ and $j \in \mathcal{I}$. Then

$$(A_S)_{ij}(e_S)_j = \begin{cases} \inf_{x \in I_{ij}} |f'(x)|^{-1} \mu(I_j) & \text{if } I_{ij} \neq \emptyset, \\ 0 & \text{otherwise} \end{cases} \leq \mu\{x \in I_i : f_H(x) \in I_j\}.$$

To show that a similar formula holds for larger powers of $A_S$, suppose $k \in I_S$. If $ik, kj \in \Omega_S$, then

$$(A_S)_{ik}(A_S)_{kj}(e_S)_j = \inf_{x \in I_{ik}} |f'(x)|^{-1} \inf_{x \in I_{kj}} |f'(x)|^{-1} \mu(I_j) \leq \mu\{x \in I_i : f_H(x) \in I_k,\ f_H^2(x) \in I_j\}.$$

If either $ik \notin \Omega_S$ or $kj \notin \Omega_S$, then $(A_S)_{ik}(A_S)_{kj}(e_S)_j = 0$.

Suppose $k \in M \setminus I_S$. If $ikj \in \Omega_S$, then $ikj;1 \in M_S$ and

$$(A_S)_{i,ikj;1}(A_S)_{ikj;1,j}(e_S)_j = 1 \cdot \inf_{x \in I_{ikj}} |(f^2(x))'|^{-1} \mu(I_j) \leq \mu\{x \in I_i : f_H(x) \in I_k,\ f_H^2(x) \in I_j\}.$$

If $ikj \notin \Omega_S$, then $ikj;1 \notin M_S$. In this case,

$$(A_S^2)_{ij}(e_S)_j = \sum_{k \in M_S} (A_S)_{ik}(A_S)_{kj}\, \mu(I_j) = \sum_{ik,\, kj \in \Omega_S} (A_S)_{ik}(A_S)_{kj}\, \mu(I_j) + \sum_{ikj \in \Omega_S} (A_S)_{i,ikj;1}(A_S)_{ikj;1,j}\, \mu(I_j)$$

$$\leq \sum_{k \in I_S \cup (M \setminus I_S)} \mu\{x \in I_i : f_H(x) \in I_k,\ f_H^2(x) \in I_j\} = \mu\{x \in I_i : f_H^2(x) \in I_j\}.$$

Page 181: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks

6.4 Improved Escape Estimates 167

Continuing in this manner, it follows that

$$(A_S^n)_{ij}(e_S)_j \leq \mu\{x \in I_i : f_H^n(x) \in I_j\} \tag{6.16}$$

for $i \in M \setminus \mathcal{I}$, $j \in \mathcal{I}$, and $n \geq 1$. Since $(e_S)_j = 0$ if $j \notin \mathcal{I}$, then for $n \geq 1$, equation (6.16) implies

$$\mathbf{1}_S A_S^n e_S = \sum_{i \in M} \sum_{j \in M_S} (A_S^n)_{ij}(e_S)_j \leq \sum_{i \in M \setminus \mathcal{I}} \mu\{x \in I_i : f_H^n(x) \in H\} = \mu\{x \in I \setminus H : f_H^n(x) \in H\}.$$

Since $\mathbf{1}_S A_S^0 e_S = \mu(H)$, then $\underline{E}_n(R_{f_S}) \leq E_n(f_H) \leq \overline{E}_n(R_{f_S})$ for $n \geq 0$, where the second inequality follows by the same argument with the matrix $\bar{A}_S$.

To show that $\underline{E}_n(f_H) \leq \underline{E}_n(R_{f_S})$, we again suppose that $i \in M \setminus \mathcal{I}$ and $j \in \mathcal{I}$. In this case, we have

$$(A_H)_{ij}(e_H)_j = \begin{cases} \inf_{x \in I_{ij}} |f'(x)|^{-1} \mu(I_j) & \text{if } I_{ij} \neq \emptyset, \\ 0 & \text{otherwise} \end{cases} = (A_S)_{ij}(e_S)_j.$$

For larger matrix powers, we have

$$\sum_{k \in M \setminus I_S} (A_H)_{ik}(A_H)_{kj}\, \mu(I_j) = \sum_{k \in M \setminus I_S} \inf_{x \in I_{ik}} |f'(x)|^{-1} \inf_{x \in I_{kj}} |f'(x)|^{-1} \mu(I_j)$$

$$\leq \sum_{k \in M \setminus I_S} 1 \cdot \inf_{x \in I_{ikj}} |(f^2(x))'|^{-1} \mu(I_j) = \sum_{ikj \in \Omega_S} (A_S)_{i,ikj;1}(A_S)_{ikj;1,j}(e_S)_j.$$

From this, it follows that

$$(A_H^2)_{ij}(e_H)_j = \sum_{k \in I_S} (A_H)_{ik}(A_H)_{kj}\, \mu(I_j) + \sum_{k \in M \setminus I_S} (A_H)_{ik}(A_H)_{kj}\, \mu(I_j)$$

$$\leq \sum_{ik,\, kj \in \Omega_S} (A_S)_{ik}(A_S)_{kj}\, \mu(I_j) + \sum_{ikj \in \Omega_S} (A_S)_{i,ikj;1}(A_S)_{ikj;1,j}\, \mu(I_j) = (A_S^2)_{ij}(e_S)_j.$$

Page 182: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


Fig. 6.7 The delayed first return map $R_{g_S} : I \to I$ in Example 6.5, where $R_{g_S}$ is delayed on the set $I_{42}$, shown in red

Again continuing in this manner, we have $(A_H^n)_{ij}(e_H)_j \leq (A_S^n)_{ij}(e_S)_j$ for $i \in M \setminus \mathcal{I}$, $j \in \mathcal{I}$, and $n \geq 1$. Since $\mathbf{1}_H A_H^0 e_H = \mu(H) = \mathbf{1}_S A_S^0 e_S$, it follows that

$$\mathbf{1}_H A_H^n e_H = \sum_{i \in M} \sum_{j \in \mathcal{I}} (A_H^n)_{ij}(e_H)_j \leq \sum_{i \in M} \sum_{j \in M_S} (A_S^n)_{ij}(e_S)_j = \mathbf{1}_S A_S^n e_S$$

for $n \geq 0$. Hence, $\underline{E}_n(f_H) \leq \underline{E}_n(R_{f_S})$.

By using the same argument with the matrix $\bar{A}_S$, we obtain the inequality $\overline{E}_n(R_{f_S}) \leq \overline{E}_n(f_H)$. The second set of inequalities in Theorem 6.3 then follows, which completes the proof. □

Example 6.5. Consider the open system $g_H : I \to I$ given in Example 6.4. Observe that the vertex set $S = \{v_1, v_3, v_4\}$ is an open structural set of $\Gamma_H$, since $v_1$ is the vertex that corresponds to $H$, and the graph $\Gamma_H|_{\bar{S}} = \{v_2\}$ has no cycles (see Fig. 6.3).

The delayed first return map $R_{g_S}$ is then written as

$$R_{g_S}(x_k) = \begin{cases} g_H(x_k) & \text{if } x_k \notin I_{42}, \\ g_H^2(x_k) & \text{if } x_k, x_{k-1} \in I_{42}, \\ x_k & \text{otherwise.} \end{cases}$$

The map $R_{g_S}$ is shown in Fig. 6.7 as a one-dimensional map but is colored red on $I_{42}$ to indicate that in fact, the system is delayed on this set. That is, $g_H(x)$ is shown in blue and $g_H^2(x)$ is shown in red, where the trajectories of $R_{g_S}$ stay in $I_{42}$ for two time steps before leaving.

Page 183: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks

Fig. 6.8 Comparison between $\underline{E}_n(g_H)$ and $\underline{E}_n(R_{g_S})$ (left, lower bounds) and between $\overline{E}_n(g_H)$ and $\overline{E}_n(R_{g_S})$ (right, upper bounds) from Examples 6.2 and 6.4

To compute the upper and lower transition matrices of $R_{g_S}$, note that the system's admissible sequences are given by $\Omega_S = \{23, 24, 33, 34, 424, 423, 41\}$, implying

$$M_S = \{1, 2, 3, 4, 424;1, 423;1\}. \tag{6.17}$$

From $\Omega_S$, we compute that $I_{424} = (3/4, 0.85]$ and $I_{423} = (0.85, 0.94]$. The other partition elements have been computed in Example 6.4. Using the order given in (6.17), we have the lower and upper transition matrices

$$A_S = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.29 & 0.18 & 0 & 0 \\ 0 & 0 & 0.29 & 0.18 & 0 & 0 \\ 0.18 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0.26 & 0 & 0 \\ 0 & 0.26 & 0 & 0 & 0 & 0 \end{bmatrix}, \quad \bar{A}_S = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 4 & 0.29 & 0 & 0 \\ 0 & 0 & 4 & 0.29 & 0 & 0 \\ 0.29 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0.73 & 0 & 0 \\ 0 & 1.18 & 0 & 0 & 0 & 0 \end{bmatrix}.$$

Here, the vectors $\mathbf{1}_S$ and $e_S$ are given by

$$\mathbf{1}_S = [1, 1, 1, 1, 0, 0] \quad \text{and} \quad e_S = [1/4, 0, 0, 0, 0, 0]^T.$$
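With these ingredients, the bounds $\underline{E}_n(R_{g_S}) = \mathbf{1}_S A_S^n e_S$ and $\overline{E}_n(R_{g_S}) = \mathbf{1}_S \bar{A}_S^n e_S$ are direct matrix computations. The following NumPy sketch uses the rounded entries displayed above, so the resulting numbers only approximate those plotted in Fig. 6.8.

```python
import numpy as np

# Lower and upper transition matrices of R_{g_S}, entries rounded as above
A_lower = np.array([
    [0,    0,    0,    0,    0, 0],
    [0,    0,    0.29, 0.18, 0, 0],
    [0,    0,    0.29, 0.18, 0, 0],
    [0.18, 0,    0,    0,    1, 1],
    [0,    0,    0,    0.26, 0, 0],
    [0,    0.26, 0,    0,    0, 0]])
A_upper = np.array([
    [0,    0,    0,    0,    0, 0],
    [0,    0,    4,    0.29, 0, 0],
    [0,    0,    4,    0.29, 0, 0],
    [0.29, 0,    0,    0,    1, 1],
    [0,    0,    0,    0.73, 0, 0],
    [0,    1.18, 0,    0,    0, 0]])
one_S = np.array([1, 1, 1, 1, 0, 0], dtype=float)
e_S = np.array([0.25, 0, 0, 0, 0, 0], dtype=float)

for n in range(1, 7):
    lo = one_S @ np.linalg.matrix_power(A_lower, n) @ e_S
    hi = one_S @ np.linalg.matrix_power(A_upper, n) @ e_S
    # lo <= hi for every n, since A_lower <= A_upper entrywise
    # and all entries are nonnegative
```

Since the matrices are nonnegative and entrywise ordered, every power satisfies $A_S^n \leq \bar{A}_S^n$, so the computed lower bound never exceeds the upper bound.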

Figure 6.8 demonstrates that $\underline{E}_n(g_H) < \underline{E}_n(R_{g_S})$ and $\overline{E}_n(R_{g_S}) < \overline{E}_n(g_H)$ for a number of values of $n$ and indicates the extent to which using the delayed first return map $R_{g_S}$ improves our estimates of $E_n(g_H)$. The shaded regions in these graphs represent the differences between these respective upper and lower estimates.

Creating a delayed first return map $R_{f_S}$ is similar in spirit to the procedure, described in Chap. 3, of constructing the time-delayed version $(H_S F, X_S^T)$ of the dynamical network $(F, X)$. Recall that the time-delayed network $(H_S F, X_S^T)$ is created by absorbing the dynamics of those elements that are not indexed by $I_S$ into those that are. The resulting time-delayed network is then restricted to the smaller set of elements $S$, which allows us to obtain better estimates of the original network's stability (see Definition 3.14 and Theorem 3.8).

Page 184: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks

Similarly, when we create the delayed first return map $R_{f_S}$, we choose a set $S \in st(\Gamma_H)$ over which we concentrate the dynamics of the open system $(f_H, I)$. It is this concentration of information in the time-delayed system $(R_{f_S}, I)$ and the time-delayed dynamical network $(H_S F, X_S^T)$ that allows for the improved estimates we obtain.


Page 187: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks

Index

A
absolute row sum, 94
adjacency matrix, viii, xii, 1, 23, 73
adjacent cycle, 117
adjacent vertex, 30
admissible sequence, 164

B
basic structural set, 83
bounded radial transformation, 73, 78
branch, xi, 24
branch product, 24, 47
branch set, 24, 39, 45
Brauer-type region, 106
Brualdi-type region, 113

C
characteristic polynomial, 2
combinatorial Laplacian matrix, 123
complete structural set, 45, 76, 125
cycle, 23
cycle set, 112, 118

D
defective matrix, 154
degree of a rational function, 9
delayed first return map, 164, 166
directed graph, viii, 20, 21
dynamical network, ix, 53, 55
dynamical system, 53

E
edge set, 20
edge weight, ix, 20, 39, 46
eigenvalue multiplicity, 3
eigenvalue region, 91, 116
eigenvalues, viii, 2–4, 21
eigenvector, 130
escape, 147

F
final vertex set, 31, 34
fully reduced matrix, 102

G
Gershgorin-type region, 95
global attractor, 64
global stability, 55
globally attracting fixed point, 55
graph isomorphism, 36
graph of interactions, vii, ix, 20, 53, 72

H
hole, xiv, 147, 148

I
implicit time delay, 81
independent branches, 42
induced subgraph, 23
interaction, 55
interior vertices, 22, 24, 42


Page 188: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


intrinsic stability, 59, 67, 68, 70, 71, 84
inverse eigenvalues, 3, 102, 139
inverse resolvent, 139
inverse spectrum, 3, 7, 15, 21, 102, 104
invertible matrix, 4
irreducible matrix, 73
isomorphic graphs, 36
isoradial expansion, 78
isoradial transformation, 73
isospectral expansion, 42, 46, 77
isospectral graph reduction, 19, 24, 28, 36
isospectral graph reductions, 31
isospectral matrix reduction, viii, 5–7, 10, 12, 91, 133, 136

L
local systems, 55, 62
loop, xi, 23
lower transition matrix, 165

M
Markov partition, 148
merged graph, 48
multiple-type delay, 54, 69, 70, 72

N
network expansion, 87, 89
network restriction, 83, 86, 87, 89
nondefective matrix, 154
nonnegative graph, 73
nonnegative matrix, 73
nonzero eigenvalues, 45, 49
nonzero spectrum, 45
normalized Laplacian matrix, 123

O
open dynamical system, xiv, 147, 148
open structural set, 164
orbit, 64
original Brualdi-type region, 115

P
parallel edges, vii, 48
path, 22
permutation index, 26
permutation matrix, 26, 45
polynomial extension, 94
pseudoeigenvalue, 130, 144
pseudoeigenvector, 130
pseudoresonance, 139, 142
pseudospectrum, xiii, 129, 130, 132, 136

R
resolvent, 130, 132
resonance, 141

S
Schur complement, 5
semiring, 45
sequence of reductions, x, 12, 32–34
sequential reduction, 33, 34
simple graph, 123
single-type delay, 53, 69–72
spectral equivalence, xi, 36
spectral inverse, 15, 16, 102–104, 139
spectral radius, xii, 45, 49, 56, 60, 65, 73, 74, 125, 153
spectrum, viii, 1, 3, 7, 19, 21
stability, xii, 53, 55, 56, 61, 65, 71, 77, 84, 87, 89
stability matrix, 56, 59, 60, 65
stable, 68, 70
strong cycle, 112
strongly connected, 73, 112
strongly connected components, 73, 112
structural set, xi, 23
submatrix, 5, 8, 25
survival probability, xiv, 149, 159
symmetric matrix, 30, 124

T
time-delayed dynamical network, 64
time-delayed interaction, 64
tolerance, 130
topology, vii, ix, 20, 38, 53
transition graph, 150

U
undelayed dynamical network, 67
undirected graph, viii, 21, 30, 31, 123
unweighted adjacency matrix, 19
unweighted graph, 21, 24
upper transition matrix, 165

V
vertex set, 20, 23

Page 189: Isospectral Transformations: A New Approach to Analyzing Multidimensional Systems and Networks


W
weak cycle, 112
weight set, 21, 39, 45, 47
weight-preserving isospectral transformation, 39, 41, 42
weighted adjacency matrix, viii, 19, 21
weighted graph, 21
weighted transition matrix, 150, 159