A fast and stable test to check if a weakly diagonally dominant matrix is a nonsingular M-matrix

By Parsiad Azimzadeh

Abstract

We present a test for determining if a substochastic matrix is convergent. By establishing a duality between weakly chained diagonally dominant (w.c.d.d.) L-matrices and convergent substochastic matrices, we show that this test can be trivially extended to determine whether a weakly diagonally dominant (w.d.d.) matrix is a nonsingular M-matrix. The test’s runtime is linear in the order of the input matrix if it is sparse, and quadratic if it is dense. This is a partial strengthening of the cubic test in [J. M. Peña, A stable test to check if a matrix is a nonsingular M-matrix, Math. Comp. 73 (2004), no. 247, 1385–1392]. As a by-product of our analysis, we prove that a nonsingular w.d.d. M-matrix is a w.c.d.d. L-matrix, a fact whose converse has been known since at least 1964.

1. Introduction

Substochastic matrices are real matrices with nonnegative entries whose row-sums are at most one. We establish two results relating to this family:

(i)

To each substochastic matrix $B$ we associate a possibly infinite index of contraction $\operatorname{con} B$ and show that, when $B$ is square, for each nonnegative integer $k$, $B^k$ is a contraction in the infinity norm (i.e., $\|B^k\|_\infty < 1$) if and only if $k > \operatorname{con} B$.

(ii)

We show that the index of contraction of a sparse (resp., dense) square substochastic matrix is computable in time linear (resp., quadratic) in the order of the input matrix.

It follows immediately from (i) that a square substochastic matrix $B$ is convergent (i.e., $B^k \to 0$ as $k \to \infty$) if and only if its index of contraction is finite.

By establishing a duality between weakly chained diagonally dominant (w.c.d.d.) L-matrices and convergent substochastic matrices, we use point (ii) to obtain a test to determine whether a weakly diagonally dominant (w.d.d.) matrix is a nonsingular M-matrix. Previous work in this regard is the test in [15] to determine if an arbitrary matrix (not necessarily w.d.d.) is a nonsingular M-matrix, which has a cost asymptotically equivalent to Gaussian elimination (i.e., cubic in the order of the input matrix).

W.d.d. M-matrices arise naturally from discretizations of differential operators and appear in the Bellman equation for optimal decision making on a controlled Markov chain [3]. As such, these matrices have attracted a significant amount of attention from the scientific computing and numerical analysis communities.

W.c.d.d. matrices were first studied in a wonderful work by P. N. Shivakumar and K. H. Chew [18] in which they were proven to be nonsingular (see also [1] for a short proof). Various authors have recently studied the family of w.c.d.d. M-matrices, obtaining bounds on the infinity norm of their inverses (i.e., $\|A^{-1}\|_\infty$) [5, 10, 14, 19, 23]. While a w.c.d.d. matrix is w.d.d. by definition, the converse is not true in general (e.g., the zero matrix is w.d.d. but not w.c.d.d.).

It has long been known (possibly as early as 1964; see the work of J. H. Bramble and B. E. Hubbard [4]) that a w.c.d.d. L-matrix¹ is a nonsingular w.d.d. M-matrix. We obtain a proof of the converse as a by-product of our analysis. In particular, we establish that²

$$A \text{ is a nonsingular w.d.d. M-matrix} \iff A \text{ is a w.c.d.d. L-matrix}. \tag{1.1}$$

¹ In [4], the authors refer to w.c.d.d. L-matrices as matrices of positive type.

² Equation 1.1 remains true if we replace “L-matrix” by “Z-matrix with nonnegative diagonal entries”.

Equation 1.1 is also useful in that it gives a graph-theoretic characterization of nonsingular w.d.d. M-matrices by means of w.c.d.d. matrices. This characterization is often easier to use than the usual characterizations involving, say, inverse-positivity or positive principal minors [16].

We list a few other interesting recent results concerning w.c.d.d. matrices and M-matrices here: [11, 12, 13, 20, 22, 24, 25, 26].

Section 2 introduces and establishes results on substochastic matrices, M-matrices, and w.c.d.d. matrices. Section 3 gives the procedure to compute the index of contraction. Section 4 presents numerical experiments testing the efficacy of the procedure on randomly sampled matrices.

2. Matrix families

2.1. Substochastic matrices

Definition 2.1.

A substochastic matrix is a real $m \times n$ matrix $B = (b_{ij})$ with nonnegative entries (i.e., $b_{ij} \geq 0$) and row-sums at most one (i.e., $\sum_j b_{ij} \leq 1$ for each $i$). A stochastic (a.k.a. Markov) matrix is a substochastic matrix whose row-sums are exactly one.

Note that in our definition above, we do not require $B$ to be square.

Definition 2.2.

Let $A = (a_{ij})$ be an $m \times n$ complex matrix.

(i)

The digraph of $A$, denoted $\operatorname{gr} A$, is defined as follows:

(a)

If $A$ is square, $\operatorname{gr} A$ is a tuple $(V, E)$ consisting of the vertex set $V = \{1, \ldots, n\}$ and edge set $E \subseteq V \times V$ satisfying $(i, j) \in E$ if and only if $a_{ij} \neq 0$.

(b)

If $A$ is not square, $\operatorname{gr} A := \operatorname{gr} \tilde{A}$, where $\tilde{A}$ is the smallest square matrix obtained by appending rows or columns of zeros to $A$.

(ii)

A walk in $\operatorname{gr} A$ is a nonempty finite sequence of edges $(i_0, i_1)$, $(i_1, i_2)$, …, $(i_{k-1}, i_k)$ in $\operatorname{gr} A$. The set of all walks in $\operatorname{gr} A$ is denoted $\mathcal{W}(A)$.

(iii)

Let $w = (i_0, i_1), (i_1, i_2), \ldots, (i_{k-1}, i_k) \in \mathcal{W}(A)$. The length of $w$, denoted $\operatorname{len} w$, is the total number of edges in $w$ (i.e., $\operatorname{len} w = k$). $\operatorname{head}(w)$ (resp., $\operatorname{tail}(w)$) is the first (resp., last) vertex $i_0$ (resp., $i_k$) in $w$.

To simplify matters, hereafter we denote edges by $ij$ instead of $(i, j)$ and walks by $i_0 i_1 \cdots i_k$ instead of $(i_0, i_1)$, $(i_1, i_2)$, …, $(i_{k-1}, i_k)$. We use the terms “row” and “vertex” interchangeably.

Let $B$ be an $m \times n$ substochastic matrix. We define the sets

$$J(B) := \Big\{ i \colon \sum_j b_{ij} < 1 \Big\} \qquad \text{and} \qquad \mathcal{W}_i(B) := \{ w \in \mathcal{W}(B) \colon \operatorname{head}(w) = i \text{ and } \operatorname{tail}(w) \in J(B) \}.$$

It is understood that when we write $\mathcal{W}(B)$, we mean the set of walks in $\operatorname{gr} B$. Note that if $J(B)$ is empty, so too is $\mathcal{W}_i(B)$ for each $i$. We define the index of contraction associated with $B$ by

$$\operatorname{con} B := \sup_{i \notin J(B)} \inf \{ \operatorname{len} w \colon w \in \mathcal{W}_i(B) \} \tag{2.1}$$

subject to the conventions $\inf \varnothing = \infty$ and $\sup \varnothing = 0$. We will see shortly that a square matrix $B$ is convergent if and only if $\operatorname{con} B$ is finite.

Example 2.3.

The matrix $B$ in Figure 2.1 satisfies $J(B) = \{1\}$ and

$$\inf \{ \operatorname{len} w \colon w \in \mathcal{W}_i(B) \} = i - 1 \qquad \text{for each row } i > 1.$$

It follows that $\operatorname{con} B = n - 1$.
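For instance, here is a concrete instance of our own choosing (matching the structure of Figure 2.1 with $n = 3$, though the entries are not taken verbatim from the original figure):

$$B = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \qquad J(B) = \{1\}, \qquad \operatorname{con} B = 2,$$

for which

$$\|B^0\|_\infty = \|B\|_\infty = \|B^2\|_\infty = 1 \qquad \text{and} \qquad \|B^3\|_\infty = 0 < 1,$$

in agreement with the characterization of convergence given below (Theorem 2.5).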

An immediate consequence of the definition of the index of contraction follows.

Lemma 2.4.

Let $B$ be an $m \times n$ substochastic matrix. If $m \leq n$ (resp., $m > n$), $\operatorname{con} B$ is either infinite or strictly less than $n$ (resp., $m$).

Proof.

Suppose $m \leq n$. Let $i \notin J(B)$ and $w = i_0 i_1 \cdots i_k$ be a walk in $\mathcal{W}_i(B)$. Since $\operatorname{tail}(w) \in J(B)$, it follows that $i_k \leq m$. This implies that $i_\ell \leq m$ for all $\ell$ since, by definition, $\operatorname{gr} B$ has no edges of the form $ij$, where $i > m$. Now, suppose $\operatorname{len} w \geq n$. By the pigeonhole principle, we can find integers $p$ and $q$ such that $0 \leq p < q \leq k$ and $i_p = i_q$. That is, the walk contains a cycle (i.e., a subwalk starting and ending at the same vertex). “Removing” the cycle yields the new walk

$$w' = i_0 \cdots i_p \, i_{q+1} \cdots i_k$$

in $\mathcal{W}_i(B)$ satisfying $\operatorname{len} w' < \operatorname{len} w$. If $\operatorname{len} w' \geq n$, we can continue removing cycles until we arrive at a walk whose length is strictly less than $n$. The case of $m > n$ is handled similarly.

We are now ready to present our main result related to substochastic matrices. In the statement below, it is understood that if $B$ is a square matrix, $B^0 := I$.

Theorem 2.5.

Let $B$ be a square substochastic matrix. If $\operatorname{con} B$ is finite,

$$\|B^k\|_\infty = 1 \ \text{ for } \ 0 \leq k \leq \operatorname{con} B \qquad \text{and} \qquad \|B^k\|_\infty < 1 \ \text{ for } \ k > \operatorname{con} B.$$

Otherwise,

$$\|B^k\|_\infty = 1 \ \text{ for all nonnegative integers } k.$$

Before giving a proof, it is useful to record some consequences of the above.

Corollary 2.6.

Let $B$ be a square substochastic matrix. Then, its spectral radius is no larger than one. Moreover, the following statements are equivalent:

(i)

$\operatorname{con} B$ is finite.

(ii)

$B$ is convergent.

(iii)

$I - B$ is nonsingular.

The above can be considered a generalization of the well-known result that a square stochastic (a.k.a. Markov) matrix has spectral radius no larger than one and at least one eigenvalue equal exactly to one (recall that for any matrix $C$, $I - C$ is singular if and only if $1$ is an eigenvalue of $C$).

Proof.

The claim that the spectral radius of $B$ is no larger than one is a direct consequence of the fact that $\rho(B) \leq \|B\|_\infty \leq 1$.

(i) $\Rightarrow$ (ii) follows immediately from Theorem 2.5, while (ii) $\Rightarrow$ (iii) is true for any matrix. We prove below, by contrapositive, the claim (iii) $\Rightarrow$ (i).

Suppose $\operatorname{con} B$ is infinite. Let $Z$ be the set of rows $i \notin J(B)$ for which $\mathcal{W}_i(B)$ is empty. Due to our assumptions, there is at least one such row and hence $Z$ is nonempty. Without loss of generality, we may assume $Z = \{1, \ldots, p\}$ for some $p \leq n$, where $n$ is the order of $B$ (otherwise, replace $B$ by $PBP^\top$, where $P$ is an appropriately chosen permutation matrix). Let $e$ be the column vector whose entries are all one. If $p = n$, each row-sum of $B$ is one (i.e., $Be = e$ so that $(I - B)e = 0$). Otherwise, since a row $i \in Z$ has row-sum one and no edges into rows outside of $Z$, $B$ has the block structure

$$B = \begin{pmatrix} B_{11} & 0 \\ B_{21} & B_{22} \end{pmatrix}.$$

The partition above ensures that for each row $i$ of the submatrix $B_{22}$, $i \in J(B_{22})$ or $\mathcal{W}_i(B_{22})$ is nonempty. Therefore, $\operatorname{con} B_{22}$ is finite, and hence (by the implications (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) already established) the linear system $(I - B_{22})x = B_{21}e$ has a unique solution $x$. Moreover, since the row-sums of $B_{11}$ are one, $(I - B_{11})e = 0$. Therefore,

$$(I - B) \begin{pmatrix} e \\ x \end{pmatrix} = \begin{pmatrix} (I - B_{11})e \\ -B_{21}e + (I - B_{22})x \end{pmatrix} = 0,$$

and hence $I - B$ is singular.

Corollary 2.7.

A square irreducible substochastic matrix $B$ is convergent if and only if $J(B)$ is nonempty.

The above result is well known. It can be obtained, for example, by [21, Corollary 1.19 and Lemma 2.8]. We give a short alternate proof using Corollary 2.6:

Proof.

Since a square matrix is irreducible if and only if its digraph is strongly connected [21], $\operatorname{con} B$ is finite if and only if $J(B)$ is nonempty. The result now follows from Corollary 2.6.

If $B$ is a square substochastic matrix, we can always find a permutation matrix $P$ and an integer $s \geq 1$ such that $PBP^\top$ has the block triangular structure

$$PBP^\top = \begin{pmatrix} B_{11} & & & \\ B_{21} & B_{22} & & \\ \vdots & \vdots & \ddots & \\ B_{s1} & B_{s2} & \cdots & B_{ss} \end{pmatrix}, \tag{2.2}$$

where each $B_{qq}$ is a square substochastic matrix that is either irreducible or a zero matrix (it is understood that if $s = 1$, then $PBP^\top = B_{11}$). Following [7, 21], we refer to this as the normal form of $B$ (it is shown in [7, p. 90] that the normal form of a matrix is unique up to permutations by blocks). Since $\sigma(B) = \sigma(PBP^\top)$, the spectrum of $B$ satisfies

$$\sigma(B) = \bigcup_{q=1}^{s} \sigma(B_{qq}). \tag{2.3}$$

This observation motivates the next result.

Theorem 2.8.

Let $B$ be a square substochastic matrix with normal form Equation 2.2. $B$ is convergent if and only if $J(B_{qq})$ is nonempty for each $q$. Moreover, if $B$ is convergent,

$$\max_{1 \leq q \leq s} \operatorname{con} B_{qq} \;\leq\; \operatorname{con} B \;\leq\; s - 1 + \sum_{q=1}^{s} \operatorname{con} B_{qq}, \tag{2.4}$$

where $\operatorname{con} B_{qq} \leq n_q - 1$ by Lemma 2.4 and $n_q$ is the order of the matrix $B_{qq}$ (it is understood that if $s = 1$, then Equation 2.4 reads $\operatorname{con} B = \operatorname{con} B_{11}$).

Proof.

The first claim is a consequence of Corollary 2.7 and Equation 2.3.

We prove now the leftmost inequality in Equation 2.4. First, note that $\operatorname{con} B = \operatorname{con}(PBP^\top)$ since permuting rows and columns simply relabels the vertices of the digraph; as such, we may assume $B$ is in normal form. Moreover, the block diagonal entries of $B^k$ are the matrices $B_{qq}^k$. Therefore, $\|B_{qq}^k\|_\infty \leq \|B^k\|_\infty$ for each $q$ and each nonnegative integer $k$, and hence $\operatorname{con} B_{qq} \leq \operatorname{con} B$ by Theorem 2.5.

We prove now the rightmost inequality in Equation 2.4. If $s = 1$, the inequality is trivial. As such, we proceed assuming that $s > 1$. First, note that $B$ is convergent, so that $J(B_{qq})$ is nonempty, and hence $\operatorname{con} B_{qq}$ is finite, for each $q$. Now, fix a row $i \notin J(B)$ belonging to a block $q_1$. Due to the block triangular structure of $B$, we can construct a walk $w$ in $\mathcal{W}_i(B)$ as an alternating concatenation $w_1 e_1 w_2 e_2 \cdots$, where $w_p$ is a walk within block $q_p$ of length at most $\operatorname{con} B_{q_p q_p}$ ending at a row of $J(B_{q_p q_p})$ (possibly empty if the current row is already in $J(B_{q_p q_p})$) and, if this row is not already in $J(B)$, $e_p$ is an edge into some block $q_{p+1} < q_p$ (such an edge exists because the row has full row-sum in $B$ but deficient row-sum within its block). The blocks $q_1 > q_2 > \cdots$ visited in this way are distinct, so $w$ has length at most $s - 1 + \sum_q \operatorname{con} B_{qq}$, from which we obtain $\inf \{ \operatorname{len} v \colon v \in \mathcal{W}_i(B) \} \leq s - 1 + \sum_q \operatorname{con} B_{qq}$. Therefore,

$$\operatorname{con} B \leq s - 1 + \sum_{q=1}^{s} \operatorname{con} B_{qq}.$$

Returning to our goal of proving Theorem 2.5, we first establish some lemmata related to substochastic matrices. The first lemma is a consequence of definitions and requires no proof.

Lemma 2.9.

Let $B$ be an $m \times n$ substochastic matrix. Then, $\|B\|_\infty < 1$ if and only if $J(B) = \{1, \ldots, m\}$.

Lemma 2.10.

Let $B$ and $C$ be compatible (i.e., the product $BC$ is well-defined) substochastic matrices. Then,

(i)

$BC$ is a substochastic matrix.

(ii)

If $i \in J(B)$, then $i \in J(BC)$.

(iii)

If $i \notin J(B)$, then $i \in J(BC)$ if and only if there exists $j \in J(C)$ such that $ij$ is an edge in $\operatorname{gr} B$.

(iv)

$ij$ is an edge in $\operatorname{gr}(BC)$ if and only if there exist edges $ik$ and $kj$ in $\operatorname{gr} B$ and $\operatorname{gr} C$, respectively.

Proof.
(i)

$BC$ has nonnegative entries and $\sum_k (BC)_{ik} = \sum_j b_{ij} \sum_k c_{jk} \leq \sum_j b_{ij} \leq 1$.

(ii)

Note first that $\sum_k (BC)_{ik} \leq \sum_j b_{ij}$. If $i \in J(B)$, then $\sum_j b_{ij} < 1$ and the desired result follows.

(iii)

Suppose $i \notin J(B)$ (i.e., $\sum_j b_{ij} = 1$). If there exists $j \in J(C)$ such that $ij$ is an edge in $\operatorname{gr} B$, then $b_{ij} > 0$ and $\sum_k (BC)_{ik} = \sum_{j'} b_{ij'} \sum_k c_{j'k} < \sum_{j'} b_{ij'} = 1$. Otherwise, $\sum_k c_{jk} = 1$ for all $j$ with $b_{ij} > 0$ and hence $\sum_k (BC)_{ik} = 1$.

(iv)

Suppose $ik$ and $kj$ are edges in $\operatorname{gr} B$ and $\operatorname{gr} C$, respectively. Then, $(BC)_{ij} = \sum_{k'} b_{ik'} c_{k'j} \geq b_{ik} c_{kj} > 0$. Otherwise, for each $k$, at least one of $b_{ik}$ or $c_{kj}$ is zero and hence $(BC)_{ij} = 0$.

Lemma 2.11.

Let $B$ be a square substochastic matrix, $i \notin J(B)$, and $k$ be a positive integer. Then, $i \in J(B^k)$ if and only if there is a walk $w$ in $\mathcal{W}_i(B)$ such that $\operatorname{len} w < k$.

Proof.

To simplify notation, we write $\operatorname{gr} B^p$ for $\operatorname{gr}(B^p)$.

Suppose there exists a walk $w = i_0 i_1 \cdots i_\ell$ in $\mathcal{W}_i(B)$ with $i_0 = i$ and $\ell = \operatorname{len} w < k$. We claim that $i_0 i_\ell$ is an edge in $\operatorname{gr} B^\ell$. If this is the case, Lemma 2.10 (ii) and (iii) guarantee that $i \in J(B^{\ell+1})$ (note that $i_\ell \in J(B)$). If $\ell + 1 < k$, writing $B^k = B^{\ell+1} B^{k-\ell-1}$, by Lemma 2.10 (ii), $i \in J(B^k)$, as desired.

We now return to the claim in the previous paragraph. Since the claim is trivial if $\ell = 1$, we proceed assuming $\ell > 1$. Let $p$ be an integer satisfying $1 \leq p < \ell$. If $i_0 i_p$ is an edge in $\operatorname{gr} B^p$, then since $i_p i_{p+1}$ is an edge in $\operatorname{gr} B$, Lemma 2.10 (iv) implies that $i_0 i_{p+1}$ is an edge in $\operatorname{gr} B^{p+1}$. Since $i_0 i_1$ is an edge in $\operatorname{gr} B$, it follows by induction that $i_0 i_\ell$ is an edge in $\operatorname{gr} B^\ell$, as desired.

As for the converse, suppose $i \in J(B^k)$. Let $\ell$ be the smallest positive integer such that $i \in J(B^\ell)$ and note that $\ell \leq k$. Since $i \notin J(B)$ and $B^\ell = B^{\ell-1} B$, it follows that $\ell > 1$ and $i \notin J(B^{\ell-1})$. By Lemma 2.10 (iii), there exists $j \in J(B)$ such that $ij$ is an edge in $\operatorname{gr} B^{\ell-1}$.

If $\ell - 1 = 1$, the trivial walk $ij$ is in $\mathcal{W}_i(B)$ (and has length $1 < k$), and hence we proceed assuming $\ell - 1 > 1$. Let $p$ be an integer satisfying $1 < p \leq \ell - 1$. If there exists a vertex $j_p$ such that $i j_p$ is an edge in $\operatorname{gr} B^p$, Lemma 2.10 (iv) implies that there exists a vertex $j_{p-1}$ such that $i j_{p-1}$ is an edge in $\operatorname{gr} B^{p-1}$ and $j_{p-1} j_p$ is an edge in $\operatorname{gr} B$. Since $i j_{\ell-1}$ (with $j_{\ell-1} := j$) is an edge in $\operatorname{gr} B^{\ell-1}$, it follows by induction that there exist vertices $j_1, \ldots, j_{\ell-2}$ such that $i j_1$ and each $j_{p-1} j_p$ are edges in $\operatorname{gr} B$. Therefore, $i j_1 j_2 \cdots j_{\ell-1}$ is a walk in $\mathcal{W}_i(B)$ of length $\ell - 1 < k$, as desired.

We are now ready to prove Theorem 2.5.

Proof of Theorem 2.5.

Since $\|B^k\|_\infty \leq \|B^{k-1}\|_\infty \|B\|_\infty \leq \|B^{k-1}\|_\infty \leq \cdots \leq \|B^0\|_\infty = 1$, the inequalities $\|B^k\|_\infty \leq 1$ for all nonnegative integers $k$ follow trivially.

The remaining claims in the theorem statement follow by applying Lemma 2.11 to each row not in $J(B)$ and invoking Lemma 2.9.

2.2. M-matrices

In this subsection, we recall some well-known results on M-matrices (see, e.g., Reference 2, Chapter 6).

Definition 2.12.

An M-matrix is a square matrix that can be expressed in the form $A = sI - B$, where $B$ is a nonnegative matrix and $s \geq \rho(B)$, where $\rho(B)$ is the spectral radius of $B$.

Definition 2.13.

A Z-matrix is a real matrix with nonpositive off-diagonal entries.

Definition 2.14.

An L-matrix is a Z-matrix with positive diagonal entries.

Proposition 2.15.

A nonsingular M-matrix is an L-matrix.

Definition 2.16.

Let $A$ be a square real matrix. $A$ is monotone if and only if it is nonsingular and its inverse consists only of nonnegative entries.

Proposition 2.17.

The following are equivalent:

(i)

$A$ is a nonsingular M-matrix.

(ii)

$A$ is a monotone Z-matrix.

We close this subsection by introducing the following enlargement of the family of L-matrices (Definition 2.14), to be used in what follows.

Definition 2.18.

An $\mathrm{L}_0$-matrix is a Z-matrix with nonnegative diagonal entries.

2.3. Weakly chained diagonally dominant (w.c.d.d.) matrices

Before we can define w.c.d.d. matrices, we require some preliminary definitions.

Definition 2.19.

Let $A = (a_{ij})$ be a complex matrix.

(i)

The $i$th row of $A$ is w.d.d. (resp., s.d.d.) if $|a_{ii}| \geq \sum_{j \neq i} |a_{ij}|$ (resp., $|a_{ii}| > \sum_{j \neq i} |a_{ij}|$).

(ii)

$A$ is w.d.d. (resp., s.d.d.) if all of its rows are w.d.d. (resp., s.d.d.).

Let $A$ be an $n \times n$ complex w.d.d. matrix. We define the sets

$$J(A) := \{ i \colon \text{the } i\text{th row of } A \text{ is s.d.d.} \} \qquad \text{and} \qquad \mathcal{W}_i(A) := \{ w \in \mathcal{W}(A) \colon \operatorname{head}(w) = i \text{ and } \operatorname{tail}(w) \in J(A) \}.$$

Note that if $J(A)$ is empty, so too is $\mathcal{W}_i(A)$ for each $i$. We will see shortly that the sets $J(A)$ and $\mathcal{W}_i(A)$ are related to the sets $J(B)$ and $\mathcal{W}_i(B)$ associated with a substochastic matrix $B$.

We are now ready to introduce w.c.d.d. matrices:

Definition 2.20.

A square complex matrix $A$ is w.c.d.d. if the points below are satisfied:

(i)

$A$ is w.d.d.

(ii)

$J(A)$ is nonempty.

(iii)

For each $i \notin J(A)$, $\mathcal{W}_i(A)$ is nonempty.

We now define the index of connectivity associated with a square complex w.d.d. matrix $A$ as

$$\operatorname{conn} A := \sup_{i \notin J(A)} \inf \{ \operatorname{len} w \colon w \in \mathcal{W}_i(A) \}$$

(compare this with the index of contraction defined in Equation 2.1; the same conventions $\inf \varnothing = \infty$ and $\sup \varnothing = 0$ apply). The lemma below is a trivial consequence of the definitions above and as such requires no proof.

Lemma 2.21.

A square complex w.d.d. matrix $A$ is w.c.d.d. if and only if $\operatorname{conn} A$ is finite.

We are now able to establish a duality between w.d.d. L-matrices (or, more accurately, $\mathrm{L}_0$-matrices) and substochastic matrices that, as we will see, connects the nonsingularity of the former to the convergence of the latter.

Lemma 2.22.

Let $A$ be an $n \times n$ w.d.d. $\mathrm{L}_0$-matrix and $D$ be an $n \times n$ diagonal matrix whose diagonal entries $d_1, \ldots, d_n$ are positive and satisfy $d_i \geq a_{ii}$ for each $i$ such that $a_{ii} > 0$. Then, $B := I - D^{-1} A$ is substochastic and

$$\operatorname{conn} A = \operatorname{con} B. \tag{2.5}$$

Conversely, let $B$ be an $n \times n$ substochastic matrix and $D$ be an $n \times n$ diagonal matrix whose diagonal entries are positive. Then, $A := D(I - B)$ is a w.d.d. $\mathrm{L}_0$-matrix and Equation 2.5 holds.

Proof.

We prove only the first claim, the converse being handled similarly.

Let $A$ and $D$ be given as in the lemma statement. To simplify notation, denote by $a_{ij}$ and $d_i$ the elements of $A$ and $D$, so that $b_{ii} = 1 - a_{ii}/d_i$ and $b_{ij} = |a_{ij}|/d_i$ for $j \neq i$. First, note that $b_{ij} \geq 0$ for all $i$ and $j$. Since

$$\sum_j b_{ij} = 1 - \frac{1}{d_i} \Big( a_{ii} - \sum_{j \neq i} |a_{ij}| \Big) \leq 1,$$

it follows that $B$ is substochastic and $J(B) = J(A)$. Letting $E$ and $E'$ denote the edge sets of $\operatorname{gr} A$ and $\operatorname{gr} B$, note that for $i \neq j$, $ij \in E$ if and only if $ij \in E'$, and that $E$ and $E'$ can differ only in self-loops (i.e., edges of the form $ii$). More concisely, up to self-loops, $\operatorname{gr} B$ is simply $\operatorname{gr} A$. Since a walk of minimal length to $J(A) = J(B)$ never traverses a self-loop, Equation 2.5 follows immediately from these facts.

Example 2.23.

Let $A$ be a square w.d.d. L-matrix of order $n$ and

$$B := I - D^{-1} A, \qquad D := \operatorname{diag}(a_{11}, \ldots, a_{nn}),$$

denote the point Jacobi matrix associated with $A$ (cf. [21, Chapter 3]). By the previous results, $A$ is w.c.d.d. if and only if $\operatorname{con} B$ is finite.

Note that the substochastic matrix in Figure 2.1 is the point Jacobi matrix associated with the w.d.d. L-matrix in Figure 2.2.
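To make the correspondence concrete, here is a minimal sketch in Python (ours, not the paper’s; the helper name point_jacobi and the sample entries are assumptions of our own) that builds the point Jacobi matrix of a w.d.d. L-matrix:

import numpy as np

def point_jacobi(A: np.ndarray) -> np.ndarray:
    """Return B = I - D^{-1} A, where D = diag(A).

    Assumes A is a w.d.d. L-matrix, so that its diagonal is positive.
    """
    d = np.diag(A)
    if np.any(d <= 0):
        raise ValueError("not an L-matrix: diagonal entries must be positive")
    return np.eye(A.shape[0]) - A / d[:, None]  # divide row i by d_i

# A w.d.d. L-matrix with the path-like structure of Figure 2.2 (n = 3):
A = np.array([[ 1.0,  0.0, 0.0],
              [-1.0,  1.0, 0.0],
              [ 0.0, -2.0, 2.0]])
B = point_jacobi(A)  # substochastic with row-sums 0, 1, 1, so J(B) = {1}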

We now restate and prove characterization Equation 1.1 from the introduction.

Theorem 2.24.

The following are equivalent:

(i)

$A$ is a nonsingular w.d.d. M-matrix.

(ii)

$A$ is a nonsingular w.d.d. L-matrix.

(iii)

$A$ is a w.c.d.d. L-matrix.

Since a nonsingular w.d.d. $\mathrm{L}_0$-matrix must be an L-matrix, we can safely replace all occurrences of “L-matrix” with “$\mathrm{L}_0$-matrix” in the above theorem without affecting its validity (recall that any w.c.d.d. matrix is nonsingular [18]).

Proof.

(i) $\Rightarrow$ (ii) follows from Proposition 2.15 while (iii) $\Rightarrow$ (i) is established in [4, Theorem 2.2]. We prove below the claim (ii) $\Rightarrow$ (iii).

Let $A$ be a nonsingular w.d.d. L-matrix of order $n$. Then, the associated point Jacobi matrix $B$ is substochastic and $I - B$ is nonsingular since

$$I - B = D^{-1} A.$$

Corollary 2.6 and Lemma 2.22 imply that $\operatorname{conn} A = \operatorname{con} B$ is finite. Therefore, by Lemma 2.21, $A$ is w.c.d.d.

Remark 2.25.

Instead of calling upon the results of [4], it is also possible to prove (iii) $\Rightarrow$ (i) of Theorem 2.24 directly by using arguments involving the index of contraction. In particular, let $A$ be a w.c.d.d. L-matrix of order $n$. Then, by Lemma 2.21 and Lemma 2.22, the associated point Jacobi matrix $B$ is substochastic with $\operatorname{con} B$ finite. By Corollary 2.6, $B$ is convergent and hence the Neumann series $(I - B)^{-1} = I + B + B^2 + \cdots$ for the inverse of $I - B$ converges to a matrix whose entries are nonnegative. Since $A^{-1} = (I - B)^{-1} D^{-1}$ also has nonnegative entries, $A$ is monotone by Definition 2.16, and hence a nonsingular M-matrix by Proposition 2.17.

An immediate consequence of Theorem 2.24, which can be considered an analogue of Corollary 2.7, is given below.

Corollary 2.26.

A square irreducible w.d.d. L-matrix $A$ is a nonsingular M-matrix if and only if $J(A)$ is nonempty.

While the reverse direction in the above result is well known [21, Corollary 3.20], we are not aware of a reference for the forward direction.

3. Computing the index of contraction

In this section, we present a procedure to compute the index of contraction of a substochastic matrix and show that it is robust in the presence of inexact (i.e., floating point) arithmetic.

By the results of the previous section, such a procedure can also be used to determine if an arbitrary w.d.d. matrix $A$ is a nonsingular M-matrix as follows. If $A$ is not a square L-matrix, it is trivially not a nonsingular M-matrix (Proposition 2.15). Otherwise, we can check the finitude of the index of contraction of its associated point Jacobi matrix $B = I - D^{-1}A$ to determine whether or not $A$ is a nonsingular M-matrix (recall Example 2.23 and Theorem 2.24).

3.1. The procedure

Before we can describe the procedure, we require the notion of a vertex contraction (a.k.a. vertex identification), a generalization of the well-known notion of edge contraction from graph theory.

Definition 3.1.

Let $G = (V, E)$ be a graph, $U \subseteq V$, $u^*$ denote a new vertex (i.e., $u^* \notin V$), and $f$ be a function which maps every vertex in $V \setminus U$ to itself and every vertex in $U$ to $u^*$ (i.e., $f(v) = v$ if $v \in V \setminus U$ and $f(v) = u^*$ if $v \in U$). The vertex contraction of $G$ with respect to $U$ is a new graph $G' = (V', E')$, where $V' := (V \setminus U) \cup \{u^*\}$ and $E' := \{ (f(v), f(w)) \colon (v, w) \in E \}$.

An overview of the procedure for computing the index of contraction $\operatorname{con} B$ of an arbitrary substochastic matrix $B$ is given below:

(1)

Obtain the vertex contraction of $\operatorname{gr} B$ with respect to $J(B)$. Label the new vertex in the contraction $0$ and the new vertex set $V'$. Note that $V' = J(B)^c \cup \{0\}$ (recall that the superscript $c$ denotes complement).

(2)

Reverse all edges in the resulting graph (see, e.g., Figure 3.1).

(3)

In the resulting graph, find the shortest distances $d(0, i)$ from the new vertex $0$ to all vertices $i$ by a breadth-first search (BFS) starting at $0$. It is understood that $d(0, 0) = 0$ and that if $i$ is unvisited in the BFS, $d(0, i) = \infty$.

(4)

Return $\max_{i \in V'} d(0, i)$.

That this procedure terminates is trivial (BFS is performed on a graph with finitely many vertices). As for the correctness of the procedure, it is easy to verify that

$$d(0, i) = \inf \{ \operatorname{len} w \colon w \in \mathcal{W}_i(B) \} \qquad \text{for each } i \notin J(B),$$

so that $\operatorname{con} B = \max_{i \in V'} d(0, i)$.
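The following minimal Python sketch (ours, not the paper’s; the function name and the adjacency-list input format are assumptions) implements steps (1) to (4), fusing the contraction and edge reversal into the construction of the reversed adjacency lists:

from collections import deque
from math import inf

def index_of_contraction(B, tol=0.0):
    """Compute con(B) for a square substochastic matrix B via steps (1)-(4).

    B is given as a list of rows, each row a list of (column, value) pairs
    with value > 0 (a sparse format).  Row i is placed in J(B) when its sum
    is < 1 - tol; a small tol > 0 guards against floating point error in the
    spirit of Section 3.2.
    """
    n = len(B)
    in_J = [sum(v for _, v in row) < 1.0 - tol for row in B]

    # Steps (1)-(2): reversed adjacency lists with J(B) contracted into the
    # new vertex 0; internal vertices 1..n stand for rows 0..n-1.  Self-loops
    # and edges leaving J(B) are dropped as per Remark 3.2.
    N = [[] for _ in range(n + 1)]
    for i, row in enumerate(B):
        if in_J[i]:
            continue
        for j, _ in row:
            if j != i:
                N[0 if in_J[j] else j + 1].append(i + 1)

    # Step (3): breadth-first search from the new vertex 0.
    dist = {0: 0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in N[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)

    # Step (4): the supremum of the distances; unvisited rows give infinity.
    if any(not in_J[i] and i + 1 not in dist for i in range(n)):
        return inf
    return max(dist.values())

On the matrix of Figure 2.1 (encoded in the list-of-rows format above), this returns $n - 1$; combined with a point Jacobi construction such as the one sketched in Section 2.3, it yields the w.d.d. M-matrix test described at the beginning of this section.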

Remark 3.2.

Since BFS does not revisit vertices, the correctness of the procedure is unaffected if $\operatorname{gr} B$ is preprocessed to remove self-loops (i.e., edges of the form $ii$) and edges of the form $ij$ with $i \in J(B)$.

Algorithm 1 gives precise pseudocode for steps (1) to (4). Without loss of generality, it is assumed that the input matrix is square (the rectangular case is obtained by a few trivial additions to the code). The pseudocode makes use of the list and queue data structures (see, e.g., [6, Chapter 10]). The operation $L$.add($x$) appends the element $x$ to the list $L$. The operation $Q$.enqueue($x$) adds the element $x$ to the back of the queue $Q$. The operation $Q$.dequeue() removes and returns the element at the front of the queue $Q$.

It is obvious that if the input to Algorithm 1 is a dense matrix of order $n$, $O(n^2)$ operations are required. Suppose instead that we restrict our inputs to matrices that are sparse in the sense that $\operatorname{nnz}$, the maximum number of nonzero entries per row, is bounded independent of $n$ (i.e., $\operatorname{nnz} = O(1)$ as $n \to \infty$). If the matrices are stored in an appropriate format (e.g., compressed sparse row (CSR) format, Ellpack-Itpack, etc. [17]), the loops on lines 6 and 20 require only a constant number of iterations for each fixed $i$. In this case, $O(n)$ operations are required. An obvious generalization of this fact is that if $\operatorname{nnz} = O(g(n))$, $O(n \, g(n))$ operations are required.
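As an illustration of why CSR storage yields this complexity (a sketch of ours using SciPy, which the paper does not mandate), a line-6-style row-sum loop touches only the stored nonzeros of each row:

import numpy as np
import scipy.sparse as sp

def row_sums_csr(B: sp.csr_matrix) -> np.ndarray:
    """Accumulate each row-sum by visiting only stored nonzeros."""
    t = np.zeros(B.shape[0])
    for i in range(B.shape[0]):
        # Entries of row i live in data[indptr[i]:indptr[i+1]].
        for p in range(B.indptr[i], B.indptr[i + 1]):
            t[i] += B.data[p]  # value b_{ij} with j = B.indices[p]
    return t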

3.2. Floating point arithmetic considerations

The loop on line 6 of Algorithm 1 computes the $i$th row-sum of the substochastic matrix $B$. In the presence of floating point arithmetic, the operation on line 7 can introduce error into calculations. In order to analyze this error, we take the standard model of floating point arithmetic in which floating point addition introduces error proportional to the size of the result:

$$\operatorname{fl}(x + y) = (x + y)(1 + \delta), \qquad |\delta| \leq \epsilon. \tag{3.1}$$

$\epsilon$ is a machine-dependent constant (often referred to as machine epsilon) which gives an upper bound on the relative error due to rounding. In performing our analyses, we make the standard assumptions that the order $n$ of the input matrix satisfies $n \epsilon \leq 1$ [9] and that the entries of $B$ are floating point numbers.

A floating point implementation of the loop on line 6 is represented by the recurrence $\hat{t}_k = \operatorname{fl}(\hat{t}_{k-1} + b_{i j_k})$ with initial condition $\hat{t}_0 = 0$, where $j_1, j_2, \ldots$ enumerate the columns visited by the loop. Letting $t_i := \sum_j b_{ij}$ denote the exact row-sum and $\hat{t}_i$ the final computed value, this direct implementation has an error bound of [9, Eq. (2.6)]

$$|\hat{t}_i - t_i| \leq \frac{(\operatorname{nnz} - 1)\epsilon}{1 - (\operatorname{nnz} - 1)\epsilon} \sum_j |b_{ij}| \leq \frac{(\operatorname{nnz} - 1)\epsilon}{1 - (\operatorname{nnz} - 1)\epsilon}. \tag{3.2}$$

Recall that $\operatorname{nnz}$ is the maximum number of nonzero entries per row of the matrix $B$. If the matrix is sparse (i.e., $\operatorname{nnz} = O(1)$ as $n \to \infty$), we obtain

$$\frac{(\operatorname{nnz} - 1)\epsilon}{1 - (\operatorname{nnz} - 1)\epsilon} = (\operatorname{nnz} - 1)\epsilon + O(\epsilon^2)$$

by the power series representation of $x \mapsto (1 - x)^{-1}$. In this case, for each $i$, the absolute error in computing $t_i$ is independent of $n$.

Note that if the exact value of $t_i$ is close to $1$, the comparison on line 9 may return either a false-positive or a false-negative. Motivated by Equation 3.2, an implementation of Algorithm 1 should use instead the condition $\hat{t}_i < 1 - \epsilon'$, where $\epsilon'$ is a small constant strictly larger than $(\operatorname{nnz} - 1)\epsilon / (1 - (\operatorname{nnz} - 1)\epsilon)$ to preclude the possibility that the condition evaluates to true when the exact value of $t_i$ is $1$ (for simplicity, we assume $1 - \epsilon'$ has a precise floating point representation). Then, the error bound Equation 3.2 and discussion above yield the accuracy result below.
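For example, in IEEE double precision one might take (a choice of our own that comfortably exceeds the bound above for moderate $\operatorname{nnz}$):

import numpy as np

eps = np.finfo(float).eps        # machine epsilon, about 2.2e-16
nnz = 32                         # maximum nonzeros per row (example value)
eps_prime = 2 * (nnz - 1) * eps  # strictly larger than (nnz-1)*eps + O(eps^2)
# ... and replace the comparison t < 1 with t < 1 - eps_prime ...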

Lemma 3.3.

Let $B$ be a substochastic matrix with at most $\operatorname{nnz}$ nonzero entries per row. Denoting by $\widehat{\operatorname{con}} B$ the quantity computed by Algorithm 1 under the standard model of floating point arithmetic Equation 3.1 and with condition $t < 1$ replaced by $t < 1 - \epsilon'$, where $\epsilon' > (\operatorname{nnz} - 1)\epsilon / (1 - (\operatorname{nnz} - 1)\epsilon)$, the following results hold:

(i)

if $\operatorname{con} B = \infty$, then $\widehat{\operatorname{con}} B = \infty$.

(ii)

if $\operatorname{con} B < \infty$ and $\sum_j b_{ij} \leq 1 - 2\epsilon'$ for $i \in J(B)$, then $\widehat{\operatorname{con}} B = \operatorname{con} B$.

Remark 3.4.

If $B$ is not sparse, the error Equation 3.2 depends on $\operatorname{nnz}$. In this case, one should substitute the naïve summation outlined by the loop on line 6 for a more scalable algorithm, such as Kahan’s summation algorithm, whose absolute error in approximating $t_i$ is $(2\epsilon + O(n\epsilon^2)) \sum_j |b_{ij}|$ [9, Eq. (3.11)], which is independent of $n$ due to the assumption $n\epsilon \leq 1$. We can obtain an analogue of Lemma 3.3 under Kahan summation by choosing $\epsilon'$ appropriately.
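For reference, a minimal sketch of Kahan’s compensated summation (the standard formulation, not code from the paper):

def kahan_sum(values):
    """Sum values with O(eps) error, independent of the number of terms."""
    s = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - c
        t = s + y
        c = (t - s) - y  # recovers what was lost in computing s + y
        s = t
    return s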

Note that Lemma 3.3 suggests that the values of $\operatorname{con} B$ and $\widehat{\operatorname{con}} B$ may disagree in certain cases. Fortunately, as demonstrated in the next example, this occurs only if the matrix is “nearly nonconvergent” (i.e., $\sum_j b_{ij} \geq 1 - \delta$ for some $i \in J(B)$, where $\delta > 0$ is close to zero). This error may even be considered desirable behaviour since a nearly nonconvergent matrix may not be convergent in the presence of floating point error.

Example 3.5.

Consider the $1 \times 1$ matrix

$$B := \begin{pmatrix} 1 - \delta \end{pmatrix},$$

where $0 < \delta < 1$. Note that even though $\operatorname{con} B = 0$ independent of the value of $\delta$, $B$ approaches a nonconvergent matrix as $\delta \downarrow 0$. When $\delta$ is very close to zero, floating point error may cause Algorithm 1 to erroneously determine that $J(B)$ is empty and thereby mistakenly conclude that the index of contraction is infinite.

We close this section by discussing stability. The test in [15], which determines if an arbitrary matrix is a nonsingular M-matrix, uses a modified Gaussian elimination procedure. As such, to establish numerical stability, the author proves that the growth factor (see the definition in [8]) of the test is bounded by the order of the input matrix [15, Theorem 3.1]. In our case, the floating point error made in computing the row-sum $t_i$ has no bearing on the error made in computing $t_{i'}$ for distinct rows $i$ and $i'$. That is, floating point errors do not propagate from row to row. Moreover, as demonstrated in the previous paragraphs, the error made in computing each row-sum can be bounded by a constant (without any additional effort in the sparse case, and with, e.g., Kahan summation in the dense case). As such, we conclude that Algorithm 1 is stable in the sense that it does not involve numbers that grow large due to floating point error.

4. Numerical experiments

In this section, we compare the efficiency of our test described at the beginning of Section 3 to Peña’s test detailed in [15]. To minimize bias, we run the tests on randomly sampled matrices (sampled according to the procedure in Appendix A).

We run the tests on matrices of varying order and maximum number of nonzeros per row ($\operatorname{nnz}$). We employ two versions of our test: a sparse version, in which the matrices are stored in compressed sparse row (CSR) format, and a dense version, in which the matrices are two-dimensional arrays. All tests are performed on an Intel Xeon E5440 2.83GHz CPU. The average time to process a randomly sampled matrix is shown in Figure 4.1a (error bars are omitted as even the 99% confidence interval is too small to be visible). We mention that in terms of accuracy, the tests produced the same results on all randomly sampled matrices (Figure 4.1b).

Figure 4.1a suggests that our test outperforms Peña’s. Even for the experiments involving the largest sparse matrices (small by most scientific computing standards), our sparse implementation executes on the order of tenths of milliseconds while Peña’s test executes on the order of seconds.

Appendix A. Sampling procedure

This appendix details the procedure (employed in the numerical experiments of Section 4) used to randomly sample w.d.d. $\mathrm{L}_0$-matrices. The procedure, for which pseudocode is given in Algorithm 2, works by sampling a matrix $B$ from the space of substochastic matrices and returning $A := I - B$, which is a w.d.d. $\mathrm{L}_0$-matrix by Lemma 2.22.

We use $\operatorname{Unif} \mathcal{X}$ to denote a uniform distribution on the sample space $\mathcal{X}$. For $\alpha \in \mathbb{R}^m$ with positive entries, we use $\operatorname{Dir} \alpha$ to denote a Dirichlet distribution of order $m$ with parameter $\alpha$. It is well known that when $\alpha$ is a vector whose entries are all one, $\operatorname{Dir} \alpha$ is a uniform distribution over the unit simplex in $\mathbb{R}^m$. We use $x \sim \mathcal{D}$ to mean that $x$ is a sample drawn from the distribution $\mathcal{D}$.

The inputs to the procedure are a positive integer $n$ corresponding to the order of the output matrix and a positive integer $\operatorname{nnz} \leq n$ corresponding to the maximum number of nonzero entries per row.
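A NumPy sketch of the sampler (ours; it assigns all $m$ Dirichlet weights directly rather than deriving the first entry as a remainder as Algorithm 2 does, which yields the same row distribution by symmetry of $\operatorname{Dir}(1, \ldots, 1)$):

import numpy as np

def sample_wdd_L0(n: int, nnz: int, rng=None) -> np.ndarray:
    """Sample A = I - B with B substochastic, in the spirit of Algorithm 2."""
    rng = rng or np.random.default_rng()
    B = np.zeros((n, n))
    for i in range(n):
        m = rng.integers(1, nnz + 1)  # number of nonzero entries in row i
        s = rng.random() if rng.random() < 1.0 / n else 1.0  # row-sum of row i
        cols = rng.choice(n, size=m, replace=False)  # positions of nonzeros
        B[i, cols] = s * rng.dirichlet(np.ones(m))   # nonnegative, sums to s
    return np.eye(n) - B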

Appendix B. Generalizing Theorem 2.5

This appendix generalizes Theorem 2.5. To present the generalization, we first extend our notion of walks.

Definition B.1.

Let $(B_1, B_2, \ldots)$ be a sequence of compatible complex matrices (i.e., the product $B_1 B_2 \cdots B_k$ is well-defined for each $k$).

(i)

A walk in $(B_1, B_2, \ldots)$ is a nonempty finite sequence of edges $i_0 i_1$, $i_1 i_2$, …, $i_{k-1} i_k$ such that each $i_{p-1} i_p$ is an edge in $\operatorname{gr} B_p$. The set of all walks in $(B_1, B_2, \ldots)$ is denoted $\mathcal{W}(B_1, B_2, \ldots)$.

(ii)

For $w \in \mathcal{W}(B_1, B_2, \ldots)$, $\operatorname{len} w$, $\operatorname{head}(w)$, and $\operatorname{tail}(w)$ are defined in the obvious way.

Note, in particular, that if we fix a square complex matrix $B$, we are returned to the original definition of a walk given in Section 2 if we take $B_p := B$ for all $p$.

It is also useful to generalize the sets of Section 2. In particular, given a sequence $(B_1, B_2, \ldots)$ of compatible substochastic matrices, let

$$\mathcal{W}_i(B_1, B_2, \ldots) := \{ w \in \mathcal{W}(B_1, B_2, \ldots) \colon \operatorname{head}(w) = i \text{ and } \operatorname{tail}(w) \in J(B_{\operatorname{len} w + 1}) \}.$$
We are now ready to give the generalization.

Theorem B.2.

Let $(B_1, B_2, \ldots)$ be a sequence of compatible substochastic matrices, $(P_k)_{k \geq 0}$ be defined by $P_0 := I$ and $P_k := B_1 B_2 \cdots B_k$ whenever $k$ is a positive integer, and

$$\operatorname{con}(B_1, B_2, \ldots) := \sup_{i \notin J(B_1)} \inf \{ \operatorname{len} w \colon w \in \mathcal{W}_i(B_1, B_2, \ldots) \}.$$

If $\operatorname{con}(B_1, B_2, \ldots)$ is finite, then

$$\|P_k\|_\infty = 1 \ \text{ for } \ 0 \leq k \leq \operatorname{con}(B_1, B_2, \ldots) \qquad \text{and} \qquad \|P_k\|_\infty < 1 \ \text{ for } \ k > \operatorname{con}(B_1, B_2, \ldots).$$

Otherwise,

$$\|P_k\|_\infty = 1 \ \text{ for all nonnegative integers } k.$$

The proof of the above is nearly identical to that of Theorem 2.5, requiring only a simple generalization of Lemma 2.11. However, in this general case, the finitude of the index of contraction is no longer an indicator of convergence (i.e., of $P_k \to 0$ as $k \to \infty$).

Example B.3.

Let $(B_1, B_2, \ldots)$ be the sequence of compatible $1 \times 1$ substochastic matrices satisfying $B_k := \begin{pmatrix} 1 - 2^{-k} \end{pmatrix}$ and $(P_k)$ be defined as above. Clearly, each matrix $B_k$ is convergent, but $P_k = \prod_{p=1}^{k} (1 - 2^{-p}) \not\to 0$ as $k \to \infty$.

Moreover, even if each $B_k$ is itself convergent, it is still possible that the index of contraction is infinite:

Example B.4.

Let $(B_1, B_2, \ldots)$ be given by

$$B_k := \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \ \text{ if } k \text{ is odd} \qquad \text{and} \qquad B_k := \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \ \text{ if } k \text{ is even}.$$

Defining $(P_k)$ as above, we find that

$$P_k = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \ \text{ if } k \text{ is odd} \qquad \text{and} \qquad P_k = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \ \text{ if } k \text{ is even and positive}.$$

That is, $\|P_k\|_\infty = 1$ independent of $k$.

It is not hard to find interesting cases in which $\operatorname{con}(B_1, B_2, \ldots)$ is finite.

Example B.5.

Let $(B_1, B_2, \ldots)$ be a sequence of square substochastic matrices of order $n$ satisfying the following properties:

(i)

$B_1$ is convergent.

(ii)

$J(B_1) \subseteq J(B_k)$ and $\operatorname{gr} B_1 \subseteq \operatorname{gr} B_k$ for all $k$.

Then, $\operatorname{con}(B_1, B_2, \ldots) \leq \operatorname{con} B_1 < \infty$.

Acknowledgments

The author thanks Edward Cheung (University of Waterloo) for discussions on M-matrices, a careful review of this document, and (more importantly) his unfaltering friendship.

Figures

Figure 2.1.

An example of an $n \times n$ substochastic matrix $B$ and its graph

Figure 2.1(a)

The matrix $B$

Figure 2.1(b)

$\operatorname{gr} B$ (vertices in $J(B)$ are highlighted): the path $n \to \cdots \to 2 \to 1$, with vertex $1$ highlighted
Figure 2.2.

An example of a w.c.d.d. matrix $A$ and its graph

Figure 2.2(a)

The matrix $A$

Figure 2.2(b)

$\operatorname{gr} A$ (vertices in $J(A)$ are highlighted): the path $m \to \cdots \to 2 \to 1$ with a self-loop at each vertex, with vertex $1$ highlighted
Figure 3.1.

Steps (1) and (2) applied to an example

Figure 3.1(a)

$\operatorname{gr} B$ (vertices in $J(B) = \{7, 8\}$ are highlighted), with edges $1 \to 2$, $2 \to 3$, $2 \to 4$, $2 \to 7$, $3 \to 1$, $3 \to 7$, $4 \to 5$, $4 \to 8$, $5 \to 4$, $5 \to 6$, $6 \to 5$, $6 \to 8$

Figure 3.1(b)

The resulting graph (after contracting $\{7, 8\}$ into the new vertex $0$ and reversing all edges), with edges $2 \to 1$, $3 \to 2$, $4 \to 2$, $0 \to 2$, $1 \to 3$, $0 \to 3$, $5 \to 4$, $0 \to 4$, $4 \to 5$, $6 \to 5$, $5 \to 6$, $0 \to 6$
Algorithm 1.

Computing the index of contraction of a square substochastic matrix

Input: a square substochastic matrix $B := (b_{ij})_{1 \leq i, j \leq n}$ of order $n$
Output: $\widehat{\operatorname{con}} B$

1:  // Find all rows in Ĵ(B)
2:  s ← 0
3:  S[1, …, n] ← new array of bools
4:  forall rows i
5:      t ← 0
6:      forall cols j s.t. b_ij ≠ 0
7:          t ← t + b_ij
8:      endfor
9:      if t < 1
10:         s ← s + 1
11:         S[i] ← true    // i ∈ Ĵ(B)
12:     else
13:         S[i] ← false   // i ∉ Ĵ(B)
14:     endif
15: endfor
16:
17: // Find neighbours of each vertex (ignoring extraneous edges as per Remark 3.2)
18: N[0, …, n] ← new array of lists
19: forall rows i s.t. S[i] = false
20:     forall cols j ≠ i s.t. b_ij ≠ 0
21:         if S[j] = true
22:             N[0].add(i)
23:         else
24:             N[j].add(i)
25:         endif
26:     endfor
27: endfor
28:
29: // Perform BFS starting at 0
30: result ← 0
31: Q ← new queue
32: Q.enqueue((0, 0))
33: while Q is not empty
34:     (j, d) ← Q.dequeue()
35:     result ← max(result, d)
36:     forall i in N[j] s.t. S[i] = false
37:         s ← s + 1
38:         S[i] ← true
39:         Q.enqueue((i, d + 1))
40:     endfor
41: endwhile
42:
43: if s = n
44:     ĉon B ← result
45: else
46:     ĉon B ← ∞
47: endif
Figure 4.1.

Figure 4.1(a)

Timing results (log scale on time axis)

Figure 4.1(b)

Probability that a randomly sampled matrix is a nonsingular M-matrix (99% confidence intervals shown)
Algorithm 2.

Sampling a matrix from the space of w.d.d. $\mathrm{L}_0$-matrices

Input: positive integers $n$ and $\operatorname{nnz} \leq n$
Output: matrix $A$

// Initialize zero matrix
B ≡ (b_ij) ← 0
for i from 1 to n
    // Determine the number of nonzero entries m in row i
    m ~ Unif{1, …, nnz}
    // Determine the row-sum of row i (less than one with probability 1/n)
    u ~ Unif[0, 1]
    if u < 1/n
        s ~ Unif[0, 1]
    else
        s ← 1
    endif
    // Determine the indices j_k for which b_{i j_k} is nonzero by uniformly
    // sampling {1, …, n} without replacement
    𝒜 ← {1, …, n}
    j_1 ~ Unif 𝒜
    for k from 2 to m
        𝒜 ← 𝒜 \ {j_{k-1}}
        j_k ~ Unif 𝒜
    endfor
    // Determine the values of the nonzero entries in row i
    if m ≥ 2
        α ← (1, …, 1) ∈ ℝ^m
        (b_{i j_2}, …, b_{i j_m}) ~ Dir α
    endif
    b_{i j_1} ← s
    for k from 2 to m
        b_{i j_k} ← s · b_{i j_k}
        b_{i j_1} ← b_{i j_1} − b_{i j_k}
    endfor
endfor
// Make a w.d.d. L_0-matrix from the substochastic matrix B
A ← I − B


References

[1] P. Azimzadeh and P. A. Forsyth, Weakly chained matrices, policy iteration, and impulse control, SIAM J. Numer. Anal. 54 (2016), no. 3, 1341–1364. MR3493959

[2] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Classics in Applied Mathematics, vol. 9, SIAM, Philadelphia, PA, 1994. Revised reprint of the 1979 original. MR1298430

[3] O. Bokanowski, S. Maroso, and H. Zidani, Some convergence results for Howard’s algorithm, SIAM J. Numer. Anal. 47 (2009), no. 4, 3001–3026. MR2551155

[4] J. H. Bramble and B. E. Hubbard, On a finite difference analogue of an elliptic boundary problem which is neither diagonally dominant nor of non-negative type, J. Math. and Phys. 43 (1964), 117–132. MR0162367

[5] G.-H. Cheng and T.-Z. Huang, An upper bound for $\|A^{-1}\|_\infty$ of strictly diagonally dominant $M$-matrices, Linear Algebra Appl. 426 (2007), no. 2-3, 667–673. MR2350685

[6] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 3rd ed., MIT Press, Cambridge, MA, 2009. MR2572804

[7] F. R. Gantmacher, Applications of the Theory of Matrices, translated by J. L. Brenner, with the assistance of D. W. Bushaw and S. Evanusa, Interscience Publishers, New York; Interscience Publishers Ltd., London, 1959. MR0107648

[8] G. H. Golub and C. F. Van Loan, Matrix Computations, 4th ed., Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, MD, 2013. MR3024913

[9] N. J. Higham, The accuracy of floating point summation, SIAM J. Sci. Comput. 14 (1993), no. 4, 783–799. MR1223274

[10] T.-Z. Huang and Y. Zhu, Estimation of $\|A^{-1}\|_\infty$ for weakly chained diagonally dominant $M$-matrices, Linear Algebra Appl. 432 (2010), no. 2-3, 670–677. MR2577710

[11] C. Li and Y. Li, Weakly chained diagonally dominant $B$-matrices and error bounds for linear complementarity problems, Numer. Algorithms 73 (2016), no. 4, 985–998. MR3579726

[12] C. Li, Y. Li, and R. Zhao, New inequalities for the minimum eigenvalue of $M$-matrices, Linear Multilinear Algebra 61 (2013), no. 9, 1267–1279. MR3175363

[13] C. Li, R. Ma, Q. Liu, and Y. Li, Subdirect sums of weakly chained diagonally dominant matrices, Linear Multilinear Algebra 65 (2017), no. 6, 1220–1231. MR3615539

[14] W. Li, The infinity norm bound for the inverse of nonsingular diagonal dominant matrices, Appl. Math. Lett. 21 (2008), no. 3, 258–263. MR2433738

[15] J. M. Peña, A stable test to check if a matrix is a nonsingular $M$-matrix, Math. Comp. 73 (2004), no. 247, 1385–1392. MR2047092

[16] R. J. Plemmons, $M$-matrix characterizations. I. Nonsingular $M$-matrices, Linear Algebra and Appl. 18 (1977), no. 2, 175–188. MR0444681

[17] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, Philadelphia, PA, 2003. MR1990645

[18] P. N. Shivakumar and K. H. Chew, A sufficient condition for nonvanishing of determinants, Proc. Amer. Math. Soc. 43 (1974), 63–66. MR0332820

[19] P. N. Shivakumar, J. J. Williams, Q. Ye, and C. A. Marinov, On two-sided bounds related to weakly diagonally dominant $M$-matrices with application to digital circuit dynamics, SIAM J. Matrix Anal. Appl. 17 (1996), no. 2, 298–312. MR1384509

[20] G.-X. Tian and T.-Z. Huang, Inequalities for the minimum eigenvalue of $M$-matrices, Electron. J. Linear Algebra 20 (2010), 291–302. MR2653540

[21] R. S. Varga, Matrix Iterative Analysis, 2nd revised and expanded ed., Springer Series in Computational Mathematics, vol. 27, Springer-Verlag, Berlin, 2000. MR1753713

[22] F. Wang and D.-S. Sun, Some new inequalities for the minimum eigenvalue of $M$-matrices, J. Inequal. Appl. (2015), 2015:195. MR3357017

[23] P. Wang, An upper bound for $\|A^{-1}\|_\infty$ of strictly diagonally dominant $M$-matrices, Linear Algebra Appl. 431 (2009), no. 5-7, 511–517. MR2535528

[24] M. Xu, S. Li, and C. Li, Inequalities for the minimum eigenvalue of doubly strictly diagonally dominant $M$-matrices, J. Appl. Math. (2014), Art. ID 535716. MR3253625

[25] Z. Yang, B. Zheng, and X. Liu, A new upper bound for $\|A^{-1}\|$ of a strictly $\alpha$-diagonally dominant $M$-matrix, Adv. Numer. Anal. (2013), Art. ID 980615. MR3149442

[26] J. Zhao and C. Sang, Several new inequalities for the minimum eigenvalue of $M$-matrices, J. Inequal. Appl. (2016), Paper No. 119. MR3486018

Article Information

MSC 2010
Primary: 65F30 (Other matrix algorithms), 15B48 (Positive matrices and their generalizations; cones of matrices), 15B51 (Stochastic matrices)
Secondary: 65F50 (Sparse matrices)
Keywords
  • M-matrix
  • substochastic matrix
  • diagonally dominant
  • weakly chained diagonally dominant
  • sparse matrix
Author Information
Parsiad Azimzadeh
David R. Cheriton School of Computer Science, University of Waterloo, Waterloo ON, Canada N2L 3G1
Address at time of publication: Department of Mathematics, University of Michigan, East Hall, 530 Church Street, Ann Arbor, Michigan 48109
parsiad@umich.edu
MathSciNet
Journal Information
Mathematics of Computation, Volume 88, Issue 316, ISSN 1088-6842, published by the American Mathematical Society, Providence, Rhode Island.
Publication History
This article was received, revised, and published.
Copyright Information
Copyright 2018 American Mathematical Society
Article References
  • DOI 10.1090/mcom/3347
  • MathSciNet Review: 3882284
