Maximum spread of graphs and bipartite graphs

By Jane Breen, Alex W. N. Riasanovsky, Michael Tait, and John Urschel

Abstract

Given any graph $G$, the spread of $G$ is the maximum difference between any two eigenvalues of the adjacency matrix of $G$. In this paper, we resolve a pair of 20-year-old conjectures of Gregory, Hershkowitz, and Kirkland regarding the spread of graphs. The first states that for all positive integers $n$, the $n$-vertex graph that maximizes spread is the join of a clique and an independent set, with $\lfloor 2n/3 \rfloor$ and $\lceil n/3 \rceil$ vertices, respectively. Using techniques from the theory of graph limits and numerical analysis, we prove this claim for all $n$ sufficiently large. As an intermediate step, we prove an analogous result for a family of operators in the Hilbert space $L^2[0,1]$. The second conjecture claims that for any fixed $m \leq \lfloor n^2/4 \rfloor$, if $G$ maximizes spread over all $n$-vertex graphs with $m$ edges, then $G$ is bipartite. We prove an asymptotic version of this conjecture. Furthermore, we construct an infinite family of counterexamples, which shows that our asymptotic solution is tight up to lower-order error terms.

1. Introduction

The spread of an arbitrary $n \times n$ complex matrix $M$ is the diameter of its spectrum; that is,

$$s(M) := \max_{i, j} |\lambda_i - \lambda_j|,$$

where the maximum is taken over all pairs of eigenvalues of $M$. This quantity has been well studied in general (see Reference 11, Reference 16, Reference 22, Reference 33 for details and additional references). Most notably, Johnson, Kumar, and Wolkowicz Reference 16, Theorem 2.1 obtained a lower bound on $s(M)$ for normal matrices $M$, and Mirsky Reference 22, Theorem 2 determined the upper bound

$$s(M) \leq \sqrt{2 \sum_{i,j} |m_{ij}|^2 - \frac{2}{n} \Big| \sum_{i} m_{ii} \Big|^2}$$

for any $n \times n$ matrix $M$, which is tight for any normal matrix with $n - 2$ of its eigenvalues equal to the arithmetic mean of the other two.

The spread of a matrix has also received interest in some particular cases. Consider a simple undirected graph $G$ of order $n$. The adjacency matrix $A$ of $G$ is the $n \times n$ matrix whose rows and columns are indexed by the vertices of $G$, with entries satisfying $A_{u,v} = 1$ if $\{u, v\} \in E(G)$ and $A_{u,v} = 0$ otherwise. This matrix is real and symmetric, and so its eigenvalues are real and can be ordered $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$. When considering the spread of the adjacency matrix of some graph $G$, the spread is simply the distance between $\lambda_1$ and $\lambda_n$, denoted by

$$s(G) := \lambda_1 - \lambda_n.$$

In this instance, $s(G)$ is referred to as the spread of the graph $G$.
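For concreteness, the spread of a graph can be computed directly from the eigenvalues of its adjacency matrix. The following minimal numpy sketch (the function and the example graph are ours, not taken from the paper) does this for the 5-cycle.

```python
import numpy as np

def spread(A):
    """Spread of a real symmetric matrix: largest minus smallest eigenvalue."""
    eigenvalues = np.linalg.eigvalsh(A)   # returned in ascending order
    return eigenvalues[-1] - eigenvalues[0]

# Adjacency matrix of the 5-cycle C_5, whose eigenvalues are 2*cos(2*pi*k/5).
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1

print(spread(A))   # 2 - 2*cos(4*pi/5), approximately 3.618
```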

In Reference 13, Gregory, Hershkowitz, and Kirkland investigated a number of properties regarding the spread of a graph, determined upper and lower bounds on $s(G)$, and made two key conjectures. Let us denote the maximum spread over all $n$-vertex graphs by $s(n)$, the maximum spread over all $n$-vertex graphs of size $m$ by $s(n, m)$, and the maximum spread over all $n$-vertex bipartite graphs of size $m$ by $s_b(n, m)$. The join $G \vee H$ of two vertex-disjoint graphs $G$ and $H$ is the graph containing both $G$ and $H$, and all edges between the two. The complement $\overline{G}$ of a graph $G$ is the graph on $V(G)$ containing only the edges not in $G$. Let $K_k$ be the clique of order $k$, and let $K_k \vee \overline{K_{n-k}}$ be the join of the clique $K_k$ (the graph with all possible edges included) and the independent set $\overline{K_{n-k}}$ (the graph with no edges). We say a graph is spread extremal if it has spread $s(n)$. The conjectures addressed in this article are as follows.

Conjecture 1 (Reference 13, Conjecture 1.3).

For any positive integer $n$, the graph of order $n$ with maximum spread is $K_{\lfloor 2n/3 \rfloor} \vee \overline{K_{\lceil n/3 \rceil}}$; that is, $s(n) = s(K_{\lfloor 2n/3 \rfloor} \vee \overline{K_{\lceil n/3 \rceil}})$, and this value is attained only by $K_{\lfloor 2n/3 \rfloor} \vee \overline{K_{\lceil n/3 \rceil}}$.

Conjecture 2 (Reference 13, Conjecture 1.4).

If $G$ is a graph with $n$ vertices and $m \leq \lfloor n^2/4 \rfloor$ edges attaining the maximum spread $s(n, m)$, then $G$ must be bipartite. That is, $s_b(n, m) = s(n, m)$ for all $m \leq \lfloor n^2/4 \rfloor$.

Conjecture 1 is referred to as the Spread Conjecture, and Conjecture 2 is referred to as the Bipartite Spread Conjecture. Much of what is known about Conjecture 1 is contained in Reference 13, but the reader may also see Reference 29 for a description of the problem and references to other work. In this paper, we resolve both conjectures. We prove the Spread Conjecture for all $n$ sufficiently large, prove an asymptotic version of the Bipartite Spread Conjecture, and provide an infinite family of counterexamples to illustrate that our asymptotic version of the Bipartite Spread Conjecture is tight up to lower-order error terms. These results are given by Theorems 1.1, 1.2, and 1.3.
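As a quick illustration of the conjectured extremal family (a hedged numpy sketch; the helper names are ours, and the brute-force search below ranges only over joins of a clique with an independent set, not over all graphs), one can check numerically which clique size maximizes the spread within this family:

```python
import numpy as np

def join_clique_independent(n, k):
    """Adjacency matrix of the join of a clique on k vertices with an
    independent set on n - k vertices."""
    A = np.ones((n, n)) - np.eye(n)   # complete graph K_n
    A[k:, k:] = 0                     # delete all edges inside the last n - k vertices
    return A

def spread(A):
    ev = np.linalg.eigvalsh(A)
    return ev[-1] - ev[0]

n = 30
best_k = max(range(1, n), key=lambda k: spread(join_clique_independent(n, k)))
print(best_k)   # 20, which equals 2n/3 for n = 30
```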

Theorem 1.1.

There exists a constant $N$ so that the following holds: Suppose $G$ is a graph on $n \geq N$ vertices with maximum spread. Then $G$ is the join of a clique on $\lfloor 2n/3 \rfloor$ vertices and an independent set on $\lceil n/3 \rceil$ vertices.

Theorem 1.2.

For all $n, m$ satisfying $m \leq \lfloor n^2/4 \rfloor$,

Theorem 1.3.

For any , there exists some such that

for all and some depending on .

The proof of Theorem 1.1 constitutes the main subject of this work. The general technique consists of showing that a spread-extremal graph has certain desirable properties, solving an analogous problem for graph limits, and then using this result to say something about the Spread Conjecture for sufficiently large $n$. For the interested reader, we state the analogous graph limit result in the language of functional analysis (see Reference 18 for an introduction to graph limits).

Theorem 1.4.

Let $W : [0,1]^2 \to [0,1]$ be a Lebesgue-measurable function such that $W(x, y) = W(y, x)$ for a.e. $(x, y) \in [0,1]^2$, and let $A_W$ be the kernel operator on $L^2[0,1]$ associated with $W$. For all unit functions $f, g \in L^2[0,1]$,

$$\langle f, A_W f \rangle - \langle g, A_W g \rangle \leq \frac{2}{\sqrt{3}}.$$

Moreover, equality holds if and only if there exists a measure-preserving transformation $\sigma$ of $[0,1]$ such that for a.e. $(x, y) \in [0,1]^2$,

$$W(\sigma(x), \sigma(y)) = \begin{cases} 0 & \text{if } (x, y) \in [2/3, 1]^2, \\ 1 & \text{otherwise.} \end{cases}$$
The bound of $2/\sqrt{3}$ in Theorem 1.4 is the scaled limit of the spread $s(n)/n$, and the extremal function corresponds (up to a set of measure zero) to a scaled limit of the conjectured spread-extremal graphs $K_{\lfloor 2n/3 \rfloor} \vee \overline{K_{\lceil n/3 \rceil}}$. In addition to being a key ingredient in the proof of Theorem 1.1, Theorem 1.4 also immediately implies a result for arbitrary symmetric nonnegative matrices.

Corollary 1.5.

Let be an symmetric nonnegative matrix. Then

and

Corollary 1.5, paired with the lower bound provided by graph , implies that for all , i.e. the spread conjecture is true for all up to an additive factor. Furthermore, Corollary 1.5 implies that the maximum spread of a symmetric matrix is exactly for and is within of for . The second part of Corollary 1.5 gives a bound on the magnitude of off-diagonal entries of a nonnegative matrix under a unitary change of basis, and is also tight for (and tight up to for ).

The proof of Theorem 1.1 can be found in Sections 2 through 6, with certain technical details reserved for Appendices A and B. We provide an in-depth overview of the proof of Theorem 1.1 in Subsection 1.1. By comparison, the proofs of Theorems 1.2 and 1.3 are surprisingly short, making use of the theory of equitable decompositions and a well-chosen class of counterexamples. Their proofs can be found in Section 7. Finally, in Section 8, we discuss further questions and possible future avenues of research.

1.1. High-level outline of spread proof

Here, we provide a concise, high-level description of our asymptotic proof of the Spread Conjecture. The proof itself is quite involved, making use of interval arithmetic and a number of fairly complicated symbolic calculations, but conceptually, it is quite intuitive. It consists of four main steps.

Step 1 (Graph-theoretic results).

In Section 2, we observe a number of important structural properties of any graph that maximizes spread for a given order $n$. In particular, we show that

any graph that maximizes spread must be the join of two threshold graphs (Lemma 2.1),

both graphs in this join have order linear in $n$ (Lemma 2.2),

any unit eigenvectors $\mathbf{x}$ and $\mathbf{z}$ corresponding to $\lambda_1$ and $\lambda_n$ have infinity norms of order $n^{-1/2}$ (Lemma 2.3),

the quantities $\lambda_1 x_u^2 - \lambda_n z_u^2$, $u \in V(G)$, are all nearly equal, up to a term of order $1/n$ (Lemma 2.4).

This last structural property serves as the backbone of our proof. In addition, we note that, by a tensor argument, an asymptotic upper bound for $s(n)$ implies a bound for all $n$.

Step 2 (Graphons and a finite-dimensional eigenvalue problem).

In Sections 3 and 4, we make use of graphons to understand how spread-extremal graphs behave as tends to infinity. Section 3 consists of a basic introduction to graphons, and a translation of the graph results of Step 1 to the graphon setting. In particular, we prove the graphon analogue of the spread-extremal graph properties that

vertices $u$ and $v$ are adjacent if and only if $x_u x_v - z_u z_v \geq 0$ (Lemma 3.6),

the quantities $\lambda_1 x_u^2 - \lambda_n z_u^2$, $u \in V(G)$, are all nearly equal (Lemma 3.7).

Next, in Section 4, we show that the spread-extremal graphon for our problem takes the form of a particular stepgraphon with a finite number of blocks (Theorem 4.1). Through an averaging argument, we note that the spread-extremal graphon takes the form of a stepgraphon with a fixed structure of symmetric blocks, illustrated below (black equals one, white equals zero).

[Stepgraphon block pattern (black = 1, white = 0), with row/column blocks 1-4 and 5-7 forming the two groups:
1 1 1 1 | 1 1 1
1 1 1 0 | 1 1 1
1 1 0 0 | 1 1 1
1 0 0 0 | 1 1 1
1 1 1 1 | 1 1 0
1 1 1 1 | 1 0 0
1 1 1 1 | 0 0 0]

The lengths $\alpha_1, \dots, \alpha_7$ of each row and column in the spread-extremal stepgraphon are unknown. For any choice of lengths $\alpha = (\alpha_1, \dots, \alpha_7)$, we can associate a matrix whose spread is identical to that of the associated stepgraphon pictured above. Let $B$ be the $7 \times 7$ matrix with $B_{ij}$ equal to the value of the above stepgraphon on block $(i, j)$, and $D = \operatorname{diag}(\alpha_1, \dots, \alpha_7)$ be a diagonal matrix with $\alpha$ on the diagonal. Then the matrix $D^{1/2} B D^{1/2}$ has spread equal to the spread of the associated stepgraphon.
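A hedged numpy sketch of this reduction is given below. The 0/1 pattern B transcribes the pixel picture above, and the particular weight vector in the example (weight 2/3 on the first block and 1/3 on the last, an index choice of ours made only for illustration) shows how a choice of block lengths determines a spread.

```python
import numpy as np

# 0/1 block pattern of the stepgraphon, transcribed from the pixel picture above.
B = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 1, 1],
    [1, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 0, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 0, 0, 0],
], dtype=float)

def stepgraphon_spread(alpha, B=B):
    """Spread of the stepgraphon with block widths alpha and pattern B,
    computed as the spread of D^{1/2} B D^{1/2} with D = diag(alpha)."""
    d = np.sqrt(np.asarray(alpha, dtype=float))
    M = d[:, None] * B * d[None, :]
    ev = np.linalg.eigvalsh(M)
    return ev[-1] - ev[0]

# Example: put weight 2/3 on block 1 and 1/3 on block 7 (all other widths zero).
print(stepgraphon_spread([2/3, 0, 0, 0, 0, 0, 1/3]))   # approximately 1.1547 = 2/sqrt(3)
```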

Step 3 (Computer-assisted proof of a finite-dimensional eigenvalue problem).

In Section 5, we show that the optimizing choice of is, without loss of generality, given by , , and all other (Theorem 5.1). This is exactly the limit of the conjectured spread-extremal graph as tends to infinity. The proof of this fact is extremely technical, and relies on a computer-assisted proof using both interval arithmetic and symbolic computations. This is the only portion of the proof (of Theorem 1.1) that requires the use of interval arithmetic. Though not a proof, in Figure 1 we provide intuitive visual justification of this result. In this figure, we provide contour plots resulting from numerical computations of the spread of the above matrix for various values of . The numerical results suggest that the block stepgraphon with lengths and is indeed optimal. See Figure 1 and the associated caption for details.

The actual proof of this fact consists of the following steps:

we reduce the possible choices of nonzero from to different cases (Lemma A.2),

using eigenvalue equations, the graphon versions of all nearly equal, and interval arithmetic, we prove that, of the cases, only the cases

can produce a spread-extremal stepgraphon (Lemma 5.2),

prove that the case cannot be spread extremal, using basic results from the theory of cubic polynomials and computer-assisted symbolic calculations (Lemma 5.4).

This proves that the spread-extremal graphon is a stepgraphon that, without loss of generality, takes value zero on the block and one elsewhere (Theorem 1.4/Theorem 5.1).

Step 4 (From graphons to an asymptotic proof of the spread conjecture).

Finally, in Section 6, we convert our result for the spread-extremal graphon to a statement for graphs. This process consists of two main parts:

using our graphon theorem (Theorem 5.1), we show that any spread-extremal graph takes the form for , , and (Lemma 6.2), i.e. any spread-extremal graph is equal up to a set of vertices to the conjectured optimal graph ,

we show that, for sufficiently large, the spread of , , is maximized when (Lemma 6.3).

Together, these two results complete our proof of the spread conjecture for sufficiently large (Theorem 1.1).

2. Properties of spread-extremal graphs

In this section, we review what has already been shown about spread-extremal graphs ($n$-vertex graphs with spread $s(n)$) in Reference 13. We then prove a number of properties of spread-extremal graphs and of the eigenvectors associated with their maximum and minimum eigenvalues.

Let $G$ be a graph, and let $A$ be the adjacency matrix of $G$, with eigenvalues $\lambda_1 \geq \cdots \geq \lambda_n$. For unit vectors $\mathbf{x}, \mathbf{z} \in \mathbb{R}^n$, we have

$$\mathbf{x}^T A \mathbf{x} \leq \lambda_1 \quad \text{and} \quad \mathbf{z}^T A \mathbf{z} \geq \lambda_n.$$

Hence (as observed in Reference 13), the spread of a graph can be expressed as

$$s(G) = \max_{\mathbf{x}, \mathbf{z}} \left( \mathbf{x}^T A \mathbf{x} - \mathbf{z}^T A \mathbf{z} \right), \tag{1}$$

where the maximum is taken over all unit vectors $\mathbf{x}, \mathbf{z} \in \mathbb{R}^n$. This maximum is attained only for orthonormal eigenvectors corresponding to the eigenvalues $\lambda_1$ and $\lambda_n$, respectively. We refer to such a pair of vectors as extremal eigenvectors of $G$. For any two vectors $\mathbf{x}$, $\mathbf{z}$ in $\mathbb{R}^n$, let $G(\mathbf{x}, \mathbf{z})$ denote the graph for which distinct vertices $u$ and $v$ are adjacent if and only if $x_u x_v - z_u z_v \geq 0$. As noted in Reference 13, Proposition 3.3, any spread-extremal graph must be connected (as the spread of the join of nontrivial vertex-disjoint graphs is strictly greater than that of their disjoint union). Then from the above, there is some graph $G(\mathbf{x}, \mathbf{z})$ that is a spread-extremal graph, with $\mathbf{x}$, $\mathbf{z}$ orthonormal and $\mathbf{x}$ positive Reference 13, Lemma 3.5. By the Perron-Frobenius Theorem, $\lambda_1$ is a simple eigenvalue. However, $\lambda_n$ may be repeated, and so $\mathbf{z}$ may not be unique up to normalization.
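The variational description above is easy to check numerically. The following hedged numpy sketch (names ours) verifies, for a small random graph, that the spread equals the difference of the two quadratic forms at the extremal eigenvectors, and that other unit vectors do no better.

```python
import numpy as np

rng = np.random.default_rng(0)

# Adjacency matrix of a small random graph.
n = 8
A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
A = A + A.T

evals, evecs = np.linalg.eigh(A)           # ascending eigenvalues
x, z = evecs[:, -1], evecs[:, 0]           # extremal unit eigenvectors

spread_from_eigenvalues = evals[-1] - evals[0]
spread_from_quadratic_forms = x @ A @ x - z @ A @ z
print(np.isclose(spread_from_eigenvalues, spread_from_quadratic_forms))   # True

# Any other pair of unit vectors gives at most this value.
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
print(u @ A @ u - v @ A @ v <= spread_from_eigenvalues + 1e-12)           # True
```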

In addition, we enhance Reference 13, Lemmas 3.4 and 3.5 using some helpful definitions and the language of threshold graphs. Whenever is understood, let and . In addition, let , , be the subgraph induced by , i.e., the graph with vertex set and containing all edges of between vertices in .

For our purposes, we say that is a threshold graph if and only if there exists a function such that for all distinct , if and only if .⁠Footnote1 Here, is a threshold function for (with as its threshold). The following detailed lemma shows that any spread-extremal graph is the join of two threshold graphs with threshold functions that can be made explicit.

1

Here, we take the usual convention that for all , .

Lemma 2.1.

Let , and suppose is an -vertex graph such that . Denote by and extremal unit eigenvectors for . Then

(i)

For any two vertices of , and are adjacent whenever and and are nonadjacent whenever .

(ii)

For any distinct , .

(iii)

Let , and let and . Then .

(iv)

For each , is a threshold graph with threshold function defined on all by

Proof.

Suppose is an -vertex graph such that , and write for its adjacency matrix. Item i is equivalent to Lemma 3.4 from Reference 13. For completeness, we include a proof. By Equation 1 we have that

where the maximum is taken over all unit vectors of length . If and , then , a contradiction. And if and , then , a contradiction. So Item i holds.

For a proof of Item ii, suppose and denote by the graph formed by adding or deleting the edge from . With denoting the adjacency matrix of ,

so each inequality is an equality. It follows that are eigenvectors for . Furthermore, without loss of generality, we may assume that . In particular, there exists some such that

So . Let . By the above equation, and either or . To find a contradiction, it is sufficient to note that is a connected graph with Perron-Frobenius eigenvector . Indeed, let and let . Then for any and any , , and by Item i, . So is connected. This completes the proof of Item ii.

Now, we prove Item iii. To see that , note by Items i and ii, for all distinct , if and only if , and otherwise, and . To see that , note that for any and any , .

Finally, we prove Item iv. Suppose are distinct vertices such that either or . Allowing the possibility that , the following equivalence holds:

Since have the same sign, Item iv holds. This completes the proof.

From Reference 21, we recall the following useful characterization in terms of “nesting” neighborhoods: is a threshold graph if and only if there exists a numbering of such that for all , if , implies that . Given this ordering, if is the largest natural number such that , then the set induces a clique and the set induces an independent set.
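This nesting characterization is straightforward to test. Below is a small self-contained sketch (our own helper, not code from the paper) that checks whether the neighborhoods of a graph, taken in nonincreasing order of degree, are nested.

```python
def is_threshold(adj):
    """Test the nesting-neighborhood characterization of threshold graphs.
    adj maps each vertex to the set of its neighbors.  The graph is threshold
    iff, listing vertices by nonincreasing degree, every later vertex has its
    neighborhood (apart from the earlier vertex) contained in that of every
    earlier vertex."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    for i, u in enumerate(order):
        for v in order[i + 1:]:
            if not (adj[v] - {u}) <= (adj[u] - {v}):
                return False
    return True

# The join of K_2 with an independent set on two vertices is a threshold graph;
# the 4-cycle is not.
diamond = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1}, 3: {0, 1}}
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_threshold(diamond), is_threshold(c4))   # True False
```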

Lemma 2.2 shows that both and have linear size.

Lemma 2.2.

If is a spread-extremal graph, then both and have size .

Proof.

We will show that and both have size at least . First, since is spread-extremal, it has spread more than and hence has smallest eigenvalue . Without loss of generality, for the remainder of this proof we will assume that , and that is a vertex satisfying . By way of contradiction, assume that .

If , then we have

contradicting that . Therefore, assume that . Then

This gives , a contradiction.

Lemma 2.3.

If and are unit eigenvectors for and , then and .

Proof.

During this proof we will assume that and are vertices satisfying and and without loss of generality that . We will use the weak estimates that and . Define sets

We have the estimates

and so and . To complete the proof, it suffices to show that and both have size .

We now give a lower bound on the sizes of and using the eigenvalue-eigenvector equation and the weak bounds on and .

giving that . Similarly,

and so . This implies that and .

Lemma 2.4.

Assume that and are unit vectors. Then there exists a constant such that for any pair of vertices and , we have

Proof.

Let and be vertices, and create a graph by deleting and cloning . That is, and

Note that . Let be the adjacency matrix of . Define two vectors and by

and

Then and . Similarly,

and

By Equation 1,

Rearranging terms, we obtain

By Lemma 2.3, we have that , , , and are all , and so the right-hand side of the previous inequality is . It follows that

for some absolute constant . Rearranging terms gives the desired result.

3. The spread-extremal problem for graphons

Graphons (or graph functions) are analytical objects that may be used to study the limiting behavior of large, dense graphs. They were originally introduced in Reference 6 and Reference 19.

3.1. Introduction to graphons

Consider the set of all bounded symmetric measurable functions (by symmetric, we mean for all ). A function is called a stepfunction if there is a partition of into subsets such that is constant on every block . Every graph has a natural representation as a stepfunction in taking values either 0 or 1 (such a graphon is referred to as a stepgraphon). In particular, given a graph on vertices indexed , we can define a measurable set as

This represents the graph as a bounded symmetric measurable function that takes value on and everywhere else. For a measurable subset , we will use to denote its Lebesgue measure.

This representation of a graph as a measurable subset of lends itself to a visual presentation sometimes referred to as a “pixel picture”; see, for example, Figure 2 for two representations of a bipartite graph as a measurable subset of . Clearly, this indicates that such a representation is not unique; neither is the representation of a graph as a stepfunction. Using an equivalence relation on derived from the so-called cut metric, we can identify graphons that are equivalent up to relabeling, and up to any differences on a set of measure zero (i.e. equivalent almost everywhere).
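The stepfunction representation is simple to implement. The sketch below (names ours) evaluates the stepfunction of a graph at a point of the unit square by looking up the corresponding adjacency-matrix entry, with vertex $i$ of an $n$-vertex graph occupying the interval $[i/n, (i+1)/n)$.

```python
def graph_to_stepfunction(A):
    """Return the {0,1}-valued stepfunction on [0,1]^2 representing a graph
    with adjacency matrix A (given as a list of lists)."""
    n = len(A)
    def W(x, y):
        i = min(int(n * x), n - 1)
        j = min(int(n * y), n - 1)
        return A[i][j]
    return W

# Path on 3 vertices: edges {0,1} and {1,2}.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
W = graph_to_stepfunction(A)
print(W(0.1, 0.5), W(0.1, 0.9))   # 1 (edge 0-1), 0 (non-edge 0-2)
```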

For all symmetric, bounded Lebesgue-measurable functions , we let

Here, is referred to as the cut norm. Next, we can also define a semidistance on as follows. First, we define the weak isomorphism of graphons. Let be the set of all measure-preserving functions on . For every and every , define by

for a.e. . Now for any , let

Define the equivalence relation on as follows: for all , if and only if . Furthermore, let be the quotient space of under . Note that induces a metric on . Crucially, by Reference 20, Theorem 5.1, is a compact metric space.

Given , we define the Hilbert-Schmidt operator by

for all and a.e. .

Since is symmetric and bounded, is a compact Hermitian operator. In particular, has a discrete, real spectrum whose only possible accumulation point is (cf. Reference 5), and so the maximum and minimum eigenvalues exist. Let and be the maximum and minimum eigenvalues of , respectively, and define the spread of as

By the Min-Max Theorem, we have that

and

Both and are continuous functions with respect to . In particular, we have the following.

Theorem 3.1 (cf. Reference 6, Theorem 6.6 or Reference 18, Theorem 11.54).

Let be a sequence of graphons converging to with respect to . Then as ,

If then and . By compactness, we may consider the optimization problem on the factor space

and there is a that attains the maximum. Since every graph is represented by , this allows us to give an upper bound for in terms of . Indeed, by replacing the eigenvectors of with their corresponding stepfunctions, we can show Proposition 3.2.

Proposition 3.2.

Let be a graph on vertices. Then

Proposition 3.2 implies that for all . Combined with Theorem 1.4, this gives Corollary 3.3 (a similar argument implies Corollary 1.5).

Corollary 3.3.

For all , .

This can be proved more directly using Theorem 1.1 and taking tensor powers.

3.2. Properties of spread-extremal graphons

Our main objective in the following two sections is to solve the maximum spread problem for graphons, in order to produce a tight estimate for . In this subsection, we prove some preliminary results that are largely a translation of what is known in the graph setting (see Section 2). First, we define what it means for a graphon to be connected, and show that spread-extremal graphons must be connected. We then prove a standard corollary of the Perron-Frobenius theorem. Finally, we prove graphon versions of Lemma 2.1 and Lemma 2.4.

Let and be graphons, and let be positive real numbers with . We define the direct sum of and with weights and , denoted , as follows. Let and be the increasing affine maps that send and to , respectively. Then for all , let

A graphon is connected if is not weakly isomorphic to a direct sum where . Equivalently, is connected if there does not exist a measurable subset of positive measure such that for a.e. .

Proposition 3.4.

Suppose are graphons and are positive real numbers summing to . Let . Then as multisets,

Moreover, with equality if and only if or is the all-zeroes graphon.

Proof.

For convenience, let for each and . The first claim holds simply by considering the restriction of eigenfunctions to the intervals and .

For the second claim, we first write , where . Let for each and . Clearly for each and . Moreover, . Since , with equality if and only if either or equals . So the desired claim holds.

Furthermore, the following basic corollary of the Perron-Frobenius theorem holds. For completeness, we prove it here.

Proposition 3.5.

Let be a connected graphon, and write for an eigenfunction corresponding to . Then is nonzero with constant sign a.e.

Proof.

Let . Since

it follows without loss of generality that a.e. on . Let . Then for a.e. ,

Since on , it follows that a.e. on . Clearly . If then the desired claim holds, so without loss of generality, . It follows that is disconnected, a contradiction to our assumption, which completes the proof.

We may now prove a graphon version of Lemma 2.1.

Lemma 3.6.

Suppose is a graphon achieving maximum spread, and let be eigenfunctions for the maximum and minimum eigenvalues for , respectively. Then the following claims hold:

(i)

For a.e. ,

(ii)

for a.e. .

Proof.

We proceed in the following order:

Prove Item i holds for a.e. such that . We will call this Item i*.

Prove Item ii.

Deduce Item i also holds.

By Propositions 3.4 and 3.5, we may assume without loss of generality that a.e. on . For convenience, we define the quantity . To prove Item i*, we first define a graphon by

Then by inspection,

Since maximizes spread, both integrals in the last line must be , and hence Item i* holds.

Now, we prove Item ii. For convenience, we define to be the set of all pairs so that . Now let be any graphon that differs from only on . Then

Since , and are eigenfunctions for , and we may write and for the corresponding eigenvalues. Now, we define

Similarly, we define

Since and are orthogonal,

By definition of , we have that for a.e. , . In particular, since for a.e. , then a.e. has . So by letting

has measure .

First, let us fix to be the graphon defined by

For this choice of ,

Clearly and are nonnegative functions, so for a.e. . Since and are positive for a.e. , for a.e. on .

If instead we fix to be for all , it follows by a similar argument that for a.e. . So has measure . Repeating the same argument on , we similarly conclude that has measure . This completes the proof of Item ii.

Finally we note that Items i* and ii together imply Item i.

From here, it is easy to see that any graphon maximizing the spread is a join of two threshold graphons. Next we prove the graphon version of Lemma 2.4.

Lemma 3.7.

If is a graphon achieving the maximum spread with corresponding eigenfunctions , then almost everywhere.

Proof.

We will use the notation to denote that satisfies . Let be an arbitrary homeomorphism that is orientation-preserving in the sense that and . Then is a continuous strictly monotone increasing function which is differentiable almost everywhere. Now let , and . Using the substitutions and ,

Similarly, .

Note however that the norms of may not be . Indeed using the substitution ,

We exploit this fact as follows. Suppose are disjoint subintervals of of the same positive length , and for any sufficiently small (in terms of ), let be the (unique) piecewise linear function that stretches to length , shrinks to length , and shifts only the elements in between and . Note that for a.e. ,

Again with the substitution ,

The same equality holds for instead of . After normalizing and , by optimality of , we get a difference of Rayleigh quotients as

as . It follows that for all disjoint intervals of the same length, the corresponding integrals are the same. Taking finer and finer partitions of , it follows that the integrand is constant almost everywhere. Since the average of this quantity over all is , the desired claim holds.

4. From graphons to stepgraphons

The main result of this section is as follows.

Theorem 4.1.

Suppose maximizes . Then is a stepfunction taking values and of the following form

[Stepgraphon block pattern (black = 1, white = 0), with row/column blocks 1-4 and 5-7 forming the two groups:
1 1 1 1 | 1 1 1
1 1 1 0 | 1 1 1
1 1 0 0 | 1 1 1
1 0 0 0 | 1 1 1
1 1 1 1 | 1 1 0
1 1 1 1 | 1 0 0
1 1 1 1 | 0 0 0]

Furthermore, the internal divisions separate according to the sign of the eigenfunction corresponding to the minimum eigenvalue of .

We begin Section 4.1 by mirroring the argument in Reference 30, which proved a conjecture of Nikiforov regarding the largest eigenvalue of a graph and its complement. There, Terpai showed that performing two operations on graphons leads to a strict increase in the corresponding objective. Furthermore, based on previous work of Nikiforov from Reference 24, the conjecture for graphs reduced directly to maximizing the analogous quantity for graphons. Using these operations, Terpai Reference 30 reduced the problem to a stepgraphon and then completed the proof by hand.

In our case, we are not so lucky and are left with a stepgraphon after performing similar but more technical operations, detailed in this section. In order to reduce to a stepgraphon, we make use of interval arithmetic (see Section 5.2 and Appendices A and B). Furthermore, our proof requires an additional technical argument to translate the result for graphons (Theorem 5.1) to our main result for graphs (Theorem 1.1). In Section 4.2, we prove Theorem 4.1.

4.1. Averaging

For convenience, we introduce some terminology. For any graphon with -eigenfunction , we say that is typical (with respect to and ) if

Note that a.e. is typical. Additionally if is measurable with positive measure, then we say that is average (on , with respect to and ) if

Given , and as above, we define the function by setting

Clearly . Additionally, we define the graphon by setting

In the graph setting, this is analogous to replacing with an independent set whose vertices are clones of . Lemma 4.2 indicates how this cloning affects the eigenvalues.

Lemma 4.2.

Suppose is a graphon with a -eigenfunction , and suppose there exist disjoint measurable subsets of positive measures and , respectively. Let . Moreover, suppose a.e. on . Additionally, suppose is typical and average on , with respect to and . Let and . Then for a.e. ,

Furthermore,

Proof.

We first prove Equation 2. Note that for a.e. ,

as desired. Now note that for a.e. ,

So again, the claim holds and this completes the proof of Equation 2. Now we prove Equation 3. Indeed, by Equation 2,

and this completes the proof of the desired claims.

We have the following useful corollary.

Corollary 4.3.

Suppose with maximum and minimum eigenvalues corresponding respectively to eigenfunctions . Moreover, suppose that there exist disjoint subsets and so that the conditions of Lemma 4.2 are met for with , , , and . Then,

(i)

for a.e. , and

(ii)

is constant on .

Proof.

Without loss of generality, we assume that . Write for the graphon and for the corresponding functions produced by Lemma 4.2. By Proposition 3.5, we may assume without loss of generality that a.e. on . We first prove Item i. Note that

Since and by Lemma 3.6ii, for a.e. such that . Item i follows.

For Item ii, we first note that is a -eigenfunction for . Indeed, if not, then the inequality in Equation 4 holds strictly, a contradiction to the fact that . Again by Lemma 4.2,

for a.e. . Let and . We claim that . Assume otherwise. By Lemma 4.2 and by Cauchy-Schwarz, for a.e.

and by sandwiching, and for a.e. . Since , it follows that for a.e. , as desired.

So we assume otherwise, that . Then for a.e. , and

and since a.e. on , it follows that for a.e. . Altogether, for a.e. . So is disconnected, a contradiction to Proposition 3.4. The desired claim holds.

4.2. Proof of Theorem 4.1

Proof.

For convenience, we write and and let denote the corresponding unit eigenfunctions. By Proposition 3.5, we may assume without loss of generality that .

First, we show without loss of generality that are monotone on the sets and . Indeed, we define a total ordering on as follows. For all and , we let if:

(i)

and , or

(ii)

Item (i) does not hold and , or

(iii)

Item (i) does not hold, , and .

By inspection, the function defined by

is a weak isomorphism between and its entrywise composition with . By invariance of under weak isomorphism, we make the above replacement and write for the replacement eigenfunctions. That is, we are assuming that our graphon is relabeled so that respects .

As above, let and . By Lemma 3.7, and are monotone nonincreasing on . Additionally, and are monotone nonincreasing on . Without loss of generality, we may assume that is of the form from Lemma 3.6. Now we let and . By Lemma 3.6 we have that for almost every and for almost every or . We have used the notation and because the analogous sets in the graph setting form a clique or a stable set respectively. We first prove Claim A.

Claim A.

Except on a set of measure , takes on at most values on , and at most values on .

We first prove Claim A for on . Let be the set of all discontinuities of on the interior of the interval . Clearly consists only of jump-discontinuities. By the Darboux-Froda Theorem, is at most countable and moreover, is a union of at most countably many disjoint intervals . Moreover, is continuous on the interior of each .

We now show that is piecewise constant on the interiors of each . Indeed, let . Since is a -eigenfunction for ,

for a.e. . By continuity of on the interior of , this equation holds everywhere on the interior of . Additionally, since is continuous on the interior of , by the Mean Value Theorem, there exists some in the interior of so that

By Corollary 4.3, is constant on the interior of , as desired.

If , the desired claim holds, so we may assume otherwise. Then there exist distinct . Moreover, equals a constant on the interiors of , , and , respectively. Additionally, since , , and are separated from each other by at least one jump discontinuity, we may assume without loss of generality that . It follows that there exists a measurable subset of positive measure so that

By Corollary 4.3, is constant on , a contradiction. So Claim A holds on . For Claim A on , we may repeat this argument with and interchanged, and and interchanged.

Now we show Claim B.

Claim B.

For a.e. such that , we have that for a.e. , implies that .

We first prove Claim B for a.e. . Suppose . By Lemma 3.6, in this case . Then for a.e. such , by Lemma 3.7, . By Lemma 3.6i, implies that . Since and , . Again by Lemma 3.6i, for a.e. such , as desired. So the desired claim holds for a.e. such that . We may repeat the argument for a.e. to arrive at the same conclusion.

Claim C follows directly from Lemma 3.7.

Claim C.

For a.e. , if and only if , if and only if .

Finally, we show Claim D.

Claim D.

Except on a set of measure , takes on at most values on , and at most values on .

For a proof, we first write so that are disjoint, equals some constant a.e. on , and equals some constant a.e. on . By Lemma 3.7, equals some constant a.e. on and equals some constant a.e. on . By definition of , . Now suppose so that

Then by Lemma 3.6i,

By Claim B, this expression for may take on at most values. So the desired claim holds on . Repeating the same argument, Claim D also holds on .

We are nearly done with the proof of the theorem, as we have now reduced to a stepgraphon. To complete the proof, we show that we may reduce to at most . We now partition , and so that and are constant a.e. on each part as:

,

,

, and

.

Then by Lemma 3.6i, there exists a matrix so that for all ,

,

for a.e. ,

if and only if , and

if and only if .

Additionally, we set and also denote by and the constant values of on each , respectively, for each . Furthermore, by Claim C and Lemma 3.6 we assume without loss of generality that and that . Also by Lemma 3.7, and . Also, by Claim B, no two columns of are identical within the sets and within . Shading black and white, we let

[Block pattern of M (black = 1, white = 0), with row/column groups of sizes 3, 2, 3, 2 and the major division between indices 5 and 6:
1 1 1 1 1 | 1 1 1 1 1
1 1 1 1 0 | 1 1 1 1 1
1 1 1 0 0 | 1 1 1 1 1
1 1 0 0 0 | 1 1 1 1 1
1 0 0 0 0 | 1 1 1 1 1
1 1 1 1 1 | 1 1 1 1 1
1 1 1 1 1 | 1 1 1 1 0
1 1 1 1 1 | 1 1 1 0 0
1 1 1 1 1 | 1 1 0 0 0
1 1 1 1 1 | 1 0 0 0 0]

Therefore, is a stepgraphon with values determined by and the size of each block determined by the .

We claim that and . For the first claim, assume to the contrary that all of are positive and note that there exists some such that

Moreover for some measurable subsets and of positive measure so that with ,

Note that by Lemma 3.7, we may assume that is average on with respect to as well. The conditions of Corollary 4.3 are met for with . Since , this is a contradiction to Corollary 4.3, so the desired claim holds. The same argument may be used to prove that .

We now form the principal submatrix by removing the -th row and column from if and only if . Since , is a stepgraphon with values determined by . Let denote the principal submatrix of corresponding to the indices so that . That is, corresponds to the upper left hand block of . We use red to indicate rows and columns present in but not . When forming the submatrix , we borrow the internal subdivisions which are present in the definition of above to denote where and where (or between and ). Note that this is not the same as what the internal divisions denote in the statement of the theorem. Since , it follows that is a principal submatrix of

[Three 5 x 5 candidate patterns (1 = black, 0 = white, r = a red row or column, i.e. one present in M but not in the submatrix):
1 1 r 1 1     1 1 1 r 1     1 1 1 1 r
1 1 r 1 0     1 1 1 r 0     1 1 1 1 r
r r r r r     1 1 1 r 0     1 1 1 0 r
1 1 r 0 0     r r r r r     1 1 0 0 r
1 0 r 0 0     1 0 0 r 0     r r r r r]

In the second case, columns and are identical in , and in the third case, columns and are identical in . So without loss of generality, is a principal submatrix of one of

[Three 5 x 5 candidate patterns (1 = black, 0 = white, r = red rows/columns):
1 1 r 1 1     1 r 1 r 1     1 r 1 1 r
1 1 r 1 0     r r r r r     r r r r r
r r r r r     1 r 1 r 0     1 r 1 0 r
1 1 r 0 0     r r r r r     1 r 0 0 r
1 0 r 0 0     1 r 0 r 0     r r r r r]

In each case, is a principal submatrix of

[4 x 4 staircase pattern (black = 1, white = 0):
1 1 | 1 1
1 1 | 1 0
1 1 | 0 0
1 0 | 0 0]

An identical argument shows that the principal submatrix of on the indices such that is a principal submatrix of

[4 x 4 staircase pattern (black = 1, white = 0):
1 1 | 1 1
1 1 | 1 0
1 1 | 0 0
1 0 | 0 0]

Finally, we note that . Indeed otherwise the corresponding columns are identical in , a contradiction. So without loss of generality, row and column were also removed from to form . This completes the proof of the theorem.

5. Spread maximum graphons

In this section, we complete the proof of the graphon version of the spread conjecture of Gregory, Hershkowitz, and Kirkland from Reference 13. In particular, we prove Theorem 5.1. For convenience and completeness, we state this result in the following level of detail.

Theorem 5.1.

If is a graphon that maximizes spread, then may be represented as follows. For all ,

Furthermore,

are the maximum and minimum eigenvalues of , respectively, and if are unit eigenfunctions associated to , respectively, then, up to a change in sign, they may be written as follows. For every ,

To help outline our proof of Theorem 5.1, let the spread-extremal graphon have block sizes . Note that the spread of the graphon is the same as the spread of matrix in Figure 3, and so we will optimize the spread of over choices of . Let be the unweighted graph (with loops) corresponding to the matrix.

We proceed in the following steps.

(1)

In Section A.1, we reduce the proof of Theorem 5.1 to cases, each corresponding to a subset of . For each such we define an optimization problem , the solution to which gives us an upper bound on the spread of any graphon in the case corresponding to .

(2)

In Section 5.2, we appeal to interval arithmetic to translate these optimization problems into algorithms. Based on the output of the programs we wrote, we eliminate of the cases. We address the multitude of formulas used throughout and relocate their statements and proofs to Appendix B.1.

(3)

Finally in Section 5.3, we complete the proof of Theorem 5.1 by analyzing the remaining cases. Here, we apply Viète’s Formula for roots of cubic equations and make a direct argument.

For concreteness, we define on the vertex set . Explicitly, the neighborhoods of are defined as:

More compactly, we may note that

5.1. Stepgraphon case analysis

Let be a graphon maximizing spread. By Theorem 4.1, we may assume that is a stepgraphon corresponding to . We will break into cases depending on which of the weights are zero and which are positive. For some of these combinations the corresponding graphons are isomorphic, and in this section we will outline how one can show that we need only consider cases rather than .

We will present each case with the set of indices which have strictly positive weight. Additionally, we will use vertical bars to partition the set of integers according to its intersection with the sets , and . Recall that vertices in block are dominating vertices and vertices in blocks , , and have negative entries in the eigenfunction corresponding to . For example, we use to refer to the case that are all positive and ; see Figure 4.

To give an upper bound on the spread of any graphon corresponding to case we solve a constrained optimization problem. Let and denote the eigenfunction entries for unit eigenfunctions and of the graphon. Then we maximize subject to

The first three constraints say that the weights sum to and that and are unit eigenfunctions. The fourth constraint is from Lemma 3.7. The final two lines of constraints say that and are eigenfunctions for and respectively. Since these equations must be satisfied for any spread-extremal graphon, the solution to this optimization problem gives an upper bound on any spread-extremal graphon corresponding to case . For each case we formulate a similar optimization problem in Appendix A.1.

First, if two distinct blocks of vertices have the same neighborhood, then without loss of generality we may assume that only one of them has positive weight. For example, see Figure 5: in case , blocks and have the same neighborhood, and hence without loss of generality we may assume that only block has positive weight. Furthermore, in this case the resulting graphon could be considered as case or equivalently as case ; the graphons corresponding to these cases are isomorphic. Therefore cases , , and reduce to considering only case .

Additionally, if there is no dominating vertex, then some pairs of cases may correspond to isomorphic graphons and the optimization problems are equivalent up to flipping the sign of the eigenvector corresponding to . For example, see Figure 6, in which cases and reduce to considering only a single one. However, because of how we choose to order the eigenfunction entries when setting up the constraints of the optimization problems, there are some examples of cases corresponding to isomorphic graphons that we solve as separate optimization problems. For example, the graphons corresponding to cases and are isomorphic, but we will consider them separate cases; see Figure 7.

Repeated applications of these three principles show that there are only distinct cases that we must consider. The details are straightforward to verify, see Lemma A.2.

The distinct cases that we must consider are the following, summarized in Figure 8.

5.2. Interval arithmetic

Interval arithmetic is a computational technique which bounds errors that accumulate during computation. For convenience, let $\overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\}$ be the extended real line. To enhance ordinary floating point arithmetic, we replace extended real numbers with unions of intervals which are guaranteed to contain them. Moreover, we extend the basic arithmetic operations $+$, $-$, $\times$, and $\div$ to operations on unions of intervals. This technique has real-world applications in the hard sciences, but has also been used in computer-assisted proofs. For two famous examples, we refer the interested reader to Reference 15 for Hales’ proof of the Kepler Conjecture on optimal sphere packing in $\mathbb{R}^3$, and to Reference 31 for Warwick Tucker’s solution of Smale’s 14th problem on the Lorenz attractor as a strange attractor.

As stated before, we consider extensions of the binary operations , , , and as well as the unary operation defined on to operations on unions of intervals of extended real numbers. For example if , then we may use the following extensions of and :

For , we must address the cases and . Here, we take the extension

where

Additionally, we may let

When endpoints of and include or , the definitions above must be modified slightly in a natural way.
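The following toy Python class (our own sketch, not the paper's code) illustrates the interval extensions of addition, subtraction, and multiplication. It deliberately omits the outward rounding and the division conventions discussed above, both of which a rigorous implementation must handle.

```python
class Interval:
    """A toy closed interval [lo, hi] supporting +, -, and *.
    No outward rounding and no division are implemented here."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.30, 0.31)
y = Interval(-0.52, -0.50)
print(x * y + x)   # encloses every value of a*b + a with a in x and b in y
```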

We use interval arithmetic to prove the strict upper bound for the maximum graphon spread claimed in Theorem 5.1, for any solutions to of the constrained optimization problems stated in Lemma A.2. The constraints in each allow us to derive equations for the variables in terms of each other, and and . For the reader’s convenience, we relocate these formulas and their derivations to Appendix B.1. In the programs corresponding to each set , we find two indices and such that for all , , , and may be calculated, step-by-step, from , , , and . See Table 1 for each set , organized by the chosen values of and .

In the program corresponding to a set , we search a carefully chosen set for values of which satisfy . We first divide into a grid of “boxes”. Starting at depth , we test each box for feasibility by assuming that and that . Next, we calculate , , and for all in interval arithmetic using the formulas from Appendix B. When the calculation detects that a constraint of is not satisfied, e.g., by showing that some , , or lies in an empty interval, or by constraining to a union of intervals which does not contain , then the box is deemed infeasible. Otherwise, the box is split into two boxes of equal dimensions, with the dimension of the cut alternating cyclically.
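Schematically, this box-subdivision search looks as follows; the sketch below uses our own naming and a toy feasibility test in place of the eigenvector constraints of the actual programs.

```python
def refine(box, depth, feasible, max_depth):
    """Subdivide a box (a list of (lo, hi) pairs) and keep the pieces that
    cannot be ruled out.  `feasible` may return False only when the box
    provably contains no solution (e.g. via interval arithmetic)."""
    if not feasible(box):
        return []                       # provably no solution in this box
    if depth == max_depth:
        return [box]                    # stop refining; keep as a candidate
    axis = depth % len(box)             # alternate the dimension of the cut
    lo, hi = box[axis]
    mid = (lo + hi) / 2
    left = box[:axis] + [(lo, mid)] + box[axis + 1:]
    right = box[:axis] + [(mid, hi)] + box[axis + 1:]
    return (refine(left, depth + 1, feasible, max_depth)
            + refine(right, depth + 1, feasible, max_depth))

# Toy constraint: keep boxes that could contain a point with x + y = 1.
def could_sum_to_one(box):
    (xlo, xhi), (ylo, yhi) = box
    return xlo + ylo <= 1 <= xhi + yhi   # interval enclosure of x + y contains 1

survivors = refine([(0.0, 1.0), (0.0, 1.0)], 0, could_sum_to_one, max_depth=6)
print(len(survivors))   # a thin band of boxes around the line x + y = 1 remains
```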

For each , the program has norm constraints, linear eigenvector constraints, elliptical constraints, inequality constraints, and interval membership constraints. By using interval arithmetic, we have a computer-assisted proof of the following result.

Lemma 5.2.

Suppose . Then any solution to attains a value strictly less than .

To better understand the role of interval arithmetic in our proof, consider Example 5.3.

Example 5.3.

Suppose , and is a solution to . We show that . By Proposition B.1, . Using interval arithmetic,

Thus

Since , we have a contradiction.

Example 5.3 illustrates a number of key elements. First, we note that through interval arithmetic, we are able to provably rule out the corresponding region. However, the resulting interval for the quantity is over fifty times bigger than any of the input intervals. This growth in the size of intervals is common, and so, in some regions, fairly small intervals for variables are needed to provably illustrate the absence of a solution. For this reason, using a computer to complete this procedure is ideal, as doing millions of calculations by hand would be untenable.

However, the use of a computer for interval arithmetic brings with it another issue. Computers have limited memory, and therefore cannot represent all numbers in . Instead, a computer can only store a finite subset of numbers, which we will denote by . This set is not closed under the basic arithmetic operations, and so when some operation is performed and the resulting answer is not in , some rounding procedure must be performed to choose an element of to approximate the exact answer. This issue is the cause of roundoff error in floating point arithmetic, and must be treated in order to use computer-based interval arithmetic as a proof.

PyInterval is one of many software packages designed to perform interval arithmetic in a manner which accounts for this crucial feature of floating point arithmetic. Given some , let be the largest satisfying , and be the smallest satisfying . Then, in order to maintain a mathematically accurate system of interval arithmetic on a computer, once an operation is performed to form a union of intervals , the computer forms a union of intervals containing for all . The programs which prove Lemma 5.2 can be found in Reference 27.

5.3. Completing the proof of Theorem 5.1

Finally, we complete the second main result of this paper. We will need Lemma 5.4.

Lemma 5.4.

If is a solution to , then .

We delay the proof of Lemma 5.4 to Appendix A because it is technical. We now proceed with the Proof of Theorem 5.1.

Proof of Theorem 5.1.

Let be a graphon such that . By Lemma 5.2 and Lemma 5.4, is a stepgraphon. Let the weights of the parts be and .

Thus, it suffices to demonstrate the uniqueness of the desired solution and to . Indeed, we first note that with

the quantities and are precisely the eigenvalues of the characteristic polynomial

In particular,

and

Optimizing, it follows that . Calculating the eigenfunctions and normalizing them gives that and their respective eigenfunctions match those from the statement of Theorem 5.1.
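The final two-block optimization is easy to reproduce numerically. In the sketch below (names ours), the stepgraphon equal to 1 everywhere except on the square $[t, 1]^2$ is encoded by its $2 \times 2$ weighted matrix, and a grid search recovers the optimizer $t = 2/3$ with spread $2/\sqrt{3}$.

```python
import numpy as np

def two_block_spread(t):
    """Spread of the stepgraphon that equals 1 off the square [t, 1]^2,
    computed from the associated 2x2 weighted matrix; its characteristic
    polynomial gives the closed form sqrt(t * (4 - 3 * t))."""
    M = np.array([[t, np.sqrt(t * (1 - t))],
                  [np.sqrt(t * (1 - t)), 0.0]])
    ev = np.linalg.eigvalsh(M)
    return ev[-1] - ev[0]

ts = np.linspace(0.01, 0.99, 9801)
best = ts[np.argmax([two_block_spread(t) for t in ts])]
print(best, two_block_spread(best))   # approximately 2/3 and 2/sqrt(3) = 1.1547...
```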

6. From graphons to graphs

In this section, we show that Theorem 5.1 implies Conjecture 1 for all sufficiently large; that is, the solution to the problem of maximizing the spread of a graphon implies the solution to the problem of maximizing the spread of a graph for sufficiently large .

The outline for our argument is as follows. First, we define the spread-maximum graphon as in Theorem 5.1. Let be any sequence where each is a spread-maximum graph on vertices and denote by the corresponding sequence of graphons. We show that, after applying measure-preserving transformations to each , the extreme eigenvalues and eigenvectors of each converge suitably to those of . It follows for sufficiently large that except for vertices, is a join of a clique of vertices and an independent set of vertices (Lemma 6.2). Using results from Section 2, we precisely estimate the extreme eigenvector entries on this set. Finally, Lemma 6.3 shows that the set of exceptional vertices is actually empty, completing the proof.

Before proceeding with the proof, we state Corollary 6.1, a consequence of the Davis-Kahan theorem Reference 10, stated here for graphons.

Corollary 6.1.

Suppose are graphons. Let be an eigenvalue of with a corresponding unit eigenfunction. Let be an orthonormal eigenbasis for with corresponding eigenvalues . Suppose that for all . Then

Before proving Theorem 1.1, we prove the following approximate result. For all nonnegative integers , let .

Lemma 6.2.

For all positive integers , let denote a graph on vertices which maximizes spread. Then for some nonnegative integers such that , , and .

Proof.

Our argument outline is:

(1)

show that the eigenvectors for the spread-extremal graphs resemble the eigenfunctions of the spread-extremal graphon in an sense

(2)

show that with the exception of a small proportion of vertices, a spread-extremal graph is the join of a clique and an independent set

Let and . By Theorem 5.1, the graphon which is the indicator function of the set maximizes spread. Denote by and its maximum and minimum eigenvalues, respectively. For every positive integer , let denote a graph on vertices which maximizes spread, let be any stepgraphon corresponding to , and let and denote the maximum and minimum eigenvalues of , respectively. By Theorems 3.1 and 5.1, and compactness of ,

Moreover, we may apply measure-preserving transformations to each so that without loss of generality, . As in Theorem 5.1, let and be unit eigenfunctions that take values . Furthermore, let be a nonnegative unit -eigenfunction for and let be a -eigenfunction for .

We show that without loss of generality, and in the sense. Since is the only positive eigenvalue of and it has multiplicity , taking , Corollary 6.1 implies that

where the last inequality follows from Lemma 8.11 of Reference 18. Since , this proves the first claim. The second claim follows by replacing with , and with .

Note.

For the remainder of the proof, we will introduce quantities in lieu of writing complicated expressions explicitly. When we introduce a new , we will remark that given sufficiently small, can be made small enough to meet some other conditions.

Let and for all , define

Since

it follows that

For all , let be the subinterval of corresponding to in , and denote by and the constant values of on . For convenience, we define the following discrete analogues of :

Let . By Lemma 2.4 and using the fact that and ,

for all sufficiently large. Let . We next need Claim I, which says that the eigenvector entries of the exceptional vertices behave as if they have neighborhood .

Claim I.

Suppose is sufficiently small and is sufficiently large in terms of . Then for all ,

Indeed, suppose and let

We take two cases, depending on the sign of .

Case A.

.

Recall that . Furthermore, and by assumption, . It follows that for all sufficiently large, , so by Lemma 2.1, . Since is a -eigenfunction for ,

Similarly,

Let . Note that for all , as long as is sufficiently large and is sufficiently small, then

Let . By Equations 5 and 7 and with sufficiently small,

Substituting the values of from Theorem 5.1 and simplifying, it follows that

Let . It follows that if is sufficiently large and is sufficiently small, then

Combining Equations 7 and 8, it follows that with sufficiently small, then

Note that

Since , the second inequality does not hold, which completes the proof of the desired claim.

Case B.

.

Recall that . Furthermore, and by assumption, . It follows that for all sufficiently large, , so by Lemma 2.1, . Since is a -eigenfunction for ,

Similarly,

Let . Note that for all , as long as is sufficiently large and is sufficiently small, then

Let . By Equations 5 and 9 and with sufficiently small,

Substituting the values of from Theorem 5.1 and simplifying, it follows that

Let . It follows that if is sufficiently large and is sufficiently small, then

Combining Equations 7 and 10, it follows that with sufficiently small, then

Again, note that

Since , the second inequality does not hold.

Similarly, note that

Since , the first inequality does not hold, a contradiction. So the desired claim holds.

We now complete the proof of Lemma 6.2 by showing that for all sufficiently large, is the join of an independent set with a disjoint union of a clique and an independent set .

As above, we let be arbitrary. By definition of and and by Equation 6 from Claim I, then for all sufficiently large,

With rows and columns respectively corresponding to the vertex sets , , and , we note the following inequalities:

Let be sufficiently small. Then for all sufficiently large and for all , then if and only if , , or . By Lemma 2.1, since and , the proof is complete.

We have now shown that the spread-extremal graph is of the form where . Lemma 6.3 refines this to show that actually .

Lemma 6.3.

For all nonnegative integers , let . Then for all sufficiently large, the following holds. If is maximized subject to the constraint and , then .

Proof outline. We aim to maximize the spread of subject to . The spread of is the same as the spread of the quotient matrix

We reparametrize with parameters and representing how far away and are proportionally from and , respectively. Namely, and . Then . Hence maximizing the spread of subject to is equivalent to maximizing the spread of the matrix

subject to the constraint that and are nonnegative integer multiples of and . In order to utilize calculus, we instead solve a continuous relaxation of the optimization problem.

As such, consider the following matrix.

Since is diagonalizable, we may let be the difference between the maximum and minimum eigenvalues of . We consider the optimization problem defined for all and all such that and are sufficiently small, by

We show that as long as and are sufficiently small, then the optimum of is attained by

Moreover we show that in the feasible region of , is concave-down in . We return to the original problem by imposing the constraint that and are multiples of . Together these two observations complete the proof of the lemma. Under these added constraints, the optimum is obtained when

Since the details are straightforward but tedious calculus, we delay this part of the proof to Section A.3.

We may now complete the proof of Theorem 1.1.

Proof of Theorem 1.1.

Suppose is a graph on vertices which maximizes spread. By Lemma 6.2, for some nonnegative integers such that where

By Lemma 6.3, if is sufficiently large, then . To complete the proof of the main result, it is sufficient to find the unique maximum of , subject to the constraint that . This is determined in Reference 13 to be the join of a clique on and an independent set on vertices. The interested reader can prove that is the nearest integer to by considering the spread of the quotient matrix

and optimizing the choice of .
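To make this last computation concrete, here is a minimal numerical sketch (not part of the original argument; the choice of n and all identifiers are our own) that brute-forces the best split of vertices between the clique and the independent set by computing the spread directly from the adjacency spectrum; the maximizing value of k should land at the nearest integer to 2n/3.

import numpy as np

def spread_of_join(n, k):
    # Spread of the join of a clique on k vertices with an independent
    # set on n - k vertices, computed from the adjacency spectrum.
    A = np.ones((n, n)) - np.eye(n)   # adjacency matrix of the complete graph K_n
    A[k:, k:] = 0.0                   # delete all edges inside the independent set
    eig = np.linalg.eigvalsh(A)       # eigenvalues in nondecreasing order
    return eig[-1] - eig[0]

n = 30
best_k = max(range(n + 1), key=lambda k: spread_of_join(n, k))
print(best_k, round(2 * n / 3))       # both values should be close to 2n/3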

7. The Bipartite Spread Conjecture

In Reference 13, the authors investigated the structure of graphs that maximize spread over all graphs with a fixed number of vertices and edges . In particular, they proved the upper bound

and noted that equality holds throughout if and only if is the union of isolated vertices and , for some satisfying Reference 13, Theorem 1.5. This led the authors to make Conjecture 2, namely, that if has vertices, edges, and spread , then is bipartite Reference 13, Conjecture 1.4. Recall that is the maximum spread over all graphs with vertices and edges, and , , is the maximum spread over all bipartite graphs with vertices and edges. This conjecture can be equivalently stated as for all and . In this section, we prove an asymptotic form of this conjecture and provide an infinite family of counterexamples to the exact conjecture, which verifies that the error in our asymptotic result is of the correct order of magnitude (Theorem 1.2).

To compute the spread of certain graphs explicitly, we make use of the theory of equitable partitions. In particular, we note that if is an automorphism of , then the quotient matrix of with respect to , denoted by , satisfies , and therefore is at least the spread of (for details, see Reference 9, Section 2.3). Additionally, we require two propositions, one regarding the largest spectral radius of subgraphs of of a given size, and another regarding the largest gap between sizes that correspond to a complete bipartite graph of order at most .
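As a simple sanity check of the quotient-matrix technique (our own illustration, not taken from the paper), one can verify numerically that for a complete bipartite graph the 2-by-2 quotient matrix of the bipartition, which is an equitable partition, already realizes the full spread of the adjacency matrix.

import numpy as np

n1, n2 = 4, 7
A = np.zeros((n1 + n2, n1 + n2))      # adjacency matrix of K_{n1,n2}
A[:n1, n1:] = 1.0
A[n1:, :n1] = 1.0
full_spread = np.ptp(np.linalg.eigvalsh(A))          # lambda_max - lambda_min

Q = np.array([[0.0, n2],                              # quotient matrix over the bipartition
              [n1, 0.0]])
quotient_spread = np.ptp(np.linalg.eigvals(Q).real)

print(full_spread, quotient_spread, 2 * np.sqrt(n1 * n2))   # all three agree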

Let , , be the subgraph of resulting from removing edges all incident to some vertex in the larger side of the bipartition (if , the vertex can be from either set). In Reference 17, the authors proved the following result.

Proposition 7.1.

If , then maximizes over all subgraphs of of size .
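To see Proposition 7.1 in action on a toy instance (the parameters below are our own choice, not from the paper), one can brute-force all ways of deleting e = 2 edges from K_{2,3}; the maximum spectral radius should be attained by the deletion that concentrates both missing edges at a single vertex of the larger side.

import itertools
import numpy as np

n1, n2, e = 2, 3, 2
edges = [(i, n1 + j) for i in range(n1) for j in range(n2)]   # edges of K_{n1,n2}

def lambda1(removed):
    # Largest adjacency eigenvalue of K_{n1,n2} with the given edges removed.
    A = np.zeros((n1 + n2, n1 + n2))
    for (u, v) in edges:
        if (u, v) not in removed:
            A[u, v] = A[v, u] = 1.0
    return np.linalg.eigvalsh(A)[-1]

brute_force = max(lambda1(set(r)) for r in itertools.combinations(edges, e))
one_vertex = lambda1({(i, n1) for i in range(e)})   # both deletions at vertex n1 (larger side)
print(brute_force, one_vertex)                      # the two values should coincide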

We also require estimates regarding the longest sequence of consecutive sizes for which there does not exist a complete bipartite graph on at most vertices and exactly edges. As pointed out by Reference 4, the result follows quickly by induction. However, for completeness, we include a brief proof.

Proposition 7.2.

The length of the longest sequence of consecutive sizes for which there does not exist a complete bipartite graph on at most vertices and exactly edges is zero for and at most for .

Proof.

We proceed by induction. By inspection, for every , , there exists a complete bipartite graph of size and order at most , and so the length of the longest sequence is trivially zero for . When , there is no complete bipartite graph of order at most five with exactly five edges. This is the only such instance for , and so the length of the longest sequence for is one.

Now, suppose that the statement holds for graphs of order at most , for some . We aim to show the statement for graphs of order at most . By our inductive hypothesis, it suffices to consider only sizes and complete bipartite graphs on vertices. We have

When , the difference between the sizes of and is at most

Let be the largest value of satisfying and . Then

and the difference between the sizes of and is at most

Combining these two estimates completes our inductive step, and the proof.
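The quantity estimated in Proposition 7.2 is also easy to examine computationally. The following short script (our own illustration) computes, for a given n, the longest run of consecutive sizes m for which no complete bipartite graph on at most n vertices has exactly m edges; for n = 5 it returns 1, matching the base case above.

def longest_gap(n):
    # Sizes m realizable as a*b with a + b <= n, i.e., by some K_{a,b} on at most n vertices.
    achievable = {a * b for a in range(1, n) for b in range(1, n - a + 1)}
    top = (n // 2) * (n - n // 2)          # largest realizable size (the balanced K_{a,b})
    run = best = 0
    for m in range(1, top + 1):
        run = 0 if m in achievable else run + 1
        best = max(best, run)
    return best

for n in (5, 10, 20, 40):
    print(n, longest_gap(n))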

We are now prepared to prove an asymptotic version of Reference 13, Conjecture 1.4, and provide an infinite class of counterexamples that illustrates that the asymptotic version under consideration is the tightest version of this conjecture possible.

Theorem 7.3 (Restatement of Theorems 1.2 and 1.3).

For all satisfying ,

In addition, for any , there exists some such that

for all and some depending on .

Proof.

The main idea of the proof is as follows. To obtain Inequality 12 (the upper bound on ), we upper bound by using Inequality 11, and we lower bound by the spread of some specific bipartite graph. To obtain Inequality 13 (the lower bound on ) for a specific and , we explicitly compute using Proposition 7.1, and lower bound by the spread of some specific non-bipartite graph.

First, we analyze the spread of , , a quantity that will be used in the proofs of both Inequalities 12 and 13. Let us denote the vertices in the bipartition of by and , and suppose without loss of generality that is not adjacent to . Then

is an automorphism of . The corresponding quotient matrix is given by

and has characteristic polynomial

and, therefore, by the Perron-Frobenius Theorem,

We are now prepared to prove Inequality 12 (the upper bound). For some fixed and , let , where , , and is as small as possible. If , then by Reference 13, Theorem 1.5 (described above), , and we are done. Otherwise, we note that , and so Equation 14 is applicable (in fact, by Proposition 7.2, ). Using the upper bound and Equation 14, we have

To upper bound , we use Proposition 7.2 with and . This implies that

Recall that for all , and so

for . To simplify Inequality 15, we observe that

Therefore,

This completes the proof of Inequality 12 (the upper bound).

Finally, we proceed with the proof of Inequality 13 (the lower bound). Let us fix some , and consider some sufficiently large . Let , where is the smallest number satisfying and (here we require ). Denote the vertices in the bipartition of by and , and consider the graph resulting from adding one edge to between two vertices in the smaller side of the bipartition. Then

is an automorphism of , and

has characteristic polynomial

By matching higher order terms, we obtain

and

Next, we aim to compute , . By Proposition 7.1, is equal to the maximum of over all , . As previously noted, the quantity is given exactly by Equation 14, and so the optimal choice of minimizes

We have

and if , then . Therefore the minimizing is in . The derivative of is given by

For ,

for sufficiently large . This implies that the optimal choice is , and . The characteristic polynomial equals

By matching higher order terms, the extreme root of is given by

and so

and

This completes the proof.

A concrete example where the Bipartite Spread Conjecture fails is as follows.

Example 7.4.

Let and , and consider the graphs , , and (see Figure 9 for a visual representation). We have

Every bipartite graph with vertices and edges is either a subgraph of or , and so, by Proposition 7.1, . The graph is a lower bound for , and so

It is unclear whether this is the smallest counterexample (w.r.t. ) to the Bipartite Spread Conjecture, though we note that in Reference 13 the authors verified the conjecture by computer for all graphs of order . Verifying that Example 7.4 is the smallest counterexample would require computations for , a task that we leave to the motivated reader.

8. Concluding remarks

In this work we provided a proof of the spread conjecture for sufficiently large , a proof of an asymptotic version of the bipartite spread conjecture, and an infinite class of counterexamples illustrating that our asymptotic version of this conjecture is the strongest result possible. There are a number of interesting future avenues of research, some of which we briefly describe below. These avenues consist primarily of considering the spread of more general classes of graphs (for instance, directed graphs, graphs with loops) or considering more general objective functions.

Our proof of the spread conjecture for sufficiently large immediately implies a nearly-tight estimate for the adjacency matrix of undirected graphs with loops, also commonly referred to as symmetric matrices. Given a directed graph , the corresponding adjacency matrix has entry if the arc , and is zero otherwise. In this case, is not necessarily symmetric and may have complex eigenvalues. One interesting question is which digraph of order maximizes the spread of its adjacency matrix, where spread is again defined as the diameter of the spectrum. Is the maximum in this more general setting attained by the same set of graphs as in the undirected case? This problem is interesting for both loopless directed graphs and directed graphs with loops, and the latter is equivalent to asking the above question for the set of all matrices.

Another approach is to restrict ourselves to undirected graphs or undirected graphs with loops, and further consider the competing interests of simultaneously producing a graph with both and large by understanding the trade-off between these two goals. To this end, we propose considering the class of objective functions

When , this function is maximized by the complete bipartite graph and when , this function is maximized by the complete graph . This paper treats the specific case of , but none of the mathematical techniques used in this work rely on this restriction. In fact, the structural graph-theoretic results of Section 2, suitably modified for arbitrary , still hold (see the thesis Reference 32, Section 3.3.1 for this general case). Understanding the behavior of the optimum between these three well-studied choices of is an interesting future avenue of research.

More generally, any linear combination of graph eigenvalues could be optimized over any family of graphs. Many sporadic examples of this problem have been studied. Nikiforov Reference 23 proposed a general framework for it and proved some conditions under which the problem is well behaved. We conclude with some specific instances of the problem that we think deserve consideration.

Given a graph , maximizing over the family of -vertex -free graphs can be thought of as a spectral version of Turán’s problem. Many papers have been written about this problem, which was proposed in generality in Reference 25. We remark that these results can often strengthen classical results in extremal graph theory. Maximizing over the family of triangle-free graphs has been considered in Reference 8 and is related to an old conjecture of Erdős on how many edges must be removed from a triangle-free graph to make it bipartite Reference 12. In general it would be interesting to maximize over the family of -free graphs. When a graph is regular, the difference between and (the spectral gap) is related to the graph’s expansion properties. Aldous and Fill Reference 3 asked for the minimum of over the family of -vertex connected regular graphs. Partial results were given by Reference 1Reference 2Reference 7Reference 14. A nonregular version of the problem was proposed by Stanić Reference 28, who asked for the minimum of over connected -vertex graphs. Finally, maximizing or over the family of -vertex graphs seems to be a surprisingly difficult question and even the asymptotics are not known (see Reference 26).

Appendix A. Technical proofs

A.1. Reduction to 17 cases

Now, we introduce the following specialized notation. For any nonempty set and any labeled partition of , we define the stepgraphon as follows. For all , equals on if and only if is an edge (or loop) of , and otherwise. If where for all , we may write to denote the graphon up to weak isomorphism.

To make the observations from Section 5.1 more explicit, we note that Theorem 4.1 implies that a spread-optimal graphon has the form where is a labeled partition of , , and each is measurable with positive measure. Since is a stepgraphon, its extreme eigenfunctions may be taken to be constant on , for all . With denoting the extreme eigenfunctions for , we may let and be the constant value of and , respectively, on step , for all . Appealing again to Theorem 4.1, we may assume without loss of generality that for all , and for all , implies that . By Lemma 3.7, for each , . Combining these facts, we note that and belong to specific intervals as in Figure 10.

For convenience, we define the following sets and , for all . First, let and . With some abuse of notation, we denote and .

For each , we define the intervals and by

Given that the set and the quantities are clear from context, we label the following equation:

Furthermore when is understood from context, we define the equations

Additionally, we consider the following inequalities. For all and all distinct ,

Finally, for all nonempty , we define the constrained-optimization problem by:

For completeness, we state and prove the following observation.

Proposition A.1.

Let such that and write for the maximum and minimum eigenvalues of , with corresponding unit eigenfunctions . Then for some nonempty set , the following holds. There exists a triple , where is a labeled partition of with parts of positive measure and for all , such that:

(i)

.

(ii)

Allowing the replacement of by and of by , for all , and equal and a.e. on .

(iii)

With for all , is solved by , and .

Proof.

First we prove Item i. By Theorem 4.1 and the definition of , there exist a nonempty set and a labeled partition such that . By merging any parts of measure into some part of positive measure, we may assume without loss of generality that for all . So Item i holds.

For Item ii, the eigenfunctions corresponding to the maximum and minimum eigenvalues of a stepgraphon must be constant on each block by convexity and the Courant-Fischer Min-Max Theorem.

Finally, we prove Item iii. We first show that for all , . By Lemma 3.7,

for all . In particular, either or . By Lemma 3.6, for all , and if and only if . Note that the loops of are and . It follows that for all , if and only if , and , otherwise. Since is positive on , this completes the proof that for all . Similarly, since is positive on and negative on , by inspection for all . Finally, Inequalities 20 follow directly from Lemma 3.6.

Continuing, we note the following. Since is a stepgraphon, if is an eigenvalue of , there exists a -eigenfunction for such that for all , on for some . Moreover for all , since ,

In particular, any solution to is at most . Since are eigenfunctions corresponding to and the eigenvalues , respectively, Equations 18 and 19 hold. Finally, since is a partition of and since , Equation 16 holds. So , and lie in the domain of . This completes the proof of Item iii, and the desired claim.

We enhance Proposition A.1 as follows.

Lemma A.2.

Proposition A.1 holds with the added assumption that .

Proof.

We begin our proof with Claim A.

Claim A.

Suppose and are distinct such that . Then Proposition A.1 holds with the set replacing .

First, we define the following quantities. For all , let , and also let . If , let , and otherwise, let . Additionally, let and, for each , let . By the criteria from Proposition A.1, the domain criterion as well as Equation 17 holds for all . Since we are reusing , the constraint also holds.

It suffices to show that Equation 16 holds, and that Equations 18 and 19 hold for all . To do this, we first note that for all , and on . By definition, and on for all as needed by Claim A. Now suppose . Then and and on the set , matching Claim A. Finally, suppose . Note by definition that and on . Since and , it suffices to prove that and on . We first show that and . Indeed,

and since , . Similarly, . So and on the set .

Finally, we claim that . Indeed, this follows directly from Lemma 3.6 and the fact that . Since is a partition of and since are unit eigenfunctions for , Equation 16 holds, and Equations 18 and 19 hold for all . This completes the proof of Claim A.

Next, we prove Claim B.

Claim B.

If satisfies the criteria of Proposition A.1, then without loss of generality the following holds.

(a)

If there exists some such that , then .

(b)

.

(c)

is one of , and .

(d)

is one of , and .

Since , item a follows from Claim A applied to the pair . Since are orthogonal and is positive on , is positive on a set of positive measure, so item b holds.

To prove item c, we consider three cases. If , then and we may apply Claim A to the pair . If or , then and we may apply Claim A to the pair . If , then and we may apply Claim A to the pair . So item c holds. For item d, we reduce to one of , and in the same fashion. To eliminate the case where , we simply note that since and are orthogonal and is positive on , is negative on a set of positive measure. This completes the proof of Claim B.

After repeatedly applying Claim B, we may replace with one of the cases found in Table 2. Let denote the sets in Table 2. By definition,

Finally, we eliminate the cases in . If , then is a bipartite graphon, hence , a contradiction since .

For the three remaining cases, let be the permutation on defined as follows. For all , and . If is among , , , we apply to in the following sense. Replace with and replace with . By careful inspection, it follows that satisfies the criteria from Proposition A.1. Since , , and , this completes the proof.

A.2. Proof of Lemma 5.4

Let be a solution to .

First, let , and for all , let

As a motivation, suppose , and are part of a solution to . Then with , and are the maximum and minimum eigenvalues of , respectively. By the end of the proof, we show that any solution of has .

To proceed, we prove the following claims.

Claim A.

For all , has two distinct positive eigenvalues and one negative eigenvalue.

Since is diagonalizable, it has real eigenvalues which we may order as . Since , has an odd number of negative eigenvalues. Since , it follows that . Finally, note by the Perron-Frobenius Theorem that . This completes the proof of Claim A.

Next, we define the following quantities, treated as functions of for all . For convenience, we suppress the argument in most places. Let be the characteristic polynomial of . By inspection,

Continuing, let

Let be the difference between the maximum and minimum eigenvalues of . We show Claim B.

Claim B.

For all ,

Moreover, is analytic on .

Indeed, by Viète’s Formula, using the fact that has exactly distinct real roots, the quantities are analytic on . Moreover, the eigenvalues of are where, for all ,

Moreover, are analytic on . For all , let

For all , note the trigonometric identities

By inspection, for all ,

Since and , the claimed equality holds. Since are analytic, is analytic on . This completes the proof of Claim B.
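For the reader's convenience, we recall the generic trigonometric form of the three real roots of a depressed cubic, which is the kind of closed form used in the proof of Claim B above; the notation here is the standard one and is not the authors' exact parametrization. For $t^3 + pt + q = 0$ with three real roots (so $p < 0$ and $4p^3 + 27q^2 \le 0$),
\[
  t_k \;=\; 2\sqrt{-\tfrac{p}{3}}\,
  \cos\!\left(\tfrac{1}{3}\arccos\!\left(\tfrac{3q}{2p}\sqrt{-\tfrac{3}{p}}\right) - \tfrac{2\pi k}{3}\right),
  \qquad k = 0, 1, 2,
\]
and the roots of a general monic cubic $\lambda^3 + a\lambda^2 + b\lambda + c$ are obtained from this formula after the substitution $\lambda = t - a/3$.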

Next, we compute the derivatives of on . For convenience, denote by and for the partial derivatives of and by , respectively, for . Furthermore, let

Claim C follows directly from Claim B.

Claim C.

For all , on the set , we have

Moreover, each expression is analytic on .

Finally, we solve .

Claim D.

If is a solution to , then .

With and using the fact that is analytic on , it is sufficient to eliminate all common zeroes of and on . With the help of a computer algebra system and the formulas for and from Claim C, we replace the system and with a polynomial system of equations and whose real solution set contains all previous solutions. Here,

and is a polynomial of degree , with coefficients between and . For brevity, we do not express explicitly.

To complete the proof of Claim D, it suffices to show that no common real solution to which lies in also satisfies . Again using a computer algebra system, we first find all common zeroes of and on . Included are the rational solutions and which do not lie in . Furthermore, the solution may also be eliminated. For the remaining zeroes, . A notebook showing these calculations can be found in Reference 27.
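For readers who wish to reproduce this style of argument, the following sketch shows the kind of computer-algebra steps involved: eliminating one variable with a resultant and then isolating the real roots of the resulting univariate polynomial. The polynomials p1 and p2 below are hypothetical stand-ins; the actual system arising in Claim D is far larger and is given in full in Reference 27.

import sympy as sp

a, b = sp.symbols('a b', real=True)

# Hypothetical stand-ins for the actual polynomial system.
p1 = a**3 + a*b - 1
p2 = b**2 - a

# Eliminate b via a resultant; the real roots of the resulting univariate
# polynomial contain the a-coordinates of all common real zeroes of p1, p2.
r = sp.Poly(sp.resultant(p1, p2, b), a)
candidates = sp.real_roots(r)
print([root.evalf(20) for root in candidates])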

Claim E.

If , and is part of a solution to such that , then .

By definition of , and are eigenvalues of the matrix

Furthermore, has characteristic polynomial

Recall that . By Claim D, , and it follows that where

If , then , and if , then . So , which completes the proof of Claim E.

This completes the proof of Lemma 5.4.

A.3. Proof of Lemma 6.3

First, we find using Viète’s Formula. In doing so, we define functions . To ease the burden on the reader, we suppress the subscript and the arguments when convenient and unambiguous. Let be the characteristic polynomial of . By inspection,

Continuing, let

By Viète’s Formula, the roots of are the suggestively defined quantities:

First, we prove Claim A.

Claim A.

If is sufficiently close to , then

Indeed, suppose and . Then for all , . With the help of a computer algebra system, we substitute in and to find the limits:

Using a computer algebra system, we find that these substitutions imply that

So for all sufficiently small, . After some trigonometric simplification,

and Equation 21 holds. This completes the proof of Claim A.

Now we prove Claim B.

Claim B.

There exists a constant such that the following holds. If is sufficiently small, then is concave-down on and strictly decreasing on .

First, we define

As a function of , is the determinant of the Hessian matrix of . Using a computer algebra system, we note that

Since is analytic at , there exist constants such that the following holds. For all , is concave-down on . This completes the proof of the first claim. Moreover, for all and for all ,

To complete the proof of the second claim, note also that since is analytic at , there exist constants such that for all and all ,

Since is a local maximum of ,

Since , this completes the proof of Claim B.

Next, we prove Claim C.

Claim C.

If is sufficiently small, then is solved by a unique point . Moreover as ,

Indeed, the existence of a unique maximum on follows from the fact that is strictly concave-down and bounded on for all sufficiently small. Since is strictly decreasing on , it follows that . For the second claim, note that since is analytic at ,

for both and . Let

for both and . Then by Equation 21,

for both and . We first consider linear approximation of the above quantities under the limit . Here, we write to mean that

With the help of a computer algebra system, we note that

By inspection, the constant terms match due to the identity

Since , replacing with implies that

as . After applying Gaussian Elimination to this -variable system of equations, it follows that

This completes the proof of Claim C.

For the next step, we prove Claim D. First, let denote the program formed from subject to the added constraint that .

Claim D.

For all sufficiently large, is solved by a unique point which satisfies .

Note by Lemma 6.2 that for all sufficiently large,

Moreover, by Claim C, is solved uniquely by

Since

and , it follows for sufficiently large that where

Similarly since

and , it follows that . Altogether,

Note that to solve , it is sufficient to maximize on the set

Since is concave-down on , is a corner of the square . So , which implies Claim D. This completes the proof of the main result.

Appendix B. A computer-assisted proof of Lemma 5.2

In this appendix, we derive a number of formulas that a stepgraphon corresponding to some set in Lemma A.2 satisfies, and detail how these formulas are used to provide a computer-assisted proof of Lemma 5.2.

B.1. Formulas

In this subsection, we derive the formulas used in our computer-assisted proof, from the equations described in Section A.1. First, we define a number of functions which will ease the notational burden in the results that follow. Let

Letting , we prove the following six formulas.

Proposition B.1.

Let and be such that . Then

and

Moreover, and are negative.

Proof.

By Lemma 3.7,

By taking the difference of the eigenvector equations for and (and also and ), we obtain

or, equivalently,

This leads to the system of equations

If the corresponding matrix is invertible, then after substituting the claimed formulas for and simplifying, it follows that they are the unique solutions. To verify that and are negative, it is sufficient to inspect the formulas for and , noting that is negative and both and are positive.

Suppose the matrix is not invertible. By assumption , and so

But, since and ,

a contradiction.

Proposition B.2.

Let and be such that . Then

and

Moreover, is positive and is negative.

Proof.

The proof of Proposition B.1, slightly modified, gives the desired result.

Proposition B.3.

Suppose where is either or . Then

and

Proof.

Using the eigenfunction equations for and for , it follows that

Combined with Lemma 3.7, it follows that

After expanding, we note that the right-hand side can be expressed purely in terms of , , , , , , , , and . Note that Proposition B.1 gives explicit formulas for , and , as well as , and , purely in terms of , and . With the help of a computer algebra system, we make these substitutions and factor the right-hand side as:

Since , the desired claim holds.

Proposition B.4.

Suppose where is either or . Then

and

Proof.

Using the eigenfunction equations for and for , it follows that

and

Altogether,

Combined with Lemma 3.6, it follows that

After expanding, we note that the right-hand side can be expressed purely in terms of , , , , , , , and . Note that Proposition B.1 gives explicit formulas for , , , , , and purely in terms of , and . With the help of a computer algebra system, we make these substitutions and factor the right-hand side as:

So the desired claim holds.

Proposition B.5.

Suppose and where is either or . Then,

and

Proof.

Using the eigenfunction equations for and for , it follows that

and

Altogether,

Combined with Lemma 3.6, it follows that

After expanding, we note that the right-hand side can be expressed purely in terms of , , , , , , , and . Note that Proposition B.1 gives explicit formulas for , , , , , and purely in terms of , and . With the help of a computer algebra system, we make these substitutions and factor the right-hand side as:

So the desired claim holds.

Proposition B.6.

Suppose and where is either or . Then

Proof.

Using the eigenfunction equations for and for , it follows that

and

Altogether,

Combined with Lemma 3.7, it follows that

After expanding, we note that the right-hand side can be expressed purely in terms of , , , , , , , and . Note that Proposition B.1 gives explicit formulas for purely in terms of , and . With the help of a computer algebra system, we make these substitutions and factor the right-hand side as:

Proposition B.7.

Suppose and let . Then

and

Proof.

Taking the difference of the eigenvector equations for and , and for and , we have

Combining these equalities with the equation completes the proof.

B.2. Algorithm

In this subsection, we briefly detail how the computer-assisted proof of Lemma 5.2 works. This proof is via interval arithmetic and, at a high level, consists largely of iteratively decomposing the domain of feasible choices of for a given into smaller subregions (boxes) until all subregions violate some required equality or inequality. We provide two similar, but slightly different, computer-assisted proofs of this fact, and both of these can be found at the spread_numeric GitHub repository Reference 27. The first, found in the folder interval, is a shorter and simpler version, containing slightly fewer formulas, albeit at the cost of overall computation and run time. The second, found in the folder interval, contains slightly more formulas and makes a greater attempt to optimize computation and run time. Below, we further detail the exact output and run time of both versions (exact output can be found in Reference 27), but for now, we focus on the main aspects of both proofs and consider them together, saving a more detailed discussion of the differences for later.

These algorithms are implemented in Python using the PyInterval package. The algorithms consist of two parts: a main file containing useful formulas and subroutines and different files used to rule out each of the cases for . The main file, casework_helper, contains functions with the formulas of Subsection B.1 (suitably modified to limit error growth), and functions used to check that certain equalities and inequalities are satisfied. In particular, casework_helper contains formulas for

, assuming (using Proposition B.3)

, assuming (using Proposition B.4)

, assuming , (using Proposition B.6)

, assuming , (using Proposition B.5)

and , assuming (using Proposition B.1)

and , assuming (using Proposition B.1)

and , assuming (using Proposition B.3)

and , assuming (using Propositions B.4 and B.5)

and , assuming , (using Proposition B.2)

and , assuming , (using Proposition B.2)

as a function of , , and (and and , which can be computed as functions of , , and ). Some of the formulas are slightly modified compared to their counterparts in this Appendix, for the purpose of minimizing accumulated error. Each formula is evaluated using interval arithmetic, while restricting the resulting interval solution to the correct range. In addition, we recall that we have the inequalities

, for

, , for

, , for

, , for

, , for ,

, for (using Proposition B.1)

, for , (using Proposition B.2).

These inequalities are also used at various points in the algorithms. This completes a brief overview of the casework_helper file. Next, we consider the different files used to test feasibility for a specific choice of , each denoted by case, i.e., for , the associated file is case. For each specific case, there are a number of different properties which can be checked, including eigenvector equations, bounds on edge density, norm equations for the eigenvectors, and the ellipse equations. Each of these properties has an associated function which returns FALSE if the property cannot be satisfied, given the intervals for each variable, and returns TRUE otherwise. The implementation of each of these properties is rather intuitive, and we refer the reader to the programs themselves (which contain comments) for exact details Reference 27. Each feasibility file consists of two parts. The first part is a function is_feasible(mu,nu,a3,a6) that, given bounding intervals for , , , , computes intervals for all other variables (using interval arithmetic) and checks feasibility using the functions in the casework_helper file. If any checked equation or inequality in the file is proven to be unsatisfiable (see, e.g., Example 5.3), then this function outputs ‘FALSE’; otherwise, the function outputs ‘TRUE’ by default. The second part is a divide and conquer algorithm that breaks the hypercube

into sub-boxes of size by by by , checks feasibility in each box using is_feasible, and subdivides any box that does not rule out feasibility (i.e., subdivides any box that returns ‘TRUE’). This subdivision breaks a single box into two boxes of equal size, by subdividing along one of the four variables. The variable used for this subdivision is chosen iteratively, in the order . The entire divide and conquer algorithm terminates after all sub-boxes, and therefore, the entire domain

has been shown to be infeasible, at which point the algorithm prints ‘infeasible’. Alternatively, if the number of subdivisions reaches some threshold, then the algorithm terminates and outputs ‘feasible’.
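To give a sense of the overall structure, the following schematic sketch shows the divide-and-conquer strategy only; it uses a hand-rolled interval representation rather than the PyInterval package used in Reference 27, and a toy constraint in place of the actual feasibility checks.

def is_feasible(box):
    # Toy stand-in for the real feasibility test: the box survives only if it
    # might contain a point with x*y = 1 and x + y = 1 (no such point exists in
    # [0,1]^2, so the search should eventually report infeasibility).
    (x_lo, x_hi), (y_lo, y_hi) = box
    prod = (x_lo * y_lo, x_hi * y_hi)     # enclosure of x*y (box has nonnegative coordinates)
    total = (x_lo + y_lo, x_hi + y_hi)    # enclosure of x + y on the box
    return prod[0] <= 1.0 <= prod[1] and total[0] <= 1.0 <= total[1]

def rule_out(box, max_depth=30):
    # Depth-first divide and conquer: subdivide along the coordinates in a fixed
    # cyclic order until every sub-box fails is_feasible or the depth limit is hit.
    stack = [(box, 0)]
    while stack:
        current, depth = stack.pop()
        if not is_feasible(current):
            continue
        if depth >= max_depth:
            return 'feasible'             # inconclusive: subdivision threshold reached
        i = depth % len(current)          # coordinate to split at this depth
        lo, hi = current[i]
        mid = 0.5 * (lo + hi)
        for half in ((lo, mid), (mid, hi)):
            child = list(current)
            child[i] = half
            stack.append((tuple(child), depth + 1))
    return 'infeasible'

print(rule_out(((0.0, 1.0), (0.0, 1.0))))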

Next, we briefly detail the output of the algorithms casework_helper/intervals and casework_helper/intervals. Both algorithms ruled out 15 of the 17 choices for using a maximum depth of 26, and failed to rule out cases and up to depth 51. For the remaining 15 cases, intervals considered a total of 5.5 million boxes, was run serially on a personal computer, and terminated in slightly over twelve hours. For these same 15 cases, intervals considered a total of 1.3 million boxes, was run in parallel using the Penn State math department’s ‘mathcalc’ computer, and terminated in under 140 minutes. The exact output for both versions of the spread_numeric algorithm can be found in Reference 27.

Acknowledgments

The authors are grateful to Louisa Thomas for greatly improving the style of presentation, and to Michel Goemans and Ryan Martin for their helpful discussions and suggestions.

Figures

Figure 1.

Contour plots of the spread for some choices of . Each point of Plot (a) illustrates the maximum spread over all choices of satisfying and (and therefore, ) on a grid of step size . Each point of Plot (b) illustrates the maximum spread over all choices of satisfying , , and on a grid of step size . The maximum spread of Plot (a) is achieved at the two black “x”s, and implies that, without loss of generality, , and therefore (indices and can be combined when ). Plot (b) treats this case when , and the maximum spread is achieved on the black line. This implies that either or . In both cases, this reduces to the block case (or, if , then ).

Figure 1(a): [contour plot not reproduced]
Figure 1(b): [contour plot not reproduced]
Figure 2.

Two presentations of a bipartite graph as a stepfunction

[stepfunction diagrams not reproduced]
Figure 3.

The matrix with corresponding graph , where is the diagonal matrix with entries

[graph drawing not reproduced]
Figure 4.

The family of graphons and the graph corresponding to case

[stepgraphon diagram and graph drawing not reproduced]
Figure 5.

Redundancy, then renaming: we can assume in the family of graphons corresponding to , which produces families of graphons corresponding to both cases and

[stepgraphon diagrams and graph drawings not reproduced]
Figure 6.

Changing the sign of : The optimization problems in these cases are equivalent

[stepgraphon diagrams and graph drawings not reproduced]
Figure 7.

The cases and correspond to the same family graphons but we consider the optimization problems separately, due to our prescribed ordering of the vertices

[stepgraphon diagrams and graph drawings not reproduced]
Figure 8.

The set , as a poset ordered by inclusion. Each element is a subset of , written without braces and commas. As noted in the proof of Lemma A.2, the sets , , and have different behavior in the problems . For this reason, we use vertical bars to separate each according to the corresponding partition.

[Hasse diagram not reproduced. Arranged by rank, the 17 sets shown are: 1|234|567; 234|567, 1|24|567, 1|234|57; 24|567, 1|4|567, 1|24|57, 1|234|7; 4|567, 24|57, 1|567, 1|4|57, 1|24|7; 4|57, 1|57, 1|4|7; and 1|7.]
Table 1.

The indices corresponding to the search space used to bound solutions to

Figure 9.

The graphs , , and from Example 7.4. The additional edge in is between the two pink vertices, and the missing edges in and are between the black vertex and the grey vertices.

Figure 9(a), 9(b), 9(c): [graph drawings not reproduced]
Figure 10.

Intervals containing the quantities and . Note that and are only defined for all .

[Number line not reproduced: g_6 and g_7 lie in (-∞, -1); g_5 lies in (-1, 0); f_3, f_4, f_6, f_7 and g_1, g_2 lie in (0, 1); f_1, f_2, f_5 and g_3, g_4 lie in (1, +∞).]
Table 2.

The sets which arise from repeated applications of Claim B

Mathematical Fragments

Conjecture 1 (Reference 13, Conjecture 1.3).

For any positive integer , the graph of order with maximum spread is ; that is, and this value is attained only by .

Conjecture 2 (Reference 13, Conjecture 1.4).

If is a graph with vertices and edges attaining the maximum spread , then must be bipartite. That is, for all .

Theorem 1.1.

There exists a constant so that the following holds: Suppose is a graph on vertices with maximum spread. Then is the join of a clique on vertices and an independent set on vertices.

Theorem 1.2.

For all satisfying ,

Theorem 1.3.

For any , there exists some such that

for all and some depending on .

Theorem 1.4.

Let be a Lebesgue-measurable function such that for a.e. , and let be the kernel operator on associated with . For all unit functions ,

Moreover, equality holds if and only if there exists a measure-preserving transformation on such that for a.e. ,

Corollary 1.5.

Let be an symmetric nonnegative matrix. Then

and

Step 1 (Graph-theoretic results).

In Section 2, we observe a number of important structural properties of any graph that maximizes spread for a given order . In particular, we show that

any graph that maximizes spread must be the join of two threshold graphs (Lemma 2.1),

both graphs in this join have order linear in (Lemma 2.2),

any unit eigenvectors and corresponding to and have infinity norms of order (Lemma 2.3),

the quantities , , are all nearly equal, up to a term of order (Lemma 2.4).

This last structural property serves as the backbone of our proof. In addition, we note that, by a tensor argument, an asymptotic upper bound for implies a bound for all .

Equation (1)
Lemma 2.1.

Let , and suppose is an -vertex graph such that . Denote by and extremal unit eigenvectors for . Then

(i)

For any two vertices of , and are adjacent whenever and and are nonadjacent whenever .

(ii)

For any distinct , .

(iii)

Let , and let and . Then .

(iv)

For each , is a threshold graph with threshold function defined on all by

Lemma 2.2.

If is a spread-extremal graph, then both and have size .

Lemma 2.3.

If and are unit eigenvectors for and , then and .

Lemma 2.4.

Assume that and are unit vectors. Then there exists a constant such that for any pair of vertices and , we have

Theorem 3.1 (cf. Reference 6, Theorem 6.6 or Reference 18, Theorem 11.54).

Let be a sequence of graphons converging to with respect to . Then as ,

Proposition 3.2.

Let be a graph on vertices. Then

Corollary 3.3.

For all , .

Proposition 3.4.

Suppose are graphons and are positive real numbers summing to . Let . Then as multisets,

Moreover, with equality if and only if or is the all-zeroes graphon.

Proposition 3.5.

Let be a connected graphon, and write for an eigenfunction corresponding to . Then is nonzero with constant sign a.e.

Lemma 3.6.

Suppose is a graphon achieving maximum spread, and let be eigenfunctions for the maximum and minimum eigenvalues for , respectively. Then the following claims hold:

(i)

For a.e. ,

(ii)

for a.e. .

Lemma 3.7.

If is a graphon achieving the maximum spread with corresponding eigenfunctions , then almost everywhere.

Theorem 4.1.

Suppose maximizes . Then is a stepfunction taking values and of the following form

[Diagram: a symmetric 7 × 7 grid of blocks, split into a group of 4 and a group of 3. The two off-diagonal 4 × 3 groups are entirely filled; the 4 × 4 group is filled on and above its anti-diagonal; the 3 × 3 group is filled strictly above its anti-diagonal. Filled and unfilled blocks indicate the two values taken by the stepfunction.]

Furthermore, the internal divisions separate according to the sign of the eigenfunction corresponding to the minimum eigenvalue of .
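Theorem 4.1 reduces the problem to stepfunction graphons, for which the spectrum is effectively finite-dimensional: if the steps have measures mu_1, …, mu_m and the graphon equals W_ij on step i × step j, then every eigenfunction with nonzero eigenvalue is constant on steps, so the nonzero eigenvalues of the kernel operator are exactly the eigenvalues of the m × m matrix with entries sqrt(mu_i) W_ij sqrt(mu_j). The sketch below applies this standard reduction (it is our own illustration, not the paper's computation) to the two-step graphon equal to 1 everywhere except on a bottom-right alpha × alpha corner, and scans over alpha.

import numpy as np

def stepgraphon_spread(mu, W):
    """Spread of a stepfunction graphon with step measures mu and block values W.
    The nonzero eigenvalues of the kernel operator coincide with those of
    D^{1/2} W D^{1/2}, where D = diag(mu); 0 also lies in the operator's spectrum."""
    d = np.sqrt(np.asarray(mu, dtype=float))
    M = d[:, None] * np.asarray(W, dtype=float) * d[None, :]
    eig = np.linalg.eigvalsh(M)
    return float(eig[-1] - min(eig[0], 0.0))

def two_step(alpha):
    """Graphon equal to 1 except on the bottom-right alpha x alpha block."""
    return [1.0 - alpha, alpha], [[1.0, 1.0], [1.0, 0.0]]

alphas = np.linspace(0.01, 0.99, 9801)
spreads = [stepgraphon_spread(*two_step(a)) for a in alphas]
best = int(np.argmax(spreads))
print("maximizing alpha ~", alphas[best])    # close to 1/3
print("maximum spread   ~", spreads[best])   # close to 2/sqrt(3)
print("2/sqrt(3)         =", 2 / np.sqrt(3))

The maximum is attained near alpha = 1/3 with spread close to 2/sqrt(3) ≈ 1.1547, consistent with the clique-plus-independent-set structure of Theorem 1.1.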

Lemma 4.2.

Suppose is a graphon with a -eigenfunction , and suppose there exist disjoint measurable subsets of positive measures and , respectively. Let . Moreover, suppose a.e. on . Additionally, suppose is typical and average on , with respect to and . Let and . Then for a.e. ,

Furthermore,

Corollary 4.3.

Suppose with maximum and minimum eigenvalues corresponding respectively to eigenfunctions . Moreover, suppose that there exist disjoint subsets and so that the conditions of Lemma 4.2 are met for with , , , and . Then,

(i)

for a.e. , and

(ii)

is constant on .

Equation (4)
Claim A.

Except on a set of measure , takes on at most values on , and at most values on .

Claim B.

For a.e. such that , we have that for a.e. , implies that .

Claim C.

For a.e. , if and only if , if and only if .

Claim D.

Except on a set of measure , takes on at most values on , and at most values on .

Theorem 5.1.

If is a graphon that maximizes spread, then may be represented as follows. For all ,

Furthermore,

are the maximum and minimum eigenvalues of , respectively, and if are unit eigenfunctions associated to , respectively, then, up to a change in sign, they may be written as follows. For every ,

Lemma 5.2.

Suppose . Then any solution to attains a value strictly less than .

Example 5.3.

Suppose , and is a solution to . We show that . By Proposition B.1, . Using interval arithmetic,

Thus

Since , we have a contradiction.
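The certification behind Lemma 5.2 and Example 5.3 uses interval arithmetic (the authors' verification code is spread_numeric, Reference 27): every intermediate quantity is enclosed in an interval with outward rounding, and an inequality is accepted only when the computed enclosures are disjoint. The toy Python sketch below, written for this summary with mpmath's interval context and not taken from spread_numeric, illustrates the mechanism on a simpler inequality of the same flavour: for every k, the n-vertex join of a clique on k vertices and an independent set on n − k vertices has spread sqrt((k − 1)^2 + 4k(n − k)) strictly below (2/sqrt(3))·n.

from mpmath import iv

iv.dps = 40   # working precision for the interval endpoints

def certify(n: int) -> bool:
    """Certify that sqrt((k-1)^2 + 4k(n-k)) < (2/sqrt(3)) * n for every k,
    using outward-rounded interval arithmetic."""
    bound = iv.mpf(2 * n) / iv.sqrt(iv.mpf(3))    # enclosure of (2/sqrt(3)) * n
    for k in range(1, n):
        value = iv.sqrt(iv.mpf((k - 1) ** 2 + 4 * k * (n - k)))
        # accept only if the entire enclosure of the left-hand side lies
        # strictly below the entire enclosure of the right-hand side
        if not (value.b < bound.a):
            return False
    return True

print(certify(1000))   # True: the inequality is certified for n = 1000

Comparing the interval endpoints value.b and bound.a, rather than midpoints, is what makes the conclusion rigorous: outward rounding can make the test fail spuriously, but it can never make it pass when the true inequality fails.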

Lemma 5.4.

If is a solution to , then .

Corollary 6.1.

Suppose are graphons. Let be an eigenvalue of with a corresponding unit eigenfunction. Let be an orthonormal eigenbasis for with corresponding eigenvalues . Suppose that for all . Then

Lemma 6.2.

For all positive integers , let denote a graph on vertices which maximizes spread. Then for some nonnegative integers such that , , and .

Equation (5)
Claim I.

Suppose is sufficiently small and is sufficiently large in terms of . Then for all ,

Case A.

.

Recall that . Furthermore, and by assumption, . It follows that for all sufficiently large, , so by Lemma 2.1, . Since is a -eigenfunction for ,

Similarly,

Let . Note that for all , as long as is sufficiently large and is sufficiently small, then

Let . By Equations (5) and (7), and with sufficiently small,

Substituting the values of from Theorem 5.1 and simplifying, it follows that

Let . It follows that if is sufficiently large and is sufficiently small, then

Combining Equations (7) and (8), it follows that, with sufficiently small,

Note that

Since , the second inequality does not hold, which completes the proof of the desired claim.

Case B.

.

Recall that . Furthermore, and by assumption, . It follows that for all sufficiently large, , so by Lemma 2.1, . Since is a -eigenfunction for ,

Similarly,

Let . Note that for all , as long as is sufficiently large and is sufficiently small, then

Let . By Equations (5) and (9), and with sufficiently small,

Substituting the values of from Theorem 5.1 and simplifying, it follows that

Let . It follows that if is sufficiently large and is sufficiently small, then

Combining Equations (7) and (10), it follows that, with sufficiently small,

Again, note that

Since , the second inequality does not hold.

Lemma 6.3.

For all nonnegative integers , let . Then for all sufficiently large, the following holds. If is maximized subject to the constraint and , then .

Equation (11)
Proposition 7.1.

If , then maximizes over all subgraphs of of size .

Proposition 7.2.

The length of the longest sequence of consecutive sizes for which there does not exist a complete bipartite graph on at most vertices and exactly edges is zero for and at most for .
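At its core, Proposition 7.2 is about which edge counts e can be written as e = a·b with a + b small, since the complete bipartite graph K_{a,b} has exactly ab edges on a + b vertices. The short sketch below (ours; the proposition's actual bounds are not reproduced here) tests a given edge count against a vertex budget t by searching for such a factorization.

def complete_bipartite_with(e: int, t: int):
    """Return (a, b) with a * b == e and a + b <= t if such a pair exists,
    else None; K_{a,b} then has exactly e edges and at most t vertices."""
    a = 1
    while a * a <= e:
        if e % a == 0 and a + e // a <= t:
            return a, e // a
        a += 1
    return None

# Which edge counts up to 60 admit no complete bipartite graph on at most 16 vertices?
print([e for e in range(1, 61) if complete_bipartite_with(e, 16) is None])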

Theorem 7.3 (Restatement of Theorems 1.2 and 1.3).

For all satisfying ,

In addition, for any , there exists some such that

for all and some depending on .
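For small parameters, the two maxima compared in Theorem 7.3 can be computed exhaustively, which is a convenient way to experiment with the Bipartite Spread Conjecture and with counterexamples of the kind behind Theorem 1.3. Recall that the complete bipartite graph K_{a,b} has nonzero adjacency eigenvalues ±sqrt(ab), hence spread 2·sqrt(ab). The brute-force sketch below (illustrative only, not the authors' code) enumerates all graphs on n labelled vertices with m edges and reports the best spread achieved by any graph and by any bipartite graph.

from itertools import combinations
import numpy as np

def spread(A: np.ndarray) -> float:
    eig = np.linalg.eigvalsh(A)
    return float(eig[-1] - eig[0])

def is_bipartite(A: np.ndarray) -> bool:
    """Attempt to 2-colour the graph; fail exactly when some edge joins equal colours."""
    n = len(A)
    colour = [None] * n
    for start in range(n):
        if colour[start] is not None:
            continue
        colour[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v in range(n):
                if A[u, v]:
                    if colour[v] is None:
                        colour[v] = 1 - colour[u]
                        stack.append(v)
                    elif colour[v] == colour[u]:
                        return False
    return True

def max_spreads(n: int, m: int):
    """Maximum spread over all graphs, and over all bipartite graphs,
    with n labelled vertices and m edges (brute force)."""
    best_all, best_bipartite = 0.0, 0.0
    possible_edges = list(combinations(range(n), 2))
    for chosen in combinations(possible_edges, m):
        A = np.zeros((n, n))
        for u, v in chosen:
            A[u, v] = A[v, u] = 1
        s = spread(A)
        best_all = max(best_all, s)
        if is_bipartite(A):
            best_bipartite = max(best_bipartite, s)
    return best_all, best_bipartite

print(max_spreads(6, 9))   # compare the overall maximum with its bipartite counterpart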

Equation (14)
Equation (15)
Example 7.4.

Let and , and consider the graphs , , and (see Figure 9 for a visual representation). We have

Every bipartite graph with vertices and edges is either a subgraph of or , and so, by Proposition 7.1, . The graph provides a lower bound for , and so

Equation (16)
Equations (17), (18), (19)
Equation (20)
Proposition A.1.

Let such that and write for the maximum and minimum eigenvalues of , with corresponding unit eigenfunctions . Then for some nonempty set , the following holds. There exists a triple , where is a labeled partition of with parts of positive measure and for all , such that:

(i)

.

(ii)

Allowing the replacement of by and of by , for all , and equal and a.e. on .

(iii)

With for all , is solved by , and .

Lemma A.2.

Proposition A.1 holds with the added assumption that .

Claim A.

Suppose and are distinct such that . Then Proposition A.1 holds with the set replacing .

Claim B.

If satisfies the criteria of Proposition A.1, then without loss of generality the following holds.

(a)

If there exists some such that , then .

(b)

.

(c)

is one of , and .

(d)

is one of , and .

Claim A.

For all , has two distinct positive eigenvalues and one negative eigenvalue.

Claim B.

For all ,

Moreover, is analytic on .

Claim C.

For all , on the set , we have

Moreover, each expression is analytic on .

Claim D.

If is a solution to , then .

Claim E.

If , and is part of a solution to such that , then .

Claim A.

If is sufficiently close to , then

Claim B.

There exists a constant such that the following holds. If is sufficiently small, then is concave-down on and strictly decreasing on .

Claim C.

If is sufficiently small, then is solved by a unique point . Moreover as ,

Claim D.

For all sufficiently large, is solved by a unique point which satisfies .

Proposition B.1.

Let and be such that . Then

and

Moreover, and are negative.

Proposition B.2.

Let and be such that . Then

and

Moreover, is positive and is negative.

Proposition B.3.

Suppose where is either or . Then

and

Proposition B.4.

Suppose where is either or . Then

and

Proposition B.5.

Suppose and where is either or . Then,

and

Proposition B.6.

Suppose and where is either or . Then

References

Reference [1]
M. Abdi, E. Ghorbani, and W. Imrich, Regular graphs with minimum spectral gap, European J. Combin. 95 (2021), Paper No. 103328, 18, DOI 10.1016/j.ejc.2021.103328. MR4242392,
Reference [2]
Maryam Abdi and Ebrahim Ghorbani, Quartic graphs with minimum spectral gap, Preprint, arXiv:2008.03144, 2020.
Reference [3]
David Aldous and Jim Fill, Reversible Markov chains and random walks on graphs, 2002.
Reference [4]
Vishal Arul, Personal communication.
Reference [5]
Jean-Pierre Aubin, Applied functional analysis, John Wiley & Sons, New York-Chichester-Brisbane, 1979. Translated from the French by Carole Labrousse; With exercises by Bernard Cornet and Jean-Michel Lasry. MR549483,
Reference [6]
C. Borgs, J. T. Chayes, L. Lovász, V. T. Sós, and K. Vesztergombi, Convergent sequences of dense graphs II. Multiway cuts and statistical physics, Ann. of Math. (2) 176 (2012), no. 1, 151–219, DOI 10.4007/annals.2012.176.1.2. MR2925382,
Reference [7]
Clemens Brand, Barry Guiduli, and Wilfried Imrich, Characterization of trivalent graphs with minimal eigenvalue gap, Croatica Chemica Acta 80 (2007), no. 2, 193–201.
Reference [8]
Stephan Brandt, The local density of triangle-free graphs, Discrete Math. 183 (1998), no. 1-3, 17–25, DOI 10.1016/S0012-365X(97)00074-5. MR1606776,
Reference [9]
Andries E. Brouwer and Willem H. Haemers, Spectra of graphs, Universitext, Springer, New York, 2012, DOI 10.1007/978-1-4614-1939-6. MR2882891,
Reference [10]
Chandler Davis and W. M. Kahan, The rotation of eigenvectors by a perturbation. III, SIAM J. Numer. Anal. 7 (1970), 1–46, DOI 10.1137/0707001. MR264450,
Reference [11]
Emeric Deutsch, On the spread of matrices and polynomials, Linear Algebra Appl. 22 (1978), 49–55, DOI 10.1016/0024-3795(78)90056-3. MR510794,
Reference [12]
P. Erdős, Some unsolved problems in graph theory and combinatorial analysis, Combinatorial Mathematics and its Applications (Proc. Conf., Oxford, 1969), Academic Press, London, 1971, pp. 97–109. MR0277392,
Reference [13]
David A. Gregory, Daniel Hershkowitz, and Stephen J. Kirkland, The spread of the spectrum of a graph, Proceedings of the Eighth Conference of the International Linear Algebra Society (Barcelona, 1999), Linear Algebra Appl. 332/334 (2001), 23–35, DOI 10.1016/S0024-3795(00)00086-0. MR1839425,
Reference [14]
Barry Guiduli, The structure of trivalent graphs with minimal eigenvalue gap, J. Algebraic Combin. 6 (1997), no. 4, 321–329, DOI 10.1023/A:1008669009563. MR1471892,
Reference [15]
Thomas C. Hales, A computer verification of the Kepler conjecture, Proceedings of the International Congress of Mathematicians, Vol. III (Beijing, 2002), Higher Ed. Press, Beijing, 2002, pp. 795–804. MR1957580,
Reference [16]
Charles R. Johnson, Ravinder Kumar, and Henry Wolkowicz, Lower bounds for the spread of a matrix, Linear Algebra Appl. 71 (1985), 161–173, DOI 10.1016/0024-3795(85)90244-7. MR813042,
Reference [17]
Chia-an Liu and Chih-wen Weng, Spectral radius of bipartite graphs, Linear Algebra Appl. 474 (2015), 30–43, DOI 10.1016/j.laa.2015.01.040. MR3324593,
Reference [18]
László Lovász, Large networks and graph limits, American Mathematical Society Colloquium Publications, vol. 60, American Mathematical Society, Providence, RI, 2012. MR3012035
Reference [19]
László Lovász and Balázs Szegedy, Limits of dense graph sequences, J. Combin. Theory Ser. B 96 (2006), no. 6, 933–957, DOI 10.1016/j.jctb.2006.05.002. MR2274085,
Reference [20]
László Lovász and Balázs Szegedy, Szemerédi’s lemma for the analyst, Geom. Funct. Anal. 17 (2007), no. 1, 252–270, DOI 10.1007/s00039-007-0599-6. MR2306658,
Reference [21]
N. V. R. Mahadev and U. N. Peled, Threshold graphs and related topics, Annals of Discrete Mathematics, vol. 56, North-Holland Publishing Co., Amsterdam, 1995. MR1417258,
Reference [22]
L. Mirsky, The spread of a matrix, Mathematika 3 (1956), 127–130, DOI 10.1112/S0025579300001790. MR81875,
Reference [23]
Vladimir Nikiforov, Linear combinations of graph eigenvalues, Electron. J. Linear Algebra 15 (2006), 329–336, DOI 10.13001/1081-3810.1242. MR2274331,
Reference [24]
Vladimir Nikiforov, Eigenvalue problems of Nordhaus-Gaddum type, Discrete Math. 307 (2007), no. 6, 774–780, DOI 10.1016/j.disc.2006.07.035. MR2291456,
Reference [25]
Vladimir Nikiforov, The spectral radius of graphs without paths and cycles of specified length, Linear Algebra Appl. 432 (2010), no. 9, 2243–2256, DOI 10.1016/j.laa.2009.05.023. MR2599857,
Reference [26]
Vladimir Nikiforov, Extrema of graph eigenvalues, Linear Algebra Appl. 482 (2015), 158–190, DOI 10.1016/j.laa.2015.05.016. MR3365272,
Reference [27]
Alex W. N. Riasanovsky and John Urschel, spread_numeric, 2021. https://github.com/ariasanovsky/spread_numeric, commit = c7ac3426bca6cd3ec160d6c713e911b6920f8fba.
Reference [28]
Zoran Stanić, Graphs with small spectral gap, Electron. J. Linear Algebra 26 (2013), 417–432, DOI 10.13001/1081-3810.1662. MR3084652,
Reference [29]
Zoran Stanić, Inequalities for graph eigenvalues, London Mathematical Society Lecture Note Series, vol. 423, Cambridge University Press, Cambridge, 2015, DOI 10.1017/CBO9781316341308. MR3469535,
Reference [30]
Tamás Terpai, Proof of a conjecture of V. Nikiforov, Combinatorica 31 (2011), no. 6, 739–754, DOI 10.1007/s00493-011-2652-1. MR2886108,
Reference [31]
Warwick Tucker, A rigorous ODE solver and Smale’s 14th problem, Found. Comput. Math. 2 (2002), no. 1, 53–117, DOI 10.1007/s002080010018. MR1870856,
Reference [32]
John C. Urschel, Graphs, Principal Minors, and Eigenvalue Problems, ProQuest LLC, Ann Arbor, MI, 2021. Thesis (Ph.D.)–Massachusetts Institute of Technology. MR4479606,
Reference [33]
Junliang Wu, Pingping Zhang, and Wenshi Liao, Upper bounds for the spread of a matrix, Linear Algebra Appl. 437 (2012), no. 11, 2813–2822, DOI 10.1016/j.laa.2012.07.007. MR2964726,

Article Information

MSC 2020
Primary: 05C50 (Graphs and linear algebra (matrices, eigenvalues, etc.)), 15A18 (Eigenvalues, singular values, and eigenvectors), 15A60 (Norms of matrices, numerical range, applications of functional analysis to matrix theory)
Author Information
Jane Breen
Faculty of Science, Ontario Tech University, 2000 Simcoe St N, Oshawa, Ontario L1G 0C5 Canada
jane.breen@ontariotechu.ca
Alex W. N. Riasanovsky
Department of Mathematics, Iowa State University, 411 Morrill Road, Ames, Iowa 50011
a.riasanovsky@gmail.com
Michael Tait
Department of Mathematics & Statistics, SAC Room 305, Villanova University, 800 Lancaster Avenue, Villanova, Pennsylvania 19085
michael.tait@villanova.edu
John Urschel
Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139
urschel@mit.edu
Additional Notes

The work of the first author was supported in part by NSERC Discovery Grant RGPIN-2021-03775. The work of the second author was supported in part by NSF award DMS-1839918 (RTG). The work of the third author was supported in part by NSF award DMS-2011553. The work of the fourth author was supported in part by ONR Research Contract N00014-17-1-2177. This work was supported by the Institute for Advanced Study and the National Science Foundation under Grant No. DMS-1926686.

Journal Information
Communications of the American Mathematical Society, Volume 2, Issue 11, ISSN 2692-3688, published by the American Mathematical Society, Providence, Rhode Island.
Publication History
This article was received on , revised on , and published on .
Copyright Information
Copyright 2022 by the authors under Creative Commons Attribution-NonCommercial 3.0 License (CC BY NC 3.0)
Article References
  • DOI 10.1090/cams/14
  • MathSciNet Review: 4525711
