
Graph Rules for Recurrent Neural Network Dynamics

Carina Curto
Katherine Morrison

Communicated by Notices Associate Editor Emilie Purvine


1. Introduction

Neurons in the brain are constantly flickering with activity, which can be spontaneous or in response to stimuli [LBH09]. Because of positive feedback loops and the potential for runaway excitation, real neural networks often possess an abundance of inhibition that serves to shape and stabilize the dynamics [YMSL05, KAY14]. The excitatory neurons in such networks exhibit intricate patterns of connectivity, whose structure controls the allowed patterns of activity. A central question in neuroscience is thus: how does network connectivity shape dynamics?

For a given model, this question becomes a mathematical challenge. The goal is to develop a theory that directly relates properties of a nonlinear dynamical system to its underlying graph. Such a theory can provide insights and hypotheses about how network connectivity constrains activity in real brains. It also opens up new possibilities for modeling neural phenomena in a mathematically tractable way.

Here we describe a class of inhibition-dominated neural networks corresponding to directed graphs, and introduce some of the theory that has been developed to study them. The heart of the theory is a set of parameter-independent graph rules that enables us to directly predict features of the dynamics from combinatorial properties of the graph. Specifically, graph rules allow us to constrain, and in some cases fully determine, the collection of stable and unstable fixed points of a network based solely on graph structure.

Stable fixed points are themselves static attractors of the network, and have long been used as a model of stored memory patterns [Hop82]. In contrast, unstable fixed points have been shown to play an important role in shaping dynamic (nonstatic) attractors, such as limit cycles [PMMC22]. By understanding the fixed points of simple networks, and how they relate to the underlying architecture, we can gain valuable insight into the high-dimensional nonlinear dynamics of neurons in the brain.

For more complex architectures, built from smaller component subgraphs, we present a series of gluing rules that allow us to determine all fixed points of the network by gluing together those of the components. These gluing rules are reminiscent of sheaf-theoretic constructions, with fixed points playing the role of sections over subnetworks.

First, we review some basics of recurrent neural networks and a bit of historical context.

Basic network setup

A recurrent neural network is a directed graph $G$ together with a prescription for the dynamics on the vertices, which represent neurons (see Figure 1A). To each vertex $i$ we associate a function $x_i(t)$ that tracks the activity level of neuron $i$ as it evolves in time. To each ordered pair of vertices $(i,j)$ we assign a weight, $W_{ij}$, governing the strength of the influence of neuron $j$ on neuron $i$. In principle, there can be a nonzero weight between any two nodes, with the graph $G$ providing constraints on the allowed values $W_{ij}$, depending on the specifics of the model.

Figure 1.

(A) Recurrent network setup. (B) A Ramón y Cajal drawing of real cortical neurons.


The dynamics often take the form of a system of ODEs, called a firing rate model [DA01]:

$$\tau_i \frac{dx_i}{dt} = -x_i + \varphi\Big(\sum_{j=1}^n W_{ij}x_j + b_i(t)\Big) \tag{1}$$

for $i = 1, \ldots, n$. The various terms in the equation are illustrated in Figure 1, and can be thought of as follows:

$x_i = x_i(t)$ is the firing rate of a single neuron $i$ (or the average activity of a subpopulation of neurons);

$\tau_i$ is the “leak” timescale, governing how quickly a neuron’s activity exponentially decays to zero in the absence of external or recurrent input;

$W$ is a real-valued $n \times n$ matrix of synaptic interaction strengths, with $W_{ij}$ representing the strength of the connection from neuron $j$ to neuron $i$;

$b_i = b_i(t)$ is a real-valued external input to neuron $i$ that may or may not vary with time;

$y_i(t) = \sum_{j=1}^n W_{ij}x_j(t) + b_i(t)$ is the total input to neuron $i$ as a function of time; and

$\varphi$ is a nonlinear, but typically monotone increasing, function.
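Putting these pieces together, equation (1) can be integrated numerically. The following is a minimal forward-Euler sketch with the ReLU nonlinearity; the function name, step sizes, and the two-neuron example are our own illustrative choices, not from the article.

```python
import numpy as np

def simulate_rate_model(W, b, x0, tau=1.0, dt=0.01, T=50.0):
    """Forward-Euler integration of tau * dx_i/dt = -x_i + phi(sum_j W_ij x_j + b_i).
    Returns the trajectory as a (steps + 1, n) array."""
    phi = lambda y: np.maximum(y, 0.0)  # threshold-linear (ReLU) nonlinearity
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(int(round(T / dt))):
        x = x + (dt / tau) * (-x + phi(W @ x + b))
        traj.append(x.copy())
    return np.array(traj)

# Two mutually inhibiting neurons with unequal external drives:
# the more strongly driven neuron wins and the other shuts off.
W = np.array([[0.0, -0.8],
              [-0.8, 0.0]])
b = np.array([1.0, 0.5])
traj = simulate_rate_model(W, b, x0=[0.1, 0.1])
```

In this toy example the trajectory converges to the stable fixed point $(1, 0)$: the first neuron's inhibition keeps the second neuron's total input negative, so its activity decays to zero.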

Of particular importance for this article is the family of threshold-linear networks (TLNs). In this case, the nonlinearity is chosen to be the popular threshold-linear (or ReLU) function,

$$\varphi(y) = [y]_+ \stackrel{\mathrm{def}}{=} \max\{0, y\}. \tag{2}$$

TLNs are common firing rate models that have been used in computational neuroscience for decades [SY12, TSSM97, HSM00, BF22]. The use of threshold-linear units in neural modeling dates back at least to 1958 [HR58]. In the last 20 years, TLNs have also been shown to be surprisingly tractable mathematically [HSS03, CDI13, CM16, MDIC16, CGM19, PLACM22], though much of the theory remains underdeveloped. We are especially interested in competitive, or inhibition-dominated, TLNs, where the matrix $W$ is nonpositive so the effective interaction between any pair of neurons is inhibitory. In this case, the activity remains bounded despite the lack of saturation in the nonlinearity [MDIC16]. These networks produce complex nonlinear dynamics and can possess a remarkable variety of attractors [MDIC16, PLACM22, PMMC22].

Firing rate models of the form (1) are examples of recurrent networks because the matrix $W$ allows for all pairwise interactions, and there is no constraint that the architecture (i.e., the underlying graph $G$) be feedforward. Unlike deep neural networks, which can be thought of as classifiers implementing a clustering function, recurrent networks are primarily thought of as dynamical systems. And the main purpose of these networks is to model the dynamics of neural activity in the brain. The central question is thus:

Question 1.

Given a firing rate model defined by (1) with network parameters $(W, b)$ and underlying graph $G$, what are the emergent network dynamics? What can we say about the dynamics from knowledge of $G$ alone?

We are particularly interested in understanding the attractors of such a network, including both stable fixed points and dynamic attractors such as limit cycles. The attractors are important because they comprise the set of possible asymptotic behaviors of the network in response to different inputs or initial conditions (see Figure 2).

Note that Question 1 is posed for a fixed connectivity matrix $W$, but of course $W$ can change over time (e.g., as a result of learning or training of the network). Here we restrict ourselves to considering constant matrices $W$; this allows us to focus on understanding network dynamics on a fast timescale, assuming slowly varying synaptic weights. Understanding the dynamics associated to a changing $W$ is an important topic, currently beyond the scope of this work.

Historical interlude: memories as attractors

Attractor neural networks became popular in the 1980s as models of associative memory encoding and retrieval. The best-known example from that era is the Hopfield model [Hop82], originally conceived as a variant on the Ising model from statistical mechanics. In the Hopfield model, the neurons can be in one of two states, $s_i \in \{\pm 1\}$, and the activity evolves according to the discrete time update rule:

$$s_i(t+1) = \operatorname{sgn}\Big(\sum_{j=1}^n J_{ij}s_j(t)\Big).$$

Hopfield’s famous 1982 result is that the dynamics are guaranteed to converge to a stable fixed point, provided the interaction matrix is symmetric: that is, $J_{ij} = J_{ji}$ for every $i, j$. Specifically, he showed that the “energy” function,

$$E(s) = -\frac{1}{2}\sum_{i,j} J_{ij}s_i s_j,$$

decreases along trajectories of the dynamics, and thus acts as a Lyapunov function [Hop82]. The stable fixed points are local minima of the energy landscape (Figure 2A). A stronger, more general convergence result for competitive neural networks was shown in [CG83].
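The energy argument is easy to check numerically. The sketch below is our own (not the article's notation): it runs asynchronous sign updates on a random symmetric matrix $J$ with zero thresholds, and verifies that the energy never increases.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
J = rng.normal(size=(n, n))
J = (J + J.T) / 2          # symmetric interactions, as Hopfield requires
np.fill_diagonal(J, 0.0)   # no self-coupling

def energy(s):
    # E(s) = -1/2 * sum_{i,j} J_ij s_i s_j
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=n)
energies = [energy(s)]
for _ in range(500):                   # asynchronous (one-neuron-at-a-time) updates
    i = rng.integers(n)
    s[i] = 1 if J[i] @ s >= 0 else -1  # align s_i with its local field
    energies.append(energy(s))

# E is nonincreasing along trajectories, so the dynamics must settle
# into a stable fixed point (a local minimum of the energy landscape).
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(energies, energies[1:]))
```

Flipping $s_i$ to the sign of its local field can only decrease the term $-s_i \sum_j J_{ij}s_j$, which is exactly why symmetry (and zero diagonal) makes $E$ a Lyapunov function for asynchronous updates.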

Figure 2.

Attractor neural networks. (A) For symmetric Hopfield networks and symmetric inhibitory TLNs, trajectories are guaranteed to converge to stable fixed point attractors. Sample trajectories are shown, with the basin of attraction for the blue stable fixed point outlined in blue. (B) For asymmetric TLNs, dynamic attractors can coexist with (static) stable fixed point attractors.


These fixed points are the only attractors of the network, and they represent the set of memories encoded in the network. Hopfield networks perform a kind of pattern completion: given an initial condition, the activity evolves until it converges to one of multiple stored patterns in the network. If, for example, the individual neurons store black and white pixel values, this process could take as input a corrupted image and recover the original image, provided it has previously been stored as a stable fixed point in the network by appropriately selecting the weights of the interaction matrix. The novelty at the time was the nonlinear phenomenon of multistability: namely, that the network could encode many such stable equilibria and thus maintain an entire catalogue of stored memory patterns. The key to Hopfield’s convergence result was the requirement that the interaction matrix be symmetric. Although this was known to be an unrealistic assumption for real (biological) neural networks, it was considered a tolerable price to pay for guaranteed convergence. One did not want an associative memory network that wandered the state space indefinitely without ever recalling a definite pattern.

Twenty years later, Hahnloser, Seung, and others followed up and proved a similar convergence result in the case of symmetric threshold-linear networks [HSS03]. More results on the collections of stable fixed points that can be simultaneously encoded in a symmetric TLN can be found in [CDI13, CM16], including some unexpected connections to Cayley–Menger determinants and classical distance geometry.

In all of this work, stable fixed points have served as the model for encoded memories. Indeed, these are the only types of attractors that arise for symmetric Hopfield networks or symmetric TLNs. Whether or not guaranteed convergence to stable fixed points is desirable, however, is a matter of perspective. For a network whose job it is to perform pattern completion or classification for static images (or codewords), as in the classical Hopfield model, this is exactly what one wants. But it is also important to consider memories that are temporal in nature, such as sequences and other dynamic patterns of activity. Sequential activity, as observed in central pattern generator circuits (CPGs) and spontaneous activity in hippocampus and cortex, is more naturally modeled by dynamic attractors such as limit cycles. This requires shifting attention to the asymmetric case, in order to be able to encode attractors that are not stable fixed points (Figure 2B).

Beyond stable fixed points

When the symmetry assumption is removed, TLNs can support a rich variety of dynamic attractors such as limit cycles, quasiperiodic attractors, and even strange (chaotic) attractors. Indeed, this richness can already be observed in a special class of TLNs called combinatorial threshold-linear networks (CTLNs), introduced in Section 3. These networks are defined from directed graphs, and the dynamics are almost entirely determined by the graph structure. A striking feature of CTLNs is that the dynamics are shaped not only by the stable fixed points, but also by the unstable fixed points. In particular, we have observed a direct correspondence between certain types of unstable fixed points and dynamic attractors (see Figure 3). This is reviewed in Section 4.

Figure 3.

Stable and unstable fixed points. (A) Stable fixed points are attractors of the network. (B-C) Unstable fixed points are not themselves attractors, but certain unstable fixed points seem to correspond to dynamic attractors (B), while others function solely as tipping points between multiple attractors (C).


Despite exhibiting complex, high-dimensional, nonlinear dynamics, TLNs—and especially CTLNs—have proven surprisingly tractable mathematically. Motivated by the relationship between fixed points and attractors, a great deal of progress has been made on the problem of relating fixed point structure to network architecture. In the case of CTLNs, this has resulted in a series of graph rules: theorems that allow us to rule in and rule out potential fixed points based purely on the structure of the underlying graph [CGM19, PLACM22]. In Section 5, we give a novel exposition of graph rules, and introduce several elementary graph rules from which the others can be derived.

Inhibition-dominated TLNs and CTLNs also display a remarkable degree of modularity. Namely, attractors associated to smaller networks can be embedded in larger ones with minimal distortion [PMMC22]. This is likely a consequence of the high levels of background inhibition: it serves to stabilize and preserve local properties of the dynamics. These networks also exhibit a kind of compositionality, wherein fixed points and attractors of subnetworks can be effectively “glued” together into fixed points and attractors of a larger network. These local-to-global relationships are given by a series of theorems we call gluing rules, given in Section 6.

2. TLNs and Hyperplane Arrangements

For firing rate models with the threshold-linear nonlinearity $\varphi(y) = [y]_+$, the network equations (1) become

$$\frac{dx_i}{dt} = -x_i + \Big[\sum_{j=1}^n W_{ij}x_j + b_i\Big]_+ \tag{3}$$

for $i = 1, \ldots, n$. We also assume $W_{ii} = 0$ for each $i$. Note that the leak timescales have been set to $\tau_i = 1$ for all $i$. We thus measure time in units of this timescale.

For a constant matrix $W$ and input vector $b$, the equations

$$y_i(x) = \sum_{j=1}^n W_{ij}x_j + b_i = 0, \qquad i = 1, \ldots, n,$$

define a hyperplane arrangement $\mathcal{H} = \{h_1, \ldots, h_n\}$ in $\mathbb{R}^n_{\geq 0}$. The $i$-th hyperplane $h_i$ is defined by $y_i(x) = 0$, with normal vector equal to the $i$-th row of $W$, population activity vector $x = (x_1, \ldots, x_n)$, and affine shift $b_i$. If $W_{ij} \neq 0$, then $h_i$ intersects the $j$-th coordinate axis at the point $x_j = -b_i/W_{ij}$. Since $W_{ii} = 0$, $h_i$ is parallel to the $i$-th axis.

The hyperplanes partition the positive orthant $\mathbb{R}^n_{\geq 0}$ into chambers. Within the interior of each chamber, each point is on the plus or minus side of each hyperplane $h_i$. The equations (3) thus reduce to a linear system of ODEs, with either $dx_i/dt = -x_i + y_i(x)$ or $dx_i/dt = -x_i$ for each $i$. In particular, TLNs are piecewise-linear dynamical systems with a different linear system governing the dynamics in each chamber.

Figure 4.

TLNs as a patchwork of linear systems. (A) The connectivity matrix $W$, input $b$, and differential equations for a TLN with $n = 2$ neurons. (B) The state space is divided into chambers (regions) $R_\ell$, each having dynamics governed by a different linear system $L_\ell$. The chambers are defined by the hyperplanes $h_1$ and $h_2$, with $h_i$ defined by $y_i(x) = 0$ (gray lines).

Figure 5.

A network on $n = 3$ neurons, its hyperplane arrangement, and limit cycle. (A) A TLN whose connectivity matrix $W$ is dictated by a $3$-cycle graph, together with the TLN equations. (B) The TLN from A produces firing rate activity in a periodic sequence. (C) (Left) The hyperplane arrangement defined by the equations $y_i(x) = 0$, with a trajectory initialized near the fixed point shown in black. (Right) A close-up of the trajectory, spiraling out from the unstable fixed point and falling into a limit cycle. Different colors correspond to different chambers of the hyperplane arrangement through which the trajectory passes.


A fixed point of a TLN (3) is a point $x^* \in \mathbb{R}^n_{\geq 0}$ that satisfies $dx_i/dt = 0$ for each $i$. In particular, we must have

$$x_i^* = \Big[\sum_{j=1}^n W_{ij}x_j^* + b_i\Big]_+ \quad \text{for each } i, \tag{4}$$

where the right-hand side is evaluated at the fixed point. We typically assume a nondegeneracy condition on $(W, b)$ [CGM19], which guarantees that each linear system is nondegenerate and has a single fixed point. This fixed point may or may not lie within the chamber where its corresponding linear system applies. The fixed points of the TLN are precisely the fixed points of the linear systems that lie within their respective chambers.

Figure 4 illustrates the hyperplanes and chambers for a TLN with $n = 2$. Each chamber, denoted as a region $R_\ell$, has its own linear system of ODEs, $L_\ell$. The fixed points corresponding to each linear system are denoted in matching color. Note that only one chamber contains its own fixed point (in red). This fixed point is thus the only fixed point of the TLN.

Figure 5 shows an example of a TLN on $n = 3$ neurons. The matrix $W$ is constructed from a $3$-cycle graph, and $b_i = \theta > 0$ for each $i$. The dynamics fall into a limit cycle where the neurons fire in a repeating sequence that follows the arrows of the graph. This time, the TLN equations define a hyperplane arrangement in $\mathbb{R}^3_{\geq 0}$, again with each hyperplane $h_i$ defined by $y_i(x) = 0$ (Figure 5C). An initial condition near the unstable fixed point in the all “+” chamber (where $y_i(x) > 0$ for each $i$) spirals out and converges to a limit cycle that passes through four distinct chambers. Note that the threshold nonlinearity is critical for the model to produce nonlinear behavior such as limit cycles; without it, the system would be linear. It is, nonetheless, nontrivial to prove that the limit cycle shown in Figure 5 exists. A recent proof was given for a special family of TLNs constructed from any $k$-cycle graph [BCRR21].

The set of all fixed points

A central object that is useful for understanding the dynamics of TLNs is the collection of all fixed points of the network, both stable and unstable. The support of a fixed point $x^*$ is the subset of active neurons,

$$\operatorname{supp}(x^*) \stackrel{\mathrm{def}}{=} \{i \in [n] \mid x_i^* > 0\}.$$

Our nondegeneracy condition (which is generically satisfied) guarantees that we can have at most one fixed point per chamber of the hyperplane arrangement $\mathcal{H}$, and thus at most one fixed point per support. We can thus label all the fixed points of a given network by their supports:

$$\operatorname{FP}(W, b) \stackrel{\mathrm{def}}{=} \{\sigma \subseteq [n] \mid \sigma = \operatorname{supp}(x^*) \text{ for some fixed point } x^* \text{ of the TLN}\},$$

where $[n] = \{1, \ldots, n\}$. For each support $\sigma \in \operatorname{FP}(W, b)$, the fixed point itself is easily recovered. Outside the support, $x_k^* = 0$ for all $k \notin \sigma$. Within the support, $x^*_\sigma$ is given by:

$$x^*_\sigma = (I - W_\sigma)^{-1} b_\sigma. \tag{5}$$

Here $x^*_\sigma$ and $b_\sigma$ are the column vectors obtained by restricting $x^*$ and $b$ to the indices in $\sigma$, and $W_\sigma$ is the induced principal submatrix obtained by restricting the rows and columns of $W$ to $\sigma$.

From (4), we see that a fixed point with support $\sigma = \operatorname{supp}(x^*)$ must satisfy the “on-neuron” conditions, $y_i(x^*) > 0$ for all $i \in \sigma$, as well as the “off-neuron” conditions, $y_k(x^*) \leq 0$ for all $k \notin \sigma$, to ensure that $x_i^* > 0$ for each $i \in \sigma$ and $x_k^* = 0$ for each $k \notin \sigma$. Equivalently, these conditions guarantee that the fixed point of the corresponding linear system lies inside its own chamber. Note that for such a fixed point, the values $x_i^*$ for $i \in \sigma$ depend only on the restricted subnetwork $(W_\sigma, b_\sigma)$. Therefore, the on-neuron conditions for $x^*$ in the full network are satisfied if and only if they hold in the restricted network. Since the off-neuron conditions are trivially satisfied in the restricted network, it follows that $\sigma \in \operatorname{FP}(W_\sigma, b_\sigma)$ is a necessary condition for $\sigma \in \operatorname{FP}(W, b)$. It is not, however, sufficient, as the off-neuron conditions may fail in the larger network.

Conveniently, the off-neuron conditions are independent and can be checked one neuron at a time. Thus,

$$\sigma \in \operatorname{FP}(W, b) \iff \sigma \in \operatorname{FP}(W_\sigma, b_\sigma) \;\text{ and }\; y_k(x^*) \leq 0 \text{ for all } k \notin \sigma.$$

When $\sigma \in \operatorname{FP}(W_\sigma, b_\sigma)$ satisfies all the off-neuron conditions, so that $\sigma \in \operatorname{FP}(W, b)$, we say that the fixed point survives to the larger network; otherwise, we say it dies.
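This support-by-support characterization translates directly into a brute-force algorithm for computing the fixed point supports of a small network: for each candidate support, solve the restricted linear system and check the on- and off-neuron conditions. A sketch (the function name and the two-neuron example are our own, not from the article):

```python
import numpy as np
from itertools import combinations

def fixed_point_supports(W, b, tol=1e-10):
    """Enumerate the fixed point supports of a (nondegenerate) TLN.
    For each candidate support sigma: solve x_sigma = (I - W_sigma)^{-1} b_sigma,
    then check the on-neuron conditions (x_i > 0 for i in sigma) and the
    off-neuron conditions (sum_{j in sigma} W_kj x_j + b_k <= 0 for k not in sigma)."""
    n = len(b)
    supports = []
    for size in range(1, n + 1):
        for sigma in combinations(range(n), size):
            s = list(sigma)
            x_s = np.linalg.solve(np.eye(size) - W[np.ix_(s, s)], b[s])
            if np.any(x_s <= tol):
                continue                       # on-neuron conditions fail
            x = np.zeros(n)
            x[s] = x_s
            if all(W[k] @ x + b[k] <= tol for k in range(n) if k not in sigma):
                supports.append(set(sigma))    # sigma survives
    return supports

# Two neurons with mutual inhibition and unequal drives: only the support
# {0} yields a fixed point; {1} fails its off-neuron condition.
W = np.array([[0.0, -0.8],
              [-0.8, 0.0]])
b = np.array([1.0, 0.5])
print(fixed_point_supports(W, b))  # [{0}]
```

The cost is exponential in $n$, which is exactly why parameter-independent graph rules that rule supports in or out without this computation are so valuable.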

The fixed point corresponding to $\sigma$ is stable if and only if all eigenvalues of $-I + W_\sigma$ have negative real part. For competitive (or inhibition-dominated) TLNs, all fixed points—whether stable or unstable—have a stable manifold. This is because competitive TLNs have $W_{ij} \leq 0$ for all $i, j$. Applying the Perron–Frobenius theorem to the nonnegative matrix $I - W_\sigma$, we see that its largest magnitude eigenvalue is real and positive, and hence the largest magnitude eigenvalue of $-I + W_\sigma$ is guaranteed to be real and negative. The corresponding eigenvector provides an attracting direction into the fixed point. Combining this observation with the nondegeneracy condition reveals that the unstable fixed points are all hyperbolic (i.e., saddle points).

3. Combinatorial Threshold-Linear Networks

Combinatorial threshold-linear networks (CTLNs) are a special case of competitive (or inhibition-dominated) TLNs, with the same threshold nonlinearity, that were first introduced in [MDIC16]. What makes CTLNs special is that we restrict to having only two values for the connection strengths $W_{ij}$, for $i \neq j$. These are obtained as follows from a directed graph $G$, where $j \to i$ indicates that there is an edge from $j$ to $i$ and $j \not\to i$ indicates that there is no such edge:

$$W_{ij} = \begin{cases} \phantom{-}0 & \text{if } i = j, \\ -1 + \varepsilon & \text{if } j \to i \text{ in } G, \\ -1 - \delta & \text{if } j \not\to i \text{ in } G. \end{cases}$$

Additionally, CTLNs typically have a constant external input $b_i = \theta > 0$ for all $i$ in order to ensure the dynamics are internally generated rather than inherited from a changing or spatially heterogeneous input.

A CTLN is thus completely specified by the choice of a graph $G$, together with three real parameters: $\varepsilon$, $\delta$, and $\theta$. We additionally require that $\delta > 0$, $\theta > 0$, and $0 < \varepsilon < \frac{\delta}{\delta + 1}$. When these conditions are met, we say the parameters are within the legal range. Note that the upper bound on $\varepsilon$ implies $\varepsilon < 1$, and so the matrix $W$ is always effectively inhibitory. For fixed parameters, only the graph $G$ varies between networks. The network in Figure 5 is a CTLN with the standard parameters $\varepsilon = 0.25$, $\delta = 0.5$, and $\theta = 1$.

We interpret a CTLN as modeling a network of excitatory neurons, whose net interactions are effectively inhibitory due to a strong global inhibition (Figure 6). When $j \not\to i$, we say $j$ strongly inhibits $i$; when $j \to i$, we say $j$ weakly inhibits $i$. The weak inhibition is thought of as the sum of an excitatory synaptic connection and the background inhibition. Note that because $-1 - \delta < -1 < -1 + \varepsilon$, when $j \not\to i$, neuron $j$ inhibits $i$ more than it inhibits itself via its leak term; when $j \to i$, neuron $j$ inhibits $i$ less than it inhibits itself. These differences in inhibition strength cause the activity to follow the arrows of the graph.
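In code, the CTLN construction is only a few lines. The sketch below (our own helper, with the standard parameters as defaults) builds $W$ from an edge list and enforces the legal range:

```python
import numpy as np

def ctln_matrix(edges, n, eps=0.25, delta=0.5):
    """CTLN connectivity matrix for a directed graph on nodes 0..n-1:
    W_ij = 0 if i == j, -1 + eps if j -> i in G, -1 - delta otherwise.
    'edges' is a collection of pairs (j, i) meaning j -> i."""
    assert delta > 0 and 0 < eps < delta / (delta + 1), "parameters outside legal range"
    W = np.full((n, n), -1.0 - delta)   # default: strong inhibition (no edge)
    for (j, i) in edges:
        W[i, j] = -1.0 + eps            # weak inhibition along graph edges
    np.fill_diagonal(W, 0.0)            # no self-connections
    return W

# The 3-cycle graph 0 -> 1 -> 2 -> 0 of Figure 5, with standard parameters:
W = ctln_matrix([(0, 1), (1, 2), (2, 0)], n=3)
```

With the standard parameters, edges carry weight $-0.75$ and non-edges carry weight $-1.5$, so every pairwise interaction is indeed inhibitory.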

Figure 6.

CTLNs. A neural network with excitatory pyramidal neurons (triangles) and a background network of inhibitory interneurons (gray circles) that produces a global inhibition. The corresponding graph (right) retains only the excitatory neurons and their connections.


The set of fixed point supports of a CTLN with graph $G$ is denoted as:

$$\operatorname{FP}(G, \varepsilon, \delta) \stackrel{\mathrm{def}}{=} \operatorname{FP}(W, b),$$

where $W$ and $b$ are specified by a CTLN with graph $G$ and parameters $\varepsilon$ and $\delta$. Note that $\operatorname{FP}(G, \varepsilon, \delta)$ is independent of $\theta$, provided $b_i = \theta$ is constant across neurons as in a CTLN. It is also frequently independent of $\varepsilon$ and $\delta$. For this reason we often refer to it as $\operatorname{FP}(G)$, especially when a fixed choice of $\varepsilon$ and $\delta$ is understood.

The legal range condition, $\varepsilon < \frac{\delta}{\delta + 1}$, is motivated by a theorem in [MDIC16]. It ensures that single directed edges are not allowed to support stable fixed points. This allows us to prove the following theorem connecting a certain graph structure to the absence of stable fixed points. Note that a graph is oriented if for any pair of nodes, $i \to j$ implies $j \not\to i$ (i.e., there are no bidirectional edges). A sink is a node with no outgoing edges.

Theorem 3.1 ([MDIC16, Theorem 2.4]).

Let $G$ be an oriented graph with no sinks. Then for any parameters in the legal range, the associated CTLN has no stable fixed points. Moreover, the activity is bounded.

The graph in Figure 5A is an oriented graph with no sinks. It has a single fixed point, $\operatorname{FP}(G) = \{123\}$, irrespective of the parameters (note that we use $123$ as shorthand for the set $\{1, 2, 3\}$). This fixed point is unstable and the dynamics converge to a limit cycle (Figure 5C).

Even when there are no stable fixed points, the dynamics of a CTLN are always bounded [MDIC16]. In the limit as $t \to \infty$, we can bound the total population activity as a function of the parameters $\theta$, $\varepsilon$, and $\delta$:

$$\frac{\theta}{1 + \delta} \leq \sum_{i=1}^n x_i(t) \leq \frac{\theta}{1 - \varepsilon}. \tag{6}$$

In simulations, we observe a rapid convergence to this regime.
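This convergence can be checked numerically on the 3-cycle CTLN of Figure 5. The sketch below is our own (forward Euler, standard parameters): it simulates the network, discards an initial transient, and compares the total activity against the bounds $\theta/(1+\delta)$ and $\theta/(1-\varepsilon)$.

```python
import numpy as np

eps, delta, theta = 0.25, 0.5, 1.0     # standard CTLN parameters
# 3-cycle CTLN: W_ij = -1 + eps if j -> i, -1 - delta otherwise, 0 on diagonal.
W = np.full((3, 3), -1.0 - delta)
for (j, i) in [(0, 1), (1, 2), (2, 0)]:
    W[i, j] = -1.0 + eps
np.fill_diagonal(W, 0.0)

x = np.array([0.4, 0.2, 0.1])
dt, T = 0.005, 60.0
totals = []
for step in range(int(round(T / dt))):
    x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
    if step * dt > 20.0:               # discard the transient
        totals.append(x.sum())

lo, hi = theta / (1 + delta), theta / (1 - eps)   # the bounds in (6)
# After the transient, total population activity stays trapped in [lo, hi].
assert lo - 0.05 < min(totals) and max(totals) < hi + 0.05
```

With the standard parameters, $\theta/(1+\delta) = 2/3$ and $\theta/(1-\varepsilon) = 4/3$; in our runs the total activity on the limit cycle quickly settles between these values.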

Figure 7.

Dynamics of a CTLN network on $n$ neurons. The graph $G$ is a directed Erdős–Rényi random graph with edge probability $p$ and no self loops. The CTLN parameters are fixed across all panels. Initial conditions for each neuron, $x_i(0)$, are randomly and independently chosen from a uniform distribution. (A-D) Four solutions from the same deterministic network, differing only in the choice of initial conditions. In each panel, the top plot shows the firing rate as a function of time for each neuron in grayscale. The middle plot shows the summed total population activity, $\sum_i x_i(t)$, which quickly becomes trapped between the horizontal gray lines—the bounds in equation (6). The bottom plot shows individual rate curves for all neurons, in different colors. (A) The network appears chaotic, with some recurring patterns of activity. (B) The solution initially appears to be chaotic, like the one in A, but eventually converges to a stable fixed point supported on a clique. (C) The solution converges to a limit cycle. (D) The solution converges to a different limit cycle. Note that one can observe brief “echoes” of this limit cycle in the transient activity of panel B.


Figure 7 depicts four solutions for the same CTLN on $n$ neurons. The graph was generated as a directed Erdős–Rényi random graph with edge probability $p$; note that it is not an oriented graph. Since the network is deterministic, the only difference between simulations is the initial conditions. While panel A appears to show chaotic activity, the solutions in panels B, C, and D all settle into a fixed point or a limit cycle within the allotted time frame. The long transient of panel B is especially striking: partway through, the activity appears as though it will fall into the same limit cycle from panel D, but then escapes into another period of chaotic-looking dynamics before abruptly converging to a stable fixed point. In all cases, the total population activity rapidly converges to lie within the bounds given in (6), depicted in gray.

Fun examples

Despite their simplicity, CTLNs display a rich variety of nonlinear dynamics. Even very small networks can exhibit interesting attractors with unexpected properties. Theorem 3.1 tells us that one way to guarantee that a network will produce dynamic—as opposed to static—attractors is to choose $G$ to be an oriented graph with no sinks. The following examples are of this type.

Figure 8.

Gaudi attractor. A CTLN for a cyclically symmetric tournament on $n$ nodes produces two distinct attractors, depending on initial conditions. We call the top one the Gaudi attractor because the undulating curves are reminiscent of work by the architect from Barcelona.


The Gaudi attractor. Figure 8 shows two solutions to a CTLN for a cyclically symmetric tournament graph¹ on $n$ nodes. For some initial conditions, the solutions converge to a somewhat boring limit cycle with the firing rates $x_1, \ldots, x_n$ all peaking in the expected cyclic sequence (bottom middle). For a different set of initial conditions, however, the solution converges to the beautiful and unusual attractor displayed at the top.


¹A tournament is a directed graph in which every pair of nodes has exactly one (directed) edge between them.

Symmetry and synchrony. Because the pattern of weights in a CTLN is completely determined by the graph $G$, any symmetry of the graph necessarily translates to a symmetry of the differential equations, and hence of the vector field. It follows that the automorphism group of $G$ also acts on the set of all attractors, which must respect the symmetry. For example, in the cyclically symmetric tournament of Figure 8, both the Gaudi attractor and the “boring” limit cycle below it are invariant under the cyclic permutation of the nodes: the solution is preserved up to a time translation.

Another way for symmetry to manifest itself in an attractor is via synchrony. The network in Figure 9A depicts a CTLN with a graph that has a nontrivial automorphism group, cyclically permuting the nodes 2, 3, and 4. In the corresponding attractor, the neurons 2, 3, 4 perfectly synchronize as the solution settles into the limit cycle. Notice, however, what happens for the network in Figure 9B. In this case, the limit cycle looks very similar to the one in A, with the same synchrony among neurons 2, 3, and 4. However, this graph is missing one of the edges, and so it has no nontrivial automorphisms. We refer to this phenomenon as surprise symmetry.

Figure 9.

Symmetry and synchrony. (A) A graph with a nontrivial automorphism group has an attractor where nodes 2, 3, and 4 fire synchronously. (B) The symmetry is broken due to the dropped edge. Nevertheless, the attractor still respects the symmetry, with nodes 2, 3, and 4 firing synchronously. Note that both attractors are very similar limit cycles, but the one in B has a longer period. (Standard parameters: $\varepsilon = 0.25$, $\delta = 0.5$, $\theta = 1$.)


On the flip side, a network with graph symmetry may have multiple attractors that are exchanged by the group action, but do not individually respect the symmetry. This is the more familiar scenario of spontaneous symmetry breaking.

Emergent sequences. One of the most reliable properties of CTLNs is the tendency of neurons to fire in sequence. Although we have seen examples of synchrony, the global inhibition promotes competitive dynamics wherein only one or a few neurons reach their peak firing rates at the same time. The sequences may be intuitive, as in the networks of Figures 8 and 9, following obvious cycles in the graph. However, even for small networks the emergent sequences may be difficult to predict.

Figure 10.

Emergent sequences can be difficult to predict. (A) (Left) The graph of a CTLN that is a tournament on seven nodes. (Right) The same graph, but with the cycle corresponding to the sequential activity highlighted in black. (B) A solution to the CTLN that converges to a limit cycle. This appears to be the only attractor of the network for the standard parameters.


The network in Figure 10A has $n = 7$ neurons, and the graph is a tournament with no nontrivial automorphisms. The corresponding CTLN appears to have a single, global attractor, shown in Figure 10B. The neurons in this limit cycle fire in a repeating sequence, 634517, with 5 being the lowest-firing node. This sequence is highlighted in black in the graph, and corresponds to a cycle in the graph. However, it is only one of many cycles in the graph. Why do the dynamics select this sequence and not the others? And why does neuron 2 drop out, while all others persist? This is particularly puzzling given that node 2 has in-degree three, while nodes 3 and 5 have in-degree two.

Figure 11.

An example CTLN and its attractors. (A) The graph of a CTLN. The fixed point supports are given by $\operatorname{FP}(G) = \{123, 4, 1234\}$, irrespective of the parameters $\varepsilon, \delta$. (B) Solutions to the CTLN in A using the standard parameters $\varepsilon = 0.25$, $\delta = 0.5$, and $\theta = 1$. (Top) The initial condition was chosen as a small perturbation of the fixed point supported on $1234$. The activity quickly converges to a limit cycle where the high-firing neurons are the ones in the fixed point support $123$. (Bottom) A different initial condition yields a solution that converges to the static attractor corresponding to the stable fixed point on node $4$. (C) The three fixed points are depicted in a three-dimensional projection of the four-dimensional state space. Perturbations of the fixed point supported on $1234$ produce solutions that either converge to the limit cycle or to the stable fixed point from B.


Indeed, local properties of a network, such as the in- and out-degrees of individual nodes, are insufficient for predicting the participation and ordering of neurons in emergent sequences. Nevertheless, the sequence is fully determined by the structure of $G$. We just have a limited understanding of how. Recent progress in understanding sequential attractors has relied on special network architectures that are cyclic like the ones in Figure 9 [PLACM22]. Interestingly, although the graph in Figure 10 does not have such an architecture, the induced subgraph generated by the high-firing nodes 1, 3, 4, 6, and 7 is isomorphic to the graph in Figure 8. This graph, as well as the two graphs in Figure 9, have corresponding networks that are in some sense irreducible in their dynamics. These are examples of graphs that we refer to as core motifs [PMMC22].

4. Minimal Fixed Points, Core Motifs, and Attractors

Stable fixed points of a network are of obvious interest because they correspond to static attractors [HSS03, CDI13]. One of the most striking features of CTLNs, however, is the strong connection between unstable fixed points and dynamic attractors [PMMC22, PLACM22].

Question 2.

For a given CTLN, can we predict the dynamic attractors of the network from its unstable fixed points? Can the unstable fixed points be determined from the structure of the underlying graph $G$?

Throughout this section, $G$ is a directed graph on $n$ nodes. Subsets $\sigma \subseteq [n]$ are often used to denote both the collection of vertices indexed by $\sigma$ and the induced subgraph $G|_\sigma$. The corresponding network is assumed to be a CTLN with fixed parameters $\varepsilon$, $\delta$, and $\theta$.

Figure 11 provides an example to illustrate the relationship between unstable fixed points and dynamic attractors. Any CTLN with the graph in panel A has three fixed points, with supports $\operatorname{FP}(G) = \{123, 4, 1234\}$. The collection of fixed point supports can be thought of as a partially ordered set, ordered by inclusion. In our example, $123$ and $4$ are thus minimal fixed point supports, because they are minimal under inclusion. It turns out that the corresponding fixed points each have an associated attractor (Figure 11B). The one supported on $4$, a sink in the graph, yields a stable fixed point, while the (unstable) fixed point supported on $123$, whose induced subgraph is a $3$-cycle, yields a limit cycle attractor with high-firing neurons $1$, $2$, and $3$. Figure 11C depicts all three fixed points in the state space. Here we can see that the third one, supported on $1234$, acts as a “tipping point” on the boundary of two basins of attraction. Initial conditions near this fixed point can yield solutions that converge either to the stable fixed point or the limit cycle.

Not all minimal fixed points have corresponding attractors. In PMMC22 we saw that the key property of such a support σ is that it be minimal not only in FP(G) but also in FP(G|σ), where G|σ corresponds to the induced subnetwork restricted to the nodes in σ. In other words, σ must be the only fixed point support in FP(G|σ). This motivates the definition of core motifs.

Figure 12.

Small core motifs. For each of these graphs, FP(G) = {[n]}, where n is the number of nodes. Attractors are shown for CTLNs with the standard parameters ε = 0.25, δ = 0.5, and θ = 1.

Definition 4.1.

Let G be the graph of a CTLN on n nodes. An induced subgraph G|σ is a core motif of the network if FP(G|σ) = {σ}.

When the graph G is understood, we sometimes refer to σ itself as a core motif if G|σ is one. The associated fixed point is called a core fixed point. Core motifs can be thought of as “irreducible” networks because they have a single fixed point, which has full support. Since the activity is bounded and must converge to an attractor, the attractor can be said to correspond to this fixed point. A larger network that contains G|σ as an induced subgraph may or may not have σ ∈ FP(G). When the core fixed point does survive, we refer to the embedded G|σ as a surviving core motif, and we expect the associated attractor to survive. In Figure 11, the surviving core motifs are the two minimal ones, and they precisely predict the attractors of the network.

The simplest core motifs are cliques. When these survive inside a network G, the corresponding attractor is always a stable fixed point supported on all nodes of the clique. In fact, we have conjectured that any stable fixed point of a CTLN must correspond to a maximal clique of G, specifically a target-free clique CGM19.

Up to size 4, all core motifs are parameter-independent; for size 5, most (but not all) core motifs are. Figure 12 shows the complete list of all core motifs of size n ≤ 4, together with some associated attractors. The cliques all correspond to stable fixed points, the simplest type of attractor. The 3-cycle yields the limit cycle attractor in Figure 5, which may be distorted when embedded in a larger network (see Figure 11B). The other core motifs, whose fixed points are unstable, have dynamic attractors. Note that the 4-cycu graph has a symmetry that exchanges two of its nodes, and the rate curves for these two neurons are synchronous in the attractor. This synchrony is also evident in the 4-ufd attractor, despite the fact that this graph does not have the symmetry. Perhaps the most interesting attractor, however, is the one for the fusion 3-cycle graph. Here the 3-cycle attractor, which does not survive the embedding into the larger graph, appears to “fuse” with a stable fixed point that also does not survive. The resulting attractor can be thought of as binding together a pair of smaller attractors.

We have performed extensive tests of whether or not core motifs predict attractors in small networks. Specifically, we decomposed all 9608 directed graphs on n = 5 nodes into core motif components, and used this to predict the attractors. We found that 1053 of the graphs have surviving core motifs that are not cliques; these graphs were thus expected to support dynamic attractors. The remaining 8555 graphs contain only cliques as surviving core motifs, and were thus expected to have only stable fixed point attractors. Overall, we found that core motifs correctly predicted the set of attractors in 9586 of the 9608 graphs. Of the 22 graphs with mistakes, 19 have a core motif with no corresponding attractor, and 3 have no core motifs at all for the chosen parameters.2


2 Classification of CTLNs on n = 5 nodes available at

5. Graph Rules

We have seen that CTLNs exhibit a rich variety of nonlinear dynamics, and that the attractors are closely related to the fixed points. This opens up a strategy for linking attractors to the underlying network architecture via the fixed point supports FP(G). Our main tools for doing this are graph rules.

Throughout this section, we will use Greek letters to denote subsets σ ⊆ [n] corresponding to fixed point supports (or potential supports), while Latin letters denote individual nodes/neurons. As before, G|σ denotes the induced subgraph obtained from G by restricting to σ and keeping only edges between vertices of σ. The fixed point supports are:

FP(G) = FP(G, ε, δ) = { σ ⊆ [n] : σ = supp(x*) for some fixed point x* of the associated CTLN }.
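Because the dynamics are threshold-linear, FP(G) can be computed by brute force for small graphs: a support σ belongs to FP(G) precisely when (I − W_σσ)x = θ1 has a strictly positive solution and every node outside σ stays at or below threshold. A minimal sketch, with our own function names and the edge convention that a pair (j, i) means j → i:

```python
import itertools
import numpy as np

def ctln_W(edges, n, eps=0.25, delta=0.5):
    """CTLN weight matrix: 0 on the diagonal, -1+eps for an edge j->i,
    and -1-delta when j does not send an edge to i."""
    W = np.full((n, n), -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    for (j, i) in edges:  # pair (j, i) means the edge j -> i
        W[i, j] = -1.0 + eps
    return W

def fixed_point_supports(W, theta=1.0):
    """Brute-force FP(G): sigma is a support iff (I - W_sigma)x = theta*1
    has a strictly positive solution and every outside node is below threshold."""
    n = W.shape[0]
    FP = set()
    for k in range(1, n + 1):
        for sigma in itertools.combinations(range(n), k):
            s = list(sigma)
            A = np.eye(k) - W[np.ix_(s, s)]
            try:
                x = np.linalg.solve(A, theta * np.ones(k))
            except np.linalg.LinAlgError:
                continue  # degenerate support; skip
            if np.all(x > 0):
                outside = [i for i in range(n) if i not in sigma]
                if all(W[i, s] @ x + theta <= 0 for i in outside):
                    FP.add(frozenset(sigma))
    return FP

# The 3-cycle is a core motif: its only fixed point support is the full set.
W = ctln_W([(0, 1), (1, 2), (2, 0)], n=3)
print(fixed_point_supports(W))  # → {frozenset({0, 1, 2})}
```

Running the same function on other small graphs reproduces, for instance, the independent-set and parity rules discussed below.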

The main question addressed by graph rules is:

Question 3.

What can we say about FP(G) from knowledge of G alone?

For example, consider the graphs in Figure 13. Can we determine from the graph alone which subgraphs will support fixed points? Moreover, can we determine which of those subgraphs are core motifs that will give rise to attractors of the network? We saw in Section 4 (Figure 12) that cycles and cliques are among the small core motifs; can cycles and cliques produce core motifs of any size? Can we identify other graph structures that are relevant for either ruling in or ruling out certain subgraphs as fixed point supports? The rest of Section 5 focuses on addressing these questions.

Figure 13.

Graphs for which FP(G) is completely determined by graph rules.


Note that implicit in the above questions is the idea that graph rules are parameter-independent: that is, they directly relate the structure of G to FP(G) via results that are valid for all choices of ε, δ, and θ (provided they lie within the legal range). In order to obtain the most powerful results, we also require that our CTLNs be nondegenerate. As has already been noted, nondegeneracy is generically satisfied for TLNs CGM19. For CTLNs, it is satisfied irrespective of θ and for almost all legal-range choices of ε and δ (i.e., up to a set of measure zero in the two-dimensional parameter space for ε and δ).

5.1. Examples of graph rules

We’ve already seen some graph rules. For example, Theorem 3.1 told us that if G is an oriented graph with no sinks, then the associated CTLN has no stable fixed points. Such CTLNs are thus guaranteed to exhibit only dynamic attractors. Here we present a set of eight simple graph rules, all proven in CGM19, that are easy to understand and give a flavor of the kinds of theorems we have found.

We will use the following graph-theoretic terminology. A source is a node with no incoming edges, while a sink is a node with no outgoing edges. Note that a node can be a source or sink in an induced subgraph G|σ while not being one in G. An independent set is a collection of nodes with no edges between them, while a clique is a set of nodes that is all-to-all bidirectionally connected. A cycle is a graph (or an induced subgraph) where each node has exactly one incoming and one outgoing edge, and they are all connected in a single directed cycle. A directed acyclic graph (DAG) is a graph with a topological ordering of the vertices, so that i → j only if i < j; such a graph does not contain any directed cycles. Finally, a target of a graph G|σ is a node k such that i → k for all i ∈ σ ∖ {k}. Note that a target k may be inside or outside σ.
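These notions translate directly into code. In the sketch below (our own naming conventions), a graph is a dict mapping each node to its set of out-neighbors, and an optional σ restricts a check to the induced subgraph:

```python
def is_source(G, j, sigma=None):
    """A source has no incoming edges (within sigma, if given)."""
    nodes = sigma if sigma is not None else G.keys()
    return all(j not in G[i] for i in nodes if i != j)

def is_sink(G, j, sigma=None):
    """A sink has no outgoing edges (within sigma, if given)."""
    nodes = sigma if sigma is not None else G.keys()
    return all(k not in G[j] for k in nodes if k != j)

def is_independent_set(G, sigma):
    """No edges in either direction between distinct nodes of sigma."""
    return all(k not in G[i] for i in sigma for k in sigma if k != i)

def is_clique(G, sigma):
    """All-to-all bidirectionally connected."""
    return all(k in G[i] for i in sigma for k in sigma if k != i)

def is_target(G, sigma, k):
    """k receives an edge from every node of sigma (other than itself);
    k may lie inside or outside sigma."""
    return all(k in G[i] for i in sigma if i != k)

# A node can be a sink in an induced subgraph without being one in G:
G = {0: {1}, 1: {2}, 2: set()}
print(is_sink(G, 1, sigma={0, 1}))  # → True (no outgoing edges within {0, 1})
print(is_sink(G, 1))                # → False (node 1 sends an edge to 2 in G)
```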

Examples of graph rules:

Rule 1 (independent sets).

If σ is an independent set, then σ ∈ FP(G) if and only if each i ∈ σ is a sink in G.

Rule 2 (cliques).

If σ is a clique, then σ ∈ FP(G) if and only if there is no node k of G, k ∉ σ, such that i → k for all i ∈ σ. In other words, σ ∈ FP(G) if and only if σ is a target-free clique. If σ ∈ FP(G), the corresponding fixed point is stable.

Rule 3 (cycles).

If σ is a cycle, then σ ∈ FP(G) if and only if there is no node k of G, k ∉ σ, such that k receives two or more edges from σ. If σ ∈ FP(G), the corresponding fixed point is unstable.

Rule 4 (sources).

(i) If σ contains a source j ∈ σ with j → k for some k ∈ σ, then σ ∉ FP(G). (ii) Suppose j ∉ σ, but j is a source in G. Then σ ∈ FP(G|σ∪{j}) if and only if σ ∈ FP(G|σ).

Rule 5 (targets).

(i) If σ has a target k, with k ∈ σ and no edge from k to some j ∈ σ (j ≠ k), then σ ∉ FP(G|σ), and thus σ ∉ FP(G). (ii) If σ has a target k, with k ∉ σ, then σ ∉ FP(G|σ∪{k}), and thus σ ∉ FP(G).

Rule 6 (sinks).

If G has a sink s, then {s} ∈ FP(G).

Rule 7 (DAGs).

If G is a directed acyclic graph with sinks s1, …, sℓ, then FP(G) = { σ ⊆ {s1, …, sℓ} : σ ≠ ∅ }, the set of all unions of sinks.

Rule 8 (parity).

For any graph G, |FP(G)| is odd.
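Several of these rules are purely combinatorial, so they can be checked on the graph itself, with no linear algebra. A sketch of Rules 1–3 (graphs as dicts from each node to its set of out-neighbors; function names are ours):

```python
def independent_set_in_FP(G, sigma):
    """Rule 1: an independent set supports a fixed point
    iff each of its nodes is a sink in G."""
    return all(not G[i] for i in sigma)

def clique_in_FP(G, sigma):
    """Rule 2: a clique supports a (stable) fixed point iff it is target-free:
    no outside node k receives an edge from every node of sigma."""
    return not any(all(k in G[i] for i in sigma)
                   for k in G if k not in sigma)

def cycle_in_FP(G, sigma):
    """Rule 3: a cycle supports an (unstable) fixed point iff no outside
    node receives two or more edges from sigma."""
    return not any(sum(1 for i in sigma if k in G[i]) >= 2
                   for k in G if k not in sigma)

# A 3-cycle on {0, 1, 2} plus a node 3 receiving edges from nodes 0 and 1:
G = {0: {1, 3}, 1: {2, 3}, 2: {0}, 3: set()}
print(cycle_in_FP(G, {0, 1, 2}))      # → False: node 3 gets two edges from the cycle
print(independent_set_in_FP(G, {3}))  # → True: node 3 is a sink of G
```

Note that these checks decide membership in FP(G) without ever fixing ε, δ, or θ, which is exactly what parameter-independence means.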

In many cases, particularly for small graphs, our graph rules are complete enough that they can be used to fully work out FP(G). In such cases, FP(G) is guaranteed to be parameter-independent (since the graph rules do not depend on ε, δ, or θ). As an example, consider the graph on