
Rate-Based Synaptic Plasticity Rules

Lawrence Udeigwe

Communicated by Notices Associate Editor Reza Malek-Madani

Introduction

The human nervous system is made up of networks containing billions of nerve cells, also known as neurons. These networks function as a result of each neuron transmitting electrochemical signals to nearby neurons when it releases an action potential, or spike. This spike occurs when the difference in electrical potential between the inside and outside of the neuron, also known as its membrane potential, exceeds a certain threshold (see Figure 1) 7. Increases in membrane potential are elicited by several factors, including inputs from other neurons and the environment. The frequency at which a neuron spikes is referred to as its spike rate, activity rate, or firing rate. The firing rate and firing pattern of a neuron encode much of the information it transmits.

Figure 1.

(A) Propagation of membrane potential. An action potential (or spike) occurs when the membrane potential of a neuron exceeds a certain threshold. (B) A regular spiking pattern in rat motor cortex.


At the moment a neuron spikes, an electrical pulse travels from the cell body down the axon fiber to be received by nearby neurons whose dendrites are connected to its axon terminals. The junction where an axon terminal meets a dendrite is known as a synapse. The neuron transmitting the signal is called the presynaptic neuron, and the neuron receiving the signal is called the postsynaptic neuron (see Figure 2). The term synaptic plasticity refers to the neurobiological process by which specific patterns of activity at the synapses result in changes in synaptic strengths and enable the brain to adapt to new information over time.

Figure 2.

The synaptic relationship between the presynaptic and postsynaptic neurons.


A rate-based plasticity model defines the synaptic strength change as a function of the presynaptic and postsynaptic firing rates. This article explores rate-based synaptic plasticity models and discusses related emergent research questions.

Hebb’s Rule

One of the most studied theories of synaptic plasticity is Hebb’s rule, named for Donald Hebb, who proposed that when neuron A repeatedly participates in firing neuron B, the synaptic strength of A onto B increases 11. If one thinks of A as the presynaptic neuron and B as the postsynaptic neuron, Hebbian plasticity implies that changes in synaptic strengths in a network of neurons are a function of the pre- and postsynaptic neural activities. Hebbian plasticity is believed to be the neural basis of associative long-term memory and of developmental changes such as receptive field development.

Now consider a network of neurons in which presynaptic neuron $j$ communicates with postsynaptic neuron $i$. Let $x_j$ be the input signal from neuron $j$ to $i$, $y_i$ the output from neuron $i$, and $w_{ij}$ the weight of the synapse between $j$ and $i$. Hebb’s Rule can be expressed as follows:

$$\Delta w_{ij} = x_j\, y_i. \tag{1}$$

Introducing a constant of variation, $\eta$, gives rise to the alternative form

$$\Delta w_{ij} = \eta\, x_j\, y_i. \tag{2}$$

Looking at Hebb’s Rule as the mechanism through which the network learns, $\eta$ can be regarded as the learning rate, a parameter that controls how fast the weights get modified, that is, how quickly that aspect of the network learns.
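As a minimal illustration (not from the article), the following Python snippet applies one Hebbian update to a single synapse; the firing rates and weight value are arbitrary.

```python
# One Hebbian update for a single synapse: dw = eta * x * y.
eta = 0.01      # learning rate (illustrative value)
x_pre = 5.0     # presynaptic firing rate
y_post = 2.0    # postsynaptic firing rate
w = 0.3         # current synaptic weight

w += eta * x_pre * y_post
print(w)        # 0.3 + 0.01 * 5 * 2 = 0.4
```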

The linear neuron model

Assume, at any time $t$, that a postsynaptic neuron receives $N$ presynaptic inputs in the form of a vector

$$\mathbf{x}(t) = \left[x_1(t), x_2(t), \ldots, x_N(t)\right]^T,$$

obtained stochastically from a finite set of input values, and let

$$\mathbf{w} = \left[w_1, w_2, \ldots, w_N\right]^T$$

be a vector of synaptic weights between each presynaptic neuron and the postsynaptic neuron. Then the activity response of the postsynaptic neuron is expressed as

$$y(t) = \sum_{i=1}^{N} w_i\, x_i(t), \tag{3}$$

or in dot product form

$$y = \mathbf{w} \cdot \mathbf{x} = \mathbf{w}^T \mathbf{x}. \tag{4}$$

Geometrically, the dot product between vectors $\mathbf{w}$ and $\mathbf{x}$ is defined as

$$\mathbf{w} \cdot \mathbf{x} = \|\mathbf{w}\|\, \|\mathbf{x}\| \cos\theta, \tag{5}$$

where $\theta$ is the angle between $\mathbf{w}$ and $\mathbf{x}$. This geometrical interpretation helps one understand exactly what is happening with the inputs, weights, and output. Assume that the presynaptic inputs and synaptic weights are normalized; then the size of $y$ depends solely on $\theta$, for $0 \leq \theta \leq \pi$: as $\theta$ gets smaller, $y$ gets larger, and $\mathbf{w}$ and $\mathbf{x}$ become closer or “more similar” to each other; as $\theta$ gets larger, $y$ gets smaller, and $\mathbf{w}$ and $\mathbf{x}$ become farther away from or “less similar” to each other. Hence, the magnitude of $y$ can be seen as a similarity measure between the presynaptic inputs $\mathbf{x}$ and the synaptic weights $\mathbf{w}$.
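To make the similarity interpretation concrete, here is a small Python sketch (with illustrative values, not data from the article) that computes the response of a linear neuron and the angle between the normalized input and weight vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(10)               # hypothetical presynaptic rates
w = rng.random(10)               # hypothetical synaptic weights
x /= np.linalg.norm(x)           # normalize the inputs
w /= np.linalg.norm(w)           # normalize the weights

y = w @ x                                   # postsynaptic response (dot product)
theta = np.arccos(np.clip(y, -1.0, 1.0))    # angle between w and x
print(y, np.degrees(theta))                 # larger y corresponds to a smaller angle
```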

It is assumed that a neuron “learns” as its synaptic weights modify over time. In some contexts, after a certain period of learning, the synaptic weights reach an equilibrium state where they stop changing, and the neuron is assumed to have learned. For example, it is known that as synaptic strengths in primary visual cortex (V1) get modified, a V1 neuron learns to respond differently to presynaptic input stimuli of different orientations (for instance, a light bar positioned horizontally, vertically, diagonally, etc.); and as the synaptic strengths stop changing, the neuron becomes orientation selective, that is, it yields a large postsynaptic response to a presynaptic input stimulus of a particular orientation and a very low response to every other presynaptic input 13. One can thus think of this particular learning of orientation selectivity as the neuron searching for the orientation of its preferred stimulus in the space of synaptic weights. This orientation selectivity behavior of neurons in V1 is an example of associative memory, as the neurons remember their preferred orientation.

Figure 3.

The linear neuron model. The output of the postsynaptic neuron, $y$, is a weighted sum of its presynaptic inputs.


Stability analysis of Hebb’s Rule

In vector form, Hebb’s rule is written as

$$\Delta \mathbf{w} = \eta\, y\, \mathbf{x}. \tag{6}$$

Plugging in Equation 4 for $y$ yields

$$\Delta \mathbf{w} = \eta\, (\mathbf{w}^T \mathbf{x})\, \mathbf{x} = \eta\, \mathbf{x}\mathbf{x}^T \mathbf{w}. \tag{7}$$

The average weight change over the ensemble or distribution of presynaptic input patterns presented during the learning process is

$$\left\langle \Delta \mathbf{w} \right\rangle = \eta\, \left\langle \mathbf{x}\mathbf{x}^T \mathbf{w} \right\rangle, \tag{8}$$

where $\langle \cdot \rangle$ implies expectation. With the biology-informed assumption that synaptic weights change at a much slower timescale than input stimuli, $\mathbf{w}$ can be factored out in Equation 8, and thus

$$\left\langle \Delta \mathbf{w} \right\rangle = \eta\, \left\langle \mathbf{x}\mathbf{x}^T \right\rangle \mathbf{w} = \eta\, C \mathbf{w}, \tag{9}$$

where $C$ is the correlation matrix defined as

$$C = \left\langle \mathbf{x}\mathbf{x}^T \right\rangle. \tag{10}$$

When $\eta$ is small, Equation 9 is a discretized version of the linear system of coupled first-order differential equations

$$\frac{d\mathbf{w}}{dt} = C \mathbf{w}, \tag{11}$$

whose fundamental solutions have the form $\mathbf{w}(t) = e^{\lambda t}\, \mathbf{v}$, where $\lambda$ is a scalar and $\mathbf{v}$ is a vector independent of time.

Plugging $\mathbf{w}(t) = e^{\lambda t}\, \mathbf{v}$ into Equation 11 yields

$$\lambda\, e^{\lambda t}\, \mathbf{v} = e^{\lambda t}\, C \mathbf{v}, \qquad \text{that is,} \qquad C \mathbf{v} = \lambda\, \mathbf{v}. \tag{12}$$

Thus $\lambda$ is an eigenvalue of $C$. Since $C$ is a correlation matrix, it is positive semi-definite, and hence $\lambda \geq 0$. Therefore each component of $\mathbf{w}$ grows with time (positively or negatively) without bound, implying that Hebb’s rule is unstable; this may consequently result in an uncontrollable increase or decrease in neuronal activity rates. Like many biological processes, Hebbian plasticity thus requires a compensatory mechanism, commonly referred to as synaptic homeostasis, to help ensure that the nervous system is in a dynamic regime where it functions optimally. Synaptic homeostasis also changes and adapts to the dynamics of the nervous system, a process known as homeostatic plasticity.
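The instability can be seen numerically. The following sketch (with arbitrary synthetic inputs) integrates the averaged Hebbian dynamics $d\mathbf{w}/dt = C\mathbf{w}$ with a simple Euler scheme; the norm of the weight vector grows without bound.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))          # synthetic ensemble of input patterns
C = X.T @ X / len(X)                    # correlation matrix <x x^T>

w = 0.01 * rng.normal(size=5)
dt = 0.01
for step in range(2001):
    w = w + dt * (C @ w)                # Euler step of dw/dt = C w
    if step % 500 == 0:
        print(step, np.linalg.norm(w))  # the norm keeps increasing
```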

One approach to implementing homeostatic plasticity in neural network models is to globally adjust the synapses to each postsynaptic neuron based on its activity level. This adjustment can be subtractive (i.e., synapses to a particular neuron are changed by the same amount) or multiplicative (i.e., synapses to a particular neuron are changed by an amount proportional to their strength) 6. Among experimentalists, synaptic scaling (i.e., adjusting all excitatory synapses of a neuron up or down to stabilize firing) is a well-studied mode of achieving homeostatic plasticity 1. Synaptic scaling globally adjusts synaptic strengths in a multiplicative manner and has been found to occur in cultured networks of neocortical, hippocampal, and spinal-cord neurons. Another well-studied mode of achieving homeostatic plasticity is spike-timing-dependent plasticity (STDP), which is based on the known fact that presynaptic activity that precedes postsynaptic firing or depolarization can induce long-term potentiation (LTP) of the synapse, whereas reversing this temporal order causes long-term depression (LTD) 14.
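As a rough sketch of the multiplicative idea (my simplification, not a model from the literature cited above), synaptic scaling can be caricatured as nudging all of a neuron's weights by a common factor toward a target firing rate.

```python
import numpy as np

def scale_weights(w, rate, target_rate, gain=0.1):
    """Multiplicatively scale all weights of one neuron toward its target rate."""
    factor = 1.0 + gain * (target_rate - rate) / target_rate
    return w * factor                 # every synapse changes in proportion to its strength

w = np.array([0.2, 0.5, 0.1])
print(scale_weights(w, rate=12.0, target_rate=10.0))   # rate too high, so weights scale down
```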

Oja Learning Rule and Neuronal Principal Component Analysis

A multiplicative approach to remedying the instability of Hebb’s rule is to normalize the weights so that each weight is constrained to lie between $0$ and $1$ and the sum of their squares is always equal to $1$ (that is, $\|\mathbf{w}\| = 1$). At time $t$, the change in synaptic weight is expressed as

$$\Delta w_i(t) = \eta\, y(t)\, x_i(t), \tag{13}$$

implying

$$w_i(t+1) = w_i(t) + \eta\, y(t)\, x_i(t). \tag{14}$$

Normalizing Equation 14 yields

$$w_i(t+1) = \frac{w_i(t) + \eta\, y(t)\, x_i(t)}{\left(\sum_{j=1}^{N} \left[w_j(t) + \eta\, y(t)\, x_j(t)\right]^2\right)^{1/2}}. \tag{15}$$

In addition to creating a numerical bound for the synaptic weights, the above normalization procedure also has a physiological implication. If there is an increase in weights of some of the synapses, then there will also be a corresponding decrease in weights of other synapses that are connected to the same neuron. This means that there is a spatial competition between the inputs.
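A one-step numerical example (with arbitrary numbers) makes the competition visible: strengthening the active synapse and renormalizing necessarily weakens the inactive ones.

```python
import numpy as np

eta, y = 0.2, 1.0
w = np.array([0.5, 0.5, 0.5, 0.5])        # ||w|| = 1
x = np.array([1.0, 0.0, 0.0, 0.0])        # only the first input is active

w_hebb = w + eta * y * x                  # unnormalized Hebbian step
w_new = w_hebb / np.linalg.norm(w_hebb)   # renormalize so that ||w|| = 1 again
print(w_new)                              # first weight goes up, the other three go down
```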

A great deal of the mathematical work in the field of synaptic plasticity has been geared toward stabilizing models that incorporate Hebb’s rule. One of these results is Oja’s rule, named after Finnish computer scientist Erkki Oja 15.

Continuing with the normalized Hebb’s rule (Equation 15) and letting $y(t) = \sum_{j} w_j(t)\, x_j(t)$ yield

$$w_i(t+1) = \frac{w_i(t) + \eta\, y(t)\, x_i(t)}{\left(\sum_j w_j(t)^2 + 2\eta\, y(t) \sum_j w_j(t)\, x_j(t) + \eta^2\, y(t)^2 \sum_j x_j(t)^2\right)^{1/2}}, \tag{16}$$

and since the weights are normalized, with the sum of their squares equal to $1$, the denominator on the right-hand side of Equation 16 simplifies to

$$\left(1 + 2\eta\, y(t)^2 + O(\eta^2)\right)^{1/2}.$$

Thus Equation 16 simplifies to

$$w_i(t+1) = \frac{w_i(t) + \eta\, y(t)\, x_i(t)}{\left(1 + 2\eta\, y(t)^2 + O(\eta^2)\right)^{1/2}}.$$

A first-order Taylor expansion of the right-hand side around $\eta = 0$ gives

$$w_i(t+1) = \left(w_i(t) + \eta\, y(t)\, x_i(t)\right)\left(1 - \eta\, y(t)^2\right) + O(\eta^2),$$

or

$$w_i(t+1) = w_i(t) + \eta\, y(t)\, x_i(t) - \eta\, y(t)^2\, w_i(t) + O(\eta^2).$$

Neglecting higher-order terms and rearranging yield

$$w_i(t+1) - w_i(t) = \eta\, y(t)\left[x_i(t) - y(t)\, w_i(t)\right].$$

Dropping the time index gives

$$\Delta w_i = \eta\, y\left(x_i - y\, w_i\right).$$

This equation is known as Oja’s rule. Sometimes it is written in continuous form as

$$\frac{dw_i}{dt} = \eta\, y\left(x_i - y\, w_i\right),$$

and in vector form

$$\frac{d\mathbf{w}}{dt} = \eta\, y\left(\mathbf{x} - y\, \mathbf{w}\right). \tag{20}$$

In Oja’s rule, it can be seen that the Hebbian learning process is updated by adding the concept of weight decay, or “forgetting.” In simple words, this means that the neuron periodically removes a little bit of what it has learned from previous inputs.
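A single update step (with arbitrary values) shows the effect of the decay term: Oja's rule subtracts a portion of the current weights from the plain Hebbian increment.

```python
import numpy as np

eta = 0.1
w = np.array([0.6, 0.8])
x = np.array([1.0, 0.5])
y = w @ x                          # postsynaptic response, here y = 1.0

dw_hebb = eta * y * x              # plain Hebb: always pushes the weights up along x
dw_oja = eta * y * (x - y * w)     # Oja: Hebbian term minus the "forgetting" term
print(dw_hebb, dw_oja)
```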

Now plugging Equation 4 into Equation 20 and taking expectations, one gets

$$\left\langle \frac{d\mathbf{w}}{dt} \right\rangle = \eta\left(C\mathbf{w} - \left(\mathbf{w}^T C \mathbf{w}\right)\mathbf{w}\right), \tag{21}$$

where $C$ is the correlation matrix defined in Equation 10. This equation represents the average behavior of the weight vector. At steady state,

$$C\mathbf{w} - \left(\mathbf{w}^T C \mathbf{w}\right)\mathbf{w} = 0. \tag{22}$$

Note that $\mathbf{w}^T C \mathbf{w}$ is a scalar. Letting $\lambda = \mathbf{w}^T C \mathbf{w}$ gives

$$C\mathbf{w} = \lambda\, \mathbf{w}. \tag{23}$$

That is, $\lambda$ is an eigenvalue of $C$. Multiplying both sides of Equation 23 on the left by $\mathbf{w}^T$, one gets

$$\mathbf{w}^T C \mathbf{w} = \lambda\, \mathbf{w}^T \mathbf{w}. \tag{24}$$

Equation 24 means that $\lambda = \lambda\, \|\mathbf{w}\|^2$, which then implies that $\|\mathbf{w}\| = 1$. Thus, the weight vector is normalized to one. This means that the weights become parallel to the eigenvector of $C$ with the maximum eigenvalue, and the output of the neuron becomes the corresponding principal component. Principal components are defined as the inner products between the eigenvectors and the input vectors. Hence, Oja’s Rule is called a principal component analyzer.
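The principal-component property can be checked with a short simulation. The sketch below (with an arbitrary input covariance, not data from the article) runs Oja's rule on correlated inputs and compares the converged weight vector with the leading eigenvector of the sample covariance.

```python
import numpy as np

rng = np.random.default_rng(2)
C_true = np.array([[3.0, 1.0, 0.5],
                   [1.0, 2.0, 0.3],
                   [0.5, 0.3, 1.0]])
L = np.linalg.cholesky(C_true)
X = rng.normal(size=(20000, 3)) @ L.T       # inputs with covariance close to C_true

w = rng.normal(size=3)
eta = 0.005
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)              # Oja's rule

evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
v1 = evecs[:, -1]                           # principal eigenvector
print(np.linalg.norm(w), abs(w @ v1))       # both should be close to 1
```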

BCM Learning Rule and Neuronal Selectivity

One of the most used theoretical models of synaptic plasticity is the BCM learning rule 4, proposed by Elie Bienenstock, Leon Cooper, and Paul Munro in 1982. The learning rule captures the dynamics of synaptic strengths as an external stimulus is processed through the lateral geniculate nucleus (LGN) to the primary visual cortex (V1). In a Hebbian manner, the learning rule expresses the changes in synaptic weights as a product of the presynaptic input and a nonlinear function of the postsynaptic neuronal activity. If $\mathbf{x} = [x_1, x_2, \ldots, x_N]^T$ is the presynaptic input vector and $y$ is the postsynaptic activity, then BCM adjusts each synaptic weight as follows:

$$\tau_w \frac{dw_i}{dt} = x_i\, \phi\!\left(y, \theta_M\right), \tag{25}$$

for $i = 1, 2, \ldots, N$. The variable $\theta_M$ is a threshold associated with the postsynaptic activity. Consequently $\phi$ has the following property: for low postsynaptic activity (defined as $0 < y < \theta_M$), the function $\phi$ is negative; and for high postsynaptic activity (defined as $y > \theta_M$), $\phi$ is positive. Figure 4A illustrates a typical graph of $\phi$. The learning rule also allows $\theta_M$ to be dynamic and depend on the temporal or spatial average value of postsynaptic activity in the following way:

$$\theta_M = \left\langle y^p \right\rangle, \tag{26}$$

where $p$ is any positive integer greater than 1. The parameters $\tau_w$ and $\tau_\theta$ are time scale factors which, in simulated environments, can be used to adjust how fast the system changes with respect to time. The dynamic threshold $\theta_M$ (sometimes referred to as the “sliding threshold”) provides stability to the learning rule and, from a biological perspective, provides homeostasis to the system. In practice, $p = 2$, $\phi(y, \theta_M) = y\,(y - \theta_M)$, and a temporal average value of postsynaptic activity have been shown to yield a stable system. Thus, if at any time the neuron receives a stimulus $\mathbf{x}$ randomly chosen from a stimulus set, say $\{\mathbf{x}^1, \mathbf{x}^2, \ldots, \mathbf{x}^K\}$, the weight vector $\mathbf{w}$ evolves in the following way:

$$\tau_w \frac{d\mathbf{w}}{dt} = \mathbf{x}\, y\,(y - \theta_M), \qquad \tau_\theta \frac{d\theta_M}{dt} = y^2 - \theta_M, \tag{27}$$

where the second part of the equation is essentially a low-pass filtered version of the temporal average in Equation 26.

Figure 4.

(A) A nonlinear function $\phi$ of the postsynaptic neuronal activity $y$, together with the threshold $\theta_M$ of the activity. (B) When the ratio $\tau_\theta/\tau_w$ is sufficiently small, the response converges to a steady state and the neuron selects one stimulus (here, the stimuli are $\mathbf{x}^1$ and $\mathbf{x}^2$, separated by a fixed angle, and they switch randomly at a rate of 5). (C) For a larger ratio, the responses oscillate but the neuron still selects the same stimulus. (D) When the ratio is too large, the neuron is no longer selective.


Figure 4B–D demonstrates the different dynamics of a BCM model as the timescale of homeostatic plasticity varies relative to that of synaptic plasticity. This is equivalent to varying the ratio $\tau_\theta / \tau_w$ in Equation 27. The neuron here receives a stimulus input $\mathbf{x}$ stochastically from a set $\{\mathbf{x}^1, \mathbf{x}^2\}$ with equal probabilities, that is, $P(\mathbf{x} = \mathbf{x}^1) = P(\mathbf{x} = \mathbf{x}^2) = 0.5$. Furthermore, the input is parameterized by an angle, so that the two stimuli are unit vectors separated by a fixed angle; in the presented simulations, this angle is chosen arbitrarily. The respective responses of the neuron to the two stimuli are $y^1 = \mathbf{w} \cdot \mathbf{x}^1$ and $y^2 = \mathbf{w} \cdot \mathbf{x}^2$. At steady state, the neuron is said to be selective if it yields a high response to one stimulus and a low (near-zero) response to the other. It can be seen that when the ratio $\tau_\theta / \tau_w$ is sufficiently small (Figure 4B), the response converges to a stable steady state and the neuron selects one stimulus; when the ratio is larger (Figure 4C), the responses oscillate but the neuron still selects the same stimulus; but when the ratio is too large (Figure 4D), the neuron is no longer selective. This is an illustration of a theorem presented in 18 that states that when homeostasis is slow (in comparison to synaptic modification), the learning rule loses stability. This shows how important the timescale of homeostatic plasticity is in theoretical models.
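The selectivity behavior can be reproduced qualitatively with a few lines of Python. The sketch below (all parameter values are illustrative choices, not the article's) presents two unit-vector stimuli with equal probability and integrates Equation 27 with a sliding threshold that is faster than the weights; the converged responses are high for one stimulus and near zero for the other.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = np.pi / 9                                  # assumed angle between the two stimuli
stimuli = [np.array([1.0, 0.0]),
           np.array([np.cos(alpha), np.sin(alpha)])]

tau_w, tau_th, dt = 1.0, 0.5, 0.01                 # homeostasis faster than plasticity
w = 0.1 * rng.random(2)
theta = 0.05
for _ in range(200_000):
    x = stimuli[rng.integers(2)]                   # stimulus chosen with equal probability
    y = w @ x
    w += (dt / tau_w) * x * y * (y - theta)        # BCM weight update
    theta += (dt / tau_th) * (y ** 2 - theta)      # sliding threshold: low-pass of y^2

print([float(w @ s) for s in stimuli])             # selective: one response high, one near zero
```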

Modeling Neuronal Responses Under Different Rearing Conditions

Figure 5.

Receptive field formation and neuronal activity response under different rearing conditions. 5


The BCM learning rule has been used to computationally capture the dynamics of the connection strengths between the cells of the retina and a V1 neuron. With natural images as the input presented to the retina, the BCM learning rule is used to update $\mathbf{w}^l$ and $\mathbf{w}^r$, the connection strengths between the V1 neuron and the left and right eyes respectively, as follows:

$$\tau_w \frac{d\mathbf{w}^l}{dt} = \mathbf{x}^l\, y\,(y - \theta_M), \qquad \tau_w \frac{d\mathbf{w}^r}{dt} = \mathbf{x}^r\, y\,(y - \theta_M),$$

where $\mathbf{x}^l$ and $\mathbf{x}^r$ are the stimulus inputs presented to the left and right eyes respectively. Letting

$$y = \mathbf{w}^l \cdot \mathbf{x}^l + \mathbf{w}^r \cdot \mathbf{x}^r,$$

then $\theta_M$, $\tau_w$, and $\tau_\theta$ all retain their previously explained meanings. In this case, $\mathbf{w}^l$, $\mathbf{w}^r$, $\mathbf{x}^l$, and $\mathbf{x}^r$ are matrices, though flattened into vectors during computation, whose sizes are the same as that of the pixelated image being presented to the retina.

Figure 5 shows an experimentally plausible map of converged connection strengths along with the corresponding activity behaviors of the neuron under different rearing conditions. The map of the connection strengths presented here is believed to correspond to the neuron’s receptive field, i.e., a specific region of sensory space in which an appropriate stimulus can induce neuronal activity. During normal rearing, both eyes get the same input, and after running the learning rule to convergence, the receptive fields and the neuronal response activities of both eyes are the same. During monocular deprivation, the neuron gets stimulus input from only one eye (in this case, the right eye), and after convergence the neuronal activity in response to this eye is much stronger than that in response to the other eye, a result that is also reflected in the receptive field. In reverse suture rearing, one eye is deprived of stimulus for a period of time, after which the other eye is deprived until the end of the training period. In binocular deprivation, the input to both eyes is attenuated, and this is usually reflected in the neuronal activity responses to the inputs from both eyes. During recovery from binocular deprivation, the activity responses to both eyes eventually converge to the same value once recovery is achieved.

Modeling Recovery from Amblyopia

Figure 6.

(A) Binocular V1 (V1b) neuron receiving input from the left and right eyes based on the setup of the experiment. (B) Stimulus going into the left and the right eyes simplified as two different scalar inputs $x_l$ and $x_r$ weighted by the synaptic strengths $w_l$ and $w_r$.

Figure 7.

$x_l$ and $x_r$, the inputs to the left eye and right eye respectively.


Amblyopia is a condition in which one of the eyes is not properly stimulated during childhood, causing the nerves associated with that eye to be weaker than those of the other eye. One treatment is to cover the stronger eye with a patch to allow the weaker one to grow stronger; however, this treatment does not work after what is known as the critical period, which in humans ends anytime from 2 to 6 years of age. It has been shown that in mice, injecting tetrodotoxin (TTX), a neurotransmission blocker, into the stronger eye can induce recovery in the weaker eye at any developmental stage of the mouse 8. The experiment that led to this result can be broken into four rearing stages. Stage 1 lasts from day 0 to day 26 of the mouse’s life, stage 2 lasts from day 27 to day 47, stage 3 lasts for about 28 more days, and stage 4 is the remainder of the mouse’s lifetime. During stage 1, both eyes are left open to allow both eyes to develop normally. Then, during stage 2, the left eye is closed, i.e., monocularly deprived (MD), to allow the right eye to become stronger and induce amblyopia in the animal. In stage 3, the right eye is injected with TTX (see Figure 6). This completely shuts down the reception strength of the right eye and allows the left eye to become stronger. After the TTX wears off, stage 4 begins, during which both eyes are open and have recovered.

To gather experimental data, the experimentalists attach electrodes to the V1 cortex of the animal’s brain and periodically take electrical readings known as visual evoked potentials as they present the animal with visual stimuli. In particular, they take readings from a V1b neuron, a neuron that receives input from both eyes. It is important to note that in mice, for each binocular neuron in V1, the eye on the opposite side of the brain (known as the contralateral eye) evokes a stronger response than the eye on the same side of the brain as the neuron (known as the ipsilateral eye). This phenomenon is known as the contralateral bias.

Experimentalists in this field, such as Mark Bear at MIT, believe that TTX has some effect on the magnitude of the activity rate of the neuron. To investigate this claim with simulations, one could simplify the stimuli going into the left and the right eyes to be two different scalar inputs $x_l$ and $x_r$ weighted by the synaptic strengths $w_l$ and $w_r$. The inputs $x_l$ and $x_r$ (shown in Figure 7) are formulated to reflect the four rearing stages described above. The respective responses of the neuron to the left and right eye are thus $r_l = w_l\, x_l$ and $r_r = w_r\, x_r$.
If one assumes the BCM learning rule, then the synaptic weights are updated as follows:

$$\tau_w \frac{dw_l}{dt} = x_l\, E\,(E - \theta_M), \qquad \tau_w \frac{dw_r}{dt} = x_r\, E\,(E - \theta_M),$$

where $\tau_w$ is the synaptic plasticity time scale, $E$ is the activity of the neuron, and $\theta_M$ is the sliding threshold for the neuron, both modeled as follows:

$$\tau_E \frac{dE}{dt} = -E + \left[\,\beta\, w_l\, x_l + w_r\, x_r - \theta_A\,\right]_+, \qquad \tau_\theta \frac{d\theta_M}{dt} = E^2 - \theta_M. \tag{32}$$

The constant $\beta$ reflects the contralateral bias. The function $[\,\cdot\,]_+$ is called the rectified linear unit (ReLU) function and is defined as follows:

$$[z]_+ = \begin{cases} z, & z > 0, \\ 0, & z \leq 0. \end{cases}$$
ReLU is an example of a neuronal activation function. As can be observed, the function is not activated until its argument exceeds $0$. In the first part of Equation 32, this means that at steady state, the activity of the neuron is not turned on until the weighted input $\beta\, w_l\, x_l + w_r\, x_r$ exceeds the activity threshold $\theta_A$. The activity threshold $\theta_A$ is postulated to be dynamic, and in conversations, experimentalists have suggested that its dynamics depend on some combination of the weighted inputs $w_l x_l$ and $w_r x_r$; a possible dynamic is therefore one in which $\theta_A$ slowly tracks these weighted inputs.
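The following fragment is a heavily hedged sketch of the activation just described; the exact way the contralateral bias $\beta$ enters the drive, and all numerical values, are assumptions made here for illustration.

```python
import numpy as np

def relu(z):
    """[z]_+ = max(z, 0): the neuron produces no activity below threshold."""
    return np.maximum(z, 0.0)

def steady_state_activity(w_l, x_l, w_r, x_r, theta_A, beta=1.5):
    # beta > 1 gives the (assumed contralateral) left eye a stronger influence
    return relu(beta * w_l * x_l + w_r * x_r - theta_A)

print(steady_state_activity(0.5, 1.0, 0.5, 1.0, theta_A=0.3))   # 1.5*0.5 + 0.5 - 0.3 = 0.95
```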

Figure 8.

(A) Behaviors of the neuronal responses during the four different rearing stages of the experiments. $r_l$ is the response to the input from the left eye and $r_r$ is the response to the input from the right eye. (B) Evolution of the neuronal activation function over the four stages.


Figure 8 shows the behaviors of the neuronal responses during the four different rearing stages of the experiment, along with the evolution of the neuronal activation function over the four stages. For simplicity, the four rearing stages were each allocated an equal duration of 2500 time steps. Figure 8A captures the observed response activity behaviors of the V1b neuron. Figure 8B suggests that monocular deprivation of the contralateral eye during stage 2 drops the threshold of the activation function, which allows the neuron to activate more easily relative to stage 1. However, injecting TTX in stage 3 keeps the threshold low enough to let the neuron continue to activate and to recover the synapses weakened by monocular deprivation. This conjecture is yet to be verified experimentally.

Discussion: Time Scales of Homeostatic Plasticity—Current Debates and Preliminary Explorations

The debate about the timescale of homeostatic plasticity is vibrant. A review of the literature reveals a varied and somewhat paradoxical set of findings among theoreticians and experimentalists. While (as seen above in the case of BCM) homeostatic plasticity in most theoretical models needs to be fast—on the order of seconds or minutes—and sometimes even instantaneous to achieve stability, experimentalists observe slower homeostatic plasticity on the order of hours or days 21. Moreover, the difference between experimental data and models cannot be explained by a simple rescaling of time in the models, because the problem persists for quantitative plasticity models that capture the time course of biological data. Experiments have also shown that homeostatic plasticity may be a continual process that is synapse-specific and depends on the history of receptor activations; thus it works on a long timescale, on the order of hours 12.

These disparities in timescales have caused some researchers to challenge the popular view that Hebbian plasticity is stabilized via homeostatic plasticity 17. Some studies suggest that the algorithmic rescaling of synaptic weights over seconds found in theoretical studies is not the same mechanism as rescaling of synaptic weights over hours found in experiments 21. For this reason, a new term rapid compensatory processes (RCPs)—found in 21 and 2, for instance—has been used to describe the fast stabilizing mechanisms found in theoretical formulations, reserving the term “homeostatic plasticity” for slow negative feedback processes on the timescale of hours or days. The remainder of this article will adhere to this distinction.

It has been suggested that both fast and slow homeostatic mechanisms exist and that learning and memory use an interplay of both forms of homeostasis: while fast homeostatic control mechanisms (RCPs) help maintain the stability of synaptic plasticity, slower ones are important for fine-tuning neural circuits. Thus homeostasis needs to have a faster rate of change than synaptic modification for spike-timing-dependent plasticity to achieve stability, a finding that can be extended to rate-based theoretical models like BCM 9. In sum, learning and memory rely on an intricate interplay of diverse plasticity mechanisms on different timescales, which jointly ensure stability and plasticity of neural circuits. There is therefore a need for theoretical models that aim to capture this paradoxical interplay.

A learning rule with multiple firing rate setpoints

The practical end goal of homeostatic plasticity is usually thought of as maintaining a single firing rate setpoint. Lately, however, some researchers have argued that the notion of a single setpoint is not compatible with the task of neurons to respond selectively to stimuli 20. This line of argument is supported by the ability of neurons to go through periods of near-constant strong activity followed by periods of persistent weak activity. For instance, cells in higher brain areas respond with high specificity to complex concepts and remain dormant when they are not coding these concepts 16. Also, certain neurons respond selectively with elevated firing rates over extended periods during working memory tasks 19. To this end, Zenke et al. 2021 have suggested including, in plasticity models, more than one RCP for more than one setpoint. They suggested that one RCP could activate above a certain activity threshold and ensure that neuronal activity does not exceed this threshold, and a second RCP could activate below another, lower activity threshold. The combined action of the two mechanisms would therefore cause neural activity to stay within an allowed range while still making room for substantial firing rate fluctuations inside that range. This multiple-setpoint approach is novel in that it differs from models that use soft or hard bounds on individual synaptic weights, or an explicit weight-dependence formulation, and thereby impose only an implicit constraint on the postsynaptic activity.
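As a conceptual sketch only (not the model of Zenke et al.), two rapid compensatory processes with different setpoints can be caricatured as a corrective drive that engages only outside an allowed firing-rate band.

```python
def rcp_drive(rate, r_low=2.0, r_high=20.0, gain=0.5):
    """Corrective drive: negative above the upper setpoint, positive below the lower one."""
    if rate > r_high:          # first RCP: engages only above the upper setpoint
        return -gain * (rate - r_high)
    if rate < r_low:           # second RCP: engages only below the lower setpoint
        return gain * (r_low - rate)
    return 0.0                 # inside the band: no homeostatic pressure

for r in (0.5, 10.0, 35.0):
    print(r, rcp_drive(r))
```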

Based on these findings, a possible entry point to studying this phenomenon is a modified and generalized version of a neuronal learning rule with two setpoints, introduced by Zenke et al. 21. The learning rule has the following key attribute: at high activity levels, a rapid form of heterosynaptic plasticity (i.e., synaptic pathways that are not specifically stimulated could be affected) limits run-away LTP and creates synaptic competition; at low activity levels, some form of presynaptic-dependent plasticity prevents run-away LTD. Letting $w_{ij}$ be the weight from a presynaptic neuron $j$ to postsynaptic neuron $i$, and $x_j$ and $y_i$ the pre- and postsynaptic activity rates respectively, the model is as follows:

$$\frac{dw_{ij}}{dt} = \underbrace{\delta\, x_j}_{\text{N1}} + \underbrace{\eta\, x_j\, \phi\!\left(y_i, \theta_M\right)}_{\text{H}} - \underbrace{\beta\left(w_{ij} - \tilde{w}_{ij}\right) y_i^{\,p}}_{\text{N2}},$$

where

$$y_i = g\!\left(\sum_{j} w_{ij}\, x_j\right) = g\!\left(\mathbf{w}_i \cdot \mathbf{x}\right)$$

and $g$ is a neural activation function that in its simplest biologically plausible form is the identity function. The stimulus input pattern vector and the synaptic weight vector have the respective forms

$$\mathbf{x} = \left[x_1, x_2, \ldots, x_N\right]^T$$

and

$$\mathbf{w}_i = \left[w_{i1}, w_{i2}, \ldots, w_{iN}\right]^T.$$

The labels N1, H, and N2 allude to the fact that the middle term is Hebbian, and the other two terms are non-Hebbian RCPs. The parameters $\delta$ and $\beta$ control the strengths of the two RCPs; $\eta$ is a learning rate parameter for the Hebbian part of the model; and $p$ takes on a positive integer value. The variable $\tilde{w}_{ij}$ serves as a reference weight that can be related to synaptic consolidation dynamics. The variable —with a