
# Rate-Based Synaptic Plasticity Rules

Communicated by *Notices* Associate Editor Reza Malek-Madani

## Introduction

The human nervous system is made up of networks containing billions of nerve cells, also known as *neurons*. These networks function as a result of each neuron transmitting electrochemical signals to nearby neurons when it releases an *action potential* or *spike*. A spike occurs when the difference in electrical potential between the inside and outside of the neuron, known as its *membrane potential*, exceeds a certain threshold (see Figure 1). The number of spikes a neuron emits per unit time is called its *spike rate*, *activity rate*, or *firing rate*. The firing rate and firing pattern of a neuron encode much of the information it transmits.

At the moment a neuron spikes, an electrical pulse travels from the *cell body* down the *axon* fiber to be received by nearby neurons whose *dendrites* are connected to its axon terminals. The junction where an axon terminal meets a dendrite is known as a *synapse*. The neuron transmitting the signal is called the *presynaptic neuron* and the neuron receiving the signal is called the *postsynaptic neuron* (see Figure 2). The term *synaptic plasticity* refers to the neurobiological process by which specific patterns of activity at the synapses result in changes in synaptic strengths and enable the brain to adapt to new information over time.

A rate-based plasticity model defines the synaptic strength change as a function of the presynaptic and postsynaptic firing rates. This article explores rate-based synaptic plasticity models and discusses related emergent research questions.

## Hebb’s Rule

One of the most studied theories of synaptic plasticity is Hebb’s rule, named for Donald Hebb, who proposed that when neuron A repeatedly participates in firing neuron B, the synaptic strength of A onto B increases.

Now consider a network of neurons in which a presynaptic neuron communicates with a postsynaptic neuron. Let $x_i$ be the input signal from presynaptic neuron $i$ to the postsynaptic neuron, $y$ the output from the postsynaptic neuron, and $w_i$ the weight of the synapse between them. Hebb’s Rule can be expressed as follows:

$$\Delta w_i = x_i\, y. \tag{1}$$

Introducing a constant of proportionality $\eta$ gives rise to the alternative form

$$\Delta w_i = \eta\, x_i\, y. \tag{2}$$

Looking at Hebb’s Rule as the mechanism through which the network learns, $\eta$ can be regarded as the *learning rate*, a parameter that controls how fast the weights get modified, or how quickly that aspect of the network learns.
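As a minimal numerical sketch (the function name and all values below are illustrative choices, not taken from the text), one Hebbian update step can be written as:

```python
import numpy as np

# One step of Hebb's rule with a learning rate: the change in each weight
# is eta * (presynaptic input) * (postsynaptic output).
def hebb_update(w, x, y, eta=0.1):
    return w + eta * x * y

w = np.array([0.5, 0.5])        # synaptic weights
x = np.array([1.0, 0.0])        # only the first presynaptic neuron is active
y = float(w @ x)                # linear postsynaptic response, y = 0.5
w_new = hebb_update(w, x, y)    # only the active synapse is strengthened
```

Note that only the synapse whose presynaptic neuron was active changes, which is exactly the "neurons that fire together, wire together" character of the rule.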

### The linear neuron model

Assume that, at any time $t$, a postsynaptic neuron receives $n$ presynaptic inputs in the form of a vector

$$\mathbf{x} = (x_1, x_2, \ldots, x_n)^T,$$

obtained stochastically from a finite set of input values. Let

$$\mathbf{w} = (w_1, w_2, \ldots, w_n)^T$$

be a vector of synaptic weights between each presynaptic neuron and the postsynaptic neuron. Then the activity response of the postsynaptic neuron is expressed as

$$y = \sum_{i=1}^{n} w_i\, x_i, \tag{3}$$

or in dot product form,

$$y = \mathbf{w} \cdot \mathbf{x} = \mathbf{w}^T \mathbf{x}. \tag{4}$$

Geometrically, the dot product between vectors $\mathbf{w}$ and $\mathbf{x}$ satisfies $\mathbf{w} \cdot \mathbf{x} = \lVert\mathbf{w}\rVert\,\lVert\mathbf{x}\rVert \cos\theta$, where $\theta$ is the angle between the two vectors. The response $y$ is therefore largest when the input points in the direction of the weight vector, so the dot product acts as a *similarity measure* between the presynaptic inputs and the synaptic weights.
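The similarity interpretation is easy to check numerically (the vectors below are illustrative): the linear response is maximal for inputs aligned with the weight vector and vanishes for orthogonal ones.

```python
import numpy as np

# The linear response y = w . x acts as a similarity measure:
# inputs aligned with w produce a large response, orthogonal inputs none.
w = np.array([1.0, 0.0])
x_aligned = np.array([2.0, 0.0])
x_orthogonal = np.array([0.0, 2.0])

y_aligned = float(w @ x_aligned)        # ||w|| * ||x|| * cos(0)
y_orthogonal = float(w @ x_orthogonal)  # ||w|| * ||x|| * cos(90 degrees)
```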

It is assumed that a neuron “learns” as its synaptic weights are modified over time. In some contexts, after a certain period of learning, the synaptic weights reach an *equilibrium state* where they stop changing, and the neuron is assumed to have learned. For example, it is known that as synaptic strengths in primary visual cortex (V1) get modified, a V1 neuron learns to respond differently to presynaptic input stimuli of different orientations (for instance, a light bar positioned horizontally, vertically, diagonally, etc.); and as the synaptic strengths stop changing, the neuron becomes orientation selective, that is, it yields a large postsynaptic response to presynaptic input stimuli of a particular orientation and a very low response to every other presynaptic input. Such converged synaptic weights are said to act as an *associative memory*, as they remember the preferred orientation.

### Stability analysis of Hebb’s Rule

In vector form, Hebb’s rule is written as

$$\frac{d\mathbf{w}}{dt} = \eta\, y\, \mathbf{x}. \tag{5}$$

Plugging in Equation 4 for $y$ gives

$$\frac{d\mathbf{w}}{dt} = \eta\, (\mathbf{w}^T \mathbf{x})\, \mathbf{x}, \tag{6}$$

which, since $\mathbf{w}^T \mathbf{x}$ is a scalar, can be rewritten as

$$\frac{d\mathbf{w}}{dt} = \eta\, \mathbf{x}\,\mathbf{x}^T\, \mathbf{w}. \tag{7}$$

The average weight change over the ensemble or distribution of presynaptic input patterns presented during the learning process is

$$\left\langle \frac{d\mathbf{w}}{dt} \right\rangle = \eta\, \langle \mathbf{x}\,\mathbf{x}^T \rangle\, \mathbf{w}, \tag{8}$$

where $\langle \cdot \rangle$ denotes the ensemble average and the weight vector, which changes slowly relative to the inputs, is treated as constant over the averaging. This can be written compactly as

$$\left\langle \frac{d\mathbf{w}}{dt} \right\rangle = \eta\, C\, \mathbf{w}, \tag{9}$$

where

$$C = \langle \mathbf{x}\,\mathbf{x}^T \rangle \tag{10}$$

is the *correlation matrix* of the presynaptic inputs.

When the averaged dynamics $\langle d\mathbf{w}/dt \rangle = \eta\, C\, \mathbf{w}$ are analyzed, note that $C$ is symmetric and positive semidefinite, so it has a complete orthonormal set of eigenvectors $\mathbf{e}_1, \ldots, \mathbf{e}_n$ with nonnegative eigenvalues $\lambda_1 \geq \cdots \geq \lambda_n \geq 0$. The averaged equation is then a linear system of differential equations whose fundamental solutions have the form

$$\mathbf{w}(t) = \sum_{i=1}^{n} c_i\, e^{\eta \lambda_i t}\, \mathbf{e}_i, \tag{11}$$

where the coefficients $c_i$ are determined by the initial condition $\mathbf{w}(0)$. Plugging in large values of $t$, the term corresponding to the largest eigenvalue $\lambda_1 > 0$ dominates,

$$\mathbf{w}(t) \approx c_1\, e^{\eta \lambda_1 t}\, \mathbf{e}_1, \tag{12}$$

so the magnitude of the weight vector grows exponentially without bound: Hebb’s rule is unstable.
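The instability can be checked numerically. The sketch below (the input statistics and all constants are my own choices) integrates the averaged dynamics with Euler steps and shows the weight norm exploding:

```python
import numpy as np

# Euler integration of the averaged Hebbian dynamics <dw/dt> = eta * C * w.
# Because C is positive semidefinite, the weight norm grows without bound
# along the leading eigenvector of C.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2)) * np.array([2.0, 0.5])  # anisotropic inputs
C = X.T @ X / len(X)                                   # correlation matrix <x x^T>

w = np.array([1.0, 1.0])
eta = 0.05
norms = []
for _ in range(200):
    w = w + eta * C @ w          # one Euler step of the averaged dynamics
    norms.append(float(np.linalg.norm(w)))
# norms is strictly increasing: Hebbian learning alone is unstable.
```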

Thus unchecked Hebbian learning leads to runaway synaptic growth. The nervous system counteracts this through *synaptic homeostasis*, which helps ensure that the nervous system is in a dynamic regime where it functions optimally. Synaptic homeostasis also changes and adapts to the dynamics of the nervous system, a process known as *homeostatic plasticity*.

One approach to implementing homeostatic plasticity in neural network models is to globally adjust the synapses to each postsynaptic neuron based on its activity level. This adjustment can be subtractive (i.e., synapses to a particular neuron are changed by the same amount) or multiplicative (i.e., synapses to a particular neuron are changed by an amount proportional to their strength). *Synaptic scaling* (i.e., adjusting all excitatory synapses of a neuron up or down to stabilize firing) is a well studied mode of achieving homeostatic plasticity. A related, widely studied plasticity mechanism is *spike-timing-dependent plasticity* (STDP), which is based on the known fact that presynaptic activity that precedes postsynaptic firing or depolarization can induce *long-term potentiation* (LTP) of synaptic activity, whereas reversing this temporal order causes *long-term depression* (LTD).
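As a side illustration of the temporal-order dependence, a common pair-based model of STDP uses an exponential window; the constants below are illustrative, not taken from the text:

```python
import numpy as np

# Pair-based STDP window: dt = t_post - t_pre (in milliseconds).
# Pre-before-post (dt > 0) gives LTP; post-before-pre (dt < 0) gives LTD.
def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # potentiation
    return -a_minus * np.exp(dt / tau)       # depression

dw_ltp = stdp_dw(10.0)    # presynaptic spike 10 ms before the postsynaptic spike
dw_ltd = stdp_dw(-10.0)   # presynaptic spike 10 ms after the postsynaptic spike
```

The magnitude of the change also decays with the spike-time difference, so tightly paired spikes modify the synapse most.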

## Oja Learning Rule and Neuronal Principal Component Analysis

A multiplicative approach to remedying the instability of Hebb’s rule is to normalize the weights so that they are constrained to lie between $0$ and $1$. At discrete time step $t$, Hebb’s rule gives the update

$$\Delta w_i(t) = \eta\, y(t)\, x_i(t), \tag{13}$$

implying

$$w_i(t+1) = w_i(t) + \eta\, y(t)\, x_i(t). \tag{14}$$

Normalizing Equation 14 yields

$$w_i(t+1) = \frac{w_i(t) + \eta\, y(t)\, x_i(t)}{\Big(\sum_{j=1}^{n} \big[w_j(t) + \eta\, y(t)\, x_j(t)\big]^2\Big)^{1/2}}. \tag{15}$$

15In addition to creating a numerical bound for the synaptic weights, the above normalization procedure also has a physiological implication. If there is an increase in weights of some of the synapses, then there will also be a corresponding decrease in weights of other synapses that are connected to the same neuron. This means that there is a spatial competition between the inputs.

A great deal of the mathematical results in the field of synaptic plasticity have been geared towards stabilizing models that incorporate Hebb’s rule. One of these results is *Oja’s rule*, named after Finnish computer scientist Erkki Oja.

Continuing with the normalized Hebb’s rule (Equation 15) and letting $Z(t)$ denote the normalizing denominator,

$$Z(t) = \Big(\sum_{j=1}^{n} \big[w_j(t) + \eta\, y(t)\, x_j(t)\big]^2\Big)^{1/2}, \tag{16}$$

expanding the square gives

$$Z(t)^2 = \sum_{j} w_j(t)^2 + 2\eta\, y(t) \sum_{j} w_j(t)\, x_j(t) + O(\eta^2),$$

and since the weights are normalized, with their squares assumed to sum up to $1$, and $\sum_j w_j(t)\, x_j(t) = y(t)$, the right hand side of Equation 16 simplifies to

$$Z(t) = \big(1 + 2\eta\, y(t)^2 + O(\eta^2)\big)^{1/2}.$$

Thus Equation 15 simplifies to

$$w_i(t+1) = \big(w_i(t) + \eta\, y(t)\, x_i(t)\big)\,\big(1 + 2\eta\, y(t)^2 + O(\eta^2)\big)^{-1/2}. \tag{17}$$

A first order Taylor expansion of the second factor in $\eta$ gives

$$\big(1 + 2\eta\, y(t)^2 + O(\eta^2)\big)^{-1/2} = 1 - \eta\, y(t)^2 + O(\eta^2),$$

or

$$w_i(t+1) = w_i(t) + \eta\, y(t)\, x_i(t) - \eta\, y(t)^2\, w_i(t) + O(\eta^2).$$

Neglecting higher order terms and rearranging yields

$$w_i(t+1) - w_i(t) = \eta\, y(t)\,\big(x_i(t) - y(t)\, w_i(t)\big).$$

Dropping the time argument gives

$$\Delta w_i = \eta\, y\,(x_i - y\, w_i). \tag{18}$$

This equation is known as Oja’s rule. Sometimes it is written in continuous form as

$$\frac{dw_i}{dt} = \eta\, y\,(x_i - y\, w_i), \tag{19}$$

and in vector form,

$$\frac{d\mathbf{w}}{dt} = \eta\, y\,(\mathbf{x} - y\, \mathbf{w}). \tag{20}$$

In Oja’s rule, it can be seen that the learning process is updated by adding the concept of *weight decay* or “forgetting.” In simple words, this means that the neuron periodically removes a little bit of what it has learned with previous inputs.
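A quick simulation (the input distribution and constants are my own choices) shows the effect of the decay term: unlike plain Hebbian learning, Oja's rule keeps the weight norm bounded near 1.

```python
import numpy as np

# Oja's rule, Delta w = eta * y * (x - y * w): the -eta*y^2*w "forgetting"
# term keeps the weight vector from blowing up.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 2)) * np.array([2.0, 0.5])  # anisotropic inputs

w = np.array([0.3, 0.3])
eta = 0.01
for x in X:
    y = w @ x
    w = w + eta * y * (x - y * w)

norm = float(np.linalg.norm(w))   # settles near 1
```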

Now plugging Equation 4 into Equation 20, and taking the expectation, one gets

$$\left\langle \frac{d\mathbf{w}}{dt} \right\rangle = \eta\,\big(C\,\mathbf{w} - (\mathbf{w}^T C\, \mathbf{w})\, \mathbf{w}\big), \tag{21}$$

where $C$ is the correlation matrix defined in Equation 10. This equation represents the average behavior of the weight vector. At steady state,

$$C\,\mathbf{w} = (\mathbf{w}^T C\, \mathbf{w})\, \mathbf{w}, \tag{22}$$

so the steady-state weight vector is an eigenvector of $C$ with eigenvalue $\lambda = \mathbf{w}^T C\, \mathbf{w}$. Note that multiplying Equation 22 on the left by $\mathbf{w}^T$ gives $\mathbf{w}^T C\, \mathbf{w} = (\mathbf{w}^T C\, \mathbf{w})\,(\mathbf{w}^T \mathbf{w})$, so

$$\lVert \mathbf{w} \rVert = 1 \tag{23}$$

at steady state. That is, Oja’s rule drives the weight vector to a unit-norm eigenvector of $C$; a further stability analysis shows that the only stable equilibrium is the eigenvector $\mathbf{e}_1$ with the largest eigenvalue, so the converged response is

$$y = \mathbf{e}_1 \cdot \mathbf{x}. \tag{24}$$

Equation 24 means that the converged output of the neuron is the projection of the input onto the leading eigenvector of $C$, known as the first *principal component*. Principal components are defined as the inner products between the eigenvectors and the input vectors. Hence, Oja’s Rule is called a *principal component analyzer*.
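One can check the principal-component property directly against standard linear algebra; the data-generating matrix and all constants below are illustrative:

```python
import numpy as np

# Train Oja's rule on correlated data, then compare the learned weight
# vector with the leading eigenvector of the correlation matrix C.
rng = np.random.default_rng(2)
A = np.array([[3.0, 1.0], [1.0, 1.0]])   # mixing matrix -> correlated inputs
X = rng.normal(size=(20000, 2)) @ A

C = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(C)
e1 = eigvecs[:, np.argmax(eigvals)]      # principal eigenvector of C

w = np.array([0.1, -0.2])
eta = 0.002
for x in X:
    y = w @ x
    w = w + eta * y * (x - y * w)

# |cos(angle)| between w and e1; close to 1 when aligned (up to sign).
alignment = float(abs(w @ e1) / np.linalg.norm(w))
```

The sign ambiguity is expected: both $\mathbf{e}_1$ and $-\mathbf{e}_1$ are unit eigenvectors, and which one the rule converges to depends on the initial weights.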

## BCM Learning Rule and Neuronal Selectivity

One of the most used theoretical models of synaptic plasticity is the BCM learning rule,

$$\frac{dw_i}{dt} = \eta\, y\,(y - \theta_M)\, x_i, \qquad \theta_M = \langle y^2 \rangle, \tag{25}$$

for $i = 1, \ldots, n$, where $\theta_M$ is a sliding *modification threshold* that moves with the recent history of the postsynaptic activity. In simulations, the rule is often implemented with a dynamic threshold,

$$\frac{dw_i}{dt} = \eta\, y\,(y - \theta_M)\, x_i, \qquad \tau_{\theta}\,\frac{d\theta_M}{dt} = y^2 - \theta_M, \tag{26}$$

where the second part of the equation is essentially a low pass filtered version of Equation 25.

With the sliding threshold $\theta_M$, synaptic inputs that drive the postsynaptic activity above $\theta_M$ are potentiated, while inputs that leave it below $\theta_M$ are depressed. Because $\theta_M$ grows superlinearly with the average activity, the threshold slides up faster than the response during runaway potentiation, which stabilizes the rule and drives the neuron to become selective: it ends up responding strongly to some input patterns and weakly to others.
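The selectivity property can be illustrated with a small simulation of the dynamic-threshold form; the two input patterns, the initial conditions, and all constants below are my own illustrative choices:

```python
import numpy as np

# BCM with a sliding threshold theta (low-pass filter of y^2), trained on
# two alternating input patterns. The neuron becomes selective: it ends up
# responding strongly to one pattern and weakly to the other.
rng = np.random.default_rng(3)
patterns = np.array([[1.0, 0.1],
                     [0.1, 1.0]])

w = np.array([0.6, 0.4])     # slightly asymmetric start breaks the symmetry
theta = 0.5
eta, tau = 0.02, 100.0
for _ in range(30000):
    x = patterns[rng.integers(2)]        # present one pattern at random
    y = float(w @ x)
    w = w + eta * y * (y - theta) * x    # BCM weight update
    w = np.maximum(w, 0.0)               # keep synaptic weights nonnegative
    theta += (y * y - theta) / tau       # sliding modification threshold

responses = patterns @ w
selectivity = float(abs(responses[0] - responses[1]) / responses.max())
```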

## Modeling Neuronal Responses Under Different Rearing Conditions

The BCM learning rule has been used to computationally capture the dynamics of the connection strengths between the cells of the retina and a V1 neuron. With natural images as the input presented to the retina, the BCM learning rule is used to update the synaptic weight vectors $\mathbf{w}^l$ and $\mathbf{w}^r$ associated with the left and right eyes. If $\mathbf{x}^l$ and $\mathbf{x}^r$ denote the input vectors from the left and right eyes, then the activity response of the binocular neuron is

$$y = \mathbf{w}^l \cdot \mathbf{x}^l + \mathbf{w}^r \cdot \mathbf{x}^r.$$

Figure 5 shows an experimentally plausible map of converged connection strengths along with the corresponding activity behaviors of the neuron under different rearing conditions. The map of the connection strengths presented here is believed to correspond to the neuron’s *receptive field*, i.e., a specific region of sensory space in which an appropriate stimulus can induce neuronal activity. During normal rearing, both eyes get the same input, and after running the learning rule to convergence, the receptive fields and the neuronal response activity of both eyes are the same. During monocular deprivation, the neuron gets stimulus input from only one eye (in this case, the right eye), and after convergence the neuronal activity in response to this eye is much stronger than that of the other eye, a result that is also reflected in the receptive field. In reverse suture rearing, one eye is deprived of stimulus for a period of time, after which the other eye is deprived instead until the end of the training period. In binocular deprivation, the input to both eyes is attenuated, and this is reflected in the low neuronal activity response to input from either eye. During recovery from binocular deprivation, the activity responses to both eyes eventually converge to the same value.

## Modeling Recovery from Amblyopia

Amblyopia is a condition in which one of the eyes is not properly stimulated during childhood, causing the nerves associated with that eye to be weaker than those of the other eye. One treatment is to cover the stronger eye with a patch to allow the weaker one to grow stronger; however, this treatment does not work after what is known as the *critical period*, which in humans ends anytime from 2 to 6 years of age. It has been shown that in mice, injecting tetrodotoxin (TTX), a neurotransmission blocker, into the stronger eye can induce recovery in the weaker eye at any developmental stage of the mouse.

To gather experimental data, experimentalists attach electrodes to the V1 cortex of the animal’s brain and periodically take electrical readings known as *visual evoked potentials* as they present the animal with visual stimuli. In particular, they take readings from a V1b neuron, a neuron that receives input from both eyes. It is important to note that in mice, for each binocular neuron in V1, the eye on the opposite side of the brain (known as the *contralateral* eye) evokes stronger responses than the eye on the same side of the brain as the neuron (known as the *ipsilateral* eye). This phenomenon is known as the *contralateral bias*.

Experimentalists in this field, such as Mark Bear at MIT, believe that TTX has some effect on the magnitude of the activity rate of the neuron. To investigate this claim with simulations, one could simplify the stimuli into the left and the right eyes to be two different scalar inputs $x^l$ and $x^r$, with corresponding scalar synaptic weights $w^l$ and $w^r$.

If one assumed the BCM learning rule, then the synaptic weights are updated as follows:

$$\frac{dw^l}{dt} = \eta\, y\,(y - \theta_M)\, x^l, \qquad \frac{dw^r}{dt} = \eta\, y\,(y - \theta_M)\, x^r, \tag{31}$$

where $y$ is the activity response of the binocular neuron and $\theta_M$ is the sliding modification threshold.

The constant $\eta$ is, as before, the learning rate. The activity response is obtained by passing the total input through the rectified linear unit (ReLU),

$$y = \mathrm{ReLU}\big(w^l x^l + w^r x^r\big), \qquad \mathrm{ReLU}(s) = \max(0, s).$$

ReLU is an example of a neuronal *activation function*. As can be observed, the function is not activated until its input becomes positive, so the modeled firing rate is never negative. With this model, one can simulate the different rearing and treatment conditions by altering the statistics of the scalar inputs $x^l$ and $x^r$ and tracking the resulting weight dynamics.

## Discussion: Time Scales of Homeostatic Plasticity—Current Debates and Preliminary Explorations

The debate about the timescale of homeostatic plasticity is vibrant. A review of the literature reveals a varied and somewhat paradoxical set of findings among theoreticians and experimentalists. While (as seen above in the case of BCM) homeostatic plasticity in most theoretical models needs to be fast—in seconds or minutes—and sometimes even instantaneous to achieve stability, experimentalists observe slower homeostatic plasticity, on the order of hours or days.

These disparities in timescales have caused some researchers to challenge the popular view that Hebbian plasticity is stabilized via homeostatic plasticity, and to argue instead that stability is provided by *rapid compensatory processes* (RCPs) acting on much faster timescales than the homeostatic mechanisms found in experiments.

It has been suggested that both fast and slow homeostatic mechanisms exist and that learning and memory use an interplay of both forms of homeostasis: while fast homeostatic control mechanisms (RCPs) help maintain the stability of synaptic plasticity, slower ones are important for fine-tuning neural circuits. Thus homeostasis needs to have a faster rate of change than the Hebbian component for spike-timing-dependent plasticity to achieve stability, a finding that can be extended to rate-based theoretical models like BCM.
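The need for fast homeostasis can be illustrated with a toy scalar BCM simulation (all constants are my own choices): when the sliding threshold adapts much more slowly than the weights, the weight makes enormous excursions before the threshold can rein it in, whereas a fast threshold keeps it bounded.

```python
import numpy as np

# Scalar BCM neuron driven by a noisy input. tau controls how quickly the
# sliding threshold theta tracks <y^2>. The run stops once a runaway is
# detected (peak weight exceeds the cap).
def run_bcm(tau, steps=5000, eta=0.05, cap=1e6):
    rng = np.random.default_rng(5)
    w, theta, peak = 1.0, 0.5, 0.0
    for _ in range(steps):
        x = abs(rng.normal(1.0, 0.2))
        y = w * x
        w = w + eta * y * (y - theta) * x   # BCM weight update
        theta += (y * y - theta) / tau      # sliding threshold
        peak = max(peak, abs(w))
        if peak >= cap:                     # runaway: stop the simulation
            break
    return peak

peak_fast = run_bcm(tau=10.0)      # threshold tracks activity quickly
peak_slow = run_bcm(tau=100000.0)  # threshold lags far behind the weights
```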

### A learning rule with multiple firing rate setpoints

The practical end goal of homeostatic plasticity is usually thought of as maintaining a single firing rate *setpoint*. Lately, however, some researchers have argued that the notion of a single setpoint is not compatible with the need for neurons to respond selectively to different stimulations.

Based on these findings, a possible entry point to studying this phenomenon is a modified and generalized version of a neuronal learning rule with two setpoints, introduced by Zenke et al. The rule expresses the synaptic weight change as the sum of three terms, labeled N1, H, and N2. The labels N1, H, and N2 allude to the fact that the middle term is Hebbian, while the other two terms are non-Hebbian RCPs, each of which pushes the postsynaptic firing rate toward one of two setpoints. The parameters of the rule control the relative strengths and timescales of the three terms.
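As a purely illustrative sketch of the multiple-setpoint idea (this is not Zenke et al.'s actual rule; the cubic drift and all values are my own), a drift term can vanish at two stable firing-rate setpoints separated by an unstable point, so the rate settles at whichever setpoint's basin of attraction it starts in:

```python
# Two stable setpoints r1 and r2 separated by an unstable point rm.
# The drift g(y) = -(y - r1)(y - rm)(y - r2) is positive below r1,
# negative between r1 and rm, positive between rm and r2, negative above r2.
r1, rm, r2 = 1.0, 3.0, 5.0

def settle(y0, eta=0.01, steps=5000):
    y = y0
    for _ in range(steps):
        y += eta * -(y - r1) * (y - rm) * (y - r2)   # drift vanishes at r1, rm, r2
    return y

y_low = settle(2.0)    # starts below the unstable point, settles at r1
y_high = settle(4.0)   # starts above it, settles at r2
```

A rule of this shape lets different neurons, or the same neuron in different activity regimes, stabilize at different firing rates, which is the behavior a single-setpoint homeostatic rule cannot capture.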