When the external input is no longer present and only background input remains, all activities relax back to the background firing rate (0.1–1 Hz), although the recurrent weights are still high (Figure 1 B,C). This is an important difference from attractor memory models [535], which continue to be active after stimulus withdrawal for a (theoretically) infinitely long time. Such persistent activity is important for explaining the dynamics of working memory (seconds), but it contradicts the idea of long-term memories, which are not permanently active. Here, the memory content is transferred from the input to the synaptic weights [14], and the activities can relax back to the background state. We remark that the emergence of the phenomena shown here relies neither on saturation effects nor on a fine-tuned topology (see Text S1). A detailed quantification is provided below. First, we show the impact of a memory recall on the spatial structure of the LTS synapses.

PLOS Computational Biology | www.ploscompbiol.org | Synaptic Scaling Enables Memory Consolidation

Learning and recall

During recall, the spatial distribution of weights and activities reveals an interesting competitive effect (Figure 2), which is important for the formation of different memory cell assemblies and which also leads to the paradox of memory loss during recall ([36], see below). Initially, during learning, only a local patch of units is stimulated and the synapses of their target neurons all grow (purple square in Figure 2 A; L-phase in Figure 1 C); here we used a strong and local stimulus to drive all synapses into the LTS regime. Consolidation stimulates the complete network, and all synapses within the assembly recover or exceed their initial strengths (Figure 2 B; C1,C2-phases in Figure 1 C). The process of remembering (recalling) a memory is often understood as a partial stimulation of an assembly and potentially of some other neurons [568].
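The relaxation to background described above can be illustrated with a minimal sketch. This is NOT the paper's model: it is a generic leaky recurrent rate network in which all names, parameter values, and the tanh nonlinearity are illustrative assumptions. It shows activity decaying back toward background once the external stimulus is withdrawn, even though the recurrent weights stay strong:

```python
import numpy as np

# Minimal sketch (NOT the paper's model): a leaky recurrent rate network with
# weak background drive. All names, parameters, and the tanh nonlinearity are
# illustrative assumptions.
N = 50
W = np.full((N, N), 0.9 / N)   # strong recurrent weights, as after learning
np.fill_diagonal(W, 0.0)
bg = 0.001                     # weak background input (arbitrary units)

def simulate(steps, ext_input, r0=None, dt=0.1, tau=1.0):
    """Leaky rate dynamics: tau * dr/dt = -r + tanh(W r + bg + I_ext)."""
    r = np.zeros(N) if r0 is None else r0.copy()
    for _ in range(steps):
        r += dt / tau * (-r + np.tanh(W @ r + bg + ext_input))
    return r

stim = np.zeros(N)
stim[:10] = 1.0                          # stimulate a local patch of units
r_stim = simulate(500, stim)             # high activity while stimulated
r_off = simulate(500, np.zeros(N), r0=r_stim)  # stimulus withdrawn
# r_off relaxes toward background even though W is unchanged (still strong)
```

Because the effective recurrent gain here is below one, the only stable state without external drive is the low background rate; the memory survives only in W, not in the activity.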
By way of its learnt connections, the assembly produces a filling-in and generates a spatially fairly complete excitation pattern including most of its members (so-called pattern completion). According to the literature [1,14,560], this represents the behaviorally relevant recall activity. Therefore, only a randomly selected subset of assembly neurons receives a stimulation (here about 30, with some outliers). The resulting network activity clearly shows a filled-in spatial assembly structure (Figure 2 C; note that, due to the partial stimulus, all units of the assembly are more strongly active than the control ones, so we can assume that the assembly is completed). Sometimes, however, strongly active neurons are neighbors of weakly active ones. For such constellations, the different activities induce dissimilar weight dynamics. Consider a pair of mutually connected neurons (see hatching in panel C). The weakly active neuron (still more active than controls) induces a small synaptic-plasticity term, and its synaptic scaling is weak, too. By contrast, the synaptic-scaling term of the strongly active neuron is large and thus dominates the dynamics. As a consequence, the corresponding weight shrinks substantially (Figure 2 C, inset, yellow bars; see also Text S1 for equations). We remark that such network structures with generic lateral inhibition allow different assemblies to be separated from each other if the learning stimuli do not overlap too much. On the other hand, as soon as overlap exists, activation imbalances as described above may lead to interference.
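The weight competition just described can be caricatured with a generic combination of a Hebbian growth term and an activity-dependent scaling term. This is a hypothetical sketch, not the paper's actual equations (those are given in Text S1); the names `mu`, `gamma`, and `r_target` and all rate values are illustrative assumptions:

```python
# Hypothetical caricature of the competition described above: a Hebbian growth
# term versus a scaling term that pulls weights down when the postsynaptic rate
# exceeds a target. NOT the paper's equations (see Text S1); mu, gamma, and
# r_target are illustrative assumptions.
def weight_change(w, r_pre, r_post, mu=0.01, gamma=0.1, r_target=0.1):
    hebb = mu * r_pre * r_post                 # small if either rate is low
    scaling = gamma * (r_target - r_post) * w  # strongly negative for high r_post
    return hebb + scaling

w = 1.0
dw_strong_post = weight_change(w, r_pre=0.3, r_post=1.0)  # scaling dominates
dw_weak_post = weight_change(w, r_pre=1.0, r_post=0.3)    # both terms small
# The weight onto the strongly active neuron shrinks substantially more.
```

In this toy parameterization the weight onto the strongly active neuron shrinks several times faster than the weight onto the weakly active one, mirroring the asymmetry indicated by the yellow bars in the Figure 2 C inset.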
