Synaptic strength can be seen as the probability of propagating an impulse, and synaptic plasticity suggests that a function could exist from propagation activity to synaptic strength. If this function satisfies constraints such as continuity and monotonicity, a neural network under external stimulus will always go to a fixed point, and there could be a one-to-one mapping between the external stimulus and the synaptic strength at the fixed point. In other words, the neural network “memorizes” the external stimulus in its synapses. A biological classifier is proposed to utilize this mapping.

## I. INTRODUCTION

Known experimental results show that synaptic connections strengthen or weaken over time in response to increases or decreases in impulse propagation.^{1} It is also postulated that “neurons that fire together wire together”.^{2,3} This biochemical mechanism, called synaptic plasticity,^{4,5} is believed to play a critical role in memory formation,^{6–9} although it is still debated whether the synapse is the sole locus of learning and memory.^{10,11} Meanwhile, a synapse propagates impulses stochastically,^{12–14} which means that synaptic strength can be measured as the probability of successfully propagating an impulse. With this probabilistic treatment we find that, in the plasticity process, a synapse’s strength is inevitably attracted towards the same fixed point regardless of its initial strength, and that for a neural network there could exist a one-to-one mapping between the external stimulus from the environment and the synapses’ strength at the fixed point. This one-to-one mapping serves the very purpose of ideal memory: to develop a different stable neural state for each different stimulus from the environment, and to develop the same stable neural state for the same stimulus no matter what state the network is initialized with. It follows that synapses alone could sufficiently give rise to persistent memory: they could be the sole locus of learning and memory.

The remainder of the paper goes as follows. Section II identifies the constraints under which synaptic plasticity of one synaptic connection leads to its fixed state and a one-to-one stimulus-state mapping (memory). Section III extends the concepts of fixed state and one-to-one mapping to a neural network consisting of many synaptic connections. Section IV proposes a simple neural classifier that utilizes this memory to classify handwritten digit images.

## II. SYNAPTIC CONNECTION AND ITS FIXED POINT

Let us start with one synaptic connection as shown in FIG 1. In nature, synapses are known to be plastic, low-precision and unreliable.^{15} This stochasticity allows us to take synaptic strength *s* to be the probability (reliability) of propagating a nerve impulse through, instead of a weight (usually an unbounded real number) as in an Artificial Neural Network^{16} (ANN). Easily we have *y* = *xs* where *x*, *s*, *y*∈[0, 1]. Now we treat synaptic plasticity, i.e. the relation between synaptic strength *s* and simultaneous firing probability *y*, as a function

$s^{*} = \lambda(y). \qquad (1)$

Here *s*^{*}∈[0, 1] represents the target value that a connection’s strength will be strengthened or weakened to if the connection is under constant simultaneous firing probability *y* (while *s* in *y* = *xs* represents current strength). By *y* = *xs* and Eq. (1), we have *s*^{*} = *λ*(*xs*) stating that, under constant stimulus probability *x*, the connection initialized with strength *s* will evolve towards *s*^{*}.

Function *λ* of Eq. (1) truly links “firing together” and “wiring together”. For comparison, the Hebbian learning rule^{2} treats synaptic plasticity, in the context of ANN, as a function Δ*w* = *ηxy* to learn connections’ weights from the training patterns; that function translates “firing together” into “a neuron’s input and output both being positive or negative”. Different from ANN, our model makes no assumption of the neuron being a computational unit, and aims to show that with *λ* the stimulus could sufficiently and precisely control the enduring fixed state of a synaptic connection. The following reasoning hinges on this “target strength function” *λ*, and we will put constraints on this uncharted function to see how they affect the dynamics of connection strength and, most importantly, how the stimulus is one-to-one mapped to the strength at the fixated state.

Here is our first constraint: ***λ* is continuous on *y***. This constraint is neurobiologically justifiable regarding synaptic plasticity, since a sufficiently small change in impulse probability would most probably result in an arbitrarily small change in synaptic strength. In that case, given any *x*, *λ*(*xs*) is a continuous function on *s* from unit interval [0, 1] to unit interval [0, 1], and according to Brouwer’s fixed-point theorem^{17} there must exist a fixed point *s*^{+}∈[0, 1] such that *s*^{+} = *λ*(*xs*^{+}): connection strength at *s*^{+} will evolve to *s*^{+} and hence fixate, no longer strengthened or weakened. Here the crucial Brouwer’s theorem is a fixed-point theorem in topology, which states that, for any continuous function *f*(*t*) mapping a compact convex set (e.g. interval [0, 1] in our case; could be multi-dimensional) to itself, there is always a point *t*^{+} such that *f*(*t*^{+}) = *t*^{+}. Moreover, as illustrated in FIG 2, given any initial value the strength is always attracted towards the fixed point. Therefore, a gentle constraint of continuity on the *λ* function could preferably drive the synaptic connection to its fixed state.
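The attraction towards the fixed point can be sketched deterministically: iterate *s* ← *λ*(*xs*) from different initial strengths and watch them converge. The sketch below uses the example function *λ*(*y*) = 0.9*y* + 0.05 from FIG 3; treating the update as an exact assignment rather than a stochastic step is a simplifying assumption, and the function names are ours.

```python
# Deterministic sketch of strength evolving toward its fixed point:
# repeatedly set s to the target strength lambda(x * s).

def find_fixed_point(x, s0, lam, iterations=1000):
    """Iterate s <- lam(x * s) and return the final strength."""
    s = s0
    for _ in range(iterations):
        s = lam(x * s)
    return s

lam = lambda y: 0.9 * y + 0.05   # example target strength function from FIG 3

# Two very different initial strengths converge to the same fixed point.
# Analytically, s+ solves s = 0.9*(0.8*s) + 0.05, i.e. s+ = 0.05/0.28.
s_low = find_fixed_point(x=0.8, s0=0.0, lam=lam)
s_high = find_fixed_point(x=0.8, s0=1.0, lam=lam)
```

Since the iteration map has slope 0.9*x* < 1, it is a contraction and both trajectories collapse onto the same *s*^{+}.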

To verify the connection strength’s tendency towards fixed points, we design Algorithm 1 to simulate our connection model. In this simulation,^{18} recent simultaneous firings are recorded and their rate is taken to approximate the simultaneous firing probability *y*; the connection updates its strength by a small step Δ_{s} = 10^{−4} each iteration in the direction of the target strength. As shown in FIG 3, we run the simulation for four typical target strength functions, and the resulting strength trajectories show that the constraint of continuity ensures the tendency towards fixed points given any initial strength.

**Algorithm 1.** Simulation of one synaptic connection.

Input: stimulus probability x, initial synaptic strength s_{0}, target strength function λ, strength step Δ_{s}, and iterations I.
Output: trajectory of strength s.

1: initialize fire-together recorder (10^{4}-entry array): recorder←0.
2: initialize fire-together recorder pointer: p←0.
3: initialize current strength: s←s_{0}.
4: **for** i = 0 to I **do**
5:   preset current pointed entry of recorder: recorder[p]←0.
6:   pick random r_{1} and r_{2} from uniform distribution Unif(0, 1).
7:   **if** x > r_{1} and s > r_{2} **then**
8:     neurons 1 and 2 fire together: recorder[p]←1.
9:   **end if**
10:  **if** recorder has been traversed once (i ≥ 10^{4}) **then**
11:    set y to the proportion of 1-entries in recorder.
12:    set target strength: s^{*}←λ(y).
13:    **if** s^{*} > s **then**
14:      step-increase current strength: s←min(s + Δ_{s}, 1).
15:    **end if**
16:    **if** s^{*} < s **then**
17:      step-decrease current strength: s←max(0, s − Δ_{s}).
18:    **end if**
19:  **end if**
20:  forward recorder pointer: p←(p + 1) mod 10^{4}.
21: **end for**
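Algorithm 1 can be rendered directly in Python. The sketch below keeps a running count of the recorder’s 1-entries instead of re-summing the array each iteration (an efficiency detail of ours, not of the paper); the function and variable names are our own.

```python
import random

def simulate_connection(x, s0, lam, delta_s=1e-4, iterations=10**5, window=10**4):
    """Algorithm 1: stochastic simulation of one synaptic connection.

    x: stimulus probability; s0: initial strength; lam: target strength
    function; delta_s: strength step; window: fire-together recorder size.
    Returns the trajectory of strength s.
    """
    recorder = [0] * window          # recent simultaneous-firing record
    count = 0                        # running count of 1-entries in recorder
    s = s0
    trajectory = [s]
    for i in range(iterations):
        p = i % window               # recorder pointer
        count -= recorder[p]         # retire the oldest entry
        r1, r2 = random.random(), random.random()
        fired = 1 if (x > r1 and s > r2) else 0   # neurons fire together?
        recorder[p] = fired
        count += fired
        if i >= window:              # recorder traversed at least once
            y = count / window       # estimate of simultaneous firing prob.
            target = lam(y)          # target strength s* = lam(y)
            if target > s:
                s = min(s + delta_s, 1.0)
            elif target < s:
                s = max(0.0, s - delta_s)
        trajectory.append(s)
    return trajectory

random.seed(0)
traj = simulate_connection(x=0.8, s0=1.0, lam=lambda y: 0.9 * y + 0.05)
# For lam(y) = 0.9*y + 0.05, the strength should settle near s+ = 0.05/0.28.
```

Despite the measurement lag introduced by the recorder window, the strength drifts to and then hovers around the analytical fixed point.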


Our goal is to establish a one-to-one mapping between the stimulus and the connection strength at fixed point. Specifically, we should be able to **(1)** given any stimulus *x*∈[0, 1], identify the fixed point *s*^{+} of connection strength without ambiguity; **(2)** given any strength *s*^{+}∈[0, 1] at fixed point, identify stimulus *x* without ambiguity. Among the four target strength functions in FIG 3, *λ*(*y*) = 0.9*y* + 0.05 and *λ*(*y*) = −*y* + 1 can lead to a one-to-one stimulus-strength mapping. Given any stimulus *x*, a synaptic connection equipped with one of these functions will have a single fixed point of strength regardless of its initial strength, such that the relation between stimulus and fixed-point strength can be treated as a function *s*^{+} = *θ*(*x*). In FIG 4, simulation shows that *θ* could be strictly monotonic and hence a one-to-one mapping from *x* to *s*^{+}, such that *θ*(*x*) has a one-to-one inverse function *θ*^{−1}(*s*^{+}). By contrast, FIG 5 shows that *λ*(*y*) = 0.5*sin*(4*πy*) + 0.5 cannot ensure the uniqueness of the fixed point and thus there is no such one-to-one *θ*(*x*); FIG 6 shows that there is no *θ* either for the discontinuous *λ* function in FIG 3(d).
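For these two well-behaved functions, *θ* can even be written in closed form by solving *s* = *λ*(*xs*) for *s*, which makes the monotonicity easy to check numerically. The following sketch (names are ours) does exactly that:

```python
# Closed-form fixed points s+ = theta(x), obtained by solving s = lam(x*s)
# for the two one-to-one lambda functions from FIG 3.

def theta_linear(x):
    # lam(y) = 0.9*y + 0.05  =>  s = 0.9*x*s + 0.05  =>  s = 0.05/(1 - 0.9*x)
    return 0.05 / (1 - 0.9 * x)

def theta_negative(x):
    # lam(y) = -y + 1  =>  s = -x*s + 1  =>  s = 1/(1 + x)
    return 1.0 / (1 + x)

xs = [i / 100 for i in range(101)]          # grid over [0, 1]
# Both theta functions are strictly monotonic on [0, 1], hence one-to-one:
increasing = all(theta_linear(a) < theta_linear(b) for a, b in zip(xs, xs[1:]))
decreasing = all(theta_negative(a) > theta_negative(b) for a, b in zip(xs, xs[1:]))
```

Strict monotonicity on the grid is consistent with the simulated *θ* curves of FIG 4.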

In fact, we can pinpoint more constraints on *λ* as the conditions for function *θ* to be a one-to-one mapping. In addition to the constraint of continuity, let ***λ*(*y*) be strictly monotonic on [0, 1]** and hence one-to-one; let ***λ*(0)≠0** to rule out fixed point *s*^{+} = 0. In that case, *λ* has inverse function *λ*^{−1}(*s*), which is strictly monotonic between *λ*(0) and *λ*(1), and given any fixed-point strength *s*^{+} in between we can identify stimulus *x* = *λ*^{−1}(*s*^{+})/*s*^{+}. That is, function *θ*^{−1}(*s*^{+}) = *λ*^{−1}(*s*^{+})/*s*^{+} exists. Let ***λ*^{−1}(*s*)/*s* be strictly monotonic between *λ*(0) and *λ*(1)**. Then given any stimulus *x*∈[0, 1] there is one single fixed point *s*^{+} such that *x* = *λ*^{−1}(*s*^{+})/*s*^{+}. That is, function *s*^{+} = *θ*(*x*) exists. Both *λ*(*y*) = 0.9*y* + 0.05 and *λ*(*y*) = −*y* + 1 obey all those constraints, and their one-to-one *θ* functions can be verified by the simulation results in FIG 4, whereas *λ*(*y*) = 0.5*sin*(4*πy*) + 0.5 is not even strictly monotonic. However, neither *λ*(*y*) = 0.9*y* + 0.05 nor *λ*(*y*) = −*y* + 1 is ideal for our purpose. Guided by these constraints, we choose the *λ* function carefully such that its derived *θ*(*x*) function is monotonically increasing and its range spans nearly the entire [0, 1] interval, as shown in FIG 7. Of all the *λ* constraints, continuity and strict monotonicity are reasonable requirements of consistency on the neurobiological process of synaptic plasticity, whereas *λ*(0)≠0 and strict monotonicity of *λ*^{−1}(*s*)/*s* are rather specific and peculiar claims. Admittedly, those *λ* constraints need to be supported by experimental evidence.
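The recovery of *x* from *s*^{+} via *x* = *λ*^{−1}(*s*^{+})/*s*^{+} can be checked as a round trip for *λ*(*y*) = 0.9*y* + 0.05, whose inverse and *θ* are both available in closed form (a sketch with our own naming):

```python
# Round trip x -> s+ -> x for lam(y) = 0.9*y + 0.05.

def lam(y):
    return 0.9 * y + 0.05

def lam_inv(s):
    # inverse of lam: y = (s - 0.05)/0.9
    return (s - 0.05) / 0.9

def theta(x):
    # fixed point of s = lam(x*s), solved in closed form
    return 0.05 / (1 - 0.9 * x)

def theta_inv(s_plus):
    # x = lam_inv(s+)/s+, as derived in the text
    return lam_inv(s_plus) / s_plus

x = 0.37                      # an arbitrary stimulus probability
s_plus = theta(x)             # strength the connection fixates at
x_recovered = theta_inv(s_plus)
```

The round trip returns the original stimulus up to floating-point error, illustrating that the fixed-point strength indeed “memorizes” *x* without ambiguity.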

Now we have the one-to-one (continuous and strictly monotonic) functions *λ*, *λ*^{−1}, *θ* and *θ*^{−1}, and in those functions the initial strength *s*_{0} is irrelevant. Given *s*^{+} we can identify *x* and *y* without ambiguity, and vice versa. Our interpretation of these mappings is that the synaptic connection at fixed point precisely “memorizes” the information of what (stimulus) it senses and how it responds (with impulse propagation).

## III. NEURAL NETWORK AND ITS FIXED POINT

Now let us turn to the neural network shown in FIG 8. A neural network, as it turns out, can be treated as an “aggregate connection”. We shall see that the definitions and reasoning for the neural network align well with those for the synaptic connection in the last section.

As with synaptic connection, we can describe a neural network by defining **(1)** the external stimulus as an *n*-dimensional vector *X*∈[0,1]^{n} in which each *x*_{i} is the probability of neuron *i* receiving stimulus; **(2)** the strength of all connections as a *c*-dimensional vector *S*∈[0,1]^{c} in which each *s*_{ij} is the strength of connection from neuron *i* to neuron *j* (denoted as *i*⇝*j*); **(3)** the simultaneous firing probabilities over all connections as a *c*-dimensional vector *Y* ∈[0,1]^{c} in which each *y*_{ij} is the simultaneous firing probability over *i*⇝*j*. In fact, one single neural connection is a special case of neural network with *c* = 1 and *n* = 2.

Stimulus and strength uniquely determine impulse propagation within the neural network, so there exists a mapping Ψ:(*X*, *S*) → *Y*. Presumably, the mapping Ψ is continuous on *S*. By Eq. (1), there exists a mapping Λ:*Y* → *S*^{*} such that $s^{*}_{ij} = \lambda_{ij}(y_{ij})$ for each *y*_{ij} in *Y* and its counterpart $s^{*}_{ij}$ in *S*^{*}. Here *S*^{*}∈[0,1]^{c} is the *c*-dimensional vector of connections’ target strengths, and mapping Λ can be visualized as a vector of target strength functions such that entry Λ_{ij} is *λ*_{ij}. Then with mappings Ψ and Λ we have a composite mapping Λ○Ψ:(*X*, *S*) → *S*^{*}. If each *λ*_{ij} function is continuous on its *y*_{ij}, the mapping Λ○Ψ must be continuous on *S*, and according to Brouwer’s fixed-point theorem, given *X* there must exist a fixed point *S*^{+}∈[0,1]^{c} such that Λ○Ψ(*X*, *S*^{+}) = *S*^{+}. Under constant stimulus *X*, the neural network will go to fixed point *S*^{+} as each connection *i*⇝*j* goes to its fixed point $s^{+}_{ij}$. Our simulation verifies that tendency as shown in FIG 9. In this simulation, impulses traverse the neural network stochastically such that each neuron fires at most once per iteration; synaptic connections update their strength as in Algorithm 1.
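To make the composite mapping Λ○Ψ concrete, here is a deterministic sketch of the fixed-point iteration *S* ← Λ○Ψ(*X*, *S*) for a three-neuron chain 0⇝1⇝2. The particular form of Ψ — a neuron fires if it is stimulated or an impulse arrives, treated as independent events — is our modeling assumption, not something fixed by the paper’s definitions, and all names are ours.

```python
# Fixed-point iteration S <- Lambda(Psi(X, S)) for the chain 0 ~> 1 ~> 2.
# Assumption: neuron j fires if stimulated (x_j) or an impulse arrives,
# treated as independent events; y_ij = p_i * s_ij.

def psi(X, S):
    """Map stimulus X = (x0, x1, x2) and strengths S = (s01, s12) to
    simultaneous firing probabilities Y = (y01, y12)."""
    x0, x1, _x2 = X                       # x2 affects no connection here
    s01, s12 = S
    p0 = x0                               # neuron 0 fires iff stimulated
    y01 = p0 * s01                        # impulse propagated over 0 ~> 1
    p1 = x1 + y01 - x1 * y01              # neuron 1: stimulus or impulse
    y12 = p1 * s12                        # impulse propagated over 1 ~> 2
    return (y01, y12)

def lam(y):
    return 0.9 * y + 0.05                 # same target strength function
                                          # applied to every connection

def network_fixed_point(X, S0, iterations=2000):
    S = S0
    for _ in range(iterations):
        S = tuple(lam(y) for y in psi(X, S))
    return S

# Different initial strengths reach the same network fixed point S+:
S_a = network_fixed_point((0.9, 0.5, 0.0), (0.0, 0.0))
S_b = network_fixed_point((0.9, 0.5, 0.0), (1.0, 1.0))
```

For this *λ*, each coordinate update is a contraction, so the two trajectories meet at the unique *S*^{+}; analytically the first coordinate solves *s* = 0.9(0.9*s*) + 0.05.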

Generally the number of stable fixed points for a neural network is *∏*^{c}*f*_{ij} where each *f*_{ij} is the number of stable fixed points of *i*⇝*j*. As in FIG 9(b), *∏*^{c}*f*_{ij} can be enormous when each *f*_{ij} ≥ 2. As with the synaptic connection, our goal is to establish a one-to-one mapping between stimulus *X* and fixed point *S*^{+} for the neural network, and meanwhile keep the initial strength *S*_{0} out of the picture. *λ*’s continuity alone cannot ensure the uniqueness of the fixed point, so *S*_{0} can determine which fixed point the network goes to. Now with all the *λ* constraints, we have: **(1)** Λ is a one-to-one mapping and thus has inverse mapping Λ^{−1}:*S*^{*} → *Y*; **(2)** there exists a mapping Θ:*X* → *S*^{+}, because under stimulus *X* the neural network will go to the same unique fixed point *S*^{+} no matter what initial strength *S*_{0} it begins with; **(3)** if Θ is a one-to-one mapping, Θ has inverse mapping Θ^{−1}:*S*^{+} → *X*. With mappings Λ, Λ^{−1}, Θ and Θ^{−1} being one-to-one, given *S*^{+} we can identify *X* and *Y* without ambiguity, and vice versa. Therefore, the same interpretation as for the synaptic connection applies here: the neural network at fixed point precisely “memorizes” the information about the stimulus on many neurons and the impulse propagation across many connections.

Nevertheless, even all of the *λ* constraints together are not sufficient to secure a one-to-one Θ:*X* → *S*^{+} for a neural network, as opposed to the single connection. Here is a case. For Θ to be one-to-one, all neurons must have outbound connections. Otherwise, e.g., for a neural network with three neurons (say 0, 1 and 2) and two connections (say 0⇝1 and 1⇝2), stimuli *X*_{1} = (1, 1, 0) and *X*_{2} = (1, 1, 1) will result in the same fixed point, because the stimulus on neuron 2, no matter what it is, affects no connection. Or equivalently, for Θ to be one-to-one, the definition of *X* should consider only the neurons with outbound connections such that *X*’s dimension *dim*(*X*) ≤ *n*. From the perspective of information theory,^{19} a many-to-one Θ introduces equivocation to the neural network at fixed point, as if information loss occurred due to a noisy channel. If *dim*(*X*) > *dim*(*S*) = *c*, mapping Θ conducts “dimension reduction” on stimulus *X*, and information loss is bound to occur.

Here is a trivial case regarding stimulus dependence. Consider a neural network with connections 0⇝2, 1⇝2 and 2⇝3, and stimulus *X* = (*x*_{0}, *x*_{1}). When the neural network is at fixed point, $x_{2} = x_{0}s^{+}_{02} + x_{1}s^{+}_{12} - s^{+}_{02}s^{+}_{12}x_{0}\Pr(1|0)$ where Pr(1|0) is the probability of neuron 1 being stimulated conditional on neuron 0 being stimulated. Pr(1|0)≠*x*_{1} if the stimuli on neurons 0 and 1 are not independent. Pr(1|0) affects $s^{+}_{23}$ and hence *S*^{+}; in other words, the neural network at fixed point gains the hidden information of Pr(1|0). However, if Pr(1|0) varies, given mere *X* there will be uncertainty about *S*^{+} such that mapping Θ doesn’t exist unless stimulus *X* is “augmented” to *X* = (*x*_{0}, *x*_{1}, Pr(1|0)).
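The expression for *x*_{2} can be sanity-checked by Monte Carlo in the independent case, where Pr(1|0) = *x*_{1} and the formula reduces to inclusion–exclusion over the two inbound connections. A sketch (parameter values and names are illustrative):

```python
import random

# Monte Carlo check of x2 = x0*s02 + x1*s12 - s02*s12*x0*Pr(1|0)
# in the independent case, where Pr(1|0) = x1.

def simulate_x2(x0, x1, s02, s12, trials=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        n0 = rng.random() < x0            # neuron 0 stimulated
        n1 = rng.random() < x1            # neuron 1 stimulated (independent)
        via0 = n0 and rng.random() < s02  # impulse over 0 ~> 2
        via1 = n1 and rng.random() < s12  # impulse over 1 ~> 2
        hits += via0 or via1              # neuron 2 receives an impulse
    return hits / trials

x0, x1, s02, s12 = 0.8, 0.6, 0.7, 0.5
predicted = x0 * s02 + x1 * s12 - s02 * s12 * x0 * x1   # Pr(1|0) = x1
measured = simulate_x2(x0, x1, s02, s12)
```

With 2×10^{5} trials the Monte Carlo estimate agrees with the formula to about three decimal places.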

## IV. AN APPLICATION FOR CLASSIFICATION

Ideally, a neural network with memory of stimulus *X* — formally, mapping Θ casts memory of stimulus *X* as fixed point *S*^{+} — should respond to stimulus *X* more “intensely” than a neural network with a different memory responds to *X*. Memory would manifest itself as impulse propagation throughout an ensemble of neurons.^{2,20–22} Thus, it is natural to differentiate responses by counting the neurons fired or the synaptic connections propagated by impulses. Given the reasoning that the synapse could be the sole locus of memory,^{10,11} we adopt the count of synaptic connections propagated as a macroscopic measure of how intensely memory responds to stimulus, or how intensely stimulus “recalls” memory. Accordingly, we propose a classifier consisting of *g* neural networks, which classifies stimulus into one of *g* classes by the decision criterion of which neural network gets the most synaptic connections propagated. Reminiscent of supervised learning,^{23} each neural network of our classifier is trained to its fixed point by its particular training stimulus, and then a testing stimulus is tested on all *g* neural networks independently to see which gets the most connections propagated. For simplicity we assume testing itself doesn’t jeopardize the fixed points of the neural networks. And most importantly, we assume that for each neural network, given any stimulus, there is one single fixed point such that mapping Θ:*X* → *S*^{+} exists.

Consider a neural network in the classifier to be trained by $\check{X}$ to fixed point *S*^{+} and then tested by *X*. In other words, the neural network memorizing $\check{X}$ as *S*^{+} is tested by *X*. Because impulses propagate across the neural network stochastically, the count of synaptic connections propagated in one test is a random variable; let it be $Z_{\check{X}X}$. Then for the neural network in FIG 8, $Z_{\check{X}X} = \sum^{c} z_{ij}$ where each r.v. $z_{ij} \sim \mathrm{Bernoulli}(x_{i}s^{+}_{ij})$, i.e., synaptic connection *i*⇝*j* is propagated with probability $x_{i}s^{+}_{ij}$ in the test such that $\Pr(z_{ij}=1) = x_{i}s^{+}_{ij}$. Easily, *z*_{ij}’s expected value is $E[z_{ij}] = x_{i}s^{+}_{ij}$, and its variance is $\mathrm{Var}(z_{ij}) = x_{i}s^{+}_{ij}(1 - x_{i}s^{+}_{ij})$. By the central limit theorem, *Z*’s distribution tends towards Gaussian-like (bell curve) as *c* increases, even if the *z*_{ij} are not independent and identically distributed. We have

$E[Z_{\check{X}X}] = \sum^{c} x_{i}s^{+}_{ij}. \qquad (2)$

And when *c* is large,

$Z_{\check{X}X} \sim \mathcal{N}\big(E[Z_{\check{X}X}],\ \mathrm{Var}(Z_{\check{X}X})\big) \ \text{approximately}. \qquad (3)$

For any *i*⇝*j*, in the training stage, because $S^{+} = \Theta(\check{X})$ we have $s^{+}_{ij} = \theta_{ij}(\check{X})$, and in the testing stage *x*_{i} is uniquely determined by *S*^{+} and *X*, such that *x*_{i} is a function of $\check{X}$ and *X*.
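Under the additional simplifying assumption that the *z*_{ij} are independent, the mean and variance of *Z* can be computed directly and checked by sampling. A sketch with illustrative numbers (not the paper’s data):

```python
import random

# Mean and variance of Z (count of connections propagated), assuming the
# z_ij are independent Bernoulli(x_i * s_ij+) variables.

def z_stats(X, S_plus):
    probs = [x * s for x, s in zip(X, S_plus)]
    mean = sum(probs)                           # E[Z] = sum x_i * s_ij+
    var = sum(p * (1 - p) for p in probs)       # Var under independence
    return mean, var

def sample_Z(X, S_plus, rng):
    """One stochastic test: count the connections that propagate."""
    return sum(rng.random() < x * s for x, s in zip(X, S_plus))

rng = random.Random(0)
X = [0.2, 0.9, 0.5, 0.7]          # illustrative stimulus probabilities
S_plus = [0.3, 0.8, 0.6, 0.4]     # illustrative fixed-point strengths
mean, var = z_stats(X, S_plus)
samples = [sample_Z(X, S_plus, rng) for _ in range(100_000)]
sample_mean = sum(samples) / len(samples)
```

The sample mean of the simulated tests matches the analytical expectation, and for large *c* the histogram of such samples takes the bell-curve shape the text describes.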

We experiment with this classifier to classify handwritten digit images.^{24} Ten identical neural networks (hence *g* = 10) of FIG 10, each designated for a digit from 0 to 9, are trained to their fixed points by their training images in FIG 11 as stimulus, and then testing images, also as stimulus, are classified into the digit whose designated neural network gets the biggest *Z* value. We run many tests to evaluate classification accuracy, and collect *Z* values to approximate r.v. *Z*’s distribution. With all synaptic connections equipped with *λ*_{L} in FIG 7, the classifier has accuracy ∼44%, and ∼51% with *λ*_{T}. Note that, equipped with *λ*_{L} or *λ*_{T}, the neural network of FIG 10 will have one-to-one Θ_{L} or Θ_{T} according to the last section. FIG 12 and FIG 13 show that, in positive testing (e.g. a digit-6 image tested in the neural network trained by digit-6 images), *Z*’s expected value (sample mean) can be considerably bigger than that in negative testing (e.g. a digit-6 image tested in the neural network trained by digit-1 images), so as to discriminate digit-6 images from the others. Given the same testing image, the classification target can differ test by test since the ten *Z* outcomes are randomized. To improve classification accuracy, we should distance the distribution of the positive-testing *Z* as far as possible from those of the negative-testing *Z*. We present another two special neural networks in FIG 14 to demonstrate how our classifier utilizes memory to classify images and how to improve its accuracy in a neurobiological way.

When the classifier adopts ten neural networks of FIG 14(a) and equips all connections with *λ*_{L} in FIG 7, classification accuracy is ∼31% and *Z*’s distribution for testing digit-6 images is shown in FIG 15(a). We already know that *λ*_{L} makes *θ*_{L}(*x*) ≈ *x*. Then for one test we have

$E[Z_{\check{X}X}] = \sum^{64} x_{i}s^{+}_{i} \approx \sum^{64} x_{i}\check{x}_{i} = \check{X}^{\intercal}X. \qquad (4)$

Here $\check{X}^{\intercal}X$ is the dot product of training vector $\check{X} \in [0,1]^{64}$ and testing vector *X*∈[0,1]^{64}. Generally, the dot product of two vectors, a scalar value, is essentially a measure of similarity between the vectors. The bigger $E[Z_{\check{X}X}]$ is, the more intensely the neural network with memory of training $\check{X}$ responds to testing *X*, and the more similar $\check{X}$ and *X* are to each other. Therefore, Eq. (4) simply links otherwise unrelated neural response intensity and stimulus similarity. By comparing the ten $E[Z_{\check{X}X}]$ values, we can tell which $\check{X}$ is the most similar to *X* and hence which digit is the classification target. Only, the $Z_{\check{X}X}$ value from a test actually deviates randomly around the true $E[Z_{\check{X}X}]$, which makes it a usable yet unreliable classification criterion.

When the classifier equips all connections with the threshold-like *λ*_{T} in FIG 7, classification accuracy rises to ∼44%. Comparing FIG 15(b) with FIG 15(a), the distance between *Z*_{66}’s distribution and the other nine *Z*_{k6,k≠6} distributions is bigger with the threshold-like *λ*_{T} than with the linear-like *λ*_{L}. This accuracy improvement can be explained conveniently with a true threshold function (or step function)

$\theta_{\mathrm{step}}(x) = \begin{cases} 0, & x \in [0, x_{\mathrm{step}}) \\ 1, & x \in [x_{\mathrm{step}}, 1]. \end{cases}$

Of the sum terms in $\sum^{64} x_{i}\check{x}_{i}$ of Eq. (4), *θ*_{step} basically diminishes small $\check{x}_{i} \in [0, x_{\mathrm{step}})$ to 0 and enhances big $\check{x}_{i} \in [x_{\mathrm{step}}, 1]$ to 1, such that most probably $E[Z_{66}]$ would increase by having $\check{x}_{i} = 1$ in the sum terms with big *x*_{i}, while the other nine $E[Z_{k6,k\neq 6}]$ would decrease by having $\check{x}_{i} = 0$ in the sum terms with big *x*_{i}, so as to preferably increase $E[Z_{66}] - E[Z_{k6,k\neq 6}]$. And likewise Var(*Z*) would most probably decrease. As a result, *θ*_{step} increases the distance between the distributions of *Z*_{66} and *Z*_{k6,k≠6} and thus better separates them.
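The widening of the gap can be illustrated with a toy computation; the four-pixel “templates” below are made-up stand-ins, not the paper’s digit images, and the approximation *s*^{+}_{i} = *θ*(*x̌*_{i}) of Eq. (4) is taken at face value:

```python
# Toy illustration: a step-shaped theta widens the gap between the
# expected Z of the matching class and that of a competing class.

def expected_Z(test, train, theta):
    # E[Z] ~ sum_i x_i * theta(train_i), cf. Eq. (4) with s_i+ = theta(x_i)
    return sum(x * theta(t) for x, t in zip(test, train))

def theta_linear(x):
    return x                               # linear-like: theta(x) ~ x

def theta_step(x, x_step=0.5):
    return 0.0 if x < x_step else 1.0      # true threshold function

train_pos = [0.9, 0.8, 0.1, 0.2]   # template of the "correct" class
train_neg = [0.1, 0.2, 0.9, 0.8]   # template of a competing class
test = [0.8, 0.9, 0.2, 0.1]        # a test stimulus resembling train_pos

gap_linear = (expected_Z(test, train_pos, theta_linear)
              - expected_Z(test, train_neg, theta_linear))
gap_step = (expected_Z(test, train_pos, theta_step)
            - expected_Z(test, train_neg, theta_step))
```

With the step function the big training pixels count fully and the small ones not at all, so the positive/negative gap grows, mirroring the separation seen in FIG 15(b).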

FIG 14(b) provides another type of neural network to improve classification accuracy without adopting a threshold-like *λ* function for all synaptic connections. Let the linear-like *λ*_{L} be equipped back, and take $\omega_{i} = 100\check{x}_{i}^{3}$ simply for example. With this setting our classifier has accuracy ∼47%. Here we have $E[Z_{\check{X}X}] \approx \sum^{64} x_{i}(100\check{x}_{i}^{4})$ where $100\check{x}_{i}^{4}$, like *θ*_{step}, actually transforms $\check{x}_{i} \in [0, 0.1^{1/4})$ (here $0.1^{1/4} \approx 0.56$) to within [0, 10) and transforms $\check{x}_{i} \in [0.1^{1/4}, 1]$ to across [10, 100] — again, the strong training pixel-stimuli are greatly weighted while the weak ones are relatively suppressed. As shown in FIG 15(c), the distance between the distributions of *Z*_{66} and *Z*_{k6,k≠6} is increased compared to FIG 15(a). Our neurobiological interpretation regarding $\omega_{i} = 100\check{x}_{i}^{3}$ is that the training stimulus affects not only synaptic strength, but also the growth of the neuron cluster through the replication of neuron cells and the formation of new synaptic connections. Again, this claim needs to be supported by evidence.

TABLE I summarizes the performance of our classifier with different types of neural networks and target strength functions. The four typical *λ* functions in FIG 3 are also evaluated to demonstrate how these somewhat “pathological” target strength functions affect classification.

TABLE I. Classification accuracy of the classifier with different types of neural networks and target strength functions.

| λ or θ function | FIG 10 | FIG 14(a) | FIG 14(b) |
|---|---|---|---|
| $\lambda_{L}(y) = 0.99y + 0.01$ | 44% | 31% | 47% |
| $\lambda_{T}(y) = \frac{2}{1+e^{-4.4(y+0.01)}} - 1$ | 51% | 44% | 51% |
| θ_{step} | - | 48%^{a} | 60%^{b} |
| λ(y) = 0.9y + 0.05 | 14% | 16% | 19% |
| λ(y) = 0.5sin(4πy) + 0.5 | 5%^{c} | 6% | 2% |
| λ(y) = −y + 1 | 4% | 5% | 1%^{d} |
| Discontinuous λ | 23% | 20% | 28% |


^{a} *x*_{step} is set to 0.6.

^{b} *x*_{step} is set to 0.2.

^{c} Accuracy under 10% is actually worse than wild guessing.

^{d} If the classification criterion is changed to “which neural network gets the fewest synaptic connections propagated”, the accuracy will be ∼40%.

By Eq. (4), the classification of handwritten digit images can be simplified to a task of restricted linear classification:^{23} given ten classes each with its discriminant function $\delta_{i}(X) = \check{X}_{i}^{\intercal}X$ where $\check{X}_{i}, X \in [0,1]^{64}$, image *X* is classified to the class with the largest *δ*_{i} value. Our neural classifier simply takes over the computation of the vectors’ dot product $\check{X}_{i}^{\intercal}X$ and adds randomness to the ten results. To parameterize the ten *δ*_{i} with their $\check{X}_{i}$, the “supervisors” could train the neural networks in the classifier with the images they deem best — “average images” in our case, or digit learning cards in a teacher’s case. Our neural classifier is rather unreliable and primitive compared to ANN, which is also capable of linear classification. On one hand, given the same image an ANN always outputs the same prediction. On the other hand, an ANN is not only a classifier but, more importantly, a “learner”, which learns from all kinds of handwritten digits to find the optimal $\check{X}_{i}$ for the ten *δ*_{i}; an ANN with optimal $\check{X}_{i}$ is more tolerant of poor handwriting, and thus has less misclassification and better prediction accuracy. Only, ANN’s learning of the optimal $\check{X}_{i}$, an optimization process of many iterations, requires massive computational power to carry out, which is unlikely to be provided by a real-life nervous system — there is no evidence that an individual neuron can even conduct basic arithmetic operations. Despite its weaknesses, our neural classifier has merit in its biological nature: it reduces the computation of the vectors’ dot product to simple counting of synaptic connections propagated; its training and testing could be purely neurobiological development and activity in which no arithmetic operation is involved; and its classification criterion, i.e. “deciding” or “feeling” which neural (sub)network has the most connections propagated, could be an intrinsic capability of intelligent agents. Hopefully, this classifier may offer new insight into neural reality.
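The restricted linear classification described above reduces to a template-matching rule, sketched below without the randomness of *Z*; the templates and the test vector are illustrative stand-ins, not the paper’s digit data.

```python
# Deterministic skeleton of the classifier: pick the class whose training
# template has the largest dot product with the test image.

def classify(X, templates):
    """Return the index i maximizing delta_i(X) = templates[i] . X."""
    scores = [sum(t * x for t, x in zip(template, X))
              for template in templates]
    return max(range(len(scores)), key=scores.__getitem__)

templates = [
    [0.9, 0.1, 0.1, 0.9],   # class 0 "average image" (illustrative)
    [0.1, 0.9, 0.9, 0.1],   # class 1 "average image" (illustrative)
]
X = [0.8, 0.2, 0.1, 0.7]    # a test stimulus resembling class 0
predicted = classify(X, templates)
```

The biological classifier approximates exactly this argmax, except that each score arrives as a noisy count of propagated connections rather than an exact dot product.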

## V. CONCLUSION

This paper proposes a mathematical theory to explain how memory forms and works. It all begins with synaptic plasticity. We find that synaptic plasticity is more than impulses affecting synapses; it actually acts as a force that can drive a neural network to an eventual, long-lasting state. We also find that, under certain conditions, there is a one-to-one mapping between the neural state and the external stimulus that the neural network is exposed to. With the mapping, given the stimulus we know exactly what the neural state will be; given the neural state we know precisely what the stimulus has been. The mapping is essentially a link between past event and neural present; between the short-lived and the enduring. In that sense, the mapping itself is memory, or the mapping casts memory in the neural network. Next, we study how memory affects the neural network’s response to stimulus. We find that a neural network with memory of a stimulus responds more intensely to similar stimuli than to stimuli of less similarity, if response intensity is evaluated by the number of synaptic connections propagated by impulses. That is to say, a neural network with memory is able to classify stimuli. To verify this ability, we experiment with a classifier consisting of ten neural networks, which turn out to have considerable accuracy in classifying handwritten digit images. The classifier demonstrates that neurons could collectively provide fully biological computation for classification.

Our reasoning takes root in the mathematical treatment of synaptic plasticity as a target strength function *λ* from impulse frequency to synaptic strength. We put hypothetical constraints on this *λ* function to ensure that the ideal one-to-one mapping exists. Although these constraints are necessary to keep our theory mathematically sound, they raise concerns. Firstly, they could be overly restrictive. Take the continuity constraint for example. Even the discontinuous function of FIG 3(d), whose nonexistent *θ* function would map certain stimuli to any point within a “fixed interval” instead of a specific fixed point as shown in FIG 6, can be a usable *λ* in our classifier according to TABLE I. In this case, the fixed point per se doesn’t have to exist; the mere tendency to seek it out could serve the purpose. Secondly, as discussed in Section II, those *λ* constraints have yet to be supported by neurobiological evidence. Above all, evidence that reveals the true *λ* is vital to resolve the uncertainty.

## ACKNOWLEDGMENTS

We thank the anonymous reviewers for their comments that improved the manuscript.

## REFERENCES

^{24} Handwritten digit images dataset, loaded via “*from sklearn import datasets; datasets.load_digits()*”.