Memristors are widely considered promising elements for the efficient implementation of synaptic weights in artificial neural networks (ANNs), since they are resistors that retain a memory of their previous conductive state. While demonstrations of simple memristor-based neural networks (e.g., a single-layer perceptron) already exist, the implementation of more complicated networks is more challenging and has yet to be reported. In this study, we demonstrate linearly nonseparable combinational logic classification (the XOR logic task) using a network implemented with CMOS-based neurons and organic memristive devices, which constitutes a first step toward the realization of a double-layer perceptron. We also show numerically the ability of such a network to solve an intrinsically analogue task that cannot be handled by digital devices. The obtained results prove the possibility of creating a multilayer ANN based on memristive devices and pave the way for designing more complex networks such as the double-layer perceptron.

The development and hardware realization of artificial neural networks that are capable of learning information processing (pattern recognition and classification, approximation, prediction, etc.) remains one of the most challenging tasks in artificial intelligence. One of the main issues in this pursuit is the lack of suitable hardware for the implementation of the key elements of a typical ANN – neurons and synapses. While CMOS-based neurons are nowadays commercially available,1 the appropriate candidate for the synapse is still under discussion. There are two main routes to synapse realization: a digital one (e.g., Static Random Access Memory2 or a floating-gate transistor3) and an analogue one (a memristive device).4 The main advantage of the digital route is its full compatibility with standard CMOS technology. However, this approach suffers from i) a digital rather than analogue representation of synaptic weights, which limits the performance of the ANN's massively parallel computations; ii) mediocre energy efficiency compared to memristive systems and to their biological counterparts; and iii) a lower achievable integration density than is possible with memristors. In this context, memristive devices are very promising candidates.5 Basically, a memristive device is a two-terminal device whose conductivity can be changed almost continuously by applying a relatively large voltage bias, but remains constant when a smaller bias or no bias is applied.6 Memristive properties have been found in inorganic (TiOx, HfOx, SiOx, etc.),7–10 organic (polyaniline, polyimide)11,12 and hybrid organic/inorganic13 materials. Organic and polymeric materials have unique advantages over traditional inorganic memristive devices, including high flexibility for biocompatible neuromorphic circuits and implants, low cost, solution processability and large-area implementation. Another important advantage is the possibility of realizing polymeric stochastic memristive systems in which communication between the computing elements (neurons) can be arranged in 3D.14 For neural networks, where effective learning requires precise knowledge of the conductivity state of all elements and of the kinetics of its variation, polyaniline-based systems have a further important advantage: the conductivity of polyaniline, and therefore of the memristive elements, is directly related to its color.15 This makes it possible to measure the conductivity of each element with a contactless spectrophotometric method, which will allow simplification of the circuit.

Concerning the hardware realization of simple ANNs based on memristive devices, only a few examples have been reported in the literature.16–19 The single-layer (or elementary) perceptron is the simplest kind of neural network that can implement basic learning and parallel processing. However, to the best of our knowledge, there has been no successful attempt at a hardware realization of a multilayer perceptron based on memristive devices. Nevertheless, in the field of artificial intelligence, more complex neural networks are required to solve demanding tasks.20 A multilayer perceptron can perform linearly nonseparable tasks (i.e., tasks whose classes cannot be separated by a hyper-plane in the space of their features, which is also the input space of the perceptron21) that cannot be solved by a single-layer perceptron.

Thus, the main goal of the present work is the hardware realization of a simple double-layer ANN based on organic (polyaniline) memristive devices able to solve linearly nonseparable tasks. In this manuscript we present the first steps towards the realization of the double-layer perceptron, including the design and hardware realization of the ANN. The implementation of the backpropagation algorithm and its use to train the device will be the subject of subsequent work. Here, we designed an ANN and show the first results of its capability in performing the XOR logic task. Moreover, we show numerically that our setup is capable of solving an analogue task that is particularly demanding for standard von Neumann architectures. The obtained results, although still preliminary, are highly encouraging and suggest a new route for the implementation of multilayer ANNs based on memristive devices.

PANI-based memristive devices were fabricated following the technique reported in Ref. 22. A solution of PANI (Mw ≈ 100 000, Sigma Aldrich) was prepared at a concentration of 0.1 mg·mL−1 in 1-methyl-2-pyrrolidinone (Sigma Aldrich, ACS reagent, 99.0%) with the addition of 10% toluene (AnalaR NORMAPUR®, ACS). This solution was filtered twice and then deposited onto a glass substrate (1.5 × 0.5 cm2) with two Cr electrodes by the Langmuir–Schaefer technique. The PANI conductive channel was formed by depositing 60 layers of the polymer in its emeraldine base form and then converting it to the conducting emeraldine salt form by a doping process consisting of immersion in HCl (1 M). Subsequently, a stripe of solid electrolyte, about 1 mm wide, was deposited on the center of the PANI channel in a crossed configuration, and a silver wire (0.05 mm), inserted into the polyelectrolyte, served as a reference electrode. The electrolyte was prepared starting from a water solution (20 mg·mL−1) of polyethylene oxide (PEO) with a molecular weight of 8·10^6 Da, to which a solution of LiClO4 (Sigma) and water were added to reach a concentration of 0.1 M. The final structure was additionally doped in HCl vapor. The voltage cycling and the current measurements were performed by means of NI PXIe-4130, PXIe-4138 and PXIe-4139 source measure units, an NI DAQ board and two bias voltage supplies (0.4 and 15 V). All source and measurement elements were controlled by a dedicated LabVIEW program.

The principal scheme of the network, shown in Fig. 1a, consisted of two inputs (X1, X2), two neurons (several, in the general case) in the hidden layer and an output neuron (or several neurons). Inputs and neurons were connected by links with specific synaptic weights (wij, wjk). The circuit diagram of the memristive-device-based network is presented in Fig. 1b (the colored parts correspond to those in Fig. 1a). Each weight was represented by two memristive devices (see below). A vital requirement for training the network is the ability to change the resistance (proportional to the synaptic weight) of every memristive device independently of the others. To address this issue we developed an access system based on CMOS transistors acting as voltage-controlled switches. This system allowed a writing voltage to be applied to a specified memristive device during the training procedure, or a reading voltage during information processing. Such a switch connects each memristor either to one of the inputs, when biased by a non-negative voltage, or to the reference voltage source (+0.2 V) (for the motivation see below). A commutator composed of one 1-of-8 analogue switch (considered as the “master”) and two more (“slave”) switches connected in series allowed us to control all 12 switches in the circuit with five logic inputs (Fig. 1c), as illustrated by the sketch below.
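The exact wiring of the master/slave switches is not reproduced here, but the addressing principle can be illustrated with a short sketch. The snippet below (purely illustrative; the bit-to-output assignment and device ordering are our assumptions, not the actual circuit mapping) shows how five logic lines can select one of the twelve memristive devices of Fig. 1b or the broadcast output “All”.

```python
# Illustrative decoding of the 5 logic inputs L0-L4 into the 16 commutator
# outputs of Fig. 1c.  The bank/channel assignment below is an assumption made
# only to show that 5 lines suffice for 12 devices plus the "All" output.

MEMRISTORS = [  # 12 devices: layer n, input i, neuron j, polarity +/-
    "M111+", "M111-", "M112+", "M112-",
    "M121+", "M121-", "M122+", "M122-",
    "M211+", "M211-", "M221+", "M221-",
]

def decode(l4, l3, l2, l1, l0):
    """Map the five logic lines to one commutator output.

    Hypothetically, L3-L4 pick one of the 'slave' 1-of-8 switches (or the
    'All' broadcast line) and L0-L2 pick a channel on the selected switch.
    """
    channel = (l2 << 2) | (l1 << 1) | l0     # 0..7 on the selected slave switch
    bank = (l4 << 1) | l3                    # 0, 1: slave banks; 2: 'All' line
    if bank == 2:
        return "All"                         # +15 V to every access switch
    index = bank * 8 + channel               # 0..15; only 0..11 are wired
    return MEMRISTORS[index] if index < len(MEMRISTORS) else None

# Example: address the excitatory device between input 2 and hidden neuron 1.
print(decode(0, 0, 1, 0, 0))   # -> "M121+" under the assumed mapping
```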

FIG. 1.

a) Logic scheme of the implemented neural network with 2 inputs, 2 hidden neurons and 1 output neuron. b) Circuit diagram of the memristor-based ANN hardware with circled “neurons”, each consisting of a differential summator and an activation function. The numbering of the memristive links (Mnij(±)) comprises the layer number n and the indices of the i-th input and j-th output neurons it connects; the sign defines whether this partial weight is positive or negative. The access system is shown only for the M111+ and M121+ memristors and is omitted for the others for simplicity. c) Logic scheme of the commutator with 5 logic inputs (L0–L4) and 16 outputs (only 12 of them were used, according to the number of memristors). The separate output “All” corresponds to the application of the control voltage (+15 V) to all memristive device access systems (during the reading of an input vector by the perceptron). In the absence of the control voltage, −15 V was applied to the access systems, as required for applying +0.2 V to all memristors (see the inset and the text for details).


The artificial neuron body (soma) was implemented in the circuit by an op-amp-based differential adder and a voltage divider with a MOSFET controlled by the output of the summator. This element executed the basic neuron functions in terms of information processing – summation and thresholding. The differential summator, performing the function y = Σi wi xi, is required to separate different classes of input combinations; here y is the output voltage of the summator and xi and wi are the i-th input voltage and the corresponding weight, respectively. Moreover, such a scheme allows the realization of negative synaptic weights by doubling the number of memristors, which is crucial for the convergence of the learning algorithm in almost all possible tasks. In this scheme, each synapse was represented by two memristive devices, an “excitatory” and an “inhibitory” one, connected to the non-inverting and inverting inputs of the op-amp, respectively. The resulting weight of the i-th synapse was wi = Rfb(Gi+ − Gi−), where Rfb is the value of the feedback resistance and Gi+ and Gi− are the conductances of the i-th excitatory and inhibitory memristive devices, respectively. The output voltage y was applied to the gate of the MOSFET in the voltage divider, connecting the neuron output to logic “1” when open and to logic “0” otherwise. The threshold voltage of the voltage divider was about 1.8 V, depending on the characteristics of the MOSFET used. A typical transfer function (which in terms of ANNs is called an activation function) is shown in Fig. 2a.
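As a behavioural summary of the soma circuit, the sketch below reproduces the summation and thresholding described above, assuming an idealised step-like activation and a feedback resistance value that is not specified in the text.

```python
import numpy as np

# Behavioural sketch of one hardware neuron from Fig. 1b: a differential
# summator followed by a hard-threshold "activation".  V_TH follows the text;
# R_FB is an assumed value; the sharp step idealises the measured divider
# transfer function of Fig. 2a.

R_FB = 1.0e6               # op-amp feedback resistance, ohm (assumed, not given in the text)
V_TH = 1.8                 # switching threshold of the MOSFET voltage divider, V
V_ONE, V_ZERO = 0.4, 0.0   # logic levels used in the circuit, V

def neuron(x, g_plus, g_minus, r_fb=R_FB):
    """x: input voltages (V); g_plus/g_minus: conductances (S) of the
    excitatory and inhibitory memristors.  Returns the neuron output voltage."""
    w = r_fb * (np.asarray(g_plus) - np.asarray(g_minus))   # wi = Rfb (Gi+ - Gi-)
    y = float(np.dot(w, x))                                  # y = sum_i wi xi
    return V_ONE if y > V_TH else V_ZERO                     # idealised step activation

# Example: both inputs at logic "1" with strongly potentiated excitatory devices.
print(neuron([V_ONE, V_ONE], g_plus=[1e-5, 1e-5], g_minus=[1e-7, 1e-7]))   # -> 0.4
```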

FIG. 2.

a) Transfer functions of the three voltage dividers used to implement the neuron activation function. b) Typical I-V curve of the organic PANI-based memristive device. The inset shows a typical I-V curve for the ionic current of the PANI-based memristive device. c) Typical kinetics of the PANI-based memristive device conductance under a potentiating voltage pulse (+0.6 V, solid line) and a depressing one (−0.2 V, dashed line). d) Absolute values of the memristive conductance change under the potentiating voltage pulse (+0.6 V, prefix “p” in the legend) and the depressing one (−0.2 V, prefix “d”) as a function of the initial conductance, for various pulse durations (specified in the legend).


The initial characterization of the memristive devices was performed by measuring cyclic I-V curves. The measurement scheme is described in detail elsewhere.23 Typical I-V characteristics for the electronic and ionic currents are shown in Fig. 2b. There are two peaks in the ionic I-V curve, at about 0.1 V and 0.5 V (inset in Fig. 2b), corresponding to the potentials of the redox reactions that PANI undergoes. The ionic current passing through the PANI/PEO interface is due to the variation of the redox state, which changes the conductivity of PANI. Thus, by adjusting the potential value it is possible to control the rate of change of the PANI conductivity. The electronic current shows a nonlinear rectifying behavior (Fig. 2b): it increases only slightly below an applied voltage of 0.5 V, while at about 0.7 V it increases markedly because of the oxidation process.24 During the backward voltage sweep, the reduction process results in a conductivity decrease. Accordingly, a voltage of 0.4 V was used for reading the output values and the memristive device conductance, to prevent their noticeable variation. This value was also defined as logic “1”, while a voltage of 0 V was considered as logic “0”. When no input vector was applied to the network, each memristive device was biased at +0.2 V, as this approximately corresponds to the redox equilibrium potential of PANI. For the learning procedure, the amplitude of the potentiating voltage pulse was chosen to be +0.6 V, and that of the depressing pulse −0.2 V.

The durations of the training pulses were established on the basis of the resistive switching kinetics of the PANI memristive devices. Typical plots are shown in Fig. 2c. Absolute values of the conductance change under a potentiating voltage pulse (+0.6 V) and a depressing one (−0.2 V) are presented in Fig. 2d as functions of the initial conductance for various pulse durations. Each value was obtained by applying the voltage for 10, 20 and 40 s for depression and 100, 200 and 400 s for potentiation, and measuring the current through the device within 1 s. The figure shows that the memristive device conductance could be changed almost continuously from 10^-7 to 10^-5 S. Additional analysis demonstrated that the conductance under the potentiating voltage could be well approximated by a function A0 + A1·exp(−t/τ1) + A2·exp(−t/τ2), while that under depression by a function A3 + A4·exp(−t/τ3). The characteristic times τ1, τ2 and τ3 varied from sample to sample, but their averages were 400 s, 40 s and 50 s, respectively. It should also be noted that the endurance characteristics of each memristive device strongly depend on the state of its solid electrolyte: when it dries out, the device loses its memristive properties and becomes a simple resistor. In order to extend the working time of the device we covered it with polyimide Kapton tape. The retention time of the memristive devices at +0.2 V (the PANI redox potential) was not very long (about a day) but was sufficient for the demonstration purposes of this work. It could be increased by, for example, inserting metal nanoparticles into the PANI layer, as they can potentially preserve the charge needed to conserve the current electrochemical state, and thus the conductance, of the memristive device.25
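For illustration, the time constants quoted above can be extracted from measured G(t) traces by a standard least-squares fit; the sketch below uses synthetic data in place of the experimental curves of Fig. 2c, with the functional forms given in the text and parameter values chosen only to lie in the reported conductance and time ranges.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the exponential fitting used to extract switching time constants.
# The synthetic trace below stands in for a measured potentiation curve G(t);
# amplitudes and taus are illustrative, not the authors' fitted values.

def g_potentiation(t, a0, a1, tau1, a2, tau2):
    # G(t) = A0 + A1 exp(-t/tau1) + A2 exp(-t/tau2)
    return a0 + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def g_depression(t, a3, a4, tau3):
    # G(t) = A3 + A4 exp(-t/tau3)
    return a3 + a4 * np.exp(-t / tau3)

rng = np.random.default_rng(2)
t = np.linspace(0, 400, 200)                                     # seconds
g_meas = g_potentiation(t, 1e-5, -6e-6, 400.0, -3e-6, 40.0)      # placeholder trace, S
g_meas += rng.normal(scale=5e-8, size=t.size)                    # measurement noise

popt, _ = curve_fit(g_potentiation, t, g_meas,
                    p0=(1e-5, -5e-6, 300.0, -2e-6, 30.0))
print("tau1 = %.0f s, tau2 = %.0f s" % (popt[2], popt[4]))
```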

Since a double-layer perceptron is able to solve linearly nonseparable tasks, we chose the “XOR” function to be performed by our network. In this logic task, the input signals (0;0) and (1;1) belong to class “0” and the other two, (1;0) and (0;1), to class “1” (according to the logic outputs), so that no single straight line in the feature plane separates these classes. This task cannot be solved by an elementary (single-layer) perceptron, in which each output neuron implements one hyper-plane separating the classes. In contrast, the second-layer neurons of a double-layer ANN perform the separation in the feature space of the first layer, enabling the union, intersection and difference of the “subclasses” identified by the hidden layer of the network.
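The nonseparability of XOR, and the way two hidden half-planes resolve it, can be checked numerically; the construction below is the textbook one and is not meant to reproduce the specific weights learned by the hardware.

```python
from itertools import product

# Brute-force illustration: no single linear threshold unit w1*x1 + w2*x2 > theta
# reproduces XOR, whereas two hidden half-planes combined by an output unit do.

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def single_unit_solves_xor():
    grid = [k / 4 - 2 for k in range(17)]            # coarse scan of w1, w2, theta
    for w1, w2, th in product(grid, grid, grid):
        if all((w1 * x1 + w2 * x2 > th) == bool(y) for (x1, x2), y in XOR.items()):
            return True
    return False

def two_layer(x1, x2):
    h1 = int(x1 + x2 > 0.5)       # hidden plane 1: at least one input high
    h2 = int(x1 + x2 > 1.5)       # hidden plane 2: both inputs high
    return int(h1 - h2 > 0.5)     # output: "exactly one input high"

print(single_unit_solves_xor())             # False: no single line separates XOR
print([two_layer(*x) for x in XOR])         # [0, 1, 1, 0]
```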

In machine learning, the backpropagation with batch correction learning algorithm26 is widely used for solving nonseparable tasks. Briefly, the algorithm comprises the calculation of the gradient of a squared error function with respect to all the weights in the network. The gradient is fed to an optimization method, which in turn uses it to update the weights so as to minimize the squared error function. This means that the weight values have to be tuned very precisely. This point was an issue for the hardware perceptron, because the resistive switching kinetics of the memristive devices were not similar enough to be described by a unified mathematical model. We could therefore only follow the direction (sign) of the weight correction, but not its value, choosing an empirically established duration for the learning pulses. Such a modification of the backpropagation learning algorithm makes the number of steps needed to converge strongly dependent on the initial weight distribution: the closer it was to the final distribution, the fewer steps were needed. It should be noted that not every initial state of the network led to convergence. Possible solutions to this issue could be the implementation of different algorithms based on spike-timing-dependent plasticity (STDP) rules27 or the realization of a circuit in which the conductivity of each element is measured with a contactless spectrophotometric method.15
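A minimal sketch of this sign-only correction rule, essentially a Manhattan-type update applied to a 2-2-1 network with the sigmoid activation used later in the simulations, is given below; the step size standing in for the effect of one fixed-duration pulse is an assumed value, and convergence is not guaranteed, as noted above.

```python
import numpy as np

# Sign-only weight correction: standard backpropagation gradients are computed,
# but only their signs are used, because on the hardware a correction is a
# fixed-duration potentiating or depressing pulse rather than a precise increment.

rng = np.random.default_rng(0)
W1 = rng.uniform(2, 8, size=(2, 2))      # input -> hidden weights (a.u.)
W2 = rng.uniform(2, 8, size=(1, 2))      # hidden -> output weights (a.u.)
STEP = 0.2                               # weight change per pulse (assumed, a.u.)

sigmoid = lambda s: 1.0 / (1.0 + np.exp(-(s - 4.5) / 0.5))

def forward(x):
    h = sigmoid(W1 @ x)
    return h, sigmoid(W2 @ h)

def sign_update(x, target):
    global W1, W2
    h, y = forward(x)
    # the constant 0.5 replaces the sigmoid derivative, as in the simulation below
    delta_out = (y - target) * 0.5
    delta_hid = (W2.T @ delta_out) * 0.5
    W2 -= STEP * np.sign(np.outer(delta_out, h))   # pulse polarity = -sign of gradient
    W1 -= STEP * np.sign(np.outer(delta_hid, x))

for epoch in range(200):
    for x, t in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]:
        sign_update(np.array(x, float), np.array([t], float))
```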

Each step of our learning procedure consisted, consecutively, of the application of the whole training set of vectors x(k) (k = 1, 2, 3, 4), measurement of the actual weights (by applying the “reading” pulses) and weight correction (by applying the “writing” pulses). The correction pulse durations were chosen so as to minimize the duration of the learning steps and were kept constant (although different for the depressing and potentiating pulses) for all steps of the whole learning procedure. The procedure was repeated until convergence. Fig. 3a shows the results of the learning procedure for the XOR logic function at the first and last iterations (in this example the whole procedure consisted of two steps). Fig. 3b depicts the change of the weight values after learning. As described above, each weight was adjusted through two memristive devices (their conductances are not shown separately) and is given in arbitrary units. As shown in Fig. 3c, the weights were adjusted so that the two output classes were separated by two planes in the feature space.

FIG. 3.

Experimental data. a) Output signal within the epochs before (left) and after (right) training, and the expected output signal (dotted). b) Synaptic weights and c) the corresponding feature plane partition (the areas above and below the plane y = 4.5 correspond to classes “1” and “0”, respectively). The obtained separating planes are implemented by the corresponding neurons of the first layer.


Since the double-layer perceptron separates the feature space into classes by hyper-planes and their combinations, each class corresponds to a multidimensional polygon-like area in the feature space. This allows the perceptron to classify not only “black” (logic “0”) and “white” (“1”) classes, but also “gray” ones (some range of signal amplitude between logic “0” and “1”), i.e., analogue input signals. Here, we show the basic possibility of solving an analogue task with our circuit using the example of the simplest polygon: a triangle. Since every straight line is implemented by one neuron in the hidden layer, we used a circuit consisting of two inputs, three neurons in the first layer and one output neuron in the second layer. The circuit was simulated using the real characteristics (memristive device kinetics, neuron activation functions, resistors and the other elements shown in Fig. 1b). The activation function used for the i-th neuron was obtained by fitting the experimental data shown in Fig. 2a with the sigmoidal function yi = 1/(1 + exp(−(Σi − 4.5)/0.5)), where Σi is the weighted sum of the inputs of the i-th neuron, considering +0.4 V as logic “1”. Learning was performed with the backpropagation learning algorithm described in Ref. 26, simplified by replacing the derivative of the activation function by a constant 0.5 to speed up the convergence. The optimal learning rate constant η was found to be equal to 2 for the initial weights used, uniformly distributed on the interval [2, 8]. The possible positions of the separating lines (in bold red), implemented by the hidden neurons, and the calculated output signal (heat map) are shown in Fig. 4a. The vector points of the training set (white squares in Fig. 4a) were chosen to teach the perceptron to classify the analogue signal approximately in the geometry of a triangle, with sufficient margins between points to avoid possible classification uncertainty associated with the activation function width. The points inside the triangle were defined as belonging to class “1”, the others to class “0”. The learning procedure can be followed through the dependence of the squared error function E on the epoch number for different initial conditions (Fig. 4b). The convergence of the error to zero means that the double-layer perceptron can be trained to solve an analogue classification task for different sets of initial weights.
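A minimal re-implementation of this simulation is sketched below; the triangle training points and the random seed are illustrative, while the 2-3-1 topology, the activation function, the constant 0.5 replacing its derivative, η = 2 and the initial weight interval [2, 8] follow the text.

```python
import numpy as np

# Sketch of the analogue (triangle) classification simulation: a 2-3-1 perceptron
# with the fitted sigmoid activation, backpropagation with the derivative replaced
# by 0.5, eta = 2, and initial weights uniform on [2, 8].  Training points are
# illustrative, not the exact vectors of Fig. 4a.

rng = np.random.default_rng(1)
W1 = rng.uniform(2, 8, (3, 2))            # input -> hidden weights
W2 = rng.uniform(2, 8, (1, 3))            # hidden -> output weights
ETA = 2.0

act = lambda s: 1.0 / (1.0 + np.exp(-(s - 4.5) / 0.5))

# Points inside the triangle belong to class "1", the others to class "0".
X = np.array([[2.0, 2.0], [3.0, 2.5], [2.5, 3.5],      # inside
              [0.5, 0.5], [5.0, 0.5], [0.5, 5.0],      # outside
              [5.0, 5.0], [3.0, 0.3], [0.3, 3.0]])
T = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0], float)

for epoch in range(500):
    E = 0.0
    for x, t in zip(X, T):
        h = act(W1 @ x)
        y = act(W2 @ h)[0]
        E += 0.5 * (y - t) ** 2                      # squared error function
        d_out = (y - t) * 0.5                        # derivative replaced by 0.5
        d_hid = (W2[0] * d_out) * 0.5
        W2 -= ETA * d_out * h[np.newaxis, :]
        W1 -= ETA * np.outer(d_hid, x)

print("squared error after the last epoch:", E)
```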

FIG. 4.

a) Simulated output signal and the corresponding separating lines. b) Error function value during the learning procedure for 4 different sets of initial weights.


In conclusion, we have shown that memristive devices can in principle be used for the hardware realization of multilayer ANNs. For the first time, we built a double-layer ANN that paves the way for the realization of a multilayer perceptron, demonstrating the possibility of performing linearly nonseparable combinational logic classification (the XOR logic task). It was also shown that such a perceptron can in principle solve analogue tasks that cannot be handled by digital devices. This approach could be extended (although not straightforwardly) to larger ANNs and other machine learning algorithms for more complex and data-intensive tasks.

The work was partially supported by the Russian Science Foundation (grant 16-13-00052) and was partially carried out using the equipment of the Resource Center of Electrophysical Methods (Complex of NBICS technologies, Kurchatov Institute). This research was performed in the framework of the “Grandi Progetti 2012” program funded by the Autonomous Province of Trento, Italy (PAT): “Developing and studying novel intelligent nano materials and devices towards adaptive electronics and neuroscience applications — MaDEleNA Project”.

1. J. M. Cruz-Albrecht, M. W. Yung, and N. Srinivasa, “Energy-efficient neuron, synapse and STDP integrated circuits,” IEEE Transactions on Biomedical Circuits and Systems 6(3), 246–256 (2012).
2. J. Seo, B. Brezzo et al., “A 45nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons,” in Custom Integrated Circuits Conference (CICC), IEEE, 1–4 (2011).
3. G. Indiveri, R. Etienne-Cummings et al., “Neuromorphic silicon neuron circuits,” Front. Neurosci. 5, 1–23 (2011).
4. J. J. Yang, D. B. Strukov, and D. R. Stewart, “Memristive devices for computing,” Nat. Nanotechnol. 8, 13–24 (2013).
5. Y. V. Pershin and M. Di Ventra, “Memory effects in complex materials and nanoscale systems,” Adv. Phys. 60, 145–227 (2011).
6. D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, “The missing memristor found,” Nature 453, 80–83 (2008).
7. M. Hamaguchi, K. Aoyama, S. Asanuma, Y. Uesu, and T. Katsufuji, “Electric-field-induced resistance switching universally observed in transition-metal-oxide thin films,” Appl. Phys. Lett. 88, 142508 (2006).
8. S. U. Sharath, J. Kurian, P. Komissinskiy, E. Hildebrandt, T. Bertaud, C. Walczyk, P. Calka, T. Schroeder, and L. Alff, “Thickness independent reduced forming voltage in oxygen engineered HfO2-based resistive switching memories,” Appl. Phys. Lett. 105, 073505 (2014).
9. A. N. Mikhaylov, A. I. Belov, D. V. Guseinov, D. S. Korolev, I. N. Antonov, D. V. Efimovykh, S. V. Tikhov, A. P. Kasatkin, O. N. Gorshkov, D. I. Tetelbaum, A. I. Bobrov, N. V. Malekhonova, D. A. Pavlov, E. G. Gryaznov, and A. P. Yatmanov, “Bipolar resistive switching and charge transport in silicon oxide memristor,” Mater. Sci. Eng. B 194, 48–54 (2015).
10. Y. H. Wang, K. H. Zhao, X. L. Shi, G. L. Xie, S. Y. Huang, and L. W. Zhang, “Investigation of the resistance switching in Au/SrTiO3:Nb heterojunctions,” Appl. Phys. Lett. 103, 031601 (2013).
11. V. Erokhin, T. Berzina, and M. P. Fontana, “Hybrid electronic device based on polyaniline-polyethyleneoxide junction,” J. Appl. Phys. 97, 064501 (2005).
12. Y. Kim, D. Yoo, J. Jang, Y. Song, H. Jeong, K. Cho, W.-T. Hwang, W. Lee, T.-W. Kim, and T. Lee, “Characterization of PI:PCBM organic nonvolatile resistive memory devices under thermal stress,” Organic Electronics 33, 48–54 (2016).
13. Z. Ma, C. Wu, D. U. Lee, F. Li, and T. W. Kim, “Carrier transport and memory mechanisms of multilevel resistive memory devices with an intermediate state based on double-stacked organic/inorganic nanocomposites,” Organic Electronics 28, 20–24 (2016).
14. V. Erokhin, T. Berzina, K. Gorshkov, P. Camorani, A. Pucci, L. Ricci, G. Ruggeri, R. Sigala, and A. Schüz, “Stochastic hybrid 3D matrix: learning and adaptation of electrical properties,” J. Mater. Chem. 22, 22881–22887 (2012).
15. S. Battistoni, A. Dimonte, and V. Erokhin, “Spectrophotometric characterization of organic memristive devices,” Organic Electronics 38, 79–83 (2016).
16. F. Alibart, E. Zamanidoost, and D. B. Strukov, “Pattern classification by memristive crossbar circuits using ex situ and in situ training,” Nat. Commun. 4, 2072 (2013).
17. V. A. Demin, V. V. Erokhin, A. V. Emelyanov, S. Battistoni, G. Baldi, S. Iannotta, P. K. Kashkarov, and M. V. Kovalchuk, “Hardware elementary perceptron based on polyaniline memristive devices,” Organic Electronics 25, 16–20 (2015).
18. M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, and D. B. Strukov, “Training and operation of an integrated neuromorphic network based on metal-oxide memristors,” Nature 521, 61 (2015).
19. S. Choi, P. Sheridan, and W. D. Lu, “Data clustering using memristor networks,” Scientific Reports 5, 10492 (2015).
20. L. Wang, M. Duan, and S. Duan, “Memristive perceptron for combinational logic classification,” Math. Problems in Engineering 2013, 625790 (2013).
21. G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Mathematics of Control, Signals, and Systems 2(4), 303–314 (1989).
22. T. Berzina, A. Smerieri, M. Bernabò, A. Pucci, G. Ruggeri, V. Erokhin, and M. P. Fontana, “Optimization of an organic memristor as an adaptive memory element,” J. Appl. Phys. 105, 124515 (2009).
23. V. Allodi, V. Erokhin, and M. P. Fontana, “Effect of temperature on the electrical properties of an organic memristive device,” J. Appl. Phys. 108, 074510 (2010).
24. T. Berzina, S. Erokhina, P. Camorani, O. Konovalov, V. Erokhin, and M. P. Fontana, “Electrochemical control of the conductivity in an organic memristor: A time-resolved X-ray fluorescence study of ionic drift as a function of the applied voltage,” ACS Appl. Mater. Interfaces 1, 2115–2118 (2009).
25. T. Berzina, A. Pucci, G. Ruggeri, V. Erokhin, and M. Fontana, “Gold nanoparticles–polyaniline composite material: Synthesis, structure and electrical properties,” Synthetic Metals 161, 1408–1413 (2011).
26. P. J. Werbos, The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting (John Wiley & Sons, Inc., New York, NY, 1994).
27. S. Saïghi, C. G. Mayr, T. Serrano-Gotarredona et al., “Plasticity in memristive devices for spiking neural networks,” Front. Neurosci. 9, 51 (2015).