Most real-world models require a dynamic approach, since the phenomena studied are predominantly dynamic in character. Because most variables are time-dependent, the result/output at a certain time depends on the result or results obtained in previous periods.
As a consequence, recurrent neural networks (RNNs) incorporate historical information about inputs and outputs through at least one feedback connection (Sulehria & Zhang 2007). Figures 2.2 and 2.3 illustrate the two classical recurrent neural networks, the Elman network and the Jordan network, both of which have proved useful in real-world processes.
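The defining feature of both architectures is the feedback connection: the Elman network feeds the hidden state back as context, while the Jordan network feeds back the output. The following sketch (variable names and sizes are illustrative assumptions, not from the source) shows one time step of an Elman-style layer:

```python
import numpy as np

def elman_step(x, h_prev, W_xh, W_hh, b_h):
    """One time step of an Elman-style recurrent layer.

    The previous hidden state h_prev feeds back into the update
    (the 'context units'), so the output at time t depends on all
    previous inputs, not only the current one.
    """
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Tiny illustration: 2 inputs, 3 hidden units, random weights.
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(3, 2))
W_hh = rng.normal(size=(3, 3))
b_h = np.zeros(3)

h = np.zeros(3)  # initial context is empty
for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    h = elman_step(x, h, W_xh, W_hh, b_h)
```

A Jordan-style step would differ only in feeding the previous *output* vector, rather than `h_prev`, through the recurrent weights.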
Taking the activation function to be the signum function, the time evolution of the neurons' states is governed by the following rule:
y_k^(t+1) = sgn(w_k^T y^(t) + e_k − τ_k)    (2.10)

for k = 1, …, n and t = 0, 1, 2, …
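Rule (2.10) can be applied to all neurons at once as a synchronous update. A minimal sketch (the small weight matrix, inputs e, and thresholds τ below are illustrative assumptions; zero is conventionally mapped to +1 by the signum):

```python
import numpy as np

def sgn(v):
    # Bipolar signum activation: states are in {-1, +1};
    # the boundary case v == 0 is mapped to +1.
    return np.where(v >= 0, 1, -1)

def update(y, W, e, tau):
    """Synchronous application of rule (2.10):
    y_k(t+1) = sgn(w_k^T y(t) + e_k - tau_k) for every k."""
    return sgn(W @ y + e - tau)

# Illustrative two-neuron example.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
y = np.array([1, -1])
e = np.zeros(2)
tau = np.zeros(2)
y_next = update(y, W, e, tau)  # -> array([-1,  1])
```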
Example 2.1
Let us calculate the weights matrix for a Hopfield artificial neural network with 4 neurons and a single fundamental memory, stored in the network by means of the weights. Applying the learning rule for the weights matrix to this memory, we generate the weights matrix as follows:
Therefore, the weights matrix is obtained.
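Since the concrete fundamental memory is omitted above, the construction can still be illustrated with a hypothetical bipolar 4-vector and the common single-pattern Hebbian rule W = ξξ^T with a zeroed diagonal (no self-connections); both the vector and the particular normalization are assumptions for the sketch:

```python
import numpy as np

# Hypothetical fundamental memory (the concrete vector is not
# given in the text); any bipolar 4-vector behaves the same way.
xi = np.array([1, -1, 1, -1])

# Single-pattern Hebbian rule: W = xi xi^T, then remove
# self-connections by zeroing the diagonal.
W = np.outer(xi, xi)
np.fill_diagonal(W, 0)

# The stored memory is a fixed point of the signum update:
# sgn(W xi) recovers xi itself.
recalled = np.where(W @ xi >= 0, 1, -1)
```

The resulting W is symmetric with a zero diagonal, which is exactly the structure a Hopfield weights matrix must have, and applying the update rule to ξ leaves it unchanged.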