python, neural-network, recurrent-neural-network, neurolab

Time series forecast with recurrent Elman network in neurolab


I use the Elman recurrent network from neurolab to predict a time series of continuous values. The network is trained from a sequence such that the input is the value at index i and the target is the value at index i+1.
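A minimal sketch of that setup, assuming a sine wave as the series (the series and variable names are mine; the newelm, init, and train calls follow neurolab's shipped Elman example):

    import numpy as np
    import neurolab as nl

    # Toy series; any 1-D sequence of continuous values will do.
    series = np.sin(np.arange(0, 20, 0.1))

    # Input is the value at index i, target the value at index i+1.
    inp = series[:-1].reshape(-1, 1)
    tar = series[1:].reshape(-1, 1)

    # Elman net: recurrent tanh hidden layer, linear output layer.
    net = nl.net.newelm([[-1, 1]], [10, 1],
                        [nl.trans.TanSig(), nl.trans.PureLin()])
    net.layers[0].initf = nl.init.InitRand([-0.1, 0.1], 'wb')
    net.layers[1].initf = nl.init.InitRand([-0.1, 0.1], 'wb')
    net.init()

    # Train on the shifted pairs.
    error = net.train(inp, tar, epochs=500, show=100, goal=0.01)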

To make predictions beyond the immediate next time step, the output of the net is fed back as input. If, for example, I intend to predict the value at i+5, I proceed as follows.

  1. Input the value at index i; the output is a prediction for i+1.
  2. Feed that output back to the net as the next input value (i.e. as the value for i+1); the output is a prediction for i+2.
  3. Repeat step 2 three more times.
  4. The final output is a prediction of the value at i+5 (see the sketch below).
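Continuing the sketch above (so net and series are already defined), the loop might look like this, using the step method mentioned further below. The starting index 50 is hypothetical, and the code assumes the recurrent state actually carries over between calls, which is exactly what is in question:

    i = 50  # hypothetical starting index into the series

    # Five single-value activations: the first predicts i+1, each
    # further pass feeds the previous output back in, ending at i+5.
    x = np.array([series[i]])
    for _ in range(5):
        x = net.step(x)  # output becomes the next input
    prediction = x[0]    # predicted value at i+5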

So for predictions beyond the immediate next time step, recurrent networks must be activated with the output from a previous activation.

In most examples, however, the network is fed with an already complete sequence. See, for example, the train and sim functions in neurolab's Elman example. The first trains the network on a complete list of samples, and the second activates it with a complete list of input values.
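For reference, that whole-sequence style looks like this with the net from the sketch above:

    # Activate the net with the complete input sequence in one call;
    # sim returns one output row per input row (one-step predictions).
    out = net.sim(inp)  # shape (len(inp), 1)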

After some digging in neurolab, I found the step function, which returns a single output for a single input. Results from using step suggest, however, that it does not retain the activation of the recurrent layer, which is crucial for recurrent networks (see plot: declining activation).

How can I activate a recurrent Elman network in neurolab with a single input such that it maintains its internal state for the next single-input activation?


Solution

  • It turns out that it is quite normal for output that is generated from previous output to converge towards a constant value sooner or later. In effect, the output of a network cannot depend only on its own previous output.
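A neurolab-free illustration of why this happens (the toy map is my own, not from the answer): iterating a smooth contraction on its own output settles at a fixed point where f(x) = x, so a closed-loop forecast flattens out to a constant.

    import numpy as np

    # Toy stand-in for a trained one-input net.
    def f(x, w=0.8, b=0.1):
        return np.tanh(w * x + b)

    x = 0.9  # initial observation
    for step in range(20):
        x = f(x)  # feed the previous output back in
    print(x)      # after a few iterations x barely changes: a fixed point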