

Simulation of Back-error propagation Algorithms

Back-error propagation is by far the most popular supervised learning method for multilayer neural networks [Rum86]. Back propagation and its variants have been applied to a variety of problems, including pattern recognition, signal processing, image compression, speech recognition, and nonlinear system modelling and control. The most appealing feature of back propagation is its adaptive nature, which allows complex processes to be modelled by learning from measurements or examples, updating the weights of the arcs connecting the neurons. In our simulation this process is visualised: the arcs are drawn with different colours and thicknesses according to their weights. The method requires neither a specific mathematical model nor expert knowledge of the problem.

Our examples are simple problems often used for testing and benchmarking a network: typically, the training set contains all possible input patterns, so there is no question of generalisation. For instance, the network in Fig 3-a recognises the digits from 0 to 9. During the learning phase we can observe the error dropping; after training, we can test the network by presenting a hand-written digit as input, setting on or off the input neurons displayed in a grid. In a single forward step the network tries to recognise the digit even in the presence of noise, setting the value of each output neuron between 0 and 1 according to the recognition probability of the corresponding digit (see Fig 3-b).

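To make the process concrete, the following is a minimal sketch of the technique described above, not the INNE implementation itself: a single-hidden-layer network trained by back-error propagation on binary bitmap patterns. The layer sizes, learning rate, epoch count, and the toy 3x3 "digit" bitmaps are illustrative assumptions, standing in for the 0-9 grid patterns of Fig 3.

```python
# A minimal back-propagation sketch (assumptions: sizes, rate, toy patterns).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BackpropNet:
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        # Small random initial weights on the arcs between layers.
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)       # hidden activations
        self.y = sigmoid(self.h @ self.W2)  # outputs in [0, 1]
        return self.y

    def train_step(self, x, target):
        y = self.forward(x)
        # Output-layer error signal, then back-propagated to the hidden layer.
        delta_out = (y - target) * y * (1.0 - y)
        delta_hid = (delta_out @ self.W2.T) * self.h * (1.0 - self.h)
        # Gradient-descent update of the arc weights.
        self.W2 -= self.lr * np.outer(self.h, delta_out)
        self.W1 -= self.lr * np.outer(x, delta_hid)
        return 0.5 * np.sum((y - target) ** 2)  # squared error

# The training set contains all input patterns, so there is no
# generalisation issue; two toy 3x3 bitmaps stand in for the digits.
patterns = np.array([[1,1,1, 1,0,1, 1,1,1],    # a blocky "0"
                     [0,1,0, 0,1,0, 0,1,0]],   # a blocky "1"
                    dtype=float)
targets = np.eye(2)  # one output neuron per digit

net = BackpropNet(n_in=9, n_hidden=4, n_out=2)
for epoch in range(2001):
    err = sum(net.train_step(x, t) for x, t in zip(patterns, targets))
    if epoch % 500 == 0:
        print(f"epoch {epoch}: error {err:.4f}")  # error drops while learning

# Test on a noisy "1": one input pixel flipped, as in Fig 3-b.
noisy = patterns[1].copy()
noisy[0] = 1.0
print(net.forward(noisy))  # outputs near [0, 1] despite the noise
```

As in the simulation, the per-epoch error printed above drops during training, and the output neurons settle to values between 0 and 1 that can be read as recognition confidences for each digit.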
Figure 3: Back-error propagation for digit classification. (a) Accepting the digit 8. (b) Accepting the digit 2 in the presence of noise.

