BACK PROPAGATION
This simulation is started when the BACK PROPAGATION item in the MODEL sub-menu is chosen.
It was first introduced by Werbos in 1974, and independently by Parker in 1982. It became well known only after Rumelhart and McClelland's famous book Parallel Distributed Processing in 1986. It was successfully used by Sejnowski and Rosenberg in their program, in which a network was trained to learn phonetic rules. Back-error propagation was a big improvement over the perceptron and other early adaptive networks, such as the adaline (adaptive linear neuron) or the madaline architecture (multilayer adaline), which Minsky and Papert proved to be very limited: the perceptron could only classify patterns that are linearly separable. This limitation is inherently tied to the fact that it could have only two layers of units, of which only one could change its weights. By contrast, a back-propagation network can have two or more layers of units, all with adaptive weights.
The network is trained, that is, it is presented with a series of pattern pairs. Each pair consists of an input pattern and an output pattern; the latter is the desired response of the network to the input and is used to determine the error values from which the weights are adjusted. When the training is successful, the network produces correct answers more and more often as the session goes on. The root-mean-squared (RMS) error is computed to quantify how well the network is learning: as the learning process goes on, the RMS error decreases, and a value below 0.1 would mean that the network has learned its training set.
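As a concrete illustration, the following is a minimal sketch in Python (not the simulator's own code) of this training procedure: a small network with one layer of hidden units, all weights adaptive, is repeatedly presented with input/output pattern pairs, the output error is propagated back to adjust the weights, and the RMS error is checked against the 0.1 rule of thumb mentioned above.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Training set: (input pattern, desired output pattern) pairs.
    # XOR is chosen because it is not linearly separable, so a plain perceptron cannot learn it.
    patterns = [(np.array([0., 0.]), np.array([0.])),
                (np.array([0., 1.]), np.array([1.])),
                (np.array([1., 0.]), np.array([1.])),
                (np.array([1., 1.]), np.array([0.]))]

    rng = np.random.default_rng(1)
    W1 = rng.uniform(-0.5, 0.5, (2, 4))   # input -> hidden weights (adaptive)
    b1 = np.zeros(4)
    W2 = rng.uniform(-0.5, 0.5, (4, 1))   # hidden -> output weights (adaptive)
    b2 = np.zeros(1)
    eta = 0.5                             # learning rate

    for step in range(20000):
        sq_err = 0.0
        for x, target in patterns:
            h = sigmoid(x @ W1 + b1)                       # forward pass, hidden layer
            y = sigmoid(h @ W2 + b2)                       # forward pass, output layer
            delta_out = (target - y) * y * (1 - y)         # output error times sigmoid derivative
            delta_hid = (delta_out @ W2.T) * h * (1 - h)   # error propagated back to the hidden layer
            W2 += eta * np.outer(h, delta_out)             # weight adjustments
            b2 += eta * delta_out
            W1 += eta * np.outer(x, delta_hid)
            b1 += eta * delta_hid
            sq_err += np.sum((target - y) ** 2)
        rms = np.sqrt(sq_err / len(patterns))              # RMS error over the training set
        if rms < 0.1:                                      # rule of thumb: the training set is learned
            print(f"learned after {step + 1} presentations, RMS = {rms:.3f}")
            break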
The network's response to an input is not a yes/no answer. At each step of the learning phase the network gets closer and closer to the desired response. This is called the convergence process of the network: the RMS value gets closer and closer to zero. But this does not always happen; sometimes the network does not seem to be learning any more. If one thinks of the RMS error as a function of the weights, looking for RMS values closer to zero means looking for minimum points. At times, instead of reaching a global minimum of the RMS error, a local minimum is found and the learning phase gets stuck. Standard back-error propagation does not provide a means to get around this problem, or rather to get out of local minima. Nevertheless, a number of techniques can be used to try not to enter a local minimum in the first place.
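One widely used technique, sketched below purely as an illustration (the manual does not say which methods, if any, this simulator applies), is a momentum term: each weight change keeps a fraction of the previous change, so the search tends to roll through shallow local minima of the RMS error surface instead of settling into them.

    import numpy as np

    eta = 0.5      # learning rate
    alpha = 0.9    # momentum coefficient (illustrative value)

    def momentum_update(W, grad, prev_delta):
        """One weight update with momentum: blend the current step with the previous one."""
        delta = eta * grad + alpha * prev_delta
        return W + delta, delta            # new weights, and the change to remember next time

    # Usage inside a learning loop (shapes are arbitrary here):
    W = np.zeros((2, 4))
    prev_delta = np.zeros_like(W)
    grad = np.ones_like(W)                 # stands in for the back-propagated error term of one step
    W, prev_delta = momentum_update(W, grad, prev_delta)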
After the network has learnt, it is usually tested to assess its performance.
The back-error propagation panel
In the panel there are some general buttons: DISCONNECT, CONNECT, RESET, EXIT, STEP and LEARN. There are also three sliders to tune some parameters, the first two associated with the simulation and the third with the visualization: eta, the number of learning steps, and the video refresh frequency. A message advises users when no training set is loaded, and two buttons are dedicated to loading a training file or creating a new one. Finally, some labels display the error and the number of learning steps already performed, and a small graphical window displays the error trend.
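As a rough sketch of how these parameters fit together (the function names below are hypothetical placeholders, not part of the simulator): eta scales every weight change, the number of learning steps bounds the loop, and the refresh frequency decides how often the display and the error-trend window are redrawn.

    def run_learning(eta, n_steps, refresh_freq, train_one_step, redraw):
        """Hypothetical driver for the learning phase; a sketch, not the simulator's real code."""
        rms = None
        for step in range(1, n_steps + 1):
            rms = train_one_step(eta)       # one presentation of the training set, returns RMS error
            if step % refresh_freq == 0:
                redraw(step, rms)           # update the edit area and the error trend window
            if rms < 0.1:                   # stop early once the training set is considered learned
                break
        return rms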
Now we will describe the actions associated with every button in detail.
CONNECT
DISCONNECT
RESET
Equivalent to the EDIT menu item RESET NET.
EXIT
LEARN
Starts the learning phase for the number of steps set with the Steps slider, which is adjusted with the mouse pointer, holding the left button pressed. During the learning phase, the edit area will be refreshed with a frequency regulated by the slider labelled Refresh freq.
STEP
LOAD
Once a training set has been created (TS), you can load it, as any other training file, with the LOAD button.