a set of C++ library classes
for neural network development



nn

Application for training and testing Multi-Layer Perceptrons (MLPs) and single-layer perceptrons.

The application nn implements an MLP and a simple perceptron with multiple inputs and outputs for classification problems. The classes can be coded according to the classical One-Per-Class (OPC) coding scheme or to an Error Correcting Output Coding (ECOC) scheme. Program options allow building arbitrary MLPs with one, two or no hidden layers, using a number of hidden units limited only by the available memory. The user can also select different flavours of the backpropagation algorithm and different learning parameters (e.g. learning rate, momentum). If an option is not set by the user, a default value is used.
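
For instance, with 4 classes the OPC scheme uses 4 output units with one-hot target vectors, while the ECOC scheme assigns each class a binary codeword (whose length is set with -lencw) and classifies a sample as the class whose codeword is closest, e.g. in Hamming distance, to the thresholded network outputs. The C++ fragment below is only an illustrative sketch of this idea, written for this documentation; it does not reproduce the code matrix actually built by nn:

    // Illustrative sketch only (not part of the nn library): OPC vs ECOC targets
    // for a toy problem with 4 classes and, for ECOC, codewords of length 7
    // (an exhaustive code; the code matrix actually used by nn may differ).
    #include <cstdlib>
    #include <iostream>

    int main() {
        const int nc = 4, lencw = 7;

        // OPC: one output unit per class, one-hot targets.
        int opc[4][4] = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1} };

        // ECOC: one codeword per class; classification picks the class whose
        // codeword has the minimal Hamming distance from the thresholded outputs.
        int ecoc[4][7] = { {1,1,1,1,1,1,1},
                           {0,0,0,0,1,1,1},
                           {0,0,1,1,0,0,1},
                           {0,1,0,1,0,1,0} };

        int out[7] = {1,0,1,1,0,0,1};   // hypothetical thresholded net outputs
        int best = 0, bestd = lencw + 1;
        for (int c = 0; c < nc; ++c) {
            int d = 0;
            for (int b = 0; b < lencw; ++b) d += (out[b] != ecoc[c][b]);
            if (d < bestd) { bestd = d; best = c; }
        }

        std::cout << "OPC target for class 2:";
        for (int b = 0; b < nc; ++b) std::cout << ' ' << opc[2][b];
        std::cout << "\nECOC decodes the output as class " << best << "\n";
        return EXIT_SUCCESS;
    }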


Usage: nn trainfile [options]

Options:
-test string test file name
-type string MLP type:
    normal (MLP-OPC) (default)
    ecoc (MLP-ECOC)
-lencw unsigned codeword length
-nc unsigned number of the classes
-nl unsigned number of layers (including the input layer):
    1: one layer (no hidden units)
    2: two layers (1 hidden layer)
    3: three layers (2 hidden layers)
-h1 unsigned number of hidden units in the first hidden layer of the three-layer MLP
-h2 unsigned number of hidden units in the second hidden layer of the three-layer MLP, or number of hidden units of the two-layer MLP (default 4)
-d unsigned sample dimension/number of input units
-maxit unsigned maximum number of iterations of the backpropagation algorithm (default 500)
-maxerr double maximum normalized root mean square (RMS) error (default 0.1)
-alg string backpropagation learning algorithm type (see the update-rule sketch after this option list):
    gd (gradient descent)
    gd_dl (gradient descent with linear decrement of the learning rate)
    gd_de (gradient descent with exponential decrement of the learning rate)
    md (gradient descent with momentum)
    md_dl (gradient descent with momentum and linear decrement)
    md_de (gradient descent with momentum and exponential decrement)
    bold (bold driver)
    boldmom (bold driver with momentum)
-rate double learning rate (default 0.07)
-mom double momentum rate (default 0.5)
-incbold double bold driver increment rate
-decbold double bold driver decrement rate
-decr double learning rate decrement (for the variants with linear or exponential decrement)
-seed seed for the random initialization of the weights; if 0, the initialization is performed using the current computer time
-s string save neural net parameters and weights onto a file
-out string save neural net outputs onto a file
-serr string save the history of the neural net RMS errors onto a file
-w string read neural net parameters and weights from a file
-p integer 1 = print a point at each iteration; 0 = no printing
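
The learning algorithms listed above are standard backpropagation variants. The C++ fragment below is only a rough sketch, written for this documentation, of their usual update rules; the exact schedules and formulas implemented inside nn may differ:

    // Rough sketch of the usual update rules behind -alg, -rate, -mom, -decr,
    // -incbold and -decbold. This is NOT code from the nn library.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Backprop {
        double rate = 0.07;                    // -rate
        double mom = 0.5;                      // -mom (0 for plain gradient descent)
        double decr = 0.0;                     // -decr
        double incbold = 1.05, decbold = 0.5;  // -incbold / -decbold (illustrative values)
        std::vector<double> prev_dw;           // previous weight update (momentum term)

        // gd / md: gradient descent, optionally with a momentum term.
        void update(std::vector<double>& w, const std::vector<double>& grad) {
            if (prev_dw.size() != w.size()) prev_dw.assign(w.size(), 0.0);
            for (std::size_t i = 0; i < w.size(); ++i) {
                double dw = -rate * grad[i] + mom * prev_dw[i];
                w[i] += dw;
                prev_dw[i] = dw;
            }
        }

        // gd_dl / md_dl and gd_de / md_de: decrease the learning rate over the
        // epochs, linearly or exponentially (one possible choice of schedule).
        double linear_rate(double rate0, unsigned epoch) const {
            double r = rate0 - decr * epoch;
            return r > 0.0 ? r : 1e-6;
        }
        double exponential_rate(double rate0, unsigned epoch) const {
            return rate0 * std::exp(-decr * epoch);
        }

        // bold / boldmom: increase the rate while the error keeps decreasing,
        // cut it down sharply when the error grows.
        void bold_driver(double prev_err, double curr_err) {
            rate *= (curr_err < prev_err) ? incbold : decbold;
        }
    };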

Examples:
For all the examples we refer to a training set named foo.train and a testing set named foo.test, with 5-dimensional examples and 7 classes. These are only a few examples; see the Usage section above for a complete reference of the available parameters. Note that you can place the parameters in any order and that most of them are not mandatory. You can even supply no parameters at all, but remember that the number of inputs of the MLP must agree with the dimension of the input examples and that you must also specify the correct number of classes.

  1. Training and testing a standard MLP with one hidden layer:
    nn foo.train -test foo.test -d 5 -nc 7
    Note that the order of the supplied parameters is not important.
    Note also that the number of inputs of the MLP must agree with the dimension of the input examples and that you must specify the correct number of classes. The selected MLP has one hidden layer (default) and uses a backpropagation learning algorithm with a fixed learning rate.
    If you want to select a learning rate equal, for instance, to 0.1:
    nn foo.train -test foo.test -d 5 -nc 7 -rate 0.1
    If you want to select 20 neurons for the hidden layer:
    nn foo.train -test foo.test -d 5 -nc 7 -h2 20

  2. Training and testing a standard MLP with two hidden layers:
    nn foo.train -test foo.test -d 5 -nc 7 -nl 3
    If you want to select a learning rate equal, for instance, to 0.04:
    nn foo.train -test foo.test -d 5 -nc 7 -rate 0.04 -nl 3
    If you want to select 20 neurons for the first hidden layer and 12 for the second hidden layer:
    nn foo.train -test foo.test -d 5 -nc 7 -nl 3 -h1 20 -h2 12

  3. Training a standard MLP with different learning algorithms:
    Using backpropagation with a momentum term:
    nn foo.train -test foo.test -d 5 -nc 7 -alg md
    If you want to select a learning rate equal, for instance, to 0.1 and a momentum rate equal to 0.4:
    nn foo.train -test foo.test -d 5 -nc 7 -alg md -rate 0.1 -mom 0.4
    If you want to select a bold driver learning algorithm with an increment rate equal to 1.04 and a decrement rate equal to 0.4:
    nn foo.train -test foo.test -d 5 -nc 7 -alg bold -incbold 1.04 -decbold 0.4

  4. Training and testing an ECOC MLP with one hidden layer:
    nn foo.train -test foo.test -d 5 -nc 7 -type ecoc

  5. Training and testing a standard MLP and saving it onto a file:
    nn foo.train -test foo.test -d 5 -nc 7 -s mymlp
    The file mymlp.net stores the parameters and weights of the MLP. You can reload the weights and parameters and continue the training in a second, possibly different stage:
    nn foo.train -test foo.test -d 5 -nc 7 -alg boldmom -maxerr 0.02 -w mymlp
    The MLP is initialized with the parameters and weights stored in the file mymlp.net, then the training restarts using the bold driver with momentum algorithm and ends when the normalized RMS error drops below 0.02 (or the maximum allowed number of iterations is reached).

  6. Saving the outputs or the history error:
    For saving the computed outputs of the net onto a file named output:
    nn foo.train -test foo.test -d 5 -nc 7 -out output
    For saving the history of the RMS errors during the learning epochs onto a file named history:
    nn foo.train -test foo.test -d 5 -nc 7 -serr history

  7. A more complicated example:
    We want to train and test an MLP with one hidden layer and 35 hidden neurons, using a backpropagation algorithm with an exponential decrement of the learning rate during the epochs and an initial rate of 0.2, initializing the net with the weights stored in the file startweights. We also want the learning to stop when the normalized RMS error goes below 0.05 or when 5000 iterations have been reached; the resulting MLP should then be stored in the file finishweights, the outputs relative to the test set in the file output, and the history of the errors during the epochs in the file history.
    To obtain this result you can type:
    nn foo.train -test foo.test -d 5 -nc 7 -h2 35 -alg gd_de -rate 0.2 -maxerr 0.05 -maxit 5000 -w startweights -s finishweights -out output -serr history

Further documentation:
  Output of the application
  Format of the file storing the net
  File storing the outputs
  Input data file



Last Updated February 2001
For comments and suggestions mail to Giorgio Valentini