Output of nn application.
Output format of the application for training and testing multi-layer perceptrons (MLP) and single-layer perceptrons.
The application prints the following information on standard output:
- Structure: number of hidden layers, hidden units, inputs, outputs.
- Type of learning algorithm, its parameters and stop conditions.
- Training results: normalized RMS error and number of iterations.
- Testing results: number of errors, error rate, and confusion matrix.
- Elapsed CPU time
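The stop conditions and by-pattern learning reported by the application can be illustrated with a minimal sketch. This is not the nn application's code; it is a hypothetical single-layer perceptron trained by pattern with a gradient-descent-style update, stopping on either a maximum iteration count or an RMS error threshold, as in the parameters shown in the example below:

```python
def train_perceptron(patterns, lr=0.3, max_iterations=2000, rms_threshold=0.05):
    # patterns: list of (inputs, target) pairs with target in {0, 1}.
    # Hypothetical sketch of the two stop conditions: iteration limit
    # and threshold on the RMS error over the training set.
    n = len(patterns[0][0])
    w = [0.0] * n
    b = 0.0
    rms = float("inf")
    iteration = 0
    for iteration in range(1, max_iterations + 1):
        sq_err = 0.0
        for x, t in patterns:  # by-pattern (online) learning
            y = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0.0
            err = t - y
            sq_err += err * err
            for i in range(n):  # error-driven weight update
                w[i] += lr * err * x[i]
            b += lr * err
        rms = (sq_err / len(patterns)) ** 0.5
        if rms <= rms_threshold:  # threshold stop condition
            break
    return w, b, rms, iteration

# The AND function is linearly separable, so training converges
# well before the iteration limit.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, rms, iters = train_perceptron(data)
```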
See also: nn
Example:
This example shows the output of a standard MLP on a 7-class classification problem:

    Standard Multi-Layer-Perceptron
    Number of layers: 2
    Input dimension: 4
    Number of hidden units: 7
    Number of classes: 7
    Output dimension: 7
    Minimum Hamming distance between class codewords: 2
    Stop conditions:
      Maximum number of iterations   : 2000
      Threshold normalized RMS error : 0.05
    Learning algorithm:
      Alg. Gradient descent
      Learning rate : 0.3
      By pattern learning.
    Training results:
      RMS normalized error = 0.0536067
      Iterations = 2000
    Testing :
      Neural network stored in mynet.net
      Neural network output stored in mynet.out
      Confusion matrix :
        6 0 0 0 0 0 0
        0 9 0 0 0 0 0
        2 0 9 0 0 0 0
        0 0 0 7 0 0 0
        0 0 0 2 9 0 0
        0 0 0 0 0 9 1
        0 0 0 0 0 0 8
      Neural net errors:
        Number of errors      : 5
        Percentual error rate : 8.06452 %
    CPU time (min.sec) : 0.31
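The error count and rate follow from the confusion matrix: diagonal entries are correct classifications and off-diagonal entries are errors. A minimal sketch (not part of the nn application), assuming rows index the true class and columns the predicted class, using the matrix from the example:

```python
# Confusion matrix from the example output above: entry [i][j] counts
# test patterns of (assumed) true class i assigned to class j.
confusion = [
    [6, 0, 0, 0, 0, 0, 0],
    [0, 9, 0, 0, 0, 0, 0],
    [2, 0, 9, 0, 0, 0, 0],
    [0, 0, 0, 7, 0, 0, 0],
    [0, 0, 0, 2, 9, 0, 0],
    [0, 0, 0, 0, 0, 9, 1],
    [0, 0, 0, 0, 0, 0, 8],
]

total = sum(sum(row) for row in confusion)                  # all test patterns
correct = sum(confusion[i][i] for i in range(len(confusion)))  # diagonal
errors = total - correct                                    # off-diagonal entries
error_rate = 100.0 * errors / total

print(f"Number of errors      : {errors}")
print(f"Percentual error rate : {error_rate:.5f} %")
```

This reproduces the figures in the example: 5 errors out of 62 test patterns, an 8.06452 % error rate.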