

Simulation of Kohonen's maps

Kohonen algorithms are unsupervised competitive algorithms [Koh88]. These networks, also called Kohonen maps, exploit the spatial organization of the output neurons to capture the structure of the input data. The network consists of a single layer in which every input neuron is connected to all output neurons, which are generally organized into an N x M matrix.
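
To make the competitive learning rule concrete, here is a minimal sketch in Python/NumPy of one Kohonen learning step: the winning output neuron is the one whose weight vector is closest to the input, and it is moved towards the input together with its grid neighbours. The decay schedules for the learning rate and neighbourhood radius are illustrative assumptions, not the ones used by INNE.

```python
import numpy as np

def kohonen_step(weights, x, t, n_steps):
    """One competitive learning step on an (N, M, 2) weight grid.

    weights : arc weights of the N x M output neurons (2 inputs each)
    x       : one 2D input sample
    t       : current step, used to shrink learning rate and radius
    """
    N, M, _ = weights.shape
    # Best-matching unit: the output neuron whose weight vector is
    # closest to the input (the winner of the competition).
    d = np.linalg.norm(weights - x, axis=2)
    wi, wj = np.unravel_index(np.argmin(d), (N, M))
    # Learning rate and neighbourhood radius decay over time
    # (assumed linear schedules, chosen only for illustration).
    lr = 0.5 * (1.0 - t / n_steps)
    radius = max(1.0, (max(N, M) / 2) * (1.0 - t / n_steps))
    # Move the winner and its grid neighbours towards the input,
    # weighted by a Gaussian of the distance measured on the grid.
    ii, jj = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
    grid_dist2 = (ii - wi) ** 2 + (jj - wj) ** 2
    h = np.exp(-grid_dist2 / (2 * radius ** 2))
    weights += lr * h[:, :, None] * (x - weights)
    return weights
```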

INNE can approximate input data generated by a uniform distribution inside 2D polygons. Through a graphical representation of the map, it is possible to observe the evolution of the simulation and how the map fits the input shape. In this model we are not interested in the direction of the weight vectors (as in Hebbian algorithms) but in their values, which define a position in the input space. Neurons that are adjacent in the network are shown as dots connected by lines, and their positions in the input space are given by the arc weights of the corresponding neurons.
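
A picture of this kind can be produced along the following lines; this is a sketch with matplotlib, assuming the (N, M, 2) weight array used in the previous sketch, and is not INNE's own display code.

```python
import matplotlib.pyplot as plt

def plot_map(weights, ax=None):
    """Draw the map: each neuron as a dot placed at its weight vector,
    with lines joining neurons that are adjacent in the grid."""
    ax = ax or plt.gca()
    # Rows and columns of the grid become polylines in input space.
    for row in weights:                      # horizontal connections
        ax.plot(row[:, 0], row[:, 1], "k-", lw=0.5)
    for col in weights.transpose(1, 0, 2):   # vertical connections
        ax.plot(col[:, 0], col[:, 1], "k-", lw=0.5)
    ax.plot(weights[:, :, 0].ravel(), weights[:, :, 1].ravel(), "ko", ms=2)
```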

For example, let us consider a neural network with 2 input neurons and a 10 x 10 matrix of output neurons. If we feed it inputs drawn from a uniform distribution inside a polygon, we can observe the Kohonen map evolving to fit the input space. At the beginning of the simulation we initialize the net with small random arc weights, so the neurons appear clustered in a small region of irregular shape, as in Fig. 8-a. During the simulation the lattice evolves towards more and more regular shapes (Fig. 8-b).

Figure 8: Net evolution. Initial (a) and final (b) configurations and error condition (c)
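
This experiment can be reproduced with the two sketches above (kohonen_step and plot_map); here the unit square stands in for the polygon, and the step count and seed are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = M = 10
n_steps = 20_000
# Small random initial arc weights: the neurons start out
# clustered near the origin, as in Fig. 8-a.
weights = rng.uniform(0.0, 0.05, size=(N, M, 2))

for t in range(n_steps):
    x = rng.uniform(0.0, 1.0, size=2)   # uniform input (unit square here)
    weights = kohonen_step(weights, x, t, n_steps)

plot_map(weights)                        # should resemble Fig. 8-b
```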

After the learning phase, if the number of learning steps was adequate, the map gives a good approximation of the input polygon. Note that neurons on the border of the network lie at some distance from the polygon's boundary; in fact, every neuron represents a portion of the input space, which is partitioned into regions whose centres of gravity the neurons approximate. During the ordering phase of the weight vectors, problems can arise (the map can become twisted), so that the network does not approximate the input space correctly. This situation is illustrated in Fig. 8-c. When these problems occur, the time required for learning increases.
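
One simple way to quantify how well the map approximates the input space is the mean distance from input samples to their nearest neuron; this diagnostic is an assumption added here for illustration, not a measure defined by INNE. A badly ordered map of the kind shown in Fig. 8-c gives a noticeably higher value than a well-ordered one.

```python
import numpy as np

def quantization_error(weights, samples):
    """Mean distance from each input sample to its nearest neuron;
    a twisted (badly ordered) map yields a noticeably higher value."""
    flat = weights.reshape(-1, 2)
    d = np.linalg.norm(samples[:, None, :] - flat[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Example: evaluate the trained map on fresh uniform samples.
# samples = rng.uniform(0.0, 1.0, size=(1000, 2))
# print(quantization_error(weights, samples))
```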

