The Kohonen feature map model

In this section we introduce the Kohonen feature map model, an example of an unsupervised learning algorithm, in particular an unsupervised competitive algorithm.
The panel to drive the simulation opens when the item KOHONEN MODEL in the MODEL sub-menu is chosen.

Kohonen model: an overview

Kohonen proposed an unsupervised, self-organizing competitive algorithm for generating a mapping of an input signal vector I from an N-dimensional real space (with N usually large) onto a one- or two-dimensional topological space in which the output neurons are arranged. The map is generated by adjusting the weight vectors w of the neurons toward the input vector I, so that the topological relationships among the weight vectors faithfully preserve the essential features of the inputs.
The output units Oi are arranged in an array and are fully connected to the inputs through connection weights wij.

A competitive learning rule is used, choosing the output unit whose weight vector wi* is closest to the current input I:

||wi* - I|| <= ||wi - I||,   for all i
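This winner-take-all selection is just an argmin over the distances from the input to each weight vector. A minimal sketch (the function name is illustrative, not part of the simulator):

```python
import numpy as np

def winner(weights, x):
    """Return the index i* of the output unit whose weight vector
    is closest (Euclidean norm) to the input vector x."""
    # distances d_i = ||w_i - x|| for every output unit
    d = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(d))
```

For example, with three units whose weights are (0,0), (1,1) and (2,2), the input (0.9, 1.1) selects the middle unit.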

The neuron update function is critical for topology-preserving networks. It is defined in terms of a neighbourhood function h(i, i*) that equals 1 for i = i*, the winning neuron, and falls off rapidly as neuron i moves away from the winner in the output array. Units close to the winner, like the winner itself, should have their weights changed appreciably, while those farther away (where h(i, i*) is small) should be affected little. A typical choice for h(i, i*) is the Gaussian:

h(i, i*) = exp( -||ri - ri*||^2 / (2 sigma^2) ),

where ri is the position of unit i in the output array and sigma is a width parameter that is gradually decreased.
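The Gaussian neighbourhood can be sketched directly from its definition; positions are coordinates of the units in the output array (function and argument names here are illustrative):

```python
import numpy as np

def neighbourhood(r_i, r_winner, sigma):
    """Gaussian neighbourhood h(i, i*): 1 at the winner, decaying
    with the distance between unit positions in the output array."""
    d2 = np.sum((np.asarray(r_i, float) - np.asarray(r_winner, float)) ** 2)
    return float(np.exp(-d2 / (2.0 * sigma ** 2)))
```

At the winner itself the function is exactly 1; one grid step away with sigma = 1 it has already dropped to exp(-1/2), about 0.61.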
The Kohonen learning rule can be written as a step-by-step procedure for computer simulation as follows:
  • Step 1
    Initialization: select the size and structure of the network. Initialize wij with small random values.
  • Step 2
Present a new input I(tk) from the N-dimensional real space.
  • Step 3
Compute the distance between I(tk) and all neurons, i.e. the matching scores di:
    di = ||I(tk) - wi(tk)||,
    where ||·|| is the Euclidean norm, the infinity norm, or any p-norm.
  • Step 4
Select the i*th neuron, the one closest to I(tk), i.e. with minimum distance di*:
    di* = min(di).
  • Step 5
Update the weight vectors of the i*th neuron and its neighbours:

    wi(tk+1) = wi(tk) + eta(tk) h(i, i*) [ I(tk) - wi(tk) ],

    where h(i, i*) is the neighbourhood function and eta is usually a slowly decreasing function of time, with 0 < eta < 1.
  • Step 6
    Repeat from step 2.
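The six steps above can be sketched as a single training loop. This is a minimal illustration for a one-dimensional output array (units on a line); the function name, the linear decay schedules for eta and sigma, and the default parameter values are assumptions, not the simulator's actual choices:

```python
import numpy as np

def train_som(inputs, n_units, n_steps, rng, eta0=0.5, sigma0=3.0):
    """Train a 1-D Kohonen map on rows of `inputs` (shape: samples x N)."""
    # Step 1: initialize weights with small random values
    w = rng.uniform(-0.05, 0.05, size=(n_units, inputs.shape[1]))
    pos = np.arange(n_units, dtype=float)   # unit positions on the line
    for t in range(n_steps):
        # Step 2: present a new input
        x = inputs[rng.integers(len(inputs))]
        # Step 3: matching scores d_i = ||x - w_i|| (Euclidean norm)
        d = np.linalg.norm(w - x, axis=1)
        # Step 4: select the winning unit i*
        istar = int(np.argmin(d))
        # Step 5: update winner and neighbours; eta and sigma shrink with time
        frac = t / n_steps
        eta = eta0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        h = np.exp(-((pos - pos[istar]) ** 2) / (2.0 * sigma ** 2))
        w += eta * h[:, None] * (x - w)
        # Step 6: repeat from step 2
    return w
```

Since each update moves wi by a convex step toward the presented input, the weights end up inside (or very near) the region covered by the input distribution.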
We can think of a sort of elastic net in input space that tries to come as close as possible to the inputs; the net has the topology of the output array (i.e., a line or a plane), and the points of the net have the weights as coordinates. This idea is useful to keep in mind when interpreting, for instance, the figure of a net learning a triangular shape.

This figure shows an example of a net mapping 2 inputs I1 and I2 onto a 10x10 planar array. Input patterns were chosen randomly from a uniform distribution over the triangle shown in the Edit and Simulation Window. The weights (wi1, wi2) of each output unit are shown in this input space as an intersection in the grid of lines, which connects all nearest-neighbour pairs.
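Drawing input patterns uniformly from a triangle, as in this figure, can be done with the standard square-root-free reflection trick on barycentric coordinates; this sketch and its function name are illustrative, not taken from the simulator:

```python
import numpy as np

def sample_triangle(a, b, c, n, rng):
    """Draw n points uniformly from the triangle with vertices a, b, c."""
    a, b, c = (np.asarray(v, float) for v in (a, b, c))
    u = rng.random(n)
    v = rng.random(n)
    # fold points from the upper-right half of the unit square back
    # into the lower-left half, so (u, v) is uniform over u + v <= 1
    mask = u + v > 1.0
    u[mask], v[mask] = 1.0 - u[mask], 1.0 - v[mask]
    return a + u[:, None] * (b - a) + v[:, None] * (c - a)
```

Every generated point is a convex combination of the three vertices, so all samples lie inside the triangle.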

Another example: three different times during the process of learning a rectangular shape, with a net of 2 input neurons connected to 199 output neurons lying on a line.

The Kohonen feature map module

The panel to drive the simulation, with its option panel allowing the learning rule to be selected.

