

An example: image compression

As an example of Hebbian algorithms, an application for image compression has been added to INNE. The proliferation of multimedia tools in computer communication networks has increased the demand for techniques that improve the efficiency of image transmission and storage, and a large variety of image compression algorithms has therefore been developed. The basic idea behind one class of such algorithms is to exploit the fact that nearby pixels in an image are often highly correlated. A given image is divided into several blocks of pixels, and each block, treated as a vector, is linearly transformed into a vector whose components are mutually uncorrelated. These components are then independently quantized for transmission or storage. The original image is reconstructed by applying the inverse linear transform to the quantized coefficient vector. The optimal transform, the one that minimizes the average mean-squared reconstruction error, is Principal Component Analysis (also known as the Karhunen-Loève transform).
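As a concrete illustration, the following Python/NumPy sketch performs this block-based transform coding with PCA computed by eigendecomposition; the random test image, the 8 x 8 block size, and the choice of 8 retained components are illustrative assumptions, not values taken from INNE's code.

    import numpy as np

    def blocks_from_image(img, b=8):
        """Split a grayscale image into flattened b x b blocks, one per row."""
        h, w = img.shape
        img = img[:h - h % b, :w - w % b]          # crop to a multiple of b
        tiles = img.reshape(h // b, b, w // b, b).swapaxes(1, 2)
        return tiles.reshape(-1, b * b)

    img = np.random.rand(64, 64)                   # stand-in for a real image
    x = blocks_from_image(img)                     # shape (n_blocks, 64)
    mean = x.mean(axis=0)
    xc = x - mean
    cov = xc.T @ xc / len(xc)                      # covariance of the blocks
    vals, vecs = np.linalg.eigh(cov)               # eigenvalues, ascending order
    W = vecs[:, ::-1][:, :8]                       # top-8 principal directions
    codes = xc @ W                                 # 64 -> 8 uncorrelated coefficients
    recon = codes @ W.T + mean                     # inverse linear transform

Keeping only the leading components is what makes this a compression: each 64-pixel block is represented by 8 coefficients.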

For this example, INNE allows users to load images in the ppm (portable pixmap) format and, after execution, returns the compressed image. Users can choose either the Oja rule or the Sanger rule as the Hebbian algorithm for the compression, and the same neural network can be used both to compress and to reconstruct the whole image.
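The two learning rules can be sketched as follows. This is a generic NumPy rendering of the textbook update equations, not INNE's implementation; the network shape and the learning-rate schedule are assumptions.

    import numpy as np

    def oja_step(W, x, eta):
        """Oja's subspace rule: dW = eta * (y x^T - y y^T W)."""
        y = W @ x
        return W + eta * (np.outer(y, x) - np.outer(y, y) @ W)

    def sanger_step(W, x, eta):
        """Sanger's rule: like Oja's, but with y y^T lower-triangularized,
        so the rows converge to the ordered principal components."""
        y = W @ x
        return W + eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(8, 64))        # 64 inputs -> 8 outputs
    for t in range(30000):                         # one input pattern per step
        x = rng.normal(size=64)                    # stand-in for an image block
        W = sanger_step(W, x, eta=0.01 / (1 + t / 5000))

With the Sanger rule, each row of W converges to one principal component in order of decreasing eigenvalue; the Oja rule only spans the same subspace.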

The first choice concerns the dimension of the neural network, which determines the size of the blocks into which the image is subdivided. Different sizes were tested, but large networks, corresponding to large blocks, proved difficult to train, and the quality of the compressed image degraded, apparently because such networks generalize poorly. Blocks of 8 x 8 pixels were therefore chosen; this is a standard size used in many coding algorithms. Of course, the compressed image still needs to be quantized before being transmitted or archived.

Hebbian algorithms implement an approximation of the Karhunen-Loève transform [Her91], which minimizes the mean square error of the compressed image but is time consuming; quicker, though non-optimal, transforms are therefore commonly used. Among them, the Discrete Cosine Transform (DCT), which yields errors similar to those of the KLT in much less time, is widely adopted, for instance in the JPEG standard. Hebbian algorithms are comparable to the DCT in terms of error, but not in terms of time, since the learning phase can be lengthy.
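For comparison, a 2-D DCT on an 8 x 8 block requires no training at all, since its basis is fixed. The sketch below builds the orthonormal DCT-II matrix directly; the truncation pattern and the random test block are assumptions made for illustration.

    import numpy as np

    def dct_matrix(n=8):
        """Orthonormal DCT-II basis as an n x n matrix."""
        j = np.arange(n)
        D = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
        D[0] *= np.sqrt(1.0 / n)
        D[1:] *= np.sqrt(2.0 / n)
        return D

    D = dct_matrix(8)
    block = np.random.rand(8, 8)         # stand-in for an image block
    coeffs = D @ block @ D.T             # forward 2-D DCT, no learning needed
    coeffs[4:, :] = 0                    # crude truncation: drop high frequencies
    coeffs[:, 4:] = 0
    recon = D.T @ coeffs @ D             # inverse 2-D DCT
    mse = np.mean((block - recon) ** 2)  # reconstruction error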

Figure 5: Image compression

Fig. 5 presents an image and its compression using a neural network with 64 input neurons, organized in an 8 x 8 matrix, and 8 output neurons. The learning phase lasted 30000 learning steps, using the Sanger rule with an exponentially decaying learning rate. After the training phase, the network output is quantized with 8 bits for the first and second neurons, 6 bits for the third and fourth, and 2 bits for the others.
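This per-neuron bit allocation (8, 8, 6, 6 and 2 bits for the remaining four outputs, i.e. 36 bits per 8 x 8 block, about 0.56 bits per pixel) could be realized with a uniform quantizer along each output. The sketch below assumes such a quantizer and min/max coefficient ranges; neither detail is specified in the description of INNE.

    import numpy as np

    BITS = np.array([8, 8, 6, 6, 2, 2, 2, 2])      # bits per output neuron

    def quantize(codes, bits=BITS):
        """Uniformly quantize each coefficient column to its bit budget."""
        lo, hi = codes.min(axis=0), codes.max(axis=0)
        levels = 2 ** bits - 1
        q = np.round((codes - lo) / (hi - lo) * levels).astype(int)
        return q, lo, hi

    def dequantize(q, lo, hi, bits=BITS):
        levels = 2 ** bits - 1
        return q / levels * (hi - lo) + lo

    codes = np.random.randn(100, 8)                # stand-in for network outputs
    q, lo, hi = quantize(codes)
    approx = dequantize(q, lo, hi)

Giving more bits to the first neurons matches the transform's ordering: under the Sanger rule the leading outputs carry the most variance, so they are the most expensive to quantize coarsely.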

