INNE
a Neural Network Simulation Environment


Introduction


INNE - Interactive Neural Network Environment - is a neural network simulation environment for UNIX workstations. Its aim is to provide a tool for modelling and experimenting with neural networks: it lets you interactively design, train, and test neural networks and visualise their behaviour. The net editor can handle large networks and lets you define their structure graphically. Different neural models are available to define network behaviour.

A demo version is available for download.


Contents


Neural networks are computational models that perform computation via a densely connected net of very simple processing elements. This idea is an abstraction based on the behaviour of biological nerve cells. The application provides the background needed to master this paradigm of computation; it covers several basic models of neural computation.

Boltzmann machines
Hopfield networks

Boltzmann machines and Hopfield networks are combinatorial optimization models, examples of networks that do not change their structure dynamically. They can be used to solve combinatorial problems approximately: the Boltzmann machine implements a probabilistic optimization algorithm, called simulated annealing, that is able to escape locally optimal solutions. A learning algorithm also allows this model to be used for classification problems, although for that purpose the back-error propagation learning algorithm is more efficient.

The Hopfield model can be seen as a particular case of the more general Boltzmann machine; it finds the locally optimal solution nearest to the starting solution. It performs gradient descent, the deterministic counterpart of the Boltzmann machine's simulated annealing.
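As an illustration of how simulated annealing escapes local optima on a problem of this kind, here is a plain Python sketch for the independent-set task shown in the figure below. This is not INNE's implementation: the energy function, cooling schedule, and the `anneal_independent_set` helper are our own choices for the demo.

```python
import math
import random

def anneal_independent_set(n, edges, steps=20000, t0=2.0, seed=0):
    """Simulated annealing for the maximum independent set problem.

    Energy E = -(number of selected nodes) + 2 * (number of edges
    with both endpoints selected), so lower energy means a larger
    conflict-free subset. Flips use the Metropolis criterion.
    """
    rng = random.Random(seed)
    x = [0] * n
    adj = [set() for _ in range(n)]
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def delta_energy(i):
        # Energy change caused by flipping node i
        new = 1 - x[i]
        conflict = sum(x[j] for j in adj[i])  # selected neighbours
        return (x[i] - new) + 2.0 * conflict * (new - x[i])

    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-3   # linear cooling schedule
        i = rng.randrange(n)
        dE = delta_energy(i)
        # Accept improving flips always, worsening flips with
        # probability exp(-dE / t): this is what lets the search
        # climb out of local optima while t is still high.
        if dE <= 0 or rng.random() < math.exp(-dE / t):
            x[i] = 1 - x[i]

    # Greedy repair: drop any node still in conflict at the end
    for i in range(n):
        if x[i] and any(x[j] for j in adj[i]):
            x[i] = 0
    return [i for i in range(n) if x[i]]
```

For example, on a 5-node path graph the annealer returns a subset of mutually non-adjacent nodes.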


A Boltzmann machine that solves the problem of finding the largest subset of nodes that are not connected to each other. The panel to control the simulation.

Back-error propagation

Back-error propagation networks, a supervised model, where learning takes place under supervision, that is, by comparing the output with the correct answer. Back-error propagation is one of the most popular neural network paradigms, and it is used successfully in many applications that require pattern matching: after a supervised learning phase establishes the connection weights, the network responds to an input pattern by producing the corresponding output pattern.
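The weight-update mechanics can be sketched in a few lines of NumPy. This is an illustrative toy network learning the XOR function, not INNE's code; the layer sizes, learning rate, and epoch count are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic pattern a single-layer network cannot learn
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units (sizes chosen only for this demo)
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
lr = 1.0

losses = []
for epoch in range(5000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    losses.append(float(np.mean((Y - T) ** 2)))
    # Backward pass: the output error is propagated back layer by layer
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)
```

After training, the mean squared error has dropped well below its initial value and the network reproduces the XOR pattern.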


A net trained by the back-error propagation algorithm that recognises handwritten digits: the input pattern is provided in the bottom node matrix and the output nodes are labelled on top.

Hebbian networks

Hebbian networks, an unsupervised model, where learning takes place with no feedback from the environment, relying instead on redundancy in the input data. The Hebbian models are based on a law formulated by Donald Hebb in 1949 while studying the cellular modifications that occur in animals during learning. He observed that the connection between two neurons is strengthened when both neurons are active simultaneously.
Based on this general observation, several different mathematical rules to control the network's learning have been introduced. We implemented three rules:

  • the plain Hebb rule;
  • the Oja rule;
  • the Sanger rule, also called the generalized Hebbian learning rule.

These networks are usually used to measure the familiarity of the input data with respect to the average patterns seen in the past, or to project the input onto the principal components of a probability distribution of input data. In particular, principal component analysis makes it possible to solve problems such as image compression and texture segmentation.
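As a small illustration of the second rule in the list above, the following NumPy sketch uses the Oja rule to extract the first principal component of a two-dimensional distribution. The data set and the learning rate are invented for the demo; this is not INNE's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Strongly correlated 2-D data: the principal component lies
# close to the direction (1, 1) / sqrt(2)
s = rng.normal(size=1000)
X = np.stack([s + 0.1 * rng.normal(size=1000),
              s + 0.1 * rng.normal(size=1000)], axis=1)

w = rng.normal(size=2)   # weight vector of a single linear unit
eta = 0.01
for epoch in range(20):
    for x in X:
        y = w @ x
        # Oja rule: the Hebbian term y*x plus a decay term -y^2*w
        # that keeps the weight vector at roughly unit length
        w += eta * y * (x - y * w)
```

At convergence `w` is approximately a unit vector aligned with the first principal component of the data.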


A net to determine the first two principal components of a bidimensional distribution, and the output: the two components learned (in green). The Hebbian model panel and its option panel to control the simulation. The edit panel to define input distributions and the panel to set their values.

Simple competitive networks

Simple competitive networks, an unsupervised competitive model, where individual neurons compete to be the one that fires.

Kohonen networks

Kohonen networks, an unsupervised competitive model, where individual neurons compete to be the one that fires, as in the simple competitive model, but which additionally performs feature mapping: the weight-update rule is modified to involve neighbourhood relations in the output array.
Kohonen formulated a learning rule for self-organizing feature map networks; they can automatically form one- or many-dimensional maps of intrinsic features of the input data.
Applications of feature mapping have been made in many fields, including sensory mapping, motor control, speech recognition, vector quantization, and combinatorial optimization.
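The neighbourhood-based update that distinguishes Kohonen maps from plain competitive learning can be sketched for a one-dimensional chain of units. This is an illustrative NumPy toy, not INNE's implementation; the decay schedules for the learning rate and neighbourhood width are common textbook choices, picked here for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)

# A 1-D chain of 10 units learns to cover inputs drawn from [0, 1]
n_units = 10
W = rng.uniform(0.4, 0.6, size=n_units)   # units start bunched together
X = rng.uniform(0.0, 1.0, size=2000)

for t, x in enumerate(X):
    frac = t / len(X)
    eta = 0.5 * (1.0 - frac) + 0.01        # decaying learning rate
    sigma = 3.0 * (1.0 - frac) + 0.5       # shrinking neighbourhood width
    winner = int(np.argmin(np.abs(W - x)))
    # Unlike plain competitive learning, units near the winner
    # *in the output chain* share its update, weighted by a Gaussian
    d = np.abs(np.arange(n_units) - winner)
    h = np.exp(-(d ** 2) / (2 * sigma ** 2))
    W += eta * h * (x - W)
```

By the end of training the chain has unfolded so that the units spread out over the input interval, forming a one-dimensional map of the data.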

A net, shown with and without connecting arcs, that recognizes bidimensional shapes. The panel and the option panel to control the simulation.


Learning a rectangular shape: three different moments in the learning process.