Unsupervised learning models: an overview
In unsupervised learning there is no feedback from the environment to indicate
whether the output is correct. The network must discover patterns, features,
correlations, or categories in the input data by itself.
The network can learn by itself only if there is redundancy in the input data;
without redundancy it would be impossible to find any patterns or features in
the data.
The type of pattern that an unsupervised learning network detects in the
input data depends on its architecture. In general, unsupervised networks can be used for:
- Principal Component Analysis. We can find the eigenvector directions of
the input patterns without calculating the correlation matrix.
- Familiarity. The output tells us how similar a new input pattern is to typical or average patterns seen in the past.
- Clustering Analysis. A set of binary-valued outputs, with only one on at a time, could
tell us which of several categories an input pattern belongs to.
- Prototyping. The network might form categories as in the previous case,
but then give us as output a prototype or exemplar from the appropriate
class.
- Encoding. The output should be an encoded version of the input, in fewer
bits, keeping as much relevant information as possible.
- Feature Mapping. If the output units have a fixed geometrical arrangement with only one node active at a time, they can map similar input patterns to nearby points in this arrangement.
These cases are not necessarily distinct, and they might also be
combined in several ways.
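As an illustration of the first use above, a classic single-unit Hebbian network trained with Oja's rule extracts the leading principal component of its input without ever forming the correlation matrix. The sketch below is a minimal, hypothetical example (the data, learning rate, and variable names are all assumptions, not taken from INNE):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy zero-mean 2-D data whose dominant variance lies along a known
# direction, so we can check what the unit learns.
true_dir = np.array([3.0, 1.0]) / np.sqrt(10.0)
raw = rng.normal(size=(5000, 2)) * np.array([4.0, 0.5])  # axis-aligned
angle = np.arctan2(true_dir[1], true_dir[0])
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
data = raw @ R.T  # rotate so the principal direction is true_dir

# Oja's rule: dw = eta * y * (x - y * w), where y = w . x is the output.
# The -y^2 * w term keeps the weight vector bounded, so no explicit
# normalisation step is needed during training.
w = rng.normal(size=2)
eta = 0.01
for x in data:
    y = w @ x
    w += eta * y * (x - y * w)

w /= np.linalg.norm(w)  # unit length, for comparison only
alignment = abs(w @ true_dir)  # close to 1 when w has found the principal direction
```

The weight vector converges (up to sign) to the leading eigenvector of the input correlation matrix, which is exactly the "eigenvector directions without calculating the correlation matrix" point made above.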
Unsupervised learning architectures are mostly fairly simple; most networks
consist of a single layer and are essentially feed-forward. Except in the
case of Feature Mapping, there are usually fewer outputs than inputs.
In INNE we implemented three unsupervised learning models:
Hebbian models with different learning rules,
a simple competitive model,
and the Kohonen feature map.
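To give a feel for the second of these, the following is a generic winner-take-all competitive learning sketch, not INNE's actual implementation: each output unit's weight vector acts as a prototype, the unit nearest the input wins, and only the winner's weights move toward the input. The toy data and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated 2-D clusters as toy input data.
cluster_a = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(200, 2))
cluster_b = rng.normal(loc=(4.0, 4.0), scale=0.3, size=(200, 2))
data = rng.permutation(np.vstack([cluster_a, cluster_b]))

# Two output units; each row of w is one unit's prototype vector.
w = rng.normal(size=(2, 2))
eta = 0.1
for _ in range(10):                 # a few passes over the data
    for x in data:
        # Winner = unit whose weight vector is closest to the input.
        winner = np.argmin(np.linalg.norm(w - x, axis=1))
        # Only the winner learns: move its prototype toward the input.
        w[winner] += eta * (x - w[winner])

prototypes = sorted(w.tolist())  # each prototype should settle near a cluster centre
```

The binary "one unit on at a time" output described under Clustering Analysis above is simply the index of the winning unit, and the learned prototypes are the exemplars mentioned under Prototyping.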