NEURAL NETWORK STRUCTURE, CLASSIFICATION AND ARCHITECTURE | IEEE TOPIC




NEURAL NETWORK STRUCTURE
According to Frank Rosenblatt's theory in 1958, the basic element of a neural network is
the perceptron, which in turn has five basic elements: an n-vector input, weights, a summing
function, a threshold device, and an output. The output is either -1 or +1. The
threshold device has a setting which governs the output based on the summation of the input
vectors: if the summation falls below the threshold setting, the output is -1; if it exceeds the threshold setting, the output is +1. Figure 4 depicts the structure of a basic perceptron, which is also called an artificial neuron. The perceptron can also be regarded as a mathematical model of a biological neuron: while in actual neurons the dendrites receive electrical signals from the axons of other neurons, in the perceptron these signals are represented as numerical values.
A more technical view of a single-neuron perceptron shows that it can take an
input vector X of N dimensions (as illustrated in Figure 5). These inputs are weighted by a
vector W, also of N dimensions. The summation node computes a value "a",
the dot product of the vectors X and W plus a bias. The value "a" is then
passed through an activation function which compares it to a predefined
threshold: if "a" is below the threshold, the perceptron does not fire; if it is above the
threshold, the perceptron fires one pulse whose amplitude is predefined.
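The firing rule above can be sketched in a few lines of code. This is a minimal illustration, not Rosenblatt's original formulation; the weights, bias, and inputs are made-up example values (here chosen so the perceptron happens to compute a logical AND).

```python
def perceptron(x, w, bias, threshold=0.0):
    """Fire (+1) if the weighted sum exceeds the threshold, else output -1."""
    # "a" is the dot product of input vector X and weight vector W, plus a bias.
    a = sum(xi * wi for xi, wi in zip(x, w)) + bias
    return 1 if a > threshold else -1

# Hand-picked weights that make this perceptron behave like logical AND:
fires = perceptron([1, 1], [0.5, 0.5], bias=-0.7)      # summation 0.3 > 0, fires
silent = perceptron([1, 0], [0.5, 0.5], bias=-0.7)     # summation -0.2 < 0, does not fire
```

A single perceptron like this can only separate inputs with a straight line (a hyperplane in N dimensions), which is why more complex tasks need the layered architectures described next.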

ARCHITECTURE

1. Feed-forward Networks:

Feed-forward ANNs allow signals to travel one way only: from input to output. There is
no feedback (loops), i.e. the output of any layer does not affect that same layer. Feed-forward
ANNs tend to be straightforward networks that associate inputs with outputs.
They are extensively used in pattern recognition. This type of organisation is also referred
to as bottom-up or top-down.
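The one-way flow can be sketched as a chain of layer functions, where each layer's output feeds only the next layer and never loops back. The weights here are arbitrary illustrative values, not from the text.

```python
def layer(inputs, weights):
    """One fully connected layer: weighted sums of the inputs, no feedback."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

x = [1.0, 2.0]
hidden = layer(x, [[0.1, 0.2], [0.3, 0.4]])    # signals flow input -> hidden
output = layer(hidden, [[1.0, -1.0]])          # then hidden -> output, one way only
```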

2. Feedback Networks:

Feedback networks can have signals travelling in both directions by introducing loops into
the network. They are very powerful and can become extremely complicated.
Feedback networks are dynamic: their 'state' changes continuously until they reach an
equilibrium point, where they remain until the input changes and a new
equilibrium must be found. Feedback architectures are also referred to as interactive
or recurrent, although the latter term is often used to denote feedback connections in
single-layer organisations.
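The settle-to-equilibrium behaviour can be sketched with a tiny feedback network whose state is fed back through the connections until it stops changing. The update rule and weight values below are illustrative assumptions (chosen so the dynamics contract to a fixed point), not a prescription from the text.

```python
import math

def settle(state, weights, tol=1e-9, max_steps=1000):
    """Feed the state back through the network until it reaches equilibrium."""
    for step in range(max_steps):
        new_state = [
            math.tanh(sum(w * s for w, s in zip(row, state)))
            for row in weights
        ]
        # Equilibrium: the state no longer changes between iterations.
        if max(abs(n - s) for n, s in zip(new_state, state)) < tol:
            return new_state, step
        state = new_state
    return state, max_steps

W = [[0.0, 0.5], [0.5, 0.0]]          # symmetric feedback connections between two units
equilibrium, steps = settle([1.0, -1.0], W)
```

With these small weights the dynamics are a contraction, so the state relaxes to a stable point in a few dozen steps; larger weights can produce oscillation or much more complicated behaviour, which is the sense in which feedback networks "can become extremely complicated".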

3. Network Layers:

The commonest type of artificial neural network consists of three groups, or layers, of
units: a layer of "input" units is connected to a layer of "hidden" units, which is
connected to a layer of "output" units.

1. The activity of the input units represents the raw information that is fed
into the network.
2. The activity of each hidden unit is determined by the activities of the
input units and the weights on the connections between the input and the
hidden units.
3. The behaviour of the output units depends on the activity of the hidden
units and the weights between the hidden and output units.

This simple type of network is interesting because the hidden
units are free to construct their own representations of the input. The weights
between the input and hidden units determine when each hidden unit is active, and
so by modifying these weights, a hidden unit can choose what it represents.
We also distinguish single-layer and multi-layer architectures. The single-layer
organisation, in which all units are connected to one another, constitutes the most
general case and has more potential computational power than hierarchically
structured multi-layer organisations. In multi-layer networks, units are often
numbered by layer instead of following a global numbering.
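The three-layer computation described above (input activity -> hidden activity -> output behaviour) can be sketched as a single forward pass. The sigmoid activation and all weight values are illustrative assumptions, not taken from the text.

```python
import math

def sigmoid(a):
    """Smooth activation squashing any weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a))

def forward(x, w_ih, w_ho):
    """Three-layer pass: hidden activity from inputs and input->hidden weights,
    output behaviour from hidden activity and hidden->output weights."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_ih]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w_ho]

y = forward([1.0, 0.0],
            w_ih=[[2.0, -1.0], [-1.0, 2.0]],   # input -> hidden weights
            w_ho=[[1.0, 1.0]])                 # hidden -> output weights
```

Changing `w_ih` changes when each hidden unit is active, which is exactly the sense in which a hidden unit "chooses what it represents".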