Artificial neural networks are inspired by biological neural networks and work by mimicking the same concept. The idea originated from the study of information processing in biological systems and its mathematical representation by McCulloch & Pitts (1943). The basic building unit of a biological neural network is the neuron, known as a perceptron in an artificial neural network. These units perform very simple functions, but when combined they can build very complex classification functions whose effectiveness can be increased through large-scale parallelization (Rojas, 2013).

1.1 Perceptron
The idea of a hypothetical nervous system known as the perceptron was given by Rosenblatt (1958).
The working of a perceptron mimics a biological neuron. A neuron has dendrites through which information flows into the cell body, where it is processed and then passed to the axon, which connects to the dendrites of the next neuron. Similarly, a perceptron has multiple inputs, a processing stage and a single output. The output is computed by taking the weighted sum of the inputs, adding a bias, and applying an activation function, as illustrated in figure 3.1.

Figure 3.1: Illustration of biological and artificial neuron

A simple activation function will give a binary output.
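The computation just described (weighted sum plus bias, passed through a binary step activation) can be sketched as a minimal perceptron in Python. The weights and bias below are hand-picked for illustration, not learned:

```python
def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A simple step activation gives a binary output.
    return 1 if total > 0 else 0

# Example: with these hand-chosen weights the perceptron acts as a logical AND.
def and_gate(a, b):
    return perceptron([a, b], weights=[1.0, 1.0], bias=-1.5)

print(and_gate(1, 1))  # -> 1
print(and_gate(1, 0))  # -> 0
```

The bias shifts the decision threshold: the output fires only when the weighted sum exceeds 1.5, which both inputs must be active to reach.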
An artificial neural network consisting of a single neuron is too trivial for complex tasks. To address a wider set of problems, neurons can be combined to form multi-layer perceptrons, or feed forward networks.

1.2 Types of Artificial Neural Network
There are different types of ANN depending on the number of layers, their functionality and the flow of data. The major categories are explained below.
1.2.1 Single Layer Feed Forward Network
The simplest kind of artificial neural network is a single layer feed forward, or acyclic, network, which consists of an input layer of source nodes projecting onto an output layer. In such networks there are no connections from neurons in the output layer back to neurons in the input layer. Since no computation takes place in the input layer, it is not counted, so these are called single layer networks, as shown in figure 3.2.

Figure 3.2: Representation of a single layer feed forward network (Mas & Flores, 2008)

1.2.2 Multilayer Feed Forward Network
A feed forward multilayer network contains multiple neurons arranged in layers.
Neurons in adjacent layers have connections between them. The network consists of three kinds of layers: an input layer, one or more hidden layers and an output layer. The input neurons in the first layer do not perform any calculations; the hidden layers perform the calculations.
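The layered computation described above can be sketched as a forward pass through a small network. The layer sizes, weights and sigmoid activation below are illustrative assumptions, not values from the text:

```python
import math

def dense(inputs, weights, biases):
    # One fully connected layer: each neuron takes a weighted sum plus a bias.
    return [sum(x * w for x, w in zip(inputs, ws)) + b
            for ws, b in zip(weights, biases)]

def sigmoid(values):
    return [1.0 / (1.0 + math.exp(-v)) for v in values]

# A 2-3-1 network: two inputs, one hidden layer of three neurons, one output.
hidden_w = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
hidden_b = [0.0, 0.1, -0.1]
out_w = [[0.7, -0.5, 0.2]]
out_b = [0.05]

def forward(x):
    # The input layer does no computation; data flows forward only.
    h = sigmoid(dense(x, hidden_w, hidden_b))   # hidden layer
    return sigmoid(dense(h, out_w, out_b))      # output layer

print(forward([1.0, 0.5]))  # a single value between 0 and 1
```

In supervised training the weights would be adjusted by backpropagation to reduce the error between this output and the desired label; here they are fixed to show the forward data flow only.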
Inputs in these networks are labelled, so an MLP uses supervised learning through the backpropagation algorithm: the network knows the desired output for each given input. The data flows from the input nodes to the output nodes, passing through the hidden nodes, as shown in figure 3.3. The data flows only in the forward direction; there is no backward passing of data, and no loops or cycles as in recurrent neural networks (Goodfellow, Bengio, & Courville, 2016).

Figure 3.3: Multi-layer feed forward network

1.2.3 Recurrent Network
A recurrent network resembles a feed forward network, with an input layer and one or more hidden layers before an output layer, but it contains at least one feed-backward loop, as shown in figure 3.4. Recurrent networks differ from feed forward networks because of this feed-backward loop: they feed their own output back in as input from one moment to the next.

Figure 3.4: Recurrent neural network

Feed-forward networks are believed to achieve high performance on vision and speech problems (Bengio, 2009). In this research a convolutional neural network, which is a kind of feed forward network, is used.
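The feed-backward loop that distinguishes a recurrent network can be sketched as a state update applied at every time step, where the previous output is fed back in alongside the new input. The scalar weights and tanh activation below are illustrative assumptions:

```python
import math

def rnn_step(x, h, w_x, w_h, b):
    # The previous output h is fed back in together with the new input x.
    return math.tanh(w_x * x + w_h * h + b)

# Unrolling the loop over a short input sequence; h carries state forward.
h = 0.0
for x in [1.0, 0.5, -0.5]:
    h = rnn_step(x, h, w_x=0.8, w_h=0.3, b=0.0)
print(h)  # final hidden state, a value in (-1, 1)
```

This contrasts with the feed forward networks above, where each output depends only on the current input and no state is carried between steps.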