
Course Outcome (CO)                                                  Bloom’s Knowledge Level (KL)
At the end of the course, the student will be able to:

CO 1  Study the basic concepts of neurocomputing, neuroscience and ANN.
      Understand the performance of different supervised and unsupervised
      neural networks.                                                        K1, K2
CO 2  Study the basic models of neural networks. Understand the perceptron
      network and compare neural networks and their algorithms.               K2, K3
CO 3  Study and demonstrate different types of neural networks. Make use of
      neural networks for a specified problem domain.                         K2, K3, K4
CO 4  Understand and identify the basic design requirements of recurrent
      networks and self-organizing feature maps.                              K1, K2
CO 5  Understand some special networks and the concept of soft computing.     K1, K2, K3
                                                            DETAILED SYLLABUS
Unit                   Topic                                                                                                        Proposed Lecture


I Neurocomputing and Neuroscience: The human brain, biological neurons, neural
processing, biological neural network.
Artificial Neural Networks: Introduction, historical notes, neuron model, knowledge
representation, comparison with biological neural network, applications.
Learning process: Supervised learning, unsupervised learning, error correction
learning, competitive learning, adaptation learning, statistical nature of the
learning process.
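
Illustrative sketch (not part of the prescribed syllabus): a minimal error-correction
(delta-rule) update for a single linear neuron, assuming NumPy; the toy regression
task, learning rate and variable names are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task: learn y = 2*x1 - x2 from noisy samples.
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.05 * rng.normal(size=200)

w = np.zeros(2)          # synaptic weights
b = 0.0                  # bias
eta = 0.05               # learning rate

for epoch in range(20):
    for x_i, d_i in zip(X, y):
        out = w @ x_i + b        # neuron output y(n)
        err = d_i - out          # error signal e(n) = d(n) - y(n)
        w += eta * err * x_i     # delta rule: change w in proportion to e * x
        b += eta * err

print("learned weights:", w, "bias:", b)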

II Basic Models: McCulloch-Pitts neuron model, Hebb net, activation functions,
aggregation functions.
Perceptron networks: Perceptron learning, single layer perceptron networks,
multilayer perceptron networks.
Least mean square algorithm, gradient descent rule, nonlinearly separable problems
and benchmark problems in NN.
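
Illustrative sketch (not part of the prescribed syllabus): the perceptron learning
rule on the linearly separable AND problem, assuming NumPy; the step threshold,
learning rate and epoch count are arbitrary illustrative choices.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)   # AND targets

w = np.zeros(2)
b = 0.0
eta = 0.1

def step(v):
    return 1.0 if v >= 0 else 0.0

for epoch in range(20):
    errors = 0
    for x_i, t_i in zip(X, t):
        y_i = step(w @ x_i + b)
        if y_i != t_i:
            # Perceptron rule: move weights toward the misclassified target.
            w += eta * (t_i - y_i) * x_i
            b += eta * (t_i - y_i)
            errors += 1
    if errors == 0:       # converged: every pattern classified correctly
        break

print("weights:", w, "bias:", b)
for x_i in X:
    print(x_i, "->", step(w @ x_i + b))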

III Multilayer neural network: Introduction, comparison with single layer networks.
Back propagation network: Architecture, back propagation algorithm, local minima
and global minima, heuristics for making the back propagation algorithm perform
better, applications.
Radial basis function network: Architecture, training algorithm, approximation
properties of RBF networks, comparison of radial basis function network and back
propagation networks.
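
Illustrative sketch (not part of the prescribed syllabus): the back propagation
algorithm for a small two-layer sigmoid network trained on XOR with batch gradient
descent, assuming NumPy; the hidden-layer size, learning rate and epoch count are
illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
eta = 0.5

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

for epoch in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Backward pass: propagate the output error toward the input layer.
    dY = (Y - T) * Y * (1 - Y)          # delta at output layer
    dH = (dY @ W2.T) * H * (1 - H)      # delta at hidden layer

    W2 -= eta * H.T @ dY
    b2 -= eta * dY.sum(axis=0)
    W1 -= eta * X.T @ dH
    b1 -= eta * dH.sum(axis=0)

print(np.round(Y, 3))   # outputs should approach [0, 1, 1, 0]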

IV Recurrent network: Introduction, architecture and types.
Self-organizing feature map: Introduction, determining the winner, Kohonen
self-organizing feature map (SOM) architecture, SOM algorithm, properties of the
feature map; Learning vector quantization: architecture and algorithm.
Principal component and independent component analysis.
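
Illustrative sketch (not part of the prescribed syllabus): winner determination and
neighbourhood weight update in a Kohonen SOM on toy 2-D data, assuming NumPy; the
map size, decay schedules and data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
data = rng.uniform(size=(500, 2))           # toy inputs in the unit square

grid = 5                                    # 5 x 5 map of units
W = rng.uniform(size=(grid, grid, 2))       # one weight vector per unit
coords = np.dstack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"))

epochs = 30
for epoch in range(epochs):
    eta = 0.5 * (1 - epoch / epochs)           # decaying learning rate
    sigma = 2.0 * (1 - epoch / epochs) + 0.5   # decaying neighbourhood width
    for x in data:
        # Determine the winner: the unit whose weight vector is closest to x.
        dist = np.linalg.norm(W - x, axis=2)
        wi, wj = np.unravel_index(dist.argmin(), dist.shape)
        # Gaussian neighbourhood around the winner, measured on the map grid.
        grid_dist = np.linalg.norm(coords - np.array([wi, wj]), axis=2)
        h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
        # Pull the winner and its neighbours toward the input.
        W += eta * h[:, :, None] * (x - W)

print("trained weight grid shape:", W.shape)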

V Special networks: Cognitron, support vector machines, complex-valued NN and
complex-valued BP.
Soft computing: Introduction, Overview of techniques, Hybrid soft computing
techniques.
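
Illustrative sketch (not part of the prescribed syllabus): a linear support vector
machine trained by sub-gradient descent on the regularized hinge loss (a common
simplification of SVM training), assuming NumPy; the toy data and hyper-parameters
are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

# Two separable Gaussian blobs labelled +1 and -1.
X = np.vstack([rng.normal(loc=+2, size=(50, 2)),
               rng.normal(loc=-2, size=(50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

w = np.zeros(2)
b = 0.0
lam = 0.01      # regularization strength
eta = 0.1       # learning rate

for epoch in range(200):
    for x_i, y_i in zip(X, y):
        margin = y_i * (w @ x_i + b)
        if margin < 1:
            # Sub-gradient of hinge loss plus L2 penalty for a margin violation.
            w -= eta * (lam * w - y_i * x_i)
            b -= eta * (-y_i)
        else:
            w -= eta * lam * w   # only the regularizer contributes here

acc = np.mean(np.sign(X @ w + b) == y)
print("training accuracy:", acc)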

Suggested Readings:
1. Kumar S., “Neural Networks: A Classroom Approach”, McGraw Hill.
2. Haykin S., “Neural Networks: A Comprehensive Foundation”, Pearson Education.
3. Yegnanarayana B., “Artificial Neural Networks”, Prentice Hall of India.
4. Freeman J. A., “Neural Networks”, Pearson Education.
5. James F., “Neural Networks: Algorithms, Applications and Programming Techniques”, Pearson Education.