Artificial Neural Network Seminar Report
Post: #1

This seminar is about applications of artificial neural networks in the process industry. An artificial neural network (ANN) is a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs. In recent times the study of ANN models has gained rapid and increasing importance because of their potential to offer solutions to some of the problems in computer science and artificial intelligence. Instead of performing a program of instructions sequentially, neural network models explore many competing hypotheses simultaneously using parallel nets composed of many computational elements. No prior assumptions about the process need to be made, because no explicit functional relationship has to be established. Computational elements in neural networks are nonlinear models and are also fast; this nonlinearity allows the result to be more accurate than that of other methods. The details of different neural networks and their learning algorithms are presented, and it is clearly illustrated how a multilayer neural network identifies a system using forward and inverse modeling approaches and generates a control signal. The methods presented here are direct inverse, direct adaptive, internal model, and direct model reference control based on ANN techniques.
Artificial neural networks have emerged from studies of how the brain performs. The human brain consists of many millions of individual processing elements, called neurons, that are highly interconnected.
Information from the outputs of other neurons, in the form of electric pulses, is received by the cell at connections called synapses. The synapses connect to the cell inputs, or dendrites, and the single output of the neuron appears at the axon. An electric pulse is sent down the axon when the total input stimulus from all of the dendrites exceeds a certain threshold.
Artificial neural networks are made up of simplified individual models of the biological neuron that are connected together to form a network. Information is stored in the network in the form of weights, or different connection strengths, associated with the synapses in the artificial neuron models.
Many different types of neural networks are available; multilayer neural networks are the most popular and are extremely successful in pattern recognition problems. An artificial neuron model is shown below. Each neuron input is weighted by W; changing the weights of an element will alter the behaviour of the whole network. The output y is obtained by summing the weighted inputs to the neuron and passing the result through a nonlinear activation function, f().
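The neuron model described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original report; the sigmoid is used here as one common choice of the nonlinear activation f(), and the input, weight and bias values are arbitrary examples.

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs passed through a nonlinear activation f()."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid as the nonlinearity f()

# Example: three inputs, three weights, one bias
y = neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.6], bias=0.1)
```

Changing any weight changes the weighted sum and hence the output, which is exactly how learning alters the behaviour of the whole network.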

Multilayer networks consist of an input layer, one or more hidden layers, and an output layer, each made up of a number of nodes. Data flows through the network in one direction only, from input to output; hence this type of network is called a feed-forward network. A two-layer network is shown below.
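The one-way flow of data through such a network can be sketched as follows. This is an illustrative toy (layer sizes and weight values are invented), showing only how each layer's outputs feed the next layer:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def feedforward(x, hidden_w, output_w):
    """Data flows one way only: input -> hidden layer -> output layer."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in hidden_w]
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden))) for ws in output_w]

# Two inputs, two hidden nodes, one output node (arbitrary weights)
out = feedforward([1.0, 0.5], [[0.2, -0.4], [0.7, 0.1]], [[0.5, -0.3]])
```

There are no backward or lateral connections in the forward pass; that is what distinguishes a feed-forward network from recurrent architectures.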

Artificial neural networks are implemented as software packages in computers and are being used to incorporate artificial intelligence in control systems. ANNs are basically mathematical tools designed to employ principles similar to the neuron networks of biological systems. ANNs are able to emulate the information processing capabilities of biological neural systems, and they have overcome many of the difficulties that conventional adaptive control systems suffer while dealing with the nonlinear behaviour of processes.
In realistic applications the design of an ANN system is a complex, usually iterative and interactive task. Although it is impossible to provide an all-inclusive algorithmic procedure, the following highly interrelated, skeletal steps reflect typical efforts and concerns. The plethora of possible ANN design parameters includes:
The interconnection strategy/network topology/network structure.
Unit characteristics (may vary within the network and within subdivisions within the network such as layers).
Training procedures.
Training and test sets.
Input/output representation and pre- and post-processing.

Their ability to represent nonlinear relations makes them well suited for nonlinear modeling in control systems.
Adaptation and learning in uncertain systems through off-line and on-line weight adaptation.
Parallel processing architecture allows fast processing for large-scale dynamic systems.
Neural networks can handle a large number of inputs and can have many outputs.
Neural network architectures have learning algorithms associated with them. The most popular network architecture used for control purposes is the multilayer neural network [MLNN] with the error back-propagation [EBP] algorithm.
Learning rules are algorithms for slowly altering the connection weights to achieve a desirable goal, such as minimization of an error function. The following are the commonly used learning algorithms for neural networks:
Multi layer neural net (MLNN)
Error back propagation (EBP)
Radial basis functions (RBF)
Reinforcement learning
Temporal difference learning
Adaptive resonance theory (ART)
Genetic algorithm
Selection of a particular learning algorithm depends on the network and the network topology. MLNN with EBP is the most extensively used and widely accepted network for process applications, namely for identification and control of processes.


There has been an explosion of neural network applications in the areas of process control engineering in the last few years. Since it is very difficult to obtain a model of a complex nonlinear system due to its unknown dynamics and noise, a non-classical technique is required that has the ability to model the physical process accurately. Since nonlinear governing relationships can be handled very conveniently by neural networks, these networks offer a cost-effective solution to the modeling of time-varying chemical processes.

Modeling of the process using an ANN is carried out in one of the following two ways:
Forward modeling
Direct inverse modeling
The basic configuration is used for nonlinear system modeling and identification with a neural network. The number of input nodes specifies the dimension of the network input. In the system identification context, the plant inputs and outputs are assigned to the network input vector.
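One common way to assemble the network input vector in the identification context is to stack recent plant outputs and control inputs into a regressor, with the next plant output as the training target. The sketch below is illustrative only; the lag counts (two of each signal) and the signal values are assumptions, not taken from the report:

```python
def regressor(y_hist, u_hist, ny=2, nu=2):
    """Network input vector for forward modeling: the ny most recent plant
    outputs and the nu most recent control inputs, newest first."""
    return y_hist[-ny:][::-1] + u_hist[-nu:][::-1]

# The training target paired with this vector would be the next output y(k+1).
x = regressor([0.1, 0.2, 0.3], [1.0, 0.5, 0.25])
```

The dimension of this vector (ny + nu) fixes the number of input nodes of the identification network.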
This approach employs a generalized model suggested by Psaltis et al. to learn the inverse dynamic model of the plant as a feed-forward controller. During the training stage, the control inputs are chosen randomly within their working range, and the corresponding plant output values are stored. Such training of the controller cannot guarantee the inclusion of all possible situations that may occur in the future; thus, the developed model lacks robustness.
The design of the identification experiment used to generate data for training the neural network models is crucial, particularly in nonlinear problems. The training data must contain process input-output information over the entire operating range. In such experiments, the types of manipulated-variable signals used are very important.
The traditional pseudo-random binary sequence (PRBS) is inadequate because the training data set then contains most of its steady-state information at only two levels, allowing only a linear model to be fitted. To overcome this problem with binary signals and to provide data points throughout the range of the manipulated variables, the PRBS must be replaced by a multilevel sequence. This kind of process modeling plays a vital role in the ANN-based direct inverse control configuration.
The ANN-based control configurations discussed here are:
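A multilevel sequence of the kind described can be generated very simply. The sketch below is an illustrative stand-in, not a formal multilevel PRBS generator; the level set, hold time and seed are arbitrary assumptions. Each randomly chosen level is held for several samples so the plant response near that level is captured in the training data:

```python
import random

def multilevel_prs(levels, n_steps, hold=5, seed=0):
    """Multilevel pseudo-random sequence: holds each randomly chosen level
    for `hold` samples so steady-state data is gathered at every level."""
    rng = random.Random(seed)
    signal = []
    while len(signal) < n_steps:
        signal.extend([rng.choice(levels)] * hold)
    return signal[:n_steps]

# Three levels spanning the manipulated-variable range instead of two
sig = multilevel_prs([0.0, 0.5, 1.0], 40)
```

With more than two levels, the resulting data set contains steady-state information across the operating range, which a binary sequence cannot provide.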
Direct inverse control
Direct adaptive control
Indirect adaptive control
Internal model control
Model reference adaptive control
This control configuration uses the inverse plant model. For direct inverse control, the network is required to be trained offline to learn the inverse dynamics of the plant. The networks are usually trained using the output errors of the networks and not those of the plant. The output error of the network is defined as

En = ud - on

where En is the network output error, ud is the actual control signal required to get the desired process output, and on is the network output. When the network is to be trained as a controller, the output errors of the network are unknown. Once the network trained using direct inverse modeling has learned the inverse system model, it is placed directly in series with the plant to be controlled, in the configuration shown in the figure. Since the inverse model of the plant is trained offline, it lacks robustness.
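The series configuration can be illustrated with a toy example. The plant and inverse-model functions below are invented stand-ins (a linear gain-2 plant and its exact inverse), not anything from the report; they only show how the trained inverse network supplies the control signal directly from the desired output:

```python
def direct_inverse_control(inverse_model, plant, setpoints):
    """The trained inverse model is placed in series with the plant:
    for each desired output it supplies the control signal directly."""
    outputs = []
    for yd in setpoints:
        u = inverse_model(yd)     # network trained offline: yd -> u
        outputs.append(plant(u))  # plant driven by that control signal
    return outputs

# Toy plant y = 2u with exact inverse u = y/2: the plant tracks the setpoint
tracked = direct_inverse_control(lambda y: y / 2.0, lambda u: 2.0 * u, [1.0, 2.0, 3.0])
```

The weakness noted in the text shows up when the plant drifts away from the model the inverse was trained on: the open-loop series connection has no feedback to correct the resulting error.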

In direct adaptive control, the network is trained online and the connection weights are updated during each sampling interval. In this case, the cost function is the plant output error rather than the network output error. The configuration of DAC is shown in the figure.

The limitation of this configuration is that one must have some knowledge of the plant dynamics, i.e., the Jacobian matrix of the plant. To solve this problem, Psaltis et al. initially proposed a technique for determining the partial derivatives of the plant at its operating point. Xianzhang et al. and Yao Zhang et al. presented a simple approach in which the modifications of the weights are carried out using only the sign of the plant Jacobian.

Narendra K. S. et al. presented an indirect adaptive control strategy. In this approach, two neural networks are used: one for controller purposes and another for plant modeling. The latter, called the plant emulator, decides the performance of the controller. The configuration of the indirect adaptive control scheme is shown in Fig. 3.

In direct learning of the neural controller, it is well known that the partial derivatives of the controlled plant output with respect to the plant input (the plant Jacobian) are required. One method to overcome this problem is to use an NN to identify the plant and to calculate its Jacobian. Since the plant emulator's learning converges before the neural controller's learning begins, an effective neural control system is achieved.
The IMC uses two neural networks for implementation. In this configuration, one neural network is placed in parallel with the plant and the other neural network in series with the plant. The structure of the nonlinear IMC is shown in Fig. 4.

The IMC provides a direct method for the design of nonlinear feedback controllers. If a good model of the plant is available, the closed-loop system gives exact set-point tracking despite immeasurable disturbances acting on the plant.

For the development of NN based IMC, the following two steps are required:
Plant identification
Plant inverse model
Plant identification is carried out using the forward modeling technique. Once the network is trained, it represents the dynamics of the plant. The error signal used to adjust the network weights is the difference between the plant output and the model output.
The neural network used to represent the inverse of the plant (NCC) is trained using the plant itself. The error signal used to train the plant inverse model is the difference between the desired plant output and the model output.
The neural network approximates a wide variety of nonlinear control laws by adjusting the weights during training to achieve the desired approximation accuracy. One possible MRAC structure based on a neural network is shown below.
In this configuration, the control system attempts to make the plant output Yp(t) follow the reference model output asymptotically. The error signal used to train the neural network controller is the difference between the model and plant outputs. In principle, this network works like the direct adaptive neural control system.

Applications | Type of ANN control architecture | Learning algorithm | Remark
Modeling of stirred tank system and interpretation of biosensor data | MLNN | EBP | Nonlinear
Cart-pendulum identification | MLNN | EBP | Nonlinear
Biomass and penicillin estimation for an industrial fermentation | MLNN | EBP | Networks are trained to learn process nonlinearity from process input/output data to develop estimators
Interpretation of biosensor data | MLNN | EBP | BP network is used for the quantitative interpretation problem
Fault detection and control of reactor | MLNN | LMS & EBP algorithms | Simulation study on a nonlinear model of the reactor
Temperature control of reactor | Direct inverse control | EBP | Offline training of the controller is carried out to learn the inverse dynamics of the plant
Ship course-keeping control | Direct adaptive control | EBP | Fuzzy logic is used along with neural control
Tracking control of DC motor | Direct inverse control | EBP | Makes use of a look-up table for calculating the control actions
Mobile robot control | Direct inverse control | EBP | Makes use of sensor data for decision purposes
Nonlinear air-handling plant | Predictive control | EBP | Nonlinear control
This paper presents the state of ANN applications in process control. The ability of MLNNs to model arbitrary nonlinear processes is used for the identification and control of complex processes. Since the unknown complex systems are modeled online and are controlled by input/output-dependent neural networks, the control mechanisms are robust to varying system parameters. It is found that MLNNs with the EBP training algorithm are best suited for identification and control, since the learning is of a supervised nature and can handle the nonlinearity present in plants with only input/output information. However, there are difficulties in implementing MLNN with EBP, such as the selection of the learning rate, the momentum factor, the network size, etc.
Thus it becomes essential to have some concrete guidelines for selecting the network. Further, there is a lot of scope for developing different effective ANN-based configurations for the identification and control of complex processes.
[1] L. M. Waghmare, Dr. Vinod Kumar and Dr. Saxena, Electrical India, January 1998.
Post: #2


Artificial Neural Network

An idea
Post: #3


A neural network, also known as a parallel distributed processing network, is a computing solution that is loosely modeled after cortical structures of the brain.
It consists of interconnected processing elements called nodes or neurons that work together to produce an output function.
Neurons (also known as neurones and nerve cells) are electrically excitable cells in the nervous system that function to process and transmit information.
Neural networks process information in parallel.
Neural networks are robust and can tolerate error or failure.
'Neural network' has two distinct connotations: biological neural networks and artificial neural networks.
Post: #4
Artificial Neural Network

Many tasks which seem simple for us, such as reading a handwritten note or recognizing a face, are difficult even for the most advanced computer. In an effort to increase the computer's ability to perform such tasks, programmers began designing software to act more like the human brain, with its neurons and synaptic connections. Thus the field of "artificial neural networks" was born. Rather than employing the traditional method of one central processor (such as a Pentium) carrying out many instructions one at a time, artificial neural network software analyzes data by passing it through several simulated processors which are interconnected with synaptic-like "weights".
Once we have collected several records of the data we wish to analyze, the network runs through them and "learns" how the inputs of each record may be related to the result. After training on a few dozen cases, the network begins to organize and refine its own architecture to fit the data, much like the human brain: it learns from example.
This "reverse engineering" technology was once regarded as the best-kept secret of large corporations, government and academic researchers.
The field of neural networks was pioneered by Bernard Widrow of Stanford University in the 1950s.
Post: #5
Neural Network

Soft computing differs from conventional (hard) computing in that, unlike hard computing, it is tolerant of imprecision, uncertainty, partial truth and approximation. In effect, the role model for soft computing is the human mind.
The principal constituents, i.e., tools and techniques, of soft computing are fuzzy logic, neural networks, genetic algorithms, etc.

Evolution of Neural Networks

Studied the brain
Each neuron in the brain has a relatively simple function
But - 10 billion of them (60 trillion connections)
Act together to create an incredible processing unit
The brain is trained by its environment
Learns by experience
Compensates for problems
by massive parallelism

Inspiration from neurobiology

A neuron: many-inputs / one-output unit
output can be excited or not excited
incoming signals from other neurons determine if the neuron shall excite ("fire")
Output subject to attenuation in the synapses, which are junction parts of the neuron
Basically, a biological neuron receives inputs from other sources, combines them in some way, performs a generally nonlinear operation on the result, and then outputs the final result.


Inputs: dendrites
Processing: soma
Outputs: axons
Synapses: electrochemical contact between neurons
Post: #6


T. Anu Prasanth, K. S. Bhaskar

The neural net controls the system in normal operation, but it will also control the system during unforeseen occurrences.

We can start the training using time-domain signals and event inputs, and observe the network response in detecting or not detecting the parameter correctly, noting the percentage of inputs that gives the best inference.

The Gabor transform was shown to be useful for an NMR-signal network for better chemical identification.

Post: #7
Does your site have a seminar report on the fundamentals of neural networks, which includes what a neural network is, learning processes, etc.?
Post: #8



The power system often comes across situations where there is insulation failure of equipment, flashover of lines initiated by a lightning stroke, or accidental faulty operation. The system must be protected against the flow of heavy currents (which can cause permanent damage to major equipment) by disconnecting the faulty part of the system by means of circuit breakers operated by protective relaying. A power system comprises synchronous generators, transformers, lines and loads. So it is the responsibility of each and every worker or engineer to detect and repair faults without damage to the power system. The fault diagnosis of a power system provides an effective means to get information for system restoration and maintenance of the power system. Artificial intelligence has been successfully applied to fault diagnosis and system monitoring. Expert systems are used, by defining rules, for fault diagnosis. In the present work a particular new AI method, namely the artificial neural network, is used for diagnosing power system faults.

A study has been made by taking a sample model of a power system. All possible faults of the system were diagnosed and predicted with the help of an auto-configuring artificial neural network, namely the radial basis function network, and the comprehensive study reveals that the proposed method is more efficient, faster and more reliable than the other methods used for fault diagnosis of power systems.

Over the last two decades, fault diagnosis (FD) has become an issue of primary importance in modern process automation, as it provides a basis for the reliable and safe fundamental design features of many complex engineering systems. Fault diagnosis is required to avoid power loss in different systems or even loss of human lives. Fault diagnosis aims to provide information on the time and location of faults that occur in the supervised process. A system is called a healthy system when it runs free of faults; on the contrary, a faulty system is one having deviations from the normal behaviour of the system or its instrumentation. The fault diagnosis process includes the following tasks: fault detection, which indicates that something is going wrong in the system; fault isolation, which determines the exact location of a fault; and fault identification, which determines the magnitude or severity of a fault. Faults are diagnosed by processing multiple measurements using spectrum analyses, by using the logic-reasoning approach, or by comparing the measurements to preset limit values (the limit-checking approach). Another approach is to develop an intelligent system based on expert knowledge and learning of the power system's behaviour. Current trends in the field of fault diagnosis of power systems apply artificial neural networks (ANNs) to diagnose faults of system components such as transmission lines, transformers, synchronous generators, or any other component of the system. ANNs have the capability to perform pattern recognition and diagnosis that are difficult to describe in terms of analytical diagnosis algorithms, since they can learn input patterns by themselves. Learning can be viewed as an automatic, incremental synthesis of functional mappings that represent a fault function. Unlike adaptation methods, where the emphasis is on approximating temporal properties, learning systems employ networks with large memory for approximating the spatial dependence of the fault function. Therefore, learning methods can be used not only for fault detection but also for identification of the characteristics of the fault through approximation of its functional relation to the measurable state and input variables.


Artificial intelligence (AI) is simply the way of making the computer think intelligently. It thereby provides a simple, structured approach to designing complex decision-making programs. While designing an AI system, the goal of the system must be kept in mind. A more sophisticated mechanism guides the selection of a proper response to a specific situation. This process is known as "pruning"; as its name suggests, it eliminates pathways of thought that are not relevant to the immediate objective of reaching a goal.

Artificial Neural Networks are relatively crude electronic models based on the neural structure of the brain. The brain basically learns from experience. It is natural proof that some problems that are beyond the scope of current computers are indeed solvable by small energy efficient packages. This brain modeling also promises a less technical way to develop machine solutions. This new approach to computing also provides a more graceful degradation during system overload than its more traditional counterparts. These biologically inspired methods of computing are thought to be the next major advancement in the computing industry. Even simple animal brains are capable of functions that are currently impossible for computers. Computers do rote things well, like keeping ledgers or performing complex math. But computers have trouble recognizing even simple patterns much less generalizing those patterns of the past into actions of the future. Now, advances in biological research promise an initial understanding of the natural thinking mechanism. This research shows that brains store information as patterns. Some of these patterns are very complicated and allow us the ability to recognize individual faces from many different angles. This process of storing information as patterns, utilizing those patterns, and then solving problems encompasses a new field in computing. This field, as mentioned before, does not utilize traditional programming but involves the creation of massively parallel networks and the training of those networks to solve specific problems. This field also utilizes words very different from traditional computing, words like behave, react, self-organize, learn, generalize, and forget.

AI has made a significant impact on power system research. Power system engineers have successfully applied AI methods to power system research problems like energy control, alarm processing, fault diagnosis, system restoration, voltage/var control, etc. For the last couple of years, a new AI method, namely the artificial neural network (ANN), has been used extensively in power system research. In comparison to the expert-system approach, which tries to mimic the mental process that takes place in human reasoning, the ANN tries to simulate the neural activity that takes place in the human brain. ANNs have been successfully applied to economic load dispatch, short-term load forecasting, security analysis, alarm processing, capacitor installation and EMTP problems. An attempt has been made here to solve the fault diagnosis problem in power systems using ANNs.

The principal functions of these diagnosis systems are:
1) Detection of fault occurrence
2) Identification of faulted sections
3) Classification of faults into types:

A) HIFs (high-impedance faults) or
B) LIFs (low-impedance faults)

This has been achieved through a cascaded, multilayered ANN structure. This FDS accurately identifies HIFs, which are relatively difficult to identify with other methods.

Post: #9
presented by:
Shikhir Kadian
Kapil Kanugo
Mohin Vaidya
Jai Balani

Artificial Neural Networks
Introduction to Artificial Neural Networks
Why ANN?

• Some tasks can be done easily (effortlessly) by humans but are hard for conventional paradigms on a Von Neumann machine with an algorithmic approach
1. Pattern recognition (old friends, hand-written characters)
2. Content addressable recall
3. Approximate, common sense reasoning (driving, playing piano, baseball player)
• These tasks are often ill-defined, experience based, hard to apply logic
What is an (artificial) neural network?
A set of nodes (units, neurons, processing elements)
1. Each node has input and output
2. Each node performs a simple computation by its node function
• Weighted connections between nodes
• Connectivity gives the structure/architecture of the net
• What can be computed by a NN is primarily determined by the connections and their weights
• What can an ANN do?
• Compute a known function
• Approximate an unknown function
• Pattern Recognition
• Signal Processing
• Learn to do any of the above
Biological neural activity
• Each neuron has a body, an axon, and many dendrites
1. Can be in one of the two states: firing and rest.
2. Neuron fires if the total incoming stimulus exceeds the threshold
• Synapse: thin gap between axon of one neuron and dendrite of another.
1. Signal exchange
Synaptic strength/efficiency
Backpropagation Algorithm

• Training Set
A collection of input-output patterns that are used to train the network
• Testing Set
A collection of input-output patterns that are used to assess network performance
• Learning Rate-η
A scalar parameter, analogous to step size in numerical integration, used to set the rate of adjustments
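Putting these three ingredients together, a minimal EBP training loop for a one-hidden-layer network might look as follows. This is a sketch only: the network size, learning rate η, epoch count, random seed and the OR training set are illustrative assumptions, not taken from the slides.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_ebp(train_set, n_hidden=2, eta=0.5, epochs=2000, seed=1):
    """One-hidden-layer net trained by error back-propagation: each
    pattern's output error is propagated back to adjust every weight.
    eta is the learning rate (step size of the weight adjustments)."""
    rng = random.Random(seed)
    n_in = len(train_set[0][0])
    # last weight in each list is the bias (its input is fixed at 1.0)
    w_h = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w_o = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]

    def forward(x):
        h = [sigmoid(sum(w * v for w, v in zip(ws, x + [1.0]))) for ws in w_h]
        y = sigmoid(sum(w * v for w, v in zip(w_o, h + [1.0])))
        return h, y

    for _ in range(epochs):
        for x, t in train_set:
            h, y = forward(x)
            d_o = (t - y) * y * (1.0 - y)                 # output-layer delta
            d_h = [d_o * w_o[j] * h[j] * (1.0 - h[j]) for j in range(n_hidden)]
            for j, v in enumerate(h + [1.0]):             # update output weights
                w_o[j] += eta * d_o * v
            for j in range(n_hidden):                     # update hidden weights
                for i, v in enumerate(x + [1.0]):
                    w_h[j][i] += eta * d_h[j] * v
    return lambda x: forward(x)[1]

# Train on the logical-OR truth table as a tiny training set
net = train_ebp([([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)])
```

In practice the training set adjusts the weights as above, while a separate testing set (patterns never seen during training) is used to assess how well the network generalizes.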
Post: #10
Artificial Neural Networks
Introduction to Neural Networks
Principles of Brain Processing
Brain Computer: What is it?
Biological Neurons
Brain-like Computer

• ANN as a Brain-Like Computer
Applications of Artificial Neural Networks
Image Recognition: Decision Rule and Classifier
• Is it possible to formulate (and formalize!) the decision rule, using which we can classify or recognize our objects basing on the selected features?
• Can you propose a rule by which we can definitely decide whether it is a tiger or a rabbit?
• Once we know our decision rule, it is not difficult to develop a classifier, which will perform classification/recognition using the selected features and the decision rule.
• However, if the decision rule can not be formulated and formalized, we should use a classifier, which can develop the rule from the learning process
• In most recognition/classification problems, the formalization of the decision rule is very complicated or altogether impossible.
• A neural network is a tool, which can accumulate knowledge from the learning process.
• After the learning process, a neural network is able to approximate a function, which is supposed to be our decision rule
Why neural network?
• Mathematical Interpretation of Classification in Decision Making
• Intelligent Data Analysis in Engineering Experiment
• Learning via Self-Organization Principle
• Symbol Manipulation or Pattern Recognition ?
Artificial Neuron
A Neuron

• A neuron's functionality is determined by the nature of its activation function, its main properties, its plasticity and flexibility, and its ability to approximate a function to be learned
• Artificial Neuron:
Classical Activation Functions
Principles of Neurocomputing
Threshold Neuron (Perceptron)

• Output of a threshold neuron is binary, while inputs may be either binary or continuous
• If inputs are binary, a threshold neuron implements a Boolean function
• The Boolean alphabet {1, -1} is usually used in neural networks theory instead of {0, 1}. Correspondence with the classical Boolean alphabet {0, 1} is established as follows:
Threshold Boolean Functions
• The Boolean function f is called a threshold (linearly separable) function if it is possible to find a real-valued weighting vector (w0, w1, ..., wn) such that the equation
f(x1, ..., xn) = sign(w0 + w1·x1 + ... + wn·xn)
holds for all values of the variables x from the domain of the function f.
• Any threshold Boolean function may be learned by a single neuron with the threshold activation function.
• Threshold Boolean Functions: Geometrical Interpretation
“OR” (disjunction) is an example of a threshold (linearly separable) Boolean function: the “-1s” are separated from the “1” by a line
• 1 1 → 1
• 1 -1 → -1
• -1 1 → -1
• -1 -1 → -1
• XOR is an example of a non-threshold (not linearly separable) Boolean function: it is impossible to separate the “1s” from the “-1s” by any single line
• 1 1 → 1
• 1 -1 → -1
• -1 1 → -1
• -1 -1 → 1
Threshold Neuron: Learning
• A main property of a neuron and of a neural network is their ability to learn from their environment and to improve their performance through learning.
• A neuron (a neural network) learns about its environment through an iterative process of adjustments applied to its synaptic weights.
• Ideally, a network (a single neuron) becomes more knowledgeable about its environment after each iteration of the learning process.
• Let us have a finite set of n-dimensional vectors that describe some objects belonging to some classes (let us assume for simplicity, but without loss of generality that there are just two classes and that our vectors are binary). This set is called a learning set:
• Learning of a neuron (of a network) is a process of its adaptation to the automatic identification of a membership of all vectors from a learning set, which is based on the analysis of these vectors: their components form a set of neuron (network) inputs.
• This process should be utilized through a learning algorithm.
• Let T be a desired output of a neuron (of a network) for a certain input vector and Y be an actual output of a neuron.
• If T=Y, there is nothing to learn.
• If T≠Y, then a neuron has to learn, in order to ensure that after adjustment of the weights, its actual output will coincide with a desired output
• Error-Correction Learning
• If T≠Y, then the error is δ = T − Y.
• A goal of learning is to adjust the weights in such a way that for a new actual output we will have the following:
• That is, the updated actual output must coincide with the desired output.
Error-Correction Learning
• The error-correction learning rule determines how the weights must be adjusted to ensure that the updated actual output will coincide with the desired output:
wi ← wi + α·(T − Y)·xi, i = 0, 1, ..., n (with x0 = 1 for the bias weight w0)
• α is a learning rate (should be equal to 1 for the threshold neuron, when a function to be learned is Boolean)
Learning Algorithm
• Learning algorithm consists of the sequential checking for all vectors from a learning set, whether their membership is recognized correctly. If so, no action is required. If not, a learning rule must be applied to adjust the weights.
• This iterative process has to continue either until the membership of all vectors from the learning set is recognized correctly, or until it fails only for some acceptably small number of vectors (samples from the learning set).
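The learning algorithm above can be sketched for a single threshold neuron. This is an illustrative implementation (the zero initialization and the pass limit are assumptions); it uses the error-correction rule with α = 1, and the OR table in the {1, -1} alphabet from the earlier slide serves as the learning set:

```python
def sign(z):
    return 1 if z >= 0 else -1

def train_threshold_neuron(learning_set, alpha=1, max_passes=100):
    """Error-correction learning: weights change only when T != Y,
    by w_i <- w_i + alpha*(T - Y)*x_i (alpha = 1 for a Boolean function)."""
    n = len(learning_set[0][0])
    w, b = [0] * n, 0
    for _ in range(max_passes):
        all_correct = True
        for x, t in learning_set:
            y = sign(sum(wi * xi for wi, xi in zip(w, x)) + b)
            if y != t:                       # T != Y: the neuron must learn
                all_correct = False
                w = [wi + alpha * (t - y) * xi for wi, xi in zip(w, x)]
                b += alpha * (t - y)
        if all_correct:
            break                            # every membership recognized
    return w, b

# OR in the {1, -1} alphabet used above (1 1 -> 1, otherwise -> -1)
or_set = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = train_threshold_neuron(or_set)
```

Because OR is a threshold (linearly separable) function, this loop terminates with weights that classify all four vectors correctly; run on the XOR table instead, it would exhaust `max_passes` without ever converging.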
When we need a network
• The functionality of a single neuron is limited. For example, the threshold neuron (the perceptron) cannot learn non-linearly separable functions.
• To learn those functions (mappings between inputs and output) that cannot be learned by a single neuron, a neural network should be used.
A simplest network
• Solving the XOR problem using the simplest network
Threshold Functions and Threshold Neurons
• Threshold (linearly separable) functions can be learned by a single threshold neuron.
• Non-threshold (non-linearly separable) functions cannot be learned by a single neuron. To learn these functions, a neural network built from threshold neurons is required (Minsky and Papert, 1969).
• The number of all Boolean functions of n variables is equal to 2^(2^n), but the number of threshold ones is substantially smaller. Indeed, for n=2, fourteen of the sixteen functions (all except XOR and XNOR) are threshold; for n=3 there are 104 threshold functions out of 256; and for n>3 the number T of threshold functions of n variables satisfies T ≤ 2^(n²).
• For example, for n=4 there are only about 2,000 threshold functions out of 65,536.
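The count of 14 threshold functions out of 16 for n=2 can be verified by brute force. The sketch below enumerates all sixteen Boolean functions of two variables and tests each against a small grid of integer weights and half-integer biases; this particular grid is an assumption, but it is sufficient for n=2 (every two-variable threshold function is realizable with weights in {−1, 0, 1}):

```python
from itertools import product

# Count the linearly separable (threshold) Boolean functions of 2 variables.
# A function f is a threshold function if some weights (w1, w2) and bias b
# satisfy f(x1, x2) = [w1*x1 + w2*x2 + b > 0] on all four input pairs.

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
weight_grid = [-1, 0, 1]
bias_grid = [-1.5, -0.5, 0.5, 1.5]   # half-integers avoid ties at zero

def separable(truth):
    for w1, w2, b in product(weight_grid, weight_grid, bias_grid):
        if all((w1 * x1 + w2 * x2 + b > 0) == bool(t)
               for (x1, x2), t in zip(inputs, truth)):
            return True
    return False

# Each 4-tuple of output bits defines one of the 16 Boolean functions.
count = sum(separable(truth) for truth in product([0, 1], repeat=4))
print(count)   # 14 -- every function except XOR and XNOR
```

No choice of weights can ever match XOR or XNOR, so the search reports exactly the fourteen threshold functions the text mentions.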
• Is it possible to learn XOR, parity-n, and other non-linearly separable functions using a single neuron?
• Any classical monograph/textbook on neural networks states that to learn the XOR function, a network of at least three neurons is needed.
• This is true for the real-valued neurons and real-valued neural networks.
• However, this is not true for the complex-valued neurons !!!
• A jump to the complex domain is the right way to overcome the Minsky-Papert limitation and to learn multiple-valued and Boolean non-linearly separable functions using a single neuron.
• XOR problem
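To make the claim concrete, here is a hedged sketch of a single neuron with complex weights computing XOR. The specific weights and the alternating-sector activation are one standard construction from the multi-valued-neuron literature, not taken verbatim from the report. Inputs and outputs are encoded bipolar, with logical 0 ↦ +1 and logical 1 ↦ −1:

```python
import cmath

# A single neuron with complex weights computing XOR, a function that is
# non-linearly separable and hence unlearnable by a single real threshold
# neuron. Illustrative weights: z = w0 + w1*x1 + w2*x2 = x1 + i*x2.
W0, W1, W2 = 0, 1, 1j

def activation(z, k=4):
    """Alternating-sign sector activation: split the complex plane into
    k angular sectors and output (-1)**j for sector j."""
    angle = cmath.phase(z) % (2 * cmath.pi)
    sector = int(angle / (2 * cmath.pi / k))
    return (-1) ** sector

def neuron(x1, x2):
    return activation(W0 + W1 * x1 + W2 * x2)

for x1, x2 in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    print(x1, x2, neuron(x1, x2))
# (1,1) and (-1,-1) give +1 (logical 0); the mixed pairs give -1
# (logical 1) -- exactly XOR under this encoding.
```

The four weighted sums land in four different angular sectors, and the alternating activation separates them in the complex plane even though no straight line separates them in the real plane.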
• Blurred Image Restoration (Deblurring) and Blur Identification by MLMVN
• I. Aizenberg, D. Paliy, J. Zurada, and J. Astola, "Blur Identification by Multilayer Neural Network based on Multi-Valued Neurons", IEEE Transactions on Neural Networks, vol. 19, No 5, May 2008, pp. 883-898.
Problem statement: capturing
• Mathematically, a variety of capturing principles can be described by the Fredholm integral of the first kind: z(x) = ∫ v(x, t) y(t) dt,
• where x, t ∈ ℝ², v is the point-spread function (PSF) of the system, y(t) is a function of the real object, and z(x) is the observed signal.
Image deblurring: problem statement
• Mathematically, blur is caused by the convolution of an image with a distorting kernel.
• Thus, removal of the blur reduces to deconvolution.
• Deconvolution is an ill-posed problem, which results in instability of the solution. The best way to solve it is to use a regularization technique.
• To use any regularization technique, it is absolutely necessary to know the distorting kernel corresponding to the particular blur: that is, it is necessary to identify the blur.
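The statement that blur is a convolution with a distorting kernel can be illustrated with a tiny sketch; the "image" row and the boxcar PSF here are invented for illustration:

```python
# Blur as convolution with a point-spread function (PSF): a 1-D horizontal
# motion blur of length 3 applied to one row of a toy "image".
row = [0, 0, 10, 0, 0, 0, 10, 10, 0]
psf = [1 / 3, 1 / 3, 1 / 3]          # boxcar PSF: uniform motion blur

def convolve(signal, kernel):
    """Full discrete convolution, the 1-D analogue of image blurring."""
    n, m = len(signal), len(kernel)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

blurred = convolve(row, psf)
print([round(v, 2) for v in blurred])
# Each bright pixel is smeared across 3 positions. Recovering `row` from
# `blurred` is the ill-posed deconvolution problem discussed above, and it
# requires knowing (identifying) the PSF first.
```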
Blur Identification
• We use a multilayer neural network based on multi-valued neurons (MLMVN) to recognize Gaussian, motion, and rectangular (boxcar) blurs.
• We aim to identify both the blur and its parameters simultaneously, using a single neural network.
Post: #11

Many tasks which seem simple for us, such as reading a handwritten note or recognizing a face, are difficult even for the most advanced computers. In an effort to improve the computer's ability to perform such tasks, programmers began designing software to act more like the human brain, with its neurons and synaptic connections. Thus the field of "artificial neural networks" was born. Rather than employing the traditional method of one central processor (such as a Pentium) carrying out many instructions one at a time, artificial neural network software analyzes data by passing it through several simulated processors which are interconnected with synapse-like "weights".
Once we have collected several records of the data we wish to analyze, the network runs through them and "learns" how the inputs of each record may be related to the result. After training on a few dozen cases, the network begins to organize and refine its own architecture, much like the human brain: it learns from example.
This "reverse engineering" technology was once regarded as the best-kept secret of large corporations, government, and academic researchers.
The field of neural networks was pioneered by Bernard Widrow of Stanford University in the 1950s.
Why would anyone want a `new' sort of computer?
What are (everyday) computer systems good at, and not so good at?
Good at:
• Fast arithmetic
• Doing precisely what the programmer programs them to do
Not so good at:
• Interacting with noisy data or data from the environment
• Massive parallelism
• Fault tolerance
• Adapting to circumstances
Where can neural network systems help?
• where we can't formulate an algorithmic solution.
• where we can get lots of examples of the behaviour we require.
• where we need to pick out the structure from existing data.
What is a neural network?
Neural Networks are a different paradigm for computing:
• Von Neumann machines are based on the processing/memory abstraction of human information processing.
• Neural networks are based on the parallel architecture of animal brains.
Neural networks are a form of multiprocessor computer system, with
• Simple processing elements
• A high degree of interconnection
• Simple scalar messages
• Adaptive interaction between elements
Artificial neural networks (ANNs) are programs designed to solve problems by mimicking the structure and function of our nervous system. Neural networks are based on simulated neurons, which are joined together in a variety of ways to form networks. Neural networks resemble the human brain in the following two ways:
• A neural network acquires knowledge through learning.
• A neural network's knowledge is stored within the interconnection strengths, known as synaptic weights.
Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes', each of which contains an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates with one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'. The hidden layers then link to an 'output layer' where the answer is output.

Each layer of neurons makes independent computations on the data it receives and passes the results to the next layer(s). The next layer may in turn make its own computations and pass the data further, or it may end the computation and give the output of the overall computation. The first layer is the input layer and the last is the output layer; the layers placed between these two are the middle, or hidden, layers.
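The layer-by-layer computation just described can be sketched as a plain forward pass. The layer sizes, weights, and the choice of a sigmoid activation are illustrative assumptions:

```python
import math

# Forward pass through input -> hidden -> output layers: each layer computes
# independently on what it receives and passes the result onward.

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weights, biases):
    """One layer: a weighted sum plus bias per node, through the activation."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# 2 inputs -> 3 hidden nodes -> 1 output node (weights chosen arbitrarily).
hidden_w = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.3]]
hidden_b = [0.0, -0.1, 0.2]
output_w = [[1.0, -1.0, 0.5]]
output_b = [0.1]

x = [0.8, 0.3]
h = layer(x, hidden_w, hidden_b)     # the hidden layer computes first...
y = layer(h, output_w, output_b)     # ...and hands its result to the output
print(y)
```

Training would adjust `hidden_w`, `hidden_b`, `output_w`, and `output_b`; the forward pass itself is the same during training and during later use.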
A neural network is a system that emulates the cognitive abilities of the brain by learning to recognize particular inputs and to produce the appropriate output. Neural networks are not "hard-wired" in any particular way; they are trained on presented inputs to establish their own internal weights and relationships, guided by feedback. Neural networks are free to form their own internal workings and to adapt on their own.
Commonly, neural networks are adjusted, or trained, so that a particular input leads to a specific target output.
Here, the network is adjusted based on a comparison of its output and the target, until the network output matches the target. Typically many such input/target pairs are used to train a network.
Once a neural network is 'trained' to a satisfactory level, it may be used as an analytical tool on other data. To do this, the user no longer specifies any training runs and instead allows the network to work in forward propagation mode only. New inputs are presented to the input layer, where they filter into and are processed by the middle layers as though training were taking place; however, at this point the output is retained and no back propagation occurs.
The structure of Nervous system
The nervous system of the human brain consists of neurons, which are interconnected in a rather complex way. Each neuron can be thought of as a node, and each interconnection between neurons as an edge with an associated weight, which represents how much the two neurons it connects can interact.
Functioning of A Nervous System
The nature of the interconnection between two neurons can be such that one neuron either stimulates or inhibits the other. An interaction can take place only if there is an edge between the two neurons. If neuron A is connected to neuron B with a weight w, then if A is stimulated sufficiently (beyond its own threshold), it sends a signal to B. The signal depends on the weight w, and whether it stimulates or inhibits B depends on whether w is positive or negative. When a neuron fires, it sends its signal to all nodes to which it is connected. The threshold may differ from neuron to neuron.
If many neurons send signals to A, the combined stimulus may exceed its threshold.
Next, if B is stimulated sufficiently, it may in turn send a signal to all neurons to which it is connected.
Depending on the complexity of the structure, the overall functioning may be very complex, but the functioning of the individual neurons is as simple as this. Because of this, we may dare to try to simulate it using software, or even special-purpose hardware.
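The stimulate-or-inhibit behaviour described above is indeed simple to simulate. In this sketch (all neuron names, weights, and thresholds are invented for illustration), a neuron fires only when its stimulus exceeds its own threshold, and a fired neuron sends a weighted signal along every outgoing edge:

```python
# Tiny simulation of the signalling described above: positive weights
# stimulate, negative weights inhibit, and a neuron fires only when its
# stimulus exceeds its own threshold.
thresholds = {"A": 0.5, "B": 1.0, "C": 0.5}
edges = {("A", "B"): 1.5, ("A", "C"): -2.0}   # A stimulates B, inhibits C

def step(stimuli):
    """One propagation step: fired neurons send weighted signals out."""
    fired = {n for n, s in stimuli.items() if s > thresholds[n]}
    received = {n: 0.0 for n in thresholds}
    for (src, dst), w in edges.items():
        if src in fired:
            received[dst] += w
    return fired, received

fired, received = step({"A": 0.9, "B": 0.0, "C": 0.0})
print(fired)      # only A exceeds its threshold and fires
print(received)   # B receives +1.5 (stimulated), C receives -2.0 (inhibited)
```

Repeatedly feeding `received` back through `step` propagates activity through the network, which is exactly the complex overall behaviour the text says emerges from these simple local rules.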
Major components Of Artificial Neuron
This section describes the seven major components, which make up an artificial neuron. These components are valid whether the neuron is used for input, output, or is in one of the hidden layers.
Component 1. Weighting Factors:
A neuron usually receives many simultaneous inputs. Each input has its own relative weight, which determines the impact that input has on the processing element's summation function. These weights perform the same type of function as the varying synaptic strengths of biological neurons. In both cases, some inputs are made more important than others so that they have a greater effect on the processing element as they combine to produce a neural response.
Weights are adaptive coefficients within the network that determine the intensity of the input signal as registered by the artificial neuron. They are a measure of an input’s connection strength. These strengths can be modified in response to various training sets and according to a network’s specific topology or through its learning rules.
Component 2. Summation Function:
The first step in a processing element's operation is to compute the weighted sum of all of its inputs. Mathematically, the inputs and the corresponding weights are vectors, which can be represented as (i1, i2, …, in) and (w1, w2, …, wn). The total input signal is the dot, or inner, product of these two vectors. This simplistic summation function is found by multiplying each component of the i vector by the corresponding component of the w vector and then adding up all the products: input1 = i1*w1, input2 = i2*w2, etc., are added as input1 + input2 + … + inputn. The result is a single number, not a multi-element vector.
Geometrically, the inner product of two vectors can be considered a measure of their similarity. If the vectors point in the same direction, the inner product is at its maximum; if they point in opposite directions (180 degrees out of phase), the inner product is at its minimum.
The summation function can be more complex than just the simple input and weight sum of products. The input and weighting coefficients can be combined in many different ways before passing on to the transfer function. In addition to a simple product summing, the summation function can select the minimum, maximum, majority, product, or several normalizing algorithms. The specific algorithm for combining neural inputs is determined by the chosen network architecture and paradigm.
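The simple inner-product summation described above is a one-liner in code; the vector values here are illustrative:

```python
# Summation function: the dot (inner) product of the input vector
# (i1, ..., in) with the weight vector (w1, ..., wn).
i = [0.5, -1.0, 2.0]
w = [0.8, 0.4, 0.1]

total = sum(ik * wk for ik, wk in zip(i, w))   # i1*w1 + i2*w2 + i3*w3
print(total)   # 0.4 - 0.4 + 0.2 = 0.2
```

Replacing `sum` here with `max`, `min`, or a product over the pairwise terms gives the alternative summation functions the text mentions.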
Post: #12
Simplified diagram of connecting neurons
How did Artificial Neural network develop?
 Seeing the billions of interconnections in the human brain, and the way the human brain recognizes different patterns, it was felt that there was a need to simulate the brain.
Model of Neuron
Three major components of biological neuron are :
 Cell body
 Dendrites
 Axon
At one end of the neuron there is a multitude of tiny filaments (the dendrites) that join together to form larger branches and trunks where they attach to the cell body. At the other end of the neuron, a single filament leads out of the cell body; it is called the axon, and the extensive branching at its far end is called the axon terminals.
The dendrites serve as the neuron's inputs and the axon as its output: each neuron has many inputs through its multiple dendrites but only one output through its single axon, with each branch of the axon meeting exactly one dendrite of another cell.
The gap between the axon terminals and the dendrites of another cell is called the synaptic gap; its width is between 50 and 200 angstroms.
 Connections between neurons are formed at synapses.

Neurons are the brain's information processors.
How do communications between neurons take place?
• Communication takes place with the help of electrical signals, which are sent through the axon of one neuron to the dendrites of other neurons.
 Even then, the brain has very little difficulty in correctly and immediately recognizing patterns or objects.
 The crucial difference therefore lies not in the raw speed of processing but in the organization of processing.
 The key is the notion of massive parallelism or connectionism.
 The processing tasks in the brain are distributed among 10^11 -10^12 elementary nerve cells called neurons.

 An ANN is composed of processing elements called perceptrons, organized in different ways to form the network's structure.
Processing Elements
 An ANN consists of perceptrons. Each of the perceptrons receives inputs, processes inputs and delivers a single output.
 Mathematical representation
 The neuron calculates a weighted sum of inputs and compares it to a threshold. If the sum is higher than the threshold, the output is set to 1, otherwise to -1.
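The decision rule in the last bullet maps directly to code; the weights and threshold below are illustrative assumptions:

```python
# A perceptron's decision rule: compute the weighted sum of the inputs and
# compare it to a threshold -- output +1 above the threshold, -1 otherwise.
def perceptron(inputs, weights, threshold):
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > threshold else -1

print(perceptron([1.0, 0.5], [0.6, 0.4], threshold=0.7))   # sum 0.8 > 0.7 -> 1
print(perceptron([0.2, 0.5], [0.6, 0.4], threshold=0.7))   # sum 0.32 <= 0.7 -> -1
```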
 Process control
 Vehicle control
 Forecasting and prediction
 Financial applications
 A neural network can perform tasks that a linear program cannot.
 When an element of the neural network fails, the network can continue operating, thanks to its parallel nature.
 A neural network learns and does not need to be reprogrammed.
 It can be applied in a wide range of applications.
 It can be implemented without major difficulty.
 A neural network needs training before it can operate.
 The architecture of a neural network differs from that of microprocessors, and therefore must be emulated.
 Large neural networks require long processing times.
 The ability of neural networks to learn and generalize, in addition to their wide range of applicability, makes them very powerful tools.
 There is no need to understand the internal mechanisms of the task.
 They are also used in real-time systems.
 Finally, I would like to state that even though neural networks have huge potential, we will only get the best out of them when they are integrated with conventional computing, fuzzy logic, and so on.