Neural Networks
EE459
The Structure
Kasin Prakobwaitayakit
Department of Electrical Engineering
Chiangmai University
The Structure of Neurones
A neurone has a cell body, a branching input
structure (the dendrite) and a branching output
structure (the axon).
– Axons connect to dendrites via synapses.
– Electro-chemical signals are propagated from
the dendritic input, through the cell body, and
down the axon to other neurones.
The Structure of Neurones
• A neurone only fires if its input signal
exceeds a certain amount (the threshold) in
a short time period.
• Synapses vary in strength
– Strong connections allow a large signal to pass.
– Weak connections allow only a small signal.
– Synapses can be either excitatory or inhibitory.
A Classic Artificial Neuron (1)
[Figure: inputs a0 = +1 (bias), a1, a2, …, an are weighted by wj0, wj1, …, wjn and summed to give Sj; the activation function f(Sj) produces the output Xj.]
A Classic Artificial Neuron (2)
All neurons contain an activation function, which
determines whether the signal is strong enough to
produce an output.
[Figure: several functions that could be used as the activation function.]
Learning
• When the output is calculated, the desired
output is then given to the program, which
modifies the weights accordingly.
• After the modification, the same inputs
produce outputs closer to those desired.
Formula:
Weight N = Weight N +
learning rate * (Desired Output - Actual Output) * Input N
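A minimal sketch of this weight-update rule (the standard delta rule for a single unit; the function and variable names are illustrative, not from the lecture):

```python
def update_weights(weights, inputs, desired, actual, lr=0.1):
    """Delta rule: w_n <- w_n + lr * (desired - actual) * x_n."""
    return [w + lr * (desired - actual) * x
            for w, x in zip(weights, inputs)]

# One update step: the error (desired - actual) is positive,
# so each weight moves in the direction that raises the output.
w = update_weights([0.5, -0.3], [1.0, 2.0], desired=1.0, actual=0.2, lr=0.1)
```

Repeating this step over many input patterns gradually reduces the difference between actual and desired outputs.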
Tractable Architectures
• Feedforward Neural Networks
– Connections in one direction only
– Partial biological justification
• Complex models with constraints (Hopfield
and ART).
– Feedback loops included
– Complex behaviour, limited by constraining
architecture
Fig. 1: Multilayer Perceptron
[Figure: input signals (external stimuli) feed the input layer; adjustable weights connect the layers; output values emerge from the output layer.]
Types of Layer
• The input layer.
– Introduces input values into the network.
– No activation function or other processing.
• The hidden layer(s).
– Perform classification of features.
– Two hidden layers are sufficient, in principle, to
solve any problem.
– Problems with more complex features may benefit
from more layers.
Types of Layer (continued)
• The output layer.
– Functionally just like the hidden layers
– Outputs are passed on to the world outside the
neural network.
A Simple Model of a Neuron
[Figure: inputs y1, y2, y3, …, yi reach the neuron over weighted connections w1j, w2j, w3j, …, wij, producing the output O.]
• Each neuron has a threshold value
• Each neuron has weighted inputs from other
neurons
• The input signals form a weighted sum
• If the activation level exceeds the threshold, the
neuron “fires”
An Artificial Neuron
[Figure: inputs y1, y2, y3, …, yi with weights w1j, w2j, w3j, …, wij are summed and passed through f(x) to give the output O.]
• Each hidden or output neuron has weighted input
connections from each of the units in the preceding layer.
• The unit performs a weighted sum of its inputs, and
subtracts its threshold value, to give its activation level.
• Activation level is passed through a sigmoid activation
function to determine output.
Mathematical Definition
• Number all the neurons from 1 up to N.
• The output of the j'th neuron is oj.
• The threshold of the j'th neuron is θj.
• The weight of the connection from unit i to
unit j is wij.
• The activation of the j'th unit is aj.
• The activation function is written as f(x).
Mathematical Definition
• Since the activation aj is given by the sum of
the weighted inputs minus the threshold, we
can write:
aj = Σi (wij oi) − θj
oj = f(aj)
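These two equations can be evaluated directly. A small sketch, using the logistic function from the next slide as f (the function name and example values are illustrative):

```python
import math

def neuron_output(weights, inputs, threshold, a=1.0):
    """o_j = f(a_j), where a_j = sum_i(w_ij * o_i) - theta_j."""
    activation = sum(w * o for w, o in zip(weights, inputs)) - threshold
    return 1.0 / (1.0 + math.exp(-a * activation))  # logistic f(x)

# Weighted sum 0.4 + 0.6 = 1.0 exactly cancels the threshold,
# so the activation is 0 and the logistic output is 0.5.
print(neuron_output([0.4, 0.6], [1.0, 1.0], threshold=1.0))
```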
Activation functions
• Transforms a neuron’s input into its output.
• Features of activation functions:
– A squashing effect is required
• Prevents accelerating growth of activation levels
through the network.
– Simple and easy to calculate
– Monotonically non-decreasing
• order-preserving
Standard activation functions
• The hard-limiting threshold function
– Corresponds to the biological paradigm
• either fires or not
• Sigmoid functions ('S'-shaped curves)
– The logistic function: f(x) = 1 / (1 + e^(−ax))
– The hyperbolic tangent (symmetrical)
– Both functions have a simple derivative
– Only the shape is important
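Both curves, and the simple derivative of the logistic, are easy to evaluate numerically (function names are illustrative; `a` is the slope parameter from the formula above):

```python
import math

def logistic(x, a=1.0):
    """The logistic function: squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a * x))

def dlogistic(x, a=1.0):
    """Its derivative has the simple closed form a*f(x)*(1 - f(x))."""
    y = logistic(x, a)
    return a * y * (1.0 - y)

# tanh is the symmetrical counterpart: range (-1, 1) instead of (0, 1)
print(logistic(0.0), dlogistic(0.0), math.tanh(0.0))  # 0.5 0.25 0.0
```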
Training Algorithms
• Adjust neural network weights to map
inputs to outputs.
• Use a set of sample patterns where the
desired output (given the inputs presented)
is known.
• The purpose is to learn to generalize
– Recognize features which are common to good
and bad exemplars
Back-Propagation
• A training procedure which allows multilayer
feedforward neural networks to be trained;
• Can theoretically perform “any” input-output
mapping;
• Can learn to solve linearly inseparable
problems.
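A compact sketch of back-propagation on a 2-2-1 network learning XOR, a linearly inseparable problem. The network size, learning rate, and epoch count are illustrative choices, not taken from the lecture:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2-2-1 network; each unit's last weight is its bias weight
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
lr = 0.5

def forward(x):
    xi = x + [1.0]                                  # append bias input
    h = [sigmoid(sum(w * i for w, i in zip(wr, xi))) for wr in w_h]
    o = sigmoid(sum(w * i for w, i in zip(w_o, h + [1.0])))
    return xi, h, o

def total_error():
    return sum((t - forward(x)[2]) ** 2 for x, t in data)

err_before = total_error()
for epoch in range(5000):
    for x, t in data:
        xi, h, o = forward(x)
        d_o = (t - o) * o * (1 - o)                 # output-layer delta
        # hidden deltas use the output weights *before* they are updated
        d_h = [hj * (1 - hj) * d_o * w_o[j] for j, hj in enumerate(h)]
        w_o = [w + lr * d_o * i for w, i in zip(w_o, h + [1.0])]
        for j in range(2):
            w_h[j] = [w + lr * d_h[j] * i for w, i in zip(w_h[j], xi)]
err_after = total_error()
```

The deltas are the gradient-descent terms: each weight moves proportionally to its input and to the error signal propagated back from the output.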
Activation functions and training
• For feedforward networks:
– A continuous activation function can be
differentiated, allowing gradient descent.
– Back-propagation is an example of a
gradient-descent technique.
– This is the reason for the prevalence of
sigmoid functions.
Training versus Analysis
• Understanding how the network is doing
what it does
• Predicting behaviour under novel conditions