Artificial neural network
Origin: the connectionist view
- Core idea: the essence of intelligence lies in the mechanism of connections among simple units
A neural network is a highly complex, large-scale, nonlinear adaptive system composed of a large number of simple processing units.
An ANN seeks to simulate the intelligent behavior of the human brain in four respects:
• physical structure
• computational simulation
• storage and operation
• training
Characteristics of artificial neural networks (ANNs):
- distributed storage of information
- a combination of global and local operations
- nonlinear processing
artificial neural network
• definition
A neural network is a parallel, distributed information-processing network. It can be viewed as a directed graph whose nodes are processing units, connected by weighted directed arcs. A processing unit simulates a biological neuron, while a directed arc simulates the axon-synapse-dendrite pathway between neurons. The weight of a directed arc represents the strength of the interaction between the two processing units it connects.
• it is usually made up of a large number of neurons
- each neuron has a single output, which may connect to many other neurons
- each neuron receives multiple input connections, and each connection carries a weight coefficient
• emphasized properties:
- a parallel, distributed processing structure
- the output of a processing unit may branch arbitrarily, with each branch carrying the same unchanged value
- the output signal may take any mathematical form
- each processing unit performs entirely local operations
Artificial neuron model:
General model
A neuron has multiple inputs x_i, i = 1, 2, ..., n, and a single output y:
y = f( sum_{i=1..n} w_i * x_i - theta )
where y is the neuron output, the summation is the weighted sum of the input signals, theta is the neuron's bias (threshold), w_i are the connection weight coefficients, and n is the number of input signals.
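The general neuron model above can be sketched in a few lines; the function names and the AND-gate example weights here are illustrative assumptions, not taken from the notes.

```python
# Minimal sketch of the neuron model: y = f(sum_i w_i * x_i - theta),
# using a hard threshold as the response function f.

def step(u):
    """Threshold response function: fires (1) when the net input is non-negative."""
    return 1 if u >= 0 else 0

def neuron_output(x, w, theta):
    """Weighted sum of the inputs minus the bias/threshold, passed through f."""
    net = sum(wi * xi for wi, xi in zip(w, x)) - theta
    return step(net)

# Example: a 2-input neuron acting as a logical AND gate.
print(neuron_output([1, 1], [0.5, 0.5], 0.7))  # both inputs on -> 1
print(neuron_output([1, 0], [0.5, 0.5], 0.7))  # one input off -> 0
```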
Response (activation) function
Basic roles of the response function:
- controls the activation from input to output
- converts the net input into the output signal
It transforms an input ranging over an unbounded domain into an output confined to a specified, bounded range.
Common neuron response functions:
(a) threshold unit
(b) linear element
(c), (d) nonlinear elements: the sigmoid function
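The three response functions listed above can be sketched as follows; the function names are illustrative.

```python
import math

def threshold(u):          # (a) threshold unit: hard 0/1 step
    return 1.0 if u >= 0 else 0.0

def linear(u, k=1.0):      # (b) linear element: output proportional to net input
    return k * u

def sigmoid(u):            # (c)/(d) nonlinear element: squashes all reals into (0, 1)
    return 1.0 / (1.0 + math.exp(-u))

print(sigmoid(0.0))  # 0.5: the sigmoid is centered at zero
```

Note how only the sigmoid satisfies the "unbounded input to bounded output" property described above while remaining differentiable, which is what gradient-based rules such as the delta rule require.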
Typical structures of artificial neural networks:
Once the neuron model is determined, the characteristics and capabilities of a neural network depend mainly on the network's topology and its learning method.
Several basic connection forms of artificial neural networks:
• feedforward network (a)
Strictly speaking, the nodes of the input layer cannot be called neurons (they perform no processing).
w_ij: the weight of the connection between neuron i of one layer and neuron j of the next layer.
• feedback network (b): a feedforward network with feedback from output to input
- used to store pattern sequences
• intralayer-interconnected feedforward network (c)
- neurons within the same layer constrain or inhibit one another, providing a grouping function
• interconnected network (d)
4 typical structures of neural networks
Basic neural network learning algorithms (learning = weight adjustment)
Learning method
The learning method is a core issue in artificial neural network research.
- supervised learning
The network's output (the model output) is compared with the expected output (the supervision signal), and the connection weights are adjusted according to the difference between the two, so that the difference gradually decreases.
- unsupervised learning
• when an input pattern enters the network, the network adjusts its weights automatically according to a predetermined rule, such as a competition rule, so that the network eventually acquires abilities such as pattern classification (the expected output is unknown)
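The competition rule mentioned above can be sketched as a winner-take-all update: the unit whose weight vector lies closest to the input wins, and only its weights move toward the input, with no teacher signal. The function name and numbers are illustrative assumptions.

```python
# Sketch of an unsupervised, competitive (winner-take-all) weight update.

def competitive_update(weights, x, lr=0.5):
    """weights: list of weight vectors (one per unit); x: input vector."""
    # Winner = the unit with the smallest squared distance to the input.
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    j = dists.index(min(dists))
    # Move only the winner's weights toward the input (no expected output used).
    weights[j] = [wi + lr * (xi - wi) for wi, xi in zip(weights[j], x)]
    return j

weights = [[0.0, 0.0], [1.0, 1.0]]
winner = competitive_update(weights, [0.9, 1.1])  # unit 1 is closer, so it wins
```

Repeated over many inputs, each unit's weight vector drifts toward the center of one cluster of inputs, which is how the network "acquires the ability to classify patterns" without supervision.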
- reinforcement learning
Learning rules
- Hebb learning rule
If two neurons are excited at the same time (i.e., activated simultaneously), the synaptic connection between them is strengthened: delta_w_ij = a * v_i * v_j,
where a is the learning rate and v_i, v_j are the outputs of neurons i and j.
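The Hebb rule above amounts to a one-line weight update; the function name is illustrative.

```python
# Sketch of the Hebb rule: delta_w = a * v_i * v_j, where a is the learning
# rate and v_i, v_j are the outputs of the two connected neurons.

def hebb_update(w, v_i, v_j, a=0.1):
    """Strengthen the connection when both neurons are active together."""
    return w + a * v_i * v_j

w = 0.0
w = hebb_update(w, 1.0, 1.0)   # both active: the weight grows
w = hebb_update(w, 1.0, 0.0)   # one silent:  the weight is unchanged
print(w)  # 0.1
```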
- Delta learning rule (gradient descent)
1. learns from known samples, which act as the teacher;
2. the learning rule can be derived by applying the gradient method to a quadratic (squared) error function;
3. the error-correction learning rule is in fact a gradient method:
a) it cannot guarantee a globally optimal solution;
b) it requires a large number of training samples, and convergence is slow;
c) it is sensitive to changes in the order in which samples are presented.
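For a single linear unit, the delta rule's weight update is w <- w + a * (d - y) * x, the gradient step for the squared error E = (d - y)^2 / 2. The sketch below is illustrative: the data set, function name, learning rate, and epoch count are assumptions, not taken from the notes.

```python
# Hedged sketch of the delta (error-correction) rule on one linear unit.

def train_delta(samples, w, a=0.1, epochs=200):
    """samples: list of (inputs, desired_output); w: weights (last entry = bias)."""
    for _ in range(epochs):
        for x, d in samples:
            xb = x + [1.0]                                 # append a bias input
            y = sum(wi * xi for wi, xi in zip(w, xb))      # linear unit output
            err = d - y                                    # teacher minus output
            w = [wi + a * err * xi for wi, xi in zip(w, xb)]  # gradient step
    return w

# Learn y = 2*x from three known samples acting as the "teacher".
samples = [([0.0], 0.0), ([1.0], 2.0), ([2.0], 4.0)]
w = train_delta(samples, [0.0, 0.0])   # w converges near [2.0, 0.0]
```

Note that the result depends on the sample presentation order and the learning rate, which illustrates limitations (b) and (c) listed above.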
- gradient descent learning rule
- Kohonen learning rule
- back-propagation learning rule (the BP rule), the most widely used
- probabilistic learning rules
- competitive learning rule (an unsupervised learning rule)
The generalization ability of the network
1. Training a neural network is a process of learning the internal regularities of the training samples; the purpose of training is to give the network model correct mapping ability on data outside the training set.
2. After training is completed, the network reproduces its training samples; its generalization ability, however, is judged on inputs it has not seen.