
Control Theory and Control Engineering        Ran Zhang 21210124
Intelligent Control and Intelligent Systems

System Identification with BP Neural Network
Ran Zhang 21210124

Abstract
This article introduces a method of using a BP (Back-Propagation) neural network to realize system identification. We studied three systems with different system functions and analyzed the effects of different parameters of the BP neural network.

Key words: MLP (Multi-Layered Perceptron), Neurons, Hidden Layer, BP Neural Network

Algorithm Introduction
Neurons, or neurodes, form the central nervous system in the brains of animals and human beings. The networks in the human brain can carry out higher mental activities. An Artificial Neural Network, often just called a neural network, is a mathematical model inspired by biological neural networks. A neural network consists of an interconnected group of artificial neurons,

and it processes information using a connectionist approach to computation. In most cases a neural network is an adaptive system that changes its structure during a learning phase. Neural networks are used to model complex relationships between inputs and outputs, or to find patterns in data.

The BP Neural Network is one of the basic artificial neural networks. It is based on the MLP architecture. By training with samples of the system, the algorithm can produce a neural network model that approximates the real system.

(1) MLP
The Multi-Layered Perceptron is a network trained by supervised learning; its architecture is shown in Figure 1.

Figure 1 The structure of MLP

The signal is transferred in one fixed direction. There are no connections between neurons in the same layer, while the neurons of adjacent layers are fully connected, and each connection between adjacent layers carries a weight. In each hidden (or output) layer, every neuron applies an activation function to the weighted sum of the outputs of the previous layer. After these layer-by-layer computations, the model generates a set of outputs. There are many choices for the activation function, such as a linear function, a Sigmoid function, and so on.
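The two activation choices just mentioned can be written down directly. This is a minimal sketch in Python rather than the Matlab used in the experiments, and the function names are ours, not the paper's; the sigmoid derivative is included because back-propagation will need it:

```python
import math

def linear(s):
    """Identity activation: f(s) = s."""
    return s

def sigmoid(s):
    """Logistic sigmoid: f(s) = 1 / (1 + e^(-s))."""
    return 1.0 / (1.0 + math.exp(-s))

def sigmoid_deriv(s):
    """Derivative f'(s) = f(s) * (1 - f(s)), used by back-propagation."""
    y = sigmoid(s)
    return y * (1.0 - y)
```

The product form of the sigmoid derivative is what makes the sigmoid convenient in BP: the derivative is computed from the already-available neuron output, with no extra exponential.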

Generally, we choose the Sigmoid function

f(s) = 1 / (1 + e^(-s))

as the activation function.

(2) BP Neural Network
Based on the MLP network, the weight of each connection is adjusted using the error fed back from the next layer; this is the error feedback method. The BP algorithm is derived from the steepest (gradient) descent method. Referring to Figure 1, for the qth sample we define the power (error) function as

E_q = (1/2) Σ_j (d_qj − y_qj)²

where d_qj is the desired output of the qth sample and y_qj is the real output of the network. According to the steepest descent method, the adjustment of the weight of each connection is as follows.

For the output layer:
δ_j = (d_j − y_j) f′(s_j),  Δw_ij = η δ_j y_i

For the hidden layers (including the connections from the input layer):
δ_j = f′(s_j) Σ_k δ_k w_jk,  Δw_ij = η δ_j y_i

In the formulas above, f is the activation function, f′ is its derivative, s_j is the difference between the weighted sum of the inputs to neuron j and its threshold, y_i is the output of neuron i in the previous layer, and η is the learning rate. Turning to the threshold of each neuron, we can conclude the similar formula Δθ_j = −η δ_j.
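One training step built from these update rules can be sketched as follows. This is a plain-Python illustration (not the Matlab used in the experiments) for a 1-input, one-hidden-layer, 1-output network; the linear output neuron and all parameter values in the usage below are assumptions of the example, not taken from the paper:

```python
import math

def sigmoid(s):
    """f(s) = 1 / (1 + e^(-s))"""
    return 1.0 / (1.0 + math.exp(-s))

def bp_step(x, d, w1, b1, w2, b2, eta):
    """One steepest-descent update. s = (weighted sum) - threshold, as in
    the text; the output neuron is linear (an assumption for regression).
    w1/b1/w2 are mutated in place; returns updated b2 and the pre-update
    output."""
    H = len(w1)
    # forward pass
    y1 = [sigmoid(w1[h] * x - b1[h]) for h in range(H)]
    y2 = sum(w2[h] * y1[h] for h in range(H)) - b2   # linear output
    # output-layer delta: (d - y) * f'(s), with f' = 1 for a linear output
    delta2 = d - y2
    # hidden-layer deltas: f'(s_h) * delta2 * w2[h], with f' = y(1 - y)
    delta1 = [y1[h] * (1.0 - y1[h]) * delta2 * w2[h] for h in range(H)]
    # weight updates dw = eta * delta * (previous-layer output);
    # threshold updates dtheta = -eta * delta
    for h in range(H):
        w2[h] += eta * delta2 * y1[h]
        w1[h] += eta * delta1[h] * x
        b1[h] -= eta * delta1[h]
    b2 -= eta * delta2
    return b2, y2
```

Applying one such step with a small learning rate moves the network output toward the desired output for that sample, which is exactly the descent property the derivation above guarantees.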

When the network has been trained with all the samples once, the algorithm finishes one epoch. The performance index is then calculated; if it meets the accuracy requirement, the training ends, otherwise another training epoch starts.

Experiments and Analysis
Based on the algorithm introduced above, we chose three systems with different system functions as follows.

(1) We chose an MLP model with 1 hidden layer, and applied different numbers of neurons in the hidden layer to study the effect of the number of neurons. We chose 9 sets of uniform data to train the network, and then tested the network with 361 sets of uniform data. Matlab was chosen as the simulation tool. The performance index is set as . The results are shown below.

Note: Because zeros exist in the desired output, the relative error becomes huge in the area near those zeros, and that

will make the relative error useless for judging the performance of the network. As a result, we compute the absolute error to characterize the performance, since the desired output is the same in all cases.

a) 3 neurons in the hidden layer
Figure 2 Plots of training convergence and functions
Figure 3 Absolute error between network output and desired output

b) 5 neurons in the hidden layer
Figure 4 Plots of training convergence and functions
Figure 5 Absolute error between network output and desired output

c) 5 neurons in the hidden layer
Figure 6 Plots of training convergence and functions
Figure 7 Absolute error between actual output and desired output

d) 7 neurons in the hidden layer
Figure 8 Plots of training convergence and functions
Figure 9 Absolute error between network output and desired output

The ranges of the axes are set to be the same so as to make visual comparison more convenient. From the results
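The epoch procedure described above (one pass over all samples, then a performance-index check against the accuracy goal) can be sketched end to end. This is a Python stand-in, not the Matlab experiment itself: the sin target function, the network size, the learning rate, and the stopping goal are all placeholder assumptions; only the 9 uniform training samples and the epoch/performance-index structure follow the text:

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train(samples, hidden, eta=0.2, goal=1e-3, max_epochs=20000):
    """Train a 1-input / 1-output BP network with one hidden layer.
    One pass over all samples is one epoch; training stops when the
    sum-squared error over the samples (the performance index here)
    reaches `goal`, or when max_epochs is hit."""
    random.seed(0)                      # reproducible initial weights
    w1 = [random.uniform(-1, 1) for _ in range(hidden)]
    b1 = [random.uniform(-1, 1) for _ in range(hidden)]
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = random.uniform(-1, 1)

    def forward(x):
        y1 = [sigmoid(w1[h] * x - b1[h]) for h in range(hidden)]
        return y1, sum(w2[h] * y1[h] for h in range(hidden)) - b2

    for epoch in range(1, max_epochs + 1):
        for x, d in samples:
            y1, y2 = forward(x)
            delta2 = d - y2             # linear output neuron (assumption)
            delta1 = [y1[h] * (1 - y1[h]) * delta2 * w2[h]
                      for h in range(hidden)]
            for h in range(hidden):
                w2[h] += eta * delta2 * y1[h]
                w1[h] += eta * delta1[h] * x
                b1[h] -= eta * delta1[h]
            b2 -= eta * delta2
        index = sum((d - forward(x)[1]) ** 2 for x, d in samples)
        if index <= goal:
            break                       # accuracy requirement met
    return forward, epoch, index

# 9 uniformly spaced training samples, as in the experiment; sin is only
# a stand-in target, since the paper's system functions are not shown.
samples = [(x / 8.0, math.sin(math.pi * x / 8.0)) for x in range(9)]
net, epochs, index = train(samples, hidden=5)
```

Testing on a denser uniform grid, as the experiment does with 361 test points, then amounts to evaluating `net(x)[1]` at each test input and recording the absolute error against the desired output.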
