MLP Neural Networks (PPT Slides)


Multi-layer Perceptrons
Junying Zhang

Contents
- Structure
- Universal theorem
- MLP for classification
  - Mechanism of MLP for classification: nonlinear mapping, binary coding of the areas
- MLP for regression
- Learning algorithm of the MLP
  - Back-propagation learning algorithm
  - Heuristics in the learning process

XOR and Linear Separability Revisited
- Remember that it is not possible to find weights that enable a Single Layer Perceptron to deal with non-linearly separable problems like XOR.
- However, Multi-Layer Perceptrons (MLPs) are able to cope with non-linearly separable problems.
- Historically, the problem was that there were no learning algorithms for training MLPs; actually, it is now quite straightforward.

Structure of an MLP
- The network is composed of several layers.
- Neurons within a layer are not connected to each other.
- The ith layer is fully connected only to the (i+1)th layer.
- Signals are transmitted only in a feedforward manner.

Structure of an MLP (continued)
- The model of each neuron in the net includes a nonlinear activation function, so the net is nonlinear; the function is smooth (differentiable), generally a sigmoidal function.
- The network contains one or more layers of hidden neurons that are not part of the input or output of the net; they enable the net to learn complex tasks.
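To make this layered, feedforward structure concrete, here is a minimal NumPy sketch. It is not part of the original slides; the 2-3-1 layer sizes, the random seed and the example input are arbitrary assumptions chosen only for illustration.

```python
import numpy as np

def sigmoid(v):
    # Smooth, differentiable nonlinear activation (the sigmoidal function)
    return 1.0 / (1.0 + np.exp(-v))

def mlp_forward(x, weights, biases):
    """Feedforward pass: each layer is fully connected to the next layer,
    and neurons within a layer are not connected to each other."""
    y = x
    for W, b in zip(weights, biases):
        y = sigmoid(W @ y + b)   # induced local field, then activation
    return y

# Assumed example: 2 inputs -> 3 hidden neurons -> 1 output, random weights
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
biases = [rng.normal(size=3), rng.normal(size=1)]
print(mlp_forward(np.array([0.5, -1.0]), weights, biases))
```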

Expressive Power of an MLP
- Questions:
  - How many hidden layers are needed?
  - How many units should be in a (the) hidden layer?
- Answer: Kolmogorov's mapping neural network existence theorem (the universal theorem).

Kolmogorov's Mapping Neural Network Existence Theorem (Universal Theorem)
- Any continuous function g(x) defined on the unit hypercube can be represented in the form
  g(x) = Σ_{j=1..2n+1} Ξ_j( Σ_{i=1..n} ψ_ij(x_i) )
  for properly chosen functions Ξ_j and ψ_ij.
- It is impractical:
  - the functions Ξ_j and ψ_ij are not the simple weighted sums passed through nonlinearities favored in neural networks;
  - it tells us very little about how to find the nonlinear functions based on data, which is the central problem in network-based pattern recognition;
  - those functions can be extremely complex, and they are not smooth.

Kolmogorov's Mapping Neural Network Existence Theorem (Universal Theorem), continued
- Any continuous function g(x) can be approximated to arbitrary precision by a single hidden layer of N_H units,
  g(x) ≈ Σ_{j=1..N_H} a_j f( w_j·x + b_j ),
  for a properly chosen function f(.) as N_H approaches infinity.
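The role of N_H can be made tangible with a small experiment. The sketch below is my own illustration rather than anything from the slides: the hidden-layer parameters are drawn at random and only the output weights a_j are fitted by linear least squares (so it is not back-propagation), but it shows that a single hidden layer of N_H sigmoidal units tends to approximate a continuous 1-D function more closely as N_H grows. The target function, weight scales and seed are assumptions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def fit_one_hidden_layer(x, g, n_hidden, rng):
    """Approximate g(x) by sum_j a_j * f(w_j*x + b_j): hidden parameters are
    random, output weights a_j are solved by linear least squares."""
    w = rng.normal(scale=5.0, size=n_hidden)
    b = rng.uniform(-5.0, 5.0, size=n_hidden)
    H = sigmoid(np.outer(x, w) + b)        # hidden activations, shape (N, n_hidden)
    a, *_ = np.linalg.lstsq(H, g, rcond=None)
    return H @ a

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)             # points in the unit interval
g = np.sin(2 * np.pi * x)                  # an assumed continuous target function
for n_hidden in (2, 8, 32):
    err = np.max(np.abs(fit_one_hidden_layer(x, g, n_hidden, rng) - g))
    print(f"N_H = {n_hidden:3d}  max |error| = {err:.4f}")
```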

MLP for Classification

MLP for Regression

Learning Scheme
- Supervised learning.
- Two propagation directions:
  - function signal: in the forward direction;
  - error signal: in the backward direction.

Learning in MLP
- Objective function: the sum squared error
  E = (1/2) Σ_j (d_j − y_j)²
  where d_j is the desired output of the jth output neuron and y_j is the real output of the jth output neuron.
- Steepest descent search method, carried out through partial derivatives:
  Δw_ji = −η ∂E/∂w_ji
  where η is the learning rate parameter and w_ji is the synaptic weight from the ith neuron in the (k−1)th layer to the jth neuron in the kth layer of the network.
- Two situations have to be treated: the jth neuron is an output neuron, and the jth neuron is a hidden neuron.

Back Propagation Learning Algorithm of MLP
- Updating equation: Δw_ji = η δ_j y_i, where δ_j is the local gradient, which is
  - δ_j = e_j f′(v_j) when the jth neuron is an output neuron;
  - δ_j = f′(v_j) Σ_k δ_k w_kj when it is a hidden neuron (the back-propagation formula).
- For the sigmoidal function f(.) we have f′(v_j) = f(v_j)[1 − f(v_j)] = y_j(1 − y_j).

Speeding the Learning Process
- Learning rate parameter η.
- Momentum constant parameter α: Δw_ji(n) = α Δw_ji(n−1) + η δ_j(n) y_i(n).
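The updating equation, the two δ formulas and the momentum term can be collected into a short NumPy sketch. Everything specific here, the 2-4-1 network size, η = 0.5, α = 0.9, the epoch count and the use of XOR (from the earlier slide) as training data, is an assumption made for illustration rather than something taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

# XOR: the classic non-linearly separable problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 4 neurons (assumed size), plus bias weights
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
eta, alpha = 0.5, 0.9                        # learning rate and momentum constant
dW1 = dW2 = db1 = db2 = 0.0                  # previous updates, for the momentum term

for epoch in range(5000):
    # Forward pass (function signal)
    Y1 = sigmoid(X @ W1.T + b1)              # hidden outputs, shape (4, 4)
    Y2 = sigmoid(Y1 @ W2.T + b2)             # network outputs, shape (4, 1)

    # Backward pass (error signal): local gradients delta
    E = D - Y2
    delta2 = E * Y2 * (1 - Y2)               # output layer: e_j * f'(v_j)
    delta1 = (delta2 @ W2) * Y1 * (1 - Y1)   # hidden layer: f'(v_j) * sum_k delta_k w_kj

    # Weight updates with momentum: dw(n) = alpha*dw(n-1) + eta*delta*y
    dW2 = alpha * dW2 + eta * delta2.T @ Y1
    db2 = alpha * db2 + eta * delta2.sum(axis=0)
    dW1 = alpha * dW1 + eta * delta1.T @ X
    db1 = alpha * db1 + eta * delta1.sum(axis=0)
    W2 += dW2; b2 += db2; W1 += dW1; b1 += db1

print(np.round(Y2.ravel(), 3))               # should approach [0, 1, 1, 0]
```

With these (assumed) settings the batch updates plus momentum usually drive the outputs toward (0, 1, 1, 0); a different seed or learning rate may need more epochs.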

Heuristics for Making the Back-Propagation Algorithm Perform Better

Sequential versus Batch Update
- Compared with batch update, the sequential (on-line) mode:
  - is computationally faster;
  - is more suitable for large and highly redundant training data sets;
  - makes the search in weight space stochastic in nature, so it is less likely to be trapped in a local minimum;
  - makes it more difficult to establish theoretical conditions for convergence of the algorithm.
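The two modes differ only in where the weight update is applied inside the epoch loop. The sketch below uses a deliberately simple linear least-squares problem instead of an MLP, and the data set, learning rate and epoch count are assumptions, so that the contrast between the two loops is easy to see.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                    # a small, assumed training set
d = X @ np.array([1.0, -2.0, 0.5])               # targets from a known linear rule

def grad(w, x, t):
    # Gradient of the squared error 0.5*(t - w.x)^2 for one example
    return -(t - x @ w) * x

eta = 0.1

# Batch mode: average the gradients over the whole epoch, then update once
w_batch = np.zeros(3)
for epoch in range(100):
    g = np.mean([grad(w_batch, x, t) for x, t in zip(X, d)], axis=0)
    w_batch -= eta * g

# Sequential (on-line) mode: update after every example, in a shuffled order;
# the search becomes stochastic, which is what helps it escape local minima.
w_seq = np.zeros(3)
for epoch in range(100):
    for i in rng.permutation(len(X)):
        w_seq -= eta * grad(w_seq, X[i], d[i])

print(np.round(w_batch, 3), np.round(w_seq, 3))  # both should approach [1, -2, 0.5]
```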

Stopping Criteria
The back-propagation algorithm is considered to have converged when one of the following holds:
- Gradient vector: the Euclidean norm of the gradient vector reaches a sufficiently small gradient threshold.
- Squared error: the absolute rate of change in the average squared error per epoch is sufficiently small.
- Generalization: generalization performance reaches a peak.
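The three criteria translate directly into checks inside the training loop. The sketch below uses plain gradient descent on a linear model with a held-out validation set so that it stays self-contained; the data set, the threshold values and the 0.1% tolerance on the validation error are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
d = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)    # assumed noisy targets
X_tr, d_tr, X_va, d_va = X[:150], d[:150], X[150:], d[150:]

w = np.zeros(5)
eta = 0.05
grad_threshold, err_change_threshold = 1e-4, 1e-6          # assumed thresholds
prev_avg_err, best_val_err = np.inf, np.inf

for epoch in range(10_000):
    e = d_tr - X_tr @ w
    g = -(X_tr.T @ e) / len(X_tr)            # gradient of the average squared error
    w -= eta * g

    avg_err = 0.5 * np.mean(e ** 2)
    val_err = 0.5 * np.mean((d_va - X_va @ w) ** 2)

    # 1. Gradient vector: Euclidean norm below a small threshold
    if np.linalg.norm(g) < grad_threshold:
        print(f"stop at epoch {epoch}: gradient norm small"); break
    # 2. Squared error: absolute rate of change per epoch sufficiently small
    if abs(prev_avg_err - avg_err) < err_change_threshold:
        print(f"stop at epoch {epoch}: error change small"); break
    # 3. Generalization: validation error starts rising past its best value
    if val_err > best_val_err * 1.001:
        print(f"stop at epoch {epoch}: generalization peaked"); break
    prev_avg_err, best_val_err = avg_err, min(best_val_err, val_err)
```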

Generalization Performance
- Overfitting and underfitting versus generalization performance.
- Estimating generalization: cross-validation, leave-one-out.
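Leave-one-out cross-validation simply repeats training with each example held out in turn and averages the held-out errors. In the sketch below, ordinary least squares stands in for MLP training purely to keep the example short and runnable; the data set is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
d = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=30)   # assumed data

errors = []
for i in range(len(X)):                        # leave example i out
    mask = np.arange(len(X)) != i
    # "Train" on the remaining examples (least squares stands in for MLP training)
    w, *_ = np.linalg.lstsq(X[mask], d[mask], rcond=None)
    errors.append((d[i] - X[i] @ w) ** 2)      # test on the held-out example

print("leave-one-out estimate of the squared error:", np.mean(errors))
```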

Practical Considerations for Back-Propagation Learning
- How many hidden units?
- Different learning rates for different layers?

Overview
- We started by revisiting the concept of linear separability and the need for multi-layered neural networks.
- We then saw how the Back-Propagation Learning Algorithm for multilayered networks can be derived easily from the standard gradient descent approach.
- We ended by looking at some practical issues that did not arise for the single layer networks.
