HKUST TensorFlow 3-Day Crash Course Slides (156 pages)
Deep Neural Network
Sung Kim

Recap: Machine Learning Basics
- Linear Regression
- Logistic Regression (binary classification)
- Softmax Classification

Regression
- Linear hypothesis: H(x) = Wx + b
- Cost function: cost(W) = (1/m) * sum_i (H(x_i) - y_i)^2
- Goal: minimize cost

What cost(W) looks like:
- W = 1: cost(W) = 0
- W = 0: cost(W) = 4.67
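The cost values above can be reproduced with a few lines of NumPy. A minimal sketch, assuming the course's usual toy dataset x = [1, 2, 3], y = [1, 2, 3] (the dataset itself is not shown on these slides):

```python
import numpy as np

x = np.array([1., 2., 3.])  # assumed toy inputs
y = np.array([1., 2., 3.])  # assumed toy targets

def cost(W):
    # Mean squared error for the hypothesis H(x) = W * x
    return np.mean((W * x - y) ** 2)

print(cost(1.0))  # 0.0: the line fits the data exactly
print(cost(0.0))  # 14/3 = 4.67
print(cost(2.0))  # 14/3 = 4.67
```

The curve is symmetric around W = 1, which is why W = 0 and W = 2 give the same cost.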
- W = 2: cost(W) = 4.67

How to minimize cost: gradient descent (formal definition)
W := W - α * ∂cost(W)/∂W

Classification
- Linear regression hypothesis vs. logistic regression: H(x) = sigmoid(Wx + b)
- Case for y = 1, case for y = 0
- Multiple labels (a, b, c): softmax function

Softmax function:
hypothesis = tf.nn.softmax(tf.matmul(X, W) + b)
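The softmax line can be mirrored in plain NumPy to see what it computes. A minimal sketch with made-up scores; subtracting the row max is only for numerical stability and does not change the result:

```python
import numpy as np

def softmax(logits):
    # Shift by the row max for numerical stability (result is unchanged)
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# One sample, three labels (a, b, c): raw scores become probabilities
scores = np.array([[2.0, 1.0, 0.1]])
probs = softmax(scores)
print(probs)        # roughly [[0.659 0.242 0.099]]
print(probs.sum())  # 1.0: the probabilities over the labels sum to one
```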
Cost function: cross-entropy
cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)

Today
- XOR with logistic regression (binary classification)
- Solution: deep neural network
- Challenges of DNN:
  - Computing gradients: chain rule and backpropagation
  - Vanishing gradients: ReLU
  - Overfitting: regularization, dropout
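The cross-entropy cost from the recap has a direct NumPy equivalent. A minimal sketch with made-up one-hot labels and softmax outputs:

```python
import numpy as np

# One-hot labels Y and softmax outputs hypothesis (each row sums to 1)
Y = np.array([[1., 0., 0.],
              [0., 1., 0.]])
hypothesis = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.8, 0.1]])

# Same shape of computation as
# tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))
cost = np.mean(-np.sum(Y * np.log(hypothesis), axis=1))
print(cost)  # mean of -log(0.7) and -log(0.8), about 0.29
```

Only the probability assigned to the correct label contributes, because the one-hot Y zeroes out every other term.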
XOR dataset:
x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_data = np.array([[0], [1], [1], [0]], dtype=np.float32)

Logistic regression:
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
W = tf.Variable(tf.random_normal([2, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Hypothesis using sigmoid: tf.div(1., 1. + tf.exp(-tf.matmul(X, W)))
hypothesis = tf.sigmoid(tf.matmul(X, W) + b)

# cost/loss function
cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
train = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)

# Accuracy computation: True if hypothesis > 0.5 else False
predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))

# Launch graph
with tf.Session() as sess:
    # Initialize TensorFlow variables
    sess.run(tf.global_variables_initializer())

    for step in range(10001):
        sess.run(train, feed_dict={X: x_data, Y: y_data})
        if step % 100 == 0:
            print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}), sess.run(W))

    # Accuracy report
    h, c, a = sess.run([hypothesis, predicted, accuracy], feed_dict={X: x_data, Y: y_data})
    print("\nHypothesis: ", h, "\nCorrect: ", c, "\nAccuracy: ", a)

XOR with logistic regression: but it doesn't work!
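One way to see why it cannot work, independent of training: a small brute-force search (my own illustration, not from the slides) over integer weights and biases shows that no single linear decision boundary classifies all four XOR points:

```python
import numpy as np
from itertools import product

x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

best = 0.0
# Try every integer (w1, w2, b) in a small grid as a linear separator
for w1, w2, b in product(range(-5, 6), repeat=3):
    pred = (w1 * x[:, 0] + w2 * x[:, 1] + b > 0).astype(float)
    best = max(best, float((pred == y).mean()))

print(best)  # 0.75: at most 3 of the 4 XOR points are ever correct
```

This is exactly why the logistic regression unit plateaus at accuracy 0.5: its sigmoid is monotone in a single linear score, so it can never carve the plane into the two diagonal regions XOR needs.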
The same logistic regression model, trained for 10,001 steps, reports:

Hypothesis: [[0.5], [0.5], [0.5], [0.5]]
Correct: [[0.], [0.], [0.], [0.]]
Accuracy: 0.5

One logistic regression unit cannot separate XOR.
How about multiple logistic regression units?
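That idea can be sketched in plain NumPy before moving to the TensorFlow version: two sigmoid units feeding one sigmoid output, trained by gradient descent with hand-derived backpropagation. This is my own illustration (seed, learning rate, and step count are arbitrary choices), not the slide code:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two hidden sigmoid units, one sigmoid output unit
W1 = rng.normal(size=(2, 2)); b1 = rng.normal(size=2)
W2 = rng.normal(size=(2, 1)); b2 = rng.normal(size=1)

def forward():
    h = sigmoid(x @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

def cost(p):
    # Same binary cross-entropy as the logistic regression slide
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

_, p0 = forward()  # predictions before training
lr = 1.0
for _ in range(10000):
    h, p = forward()
    d_out = (p - y) / len(x)              # dcost/d(output pre-activation)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * h * (1 - h)  # backprop through hidden sigmoids
    dW1 = x.T @ d_hid
    db1 = d_hid.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p = forward()
print(cost(p0), "->", cost(p))  # final cost is lower than the initial cost
print((p > 0.5).astype(int).ravel())
```

Unlike the single unit, this network has enough capacity to represent XOR; the next slides build the same two-layer model in TensorFlow.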
XOR using NN

Neural Net: forward propagation
http://playground.tensorflow.org
Can you find another W and b for the XOR?

Recap: multinomial classification as an NN.
How can we learn W and b from training data?

Neural Net (one unit, as before):
W = tf.Variable(tf.random_normal([2, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
# Hypothesis using sigmoid: tf.div(1., 1. + tf.exp(-tf.matmul(X, W)))
hypothesis = tf.sigmoid(tf.matmul(X, W) + b)

Neural Net (two layers):
W1 = tf.Variable(tf.random_normal([2, 2]), name='weight1')
b1 = tf.Variable(tf.random_normal([2]), name='bias1')
layer1 = tf.sigmoid(tf.matmul(X, W1) + b1)

W2 = tf.Variable(tf.random_normal([2, 1]), name='weight2')
b2 = tf.Variable(tf.random_normal([1]), name='bias2')
hypothesis = tf.sigmoid(tf.matmul(layer1, W2) + b2)

NN for XOR:
x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_data = np.array([[0], [1], [1], [0]], dtype=np.float32)
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
W1 = tf.Variable(tf.random_normal([2, 2]), name='weight1')
b1 = tf.Variable(tf.random_normal([2]), name='bias1')
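One concrete answer to "can you find another W and b for the XOR?", with my own hand-picked weights rather than anything from the slides: let the two hidden units approximate OR and NAND, and let the output unit AND them together, since XOR = OR AND NAND. A NumPy forward pass through the same two-layer shape:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Hidden unit 1 approximates OR(x1, x2); hidden unit 2 approximates NAND(x1, x2)
W1 = np.array([[20., -20.],
               [20., -20.]])
b1 = np.array([-10., 30.])
layer1 = sigmoid(x @ W1 + b1)

# Output unit approximates AND(layer1), so the net computes OR AND NAND = XOR
W2 = np.array([[20.], [20.]])
b2 = np.array([-30.])
hypothesis = sigmoid(layer1 @ W2 + b2)

print((hypothesis > 0.5).astype(int).ravel())  # [0 1 1 0]: XOR
```

The large magnitudes (20, 30) just push the sigmoids toward saturation so each unit behaves almost like the hard logic gate; many other weight settings solve XOR equally well, which is what gradient descent will find on the next slides.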