Artificial Intelligence and Data Mining - Lecture Slides lect7-13
(PPT, 30 pages, uploaded 2024-08-29)

Chapter 8: Neural Networks
Part III: Advanced Data Mining Techniques

Content
1. What & Why ANN (8.1 Feed-forward Neural Network)
2. How ANN works - working principle (8.2.1 Supervised Learning)
3. Most popular ANN - Backpropagation Network (8.5.1 The Backpropagation Algorithm: An Example)

1. What & Why ANN: Artificial Neural Networks (ANN)
An ANN is an information-processing technology that emulates a biological neural network. The biological and computational elements correspond as follows:
- Neuron (神经元) vs. Node (transformation)
- Dendrite (树突) vs. Input
- Axon (轴突) vs. Output
- Synapse (神经键) vs. Weight
Research started in the 1970s, and ANNs became very popular in the 1990s because of advances in computer technology.

What is ANN: Basics
Types of ANN:
- Network structure, e.g. Figures 17.9 & 17.10 (Turban, 2000, version 5, p. 663)
- Number of hidden layers
- Number of hidden nodes
- Feed-forward and feed-backward (for time-dependent problems)
- Links between nodes (present or absent)
The ultimate objective of training: obtain a set of weights that makes all the instances in the training data predicted as correctly as possible.
Back-propagation is one type of ANN, usable for both classification and estimation. It is:
- Multi-layer: input layer, hidden layer(s), output layer
- Fully connected
- Feed-forward
- Trained by error back-propagation

2. How ANN works: working principle (I)
Step 1: Collect data.
Step 2: Separate the data into training and test sets, for network training and validation respectively.
Step 3: Select the network structure, learning algorithm, and parameters:
- Set the initial weights, either by rules or randomly
- Set the rate of learning (the pace at which weights are adjusted)
- Select a learning algorithm (more than a hundred learning algorithms are available for various situations and configurations)

13、s)注天枢羌脐班防尿竟呀升莱傻兆独砌题碴乍丛遍济障状镊喇突羹刃端汀档人工智能与数据挖掘教学课件lect-7-13人工智能与数据挖掘教学课件lect-7-138/29/2024AI & DM92. ANN working principle (II) Step 4Step 4: Train the network: Train the network Compute outputsCompute outputs Compare outputs with desired targets. The difference Compare outputs with desired targets. Th

14、e difference between the outputs and the desired targets is called deltabetween the outputs and the desired targets is called delta Adjust the weights and repeat the process to minimize the Adjust the weights and repeat the process to minimize the delta. delta. The objective of training is to The ob

15、jective of training is to MinimizeMinimize the Delta (Error). the Delta (Error). The final result of training is a set of weights.The final result of training is a set of weights. Step 5Step 5: Test the network: Test the network Use test set: comparing test results to historical results, to Use test

16、 set: comparing test results to historical results, to find out the accuracy of the networkfind out the accuracy of the network Step 6Step 6: Deploy developed network application if the : Deploy developed network application if the test accuracy is acceptabletest accuracy is acceptable釜中烘栈都后师函搞卜陨妒莽飞
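Steps 2 and 5 above (splitting the data and measuring test accuracy) can be sketched as two small helpers. This is a minimal illustration, not code from the lecture; the function names and the 70/30 split fraction are illustrative choices:

```python
import random

def train_test_split(rows, test_fraction=0.3, seed=42):
    """Step 2: separate data into training and test sets (seeded shuffle)."""
    rows = rows[:]                      # copy so the caller's list is untouched
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

def accuracy(predict, test_rows):
    """Step 5: fraction of test instances the trained network classifies correctly."""
    correct = sum(1 for x, target in test_rows if predict(x) == target)
    return correct / len(test_rows)
```

Any classifier that maps an input row to a class label can be passed in as `predict`, including the trained networks developed later in the lecture.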

2. ANN working principle (III): Example
Example 1: the OR operation (see table below). Two input elements, X1 and X2.

Case  X1  X2  Desired result
1     0   0   0
2     0   1   1 (positive)
3     1   0   1 (positive)
4     1   1   1 (positive)

2. ANN working principle (IV): Example
Network structure: one layer (see next page).
Learning algorithm:
- Weighted sum (summation function): Y1 = Σ XiWi
- Transformation (transfer) function: if Y1 is less than the threshold, Y = 0; otherwise Y = 1
- Delta = Z - Y
- Wi(final) = Wi(initial) + Alpha * Delta * Xi
Initial parameters:
- Rate of learning: alpha = 0.2
- Threshold = 0.5
- Initial weights: 0.1, 0.3
Notes: weights are initially random; the value of the learning rate alpha is set low at first.

[Figure: Processing information in an artificial neuron j - inputs x1, x2 enter with weights w1j, w2j; the summation Σ wij·xi is passed through the transfer function to produce the output Yj]
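The OR example above can be worked through directly in code. This sketch follows the slide's rule exactly (Y = 1 when Σ XiWi reaches the 0.5 threshold, Wi ← Wi + alpha·Delta·Xi), with alpha = 0.2 and initial weights 0.1 and 0.3; the function names and the epoch cap are illustrative:

```python
def predict(x, weights, threshold=0.5):
    """Summation then threshold transfer function: Y = 0 below threshold, else Y = 1."""
    y1 = sum(xi * wi for xi, wi in zip(x, weights))
    return 1 if y1 >= threshold else 0

def train_or(alpha=0.2, weights=(0.1, 0.3), max_epochs=10):
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # the OR table
    w = list(weights)
    for _ in range(max_epochs):
        changed = False
        for x, z in data:                  # z is the desired result
            delta = z - predict(x, w)      # Delta = Z - Y
            if delta != 0:
                changed = True
                for i in range(len(w)):
                    w[i] += alpha * delta * x[i]  # Wi += alpha * Delta * Xi
        if not changed:                    # every case correct: training is done
            break
    return w
```

Tracing it by hand: in epoch 1 cases 2 and 3 each miss, raising the weights to roughly 0.3 and 0.5; one more correction in epoch 2 brings both weights to about 0.5, after which all four cases are classified correctly.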

3. Back-propagation Network
Network topology:
- Multi-layer: input layer, hidden layer(s), output layer
- Fully connected
- Feed-forward
- Error back-propagation
- Initialize weights with random values

[Figure: Back-propagation network - the input vector xi feeds the input nodes; weights wij connect them to the hidden nodes, which connect to the output nodes producing the output vector]

3. Back-propagation Network
For each node:
1. Compute the net input to the unit using the summation function
2. Compute the output value using the activation function (i.e. the sigmoid function)
3. Compute the error
4. Update the weights (and the bias) based on the error
5. Terminating conditions - stop when any of the following holds:
- all weight changes Δwij in the previous epoch (周期) were so small as to be below some specified threshold
- the percentage of samples misclassified in the previous epoch is below some threshold
- a pre-specified number of epochs has expired

Backpropagation Error - Output Layer
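The per-node computations and terminating conditions above can be sketched as a minimal back-propagation network. This is an illustrative implementation, not code from the lecture: the 2-2-1 layer sizes, learning rate, seed, and epoch limit are all assumed choices, and the OR data from the earlier example is reused as training data.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyBackpropNet:
    """Minimal fully connected, feed-forward 2-2-1 network trained by error back-propagation."""

    def __init__(self, lr=0.5, seed=0):
        rnd = random.Random(seed)
        self.lr = lr
        # Initialize weights (and biases) with small random values.
        self.w_h = [[rnd.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]
        self.b_h = [rnd.uniform(-0.5, 0.5) for _ in range(2)]
        self.w_o = [rnd.uniform(-0.5, 0.5) for _ in range(2)]
        self.b_o = rnd.uniform(-0.5, 0.5)

    def forward(self, x):
        # Steps 1-2: net input via the summation function, output via the sigmoid.
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w_h, self.b_h)]
        self.o = sigmoid(sum(w * h for w, h in zip(self.w_o, self.h)) + self.b_o)
        return self.o

    def train_one(self, x, target):
        o = self.forward(x)
        # Step 3: error terms for sigmoid units (output layer, then hidden layer).
        delta_o = o * (1 - o) * (target - o)
        delta_h = [h * (1 - h) * self.w_o[j] * delta_o for j, h in enumerate(self.h)]
        # Step 4: update weights and biases based on the error.
        for j in range(2):
            self.w_o[j] += self.lr * delta_o * self.h[j]
            for i in range(2):
                self.w_h[j][i] += self.lr * delta_h[j] * x[i]
            self.b_h[j] += self.lr * delta_h[j]
        self.b_o += self.lr * delta_o

def train(net, data, epochs=10000):
    # Step 5: here the simplest terminating condition, a fixed epoch count,
    # is used; a weight-change or misclassification threshold works too.
    for _ in range(epochs):
        for x, target in data:
            net.train_one(x, target)
```

Note that the weights feeding each hidden node are updated using the error propagated back from the output layer, which is what gives the algorithm its name.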

Backpropagation Error - Hidden Layer

The Delta Rule

Root Mean Squared Error

3. Back-propagation (cont.)
To increase network accuracy and training speed, adjust:
- Network topology:
  - number of nodes in the input layer
  - number of hidden layers (usually one, no more than two)
  - number of nodes in each hidden layer
  - number of nodes in the output layer
- Initial weights, learning parameter, terminating condition
Training process:
- Feed the training instances
- Determine the output error
- Update the weights
- Repeat until the terminating condition is met

Supervised Learning with Feed-Forward Networks - Backpropagation Learning
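The equation slides above (Backpropagation Error for the output and hidden layers, the Delta Rule, and Root Mean Squared Error) lost their formulas in extraction. The forms below are the standard ones for sigmoid units and may differ in notation from the original slides:

```latex
% Output-layer error for output node k (sigmoid activation, target T_k):
\delta_k = O_k\,(1 - O_k)\,(T_k - O_k)
% Hidden-layer error for hidden node j, summing over the output nodes it feeds:
\delta_j = O_j\,(1 - O_j) \sum_k w_{jk}\,\delta_k
% Delta rule weight update with learning rate r:
w_{ij} \leftarrow w_{ij} + r\,\delta_j\,O_i
% Root mean squared error over n instances:
\mathrm{RMSE} = \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} (T_i - O_i)^2}
```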

Summary: Decisions the builder must make
- Network topology: number of hidden layers, number of nodes in each layer, and feedback
- Learning algorithm
- Parameters: initial weights, learning rate
- Size of the training and test data
The structure and parameters determine the length of training time and the accuracy of the network.

Neural Network Input Format (normalization: categorical to numerical)
1. All inputs and outputs must be numerical and between [0, 1].
2. Categorical attributes, e.g. an attribute with 4 possible values:
   - Ordinal: set to 0, 0.33, 0.66, 1
   - Nominal: set to [0,0], [0,1], [1,0], [1,1]
3. Numerical attributes: rescale into [0, 1]

Neural Network Output Format
1. Categorical attributes (numerical back to categorical): e.g. with two types coded 0 and 1, a raw output such as 0.45 is assigned to the nearer type.
2. Numerical attributes ([0, 1] back to the original range): Min + X * (Max - Min)

Homework
P264, Computational Questions - 2: r = 0.5, Tk = 0.65. Adjust all weights for one epoch.
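The input and output conversions above can be sketched as small helper functions. The ordinal/nominal encodings and the Min + X*(Max - Min) inverse follow the slides; the min-max rescaling for numerical inputs is the natural inverse of that formula, and the function names are illustrative:

```python
def encode_ordinal(value, ordered_values):
    """Map an ordinal categorical value onto evenly spaced points in [0, 1],
    e.g. 4 values -> 0, 0.33..., 0.66..., 1 as on the slide."""
    return ordered_values.index(value) / (len(ordered_values) - 1)

def encode_nominal(value, values):
    """Map a nominal value to a binary code, e.g. 4 values -> [0,0], [0,1], [1,0], [1,1]."""
    n_bits = max(1, (len(values) - 1).bit_length())
    i = values.index(value)
    return [(i >> b) & 1 for b in reversed(range(n_bits))]

def scale(x, lo, hi):
    """Min-max scale a numerical attribute into [0, 1]."""
    return (x - lo) / (hi - lo)

def unscale(y, lo, hi):
    """Inverse transform for a numerical output: Min + X * (Max - Min)."""
    return lo + y * (hi - lo)
```

For example, a network output of 0.5 for an attribute originally ranging from 50 to 100 converts back to 75.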

Case Study: Bankruptcy Prediction with Neural Networks
- Structure: three-layer network, back-propagation
- Training data: a small set of well-known financial ratios
- Data available on bankruptcy outcomes
- Supervised network

Architecture of the Bankruptcy Prediction Neural Network
[Figure: five input nodes X1-X5 feed a hidden layer and a single output node; bankrupt = 0, not bankrupt = 1]

Bankruptcy Prediction: Network architecture
Five input nodes:
- X1: Working capital / total assets
- X2: Retained earnings / total assets
- X3: Earnings before interest and taxes / total assets
- X4: Market value of equity / total debt
- X5: Sales / total assets
Single output node: the final classification for each firm - bankruptcy or nonbankruptcy
Development tool: NeuroShell

Development
- Three-layer network with back-error propagation (Turban, figure 17.12, p. 669)
- Continuous-valued input
- Single output node: 0 = bankrupt, 1 = not bankrupt (nonbankruptcy)
Training
- Data set: 129 firms
- Training set: 74 firms (38 bankrupt, 36 not)
Testing
- Test data set: 55 firms (27 bankrupt, 28 nonbankrupt)
- The neural network correctly predicted 81.5 percent of bankrupt cases and 82.1 percent of nonbankrupt cases.
