HIT (Harbin Institute of Technology) Machine Learning: Past Exam Questions


1 Give the definitions or your comprehensions of the following terms. (12)
1.1 The inductive learning hypothesis (P17)
1.2 Overfitting (P49)
1.4 Consistent learner (P148)

2 Give brief answers to the following questions. (15)
2.2 If the size of a version space is |VS|, in general what is the smallest number of queries that may be required by a concept learner using an optimal query strategy to perfectly learn the target concept? (P27)
2.3 In general, decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances. What expression does the following decision tree correspond to?

[Decision tree figure: the root tests Outlook (Sunny / Overcast / Rain); the Sunny branch tests Humidity (High / Normal), the Overcast branch is a Yes leaf, and the Rain branch tests Wind (Strong / Weak); the Yes/No placement of the remaining leaves was lost in extraction.]

3 Explain inductive bias, and list the inductive bias of the CANDIDATE-ELIMINATION algorithm, of decision tree learning (ID3), and of the BACKPROPAGATION algorithm. (10)

4 How to solve overfitting in decision trees and neural networks? (10)
Solution:
- Decision tree:
  - stop growing the tree earlier
  - post-pruning
- Neural network:
  - weight decay
  - use of a validation set

5 Prove that the LMS weight update rule performs a gradient descent to minimize the squared error. In particular, define the squared error E as in the text. Now calculate the derivative of E with respect to the weight w_i, assuming that V_hat is a linear function as defined in the text. Gradient descent is achieved by updating each weight in proportion to -∂E/∂w_i. Therefore, you must show that the LMS training rule alters weights in this proportion for each training example it encounters. (8)
Solution: Taking V_train(b) ≡ V_hat(Successor(b)) as in the text, the squared error is
  E = Σ_b (V_train(b) - V_hat(b))².
Since V_hat(b) = Σ_i w_i x_i is linear in the weights,
  ∂E/∂w_i = Σ_b 2 (V_train(b) - V_hat(b)) (-x_i).
As mentioned in the text, the LMS rule is w_i ← w_i + η (V_train(b) - V_hat(b)) x_i, so each weight is altered in proportion to (V_train(b) - V_hat(b)) x_i, i.e., in proportion to -∂E/∂w_i. Therefore gradient descent is achieved by updating each weight in this proportion, and the LMS rule alters weights in this proportion for each training example it encounters.

6 True or false: if decision tree D2 is an elaboration of tree D1, then D1 is more_general_than D2. Assume D1 and D2 are decision trees representing arbitrary boolean functions, and that D2 is an elaboration of D1 if ID3 could extend D1 to D2. If true give a proof; if false, a counterexample. (Definition: let h_j and h_k be boolean-valued functions defined over X; then h_j is more_general_than_or_equal_to h_k, written h_j ≥_g h_k, if and only if (∀x ∈ X)[(h_k(x) = 1) → (h_j(x) = 1)].) (10)
Solution: The hypothesis is false. One counterexample is the target concept A XOR B: the training examples are all positive when A ≠ B, and all negative when A = B. Then, using ID3 to extend D1, the new tree D2 will be equivalent to D1, i.e., D2 is equal to D1, so D1 is not strictly more general than D2.

7 Design a two-input perceptron that implements the boolean function [formula missing in the source]. Design a two-layer network of perceptrons that implements [formula missing in the source]. (10)

8 Suppose a hypothesis space contains three hypotheses h1, h2 and h3, and the posterior probabilities of these hypotheses given the training data are 0.4, 0.3 and 0.3 respectively. A new instance x is encountered, which is classified positive by h1 but negative by h2 and h3. Give the result and the detailed classification procedure of the Bayes optimal classifier. (10) (P125)

9 Suppose S is a collection of training-example days described by attributes including Humidity, which can have the values High or Normal. Assume S is a collection containing 10 examples, [7+, 3-]. Of these 10 examples, suppose 3 of the positive and 2 of the negative examples have Humidity = High, and the remainder have Humidity = Normal. Please calculate the information gain due to sorting the original 10 examples by the attribute Humidity. (Given: log2(1)=0, log2(2)=1, log2(3)=1.58, log2(4)=2, log2(5)=2.32, log2(6)=2.58, log2(7)=2.8, log2(8)=3, log2(9)=3.16, log2(10)=3.32) (5)
Solution:
(a) With S = [7+, 3-]: Entropy(S) = -(7/10) log2(7/10) - (3/10) log2(3/10) = 0.886.
(b) Values(Humidity) = {High, Normal}, with S_High = [3+, 2-] (5 examples) and S_Normal = [4+, 1-] (5 examples). Thus
Gain(S, Humidity) = Entropy(S) - (5/10) Entropy(S_High) - (5/10) Entropy(S_Normal) = 0.886 - 0.5 × 0.972 - 0.5 × 0.72 = 0.04.

10 Finish the following algorithms. (10)
(1) GRADIENT-DESCENT(training_examples, η)
Each training example is a pair of the form ⟨x, t⟩, where x is the vector of input values and t is the target output value. η is the learning rate (e.g., 0.05).
- Initialize each w_i to some small random value
- Until the termination condition is met, Do
  - Initialize each Δw_i to zero
  - For each ⟨x, t⟩ in training_examples, Do
    - Input the instance x to the unit and compute the output o
    - For each linear unit weight w_i, Do: Δw_i ← Δw_i + η (t - o) x_i
  - For each linear unit weight w_i, Do: w_i ← w_i + Δw_i
(2) FIND-S Algorithm
- Initialize h to the most specific hypothesis in H
- For each positive training instance x
  - For each attribute constraint a_i in h:
    If the constraint a_i is satisfied by x, then do nothing;
    Else replace a_i in h by the next more general constraint that is satisfied by x
- Output hypothesis h

1. What is the definition of a learning problem? (5) Use "a checkers learning problem" as an example to state how to design a learning system. (15)
Answer: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
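The LMS argument in question 5 can be checked numerically. The sketch below (plain Python; the weight vector, feature vector, and training value are made-up illustrative numbers, not from the exam) applies one LMS step to a single example and confirms that the example's squared error decreases, as a gradient step should:

```python
def v_hat(w, x):
    """Linear evaluation function V_hat(b) = sum_i w_i * x_i."""
    return sum(wi * xi for wi, xi in zip(w, x))

def lms_update(w, x, v_train, eta):
    """LMS rule: w_i <- w_i + eta * (V_train(b) - V_hat(b)) * x_i."""
    err = v_train - v_hat(w, x)
    return [wi + eta * err * xi for wi, xi in zip(w, x)]

# Illustrative numbers: one board's features x, its training value
# V_train, and a starting weight vector w.
w, x, v_train, eta = [0.5, -0.2, 0.1], [1.0, 3.0, -2.0], 4.0, 0.05
e_before = (v_train - v_hat(w, x)) ** 2
w2 = lms_update(w, x, v_train, eta)
e_after = (v_train - v_hat(w2, x)) ** 2
print(e_before > e_after)   # True: the LMS step reduced the squared error
```

For one example the new error is (1 - η‖x‖²) times the old one, so any sufficiently small η shrinks it.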
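The specific boolean functions in question 7 were lost from the source, so the sketch below is only a hedged illustration of the general recipe: AND as a single two-input threshold unit, and XOR, which is not linearly separable and therefore needs a two-layer network. The weights and thresholds shown are one valid choice, not the only one:

```python
def perceptron(weights, threshold, inputs):
    """Threshold unit: output 1 if w . x > threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def AND(a, b):
    # A single perceptron with w1 = w2 = 1 and threshold 1.5 implements AND.
    return perceptron((1, 1), 1.5, (a, b))

def XOR(a, b):
    # Hidden layer computes (a OR b) and NOT(a AND b); the output unit
    # ANDs the two hidden outputs, giving XOR.
    or_out = perceptron((1, 1), 0.5, (a, b))
    nand_out = perceptron((-1, -1), -1.5, (a, b))
    return perceptron((1, 1), 1.5, (or_out, nand_out))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), XOR(a, b))
```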
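The Bayes optimal classification in question 8 amounts to summing the posterior mass of the hypotheses voting for each class. With posteriors 0.4/0.3/0.3 and a single positive vote, this minimal sketch gives P(+|D) = 0.4 against P(-|D) = 0.6, so the Bayes optimal classification is negative:

```python
# Posteriors P(h|D) for the three hypotheses, and each hypothesis's
# classification of the new instance (True = positive), as in question 8.
posteriors = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
predicts_positive = {"h1": True, "h2": False, "h3": False}

p_pos = sum(p for h, p in posteriors.items() if predicts_positive[h])
p_neg = sum(p for h, p in posteriors.items() if not predicts_positive[h])

# Bayes optimal classification: the class with the larger total posterior.
label = "positive" if p_pos > p_neg else "negative"
print(p_pos, p_neg, label)   # 0.4 0.6 negative
```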
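The arithmetic in question 9's solution uses the rounded log2 table supplied with the exam (giving 0.886 and 0.04). With exact logarithms the same computation gives Entropy(S) ≈ 0.881 and Gain ≈ 0.035, which agrees to the precision of the table:

```python
from math import log2

def entropy(pos, neg):
    """Entropy of a boolean-labeled sample [pos+, neg-]."""
    total = pos + neg
    return -sum((c / total) * log2(c / total) for c in (pos, neg) if c)

# S = [7+, 3-]; Humidity = High -> [3+, 2-], Humidity = Normal -> [4+, 1-]
entropy_s = entropy(7, 3)
gain = entropy_s - (5 / 10) * entropy(3, 2) - (5 / 10) * entropy(4, 1)
print(round(entropy_s, 3), round(gain, 3))   # 0.881 0.035
```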
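The completed GRADIENT-DESCENT pseudocode in question 10 translates almost line for line into code. The sketch below is one such translation for a linear unit; the toy target t = 2*x1 - 1, the bias input, and the training settings are illustrative assumptions, not part of the exam:

```python
import random

def gradient_descent(training_examples, eta=0.05, epochs=500):
    """Batch GRADIENT-DESCENT for a linear unit, following the completed
    pseudocode: accumulate delta_w_i += eta*(t - o)*x_i over all examples,
    then apply w_i += delta_w_i once per pass."""
    n = len(training_examples[0][0])
    w = [random.uniform(-0.05, 0.05) for _ in range(n)]   # small random init
    for _ in range(epochs):
        delta = [0.0] * n
        for x, t in training_examples:
            o = sum(wi * xi for wi, xi in zip(w, x))      # linear unit output
            for i in range(n):
                delta[i] += eta * (t - o) * x[i]
        for i in range(n):
            w[i] += delta[i]
    return w

# Toy target t = 2*x1 - 1, with a constant bias input x0 = 1.
data = [((1.0, x1), 2 * x1 - 1) for x1 in (-2.0, -1.0, 0.0, 1.0, 2.0)]
w = gradient_descent(data)
print([round(wi, 2) for wi in w])   # approximately [-1.0, 2.0]
```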
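The FIND-S algorithm completed in question 10 can likewise be sketched for conjunctive hypotheses over discrete attributes; the EnjoySport-style training examples below are invented for illustration:

```python
def find_s(examples):
    """FIND-S over conjunctive hypotheses: each attribute is constrained to
    a specific value or '?' (any value). Start from the most specific
    hypothesis and minimally generalize on each positive example."""
    n = len(examples[0][0])
    h = ["0"] * n                     # "0" = the empty (most specific) constraint
    for x, label in examples:
        if not label:                 # FIND-S ignores negative examples
            continue
        for i, (ci, xi) in enumerate(zip(h, x)):
            if ci == "0":
                h[i] = xi             # first positive example: copy its values
            elif ci != xi:
                h[i] = "?"            # generalize a mismatching constraint
    return h

# Illustrative EnjoySport-style data (attribute values are made up).
examples = [
    (("Sunny", "Warm", "Normal", "Strong"), True),
    (("Sunny", "Warm", "High", "Strong"), True),
    (("Rainy", "Cold", "High", "Strong"), False),
]
print(find_s(examples))   # ['Sunny', 'Warm', '?', 'Strong']
```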
