4.1 What are the values of weights w0, w1, and w2 for the perceptron whose decision surface is illustrated in Figure 4.3? Assume the surface crosses the x1 axis at -1 and the x2 axis at 2.
Ans. The equation of the decision surface is 2 + 2x1 - x2 = 0, so w0 = 2, w1 = 2, w2 = -1.
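A quick numeric check of this answer (a minimal Python sketch; the helper name surface is my own):

    # Verify that w = (2, 2, -1) reproduces the stated axis crossings.
    w0, w1, w2 = 2, 2, -1

    def surface(x1, x2):
        return w0 + w1 * x1 + w2 * x2

    assert surface(-1, 0) == 0   # crosses the x1 axis at x1 = -1
    assert surface(0, 2) == 0    # crosses the x2 axis at x2 = 2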
4.2. Design a two-input perceptron that implements the boolean function A ∧ ¬B. Design a two-layer network of perceptrons that implements A XOR B.
Ans. We assume 1 for true, -1 for false.
(1) A ∧ ¬B: w0 = -0.8, w1 = 0.5, w2 = -0.5.

    x1(A)   x2(B)   w0 + w1*x1 + w2*x2   output
     -1      -1            -0.8            -1
     -1       1            -1.8            -1
      1      -1             0.2             1
      1       1            -0.8            -1
(2) A XOR B = (A ∧ ¬B) ∨ (¬A ∧ B). The weights are:
Hidden unit 1 (computes A ∧ ¬B): w0 = -0.8, w1 = 0.5, w2 = -0.5
Hidden unit 2 (computes ¬A ∧ B): w0 = -0.8, w1 = -0.5, w2 = 0.5
Output unit (computes OR): w0 = 0.3, w1 = 0.5, w2 = 0.5

    x1(A)   x2(B)   hidden unit 1   hidden unit 2   output
     -1      -1          -1              -1           -1
     -1       1          -1               1            1
      1      -1           1              -1            1
      1       1          -1              -1           -1
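A short sketch that evaluates this two-layer network on all four inputs (the sgn activation and the 1/-1 encoding are as above; the function names are my own):

    # Two-layer perceptron network computing A XOR B with the weights above.
    def sgn(v):
        return 1 if v > 0 else -1

    def perceptron(w0, w1, w2, x1, x2):
        return sgn(w0 + w1 * x1 + w2 * x2)

    def xor_net(a, b):
        h1 = perceptron(-0.8, 0.5, -0.5, a, b)    # A AND (NOT B)
        h2 = perceptron(-0.8, -0.5, 0.5, a, b)    # (NOT A) AND B
        return perceptron(0.3, 0.5, 0.5, h1, h2)  # h1 OR h2

    for a in (-1, 1):
        for b in (-1, 1):
            print(a, b, xor_net(a, b))  # outputs 1 exactly when a != b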
4.3. Consider two perceptrons defined by the threshold expression w0 + w1x1 + w2x2 > 0. Perceptron A has weight values w0 = 1, w1 = 2, w2 = 1, and perceptron B has weight values w0 = 0, w1 = 2, w2 = 1. True or false? Perceptron A is more_general_than perceptron B. (more_general_than is defined in Chapter 2.)
Ans. True. For each input instance x = (x1, x2), if x is satisfied by B, which means 2x1 + x2 > 0, then 2x1 + x2 + 1 > 0 as well. Hence x is also satisfied by A.
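The implication can also be spot-checked numerically (a sketch only; the one-line argument above is the actual proof, and the sampling range is an arbitrary choice of mine):

    # Whenever perceptron B fires, perceptron A must fire too.
    import random

    def fires(w0, w1, w2, x1, x2):
        return w0 + w1 * x1 + w2 * x2 > 0

    random.seed(0)
    for _ in range(10_000):
        x1, x2 = random.uniform(-10, 10), random.uniform(-10, 10)
        if fires(0, 2, 1, x1, x2):           # x satisfied by B
            assert fires(1, 2, 1, x1, x2)    # then x is satisfied by A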
4.5. Derive a gradient descent training rule for a single unit with output o, where

    o = w0 + w1x1 + w1x1^2 + w2x2 + w2x2^2 + ... + wnxn + wnxn^2

Ans. With the squared training error E(w) = (1/2) Σ_{d∈D} (t_d - o_d)^2, the gradient is

    ∇E(w) = [∂E/∂w0, ∂E/∂w1, ..., ∂E/∂wn]

For each weight wi with i >= 1 (note that ∂o_d/∂wi = x_id + x_id^2, while ∂o_d/∂w0 = 1):

    ∂E/∂wi = ∂/∂wi (1/2) Σ_{d∈D} (t_d - o_d)^2
           = Σ_{d∈D} (t_d - o_d) · ∂(t_d - o_d)/∂wi
           = -Σ_{d∈D} (t_d - o_d) (x_id + x_id^2)

The training rule for gradient descent is wi ← wi + Δwi, where

    Δwi = -η ∂E/∂wi = η Σ_{d∈D} (t_d - o_d) (x_id + x_id^2)
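The derived rule is easy to run. Below is a minimal batch-gradient-descent sketch on synthetic data; the learning rate eta, the toy dataset, and the "true" weights are illustrative choices of mine, not part of the exercise:

    # Batch gradient descent for o = w0 + sum_i wi*(xi + xi^2),
    # using delta_wi = eta * sum_d (t_d - o_d) * (x_id + x_id^2).
    import random

    random.seed(1)
    n = 3
    true_w = [0.5, -1.0, 2.0, 0.3]  # hypothetical target weights [w0..wn]
    data = []
    for _ in range(50):
        x = [random.uniform(-1, 1) for _ in range(n)]
        t = true_w[0] + sum(true_w[i+1] * (x[i] + x[i]**2) for i in range(n))
        data.append((x, t))

    w = [0.0] * (n + 1)
    eta = 0.01
    for _ in range(2000):
        delta = [0.0] * (n + 1)              # accumulate over all of D
        for x, t in data:
            o = w[0] + sum(w[i+1] * (x[i] + x[i]**2) for i in range(n))
            delta[0] += eta * (t - o)        # since d(o_d)/d(w0) = 1
            for i in range(n):
                delta[i+1] += eta * (t - o) * (x[i] + x[i]**2)
        w = [wi + dwi for wi, dwi in zip(w, delta)]

    print(w)  # converges toward true_w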