Statistics and Finance: An Introduction (Solutions Manual)


David Ruppert
Statistics and Finance: An Introduction
Solutions Manual
July 9, 2004
Springer
Berlin Heidelberg New York Hong Kong London Milan Paris Tokyo

2 Probability and Statistical Models

1. (a) $E(0.1X + 0.9Y) = 1$.
\[
\operatorname{Var}(0.1X + 0.9Y) = (0.1^2)(2) + 2(0.1)(0.9)(1) + (0.9^2)(3) = 2.63.
\]
(b) $\operatorname{Var}\{wX + (1-w)Y\} = 3w^2 - 4w + 3$. The derivative of this expression is $6w - 4$. Setting this derivative equal to 0 gives us $w = 2/3$. The second derivative is positive, so the solution must be a minimum. In this problem, assets X and Y have the same expected return. This means that regardless of the choice of $w$, that is, the asset allocation, the expected return of the portfolio does not change. So by minimizing the variance, we can reduce the risk without reducing the return. Thus the weight $w = 2/3$ corresponds to the optimal portfolio.
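As a numerical sanity check, the following Python sketch reproduces both results; the covariance matrix (Var(X) = 2, Var(Y) = 3, Cov(X, Y) = 1) is the one implied by the arithmetic in part (a).

import numpy as np

# Covariance matrix implied by part (a): Var(X) = 2, Var(Y) = 3, Cov(X, Y) = 1
Sigma = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

# Part (a): variance of the portfolio 0.1*X + 0.9*Y
w_a = np.array([0.1, 0.9])
print(w_a @ Sigma @ w_a)              # 2.63

# Part (b): portfolio variance 3w^2 - 4w + 3 minimized over a fine grid of w
w = np.linspace(0.0, 1.0, 100001)
var_w = 3.0 * w**2 - 4.0 * w + 3.0
print(w[np.argmin(var_w)])            # approximately 2/3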

2. (a) Use (2.54) with $w_1 = (1\ \ 1)^T$ and $w_2 = (1\ \ 1)^T$.
(b) Use part (a) and the facts that $\operatorname{Cov}(\beta_1 X, \beta_2 X) = \beta_1 \beta_2 \sigma^2_X$, $\operatorname{Cov}(\beta_1 X, Z) = 0$, $\operatorname{Cov}(Y, \beta_2 X) = 0$, and $\operatorname{Cov}(Y, Z) = 0$.
(c) Using (2.54) with $w_1$ and $w_2$ the appropriately-sized vectors of ones, it can be shown that
\[
\operatorname{Cov}\!\left(\sum_{i=1}^{n_1} X_i, \sum_{i'=1}^{n_2} Y_{i'}\right) = \sum_{i=1}^{n_1} \sum_{i'=1}^{n_2} \operatorname{Cov}(X_i, Y_{i'}).
\]

3. The likelihood is
\[
L(\sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left\{-\frac{1}{2\sigma^2}(Y_i - \mu)^2\right\}.
\]
Therefore the log-likelihood is
\[
\log L(\sigma^2) = -\frac{1}{2\sigma^2}\sum_{i=1}^{n}(Y_i - \mu)^2 - n\log(\sigma^2)/2 + H,
\]
where $H$ consists of all terms that do not depend on $\sigma^2$. Differentiating the log-likelihood with respect to $\sigma^2$ and setting the derivative with respect to $\sigma^2$ equal to zero, we get
\[
\frac{1}{2(\sigma^2)^2}\sum_{i=1}^{n}(Y_i - \mu)^2 - \frac{n}{2\sigma^2} = 0,
\]
whose solution is
\[
\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \mu)^2.
\]
(The solution to this problem is algebraically simpler if we treat $\sigma^2$ rather than $\sigma$ as the parameter.)
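The closed-form estimate can be checked against a direct numerical maximization of the log-likelihood in $\sigma^2$; the sketch below assumes, purely for illustration, a known mean $\mu = 0$ and simulated data.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
mu, sigma2_true, n = 0.0, 4.0, 500
Y = rng.normal(mu, np.sqrt(sigma2_true), n)

# Closed-form MLE: (1/n) * sum of (Y_i - mu)^2
sigma2_hat = np.mean((Y - mu) ** 2)

# Negative log-likelihood as a function of sigma^2 (constant terms dropped)
def negloglik(s2):
    return 0.5 * np.sum((Y - mu) ** 2) / s2 + 0.5 * n * np.log(s2)

opt = minimize_scalar(negloglik, bounds=(1e-6, 100.0), method="bounded")
print(sigma2_hat, opt.x)   # the two estimates agree closely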

4. Rearranging the first equation, we get
\[
\beta_0 = E(Y) - \beta_1 E(X). \tag{2.1}
\]
Substituting this into the second equation and rearranging, we get $E(XY) - E(X)E(Y) = \beta_1\{E(X^2) - E(X)^2\}$. Then using $\sigma_{XY} = E(XY) - E(X)E(Y)$ and $\sigma^2_X = E(X^2) - E(X)^2$, we get $\beta_1 = \sigma_{XY}/\sigma^2_X$, and substituting this into (2.1) we get
\[
\beta_0 = E(Y) - \frac{\sigma_{XY}}{\sigma^2_X} E(X).
\]
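These moment formulas can be verified by comparing them with an ordinary least-squares fit on simulated data; in the sketch below the data-generating intercept, slope, and noise level are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
X = rng.normal(2.0, 1.5, n)                    # arbitrary regressor distribution
Y = 1.0 + 0.7 * X + rng.normal(0.0, 1.0, n)    # arbitrary intercept, slope, noise

# Moment-based formulas from the solution
sigma_xy = np.cov(X, Y, ddof=0)[0, 1]
beta1 = sigma_xy / np.var(X)
beta0 = np.mean(Y) - beta1 * np.mean(X)

# Least-squares fit for comparison; np.polyfit returns (slope, intercept)
b1_ls, b0_ls = np.polyfit(X, Y, 1)
print(beta0, beta1)
print(b0_ls, b1_ls)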

5.
\[
E(w^T X) = E\!\left(\sum_{i=1}^{N} w_i X_i\right) = \sum_{i=1}^{N} w_i E(X_i) = w^T E(X).
\]
Next,
\begin{align*}
\operatorname{Var}(w^T X) &= E\{w^T X - E(w^T X)\}^2 = E\left[\sum_{i=1}^{N} w_i\{X_i - E(X_i)\}\right]^2 \\
&= \sum_{i=1}^{N}\sum_{j=1}^{N} E\big[w_i w_j\{X_i - E(X_i)\}\{X_j - E(X_j)\}\big] = \sum_{i=1}^{N}\sum_{j=1}^{N} w_i w_j \operatorname{Cov}(X_i, X_j).
\end{align*}
One can easily check that for any $N \times 1$ vector $X$ and $N \times N$ matrix $A$,
\[
X^T A X = \sum_{i=1}^{N}\sum_{j=1}^{N} X_i X_j A_{ij},
\]
whence $w^T \operatorname{COV}(X)\, w = \sum_{i=1}^{N}\sum_{j=1}^{N} w_i w_j \operatorname{Cov}(X_i, X_j)$.
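The identity $\operatorname{Var}(w^T X) = w^T \operatorname{COV}(X)\, w = \sum_i\sum_j w_i w_j \operatorname{Cov}(X_i, X_j)$ is easy to confirm numerically; the weight vector and covariance matrix below are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
N = 4
w = rng.normal(size=N)            # arbitrary weight vector
A = rng.normal(size=(N, N))
Sigma = A @ A.T                   # an arbitrary positive semidefinite covariance matrix

# Quadratic form w' Sigma w
quad_form = w @ Sigma @ w

# Double sum over i and j of w_i * w_j * Cov(X_i, X_j)
double_sum = sum(w[i] * w[j] * Sigma[i, j] for i in range(N) for j in range(N))

print(quad_form, double_sum)      # identical up to rounding error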

6. Since
\[
\log L(\mu, \sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(Y_i - \mu)^2
\]
and $\sum_{i=1}^{n}(Y_i - \bar Y)^2 / \hat\sigma^2_{ML} = n$, it follows that
\[
\log L(\bar Y, \hat\sigma^2_{ML}) = -\frac{n}{2}\left\{1 + \log(2\pi) + \log(\hat\sigma^2_{ML})\right\}.
\]
Next, the solution to
\[
0 = \frac{\partial}{\partial \sigma^2}\left\{\log L(0, \sigma^2)\right\}
= \frac{\partial}{\partial \sigma^2}\left\{-\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n} Y_i^2\right\}
= -\frac{n}{2\sigma^2} + \frac{1}{2(\sigma^2)^2}\sum_{i=1}^{n} Y_i^2
\]
solves $n\sigma^2 = \sum_{i=1}^{n} Y_i^2$, so that
\[
\hat\sigma^2_{0,ML} = \frac{1}{n}\sum_{i=1}^{n} Y_i^2.
\]
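Both conclusions can be checked on simulated data: the profiled log-likelihood should equal $-\tfrac{n}{2}\{1 + \log(2\pi) + \log(\hat\sigma^2_{ML})\}$, and the restricted MLE under $\mu = 0$ should equal the mean of the $Y_i^2$. The sample below is an arbitrary illustration.

import numpy as np

rng = np.random.default_rng(3)
n = 1_000
Y = rng.normal(0.5, 2.0, n)       # arbitrary simulated sample

def loglik(mu, s2):
    return (-0.5 * n * np.log(2 * np.pi) - 0.5 * n * np.log(s2)
            - 0.5 * np.sum((Y - mu) ** 2) / s2)

# Unrestricted MLEs versus the profiled expression
sigma2_ml = np.mean((Y - Y.mean()) ** 2)
print(loglik(Y.mean(), sigma2_ml),
      -0.5 * n * (1 + np.log(2 * np.pi) + np.log(sigma2_ml)))

# Restricted MLE under mu = 0: a grid search agrees with mean(Y^2)
grid = np.linspace(0.1, 20.0, 20_000)
print(grid[np.argmax([loglik(0.0, s2) for s2 in grid])], np.mean(Y ** 2))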

7. (a) $E\{X - E(X)\} = E(X) - E\{E(X)\} = E(X) - E(X) = 0$.
(b) By independence of $X - E(X)$ and $Y - E(Y)$ we have
\[
E\big[\{X - E(X)\}\{Y - E(Y)\}\big] = E\{X - E(X)\}\, E\{Y - E(Y)\} = 0 \cdot 0 = 0.
\]

8. (a) Since
\[
\hat Y = E(Y) + \frac{\sigma_{XY}}{\sigma^2_X}\{X - E(X)\}
\]
and $E\{X - E(X)\} = 0$ by Problem 7, it follows that $E(\hat Y) = E(Y)$, so that $E(\hat Y - Y) = E(\hat Y) - E(Y) = 0$.
(b)
\begin{align*}
E(Y - \hat Y)^2 &= E\left[\{Y - E(Y)\}^2 + \frac{\sigma^2_{XY}}{\sigma^4_X}\{X - E(X)\}^2 - \frac{2\sigma_{XY}}{\sigma^2_X}\{Y - E(Y)\}\{X - E(X)\}\right] \\
&= \sigma^2_Y + \frac{\sigma^2_{XY}}{\sigma^2_X} - \frac{2\sigma^2_{XY}}{\sigma^2_X}
= \sigma^2_Y\left(1 - \frac{\sigma^2_{XY}}{\sigma^2_X \sigma^2_Y}\right)
= \sigma^2_Y\left(1 - \rho^2_{XY}\right).
\end{align*}
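A Monte Carlo check of both parts; the bivariate normal mean vector and covariance matrix below are arbitrary, with $\rho^2_{XY} = 1.2^2/(2 \cdot 3) = 0.24$, so the prediction error variance should be near $3 \times 0.76 = 2.28$.

import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
mean = np.array([1.0, -2.0])                  # arbitrary E(X), E(Y)
Sigma = np.array([[2.0, 1.2],
                  [1.2, 3.0]])                # arbitrary covariance matrix
X, Y = rng.multivariate_normal(mean, Sigma, size=n).T

# Best linear predictor Yhat = E(Y) + (sigma_XY / sigma_X^2) * (X - E(X))
Yhat = mean[1] + (Sigma[0, 1] / Sigma[0, 0]) * (X - mean[0])

rho2 = Sigma[0, 1] ** 2 / (Sigma[0, 0] * Sigma[1, 1])
print(np.mean(Yhat - Y))                                    # close to 0
print(np.mean((Y - Yhat) ** 2), Sigma[1, 1] * (1 - rho2))   # both close to 2.28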

9.
\[
E(XY) = E(X^3) = \int_{-a}^{a} \frac{x^3}{2a}\,dx = \left.\frac{x^4}{8a}\right|_{-a}^{a} = 0
\]
and $E(X) = 0$, so that $\sigma_{XY} = E(XY) - E(X)E(Y) = 0 - 0 = 0$. Since $Y$ is determined by $X$, one suspects that $X$ and $Y$ are not independent. This can be proved by finding sets $A_1$ and $A_2$ such that $P\{X \in A_1 \text{ and } Y \in A_2\} \ne P\{X \in A_1\}\,P\{Y \in A_2\}$. This is easy. For example,
\[
1/2 = P\{|X| \le a/2 \text{ and } Y \le (a/2)^2\} \ne P\{|X| \le a/2\}\,P\{Y \le (a/2)^2\} = (1/2)(1/2).
\]
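A simulation with $X$ uniform on $(-a, a)$ and $Y = X^2$ illustrates the same point: the sample correlation is essentially zero while the joint probability above is about $1/2$ rather than $1/4$. The value $a = 1$ is an arbitrary choice.

import numpy as np

rng = np.random.default_rng(5)
a, n = 1.0, 1_000_000
X = rng.uniform(-a, a, n)
Y = X ** 2

print(np.corrcoef(X, Y)[0, 1])                         # close to 0
joint = np.mean((np.abs(X) <= a / 2) & (Y <= (a / 2) ** 2))
marginal = np.mean(np.abs(X) <= a / 2) * np.mean(Y <= (a / 2) ** 2)
print(joint, marginal)                                 # about 0.5 versus about 0.25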

10. There is an error on page 55. The MAP estimator is 4/5, not 5/6. This can be shown by finding the value of $\theta$ that maximizes $f(\theta\,|\,3) = 30\theta^4(1 - \theta)$. Thus, one solves
\[
0 = \frac{d}{d\theta}\left\{30\theta^4(1 - \theta)\right\} = 30\theta^3(4 - 5\theta),
\]
whose solution is $\theta = 4/5$.
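A one-line grid search confirms the maximizer of $30\theta^4(1-\theta)$ on $[0, 1]$:

import numpy as np

theta = np.linspace(0.0, 1.0, 1_000_001)
posterior = 30 * theta**4 * (1 - theta)   # f(theta | 3) from the solution above
print(theta[np.argmax(posterior)])        # 0.8 = 4/5, not 5/6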

11. (a) Since the kurtosis of a $N(\mu, \sigma^2)$ random variable is 3, $E(X - \mu)^4 = 3\sigma^4$. Therefore, for a random variable $X$ that is 95% $N(0,1)$ and 5% $N(0,10^2)$, we have $E(X^4) = (0.95)(3)(1^4) + (0.05)(3)(10^4) = 1502.9$ and $E(X^2) = (0.95)(1^2) + (0.05)(10^2) = 5.95$. Therefore, the kurtosis is $1502.9/5.95^2 = 42.45$.
(b) One has that $E(X^4) = 3p + 3(1-p)\sigma^4$ and $E(X^2) = p + (1-p)\sigma^2$, so that the kurtosis is
\[
\frac{3\{p + (1-p)\sigma^4\}}{p^2 + 2p(1-p)\sigma^2 + (1-p)^2\sigma^4}.
\]
(c) For any fixed value of $p$ less than 1,
\[
\lim_{\sigma \to \infty} \frac{3\{p + (1-p)\sigma^4\}}{p^2 + 2p(1-p)\sigma^2 + (1-p)^2\sigma^4} = \frac{3}{1-p}.
\]
Therefore, by letting $\sigma$ get very large and $p$ get close to 1, the kurtosis can be made arbitrarily large. Suitable $\sigma$ and $p$ such that the kurtosis is greater than 10,000 can be found by fixing $p$ such that $3/(1-p) > 10{,}000$ and then increasing $\sigma$ until the kurtosis exceeds 10,000.
(d) There is an error in the second sentence of part (d). The sentence should be "Show that for any $p_0 < 1$ and any $M > 0$, there exist $p > p_0$ and a $\sigma$ such that the normal mixture with these values of $p$ and $\sigma$ has a kurtosis of at least $M$." This
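The kurtosis formula and the limit in part (c) are easy to check numerically; the final $(p, \sigma)$ pair below is one illustrative choice, not the only one that works.

def mixture_kurtosis(p, sigma):
    """Kurtosis of the normal mixture that is N(0,1) w.p. p and N(0, sigma^2) w.p. 1-p."""
    ex4 = 3 * p + 3 * (1 - p) * sigma**4
    ex2 = p + (1 - p) * sigma**2
    return ex4 / ex2**2

print(mixture_kurtosis(0.95, 10.0))                   # 42.45, matching part (a)
print(mixture_kurtosis(0.95, 1e6), 3 / (1 - 0.95))    # near the limit 3/(1-p) = 60

# Part (c): fix p with 3/(1-p) > 10,000, then increase sigma until the kurtosis exceeds 10,000
p, sigma = 0.9999, 10.0                               # 3/(1-p) = 30,000
while mixture_kurtosis(p, sigma) <= 10_000:
    sigma *= 2
print(p, sigma, mixture_kurtosis(p, sigma))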
