Greene Panel Data Lecture Notes - 24


Econometric Analysis of Panel Data
William Greene, Department of Economics, Stern School of Business

24. Bayesian Econometric Models for Panel Data

Sources
- Lancaster, T., An Introduction to Modern Bayesian Econometrics, Blackwell, 2004
- Koop, G., Bayesian Econometrics, Wiley, 2003
- "Bayesian Methods," "Bayesian Data Analysis" (many books in statistics)
- Papers in marketing: Allenby, Ginter, Lenk, Kamakura
- Papers in statistics: Sid Chib
- Books and papers in econometrics: Arnold Zellner, Gary Koop, Mark Steel, Dale Poirier

Software
- Stata, Limdep, SAS, etc.
- S, R, Matlab, Gauss
- WinBUGS: Bayesian inference Using Gibbs Sampling (on random number generation)
  http://www.mrc-bsu.cam.ac.uk/bugs/welcome.shtml

A Philosophical Underpinning
- A method of using new information to update existing beliefs about probabilities of events
- Bayes' theorem for events (conceived for updating beliefs about games of chance)
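The updating rule described above is Bayes' theorem for events, P(A|B) = P(B|A) P(A) / P(B). A minimal numeric sketch; the prior and the two conditional probabilities below are hypothetical values chosen for illustration, not numbers from the lecture:

```python
def bayes_update(prior, p_b_given_a, p_b_given_not_a):
    """Posterior P(A|B) via Bayes' theorem for events."""
    # Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
    p_b = p_b_given_a * prior + p_b_given_not_a * (1 - prior)
    return p_b_given_a * prior / p_b

# Hypothetical example: P(A) = 0.01, P(B|A) = 0.95, P(B|not A) = 0.10
posterior = bayes_update(0.01, 0.95, 0.10)
print(round(posterior, 4))  # 0.0876
```

Each posterior can serve as the prior for the next batch of evidence, which is the "revise beliefs" loop of the Bayesian paradigm discussed below.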

On Objectivity and Subjectivity
- Objectivity and "frequentist" methods in econometrics: the data speak
- Subjectivity and beliefs: priors, evidence, posteriors
- Science and the scientific method

Paradigms
- Classical
  - Formulate the theory
  - Gather evidence
  - Evidence consistent with theory? The theory stands and waits for more evidence to be gathered
  - Evidence conflicts with theory? The theory falls
- Bayesian
  - Formulate the theory
  - Assemble existing evidence on the theory
  - Form beliefs based on existing evidence
  - Gather new evidence
  - Combine beliefs with the new evidence
  - Revise beliefs regarding the theory

Applications of the Paradigm
- Classical econometricians doggedly cling to their theories even when the evidence conflicts with them; that is what specification searches are all about.
- Bayesian econometricians NEVER incorporate prior evidence in their estimators; priors are always studiously noninformative. (Informative priors taint the analysis.)
- As practiced, Bayesian analysis is not Bayesian.

Likelihoods
- (Frequentist) The likelihood is the density of the observed data conditioned on the parameters. Inference based on the likelihood is usually "maximum likelihood."
- (Bayesian) The likelihood is a function of the parameters and the data that forms the basis for inference, not a probability distribution. The likelihood embodies the current information about the parameters and the data.

The Likelihood Principle
- The likelihood embodies ALL the current information about the parameters and the data.
- Proportional likelihoods should lead to the same inferences.

Application
- (1) 20 Bernoulli trials, 7 successes (binomial)
- (2) N Bernoulli trials until the 7th success (negative binomial)

Inference

The Bayesian Estimator
- The posterior distribution embodies all that is "believed" about the model:
  Posterior = f(model|data) = Likelihood(θ, data) × Prior(θ) / P(data)
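The Application above is the classic demonstration: stopping after 20 trials (binomial) and sampling until the 7th success arrives on trial 20 (negative binomial) give likelihoods that differ only by a constant in p, so by the Likelihood Principle they support the same inferences about p. A small check (function names are mine):

```python
from math import comb

def binom_like(p, n=20, s=7):
    # n fixed in advance; s successes observed
    return comb(n, s) * p**s * (1 - p)**(n - s)

def negbin_like(p, n=20, s=7):
    # Sample until the s-th success, which lands on trial n:
    # the first n-1 trials contain s-1 successes, and trial n is a success.
    return comb(n - 1, s - 1) * p**s * (1 - p)**(n - s)

# The ratio does not depend on p: comb(20, 7) / comb(19, 6) = 20/7.
ratios = [binom_like(p) / negbin_like(p) for p in (0.2, 0.35, 0.5)]
print(ratios)
```

Because the two likelihoods share the kernel p^7 (1 - p)^13, any prior combined with either one yields the same posterior for p.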

- "Estimation" amounts to examining the characteristics of the posterior distribution(s): mean, variance, the full distribution, and intervals containing specified probabilities.

Priors and Posteriors
- The Achilles heel of Bayesian econometrics: noninformative and informative priors for estimation of parameters.
- Noninformative (diffuse) priors: how to incorporate the total lack of prior belief into the Bayesian estimator. The estimator becomes solely a function of the likelihood.
- Informative priors: some prior information enters the estimator. The estimator mixes the information in the likelihood with the prior information.
- Improper and proper priors: P(θ) is uniform over the allowable range of θ, so it cannot integrate to 1.0 if the range is infinite. Salvation: improper but noninformative priors will fall out of the posterior.

Diffuse (Flat) Priors

Conjugate Prior

THE Question
- Where does the prior come from?
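The Conjugate Prior heading above can be made concrete with the standard Beta-binomial pair: a Beta(a, b) prior on the Bernoulli success probability combines with a binomial likelihood to give a Beta(a + s, b + n - s) posterior in closed form. The slide's formulas were not preserved in this transcript, so the Beta(2, 2) prior below is an arbitrary illustration reusing the 7-successes-in-20-trials example:

```python
def beta_binomial_posterior(a, b, n, s):
    """Beta(a, b) prior + s successes in n Bernoulli trials -> Beta posterior."""
    post_a, post_b = a + s, b + (n - s)
    return post_a, post_b, post_a / (post_a + post_b)

post_a, post_b, mean = beta_binomial_posterior(2, 2, 20, 7)
print(post_a, post_b, round(mean, 4))  # 9 15 0.375

# With the flat (diffuse) Beta(1, 1) prior, the posterior mean is (s+1)/(n+2),
# already close to the MLE s/n = 0.35:
print(round(beta_binomial_posterior(1, 1, 20, 7)[2], 4))  # 0.3636
```

The posterior mean is a weighted mix of the prior mean and the sample proportion, which is exactly the "estimator mixes the likelihood with the prior information" point above.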

Large Sample Properties of Posteriors
- Under a uniform prior, the posterior is proportional to the likelihood function.
- The Bayesian estimator is the mean of the posterior; the MLE equals the mode of the likelihood.
- In large samples, the likelihood becomes approximately normal, and the mean equals the mode.
- Thus, in large samples, the posterior mean will be approximately equal to the MLE.

Reconciliation: A Theorem (Bernstein-von Mises)
- The posterior distribution converges to normal with covariance matrix equal to 1/N times the information matrix (the same as the classical MLE). (The distribution that is converging is the posterior, not the sampling distribution of the estimator of the posterior mean.)
- The posterior mean (empirical) converges to the mode of the likelihood function, the same as the MLE.
- A proper prior disappears asymptotically.
- The asymptotic sampling distribution of the posterior mean is the same as that of the MLE.

Mixed Model Estimation
- MLwiN: multilevel modeling for Windows
  http://multilevel.ioe.ac.uk/index.html
- Uses mostly Bayesian MCMC methods.
- "Markov Chain Monte Carlo (MCMC) methods allow Bayesian models to be fitted, where prior distributions for the model parameters are specified. By default MLwiN sets diffuse priors which can be used to approximate maximum likelihood estimation." (From their website.)
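The MCMC machinery behind the MLwiN quote can be sketched in a few lines: a two-block Gibbs sampler that alternates between the full conditional distributions of a normal model's mean and precision. The flat prior on the mean and the Gamma(0.001, 0.001) prior on the precision are my illustrative choices (a common "diffuse" specification), not MLwiN's actual defaults, and the data are made up:

```python
import random

def gibbs_normal(y, draws=2000, burn=500, seed=1):
    """Gibbs sampler for (mu, tau) of i.i.d. Normal(mu, 1/tau) data.
    Flat prior on mu; Gamma(0.001, rate 0.001) prior on the precision tau."""
    rng = random.Random(seed)
    n = len(y)
    ybar = sum(y) / n
    tau = 1.0  # initial value for the precision
    keep = []
    for t in range(draws + burn):
        # mu | tau, y ~ Normal(ybar, 1/(n*tau)) under the flat prior
        mu = rng.gauss(ybar, (n * tau) ** -0.5)
        # tau | mu, y ~ Gamma(0.001 + n/2, rate 0.001 + sum((y-mu)^2)/2)
        rate = 0.001 + sum((v - mu) ** 2 for v in y) / 2
        tau = rng.gammavariate(0.001 + n / 2, 1 / rate)  # gammavariate takes a scale
        if t >= burn:
            keep.append((mu, tau))
    return keep

y = [1.2, 0.8, 1.9, 1.4, 0.7, 1.1, 1.6, 1.3, 0.9, 1.5]  # sample mean 1.24
draws_ = gibbs_normal(y)
post_mu = sum(m for m, _ in draws_) / len(draws_)
print(round(post_mu, 2))  # close to the sample mean, as the diffuse priors imply
```

With diffuse priors the posterior mean of mu tracks the sample mean (the ML estimate), which is the approximation to maximum likelihood the MLwiN quote refers to.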
