Channels' Matching Algorithm for Mixture Models: A Challenge to the EM Algorithm


Channels' Matching Algorithm for Mixture Models: A Challenge to the EM Algorithm
鲁晨光 Chenguang Lu
Homepage: http:/   This ppt may be downloaded from http:/

1. Mixture Models: Guessing Parameters
- There are about 70 thousand papers with EM in their titles. See http:/
- True model: P*(Y) and P*(X|Y) produce P(X) = P*(y1)P*(X|y1) + P*(y2)P*(X|y2) + ...
- Predictive model: P(Y) and θ_j produce Q(X) = P(y1)P(X|θ1) + P(y2)P(X|θ2) + ...
- Gaussian distribution: P(X|θ_j) = K exp[-(X - c_j)^2 / (2d_j^2)]
- Iterative algorithm: guess P(Y) and (c_j, d_j) until the Kullback-Leibler divergence (relative entropy) between P(X) and Q(X) is small.
[Flowchart in the original slide: start, then iterate so that Q(X) approaches P(X).]

2. The EM Algorithm for Mixture Models
The popular EM algorithm and its convergence proof:
- Likelihood is a negative general entropy; Q is a negative general joint entropy [formulas in the original slide].
- E-step: put P(y_j|x_i, θ) into Q.
- M-step: maximize Q.
- Convergence proof: 1) Q's increasing makes H(Q||P) → 0; 2) Q is increasing in every M-step and non-decreasing in every E-step.
(A minimal numerical sketch of this E-step/M-step loop follows below.)
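To make Sections 1-2 concrete, here is a minimal sketch of the EM iteration for a two-component 1-D Gaussian mixture. It is illustrative only: the mixture weights, means, and standard deviations below are arbitrary assumptions, not the parameters used in the slides, and the iteration count is fixed for simplicity instead of testing the Kullback-Leibler divergence between P(X) and Q(X).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true model P*(Y), P*(X|Y): two Gaussian components (placeholder values)
true_w, true_c, true_d = [0.4, 0.6], [0.0, 3.0], [1.0, 1.5]
N = 10000
y = rng.choice(2, size=N, p=true_w)
x = rng.normal(np.take(true_c, y), np.take(true_d, y))

# Initial guesses for P(Y) and (c_j, d_j)
w, c, d = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def gauss(x, c, d):
    return np.exp(-(x - c) ** 2 / (2 * d ** 2)) / (d * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: responsibilities P(y_j | x_i, theta)
    p_xy = w * gauss(x[:, None], c, d)           # shape (N, 2)
    r = p_xy / p_xy.sum(axis=1, keepdims=True)
    # M-step: maximize Q by re-estimating P(Y), c_j, d_j
    nk = r.sum(axis=0)
    w = nk / N
    c = (r * x[:, None]).sum(axis=0) / nk
    d = np.sqrt((r * (x[:, None] - c) ** 2).sum(axis=0) / nk)

print("P(Y) =", w, " c =", c, " d =", d)
```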

3. Problems with the Convergence Proof of the EM Algorithm
1) There is a counterexample against the convergence proof [1, 2].
[Table in the original slide: real and guessed model parameters and iterative results.]
For the true model, Q* = log P(X^N, Y|θ*) = -6.031N; after the first M-step, Q = log P(X^N, Y|θ) = -6.011N, which is larger (the Q function in question is written out after the references below).
[Plot in the original slide: log P(X^N, Y|θ) over iterations, reaching -6.011N while the target, true-model value is -6.031N.]
2) The E-step might decrease Q, as in the above example (discussed later).

1. Dempster, A. P., Laird, N. M., Rubin, D. B.: Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Series B 39, 1-38 (1977).
2. Wu, C. F. J.: On the Convergence Properties of the EM Algorithm. Annals of Statistics 11, 95-103 (1983).
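For reference, the Q that the convergence proof and the counterexample argue about is the EM auxiliary function; the slide's own formula is an image that did not survive extraction, so the standard textbook form is given below, written with the mixture-model notation of Sections 1-2 (weights P(y_j), components P(x_i|θ_j), current parameters θ^(t)).

```latex
% Standard EM auxiliary function (textbook form, not copied from the slide)
Q(\theta \mid \theta^{(t)})
  = \sum_{i=1}^{N} \sum_{j} P(y_j \mid x_i, \theta^{(t)})\,
    \log P(x_i, y_j \mid \theta)
  = \sum_{i=1}^{N} \sum_{j} P(y_j \mid x_i, \theta^{(t)})\,
    \log\!\bigl[P(y_j)\, P(x_i \mid \theta_j)\bigr]
% The counterexample claims Q can exceed the true model's value:
% after the first M-step, Q = -6.011N > Q* = -6.031N.
```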

4. Channels' Matching Algorithm
- The Shannon channel [formula in the original slide]
- The semantic channel [formula in the original slide]
- The semantic mutual information formula [formula in the original slide; a reconstruction of all three follows below]
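The three formulas on this slide are images that did not survive extraction. Based on the definitions that appear later in Sections 6-9 (truth functions forming a semantic channel, semantic information as a log-ratio of the truth function to the logical probability, averaged over the source and the Shannon channel), a plausible reconstruction is the following; treat it as a sketch of the notation rather than a verbatim copy of the slide.

```latex
% Shannon channel: a set of transition probability functions
\text{Shannon channel: } \{P(y_j \mid X)\}, \quad j = 1, \dots, n
% Semantic channel: a set of truth (membership) functions
\text{Semantic channel: } \{T(\theta_j \mid X)\}, \quad j = 1, \dots, n
% Semantic mutual information, averaging I(x_i;\theta_j) = \log[T(\theta_j|x_i)/T(\theta_j)]
I(X;\Theta) = \sum_j \sum_i P(x_i)\, P(y_j \mid x_i)\,
              \log \frac{T(\theta_j \mid x_i)}{T(\theta_j)},
\qquad T(\theta_j) = \sum_i P(x_i)\, T(\theta_j \mid x_i)
```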

5. Research History
- 1989: 色觉的译码模型及其验证 (The decoding model of color vision and its verification), 光学学报 (Acta Optica Sinica), 9(2), 158-163.
- 1993: 广义信息论 (A Generalized Information Theory), 中国科技大学出版社 (University of Science and Technology of China Press).
- 1994: 广义熵和广义互信息的编码意义 (The coding meanings of generalized entropy and generalized mutual information), 通信学报 (Journal on Communications), 5(6), 37-44.
- 1997: 投资组合的熵理论和信息价值 (Entropy Theory of Portfolio and Information Value), 中国科技大学出版社.
- 1999: A generalization of Shannon's information theory (a short version of the book), Int. J. of General Systems, 28(6), 453-490.
Recently, I found that this theory can be used to improve statistical learning in many aspects. See http:/
Home page: http:/   Blog: http:/
[Image in the original slide: the 1993 book cover.]

6. Truth Function and Semantic Likelihood Function
- Use the membership function m_Aj(X) as the truth function of the hypothesis y_j = "X is in A_j": T(θ_j|X) = m_Aj(X), with θ_j = A_j (a fuzzy set) as a sub-model.
- Use the truth function T(θ_j|X) and the source P(X) to produce the semantic likelihood function [formula in the original slide; a reconstruction follows below].
- The semantic likelihood function is illustrated with two GPS examples [figures in the original slide, marking the most possible position].
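The semantic likelihood formula itself is an image in the original slide. From the slide's description (a truth function combined with the source P(X)) and the equivalence with the traditional Bayesian prediction stated in Section 9, the intended formula is presumably the semantic Bayes' inference below; the notation is reconstructed, not quoted.

```latex
% Semantic likelihood produced from the source P(X) and the truth function
P(X \mid \theta_j) = \frac{P(X)\, T(\theta_j \mid X)}{T(\theta_j)},
\qquad
T(\theta_j) = \sum_i P(x_i)\, T(\theta_j \mid x_i)
\ \ \text{(the logical probability of } y_j\text{)}
```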

7. Semantic Information Measure Compatible with the Thoughts of Shannon, Popper, Fisher, and Zadeh
- If T(θ_j|X) = exp[-|X - x_j|^2 / (2d^2)], j = 1, 2, ..., n, then [formula in the original slide]: Bar-Hillel and Carnap's information, reduced by a term involving the standard deviation (a reconstruction follows below).
- This information measure reflects Popper's thought well:
  - The less the logical probability is, the more information there is;
  - The larger the deviation is, the less information there is;
  - A wrong estimation conveys negative information.
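The measure itself appears only as an image in the slide. Combining the definition of I(x_i;θ_j) implied by Sections 4 and 8 (the log-ratio of the truth function to the logical probability) with the Gaussian truth function above gives the reconstruction below; the split into a Bar-Hillel-Carnap term and a deviation term is a reading of the slide's caption, not a verbatim formula.

```latex
% Semantic information of y_j about x_i, with a Gaussian truth function
I(x_i;\theta_j) = \log \frac{T(\theta_j \mid x_i)}{T(\theta_j)}
  = \underbrace{\log \frac{1}{T(\theta_j)}}_{\text{Bar-Hillel--Carnap information}}
    \;-\; \underbrace{\frac{(x_i - x_j)^2}{2d^2}}_{\text{deviation term}}
% Smaller logical probability T(\theta_j)  => more information;
% larger deviation |x_i - x_j|             => less information;
% a wrong estimate can make I(x_i;\theta_j) negative.
```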

8. Semantic Kullback-Leibler Information and Semantic Mutual Information
- Averaging I(x_i; θ_j) gives the semantic Kullback-Leibler information [formula in the original slide].
- Relationship between the normalized log-likelihood and I(X; θ_j) [formula in the original slide].
- Averaging I(X; θ_j) gives the semantic mutual information [formula in the original slide, which involves the sampling distribution; a numerical sketch follows below].
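A minimal numerical sketch of these averages, under the reconstructed definitions used above (semantic information as a log-ratio of the truth function to the logical probability). The discretized source, the toy Shannon channel, and the truth-function parameters are arbitrary assumptions for illustration.

```python
import numpy as np

# Discretized source P(X) over a grid of x values (assumed for illustration)
x = np.linspace(-5, 5, 201)
p_x = np.exp(-x**2 / 8); p_x /= p_x.sum()

# Gaussian-like truth functions T(theta_j|X) and a toy Shannon channel P(y_j|X)
centers, d = np.array([-1.0, 2.0]), 1.5
t = np.exp(-(x[None, :] - centers[:, None])**2 / (2 * d**2))   # shape (2, len(x))
p_y_given_x = t / t.sum(axis=0)                                 # a toy channel

p_y = p_y_given_x @ p_x                           # P(y_j)
p_x_given_y = p_y_given_x * p_x / p_y[:, None]    # sampling distribution P(x_i|y_j)
logical_prob = t @ p_x                            # T(theta_j) = sum_i P(x_i) T(theta_j|x_i)

# Semantic Kullback-Leibler information I(X;theta_j) = sum_i P(x_i|y_j) log[T(theta_j|x_i)/T(theta_j)]
i_kl = (p_x_given_y * np.log(t / logical_prob[:, None])).sum(axis=1)

# Semantic mutual information I(X;Theta) = sum_j P(y_j) I(X;theta_j)
i_mutual = (p_y * i_kl).sum()
print("I(X;theta_j) =", i_kl, " I(X;Theta) =", i_mutual)
```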

9. The Semantic Channel Matches Shannon's Channel
- Optimize the truth function and the semantic channel [formula in the original slide].
- When the sample is large enough, the optimized truth function is proportional to the transition probability function [formula in the original slide]; x_j* makes P(y_j|x_j*) the maximum of P(y_j|X).
- If P(y_j|X) or P(y_j) is hard to obtain, we may use [formula in the original slide].
- With T*(θ_j|X), the semantic Bayesian prediction is equivalent to the traditional Bayesian prediction: P*(X|θ_j) = P(X|y_j) (a numerical check is sketched below).
[Diagram in the original slide: the semantic channel matching the Shannon channel.]
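A small sketch of the matching step under the reconstruction used earlier: set the optimized truth function proportional to the transition probability function, T*(θ_j|X) = P(y_j|X) / P(y_j|x_j*), and check that the semantic Bayes' prediction then reproduces P(X|y_j). The source and channel below are arbitrary toy values, and the normalization by the maximum is an assumption consistent with the slide's description.

```python
import numpy as np

# Toy source P(X) and Shannon channel P(y_j|X) (assumed values for illustration)
x = np.linspace(-4, 4, 161)
p_x = np.exp(-(x - 0.5)**2 / 3); p_x /= p_x.sum()
centers = np.array([-1.0, 1.5])
scores = np.exp(-(x[None, :] - centers[:, None])**2 / 2)
p_y_given_x = scores / scores.sum(axis=0)                 # shape (2, len(x))

# Matching: T*(theta_j|X) = P(y_j|X) / P(y_j|x_j*), with x_j* = argmax_X P(y_j|X)
t_star = p_y_given_x / p_y_given_x.max(axis=1, keepdims=True)

# Semantic Bayes prediction P*(X|theta_j) = P(X) T*(theta_j|X) / T*(theta_j)
logical_prob = t_star @ p_x
p_x_given_theta = p_x * t_star / logical_prob[:, None]

# Traditional Bayes prediction P(X|y_j) for comparison
p_y = p_y_given_x @ p_x
p_x_given_y = p_y_given_x * p_x / p_y[:, None]

print("max |P*(X|theta_j) - P(X|y_j)| =", np.abs(p_x_given_theta - p_x_given_y).max())
```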

10. MSI in Comparison with MLE and MAP
MSI: Maximum Semantic Information (estimation).
- MLE: [formula in the original slide]
- MAP: [formula in the original slide]
- MSI: [formula in the original slide]
MSI has these features:
1) compatible with MLE, but suitable for cases with a variable source P(X);
2) compatible with traditional Bayesian predictions;
3) it uses truth functions as predictive models, so that the models reflect the features of communication channels.

11. Matching Function between Shannon Mutual Information R and Average Log-normalized-likelihood G
- Shannon's information rate-distortion function R(D) [formula in the original slide], with the distortion replaced by [formula in the original slide], gives the information rate - semantic information function R(G) [formula in the original slide; a sketch follows below].
- All R(G) functions are bowl-like.
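The R(D) and R(G) formulas on this slide are images. For orientation, the standard rate-distortion definition is shown below together with the replacement described in the slide (the distortion constraint replaced by an average semantic-information constraint); the exact form of R(G) is therefore an assumption here, not a quotation.

```latex
% Shannon's rate-distortion function (standard definition)
R(D) = \min_{P(Y \mid X):\ E[d(X,Y)] \le D} I(X;Y)
% Replacing the distortion d(X,Y) by the semantic information, the slide's
% information rate - semantic information function is of the form
R(G) = \min_{P(Y \mid X):\ I(X;\Theta) = G} I(X;Y)
% which, unlike R(D), is reported to be bowl-like in G.
```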
