A Tutorial on Using Python's hmmlearn Hidden Markov Model Library

Python HMMLearn Tutorial
Edited By 毛片物语

hmmlearn implements Hidden Markov Models (HMMs). The HMM is a generative probabilistic model in which a sequence of observable variables \(\mathbf{X}\) is generated by a sequence of internal hidden states \(\mathbf{Z}\). The hidden states are not observed directly. The transitions between hidden states are assumed to have the form of a (first-order) Markov chain. They can be specified by the start probability vector \(\boldsymbol{\pi}\) and a transition probability matrix \(\mathbf{A}\). The emission probability of an observable can be any distribution with parameters \(\boldsymbol{\theta}\) conditioned on the current hidden state. The HMM is completely determined by \(\boldsymbol{\pi}\), \(\mathbf{A}\) and \(\boldsymbol{\theta}\).

There are three fundamental problems for HMMs:
1. Given the model parameters and observed data, estimate the optimal sequence of hidden states.
2. Given the model parameters and observed data, calculate the likelihood of the data.
3. Given just the observed data, estimate the model parameters.

The first and second problems can be solved by the dynamic programming algorithms known as the Viterbi algorithm and the Forward-Backward algorithm, respectively. The last one can be solved by an iterative Expectation-Maximization (EM) algorithm, known as the Baum-Welch algorithm.
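Concretely, these assumptions mean that the joint probability of a length-\(T\) observation sequence \(x_1, \dots, x_T\) and a hidden-state sequence \(z_1, \dots, z_T\) factorizes as

\[
P(\mathbf{X}, \mathbf{Z}) = \pi_{z_1}\, p(x_1 \mid z_1; \boldsymbol{\theta}) \prod_{t=2}^{T} A_{z_{t-1}, z_t}\, p(x_t \mid z_t; \boldsymbol{\theta}),
\]

where \(p(x_t \mid z_t; \boldsymbol{\theta})\) denotes the emission distribution of state \(z_t\). All three problems above are questions about this joint distribution.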

Available models

hmm.GaussianHMM: Hidden Markov Model with Gaussian emissions.
hmm.GMMHMM: Hidden Markov Model with Gaussian mixture emissions.
hmm.MultinomialHMM: Hidden Markov Model with multinomial (discrete) emissions.

Read on for details on how to implement an HMM with a custom emission probability.

Building HMM and generating samples

You can build an HMM instance by passing the parameters described above to the constructor. Then, you can generate samples from the HMM by calling sample:

>>> import numpy as np
>>> from hmmlearn import hmm
>>> np.random.seed(42)
>>> model = hmm.GaussianHMM(n_components=3, covariance_type="full")
>>> model.startprob_ = np.array([0.6, 0.3, 0.1])
>>> model.transmat_ = np.array([[0.7, 0.2, 0.1],
...                             [0.3, 0.5, 0.2],
...                             [0.3, 0.3, 0.4]])
>>> model.means_ = np.array([[0.0, 0.0], [3.0, -3.0], [5.0, 10.0]])
>>> model.covars_ = np.tile(np.identity(2), (3, 1, 1))
>>> X, Z = model.sample(100)
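sample returns both the generated observations and the hidden states that produced them. For the model above, X holds 100 draws from the 2-dimensional Gaussian emissions, and Z holds the index of the state behind each draw:

>>> X.shape
(100, 2)
>>> Z.shape
(100,)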

The transition probability matrix need not be ergodic. For instance, a left-right HMM can be defined as follows:

>>> lr = hmm.GaussianHMM(n_components=3, covariance_type="diag",
...                      init_params="cm", params="cmt")
>>> lr.startprob_ = np.array([1.0, 0.0, 0.0])
>>> lr.transmat_ = np.array([[0.5, 0.5, 0.0],
...                          [0.0, 0.5, 0.5],
...                          [0.0, 0.0, 1.0]])
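Note that init_params="cm" only controls how fit would initialize the means and covariances; to sample from lr directly, those parameters still have to be set by hand. A minimal sketch with made-up one-dimensional emission parameters:

>>> lr.means_ = np.array([[0.0], [5.0], [10.0]])  # one assumed mean per state
>>> lr.covars_ = np.ones((3, 1))  # diagonal variances, shape (n_components, n_features)
>>> X, Z = lr.sample(50)  # Z now only ever stays put or advances to the next state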

If any of the required parameters are missing, sample will raise an exception:

>>> model = hmm.GaussianHMM(n_components=3)
>>> X, Z = model.sample(100)
Traceback (most recent call last):
    ...
sklearn.utils.validation.NotFittedError: This GaussianHMM instance is not fitted yet. Call 'fit' with appropriate arguments before using this method.

Fixing parameters

Each HMM parameter has a character code which can be used to customize its initialization and estimation. The EM algorithm needs a starting point to proceed, so prior to training each parameter is assigned a value, either random or computed from the data. It is possible to hook into this process and provide a starting point explicitly. To do so:
1. ensure that the character code for the parameter is missing from init_params, and then
2. set the parameter to the desired value.

For example, consider an HMM with an explicitly initialized transition probability matrix:

>>> model = hmm.GaussianHMM(n_components=3, n_iter=100, init_params="mcs")
>>> model.transmat_ = np.array([[0.7, 0.2, 0.1],
...                             [0.3, 0.5, 0.2],
...                             [0.3, 0.3, 0.4]])

A similar trick applies to parameter estimation. If you want to fix some parameter at a specific value, remove the corresponding character from params and set the parameter value before training.
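For instance, to also keep the transition matrix above frozen while fit re-estimates everything else, drop "t" from params as well. A minimal sketch (the training call is shown commented out, assuming some observation matrix X):

>>> model = hmm.GaussianHMM(n_components=3, n_iter=100,
...                         init_params="mcs", params="mcs")  # no "t" in either string
>>> model.transmat_ = np.array([[0.7, 0.2, 0.1],
...                             [0.3, 0.5, 0.2],
...                             [0.3, 0.3, 0.4]])
>>> # model.fit(X) would now update only the means, covariances and start probabilities,
>>> # while transmat_ keeps the values set above.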

Examples: Sampling from HMM

Training HMM parameters and inferring the hidden states

You can train an HMM by calling the fit method. The input is a matrix of concatenated sequences of observations (aka samples) along with the lengths of the individual sequences (see Working with multiple sequences). Note that because the EM algorithm performs only a local optimization, it will generally get stuck in local optima; in general you should run fit with various initializations and select the highest-scoring model. The score of a model can be calculated by the score method, and the inferred optimal hidden states can be obtained by calling the predict method.
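A minimal end-to-end sketch of this workflow on made-up toy data (X1, X2, the number of restarts, and n_components=2 are illustrative choices, not library requirements):

>>> X1 = np.random.randn(100, 2)       # first toy observation sequence
>>> X2 = np.random.randn(50, 2) + 5.0  # second toy sequence, shifted
>>> X = np.concatenate([X1, X2])       # sequences are concatenated row-wise...
>>> lengths = [len(X1), len(X2)]       # ...and their lengths passed separately
>>> best_model, best_score = None, -np.inf
>>> for seed in range(5):              # several initializations, keep the best
...     candidate = hmm.GaussianHMM(n_components=2, covariance_type="full",
...                                 n_iter=100, random_state=seed)
...     candidate.fit(X, lengths)
...     score = candidate.score(X, lengths)
...     if score > best_score:
...         best_model, best_score = candidate, score
>>> Z = best_model.predict(X, lengths)  # most likely hidden-state sequence (Viterbi)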
