基于模型引导的智能优化算法 (Model-Guided Intelligent Optimization Algorithms)

Uploader: 恋****泡 | Document ID: 121364887 | Uploaded: 2020-02-22 | Format: PPT | Pages: 61 | Size: 2.26 MB

What is the essence of intelligent optimization?

- By generalizing from ("learning" on) the results already obtained, the algorithm acquires knowledge, possibly incomplete, about the structure (distribution) of the optimal solution; based on that knowledge it can effectively narrow the search region and speed up finding the optimum.
- What role should "learning" play in an intelligent optimization algorithm?
- How should it "learn"?

Binary-coded representatives: PBIL (Population-Based Incremental Learning) and CGA (Compact Genetic Algorithm).

Quantum-Inspired Evolutionary Algorithms: what are they?

Define a Q-bit as a pair of amplitudes (alpha_j, beta_j) with alpha_j^2 + beta_j^2 = 1; observing it gives 0 with probability alpha_j^2 and 1 with probability beta_j^2. The probability of sampling a bit string s is then the product over its bits, P(s) = prod_j (alpha_j^2 if s_j = 0, beta_j^2 if s_j = 1). For example, the probability of sampling 0101 is alpha_1^2 * beta_2^2 * alpha_3^2 * beta_4^2. So the Q-bit string q^t determines the probability with which solutions are sampled.

A three-Q-bit system has three pairs of amplitudes, and the state of the system can be represented as a superposition of the eight basis states 000 through 111, the coefficient of each basis state being the product of the corresponding amplitudes.

- In quantum mechanics, the state of a particle is completely described by its probability amplitude, whose squared modulus is the probability density of the particle's position.
- If the position of a particle in the microscopic world is viewed as one solution of the problem, then this encoding describes the probability of sampling that solution.
- After each round of sampling, the probability model is adjusted according to the best solutions obtained, so that the model memorizes the information in the good solutions found so far. This guides the search toward better solutions and increases the probability that the next round of sampling yields an even better one.

What is the significance of this representation?

Updating the probability model: the rotation angle changes linearly, while the resulting sampling probability changes nonlinearly.

Quantum-Coded Genetic Algorithm

begin
  t := 0
  (i) initialize Q(t)
  (ii) make P(t) by observing the states of Q(t)
  (iii) evaluate P(t)
  (iv) store the best solutions among P(t) into B(t)
  while not (termination condition) do
  begin
    t := t + 1
    (v) make P(t) by observing the states of Q(t-1)
    (vi) evaluate P(t)
    (vii) update Q(t) using Q-gates
    (viii) store the best solutions among B(t-1) and P(t) into B(t)
    (ix) perform the genetic operators (crossover, mutation) on Q(t)
  end
end

Crossover operator:
1. Select two chromosomes from the group at random with a given probability Pc.
2. Exchange their CETs temporarily.
3. Update the two chromosomes according to their new targets, once.
4. Change their CETs back.

Mutation operator:
1. Select a set of chromosomes with a given probability Pm.
2. For each selected chromosome, select a qubit at random.
3. Swap the positions of its pair of probability amplitudes alpha and beta.

Univariate models: the variables are mutually independent, and their contributions to the fitness are likewise independent of one another.

Bivariate models: MIMIC (Mutual Information Maximization for Input Clustering), proposed by De Bonet et al. in 1997.

Multivariate models: BOA (Bayesian Optimization Algorithm), proposed by Pelikan et al. in 1999. It uses a Bayesian network to model the dependencies among variables, the Bayesian-Dirichlet metric to measure the quality of a network, and a greedy search strategy to optimize the network structure.

Principle of EDA

1. Generate an initial population of size M.
2. Select N promising solutions, where N < M.
3. Calculate the joint probability distribution of the selected individuals.
4. Generate offspring according to the calculated probability distribution and replace the parents.
5. Go to step 2 until the termination criterion is met.

What makes EDA different? It introduces statistical learning explicitly into the search process.

Key issues in EDA:
- Proper models that describe not only the distribution of the variables but also the dependency structure among them.
- Efficient methods for learning those models.

Statistical learning in EDA:
- Parameter learning.
- Model-structure learning.
- How to carry out both simultaneously in one procedure.
- Model selection and regularization: two different strategies for tackling the problem of a finite sample size. [L. Xu, "Bayesian Ying-Yang Learning," in Intelligent Technologies for Information Analysis, Springer, pp. 661-706, 2004.]

Two strategies:
- Model selection prefers the model of least complexity; it aims at a compact inner representation so that extra representation space can be released.
- Regularization is imposed on a model with a fixed scale of representation space, its complexity larger than needed, such that the inner representation spreads as uniformly as possible over the whole representation space with a distribution that is as simple as possible, which thus becomes equivalent to a model of reduced complexity.

Estimation of GMM:
- Expectation-Maximization (EM) algorithms.
- Clustering.

Continuous optimization with GMM can proceed via clustering or via boosting, i.e. estimating the GMM by progressively adding components:
- the Boosting GMM algorithm ["Boosting Gaussian Mixture Model," Xubo Song, Kun Yang and Misha Pavel, ICONIP 2004, LNCS 3316];
- the greedy EM algorithm [N. Vlassis and A. Likas, Neural Processing Letters 15:77-87, 2002].
These yield an EDA based on Boosting GMM, and continuous optimization based on greedy estimation of GMM.

Settings for the tested EDAs (population size / selection size):
- UMDAc: 1000 / 500
- EGNA: 1000 / 500
- CEGDA: 2000 / 500
- BoostingGMMEDA: 600 / 300
- GEMEDA: 1000 / 300

(Test functions and experimental results were shown as figures in the original slides.)

EDA methods in the continuous domain fall into the same three classes:
1. No dependence among variables: UMDAc, PBILc.
2. Pairwise dependence between variables: MIMICc.
3. Dependence among many variables: IDEA, EMNAa.

1. No mutual dependence among variables.
UMDAc (Univariate Marginal Distribution Algorithm, continuous), proposed by Larranaga et al. in 2000:
(i) in each generation, each variable may follow a different probability density function;
(ii) the parameters of the distributions are estimated by maximum likelihood.
In the Gaussian case of UMDAc, for example, each variable follows a single univariate Gaussian distribution.
PBILc (Population-Based Incremental Learning, continuous), proposed by Sebag and Ducoulombier in 1998.

2. Dependence between pairs of variables.
MIMICc (Mutual Information Maximization for Input Clustering, continuous), proposed by Larranaga et al. in 2000, for the Gaussian case.
EMNAa (Estimation of Multivariate Normal Algorithm, adaptive), proposed by Larranaga and Lozano in 2001.
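The sampling rule just described can be made concrete. A minimal sketch, assuming Q-bits are stored as (alpha, beta) pairs; the function name `sample_probability` and the amplitude values are illustrative, not from the slides:

```python
import math

# A Q-bit is a pair of amplitudes (alpha, beta) with alpha^2 + beta^2 = 1.
# Observing it yields 0 with probability alpha^2 and 1 with beta^2, so the
# probability of sampling a whole bit string is the product of the
# per-bit probabilities.

def sample_probability(qbits, bits):
    """P(bits) = product over j of (alpha_j^2 if bit j is 0 else beta_j^2)."""
    p = 1.0
    for (alpha, beta), b in zip(qbits, bits):
        p *= alpha * alpha if b == 0 else beta * beta
    return p

# Four Q-bits in the uniform initial state (1/sqrt(2), 1/sqrt(2)):
q = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * 4
print(sample_probability(q, [0, 1, 0, 1]))  # ~= (1/2)^4 = 0.0625
```

Sampling 0101 here has probability (1/2)^4 because every amplitude pair is still uniform; as the amplitudes are rotated toward good solutions, the product concentrates on those bit strings.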
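The point that the angle changes linearly while the probability changes nonlinearly can be checked numerically. A sketch assuming the standard rotation (Q-gate) update; the 0.05*pi step size is illustrative:

```python
import math

# Rotate an amplitude pair (alpha, beta) = (cos t, sin t) by a fixed angle.
# The angle grows linearly step by step, but the probability of observing
# a 1, beta^2 = sin^2(t), changes nonlinearly.
def rotate(alpha, beta, dtheta):
    a = math.cos(dtheta) * alpha - math.sin(dtheta) * beta
    b = math.sin(dtheta) * alpha + math.cos(dtheta) * beta
    return a, b

alpha, beta = 1 / math.sqrt(2), 1 / math.sqrt(2)   # t = pi/4, so P(1) = 0.5
probs = [beta * beta]
for _ in range(3):
    alpha, beta = rotate(alpha, beta, 0.05 * math.pi)  # equal angle increments
    probs.append(beta * beta)
print([round(p, 4) for p in probs])  # [0.5, 0.6545, 0.7939, 0.9045]
```

The successive increments in P(1) shrink (0.1545, 0.1394, 0.1106) even though the angle steps are identical, which is exactly the nonlinearity the slide refers to.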
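The loop above can be instantiated end to end. A sketch on the OneMax toy problem, assuming the common rotate-toward-the-stored-best Q-gate update; the 0.01*pi step, the clamping bounds, and all sizes are illustrative, and steps (i)-(ix) are marked in comments:

```python
import math
import random

def qea_onemax(n=16, pop_size=10, generations=100, seed=0):
    rng = random.Random(seed)
    theta = [math.pi / 4] * n              # (i) initialize Q(t): uniform Q-bits
    best, best_fit = None, -1
    for _ in range(generations):
        for _ in range(pop_size):          # (ii)/(v) observe Q to make P(t)
            x = [int(rng.random() < math.sin(t) ** 2) for t in theta]
            fit = sum(x)                   # (iii)/(vi) evaluate (OneMax fitness)
            if fit > best_fit:             # (iv)/(viii) keep the best in B(t)
                best, best_fit = x, fit
        for j in range(n):                 # (vii) update Q(t) with Q-gates:
            step = 0.01 * math.pi          # rotate toward the stored best
            theta[j] += step if best[j] == 1 else -step
            # clamp away from 0 and pi/2 so probabilities never collapse
            theta[j] = min(max(theta[j], 0.05 * math.pi), 0.45 * math.pi)
    return best_fit

print(qea_onemax())  # typically reaches (or comes very close to) n = 16
```

The genetic operators (ix) are omitted here to keep the sketch short; the clamping plays the role the mutation operator plays in the full algorithm, preventing premature convergence of the probability model.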
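Step 3 of the mutation operator, swapping an amplitude pair, can be sketched directly; the chromosome layout, the `mutate` name, and the `pm` value are illustrative:

```python
import random

# A chromosome is a list of Q-bits, each an (alpha, beta) amplitude pair.
# Mutation picks one qubit of a selected chromosome and swaps alpha and
# beta, which exchanges the probabilities of observing 0 and 1.
def mutate(chromosome, pm, rng):
    if rng.random() < pm:
        j = rng.randrange(len(chromosome))
        alpha, beta = chromosome[j]
        chromosome[j] = (beta, alpha)
    return chromosome

rng = random.Random(1)
chrom = [(0.6, 0.8), (0.6, 0.8), (0.6, 0.8)]
print(mutate(chrom, pm=1.0, rng=rng))  # exactly one pair becomes (0.8, 0.6)
```

Because 0.6^2 + 0.8^2 = 1, the swap keeps the Q-bit valid while flipping P(0) from 0.36 to 0.64.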
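The five steps above, instantiated with the simplest possible model (independent Bernoulli marginals, i.e. a UMDA-style EDA) on the OneMax toy problem; M, N, and the clamping bounds are illustrative:

```python
import random

def eda_onemax(n=20, M=100, N=30, generations=40, seed=1):
    rng = random.Random(seed)
    p = [0.5] * n                                   # model: marginal P(x_j = 1)
    pop = []
    for _ in range(generations):
        # steps 1/4: generate a population by sampling the current model
        pop = [[int(rng.random() < pj) for pj in p] for _ in range(M)]
        pop.sort(key=sum, reverse=True)
        selected = pop[:N]                          # step 2: select N promising, N < M
        # step 3: estimate the (factorized) distribution of the selected individuals
        p = [sum(x[j] for x in selected) / N for j in range(n)]
        p = [min(max(pj, 0.02), 0.98) for pj in p]  # keep marginals off 0 and 1
    return max(pop, key=sum)                        # step 5: loop until termination

best = eda_onemax()
print(sum(best))  # reaches (or comes very close to) the optimum n = 20
```

The only thing that changes across the EDA family is step 3: richer models (MIMIC's pairwise chain, BOA's Bayesian network) replace the independent marginals estimated here.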
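The EM route to estimating a GMM can be shown in miniature. A sketch for a two-component one-dimensional mixture; the data, the initialization, and the iteration count are illustrative (the shared 1/sqrt(2*pi) constant is dropped since it cancels in the responsibilities):

```python
import math
import random

def em_gmm2(data, iters=50):
    """Fit a 2-component 1-D Gaussian mixture by EM; returns (weights, means, sds)."""
    w, mu, sd = [0.5, 0.5], [min(data), max(data)], [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            dens = [w[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) / sd[k]
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: maximum-likelihood re-estimation from the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sd[k] = max(math.sqrt(var), 1e-6)
    return w, mu, sd

rng = random.Random(0)
data = ([rng.gauss(-2.0, 0.5) for _ in range(200)] +
        [rng.gauss(3.0, 0.5) for _ in range(200)])
w, mu, sd = em_gmm2(data)
print(sorted(mu))  # means recovered near -2 and 3
```

The boosting and greedy variants discussed next differ mainly in how components are introduced: instead of fixing the number of components up front as here, they add components one at a time.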
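The Gaussian case of UMDAc described above fits in a few lines: each variable gets its own univariate Gaussian whose mean and standard deviation are re-estimated by maximum likelihood from the selected individuals each generation. A sketch on the sphere function (minimization); all sizes and the objective are illustrative:

```python
import math
import random

def umdac_sphere(n=5, M=200, N=50, generations=60, seed=3):
    rng = random.Random(seed)
    mu, sigma = [0.0] * n, [10.0] * n   # one univariate Gaussian per variable
    pop = []
    for _ in range(generations):
        pop = [[rng.gauss(mu[j], sigma[j]) for j in range(n)] for _ in range(M)]
        pop.sort(key=lambda x: sum(v * v for v in x))  # sphere: minimise sum x_j^2
        sel = pop[:N]
        for j in range(n):              # (ii) maximum-likelihood re-estimation
            mu[j] = sum(x[j] for x in sel) / N
            var = sum((x[j] - mu[j]) ** 2 for x in sel) / N
            sigma[j] = max(math.sqrt(var), 1e-9)
    return pop[0]

best = umdac_sphere()
print(sum(v * v for v in best))  # close to 0: the model contracts onto the optimum
```

Because the model factorizes over variables, this sketch cannot capture correlations between coordinates; MIMICc and EMNAa address exactly that limitation with pairwise and full multivariate normal models.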
