Foreign Literature Translation: Research on Machine Learning


Machine-Learning Research: Four Current Directions

Thomas G. Dietterich

Machine-learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (1) the improvement of classification accuracy by learning ensembles of classifiers, (2) methods for scaling up supervised learning algorithms, (3) reinforcement learning, and (4) the learning of complex stochastic models.

The last five years have seen an explosion in machine-learning research. This explosion has many causes: First, separate research communities in symbolic machine learning, computational learning theory, neural networks, statistics, and pattern recognition have discovered one another and begun to work together. Second, machine-learning techniques are being applied to new kinds of problems, including knowledge discovery in databases, language processing, robot control, and combinatorial optimization, as well as to more traditional problems such as speech recognition, face recognition, handwriting recognition, medical data analysis, and game playing.

In this article, I selected four topics within machine learning where there has been a lot of recent activity. The purpose of the article is to describe the results in these areas to a broader AI audience and to sketch some of the open research problems. The topic areas are (1) ensembles of classifiers, (2) methods for scaling up supervised learning algorithms, (3) reinforcement learning, and (4) the learning of complex stochastic models.

The reader should be cautioned that this article is not a comprehensive review of each of these topics. Rather, my goal is to provide a representative sample of the research in each of these four areas. In each of the areas, there are many other papers that describe relevant work. I apologize to those authors whose work I was unable to include in the article.

Ensembles of Classifiers

The first topic concerns methods for improving accuracy in supervised learning. I begin by introducing some notation. In supervised learning, a learning program is given training examples of the form (x1, y1), ..., (xm, ym) for some unknown function y = f(x). The xi values are typically vectors of the form ⟨xi,1, xi,2, ..., xi,n⟩ whose components are discrete or real valued, such as height, weight, color, and age. These are also called the features of xi. I use the notation xij to refer to the jth feature of xi. In some situations, I drop the i subscript when it is implied by the context.

The y values are typically drawn from a discrete set of classes {1, ..., K} in the case of classification or from the real line in the case of regression. In this article, I focus primarily on classification. The training examples might be corrupted by some random noise.

Given a set S of training examples, a learning algorithm outputs a classifier. The classifier is a hypothesis about the true function f. Given new x values, it predicts the corresponding y values. I denote classifiers by h1, ..., hL.
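
To make the notation concrete, here is a minimal Python sketch of this setting. It is my own illustration rather than anything from the article; the choice of scikit-learn's decision-tree learner and the toy height/weight data are arbitrary assumptions.

# Sketch of the supervised-learning setting described above.
# The decision tree is an arbitrary stand-in for "a learning algorithm";
# the article does not prescribe any particular one.
from sklearn.tree import DecisionTreeClassifier

# Training examples (x1, y1), ..., (xm, ym): each xi is a feature vector
# (here height in meters and weight in kilograms), each yi a class label.
X_train = [[1.60, 52.0], [1.70, 61.0], [1.82, 95.0], [1.90, 101.0]]
y_train = [0, 0, 1, 1]

# The learning algorithm takes the training set S and outputs a classifier h,
# a hypothesis about the true function f.
h = DecisionTreeClassifier().fit(X_train, y_train)

# Given a new x value, the classifier predicts the corresponding y value.
print(h.predict([[1.75, 80.0]]))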

An ensemble of classifiers is a set of classifiers whose individual decisions are combined in some way (typically by weighted or unweighted voting) to classify new examples. One of the most active areas of research in supervised learning has been the study of methods for constructing good ensembles of classifiers. The main discovery is that ensembles are often much more accurate than the individual classifiers that make them up.
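
The combination step itself is simple. The following sketch of unweighted majority (plurality) voting is my own illustration, under the assumption that each classifier exposes a predict method like the one in the previous sketch.

# Unweighted majority voting over an ensemble of classifiers h1, ..., hL.
from collections import Counter

def majority_vote(classifiers, X):
    """Classify each example in X by a plurality vote of the ensemble."""
    predictions = [h.predict(X) for h in classifiers]  # L lists of m labels
    per_example = zip(*predictions)                     # m tuples of L votes
    return [Counter(votes).most_common(1)[0][0] for votes in per_example]

A weighted vote would simply multiply each classifier's ballot by its weight before the counts are compared.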

An ensemble can be more accurate than its component classifiers only if the individual classifiers disagree with one another (Hansen and Salamon 1990). To see why, imagine that we have an ensemble of three classifiers {h1, h2, h3} and consider a new case x. If the three classifiers are identical, then when h1(x) is wrong, h2(x) and h3(x) are also wrong. However, if the errors made by the classifiers are uncorrelated, then when h1(x) is wrong, h2(x) and h3(x) might be correct, so that a majority vote correctly classifies x. More precisely, if the error rates of the L hypotheses h1, ..., hL are all equal to p < 1/2 and if the errors are independent, then the probability that the majority vote is wrong is the area under the binomial distribution where more than L/2 hypotheses are wrong. Figure 1 shows this area for a simulated ensemble of 21 hypotheses, each having an error rate of 0.3. The area under the curve for 11 or more hypotheses being simultaneously wrong is 0.026, which is much less than the error rate of the individual hypotheses.
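
This figure is easy to check. Under the independence assumption, the probability that the majority vote errs is the upper tail of a binomial distribution: the sum over k > L/2 of C(L, k) * p^k * (1 - p)^(L - k). The sketch below is my own verification, not code from the article; it evaluates this tail for L = 21 and p = 0.3 using only the Python standard library.

# Probability that more than L/2 of L independent hypotheses, each with
# error rate p, are simultaneously wrong (the area discussed above).
from math import comb

def majority_vote_error(L, p):
    """Upper binomial tail: P(number of wrong hypotheses > L/2)."""
    return sum(comb(L, k) * p**k * (1 - p)**(L - k)
               for k in range(L // 2 + 1, L + 1))

print(round(majority_vote_error(21, 0.3), 3))  # about 0.026, versus 0.3 for a single hypothesis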

Of course, if the individual hypotheses make uncorrelated errors at rates exceeding 0.5, then the error rate of the voted ensemble increases as a result of the voting. Hence, the key to successful ensemble methods is to construct individual classifiers with error rates below 0.5 whose errors are at least somewhat uncorrelated.

Methods for Constructing Ensembles

Many methods for constructing ensembles have been developed. Some methods are general, and they can be applied to any learning algorithm.
