Research on Defect-Oriented Reliability Management Specifications for Software Systems

寇纲, School of Economics and Management, University of Electronic Science and Technology of China

Software Risk Assessment and Management Based on Data Mining and Multi-Criteria Decision Making

Risk assessment (Kaplan and Garrick, 1981): What can go wrong? What is the likelihood that it could go wrong? What are the consequences? What is the time domain?
Risk management (Haimes, 1991): What can be done and what options are available? What are the associated trade-offs in terms of all costs, benefits, and risks? What are the impacts of current management decisions on future options?
Risk communication (Yacov Haimes, 2009) links the two; in this work, data mining supports the risk assessment questions and MCDM the risk management questions.

No Free Lunch (NFL) theorem

"If algorithm A outperforms algorithm B on some cost functions, then loosely speaking there must exist exactly as many other functions where B outperforms A" (Wolpert and Macready, 1995). In other words, there exists no single classifier that achieves the best performance on all measures.

Approach 1: Overview

Aim: design a performance metric that combines various measures to evaluate the quality of classifiers for software defect prediction.
Data: 11 datasets from the NASA MDP repository.
Tool: WEKA.
Technique: statistical analysis.
Approach 1: Classifiers

Trees: classification and regression tree (CART), Naive Bayes tree, and C4.5.
Functions: linear logistic regression, radial basis function (RBF) network, sequential minimal optimization (SMO), support vector machine (SVM), and neural networks.
Bayesian classifiers: Bayesian network and Naive Bayes.
Lazy classifiers: K-nearest-neighbor.
Rules: decision table and Repeated Incremental Pruning to Produce Error Reduction (RIPPER) rule induction.
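The experiments were run with WEKA's implementations of these learners. Purely as an illustration (not the authors' setup), the sketch below assembles a roughly comparable pool in Python with scikit-learn; the estimator choices and parameters are assumptions, and WEKA-specific learners such as the RBF network, Bayesian network, decision table, and RIPPER have no direct scikit-learn counterpart.

```python
# Illustrative stand-ins for part of the classifier pool; the study itself used
# WEKA's implementations (e.g. trees.J48, lazy.IBk, functions.LibSVM).
from sklearn.tree import DecisionTreeClassifier        # CART / C4.5-style trees
from sklearn.linear_model import LogisticRegression    # linear logistic regression
from sklearn.svm import SVC                            # SVM (LibSVM); SMO is WEKA's SVM trainer
from sklearn.neural_network import MLPClassifier       # neural network
from sklearn.naive_bayes import GaussianNB             # Naive Bayes
from sklearn.neighbors import KNeighborsClassifier     # K-nearest-neighbor (IBk)

classifier_pool = {
    "CART":       DecisionTreeClassifier(criterion="gini", random_state=0),
    "C4.5-like":  DecisionTreeClassifier(criterion="entropy", random_state=0),
    "LogReg":     LogisticRegression(max_iter=1000),
    "SVM":        SVC(kernel="rbf", random_state=0),
    "NeuralNet":  MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "NaiveBayes": GaussianNB(),
    "KNN":        KNeighborsClassifier(n_neighbors=5),
}
```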

Approach 1: Step 1

For a specific dataset i (i = 1, 2, ..., 11) and a specific performance measure j (j = 1, 2, ..., 13), run a t-test for each pair of classifiers (k = 1, 2, ..., 13), with the statistical significance level set at 0.05, to decide whether classifier C_1 performs significantly better than C_2 on measure j. The top three ranking classifiers are assigned scores of 3, 2, and 1, respectively.
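A minimal sketch of Step 1, assuming (the slides do not show the data layout) that each classifier's values of the chosen measure are available per cross-validation run: classifiers are compared pairwise with paired t-tests at the 0.05 level, ranked by their number of significant wins, and the top three receive scores of 3, 2, and 1.

```python
# Hedged sketch of Step 1 for one dataset and one performance measure.
# `scores[name]` is assumed to be a 1-D array of that classifier's measure
# values over repeated cross-validation runs; larger values are assumed better
# (flip the comparison for measures such as training time).
from itertools import combinations
import numpy as np
from scipy.stats import ttest_rel

def rank_scores(scores, alpha=0.05):
    wins = {name: 0 for name in scores}
    for a, b in combinations(scores, 2):
        stat, p = ttest_rel(scores[a], scores[b])   # paired t-test on the two classifiers
        if p < alpha:                               # difference is statistically significant
            winner = a if np.mean(scores[a]) > np.mean(scores[b]) else b
            wins[winner] += 1
    ranked = sorted(wins, key=wins.get, reverse=True)
    # top three classifiers receive scores 3, 2, 1; the rest receive 0
    return {name: max(3 - i, 0) for i, name in enumerate(ranked)}
```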

Approach 1: Step 2

For a specific dataset i, each classifier's rank scores from Step 1 are aggregated into a "Sum_rank" value; the larger the Sum_rank, the better the classifier. The Sum_rank values are then normalized. [The summation formula on the slide is not preserved in the extracted text.]

Approach 1: Step 3

For a specific dataset i, the normalized values are combined into an overall score; the larger the score, the better the classifier. [The formula on the slide is not preserved in the extracted text.]
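Because the Step 2 and Step 3 formulas are missing from the extracted text, the following is only one plausible reading: sum each classifier's per-measure rank scores into Sum_rank and rescale so that the best classifier on the dataset scores 1.0. The function name and data layout are illustrative assumptions.

```python
# Hedged sketch of one possible Sum_rank aggregation for a single dataset.
# `per_measure` maps each performance measure to the {classifier: rank score}
# dictionary produced in Step 1.
def sum_rank(per_measure):
    names = sorted({c for m in per_measure.values() for c in m})
    totals = {c: sum(m.get(c, 0) for m in per_measure.values()) for c in names}
    best = max(totals.values()) or 1                 # guard against all-zero totals
    return {c: totals[c] / best for c in names}      # normalized; larger is better
```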

Approach 1: Results

[The result tables on the slides are not preserved in the extracted text.]

Approach 1: Conclusions

The classifier that is best for a given dataset according to a given measure may perform poorly on a different measure. Neural networks and SVM generally have longer training times than the other classifiers. No classifier yielded the best values on all measures across the 11 datasets. SVM (functions.LibSVM), K-nearest-neighbor (lazy.IBk), and C4.5 (trees.J48) ranked as the top three classifiers in the experiment.

Approach 2: Why?

Experimental results have shown that ensembles of classifiers are often more accurate and more robust to the effects of noisy data, and achieve a lower average error rate than any of the constituent classifiers. However, inconsistencies exist across studies, and the performance of learning algorithms may vary with different performance measures and under different circumstances.
Approach 2: Overview

Aim: evaluate the performance of ensemble classifiers for software defect detection.
Data: 11 datasets from the NASA MDP repository.
Tools: WEKA, Matlab 7.0.
Technique: MCDM, using AHP.

Approach 2: Ensemble methods

Bagging: builds multiple models on random (bootstrap) samples of the training data and combines them by a plurality vote into a single aggregated predictor.
Boosting: the weights of the training instances change in each iteration, forcing the learning algorithm to put more emphasis on instances that were predicted incorrectly in previous rounds and less emphasis on instances that were predicted correctly.
Stacking: aims to minimize the generalization error of one or more algorithms and can combine different types of learning algorithms.
Vote: combines the predictions of several classifiers by voting.
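The deck builds these ensembles in WEKA; as an illustration only, the sketch below expresses the same four ideas with scikit-learn, using a decision tree as an assumed base learner.

```python
# Illustrative scikit-learn counterparts of the four ensemble schemes; the study
# itself used WEKA's meta-classifiers (e.g. AdaBoostM1, Bagging, Stacking, Vote).
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier, VotingClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

base = DecisionTreeClassifier(max_depth=3, random_state=0)

ensembles = {
    # Bagging: vote over copies of the base learner trained on bootstrap samples
    "Bagging": BaggingClassifier(base, n_estimators=50, random_state=0),
    # Boosting: reweight training instances toward previously misclassified ones
    "AdaBoost": AdaBoostClassifier(base, n_estimators=50, random_state=0),
    # Stacking: a meta-learner combines predictions of heterogeneous base learners
    "Stacking": StackingClassifier(
        estimators=[("tree", base), ("nb", GaussianNB())],
        final_estimator=LogisticRegression(max_iter=1000)),
    # Vote: plurality vote over heterogeneous learners
    "Vote": VotingClassifier(
        estimators=[("tree", base), ("nb", GaussianNB()),
                    ("lr", LogisticRegression(max_iter=1000))],
        voting="hard"),
}
```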

Approach 2: AHP

The analytic hierarchy process (AHP) is a multi-criteria decision making approach that helps decision makers structure a decision problem based on pairwise comparisons and expert judgments.

Approach 2: Pairwise comparisons of performance measures

[The pairwise comparison matrix on the slide is not preserved in the extracted text.]

Approach 2: Priorities of AdaBoost classifiers (Group 1)

[The remaining slides of the 34-page deck are not included in this preview.]
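The deck weights the performance measures with AHP (via Matlab 7.0). The sketch below is a generic illustration, not the authors' code: it derives a priority vector from a reciprocal pairwise comparison matrix, such as the one referenced above, via the principal eigenvector, and checks Saaty's consistency ratio. The 3x3 matrix is a made-up example.

```python
# Generic AHP priority computation (illustrative; not the authors' Matlab code).
import numpy as np

def ahp_priorities(A):
    """A: positive reciprocal pairwise comparison matrix (A[i, j] = importance of i over j)."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                    # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                # normalized priority vector
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}.get(n, 1.49)
    cr = ci / ri                                   # consistency ratio; < 0.1 is acceptable
    return w, cr

# Hypothetical pairwise comparison of three performance measures.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_priorities(A)
```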
