Sample Selection Bias

Lei Tang, Feb. 20th, 2007

Classical ML vs. Reality
- Classical machine learning assumes that training data and test data share the same distribution.
- But that's not always the case in reality:
  - Survey data
  - Species habitat modeling based on data from only one area
  - Training and test data collected by different experiments
  - Newswire articles with timestamps

Sample Selection Bias
- Standard setting: data (x, y) are drawn independently from a distribution D.
- If the selected samples are not a random sample of D, then the samples are biased.
- Usually the training data are biased, but we want to apply the classifier to unbiased samples.

Four Cases of Bias (1)
- Let s denote whether or not a sample is selected.
- P(s=1|x,y) = P(s=1): not biased.
- P(s=1|x,y) = P(s=1|x): depends only on the feature vector.
- P(s=1|x,y) = P(s=1|y): depends only on the class label.
- P(s=1|x,y): depends on both x and y.

Four Cases of Bias (2)
- P(s=1|x,y) = P(s=1|y): learning from imbalanced data. The bias can be alleviated by changing the class prior.
- P(s=1|x,y) = P(s=1|x): implies P(y|x) remains unchanged. This is the most-studied case.
- If the bias depends on both x and y, we lack the information to analyze it.

An Intuitive Example
- P(s=1|x,y) = P(s=1|x) means s and y are conditionally independent given x, so P(y|x, s=1) = P(y|x).
- Does the bias really matter, given that P(y|x) remains unchanged?

Bias Analysis for Classifiers (1)
- Logistic regression: any classifier that directly models P(y|x) won't be affected by the bias.
- Bayesian classifier: unaffected if it models P(y|x) exactly. But the naive Bayes classifier is affected, since its independence-based estimate of P(y|x) depends on the biased feature distribution.

Bias Analysis for Classifiers (2)
- Hard-margin SVM: no bias effect.
- Soft-margin SVM: has a bias effect, as the cost of misclassification might change.
- Decision tree: usually results in a different classifier if the bias is present.
- In sum, most classifiers are still sensitive to the sample bias.
- Note: this is asymptotic analysis, assuming the samples are "enough".
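These claims are easy to probe empirically. Below is a minimal simulation sketch, not from the original slides: it draws a synthetic binary-classification dataset, keeps each training point with a probability that depends only on x (the P(s=1|x) case), and compares logistic regression, which models P(y|x) directly, against Gaussian naive Bayes on unbiased test data. All names and parameters here are illustrative choices.

```python
# Illustrative sketch (not from the slides): feature-dependent selection
# bias P(s=1|x), comparing a direct P(y|x) model vs. naive Bayes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20000
X = rng.normal(size=(n, 2))
# True model: P(y=1|x) is logistic in x.
p = 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - X[:, 1])))
y = (rng.random(n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Selection depends only on x: points with large x[0] are over-sampled.
s = rng.random(len(X_tr)) < 1.0 / (1.0 + np.exp(-3.0 * X_tr[:, 0]))
X_sel, y_sel = X_tr[s], y_tr[s]

for name, clf in [("logistic", LogisticRegression()), ("naive Bayes", GaussianNB())]:
    acc_biased = clf.fit(X_sel, y_sel).score(X_te, y_te)
    acc_clean = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: biased-train acc={acc_biased:.3f}, unbiased-train acc={acc_clean:.3f}")
```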

Correcting Bias
- Expected risk: suppose the training set is drawn from Pr(x, y) and the test set from Pr'(x, y). The risk we care about is
  R[Pr', θ] = E_{(x,y)~Pr'}[l(x, y, θ)] = E_{(x,y)~Pr}[β(x, y) l(x, y, θ)],  where β(x, y) = Pr'(x, y) / Pr(x, y).
- Under feature bias, Pr'(y|x) = Pr(y|x), so the weight reduces to β(x) = Pr'(x) / Pr(x).
- So we minimize the empirical regularized risk (a code sketch follows):
  min_θ (1/n_tr) Σ_i β(x_i) l(x_i, y_i, θ) + λ Ω(θ)
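Assuming the weights β are already available, this weighted risk maps directly onto the per-sample weight hook that many learners expose. A minimal sketch with scikit-learn (the function name and regularization setting are ours):

```python
# Weighted empirical risk minimization, assuming importance weights
# beta[i] ~ Pr'(x_i) / Pr(x_i) are already known (illustrative sketch).
from sklearn.linear_model import LogisticRegression

def fit_weighted(X_train, y_train, beta):
    """Fit on biased training data, reweighted toward the test distribution."""
    clf = LogisticRegression(C=1.0)  # C plays the role of 1/lambda
    clf.fit(X_train, y_train, sample_weight=beta)
    return clf
```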

Estimate the Weights
- Samples that are likely to appear in the test data gain more weight. But how do we estimate the weight of each sample?
- Brute-force approach: estimate the densities Pr(x) and Pr'(x) separately, then calculate the sample weight β(x) = Pr'(x) / Pr(x).
- Not applicable in practice, as density estimation is harder than classification given a limited number of samples.
- Existing works use simulation experiments in which both Pr(x) and Pr'(x) are known (NOT REALISTIC).
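For concreteness, here is a minimal sketch of that brute-force route (illustrative names and bandwidth; the slide's point is precisely that this estimate is unreliable with limited samples):

```python
# Brute-force weights via two kernel density estimates (illustrative sketch;
# the slides argue this approach is impractical with limited data).
import numpy as np
from sklearn.neighbors import KernelDensity

def brute_force_weights(X_train, X_test, bandwidth=0.5):
    kde_tr = KernelDensity(bandwidth=bandwidth).fit(X_train)  # estimates Pr(x)
    kde_te = KernelDensity(bandwidth=bandwidth).fit(X_test)   # estimates Pr'(x)
    log_ratio = kde_te.score_samples(X_train) - kde_tr.score_samples(X_train)
    return np.exp(log_ratio)                                  # beta(x_i)
```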

Distribution Matching
- Map samples into feature space via Φ(x) and match expectations there. We have
  E_{x~Pr'}[Φ(x)] = E_{x~Pr}[β(x) Φ(x)]  when β(x) = Pr'(x) / Pr(x).
- Hence, the problem can be formulated as
  min_β || E_{x~Pr}[β(x) Φ(x)] − E_{x~Pr'}[Φ(x)] ||  s.t. β(x) ≥ 0 and E_{x~Pr}[β(x)] = 1.
- Solution: for a suitable (universal) kernel, the minimizer is exactly β(x) = Pr'(x) / Pr(x).

Empirical KMM Optimization
- Replace the expectations by empirical means over the n_tr training points x_i and n_te test points x'_j, where
  K_ij = k(x_i, x_j)  and  κ_i = (n_tr / n_te) Σ_{j=1}^{n_te} k(x_i, x'_j).
- Therefore, it's equivalent to solve the QP problem:
  min_β (1/2) β^T K β − κ^T β  s.t. 0 ≤ β_i ≤ B and |Σ_i β_i − n_tr| ≤ n_tr ε.
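A minimal sketch of this QP in code (illustrative, not from the slides): it assumes cvxopt as the solver and an RBF kernel; kmm_weights, gamma, B, and the eps default are all names and choices made here.

```python
# Kernel mean matching (KMM) as a QP, following the empirical formulation
# above (illustrative sketch; requires cvxopt).
import numpy as np
from cvxopt import matrix, solvers
from sklearn.metrics.pairwise import rbf_kernel

def kmm_weights(X_train, X_test, gamma=1.0, B=10.0, eps=None):
    n_tr, n_te = len(X_train), len(X_test)
    eps = eps if eps is not None else B / np.sqrt(n_tr)
    K = rbf_kernel(X_train, X_train, gamma=gamma)             # K_ij
    K = K + 1e-8 * np.eye(n_tr)                               # numerical PSD jitter
    kappa = (n_tr / n_te) * rbf_kernel(X_train, X_test, gamma=gamma).sum(axis=1)

    # min (1/2) b^T K b - kappa^T b
    # s.t. 0 <= b_i <= B,  |sum(b) - n_tr| <= n_tr * eps
    G = np.vstack([-np.eye(n_tr), np.eye(n_tr),
                   -np.ones((1, n_tr)), np.ones((1, n_tr))])
    h = np.hstack([np.zeros(n_tr), B * np.ones(n_tr),
                   -n_tr * (1 - eps), n_tr * (1 + eps)])
    solvers.options["show_progress"] = False
    sol = solvers.qp(matrix(K), matrix(-kappa), matrix(G), matrix(h))
    return np.array(sol["x"]).ravel()                          # beta
```

The returned β can be passed straight to the sample_weight hook sketched earlier.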

Experiments: A Toy Regression Example
- (Results were shown as plots on the slides.)

Simulation
- Select some UCI datasets, inject sample selection bias into the training data, then test on unbiased samples.

Bias on Labels
- (Results were shown on the slides.)

Unexplained
- In theory, importance sampling should be the best; why does KMM perform better?
- Why kernel methods? Can we just do the matching using the input features?
- Can we just fit a logistic regression to estimate β, treating the test data as the positive class and the training data as the negative class? Then β is given by the odds.
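That last question is discriminative density-ratio estimation: by Bayes' rule, Pr'(x)/Pr(x) is proportional to P(test|x)/P(train|x), the odds of a domain classifier. A minimal sketch of the idea (illustrative names, including the prior correction for unequal set sizes):

```python
# Estimating beta with a domain classifier (sketch of the idea raised on the
# slide): label test points 1 and training points 0, fit logistic regression,
# and read off the prior-corrected odds as the density ratio.
import numpy as np
from sklearn.linear_model import LogisticRegression

def lr_density_ratio(X_train, X_test):
    X = np.vstack([X_train, X_test])
    d = np.hstack([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = LogisticRegression().fit(X, d)
    p = clf.predict_proba(X_train)[:, 1]   # P(domain = test | x)
    prior = len(X_test) / len(X_train)     # corrects for class sizes
    return (p / (1.0 - p)) / prior         # beta(x_i) ~ Pr'(x)/Pr(x)
```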

Some Related Problems
- Semi-supervised learning (is it equivalent?)
- Multi-task learning: assumes P(y|x) differs across tasks, while sample selection bias (mostly) assumes P(y|x) is the same. MTL also requires training data for each task.
- Is it possible to discriminate the features which introduce the bias, or to find invariant dimensions?

Any Questions?

Happy Pig Year!
