Deep Learning: Training, Validation and Test Set Concepts

Training, Validation and Test Data

Example:

(A) We have data on 16 items, their attributes and their class labels. RANDOMLY divide them into 8 items for training, 4 for validation and 4 for testing. The d-dimensional attribute vectors and the class labels are known for all 16 items.

Training set
  Item No.   Class
  1          0
  2          0
  3          1
  4          1
  5          1
  6          1
  7          0
  8          0

Validation set
  Item No.   Class
  9          0
  10         0
  11         1
  12         0

Test set
  Item No.   Class
  13         0
  14         0
  15         1
  16         1
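As a quick illustration, the following is a minimal Python sketch of such a random split. The item numbers and split sizes follow the example above; the fixed random seed is an added assumption, used only so the split is reproducible.

    import random

    random.seed(0)  # assumed seed, only so the split is reproducible

    # Item numbers 1-16; attributes and class labels are known for all of them,
    # so only the indices need to be shuffled and partitioned.
    items = list(range(1, 17))
    random.shuffle(items)

    train_items      = items[:8]    # 8 items for training
    validation_items = items[8:12]  # 4 items for validation
    test_items       = items[12:]   # 4 items for testing

    print("training:  ", sorted(train_items))
    print("validation:", sorted(validation_items))
    print("test:      ", sorted(test_items))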
(B) Next, suppose we develop three classification models A, B and C from the training data. Let the training errors of these models be as shown below (recall that the models do not necessarily provide perfect results on the training data, nor are they required to).
Classification results on the training set (attribute values are known for all items)

  Item No.    True Class    Model A    Model B    Model C
  1           0             0          1          1
  2           0             0          0          0
  3           1             0          1          0
  4           1             1          0          1
  5           1             0          0          0
  6           1             1          1          1
  7           0             0          0          0
  8           0             0          0          0
  Error                     2/8        3/8        3/8
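A minimal sketch of how these training-error figures can be computed: the true labels and model predictions below are taken directly from the table above, and the error is simply the fraction of misclassified items.

    # Training-set labels and predictions for items 1-8 (from the table above).
    true_labels = [0, 0, 1, 1, 1, 1, 0, 0]
    predictions = {
        "A": [0, 0, 0, 1, 0, 1, 0, 0],
        "B": [1, 0, 1, 0, 0, 1, 0, 0],
        "C": [1, 0, 0, 1, 0, 1, 0, 0],
    }

    def classification_error(y_true, y_pred):
        """Fraction of items whose predicted class differs from the true class."""
        return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

    for name, y_pred in predictions.items():
        err = classification_error(true_labels, y_pred)
        print(f"Model {name}: training error = {err:.3f}")
    # Model A: 0.250, Model B: 0.375, Model C: 0.375  (i.e. 2/8, 3/8, 3/8)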
(C) Next, use the three models A, B and C to classify each item in the validation set based on its attribute values. Recall that we know the true labels of these items as well. Suppose we get the following results:

Classification results on the validation set

  Item No.    True Class    Model A    Model B    Model C
  9           0             1          0          0
  10          0             0          1          0
  11          1             0          1          0
  12          0             0          1          0
  Error                     2/4        2/4        1/4

If we use minimum validation error as the model selection criterion, we select model C.
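In code, this model-selection step is just "compute the validation error of each candidate and keep the smallest". A small self-contained sketch using the validation-set values from the table above:

    # Validation-set labels and predictions for items 9-12 (from the table above).
    val_true = [0, 0, 1, 0]
    val_predictions = {
        "A": [1, 0, 0, 0],
        "B": [0, 1, 1, 1],
        "C": [0, 0, 0, 0],
    }

    # Fraction of misclassified validation items for each model.
    val_errors = {
        name: sum(t != p for t, p in zip(val_true, y_pred)) / len(val_true)
        for name, y_pred in val_predictions.items()
    }
    # val_errors == {'A': 0.5, 'B': 0.5, 'C': 0.25}

    # Keep the model with the minimum validation error.
    best_model = min(val_errors, key=val_errors.get)
    print("selected model:", best_model)  # -> C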
(D) Now use model C to determine the class value for each data point in the test set. We do so by substituting the (known) attribute values of each item into classification model C. Again, recall that we know the true label of each of these items, so we can compare the values obtained from the classification model with the true labels to determine the classification error on the test set. Suppose we get the following results:
Classification results on the test set (attribute values known)

  Item No.    True Class    Model C
  13          0             0
  14          0             0
  15          1             0
  16          1             1
  Error                     1/4

(E) Based on the above, the estimate of the generalization error is 25%. What this means is that if we use model C to classify future items for which only the attributes will be known, not the class labels, we can expect to make incorrect classifications about 25% of the time.
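A one-line check of this test-set figure, with the labels and model-C predictions taken from the table above:

    # Test-set labels and model-C predictions for items 13-16 (from the table above).
    test_true   = [0, 0, 1, 1]
    test_pred_C = [0, 0, 0, 1]

    test_error = sum(t != p for t, p in zip(test_true, test_pred_C)) / len(test_true)
    print(f"estimated generalization error: {test_error:.0%}")  # -> 25%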
(F) A summary of the above is as follows (errors in %; the test set was evaluated only with the selected model C):

  Model    Training    Validation    Test
  A        25          50            -
  B        37.5        50            -
  C        37.5        25            25
Cross Validation

If the available data are limited, we employ Cross Validation (CV). In this approach, the data are randomly divided into k roughly equal sets. Training is done on (k-1) of the sets and the remaining k-th set is used for testing. This process is repeated k times, once with each set held out for testing (k-fold CV). The average error over the k repetitions is used as a measure of the test error. In the special case where each set contains a single item, so that k equals the number of data items, the procedure is called Leave-One-Out Cross-Validation (LOO-CV).
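The sketch below illustrates the k-fold procedure just described. The function train_and_evaluate is a hypothetical stand-in for whatever model-fitting and error-measuring routine is actually used; only the fold construction and the averaging are shown.

    import random

    def k_fold_cv_error(items, k, train_and_evaluate, seed=0):
        """Estimate the test error with k-fold cross-validation.

        items              -- list of data items (attribute vector, class label)
        k                  -- number of folds
        train_and_evaluate -- hypothetical function(train_items, test_items)
                              returning the error rate on test_items
        """
        shuffled = list(items)
        random.Random(seed).shuffle(shuffled)

        # Divide the shuffled data into k roughly equal folds.
        folds = [shuffled[i::k] for i in range(k)]

        errors = []
        for i in range(k):
            test_fold = folds[i]
            train_folds = [x for j, fold in enumerate(folds) if j != i for x in fold]
            errors.append(train_and_evaluate(train_folds, test_fold))

        # The average error over the k repetitions is the CV estimate of the test error.
        return sum(errors) / k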
EXAMPLE: Consider the data above, consisting of 16 items.

(A) Let k = 4, i.e. 4-fold cross-validation. Divide the data into four sets of 4 items each. Suppose the following set-up occurs and the errors obtained are as shown:
                        Set 1         Set 2              Set 3             Set 4
  Training              Items 1-12    Items 1-8, 13-16   Items 1-4, 9-16   Items 5-16
  Test                  Items 13-16   Items 9-12         Items 5-8         Items 1-4
  Error on test set
  (assumed)             25%           35%                28%               32%

Estimated Classification Error (CE) = (25 + 35 + 28 + 32) / 4 = 30%

(B) LOO-CV. For this, the data are divided into 16 sets, each consisting of 15 training items and one test item.
                        Set 1         Set 2            ...   Set 15          Set 16
  Training              Items 1-15    Items 1-14, 16   ...   Items 1, 3-16   Items 2-16
  Test                  Item 16       Item 15          ...   Item 2          Item 1
  Error on test set
  (assumed)             0%            100%             ...   100%            100%

Suppose the average classification error over the 16 repetitions, based on the values in the last row, is CE = 32%. Then the estimate of the test error is 32%.
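LOO-CV is the same k-fold procedure with k equal to the number of data items. Since each held-out set contains exactly one item, every per-repetition error is either 0% or 100%, so the final estimate is simply the fraction of items that are misclassified when held out. A small sketch with hypothetical per-item outcomes (these 0/1 values are illustrative; the notes above only assume the resulting average, not the individual values):

    # 1 = the held-out item was misclassified, 0 = it was classified correctly.
    # Hypothetical outcomes for the 16 items.
    misclassified_when_held_out = [0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0]

    loo_estimate = sum(misclassified_when_held_out) / len(misclassified_when_held_out)
    print(f"LOO-CV test-error estimate: {loo_estimate:.2%}")  # -> 31.25% (5/16)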
