Adaptive Task Assignment for Crowdsourced Classification


Chien-Ju Ho, Shahin Jabbari
cjho@cs.ucla.edu, shahin@cs.ucla.edu
University of California, Los Angeles

Jennifer Wortman Vaughan
Microsoft Research, New York City, and University of California, Los Angeles

Proceedings of the 30th International Conference on Machine Learning, Atlanta, Georgia, USA, 2013. JMLR: W&CP.

Abstract

Crowdsourcing markets have gained popularity as a tool for inexpensively collecting data from diverse populations of workers. Classification tasks, in which workers provide labels (such as "offensive" or "not offensive") for instances (such as "websites"), are among the most common tasks posted, but due to human error and the prevalence of spam, the labels collected are often noisy. This problem is typically addressed by collecting labels for each instance from multiple workers and combining them in a clever way, but the question of how to choose which tasks to assign to each worker is often overlooked. We investigate the problem of task assignment and label inference for heterogeneous classification tasks. By applying online primal-dual techniques, we derive a provably near-optimal adaptive assignment algorithm. We show that adaptively assigning workers to tasks can lead to more accurate predictions at a lower cost when the available workers are diverse.

1. Introduction

Crowdsourcing markets provide a platform for inexpensively harnessing human computation power to solve tasks that are notoriously difficult for computers. In a typical crowdsourcing market, such as Amazon Mechanical Turk, registered users may post their own "microtasks," which are completed by workers in exchange for a small payment, usually around ten cents. A microtask may involve, for example, verifying the phone number of a business, determining whether or not an image contains a tree, or determining (subjectively) whether or not a website is offensive. [...] and spam is prevalent (Wais et al., 2010). For classification tasks, this problem can be overcome by collecting labels for each instance from multiple workers and combining these to infer the true label. Indeed, much recent research has focused on developing algorithms for combining labels from heterogeneous labelers (Dekel & Shamir [...]; Ipeirotis et al., 2010). However, this research has typically focused on the inference problem, sidestepping the question of how to assign workers to tasks by assuming that the learner has no control over the assignment.
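The redundant-labeling idea described above can be sketched in a few lines. The sketch below assumes each worker is independently correct with the same probability p, a simplification not made in the paper; all function names here are illustrative:

```python
import random
from collections import Counter

def majority_vote(labels):
    """Combine redundant binary labels by simple majority."""
    return Counter(labels).most_common(1)[0][0]

def simulate_error(p, n_labels, n_tasks=2000, seed=0):
    """Empirical error rate of majority vote when each of n_labels
    workers is independently correct with probability p (n_labels odd)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_tasks):
        truth = rng.choice([0, 1])
        labels = [truth if rng.random() < p else 1 - truth
                  for _ in range(n_labels)]
        if majority_vote(labels) != truth:
            errors += 1
    return errors / n_tasks

# With more redundant labels, the error of the combined label shrinks.
print(simulate_error(0.7, 1), simulate_error(0.7, 5), simulate_error(0.7, 11))
```

This is why collecting several noisy labels per instance works: individual 30% error rates compound into a much smaller error for the majority label.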

One exception is the work of Karger et al. (2011a;b), who focus on the situation in which all tasks are homogeneous (i.e., equally difficult and not requiring specialized skills), in which case they show that it is not possible to do better than using a random assignment. One might expect the assignment to matter more when the tasks are heterogeneous. Classifying images of dogs versus images of cats is likely easier for the average worker than classifying images of Welsh Terriers versus images of Airedale Terriers. It might be necessary to assign more workers to tasks of the latter type to produce high-confidence labels. The assignment can also be important when tasks require specialized skills. A worker who knows little about dogs may not be able to produce high-quality labels for the Terrier task, but may have skills that are applicable elsewhere.
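The gap between easy and hard tasks can be quantified with a short calculation: how many redundant labels does majority vote need before its error drops below a target? The accuracy numbers below (0.9 for the cats-vs-dogs style task, 0.6 for the Terrier task) are illustrative, not taken from the paper:

```python
from math import comb

def majority_error(p, k):
    """Exact probability that a k-worker majority vote is wrong when
    each worker is independently correct with probability p (k odd):
    the chance that at most floor(k/2) workers answer correctly."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1))

def labels_needed(p, target, k_max=201):
    """Smallest odd number of labels whose majority-vote error
    falls at or below the target, or None if k_max is exceeded."""
    for k in range(1, k_max + 1, 2):
        if majority_error(p, k) <= target:
            return k
    return None

print(labels_needed(0.9, 0.05))  # easy task: a few labels suffice
print(labels_needed(0.6, 0.05))  # hard task: many times more labels
```

Under these toy numbers the hard task needs over twenty times the labels of the easy one for the same confidence, which is exactly why a fixed per-task redundancy wastes budget on easy tasks while underserving hard ones.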

We investigate the problem of task assignment and label inference for heterogeneous classification tasks. In our model, a task requester has a set of tasks, each of which consists of an instance for which he would like to infer a binary label. Workers arrive online. The learner must decide which tasks to assign to each worker, and then use the noisy labels produced by the workers to infer the true label for each task. The goal of the learner is to output a set of labels with sufficiently low error while requesting as few labels from workers as possible.
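The online protocol just described can be sketched as a simple loop. The margin-based assignment rule below (send each arriving worker to the currently least-certain tasks) is only an illustrative stand-in for the paper's primal-dual algorithm, and all names are mine:

```python
import random

def assign_and_infer(worker_stream, n_tasks, labels_per_worker, seed=0):
    """Sketch of the online protocol: workers arrive one at a time, the
    learner chooses which tasks each worker labels, and at the end the
    collected noisy labels are combined into one prediction per task.

    worker_stream: iterable of callables, each mapping a task id to a
    noisy binary label; the learner never observes worker quality."""
    votes = {t: [] for t in range(n_tasks)}
    for worker in worker_stream:
        # Illustrative rule: assign this worker to the tasks whose
        # current vote margin is smallest, i.e. the least certain ones.
        order = sorted(votes, key=lambda t: abs(sum(votes[t])))
        for t in order[:labels_per_worker]:
            votes[t].append(1 if worker(t) == 1 else -1)
    # Final inference: sign of the vote total for each task.
    return {t: (1 if sum(v) >= 0 else 0) for t, v in votes.items()}

# Toy stream: 30 workers, each correct with probability 0.8.
truth = [0, 1, 1, 0, 1]
def make_worker(rng):
    return lambda t: truth[t] if rng.random() < 0.8 else 1 - truth[t]

rng = random.Random(1)
workers = [make_worker(rng) for _ in range(30)]
print(assign_and_infer(workers, n_tasks=5, labels_per_worker=3))
```

The point of the sketch is the interface, not the rule: the learner's only lever is which tasks each arriving worker sees, and everything downstream depends on how that budget of label requests is spent.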

Building on online primal-dual methods (Buchbinder & Naor, 2009a;b) [...], techniques that have been applied to problems ranging from [...] (Devanur et al., 2011) to network optimization (Alon et al., 2004) and paging (Bansal et al., 2007). [...] Ho & Vaughan (2012) [...]: if the requester could quickly verify the accuracy of labels, he wouldn't need the workers' labels in the first place. In their model, each task may be assigned to a worker only once. In ours, repeated labeling is necessary, since there would be no way to estimate worker quality without it. These differences require a different problem formulation and novel analysis techniques.

Repeated labeling has received considerable empirical attention, dating back to the EM-based algorithm of Dawid & Skene (1979) [...] when it is close to 0, the label will be random noise. To model the fact that the requester cannot wait arbitrarily long, we assume that he can only assign tasks to the first m workers who arrive, for some known m. We therefore index workers 1, ..., m. Later we consider an additional m workers who are used for exploration. In addition to assigning tasks to workers, the learning algorithm must produce a final estimate of each task's label.
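The worker-quality estimation that repeated labeling makes possible can be illustrated with a one-coin variant of the Dawid & Skene (1979) EM procedure mentioned above: alternate between inferring task labels from current accuracy estimates and re-estimating accuracies from the inferred labels. This is a generic sketch, not the algorithm analyzed in the paper, and all names are mine:

```python
import random

def em_one_coin(label_matrix, n_iters=20):
    """One-coin Dawid-Skene EM. label_matrix[w][t] is worker w's binary
    label for task t, or None if w did not label t. Returns predicted
    labels and per-worker accuracy estimates."""
    n_workers = len(label_matrix)
    n_tasks = len(label_matrix[0])
    acc = [0.7] * n_workers   # optimistic initial guess for each worker
    post = [0.5] * n_tasks    # posterior P(true label = 1) per task
    for _ in range(n_iters):
        # E-step: posterior over each task's label given accuracies.
        for t in range(n_tasks):
            p1, p0 = 1.0, 1.0
            for w in range(n_workers):
                lab = label_matrix[w][t]
                if lab is None:
                    continue
                p1 *= acc[w] if lab == 1 else 1 - acc[w]
                p0 *= acc[w] if lab == 0 else 1 - acc[w]
            post[t] = p1 / (p1 + p0)
        # M-step: accuracy = expected fraction of agreements with the
        # (soft) inferred labels, clamped away from 0 and 1.
        for w in range(n_workers):
            agree, total = 0.0, 0
            for t in range(n_tasks):
                lab = label_matrix[w][t]
                if lab is None:
                    continue
                agree += post[t] if lab == 1 else 1 - post[t]
                total += 1
            if total:
                acc[w] = min(max(agree / total, 0.01), 0.99)
    return [1 if q >= 0.5 else 0 for q in post], acc

# Toy check: three reliable workers and one spammer label 30 tasks.
rng = random.Random(0)
truth = [rng.choice([0, 1]) for _ in range(30)]
def noisy(p):
    return [t if rng.random() < p else 1 - t for t in truth]
matrix = [noisy(0.9), noisy(0.9), noisy(0.9), noisy(0.5)]
pred, acc = em_one_coin(matrix)
print(sum(p == t for p, t in zip(pred, truth)), "of 30 tasks recovered")
```

Because every worker labels many of the same tasks, their agreement with the consensus reveals their quality, which is exactly why the model above insists on repeated labeling: with a single label per task, worker quality would be unidentifiable.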
