EXPLORING TASK PROPERTIES IN CROWDSOURCING
AN EMPIRICAL STUDY ON MECHANICAL TURK


Schulze, Thimo, University of Mannheim, Chair in Information Systems III, Schloss, 68131 Mannheim, Germany, schulze@wifo.uni-mannheim.de
Seedorf, Stefan, University of Mannheim, Chair in Information Systems III, Schloss, 68131 Mannheim, Germany, seedorf@wifo.uni-mannheim.de
Geiger, David, University of Mannheim, Chair in Information Systems III, Schloss, 68131 Mannheim, Germany, geiger@wifo.uni-mannheim.de
Kaufmann, Nicolas, mail@nicolas-kaufmann.de
Schader, Martin, University of Mannheim, Chair in Information Systems III, Schloss, 68131 Mannheim, Germany, martin.schader@uni-mannheim.de

Abstract

In recent years, crowdsourcing has emerged as a new approach for outsourcing work to a large number of human workers in the form of an open call. Amazon's Mechanical Turk (MTurk) enables requesters to efficiently distribute micro tasks to an unknown workforce which selects and processes them for small financial rewards. While worker behavior and demographics as well as task design and quality management have been studied in detail, more research is needed on the relationship between workers and task design. In this paper, we conduct a series of explorative studies on task properties on MTurk. First, we identify properties that may be relevant to workers' task selection through qualitative and quantitative preliminary studies. Second, we provide a quantitative survey with 345 participants. As a result, the task properties are ranked and set into relation with the workers' demographics and background. The analysis suggests that there is little influence of education level, age, and gender. Culture may influence the importance of bonuses, however. Based on the explorative data analysis, five hypotheses for future research are derived. This paper contributes to a better understanding of task choice and implies that factors other than demographics influence workers' task selection.

Keywords: Amazon Mechanical Turk, Cultural Differences, Survey, Crowdsourcing

1 Introduction

“Crowdsourcing,” first mentioned by Howe (2006), can be defined as the act of taking a task once performed by the employees of a company and outsourcing it to a large, undefined group of people in an open call (Howe, 2008). The term has been used for a wide variety of phenomena and is related to areas like Open Innovation, Co-Creation, or User Generated Content. Recently, the area of “paid crowdsourcing” has gained a lot of momentum, with companies like CrowdFlower and CloudCrowd receiving big venture funding (T, 2010a, 2010b). Frei (2009) defines paid crowdsourcing as using a technology intermediary for outsourcing paid work of all kinds to a large group of workers. Because of its dynamic scalability, paid crowdsourcing is often compared to cloud computing (Corney et al., 2009; Lenk et al., 2009). Paid crowdsourcing on a large scale is enabled by platforms that allow requesters and workers to allocate resources. Amazon Mechanical Turk is a market platform that gives organizations (“Requesters”) the opportunity to get large amounts of work completed by a cost-effective, scalable, and potentially large number of disengaged workers (“Turkers”). Requesters break down jobs into micro tasks called HITs (Human Intelligence Tasks), which are selected and completed by human workers for a relatively small reward. Example tasks include image labeling, transcription, content categorization, and web research.
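To make the requester workflow more concrete, the sketch below shows how such a micro task could be posted programmatically. It is a minimal illustration, not part of the study: it uses the boto3 MTurk client against the requester sandbox, which postdates this paper, and the task title, reward, redundancy level, and question file are hypothetical placeholders.

```python
# Illustrative sketch only (not from the paper): posting a micro task (HIT)
# as a requester, using the boto3 MTurk client against the sandbox endpoint.
# All task parameters and the question file are hypothetical examples.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# MTurk expects the task content as question XML (e.g., an HTMLQuestion
# document); here it is read from a hypothetical local file.
with open("image_labeling_question.xml") as f:
    question_xml = f.read()

response = mturk.create_hit(
    Title="Label the main object shown in an image",
    Description="Look at one image and choose the category that fits best.",
    Keywords="image, labeling, categorization",
    Reward="0.05",                    # small financial reward per assignment, in USD
    MaxAssignments=3,                 # redundant assignment to three workers
    AssignmentDurationInSeconds=300,  # time a worker has to finish one assignment
    LifetimeInSeconds=86400,          # how long the HIT stays available on the market
    Question=question_xml,
)
print("Created HIT:", response["HIT"]["HITId"])
```

Setting MaxAssignments to more than one is also what enables the redundant-assignment quality measure discussed below.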

However, this open nature of task allocation exposes the requester to serious problems regarding the quality of results. Some workers submit HITs by randomly selecting answers, submitting irrelevant text, etc., hoping to be paid for simply completing a task. Besides this inevitable “spam problem,” reasons for bad results may include that workers did not understand the requested task or were simply not qualified to solve it. Verifying the correctness of every submitted solution can often be as costly and time-consuming as performing the task itself (Ipeirotis et al., 2010). The prevalent solution to deal with these issues is the implementation of suitable quality management measures. A common approach is redundant assignment of tasks to multiple workers in combination with a subsequent comparison of the respective results. Another option is peer review, where results from one worker are verified by others with a higher level of credibility (Kern et al., 2010). The resources invested into these measures can constitute a considerable overhead and diminish the efficiency of micro-task crowdsourcing.
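As an illustration of the redundant-assignment approach just described (not taken from the paper), the following minimal sketch compares the answers that several workers submitted for the same HIT and keeps the majority answer, flagging items without sufficient agreement for review; all names and data are hypothetical.

```python
from collections import Counter

def aggregate_by_majority(answers_per_hit, min_agreement=2):
    """Compare redundant answers per HIT and keep the majority answer.

    Returns None for HITs where no answer reaches the required agreement,
    so those items can be routed to manual review or peer review.
    """
    aggregated = {}
    for hit_id, answers in answers_per_hit.items():
        answer, votes = Counter(answers).most_common(1)[0]
        aggregated[hit_id] = answer if votes >= min_agreement else None
    return aggregated

# Hypothetical example: three redundant assignments per image-labeling HIT.
submissions = {
    "HIT_1": ["cat", "cat", "dog"],     # clear majority -> accepted
    "HIT_2": ["car", "tree", "house"],  # no agreement   -> needs review
}
print(aggregate_by_majority(submissions))
# {'HIT_1': 'cat', 'HIT_2': None}
```

In practice such simple majority voting is often refined, for example with worker-reliability weighting, but even this basic form shows how redundancy multiplies the cost of every task.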

Research has shown that the quality of task results can be substantially improved by choosing an adequate task design (Huang et al., 2010). Depending on the type and background of a task, its presentation may influence the result quality in two ways: First, a good and appropriate design facilitates the overa
