Crowd IQ: Measuring the Intelligence of Crowdsourcing Platforms


Michal Kosinski, The Psychometrics Centre, University of Cambridge, UK (mk583@cam.ac.uk)
Yoram Bachrach, Microsoft Research, Cambridge, UK
Gjergji Kasneci, Microsoft Research, Cambridge, UK
Jurgen Van-Gael, Microsoft Research, Cambridge, UK
Thore Graepel, Microsoft Research, Cambridge, UK

ABSTRACT

We measure crowdsourcing performance based on a standard IQ questionnaire, and examine Amazon's Mechanical Turk (AMT) performance under different conditions. These include variations of the payment amount offered, the way incorrect responses affect workers' reputations, threshold reputation scores of participating AMT workers, and the number of workers per task. We show that crowds composed of workers of high reputation achieve higher performance than low-reputation crowds, and that the effect of the amount of payment is non-monotone: both paying too much and paying too little affects performance. Furthermore, higher performance is achieved when the task is designed such that incorrect responses can decrease workers' reputation scores. Using majority vote to aggregate multiple responses to the same task can significantly improve performance, which can be further boosted by dynamically allocating workers to tasks in order to break ties.

ACM Classification Keywords
H.4 Information Systems Applications: Miscellaneous

General Terms
Algorithms, Economics

Author Keywords
Crowdsourcing, Psychometrics, Incentive Schemes

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Web Science '12, June 22-24, 2012, Evanston, IL, USA.

INTRODUCTION

Consider a task relying heavily on mental abilities, such as solving an IQ test. Who would you expect to perform better: an average job applicant, or a small crowd composed of anonymous people paid a few cents for their work? Many would think that the single well-motivated individual should obtain a better or at least a comparable score. We show that even a small crowd can do better on an IQ test than 99% of the general population, and can also perform the task much faster.

The collective intelligence of crowds can be used to solve a wide range of tasks. Well-known examples of platforms using the crowd's labour to solve complex tasks include Wikipedia, Yahoo! Answers, and various prediction markets [1, 10, 43]. Similarly, rather than relying exclusively on the labour of their own employees, institutions are using crowdsourcing to carry out business tasks and obtain information using one of the crowdsourcing marketplaces, such as Amazon Mechanical Turk (AMT), Taskcn and CrowdFlower. These marketplaces connect workers, interested in selling their labour, with requesters seeking crowds to solve their tasks.

Requesters split their problems into single tasks, so-called Human Intelligence Tasks (HITs; our experiments were conducted on Amazon Mechanical Turk, so we adopt their terminology), and offer rewards to workers for solving them. Crowdsourcing markets offer great opportunities for both the requesters and the workers. They allow workers to easily access a large pool of jobs globally, and to work from the comfort of their own homes. On the other hand, requesters gain instant access to very competitively priced labour, which can be obtained quickly even for a time-critical task.

Typical crowdsourced tasks include filling in surveys, labelling items (such as images or descriptions) or populating databases. More sophisticated HITs may involve developing product descriptions, analysing data, translating short passages of text, or even writing press articles on a given subject. In our study, a worker's task was to answer a question from an IQ test.

Current implementations of crowdsourcing suffer from certain limitations and disadvantages, and to use them effectively one must devise appropriate designs for the tasks at hand. Different tasks may require solutions of different quality; in some problems it may be acceptable to be wrong sometimes, while other problems require solutions of the highest achievable quality. Also, while in some conditions it may be important to obtain solutions instantly, in others a degree of latency may be acceptable.

One major problem in crowdsourcing domains is that the effort level exerted by workers cannot be directly observed by requesters. A worker may attempt to free-ride the system and increase their earnings by lowering the quality and maximizing the quantity of their responses [16, 17, 34, 41]. To alleviate this free-riding problem, some researchers have proposed making the workers participate in a contest for th
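To make the aggregation idea from the abstract concrete, the following is a minimal Python sketch of majority voting over several workers' responses, with additional workers allocated dynamically when the vote ends in a tie. The function names, the initial batch size of five workers, and the extra-worker budget are illustrative assumptions of ours rather than details taken from the paper; `request_response` stands in for whatever mechanism actually collects one worker's answer to a HIT.

from collections import Counter

def aggregate_majority(responses):
    """Return the modal answer and whether the top count is tied."""
    ranked = Counter(responses).most_common()
    top_answer, top_count = ranked[0]
    tied = len(ranked) > 1 and ranked[1][1] == top_count
    return top_answer, tied

def crowd_answer(question, request_response, initial_workers=5, max_extra=4):
    """Majority vote over an initial batch of workers; on a tie, allocate
    one extra worker at a time until the tie is broken or the budget runs out."""
    responses = [request_response(question) for _ in range(initial_workers)]
    answer, tied = aggregate_majority(responses)
    extra = 0
    while tied and extra < max_extra:
        responses.append(request_response(question))  # dynamically allocate one more worker
        extra += 1
        answer, tied = aggregate_majority(responses)
    return answer

In use, `request_response` could post the question as a new assignment on the crowdsourcing platform and block until a worker's answer arrives; the tie-breaking loop only spends extra assignments on the questions where the initial crowd disagrees.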
