DESIGNING COMPLEX CROWDSOURCING APPLICATIONS COVERING MULTIPLE PLATFORMS AND TASKS

Authors

  • ALESSANDRO BOZZON, Software and Computer Technologies Department, Delft University of Technology, Postbus 5, 2600 AA Delft, The Netherlands
  • MARCO BRAMBILLA, Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy
  • STEFANO CERI, Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy
  • ANDREA MAURI, Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy
  • RICCARDO VOLONTERIO, Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy

Keywords:

Crowdsourcing, Workflow, Social Network, Community, Control

Abstract

A number of emerging crowd-based applications cover very different scenarios, including opinion mining, multimedia data annotation, localised information gathering, marketing campaigns, expert response gathering, and so on. In most of these scenarios, applications can be decomposed into tasks that collectively produce their results; task interactions give rise to arbitrarily complex workflows. In this paper we propose methods and tools for designing crowd-based workflows as interacting tasks. We describe the modelling concepts that are useful in this framework, including typical workflow patterns, whose function is to decompose a cognitively complex task into simple interacting tasks for cooperative solving. We then discuss how workflows and patterns are managed by CrowdSearcher, a system for designing, deploying and monitoring applications on top of crowd-based systems, including social networks and crowdsourcing platforms. Tasks performed by humans consist of simple operations which apply to homogeneous objects; the complexity of aggregating and interpreting task results is embodied within the framework. We show our approach at work on a validation scenario and we report quantitative findings, which highlight the effect of workflow design on the final results.




Published

2015-03-31

How to Cite

BOZZON, A., BRAMBILLA, M., CERI, S., MAURI, A., & VOLONTERIO, R. (2015). DESIGNING COMPLEX CROWDSOURCING APPLICATIONS COVERING MULTIPLE PLATFORMS AND TASKS. Journal of Web Engineering, 14(5-6), 443–473. Retrieved from https://journals.riverpublishers.com/index.php/JWE/article/view/3847

Issue

Section

Articles