In the article Crowdsourcing Systems on the World-Wide Web (2011), Doan et al. assess the contemporary business landscape of online crowdsourcing (CS). Drawing on a wide range of real-life examples of varying notoriety and a practical, taxonomic structure, the article examines specific types of CS architectures and identifies the key challenges such systems face today. The authors develop an uncharacteristically broad, inclusive definition of CS that encompasses both direct user contributions, as in wikis or idea generators, and passive user value, as on social media sites. Compared with other academic work on this topic, Doan et al. ground their definition in the intention of these systems as problem-solving tools, rather than merely in the crowd-based method through which they operate. Thus, they define CS systems as ‘[Any system that] enlists a crowd of humans to help solve a problem defined by the system owners.’
While the article contains no scientific testing or experimentation, and offers little in the way of an academic research agenda, it holds practical value. The authors develop an atypical taxonomy based on nine characteristics of CS systems and use these to determine system typologies. A distinction is first drawn between CS systems that use explicit and implicit methods of user collaboration: traditional platforms like Wikipedia or Mechanical Turk are overt (explicit) about the value users generate, whereas systems like social media sites or reCAPTCHA are not. Compared with the other, more academically pervasive dimensions used (e.g. user inputs and system architecture), this initial grouping is unusual because it forces the inclusion of so-called ‘piggybacking’ systems. According to the authors, these are systems with no direct users; instead, the system exploits traces left by users on other sites (often search engines like Google) to solve its defined problem.
The unconventionally broad definition of CS used in this article lends itself well to a theoretical taxonomy, but it creates problems in application. By including not only implicit systems, which lack an overt problem or solution for users to engage with, but also ‘piggybacking’ systems, which lack users entirely, the tangible concept of CS is made opaque; even the Internet in its entirety falls under this definition. This makes it hard to categorize real examples concretely and reduces confidence in any actionable recommendation that pertains to just one category (“does what applies to the Internet really apply to my business?”). Additionally, the paper would benefit from modern revision: while the examples are outdated, as is expected in a fast-moving area such as this, they still serve as effective prototypes of their present-day counterparts.
Beyond developing a taxonomy of CS system types, the article concludes by discussing the challenges most often faced by such systems and recommending approaches based on practical examples.
Four distinct challenges are discussed:
- how to recruit and retain users?
- what contributions can users make?
- how to combine user contributions?
- how to evaluate users and their contributions?
For each challenge, recommendations are given for assorted CS types using real-life examples, offering readers insight into how existing systems have solved or circumvented these challenges in various CS contexts. These recommendations are the crux of the paper’s value, as it lacks direct research or an academic agenda of any kind. Nevertheless, the authors have constructed a useful, albeit not scholarly, lens through which to examine crowdsourcing.