Many phenomena have emerged with Web 2.0, among which is online crowdsourcing. Crowdsourcing describes the practice in which organisations, such as governments, companies, and institutions, or individual persons make use of a large group of unspecified individuals for the sake of consultation, innovation, policy making, and research. These individuals might be professionals, volunteers, or people interested in the specific topic. Crowdsourcing does not necessarily have to take place on the internet; this blog post, however, will focus on crowdsourcing that makes use of the internet. The focus will specifically be on crowdsourcing within software engineering, as the thread throughout this blog post will be the paper “Crowdsourcing in Software Engineering: Models, Motivations, and Challenges” written by T. LaToza and A. van der Hoek in 2016.
Crowdsourcing has led to all sorts of incredible accomplishments across industries, though not much attention has been paid to its achievements within software engineering. Crowdsourcing has proven successful for some activities within software engineering, such as functionality testing, usability inspections, programming questions, and debugging. However, for crowdsourcing to become as impactful as it is in other industries, there are still some major challenges to overcome.
Crowdsourcing varies in many aspects, such as the way in which tasks are issued, the number of people that collaborate, and whether a task is subdivided into smaller tasks. Therefore, different crowdsourcing models exist in software engineering.
Starting with peer production, best described as a crowdsourcing model based on micro-participation from a large number of independent individuals (Haythornthwaite, 2009). In most cases the contributions are made without a monetary reward. Instead, contributors are motivated by a common purpose, a sense of community, reputation, and increased experience with new technologies (Bauwens, 2009). Well-known examples are Linux, Firefox, and Apache.
Next to peer production, competitions are growing within software development. Instead of treating workers as collaborators, this model treats them as contenders. As collaboration is reduced in this form of crowdsourcing, a more diverse input is gathered, since contenders each work individually. In some cases, more diverse input can result in higher-quality outcomes. These cases include tasks in which creativity is required, such as design tasks, but bug detection can also be very suitable for this type of crowdsourcing model (Leimeister et al., 2009).
Another model found in software development is microtasking. In microtasking, batches of microtasks are posted. These tasks are often completed by multiple participants at a time, and the best solutions are selected using voting and other mechanisms. An example is Amazon’s Mechanical Turk, a platform on which microtasks are posted. In software development, this model is most suitable for testing. Specific user scenarios or functionalities can easily be tested by the enormous labour force, as microtasking is easily scalable and very fast. Screening and payment are handled through the platform, so it can be much simpler for companies to post the to-be-tested user scenarios on these platforms than to hire employees to do the testing.
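The selection step described above, picking one answer out of several redundant submissions, is often nothing more than a plurality vote. A minimal sketch of that aggregation step (the function and scenario names are illustrative, not taken from the paper or any platform API):

```python
from collections import Counter

def aggregate_by_majority(responses):
    """For each microtask, keep the answer most workers agreed on.

    responses: dict mapping a task id to the list of answers
    submitted by independent crowd workers.
    """
    results = {}
    for task_id, answers in responses.items():
        winner, _count = Counter(answers).most_common(1)[0]
        results[task_id] = winner
    return results

# Example: three workers each judge whether a test scenario passed.
responses = {
    "scenario-login": ["pass", "pass", "fail"],
    "scenario-signup": ["fail", "fail", "fail"],
}
print(aggregate_by_majority(responses))
# {'scenario-login': 'pass', 'scenario-signup': 'fail'}
```

Real platforms layer more on top of this (worker qualifications, tie-breaking, weighting by past accuracy), but the redundancy-plus-vote core is the same.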
Crowdsourcing can offer many advantages, such as reduced time to market, the participation of specialists for certain tasks, and the consideration of multiple alternatives (LaToza and van der Hoek, 2016). However, the nature of software causes several major challenges that need to be overcome before these benefits can be reaped. The biggest challenge in software engineering is that for a task to be crowdsourced, it must have clear goals and a simple context, as the participant must fully understand the details and scope of the task.
Therefore, it is no surprise that the biggest successes of crowdsourcing in software engineering have been for small, specific tasks such as testing and debugging. Yet many software tasks are complex and hard to articulate precisely, making it difficult to break them down into smaller, clearly defined tasks.
Even if a successful decomposition method can be found for these complex tasks, can the requirements be specified in enough detail to successfully merge the decomposed pieces back into a complete whole?
In-house development, outsourcing, and contracting are still dominant in the industry. Even though crowdsourcing has had some successes, it has not disrupted common practice within software engineering. It does, however, have the potential to do so, and I am very curious to see what the future of crowdsourcing in this industry will hold.
Bauwens, M. (2009). Class and capital in peer production. Capital & Class, 33(1), pp.121-141.
Haythornthwaite, C. (2009). Crowds and Communities: Light and Heavyweight Models of Peer Production. IEEE.
LaToza, T. and van der Hoek, A. (2016). Crowdsourcing in Software Engineering: Models, Motivations, and Challenges. IEEE Software, 33(1), pp.74-80.
Leimeister, J., Huber, M., Bretschneider, U. and Krcmar, H. (2009). Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition. Journal of Management Information Systems, 26(1), pp.197-224.