Crowdsourcing has been gaining relevance in recent years, mostly because it allows firms to outsource efficiently and to challenge their internal inertia through new ways of innovating and by involving customers in new product development. However, this does not preclude organizations whose reputation is built on their pools of talent and ability to innovate from benefitting from crowdsourcing as well.
One such case involves Harvard University and Topcoder. While the first organization hardly needs an introduction, the second is a crowdsourcing platform with a community of more than 600,000 specialists in computer programming. This community works through contests in competitive programming, design, and development. In 2013, however, Harvard University decided to use Topcoder as a new way to do research.
The idea is based on the division of labour: splitting tasks into smaller ones to gain productivity. This principle is already implemented by other crowdsourcing platforms such as Amazon Mechanical Turk, where the crowd performs very simple jobs for very little compensation. Topcoder represents the opposite scenario, in which the crowd is hyperspecialised in a certain field and therefore able to outperform many organizations in both outcomes and costs.
The goal of the Topcoder contest was to develop a new predictive algorithm able to outperform the NIH's standard algorithm (known as BLAST). To make this possible, Harvard first had to reframe the problem so that it would be accessible to individuals not trained in computational biology. The results were outstanding: in just two weeks, 16 of the submitted solutions were more accurate than BLAST and up to 1,000 times faster. And this was achieved with an investment of just $6,000, even though it represented more than 4,000 hours of work.
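To see what that reframing might look like, here is a minimal, purely illustrative sketch (not Harvard's actual formulation): once DNA reads are treated as plain strings, alignment scoring becomes an ordinary algorithms problem that any competitive programmer can attack, no biology background required. The function name and data below are hypothetical.

```python
# Illustrative only: sequence annotation reframed as a generic
# string-matching problem (function name and data are hypothetical).

def best_alignment_score(query: str, reference: str) -> int:
    """Return the length of the longest common substring of
    query and reference -- a toy stand-in for alignment scoring."""
    best = 0
    # Classic O(n*m) dynamic programming over substring endings.
    prev = [0] * (len(reference) + 1)
    for q in query:
        curr = [0] * (len(reference) + 1)
        for j, r in enumerate(reference, start=1):
            if q == r:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best

# To a contestant, DNA reads are just strings over {A, C, G, T}.
print(best_alignment_score("GATTACA", "ATTAC"))  # -> 5 ("ATTAC")
```

Stated as a string problem with a clear scoring rule, the task slots directly into Topcoder's contest format, which is precisely what made the crowd's speed and accuracy gains possible.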
After all, crowdsourcing may be something more than “the new(est) pool of cheap labour”…