Innovation tournaments are an important tool for organizations to tackle pressing innovation challenges. A well-known example is Netflix, which awarded $1 million to the winner of its innovation tournament for improved movie recommendations. However, many managers struggle with the question of whether, and how, to influence the outcomes of these open innovation settings by providing in-process feedback. The study by Wooten and Ulrich (2017) addresses this managerial challenge by investigating the effects of feedback on the participation in and the outcomes of innovation tournaments.
To investigate these effects, six field experiments were conducted on two real-life online contest platforms. Each of the six experiments is a contest in search of a new company logo, is open to all participants, is unblind (all ideas and feedback are visible to all participants), and allows participants to submit multiple entries. In each contest, participants are randomly assigned to one of three in-process feedback treatments: no feedback, 'random' feedback (feedback not associated with the idea submitted), or directed feedback based on the actual submission. The quality of each submitted idea is judged by a panel of consumers who fall within the stated target group. (Wooten & Ulrich, 2017)
A first result is that contest participation can be boosted with in-process feedback, especially when the feedback is directed. However, the results indicate that this boost did little to attract new participants and mainly increased the number of entries per participant. This boost in participation can be explained by increased engagement, as participants may feel more connected to the process. (Wooten & Ulrich, 2017)
The outcomes of the contests were evaluated in two ways: the quality of the ideas and the variance in quality among the ideas. In line with Hildebrand, Häubl, Herrmann and Landwehr (2013), both types of feedback caused idea generation to move towards the average, resulting in less variance in quality. In addition, the researchers found that participants whose first submission was already of high quality varied less in their second entry. Quality itself, however, did turn out to be affected by the type of feedback treatment: directed feedback proved beneficial for the next ideas submitted, whereas random feedback actually had a negative effect on subsequent submissions. This effect was to be expected, as Alexy, Criscuolo and Salter (2011) indicated that signalling information (and feedback can be seen as such) steers incoming ideas towards a better fit, which may then be judged as higher quality. (Wooten & Ulrich, 2017)
The question remains what these results imply for the management community. As indicated earlier, managers struggle with whether, and how, to influence their innovation contests. This study offers them concrete guidance on how to support their specific goals. For example, Alexy et al. (2011) identified that some organizations use open innovation mainly to increase engagement and do not focus on its outcomes because of the evaluation costs. For such organizations, the finding that feedback increases participation regardless of its content is valuable: they can invest relatively little in providing feedback, even meaningless feedback, and still boost participation in the contest. On the other hand, the study also shows that an organization willing to invest time and effort can increase the quality of ideas by providing directed feedback on the actual submissions. (Wooten & Ulrich, 2017)
Although these results could be beneficial to organizations, managers should be aware of the weaknesses of this study. One weakness is that the study measures solely the effects of daily feedback and therefore does not incorporate different timeframes. Studies in other fields, such as retargeting ads, found that customers are sensitive to the timeframe in which a response is received (Moriguchi et al., 2016). Future research should investigate whether this applies to innovation contests as well; until then, managers should be cautious in choosing their feedback timing. Furthermore, the star-rating feedback used in the experiments can be seen as a rather simplistic method of providing feedback. It has the advantages that results can easily be compared and that there is no ambiguity in the meaning of the feedback. Its generalizability, however, is at stake for more technical contests, in which feedback needs to be more in-depth to give a sense of direction. Nevertheless, these weaknesses do not undermine the practical implications; they should merely serve as a note of caution. (Wooten & Ulrich, 2017)
References:
Alexy, O., Criscuolo, P., & Salter, A. (2011). No soliciting: strategies for managing unsolicited innovative ideas. California Management Review, 54(3), 116-139.
Hildebrand, C., Häubl, G., Herrmann, A., & Landwehr, J. R. (2013). When social media can be bad for you: Community feedback stifles consumer creativity and reduces satisfaction with self-designed products. Information Systems Research, 24(1), 14-29.
Moriguchi, T., Xiong, G., & Luo, X. (2016). Retargeting ads for shopping cart recovery: Evidence from online field experiments.
Wooten, J. O., & Ulrich, K. T. (2017). Idea generation and the role of feedback: Evidence from field experiments with innovation tournaments. Production and Operations Management, 26(1), 80-99.