This is a review of the paper “Competing for Attention: An Empirical Study of Online Reviewers’ Strategic Behavior” written by Shen, Hu & Ulmer (2015).
Introduction
In 2007, a study by Deloitte found that 62% of consumers read consumer-written online product reviews, and among these consumers, 82% stated that their purchase decisions were directly influenced by online reviews. Shen, Hu & Ulmer (2015) argue that these percentages would be even higher if the study were replicated today, as consumers increasingly rely on the opinions and experiences shared online by other consumers when deciding what product to purchase. As such, it is important for companies to understand what incentivizes online reviewers to actually write reviews and how these incentives affect the content of their reviews (Shen et al., 2015).
The authors argue that there is a large body of literature on online product reviews, but that this literature has failed to examine how online reviewers are incentivized to write reviews (Shen et al., 2015). Existing work includes studies such as Basuroy et al. (2003), who looked at numerical aspects of reviews, and Godes & Silva (2012), who looked at the evolution of review ratings. However, the authors note that a large part of existing research simply assumes that online reviews are written for the same motives that offline consumers have when they provide word-of-mouth recommendations (Dichter, 1966).
With this gap in mind, the authors drew on literature from other contexts, such as the motivations for voluntary contributions to open source software and firm-hosted online forums. Building on this literature, the authors propose that gaining online reputation and attention from other consumers is an important motivation for reviewers’ contributions to review systems (Shen et al., 2015). To explore this, the paper “empirically investigates how incentives such as reputation and attention affect online reviewers’ behaviours” (Shen et al., 2015, p. 684).
Methodology
To conduct this empirical investigation, the authors use real-life data on online reviews of books and electronics, gathered from Amazon and Barnes & Noble (Shen et al., 2015). The data was collected on a daily basis and allows for comparisons both across product categories and across different review systems (Shen et al., 2015). Amazon and Barnes & Noble were selected because they are the two largest online book retailers and have two distinctly different review environments (Shen et al., 2015). Whereas Amazon ranks reviewers based on their contributions, allowing reviewers to build up a reputation and consistently gain future attention, Barnes & Noble offers no such mechanism (Shen et al., 2015).
The authors gathered a sample that includes all books released between September and October 2010, resulting in 1,751 books with 10,195 reviews (Shen et al., 2015, p. 685). Additionally, the authors randomly selected 500 electronic products on Amazon to allow for a cross-category comparison with the findings from the book reviews, which helps the authors generalize their results (Shen et al., 2015).
Based on this data, the authors study review behavior at two levels, namely the product level and the review rating level.
At the product level, the authors study how popularity (measured by the product’s sales volume) and crowdedness (measured by the number of preexisting reviews for the product) affect a reviewer’s decision on whether to write a review for a product (Shen et al., 2015). Additionally, the model controls for the number of potential reviewers (to account for the possibility that an increasing number of daily reviews is simply due to an increasing number of potential reviewers over time) and for the effect of time, since reviewers might lose interest in writing reviews for products that have been out for a while (Shen et al., 2015). The paper formalizes this in a product-level model.
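The paper’s exact specification is not reproduced here; as a minimal sketch, assuming a simple linear form over the variables just described (the variable names are ours, not the paper’s), the product-level model could look like:

$$\text{DailyReviews}_{it} = \beta_0 + \beta_1\,\text{Popularity}_{it} + \beta_2\,\text{Crowdedness}_{it} + \beta_3\,\text{PotentialReviewers}_{t} + \beta_4\,\text{Time}_{it} + \varepsilon_{it},$$

where $i$ indexes products and $t$ indexes days since release.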
At the review rating level, the authors study how reputation status affects reviewers’ decisions on whether to differentiate themselves from the current consensus (Shen et al., 2015). They look at how far a target rating deviates from the average of the preexisting ratings, which indicates how differentiated the rating is (Shen et al., 2015).
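A plausible operationalization of this differentiation measure, assuming the average is taken over the ratings already posted for the product (this notation is ours, not the paper’s), would be:

$$\text{Differentiation}_{ij} = \left|\,\text{Rating}_{ij} - \overline{\text{Rating}}_{j}\,\right|,$$

where $\text{Rating}_{ij}$ is reviewer $i$’s rating for product $j$ and $\overline{\text{Rating}}_{j}$ is the average of the preexisting ratings for product $j$.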
Main Results
The main result of this study is that online reviewers appear to behave differently when they have strong incentives to gain attention and enhance their online reputation (Shen et al., 2015). Looking at popularity, online reviewers tend to select popular books to review, as this allows them to receive more attention (Shen et al., 2015). As for crowdedness, fewer reviewers will choose to review a book once its review segment becomes crowded, indicating that reviewers tend to avoid spaces in which they would have to compete for attention (Shen et al., 2015).
In addition, differences in the results between Amazon and Barnes & Noble indicate that in online review environments with a reviewer ranking system, reviewers are more strategic and post more differentiated ratings to capture attention and improve their online reputation (Shen et al., 2015). In turn, this reviewer ranking system intensifies the competition for attention among reviewers. Beyond these main findings, the authors ran additional analyses to further understand online reviewers’ behaviours (Shen et al., 2015).
Running the same analyses on the electronic products dataset yielded consistent results. As such, the authors argue that their findings are robust (Shen et al., 2015).
Adding to these results, the authors argue that a reviewer ranking system through which reviewers can build up their reputation creates opportunities for reviewers to monetize their online reputation by receiving free products, travel invitations, and even job offers (Coster, 2006).
Strength & Managerial Implications
The main strength of this paper lies in its use of real-life data and in its practical implications for online review systems and the companies that make use of them.
As reviewers respond strategically to incentives such as a quantified online reputation, such incentives can be used to keep reviewers consistently motivated (Shen et al., 2015). An example of this is TripAdvisor’s reviewer profiles and contributor badges.
Additionally, as reviewers are more likely to write a review for popular but uncrowded products, companies can make use of this by sending review invitations to niche product buyers and emphasizing the small number of existing reviews, or even by highlighting that small number in the design of the website (Shen et al., 2015). As companies have their own specific goals, they may develop their own algorithms for selecting certain groups of reviewers to receive review invitations, rather than sending these invitations to every buyer, as is currently the common practice (Shen et al., 2015); a sketch of what such a selection rule could look like is given below.
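As a purely illustrative sketch of such a selection algorithm (the function, field names, and thresholds below are our assumptions, not anything from the paper), a company could invite only buyers of products that are popular but still uncrowded, in line with the popularity and crowdedness findings:

```python
# Hypothetical sketch (not from the paper): choose which buyers receive a
# review invitation, favoring popular but uncrowded products.
# All field names and thresholds are illustrative assumptions.

def select_invitees(purchases, products, max_existing_reviews=10,
                    min_sales_percentile=0.5):
    """purchases: list of (buyer_id, product_id) pairs.
    products: dict mapping product_id to its sales and review stats.
    Invites only buyers of popular but uncrowded products."""
    invitees = []
    for buyer_id, product_id in purchases:
        stats = products[product_id]
        popular = stats["sales_percentile"] >= min_sales_percentile
        uncrowded = stats["review_count"] <= max_existing_reviews
        if popular and uncrowded:
            invitees.append(buyer_id)
    return invitees

# Toy example: only "book-1" is both popular and uncrowded.
products = {
    "book-1": {"sales_percentile": 0.90, "review_count": 3},
    "book-2": {"sales_percentile": 0.95, "review_count": 250},  # crowded
    "book-3": {"sales_percentile": 0.20, "review_count": 1},    # unpopular
}
purchases = [("alice", "book-1"), ("bob", "book-2"), ("carol", "book-3")]
print(select_invitees(purchases, products))  # ['alice']
```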
Lastly, companies should pay careful attention to reviewers who consistently offer highly differentiated reviews, as these reviewers might simply be trying to game the system rather than serve the review’s purpose of signaling product quality (Shen et al., 2015). Such reviewers can be identified through reviewer ranks, but also through other signals, such as “helpfulness” votes, or handled by adjusting the ranking algorithms applied to them; a possible flagging rule is sketched below.
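A minimal, hypothetical sketch of such a flagging rule, assuming access to each review’s rating and the product’s average rating at posting time (this is our illustration, not the paper’s method):

```python
# Hypothetical sketch (not the paper's method): flag reviewers whose
# ratings consistently deviate from the consensus at posting time.
from statistics import mean

def flag_differentiators(reviews, min_reviews=5, threshold=1.5):
    """reviews: list of (reviewer_id, rating, avg_rating_at_post).
    Returns reviewer IDs whose mean absolute deviation from the
    preexisting average exceeds `threshold` (on a 1-5 star scale)
    over at least `min_reviews` reviews. Thresholds are assumptions."""
    deviations = {}
    for reviewer_id, rating, avg_at_post in reviews:
        deviations.setdefault(reviewer_id, []).append(abs(rating - avg_at_post))
    return {
        reviewer_id for reviewer_id, devs in deviations.items()
        if len(devs) >= min_reviews and mean(devs) > threshold
    }

# Toy example: "dave" repeatedly rates against the consensus, "erin" does not.
reviews = [
    ("dave", 5, 4.8), ("dave", 1, 4.5), ("dave", 5, 2.0),
    ("dave", 1, 4.0), ("dave", 5, 1.5),
    ("erin", 4, 4.2), ("erin", 5, 4.6), ("erin", 4, 3.9),
    ("erin", 4, 4.1), ("erin", 5, 4.4),
]
print(flag_differentiators(reviews))  # {'dave'}
```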
References
Basuroy, S., Chatterjee, S., & Ravid, S. A. (2003). How critical are critical reviews? The box office effects of film critics, star power, and budgets. Journal of Marketing, 67(4), 103-117.
Coster, H. (2006). The secret life of an online book reviewer. Forbes, December 1.
Deloitte. (2007). Most consumers read and rely on online reviews; companies must adjust. Deloitte & Touche USA LLP.
Godes, D., & Silva, J. C. (2012). Sequential and temporal dynamics of online opinion. Marketing Science, 31(3), 448-473.
Shen, W., Hu, Y. J., & Ulmer, J. R. (2015). Competing for attention: An empirical study of online reviewers’ strategic behavior. MIS Quarterly, 39(3), 683-696.
Group 10.