Finding your way in a review rollercoaster: review analysis


You know that feeling of trying to find that one perfect coffee machine on Amazon, scrolling through tons of reviews to find the one best suited to your needs? Think about it: if we as individual consumers have difficulty browsing through reviews to find those that add value, imagine how hard it must be for companies to analyse such large amounts of reviews.

This difficulty arises mainly from what we call the ‘4 V’s of Data’: volume, variety, velocity and veracity (Salehan and Kim, 2016). That’s where the paper ‘Predicting the performance of online consumer reviews: a sentiment mining approach to big data analytics’ by Salehan and Kim (2016) comes in. The paper looks at the predictors of both readership and helpfulness of online consumer reviews (OCR). Using text-mining techniques, it aims to develop an approach that companies can adopt to build automated systems for sorting and classifying large amounts of OCR. Sounds exciting, doesn’t it?! Let’s have a look at how this works.

What this paper is about

Whereas previous literature focuses on the factors that determine the perceived helpfulness of a review, this paper takes a step back. It starts by considering the factors that determine the likelihood of a consumer paying attention to a review in the first place: without reading a review, you cannot judge its helpfulness. Hence the research questions are as follows:

Research Question 1: Which factors determine the likelihood of a consumer paying attention to a review?
Research Question 2: Which factors determine the perceived helpfulness of a review?

To answer these research questions, the paper looks at a sample of 2,616 Amazon reviews and considers several factors the authors believe may impact review readership, helpfulness, or both. Readership is measured as the total number of votes (helpful and not helpful), whereas helpfulness is measured as the proportion of helpful votes out of total votes. Since I see no reason to bore you with detailed methodology, I made a quick, easy-to-follow overview of the different factors the paper considers, using a random Amazon review as an example:

[Figure: the paper’s review factors illustrated on a sample Amazon review]
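
For concreteness, here is a minimal sketch of how the two dependent measures could be computed from raw vote counts. The column names are my own illustration, not the paper’s:

```python
# Minimal sketch: computing the paper's two dependent variables from vote counts.
# The column names are illustrative assumptions, not taken from the paper.
import pandas as pd

reviews = pd.DataFrame({
    "review_id": [1, 2, 3],
    "helpful_votes": [45, 3, 0],
    "total_votes": [50, 10, 0],
})

# Readership proxy: total number of votes cast (helpful and not helpful).
reviews["readership"] = reviews["total_votes"]

# Helpfulness: share of helpful votes; undefined (NaN) when nobody voted.
reviews["helpfulness"] = (
    reviews["helpful_votes"]
    / reviews["total_votes"].where(reviews["total_votes"] > 0)
)
print(reviews)
```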

Findings

  • Longevity is measured as the number of days since the review was created. It has a positive effect on readership, meaning older reviews are more likely to be read. While this may sound counterintuitive, it could simply be due to the way Amazon sorts reviews: by default, users see the reviews with the most helpful votes first, unless they change the setting to show the most recent reviews first.
  • Review and title sentiment are measured by running sentiment analysis on the text, which scores a review by how emotional its content is, either positively or negatively (a minimal scoring sketch follows this list). Both have a small, negative effect on helpfulness, which suggests that consumers perceive emotional content as less rational and therefore less useful. These findings differ somewhat from previous research, which showed that reviews carrying a strong negative sentiment have a stronger impact on buyer behaviour than positive or neutral reviews.
  • Title length has a small, negative effect on readership, meaning that reviews with longer titles are less likely to be read.
  • Review length has a large, positive effect on both readership and helpfulness, meaning that longer reviews are both read more often and judged more helpful on average.
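
As promised above, here is a minimal sketch of how review and title sentiment could be scored. The paper does not prescribe a specific tool, so NLTK’s VADER analyser is used here purely as an accessible stand-in:

```python
# Sketch of sentiment scoring for a review title and body. VADER is an
# assumption for illustration; the paper's exact tooling may differ.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

title = "Absolutely love this coffee machine!!!"
body = "Works as expected. Grinds beans, brews quickly, easy to clean."

# 'compound' runs from -1 (strongly negative) to +1 (strongly positive);
# scores near 0 indicate neutral, less emotional text.
print("title sentiment: ", sia.polarity_scores(title)["compound"])
print("review sentiment:", sia.polarity_scores(body)["compound"])
```

Following the paper’s findings, a company could, for instance, flag reviews whose scores sit close to zero as candidates for more ‘rational’, and thus more helpful, content.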

All of the above findings are statistically significant. Whereas previous research focused mainly on the numerical rating and the length of a review, this paper examines the textual information a review contains, which makes its practical implications substantial. For example, the paper suggests that companies can use sentiment data to analyse the large amounts of OCR constantly produced on the Internet. The paper also shows the importance of the title: keep it short and not too emotional. This is something e-commerce companies can guide their customers on when they write a review.

Discussion

In my opinion, a major limitation of this paper is that it uses the number of ‘total votes’ as a proxy for the number of times a review was read. I don’t know about you, but I certainly don’t hit the vote button every time I read a review. A different methodology might therefore be better. For example, you could track customers as they move over a page, record how long they spend on each review, and count a review as ‘read’ if this time falls between, say, 20 and 50 seconds (since you don’t want to count people who simply left the page open). A rough sketch of this idea follows below.
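
To make the suggestion concrete, here is a sketch of that alternative readership measure. The 20-50 second window, the numbers and the event format are all assumptions for illustration only:

```python
# Count a review as "read" only if dwell time falls inside a plausible window,
# filtering out skimmers (too short) and idle tabs (too long). All values
# below are illustrative assumptions.

MIN_DWELL, MAX_DWELL = 20.0, 50.0  # seconds

def was_read(dwell_seconds: float) -> bool:
    """True if the dwell time suggests an actual read."""
    return MIN_DWELL <= dwell_seconds <= MAX_DWELL

# Hypothetical tracking events: (review_id, seconds the review was in view)
events = [("r1", 4.2), ("r1", 33.0), ("r2", 47.5), ("r3", 600.0)]

reads_per_review: dict[str, int] = {}
for review_id, dwell in events:
    if was_read(dwell):
        reads_per_review[review_id] = reads_per_review.get(review_id, 0) + 1

print(reads_per_review)  # {'r1': 1, 'r2': 1}
```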

How about in practice?

This sounds great, but are there actually companies out there using similar approaches to make their customers’ lives easier? A company that does this very well is Coolblue. Their aim is to be the most customer-centric company in the Netherlands (Coolblue, 2018), and they go even further than what the paper describes. Their product pages contain a summary of each product’s pros and cons to allow for a quick scan; whether these pros and cons are distilled from frequently placed customer reviews isn’t clear. Moreover, they ask customers to fill in pros and cons themselves, so that prospective buyers don’t need to read through long, unstructured sentences. Lastly, they use review helpfulness to rank reviews by relevance.

Sources

Salehan, M., & Kim, D. J. (2016). Predicting the performance of online consumer reviews: A sentiment mining approach to big data analytics. Decision Support Systems, 81, 30-40.

Coolblue (2018). Yearbook 2017. Accessed via http://nieuws.coolblue.nl/jaarboek-2017/

Culture, Conformity and Emotional Suppression in Online Reviews


Paper: “Culture, Conformity and Emotional Suppression in Online Reviews” by Hong et al., 2016

“While Americans say, ‘the squeaky wheel gets the grease,’ the Japanese say, ‘the nail that stands out gets pounded down.’”

In other words, in the States, the people who complain the loudest get the most attention, while in Japan people are discouraged from expressing personal opinions loudly, especially when those opinions don’t fit community expectations. This phenomenon illustrates the differences between individualist (American) and collectivist (Japanese) cultures as defined by Hofstede (2001) and House et al. (2004). But this post is not entirely about cultural differences – it is about their influence on online reviews.

Helpfulness of online consumer reviews: Readers’ objectives and review cues.


Customers generally seek quality information about a product before purchasing it. The emergence of the Internet has made it convenient to access a variety of sources for such information, including consumer-generated ratings and reviews. These consumer-generated product evaluations are typically found on portals (e.g. google.com), retailer sites (e.g. Amazon), manufacturer sites (e.g. Nike) or product-evaluation websites (e.g. yelp.com). They have strong effects on consumer persuasion, willingness to pay and trust (Tsekouras, lecture 2017). However, not all customer reviews have the same effect on purchase decisions, as some reviews are perceived as more helpful than others (Chen et al., 2008).

Building on this research, Baek et al. (2012) set out to determine which factors influence the perceived helpfulness of online reviews. In addition, they investigated which factors matter more depending on the reader’s purpose for reading a review.

Furthermore, the paper extends previous research by considering two routes through which readers process online reviews. Customers may take both peripheral and central cues into account when determining whether a review is helpful. Persuasion through the peripheral route requires less cognitive effort: readers focus on easily accessible information (e.g. who wrote the review). Persuasion through the central route requires more cognitive effort: readers focus on the content of the message itself.

Methodology
Data were collected from Amazon.com for a subset of 23 products spanning a variety of categories. For these products, the authors gathered the reviews and information about each reviewer. The final dataset included 15,059 online consumer reviews written by 1,796 reviewers. Review helpfulness was measured through the votes of customers who rated whether a review was helpful or not.

Results
The results show that a review’s helpfulness is affected by how inconsistent its rating is with the average rating for that product, by whether the review was written by a high-ranked reviewer, by the length of the review message and by the number of negative words it contains (a purely illustrative regression sketch follows the figure below). The latter finding is consistent with the negativity bias, which states that negative reviews tend to be more salient than positive ones (Tsekouras, lecture 2017).
Furthermore, the results show that customers assess the helpfulness of a review mainly through central cues when buying search goods and high-priced products, and rely more on peripheral cues when buying experience goods and low-priced products.

[Figure: conceptual framework and hypothesis confirmation]
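
To illustrate the kind of analysis behind these findings, here is a small regression sketch. The variable names, the synthetic data and the use of plain OLS are my assumptions; the paper’s actual estimation may differ:

```python
# Illustrative regression in the spirit of Baek et al. (2012): helpfulness
# explained by rating inconsistency, reviewer rank, review length and the
# number of negative words. Data are synthetic, only to make this runnable.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "rating_inconsistency": rng.uniform(0, 4, n),  # |review rating - product avg|
    "top_reviewer": rng.integers(0, 2, n),         # 1 if high-ranked reviewer
    "review_length": rng.integers(20, 2000, n),    # words in the review message
    "negative_words": rng.integers(0, 30, n),      # negative-word count
})
# Synthetic helpfulness share in [0, 1]; the coefficients are made up.
df["helpfulness"] = (
    0.6
    - 0.05 * df["rating_inconsistency"]
    + 0.10 * df["top_reviewer"]
    + 0.0001 * df["review_length"]
    + rng.normal(0, 0.05, n)
).clip(0, 1)

model = smf.ols(
    "helpfulness ~ rating_inconsistency + top_reviewer"
    " + review_length + negative_words",
    data=df,
).fit()
print(model.summary())
```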

So what does this imply?
The findings of this research yield several practical managerial implications for firms, which I consider the main strength of the research. However, some implications depend on the goal of the retailer, as I will elaborate below. The findings can help web designers and marketers shape their reviewing systems in such a way that review helpfulness is maximized. When more helpful reviews are written, the success of the service may increase, as this leads to more customers using the service and to increased sales (Chen et al., 2008; Chevalier and Mayzlin, 2006).

First, as high-ranked reviewers are shown to be more credible to readers, firms may want to ask and incentivise these reviewers to review their products more often. Amazon already does this by sending top reviewers free merchandise to review (Chow, 2013), which in my opinion has its pros and cons. On the one hand, these top reviewers may not take into account factors such as customer service, which I consider an important factor when evaluating whether or not to buy a product. On the other hand, the purchasing bias and under-reporting bias are mitigated, which may result in a more ‘true’ rating, as these biases normally skew the product rating distribution (Hu et al., 2009). However, this ‘true’ rating may then differ from the average rating, which in turn decreases, as found in the study, the perceived helpfulness of the review. Consequently, I think this issue could be a very interesting field for further research.

Furthermore, to increase review helpfulness, a distinction could be made between high-priced, low-priced, search and experience goods. For high-priced and search goods, the firm may want to encourage customers to write detailed messages, whereas for low-priced and experience goods, reviewer credibility and review rating should be emphasized more.

In addition, online retailers may face a trade-off between the perceived helpfulness and the positivity of a review. Some retailers encourage customers to write positive reviews; however, this undermines the perceived usefulness of the reviews, which in turn may decrease the number of customers using the retailer’s service. Therefore, in my opinion, in the long run it is more helpful to encourage customers to write honest reviews.

Finally, I would like to make a suggestion for improvement. As review helpfulness is measured only through the customers who voted on whether a review was helpful or not, the findings may be less generalizable to the customers who did not vote. Consequently, the researchers may want to conduct an experiment to increase generalizability.

References

Baek, H., Ahn, J., & Choi, Y. (2012). Helpfulness of online consumer reviews: Readers’ objectives and review cues. International Journal of Electronic Commerce, 17(2), 99-126.

Chen, P. Y., Dhanasobhon, S., & Smith, M. D. (2008). All reviews are not created equal: The disaggregate impact of reviews and reviewers at Amazon.com.

Chevalier, J. A., & Mayzlin, D. (2006). The effect of word of mouth on sales: Online book reviews. Journal of Marketing Research, 43(3), 345-354.

Hu, N., Zhang, J., & Pavlou, P. A. (2009). Overcoming the J-shaped distribution of product reviews. Communications of the ACM, 52(10), 144.

Tsekouras, D. (2 March 2017). Lecture, Customer Centric Digital Commerce: “Post-consumption Word of Mouth”.

NPR.org (2013). Top Reviewers On Amazon Get Tons Of Free Stuff. Available at: http://www.npr.org [Accessed 4 Mar. 2017].