
To Keep Or Not To Keep: Effects of Online Customer Reviews on Product Returns


By Madeleine van Spaendonck (365543ms)

In the US, the average return rate for products bought online is currently around 30% of purchases (The Economist, 2013). Most returns occur because of customers’ negative post-purchase product evaluations rather than product defects. One factor found to influence this is online customer reviews (OCRs).

This is what Minnema et al. (2016) investigated in their study “To Keep or Not to Keep: Effects of Online Customer Reviews on Product Returns”. The study uses a multi-year (2011-2013) dataset from a European online retailer that offers both electronics and furniture. The paper examines the impact of three OCR characteristics (valence, volume and variance) on return decisions (figure 1). The researchers evaluate the net effect of OCRs by looking at their influence on both purchase and return decisions.

[Figure 1]

Theory

The hypotheses examined are based on the ‘expectation disconfirmation mechanism’: post-purchase satisfaction results from the combination of the expectations a customer forms at the moment of purchase, the product’s actual performance, and the difference between the two. Negative expectation disconfirmation (performance falling short of expectations) decreases satisfaction, leading to a higher return probability. Higher expectation levels should therefore lead to higher purchase and return probabilities, while higher expectation uncertainty should lower both.
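The mechanism can be illustrated numerically. The sketch below is a toy model, not the authors' estimated model: disconfirmation is the gap between perceived performance and the expectation formed at purchase, and a logistic function maps negative disconfirmation to a higher return probability. All parameter values are illustrative assumptions.

```python
import math

def return_probability(expectation, performance, base=-2.0, sensitivity=1.5):
    """Toy logistic model: the further performance falls short of the
    expectation formed at purchase, the higher the return probability.
    `base` and `sensitivity` are made-up parameters, not estimates."""
    disconfirmation = performance - expectation  # negative = let down
    return 1.0 / (1.0 + math.exp(-(base - sensitivity * disconfirmation)))

# Overly positive reviews inflate expectations for the same product:
modest = return_probability(expectation=3.5, performance=3.5)
hyped = return_probability(expectation=4.8, performance=3.5)
assert hyped > modest  # inflated expectations raise return probability
```

The point of the sketch is the comparative static, not the numbers: holding performance fixed, raising the expectation raises the return probability.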

Main results

Figure 2 presents a summary of the results of the study.

[Figure 2]

A particularly counterintuitive insight is that overly positive review valence (current OCR valence above the product’s long-term average) leads not only to more sales but also to a higher return probability. A likely reason is that OCRs shape the product expectations customers form at the moment of purchase, raising purchase probability; however, expectations inflated by overly positive reviews may not be met. This negative expectation disconfirmation then leads to a higher return probability. Review volume and variance mostly affect purchase decisions and have little to no effect on product returns.

Strengths, Weaknesses and Suggested Improvements

While the majority of scholarly work in this field focuses on the effects of OCRs on product sales, this paper also addresses the poorly understood effects on product returns. Taking both into account is vital: predictions of OCR effects on retailer performance may be overly optimistic or pessimistic if only the effects on sales are considered. The study also shows that OCR effects extend beyond the moment of purchase and can affect the decision to return a product. However, the model does not incorporate other information sources available at the moment of purchase that affect return likelihood, such as product descriptions and pictures provided by the retailer. A comparative analysis could evaluate whether reviews or retailer-provided information have the stronger impact on returns.

Managerial Implications

The study highlights the importance of considering product returns when evaluating OCR effects, as overly positive reviews may hurt retailers’ financial performance: the additional product returns they trigger come with large reverse-logistics costs. To reduce negative expectation disconfirmation, retailers should provide information and tools (besides OCRs) that allow consumers to set the right expectations and check whether the product really meets their needs.

Sources:

Minnema, A., Bijmolt, T.H.A., Gensler, S., Wiesel, T. (2016). “To Keep or Not to Keep: Effects of Online Customer Reviews on Product Returns.” Journal of Retailing, 92(3), pp. 253–267.

The Economist. (2013). Return to Santa. December 21 (last accessed March 8, 2017), http://www.economist.com/news/business/21591874-e-commerce-firms-have-hard-core-costly-impossible-please-customers-return-santa

Source for cover photo:

Ministry Ideaz, (2016), How do I return a product I no longer want? [ONLINE]. Available at: http://support.ministryideaz.com/customer/portal/articles/1022650-how-do-i-return-a-product-i-no-longer-want- [Accessed 8 March 2017].

Helpfulness of online consumer reviews: Readers’ objectives and review cues.


Customers generally seek quality information about a product before purchasing it. The emergence of the internet has given them convenient access to a variety of sources for this information, such as consumer-generated ratings and reviews. These consumer-generated product evaluations are generally found on portal (e.g. google.com), retailer (e.g. Amazon), manufacturer (e.g. Nike) or product evaluation websites (e.g. yelp.com). They have strong effects on consumer persuasion, willingness to pay and trust (Tsekouras, lecture 2017). However, not all customer reviews have the same effect on purchase decisions, as some reviews are perceived as more helpful than others (Chen et al., 2008).

Building on this research, Baek et al. (2012) tried to determine which factors influence the perceived helpfulness of online reviews. In addition, they investigated which factors matter more depending on the purpose of reading a review.

Furthermore, the paper extends previous research by considering two routes through which readers process online reviews. Customers may take both peripheral and central cues into account when judging whether a review is helpful. Persuasion through the peripheral route requires less cognitive effort: readers focus on easily accessible information (e.g. who wrote the review). Persuasion through the central route requires more cognitive effort: customers focus on the content of the message.

Methodology
Data was collected from Amazon.com for a subset of 23 products from a variety of categories. For these products, the authors collected the reviews and information about the reviewers. The final dataset included 15,059 online consumer reviews written by 1,796 reviewers. Review helpfulness was measured through the customers who voted on whether a review was helpful or not.
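Amazon-style helpfulness is typically operationalized as the share of "helpful" votes among all votes cast on a review. A minimal sketch of that measure (an illustration, not the authors' code):

```python
def helpfulness_ratio(helpful_votes, total_votes):
    """Share of voters who found the review helpful.
    Reviews without votes carry no signal, so we return None for them."""
    if total_votes == 0:
        return None  # no votes: helpfulness is unobserved, not zero
    return helpful_votes / total_votes

assert helpfulness_ratio(8, 10) == 0.8
assert helpfulness_ratio(0, 0) is None
```

Treating zero-vote reviews as unobserved rather than as zero matters for the generalizability concern raised later in this post.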

Results
The results show that a review’s helpfulness is affected by how inconsistent its rating is with the average rating for the product, whether it is written by a high-ranked reviewer, the length of the review message, and the number of negative words in the message. The last finding is consistent with the negativity bias, which holds that negative reviews tend to be more salient than positive ones (Tsekouras, lecture 2017).
Furthermore, customers assess the helpfulness of a review mainly through central cues when buying search goods and high-priced products. In contrast, they rely more on peripheral cues when buying experience goods and low-priced products.
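Several of these cues can be approximated directly from raw review data. The sketch below uses simplified stand-ins for the paper's variables (e.g. a tiny hand-picked negative-word list rather than a full lexicon):

```python
# Toy lexicon; the paper would use a full sentiment dictionary.
NEGATIVE_WORDS = {"bad", "poor", "broken", "disappointing", "terrible"}

def review_features(text, rating, product_avg_rating):
    """Compute simplified versions of the cues linked to helpfulness:
    rating inconsistency with the product average, message length,
    and negative-word count."""
    words = text.lower().split()
    return {
        "rating_inconsistency": abs(rating - product_avg_rating),
        "length": len(words),
        "negative_words": sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words),
    }

f = review_features("Poor build quality, broken on arrival.",
                    rating=1, product_avg_rating=4.2)
# f["length"] == 6, f["negative_words"] == 2
```

A real replication would add the reviewer-rank (peripheral) cue from profile data; it is omitted here because it is metadata, not text.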

Conceptual framework and hypotheses confirmation

So what does this imply?
The findings of this research yield several practical managerial implications, which I consider the main strength of the paper, although some implications depend on the goal of the retailer, as I elaborate below. The findings may help web designers and marketers design their review systems so that review helpfulness is maximized. When more helpful reviews are written, the success of the service may increase, as it leads to more customers using the service and to increased sales (Chen et al., 2008; Chevalier and Mayzlin, 2006).

First, as high-ranked reviewers are shown to be more credible to readers, firms may want to ask and incentivise these reviewers to review their products more often. Amazon already does this by sending top reviewers free merchandise to review (Chow, 2013), which has its pros and cons in my opinion. On the one hand, these top reviewers may not take into account factors such as customer service, which I consider an important factor in deciding whether or not to buy a product. On the other hand, the purchasing bias and under-reporting bias are mitigated, which may result in a more ‘true’ rating, as these biases normally skew the product rating distribution (Hu et al., 2009). However, this ‘true’ rating may then differ from the average rating, which, as the study found, decreases the perceived helpfulness of the review. Consequently, I think this issue could be a very interesting field for further research.

Furthermore, to increase review helpfulness, a distinction between high-priced, low-priced, search and experience goods could be made. For example, for high-priced and search goods the firm may want to encourage customers to write detailed messages, whereas for low-priced and experience goods reviewer credibility and review rating should be emphasized more.

In addition, online retailers may face a trade-off between the perceived helpfulness and the positivity of a review. Some retailers encourage customers to write positive reviews; however, this undermines the perceived usefulness of the reviews, which in turn may decrease the number of customers using the retailer’s service. In my opinion, it would therefore be more helpful in the long run to encourage customers to write honest reviews.

Finally, I would like to suggest an improvement. As review helpfulness is measured only through the customers who voted on whether a review was helpful, the findings may be less generalizable to the customers who did not vote. The researchers may therefore want to conduct an experiment to increase generalizability.

References

Baek, H.; Ahn, J.; and Choi, Y. Helpfulness of online consumer reviews: Readers’ objectives and review cues. International Journal of Electronic Commerce, 17, 2 (2012), 99-126.

Chen, P. Y., Dhanasobhon, S., & Smith, M. D. (2008). All reviews are not created equal: The disaggregate impact of reviews and reviewers at Amazon.com.

Chevalier, J.A., and Mayzlin, D. The effect of word of mouth on sales: Online book reviews. Journal of Marketing Research, 43, 3 (2006), 345–354.

Hu, N., Zhang, J., & Pavlou, P. A. (2009). Overcoming the J-shaped distribution of product reviews. Communications of the ACM, 52(10), 144-147.

Tsekouras, D. (2 March 2017), Lecture Customer Centric Digital Commerce, “Post-consumption Worth of Mouth”.

NPR.org. (2013). Top Reviewers On Amazon Get Tons Of Free Stuff. [online] Available at: http://www.npr.org [Accessed 4 Mar. 2017].

 

Estimating the Helpfulness and Economic Impact of Product Reviews: Mining Text and Reviewer Characteristics


We have all been there: scrolling through all the reviews before buying something. You want to see all of this user-generated content because you are afraid of regretting the wrong choice (Tsekouras, 2017). At the same time, this information overload leaves consumers less satisfied, less confident and more confused (Park & Lee, 2009). You could look at the average rating of the product, but these ratings are often bimodally distributed and therefore less helpful (Hu et al., 2009). How can you feel confident that you have seen all the important reviews without having to read all of them?

This is what Ghose & Ipeirotis (2011) studied.

The authors analysed Amazon data over a period of 15 months to study the impact of reviews on product sales and perceived usefulness. They looked at audio and video players (144 products), digital cameras (109 products) and DVDs (158 products), together with their reviews.

The paper identified multiple features that affect product sales and helpfulness, by incorporating two streams of research. First, the information within the review is relevant. Second, reviewer attributes might influence consumer response.

What did they find?

An explanatory study found that the following factors are important:

[Figure: factors affecting helpfulness and sales]

Thus, perceived helpfulness does not necessarily lead to higher product sales.

They also built a predictive model, which showed the importance of reviewer-related, subjectivity and readability features in predicting the impact of reviews. Furthermore, the model’s predictions were less accurate for experience goods, such as DVDs, than for search goods, such as electronics.
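The flavor of such a model can be conveyed with standard-library stand-ins: a crude readability proxy, a crude subjectivity proxy, and a linear score combining them with a reviewer feature. The actual paper uses trained models and much richer NLP features; everything below, including the weights and the word list, is an illustrative assumption.

```python
SUBJECTIVE_WORDS = {"love", "hate", "great", "awful", "amazing", "terrible"}  # toy proxy

def readability(text):
    """Crude proxy: text with shorter average words reads more easily."""
    words = text.split()
    return 1.0 / (sum(len(w) for w in words) / len(words))

def subjectivity(text):
    """Share of words drawn from a toy subjective lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in SUBJECTIVE_WORDS for w in words) / len(words)

def predicted_helpfulness(text, reviewer_score, w=(0.5, 2.0, -1.0)):
    """Linear score over reviewer, readability and subjectivity features.
    Weights are made up for illustration, not estimated from data."""
    w_rev, w_read, w_subj = w
    return (w_rev * reviewer_score
            + w_read * readability(text)
            + w_subj * subjectivity(text))

factual = predicted_helpfulness("The lens is sharp and the battery lasts two days.",
                                reviewer_score=0.9)
gushing = predicted_helpfulness("I love love love this amazing camera!!!",
                                reviewer_score=0.2)
assert factual > gushing  # objective detail scores higher than a subjective rant
```

The design point survives the simplification: a new review can be scored the moment it is posted, with no helpfulness votes required.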

What are the managerial implications?

Amazon currently uses ‘spotlight reviews’, which display the most important reviews. However, a review needs enough votes before it can become a ‘spotlight review’. The predictive model can overcome this limitation, since it can immediately identify the reviews that are expected to be helpful to consumers and display them first.

The model is also useful for manufacturers, since they can adjust future versions of the product, or their marketing strategy, based on the reviews that affected sales most.

The main strength of this paper is that it has relevant managerial implications for both consumers and manufacturers, since it studied both the effect on sales and on helpfulness for consumers.

Would the findings be similar on different websites?

The findings will probably be similar for other electronics retailers, so Coolblue and MediaMarkt could benefit. Book reviews on Bol.com, on the other hand, are not expected to benefit as much from the model, since books are experience goods, similar to DVDs.

The implications for clothing retailers are less straightforward. I expect these retailers will not benefit as much from the model, since clothing websites are often not overloaded with reviews, so there is less need to filter the information.

References

Ghose, A., & Ipeirotis, P. G. (2011). Estimating the helpfulness and economic impact of product reviews: Mining text and reviewer characteristics. IEEE Transactions on Knowledge and Data Engineering, 23(10), 1498-1512.

Hu, N., Zhang, J. and Pavlou, P.A. (2009). Overcoming the J-shaped distribution of product reviews. Communications of the ACM, 52(10), pp.144-147.

Park, D. H., & Lee, J. (2009). eWOM overload and its effect on consumer behavioral intention depending on consumer involvement. Electronic Commerce Research and Applications, 7(4), 386-398.

Tsekouras, D. (2017). Customer centric digital commerce: Personalization & Product Recommendations [PowerPoint slide]. Retrieved from Blackboard.

Feature image retrieved from: Enzer, J. (2016, August 17). How to reward product reviews and supercharge your e-commerce business. Retrieved from: http://blog.swellrewards.com/2016/08/how-to-reward-product-reviews-and-supercharge-your-e-commerce-business/

Competing for attention: an empirical study of online reviewers’ strategic behaviour


Overview

This study examines how online reviewers strategically choose which products to review and what rating to post in order to gain attention and build their reputation. These days, top online reviewers can gain substantial financial benefits when they attract a lot of consumer attention, and they can monetize the attention and reputation they build up. The study focuses on book reviews from Barnes & Noble and Amazon, and in particular on the effect of a reviewer ranking system: a mechanism that lets top reviewers consistently gain future attention because it builds up reviewers’ online reputation. Barnes & Noble, which has no reviewer ranking system, does not let reviewers build up such a reputation. The research proposes that reviewers behave more strategically to gain attention and enhance reputation when there is a mechanism that quantifies their online reputation (Shen et al., 2015).

The researchers found that reviewers on Amazon are sensitive to competition between existing reviews and thus tend to avoid crowded review segments. Reviewers on Barnes & Noble, by contrast, clearly do not respond to competition. Again, it is important to note that Amazon has a ranking mechanism, whereas Barnes & Noble does not. The study shows that ratings on Amazon are more differentiated than on Barnes & Noble, due to the more intense competition for attention on Amazon. Overall, the researchers conclude that there is more strategic behaviour among reviewers on Amazon than on Barnes & Noble (Shen et al., 2015).

Results

The results show that online reviewers certainly do not select books to review at random. Instead, they tend to choose popular books. When a ranking system is present, reviewers avoid products with a crowded review segment, which means they try to reduce competition for attention. Finally, reviewers post more differentiated ratings when a reviewer ranking system enhances the competition for attention between reviewers (Shen et al., 2015).
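The "differentiated ratings" finding suggests a simple platform-level measure: how far each new rating deviates from the running average of the ratings posted before it. The sketch below illustrates the idea with made-up rating streams; it is not the authors' estimation procedure.

```python
def mean_deviation_from_prior(ratings):
    """Average absolute gap between each rating and the mean of the
    ratings posted before it; higher = more differentiated reviewing."""
    gaps = []
    for i in range(1, len(ratings)):
        prior_mean = sum(ratings[:i]) / i
        gaps.append(abs(ratings[i] - prior_mean))
    return sum(gaps) / len(gaps)

# Hypothetical rating streams for one book on each platform:
with_ranking = [5, 1, 5, 2, 4]     # reviewers differentiating for attention
without_ranking = [4, 4, 5, 4, 4]  # herding around the consensus
assert mean_deviation_from_prior(with_ranking) > mean_deviation_from_prior(without_ranking)
```

Comparing this statistic across platforms with and without a ranking system is one concrete way to quantify "more strategic behaviour".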

The study has some evident strengths. First, it has clear managerial relevance: companies can improve the design of their online review systems by better understanding the strategic behaviour of reviewers. Second, the researchers randomly selected 500 electronic products on Amazon (TVs, laptops, tablets, etc.) for a cross-category comparison, in order to generalize the results from books to other product categories (Shen et al., 2015).

A potential weakness of this study could have been sample selection bias (reviewers may behave differently when reviewing different types of products). To rule this out, the researchers also collected reviews from the electronics category. Those results are consistent with the earlier results for Amazon books, showing that the main findings are robust (Shen et al., 2015).

I think the managerial relevance is the most important part of this study. Improving the design of an online review system will most likely give online retailers strategic advantages. However, retailers could also pay top reviewers to promote products regardless of product quality; this transparency question is, in my opinion, a point for discussion.

References

Bremner, S. (2016). How Do Amazon Reviews and Rankings REALLY Work?. [online] Steve Bremner. Available at: http://stevebremner.com/2016/05/how-do-amazon-reviews-and-rankings-really-work/ [Accessed 16 Feb. 2017]. (cover image)

Shen, W., Hu, Y.J., Ulmer, J.R. (2015) ‘Competing for attention: an empirical study of online reviewers’ strategic behaviour’, MIS Quarterly 39(3): 683-696.

 

Knozen: a rating system for personalities


Knozen is an app that started as a rating and review system for colleagues. Now you can rate everyone you know anonymously. Knozen asks funny questions about someone’s personality, but also about your own, such as: “Denise is more likely to leave work early for a date – true or false”. Each profile shows 12 characteristics on a rating scale from 1 to 10, and the answers to the quiz questions influence the score on each characteristic. The resulting personality chart gives you an idea of someone’s personality.


There are two kinds of people, which one are you?


Do you prefer Coke or Pepsi? Do you eat your burger with cheese or without? And what about coffee, Americano or espresso? Zomato ensures that every meal, for users with all kinds of preferences, is a great experience.

Zomato is an India-based restaurant directory startup that provides detailed information about nearby restaurants, including scanned menus, as well as users’ reviews and photos of their gastronomic experiences. Zomato also includes real-time information about restaurants and lets users book tables through its iOS and Android apps.

The image below summarizes Zomato’s key features:

Source: zomato.com/portugal

Zomato has 1,398,900 listed restaurants in 22 countries. With the recent acquisition of Urbanspoon, Zomato will break into the US market, competing against services like Foursquare and Yelp.

The business model is quite simple: Zomato hires people to visit restaurants and send the data to the team, including up-to-date information on new openings and scanned copies of menus. Users can then share photos of their dishes and evaluate restaurants to help other users decide where to eat.

The detailed information available for each restaurant is thus the result of the combined inputs of Zomato’s team and its users.

Layout of the information available for each restaurant (example of a restaurant in Lisbon). Source: zomato.com/portugal

On Zomato, a user’s evaluation takes the form of both a rating on a 5-point scale and a review. Given that consumers with extreme opinions (very satisfied or very dissatisfied) are more likely to rate (Li and Hitt, 2008), most restaurants score either close to 1 or higher than 4, as the image below exemplifies.
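The self-selection effect Li and Hitt describe can be shown in a few lines: if only diners with extreme opinions bother to rate, the observed distribution turns bimodal even when underlying opinions are spread evenly across the scale. A toy simulation (the participation thresholds are illustrative assumptions):

```python
def observed_ratings(opinions, low=2.0, high=4.0):
    """Keep only diners whose opinion (on a 1-5 scale) is extreme
    enough that they bother to post a rating."""
    return [o for o in opinions if o <= low or o >= high]

# Underlying opinions spread evenly across the 1-5 scale:
opinions = [1 + 4 * i / 99 for i in range(100)]
posted = observed_ratings(opinions)

# Middle-of-the-road opinions (between 2 and 4) never get posted,
# so the observed distribution has mass only at the two extremes:
assert all(o <= 2.0 or o >= 4.0 for o in posted)
assert len(posted) < len(opinions)
```

The posted ratings are real opinions, yet their distribution misrepresents the full population, which is why the average rating alone can mislead.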

Results of restaurants in Lisbon. Source: zomato.com/portugal

Product ratings are crucial for Zomato: they are an integral element of online businesses, especially for experience goods (Tsekouras, 2015), and they reflect product quality (Hu et al., 2009). Also, consumers tend to trust the opinions of other customers more than information provided by the vendors themselves (Chevalier and Mayzlin, 2006), which is why a high number of reviews is a key success factor for Zomato.

Social surroundings are therefore of crucial importance: the success of Zomato relies on the degree of interaction between users, through comments, ratings and the creation of a community of “foodies”, and on the degree to which network effects take place, i.e. the service becomes more valuable as more people use it (Katz and Shapiro, 1994). According to Grönroos and Voima (2013), customers’ well-being increases through this process, as more user feedback becomes available for each restaurant.

As a startup, Zomato relies on eWOM to attract new users and generate brand awareness. Unlike traditional WOM, eWOM has much broader effects, in part because no pre-existing connection between sender and receiver is needed. eWOM applies to Zomato because it operates in an online context, whereas traditional WOM typically happens face-to-face (King et al., 2014).

Zomato provides value for consumers while consumers also create value for each other through their evaluations and photos. This reflects a finding of Saarijärvi et al. (2013), who state that it is important to evaluate what kind of value is co-created and for whom, since value can mean different things to different actors in the co-creation process.

Zomato also generates great value for restaurants. In fact, it is one of the most cost-effective, high-impact marketing platforms for dining establishments.

Hungry?

Check out the best place for you at https://www.zomato.com/!

REFERENCES

https://www.zomato.com/portugal

http://www.forbes.com/sites/anuraghunathan/2015/03/24/indian-restaurant-search-service-zomato-is-expanding-across-the-globe/

http://articles.economictimes.indiatimes.com/2015-03-17/news/60211899_1_foodpanda-countries-bank-account

http://blogs.ft.com/beyond-brics/2014/10/20/zomatos-special-sauce-coming-to-a-server-near-you/

Chevalier, J. A. and D. Mayzlin (2006). “The Effect of Word of Mouth on Sales: Online Book Reviews.” Journal of Marketing Research 43(3), 345–354.

Grönroos, C., & Voima, P. (2013). Critical service logic: making sense of value creation and co-creation. Journal of the Academy of Marketing Science, 41(2), 133-150

Katz, M.L. & Shapiro, C. (1994). Systems Competition and Network Effects, The Journal of Economic Perspectives, 8(2), 93-115.

King, R.A., Racherla, P., & Bush, V.D. (2014). What We Know and Don’t Know About Online Word-of-Mouth: A Review and Synthesis of the Literature. Journal of Interactive Marketing, 28(3), 167-183

Li, X., & Hitt, L. M. (2008). Self-selection and information role of online product reviews. Information Systems Research, 19(4), 456-474.

Saarijärvi, H., Kannan, P.K., & Kuusela, H. (2013). Value co-creation: theoretical approaches and practical implications. European Business Review, 25(1), 6-19.

Tsekouras, D. (2015) Variations On A Rating Scale: The Effect On Extreme Response Tendency In Product Ratings, working paper.

 

What Makes a Helpful Online Review?


We have all been there: browsing too long on Tripadvisor.com or Amazon.com, trying to find that one review that could be the decisive factor in buying (or not buying) a specific product. But what exactly are we looking for? What makes one review more helpful than another? Mudambi and Schuff (2010) try to answer these questions by examining almost 1,600 reviews on Amazon.com across several products and product categories.

When browsing online, individuals are presented with an increasing number of customer reviews; these reviews have been shown to increase buyers’ trust, aid customer decision making and increase product sales (Mudambi, Schuff & Zhang, 2014). In addition, customer reviews can attract potential visitors and increase the amount spent on a website. Hence, retail sites with more helpful reviews hold greater potential to offer value to consumers, to sellers and to the platform hosting the reviews.

In order to increase the helpfulness of customer reviews, several websites such as Amazon.com and Yelp.nl ask the question “was this review helpful to you?” and list more helpful reviews more prominently on the product information page.  Mudambi and Schuff (2010: 186) define a helpful review as a “peer-generated product evaluation that facilitates the consumer’s purchase decision process”.

The article distinguishes between two types of goods when shopping for products online: search goods and experience goods. Search goods possess attributes that can be measured objectively, whereas the attributes of experience goods are not as easily evaluated objectively but rather depend on taste. Examples of search goods are printers and cameras; examples of experience goods are CDs and food products.

Past research showed conflicting findings as to whether extreme ratings (very negative or very positive) are more helpful than moderate ones; some argue that extreme ratings are more influential, whereas others argue that moderate reviews are more credible. Mudambi and Schuff (2010) argue that taste plays a large role with experience goods, as consumers rate them quite subjectively; hence, consumers would value moderate ratings of experience goods more, as these could represent a more objective assessment (H1).

Next, Mudambi and Schuff (2010) scrutinize the depth of customer reviews. Since longer reviews often include more product details, and more details about the context of use, the authors hypothesize that review depth has a positive impact on the helpfulness of a review (H2). Nevertheless, review depth may not be equally important for all products. Reviews for experience goods often include unrelated comments, or comments so subjective that they are not interesting to the reader; for example, movie reviews often contain elaborate opinions on actors or actresses that do not matter to the reader. Reviews of search goods, on the other hand, are often presented in a fact-based manner, as their attributes can be objectively measured. As a result, it is argued that review depth has a greater positive effect on the helpfulness of a review for search goods than for experience goods (H3).
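The three hypotheses can be read as a moderated scoring rule. The sketch below encodes them directly; the weights are illustrative assumptions, not the paper's regression estimates, and `word_count` stands in for review depth.

```python
def expected_helpfulness(rating, word_count, product_type):
    """Toy encoding of H1-H3: rating extremity (distance from the
    scale midpoint 3) and review depth interact with product type."""
    extremity = abs(rating - 3)          # 0 = moderate, 2 = extreme
    depth = min(word_count / 100, 1.0)   # capped depth proxy
    if product_type == "search":
        return 0.6 * depth               # H2 + H3: depth matters most
    # Experience good: moderate ratings valued, depth helps less (H1, H3).
    return 0.3 * depth - 0.2 * extremity

# H1: for an experience good, a moderate rating beats an extreme one.
assert expected_helpfulness(3, 80, "experience") > expected_helpfulness(5, 80, "experience")

# H3: extra depth pays off more for search goods than for experience goods.
assert (expected_helpfulness(4, 200, "search") - expected_helpfulness(4, 20, "search")
        > expected_helpfulness(4, 200, "experience") - expected_helpfulness(4, 20, "experience"))
```

The value of writing the hypotheses this way is that each one becomes a checkable inequality rather than a verbal claim.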

By evaluating almost 1,600 reviews (spread over six products: three experience goods and three search goods) and excluding those that did not receive any helpfulness votes, the researchers were able to confirm all three hypotheses. The article teaches us that there is no one-size-fits-all answer to what makes a review helpful: extreme ratings are less helpful for experience goods, whereas search goods benefit from in-depth reviews.


Mudambi, S. & Schuff, D. (2010). What Makes a Helpful Online Review? A Study of Customer Reviews on Amazon.com. MIS Quarterly, Vol 34 (1), pp 185-200.

Mudambi, S., Schuff, D. & Zhang, Z. (2014). Why Aren’t the Stars Aligned? An Analysis of Online Review Content and Star Ratings. IEEE Computer Science, 3139 -3147.

Do Movie Reviews Affect the Box Office Revenues?


The Internet has changed our way of living. It has become a huge part of our lives, one we simply cannot live without, and we rely on it in almost every aspect, including when we seek information. This also applies when deciding what movies to watch: before going to the cinema, many people first check a movie’s online reviews. These are online user reviews, a form of electronic word-of-mouth (eWOM). According to Duan, Gu, and Whinston (2008), eWOM influences consumer purchase behaviour while also being the outcome of consumer purchases. But how do these online user reviews actually impact offline purchases?

There are three measures of online user reviews: volume (Liu 2006, Duan et al. 2008), valence or average rating (Liu 2006, Duan et al. 2008, Chevalier and Mayzlin 2006), and variance (Godes and Mayzlin 2004). Chintagunta, Gopinath and Venkataraman (2010) measured the impact of these three on the designated market area (DMA)-level local box office performance of movies in the United States. What is different about their study is that they used local geographic data instead of the national-level data used by previous studies, and that the ‘when’ and ‘where’ of a movie’s release are taken into account. They therefore measured user reviews at the moment a movie was released in a market, using reviews written by users in markets where the movie had been released earlier. The impact was measured by combining daily box office ticket sales for 148 movies released from November 2003 to February 2005 with user ratings from the Yahoo! Movies website. They found that overall movie revenues are greatly affected by the opening-day gross. As the analysis was conducted at the DMA level, movie and market fixed effects were included, accounting for differences such as movie genre and market size, and other variables such as advertising level and number of theaters were also controlled for.

In their first study, using the local data, they found that the average user rating influenced box office performance the most. This finding is interesting, since most previous studies found that the volume of reviews matters most for box office revenues. When national-level data was used, however, they arrived at the same results as previous studies. In the last part of the study, they attempted to explain this difference by using national-level models with market-level controls; this gave the same result as the first study: the average user rating has the greatest impact on box office revenues. They concluded that it is important to determine where a movie was playing, in “new markets” or “old markets”, and that only then can the “true” effect of user ratings be measured.
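The three review measures the literature works with are straightforward to compute from a list of ratings. A minimal standard-library sketch:

```python
from statistics import mean, pvariance

def review_metrics(ratings):
    """Valence (average rating), volume (count) and variance of the
    user ratings posted for one movie."""
    return {
        "valence": mean(ratings),     # average rating
        "volume": len(ratings),       # how many people rated
        "variance": pvariance(ratings),  # how much raters disagree
    }

m = review_metrics([5, 4, 4, 2, 5])
assert m["volume"] == 5
assert m["valence"] == 4.0
```

The study's contribution is not in computing these measures but in where they are computed: over reviews from markets where the movie was already playing, at the moment it opens in a new market.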

As for us moviegoers, the paper’s finding is that we are mostly affected by the average rating when deciding what movies to watch. Yet how many people rated the movie (volume) also matters: I would sooner trust a slightly lower rating with a much higher volume than a higher rating with a much lower volume. In other words, volume and variance make a rating or review more trustworthy. Which one would you prefer?

[Screenshots: two movie ratings with different averages and vote counts. Source: IMDB]

References

Chevalier, J. A., and Mayzlin, D. (2006). 'The effect of word of mouth on sales: Online book reviews.' Journal of Marketing Research, 43(3), 345-354.

Chintagunta, P. K., Gopinath, S., and Venkataraman, S. (2010). 'The effects of online user reviews on movie box office performance: Accounting for sequential rollout and aggregation across local markets.' Marketing Science, 29(5), 944-957.

Duan, W., Gu, B., and Whinston, A. B. (2008). 'The dynamics of online word-of-mouth and product sales: An empirical investigation of the movie industry.' Journal of Retailing, 84(2), 233-242.

Godes, D., and Mayzlin, D. (2004). 'Using online conversations to study word-of-mouth communication.' Marketing Science, 23(4), 545-560.

Liu, Y. (2006). 'Word of mouth for movies: Its dynamics and impact on box office revenue.' Journal of Marketing, 70(3), 74-89.

Reviews & Ratings: Consumer online-posting behavior


“Unfiltered feedback from customers is a positive even when it’s negative. A bad or so-so online review can actually help you because it gives customers certainty that the opinion is unbiased.” 

– Source: Gail Goodman, Entrepreneur, 2011

Social media gives customers the ultimate platform to broadcast their personal opinions about purchased products and services, allowing word-of-mouth (WOM) and consumer reviews to travel fast. Nearly 63% of consumers are more likely to buy from a website that features online consumer reviews (iPerceptions, 2011). Online consumer reviews are trusted almost 12 times more than product descriptions written by the manufacturers themselves (eMarketer, February 2010). Companies that provide space for reviews on their websites see sales increase by nearly 18% (Reevoo). The video below shows how customers can assess online consumer reviews and recommendations while researching and shopping online.

Youtube: “Online Reviews and Recommendations”

Chen et al. (2011) examined the interactions between consumer posting behavior and marketing variables such as product price and quality. An important part of the research concerned how these interactions evolve as the Internet and consumer review websites gain widespread adoption. The study's data on new automobile models comprised two samples, gathered in 2001 and in 2008. These years were deemed appropriate because buying an automobile involves thorough search before a significant financial commitment, and because Internet use among prospective car buyers grew substantially over that period. According to Morton, Zettelmeyer and Silva-Risso, 54% of new-automobile buyers used the Internet when buying a car in 2001; according to a report by eMarketer, by 2008 this share had risen to nearly 80%. The study included prominent automobile review websites covering the distinct segments of the market, from leading car enthusiasts (experts) to amateur consumers.


Motivations for Posting Online Consumer Reviews.

Gaining social approval and self-approval, and signaling a level of expertise or social rank by demonstrating one's superb purchase decisions, are all psychological reasons why consumers post online reviews. Posting can also be a way to express satisfaction or dissatisfaction. Different types of customers are driven by different motivations. The earlier group of Internet users in the study (2001) differs from the second group (2008) in its reasons for posting online. The consumers in the early group (a.k.a. experts, early adopters of innovation) have high levels of product expertise, making them more likely to seek status and engage in conspicuous consumption. Demonstrating know-how and social rank was particularly significant in the Internet's early years (2001), as these users tended to have high incomes and to be relatively price insensitive.

Conversely, the Internet has advanced and developed over this period and has come to appeal to a much broader population of consumers. Where in 2001 only a select group of Internet users would post reviews, Internet usage and online consumer review sites have since become mainstream. Late adopters (2008) tend to be more pragmatic and price focused than early adopters.

Marketing variables – effect on consumer online-posting behavior

Marketing variables do influence consumer online-posting behavior. In the early stage of the Internet (2001), product price had a negative relationship, and a premium brand image a positive relationship, with the number of online consumer postings; product quality, by contrast, had a U-shaped relationship with the number of postings. These relationships are likely driven by early adopters of the Internet.

Today the Internet reaches mass consumers, who are more inclined to be price sensitive and value driven.

Though certain marketing variables can lead large numbers of consumers to engage in online posting, these consumers do not automatically give higher ratings. The study shows that mass consumers tend to post online reviews at both higher and lower purchase price levels, in contrast to the early stage of Internet usage, when reviews were posted primarily at lower price levels. As the Internet has been adopted by mass consumers, expressing that one likes or dislikes a product or service has become a more important motivation for sharing reviews than demonstrating expertise or social status.

In conclusion, this research showed that the connections between marketing variables and consumer online-posting behavior differ between the early and mature phases of Internet usage. High prices increase overall consumer review ratings, which may be good news for a firm's pricing decisions. The authors found that the search for status is a core driver of consumer-review behavior, predominantly in the early Internet stage. In markets where quality is difficult and costly to assess and where heterogeneous tastes matter, customers engage in extensive decision-making. These conditions make it more likely that consumers will seek external opinions online before deciding what to purchase.

References:

Chen, Y., Fay, S., and Wang, Q. (2011). 'The role of marketing in social media: How online consumer reviews evolve.' Journal of Interactive Marketing, 25(2), 85-94.

Charlton, G. (2012) “Ecommerce Consumer Reviews: Why You Need Them and How to Use Them.” Econsultancy.com

Featured image: http://splumber.com/wp-content/uploads/2014/12/Plumbing-Online-reviews-1030×574.jpg

GemShare – The trustworthy recommendation agent


Finding the best restaurant in town is not easy. Therefore a vast number of applications and websites provide services to facilitate the search. The “online urban guide” and business review site Yelp is the most popular among them. It uses automated software to recommend the most helpful and reliable reviews for its users and to help them connect with local businesses. Like most other recommendation websites, it combines numerical ratings with textual reviews. Yelp contains over 57 million local reviews and attracts around 130 million users monthly.

Companies are aware of the power of word-of-mouth. Online user reviews have become an increasingly important source of information for consumers. However, when it comes to more personal local services, such as finding a trustworthy craftsman, a lawyer, or the most competent physiotherapist, online recommendation websites like Yelp are only used as a last resort. "People don't go to Yelp for doctors or lawyers because of trust issues," says Mohit, the founder of GemShare. "Positive reviews from strangers don't guarantee that you, too, will value what is likely a very personalized and intimate experience." Besides the relevance of taste and trust for these services, people are also aware of fraud within online recommendation systems. Especially for local services that are not used by enough people to obtain a wisdom-of-the-crowd effect, reviews and ratings can easily be manipulated through companies rating themselves.

The alternative to the time-consuming and sometimes untrustworthy use of common online recommendation platforms is to ask one's own personal network for advice, but even with all our social networks and technical devices, this approach can be time-consuming and frustrating too.


GemShare, launched in April 2014, is a recommendation platform and application that focuses on trust and personal recommendations to solve this issue. "We have several members who have said two thumbs up from a friend is worth more than 40-star reviews," says Mohit. Users create their own trusted network of friends and like-minded people via Facebook, Gmail, or phone contacts, for the specific purpose of finding out where to find the best service.

Continue reading GemShare – The trustworthy recommendation agent

Can you really rely on online product reviews?


Product reviews on online platforms are growing in popularity [1,2]. Platforms like Amazon, Google, and the App Store use product reviews to show which products other consumers had the best experience with. Most of these product reviews are extremely positive [3], but does that mean all products are extremely good and that there are no moderate products on the online market? Let's try searching Amazon for three random product reviews from books, video games, and sports. The results are shown in Table 1.

[Table 1]

As can be seen from the table, two of these random reviews are extremely positive (the book and the sports watch) and one is extremely negative (the video game). In an experiment by Hu et al. (2009), customers were asked to rate a music CD on a 5-star scale. The ratings followed an almost normal distribution, which is what one would expect if every buyer of the product rated it. Most reviews on Amazon (Table 1), however, show a so-called J-shaped distribution rather than the outcome of this experiment. What could cause this difference?
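The J-shape is easy to reproduce in a toy simulation: if underlying satisfaction is bell-shaped but customers with extreme experiences are far more likely to write a review ("brag or moan"), the reported histogram piles up at 1 and 5 stars with a dip in the middle. The parameters below are illustrative assumptions of mine, not Hu et al.'s actual model:

```python
import random
from collections import Counter

random.seed(42)

def simulate_reported_ratings(n=100_000):
    """Bell-shaped experiences + extreme-biased reporting -> J-shaped histogram."""
    report_prob = {1: 0.8, 2: 0.3, 3: 0.1, 4: 0.3, 5: 0.8}
    counts = Counter()
    for _ in range(n):
        experience = random.gauss(3.2, 1.3)        # true post-purchase satisfaction
        stars = min(5, max(1, round(experience)))  # mapped to a 1-5 star scale
        if random.random() < report_prob[stars]:   # extremes are voiced far more often
            counts[stars] += 1
    return counts

hist = simulate_reported_ratings()
print([hist[s] for s in range(1, 6)])  # counts dip in the middle: a J-shape
```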

The first explanation is the purchasing bias, which states that customers with higher product valuation are more likely to purchase the product than customers with a low product valuation. Continue reading Can you really rely on online product reviews?

Selling your products to Justin Bieber? No way!


With a total number of social media fans across different platforms (Twitter: 51 million followers (1), Facebook: 66 million likes (2), Instagram: 15 million followers (3)) exceeding the population of Japan (currently around 127 million inhabitants), Justin Bieber is arguably the most popular person on the planet. This popularity comes with a lot of perks; the best, according to regular human beings, are the endorsement deals the Canadian superstar signs on a regular basis. One of these deals was to design his personal nail polish, called 'One Less Lonely Girl', worth around $12,500,000 (4).

Since almost all major brands engage in this behavior, celebrity endorsement deals must give the brand something in return. However, not every company has a spare $163.75 million in cash to endorse athletes (or other celebrities) the way Nike does (5). So what can smaller companies do to get the same exposure as these large global brands? Because they simply cannot afford to sponsor today's superstars, it seems they can only hope and pray that somebody like Justin Bieber walks into their store and buys their product. If he then writes an objective review of the product on one of his social media platforms, the owner of the smaller company should, it seems, be able to retire early.

Just imagine that Justin Bieber does write objective product reviews on his social media platforms. Continue reading Selling your products to Justin Bieber? No way!