Tag Archives: Product reviews

How Trending Status and Online Ratings Affect Prices of Homogeneous Products


The Internet and Word-of-Mouth (WOM)

Ever since the inception of the Internet, consumers have benefited from extensive opportunities to share their evaluations of products online. Most e-commerce platforms allow consumers to review products, and an increasing number of opinion platforms have been introduced that offer online consumer ratings and reviews. Furthermore, most online retailers now list and sell trending products, defined as products that large groups of individuals are currently purchasing or discussing (Kocas and Akkan, 2016). In their article “How trending status and online ratings affect prices of homogeneous products”, Kocas and Akkan explore the pricing implications of these reviews and this trending status. This leads to the following research questions:

RQ1: How do standardized average prices vary with product popularity (measured by trending status)?

RQ2: When controlled for popularity, how do standardized average prices vary with average consumer ratings?

Related Theory

Research in marketing and economics has shown that it is profitable for retailers to sell popular products at a discount, as advertising the low price is an effective and cheap way to inform consumers of the extra surplus they could gain by purchasing these products (Elberse, 2008). In the present study, trending is considered both an indicator of product popularity and a costless form of advertising – trending products signal desirability and potential positive surplus to consumers (Hosken and Reiffen, 2004). Hence, one can assume that retailers price trending products lower, as the resulting increase in demand likely more than compensates for the decrease in marginal revenue per item sold.

Furthermore, several studies have shown that positive ratings and reviews have a positive effect on sales (Baek et al., 2012). Like trending status, high ratings can act as a signal of desirability. Hence, one can reasonably assume that retailers should also price highly rated products lower, for the same reason.

Formally stated,

H1: Assuming retailers randomize prices of products independently, the average and minimum profit-maximizing prices for trending products are lower than those for non-trending products, given identical average consumer ratings.

H2: The average and minimum profit-maximizing prices for the product with the higher average consumer rating are lower than those for the product with the lower average consumer rating, given identical trending status.

Results

This study analyses data gathered from 24 of the 28 categories of books available on Amazon.com from May 25 to September 13, 2011, covering a sample of 466,190 books. Both hypotheses are supported, showing that a trending product should be priced lower than other products in order to exploit the higher number of browsers these trending items attract. Similarly, highly rated products lead to a higher conversion rate (from browsing to purchasing) and hence warrant lower prices.
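
The comparison behind RQ1 can be illustrated with a toy computation (hypothetical prices, not the paper's data): standardize each price within its category as a z-score, then compare the standardized averages of trending and non-trending items.

```python
from statistics import mean, stdev

# Hypothetical prices for one book category (invented for illustration).
prices = {
    "trending":     [11.99, 12.49, 10.99, 11.49],
    "non_trending": [13.99, 14.49, 12.99, 15.49],
}

# Standardize each price against the whole category (z-score), so that
# prices become comparable across categories with different price levels.
all_prices = [p for group in prices.values() for p in group]
mu, sigma = mean(all_prices), stdev(all_prices)
standardized = {
    status: [(p - mu) / sigma for p in group]
    for status, group in prices.items()
}

avg_std = {status: mean(z) for status, z in standardized.items()}
# Consistent with H1, the trending group's standardized average is lower here.
print(avg_std["trending"] < avg_std["non_trending"])  # True for this toy data
```

The z-scoring step is what lets the study pool books from two dozen categories into one comparison.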

Strengths & Weaknesses

Whereas several studies have examined the impact of viral characteristics of products on consumer behaviour and pricing policies, this study is the first to empirically examine the influence of trending status on online pricing in a field experiment with a large dataset. Similarly, whereas several studies have examined the impact of online reviews on consumer behaviour, no prior work has examined how online reviews and ratings affect prices of homogeneous goods. A strong point of this paper is that it acts on these two gaps to provide novel findings, and tangible, actionable insights for practitioners.

Another strength is that the paper provides a detailed methodology, complemented by an appendix as well as a thorough explanation of the economic foundations behind the theory (including formulas). This level of detail increases the academic relevance of the paper and allows other researchers to easily replicate the experiments, hence facilitating continued research on the topic.

One weakness of this study is that it examines only one product category – books. Several studies (e.g. Abdullah-Al-Mamun and Robel, 2014) have shown that price sensitivity varies from one product category to another. Similarly, product reviews are generally more important for some types of products than for others. For a product such as a microwave, personal taste doesn’t really matter, so one could expect product reviews to be more important, as they provide an objective evaluation. For a product such as a science-fiction book, however, personal taste is important, so the influence of product reviews is likely lower. It would therefore be beneficial to replicate this study while taking category- and product-specific features into account as predictors of prices. This could easily be done by repeating the experiment with more product categories on Amazon, and would validate the robustness of the study’s findings across product categories.

A second weakness is that the paper examines the impact of online ratings using only single-dimensional rating schemes. Online platforms display reviews in a variety of formats, and many provide separate ratings for different product attributes. Research has shown that multi-dimensional and single-dimensional rating schemes on online review platforms have different impacts on consumers (Tunc et al., 2017). Similarly, this study only looks at the ratings, not at the content of the reviews, even though studies have shown that the latter can influence consumer behaviour. Both of these factors can influence the conversion rate from browser to buyer (Mudambi and Schuff, 2010) and thus the profitability of retailers. Hence, it would be interesting to replicate the present research in the context of multi-dimensional rating schemes, taking the actual content of online reviews into account.

Implications

We have seen that there are significant advantages to demand-based pricing for popular products with a relatively high market share. Online retailers should therefore monitor signs of trending, as these act as a positive desirability signal that increases demand from price-comparing consumers. By responding to trending signs and adjusting their prices, retailers can optimise their profits. Nevertheless, managers should be cautious with the research findings and conduct further experiments before applying them to products other than books. Finally, managers should be careful about the pace at which they adjust their prices – popularity status can change extremely quickly, but consumers will not react well to frequent price changes.

References

Abdullah-Al-Mamun, M. K. R., & Robel, S. D. (2014). A Critical Review of Consumers’ Sensitivity to Price: Managerial and Theoretical Issues. Journal of International Business and Economics, 2(2), 01-09.

Baek, H., Ahn, J., & Choi, Y. (2012). Helpfulness of online consumer reviews: Readers’ objectives and review cues. International Journal of Electronic Commerce, 17(2), 99-126.

Brynjolfsson, E., Hu, Y., & Smith, M. D. (2010). Research commentary—long tails vs. superstars: The effect of information technology on product variety and sales concentration patterns. Information Systems Research, 21(4), 736-747.

Elberse, A. (2008). Should you invest in the long tail? Harvard Business Review, 86(7/8), 88.

Hosken, D., & Reiffen, D. (2004). How retailers determine which products should go on sale: Evidence from store-level data. Journal of Consumer Policy, 27(2), 141-177.

Kocas, C., & Akkan, C. (2016). How Trending Status and Online Ratings Affect Prices of Homogeneous Products. International Journal of Electronic Commerce, 20(3), 384-407.

Mudambi, S. M., & Schuff, D. (2010). Research note: What makes a helpful online review? A study of customer reviews on Amazon.com. MIS Quarterly, 34(1), 185-200.

Tunc, M. M., Cavusoglu, H., & Raghunathan, S. (2017). Single-Dimensional Versus Multi-Dimensional Product Ratings in Online Marketplaces.

The swaying effects of online product reviews


Based on the ‘wisdom of the crowd’ effect (Surowiecki, 2005), consumers make use of reviews to make accurate product evaluations. However, due to the large amount of information and conflicting opinions in reviews, it becomes difficult for them to identify and consider the attributes that are relevant to their consumer situation.

Imagine you are browsing a webstore, looking for a new camera to take on your backpacking trip. For this situation, you prefer a camera that is lightweight, easy to use, shock-resistant and cheap. You don’t have a lot of experience with cameras, so you decide to look at the reviews of other consumers who bought Camera X. As you browse through several reviews, you start to notice that a lot of reviews mention things like FPS, image stabilization, Wi-Fi connection and GPS tracking. However, the reviews conflict about the quality of the image stabilization, and many mention the lack of a Wi-Fi connection. After reading most of the reviews, you decide that you want to look for a camera that has better image stabilization and a Wi-Fi connection – attributes which you originally didn’t pick as relevant for your situation …

The scenario above is what Liu & Karahanna (2017) describe as the ‘swaying’ effect. After reading reviews, people may over-weigh irrelevant attributes and under-weigh relevant ones. The authors suggest that attribute preferences are more heavily influenced by characteristics of the online reviews than by the relevance of the attributes to the consumer’s decision context.

Theory development & methodology

Liu & Karahanna (2017) developed their theory from the constructive preference perspective (Bettman, Luce, & Payne, 1998; Payne, Bettman, Coupey, & Johnson, 1992). This perspective suggests that preferences are shaped by the interaction between the properties of the information environment of the choice problem and the properties of the human information-processing system. Liu & Karahanna (2017) propose that three characteristics of online reviews affect the assessment of attribute preference, and theorize that together these characteristics may ‘sway’ attribute preferences:

  1. the amount of information about attribute-level performance,
  2. the degree of information conflict about attribute-level performance, and
  3. the overall numeric rating and the attribute-level performance information.

They conducted three studies, in which they provided the participants with a consumer scenario, asked them to weigh different attributes in terms of relevance and made them evaluate a digital camera based on reviews.

In study 1 they manipulated the three hypothesized factors and examined their effects on the attribute preferences. In study 2, they reproduced this study but added a monetary incentive to induce high motivation to process review information. The third study was a free simulation experiment to provide more realism and to allow for higher generalizability, in which verbal protocol analysis was used to capture and measure the factors.

Main findings

When the participants were asked to weigh the attributes based on the provided scenario, they placed more weight on the relevant attributes than on the irrelevant ones (in the scenario above, cost, ease-of-use and weight are relevant attributes, whereas image stabilization is not). But when they had to evaluate the camera based on reviews (which contained an uneven amount of information across attributes, varying degrees of information conflict, and a numeric overall rating), the relevance of the attributes did not have a significant impact on attribute preferences.

Figure 1. Participants’ Constructed Attribute Preferences  (Liu & Karahanna, 2017)

The amount of attribute information in the reviews had the greatest impact on attribute preferences. Study 2 showed that the degree of attribute information conflict only affects attribute preferences when people have high motivation to process information. Study 3 showed consistent results. The studies provided evidence that attribute preferences that result from reading the reviews are primarily driven by the review characteristics and not by attribute relevance, thus supporting the hypothesized ‘swaying’ effect of online product reviews.

Practical implications

What implications can be derived from these results? To support informed consumer decision making, we need to investigate how reviews should be organized and presented, and how making sense of conflicting information can become less cognitively demanding. The effectiveness of several practical suggestions still needs to be tested: providing a short description of the reviewer’s background (as newegg.com does), displaying the number of positive and negative comments on an attribute (Liu, Karahanna, & Watson, 2011), and letting people see the overall rating from reviewers with a similar decision context. Implementing these suggestions would allow consumers to filter reviews from people in a similar consumer scenario, make interpreting conflicts less demanding, and make the overall numeric rating more meaningful.
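
The last suggestion – showing the overall rating from reviewers with a similar decision context – could be prototyped as a simple filter. A minimal sketch, assuming a hypothetical data model in which each review carries an invented "context" tag describing the reviewer's usage scenario:

```python
from statistics import mean

# Hypothetical review records; the "context" field is invented for
# illustration, standing in for a reviewer's stated decision context.
reviews = [
    {"rating": 5, "context": "travel"},
    {"rating": 2, "context": "studio"},
    {"rating": 4, "context": "travel"},
    {"rating": 1, "context": "studio"},
]

def context_rating(reviews: list, context: str) -> float:
    """Average rating restricted to reviewers in a similar decision context."""
    matching = [r["rating"] for r in reviews if r["context"] == context]
    return mean(matching)

print(context_rating(reviews, "travel"))  # 4.5, vs. an overall average of 3.0
```

Even this toy example shows why the suggestion matters: the same camera can average 4.5 for backpackers and 1.5 for studio photographers, while the undifferentiated overall rating of 3.0 tells neither group much.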

Strengths, weaknesses, suggested improvements

By conducting multiple studies with consistent results, the article provides strong evidence for its hypotheses and for their generalizability, which enhances the external validity of the results. Nevertheless, there are some limitations. The study examines only a single product category (cameras) and a single scenario. Additionally, the samples consisted solely of students with similar levels of expertise with cameras. It would be interesting to examine whether the effects differ with the consumer’s level of expertise or with the product category itself. To increase generalizability, it would also be interesting to see whether these results hold for a sample that is more representative of the population (not only students).

I would love to hear your opinions on this. Do you recognize yourself in the ‘swaying’ effect? Are reviews influencing your preferences? 

References

Bettman, J. R., Luce, M. F., & Payne, J. W. (1998). Constructive Consumer Choice Processes. Journal of Consumer Research, 25(3), 187–217. https://doi.org/10.1086/209535

Liu, Q. (Ben), Karahanna, E., & Watson, R. T. (2011). Unveiling user-generated content: Designing websites to best present customer reviews. Business Horizons, 54(3), 231–240. https://doi.org/10.1016/j.bushor.2011.01.004

Liu, Q. (Ben), & Karahanna, E. (2017). The dark side of reviews: The swaying effects of online product reviews on attribute preference construction. MIS Quarterly, 41(2), 427–448. https://doi.org/10.25300/misq/2017/41.2.05

Payne, J. W., Bettman, J. R., Coupey, E., & Johnson, E. J. (1992). A constructive process view of decision making: Multiple strategies in judgment and choice. Acta Psychologica, 80(1–3), 107–141. https://doi.org/10.1016/0001-6918(92)90043-D

Surowiecki, J. (2005). The Wisdom of Crowds. New York: Anchor Books.

 

To Keep Or Not To Keep: Effects of Online Customer Reviews on Product Returns


By Madeleine van Spaendonck (365543ms)

In the US, the average return rate for products bought online is currently around 30% of purchases (The Economist, 2013). Most returns take place because of customers’ negative post-purchase product evaluations rather than product defects. One factor found to influence this is online customer reviews (OCRs).

This is what Minnema et al. (2016) investigated in their study “To Keep or Not to Keep: Effects of Online Customer Reviews on Product Returns”. It uses a multi-year (2011-2013) dataset from a European online retailer that offers both electronics and furniture products. The paper examines the impact of three OCR characteristics (valence, volume and variance) on return decisions (figure 1). The researchers evaluate the net effect of OCRs, looking at its influence on both purchase and return decisions.


Theory

The hypotheses examined are based on the ‘expectation disconfirmation mechanism’. Post-purchase satisfaction results from the combination of customer expectations formed at the purchase-moment, product performance, and the difference between them. Negative expectation disconfirmation therefore decreases satisfaction, leading to a higher return probability. Therefore, higher expectation levels should lead to higher purchase and return probabilities, while higher expectation uncertainty should lower these.
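A minimal sketch of this mechanism (an illustrative functional form, not the paper's actual model): disconfirmation is performance minus expectation, and the more negative the disconfirmation, the higher the return probability.

```python
import math

def return_probability(expectation: float, performance: float) -> float:
    """Illustrative logistic mapping: the further performance falls short
    of the expectation formed at purchase, the likelier a return.
    The slope (2.0) is an arbitrary choice for this sketch."""
    disconfirmation = performance - expectation
    return 1.0 / (1.0 + math.exp(2.0 * disconfirmation))

# Overly positive reviews inflate expectations; actual performance is fixed.
modest_expectation = return_probability(expectation=3.5, performance=4.0)
inflated_expectation = return_probability(expectation=4.8, performance=4.0)
print(inflated_expectation > modest_expectation)  # True: higher expectations, more returns
```

The sketch makes the paper's counterintuitive result concrete: raising expectations lifts purchases, but for the same product performance it also lifts the return probability.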

Main results

Figure 2 presents a summary of the results of the study.


A particularly counterintuitive insight is that overly positive review valence (where the current OCR valence is higher than the long-term product average) leads not only to more sales but also to a higher return probability. A potential reason is that OCRs induce the customer to form product expectations at the moment of purchase, leading to a higher purchase probability. However, high expectations created by overly positive reviews may not be met, resulting in negative expectation disconfirmation and hence a higher return probability. Review volume and variance mostly affect purchase decisions, with little to no effect on product returns.

Strengths, Weaknesses and Suggested Improvements

While the majority of scholarly work in this field focuses on the effects of OCRs on product sales, this paper also addresses the lack of understanding of their effects on product returns. Taking both aspects into account is vital, because predictions of OCR effects on retailer performance may be overly optimistic or pessimistic if only the effects on sales are considered. The study also shows that OCR effects extend beyond the moment of purchase and can affect the decision to return a product. However, the model did not incorporate other information sources available at the purchase moment that affect return likelihood, such as product descriptions and pictures provided by the retailer. A comparative analysis could evaluate whether reviews or retailer-provided information have the stronger impact on returns.

Managerial Implications

The study highlights the importance of considering product returns when evaluating OCR effects, as overly positive reviews may have negative consequences for retailers’ financial performance. Overly positive reviews, leading to more product returns, result in large reverse logistics costs. To reduce negative expectation disconfirmation, retailers should provide information and tools (besides OCRs) that allow consumers to set the right expectations and see if the product really meets their needs.

Sources:

Minnema, A., Bijmolt, T.H.A., Gensler, S., Wiesel, T. (2016). “To Keep or Not to Keep: Effects of Online Customer Reviews on Product Returns.” Journal of Retailing, 92(3), pp. 253–267.

The Economist. (2013). Return to Santa. December 21, (last accessed March 8, 2017), http://www.economist.com/news/business/21591874-e-commerce-firms-have-hard-core-costly-impossible-please-customers-return-santa

Source for cover photo:

Ministry Ideaz, (2016), How do I return a product I no longer want? [ONLINE]. Available at: http://support.ministryideaz.com/customer/portal/articles/1022650-how-do-i-return-a-product-i-no-longer-want- [Accessed 8 March 2017].

Helpfulness of online consumer reviews: Readers’ objectives and review cues.


Generally, customers seek quality information about a product before purchasing it. The emergence of the internet has given them convenient access to a variety of information sources for this, such as consumer-generated ratings and reviews. These consumer-generated product evaluations are generally found on portals (e.g. google.com), retailer sites (e.g. Amazon), manufacturer sites (e.g. Nike) or product evaluation websites (e.g. yelp.com). They have strong effects on consumer persuasion, willingness to pay and trust (Tsekouras, lecture 2017). However, not all customer reviews have the same effect on purchase decisions, as some reviews are perceived as more helpful than others (Chen et al., 2008).

Building on this research, Baek et al. (2012) tried to determine which factors influence the perceived helpfulness of online reviews. In addition, they investigated which factors matter more depending on the purpose of reading a review.

Furthermore, the paper extends previous research by considering two routes through which readers process online reviews. Customers may take both peripheral and central cues into account when determining whether a review is helpful. Persuasion through the peripheral route requires less cognitive effort; here readers focus on more accessible information (e.g. the author of the review). Persuasion through the central route requires more cognitive effort; here customers focus on the content of the message.

Methodology
Data was collected from Amazon.com for a subset of 23 products from a variety of categories. For these products, the authors collected the reviews and information about the reviewers. The final dataset included 15,059 online consumer reviews written by 1,796 reviewers. Review helpfulness is measured through the customers who rated whether a review was helpful or not.

Results
The results show that a review’s helpfulness is affected by how inconsistent its rating is with the average rating for that product, whether or not it is written by a high-ranked reviewer, the length of the review message, and the number of negative words it contains. The first finding is consistent with the negativity bias, whereby negative reviews tend to be more salient than positive reviews (Tsekouras, lecture 2017).
Furthermore, the results show that customers assess the helpfulness of a review mainly through central cues when they buy search goods and high-priced products. On the other hand, they rely more on peripheral cues when buying experience goods and low-priced products.
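
The cues named in these findings can be computed directly from a review. A toy sketch with a made-up review and a stand-in word list (the paper's actual lexicon and coding are more elaborate):

```python
# Toy versions of the review cues Baek et al. (2012) relate to helpfulness.
# This negative-word list is a stand-in, not the lexicon used in the paper.
NEGATIVE_WORDS = {"bad", "poor", "broken", "disappointing", "waste"}

def review_cues(text: str, review_rating: float, product_avg_rating: float) -> dict:
    words = text.lower().split()
    return {
        # How far this review's rating sits from the product's average rating.
        "rating_inconsistency": abs(review_rating - product_avg_rating),
        "length_in_words": len(words),
        "negative_word_count": sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words),
    }

cues = review_cues(
    "Poor build quality and a disappointing battery. Waste of money.",
    review_rating=2.0,
    product_avg_rating=4.3,
)
print(cues)
```

Each of these three numbers corresponds to one of the drivers of helpfulness the study identifies: rating inconsistency, message length, and negativity.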

Conceptual framework and hypotheses confirmation

So what does this imply?
The findings of this research yield several practical managerial implications for firms, which I consider the main strength of the paper. Some implications, however, depend on the goal of the retailer, which I elaborate on below. The findings may help web designers and marketers to design and shape their reviewing systems in such a way that review helpfulness is maximized. When more helpful reviews are written, the success of their service may increase, as it leads to more customers using the service and increased sales (Chen et al., 2008; Chevalier and Mayzlin, 2006).

First, as high-ranked reviewers are shown to be more credible to readers, firms may want to request and incentivise these reviewers to review their products more often. Amazon already does this by sending top reviewers free merchandise to review (Chow, 2013), which has its pros and cons in my opinion. On the one hand, these top reviewers may not take into account factors such as customer service, which I consider an important factor in deciding whether or not to buy a product. On the other hand, the purchasing bias and under-reporting bias are mitigated, which may result in a more ‘true’ rating, as these biases normally skew the product rating distribution (Hu et al., 2009). However, this ‘true’ rating may then differ from the average rating, which in turn decreases – as found in the study – the perceived helpfulness of the review. Consequently, I think this issue could be a very interesting field for further research.

Furthermore, to increase review helpfulness, a distinction between high-priced, low-priced, search and experience goods could be made. For high-priced and search goods, for example, the firm may want to encourage customers to write detailed messages, whereas for low-priced and experience goods, reviewer credibility and review rating should be emphasized more.

In addition, online retailers may face a trade-off between the perceived helpfulness and the positivity of a review. Some retailers encourage customers to write positive reviews; however, this undermines the perceived usefulness of the reviews, which in turn may decrease the number of customers using the retailer’s service. Therefore, in my opinion, in the long run it would be more helpful to encourage customers to write honest reviews.

Finally, I would like to make a suggestion for improvement. As review helpfulness is measured only through the customers who voted on whether a review was helpful, the findings might be less generalizable to the customers who did not vote. The researchers may therefore want to conduct an experiment to increase generalizability.

References

Baek, H.; Ahn, J.; and Choi, Y. Helpfulness of online consumer reviews: Readers’ objectives and review cues. International Journal of Electronic Commerce, 17, 2 (2012), 99-126.

Chen, P. Y., Dhanasobhon, S., & Smith, M. D. (2008). All reviews are not created equal: The disaggregate impact of reviews and reviewers at Amazon.com.

Chevalier, J.A., and Mayzlin, D. The effect of word of mouth on sales: Online book reviews. Journal of Marketing Research, 43, 3 (2006), 345–354.

Hu, N., Zhang, J., & Pavlou, P. A. (2009). Overcoming the J-shaped distribution of product reviews. Communications of the ACM, 52(10), 144-147.

Tsekouras, D. (2 March 2017), Lecture Customer Centric Digital Commerce, “Post-consumption Worth of Mouth”.

NPR.org. (2013). Top Reviewers On Amazon Get Tons Of Free Stuff. [online] Available at: http://www.npr.org [Accessed 4 Mar. 2017].

 

Estimating the Helpfulness and Economic Impact of Product Reviews: Mining Text and Reviewer Characteristics


We have all been there: scrolling through all the reviews before buying something. You want to see all of this user-generated content, since you are afraid you will regret the wrong choice (Tsekouras, 2017). At the same time, this information overload leaves consumers less satisfied, less confident and more confused (Park & Lee, 2009). You could look at the average rating of the product, but these ratings are often bimodally distributed and therefore less helpful (Hu, Zhang & Pavlou, 2009). How can you feel confident that you have seen all the important reviews, without having to read all of them?

This is what Ghose & Ipeirotis (2011) studied.

The authors looked at data from Amazon over a period of 15 months to study the impact of reviews on products sales and perceived usefulness. They looked at audio and video players (144 products), digital cameras (109 products) and DVDs (158 products) and their reviews.

The paper identified multiple features that affect product sales and helpfulness, by incorporating two streams of research. First, the information within the review is relevant. Second, reviewer attributes might influence consumer response.

What did they find?

An explanatory study identified which review and reviewer characteristics drive product sales and which drive perceived helpfulness – and the two sets of factors turn out to differ.

Thus, perceived helpfulness does not necessarily lead to higher product sales.

They also built a predictive model, which showed the importance of reviewer-related, subjectivity and readability features in predicting the impact of reviews. Furthermore, the predictions were less accurate for experience goods, such as DVDs, than for search goods, such as electronics.
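
A hedged sketch of the kind of text features such a model could consume (simplified proxies invented for illustration, not the paper's actual feature set): average sentence length as a crude readability proxy, and the share of opinion words as a crude subjectivity proxy.

```python
# Crude stand-ins for the readability and subjectivity features the paper
# mines; real implementations use readability indices and trained classifiers.
OPINION_WORDS = {"love", "hate", "great", "terrible", "amazing", "awful"}

def text_features(review: str) -> dict:
    # Treat "." and "!" as sentence boundaries for this toy example.
    sentences = [s for s in review.replace("!", ".").split(".") if s.strip()]
    words = review.lower().replace("!", " ").replace(".", " ").split()
    return {
        "avg_sentence_length": len(words) / len(sentences),
        "subjectivity_ratio": sum(w in OPINION_WORDS for w in words) / len(words),
    }

feats = text_features("Great camera. I love the zoom. Battery life is terrible!")
print(feats)
```

Features like these can be computed the moment a review is posted, which is exactly what lets the predictive model rank reviews before any helpfulness votes arrive.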

What are the managerial implications?

Amazon currently uses ‘spotlight reviews’, which display the most important reviews. However, enough votes on a review are required before a ‘spotlight review’ can be determined. The predictive model can overcome this limitation, since it makes it possible to immediately identify reviews that are expected to be helpful for consumers and display them first.

On the other hand, it is useful for manufacturers, since they are able to modify future versions of the product or the marketing strategy, based on the reviews that affected sales most.

The main strength of this paper is that it offers relevant managerial implications for both retailers and manufacturers, since it studied both the effect of reviews on sales and their helpfulness for consumers.

Would the findings be similar on different websites?

The findings will probably be similar for other retailers of electronics, so Coolblue and Mediamarkt could benefit. On the other hand, book reviews on Bol.com are not expected to benefit as much from the model, since books, like DVDs, are experience goods.

The implications for clothing retailers are less straightforward. However, I expect these retailers will not benefit as much from the model, since clothing websites often have no overload of reviews and therefore no need to reduce the information.

References

Ghose, A., & Ipeirotis, P. G. (2011). Estimating the helpfulness and economic impact of product reviews: Mining text and reviewer characteristics. IEEE Transactions on Knowledge and Data Engineering, 23(10), 1498-1512.

Hu, N., Zhang, J. and Pavlou, P.A. (2009). Overcoming the J-shaped distribution of product reviews. Communications of the ACM, 52(10), pp.144-147.

Park, D. H., & Lee, J. (2009). eWOM overload and its effect on consumer behavioral intention depending on consumer involvement. Electronic Commerce Research and Applications, 7(4), 386-398.

Tsekouras, D. (2017). Customer centric digital commerce: Personalization & Product Recommendations [PowerPoint slide]. Retrieved from Blackboard.

Feature image retrieved from: Enzer, J. (2016, August 17). How to reward product reviews and supercharge your e-commerce business. Retrieved from: http://blog.swellrewards.com/2016/08/how-to-reward-product-reviews-and-supercharge-your-e-commerce-business/

Knozen: a rating system for personalities


Knozen is an app that started as a rating and review system for colleagues. Now you can rate everyone you know, anonymously. Knozen asks you funny questions about someone’s personality, but also about your own, such as: “Denise is more likely to leave work early for a date – true or false”. Each profile shows 12 characteristics rated on a scale from 1 to 10. The answers to the quiz questions influence the score on each characteristic. The result is a personality chart that gives you an idea of someone’s personality.


There are two kinds of people, which one are you?


Do you prefer Coke or Pepsi? Do you eat your burger with cheese or without? And what about coffee, Americano or espresso? Zomato ensures that every meal, for users with all kinds of preferences, is a great experience.

Zomato is an India-based restaurant directory startup that provides detailed information about nearby restaurants, including scanned menus, as well as users’ reviews and photos of their gastronomic experiences. Zomato also includes real-time information about restaurants and lets users book tables through its iOS and Android apps.

The image below summarizes Zomato’s key features:

Source: zomato.com/portugal

Zomato has 1,398,900 listed restaurants in 22 countries. With the recent acquisition of Urbanspoon, Zomato will break into the US market, competing against services like Foursquare and Yelp.

The business model is quite simple: Zomato hires people to visit restaurants and send the data to the team, including up-to-date information on new openings and scanned copies of menus. Users can then share photos of their dishes and evaluate restaurants to help other users decide where to eat.

The detailed information available for each restaurant is thus the result of the combined input of both Zomato's team and its users.

Layout of the information available for each restaurant (example of a restaurant in Lisbon). Source: zomato.com/portugal

At Zomato, users' evaluations take the form of both a rating on a 5-point scale and a written review. Given that consumers with more extreme opinions (very satisfied or very dissatisfied) are more likely to rate (Li and Hitt, 2008), most restaurants have a score either close to 1 or higher than 4, as the image below exemplifies.

Results of restaurants in Lisbon. Source: zomato.com/portugal

Product ratings are crucial for Zomato: they are an integral element of online businesses, especially for experience goods (Tsekouras, 2015), and a reflection of product quality (Hu et al., 2009). Also, consumers tend to trust opinions from other customers more than information provided by the vendors themselves (Chevalier and Mayzlin, 2006), which is why having a high number of reviews is a key success factor for Zomato.

Social surroundings are therefore of crucial importance: the success of Zomato relies on the degree to which users interact, through comments, ratings, and the creation of a community of "foodies", and on the degree to which network effects take place, i.e. where a good or service becomes more valuable because more people use it (Katz and Shapiro, 1994). Following Grönroos and Voima (2013), the well-being of Zomato's customers increases through this process, as more user feedback becomes available for each restaurant.

As a startup, Zomato relies on eWOM to attract new users and generate brand awareness. Unlike traditional WOM, eWOM has much broader effects, in part because there is no need for a pre-existing connection between the sender and the receiver. eWOM is the relevant concept for Zomato, since it operates in an online context, whereas traditional WOM typically happens face-to-face (King et al., 2014).

Zomato provides value for consumers, whilst consumers also create value for each other through their evaluations and photos. This reflects a finding by Saarijärvi et al. (2013), who state that it is important to evaluate what kind of value is co-created and for whom, meaning that value can have a different meaning for different actors in the co-creation process.

Zomato also generates great value for restaurants. In fact, it is one of the most cost-effective, high-impact marketing platforms for dining establishments.

Hungry?

Check out the best place for you at https://www.zomato.com/!

REFERENCES

https://www.zomato.com/portugal

http://www.forbes.com/sites/anuraghunathan/2015/03/24/indian-restaurant-search-service-zomato-is-expanding-across-the-globe/

http://articles.economictimes.indiatimes.com/2015-03-17/news/60211899_1_foodpanda-countries-bank-account

http://blogs.ft.com/beyond-brics/2014/10/20/zomatos-special-sauce-coming-to-a-server-near-you/

Chevalier, J. A., & Mayzlin, D. (2006). The Effect of Word of Mouth on Sales: Online Book Reviews. Journal of Marketing Research, 43(3), 345-354.

Grönroos, C., & Voima, P. (2013). Critical service logic: making sense of value creation and co-creation. Journal of the Academy of Marketing Science, 41(2), 133-150

Katz, M.L. & Shapiro, C. (1994). Systems Competition and Network Effects, The Journal of Economic Perspectives, 8(2), 93-115.

King, R.A., Racherla, P., & Bush, V.D. (2014). What We Know and Don’t Know About Online Word-of-Mouth: A Review and Synthesis of the Literature. Journal of Interactive Marketing, 28(3), 167-183

Li, X., & Hitt, L. M. (2008). Self-selection and information role of online product reviews. Information Systems Research, 19(4), 456-474.

Saarijärvi, H., Kannan, P.K., & Kuusela, H. (2013). Value co-creation: theoretical approaches and practical implications. European Business Review, 25(1), 6-19.

Tsekouras, D. (2015) Variations On A Rating Scale: The Effect On Extreme Response Tendency In Product Ratings, working paper.

 

What Makes a Helpful Online Review?


We have all been there: browsing for too long on Tripadvisor.com or Amazon.com, trying to find that one review that could be the decisive factor in buying (or not buying) a specific product. But what exactly is it that we are looking for? What makes one review more helpful than another? Mudambi and Schuff (2010) try to find the answers to these questions by examining almost 1,600 reviews on Amazon.com across several products and product categories.

When browsing online, individuals are presented with an increasing number of customer reviews; these reviews have been shown to increase buyers' trust, aid customer decision making, and increase product sales (Mudambi, Schuff & Zhang, 2014). In addition, customer reviews can attract potential visitors and increase the amount spent on the website. Hence, retail sites with more helpful reviews hold greater potential to offer value to consumers, to sellers, and to the platform hosting the reviews.

In order to increase the helpfulness of customer reviews, several websites such as Amazon.com and Yelp.nl ask the question “was this review helpful to you?” and list more helpful reviews more prominently on the product information page.  Mudambi and Schuff (2010: 186) define a helpful review as a “peer-generated product evaluation that facilitates the consumer’s purchase decision process”.

The article distinguishes between two types of goods: search goods and experience goods. Search goods possess attributes that can be measured objectively, whereas the attributes of experience goods are not as easily evaluated objectively but depend on taste. Examples of search goods are printers and cameras; examples of experience goods are CDs and food products.

Past research showed conflicting findings as to whether extreme ratings (very negative or very positive) are more helpful than moderate reviews; some argue that extreme ratings are more influential, whereas others argue that moderate reviews are more credible. Mudambi and Schuff (2010) argue that taste plays a large role with experience goods, as consumers are quite subjective when rating; hence, consumers would value moderate ratings of experience goods more, as these could represent a more objective assessment (H1).

Next, Mudambi and Schuff (2010) scrutinize review depth. Since longer reviews often include more product details, and more details about the context in which the product was used, the authors hypothesize that review depth has a positive impact on the helpfulness of the review (H2). Nevertheless, review depth might not be equally important for all products. Reviews for experience goods often include unrelated comments, or comments so subjective that they are not interesting to the reader. For example, movie reviews often contain elaborate opinions on actors and actresses that are not important to the reader. On the other hand, reviews of search goods are often presented in a fact-based manner, as attributes can be measured objectively. As a result, it is argued that review depth has a greater positive effect on the helpfulness of the review for search goods than for experience goods (H3).

By evaluating almost 1,600 reviews (distributed over 6 products: 3 experience goods and 3 search goods) and excluding those that did not receive any helpfulness votes, the researchers were able to confirm all three hypotheses. The article teaches us that there is no one-size-fits-all answer to what makes a review helpful: extreme ratings prove less helpful for experience goods, whereas search goods benefit most from in-depth reviews.
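The moderation structure of H1-H3 can be written as a regression of helpfulness on rating extremity, review depth, product type, and their interactions. Below is a minimal sketch on simulated data, not the authors' dataset: the variable names, coefficient sizes, and data-generating process are all illustrative assumptions chosen to match the direction of the hypotheses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1600  # roughly the paper's sample size

# Simulated review data (illustrative only, not the authors' dataset)
extremity = np.abs(rng.integers(1, 6, n) - 3)   # distance of the star rating from 3 (0-2)
depth = rng.poisson(120, n).astype(float)       # word count as a proxy for review depth
search_good = rng.integers(0, 2, n)             # 1 = search good, 0 = experience good

# Generate helpfulness consistent with H1-H3: extremity hurts experience goods,
# depth helps overall, and depth helps search goods even more
helpfulness = (0.5
               - 0.10 * extremity * (1 - search_good)
               + 0.002 * depth
               + 0.001 * depth * search_good
               + rng.normal(0, 0.05, n))

# Fit the moderated regression by ordinary least squares
X = np.column_stack([np.ones(n), extremity, depth, search_good,
                     extremity * search_good, depth * search_good])
beta, *_ = np.linalg.lstsq(X, helpfulness, rcond=None)
print(beta.round(3))
```

The signs of the fitted interaction terms then mirror the hypotheses: a negative extremity coefficient for experience goods (H1), a positive depth coefficient (H2), and a larger depth effect for search goods (H3).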


Mudambi, S. & Schuff, D. (2010). What Makes a Helpful Online Review? A Study of Customer Reviews on Amazon.com. MIS Quarterly, Vol 34 (1), pp 185-200.

Mudambi, S., Schuff, D., & Zhang, Z. (2014). Why Aren't the Stars Aligned? An Analysis of Online Review Content and Star Ratings. IEEE Computer Science, 3139-3147.

Do Movie Reviews Affect the Box Office Revenues?


The Internet has changed the way we live. It has become a huge part of our lives, one we simply cannot live without. We rely on it in almost every aspect of our lives, including when we seek information. This also applies when we are deciding what movies to watch: before going to the cinema, many people first check a movie's online reviews. These reviews are online user reviews, a form of electronic word-of-mouth (eWOM). According to Duan, Gu, and Whinston (2008), eWOM influences consumer purchase behaviour while also being the outcome of consumer purchases. But how do these online user reviews actually impact offline purchases?

There are three measures of online user reviews: the volume (Liu 2006, Duan et al. 2008), the valence or average (Liu 2006, Duan et al. 2008, Chevalier and Mayzlin 2006), and the variance in reviews (Godes and Mayzlin 2004). Chintagunta, Gopinath and Venkataraman (2010) measured the impact of national online user reviews (valence, volume, and variance) on the local, designated market area (DMA)-level box office performance of movies in the United States. What is different about their study is that they used local geographic data instead of the national-level data used by previous studies, and that the 'when' and 'where' of a movie's release are taken into account. Thus, they measured user reviews at the moment a movie was released in a market, using reviews written by users in markets where the movie had previously been released. The impact was measured by combining daily box office ticket sales for 148 movies released from November 2003 to February 2005 with user ratings from the Yahoo! Movies website.

They found that overall movie revenues are greatly affected by the opening day gross. As the analysis was conducted at the DMA level, movie and market fixed effects were included, taking into account differences such as movie genre and market size, and other variables, such as advertising levels and the number of theaters, were also controlled for. In their first study, using the local data, they found that average user ratings influenced box office performance the most. This finding is interesting, since most previous studies found that it is the volume of reviews that matters most to box office revenues. But when national-level data was used, they arrived at the same results as previous studies. In the last part of the study, they attempted to explain this difference by using national-level models with market-level controls. This method gave the same result as the first study: the average user rating has the greatest impact on box office revenues. The authors concluded that it is important to determine where a movie was played, whether in 'new markets' or 'old markets', and that only then can the 'true' effect of user ratings be measured.
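The three review measures above are straightforward to compute from a list of ratings. A minimal illustration with made-up ratings (not data from the study):

```python
import statistics

# Hypothetical user ratings for one movie on a 10-point scale
ratings = [9, 8, 10, 7, 9, 3, 8, 9, 10, 6]

volume = len(ratings)                     # how many users rated the movie
valence = statistics.mean(ratings)        # the average rating
variance = statistics.pvariance(ratings)  # disagreement among raters

print(volume, valence, round(variance, 2))  # → 10 7.9 4.09
```

Note how a single dissenter (the rating of 3) barely moves the valence but dominates the variance, which is why the three measures can tell different stories about the same movie.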

As for us moviegoers, what the paper discovered is that we are mostly affected by the average rating when deciding what movies to watch. Yet how many people rated the movie (the volume) is also an important aspect: I would sooner believe a slightly lower rating with a much higher volume than a higher rating with a much lower volume. In other words, volume and variance make a rating or review more trustworthy. Which one would you prefer?
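One common way to trade off valence against volume is a Bayesian-style weighted average. IMDB, for instance, has published a formula of this shape for its Top 250 chart: WR = (v/(v+m))·R + (m/(v+m))·C, where R is the movie's average rating, v its number of votes, C the overall mean rating, and m a minimum-votes threshold. The sketch below uses assumed values for m and C; they are illustrative, not IMDB's actual parameters.

```python
def weighted_rating(avg, votes, prior_mean=7.0, prior_votes=1000):
    """Shrink a movie's average rating toward the overall mean
    when the movie has only a few votes."""
    w = votes / (votes + prior_votes)
    return w * avg + (1 - w) * prior_mean

# A 9.2 average from only 50 votes vs. an 8.6 average from 200,000 votes
few = weighted_rating(9.2, 50)
many = weighted_rating(8.6, 200_000)
print(round(few, 2), round(many, 2))  # → 7.1 8.59
```

The high-volume movie ends up ranked above the one with the higher raw average, matching the intuition that volume makes a rating more trustworthy.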

Source: IMDB

References

Chevalier, J. A., & Mayzlin, D. (2006). The effect of word of mouth on sales: Online book reviews. Journal of Marketing Research, 43(3), 345-354.

Chintagunta, P. K., Gopinath, S., & Venkataraman, S. (2010). The effects of online user reviews on movie box office performance: Accounting for sequential rollout and aggregation across local markets. Marketing Science, 29(5), 944-957.

Duan, W., Gu, B., & Whinston, A. B. (2008). The dynamics of online word-of-mouth and product sales: An empirical investigation of the movie industry. Journal of Retailing, 84(2), 233-242.

Godes, D., & Mayzlin, D. (2004). Using online conversations to study word-of-mouth communication. Marketing Science, 23(4), 545-560.

Liu, Y. (2006). Word of mouth for movies: Its dynamics and impact on box office revenue. Journal of Marketing, 70(3), 74-89.

Reviews & Ratings: Consumer online-posting behavior


“Unfiltered feedback from customers is a positive even when it’s negative. A bad or so-so online review can actually help you because it gives customers certainty that the opinion is unbiased.” 

– Source: Gail Goodman, Entrepreneur, 2011

Social media delivers the ultimate platform for customers to broadcast their personal opinions about purchased products and services, accelerating the spread of word-of-mouth (WOM) and consumer reviews. Nearly 63% of consumers are more likely to buy products on a website that has online consumer reviews (iPerceptions, 2011). Online consumer reviews are trusted 12 times more than descriptions of the product provided by the manufacturers themselves (eMarketer, February 2010). Companies that provide space for reviews on their websites see an increase in sales of nearly 18% (Reevoo). The video below shows how customers can assess online consumer reviews and recommendations while researching and shopping online.

Youtube: “Online Reviews and Recommendations”

Chen et al. (2011) examined the interactions between consumer posting behavior and marketing variables such as product price and quality. An important part of the research concerns how such interactions evolve as the Internet and consumer review websites gain widespread adoption. The study's data on new automobile models comprised two samples, gathered in 2001 and in 2008. These years were seen as appropriate because buying an automobile is a significant financial decision that involves thorough searching, and because many more consumers used the Internet for car purchases between 2001 and 2008. A total of 54% of new-automobile buyers made use of the Internet when buying a car in 2001, as reported by Morton, Zettelmeyer and Silva-Risso; according to a report by eMarketer, by 2008 this percentage had increased to nearly 80%. The study included prominent automobile review websites covering the distinctive sections of the market: leading car enthusiasts (experts) as well as amateur consumers.


Motivations for Posting Online Consumer Reviews

Gaining social approval and self-approval, and signaling a level of expertise or social ranking by demonstrating one's superb purchase decisions, are all psychological reasons why consumers post online reviews. Reviews can also be used to state satisfaction or dissatisfaction. Different types of customers are driven by different motivations for posting reviews online. The earlier group of Internet users in the study (2001) differs from the second group (2008) in why they post. Consumers in the early group (a.k.a. experts, early adopters of innovation) have high levels of product expertise, making them more likely to seek status and engage in conspicuous consumption. Signaling know-how and social ranking was particularly significant in the Internet's early years (2001), as these users tend to have high incomes and are relatively price insensitive.

Conversely, the Internet has advanced and developed over this period, appealing to a much broader population of consumers. Where in 2001 a select group of Internet users would post reviews, Internet usage and online consumer review sites have since become mainstream. Late adopters (2008) tend to be more pragmatic and price-focused compared to early adopters.

Marketing variables – effect on consumer online-posting behavior

Marketing variables indeed influence consumer online-posting behavior. In the early stages of the Internet (2001), product price had a negative relationship with the number of online consumer postings, while premium-brand image had a positive relationship; product quality, in turn, had a U-shaped relationship with the number of postings. These relationships are likely to be driven by early adopters of Internet usage.

Over time, the Internet reached mass consumers, who are more inclined to be price sensitive and value driven.

Though certain marketing variables can lead to a large number of consumers engaging in online posting, these consumers do not automatically give higher ratings. The study shows that mass consumers tend to post online consumer reviews at both higher and lower purchase price levels, in contrast to the early stages of Internet usage, when reviews were posted primarily at lower price levels. As the Internet has been adopted by mass consumers, expressing (dis)liking of a product or service has become a more important motivation for sharing reviews than signaling expertise or social status.

In conclusion, this research showed that the connections between marketing variables and consumer online-posting behavior differ between the early and mature phases of Internet usage. High prices increase overall consumer review ratings, which may be good news for a firm's pricing decisions. The authors found that the search for status is a core driver of consumer-review behavior, predominantly in the early Internet stage. In markets where quality is difficult and costly to assess, and where heterogeneous tastes are important factors in choosing a purchase, customers engage in extensive decision-making. These conditions make it more likely that consumers will seek external opinions online before deciding what to purchase.

References:

Chen, Y., Fay, S., & Wang, Q. (2011). The role of marketing in social media: How online consumer reviews evolve. Journal of Interactive Marketing, 25(2), 85-94.

Charlton, G. (2012) “Ecommerce Consumer Reviews: Why You Need Them and How to Use Them.” Econsultancy.com

Featured image: http://splumber.com/wp-content/uploads/2014/12/Plumbing-Online-reviews-1030×574.jpg

GemShare – The trustworthy recommendation agent


Finding the best restaurant in town is not easy. Therefore a vast number of applications and websites provide services to facilitate the search. The “online urban guide” and business review site Yelp is the most popular among them. It uses automated software to recommend the most helpful and reliable reviews for its users and to help them connect with local businesses. Like most other recommendation websites, it combines numerical ratings with textual reviews. Yelp contains over 57 million local reviews and attracts around 130 million users monthly.

Companies are aware of the power of word-of-mouth, and online user reviews have become an increasingly important source of information for consumers. However, when it comes to more personal local services, such as finding a trustworthy craftsman, a lawyer, or the most competent physiotherapist, online recommendation websites like Yelp are only used as a last resort. "People don't go to Yelp for doctors or lawyers because of trust issues," says Mohit, the founder of GemShare. "Positive reviews from strangers don't guarantee that you, too, will value what is likely a very personalized and intimate experience." Besides the relevance of taste and trust for these services, people are also aware of fraud within online recommendation systems. Especially for local services that are not used by enough people to obtain a wisdom-of-the-crowd effect, reviews and ratings can easily be manipulated by companies rating themselves.

The alternative to the time-consuming and sometimes untrustworthy use of common online recommendation platforms is to ask one's own personal network for advice, but even with all our social networks and technical devices, this approach can also be time-consuming and frustrating.


GemShare, launched in April 2014, is a recommendation platform and application that focuses on trust and personal recommendations to solve this issue. "We have several members who have said two thumbs up from a friend is worth more than 40-star reviews," says Mohit. Users create their own trusted network of friends and like-minded people, via Facebook, Gmail, or phone contacts, for the specific purpose of finding out where to find the best service.

Continue reading GemShare – The trustworthy recommendation agent

Can you really rely on online product reviews?


Product reviews on online platforms are growing in popularity [1, 2]. Platforms like Amazon, Google, and the App Store use product reviews to show which products other consumers had the best experience with. Most of these product reviews are extremely positive about the product [3], but does this indicate that all products are extremely good and that there are no moderate products on the online market? Let's try it: search Amazon for three random product reviews from books, video games, and sports. The results are shown in Table 1.

Table 1

As can be seen from the table, two of these random reviews are extremely positive (the book and the sports watch) and one is extremely negative (the video game). In an experiment by Hu et al. (2009), customers were asked to rate a music CD on a 5-star scale. This experiment showed an almost normal distribution of ratings, which is what one would expect if every buyer of the product rated it. Most of the reviews on Amazon (Table 1), however, show a so-called J-shaped distribution, not the outcome of this experiment. What could be the cause of these differences?

The first explanation is the purchasing bias: customers with a higher product valuation are more likely to purchase the product than customers with a lower product valuation. Continue reading Can you really rely on online product reviews?
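A small simulation illustrates how purchasing bias alone can push observed ratings toward the top of the scale. The valuation distribution and the purchase threshold below are assumptions chosen purely for illustration.

```python
import random

random.seed(42)

# True product valuations in the whole population, roughly normal around 3 stars
population = [random.gauss(3.0, 1.0) for _ in range(100_000)]

# Purchasing bias: only consumers who value the product enough actually buy it,
# and only buyers can leave a rating
buyers = [v for v in population if v >= 3.0]
ratings = [min(5, max(1, round(v))) for v in buyers]

share_high = sum(r >= 4 for r in ratings) / len(ratings)
print(f"Share of 4-5 star ratings among posted reviews: {share_high:.2f}")
```

Even though valuations in the population are symmetric around the middle of the scale, the observed reviews skew heavily toward 4-5 stars, consistent with the positive side of the J-shape.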

Selling your products to Justin Bieber? No way!


With the total number of social media fans across different platforms (Twitter: 51 million followers (1), Facebook: 66 million likes (2), and Instagram: 15 million followers (3)) exceeding the population of Japan (currently around 127 million inhabitants), Justin Bieber is arguably the most popular person on the planet. With this popularity come a lot of perks; the best, according to regular human beings, are the endorsement deals that the Canadian superstar signs on a regular basis. One of these deals was to design his personal nail polish, called 'One Less Lonely Girl'. Certainly, this was worth around $12,500,000 (4).

Since almost all major brands engage in this behavior, these celebrity endorsement deals must give the brand something in return. However, not every company has a spare $163.75 million in cash to endorse athletes (or other celebrities) like Nike does (5). So what can smaller companies do to get the same exposure as these large global brands? Because they simply cannot afford to sponsor the superstars of today, it seems they can only hope and pray that somebody like Justin Bieber enters their store and purchases their product. If he then writes an objective review about the product on one of his social media platforms, the logical consequence seems to be that the owner of the smaller company will be able to retire at an early age.

Just imagine that Justin Bieber does write objective product reviews on his social media platforms. Continue reading Selling your products to Justin Bieber? No way!