Strange ways consumers interpret online reviews

    Justin Saddlemyer

    An online review is more than just the average score or product information.

In a previous post, I discussed how where and when an online review is written can affect its tone. In this article, I would like to look at those reading reviews, not writing them. Why? Because a number of studies have shown that people don’t use online reviews the way you might expect. Rather than relying only on the informational content or the average score, readers of online reviews often react to surprising characteristics. Below are four examples of this kind of behavior.

    1. Irrelevant negative information can be great!

    Do you ever scroll down through product reviews, only to discover something along the lines of:
    “1/5 stars, my cat scratched it to shreds.”
    Or
    “2.5/10, the author’s name reminds me of my cheating husband.”
Did reading these types of reviews reduce your willingness to buy the product? Probably not. On the contrary, recent research featured in the Journal of Consumer Psychology shows that these types of reviews can actually enhance people’s attitudes towards the product (Shoham, Moldovan, & Steinhart, 2017). In one study, the authors showed participants a set of reviews for an espresso maker. All participants saw four positive and relevant reviews of the machine. However, some participants were also exposed to an additional irrelevant review (about the taste of tap water) that was either negatively valenced (one star) or positively valenced (five stars). As the researchers predicted, the set that included the negative irrelevant review produced the highest perceptions of product quality.

In essence, people often see negative information as more important than positive information. However, negative information that is irrelevant can be dismissed, giving consumers a greater feeling of certainty when making their purchase. The lesson? Retailers don’t necessarily need to be concerned about negative reviews when they are clearly unrelated to the product. Such information may even boost evaluations.

    2. Sometimes divisive reviews are attractive

    Not all product ratings cluster neatly around a particular score. Occasionally you’ll encounter a product that some people hate, and some people love. You might expect that such polarity can be off-putting, as it is difficult to understand what the true value of the product is. How can some customers hate the product while others absolutely adore it?
However, researchers have found that these types of reviews can actually increase preferences when people are looking to strengthen their identity (Rozenkrants, Wheeler, & Shiv, 2017). In a series of studies, the researchers looked at how unimodal (scores clustered around one value) versus bimodal (scores clustered around the lower and higher ends of the scale) review distributions affected self-expression and preference. First, the authors found that participants rated movies with a bimodal review distribution as more self-expressive (that is, more revealing of a person’s tastes and identity). This is an intuitive result: divisive reviews signal that enjoying the movie is largely a matter of taste.
However, subsequent experiments revealed something interesting. When consumers had a weak self-identity, they were more likely to indicate that they would see a movie with polarized reviews. This was even replicated when the authors weakened participants’ self-views by asking them to remember a time they didn’t (versus did) have a strong self-identity. Again, those with the weakened self-identity were more likely to indicate a preference for the polarized reviews. Therefore, although there are times when divisive ratings might hurt purchase intentions, some consumers, whether chronically or temporarily uncertain about their identity, may actually prefer such products!

    3. Politeness matters!

“I don’t want to be rude, but…”, “I’ll be honest and admit…”, “Don’t get me wrong…”. These types of expressions are common ways of introducing a piece of negative information. Such phrases are called ‘dispreferred markers’ and are used cross-culturally. Dispreferred markers typically soften negative information, as if acknowledging that the words to follow may be unpleasant or awkward. In everyday speech such politeness may be appreciated, but how do these markers function in online reviews?
One might expect that in the context of online reviews people appreciate direct and unfiltered information. Why should they feel offended by a blunt and scathing review of a product they have yet to purchase? Yet researchers have shown that when a review contains a dispreferred marker (versus direct information), readers perceive the reviewer as more likable and credible (Hamilton, Vohs, & McGill, 2014). Moreover, one study showed that participants indicated a greater willingness to pay for a luxury watch when the review contained the expression “I don’t want to be mean, but…”. Evidently, social graces still apply to products that people have yet to own!

4. Enthusiasm and passion matter, up to a point

“Product was good.” It is hard to take a review seriously when it appears that little effort has been put into writing it. But it is equally hard to trust a review that looks as though too much effort has been put into it, as the person may appear unhinged. But how do you quantify effort? In an interesting series of recent studies, researchers looked at how emotional arousal in online reviews shaped their perceived helpfulness, as measured by up-votes in an app store (Yin, Bond, & Zhang, 2016). Emotional arousal (i.e., excitement, intensity) was measured using linguistic analysis software applied to the content of app reviews, and this measure was used to predict how helpful other users rated each review.
    The results? At lower levels of arousal, increasing emotional intensity was good. It connoted effort, and people found the reviews to be more helpful. But after a point there were diminishing returns, eventually turning negative. As reviews became too intense, users evidently began to make alternative inferences about the reviewer (e.g. that they are venting, showing off, or are irrational). The effect was also replicated in a subsequent lab experiment, where students were asked to rate the helpfulness of reviews that varied in emotional intensity. The takeaway? There is a sweet spot between apathy (no emotional arousal) and crazy (extreme emotional arousal) that guides readers of online reviews.

    More than numbers and features

Retailers are increasingly featuring social reviewing systems on their websites. These types of studies show that the ways consumers approach reviews are not always intuitive. What may seem like a liability (an irrelevant negative review or polarized ratings) may sometimes improve evaluations. Moreover, the way that information is presented is also clearly important, as politeness and emotion are clear signals of reviewer credibility. These types of insights can give marketers and retailers an idea of how reviews are going to affect potential customers, and when an action (such as a response) is warranted.

    References

    Hamilton, R., Vohs, K. D., & McGill, A. L. (2014). We’ll be honest, this won’t be the best article you’ll ever read: The use of dispreferred markers in word-of-mouth communication. Journal of Consumer Research, 41(1), 197-212.

Rozenkrants, B., Wheeler, S. C., & Shiv, B. (2017). Self-expression cues in product rating distributions: When people prefer polarizing products. Journal of Consumer Research.

Shoham, M., Moldovan, S., & Steinhart, Y. (2017). Positively useless: Irrelevant negative information enhances positive impressions. Journal of Consumer Psychology.

Yin, D., Bond, S. D., & Zhang, H. (2016). Keep your cool or let it out: Nonlinear effects of expressed arousal on perceptions of consumer reviews. Journal of Marketing Research.
