Visualizing Yelp Review Histories


This piece was developed in Sheelagh Carpendale’s CPSC 683 course at the University of Calgary.

Summary

While a rating on a five-star scale can quickly communicate a business' overall standing, it also reduces customers' experiences of the venue to a single number. Because a business' star rating is a cumulative average of all of its reviews, the number can also make it difficult for growing businesses to recover from mistakes they made in the past. Unlike a quantitative star rating, the qualitative text of the reviews on a business page can paint a much more vivid picture of a customer's experience. However, knowing which reviews to read and identifying trends between them is currently both difficult and time-consuming. To temporally connect these ratings to the content inside each business' reviews, I developed a custom interactive data visualization for iPad that reveals otherwise hidden patterns in Yelp reviews. The app embeds multiple visual mappings of the fields found in the business and review datasets provided for the 2017 Yelp Dataset Challenge, with an animated layout that encourages the viewer to playfully explore the dataset through touch. As an additional encoding, carefully chosen sounds elevate the experience even further, communicating details about the data under the explorer's finger.

Problem

When looking for a place to eat or a service on Yelp, two major factors contribute to a business' reputation.

Average Number of Stars

First, the business' overall average number of stars. Today, a business' star rating can be used, in combination with its number of reviews, to filter out companies that do not reach a certain threshold. However, a bare five-star rating not only reduces a customer's experience of the venue to a single number, but also makes it challenging for growing businesses to recover from mistakes they made in the past. Businesses that received very low ratings five years ago will still see their star count affected today, even if they have improved dramatically since then. On the flip side, businesses that were highly rated in the past but have recently begun to slip in quality may not see a significant rating impact, causing recent negative reviews to fall under the consumer radar. While Yelp recently introduced some basic charts (below) showing how business ratings change over time, the review data is much richer than a number on a five-point scale.

[Screenshots: Yelp's rating-over-time charts]

Review Text

The second factor that impacts a business' reputation is the text of the reviews themselves. Unlike a quantitative star rating, the qualitative text of a review can paint a much more vivid picture of a customer's experience, as shown in the example from Yelp's website above. However, knowing which reviews to read and identifying trends between reviews is currently both difficult and time-consuming. While Yelp provides three buttons to rank reviews as Useful, Funny, or Cool, these rankings only offer basic, at-a-glance insight into how popular or important a review is. They do not, however, highlight the areas a customer may personally care about, like how good the grilled cheese is at my favourite breakfast joint.

Solution

To better understand how business ratings have changed over time, and to connect these ratings to the content inside each business' reviews, I created an iOS-based data visualization from this year's Yelp Dataset Challenge dataset. For the competition, six JSON files were provided containing information about businesses, reviews, users, tips, photos, and check-in data. While the provided dataset is static, in theory the visualization could also be connected to a live Yelp review database to show customers business data around them. Currently, the visualization uses two of the provided files in particular: business.json and review.json. I found the review.json dataset particularly interesting because it pairs qualitative text reviews of different businesses with a date and a quantitative rating on a five-star scale. The business.json file was used only to retrieve a list of restaurant categories, which provide context in the visualization in the form of titles.
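To give a concrete sense of the review records the visualization consumes, here is a minimal Swift sketch of how one line of review.json could be decoded. The field names follow the dataset's published schema; the loading helper itself is illustrative rather than the app's actual code.

```swift
import Foundation

// Sketch of decoding review records from review.json, which is newline-delimited JSON
// (one review object per line). Field names follow the dataset's published schema.
struct Review: Codable {
    let reviewId: String
    let businessId: String
    let stars: Int          // 1-5
    let date: String        // e.g. "2016-02-12"
    let text: String
    let useful: Int
    let funny: Int
    let cool: Int

    enum CodingKeys: String, CodingKey {
        case reviewId = "review_id"
        case businessId = "business_id"
        case stars, date, text, useful, funny, cool
    }
}

// Decode every line of the file into a Review.
func loadReviews(from url: URL) throws -> [Review] {
    let decoder = JSONDecoder()
    return try String(contentsOf: url, encoding: .utf8)
        .split(separator: "\n")
        .map { try decoder.decode(Review.self, from: Data($0.utf8)) }
}
```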

[Figure: Early concepts of the visualization]

Design + Data Mappings

At its core, the visualization displays Yelp reviews for a given business over time, with the timeline starting at the 12 o'clock position of each ring. The timeline always begins on Jan 1 of the business' first review year and ends on Jan 1 of the year after its last review. So, for example, if the business' first review was on Sep 12, 2009 and its last review was on Feb 12, 2016, the timeline would begin Jan 1, 2009 and end Jan 1, 2017. Using these dates ensures that each year receives an equal amount of angular spacing around the circle: 2π divided by the number of years, to be exact. Placing time around a circle is also one way to ensure that the entire review history of a business always fits onto the screen in a consistent manner.
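As a rough illustration of that mapping (not the app's exact code), a review's date can be converted to an angle by measuring how far it falls between those two Jan 1 endpoints:

```swift
import UIKit

// Sketch of the date-to-angle mapping described above; names are illustrative.
// The returned angle is measured from the 12 o'clock position, so each year occupies
// 2π / (number of years) radians and the full history covers one complete turn.
func angle(for reviewDate: Date, firstReviewYear: Int, lastReviewYear: Int,
           calendar: Calendar = .current) -> CGFloat {
    let start = calendar.date(from: DateComponents(year: firstReviewYear, month: 1, day: 1))!
    let end = calendar.date(from: DateComponents(year: lastReviewYear + 1, month: 1, day: 1))!
    let fraction = reviewDate.timeIntervalSince(start) / end.timeIntervalSince(start)
    return CGFloat(fraction) * 2 * .pi   // 0 at the first Jan 1, 2π at the final Jan 1
}
```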
The visualization has five prominent rings. Each ring represents a Yelp review star rating, and each dot around the visualization represents a review of the business. One-star reviews are temporally placed on the innermost ring, while five-star reviews are on the outermost ring. The radius of the arcs positioned behind the rings encodes the average star rating of that year's reviews. While an additional encoding from number of reviews to arc brightness was considered, it ended up feeling too visually distracting and was removed.

If you look closely, some review dots are larger than others: the number of words in the review is subtly communicated by the size of the dot. When a review is touched with a finger, its dot expands to seven times its size, while the other reviews shrink down. This exaggerates the differences in review size as the viewer slides their finger across the screen, while also indicating that the review under their finger is about to be selected.
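A minimal sketch of these radial encodings, with illustrative constants that are not taken from the app:

```swift
import UIKit

// Rough sketch of the radial encodings described above; all constants are illustrative.
struct ReviewDotLayout {
    let baseRadius: CGFloat = 40       // radius of the innermost (1-star) ring
    let ringSpacing: CGFloat = 28      // distance between adjacent star rings
    let minDotSize: CGFloat = 3
    let maxDotSize: CGFloat = 8
    let touchScale: CGFloat = 7        // a touched dot grows to seven times its size

    // Star rating (1...5) selects the ring: 1-star innermost, 5-star outermost.
    func ringRadius(stars: Int) -> CGFloat {
        baseRadius + CGFloat(stars - 1) * ringSpacing
    }

    // Word count drives the dot size, clamped so very long reviews don't dominate.
    func dotSize(wordCount: Int, maxWordCount: Int) -> CGFloat {
        guard maxWordCount > 0 else { return minDotSize }
        let t = CGFloat(min(wordCount, maxWordCount)) / CGFloat(maxWordCount)
        return minDotSize + t * (maxDotSize - minDotSize)
    }
}
```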


Additionally, a clicking sound is played when a new review is touched. The pitch of the sound rises as the person touching the screen scrubs toward more recent dates on the timeline, while the volume of the sound is mapped to the review's word count. When the viewer stops touching the screen, the unselected reviews return to their initial size. If the touch ends while the viewer's finger is on top of a review, that dot stays expanded and the text of the review is displayed on the right side of the screen, with the visualization sliding from the centre of the screen to the left. Switching between a focused, visualization-only view and a visualization-plus-text view felt necessary, as having both on the screen at the same time was sometimes distracting.
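The two scrubbing-sound parameters can be sketched as simple mappings. The ranges below are assumptions rather than the app's actual settings; the resulting values would feed into whatever audio pipeline is used (for example, a pitch shift expressed in cents):

```swift
import Foundation

// Sketches of the two scrubbing-sound mappings described above; ranges are assumptions.
// timelineFraction runs from 0 (oldest point on the timeline) to 1 (most recent).
func clickPitchShift(timelineFraction: Double) -> Float {
    let maxShiftCents = 1200.0          // assumed range: up to one octave above the base click
    return Float(timelineFraction * maxShiftCents)
}

// Volume scales with the review's word count, normalised against the longest review shown.
func clickVolume(wordCount: Int, maxWordCount: Int) -> Float {
    guard maxWordCount > 0 else { return 0 }
    return Float(min(wordCount, maxWordCount)) / Float(maxWordCount)
}
```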

[Screenshot of the visualization]

After the viewer releases their finger and selects a review, a plucked string sound is played. The note played is mapped to the number of stars the review has (above). These notes were chosen because they sound relatively pleasant together, regardless of the combination played. Each note is two whole steps away from the next, maintaining a consistent spacing similar to the spacing of the star rings in the visualization.
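As a hypothetical illustration of that spacing (the actual pitches used are not listed here), star ratings could map to MIDI note numbers from an assumed root like this:

```swift
// Hypothetical star-to-note mapping, not the app's actual choice of pitches.
// Each star level sits two whole steps (four semitones) above the previous one.
func pluckedNote(forStars stars: Int) -> Int {
    let rootNote = 48        // assumed root (C3)
    let wholeStep = 2        // semitones per whole step
    return rootNote + (stars - 1) * 2 * wholeStep
}
```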

[Screenshot of the visualization]

Tapping a word in the review text area on the right highlights the word, and all reviews in the visualization that contain it, with a single colour. Additionally, the average rating arcs behind the rings move to show, per year, the average rating of reviews containing the highlighted word. This reveals patterns in word usage both over time and by review rating. A combination of plucked sounds is also played at this time, using the same note-to-rating mapping as before. However, unlike the single note played when the viewer taps on a review, tapping on a word plays three notes simultaneously (mapping shown above). The first note begins at octave 1 and encodes the lowest-rated review that mentions the word in the currently selected year. The second note is rooted at octave 2, encoding a rounded average of the reviews containing the word that year. Finally, the third note begins at octave 3 and encodes the highest star rating that the word is mentioned in that year. Having the three notes play in unison creates a chord that tells a story about the data. To hear a different year's sound, the person using the visualization can tap on another review, then re-highlight the word.
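A sketch of how those three chord notes might be derived for a highlighted word, using an illustrative review shape and the same hypothetical note mapping as above:

```swift
import Foundation

// Illustrative sketch; the review tuple shape and note mapping are assumptions, not the app's model.
func note(forStars stars: Int) -> Int { 48 + (stars - 1) * 4 }   // same hypothetical mapping as above

func chordNotes(forWord word: String,
                reviews: [(stars: Int, text: String, year: Int)],
                selectedYear: Int) -> (low: Int, mid: Int, high: Int)? {
    // Ratings of the selected year's reviews whose text mentions the word.
    let ratings = reviews
        .filter { $0.year == selectedYear && $0.text.localizedCaseInsensitiveContains(word) }
        .map { $0.stars }
    guard !ratings.isEmpty else { return nil }

    let lowest = ratings.min()!
    let highest = ratings.max()!
    let average = Int((Double(ratings.reduce(0, +)) / Double(ratings.count)).rounded())

    // Root the three notes one octave apart: lowest rating, rounded average, highest rating.
    let octave = 12
    return (low: note(forStars: lowest),
            mid: note(forStars: average) + octave,
            high: note(forStars: highest) + octave * 2)
}
```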
When the viewer taps on a new word, the new word becomes highlighted and the old word becomes unhighlighted, updating the visualization accordingly. Tapping the currently highlighted word again unhighlights all words. While I did consider implementing the ability to compare multiple highlighted words, having multiple colours in the visualization made it challenging to pick out which reviews contained which words. For example, if word A was encoded with a red colour and word B was encoded with a blue colour, should a review that contained both word A and word B be encoded with a green colour? Since this mapping was not clear, I avoided supporting it in the current version of the project.


In addition to the restaurant name and location, the top of the visualization contains three circles. These circles colour-encode the three Yelp rating traits: Useful, Funny, and Cool, which act similarly to Facebook's "Like" button but in three distinct categories. The size of each outer ring is constant and represents the maximum value of that trait among all of the reviews. The size of the inner circle shows the average number of likes in each category for the currently selected review. When tapping on one of the circles, the main visualization changes colour. While hue is designed to encode whether or not a review contains a selected word, a lighter shade of the hue informs the viewer that the review does not have any "likes" in the currently selected category. This might help them pick out reviews that other people found useful, funny, or just plain cool.
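A small sketch of that inner-circle sizing, with hypothetical names; the outer radius stays fixed at the trait's maximum across all reviews:

```swift
import UIKit

// Illustrative sketch: scale the inner circle relative to the trait's maximum value across
// all reviews, so a full inner circle means "as liked as the most-liked review".
func innerCircleRadius(traitValue: Int, maxTraitValue: Int, outerRadius: CGFloat) -> CGFloat {
    guard maxTraitValue > 0 else { return 0 }
    return outerRadius * CGFloat(traitValue) / CGFloat(maxTraitValue)
}
```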

Tapping the X in the top-left corner of the visualization returns the app to a small-multiples grid view of businesses. This makes it possible to compare multiple businesses' reviews and quickly pick out general trends without having to dive into any details. In the top-left corner of the grid view, a title header indicates which business category is currently being viewed. Tapping on this header also allows the viewer to pick a different business category from a list. As the person using the app scrolls down the grid of businesses, more data is automatically downloaded from a local MongoDB database containing the full Yelp dataset. As one might come to expect from other iOS apps, tapping on an item in the grid expands the visualization to a full-screen, interactive size. While not supported in the current version, in the future it would be interesting to also show currently highlighted words in this view and compare them across businesses. Adding different sorting options could also help reveal other trends between businesses.
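The incremental loading can be sketched roughly as follows; the page size and the injected fetch closure are stand-ins for the app's actual layer in front of the local MongoDB database:

```swift
// Illustrative sketch of the grid's incremental loading; names and the page size are assumptions,
// and fetchPage stands in for whatever layer the app uses to query the local MongoDB instance.
final class BusinessGridModel {
    private(set) var businesses: [String] = []   // business ids, kept simple for the sketch
    private var isLoadingPage = false
    private let pageSize = 30
    private let category: String
    // (category, numberAlreadyLoaded, pageSize, completion with the next page of ids)
    private let fetchPage: (String, Int, Int, ([String]) -> Void) -> Void

    init(category: String,
         fetchPage: @escaping (String, Int, Int, ([String]) -> Void) -> Void) {
        self.category = category
        self.fetchPage = fetchPage
    }

    // Called as the grid scrolls; requests the next page once the viewer nears the last loaded item.
    func loadNextPageIfNeeded(visibleIndex: Int) {
        guard visibleIndex >= businesses.count - 5, !isLoadingPage else { return }
        isLoadingPage = true
        fetchPage(category, businesses.count, pageSize) { [weak self] newItems in
            self?.businesses.append(contentsOf: newItems)
            self?.isLoadingPage = false
        }
    }
}
```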

 
[Figure: A technical rundown of each encoding used in the visualization]
