
Yelp Mobile Redesign

10-week class redesign project

Timeline: Mar 2021 - Jun 2021

Role: UI/UX Researcher and Designer

Tools: Figma, Google Docs

The main focus was on improving Yelp's mobile user experience while tackling misinformation in food and drink review apps. Because reviews are opinions based on a user's experience, we wanted to remove as much ambiguity from the overall rating as possible and make the review process reflect each aspect of the experience that makes up the whole.


Digital mockups of our Yelp rating redesign.

Problem

Eight out of the ten Yelp users our team spoke to said that Yelp reviews and ratings do matter, whether they were potential customers, business owners, or employees. These same individuals also believed Yelp's system could be improved.

Chart: number of users who believed Yelp's system could be improved.

Three of the customers we interviewed said that when they look for new places to eat, they often use Yelp to get a preview of a restaurant. While ratings did not always directly determine whether or not they ate somewhere, they admitted to thinking twice about lower-rated restaurants. We also learned that roughly 20% of Yelp reviews are suspicious and sometimes considered fraudulent.

Chart: the share of Yelp reviews considered suspicious.

On the receiving end of reviews (good or bad), the two business owners and three employees we interviewed agreed that they "absolutely take reviews seriously and make changes as necessary," but wished there were some way to quality-check the reviews. When asked what they would change about rating systems, they wanted one that allowed for more detailed reviews.

This led to our guiding questions: How could we revise Yelp’s system so that users could read and write more detailed, honest reviews? Which aspects of the process would we change, and how would those changes affect the user experience?

Users

The targets of this Yelp rating redesign were the three groups of stakeholders identified in our initial interviews: customers who use Yelp, business owners, and employees. A more robust process would let users read and leave more constructive reviews while discouraging the misinformed, outlandish reviews some of our stakeholders encountered. Such a system would, in turn, allow business owners, small ones especially, to trust the quality of reviews and make changes based on feedback.

Team and Role

We were a team of 5 designers with varying skill levels in the different aspects of the design process. Some were better at researching, interviewing, and managing while others were stronger in sketching and prototyping.

My main contributions were creating the digital prototype, laying out the interface, prototyping interactions between screens, and making persona templates for group members to use. I also contributed to interviewing stakeholders, sketching wireframes, and writing the user testing protocol based on our prototype flow.

Design Process 

Stakeholders were quick to point out their pain points with the current rating and review system. The main one was wanting a more detailed breakdown that exposes the specifics of a review. One stakeholder mentioned that this could be as simple as 'pros vs. cons' or categorical ratings for each aspect of a dining experience. Other pain points included difficulty verifying the validity of reviews, blatantly ridiculous reviews, and experiences that don't match up with previous ratings or past visits. Based on this information, I created two personas and storyboards to get an idea of how these scenarios might play out in real life.

One of five personas our team generated from our extensive stakeholder interviews.

Wireframing helped us visualize our ideas for a revised rating system and got the team thinking about how to create a proper flow around the stakeholder pain points. This activity also helped resolve disagreements about how we would present the information and the new process to stakeholders and testers. The two wireframes below inspired the flow of creating a review in the prototypes. As a team, we learned how to balance the space-to-features ratio and settled on our criteria for creating a review.

Two of the sketches and wireframes that influenced our digital prototypes.

The goal for the first digital prototype was to implement a categorical breakdown when writing a new review, since this was the part of the app our stakeholders had the most issues with. I suggested that we break the overall rating into three categories for writers to rate: food quality, service quality, and atmosphere of the restaurant, which we did. The team then had to decide how this should be implemented, as there were two competing ideas for completing the process: an all-in-one screen or a step-by-step flow. After much debate, we agreed to let testing determine the better approach, based on the low-fidelity prototype I helped create below.

The updated rating process implemented in our first digital prototype, used for testing.
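To make the categorical breakdown concrete, here is a minimal sketch of how such a review could be modeled in code. The field names and the simple averaging rule are my own assumptions for illustration only; our actual deliverable was a Figma prototype, not an implementation.

```typescript
// Hypothetical data model for a categorical review (illustration only;
// names and the averaging rule are assumptions, not part of the prototype).
interface CategoryRatings {
  foodQuality: number;    // 1-5 stars
  serviceQuality: number; // 1-5 stars
  atmosphere: number;     // 1-5 stars
}

interface Review {
  ratings: CategoryRatings;
  comment: string;
}

// Derive the overall score as the mean of the three category ratings,
// so no single aspect dictates the whole review.
function overallRating(review: Review): number {
  const { foodQuality, serviceQuality, atmosphere } = review.ratings;
  return Math.round(((foodQuality + serviceQuality + atmosphere) / 3) * 10) / 10;
}

// Example: great food offsets slower service rather than being erased by it.
const example: Review = {
  ratings: { foodQuality: 5, serviceQuality: 3, atmosphere: 4 },
  comment: "Great food, slow service, cozy atmosphere.",
};
console.log(overallRating(example)); // 4
```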

Testing this iteration proved valuable and insightful: 9 of our 10 testers generally liked the new process. All three of the users I tested with were satisfied that the process encourages users to rate each aspect of their experience rather than let one good or bad aspect dictate the whole review. One tester from the customer stakeholder group said the categories helped "guide them through what they wanted to speak about in their experience" and found the step-by-step process intuitive to follow. A few testers did critique various bugs and unfinished sections, as well as the lack of a confirmation step and the inability to go back and edit.

Our final product addressed the testers' feedback, added color to the interface, and implemented proper interactions between screens. I also added the ability to read existing reviews and filter them by time period using overlays and pop-out menus (though this is not yet fully functional).

Various screens of the updated rating process in our final prototype, now with color!

Outcome

We did one round of validation testing, and our stakeholders were highly satisfied with the final product. Many believed that implementing this process would benefit everyone who uses Yelp, and they felt it delivered on their needs and desires. For future work, a few users wanted to see this grow beyond an interactive digital prototype. Our team's mentor and instructor also thought the idea had potential for future exploration.

This project refined my skills and knowledge of digital prototyping and prototyping tools. I learned plenty about Figma's different features and functions while also discovering its limitations. The process also taught me more about conducting user research and drawing insight from stakeholder interviews. I learned how challenging, yet rewarding, it can be to coordinate a team whose members have varying skill levels and different approaches to design solutions. I also came to better appreciate the creativity of others and the compassion needed to ensure that each teammate feels proud of their contributions.

To view the whole prototype, click here.
