Backstory. 

Shortly after I joined the design team at Five & Done, I was asked to put my data science hat back on and dig into some user research we had just completed for one of our clients in the automotive industry.

We’re constantly looking for ways to improve online car shopping, and one of our areas of expertise is the Special Offers part of the site. This is where an auto brand will advertise regional incentives, finance deals, lease specials, and other promotions.

Examples of OEM (Original Equipment Manufacturer) Special Offer pages

To improve conversions for our client, our team had to better understand what car shoppers thought of these offers, how well they understood them, and where in their shopping journey they considered them. Armed with tons of caffeine, another designer & I set out to seek answers (and maybe a few tips for when I eventually buy my first car 👀).

In this post, I’ll share some challenges and lessons we learned on our user research journey.

Survey sorcery.

On average, a car shopper takes 100 days from “I should buy a new car” to actually driving off the lot. Well, when you're looking at website analytics, you don’t always know if that shopper is on Day 1 or Day 100 of their journey to new car ownership. Our team needed to understand more about intent at different stages of the shopping and decision-making process. We also needed a data set large enough to draw statistically meaningful conclusions. User testing would have been too anecdotal at this stage.

We needed to go breadth-first.

This is why we kicked off with an industry-wide online quantitative survey. We cast a wide enough net to collect data points from 500 recent car shoppers & purchasers, and then manually performed cross-analysis on the Qualtrics platform (as a former data scientist in big tech, I nerded out here). So! Much! Data!

This method served our purpose, which was to understand general roadblocks in the shopping funnel. We also found it to be an efficient way to gather answers at a statistically meaningful scale, such as discovering that 60% of customers shop with a ‘vehicle-first’ mindset and use OEM websites mainly to understand which models & trims fit their needs.
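
(A quick aside for fellow stats nerds: here’s a back-of-the-envelope sketch, not our actual Qualtrics analysis, of what a sample of 500 buys you around that 60% figure, using a standard normal-approximation confidence interval.)

```python
import math

n = 500        # survey respondents
p_hat = 0.60   # share who reported a "vehicle-first" mindset
z = 1.96       # z-score for a 95% confidence level

# Normal-approximation (Wald) margin of error for a proportion
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)

print(f"60% ± {margin:.1%}")                                   # ~±4.3 points
print(f"95% CI: [{p_hat - margin:.1%}, {p_hat + margin:.1%}]")
```

Roughly ±4 percentage points either way.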

But that doesn’t mean the survey on its own was comprehensive enough for our research goals.

For example, one of the eyebrow-raising stats from our survey analysis, and one directly relevant to our goal of revamping our client’s offers page, was: 

“65% of shoppers are price-conscious, but only 18% considered OEM offers”

The data told us that people cared about price, but also that they didn’t care much about offers. Unfortunately, that’s all the data told us on this topic; it didn’t explain the reasoning behind the apparent contradiction. This example captures a limitation we ran into with surveys while trying to unravel why car shoppers get lost in the sea of car offers. Sure, all of those stats are great. But when you can’t pinpoint the why, there isn’t much to evaluate and consider when designing new screens later on. 

This limitation became more obvious when paired with other research challenges we encountered - 

  1. Poor response quality. Conflicting survey data, incomplete answers, & ambiguous answer options all made it difficult for us to identify shopping frustrations and trends.
  2. Recall bias. A good chunk of surveyed shoppers had purchased their new car several months prior, which made their responses less reliable without more context about the person.
  3. Lack of user familiarity. Many shoppers didn’t fully understand how offers are applied to a car purchase or how they affect the final payment.

So at the end of the day, while we surfaced which pain points existed and how many shoppers experienced them, we still didn’t fully understand shopper habits and motivations…

…which brings me to my next point. 

Context matters. 

As a next step, our team conducted remote, moderated 1-1 user interviews to dive deeper into the survey findings that caught our interest, such as the stat called out earlier. Specifically, we talked to 10 recent purchasers, 5 early-stage shoppers, & 5 late-stage shoppers across the country.

How did we reach this decision? 

While the survey was a good way to gauge how widespread certain problems were, we chose user interviews to explore the individual motivations & experiences that numbers alone couldn’t reveal. At the end of the day, car shopping is a niche activity. If we were studying user behaviors in e-commerce, our approach might have been different. 

Our decision to follow up with this form of qualitative research proved fruitful, yielding juicy insights we wouldn’t have gotten otherwise. 

Getting to talk to shoppers about the actions they actually took (or were actively taking) stood in stark contrast to the earlier survey responses, in which people indicated what they would do in theory. For example, in our 1-1 interviews, many users said that interpreting the information & overall value of online OEM offers was mentally taxing on top of an already stressful process. As one of them best put it - “It’s a $50,000 purchase, I’m not just buying shoes!”

An interviewee resorting to ChatGPT to try to make sense of an offer

One challenge we faced was deciding how many users we should talk to. If you’re a stats nerd, you’ve heard that a larger sample size means lower margins of error & tighter standard errors. But interviews are a manual process and don’t scale, and it was critical to get valuable insights as efficiently as possible. In our scenario, we found that less is more: after talking to just 5-7 users, the same patterns started to emerge. If we were to go back and save some time & money on this project, we probably could have gained the same insights with a smaller sample size. 
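
For a rough intuition on why the repetition kicked in so early, the classic Nielsen & Landauer problem-discovery model is a handy back-of-the-envelope check (an assumption I’m layering on here, not something we formally applied): the share of issues surfaced by n interviews is 1 - (1 - p)^n, where p is the chance that any one user reveals a given issue.

```python
# Problem-discovery curve (Nielsen & Landauer): share of issues
# surfaced after n interviews, assuming each user reveals a given
# issue with probability p. 0.31 is their oft-cited average.
p = 0.31

for n in range(1, 11):
    found = 1 - (1 - p) ** n
    print(f"{n:2d} users -> ~{found:.0%} of issues surfaced")
```

With p ≈ 0.31, five interviews already surface roughly 85% of issues, which lines up with the pattern repetition we saw around users 5-7.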

Couple final thoughts.

Still there? 

If you’re currently in the midst of combing through lots of data or are about to start your UX research journey, some things to consider - 

It’s okay to combine different methods. A two-pronged approach focused on breadth & depth was very useful for this project. At Five & Done, we value diverse perspectives, and using different methods covered unique POVs that might have been missed with a single approach. Combining methods also let us cross-validate certain survey findings, which increased the reliability of our insights. 

(If you’re short on budget and time, however, I’d suggest going straight into a few user interviews.)

Moving forward, how can we leverage AI for UX research? We all know AI is taking the world by storm, and there seems to be a new AI-based tool for just about everything. Even an ‘AI Excuse Generator’, a godsend for the perpetually tardy among us. 

AI is already revolutionizing UX research by automating data analysis and uncovering user needs faster. 

A huge challenge in our project was wrangling the not-so-user-friendly Qualtrics platform to run multivariate analysis with its cross-tabulation tool (while remembering which combos we’d already tried), and going back & forth between interview clips at 1.5x speed and in-depth transcript review. If we were to redo our approach (or apply this learning to future projects), I believe leveraging AI-based tools would save time on manual processes, reduce human error in data calculations, & go further in contextualizing user pain points. 
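
To make that concrete, here’s a hypothetical sketch (the file and column names are made up for illustration - this isn’t our actual pipeline) of how a raw survey export plus a few lines of pandas could replace much of the click-through cross-tab work:

```python
import pandas as pd

# Hypothetical raw-response export (e.g. a CSV downloaded from Qualtrics);
# the file and column names below are illustrative only.
responses = pd.read_csv("survey_export.csv")

# Cross-tabulate price sensitivity against offer consideration,
# normalizing by row to see what share of price-conscious shoppers
# actually looked at OEM offers.
offers_by_price = pd.crosstab(
    responses["price_conscious"],        # e.g. "Yes" / "No"
    responses["considered_oem_offers"],  # e.g. "Yes" / "No"
    normalize="index",
)
print(offers_by_price)

# The same idea by journey stage, to see where offers drop out of mind.
print(pd.crosstab(
    responses["journey_stage"],          # e.g. "Early" / "Late" / "Purchased"
    responses["considered_oem_offers"],
    normalize="index",
))
```

Scripting the combos also leaves a reproducible trail, so you’re not relying on memory for which cuts you’ve already tried.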

Anddd that’s a wrap!

Thanks for reading my reflections as a data scientist turned designer. Hope you learned a thing or two from our user research journey, and stay tuned for more content in the near future!