Immediate Value of CX/UX Research, Part 1

Debbie Levitt
Published in R Before D
Feb 1, 2024

For some unfortunate reason, important people at our jobs keep hearing from UX and non-UX people that it’s hard or impossible to determine the value of CX and UX work. These messages make it up the leadership chain, and the general perception, especially of qualitative research, is that it must have low value; otherwise, it would be easy to demonstrate the value.

Before we look at some of the ways in which qualitative research done well has immediate positive value (part 2 of this two-article series), let’s consider how we can know during and after releasing products or services that we have done the wrong research or not enough research.

This article is based on a slide from my multi-day SPACE (“Strategizing Products and Customer Experiences”) workshop, which I give live a few times a year and which costs under $250 USD to attend: cxcc.to/space


Bad or no evidence is a root cause of poor strategies, decisions, and outcomes.

During a project, watch for symptoms that show we are working from bad evidence or a lack of knowledge:

  • There is no overarching CX, UX, or product strategy. Sure, someone has a to-do list of features, but what are we really doing and why? Did we challenge projects with bad or no strategy? Is our strategy about “making the numbers” for the company, but we have no customer-focused strategy?
  • Do we have success criteria? Do the criteria mention customer success or only business success? Do we have clear and reasonable KPIs that relate to the work we’re doing? Are we measuring anything around how well we solved customers’ unmet needs?
  • Do we have customer-focused problem statements? We might have a statement about what the business hopes to achieve, but where is the customer? “Higher conversion rate” isn’t a customer-focused problem since it’s not something the customer necessarily wishes for themselves. If you lack problem statements based on evidence and knowledge, then you are probably not working from good data about users and their problems. A reverse-engineered guess at why people might need the thing you want to build anyway is not a customer-focused problem statement created from good research.
  • Making decisions without the right evidence is hard, so we tend to fight over opinions. Without qualitative data and a clear strategy to guide us, decisions often come down to what a PM or Engineer likes. When we pretend to have empathy and wonder what the user would like, we don’t really know. We’re guessing and assuming, which is risky. When we have reliable and strong qualitative and quantitative data, our team understands customers and their needs, tasks, and perspectives. We are then more likely to align around how we balance those with the outcomes the business wants for itself.
  • As our project progresses, check in with your company’s values. Are your concepts, solutions, and designs trustworthy? Honest? Ethical? Showing respect to customers? Empowering them? You would have to understand customers and users well enough to know what would be honest, trustworthy, and valuable to them. If you are not sure, then you lack the good evidence that comes from qualitative research.

Another sign: the quality of our research process.

We can often tell quickly, even immediately, that research was planned or executed badly in one or more ways.

Someone at your company usually has research expertise, often in the CX or UX department. You often have access to an experienced, professional Researcher. Show them your plan: what you are trying to learn, the method(s) you chose, who you plan to recruit, what you plan to ask them, and how you intend to use the insights later.

Will this research be done well? Better evidence, data, and knowledge come from better research. Specialized Researchers will know if your plan is likely to lead to skewed, flawed, or wrong “information” that will color your decisions. I can tell you that a pizza made of only potatoes, cut grass, and red wine (and certainly no cheese, wheat, or sauce) won’t be a good pizza. I know this and can warn you early about how that “pizza” might turn out.

One area that’s easy to get wrong is your research participants: the wrong number of people, the wrong audience, or both. It’s common for non-Researchers to imagine that great research insights can come from asking three people what they think, what they need, or what our products are missing. These aren’t good UX or user research questions. Perhaps these are “market research” questions.

But more importantly, why only three when that doesn’t meet any of the guidelines UX and User Researchers have used for decades? What if those three are all white guys? Is this still good research? If one guy says he wants something, do we jump on that? Or do we wait for more data from a larger population to decide or strategize?

Symptoms that we’re working from bad evidence are really clear after we release products and services to the public.

By then, it will cost us way more to fix the problems we created. We might lose customers and not be able to get them back. Risks, waste, and the Costs of Poor Quality will be high when we don’t notice or care about bad evidence as a root cause. Symptoms include:

  • High Support utilization. Is there a wave of Support tickets about your change or new feature? Then something is wrong: the wrong feature, the wrong idea, or the wrong design. We didn’t create the right thing for our audience. Perhaps if Support costs were added to the project budget, someone would care when we push our problems off onto the Support team.
  • Negative Voice of the Customer sentiments are hard to ignore. Angry tweets and social posts, poor ratings and reviews, and lower customer satisfaction or NPS scores tell us clearly we got something wrong.
  • Did those unhappy customers speak with their dollar and downgrade or leave? Are we seeing lower retention or loyalty? We’re not meeting customers’ needs, and they are tired of our guesses and the crumbs we thought would be “good enough.”
  • An A/B test where B loses tells us that our new idea is worse than the one we already had. We knew our existing product wasn’t great, and we were trying to fix or replace it. We must be working from bad or no evidence if we got this far and didn’t know that B was worse than A.
  • Other failures in experiments or releases. We might have offered something we were confident about. Or we knew it kinda sucked, and we figured we’d see how it does and fix it later. Failing, especially frequent failures, is a sign that we don’t understand users well enough to deliver something that works for them.
  • Low feature utilization. We expected people to like or even need our new feature, and then we found that people ignored or avoided it. We must not have understood their tasks, habits, or needs well.
  • We didn’t meet or exceed the KPIs and OKRs we set up as part of the project’s strategy. We missed our goals, and now we have to figure out why.
  • We’re doing questionable or unethical things to try to show “good” results or metrics. If our high-quality, high-value products and services naturally made customers and users want to buy more or grow with us, we wouldn’t need to “nudge” them or treat them like pawns being pushed around a chessboard. I once saw a company remove nearly every button and link from a screen because the VP of Product told the team to do whatever they had to do to get more clicks on the one button left on the screen. This is questionable, and a clearly poor experience for users who got the B variant that removed choices and features they relied on.

Poor research, the wrong research, not enough research, working from outdated research, working only from desk research… any of these forms of bad evidence is a root cause of our disaster projects and of poor customer and business outcomes.

Smarter and more strategic teams care more about the quality of research insights than how fast someone got anything we could call “data,” especially bad data that led us astray.

Please continue to part 2, where we examine the value of evaluative and generative research in more detail.

Connect with us or learn more:


“The Mary Poppins of CX & UX.” CX and UX Strategist, Researcher, Architect, Speaker, Trainer. Algorithms suck, so pls follow me on Patreon.com/cxcc