AI Tools in UX Research (From AI’s Perspective)

Debbie Levitt
Published in R Before D
8 min read · Feb 8, 2024


Here’s a conversation I had with Claude.AI (#NotSponsored).

Screenshot of my chat with Claude. Everything is pasted below.

Prompt:

There are many AI tools that claim they can act as your company’s users, and help you know what those people want. There are AI tools that claim that they can interview participants for UX research. There are AI tools that claim that they can analyze UX research data better and faster than a human. Can you please tell me who is the target audience for these tools?

Claude’s response:

Based on your question, it seems you’re asking about the intended users and customers for AI tools that claim to help with user research and analysis. Some potential target audiences for these AI tools could include:

  • Product and UX designers at tech/software companies who want insight into user needs and feedback to inform product decisions. These AI tools promise faster analysis to allow designers to iterate more quickly.
  • Market researchers at consumer companies who want to supplement traditional market research by getting quick insights from “interviews” with many synthesized users.
  • Solo founders and entrepreneurs who don’t have big research budgets and want DIY tools to help understand potential customer needs.
  • Business analysts at large enterprises exploring voice-of-the-customer analysis to feed into product roadmaps or service improvements.

The value proposition of these AI tools is often speed and scale — they claim the ability to generate insights from large volumes of synthesized data faster than manual user research. The promised benefit is quicker ideation and product iteration.

The target customer tends to be anyone responsible for gathering user insights for product or service development, but who lacks time, budget, or access to customers for more traditional UX research. But claims around effectively substituting AI for real user research should be examined closely.

In summary — the audience is often UX practitioners, analysts, and product developers who want faster and cheaper ways to gather user insights to inform business and product decisions. But caution is warranted around claims of parity with true ethnographic and user research.

Prompt:

I noticed that you didn’t list UX Researchers among the target audience. Why not?

Claude’s response:

You’re right, I did not explicitly include UX researchers among the target audiences for these AI tools. There’s a good reason for that.

While UX researchers could technically use these tools to supplement some limited aspects of their user research workflows, they are less likely to be the true target audience. Here is my thinking:

  • Skilled UX researchers deeply value direct engagement with real users through interviews, observations, and ethnography. They are unlikely to see AI synthesis as an adequate substitute.
  • Seasoned researchers apply advanced qualitative analysis skills to make sense of human behavior. They would be critically aware of the limitations of solely quantitative AI analysis.
  • Researchers uphold ethical codes. They may object to some methods of synthesizing fake user data and perspectives without clear disclosure.

However, the vendors marketing these AI tools often present them as being able to replace traditional UX research. The messaging seems better aligned with managers and product developers unfamiliar with the nature of user research.

In essence, practiced UX researchers possess the human-centered skill sets and ethical values that would make them skeptical buyers. They are unlikely to be the eager early adopters — thus I did not consider them the primary targets.

Prompt:

In which situations are these AI tools most helpful, and where do human researchers still outperform them?

Claude’s response:

I don’t think AI tools can fully replace human researchers and designers at this point in terms of depth of insight and understanding context. However, AI tools can complement human-led UX research and analysis in certain situations:

The AI tools are most helpful when:

  • There is a need to quickly gather or synthesize perspectives from a very large sample size that would be infeasible for human researchers to interview qualitatively. AI can generate useful quantitative data.
  • There is value in rapidly iterating on a high volume of ideas or concepts. AI tools can quickly indicate which versions resonate best with an audience.
  • There are budget or access constraints that limit conducting traditional user interviews and ethnography. AI provides a low-cost supplementary approach.

However, human researchers still significantly outperform AI tools in:

  • Building empathy and reading emotional cues during qualitative interviews.
  • Picking up on body language, micro-expressions, and contextual details.
  • Improvising lines of questioning based on participant responses and behaviors.
  • Understanding cultural nuances that shape perspectives.

I suggest leveraging both — using AI to rapidly gather data at scale and humans to conduct deeper ethnographic and observational research. With a balanced approach, they can complement each other based on project needs and resource constraints. Relying solely on AI risks missing out on the rich insights only qualitative human research can reveal.

The best application depends greatly on context — the product, users, phase of development, etc. Blanket claims about AI replacing human UX research should always be scrutinized as both bring distinct value.

Prompt:

Given AI’s current capabilities, and looking at the points you made about where AI is helpful, what are the potential risks or downsides in using AI for these purposes?

Claude’s response:

There are several potential risks and downsides to relying too heavily on current AI tools for user research and analysis:

Biases and inaccuracies:

  • AI models can reflect and amplify existing societal biases around gender, race, and age if the training data contains imbalanced samples. This could lead to skewed or misleading findings.
  • Synthetic interviews and data analysis may miss nuances, emotions, and sarcasm that humans would catch, leading to incorrect insights.

Over-generalization:

  • Findings from AI analysis of limited samples may not generalize well to the broader target user population and their needs. Results could steer product decisions in the wrong direction.

False precision and rigor:

  • Quantifying subjective emotional preferences on a 1–5 scale via AI is misleading since user needs are multilayered and contextual. Too much precision is claimed.
  • Presenting polished infographics, confidence scores, and statistics can falsely signal the legitimacy and scientific rigor of the analysis.

Ethical issues:

  • Is it appropriate for businesses to deploy tools that synthesize fake personalized data without informed user consent just to aid product decisions?

To conclude, over-reliance on AI bots and analysis currently risks introducing bias, inaccurate findings, false precision, and other harms. As technology advisors, we have an ethical duty to provide guidance around responsible use. AI should complement human understanding rather than aim to replicate the richness of real user insights. A balanced approach is best moving forward.

Prompt:

Which AI tools are most accurate when it comes to analyzing research data?

Claude’s response:

Unfortunately, there is no definitive or impartial answer regarding which specific AI tools are “most accurate” for analyzing research data. The accuracy depends greatly on:

  1. The quality of the training data used: Tools trained on larger, unbiased, high-quality datasets tend to perform better during analysis. Tools relying on limited or skewed data may produce less accurate insights.
  2. Transparency around development: It is hard to reliably evaluate tools that do not openly publish details of their training process and dataset characteristics. Lack of peer review also hinders independent assessment.
  3. The research methodology and rigor in validation: There have been few robust peer-reviewed studies comparing different AI research analysis tools under consistent conditions. Most accuracy claims come directly from vendors and are hard to verify objectively.

Given these limitations, it is hard to highlight one tool as definitively “most accurate” based on current public knowledge. In general, I would advise applying critical thinking when evaluating accuracy and methodology claims from AI vendors. Seeking independent peer reviews of tools can also provide more objective assessments over time. But evaluating true accuracy remains challenging; the wise approach is to proceed with caution rather than assuming any one tool is the best option. As with much AI currently, transparency and responsible skepticism are key.

Prompt:

You’ve mentioned analyzing survey data but what about AI tools for analyzing observational or interview research. Qualitative research. Would AI be good at that?

Claude’s response:

As of now, I would argue that AI has very limited capabilities when it comes to effectively analyzing qualitative user research data like interviews or observational studies in a contextual and meaningful way.

The key strengths of qualitative research come from attributes difficult for current AI to reliably replicate:

  1. Noticing subtle human emotional expressions, micro-reactions and contradictions that provide deeper insight.
  2. Picking up on nuances in language, slang, inferred context, sarcasm, irony that shape meaning.
  3. Building empathy through extended engagement to understand subjects’ environments, motivations, and reasoning.
  4. Improvising follow-up questions uniquely tailored to the specific responses and behavior of research participants.
  5. Synthesizing and analyzing complex emotional insights across subjects in a holistic, intellectually curious way.

While AI can perform functions like sentiment analysis and basic pattern identification, it lacks the contextual emotional intelligence to deeply understand qualitative data or ask meaningful follow-up questions. The connections made are often circumstantial rather than causal or revealing.
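As a toy illustration of how shallow that “basic pattern identification” can be, here is a minimal word-list sentiment scorer in Python. The vocabulary and interview snippets are invented for this sketch (they don’t come from any real tool or dataset), but the failure mode is the one described above: a literal complaint is scored correctly, while a sarcastic one is scored as positive.

```python
# Toy word-list sentiment scorer, a rough stand-in for the kind of
# "basic pattern identification" these AI tools offer.
# The vocabulary and snippets below are invented for illustration.

POSITIVE = {"love", "great", "easy", "helpful"}
NEGATIVE = {"hate", "confusing", "slow", "broken"}

def sentiment_score(text: str) -> int:
    """Count positive words minus negative words; no context awareness."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

literal = "The checkout flow is confusing and slow."
sarcastic = "Oh great, another popup. I just love those."

print(sentiment_score(literal))    # -2: correctly negative
print(sentiment_score(sarcastic))  # 2: scored positive; the sarcasm is missed
```

Real tools use far more sophisticated models than a word list, but the sarcastic example shows the same category of error: surface patterns without the contextual judgment a human researcher brings.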

In short, AI cannot replicate the essence of what makes qualitative research so insightful — the ability to form empathetic human connections that reveal richer reasoning behind behaviors. We still far exceed AI in that ability.

So for analyzing interviews or ethnographies, qualified human researchers are still vital. AI can at best play a basic assisting role in qualitative analysis currently. The risk of over-reliance on AI here is very high given the contextual judgment required.

Conclusions from the human author writing this article:

  1. AI knows that AI tools claiming to replace human research aren’t aimed at professional UX Researchers. They are aimed at others hoping to make research faster and cheaper.
  2. Claude said it best when it said that the messaging around AI tools for user research “seems better aligned with managers and product developers unfamiliar with the nature of user research.” If you are unfamiliar with user research, how it’s done, or how to do it well, AI tools might sound really appealing.
  3. Your research data or analysis might not be more accurate. You might end up with more bias, mistakes, AI hallucinations, etc.
  4. Garbage in, garbage out. If you wrote a crappy survey, good AI analysis of that data can’t fix purposeful or accidental survey design mistakes.
  5. AI knows that at this time, AI isn’t better than a human when it comes to planning, executing, or analyzing qualitative research.
  6. AI can make some research work faster, but it is likely to be less accurate, add bias, or cause problems that matter more than “how long did this take.”

— — — — — — — — — — — — — — -

Connect with us or learn more:

  • CustomerCentricity.com Our website
  • youtube.com/@cx-cc YouTube channel
  • cxcc.to/newsletter LinkedIn newsletter
  • patreon.com/cxcc Online community/free mailing list


“The Mary Poppins of CX & UX.” CX and UX Strategist, Researcher, Architect, Speaker, Trainer. Algorithms suck, so pls follow me on Patreon.com/cxcc