Let’s be honest: in 2025, it’s harder than ever to cut through the noise about AI. Everyone’s got a hot take, a new tool, or a bold prediction. But in the day-to-day of UX research, where nuance, context, and real people matter, it’s not always clear how AI fits in.
Here’s what I’ve come to believe:
AI is powerful, but it doesn’t replace human judgment.
The best research today blends both; it's more nuanced than an either/or. And the best researchers know when to trust AI and when to trust themselves.
AI Can Accelerate. You Still Have to Make Meaning.
In my own work and in conversations with fellow researchers, I’ve seen AI shine as an accelerator. Summarizing hundreds of survey responses? Yes, please. Organizing open-ended feedback into themes at 3am so I don’t have to? Thank you, machine!
But here’s the catch: AI is a speedboat, not a compass. It gets you to patterns faster, but it can’t yet tell you which ones matter, why they matter, or what to do about them.
Take this example from a digital journaling app.
AI analyzed dozens of user comments and flagged “lack of reminders” as a top complaint. Many users had written things like “I forget to write” or “wish I got nudged more often.” At first glance, the solution seemed obvious: add stronger, more frequent reminders.
But when a researcher reviewed full interviews and behavioral data, a more complex insight emerged. Users weren't simply forgetting; they were avoiding. Journaling often brought up difficult emotions. People wanted to write, but only when they felt ready. Standard reminders weren't helping. They were triggering guilt.
⚡️ AI surfaced a usability issue. The researcher uncovered an emotional barrier.
Instead of building more notifications, the team introduced a soft, weekly check-in: a mood reflection that invited users to reflect without writing. This felt supportive rather than pushy, and gave users permission to re-engage on their own terms.
Engagement improved not because nudges got louder, but because they got smarter (and more empathetic).
Signal vs Story
💡 Tips
- Treat AI-generated themes as hypotheses, not conclusions. Always validate with raw data.
- Listen to or read a sample of the original user responses to capture nuances AI might miss.
- Look for emotional cues, contradictions, or context that AI can’t interpret.
- Question whether a frequently mentioned issue is truly a root problem, or a symptom of something deeper.
- Use AI to save time on surface-level analysis. Then, dig in for strategic insight.
Pattern Recognition is Not Empathy
Even the most advanced LLM doesn’t feel the tension in a user’s voice during a painful onboarding flow. It doesn’t know when a participant’s offhand comment is actually a game-changing signal. AI can surface themes, but it can’t sense the deeper needs beneath the words.
This is where your emotional radar matters more than ever.
In one usability test I ran, a participant casually said, “I guess I’d just Google this instead.”
AI flagged it as a neutral remark. But to us, it was a signal that our in-app help wasn't meeting users where they were.
When we followed the thread, we discovered it wasn’t just a usability issue. It was a trust gap. Users did not expect our self-service support to be helpful, so they bypassed it entirely.
⚡️ AI flagged the surface-level pattern. Human researchers uncovered the deeper emotional need.
That single insight sparked a cross-functional effort to rebuild our help experience and rebuild trust earlier in the journey.
💡 Tips
- Don’t take AI summaries at face value. Ask: What’s behind this pattern?
- Look for emotion, hesitation, and offhand comments. They often hold the real insight.
- Remember: empathy isn't a soft skill. It's a strategic one.
AI Can Help You Spot Signals, But Only You Know What Matters
AI can help you sift, summarize, and spot signals, but it can’t:
- Ask the right questions in the first place
- Detect emotional contradictions or hesitation
- Understand business and product nuances
- Weigh findings by impact, risk, and feasibility
- Craft narratives that resonate with decision makers
That’s where you, the researcher, come in.
AI gets you 70% of the way. But only you know if the last 30% is signal or noise.
The Future Is Hybrid
Today’s strongest researchers aren’t just AI adopters, they’re AI translators. They pair scale with story, and speed with strategy. They know when to trust the algorithm and when to question it.
Think of it like this:
It’s not about replacing yourself. It’s about freeing yourself up to do the kind of work only you can do.
How To Blend AI & Human Judgment (Without Overcomplicating It)
If you’re trying to use AI meaningfully in your workflow, start small. Here’s one approach I use:
- Run AI on open-ended responses or interview transcripts using tools like Sprig, which can help surface initial themes from in-product surveys or concept tests
- Skim the top themes, but treat them as rough drafts
- Review 3-5 raw responses or recordings per theme
- Ask yourself: What feels missing? What stands out emotionally?
- Refine the themes, add narrative, and review with a peer
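To make the first few steps concrete, here's a minimal sketch in Python. The responses and keyword buckets are invented for illustration; the keyword matching stands in for whatever AI theming tool you use. The point is the shape of the workflow: auto-group first, then pull a small random sample of raw responses per theme for human review.

```python
import random
from collections import defaultdict

# Hypothetical open-ended survey responses (stand-ins for real data).
responses = [
    "I forget to write most days",
    "Wish I got nudged more often",
    "The editor feels cluttered",
    "Reminders would help me keep a streak",
    "Too many menus to find my old entries",
    "I only journal when I feel ready",
]

# Toy "AI" theming: keyword buckets standing in for an LLM or
# clustering step. Real tools return richer themes; either way,
# treat the output as a rough draft, not a conclusion.
theme_keywords = {
    "reminders": ["forget", "nudge", "remind", "streak"],
    "navigation": ["menu", "cluttered", "find"],
}

themes = defaultdict(list)
for resp in responses:
    lower = resp.lower()
    for theme, words in theme_keywords.items():
        if any(w in lower for w in words):
            themes[theme].append(resp)
            break
    else:
        themes["uncategorized"].append(resp)

# Pull a small random sample per theme for human review, so a
# researcher reads raw voices, not just theme labels and counts.
random.seed(0)
for theme, items in themes.items():
    sample = random.sample(items, min(3, len(items)))
    print(f"{theme}: {len(items)} responses; review sample: {sample}")
```

Notice that the "uncategorized" bucket here would contain exactly the kind of response that matters most in the journaling example above: "I only journal when I feel ready" matches no reminder keyword, which is why sampling raw responses, not just reading theme counts, is the step that can't be skipped.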
💡 Tip
Avoid copying and pasting AI output summaries into decks. Instead, layer in product strategy, business context, and why it matters to your team. That’s how you turn patterns into stories, and summaries into strategy.
Why This Matters
Researchers are being asked to do more with less. To move faster. To scale insights across teams. AI helps, but only when paired with sound judgment and a clear point of view.
In a world where everyone has access to the same tools, what sets you apart isn’t what you use. It’s how you think.
Your judgment.
Your empathy.
Your ability to connect the dots others miss.
So yes, embrace AI. Use it to scale your impact. But don’t forget: your human insight is still the most powerful instrument in your stack.
—
TL;DR: AI is a powerful addition to your research stack, but your human judgment is your competitive advantage. Learn the tools, stay critical, and use both sides of your brain. Because in the end, it’s not either/or. It’s both/and.