From Insight to Interface: Building a UX Research Tool with AI Collaboration

As a UX researcher and strategist, I’m always thinking about how to make decision-making easier, faster, and more grounded in evidence — not assumptions. Some of the questions I hear most often are:

“How big of a sample size do we need? Why does it have to be that big? How confident are we in the results?”

These questions are deceptively simple — and incredibly important. So I set out to build a tool that could help answer them quickly and visually.

But rather than mock something up and pass it off to an engineer, I wanted to try something different:

Could I collaborate with an AI (ChatGPT) to build, iterate, and deploy a live web app myself — no dev team required?

The Tool: A Sample Size Confidence Estimator

The result is a live tool that:

  • Shows how much your survey might cost, based on your cost per response
  • Calculates your estimated confidence level based on your current sample and population size
  • Compares your sample size to the required sizes for 85%, 90%, and 95% confidence
  • Offers clear, plain-language explanations of margin of error and confidence intervals
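Calculations like these typically rest on the standard formula for estimating a proportion (Cochran's formula). Here's a minimal Python sketch, assuming the conventional worst-case proportion of p = 0.5 and an optional finite population correction; the tool's actual implementation may differ, and the function names here are just illustrative:

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(confidence, margin_of_error, population=None, p=0.5):
    """Cochran's formula for the sample size needed to estimate a
    proportion, with an optional finite population correction (FPC)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-tailed z-score
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population size
    if population is not None:
        n = n / (1 + (n - 1) / population)              # shrink for small populations
    return ceil(n)

# e.g. 95% confidence at a +/-5% margin of error needs 385 responses;
# a known population of 1,000 shrinks that requirement to 278.
print(required_sample_size(0.95, 0.05))                  # 385
print(required_sample_size(0.95, 0.05, population=1000)) # 278
```

Running this at 85%, 90%, and 95% gives the benchmark sizes the tool compares your sample against.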

You can try it live here:

 Sample Size Confidence Estimator

The UX Behind the Interface

This wasn’t just a code experiment — it was driven by UX strategy:

  • I identified a recurring pain point from stakeholders and researchers
  • I mapped out what users need to know — and when — to feel confident in their decisions
  • I focused on reducing cognitive load, with simple sliders, contextual tooltips, and visual benchmarks
  • I designed for progressive disclosure: advanced options like finite population correction are there if you need them, but stay out of the way otherwise

Building With AI (and Not Just Asking for Code)

What made this project special was how I used AI as a creative collaborator, not just a code monkey:

  • I asked questions like “How would we calculate confidence based on margin of error?” and “How should the slider respond at different ranges?”
  • I got help debugging build issues, refactoring the interface, and improving clarity
  • I used the AI to make statistical methods more accessible, turning abstract math into readable, plain-language explanations
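The first question above — going from an achieved sample size back to a confidence level — can be sketched by inverting the margin-of-error formula. Again this assumes a proportion at p = 0.5, and the names are illustrative rather than the tool's actual code:

```python
from math import sqrt
from statistics import NormalDist

def estimated_confidence(n, margin_of_error, population=None, p=0.5):
    """Invert the margin-of-error formula: given the sample you already
    have, estimate the confidence level at a fixed margin of error."""
    se = sqrt(p * (1 - p) / n)  # standard error of a proportion
    if population is not None:
        se *= sqrt((population - n) / (population - 1))  # finite population correction
    z = margin_of_error / se
    return 2 * NormalDist().cdf(z) - 1  # two-tailed confidence level

# 385 responses at a +/-5% margin of error recovers roughly 95% confidence
print(round(estimated_confidence(385, 0.05), 2))  # 0.95
```

Working through logic like this conversationally — what the formula means, how to invert it, how the slider should respond — is exactly where the AI collaboration earned its keep.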

This wasn’t a prompt-and-go project. It was an ongoing conversation with the model — much like pair programming or rubber duck debugging — and it let me iterate fast.

Why This Matters

I’m proud of this tool not just because it works — but because it’s a small example of what happens when UX, research, strategy, and modern tools all come together. It reflects how I like to work:

  • Grounded in real user needs
  • Thinking from both the researcher’s and stakeholder’s perspective
  • Open to experimentation and technology — even if it means learning by doing

Whether you’re a fellow researcher, designer, or strategist, I hope this tool helps you get better answers, faster.

And if you’re a hiring manager or collaborator? Let’s talk about what we could build together next.

