Leapfrog: AI-Powered User Research

Timeline

2023-Ongoing

Team

  • Solo founder & partners

Role

  • Founder
  • Product designer
  • Full-stack engineer

Skills

  • Full-stack engineering
  • Product design
  • Systems design
  • AI/ML

Leapfrog is a user research platform I founded and built to help teams turn messy qualitative data into structured, defensible insight. I led the project across product strategy, interaction design, system design, and implementation, shaping both the user experience and the technical architecture behind it.

[Image: Leapfrog product overview]

The core problem was not just that research synthesis was slow. It was that the workflow broke down as soon as teams moved from interviews to interpretation. Transcripts lived in one tool, notes in another, clustering happened on a separate whiteboard, and conclusions ended up disconnected from the evidence that produced them. Once that happened, rigor dropped. Teams could see patterns, but they could not easily prove them.

I wanted to build a system where synthesis stayed connected to source material all the way through.

The opportunity

In 2023, I wrote a post about how AI would change UX research. It resonated quickly, bringing around 700 researchers to my email list.

[Image: Early traction and interest around Leapfrog]

That response validated two things for me:

  • The pain around qualitative synthesis was real
  • Researchers were actively looking for tools that could help without sacrificing rigor

That became the starting point for Leapfrog.

My role

This was a true design-engineering project. I was not only designing screens. I was designing the workflow, the data model, and the system boundaries that made the experience possible.

I was responsible for:

  • Defining the product thesis and core workflow
  • Designing the interaction model for transcripts, highlights, tags, and synthesis
  • Building the product in Next.js across front-end UI, back-end logic, API routes, and webhooks
  • Modeling the data layer in Firestore and using permissions to safeguard workspace access
  • Implementing real-time collaboration with Liveblocks and Yjs
  • Building the transcription, retrieval, and AI chat pipeline
  • Shipping integrations for email, Google Drive, storage, analytics, monitoring, and testing

Designing for traceability

One of the most important product decisions was to treat grounded theory not just as a research method, but as a product design principle.

[Image: Leapfrog highlights and tagging workflow]

Instead of flattening interviews into generic summaries, I designed Leapfrog around a chain of traceable evidence:

  • Transcript as the source record
  • Highlights as the smallest meaningful unit of evidence
  • Tags as reusable structure across documents
  • Clusters and canvases as higher-level synthesis

That model shaped both the interface and the underlying architecture. It gave users a fast way to move upward from raw material to patterns, while preserving a direct path back to what people actually said.
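The evidence chain can be sketched as a small data model. This is an illustrative sketch, not Leapfrog's actual schema: the type names, fields, and the `quotesForCluster` helper are all assumptions made for the example, but they show the key property that any synthesis-level object can be walked back down to exact source quotes.

```typescript
// Illustrative sketch of the traceable-evidence chain.
// Names and fields are hypothetical, not Leapfrog's real schema.

interface Transcript {
  id: string;
  text: string; // the source record
}

interface Highlight {
  id: string;
  transcriptId: string;
  start: number; // character offsets into the transcript text
  end: number;
  tagIds: string[]; // reusable structure across documents
}

interface Cluster {
  id: string;
  label: string;
  highlightIds: string[]; // synthesis stays linked to evidence
}

// Walk a cluster back down to the exact source quotes it rests on.
function quotesForCluster(
  cluster: Cluster,
  highlights: Map<string, Highlight>,
  transcripts: Map<string, Transcript>
): string[] {
  return cluster.highlightIds.map((hid) => {
    const h = highlights.get(hid);
    if (!h) throw new Error(`missing highlight ${hid}`);
    const t = transcripts.get(h.transcriptId);
    if (!t) throw new Error(`missing transcript ${h.transcriptId}`);
    return t.text.slice(h.start, h.end);
  });
}
```

The design choice worth noting is that highlights store offsets into the transcript rather than copied text, so the source record stays the single point of truth.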

[Image: Traceable insights connected back to source transcript data]

This ended up being the core product differentiator. Researchers do not just want output. They want confidence in how the output was formed.

Engineering the platform

Leapfrog became a full-stack system, not just a feature concept. I designed the platform so that each part of the stack had a clear role:

  • Next.js for the front end, back-end logic, API routes, and webhook handling
  • Firestore as the primary database for users, organizations, workspaces, documents, transcript metadata, invites, and settings
  • Firestore permissions to safeguard access at the workspace and organization level
  • GCP file storage for uploaded media and document assets
  • Google APIs for Drive import and OAuth-based file access
  • Liveblocks + Yjs for real-time collaborative document state, comments, presence, and syncing
  • AssemblyAI for transcript generation
  • ChromaDB for vector indexing and retrieval
  • An AI gateway / LLM layer for grounded chat interactions over research data
  • Resend for transactional email
  • Sentry for monitoring errors across client and server
  • PostHog for consent-based product analytics
  • Google Analytics for consent-based marketing analytics
  • GitHub Actions for automated test coverage

One of the key engineering decisions was to keep a clear separation of concerns:

  • Firestore as the system of record
  • Liveblocks as the live collaboration layer
  • ChromaDB as the semantic retrieval layer
  • The AI chat layer as an assistant over evidence, not a replacement for evidence

That separation made the product easier to reason about, easier to extend, and more trustworthy in use.
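The "assistant over evidence" idea can be sketched in miniature. This is a simplified stand-in, not the production pipeline: a naive keyword-overlap score substitutes for ChromaDB's vector similarity, and the function and type names are invented for the example. The point it illustrates is that retrieved chunks carry their highlight IDs into the prompt, so the model's answer can cite evidence rather than replace it.

```typescript
// Minimal sketch of grounded chat over research evidence.
// Keyword overlap stands in for vector retrieval; names are illustrative.

interface EvidenceChunk {
  highlightId: string;
  text: string;
}

// Naive relevance score: count query words that appear in the chunk.
function scoreOverlap(query: string, text: string): number {
  const q = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  return text.toLowerCase().split(/\W+/).filter((w) => q.has(w)).length;
}

// Build a prompt that carries highlight IDs alongside the evidence,
// so every claim in the answer can be traced back to a source.
function buildGroundedPrompt(
  query: string,
  chunks: EvidenceChunk[],
  topK = 3
): string {
  const ranked = [...chunks]
    .sort((a, b) => scoreOverlap(query, b.text) - scoreOverlap(query, a.text))
    .slice(0, topK);
  const evidence = ranked.map((c) => `[${c.highlightId}] ${c.text}`).join("\n");
  return `Answer using only the evidence below. Cite highlight IDs.\n\n${evidence}\n\nQuestion: ${query}`;
}
```

Swapping the scoring function for real embedding similarity changes the quality of retrieval, but not the grounding contract: the evidence and its IDs always travel together.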

How the system ties together

[Image: Leapfrog product evolution from early prototype to refined workflow]

The product works as one connected pipeline:

  1. A researcher uploads interview media or imports files from Google Drive.
  2. Media is stored in GCP file storage and metadata is written to Firestore.
  3. The app sends audio to AssemblyAI for transcription.
  4. AssemblyAI returns results asynchronously through webhooks to the Next.js backend.
  5. Transcript content is inserted into the collaborative document model in Liveblocks/Yjs.
  6. Document changes are chunked and indexed into ChromaDB for retrieval.
  7. The chat experience pulls relevant context from Firestore and ChromaDB, sends it through the AI layer, and streams grounded responses back into the UI.
  8. Resend handles workflow email such as invites and transcript notifications.
  9. Sentry, consent-based analytics, and GitHub Actions support reliability, observability, and quality as the product evolves.
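The asynchronous transcription step (4) can be sketched as a webhook handler. This is a simplified model, not the real implementation: the payload shape is reduced to the fields the example needs, and the Firestore write is stood in for by an in-memory map so the control flow is visible on its own.

```typescript
// Sketch of the transcription callback: the provider calls back when a
// job finishes, and the handler updates the system of record.
// Payload and record shapes are simplified for illustration.

interface TranscriptWebhookPayload {
  transcript_id: string;
  status: "completed" | "error";
}

interface DocumentRecord {
  transcriptId: string;
  transcriptStatus: "pending" | "ready" | "failed";
}

// Stand-in for the Firestore documents collection.
type DocumentStore = Map<string, DocumentRecord>;

function handleTranscriptWebhook(
  payload: TranscriptWebhookPayload,
  store: DocumentStore
): { ok: boolean } {
  // Find the document that owns this transcription job.
  for (const [docId, record] of store) {
    if (record.transcriptId === payload.transcript_id) {
      store.set(docId, {
        ...record,
        transcriptStatus: payload.status === "completed" ? "ready" : "failed",
      });
      return { ok: true };
    }
  }
  return { ok: false }; // unknown job: nothing was updated
}
```

In the real system this runs inside a Next.js API route, and a successful update is what triggers inserting the transcript into the collaborative document model.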

What I like about this architecture is that it mirrors the product thesis. The system is designed around preserving context, not losing it.

Building the right thing

One of the hardest lessons in the project was that the first versions were too focused on the novelty of AI itself.

It is easy to imagine impressive AI features. It is much harder to build a workflow that fits the habits, skepticism, and standards of real research teams. Static prototypes were not enough. The interaction model was too new, and too dependent on real data, to validate in Figma alone.

I had to build working software early, put it in front of researchers, and learn from actual use. That shifted the direction of the product in a meaningful way:

  • From one-shot AI outputs to structured, inspectable workflows
  • From generic summaries to traceable highlights and evidence
  • From isolated prompting to collaborative synthesis
  • From AI novelty to researcher trust and control

This was one of the clearest examples in my work of code becoming the prototype.

What I learned

Leapfrog taught me a lot about designing AI products for expert users.

The biggest lesson was that trust is a product feature. In high-stakes workflows, people do not want magic. They want acceleration with visibility. The most valuable AI patterns were the ones that kept evidence inspectable and reasoning legible.

It also reinforced something I care about as a design engineer: the interaction model and the system design are often inseparable. The UX here only works because the architecture supports traceability, live collaboration, asynchronous processing, and retrieval grounded in source material.

Why this project matters to me

Leapfrog sits at the intersection of the work I most want to do: product thinking, interface design, systems design, and applied AI.

It is the strongest example in my portfolio of operating end to end:

  • Identifying the opportunity through research and writing
  • Defining the product thesis
  • Designing a new interaction model
  • Building the full-stack system required to make it real
  • Iterating from real usage rather than speculative prototypes

As a founder, designer, and engineer on the project, I was responsible for both the experience users saw and the infrastructure that made that experience credible.

What is next

Leapfrog is still evolving. The next phase is focused on making team workflows more fluid, deepening the connection between evidence and synthesis, and continuing to improve how AI supports research without turning the process into a black box.

If you are interested in the product, visit leapfrogapp.com.


References

  • Leapfrog website (leapfrogapp.com)
  • "AI Is Going to Change UX Research Forever"