v0.1 · private beta
Pre-launch UX research, on demand

Find the friction before your users do.

Point doublur at your staging URL with an activation goal. It runs headless browsers as synthetic users, captures screenshots at every moment of confusion, and hands you a ranked report with suggested fixes.

No instrumentation. No test scripts. Just a URL and a goal.

Under the hood

The hard parts are already built.

Headless browsers against staging

Real Chromium sessions hitting your actual pages. No mocking, no synthetic DOM. The persona sees what your users see.

Screenshots at the moment of confusion

Every step is captured. When a persona hesitates or abandons, you get the exact screenshot and context needed to debug quickly.

Memory across runs

Successful paths, error patterns, and page hints persist between simulations so later runs surface new issues instead of rediscovering old dead-ends.

Ranked report, not a log dump

Blockers are sorted by severity and frequency, with persona reasoning, screenshots, and suggested fixes attached to each finding.

Integrations

Ships where your team already works.

Dashboard

Review runs in one place

Use the dashboard to launch simulations, inspect traces, compare batches, and manage API keys for programmatic access.

app.doublur.com
/runs
/batches
/settings/api-keys

GitHub Action

Friction checks on every PR

Run against a preview deploy, post findings as a PR comment with screenshots, and fail the check when critical blockers show up.

- uses: showdownlabs/doublur@v1
  with:
    url: ${{ env.PREVIEW_URL }}
    fail-on: critical

API + MCP

One backend, multiple entrypoints

The API routes live on the same app host today, so editor tools and automations can call the same authenticated endpoints as the web app.

POST https://app.doublur.com/api/runs/start
GET  https://app.doublur.com/api/runs/{id}
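The two endpoints above suggest a simple start-and-poll loop. The sketch below is illustrative, not official client code: the bearer-token auth scheme, the request payload (`url`, `goal`), the response fields (`id`, `status`), and the helper names are all assumptions — check the dashboard's API-key settings for the real contract.

```python
# Hypothetical start-and-poll client for the run API. Auth scheme,
# payload fields, and response shape are assumptions, not documented API.
import json
import time
import urllib.request

BASE = "https://app.doublur.com/api"

def _call(method, path, token, body=None):
    """Send one authenticated JSON request and decode the JSON response."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode() if body is not None else None,
        method=method,
        headers={
            "Authorization": f"Bearer {token}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def run_until_done(url, goal, token, poll=_call, interval=5.0):
    """Start a run, then poll GET /runs/{id} until it leaves 'running'."""
    run = poll("POST", "/runs/start", token, {"url": url, "goal": goal})
    while True:
        state = poll("GET", f"/runs/{run['id']}", token)
        if state.get("status") != "running":
            return state
        time.sleep(interval)
```

The transport is injected (`poll=_call`) so automations can swap in retries or a different HTTP stack without touching the loop.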

Why not just prompt an LLM?

You can ask an AI to imagine a user. We run one.

A prompt alone
  • Imagines a user from a product description
  • Generates plausible-sounding narratives
  • No access to the real UI
  • No screenshots, no DOM, no traces
  • Cannot compare across deploys
  • Output: "the user felt confused"
doublur
  • Executes real actions in a real browser
  • Captures DOM state and screenshots at every step
  • Sees what your users actually see
  • Evidence-backed findings with repro traces
  • Diffs friction between deploys and pull requests
  • Output: "looped /billing → /team 3× over 47s, then quit"
Who it's for

Teams that ship before they have research budget.

  • PMs who own activation but can't recruit a panel before every launch
  • Growth leads watching cohorts and guessing at the why
  • Design teams who want friction signal without a usability lab
  • Founders shipping weekly who need a sanity check before prod

See what it finds in your onboarding.

Start in the app if you already have access, or email us for a pilot run against your staging environment.