Quick Setup Checklist
Use this checklist to plan, run, and learn from a usability test before launching a flow at scale.
- Confirm the usability goal (experience, not logic)
- Prepare participants, roles, and a testing protocol
- Define subjective questions (comfort, clarity, satisfaction)
- Define quantitative metrics (completion, errors, time)
- Run the test with facilitator support (contacts lead)
- Analyze results and document findings
- Prioritize fixes, implement changes, and retest
- Resolve common usability testing issues
Understand What a Usability Test Measures
Once you have selected a final version of your flow, a usability test helps ensure that it provides a clear, comfortable, and effective experience for your contacts.
Usability testing shifts the focus from whether the flow works to how it feels to use. It evaluates ease of use, learnability, clarity, and satisfaction before your flow is launched at scale.
Important: A flow that works technically may still fail if contacts find it confusing, frustrating, or difficult to complete.
Prepare Test Requirements and Roles
Before conducting a usability test, make sure you have:
- A large group of test contacts representative of your target population
- A clear testing protocol, which may include:
  - A pre-test questionnaire
  - The test itself
  - A post-test questionnaire
- A facilitator who can answer questions and provide instructions
- Observers who track contact behavior and interactions
Tip: Unlike a pilot test, facilitators should take a supporting role during usability testing, allowing contacts to lead the experience.
Define Subjective Metrics
Subjective metrics capture how contacts feel about the experience. A usability test plan should include questions asked before, during, and after the test.
Common subjective metrics
- Background questions: asked before the test to understand context and expectations
- Ease of use, comfort, and satisfaction: asked after each task to evaluate immediate reactions
- Overall experience: asked after the test to assess ease of use, comfort, and satisfaction across the whole flow
- Likes, dislikes, and recommendations: what contacts liked most, liked least, and how the service could improve
- Continued use: how likely contacts are to continue using the service
- Net Promoter Score (NPS): how likely contacts are to recommend the service to a family member, friend, or colleague
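As an illustration, NPS is computed from 0–10 "likelihood to recommend" scores: contacts who answer 9–10 are promoters, 0–6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch (the sample scores are hypothetical):

```python
def nps(scores):
    """Net Promoter Score from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count toward
    the total but neither group. Returns a value from -100 to +100.
    """
    if not scores:
        raise ValueError("no scores provided")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical post-test responses from ten contacts
print(nps([10, 9, 9, 8, 7, 6, 10, 9, 4, 8]))  # -> 30.0
```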
Define Quantitative Metrics
Quantitative metrics measure performance and efficiency. Track these consistently so results can be compared across iterations.
Core quantitative metrics
- Flow completion: a flow is complete when a contact successfully passes through all steps
- Critical errors: issues that prevent completion (e.g., consistently incorrect responses, unsupported formats, opt-outs)
- Non-critical errors: recoverable issues (e.g., “other” responses, redirects that slow completion)
- Error-free rate: percentage of contacts who complete flows without any errors
- Time on task: the time elapsed between starting and completing the flow (completion timestamp minus start timestamp)
Warning: Critical errors indicate serious usability issues and must be addressed before launch.
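To make these definitions concrete, the sketch below computes completion rate, error-free rate, and time on task from per-contact session records. The record fields are illustrative assumptions, not fields from any particular platform:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Session:
    # Illustrative fields; adapt to however your platform logs test runs.
    contact_id: str
    completed: bool          # contact passed through all steps
    critical_errors: int     # issues that prevented completion
    noncritical_errors: int  # recoverable issues (e.g. "other" responses)
    started_at: datetime
    finished_at: datetime

def completion_rate(sessions):
    """Percentage of contacts who passed through all steps."""
    return 100 * sum(s.completed for s in sessions) / len(sessions)

def error_free_rate(sessions):
    """Percentage of contacts who completed with no errors of any kind."""
    clean = sum(
        s.completed and s.critical_errors == 0 and s.noncritical_errors == 0
        for s in sessions
    )
    return 100 * clean / len(sessions)

def time_on_task(session):
    """Seconds elapsed between starting and completing the flow."""
    return (session.finished_at - session.started_at).total_seconds()
```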
Run the Usability Test
During usability testing, participants should lead the experience while the facilitator supports and observers document behavior and outcomes.
Use your protocol to:
- Collect pre-test context (background questions)
- Run the flow tasks end-to-end
- Capture task-level reactions (ease, comfort, satisfaction)
- Collect post-test feedback (overall experience, likes/dislikes, recommendations, continued use, NPS)
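One way to keep sessions comparable is to record every task against the same structure, so task-level reactions and observer notes line up across participants. A minimal sketch; the field names and the 1–5 scales are assumptions to adapt to your own protocol:

```python
from dataclasses import dataclass, field

@dataclass
class TaskObservation:
    # One task in one participant's session (illustrative structure).
    task: str
    completed: bool
    ease: int            # 1-5 rating collected right after the task
    comfort: int         # 1-5
    satisfaction: int    # 1-5
    observer_notes: list[str] = field(default_factory=list)

obs = TaskObservation(
    task="register a new contact",
    completed=True,
    ease=4, comfort=5, satisfaction=4,
    observer_notes=["hesitated at the date format prompt"],
)
```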
Analyze Results and Document Findings
After testing, analyze what you observed and summarize both what worked well and what needs improvement.
You may want to include:
- Completion rates by contact, step, and flow
- Average completion times
- Satisfaction and ease-of-use scores
- Illustrative comments from contacts
Important: Always identify both what worked well and what needs improvement.
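For example, a per-step completion breakdown usually makes drop-off points obvious. A small sketch, assuming each session logs the ordered list of steps the contact reached (step names and data are hypothetical):

```python
from collections import Counter

# Hypothetical logs: the steps each contact reached, in order.
sessions = [
    ["welcome", "consent", "question_1", "question_2", "done"],
    ["welcome", "consent", "question_1"],  # dropped before question_2
    ["welcome", "consent", "question_1", "question_2", "done"],
    ["welcome"],                           # dropped before consent
]

reached = Counter(step for steps in sessions for step in steps)
total = len(sessions)
for step, count in reached.most_common():
    print(f"{step:12s} {100 * count / total:5.1f}% reached")
```

A sharp drop between two adjacent steps points at the step where contacts struggle.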
Recommend, Implement, and Retest
Use all collected data to list findings by flow and provide recommendations grounded in observed behavior. Balance negatives with positives—maintaining what works is just as important as fixing what doesn’t.
You may not be able to implement every recommendation due to:
- Budget constraints
- Timelines
- External dependencies
Prioritize changes based on:
- Frequency of issues
- Severity of impact
- Contact feedback
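One common way to rank fixes is a simple frequency-times-severity score, with contact feedback used to break ties. A sketch with hypothetical findings:

```python
# Hypothetical findings: (issue, contacts affected, severity 1=minor..3=critical)
findings = [
    ("date format rejected by validation", 9, 3),
    ("wording of question 2 unclear",      5, 2),
    ("closing message feels abrupt",       2, 1),
]

# Rank by frequency x severity; the highest score gets fixed first.
for issue, freq, severity in sorted(findings, key=lambda f: f[1] * f[2], reverse=True):
    print(f"score {freq * severity:3d}  {issue}")
```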
Tip: After implementing changes, retest to confirm improvements before scaling.
Common Issues
Contacts complete the flow, but report low satisfaction
Cause: The flow may be functional but feel too long, repetitive, or emotionally uncomfortable.
Fix: Shorten steps, simplify wording, reduce cognitive load, and retest. Compare satisfaction scores before and after changes.
Many participants make “critical errors” and cannot finish
Cause: Input formats may be unclear, validation too strict, or instructions insufficient.
Fix: Add clearer instructions, accept more response variants, and improve fallback handling. Retest until critical errors drop significantly.
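Accepting more response variants can be as simple as normalizing replies before matching, falling back to a re-prompt instead of a hard failure. A sketch; the variant sets are illustrative:

```python
YES_VARIANTS = {"yes", "y", "yeah", "yep", "ok", "1"}
NO_VARIANTS = {"no", "n", "nope", "2"}

def normalize_yes_no(raw: str) -> str | None:
    """Map free-text replies onto yes/no; None means re-prompt the contact."""
    text = raw.strip().lower()
    if text in YES_VARIANTS:
        return "yes"
    if text in NO_VARIANTS:
        return "no"
    return None  # non-critical path: re-prompt with clearer instructions
```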
Time on task is much longer than expected
Cause: Questions may require too much thinking, choices may be confusing, or the flow may be too long.
Fix: Simplify prompts, reduce steps, and remove unnecessary branches. Track time by step to identify bottlenecks.
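To find bottlenecks, compare the average time contacts spend at each step. A minimal sketch, assuming per-step timings are available (the data is hypothetical):

```python
from statistics import mean

# Hypothetical timings: seconds each contact spent at each step.
timings = [
    {"welcome": 8, "consent": 15, "question_1": 40, "question_2": 95},
    {"welcome": 6, "consent": 12, "question_1": 35, "question_2": 120},
    {"welcome": 9, "consent": 18, "question_1": 50, "question_2": 80},
]

for step in timings[0]:
    avg = mean(t[step] for t in timings)
    print(f"{step:12s} avg {avg:6.1f}s")
# question_2 dominates here, so it is the first candidate to simplify
```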
Participants frequently respond with “other”
Cause: Answer choices do not match real-world experiences or the wording is ambiguous.
Fix: Expand answer options, clarify wording, and include examples where helpful. Retest and monitor “other” rates.
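Monitoring the "other" rate per question is a quick check to repeat after each retest. A sketch with hypothetical response logs:

```python
from collections import Counter

# Hypothetical (question, response) pairs collected during testing
responses = [
    ("q1", "yes"), ("q1", "other"), ("q1", "no"),
    ("q2", "other"), ("q2", "other"), ("q2", "weekly"),
]

totals, others = Counter(), Counter()
for question, answer in responses:
    totals[question] += 1
    if answer == "other":
        others[question] += 1

for question in totals:
    rate = 100 * others[question] / totals[question]
    print(f"{question}: {rate:.0f}% 'other'")  # high rates flag weak answer options
```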
Feedback is inconsistent or hard to interpret
Cause: The protocol may be missing standardized questions or rating scales.
Fix: Use consistent scales (e.g., 1–5 for ease/comfort/satisfaction) and ask the same post-task questions for all participants.
