Quick Setup Checklist
Use this checklist to run a small-scale pilot test before scaling your program.
- Define the pilot goal and success criteria
- Prepare the pilot group, facilitators, and observers
- Set the pilot conditions and schedule
- Launch the flow to the pilot group (production-like)
- Observe behavior and log issues without coaching
- Collect metrics (performance + preference)
- Evaluate outcomes and decide whether to scale
- Resolve common pilot testing issues
Define the Pilot Goal
A pilot test is a small-scale, real-world trial of your SMS program before full deployment. It helps validate how all components work together in real conditions.
An SMS program is an automated system composed of:
- Contacts
- Phones
- Carriers
- Channels
- Flows
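If it helps to picture how these pieces fit together, here is a conceptual sketch in Python. The class names and fields are purely illustrative and are not any platform's actual data model: contacts have phones served by mobile carriers, channels connect your account to those carriers, and flows hold the automated conversation logic.

```python
from dataclasses import dataclass, field

@dataclass
class Contact:
    name: str
    phone: str                      # the contact's number, served by a mobile carrier
    groups: list = field(default_factory=list)

@dataclass
class Channel:
    name: str
    sender_number: str              # connects your account to carriers for sending/receiving

@dataclass
class Flow:
    name: str
    steps: list                     # messages, questions, and splits, in order

@dataclass
class SMSProgram:
    contacts: list                  # who you message
    channels: list                  # how messages reach carriers (and phones)
    flows: list                     # what the automated conversation does
```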
Important: A successful simulator test does not guarantee success in real-world conditions. Pilots reveal issues that simulators cannot.
Prepare Pilot Requirements
Before starting a pilot, make sure you have:
- 5–10 independent test contacts representative of your target population
- The current version(s) of your flow(s)
- One or more pilot facilitators
- One or more observers
Facilitators conduct the test and evaluations. Observers monitor contact behavior and responses via the dashboard.
Set Pilot Conditions and Timing
During a pilot, it’s best to be present with test contacts when possible.
If flows run asynchronously (via campaigns or delayed responses), compensate with:
- Daily check-ins
- Start-of-day or end-of-day briefings
Tip: Run the pilot 3–5 days before your usability test so you have time to fix technical or content issues.
Launch the Pilot (Production-Like)
Launch the flow exactly as you would in production, but only to the pilot group.
[CAPTURE: Show a test contact group selected and a flow being sent to the group.]
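If your platform also exposes an API, you can script the same launch. The sketch below is hypothetical: the URL, token, flow name, and field names are placeholders that depend entirely on your provider, so treat it as an outline of "start this flow for the pilot group only," not a copy-paste command.

```python
import requests

# Hypothetical endpoint and credentials; substitute your platform's real API.
API_URL = "https://sms-platform.example.org/api/flow_starts"
API_TOKEN = "YOUR_API_TOKEN"

payload = {
    "flow": "pilot-flow-v3",         # the flow version you are piloting (example name)
    "groups": ["Pilot Group"],       # send ONLY to the pilot group, never to all contacts
    "restart_participants": True,    # production-like behavior, as in a real launch
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Token {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Pilot launch accepted:", response.json())
```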
Allow contacts to interact naturally while facilitators and observers monitor behavior.
Warning: Do not coach contacts through the flow. Leading participants will invalidate your findings.
Observe What Happens (Without Leading)
During the pilot, evaluate whether:
- Contacts understand the purpose of the flow or campaign
- Contacts feel comfortable responding
- Flow wording is clear and unambiguous
- Contacts are being sorted into the correct groups
- Responses are being categorized properly
- Answer choices match real-world experiences
- Any steps cause confusion, hesitation, or irritation
- Certain questions generate many “other” responses
- The flow is too long or repetitive
- Anything important has been overlooked
Remain neutral: if a contact asks for help, respond with: “What’s your best guess?” Only intervene if the participant completely gives up.
Tip: Observers should pay attention to what contacts do, what they say (in their own words), and both productive and unproductive paths.
Collect Metrics
Performance and preference often differ. For example, contacts may complete tasks successfully but dislike the experience, or enjoy the experience but struggle to complete tasks.
Performance metrics (quantitative)
- Completion rate
- Time to completion
- Errors (including “other” responses)
- Opt-outs
Preference metrics (subjective)
- Self-reported satisfaction
- Comfort and confidence ratings
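A short script can turn exported pilot results into the performance numbers above. This is a minimal sketch assuming one record per pilot contact with fields such as `completed`, `minutes_to_complete`, `other_responses`, and `opted_out`; those names are illustrative, so adapt them to whatever your results export actually contains.

```python
from statistics import median

# Example pilot records; in practice, load these from your platform's results export.
runs = [
    {"contact": "A", "completed": True,  "minutes_to_complete": 12,   "other_responses": 0, "opted_out": False},
    {"contact": "B", "completed": True,  "minutes_to_complete": 25,   "other_responses": 2, "opted_out": False},
    {"contact": "C", "completed": False, "minutes_to_complete": None, "other_responses": 1, "opted_out": True},
]

total = len(runs)
completed = [r for r in runs if r["completed"]]

completion_rate = len(completed) / total
median_time = median(r["minutes_to_complete"] for r in completed)
other_count = sum(r["other_responses"] for r in runs)          # errors: replies that fell into "other"
opt_out_rate = sum(r["opted_out"] for r in runs) / total

print(f"Completion rate: {completion_rate:.0%}")
print(f"Median time to completion: {median_time} min")
print(f"'Other' responses: {other_count}")
print(f"Opt-out rate: {opt_out_rate:.0%}")
```

Preference ratings (satisfaction, comfort, confidence) can be summarized the same way once collected.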
Evaluate Results and Decide Whether to Scale
At the end of the pilot, you should be able to answer:
- Overall fit: Was the group’s reaction positive or negative? Is the program appropriate for the target population?
- Resource allocation: Are time and resources being used effectively? Do certain aspects require more attention (timing, flow length, engagement)?
- Evaluation strategy: Are you collecting the right metrics? Are there gaps in your evaluation approach?
- Readiness to scale: Are there unresolved technical or usability issues? Is the team prepared to handle larger volumes?
Important: Only increase scale once all critical pilot issues have been addressed.
Common Issues
Participants keep asking for help during the pilot
Cause: Contacts may be unsure about the goal, wording, or expected answers.
Fix: Stay neutral and avoid coaching. Use the standard response: “What’s your best guess?” Note where confusion occurs and revise wording or answer choices after the pilot.
Messages are delivered slowly or inconsistently across carriers
Cause: Carrier routing, network quality, or channel/provider constraints can affect delivery time.
Fix: Record timing by carrier, confirm channel setup, and test again. If delays persist, review provider logs and adjust expectations or scheduling.
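To record timing by carrier systematically, group the delivery delay of each message by carrier. A minimal sketch, assuming your message log provides the carrier name plus queued and delivered timestamps (field names are illustrative):

```python
from collections import defaultdict
from datetime import datetime

# Example log rows; in practice, read these from your provider's delivery logs.
log = [
    {"carrier": "Carrier A", "queued": "2025-03-20 09:00:01", "delivered": "2025-03-20 09:00:09"},
    {"carrier": "Carrier A", "queued": "2025-03-20 09:00:02", "delivered": "2025-03-20 09:00:12"},
    {"carrier": "Carrier B", "queued": "2025-03-20 09:00:01", "delivered": "2025-03-20 09:03:41"},
]

FMT = "%Y-%m-%d %H:%M:%S"
delays = defaultdict(list)
for row in log:
    seconds = (datetime.strptime(row["delivered"], FMT) - datetime.strptime(row["queued"], FMT)).total_seconds()
    delays[row["carrier"]].append(seconds)

for carrier, values in sorted(delays.items()):
    print(f"{carrier}: average {sum(values) / len(values):.0f}s across {len(values)} messages")
```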
Many replies are categorized as “other”
Cause: Answer choices may not match real-world experiences, or the prompt may be unclear.
Fix: Update wording and expand answer choices based on pilot responses. Consider adding clearer examples or validation rules where appropriate.
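Before rewording anything, it helps to see exactly which prompts are producing the "other" replies and what contacts actually wrote. A minimal sketch, assuming an export of categorized responses with a question label, the assigned category, and the raw reply text (field names are illustrative):

```python
from collections import Counter, defaultdict

# Example categorized responses; in practice, load from your flow's results export.
responses = [
    {"question": "How often do you attend?", "category": "Other",  "text": "depends on the week"},
    {"question": "How often do you attend?", "category": "Weekly", "text": "every week"},
    {"question": "Preferred time?",          "category": "Other",  "text": "lunchtime"},
    {"question": "How often do you attend?", "category": "Other",  "text": "sometimes"},
]

other_counts = Counter(r["question"] for r in responses if r["category"] == "Other")
examples = defaultdict(list)
for r in responses:
    if r["category"] == "Other":
        examples[r["question"]].append(r["text"])

for question, count in other_counts.most_common():
    print(f"{question}: {count} 'other' replies, e.g. {examples[question][:3]}")
```

Questions at the top of this list are the first candidates for clearer wording or additional answer choices.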
Contacts are not being sorted into the expected groups
Cause: Flow conditions or group actions may be misconfigured, or contacts did not reach the relevant nodes.
Fix: Review Results to confirm which paths were taken, then adjust split logic and group actions and re-run the pilot on the updated version.
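One quick check is to compare where each pilot contact actually ended up against where their answers should have placed them. A minimal sketch with hypothetical contacts and group names:

```python
# Expected group per pilot contact, based on the answers they reported giving
expected = {"Amina": "Weekly attenders", "Brian": "Monthly attenders", "Chen": "Weekly attenders"}

# Actual group membership pulled from the platform after the pilot run
actual = {"Amina": "Weekly attenders", "Brian": "Weekly attenders", "Chen": None}

for contact, want in expected.items():
    got = actual.get(contact)
    if got != want:
        # A mismatch usually points to a mis-ordered split, a wrong group action,
        # or a contact who never reached the relevant node at all.
        print(f"{contact}: expected '{want}', got '{got}'")
```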
The team isn’t collecting the data needed to evaluate the pilot
Cause: Metrics or observation steps were not defined in advance.
Fix: Define a short metric checklist (completion, time, errors, opt-outs, satisfaction) and assign roles (facilitator vs observer) before repeating the pilot.
