Conducting a Pilot Test

After testing your flows in the simulator, the next recommended phase is the pilot test. A pilot test serves as an initial trial, a small-scale implementation of your broader project, and represents a critical stage in validating your SMS program. Your program is an automated system made up of multiple elements (contacts, devices, carriers, channels, and flows); because it embodies your project, each of those components warrants thorough evaluation.
A pilot test enables you to:
- Verify message delivery across major carriers in your country (see the sketch after this list).
- Estimate the time required for message transmission and reception through your channel or carrier.
- Offer your team practice in test facilitation.
- Assess the clarity of your questions and flow logic from the test contacts’ perspective.
- Implement final adjustments (e.g., to carriers, connection methods, flow structure, or content).
- Gauge readiness for scaling the program.
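To make the first two checks concrete, the sketch below tallies per-carrier delivery rates and median send-to-delivery latency from an exported message log. The file name and the carrier, status, sent_on, and delivered_on columns are assumptions for illustration; adapt them to whatever export your platform actually provides.

```python
# Minimal sketch: summarize pilot delivery results from a hypothetical message export.
# Assumed CSV columns: carrier, status, sent_on, delivered_on (ISO 8601 timestamps).
import csv
from collections import defaultdict
from datetime import datetime
from statistics import median

def summarize_deliveries(path):
    sent = defaultdict(int)
    delivered = defaultdict(int)
    latencies = defaultdict(list)  # seconds from send to delivery, per carrier

    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            carrier = row["carrier"] or "unknown"
            sent[carrier] += 1
            if row["status"] == "delivered":
                delivered[carrier] += 1
                sent_on = datetime.fromisoformat(row["sent_on"])
                delivered_on = datetime.fromisoformat(row["delivered_on"])
                latencies[carrier].append((delivered_on - sent_on).total_seconds())

    for carrier in sorted(sent):
        rate = delivered[carrier] / sent[carrier]
        lag = median(latencies[carrier]) if latencies[carrier] else float("nan")
        print(f"{carrier}: {delivered[carrier]}/{sent[carrier]} delivered "
              f"({rate:.0%}), median latency {lag:.0f}s")

if __name__ == "__main__":
    summarize_deliveries("pilot_messages.csv")
```

With only 5–10 test contacts the numbers are indicative rather than statistically robust, but a carrier with zero deliveries or a latency of many minutes is a clear signal to revisit your channel configuration before scaling.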
Pilot Requirements
- A group of 5–10 independent test contacts representative of your target audience.
- The latest version(s) of your flow(s).
- One or more pilot facilitators to conduct pre- and post-test evaluations and oversee the test.
- Observers to monitor test contacts’ responses via the dashboard and document overall behavior.
Whenever possible, conduct the pilot in person with your test contacts. If flows are campaign-scheduled or designed for asynchronous responses, consider communicating with participants at the start or end of each day. Run the pilot 3–5 days before usability testing to allow time for technical adjustments or revisions to scenarios and materials.
Key Aspects to Monitor
- Do test contacts understand the purpose of the flow/campaign?
- Do they feel comfortable answering questions or performing tasks?
- Is the wording in your flow(s) clear and unambiguous?
- Are contacts being assigned to the correct groups? Could grouping be improved?
- Are responses being categorized accurately?
- Do answer choices align with test contacts’ experiences?
- Are any questions overly complex or time-consuming?
- Do any steps cause frustration or confusion?
- Which steps generate the most “other” responses? (A tallying sketch follows this list.)
- Do the collected responses meet your objectives?
- Is the flow length appropriate?
- Did test contacts identify any overlooked elements?
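One way to surface unclear questions or weak response categorization is to rank flow steps by how often responses land in the “Other” category. The sketch below assumes a hypothetical results export with step and category columns; substitute the field names your platform uses.

```python
# Minimal sketch: count "Other" categorizations per flow step to spot unclear questions.
# Assumed CSV columns: step, category (hypothetical export format).
import csv
from collections import Counter, defaultdict

def other_rate_by_step(path):
    totals = defaultdict(int)
    others = Counter()

    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            step = row["step"]
            totals[step] += 1
            if row["category"].strip().lower() == "other":
                others[step] += 1

    # Rank steps by the share of responses that fell into "Other".
    ranked = sorted(totals, key=lambda s: others[s] / totals[s], reverse=True)
    for step in ranked:
        share = others[step] / totals[step]
        print(f"{step}: {others[step]}/{totals[step]} responses in Other ({share:.0%})")

if __name__ == "__main__":
    other_rate_by_step("pilot_results.csv")
```

Steps at the top of this ranking are usually the ones whose wording or answer choices need revision before you scale.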
Best Practices
- Maintain neutrality. If participants ask questions, respond with, “What’s your best guess?” Avoid leading them. If a contact struggles, decide whether to offer a hint or conclude the test.
- Focus observations equally on productive and unproductive paths. Observers should document actions and verbal feedback in detail. Deeper insight into SMS behavior enhances program effectiveness.
- Measure both performance and preferences. These do not always align, especially regarding mobile interactions. Contacts may perform poorly despite high subjective ratings, or vice versa.
  - Performance metrics: completion rate, time to completion, errors (e.g., “other” responses), opt-outs, etc. (See the sketch after this list.)
  - Subjective metrics: self-reported satisfaction and comfort levels.
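A short script can pull the performance side of these metrics, completion rate and time to completion, out of an exported run log. The file name and the exit_type, created_on, and exited_on columns are assumptions here, not a guaranteed export format; map them to whatever your platform produces.

```python
# Minimal sketch: compute pilot performance metrics from a hypothetical run export.
# Assumed CSV columns: contact, exit_type ("completed", "expired", or "interrupted"),
# created_on, exited_on (ISO 8601 timestamps; exited_on may be empty for active runs).
import csv
from datetime import datetime
from statistics import mean

def pilot_performance(path):
    with open(path, newline="", encoding="utf-8") as f:
        runs = list(csv.DictReader(f))

    completed = [r for r in runs if r["exit_type"] == "completed"]
    durations_min = [
        (datetime.fromisoformat(r["exited_on"]) - datetime.fromisoformat(r["created_on"])).total_seconds() / 60
        for r in completed
        if r["exited_on"]
    ]

    print(f"Runs started:        {len(runs)}")
    print(f"Completion rate:     {len(completed) / len(runs):.0%}")
    if durations_min:
        print(f"Avg. time to finish: {mean(durations_min):.1f} minutes")

if __name__ == "__main__":
    pilot_performance("pilot_runs.csv")
```

Pair these numbers with the subjective ratings gathered by your facilitators; as noted above, the two do not always agree.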
Pilot Evaluation
After completing a pilot, you should be able to answer:
- Was the test group’s overall reaction positive or negative? Feedback helps determine program suitability and whether minor adjustments are needed.
- Are time and resources allocated effectively? The pilot may reveal needs for changes in engagement methods, flow length, or timing.
- Does the evaluation strategy require refinement? Use this opportunity to assess metrics collection. Collaboration between evaluation and implementation teams can address logistical issues before scaling.
- Are you prepared to scale? The pilot highlights potential challenges during larger implementations and ensures your team can manage issues associated with expansion. This depends largely on the answers to the above questions.