Updated on: 18/12/2025
Set up a clean A/B test in minutes
If you just need the essentials, use this as your fast path:
- Define what you’re testing and the success metric
- Add Split Randomly and name categories clearly
- Build each variant branch (keep differences minimal)
- Track assignment (group, field, or label) for analysis
- Review results in analytics and keep the winner
You’re done. You now have a measurable experiment you can iterate on safely.
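Under the hood, the checklist above amounts to a few lines of logic. Here is a minimal sketch in Python — every name (`run_experiment`, `respond`, and so on) is hypothetical, not the platform's API:

```python
import random

def respond(contact, variant):
    """Stand-in for your real success metric (e.g., a reply or a click)."""
    return random.random() < (0.30 if variant == "A" else 0.35)

def run_experiment(contacts, variants=("A", "B")):
    """Assign each contact to a variant at random, record the
    assignment, and tally successes per variant."""
    assignments = {}                      # contact -> variant (your group/field)
    outcomes = {v: 0 for v in variants}
    for contact in contacts:
        variant = random.choice(variants)  # the Split Randomly step
        assignments[contact] = variant     # track assignment for analysis
        if respond(contact, variant):      # run the variant, observe the metric
            outcomes[variant] += 1
    return assignments, outcomes

assignments, outcomes = run_experiment(range(1000))
winner = max(outcomes, key=outcomes.get)   # keep the best performer
```

The same shape holds in the flow builder: one random split, one tracked assignment per contact, one comparison at the end.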
Step-by-Step Process
Common A/B test setups include:
- Two versions of a welcome message (A vs B)
- Different question wording
- Different send times or reminder logic
- Different incentive messages (where permitted)
[CAPTURE: A simple flow map showing two branches labeled “A” and “B”.]
- Open your flow in the flow editor.
- Add a new split node and select Split Randomly.
- Choose the number of branches you want (for example, 2 for A/B).
- Name each category clearly (example: Variant A, Variant B).
- Save the split.
[CAPTURE: Split Randomly configuration showing two evenly distributed categories named “Variant A” and “Variant B”.]
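If it helps to reason about what the split actually does: an even random split over N named categories behaves like the sketch below (illustrative Python, not the builder's internals). Add more names to the list to model a 3-plus-way split.

```python
import random

categories = ["Variant A", "Variant B"]  # add more names for 3+ branches

def split_randomly(categories):
    """Each contact lands in exactly one category, with equal odds."""
    return random.choice(categories)

tallies = {c: 0 for c in categories}
for _ in range(10_000):
    tallies[split_randomly(categories)] += 1
# With 10,000 contacts and two categories, each tally lands near 5,000.
```

The distribution is only even in expectation — small samples will wobble, which is one reason to let the test run long enough.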
For each randomized category:
- Add the messages/actions you want that variant to run.
- Keep differences limited to what you’re testing (so results stay meaningful).
- Continue the flow or end the branch depending on your design.
[CAPTURE: Flow showing Split Randomly branching into A and B, each connected to different Send Message nodes.]
To track which branch a contact received, add one of the following inside each branch:
- Add contact to a group (e.g., “AB Test – A”, “AB Test – B”)
- Save a contact field (e.g., experiment_variant = A or experiment_variant = B)
- Add labels where relevant (depending on your reporting setup)
[CAPTURE: Branch A showing “Add to Group: AB Test – A”, Branch B showing “Add to Group: AB Test – B”.]
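Whichever mechanism you pick, the goal is the same: every contact carries a durable record of the variant they saw. A quick sketch of that idea (a Python dict standing in for the platform's contact fields and groups — both names are illustrative):

```python
import random

# Hypothetical contact records — in the platform this would be a saved
# contact field and a group membership, not a Python dict.
contacts = [{"id": i} for i in range(6)]

for contact in contacts:
    variant = random.choice(["A", "B"])
    contact["experiment_variant"] = variant       # contact-field style
    contact["groups"] = [f"AB Test – {variant}"]  # group style

# Later, analysis can filter cleanly by the stored assignment:
variant_a = [c for c in contacts if c["experiment_variant"] == "A"]
```

Without this step the random split still works, but you lose the ability to slice results by variant afterwards.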
- Let the test run until each branch has enough contacts for a meaningful comparison.
- Compare outcomes between Variant A and Variant B.
- Keep the best-performing version and retire the other branch.
[CAPTURE: Flow analytics view showing counts per category branch for the Split Randomly node.]
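"Compare outcomes" usually means comparing conversion rates per branch. A rough sketch — the counts below are made up; plug in the numbers from your analytics view:

```python
def conversion_rate(conversions, total):
    return conversions / total if total else 0.0

# Example numbers — replace with the per-category counts from analytics.
results = {
    "Variant A": {"sent": 512, "converted": 61},
    "Variant B": {"sent": 488, "converted": 83},
}

rates = {name: conversion_rate(r["converted"], r["sent"])
         for name, r in results.items()}
winner = max(rates, key=rates.get)
# A gap this wide (about 11.9% vs 17.0%) on ~500 contacts per branch is
# usually worth acting on; tiny gaps on small samples are mostly noise.
```

If the rates are close, keep the test running rather than declaring a winner early.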
Common Issues & Quick Fixes
Problem: My A/B test results look inconsistent
Fix: Ensure each branch is truly different only where intended (avoid extra changes). Confirm contacts aren’t being interrupted by another Messaging flow mid-test. Store the variant assignment in a group or contact field for clean reporting.
Problem: I want uneven distribution (e.g., 80/20)
Fix: If Split Randomly only supports even distribution in your setup, create more buckets and combine them (example: 10 buckets → route 8 to Variant A logic and 2 to Variant B logic). Document your bucketing logic with a sticky note in the flow.
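The bucketing trick is easier to trust once you see the arithmetic. A sketch of an 80/20 split built from ten even buckets (the routing rule is the assumption here, not a platform feature):

```python
import random

def weighted_split(contact_id):
    """Approximate an 80/20 split with ten even buckets:
    buckets 1–8 route to Variant A logic, buckets 9–10 to Variant B."""
    bucket = random.randint(1, 10)  # the 10-way even Split Randomly
    return "Variant A" if bucket <= 8 else "Variant B"

tallies = {"Variant A": 0, "Variant B": 0}
for cid in range(10_000):
    tallies[weighted_split(cid)] += 1
# Expect roughly 8,000 / 2,000 across 10,000 contacts.
```

Any ratio in tenths works the same way; for finer ratios, use more buckets.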
Problem: Contacts repeat the experiment
Fix: Add a Split by Group Membership at the start (e.g., “Already Tested”) and exit those contacts. Add contacts to an “Already Tested” group after assignment.
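The guard-then-tag pattern looks like this in plain logic (a Python set standing in for the "Already Tested" group; `assign_once` is a hypothetical name):

```python
import random

already_tested = set()  # stands in for the "Already Tested" group

def assign_once(contact_id):
    """Skip contacts who already went through the experiment,
    mirroring a Split by Group Membership guard at the flow start."""
    if contact_id in already_tested:
        return None                    # exit the flow early
    variant = random.choice(["A", "B"])
    already_tested.add(contact_id)     # add to the group after assignment
    return variant

first = assign_once(42)
second = assign_once(42)  # same contact re-enters the flow
# first is "A" or "B"; second is None — no repeat assignment.
```

The order matters: check membership before the split, and add to the group immediately after assignment, so a contact can never be randomized twice.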
