Randomly Distribute Contacts for A/B Testing

Set up a clean A/B test in minutes

If you just need the essentials, use this as your fast path:

  1. Define what you’re testing and the success metric
  2. Add Split Randomly and name categories clearly
  3. Build each variant branch (keep differences minimal)
  4. Track assignment (group, field, or label) for analysis
  5. Review results in analytics and keep the winner

You’re done. You now have a measurable experiment you can iterate safely.

Step-by-Step Process

1
Decide what you want to test

Common A/B test setups include:

  • Two versions of a welcome message (A vs B)
  • Different question wording
  • Different send times or reminder logic
  • Different incentive messages (where permitted)

[CAPTURE: A simple flow map showing two branches labeled “A” and “B”.]

💡
Tip: Define one success metric before you start (completion rate, opt-out rate, conversion, response accuracy). It makes results easier to interpret later.

2
Add a “Split Randomly” node

  1. Open your flow in the flow editor.
  2. Add a new split node and select Split Randomly.
  3. Choose the number of branches you want (for example, 2 for A/B).
  4. Name each category clearly (example: Variant A, Variant B).
  5. Save the split.

[CAPTURE: Split Randomly configuration showing two evenly distributed categories named “Variant A” and “Variant B”.]

⚙️
Technical Detail: Split Randomly assigns each contact to a branch at random and independently, so branches fill in approximately equal proportions (e.g., roughly 50/50 for two branches). Expect small deviations from an exact split, especially with few contacts.
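Conceptually, the randomized assignment works like drawing a branch name at random for each contact. The sketch below is purely illustrative of that behavior, not the platform's actual implementation (which may, for example, use deterministic hashing):

```python
import random

def split_randomly(contact_id: str, branches: list[str]) -> str:
    """Assign a contact to one branch uniformly at random.

    Illustrative sketch only -- the real Split Randomly node's
    internals are not documented here.
    """
    return random.choice(branches)

# Over many contacts, each branch receives roughly equal shares.
assignments = [
    split_randomly(f"contact-{i}", ["Variant A", "Variant B"])
    for i in range(10_000)
]
share_a = assignments.count("Variant A") / len(assignments)
print(f"Variant A share: {share_a:.2%}")  # close to 50%, with random variation
```

Run it a few times: the share hovers near 50% but is rarely exactly 50%, which is why small tests can look lopsided.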

3
Build the actions inside each branch

For each randomized category:

  1. Add the messages/actions you want that variant to run.
  2. Keep differences limited to what you’re testing (so results stay meaningful).
  3. Continue the flow or end the branch depending on your design.

[CAPTURE: Flow showing Split Randomly branching into A and B, each connected to different Send Message nodes.]

💡
Tip: If you want each branch to run a full separate journey, you can start a dedicated flow per branch (e.g., “A Flow” and “B Flow”) and keep the test logic isolated.

4
Track assignment for analysis (optional)

To track which branch a contact received, add one of the following inside each branch:

  • Add contact to a group (e.g., “AB Test – A”, “AB Test – B”)
  • Save a contact field (e.g., experiment_variant = A/B)
  • Add labels where relevant (depending on your reporting setup)

[CAPTURE: Branch A showing “Add to Group: AB Test – A”, Branch B showing “Add to Group: AB Test – B”.]

⚠️
Warning: If you don’t store the variant assignment anywhere, you won’t be able to tell which variant each contact received, and post-test analysis becomes guesswork. Plan tracking before launching.
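Once assignments are stored, comparing variants is a simple group-by over your contact export. Here is a minimal sketch assuming a CSV export with (hypothetical) columns named `experiment_variant` and `completed`; adjust the column names to match your actual export:

```python
import csv
from collections import Counter

def completion_by_variant(path: str) -> dict[str, float]:
    """Compute the completion rate per variant from an exported contact CSV.

    Assumes hypothetical columns 'experiment_variant' (e.g. 'A'/'B')
    and 'completed' ('yes'/'no'); rename to fit your export format.
    """
    totals, completed = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            variant = row["experiment_variant"]
            totals[variant] += 1
            if row["completed"] == "yes":
                completed[variant] += 1
    return {v: completed[v] / totals[v] for v in totals}
```

The same logic works with groups or labels instead of a contact field, as long as each contact carries exactly one variant marker.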

5
Review results and iterate

  1. Let the test run until each branch has enough contacts for a meaningful comparison (small samples make random noise look like a real difference).
  2. Compare outcomes between Variant A and Variant B.
  3. Keep the best-performing version and retire the other branch.

[CAPTURE: Flow analytics view showing counts per category branch for the Split Randomly node.]
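"Enough contacts" can be checked with a standard two-proportion z-test. The sketch below uses only the Python standard library; it is a rough sanity check, not a substitute for planning sample size up front or using a proper stats library. The counts in the example are made up:

```python
from math import sqrt, erfc

def two_proportion_test(success_a: int, n_a: int,
                        success_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test comparing, e.g., completion rates of A and B.

    Returns (z, p_value). Small p (conventionally < 0.05) suggests
    the difference is unlikely to be pure chance.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal CDF
    return z, p_value

# Hypothetical results: 120/400 completions for A vs 90/400 for B.
z, p = two_proportion_test(120, 400, 90, 400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

If p is large, keep the test running (or accept that the variants perform about the same) rather than declaring a winner.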

Common Issues & Quick Fixes

Problem: My A/B test results look inconsistent

Fix: Make sure the branches differ only where intended (avoid incidental changes). Confirm contacts aren’t being interrupted by another Messaging flow mid-test. Store the variant assignment in a group or contact field for clean reporting.

Problem: I want uneven distribution (e.g., 80/20)

Fix: If Split Randomly only supports even distribution in your setup, create more buckets and combine them (example: 10 buckets → route 8 to Variant A logic and 2 to Variant B logic). Document your bucketing logic with a sticky note in the flow.
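The bucket-combining approach above amounts to a simple mapping from bucket number to variant. This sketch shows the routing logic for a 10-bucket, 80/20 split (bucket numbering here is an assumption for illustration):

```python
def bucket_to_variant(bucket: int, a_buckets: int = 8, total: int = 10) -> str:
    """Map a 1-based Split Randomly bucket number to a variant.

    With 10 even buckets, routing buckets 1-8 to Variant A and
    9-10 to Variant B yields an 80/20 distribution overall.
    """
    if not 1 <= bucket <= total:
        raise ValueError(f"bucket must be in 1..{total}")
    return "Variant A" if bucket <= a_buckets else "Variant B"
```

Because each bucket receives ~10% of contacts, the combined branches inherit exactly the ratio you document in your sticky note (8 buckets → ~80%, 2 buckets → ~20%).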

Problem: Contacts repeat the experiment

Fix: Add a Split by Group Membership at the start (e.g., checking an “Already Tested” group) and route those contacts out of the flow. Then, inside each branch, add contacts to the “Already Tested” group right after assignment.
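The guard-then-record pattern looks like this in code. It is a conceptual sketch of the flow logic (membership check first, record membership immediately after assignment), not a platform API:

```python
import random
from typing import Optional

def assign_once(contact_id: str, already_tested: set[str],
                branches: list[str]) -> Optional[str]:
    """Assign a variant only to contacts not yet in the test.

    Mirrors the flow: exit early if the contact is in the
    'Already Tested' group, otherwise assign and record them.
    """
    if contact_id in already_tested:
        return None  # contact exits the flow without a new assignment
    variant = random.choice(branches)
    already_tested.add(contact_id)  # record membership right after assignment
    return variant
```

Recording membership immediately after assignment (rather than at the end of the branch) ensures contacts who drop out mid-flow still can’t re-enter the experiment.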