A/B Testing: How to Use the “Split Randomly” Action

A/B testing (also called split testing) helps you compare two or more versions of a message, flow, or campaign to see which performs better against a defined objective. In RapidPro, the Split Randomly action lets you route contacts evenly into different paths or versions of a flow. Combined with live testing on real channels, A/B testing supports data-driven decisions before scaling. This article explains what A/B testing is, how to define objectives and hypotheses, how to structure tests so they remain reliable (test one variable at a time, use representative samples, and run tests long enough), how to create flow versions, how to route contacts between them using Split Randomly, and how to evaluate results using metrics such as completion rate, opt-out rate, and data quality.

Quick Setup Checklist

Follow these steps to run a clean A/B test using Split Randomly and compare outcomes reliably.

  1. Define the objective and success metric
  2. Write a hypothesis you can validate
  3. Choose one variable to test
  4. Create versions of the flow
  5. Route contacts using Split Randomly
  6. Run the test on live channels with representative samples
  7. Compare results using the same timeframe and audience size
  8. Troubleshoot common A/B testing issues

1. Understand What A/B Testing Measures

A/B testing (also known as split testing) allows you to compare two or more versions of a message, flow, or campaign to determine which performs better against a defined objective.

In RapidPro, you can use the Split Randomly action to route contacts evenly through different versions of a flow. Combined with real-world testing on live channels, this approach helps you make data-driven decisions before scaling.

Tip: While the simulator is useful during development, real messages sent to real contacts are essential for reliable A/B test results.

2. Define Your Objective and Success Metrics

Start by clearly identifying what success looks like. Without a clear objective, A/B test results are difficult to interpret.

Examples of objectives include:

  • Distributing information effectively
  • Collecting accurate data
  • Increasing flow completion
  • Reducing opt-outs

Important: Choose one primary metric (for example, completion rate or opt-out rate) to decide which version performed better.
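
If it helps to make the metric concrete, the sketch below shows one way to express the two example metrics as simple calculations. The counts, function names, and figures are illustrative only and are not part of RapidPro.

    # Illustrative metric definitions for an A/B test (not RapidPro code).
    def completion_rate(entered: int, completed: int) -> float:
        """Share of contacts who entered the flow and reached the end."""
        return completed / entered if entered else 0.0

    def opt_out_rate(entered: int, opted_out: int) -> float:
        """Share of contacts who opted out while in the flow."""
        return opted_out / entered if entered else 0.0

    # Example: 500 contacts entered a version, 380 completed, 12 opted out.
    print(f"Completion rate: {completion_rate(500, 380):.0%}")  # 76%
    print(f"Opt-out rate: {opt_out_rate(500, 12):.1%}")         # 2.4%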

3. State a Hypothesis and Choose One Variable

Decide which version you expect to perform better and why. Then choose a single variable to test so you can attribute results to a specific change.

Example hypothesis: “A shorter flow will have a higher completion rate than a longer one.”

Common variables to test:

  • Flow length
  • Message length
  • Vocabulary or tone
  • Call-to-action wording
  • Engagement method (keyword-triggered vs scheduled)

Warning: Avoid testing too many variables at once. This makes results unreliable.

Important: If versions are too different, you won’t be able to identify which change caused the outcome.
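
Before building anything, it can help to write the objective, primary metric, hypothesis, and chosen variable down in one place so the whole team tests the same thing. The small record below is purely an illustrative planning aid; none of these fields exist in RapidPro.

    # Hypothetical test-plan record, used only for planning and documentation.
    from dataclasses import dataclass

    @dataclass
    class ABTestPlan:
        objective: str       # what success looks like
        primary_metric: str  # the one metric that decides the winner
        hypothesis: str      # which version you expect to win, and why
        variable: str        # the single thing that differs between versions
        versions: tuple      # names of the flow versions being compared

    plan = ABTestPlan(
        objective="Increase flow completion",
        primary_metric="completion rate",
        hypothesis="A shorter flow will have a higher completion rate than a longer one.",
        variable="flow length",
        versions=("Survey (long)", "Survey (short)"),
    )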

4. Create Flow Versions

To create a new version of a flow:

  1. Open the flow
  2. Click the ☰ menu icon
  3. Select Copy
  4. Rename and edit the duplicated flow

[CAPTURE: Flow editor menu with the Copy option selected.]

5. Route Contacts with Split Randomly

After creating your versions, use a Split Randomly action to distribute contacts across the alternatives.

Best practice:

  • Split your audience evenly
  • Keep both paths identical except for the one variable you are testing

[CAPTURE: Split Randomly action with two paths distributing contacts evenly.]

[CAPTURE: Flow where a Split Randomly node leads to two different flow versions.]
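
If you want a feel for what an even random split produces, the short simulation below assigns a batch of hypothetical contacts to two versions with equal probability. It is a conceptual sketch only, not how RapidPro implements the action, and the contact names are made up.

    # Conceptual simulation of an even two-way random split (not RapidPro internals).
    import random
    from collections import Counter

    contacts = [f"contact-{i}" for i in range(1000)]  # hypothetical contacts
    assignment = {c: random.choice(["Version A", "Version B"]) for c in contacts}

    print(Counter(assignment.values()))
    # Counts will be close to 500/500, but rarely exactly equal.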

6. Run the Test on Live Channels

For reliable results, run your A/B test using real messages sent to real contacts on a live channel.

Tip: Let tests run long enough to collect meaningful data, especially when response timing varies.

Ensure each version reaches a similar, representative sample of contacts so results reflect your target population.
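
How long is "long enough" depends on how small a difference you need to detect. The sketch below uses a common rule of thumb for comparing two proportions (roughly 80% power at a 5% significance level); treat the result as a rough planning figure, not a strict requirement.

    # Rough per-version sample size for detecting a difference between two rates.
    # Rule of thumb: n ≈ 16 * p * (1 - p) / delta^2 (about 80% power, 5% significance).
    def rough_sample_size(baseline_rate: float, detectable_difference: float) -> int:
        p = baseline_rate
        return round(16 * p * (1 - p) / detectable_difference ** 2)

    # Example: baseline completion rate of 70%, want to detect a 10-point change.
    print(rough_sample_size(0.70, 0.10))  # about 336 contacts per version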

7. Evaluate Results and Choose a Winner

Judge the winner using the primary metric you defined in your objective (for example, completion rate, opt-out rate, or data quality).

Completion rate

RapidPro displays a completion rate for each flow as a percentage: the share of contacts who entered the flow and completed it successfully.

Comparing completion rates is often the simplest and most reliable way to assess effectiveness.
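
Once the counts are in, a quick way to compare the two versions is to compute each completion rate and check whether the difference is larger than chance alone would explain. The sketch below uses made-up counts and a simple two-proportion z-test; for small samples or borderline results, a proper statistics library is a safer choice.

    # Compare completion rates for two flow versions (illustrative counts).
    from math import sqrt, erfc

    def compare_completion(entered_a, completed_a, entered_b, completed_b):
        rate_a = completed_a / entered_a
        rate_b = completed_b / entered_b
        # Pooled two-proportion z-test for the difference between the rates.
        pooled = (completed_a + completed_b) / (entered_a + entered_b)
        se = sqrt(pooled * (1 - pooled) * (1 / entered_a + 1 / entered_b))
        z = (rate_a - rate_b) / se
        p_value = erfc(abs(z) / sqrt(2))  # two-sided
        return rate_a, rate_b, p_value

    rate_a, rate_b, p = compare_completion(512, 389, 498, 341)
    print(f"Version A: {rate_a:.0%}, Version B: {rate_b:.0%}, p = {p:.3f}")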

[CAPTURE: Flow Results displaying run counts and completion percentages for two flow versions.]

Important: Always compare results using the same timeframe and audience size.

Tip: Even small improvements can have a large impact when scaled to thousands of contacts.
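
If you prefer to pull run counts programmatically instead of reading them from the Flow Results page, a sketch along the lines below may help. It assumes your workspace exposes the standard RapidPro REST endpoint /api/v2/flows.json, that you have an API token, and that the response includes a "runs" object with per-status counts; verify the exact field names in your workspace's API explorer, since they can differ between versions.

    # Sketch: fetch run counts per flow via the RapidPro REST API.
    # Assumes /api/v2/flows.json and an API token; field names ("runs",
    # "completed", ...) should be verified against your workspace's API docs.
    import requests  # third-party package

    HOST = "https://app.rapidpro.io"  # replace with your workspace host
    TOKEN = "your-api-token"          # hypothetical placeholder

    resp = requests.get(
        f"{HOST}/api/v2/flows.json",
        headers={"Authorization": f"Token {TOKEN}"},
    )
    resp.raise_for_status()

    for flow in resp.json()["results"]:  # pagination omitted for brevity
        runs = flow.get("runs") or {}
        entered = sum(runs.values())
        completed = runs.get("completed", 0)
        rate = completed / entered if entered else 0.0
        print(f"{flow['name']}: {completed}/{entered} completed ({rate:.0%})")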

Common Issues

My A/B test results are hard to interpret

Cause: The objective or success metric was not clearly defined, or multiple variables were changed at once.

Fix: Define one primary metric (e.g., completion rate), test one variable at a time, and keep the rest identical.

One version received much more traffic than the other

Cause: The split was not configured evenly, or contacts were routed by additional logic after the split.

Fix: Use Split Randomly with equal distribution and avoid adding additional filters that bias one path.
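
If you are unsure whether an observed imbalance is larger than random variation would explain, a quick sanity check like the one below can help. The counts are illustrative, and the normal approximation is rough.

    # Rough check: is the observed traffic consistent with a 50/50 random split?
    from math import sqrt, erfc

    def split_imbalance_p_value(count_a: int, count_b: int) -> float:
        n = count_a + count_b
        # Normal approximation to a binomial test against probability 0.5.
        z = (count_a - n / 2) / sqrt(n * 0.25)
        return erfc(abs(z) / sqrt(2))  # two-sided p-value

    # Example: 612 contacts went down path A and only 388 down path B.
    print(split_imbalance_p_value(612, 388))  # tiny p-value: unlikely to be chance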

Completion rates look different, but the test sizes are not comparable

Cause: Results were compared across different time windows or unequal audience sizes.

Fix: Compare the same timeframe and ensure both groups have similar sample sizes before concluding.

Contacts behave differently in production than in the simulator

Cause: Simulator testing does not reproduce real-world timing, carrier constraints, or natural contact behavior.

Fix: Run A/B tests on live channels with representative contacts and allow enough time for natural engagement.

Results improved, but I’m not sure the change is worth scaling

Cause: The improvement may be small or inconsistent across segments.

Fix: Validate against your hypothesis, consider operational impact, and rerun tests if needed before scaling broadly.
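
To decide whether a modest improvement justifies scaling, it can help to translate the rate difference into absolute numbers for the audience you actually plan to reach. A tiny arithmetic sketch with made-up figures:

    # Translate a completion-rate improvement into absolute completions at scale.
    baseline_rate = 0.68    # completion rate of the current version (illustrative)
    improved_rate = 0.74    # completion rate of the winning version (illustrative)
    audience_size = 20_000  # contacts you plan to reach after scaling

    extra_completions = round((improved_rate - baseline_rate) * audience_size)
    print(f"Roughly {extra_completions} additional completed runs")  # about 1,200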