A/B Test: Optimise Your Experience Performance
The A/B Test step allows you to test how effective your Experience is before rolling it out to everyone. It's a powerful way to learn what works—whether you're experimenting with design, copy, or timing.
How It Works
Set the percentage of qualifying users who will see your Experience. The rest won’t see anything, allowing you to compare performance and understand the impact.
For example:
At 50%, half of your target audience will see the Experience, and half won’t.
This gives you a clean control group to measure uplift or behavioural changes against.
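To make the split mechanics concrete, here is a minimal sketch of how percentage-based bucketing is commonly implemented. This is an illustration, not the platform's actual logic; the function name and the hash-based approach are assumptions. Hashing the user and Experience IDs together keeps each visitor's assignment stable, so the same person always sees (or never sees) the Experience on repeat visits.

```python
import hashlib

def in_test_group(user_id: str, experience_id: str, percentage: int) -> bool:
    """Deterministically decide whether a user falls in the test group.

    Hypothetical sketch: hash the user and Experience IDs together so the
    assignment is stable across visits, then map the hash to a 0-99 bucket.
    """
    digest = hashlib.sha256(f"{experience_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # evenly distributed over 0..99
    return bucket < percentage

# At a 50% split, roughly half of a large audience lands in the test group;
# the rest form the control group that sees nothing.
assignments = [in_test_group(f"user-{i}", "exp-1", 50) for i in range(1000)]
print(sum(assignments))  # close to 500, but not exactly, due to hashing
```

Deterministic bucketing (rather than a fresh coin flip per page view) is what makes the control group clean: no user drifts between groups mid-test.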
Best Practices for Meaningful Results
Just because one version seems to be performing better doesn’t mean it's truly better. There are a few key things to keep in mind:
Use a 50/50 Split Where Possible
A balanced 50:50 split gives the most reliable results. Uneven splits (like 80:20) can make one version appear better due to randomness rather than real performance differences.
Ensure You Have Enough Visitors
If only a small number of users see the Experience, it’s tough to draw meaningful conclusions.
Tip: For lower-traffic or very targeted Experiences, consider running the Experience at 100% for a set period, then turning it off for the same duration. You can compare performance over time even without a strict A/B test.
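The "enough visitors" point can be made concrete with a standard two-proportion z-test. This is a generic statistical sketch, not the calculation the Experience Report itself uses; the function and the sample figures are illustrative assumptions. It shows how the same observed conversion rates can be inconclusive at low traffic yet significant at higher traffic.

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates.

    Standard pooled two-proportion z-test; assumes samples are large enough
    for the normal approximation to hold.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 12% vs 10% conversion over 100 visitors per group: not significant.
small = two_proportion_p_value(12, 100, 10, 100)    # p well above 0.05
# The same rates over 10,000 visitors per group: significant.
large = two_proportion_p_value(1200, 10000, 1000, 10000)  # p below 0.05
print(small, large)
```

This is why a seemingly "winning" variant on a small or short-lived test should not be trusted: the apparent uplift can easily be noise.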
Let It Run Long Enough
Short tests can be misleading. Give your A/B test time to account for variations like time of day, day of week, and marketing activity.
Reporting Differences
The Experience Report you see depends on whether an A/B test was run:
- If A/B testing is enabled, you'll see side-by-side performance comparisons for the test and control groups.
- If A/B testing is off, you'll see standard engagement metrics.
Learn more about Experience Reporting and A/B Test Results
Reminder: A/B Testing only applies to users who meet your Experience’s targeting rules. Users outside the rule set won’t be included in the test.
Next: Preview and publish your Experience