Testing in Brightback
A core capability of Brightback is the ability to set up different tests across your entire cancel Audience or within a targeted Audience.
Brightback lets you take a statistical approach to establishing what works best in your cancel experience. We do this via a combination of test configuration within Brightback and statistical performance reporting, which is currently delivered offline. We are working to bring these performance reports in-app, but for the time being we will deliver them via a performance review you can schedule with your CSM.
Brightback supports a couple of different types of test "treatments." You can run a classic A/B test of two different Offers or Loss Aversion cards and monitor for a statistically significant winner. You can also run a random Offer test, where you select a few offer categories and let our models determine which one is most likely to Save a canceling user. Either type of test can be set up across your entire cancel population, generating a true random sample, or within a targeted audience (or set of audiences) that you have identified as requiring a different cancel Experience to be retained.
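To give a sense of what "statistically significant winner" means in an A/B test of two Offers, here is a minimal, illustrative sketch of a two-proportion z-test on save rates. This is a generic statistical technique, not Brightback's internal methodology; the sample counts are made up for the example.

```python
import math

def two_proportion_z(saves_a: int, n_a: int, saves_b: int, n_b: int) -> float:
    """Return the z-statistic comparing two save rates (pooled standard error)."""
    p_a = saves_a / n_a
    p_b = saves_b / n_b
    pooled = (saves_a + saves_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical example: Offer A saved 120 of 1,000 cancels, Offer B saved 90 of 1,000.
z = two_proportion_z(120, 1000, 90, 1000)
significant = abs(z) > 1.96  # 95% confidence threshold for a two-sided test
print(round(z, 2), significant)  # z ≈ 2.19, so the difference is significant at 95%
```

In practice your performance review with your CSM covers this analysis for you; the sketch is only meant to show why sample size and the gap between save rates both matter before declaring a winner.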
There are many use cases for split testing, but the three most popular that we see are:
- Random Offer Tests across Entire Audience
- Targeted Offer Tests within Audiences
- A/B Testing in Brightback
If you have any questions or would like help getting started with a test in Brightback, please contact your dedicated CSM or firstname.lastname@example.org and we will be happy to assist.