You’ve already started your first Conversion Optimization Test using the TrustBox Optimizer. This guide explains how to monitor and interpret your results.
If you’re not using TrustBox Optimizer yet, check out this guide on how to start a conversion optimization test.
How the TrustBox Optimizer works
When you start a Conversion Optimization Test using the TrustBox Optimizer, we record the total number of visitors who see each of your TrustBox variations (or no TrustBox), along with each group's conversion rate. Using Bayesian statistics, we compute the probability that each variation will outperform the others in the long term, expressed as a percentage between 0% and 100%.
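To make the idea concrete, here is a minimal Monte Carlo sketch of a "chance to win" calculation: draw conversion rates for each variation from a Beta posterior and count how often one beats the other. This is an illustration only, not TrustBox's actual implementation; the function name, the uniform Beta(1, 1) prior, and all numbers are assumptions.

```python
import random

def chance_to_win(conv_a, n_a, conv_b, n_b, samples=100_000, seed=42):
    """Estimate P(variation B beats variation A) by sampling both
    conversion rates from Beta posteriors (uniform Beta(1, 1) prior)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        # Posterior for each variation: Beta(conversions + 1, misses + 1)
        rate_a = rng.betavariate(conv_a + 1, n_a - conv_a + 1)
        rate_b = rng.betavariate(conv_b + 1, n_b - conv_b + 1)
        if rate_b > rate_a:
            wins += 1
    return wins / samples

# Hypothetical numbers: 1,000 visitors per group, 30 vs. 42 conversions
print(f"{chance_to_win(30, 1000, 42, 1000):.0%}")
```

With small samples the estimate hovers near 50% even when the variations genuinely differ, which mirrors why the report withholds conclusions until enough data has been collected.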
During the first week of your test, or if the level of traffic is very low (under 200 visits), we won’t show a detailed report. It’s too early to draw any conclusions from such a small data sample. Once there is enough data, you’ll be able to see a detailed daily report of your test results. The longer you let the test run, the more data we can collect and the more accurate your results will be. We recommend a test period of eight weeks.
As soon as you correctly implement the code on your site, we'll start collecting data for your report. If you tested the TrustBox code in your dev environment, or forgot to implement the conversion tracking code when you first went live, restart the test once the code is working as expected on your live site. You can do this from the settings page by clicking Save and restart.
Interpreting your results
The detailed report is generated daily, and is a snapshot of the data we’ve collected so far. The longer you run the test and the more data we collect, the more accurate the results will be.
There are four key data points to consider:
- Chance to win
- Potential loss
- Conversion range
- Estimated revenue
Depending on these key data points, you can implement the winning variation or continue testing to find a more optimal strategy. Here’s a breakdown of what each means:
Chance to win
Chance to win represents the probability of the tested variation outperforming the other variations in the test. There are three possible outcomes:
- Around 50-60% means the tested variations perform very similarly, and either could work equally well for your business.
- Above 70% means there is a difference between the variations, and one has a higher chance of outperforming the others over time.
- Above 90% means there's a high probability that one variation will outperform the others. That variation can be declared the winner.
Potential loss
Potential loss represents the conversion rate you could lose by choosing one variation over another. The lower the percentage, the safer your choice.
Use Potential loss in combination with Chance to win to decide when to end the test. When a variation's Chance to win is high and its Potential loss is low enough to sit within your comfort zone, choose that variation as the winner.
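Potential loss can be thought of as the expected conversion rate you give up by committing to one variation: average how much the rejected variation beats the chosen one across posterior draws, counting ties and losses as zero. A hedged sketch, again assuming Beta posteriors with a uniform prior (an illustration, not TrustBox's actual computation):

```python
import random

def potential_loss(conv_chosen, n_chosen, conv_other, n_other,
                   samples=100_000, seed=7):
    """Expected conversion rate given up by implementing the chosen
    variation: the average of max(other - chosen, 0) over posterior
    draws from Beta(conversions + 1, misses + 1) distributions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        rate_chosen = rng.betavariate(conv_chosen + 1,
                                      n_chosen - conv_chosen + 1)
        rate_other = rng.betavariate(conv_other + 1,
                                     n_other - conv_other + 1)
        total += max(rate_other - rate_chosen, 0.0)
    return total / samples

# Hypothetical numbers: keeping the 42/1,000 variation over the 30/1,000 one
print(f"{potential_loss(42, 1000, 30, 1000):.4f}")
```

Choosing the stronger variation yields a small expected loss, while choosing the weaker one yields a much larger one, which is why a low Potential loss paired with a high Chance to win signals a safe stopping point.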
Conversion range
Conversion range shows the range within which your actual conversion rate lies. A very wide range, or a large overlap between the variations' ranges, may indicate that we need more data to detect a difference between the variations in the test. You can switch the report's graph section to show the conversion range and the overlap between the variations.
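A conversion range of this kind can be sketched as a central credible interval taken from the same Beta posterior. The code below is a simplified stand-in with hypothetical numbers, not TrustBox's reported range:

```python
import random

def conversion_range(conversions, visitors, level=0.95,
                     samples=100_000, seed=11):
    """Central credible interval for the true conversion rate, taken
    from a Beta(conversions + 1, misses + 1) posterior (uniform prior)."""
    rng = random.Random(seed)
    draws = sorted(
        rng.betavariate(conversions + 1, visitors - conversions + 1)
        for _ in range(samples)
    )
    tail = (1.0 - level) / 2.0
    low = draws[int(tail * samples)]
    high = draws[int((1.0 - tail) * samples) - 1]
    return low, high

# Hypothetical numbers: 30 conversions out of 1,000 visitors
low, high = conversion_range(30, 1000)
print(f"{low:.1%} - {high:.1%}")
```

Rerunning with ten times the traffic produces a noticeably narrower interval, which is the statistical reason longer tests give sharper results.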
Estimated revenue
If you're also tracking revenue in the conversion script, you will find Estimated Revenue in the detailed report. This number shows what your monthly revenue could be if you ran each variation at 100% visibility. The projected figures are derived from all the revenue data we have collected, combined with the relative increase in conversion rate. Use this as an additional indicator when deciding which TrustBox variation to keep on your site.
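The shape of such a projection can be illustrated with simple arithmetic: scale monthly traffic by each variation's conversion rate and an average order value. All figures below are hypothetical, and TrustBox's own estimate is based on the revenue data it has actually collected:

```python
def projected_monthly_revenue(monthly_visitors, conversion_rate,
                              avg_order_value):
    """Project monthly revenue if one variation received 100% of traffic,
    assuming the conversion rate and average order value hold steady."""
    return monthly_visitors * conversion_rate * avg_order_value

# Hypothetical numbers: 20,000 visitors/month, average order value of 80
baseline = projected_monthly_revenue(20_000, 0.030, 80)       # about 48,000
with_trustbox = projected_monthly_revenue(20_000, 0.042, 80)  # about 67,200
print(round(with_trustbox - baseline))
```

The gap between the two projections is what makes Estimated Revenue a useful tiebreaker when Chance to win alone doesn't settle the decision.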