In the context of A/B testing experiments, statistical significance measures how likely it is that the difference between your experiment's baseline version and test version reflects a real effect rather than random chance.
Statistical significance depends on two variables, counted for both the baseline and variant experiences:
The number of sessions.
The number of conversions.
A successful A/B test answers the question of how confident you can be in its results. For example, if your test reaches a 95% significance level, you can be 95% confident that the observed difference is real and not the result of random chance.
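For illustration, here is a minimal sketch of how those two inputs can be turned into a significance figure using a standard two-sided, two-proportion z-test. This is an assumption about the method, not the dashboard's actual calculation; the function name and example numbers are hypothetical.

```python
import math

def significance(sessions_a, conversions_a, sessions_b, conversions_b):
    """Two-sided, two-proportion z-test, returned as a significance percentage.

    Hypothetical helper for illustration; not the dashboard's actual method.
    """
    p_a = conversions_a / sessions_a          # baseline conversion rate
    p_b = conversions_b / sessions_b          # variant conversion rate
    # Pooled rate under the null hypothesis that the two rates are equal.
    pooled = (conversions_a + conversions_b) / (sessions_a + sessions_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sessions_a + 1 / sessions_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return (1 - p_value) * 100

# Example: 5,000 sessions each; baseline converts 250, variant converts 300.
print(f"{significance(5000, 250, 5000, 300):.1f}% significant")  # ~97.2%
```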
In our A/B testing dashboard, you can view the statistical significance details of a testing experience by clicking the three-dot menu at the far right of the experience and selecting View Statistical Significance Details.
From there, a pop-up appears on the right showing a breakdown of the Weight, Conversion Rate, Traffic (Sessions Served), Current Significance, and Required Sessions To Achieve % Significance.
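The Required Sessions figure answers a related planning question: how much traffic each experience needs before a lift of a given size can reach significance. As a hedged sketch, the standard two-proportion sample-size formula looks like this; the dashboard's own Required Sessions estimate may be derived differently, and the helper name and default parameters below are assumptions.

```python
import math
from statistics import NormalDist

def required_sessions(baseline_rate, relative_lift, alpha=0.05, power=0.8):
    """Approximate sessions needed per experience to detect a relative lift.

    Hypothetical sketch of the standard two-proportion sample-size formula;
    the dashboard's Required Sessions figure may be derived differently.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% significance
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Example: 5% baseline conversion rate, detecting a 10% relative lift.
print(required_sessions(0.05, 0.10))  # roughly 31,000 sessions per experience
```

Note that smaller baseline rates and smaller lifts both push the required traffic up quickly, which is why low-traffic experiences can take a long time to reach significance.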