Hey Target community,
I recently launched a few A/B tests on our web pages, changing page layout and copy. The changes are pretty noticeable if you're a repeat visitor to the page.
However, what I've seen from the data over the past few weeks is that the confidence level fluctuates a lot as we collect more data. For one test running on a low-to-medium-volume page, none of the challenger experiences have reached statistical significance, either as a win or a loss against the control experience.
I want to ask the community: what confidence threshold do you use for your web experiment initiatives? Do you have a specific use case where you would take a calculated risk (say, at 85% confidence) and still implement the experience on the page?
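For reference, here's roughly the math I'm looking at. This is a minimal sketch of a standard two-proportion z-test, which is what most testing tools' "confidence" numbers boil down to; the visitor and conversion counts are made-up placeholders, not our real data:

```python
# Minimal sketch: the two-proportion z-test behind a typical A/B "confidence" number.
# All counts below are made-up placeholders.
from scipy.stats import norm

def ab_confidence(conv_a, n_a, conv_b, n_b):
    """Two-sided confidence that challenger B's rate differs from control A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se                                 # test statistic
    p_value = 2 * (1 - norm.cdf(abs(z)))                 # two-sided p-value
    return 1 - p_value                                   # "confidence" = 1 - p

# Placeholder numbers for a low-volume page: control vs. challenger
print(ab_confidence(conv_a=48, n_a=1200, conv_b=63, n_b=1180))  # ~0.88
```

With these placeholder counts the confidence comes out just under 88%: above an 85% bar but below the usual 95% one, which is exactly the gray zone I'm asking about.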
Any thoughts or inputs are much appreciated.
LI
Hi Li, "Nine Common A/B Testing Pitfalls and How to Avoid Them" and "Plan an A/B Test" are good resources to consider while planning your tests.
Thank you. These are great resources. I'll give them a read for sure.