I recently launched a few A/B tests on our web pages, testing changes to page layout and copy. The changes are quite noticeable if you're a repeat visitor.
However, what I've seen in the data over the past few weeks is that the confidence level fluctuates a lot as we collect more data. For one test running on a low-to-medium-volume page, none of the challenger experiences has reached statistical significance, either winning or losing against the control.
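For context on why low volume keeps significance out of reach, here's a minimal sketch of the two-proportion z-test that most A/B testing dashboards compute under the hood. All the numbers below are hypothetical, not from my actual test:

```python
from math import sqrt, erf

def confidence_level(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided confidence (1 - p-value) that the rates differ."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return 1 - p_value

# Same underlying lift (~10% vs ~11%), checked at two traffic levels:
print(confidence_level(100, 1000, 110, 1000))      # low volume: low confidence
print(confidence_level(1000, 10000, 1100, 10000))  # 10x volume: much higher
```

The same observed lift that looks inconclusive at 1,000 visitors per arm can clear 95% at 10,000, which is why confidence bounces around so much early in a low-traffic test.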
I want to ask the community: what confidence threshold do you use for your web experiments? And do you have a specific use case where you would take a calculated risk (say, at 85% confidence) and still implement the experience on the page?