We are currently running an A4T A/B test. The test launched on August 8th, but no data was coming into Analytics or Target until August 20th, the day the issue was fixed. We are now seeing data in both Analytics and Target. We had also asked the Adobe Client Care team to provide us the data while it was not flowing into Target, so we now have data from three different sources: Analytics, Target, and the backend database (provided by Client Care). I know the database data will not match Analytics and Target exactly, but they should at least show the same trend. Target shows B as the winner, whereas Analytics shows A as the winner, and I'm not sure which platform to trust. Has anyone been in a similar situation? I'd love to hear your thoughts/recommendations. We could stop the test and start with a clean slate, but then again, what's the guarantee that the data will match?
I have a simple rule: "Always trust Target," because Analytics cannot count un-stitched data hits. See: Minimizing inflated visit and visitor counts in A4T. It's also really easy to accidentally inflate numbers in a custom Analytics report when you think you are looking at xyz but are really asking for abc. So if you got the "database data" from the Target team, I'd go with that. However, as you point out, it's best to start clean.
If this satisfies your query, please like, mark as helpful, and mark as answer. Otherwise, let's keep the conversation going. Hope you have a wonderful day.
So here's my problem with Target: the conversion rates for A and B based on visits differ from what I see in Analytics for our overall website conversion.
The directionality is different, but so are the conversion numbers: Target shows a higher conversion rate than Analytics, and the Analytics number matches our overall website conversion. Also, the confidence level for the test is low, around 76%, and it keeps changing drastically. It reached 98% in the first couple of days but has been very low since. So we have two issues here: directionality between platforms and the confidence level. Even if I trust the Target data, the confidence level is low. What is your recommendation for such a situation? If we restart the test and see B winning but the confidence level is still low, we will stop the test. How much time should such a test be given before you decide to quit based on confidence level?
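On the "how long to run it" question, one way to reason about it is to work out the required sample size up front rather than watching the confidence number day by day (early swings like 98% then 76% are expected when samples are small). Below is a minimal Python sketch assuming the confidence is computed as something like a two-proportion z-test; Target's actual methodology may differ, and the function names and numbers here are illustrative, not Adobe's.

```python
import math
from statistics import NormalDist

_N = NormalDist()  # standard normal, used for CDF and inverse CDF

def confidence_b_beats_a(conv_a, visits_a, conv_b, visits_b):
    """One-sided confidence that B's conversion rate exceeds A's,
    via a pooled two-proportion z-test (an assumption, not Target's
    documented formula)."""
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    return _N.cdf(z)

def sample_size_per_variant(base_rate, relative_lift, alpha=0.05, power=0.80):
    """Rough visits needed per variant to detect a given relative lift
    with the stated significance and power (standard normal-approximation
    formula for comparing two proportions)."""
    z_a = _N.inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_b = _N.inv_cdf(power)           # e.g. 0.84 for 80% power
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)
```

For example, with a hypothetical 5% baseline conversion and a 10% relative lift you want to detect, `sample_size_per_variant(0.05, 0.10)` comes out around 31,000 visits per variant; dividing that by your daily traffic per variant gives a minimum run time, and stopping before that point based on a fluctuating confidence number is essentially peeking.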