Hello!
We are trying to set up an ABC test in Adobe Target but are having difficulty figuring out how to QA each test treatment efficiently when we are testing more than one experience against the control. This is our current process for setting up an A/B test and QAing it:
- Build out the test experience
- Create and apply a new audience that includes a query parameter
- Set traffic to the test experience to 100%
- Push the test live and QA it using the query parameter we set up within the new audience (example link below).
- Once the test is QA'd and we know it's working properly, we go back into the activity, update the audience to remove the query parameter, update the traffic to an even split, and push it live.
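For reference, the QA link we share for a single-experience test looks roughly like this (the domain and parameter name here are just placeholders, not the actual values we use):

https://www.example.com/landing-page?qa=test-experience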
This process has worked for us so far (though if there is a more efficient way to do it, please share!), but once we introduce two or more experiences in one activity, we can't figure out how to assign a QA parameter to each individual experience. Because of this, each time we need to QA a different experience, we go back into the activity, reallocate 100% of the traffic to the experience we want to QA, and repeat.
Although time-consuming, this process does work for us, but our clients like to QA the tests themselves, and they aren't in the platform every day, so they don't know how to reallocate traffic without disrupting the current test setup. Is there a way to assign a separate QA parameter to each individual experience within one activity, so we can share those QA parameters with our client without having to adjust any traffic in the activity?
Any thoughts and insights will be helpful! Thank you!