What would be the best way to launch five personalized experiences, evaluate them after a set amount of time, and then cycle in two new experiences to replace the worst performers? I am thinking about continuously improving personalized experiences this way, adding new content in cycles after removing poorly performing content.
How can I tell which experiences are performing the worst? How long should each cycle be? Would this concept work in the first place?
Hi @LouisGirifalco ,
Before running any experiment, you should use the sample size calculator to see how long you need to run the experiment (link below).
Sample size calculator - https://experienceleague.adobe.com/tools/calculator/testcalculator.html
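If you want to sanity-check the calculator's output yourself, the standard two-proportion sample-size approximation behind tools like this looks roughly like the sketch below. The baseline conversion rate, expected lift, experience count, and daily traffic are placeholder assumptions you would replace with your own data.

```python
from math import ceil

def visitors_per_experience(baseline_cr, relative_lift,
                            z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per experience for a two-proportion test
    at ~95% confidence and ~80% power (z values hard-coded)."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(((z_alpha + z_beta) ** 2) * variance / (p2 - p1) ** 2)

# Placeholder assumptions: 3% baseline conversion, 10% relative lift,
# 5 experiences, 4,000 visitors per day entering the activity.
n = visitors_per_experience(0.03, 0.10)
total = n * 5
days = ceil(total / 4000)
print(f"~{n} visitors per experience, ~{total} total, ~{days} days")
```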
After you run the experiment for the required time, look at the lift and confidence level in the report (details in the link below).
https://experienceleague.adobe.com/docs/target/using/activities/abtest/sample-size-determination.htm....
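As a rough illustration of what the report computes, lift is the relative difference in conversion rate against the control, and confidence comes from a two-proportion z-test. The visitor and conversion counts below are made-up numbers, not taken from any real activity.

```python
from math import erf, sqrt

def lift_and_confidence(control_visitors, control_conversions,
                        variant_visitors, variant_conversions):
    """Relative lift and two-sided confidence for a variant vs. the control,
    using a pooled two-proportion z-test."""
    p_c = control_conversions / control_visitors
    p_v = variant_conversions / variant_visitors
    lift = (p_v - p_c) / p_c
    pooled = ((control_conversions + variant_conversions)
              / (control_visitors + variant_visitors))
    se = sqrt(pooled * (1 - pooled)
              * (1 / control_visitors + 1 / variant_visitors))
    z = (p_v - p_c) / se
    confidence = erf(abs(z) / sqrt(2))  # two-sided, as a fraction of 1
    return lift, confidence

# Hypothetical counts for one experience compared with the control.
lift, conf = lift_and_confidence(10000, 300, 10000, 345)
print(f"lift: {lift:.1%}, confidence: {conf:.1%}")
```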
Furthermore, once you have identified the worst experience and want to add a new one, I would suggest stopping the current activity, creating a new activity, and adding the new experiences alongside the previous winning experiences. Repeat the cycle as your business needs dictate; this also gives you a clear picture of all the experiences.
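To make the "keep the winners, swap out the losers" cycle concrete, a simple ranking step like the hypothetical sketch below could decide which experiences carry over into the next activity. The experience names and numbers are invented for illustration.

```python
# Hypothetical end-of-cycle results: experience name -> (visitors, conversions)
results = {
    "Experience A": (12000, 396),
    "Experience B": (11800, 342),
    "Experience C": (12100, 411),
    "Experience D": (11950, 310),
    "Experience E": (12050, 389),
}

# Rank by conversion rate; keep the top three, replace the bottom two
# with new ideas in the next activity.
ranked = sorted(results.items(),
                key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
keepers = [name for name, _ in ranked[:3]]
retired = [name for name, _ in ranked[3:]]
new_ideas = ["New idea 1", "New idea 2"]

print("Carry over:", keepers)
print("Retire:", retired)
print("Next activity experiences:", keepers + new_ideas)
```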
Auto-Target will deliver the most suitable experience to each end user based on their individual profile. So in this case you don't need to worry about eliminating the worst-performing experience; Target will do that work for you.
Now, if you want to measure the performance of each experience and then evaluate, you should probably run a manual-split A/B test.
How long should the periods be for each cycle?
For this you can use Adobe's sample size calculator, which looks like the screenshot below.
Enter appropriate values in each field and it will tell you how many days the test should run before you can evaluate the results and declare a losing experience.
link for calculator - https://experienceleague.adobe.com/tools/calculator/testcalculator.html
[Screenshot of the sample size calculator]
Please let me know if this answers your questions, or let's continue the discussion in chat.
-Gauresh
Thank you for the reply Gauresh!
My question is all about Auto-Target. What I am wondering is: after running an Auto-Target activity for a cycle (maybe one month), what is the best practice for replacing poorly performing personalized experiences with new ideas we think might perform better?
You could just make your new changes to the underperforming experience. Target won't completely remove an experience when it performs badly; some traffic still flows to it and Target keeps learning. If the new changes perform well, traffic to that experience will increase automatically, which means more people will start seeing the new experience as it performs well.
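Adobe doesn't publish the exact allocation logic, but the "traffic shifts toward whatever is performing well" behaviour described above is bandit-like. The Thompson-sampling toy below is only an analogy to show how an experience that starts converting better automatically receives more traffic; it is not Target's actual algorithm, and the conversion rates are invented.

```python
import random

# Toy Thompson-sampling bandit: two experiences with hidden "true"
# conversion rates. Traffic drifts toward the better one over time.
true_rates = {"old experience": 0.03, "updated experience": 0.04}
stats = {name: {"conversions": 1, "misses": 1} for name in true_rates}
served = {name: 0 for name in true_rates}

for _ in range(20000):
    # Sample a plausible conversion rate for each experience (Beta posterior)
    draws = {name: random.betavariate(s["conversions"], s["misses"])
             for name, s in stats.items()}
    choice = max(draws, key=draws.get)
    served[choice] += 1
    # Simulate whether this visitor converted
    if random.random() < true_rates[choice]:
        stats[choice]["conversions"] += 1
    else:
        stats[choice]["misses"] += 1

print(served)  # most traffic ends up on the better-performing experience
```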
Thank you for the information, Vijay. That is what I needed to know. One question: we will need to wait for the model to learn the previous experiences again, correct? There is no way to carry over that learning to the new activity, right?