SOLVED

Create an A/B test for an email campaign and, based on engagement performance, send the remainder to the winner after X hours.


Level 2

Most email tools allow A/B testing of subject lines and creative on a set percentage of the audience, with the remainder sent to the best-performing email. Is this possible in AJO?

Our marketing teams expect to be able to create A/B tests for email campaigns and, based on engagement performance, send the remainder to the winner after 4-6 hours.

 

Has anyone solved for this or have a similar need/expectation?


1 Accepted Solution


Correct answer by
Community Advisor

Hi @Caveman77 

There isn't a straightforward A/B testing feature, but there has been a workaround since the GA of Content Experiments in May 2023.

 

Adobe Journey Optimizer now supports experiments in Campaigns. Experiments are randomized trials, which in the context of online testing, means that you expose some randomly selected users to a given variation of a message, and another randomly selected set of users to some other variation or treatment. After exposure, you can then measure the outcome metrics you are interested in, such as opens or clicks of emails, subscriptions, or purchases.

 

After running your Experiment and finding the winner, you can deploy this winning idea, either by pushing the best performing treatment to all your customers, or by creating new campaigns where the structure of the best performing treatment is replicated.
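The "find the winner, then roll out" step can be sketched in a few lines. This is a minimal illustration only; the treatment names and engagement counts are made up and are not real campaign data:

```python
# Hypothetical engagement counts per treatment (illustrative numbers only,
# not real campaign data).
sends = {"treatment_a": 5000, "treatment_b": 5000}
clicks = {"treatment_a": 410, "treatment_b": 545}

# Compare click-through rates and pick the best-performing treatment
# to push to the remaining audience.
ctr = {t: clicks[t] / sends[t] for t in sends}
winner = max(ctr, key=ctr.get)
print(winner, ctr[winner])
```

In AJO itself the outcome metric (opens, clicks, subscriptions, purchases) is selected in the experiment configuration; the snippet just shows the comparison logic conceptually.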

 

You can find more details here

 

Thanks,

David


4 Replies

(Accepted solution by David, Community Advisor — shown above.)


Level 2

Thanks, David. We will certainly try this approach in the interim.

It works best when situations allow enough time for a second journey to be planned.

Hoping AJO will add a way to record click events and quickly tally them by version as a condition, so the "hold" group could be diverted to the best-performing creative without having to schedule a separate journey.


Level 3

Hi @Caveman77, I just posted an idea for this in the ideas section of the community.

Would you mind giving it a like and a comment that you also need this functionality?

Hopefully Adobe will then start working on this functionality. https://experienceleaguecommunities.adobe.com/t5/journey-optimizer-ideas/function-to-test-the-best-t... 


Employee Advisor

@Caveman77 ,

Content experimentation in Adobe Journey Optimizer uses a pseudo-random hash of the visitor identity to perform random assignment of users in your target audience to one of the treatments that you have defined. The hashing mechanism ensures that in scenarios where the visitor enters a campaign multiple times, they will deterministically receive the same treatment.

You can refer https://experienceleague.adobe.com/docs/journey-optimizer/using/campaigns/content-experiment/get-sta... for further info.

As an example, in a content experiment with 50% of traffic assigned to each treatment, users falling in buckets 1–5,000 will receive the first treatment, while users in buckets 5,001–10,000 will receive the second treatment. Since pseudo-random hashing is used, the visitor splits you observe may not be exactly 50-50; nevertheless, the split will be statistically equivalent to your target split percentage.
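The deterministic-bucketing idea can be sketched as follows. This is an illustrative approximation, not Adobe's actual hashing implementation; the bucket count, experiment ID, and function names are assumptions for the example:

```python
import hashlib

NUM_BUCKETS = 10_000  # matches the 1-5,000 vs 5,001-10,000 example above


def bucket_for(visitor_id: str, experiment_id: str) -> int:
    """Deterministically map a visitor to a bucket via a pseudo-random hash.

    Illustrative only -- not Adobe's actual algorithm. The same
    (visitor, experiment) pair always lands in the same bucket.
    """
    digest = hashlib.sha256(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    return int(digest, 16) % NUM_BUCKETS + 1  # bucket in 1..10,000


def treatment_for(visitor_id: str, experiment_id: str = "exp-123") -> str:
    # 50/50 split: first half of buckets gets treatment A, second half gets B.
    return "A" if bucket_for(visitor_id, experiment_id) <= NUM_BUCKETS // 2 else "B"


# A visitor who re-enters the campaign gets the same treatment every time.
assert treatment_for("visitor-42") == treatment_for("visitor-42")
```

Because the hash is a function of the visitor identity, re-entry is idempotent, which is what guarantees the deterministic assignment described above.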