Replies

Shani2

14-10-2020

Thank you, I will look into it!

Amelia_Waliany
Employee

14-10-2020

Hi @Jon_Tehero! Thank you for your time today 🙂

@Mau_Gloria posted this question in the Community: 

Hi, I'm trying to set up a recommendations module on a page, but the recommendations are not pushed to the webpage. The only way to see them is by using the "Preview" link included in Adobe Target, which pulls 3 different offers along with their URLs. Feeds are working fine (scheduled every morning), and in the overview screen of the activity I see "Results ready" with a green indicator.

- I tried the QA link, nothing.

- I tried visiting the page, nothing.

[Screenshot: Mau_Gloria_2-1596574463637.png]

Using at.js 2.1.

LINK TO ORIGINAL POST

Jon_Tehero
Employee

14-10-2020


@Shani2 wrote:

@Jon_Tehero Sorry, I am full of questions, you can tell I couldn’t wait for this event 😊 This should be my last one, and it’s regarding the Auto-Allocate (AA) algorithm for A/B activities. The concept/mechanism of AA is a great one; however, I think there are some limitations in a particular use case. I won’t go into the benefits of AA as those are plentiful, but I do want to highlight one specifically, i.e. optimization occurs in parallel with learning.


Moreover, in the event there are only 2 experiences, I think there is a true risk of false positives (higher than 5%) with the current algorithm logic: after the better-performing experience reaches 95% confidence, 100% of traffic is allocated to the experience identified as the winner. This is unlike the logic for 3 or more experiences, in which 80% of traffic is allocated to the winner and 20% continues to be served randomly to all experiences; the latter is key in the event user behavior shifts and confidence intervals begin to overlap with other experiences while the test is running.


I’ve encountered a few experiences using Target’s manual A/B test in which the stats engine called a winner early and a badge was displayed in the activity; however, after hours/days/weeks of collecting more data, the engine removed the badge as it recognized that confidence levels were still overlapping/fluctuating. This is a prime example of how important it is to determine sample size/test parameters before running a test, to prevent ending a test prematurely and to ensure statistically valid results, but it is also why I raise my concern with the AA logic specifically for the 2-experiences scenario. Currently, there is no room for the algorithm to correct itself in the event it identified a winner that truly was not one, because there is no reserve of traffic allocated for learning if user behavior changes. This is not truly a multi-armed bandit approach in this use case, because after 95% confidence is reached, optimization no longer occurs in parallel with learning.


Furthermore, another concern with the logic of the algorithm for two experiences is that we may be unable to detect a novelty effect, because the algorithm may declare an experience a winner too early. We have observed novelty effects after adding a new, attention-grabbing feature in manual A/B tests: for the first two weeks a challenger may perform better than the default experience and display a badge, but with time the positive effect wears off as more data is collected, confirming that the lift was only an illusion.


In sum, I hesitate to use AA for 2 experiences due to the current AI logic. But the dilemma is that in our organization we don’t tend to test more than 2 experiences. Are there any suggestions on how we can mitigate false positives for 2 experiences with AA? Is enhancing the algorithm for two experiences on the roadmap, so that it serves as a true multi-armed bandit approach to optimization? Lastly, in the product roadmap, will users have the ability to set the significance level for AI-driven activities? Not all tests are created equal; they will not have the same risks/costs, thus some tests may require a false-positive rate lower or higher than 5%.

Please note I am aware of the time-correlated caveat for AA, and the experiences I discussed above regarding manual A/B tests were not contextually varying.

Thank you!


@Shani2,

Our logic for 2 experiences and for more than 2 experiences is actually the same: in both scenarios, once a winner is declared, we allocate 80% of traffic to the winner and split the remaining 20% among all experiences. So in a case where 2 experiences are present, at the time we declare a winner, we'll send 90% of traffic to the winning experience (80% plus its 10% share of the remaining 20%), and 10% of traffic to the other experience.

If for any reason you are seeing behavior different from what I've described above, please submit a ticket to Customer Care so that we can take a look.
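The allocation rule described above can be sketched in a few lines. This is an illustrative calculation only, not Adobe's implementation: the winner receives 80% of traffic plus an equal share of the reserved 20%.

```javascript
// Illustrative sketch (not Adobe's actual code) of the allocation rule:
// once a winner is declared, it gets 80% of traffic plus an equal share
// of the remaining 20%, which is split among all experiences.
function allocation(nExperiences) {
  const reservedShare = 0.2 / nExperiences; // each experience's slice of the 20%
  return {
    winner: 0.8 + reservedShare, // 80% plus its own reserved slice
    other: reservedShare,        // every non-winning experience
  };
}

// With 2 experiences the winner gets ~90% and the other ~10%,
// matching the numbers in the reply above.
allocation(2);
```

With 3 experiences the same rule gives the winner roughly 86.7% and each other experience about 6.7%, which is why the 2-experience case works out to the 90/10 split quoted above.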

Jon_Tehero
Employee

14-10-2020


@Amelia_Waliany wrote:

Hi @Jon_Tehero! Thank you for your time today 🙂

@Mau_Gloria posted this question in the Community: 

Hi, I'm trying to set up a recommendations module on a page, but the recommendations are not pushed to the webpage. The only way to see them is by using the "Preview" link included in Adobe Target, which pulls 3 different offers along with their URLs. Feeds are working fine (scheduled every morning), and in the overview screen of the activity I see "Results ready" with a green indicator.

- I tried the QA link, nothing.

- I tried visiting the page, nothing.

[Screenshot: Mau_Gloria_2-1596574463637.png]

Using at.js 2.1.

LINK TO ORIGINAL POST


I concur with what @karandhawan said and would recommend using the "?content-trace=true" option as well.

A couple of additional tips:

  1. Depending on the type of algorithm, it may require a "key" (the entity or category that you are basing the recommendations on; for example, in "people who viewed this also viewed these...", the "this" represents the key).
  2. When entities are brand new or when an algorithm first runs, as recommendations are requested we push the results of the algorithm and the details of the entity to our edges. Sometimes you may need to refresh a couple of times to allow the results to fully propagate to the edge. This is generally only noticeable while QA'ing the activity, and is limited at most to the first couple of views of an entity.
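For QA, the `?content-trace=true` option mentioned above is just a query parameter appended to the page URL. A tiny helper can make that repeatable; the helper itself is illustrative and not part of at.js, and only the parameter name comes from the reply above:

```javascript
// Illustrative QA helper (not part of at.js): append the content-trace
// query parameter mentioned above to a page URL before loading it.
function withContentTrace(pageUrl) {
  const url = new URL(pageUrl);
  url.searchParams.set('content-trace', 'true');
  return url.toString();
}

// Example: open the traced version of the page under test.
withContentTrace('https://example.com/products?id=42');
// → 'https://example.com/products?id=42&content-trace=true'
```

The `URL` API handles both cases (URL with and without an existing query string), so the helper is safe to use on any page being QA'd.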

peterhartung
Employee

14-10-2020

One more question from our forums, Jon, this one from @btorres76:

With multiple activities on a page, is there a way for one activity to disable certain modifications from other activities?

For example I have this use case:

Activity 1: Updates elem1, elem2, elem3

Activity 2: Updates elem2, elem3, elem4

 

Is there a way for another activity (Activity 3) to disable:

Activity 1: disable update on elem3

Activity 2: disable update on elem2

 

Original Forum Post 

Shani2

14-10-2020

@Jon_Tehero in the cloud documentation, it states that the logic for only 2 experiences works differently than for 3+ experiences. Please see the image below; I also included the link for reference.

 

[Screenshot: Shani2_0-1602690822263.png]

https://docs.adobe.com/content/help/en/target/using/activities/auto-allocate/automated-traffic-alloc...

Jon_Tehero
Employee

14-10-2020


@Shani2 wrote:

@Jon_Tehero in the cloud documentation, it states that the logic for only 2 experiences works differently than for 3+ experiences. Please see the image below; I also included the link for reference.

 

[Screenshot: Shani2_0-1602690822263.png]

https://docs.adobe.com/content/help/en/target/using/activities/auto-allocate/automated-traffic-alloc...


@Shani2 ,

Thank you for pointing this out. I will review with our engineers and doc writers to make sure our documentation is accurate for the AA behavior.

Jon_Tehero
Employee

14-10-2020


@peterhartung wrote:

One more question from our forums, Jon, this one from @btorres76:

With multiple activities on a page, is there a way for one activity to disable certain modifications from other activities?

For example I have this use case:

Activity 1: Updates elem1, elem2, elem3

Activity 2: Updates elem2, elem3, elem4

 

Is there a way for another activity (Activity 3) to disable:

Activity 1: disable update on elem3

Activity 2: disable update on elem2

 

Original Forum Post 


@btorres76,

There are different ways to achieve this. One way is to add more restrictive conditions to Activities 1 and 2 that make sure the visitor is NOT in Activity 3. You can add targeting conditions to determine whether someone is in another test, or even target based on specific activity membership using the built-in attribute "user.activeActivities".
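The membership check described above can be sketched as a simple exclusion rule. This is an illustrative sketch only: in Target, `user.activeActivities` holds the IDs of activities the visitor is currently in, and an audience or profile-script rule can exclude visitors already in another activity. The activity IDs below are made up for the example.

```javascript
// Illustrative sketch of the exclusion rule described above (not Target's
// actual audience syntax): only qualify visitors who are NOT already in
// the excluded activity. Activity IDs here are hypothetical.
function qualifiesForActivity(activeActivities, excludedActivityId) {
  return !activeActivities.includes(excludedActivityId);
}

// A visitor already in hypothetical "Activity 3" (ID '300003') is excluded:
qualifiesForActivity(['100001', '300003'], '300003'); // → false
qualifiesForActivity(['100001'], '300003');           // → true
```

In the use case from the question, Activities 1 and 2 would each carry a condition of this shape excluding members of Activity 3, so Activity 3's modifications win for those visitors.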

Shani2

14-10-2020

@Jon_Tehero I would really appreciate it if I could somehow be informed of the result of the discussion. Or should I create a customer service ticket? Thank you!

Jon_Tehero
Employee

14-10-2020


@Shani2 wrote:

@Jon_Tehero I would really appreciate it if I could somehow be informed of the result of the discussion. Or should I create a customer service ticket? Thank you!


Yes, creating a ticket would be the best way to get automatic updates. Thank you for your great questions today, and for your patience with this documentation discrepancy.