I had a question about how Target derives reward probabilities for the MAB algorithms implemented in Auto-Allocate, Auto-Target, and Automated Personalization activities. Going through your docs, I found that there are three ways of feeding data into Target:
Server-side APIs for profile updates.
Since MAB algorithms need the reward probability of each experience/variant as an input, and these probabilities change over time as more visitors participate in an activity, does Target derive the reward probability from the data supplied through the above methods?
Additionally, I'd like to give you some quick context on what reward probabilities are.
For MAB algorithms, suppose I have 2 variants, A (control) and B (variation).
Based on visitor interactions and the click-through rates (CTRs) on the variants, we can derive reward probabilities for A and B. Say 1,000 visitors interact with A and B in a single day, with the traffic split 50/50, so each variant gets 500 visitors. Of the 500 hits on A, only 150 convert; of the 500 hits on B, 300 convert. Hitting the conversion metric equates to generating a reward (a boolean, 0 or 1). So in this case the reward probability of A is 0.3 (150/500) and that of B is 0.6 (300/500). Of course, these values will change as more visitors interact with a typical A/B test activity. These reward probabilities ideally serve as the input data to the algorithms' training models. This example is extremely simple; in practice there may be a lot more complexity in deciding the reward probability of each experience, driven by numerous factors.
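To make the example concrete, here is a minimal sketch of the calculation in code. This is purely illustrative (the function and variable names are mine, not part of Target's API), and it assumes the reward probability is simply the observed conversion rate:

```python
# Illustrative only: reward probability as the empirical conversion rate.
# These names do not correspond to any actual Adobe Target API.

def reward_probability(conversions: int, visitors: int) -> float:
    """Fraction of visitors who generated a reward (converted)."""
    if visitors == 0:
        return 0.0  # no data yet for this variant
    return conversions / visitors

# The day's traffic from the example: 1,000 visitors split 50/50.
p_a = reward_probability(150, 500)  # variant A (control)    -> 0.3
p_b = reward_probability(300, 500)  # variant B (variation)  -> 0.6

print(p_a, p_b)
```

An MAB algorithm would then consume these evolving estimates (or the raw conversion/visitor counts behind them) to shift traffic toward the better-performing variant as data accumulates.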
Hope this gives you some insight into reward probabilities.