General Adobe Target topics
Mihnea Docea aka @MihneaD, Technical Support Engineer, will also be in the thread to provide further guidance around the above topics with Ryan!
REQUIREMENTS TO PARTICIPATE
INSTRUCTIONS
Ryan Pizzuto is a Senior Expert Solutions Consultant focused on Adobe Target. He has been working in the optimization space for 15 years, both on the client side, building an optimization practice from the ground up, and more recently consulting for some of Adobe's largest, most complex customers, helping them maximize value from Adobe Target. He is passionate about optimization, personalization, and evangelizing the possibilities with Adobe Target.
Curious about what an Adobe Target Community Q&A Coffee Break looks like? Be sure to check out the thread from our latest 2/23/22 Adobe Target Coffee Break with Senior Product Manager for Adobe Targe... (@vishalchordia) and Senior Technical Support Engineer, Shelby Goff (@shelbygoff)
Great opportunity!
@ryan_pizzuto In Adobe Target, is there a way to output a list of all live experiences while also showing what audience each test is using? Our Testing Program is expanding in scope, and it is getting difficult to keep track of which tests may touch the same sections of the site at the same time. It would be great if there was a visual way to keep track.
Hey @Studley2021, thanks for the question!
We do have a full range of APIs, and you can pull the list of activities and see which ones are live. However, I don't see the audiences piece in there that you are also interested in. Some companies choose to use naming conventions in their activities to indicate where on the site something is running as well as which audiences are included. Other companies (if they have many domains or sites they manage) will choose to leverage the Target Premium feature Enterprise Permissions, where they can separate sites and restrict access by user to help keep things in order.
Here is more info on the Activities APIs: http://developers.adobetarget.com/api/#activities
I can take this feedback to our product team (as I agree with you that a visualization could really help here). There are also 3rd-party solutions like MiaProva that help you turn those APIs into something more accessible/visual.
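In case it helps, here is a minimal sketch (TypeScript, using Node's built-in fetch) of pulling the activity list from that Activities API and filtering to the live ones. The tenant, API key, and token values are placeholders from your own Adobe I/O integration, and the "approved" state check and Accept header are assumptions based on how the API is typically documented, so confirm the exact values against the docs linked above.

// Minimal sketch: list live Adobe Target activities via the Activities API.
// TENANT, API_KEY, and ACCESS_TOKEN are placeholders from your Adobe I/O
// integration; exact headers and field values are in the docs linked above.
const TENANT = "your-tenant";
const API_KEY = "your-client-id";
const ACCESS_TOKEN = "your-access-token";

async function listLiveActivities(): Promise<void> {
  const res = await fetch(`https://mc.adobe.io/${TENANT}/target/activities/`, {
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      "X-Api-Key": API_KEY,
      Accept: "application/vnd.adobe.target.v3+json",
    },
  });
  const body = await res.json();
  // Assumption: live/activated activities come back with state "approved".
  const live = (body.activities ?? []).filter((a: any) => a.state === "approved");
  for (const a of live) {
    console.log(`${a.id}\t${a.type}\t${a.name}`);
  }
}

listLiveActivities().catch(console.error);

From there you could join in audience names via your own naming convention, or feed the output into a dashboard until a native visualization exists.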
It’s great to hear that your program is scaling like this! Keep on optimizing!!
Hi @ryan_pizzuto, thanks so much for your time in the Target Community today! This question was posted by @timf43492464:
Hello,
How do you set up an Auto-Target test for multiple countries? Do you set up a separate Auto-Target test for each country?
We are running into some confusion: if we want to use Auto-Target to learn which image visitors in different countries prefer, we run into many different languages on our site and therefore different links.
We are mainly interested because Auto-Target can use geo as a data point to build a more efficient decision tree.
Thanks,
Tim
Link to original question: Auto Target - Multiple Countries
Thanks for the question @timf43492464!
How you approach this will depend on what you are hoping to learn from the activity. If your goal is to combine all countries into a single group and report on them as such, then you can create a multipage experience and make the edits for each country on their respective page.
If you want to test different images and report out on the countries separately, then it would be better to have a different test per country.
Here is some info on creating a multipage activity: https://experienceleague.adobe.com/docs/target/using/experiences/vec/multipage-activity.html?lang=en
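If it's useful, here is a minimal sketch (assuming at.js and its targetPageParams hook for the page-load request) of passing the visitor's country and language as mbox parameters, so you can build per-country audiences for either approach and give the model an explicit locale signal. The parameter names and values are hypothetical; use whatever your data layer already exposes.

// Minimal sketch, assuming at.js: targetPageParams adds parameters to the
// page-load (global mbox) request. Parameter names here are hypothetical.
(window as any).targetPageParams = () => ({
  country: "DE",        // e.g. resolved from your own data layer or geo lookup
  language: "de-DE",
});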
Hi @ryan_pizzuto, thanks so much for your time in the Target Community today! This question was posted by @jackz11447106:
Hi Team,
We have built Auto-Target activities, and some have had a positive lift while others have had a negative lift. All activities had the green check icon next to each experience. However, in our report, we haven't reached a 95% confidence level. My questions are as follows:
1. For an activity with a positive lift for all experiences, should we wait until the 95% confidence level to say the model works, or can we say the model already works without waiting and split more traffic into the experiences than into control? (The confidence level always fluctuates.)
2. If, in an Auto-Target activity, one experience has a positive lift and the other a negative lift, both with the green check icon, how should we explain that? Could you please cover both situations: reaching the 95% confidence level and being below it?
In short, what is the best practice for explaining the confidence level and lift in an Auto-Target activity?
Thanks and Regards,
Jack
Link to the original question: Adobe Auto-Target Result Explanation
Hey @jackz11447106, great to see you in the wide world of machine learning, and thanks for the question in the Community!
When it comes to Auto-Target and Automated Personalization, the green check marks that appear next to the experiences/offers are not an indication of lift & confidence...they are simply an indication that Adobe Sensei has successfully built a model for those experiences. As we know, the more information a model gets, the better it becomes. So think of the appearance of the green check mark as the beginning of the reporting period. Before those check marks appeared, all experiences were served at random to get Sensei the data it needed to start optimizing.
Pro-tip: make note of when models are built for all experiences, then change the start date of the reporting to that point and note the difference in lift/confidence. Then KEEP GOING! The models get better with time. So think of the green check mark as the beginning of the process, not the end.
@ryan_pizzuto Hi Ryan and team. We primarily use the A4T integration for the majority of our activities. Are there any plans in your roadmap to have A4T reporting for Automated Personalization?
Thanks for the question @dfscdk! Everybody loves A4T, huh? (Me too.)
Currently, A4T support for Automated Personalization (AP) is not on the roadmap. HOWEVER, don't let that stop you from launching an AP activity. There is so much awesome stuff within AP, like offer-level targeting and report groups, and the MVT-style activity setup makes putting together hundreds to thousands of possible experiences so easy. You still get the Insights Reports and full lift/confidence in Target reporting, just no A4T for now. You, me, and many other people are asking for it, though.
Hi @ryan_pizzuto, thanks again for sharing your insights here in the Target Community today! This question was posted by Community user vidyotma:
Our team got a Target Premium license recently. To understand how Automated Personalization works, we want to start using it asap, but we want to rely on Experience Targeting for a quick win. We plan to activate/configure XT and AP for the same offers in tandem, get faster results from XT, and let AP's algorithm take its time of 35 days (as the calculator suggested), then review its results. Will setting XT's priority higher than AP's disrupt AP's algorithmic efficiency in creating decision logs and delivering results?
Link to original question: Will running Experience Targeting and Automated Personalization for same offers in tandem disrupt AP...
@vidyotma I love that you are using AP and XT together like this. However, consider switching them around. Leverage XT to target the known audiences that you want to deliver experiences to and leverage AP to help you find users to put into those XT experiences.
So put AP first (and target it to exclude the audiences in the XT), and as part of the AP activity you can add values to the user profile that qualify users into the XT experiences. For example, if a user engages with one of the AP experiences, use that engagement to qualify them for the XT.
This gives you the control to target who you want to target with the XT and the scale to target everybody else with the AP. Win win!
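For what it's worth, here is a minimal sketch (assuming at.js) of what that qualification step could look like: when a visitor engages with one of the AP experiences, write a profile attribute with a trackEvent call and build the XT audience on that attribute. The mbox name and the "apEngaged" attribute are hypothetical.

// Minimal sketch, assuming at.js. Parameters prefixed with "profile." are
// stored on the visitor profile, so the XT audience can key on them.
// "ap-offer-click" and "profile.apEngaged" are hypothetical names.
(window as any).adobe.target.trackEvent({
  mbox: "ap-offer-click",
  params: {
    "profile.apEngaged": "true",
  },
});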
@ryan_pizzuto Is there any tool available to help gain insight into why a model isn't building quickly? We are currently running an activity that is meeting the traffic requirements. We used the calculator to get an estimate of the time to build the model and it suggested 2 days, but it is taking a lot longer. (We've opened a ticket for this BTW.)
@dfscdk I think you've chosen the best route by opening a ticket. Lots of smart folks there who can look into the specifics for you. Though I would say 2 days is pretty ambitious for getting models built. Even if you have the traffic and the conversions, I would still plan on giving it a week. I've seen models build in a few days, but I set the expectation that a little longer will be better. And remember, the longer models go, the better they get!
Thanks everyone for the questions! It's exciting to see you using some of the advanced Target features and getting involved in the Target Experience League Community!
@ryan_pizzuto If we are looking to test which offers out of a handful work best for different customers, what is the best approach to start with? AP or Auto-Target?
Did I just meet an @Optimizing Giraffe?! Amazing!
If you have a single offer/placement in the experience, I would stick with Auto-Target and build it like you would a normal A/B test. However, if you're looking at multiple placements (on the same page) or a combination of offers, then AP would be the way to go. Both use the same random forest AI/ML models, but how you build them is a little different.