
What is your Analytics API integration workflow?


Employee

My name is John Wight and I help document the Analytics and CJA APIs for Adobe. I've noticed many helpful contributions in this community, so I'm hoping to ask a favor. Would you be willing to share your typical process for using an Adobe Analytics API and integrating it into your workflow? I'm keen to learn the order of users' flows and tasks, and the tools they use at each step of that flow.

 

For example, are you likely integrating it into a script (such as Python)? Is the process like the one listed below? I am only guessing at this flow:

 

1. You get a request to make a reporting feature from Analytics appear programmatically in a third-party viewer.

2. You review the available endpoint functions for Analytics APIs.

3. You view the endpoint's Swagger to get a sense of the data structure or call structure.

4. You review an endpoint guide, if available, to get a sequence and a more specific example of what can be, or what is typically, called.

5. You make a test call in Swagger.

6. You use either Swagger or the guide as a reference when structuring the call in your script editor (something like the sketch after this list).

7. You use your script with the desired third-party tool.
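
For instance, step 6 might end up looking something like this in a script. Again, I'm only guessing here; every endpoint, header, and payload value below is a placeholder rather than a working configuration:

```python
# A guess at steps 6-7: the call structure copied from Swagger is pasted into a
# script, the request is made, and the response is handed to another tool.
# Every value below is a placeholder, not a working configuration.
import requests

payload = {"rsid": "myreportsuite", "dimension": "variables/evar1"}  # copied from Swagger

response = requests.post(
    "https://analytics.adobe.io/api/mycompanyid/reports",  # placeholder company id
    headers={
        "Authorization": "Bearer <access_token>",
        "x-api-key": "<client_id>",
    },
    json=payload,
)
rows = response.json().get("rows", [])
# ...pass `rows` along to the third-party viewer/tool
```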

 

I honestly don't work around people using our own APIs for real applications, so I don't know the answer to this. I would love to hear about the many scenarios and services you use it for. If this is your workflow for integrating an Analytics API, or if yours is a different set of steps, I would love to know and follow up with a few more questions.

 

Thank you for any consideration and time you take to help. Your comments will be used to optimize our documentation for presenting information efficiently and completely.

 

John Wight

jwight@adobe.com

1 Reply


Adobe Champion

Hi @John_Wight, this is an interesting topic. Let me try to share how I have approached API-based integration/automation requirements:

 

Use case: We wanted to automate the Adobe Analytics classification process for a client account with more than 100 report suites. Performing the classification manually was time consuming, since the values to be classified, as well as the classification metadata, varied across report suites.

 

High-level solution:

 

- Set up an OAuth-based Adobe I/O integration and provision the access required for the automation.

 

- We first wanted to identify the "Unspecified" reporting values for the variable, so we set up a Workspace report with the required segment where, let's say, eVar1 is "Unspecified" for the classification column value. (This ensures we're only extracting unclassified values, i.e. the delta dimension values.) We then used that Workspace panel with the Oberon debugger to grab the JSON payload required.
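
For illustration, the payload copied from the debugger is roughly this shape. All values here are made up; in practice the real payload is copied as-is from the Workspace request:

```python
# Approximate shape of the /reports payload grabbed from the Oberon/Workspace
# debugger. Report suite id, segment id, and date range are placeholders.
payload = {
    "rsid": "examplersid",
    "globalFilters": [
        # segment keeping only rows where the classification value is "Unspecified"
        {"type": "segment", "segmentId": "s300000000_exampleseg"},
        {"type": "dateRange", "dateRange": "2023-01-01T00:00:00.000/2023-02-01T00:00:00.000"},
    ],
    "metricContainer": {"metrics": [{"columnId": "0", "id": "metrics/visits"}]},
    "dimension": "variables/evar1",
    "settings": {"limit": 400, "page": 0},
}
```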

 

- Use a Jupyter notebook and set up OAuth authentication functions to generate the "access_token" and "global-company-id" for our integration.
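
A rough sketch of those two helper functions, assuming the OAuth Server-to-Server (client credentials) flow; the credential values and scope string come from the Adobe Developer Console project and may differ for other integration types:

```python
# Helper sketches: fetch an access token from Adobe IMS, then look up the
# global company id via the Analytics discovery endpoint.
import requests

IMS_TOKEN_URL = "https://ims-na1.adobelogin.com/ims/token/v3"
DISCOVERY_URL = "https://analytics.adobe.io/discovery/me"

def get_access_token(client_id: str, client_secret: str) -> str:
    resp = requests.post(
        IMS_TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            # scope list depends on how the integration was provisioned
            "scope": "openid,AdobeID,additional_info.projectedProductContext",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def get_global_company_id(access_token: str, client_id: str) -> str:
    resp = requests.get(
        DISCOVERY_URL,
        headers={"Authorization": f"Bearer {access_token}", "x-api-key": client_id},
    )
    resp.raise_for_status()
    # the discovery response lists the companies this credential can access
    return resp.json()["imsOrgs"][0]["companies"][0]["globalCompanyId"]
```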

- Review the Adobe Analytics API 2.0 documentation and plug the payload captured in the Workspace/debugger step into the above Jupyter notebook, to check that we can see the response and transform it as required into a pandas DataFrame.
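
Something along these lines, assuming a single-metric payload like the one above; header names follow the public 2.0 API docs and the company id comes from the discovery call:

```python
# Sketch: post the captured payload to the 2.0 /reports endpoint and flatten
# the response rows into a pandas DataFrame.
import pandas as pd
import requests

def run_report(payload: dict, access_token: str, client_id: str, global_company_id: str) -> pd.DataFrame:
    resp = requests.post(
        f"https://analytics.adobe.io/api/{global_company_id}/reports",
        headers={
            "Authorization": f"Bearer {access_token}",
            "x-api-key": client_id,
            "x-proxy-global-company-id": global_company_id,
            "Content-Type": "application/json",
        },
        json=payload,
    )
    resp.raise_for_status()
    rows = resp.json().get("rows", [])
    # each row carries the dimension value plus one data point per metric column;
    # this assumes a single metric, so data[0] is that metric's value
    return pd.DataFrame([{"value": r["value"], "visits": r["data"][0]} for r in rows])
```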

 

- Now comes the classification part. (I will leave out the metadata extraction part, since it will vary for each requirement.) We reviewed the documentation and found that the API 1.4 Classification methods suited the purpose. The next steps involved testing the API methods (curl commands) in Postman to get an understanding of the workflow, which we later plugged into the Python setup in our Jupyter notebook.
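
Very roughly, the import flow we scripted looked like the sketch below (CreateImport, then PopulateImport, then CommitImport). Method names, parameters, and the auth header here are from memory, so verify them against the 1.4 Classifications documentation before using:

```python
# Sketch of the 1.4 Classifications import flow as wired into the notebook,
# after first testing the same calls in Postman. Treat names/params as unverified.
import requests

API_14 = "https://api.omniture.com/admin/1.4/rest/"

def call_14(method: str, body: dict, access_token: str) -> dict:
    resp = requests.post(
        API_14,
        params={"method": method},
        headers={"Authorization": f"Bearer {access_token}"},
        json=body,
    )
    resp.raise_for_status()
    return resp.json()

def classify(rsid: str, rows: list, access_token: str) -> None:
    # 1. create an import job for the variable being classified
    job = call_14(
        "Classifications.CreateImport",
        {
            "rsid_list": [rsid],
            "element": "evar1",
            "header": ["Key", "Category"],
            "description": "Automated classification upload",
        },
        access_token,
    )
    # 2. push the classification rows into the job
    call_14(
        "Classifications.PopulateImport",
        {"job_id": job["job_id"], "page": 1, "rows": [{"row": r} for r in rows]},
        access_token,
    )
    # 3. commit the job so the values are applied
    call_14("Classifications.CommitImport", {"job_id": job["job_id"]}, access_token)
```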

 

- Bonus: We later transformed the entire Jupyter notebook script into a multi-page Streamlit app with friendly error messages, which incorporates all the steps (authentication / get "Unspecified" values / metadata extraction / review DataFrame / classify values). The app can handle multiple report suites in a single run, each with its own classification values.
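
Just to illustrate how the notebook steps map onto Streamlit pages, the first (authentication) page looks roughly like this; the page layout is illustrative, and the auth helper is assumed to live in a hypothetical auth_helpers module holding the notebook functions:

```python
# Illustrative first page of the multi-page Streamlit app: collect credentials,
# authenticate, and show a friendly error instead of a stack trace on failure.
import streamlit as st

from auth_helpers import get_access_token  # hypothetical module with the notebook's auth function

st.set_page_config(page_title="Classification Automation")
st.title("Step 1: Authenticate")

client_id = st.text_input("Client ID")
client_secret = st.text_input("Client Secret", type="password")

if st.button("Connect"):
    try:
        st.session_state["token"] = get_access_token(client_id, client_secret)
        st.success("Authenticated. Continue to 'Get Unspecified Values'.")
    except Exception as exc:
        # friendly error message in place of a raw traceback
        st.error(f"Authentication failed: {exc}")
```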

My suggestion for documentation: Include curl commands and payload references as examples; it is easier to copy and paste from the documentation and make changes as per requirements. I found that a few attributes in the JSON payload are not well documented or have a separate documentation reference.