Pradeep-Jaiswal
Level 5
April 17, 2025
Solved

Guardrails for the Adobe Analytics connector to backfill 13-month data into a dataset with a custom schema

  • 1 reply
  • 441 views
What are the key considerations, potential challenges, and best practices (or "guardrails") for backfilling 13 million historical Adobe Analytics records into Adobe Experience Platform (AEP)? We need to map the historical props/eVars to a custom schema (with custom field groups) so that this historical data aligns seamlessly with our current Web SDK data format, giving CJA users a unified view. CJA users should not be forced to use props/eVars for historical data and custom-named fields for Web SDK data. My preference is to link the Adobe Analytics connector to the custom dataset and use Data Prep to map those variables to custom names. Is there a better way?
Best answer by Harveer_SinghGi1

Hi @pradeep-jaiswal ,

I think you meant to use a custom schema and then map the standard AA XDM fields to your custom XDM fields in Data Prep, since a dataset is automatically created by the Analytics Source Connector. I think this is the best possible option: the transformation needs to happen at some point to align the different sources, and doing it at the source makes downstream processes easier and removes the need for repeated transformations every time you use this data.

Cheers!
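
For illustration, the mapping step in the Analytics source connector flow is backed by a Data Prep mapping set, which pairs source XDM paths with destination paths. A minimal sketch of such a mapping set is below; the `_tenantid` destination paths and the specific eVar/prop choices are hypothetical examples, not fields from this thread:

```json
{
  "mappings": [
    {
      "sourceType": "ATTRIBUTE",
      "source": "_experience.analytics.customDimensions.eVars.eVar1",
      "destination": "_tenantid.web.siteSection"
    },
    {
      "sourceType": "ATTRIBUTE",
      "source": "_experience.analytics.customDimensions.props.prop3",
      "destination": "_tenantid.web.searchTerm"
    }
  ]
}
```

Once the backfill and ongoing flow both land in the custom fields, CJA components can be built against a single set of friendly field names rather than raw props/eVars.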

1 reply

Harveer_SinghGi1
Community Advisor and Adobe Champion · Accepted solution
April 17, 2025
