
SOLVED

Is it possible to stream and batch ingest data simultaneously from different sources into the same profile dataset in Adobe Experience Platform (AEP)?


Level 1

I’m curious about the technical feasibility and best practices. Is combining streaming and batch ingestion into a single profile dataset advisable? What are the key considerations or potential pitfalls to be aware of when setting this up?

Thanks in advance for any insights!

1 Accepted Solution


Correct answer by
Community Advisor

@YatinSh Yes, it is technically feasible to stream and batch ingest data simultaneously into the same Profile-enabled dataset. Key considerations: define your identity strategy, XDM schema, and source mappings up front, and test in a sandbox first to surface issues such as schema mismatches, data duplication, merge-rule conflicts, and profile fragmentation.
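To make the "same dataset, two ingestion modes" point concrete, here is a minimal sketch of the two request shapes. All IDs, the schema URL, and the org ID are hypothetical placeholders, and authentication headers plus the actual HTTP calls to the Streaming Ingestion and Batch Ingestion APIs are omitted:

```python
# Sketch: streaming and batch ingestion payloads pointing at ONE dataset.
# DATASET_ID, schemaRef id, and imsOrgId are hypothetical placeholders;
# real requests also need IMS auth headers, omitted here.

DATASET_ID = "5f3c3cedb2805c194ff0b69a"  # hypothetical dataset ID

def streaming_payload(email: str) -> dict:
    """One XDM record as sent to a streaming ingestion endpoint."""
    return {
        "header": {
            "schemaRef": {
                "id": "https://ns.adobe.com/tenant/schemas/profile",  # hypothetical
                "contentType": "application/vnd.adobe.xed-full+json;version=1",
            },
            "imsOrgId": "TENANT@AdobeOrg",  # hypothetical
            "datasetId": DATASET_ID,
        },
        "body": {"xdmEntity": {"personalEmail": {"address": email}}},
    }

def batch_request() -> dict:
    """Body for creating a batch against the same dataset (e.g. Parquet files)."""
    return {"datasetId": DATASET_ID, "inputFormat": {"format": "parquet"}}

# Both paths reference the same dataset ID; that shared target is exactly
# why identity strategy and merge rules need to be settled before go-live.
assert streaming_payload("test@example.com")["header"]["datasetId"] == batch_request()["datasetId"]
```

The takeaway: nothing in the request shape prevents mixing the two modes; the risk lives in how overlapping records merge afterward.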


2 Replies



Community Advisor

Hi @YatinSh ,

As @Asheesh_Pandey mentioned, yes, it is technically possible.

However, from a future-proofing and best-practices perspective, it is not the ideal approach. If there is an issue with one data source, it becomes difficult to debug or isolate the root cause.

Also, if incorrect data is sent from one source and needs to be deleted, you may have to clean the entire dataset, which risks losing data from both sources.

A better solution would be to create two separate datasets, one for each source. You can then stitch the data together based on identity namespaces and control how the data is prioritized using merge policies.
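As a rough illustration of that two-dataset approach, the priority between sources can be expressed with a dataset-precedence merge policy. This is only a sketch of the request body (dataset IDs and the policy name are hypothetical; the actual creation call goes to the Profile merge policies API):

```python
# Sketch: a merge policy that stitches two datasets into one profile view
# and lets the streaming dataset win on attribute conflicts.
# Dataset IDs are hypothetical placeholders.

STREAMING_DATASET_ID = "streaming-ds-id"  # hypothetical
BATCH_DATASET_ID = "batch-ds-id"          # hypothetical

merge_policy = {
    "name": "Streaming-over-batch precedence",       # hypothetical name
    "schema": {"name": "_xdm.context.profile"},
    "identityGraph": {"type": "pdg"},                # private identity graph
    "attributeMerge": {
        "type": "dataSetPrecedence",
        # Datasets listed first take priority when the same attribute
        # exists in both profile fragments.
        "order": [STREAMING_DATASET_ID, BATCH_DATASET_ID],
    },
    "default": False,
}
```

With separate datasets, a bad load from one source can be handled by deleting batches from that dataset alone, while the merge policy keeps the unified profile view intact.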

Hope this helps!

Kind regards,
Parvesh

Parvesh Parmar – Adobe Community Advisor
https://www.linkedin.com/in/parvesh-parmar/