January 30, 2026 · Solved

Timestamp Based Data ingestion in AEP

  • January 30, 2026
  • 4 replies
  • 78 views

We are using CSV-based data ingestion from the ADLS Gen2 source connector. There is a field called lastUpdatedDate, which is a datetime field. Normal ingestion works fine, but I want to ingest based on the lastUpdatedDate field. The issue I am facing is that the source system cannot send an incremental file and always sends the full file, so I need to ingest only those records whose lastUpdatedDate is newer than the one we already have. This feature exists in some form in the database connectors, but it is not available by default for CSV-based ingestion. A possible reason is that when we ingest data from CSV and set up rules, we don't have access to the AEP dataset and therefore cannot determine the last updated timestamp to compare against. Let me know if this is possible in some way.

Regards, Kishor

 


4 replies

Tof_Jossic
Adobe Employee · Accepted solution
January 30, 2026

@KishorPh1 

I believe there is no built-in way in the ADLS Gen2 CSV source to do "row-level incrementals" based on a lastUpdatedDate that lives in the target dataset. The connector only filters which files to ingest (by last-modified timestamp in storage), not which rows inside a file.

Someone else might chime in here, but altering the process that writes the CSV file so that the system only ingests delta-only files might be an option, if that is feasible at your end.
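If changing the export process is feasible, the "delta-only file" idea boils down to comparing each row's lastUpdatedDate against a stored watermark. This is a minimal sketch of that preprocessing step, not AEP functionality: the watermark file name (`last_ingested.txt`) and the timestamp format are illustrative assumptions you would adjust to your export.

```python
import csv
from datetime import datetime
from pathlib import Path

WATERMARK_FILE = Path("last_ingested.txt")  # hypothetical local state file
DATE_FMT = "%Y-%m-%d %H:%M:%S"              # adjust to your export's format


def extract_delta(full_csv: str, delta_csv: str) -> int:
    """Write only rows whose lastUpdatedDate is newer than the stored
    watermark, then advance the watermark. Returns the row count written."""
    watermark = datetime.min
    if WATERMARK_FILE.exists():
        watermark = datetime.strptime(WATERMARK_FILE.read_text().strip(), DATE_FMT)

    newest = watermark
    written = 0
    with open(full_csv, newline="") as src, open(delta_csv, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            ts = datetime.strptime(row["lastUpdatedDate"], DATE_FMT)
            if ts > watermark:
                writer.writerow(row)
                written += 1
            newest = max(newest, ts)  # track the newest timestamp seen

    WATERMARK_FILE.write_text(newest.strftime(DATE_FMT))
    return written
```

The delta file it produces is what you would point the ADLS CSV dataflow at; on each run only rows newer than the previous run's maximum timestamp are carried over.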

SG14_97
Level 3
February 2, 2026

@KishorPh1

I think the incremental logic needs to be set up in your source system (not sure if you have a data mart, warehouse, or CRM system); you can calculate the logic there. The other option is to use Query Service to compute the latest version of each record after the initial ingestion into the data lake and populate it to UPS (the Profile Store), but that will be compute-heavy, since the ADLS Gen2 connector probably will not be able to check for the incremental updates itself.
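The "latest version for each record" computation that Query Service would do (typically a `ROW_NUMBER() OVER (PARTITION BY id ORDER BY lastUpdatedDate DESC)` style dedup) amounts to keeping the newest row per key. A small sketch of that logic, with hypothetical field names (`id`, `lastUpdatedDate`), just to make the shape of the operation concrete:

```python
from datetime import datetime


def latest_per_record(rows, key="id", ts_field="lastUpdatedDate",
                      fmt="%Y-%m-%d %H:%M:%S"):
    """Keep only the newest row per key -- the same dedup a
    ROW_NUMBER() ... PARTITION BY query would perform in Query Service."""
    latest = {}
    for row in rows:
        ts = datetime.strptime(row[ts_field], fmt)
        k = row[key]
        # replace the stored row only if this one is strictly newer
        if k not in latest or ts > latest[k][0]:
            latest[k] = (ts, row)
    return [row for _, row in latest.values()]
```

In practice you would express this in Query Service SQL over the data-lake dataset rather than pulling rows out; the sketch only illustrates the per-key comparison being paid for on every full-file load, which is why the reply calls it compute-heavy.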

Sukrity_Wadhwa
Community Manager
February 13, 2026

Hi @KishorPh1,

Were you able to resolve this query with the help of the provided solutions, or do you still need further assistance? Please let us know. If any of the answers were helpful in moving you closer to a resolution, even partially, we encourage you to mark the one that helped the most as the 'Best Answer'.

Thank you!

Sukrity Wadhwa
itsMeTechy
Level 4
February 17, 2026

Are you ingesting profile data or event data? If profile data, see whether it is an option to ingest it weekly. I have seen profile data with millions of records ingested as a full file on a weekly basis, and that should not be a problem. Even after ingesting the full file weekly or daily, you can plan to delete the older batches using the APIs, which will also help clean up the old files and keep the data lake from accumulating a huge amount of data. The Profile Store will always have the latest data from the latest file, since it is a full file.
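For the batch-cleanup idea, the selection step (which batches are old enough to delete) can be sketched as below. The input shape loosely mirrors what a Catalog Service batch listing returns (a JSON object keyed by batch ID, with a `created` epoch-millisecond timestamp), but treat the field names as assumptions and verify them against the Catalog Service API reference before wiring this to real delete calls.

```python
from datetime import datetime, timedelta, timezone


def stale_batch_ids(batches, keep_days=30, now=None):
    """Return IDs of batches created more than keep_days ago.

    `batches` stands in for the parsed JSON of a Catalog batch listing;
    the `created` field (epoch milliseconds) is an assumption to verify
    against the actual API response.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=keep_days)
    stale = []
    for batch_id, meta in batches.items():
        created = datetime.fromtimestamp(meta["created"] / 1000, tz=timezone.utc)
        if created < cutoff:
            stale.append(batch_id)
    return stale
```

The returned IDs would then feed whatever batch-deletion API call your cleanup job uses; keeping the selection logic separate makes it easy to dry-run before deleting anything.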