Hi @AnkitJasani29, Since we created the data flows via API, we now need to update them with a new encryption key using a PATCH request, rather than creating new data flows each time. I performed a PATCH request to update the dataflow with the new encryption key, but it removed the existing mapping ...
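A likely cause: PATCH on /flows/{FLOW_ID} takes JSON Patch operations, so a `replace` against the whole `/transformations` array overwrites the Mapping entry along with the Encryption one. A minimal sketch that targets only the encryption key parameter is below; the array index and the `publicKeyId` parameter name are assumptions, so check the response of GET /flows/{FLOW_ID} for your flow's actual transformation structure first.

```
# Sketch: patch only the encryption key, leaving the Mapping transformation intact.
# The JSON Pointer path /transformations/0/params/publicKeyId is an assumption;
# the If-Match header must carry the flow's current etag.
curl -X PATCH \
  'https://platform.adobe.io/data/foundation/flowservice/flows/{FLOW_ID}' \
  -H 'Authorization: Bearer {ACCESS_TOKEN}' \
  -H 'x-api-key: {API_KEY}' \
  -H 'x-gw-ims-org-id: {ORG_ID}' \
  -H 'x-sandbox-name: {SANDBOX_NAME}' \
  -H 'If-Match: {ETAG}' \
  -H 'Content-Type: application/json' \
  -d '[
        {
          "op": "replace",
          "path": "/transformations/0/params/publicKeyId",
          "value": "{NEW_PUBLIC_KEY_ID}"
        }
      ]'
```

If the Encryption entry is not at index 0, adjust the pointer accordingly; replacing just that element (or re-sending the full transformations array including the Mapping entry) should avoid wiping the mapping.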
Hi @AnkitJasani29, Thanks for the information. I just wanted to clarify: if encryption keys expire, does that mean we would need to create new data flows each time with a new encryption key? Since we manage multiple data flows, I'm a bit concerned this might lead to a growing number of redundant dat...
Hi Everyone, The encryption keys in Adobe Experience Platform (AEP) have expired. We created new encryption keys with the same names as the expired ones and deleted the expired keys from AEP. Since then, our existing incremental data flows have started to fail. The public key has been shared with the...
Hi @AnkitJasani29, Thanks for the information. I was able to create a static value in the dataset using the static source type in the mapping set:

```
{
  "sourceType": "STATIC",
  "source": "Moved",
  "destination": "abc.xyz.customer_status"
}
```

Now, we will have both 'active' and 'moved' values for customer_status...
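For context, a mapping entry like the one above sits inside a mapping set. A minimal sketch of creating one via the Data Prep (conversion) API follows; the payload shape is from the API tutorials, and the schema ID and destination path are placeholders:

```
# Sketch: create a mapping set containing the STATIC mapping.
curl -X POST \
  'https://platform.adobe.io/data/foundation/conversion/mappingSets' \
  -H 'Authorization: Bearer {ACCESS_TOKEN}' \
  -H 'x-api-key: {API_KEY}' \
  -H 'x-gw-ims-org-id: {ORG_ID}' \
  -H 'x-sandbox-name: {SANDBOX_NAME}' \
  -H 'Content-Type: application/json' \
  -d '{
        "version": 0,
        "xdmSchema": "{TARGET_SCHEMA_ID}",
        "xdmVersion": "1.0",
        "mappings": [
          {
            "sourceType": "STATIC",
            "source": "Moved",
            "destination": "abc.xyz.customer_status"
          }
        ]
      }'
```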
Hi everyone, Using AEP Query Service, we have created a derived dataset by joining selected fields from the Profile dataset and the Event dataset and performing deduplication. The result is stored in a derived dataset, which we then export to cloud storage destinations such as SFTP ...
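For reference, a minimal Query Service sketch of the join-plus-dedup step; the dataset and field names are hypothetical, and both CREATE TABLE AS SELECT and window functions are supported by Query Service:

```
-- Hypothetical dataset/field names; dedup keeps the latest event per CRMID.
CREATE TABLE derived_profile_events AS
SELECT crmid, email, event_type, event_ts
FROM (
    SELECT p.crmid,
           p.email,
           e.eventType  AS event_type,
           e.timestamp  AS event_ts,
           ROW_NUMBER() OVER (PARTITION BY p.crmid
                              ORDER BY e.timestamp DESC) AS rn
    FROM profile_dataset p
    JOIN event_dataset   e ON e.crmid = p.crmid
) dedup
WHERE rn = 1;
```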
Hi everyone, We are currently trying to export datasets from AEP to an SFTP server and the Data Landing Zone. However, we don't have the option to select CSV as a file type during export. Are there any alternative solutions or approaches we can take to ensure the data is exported in CSV format? Currently w...
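One possible workaround, sketched under assumptions: export the dataset in Parquet (dataset exports to file-based destinations offer JSON/Parquet) and convert the files to CSV in a post-processing step on the receiving side. The paths below are placeholders, and the script assumes pandas with pyarrow installed:

```
# Post-processing sketch: convert Parquet files exported by AEP into CSV.
# Directory paths are placeholders; requires pandas + pyarrow.
from pathlib import Path

import pandas as pd

export_dir = Path("/mnt/sftp/aep_export")   # where the exported files land
csv_dir = Path("/mnt/sftp/aep_export_csv")  # where the CSVs should go
csv_dir.mkdir(parents=True, exist_ok=True)

for parquet_file in export_dir.glob("*.parquet"):
    frame = pd.read_parquet(parquet_file)
    frame.to_csv(csv_dir / f"{parquet_file.stem}.csv", index=False)
```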
Hi Everyone, We have csv.pgp encrypted source files stored in Azure Blob Storage. These files are ingested into the Data Landing Zone using an API-based ingestion process. Our source and target schemas are different. We would like to reference a field from the source file (e.g., CRMID) and use it to ...
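Since the source and target schemas differ, the usual place to carry a source field such as CRMID across is a mapping entry in the dataflow's mapping set. A minimal sketch, where the destination path is a placeholder for the actual tenant field:

```
{
  "sourceType": "ATTRIBUTE",
  "source": "CRMID",
  "destination": "_tenant.identification.crmId"
}
```

This is the same mapping-set structure as the STATIC example elsewhere in this thread, with ATTRIBUTE pulling the value from the source column instead of a constant.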
Hi @TylerKrause, Gotcha! Thanks for the information. Once the schema is created and enabled for profile, we can only modify a few things. So changing the datatype is one thing that we cannot do after the schema is created. But I have not enabled the schema and dataset yet. What happens in this case? Yes, I pr...
Hi @TylerKrause, Yes, the data is already loaded in the dataset, but as I said, it's incorrect data and I'm going to delete it and create a new dataset. So in this case I can just update the schema in the UI, correct? And then create a new dataset pointing to the updated schema. If at all I need the old da...
We are using the default Time-based merge policy. If I create a new dataflow for a one-time bulk data load from the missing date till today, what about the target dataset? Should I use the same existing dataset (will any duplicates be created?), or should I create a new dataset, enable it for profile, and then once...