Solved

Data Ingestion ETL options for Encrypted

Level 2

We have .pgp encrypted files being ingested from Data Landing Zone via API. We would like to reference a source field attribute and use it to update a target attribute that does not exist in the schema (or source file). Is this possible via Data Prep? Or would this require a post-ingestion ETL with something like Data Distiller?

1 Accepted Solution

Correct answer by
Community Advisor

@LizaFrancis Interesting question. Since the target field isn't in the schema or the source file, Data Prep can't handle it alone. You'll need a post-ingestion step using Data Distiller or another ETL tool to generate or populate that field based on your logic.

3 Replies

Level 8

Hi @LizaFrancis ,

Data Prep is strictly a map-and-transform-during-ingestion step and only knows about the XDM attributes you've already defined in your schema.

You can:

  • Pass through existing source fields
  • Create calculated fields (see the sketch below)
  • Map those to XDM attributes
  • Validate against XDM so only valid attributes get ingested

Reference: https://experienceleague.adobe.com/en/docs/experience-platform/data-prep/home
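
As an illustration of the "calculated fields" and mapping bullets above, here is a minimal Python sketch of what a Data Prep mapping set with one pass-through mapping and one calculated field could look like. The source column names, XDM paths, tenant/schema reference, and the payload field names (including the ATTRIBUTE / EXPRESSION sourceType values) are assumptions for illustration only; verify them against the Data Prep (mapping service) API reference before use.

import json

# A sketch of a Data Prep mapping-set payload: one pass-through mapping and one
# calculated field. Source column names, XDM paths, and the schema reference
# are hypothetical placeholders.
mapping_set = {
    "xdmSchema": "https://ns.adobe.com/<TENANT_ID>/schemas/<SCHEMA_ID>",
    "mappings": [
        {
            # Pass-through: copy an existing source field to an XDM attribute.
            "sourceType": "ATTRIBUTE",
            "source": "customer_email",
            "destination": "personalEmail.address",
        },
        {
            # Calculated field: derive a value with a Data Prep mapping function.
            "sourceType": "EXPRESSION",
            "source": "concat(first_name, ' ', last_name)",
            "destination": "person.name.fullName",
        },
    ],
}

# In practice this would be configured through the UI mapper during dataflow
# setup or POSTed to the mapping service API (my understanding is
# /data/foundation/conversion/mappingSets -- verify in the Data Prep API docs).
print(json.dumps(mapping_set, indent=2))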

 

However, if the target attribute you need to update doesn’t exist in the schema (and thus isn’t in the source), Data Prep has no “upsert new XDM attribute” capability. To add or enrich with a brand-new field you have two options:

  1. Extend your XDM schema to include that attribute, then revise your Data Prep mapping.
  2. Use a post-ingestion ETL: Data Distiller lets you run SQL-based batch queries against your ingested data to create Derived Datasets with new or updated attributes and then write them back into the Data Lake (a minimal sketch follows below). Reference: https://experienceleague.adobe.com/en/docs/experience-platform/query/data-distiller/overview
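
To make option 2 concrete, here is a minimal Python sketch of submitting a Data Distiller CTAS ("CREATE TABLE ... AS SELECT") query through the Query Service API. The credentials, dataset and column names, derivation logic, dbName value, and request-body fields are placeholders or assumptions based on my reading of the docs; check them against the current Query Service API reference before relying on this.

import requests

# Placeholder credentials -- substitute values from your own Platform project.
ACCESS_TOKEN = "<IMS access token>"
API_KEY = "<API client id>"
ORG_ID = "<IMS org id>"
SANDBOX = "prod"  # your sandbox name

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "x-api-key": API_KEY,
    "x-gw-ims-org-id": ORG_ID,
    "x-sandbox-name": SANDBOX,
    "Content-Type": "application/json",
}

# Hypothetical CTAS statement: read the ingested dataset, derive the new target
# attribute from an existing source field, and write a derived dataset back to
# the data lake. Table and column names are illustrative only.
ctas_sql = """
CREATE TABLE enriched_orders AS
SELECT
    src.*,
    CASE
        WHEN src.order_total >= 500 THEN 'high_value'
        ELSE 'standard'
    END AS customer_tier
FROM ingested_orders_dataset AS src
"""

# Submit the query. Body fields follow my understanding of the Query Service
# "create query" call; verify them in the API reference.
payload = {
    "dbName": f"{SANDBOX}:all",  # assumption about the database naming scheme
    "sql": ctas_sql,
    "name": "derive-customer-tier",
    "description": "Post-ingestion enrichment via Data Distiller",
}

resp = requests.post(
    "https://platform.adobe.io/data/foundation/query/queries",
    headers=headers,
    json=payload,
)
resp.raise_for_status()
print(resp.status_code, resp.json())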

Thanks

Ankit

 

Level 2

Thank you @Asheesh_Pandey and @AnkitJasani29. I figured as much, but it's nice to see the validation and instructions. I appreciate your quick feedback.