
SOLVED

Ingest parquet file example


Level 3

Does anyone have an example of ingesting a Parquet file into AEP? Documentation and Adobe Support say the Parquet file must exactly match the XDM schema in AEP. I have a test file created to do just that. However, there has to be a way to ingest a Parquet file and use a mapping set. I prefer to ingest data from client systems without requiring the client to create a specific format just for Adobe.


12 Replies


Employee Advisor

@DavidSlaw1 not sure if I've got this right, but I assume you are ingesting the data using drag and drop on the dataset UI page, which does not allow for any mapping options.

However, most of the Cloud Storage sources would let you select the Parquet format in your dedicated repository and then go through the 'Mapping' step.

 

See Map data fields to an XDM schema

 


Let me know if that helps



Level 3

Does not help. Selecting a Parquet file from S3 is fine, but there are no options to map in the UI workflow.


Employee

Hello @DavidSlaw1 

 

When you create your mapping of the file, there is an option to mark it as XDM-compliant or not. If the data is not XDM-compliant, then you should be able to create a mapping flow.


Level 3

I did try this before. The job runs and the result is success, but no records are loaded. The Adobe docs lack clarity on configuring the mapping. Do I still use the ATTRIBUTE and EXPRESSION sourceType? Then the source is the attribute name in the file and the destination is the fully qualified XDM path, like this?

 

"mappings": [
{
"sourceType": "ATTRIBUTE",
"source": "var1",
"destination": "_mytenant.fieldGroupObject.var1",
"identity": true,
"primaryIdentity": false,
"namespace" : "var1Namespace"
}
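
(For reference, the mapping-set body normally wraps those mappings with schema and version fields. Below is a minimal Python sketch of such a payload as I understand it from the public Data Prep (conversion) API docs; the wrapper field names and the schema $id are assumptions to verify against the current API reference, not something confirmed in this thread.)

import json

mapping_set = {
    "version": 0,
    "xdmSchema": "https://ns.adobe.com/mytenant/schemas/abc123",  # hypothetical schema $id
    "xdmVersion": "1.0",  # assumed wrapper fields; confirm in the Data Prep docs
    "mappings": [
        {
            "sourceType": "ATTRIBUTE",
            "source": "var1",
            "destination": "_mytenant.fieldGroupObject.var1",
            "identity": True,
            "primaryIdentity": False,
            "namespace": "var1Namespace",
        }
    ],
}

print(json.dumps(mapping_set, indent=2))  # body to POST to the mappingSets endpoint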


Level 3

Certainly no options in the UI. It's not clear where specifying non-XDM data is to be done in the API. Thoughts?


Employee

Hello @DavidSlaw1 

 

Apologies for the confusion; in my earlier response I provided you incorrect information.

 

When ingesting Parquet files, they must be XDM-compliant. There is no mapping step required if the data is XDM-compliant.

 

  • Apache Parquet: Parquet-formatted data files must be XDM-compliant.

 

https://experienceleague.adobe.com/en/docs/experience-platform/sources/ui-tutorials/dataflow/cloud-s...

 

If you are ingesting data in JSON or CSV, then you can use the mapping step, as this data may not be XDM-compliant.


Level 3

There must be a way to ingest data from a Parquet file that is not XDM-compliant. It seems silly to ask a client to reformat or restructure data just for AEP to consume it.


Employee

At this time, Parquet ingestion is XDM-compliant only. You have more flexibility to create a mapping flow if the data is in JSON.


Correct answer by
Level 3

Solved. There is no way to ingest non-XDM-compliant Parquet. I solved it by making the Parquet XDM-compliant using a data pipeline, ensuring the datetime values are in a format AEP can ingest without errors, and sizing the data files to optimize load time. The Parquet file loads 60% faster than the CSV load time.
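
On the file-sizing point: in a Spark-based pipeline (an assumption, since the exact tool isn't named here), one simple way to control Parquet output file sizes is to repartition before the write. A minimal sketch, with a hypothetical partition count and staging path:

# Repartition so each output parquet file lands near a sensible size
# (128-256 MB is a common target; tune the count to your data volume).
(df_xdm                  # an already XDM-compliant DataFrame
    .repartition(32)     # hypothetical count; raise or lower to hit the target size
    .write
    .mode("overwrite")
    .parquet("s3://my-bucket/aep-staging/profiles/"))  # hypothetical staging path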


Level 1

I created the XDM structure to receive the data:

<profileSchemaName>
-> _<tenant id> | Object
      -> <profile data object> | Object
            -> <list of attributes, their types, set identity parameters>
-> _repo | Object
      -> createDate | DateTime
      -> modifyDate | DateTime
-> _id | string <and the remaining standard profile record attributes>

 

I gave that structure to the database developers and asked them to create a Parquet file matching the XDM schema structure, such as the following. Once that was done, the ingestion process was great! A Parquet file of 1 million records ran 60% faster than the CSV file:

Pseudocode (see the PySpark sketch after the schema below):

With the source parquet file object (pf)
    Create a new parquet file object (pf_parquet_for_aep)
        Containing a column "_<tenant id>"
            Containing a structure "populationIdentityMap"
                Containing all the attributes in the identity graph
                Plus a calculated attribute "uuid" derived from a uuid() function
                    (Note: this value is also used as the _id attribute value)

root
 |-- _<tenant id>: struct (nullable = false)
 |    |-- populationIdentityMap: struct (nullable = false)
 |    |    |-- ipv4_home: string (nullable = true)
 |    |    |-- maid: string (nullable = true)
 |    |    |-- email_sha1: string (nullable = true)
 |    |    |-- email_md5: string (nullable = true)
 |    |    |-- email_sha256: string (nullable = true)
 |    |    |-- transactionId: string (nullable = true)
 |    |    |-- timestampSampled: timestamp (nullable = true)
 |    |    |-- timestampRetain: timestamp (nullable = true)
 |    |    |-- batch_id: string (nullable = true)
 |    |    |-- uuid: string (nullable = false)
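
Under the assumption that the pipeline is PySpark (the schema dump above is Spark printSchema output), here is a minimal sketch of that pseudocode; pf is the flat source DataFrame and "_mytenant" is a stand-in for the real tenant id:

from pyspark.sql import functions as F

# Generate the uuid once so the identity-map field and _id carry the same value.
pf_with_uuid = pf.withColumn("uuid", F.expr("uuid()"))

# Nest the flat identity columns into the XDM tenant structure.
pf_parquet_for_aep = pf_with_uuid.select(
    F.struct(
        F.struct(
            "ipv4_home", "maid", "email_sha1", "email_md5", "email_sha256",
            "transactionId", "timestampSampled", "timestampRetain",
            "batch_id", "uuid",
        ).alias("populationIdentityMap")
    ).alias("_mytenant"),
    F.col("uuid").alias("_id"),  # the same generated value feeds _id
)

pf_parquet_for_aep.printSchema()  # should match the tree above, modulo nullability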

 

Also, be sure to tell the developers that any DateTime values need to be ISO 8601-compliant and sent in the UTC timezone, which they may need to convert from local time. AEP assumes UTC as the default timezone.
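
In a Spark pipeline (again an assumption), that conversion could look like the following; "America/New_York" is a hypothetical stand-in for the client's actual source timezone:

from pyspark.sql import functions as F

# Reinterpret a local-time timestamp as UTC.
df = df.withColumn(
    "timestampSampled",
    F.to_utc_timestamp(F.col("timestampSampled"), "America/New_York"),
)

# Render a timestamp as an ISO 8601 string with an explicit UTC "Z" suffix
# (assumes spark.sql.session.timeZone is set to "UTC" so the text is truly UTC).
df = df.withColumn(
    "createDate",
    F.date_format(F.col("createDate"), "yyyy-MM-dd'T'HH:mm:ss'Z'"),
)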