Ingest parquet file example | Community
DavidSlaw1
May 13, 2024
Solved

Ingest parquet file example

  • May 13, 2024
  • 1 reply
  • 4569 views

Does anyone have an example of ingesting a parquet file into AEP?  Documentation and Adobe Support say the parquet file must exactly match the XDM schema in AEP.  I have a test file created to do just that.  However, there has to be a way to ingest a parquet file and use a mapping set.  I prefer to ingest data from client systems without requiring the client to create a specific format just for Adobe.

Best answer by DavidSlaw1

There must be a way to ingest data from a parquet file that is not XDM compliant.  It seems silly to ask a client to reformat / restructure data just for AEP to consume it.


Solved.  There is no way to ingest non-XDM-compliant parquet.  I solved it by making the parquet XDM compliant using a data pipeline, ensuring the datetime values are in a format AEP can ingest without errors, and sizing the data files to optimize load time.  The parquet file loads 60% faster than the equivalent CSV.
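For anyone curious what that pipeline step looks like in practice, here is a minimal, hypothetical Python sketch of the per-record reshaping (the tenant ID `_mytenant`, the `profileData` object name, and the helper function are illustrative placeholders based on this thread, not AEP APIs):

```python
import uuid
from datetime import datetime, timezone

def to_xdm_record(source_row, tenant_id="_mytenant"):
    """Reshape one flat source row into the nested XDM profile structure.

    tenant_id and the field names are placeholders -- substitute the
    tenant ID and schema paths from your own AEP sandbox.
    """
    # XDM DateTime values should be ISO 8601 in UTC (see note further down).
    now = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return {
        "_id": str(uuid.uuid4()),          # unique record identifier
        tenant_id: {                       # tenant-namespaced object
            "profileData": dict(source_row),
        },
        "_repo": {                         # standard repo metadata object
            "createDate": now,
            "modifyDate": now,
        },
    }
```

In a real pipeline you would apply this reshaping across the whole file (e.g. with Spark or pyarrow) rather than row by row, but the nesting is the same.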

1 reply

Tof_Jossic
Adobe Employee
May 14, 2024

@davidslaw1 not sure if I've got this right, but I assume you are ingesting the data using drag and drop on the dataset UI page, which does not allow for any mapping options.

However, most of the Cloud Storage source options would let you select the parquet format in your dedicated repository and then go through the 'Mapping' step.

 

See Map data fields to an XDM schema

 

Let me know if that helps


DavidSlaw1
May 14, 2024

Does not help.  Selecting a parquet file from S3 is fine, but there are no options to map in the UI workflow.

DavidSlaw1
September 16, 2024

@davidslaw1 how did you create an XDM-compliant parquet structure?


I created the XDM structure to receive the data:

<profileSchemaName>
  -> <tenant id> | Object
       -> <profile data object> | Object
            -> <list of attributes, their types, set identity parameters>
  -> _repo | Object
       -> createDate | DateTime
       -> modifyDate | DateTime
  -> _id | string <and the remaining standard profile record attributes>

 

Gave that structure to the database developers and asked them to create a parquet file using the XDM schema structure, such as the following.  Once that was done, the ingestion process was great!  The parquet file of 1 million records ran 60% faster than the CSV file:

Pseudocode

  With the source parquet file object (pf):
    Create a new parquet file object (pf_parquet_for_aep)
      containing a column "_<tenant id>"
        containing a structure "populationIdentityMap"
          containing all the attributes in the identity graph
        and an attribute "uuid" derived from a uuid() function
          (note: this is used as the _id attribute value)

root
 |-- _<tenant id>: struct (nullable = false)
 |    |-- populationIdentityMap: struct (nullable = false)
 |    |    |-- ipv4_home: string (nullable = true)
 |    |    |-- maid: string (nullable = true)
 |    |    |-- email_sha1: string (nullable = true)
 |    |    |-- email_md5: string (nullable = true)
 |    |    |-- email_sha256: string (nullable = true)
 |    |    |-- transactionId: string (nullable = true)
 |    |    |-- timestampSampled: timestamp (nullable = true)
 |    |    |-- timestampRetain: timestamp (nullable = true)
 |    |    |-- batch_id: string (nullable = true)
 |    |    |-- uuid: string (nullable = false)
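As a sanity check before writing the AEP-bound file, it can help to verify that non-nullable columns are actually populated, since a null in a required field will fail the batch. A hypothetical stdlib sketch (the required-field list here is just uuid, per the nullable = false flag in the schema above; extend it for your own schema):

```python
# Fields in populationIdentityMap that are nullable = false in the target
# schema (assumption based on the printSchema output above).
REQUIRED_FIELDS = ["uuid"]

def validate_identity_map(identity_map: dict) -> list:
    """Return a list of problems found in one populationIdentityMap record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not identity_map.get(field):
            problems.append(f"missing required field: {field}")
    return problems
```

Running this over a sample of records before ingestion catches schema violations on your side instead of in AEP's batch error reports.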

 

Also, be sure to tell the developers that any DateTime values need to be ISO 8601 compliant and sent in the UTC timezone, which they may need to convert.  AEP assumes the UTC timezone by default.
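A minimal stdlib sketch of that conversion (assuming naive source datetimes are already UTC; adjust if your source system stores local time):

```python
from datetime import datetime, timezone

def to_aep_timestamp(dt: datetime) -> str:
    """Convert a datetime to an ISO 8601 UTC string for AEP ingestion.

    Naive datetimes are assumed to already be UTC -- an assumption you
    must check against your source system.
    """
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    iso = dt.astimezone(timezone.utc).isoformat(timespec="seconds")
    return iso.replace("+00:00", "Z")
```

For example, a local timestamp of 09:30 at UTC-5 comes out as 14:30 UTC with a `Z` suffix.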