JSON file doesn't show all fields based on preview

Level 3

I am trying to load some data to populate String[] arrays in an AEP dataset.

Here is my schema, with three String array fields, shown below:

[screenshot: schema with the three String array fields]

I prepped a JSON file to load some data into a dataset based on this schema. For the same customer ID, one, two, or all three of these fields might be populated. Most of the time only one or two are populated for an individual, but occasionally all three are.
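To illustrate, a few lines of the file might look like this (shown as NDJSON; the field names are hypothetical stand-ins for the real schema paths):

```json
{"customerId": "C001", "offersA": ["a1", "a2"]}
{"customerId": "C002", "offersA": ["a1"], "offersB": ["b1"]}
{"customerId": "C003", "offersA": ["a2"], "offersB": ["b2"], "offersC": ["c1"]}
```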

The file I am trying to load has values in all three of these fields on some record or another. However, when I try to load the file, I only see two of these fields at the mapping step, for example:

[screenshot: mapping UI showing only two of the three array fields]

I believe that is because the preview of the file looks at the first 10 rows or so, and in the first 10 rows only those two arrays are present. The third is also present somewhere in the file, just not in what the preview shows.

---

I tried to force the mapping using a mapping file like the one below:

[screenshot of the mapping file]

But with this mapping I get this error:

[screenshots of the resulting error messages]

How can I map all three fields from the file to the schema, so that all values are loaded? I hope I don't have to export a separate file for each JSON field I have to ingest!

FYI: with a different row distribution in the file, where all three fields are populated across one or more of the first 10 rows, all three array fields are available for mapping:

[screenshot: mapping UI showing all three array fields]

4 Replies

Community Advisor

@GabrielaNa1 Great question. With JSON as the ingestion payload, the dataflow setup through the UI's "Add Data" step looks for all the possible JSON elements used in the mapping for validation; this wouldn't be the case with CSV, since headers are provided.

Please change the JSON payload added at the "Add Data" step (https://experienceleague.adobe.com/en/docs/experience-platform/sources/ui-tutorials/dataflow/cloud-s...) to include all the elements you want to map.

This would not be the case if you set up the dataflow through the APIs.

Let me know if you need more help on this.

~cheers,

NN.

Level 3

Hi NN, thanks for your answer, but I am not sure I understand what you suggest.

 

1. I cannot use CSV here because I have arrays, which don't load well through CSV (or at least I haven't managed to load them correctly).

2. The JSON: it seems this is not true: "the dataflow setup through the UI's 'Add Data' step looks for all the possible JSON elements used in the mapping for validation". It seems that it only looks at the rows considered for the preview (10 rows or so). So, unless I find a better method, I will export each JSON array in a separate file. I will also look into the dataflow setup through the APIs and see whether it offers different possibilities/outcomes.

Community Advisor

@GabrielaNa1 you are correct: "Add Data" looks at the first 100 rows for all the elements needed for the mapping setup to proceed. There are a couple of ways to circumvent this:

#1 Prep a sample ND-JSON file with a few records containing all the elements, so the mapping step can validate them and move forward (see the first sketch after this list).

#2 Code it through the APIs. I have done this with the dataflow and Data Prep (mapping set) APIs for a client in the past without any issues, as the "Add Data" step is only used for UI-driven setup (see the second sketch below).
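For #1, a sketch of what such a seed file could look like: the first record deliberately carries all three array fields (hypothetical field names, matching the sketch in the question), so the sampled rows expose every element for mapping:

```json
{"customerId": "SEED", "offersA": ["x"], "offersB": ["x"], "offersC": ["x"]}
{"customerId": "C001", "offersA": ["a1", "a2"]}
```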
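For #2, a minimal sketch of creating a mapping set through the Data Prep API (POST https://platform.adobe.io/data/foundation/conversion/mappingSets, with the usual Platform authentication headers); the tenant namespace and field names below are hypothetical placeholders:

```json
{
  "outputSchema": {
    "schemaRef": {
      "id": "https://ns.adobe.com/{TENANT_ID}/schemas/{SCHEMA_ID}",
      "contentType": "application/vnd.adobe.xed-full+json;version=1"
    }
  },
  "mappings": [
    { "sourceType": "ATTRIBUTE", "source": "offersA", "destination": "_tenant.offersA" },
    { "sourceType": "ATTRIBUTE", "source": "offersB", "destination": "_tenant.offersB" },
    { "sourceType": "ATTRIBUTE", "source": "offersC", "destination": "_tenant.offersC" }
  ]
}
```

The mapping set ID returned by this call can then be referenced when creating the dataflow through the Flow Service API, so the mapping never depends on the UI preview.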

 

Let me know if you need more help on this; happy to share Postman scripts.

~cheers,

NN.

Level 3

Thanks, if you can share the Postman scripts it would save me a lot of digging. Really appreciate it.