
SOLVED

Multiple schemas for different types of Recipients


Level 3

I have a situation where we need to use two different types of targeting dimensions. One of them is Doctor and the other is a general Caregiver (someone who assists aged persons). Do you suggest creating two different schemas for these? Here are the options I have considered, with their pros and cons; I need some expert advice on choosing the best one.

 

Option 1:

- Create a new schema for Doctor and use the Recipient schema (extended as needed) for Caregiver, since the Caregiver fields are closer to the existing Recipient schema (a rough sketch of this setup follows after this option)

Pros: Only one new schema to create instead of two

Cons: You have to take on the heavy lifting of creating a new delivery mapping so that Doctor becomes available for email personalization
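
For illustration only, here is a minimal sketch of how Option 1 could look in Adobe Campaign Classic; all schema and field names below are hypothetical, not taken from the original post:

<!-- Hypothetical standalone schema for Doctor, used as its own targeting dimension -->
<srcSchema namespace="cus" name="doctor" label="Doctors">
  <element name="doctor" label="Doctor" autopk="true">
    <attribute name="firstName" type="string" length="64"  label="First name"/>
    <attribute name="lastName"  type="string" length="64"  label="Last name"/>
    <attribute name="email"     type="string" length="128" label="Email"/>
    <attribute name="specialty" type="string" length="64"  label="Specialty"/>
  </element>
</srcSchema>

<!-- Hypothetical extension of the built-in Recipient schema carrying the Caregiver fields -->
<srcSchema namespace="cus" name="recipient" label="Recipients" extendedSchema="nms:recipient">
  <element name="recipient">
    <attribute name="caregiverRegion"   type="string" length="64" label="Caregiver region"/>
    <attribute name="assistedPersonAge" type="short"              label="Age of assisted person"/>
  </element>
</srcSchema>

Because cus:doctor is not nms:recipient, deliveries cannot target it until a new delivery (target) mapping is created for it, which is the con noted above.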

 

Option 2:

- Create two new schemas, one for Doctor and one for Caregiver

Pros: No unnecessary fields, since many of the default Recipient fields are not needed for either Doctor or Caregiver, so there are fewer null values

Cons: Same as Option 1, but for both schemas

 

Option 3:

- Use the Recipient schema to store both Doctor and Caregiver records, and use a field to distinguish the record type

Pros: No need to create a new delivery mapping and deal with the associated complexity

Cons: There are many fields specific to Doctor or Caregiver that do not apply to the other record type, so each record will end up with many null fields. Also, for any segmentation you first have to filter on the type and then apply the remaining filters (a sample query illustrating this is sketched below this option).
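
As a rough illustration of that last point (the type field and the other attribute names here are hypothetical), every segmentation query against the shared table ends up combining the type filter with the actual targeting criteria, for example in a queryDef:

<queryDef schema="nms:recipient" operation="select">
  <select>
    <node expr="@email"/>
  </select>
  <where>
    <!-- first restrict to the record type... -->
    <condition expr="@recipientType = 'doctor'" boolOperator="AND"/>
    <!-- ...then apply the real segmentation criteria -->
    <condition expr="@specialty = 'cardiology'"/>
  </where>
</queryDef>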

 

Please share which option you consider optimal.

 

I would also appreciate it if someone could share detailed documentation on delivery mappings.


4 Replies


Level 10

Option 4:

- Create one new schema for both by extending the Recipient schema, and add one extra field, say an enumeration with the values Doctor and Caregiver:

<enumeration basetype="string" name="type">
  <value label="Doctor" value="doctor" name="doctor"/>
  <value label="Caregiver" name="caregiver" value="caregiver"/>
</enumeration>
....
<attribute advanced="true" desc="type of User" enum="type" label="type of User" name="type" type="string"/>

- Now, for the fields that are applicable only to Doctor, use applicableIf="@type == 'doctor'" (matching the enumeration value), and vice versa for Caregiver (see the sketch after this list)

Cons: None

Pros: No need to create a delivery mapping or delivery log mapping table.
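
To make the applicableIf idea concrete, here is a hedged sketch; the two field names are made up for the example, and the comparison values match the enumeration values defined above:

<!-- only meaningful for doctors -->
<attribute advanced="true" name="licenseNumber" type="string" length="32"
           label="Medical license number" applicableIf="@type == 'doctor'"/>
<!-- only meaningful for caregivers -->
<attribute advanced="true" name="assistedPerson" type="string" length="64"
           label="Assisted person" applicableIf="@type == 'caregiver'"/>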


Correct answer by
Level 1

Given that the main challenge with extending schemas is ensuring proper aggregate calculation and reporting, I would go with Option 1 of the three you offered, and seriously consider extending the recipient schema. The product calculates aggregate values off of the unique identifiers in the tables, so the out-of-the-box reporting would be dependent on the recipient schema for delivery counts and tracking per person.

Our AGS team is excellent at defining these solutions and verifying and testing the dependencies. If you choose to develop this on your own, the best practice is to do so on a DEV environment and verify that your aggregate reported values are correct, that your extended values are easily accessible through query and enrichment activities, and that any web applications or tracking information can also be reported successfully for the additional "people" in the system.


Level 10

Heather St.Peter wrote...

Given that the main challenge with extending schemas is ensuring proper aggregate calculation and reporting, I would go with Option 1 of the three you offered, and seriously consider extending the recipient schema. The product calculates aggregate values off of the unique identifiers in the tables, so the out-of-the-box reporting would be dependent on the recipient schema for delivery counts and tracking per person.

Our AGS team is excellent at defining these solutions and verifying and testing the dependencies. If you choose to develop this on your own, the best practice is to do so on a DEV environment and verify that your aggregate reported values are correct, that your extended values are easily accessible through query and enrichment activities, and that any web applications or tracking information can also be reported successfully for the additional "people" in the system.

 

Hi Heather St.Peter,

Can you tell me if there are any cons to my approach?

I agree with you about extending the Recipient schema, but why not do it for both entities?

With Option 1, you have to create a new schema, which will create problems for aggregates and reporting if not managed properly.


Level 5

Hi,

I am running into a similar situation where we have extended the Recipient schema for one brand, and now we have to add another brand that doesn't fit the previous brand's style. The first brand is more in the retail industry; the other one (a new company they acquired) is more in the marketing industry.

The client wants to consolidate the records by just using flags (isAbrand, isBbrand). Is it OK to create another extended Recipient schema (in a different namespace) and load the data there, or should we load it into the one they have already extended? The fields are slightly different between the brands. They also have many rules on data loading, so it will take a long time to load the data. Currently they load 50,000 records and their ETL process takes almost 1 hour; this new brand has close to 20 million records.
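
For reference, a minimal sketch of the flag-based consolidation described above, assuming the flags are simply added to the existing Recipient extension (attribute names are illustrative only):

<!-- hypothetical brand flags on the extended Recipient schema -->
<srcSchema namespace="cus" name="recipient" label="Recipients" extendedSchema="nms:recipient">
  <element name="recipient">
    <attribute name="isBrandA" type="boolean" label="Brand A contact"/>
    <attribute name="isBrandB" type="boolean" label="Brand B contact"/>
  </element>
</srcSchema>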

Thanks