Hi,
I have created a data layer that contains event information. Refer to the screenshot below:
Then, for every event I track, I want to use the code below:
I can then create a Direct Call Rule for the 'Learn More' event, which would send the event info to Analytics.
I am very particular about using a data layer for all metadata (including events). How appropriate would it be to store the event information in the data layer and then use a push method to update those events, rather than using an Event Based Rule mapped to the HTML tags of the page?
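To sketch what I mean (the `digitalData` object, property names, and rule string below are placeholders for whatever the actual data layer and Direct Call Rules use; `_satellite` is provided by the DTM library on a real page and is stubbed here only so the snippet is self-contained):

```javascript
// Sketch of the proposed pattern: push event metadata into the data
// layer, then fire a Direct Call Rule that forwards it to Analytics.
// Assumption: the data layer follows a digitalData.event[] convention.
var digitalData = { event: [] };

// Stub for illustration only -- on a real page the DTM library
// defines _satellite and its track() method.
var _satellite = {
  track: function (ruleName) {
    console.log('Direct Call Rule fired: ' + ruleName);
  }
};

// Push the event information into the data layer...
digitalData.event.push({
  eventInfo: {
    eventName: 'learnMore',   // placeholder event name
    eventAction: 'click'
  }
});

// ...then fire the Direct Call Rule mapped to this event.
_satellite.track('learnMoreEvent'); // placeholder rule string
```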
I am very new to DTM and just want to know the best approach for event tracking with it. I have been stuck for a long time trying to identify an ideal approach, but none of the available docs seem to cover best practices for event tracking.
Your help is appreciated!
Thanks,
Hi Jaya,
In practice, we usually see and use all of the data collection methods mentioned in combination. Few large company websites have built out a complete data collector or data layer model with all the data they want to capture from their webpages and applications. Whichever data collection strategies we choose, adequate planning, documentation, and timely communication across teams can go a long way in helping us ensure that the first link in our data collection supply chain is a strong one.
That said, one of the recommended implementation methods for tracking user actions (i.e., events) is to pass the event information into the data layer, use a push method to update those events, and then fire a Direct Call Rule. This approach is preferred for the following reasons:
1. Capturing data by traversing the DOM is easy, and DTM's Event Based Rules make it even easier: simply use the dropdown identifiers and CSS selectors for your page elements, and you're done. The potential pitfall is that the HTML markup of many large websites is poorly formed, invalid, or difficult to access using common DOM traversal and selection methods. It is also fragile: when pages are redesigned, content is updated, or the markup of the page (or application) otherwise changes, the data collection has to change in sync to remain consistent. If the markup changes and no one updates the data collection, we can end up with inconsistent reporting and issues in analysis and validation that are difficult to troubleshoot and correct.
2. With a data layer, we no longer have to add unique id attributes and custom data attributes to individual elements. That approach demands more careful planning: because it involves adding metadata to individual elements rather than just unique container elements, it requires more thought to ensure consistent taxonomies and a consistent implementation across our sites.
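As a minimal sketch of the push-then-fire pattern described above (all names here are hypothetical -- `digitalData`, `trackEvent`, and the attribute keys are placeholders; `_satellite` is supplied by DTM on a real page and stubbed here for illustration):

```javascript
// One helper that updates the data layer AND fires the matching
// Direct Call Rule, so tracking never depends on the page's markup.
var digitalData = { event: [] };

// Stub for illustration; the DTM library defines _satellite on the page.
var _satellite = {
  track: function (ruleName) {
    console.log('Fired Direct Call Rule: ' + ruleName);
  }
};

function trackEvent(eventName, attributes) {
  // 1. Record the event in the data layer (single source of truth).
  digitalData.event.push({
    eventInfo: { eventName: eventName },
    attributes: attributes || {}
  });
  // 2. Fire the Direct Call Rule; the rule reads the pushed values
  //    via data elements and sends them to Analytics.
  _satellite.track(eventName);
}

// Example: a 'Learn More' click handler calls trackEvent directly
// instead of relying on an Event Based Rule bound to a CSS selector.
trackEvent('learnMore', { linkRegion: 'hero' }); // placeholder values
```

If the markup is later redesigned, only the place where `trackEvent` is called moves with it; the rule and reporting stay unchanged.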
Thanks & Regards
Parit Mittal
Thanks Parit! That clarifies my doubts!