SOLVED

Video Tracking Audit Findings and Questions


Level 4

Dear Team,

The client requested an audit of their video tracking due to some issues. After reviewing the setup, here are my findings:

1) They are capturing the video title in eVar19 as follows:

1230674401112:abc basierter Gentransfer

It seems that both the video ID and video title are being stored together in eVar19. Is this considered a best practice? If so, could you clarify why?

I also noticed that the video title is not present in the page view source or the data layer—it only appears on the video screen, not within the content itself.

2) In addition, I observed that they are not tracking video pauses or video mutes. Would tracking these interactions be beneficial?

Could you also suggest any additional video information worth tracking that would benefit the client?

Thank you!

1 Accepted Solution


Correct answer by
Community Advisor

Hi @priyankagupta20 ,

I assume there is a reason why they combined the ID and video name. There are no clear guidelines on how best to do this, but having the video ID at hand is certainly beneficial as additional payload. It doesn't have to live in the same eVar, though; it could go into a separate one, for instance eVar20. Anything that helps you understand the data better at a later stage is always a plus.

But nothing inherently wrong about the current integration.
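If they ever decide to separate the two values, a minimal sketch of the split could look like the following. The `s` object here is just a stub standing in for the real AppMeasurement tracker, and the eVar19/eVar20 slot assignments are my assumption for illustration:

```javascript
// Sketch: splitting the combined "id:title" value into separate eVars.
// `s` is a stub for the AppMeasurement tracker; eVar19/eVar20 slots are
// illustrative assumptions, not the client's actual configuration.
var s = { eVar19: "", eVar20: "" };

function setVideoVars(combined) {
  // Split only on the first colon, since titles may themselves contain colons
  var idx = combined.indexOf(":");
  s.eVar20 = combined.substring(0, idx);  // video ID
  s.eVar19 = combined.substring(idx + 1); // video title
}

setVideoVars("1230674401112:abc basierter Gentransfer");
// s.eVar20 → "1230674401112", s.eVar19 → "abc basierter Gentransfer"
```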

 

About pauses and mutes: I think pause could be interesting to understand whether people are pausing the content to take notes or let it sink in for a moment. Obviously, there is a bit of guessing involved in interpreting these events.

 

But it could also give you an indication of whether the content could or should be optimized.

Same with the mute button. If people immediately mute your video, the music and/or audio can be interpreted as not interesting or annoying.

 

So in both cases, pause and mute, you could derive some additional information about the content quality.

For instance:

  • People mute a specific video after 5 seconds: is the audio too loud or annoying? Does the music (if any) annoy the target audience, and could it be swapped for something that resonates better? Or, worst case, is the voice itself putting people off listening? (In that case, you will definitely want to offer closed captions.)

As you can see, you can track almost everything. In the end it comes down to what you do with this information: insights have to be followed up on.
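If they add pause and mute tracking, it would typically be sent as custom link calls. Here is a hedged sketch assuming an AppMeasurement-style tracker; the event numbers (event20/event21), the use of eVar19, and the stubbed `s` object are illustrative, not the client's actual setup:

```javascript
// Sketch: sending pause/mute interactions as custom link calls.
// event20/event21, eVar19, and the stubbed `s` are illustrative assumptions.
var calls = [];
var s = {
  events: "",
  eVar19: "",
  linkTrackVars: "",
  linkTrackEvents: "",
  // Stand-in for AppMeasurement's s.tl(); records the link name for this demo
  tl: function (linkObj, linkType, linkName) { calls.push(linkName); }
};

function trackVideoInteraction(eventNum, linkName, videoTitle) {
  s.linkTrackVars = "events,eVar19";      // only send what this call needs
  s.linkTrackEvents = "event" + eventNum;
  s.events = "event" + eventNum;
  s.eVar19 = videoTitle;
  s.tl(true, "o", linkName);              // "o" = custom link
}

// Wire these to the player's pause / volumechange (muted) events:
trackVideoInteraction(20, "Video Pause", "abc basierter Gentransfer");
trackVideoInteraction(21, "Video Mute", "abc basierter Gentransfer");
```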

 

Hope that helps

 

 

Cheers from Switzerland!



2 Replies


Community Advisor and Adobe Champion

I know this is already solved, but just adding my 2 cents.

 

I actually have multiple "combined" dimensions...

  1. This helps me to save on variable usage (we have a lot of different uses, and I don't want to risk running out of dimensions - we aren't close, yet... but just watching out for the future)
  2. Adobe Analytics doesn't have "flat table" visualizations... so this can allow me to see multiple important fields, without having to do multiple breakdowns, which can be an overwhelming display
  3. I will pair my combined dimensions with classifications, so that I can pull individual values, such as the ID, or the Title in isolation... so I get the best of both worlds
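As a sketch of point 3, a Classification Rule Builder rule splitting the combined value would typically use a regex with capture groups. The regex and the classification column names here are my assumptions, not the actual configuration:

```javascript
// Sketch of the split a Classification Rule Builder rule would apply to a
// combined "id:title" value. The regex and the column names ("Video ID",
// "Video Title") are illustrative assumptions.
var rule = /^([^:]+):(.+)$/; // $1 → "Video ID" column, $2 → "Video Title" column

var m = "1230674401112:abc basierter Gentransfer".match(rule);
var videoId = m[1];    // "1230674401112"
var videoTitle = m[2]; // "abc basierter Gentransfer"
```

This way the combined dimension stays reportable as-is, while the classified columns give you each piece in isolation.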

 

As for tracking the mute: if your videos autoplay muted and users have to un-mute, then I would say this is very important to track, since there's a good chance that if the entire video is played in a muted state, the users just ignored the video and didn't really consume it.

 

In our video tracking, I track play, pause, complete, and some specific progress points (10%, 25%, 50%, 75%, 90%), but when the video is played/paused, I also track a rounded % complete. So I can see where the user paused, whether they resumed from that location, or whether they skipped to a later part of the video. Also, since there is often an "outro" to the video, a lot of users will stop the video slightly early, so that (along with the 90% milestone) gives me a better understanding of whether most of the video was consumed.
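The rounded % complete described above can be computed with a small helper like this; the 5% bucket size is my assumption, not a fixed convention:

```javascript
// Sketch: round the current playback position to the nearest bucket (e.g. 5%)
// so play/pause events carry a comparable %-complete value.
function roundedPercent(currentTime, duration, bucket) {
  var pct = (currentTime / duration) * 100;
  return Math.round(pct / bucket) * bucket;
}

roundedPercent(63, 300, 5);  // pause at 63s of a 300s video → 20
roundedPercent(287, 300, 5); // pause just before the outro → 95
```

Attaching this value to each play/pause event is what lets you see whether users resumed from the same spot or skipped ahead.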